Talk:Likelihood-ratio test

From Wikipedia, the free encyclopedia
WikiProject Mathematics (Rated Start-class, High-importance)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Mathematics rating: Start-Class, High-importance. Field: Probability and statistics. One of the 500 most frequently viewed mathematics articles.
WikiProject Statistics (Rated Start-class, High-importance)

This article is within the scope of the WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page or join the discussion.

Added General Audience Introduction and Created Examples Contents[edit]

The instructions for creating less technical articles suggest starting with a simpler explanation up front and then getting into the technical details later. With a table of contents, the instructions indicate, this provides something accessible for those who haven't extensively studied the topic, while at the same time leaving a meaty article for those interested in something more sophisticated. I don't know if I pulled it off perfectly, but I think it improves the article in a way that whoever placed the "too technical" banner would approve of.

At the same time I moved the example into its own contents tab to separate it from the theory portion.

Jeremiahrounds 18:59, 20 June 2007 (UTC)


Would it be possible for any one to add a proof of why the test follows a chi squared distribution ? —Preceding unsigned comment added by Thedreamshaper (talkcontribs) 20:52, 17 February 2010 (UTC)

I think the introduction might benefit from a rewrite; perhaps this formula would be more appropriate than the asymptotic version: \Lambda(x)=\frac{\sup\{\,L(\theta\mid x):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid x):\theta\in\Theta\,\}}. --131.111.243.37 (talk) 10:18, 25 May 2010 (UTC)

I added a non-technical description about when these tests arise in practice to the first paragraph. Not an expert, but using this page without something like that was not helpful. —Preceding unsigned comment added by 98.143.103.218 (talk) 04:07, 29 September 2010 (UTC)
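The sup-based formula quoted above can be made concrete with a minimal numerical sketch (not from the article; the coin-toss setup, function names, and parameter values are mine). For a binomial observation, the sup over the singleton null Θ₀ = {p₀} is just the likelihood at p₀, and the sup over Θ = [0, 1] is attained at the MLE x/n:

```python
import math

def binom_loglik(p, x, n):
    """Log-likelihood of x heads in n tosses with head probability p."""
    if p in (0.0, 1.0):
        # Boundary cases where log(0) would otherwise appear.
        return 0.0 if (x == 0 and p == 0.0) or (x == n and p == 1.0) else -math.inf
    return x * math.log(p) + (n - x) * math.log(1 - p)

def lr_statistic(x, n, p0=0.5):
    """Lambda(x) = sup over Theta_0 divided by sup over Theta."""
    num = binom_loglik(p0, x, n)      # sup over the null (a singleton)
    den = binom_loglik(x / n, x, n)   # sup over [0, 1], attained at the MLE
    return math.exp(num - den)

lam = lr_statistic(60, 100)  # 60 heads in 100 tosses, H0: p = 0.5
```

With the null sup in the numerator, Λ is always between 0 and 1, and equals 1 exactly when the MLE lies in the null set.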

difficult take on the likelihood viewpoint?[edit]

I believe this essentially obscures the idea here:

\Lambda(x)=\frac{\sup\{\,L(\theta\mid x):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid x):\theta\in\Theta\,\}}.

The likelihood-ratio test statistic is the ratio of the probability of the result GIVEN the maximum likelihood estimator in the domain of the null and alternative hypotheses.

The suprema in that equation fold the maximum-likelihood method into the theory of likelihood ratios.

I am not making this up. For example, the text Hoel, Introduction to Statistical Theory, uses L(x | theta0) / L(x | theta), where each theta is the maximum likelihood estimate applicable to each hypothesis.

You can more simply state it as Hoel does and just note that the thetas are produced by maximum likelihood estimation. So the supremum doesn't need to appear in the theory of likelihood ratios. Then you get a ratio of probabilities that is easier to read and even think about.

I actually initially called the offered equation an error. But that is a bridge too far, I think. Still, putting the suprema in a context where you appear to be maximizing something after the data are taken isn't very useful for understanding the actual method.

Jeremiahrounds 12:11, 20 June 2007 (UTC)

I don't think there is any maximum involved in the likelihood-ratio test; you just have to form the ratio of the likelihoods under hypotheses H0 and H1. I'm not an expert in statistics, but I think this equation introduces a confusion between the likelihood-ratio test and maximum likelihood estimation. I have never seen it presented this way anyway... Sylenius 14:45, 27 June 2007 (UTC)

I think Jeremiahrounds is mistaken. In case the MLEs actually exist, the likelihood-ratio test statistic is in fact equal to what Hoel's book says it is, and also it is equal to the expression in TeX above, which appears in this article. But the likelihood-ratio test statistic can exist even in cases where MLEs don't exist, simply because the sup exists and the max does not, i.e. the sup is not actually attained. Moreover, the problem of non-unique MLEs doesn't matter, since it is only the value of the sup rather than the value of θ where the sup occurs that matters. Michael Hardy 19:05, 27 June 2007 (UTC)
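The point that the sup form and the plug-in-the-MLE form agree whenever the MLE exists can be checked numerically. A minimal sketch (my own construction, not from any of the texts cited): take the sup over a fine grid of (0, 1) and compare it with the likelihood evaluated at the closed-form binomial MLE x/n.

```python
import math

def loglik(p, x, n):
    return x * math.log(p) + (n - x) * math.log(1 - p)

x, n = 7, 20
# sup taken numerically over a fine grid of the open interval (0, 1)
grid_sup = max(loglik(p / 10000, x, n) for p in range(1, 10000))
# likelihood evaluated at the MLE x/n (Hoel's formulation)
at_mle = loglik(x / n, x, n)
```

The two agree because the sup is attained at the MLE; the sup formulation only differs when the sup exists but is not attained.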

Untitled[edit]

Can someone please replace the awful ascii-art in this article with TeX, please?


I may get to that if someone doesn't beat me to it. Hundreds of articles here are in need of TeX to replace what was used here before 2003. Michael Hardy 22:57 Feb 2, 2003 (UTC)


The article uses λ in some places, and Λ in others -- is this intentional, or should they all be one or the other?

This article needs thorough checking and copyediting.


(Capital) Λ is the most frequently used notation for the test statistic. Michael Hardy 20:12 Feb 4, 2003 (UTC)

Can the likelihood-ratio test be used in place of the F-test for fixed-effects models? Any differences from the F-test in this case? What about using the LRT for testing fixed effects in a mixed model?

The F-test is the likelihood ratio test in such models. Michael Hardy 22:30, 3 September 2005 (UTC)

Hi. I may be misguided or mistaken here; I'm hardly an expert. But I think the definition of the ratio given is inconsistent with the test statistic given. The unrestricted numerator will be larger than the restricted denominator, so the ratio will be greater than 1, its log will be positive, and −2 log Λ will be negative, which can hardly be chi-square distributed. I think either the ratio should be inverted, or the test statistic multiplied by negative 1, to keep things consistent. (My apologies again if I'm making a basic mistake, a possibility of which the likelihood is high.) Stevewaldman (talk) 00:58, 20 January 2008 (UTC)
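The sign question is easy to settle numerically. With the null sup in the numerator (as in the sup-based formula quoted earlier on this page), the restricted sup can never exceed the unrestricted sup, so Λ ≤ 1 and −2 log Λ ≥ 0. A small sketch (binomial setup and values are my own illustration):

```python
import math

def loglik(p, x, n):
    return x * math.log(p) + (n - x) * math.log(1 - p)

x, n, p0 = 13, 40, 0.5
restricted = loglik(p0, x, n)        # sup over the null (numerator)
unrestricted = loglik(x / n, x, n)   # sup over the full space (denominator)
lam = math.exp(restricted - unrestricted)
stat = -2 * math.log(lam)
```

The negative-statistic worry only arises if the unrestricted sup is put in the numerator without also flipping the sign of the log.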

"asymptotically"[edit]

"If the null hypothesis is true, then −2 log Λ will be asymptotically χ2 distributed" The validity conditions of this theorem should be given. "asymptotically" when what tends to what value ?

I have now answered this question in the article. Michael Hardy 02:36, 28 October 2005 (UTC)
There's really no further restriction on the random variables ("n independent identically distributed random variables")? Dchudz 15:22, 13 July 2007 (UTC)
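The asymptotic claim under discussion (as the sample size n of i.i.d. observations grows) can be illustrated by simulation. A sketch of my own, assuming a fair-coin null that is actually true: with one free parameter restricted, roughly 5% of simulated −2 log Λ values should exceed the χ²₁ critical value 3.841.

```python
import math
import random

random.seed(42)

def ll(p, x, n):
    # Convention 0 * log(0) = 0, so boundary MLEs are handled.
    a = x * math.log(p) if x > 0 else 0.0
    b = (n - x) * math.log(1 - p) if x < n else 0.0
    return a + b

n, reps = 200, 2000
stats = []
for _ in range(reps):
    x = sum(random.random() < 0.5 for _ in range(n))  # H0 is true: p = 0.5
    stats.append(-2 * (ll(0.5, x, n) - ll(x / n, x, n)))

# Fraction of simulated statistics beyond the chi-squared(1 df) 5% critical value.
frac = sum(s > 3.841 for s in stats) / reps
```

The empirical exceedance fraction lands near 0.05, consistent with Wilks' theorem for this regular one-parameter family.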

References[edit]

This article lacks references. For instance, who proved that the likelihood-ratio statistic is distributed as \chi^{2}+O_{p}(n^{-1})?

I believe the critical paper is WILKS, SS (1938): "The Large Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses," Annals of Mathematical Statistics, 9, 60-62.

Freely available online at http://projecteuclid.org/euclid.aoms/1177732360 —Preceding unsigned comment added by 61.18.170.102 (talk) 18:15, 6 April 2008 (UTC)

Coins[edit]

Hi, I think your example of the coins is fine but needs elaborating.

  • You haven't defined mij, which I assume is the probability of event j when the two coins have the same probability of event j. It might be better calling it mj than mij.
  • I think you should put in the equation for the likelihood ratio lambda, then follow it with the -2 log lambda (-2LL) equation
  • I'm not sure your -2LL equation is right, though I may be wrong. It looks to me as if your -2LL equation converts to lambda squared equalling the ratio of the maximum likelihoods of the data under the two hypotheses.

Desmond D.Campbell@iop.kcl.ac.uk 89.241.126.245 01:36, 24 March 2007 (UTC)
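For readers following this thread, here is a minimal sketch of the two-coin test being discussed (my own notation, not the article's mij): under H0 both coins share one head probability, estimated by pooling; under the alternative each coin gets its own MLE.

```python
import math

def ll(p, x, n):
    # Convention 0 * log(0) = 0, so boundary MLEs are handled.
    a = x * math.log(p) if x > 0 else 0.0
    b = (n - x) * math.log(1 - p) if x < n else 0.0
    return a + b

def coins_stat(x1, n1, x2, n2):
    """-2 log lambda for H0: both coins have the same head probability."""
    pooled = (x1 + x2) / (n1 + n2)                    # MLE under H0
    null = ll(pooled, x1, n1) + ll(pooled, x2, n2)
    alt = ll(x1 / n1, x1, n1) + ll(x2 / n2, x2, n2)   # separate MLEs
    return -2 * (null - alt)
```

When the two observed proportions coincide, the pooled and separate fits agree and the statistic is exactly zero, as it should be.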

This page is FUNDAMENTALLY WRONG.[edit]

Where to begin? A likelihood-ratio test is for simple-vs-simple hypotheses. The test statistic given is a generalized, or maximum, likelihood-ratio statistic. It may be commonly referred to in conversation as an LRT, but no competent mathematical statistics text will refer to it as such.

The distinction is critical. For example, the Neyman–Pearson lemma, mentioned in the article, is only directly applicable to the simple-vs-simple test. It may be extended to some composite alternatives (UMP tests) through, e.g., monotone likelihood ratios. For most practical composite hypotheses, the best available results are generally more restrictive, e.g. UMPU.

As for the flag about being "too technical for a general audience": blah. No choice; one has to understand some mathematical statistics to have a chance of understanding LRTs. Conversely, a "general audience" will have little concern with LRTs.

Anyway, another page for the "expert needed" flag... --Zaqrfv (talk) 09:37, 25 August 2008 (UTC)

"Zaqrfv", your main point is wrong. It is true that the LR test referred to in the Neyman–Pearson lemma is for simple-versus-simple. But to say that no respectable text will use this term for the generalized version is wrong. It's quite commonplace. Michael Hardy (talk) 20:51, 17 March 2009 (UTC)
Blah? I wholeheartedly disagree that this article can't be directed at a more general audience (e.g. physicians wanting to interpret the diagnostic validity of a test). The more complex stuff is fine towards the end of the article, but let's put the accessible stuff up front. Currently, the Wikipedia article is one of the least accessible articles on LRs on the web. After having read this article, I still have no idea what they are. 164.111.16.221 (talk) 13:50, 5 November 2008 (UTC)

Hi, I think that the definition given for the ratio is wrong: "The numerator corresponds to the maximum probability of an observed result under the null hypothesis. The denominator corresponds to the maximum probability of an observed result under the alternative hypothesis." I was checking in some books, Mathematics and Statistics for science page 157 for example and the definition is the other way around. Then the interpretation needs another review also. —Preceding unsigned comment added by Isapedraza (talkcontribs) 12:52, 26 February 2009 (UTC)

It can be done either way; you just need to say that in one case you reject the null if the ratio is too big and in the other case if it's too small; the test is the same either way (in the sense that any dataset will lead to rejection in one case if and only if it leads to rejection in the other case). Michael Hardy (talk) 20:51, 17 March 2009 (UTC)
A problem might be in the Criticism > Practical paragraph, which states that a disease is present if the likelihood ratio is large. This would be the other way around if we want to be consistent with the definition given. Jonas Wagner (talk) 13:16, 10 June 2010 (UTC)
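The point that the two conventions define the same test can be shown in a few lines (a sketch of my own; the cutoff value is arbitrary): with the null sup in the numerator you reject when Λ is small, and with the inverted ratio you reject when it is large, and the decisions coincide for every dataset.

```python
def decide(lam, cut):
    """Return the reject/accept decision under both conventions.

    lam: ratio with the null sup in the numerator (0 < lam <= 1).
    cut: rejection cutoff for that convention (0 < cut < 1).
    """
    reject_small = lam < cut              # convention A: reject when small
    reject_large = (1 / lam) > (1 / cut)  # convention B: inverted ratio, reject when large
    return reject_small, reject_large
```

Since x < c is equivalent to 1/x > 1/c for positive numbers, the two decisions always agree.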

What is f(.)[edit]

Are we talking cdf or pdf? The probability that x is observed exactly as is, or that x or something more extreme than x was observed? cancan101 (talk) 03:02, 18 February 2009 (UTC)

In standard usage a lower-case ƒ is the pdf, and capital F is the cdf. Michael Hardy (talk) 22:09, 20 March 2009 (UTC)

Dubious[edit]

I have revised the section, including the para marked dubious. Is it better/good enough? Otherwise give details of apparent problem points. Melcombe (talk) 10:16, 18 February 2009 (UTC)

I have revised the section, the paragraph does not seem to hold good under the revised definition and hence is omitted. Kniwor (talk) 18:58, 23 August 2009 (UTC)

Inconsistencies (Revised for improvement)[edit]

The sections and the definitions(though correct) seem inconsistent to me, and thoroughly confusing for a reader unfamiliar with the topic. I have revised and rewritten the first two sections to avoid any confusion and make things clear and consistent. Please point out any errors. Kniwor (talk) 18:57, 23 August 2009 (UTC)


The ratio[edit]

Since the test is for nested models, it is better to state it in the following way: D=2\left(L\left(unconstrained\right)-L\left(constrained\right)\right)

rather than the original articulation in the article.

For the non-logarithmized version, the ratio is \frac{l\left(unconstrained\right)}{l\left(constrained\right)}

Jackzhp (talk) 22:18, 9 February 2011 (UTC)
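The suggested D and the article's −2 log Λ are the same number, which a short sketch can confirm (binomial setup and values are my own illustration; L denotes the log-likelihood here, as in the comment above):

```python
import math

def ll(p, x, n):
    # Convention 0 * log(0) = 0, so boundary MLEs are handled.
    a = x * math.log(p) if x > 0 else 0.0
    b = (n - x) * math.log(1 - p) if x < n else 0.0
    return a + b

x, n, p0 = 30, 80, 0.5
l_con = ll(p0, x, n)       # constrained (null) log-likelihood
l_unc = ll(x / n, x, n)    # unconstrained log-likelihood, at the MLE
D = 2 * (l_unc - l_con)
lam = math.exp(l_con - l_unc)  # ratio with the null sup in the numerator
```

D = 2(L(unconstrained) − L(constrained)) is just −2 log Λ written on the log scale, so the two articulations are equivalent.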


Wilks' theorem[edit]

can someone put a reference so we can see where to look for its precise format. Jackzhp (talk) 22:21, 9 February 2011 (UTC)

I've added the relevant ref: Wilks, S. S. (1938). "The Large-Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses". The Annals of Mathematical Statistics 9: 60–62. doi:10.1214/aoms/1177732360.  edit --Qwfp (talk) 19:37, 11 February 2011 (UTC)

Background?[edit]

Wasn't the likelihood ratio test a result of Søren Johansen's work, or am I mistaken? Shouldn't he be mentioned in the Background section? I've only ever heard this referred to as Johansen's likelihood ratio and Johansen's likelihood ratio test. — Preceding unsigned comment added by 64.71.89.15 (talk) 19:34, 29 November 2011 (UTC)

I think Johansen just developed a specific likelihood ratio test for cointegration. He was born in 1939 and Wilks' theorem about the asymptotic distribution of the log-likelihood ratio dates from 1938, so it seems improbable that likelihood ratio tests in general are a result of Johansen's work. Qwfp (talk) 20:10, 29 November 2011 (UTC)
Thanks! I understand now. Apologies for forgetting to login and sign my previous post - didn't realize I was logged out. John Shandy`talk 20:57, 29 November 2011 (UTC)

Definition of Deviance is wrong[edit]

The definition of deviance on the page now (25 January 2013) is wrong.

It should be: -2 ln [ likelihood of fitted model / likelihood of saturated model ], which is the correct definition from Hosmer and Lemeshow's Applied Logistic Regression, p. 13. — Preceding unsigned comment added by 62.242.0.66 (talk) 11:29, 25 January 2013 (UTC)

Deviance is not mentioned in this article at all. There is a different quantity denoted by D. 81.98.35.149 (talk) 19:19, 25 January 2013 (UTC)
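For reference, the Hosmer-Lemeshow definition quoted above is itself a likelihood-ratio quantity, which a short sketch makes concrete (my own illustration; function name and data are invented). For 0/1 outcomes the saturated model reproduces each observation exactly, so its likelihood is 1 and the deviance reduces to −2 times the fitted log-likelihood:

```python
import math

def bernoulli_deviance(y, p_hat):
    """Deviance = -2 ln(L_fitted / L_saturated) for 0/1 outcomes."""
    ll_fit = sum(
        math.log(p) if yi == 1 else math.log(1 - p)
        for yi, p in zip(y, p_hat)
    )
    ll_sat = 0.0  # saturated fit gives each observation probability 1
    return -2 * (ll_fit - ll_sat)

y = [1, 0, 1, 1, 0]
dev = bernoulli_deviance(y, [0.8, 0.3, 0.6, 0.9, 0.2])
```

The deviance is nonnegative and approaches zero as the fitted probabilities approach the observed outcomes.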