WikiProject Statistics (Rated Start-class, Top-importance)
Can someone add a more detailed explanation and an example to this article?
The symbol used for the ratio is the same symbol used in the likelihood ratio test article, even though the likelihoods there are maximum likelihoods. I suppose that, since there is only one possible value under each hypothesis, specifying that they are suprema is not strictly necessary, but it might be technically correct.
Also, the likelihood ratio test article says that the null hypothesis has to be a subset of the alternative hypothesis, whereas here that is not the case. Possibly this is a generalised likelihood ratio test, as described here: http://www.cbr.washington.edu/papers/zabel/chp3.doc7.html where there are only two possible values of the parameter theta?
- The main problem is that the LRT article is even more poorly written than this one. Statements such as "the null hypothesis has to be a subset of the alternative hypothesis" represent a fundamental lack of understanding of hypothesis testing; to the contrary, the hypotheses must be disjoint. What is (before I get to work with editing...) called an LRT on the LRT page should indeed be called a generalized (or maximum) likelihood ratio test. --Zaqrfv (talk) 08:41, 27 August 2008 (UTC)
The recent addition (May 8 2008) needs a correction to the algebra which is beyond my skill. The term involving the difference of two variances needs changing to a form which involves the difference of the reciprocals of the variances. Melcombe (talk) 09:46, 8 May 2008 (UTC)
Is there any reason why the two rejection regions defined at the start of the proof need to be and ? The A and are confusingly similar, especially as subscripts. MDReid (talk) 02:28, 1 August 2008 (UTC)
How many problems are there in this page?
Let me count.
- Undefined notation, like L(.|.), in the introduction paragraph, and no clear verbal statement of the result.
- No mention of randomized testing, which is critical to the N-P lemma when discrete distributions are involved.
- The proof (should a proof even be here, rather than in references?) is unnecessarily long-winded and notation-heavy, and isn't even general (doesn't appear to allow randomized tests, and therefore doesn't cover discrete distributions).
- An example that tells me I reject in favour of , with , if the sample variance is sufficiently small. Umm, I think this is a least powerful test, or something.
- And can't we find a less-messy example for demonstration anyway?
- I'd like a measure-theoretic version of this lemma with the Radon-Nikodym derivative. In mathematical finance, not everyone understands statistics jargon. —Preceding unsigned comment added by 220.127.116.11 (talk) 09:11, 23 July 2009 (UTC)
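The point about randomized testing can be made concrete with a small (hypothetical, not from the article) binomial example: testing H0: p = 0.5 against H1: p = 0.8 with X ~ Binomial(10, p). The likelihood ratio is monotone in x, so the Neyman-Pearson test rejects for large x; but because X is discrete, an exact size of 0.05 is only attainable by rejecting with some probability gamma on the boundary value. A minimal sketch:

```python
# Randomized Neyman-Pearson test for discrete data (assumed illustrative
# example): H0: p = 0.5 vs H1: p = 0.8, X ~ Binomial(10, p), size alpha = 0.05.
from math import comb

n, alpha = 10, 0.05
p0 = 0.5

def pmf(x, p):
    """Binomial(n, p) probability mass at x."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Accumulate the upper tail under H0 until adding the next value would
# exceed alpha; that value k is where the test must randomize.
tail = 0.0
for k in range(n, -1, -1):
    if tail + pmf(k, p0) > alpha:
        break
    tail += pmf(k, p0)

# Reject with probability 1 if X > k, and with probability gamma if X == k,
# so that the size is exactly alpha.
gamma = (alpha - tail) / pmf(k, p0)
size = tail + gamma * pmf(k, p0)
print(k, gamma, size)
```

Without the randomization at X == k, the test's size would be either 0.0107 (reject only for X > 8) or 0.0547 (reject for X >= 8), neither equal to 0.05; this is the gap the lemma's randomized form closes.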
I think most of your suggested update is great. However, as the original author of the proof, I'd like to comment. I put it there because I needed to understand the lemma one day and there was nothing online, so I derived it. OK, the notation may not be the best. Your proposed proof is quick; however, its style of "start by looking at this weird inequality I'd never dream up in a million years" doesn't offer any real understanding.
I really don't think that generalising to include randomized testing is worthwhile, since in reality it is never, ever used. —Preceding unsigned comment added by 18.104.22.168 (talk) 14:17, 28 August 2008 (UTC)
- The draft proof is actually a fairly standard proof from statistics texts (essentially the same as the one given in Lehmann, for example). With regard to randomization, this is again standard for any proper treatment of the NP lemma in statistics texts. Without it, the lemma is incomplete, since one leaves discrete data (or more correctly, discrete likelihood ratios) uncovered. "Most powerful" tests for Poisson data could take some very strange forms if one doesn't allow randomized testing. --Zaqrfv (talk) 23:17, 2 September 2008 (UTC)
The initial definition tests H1 against H0 ($\Lambda < k$), but the example tests H0 against H1 ($\Lambda > k$). —Preceding unsigned comment added by Cerfe (talk • contribs) 17:07, 19 August 2010 (UTC)
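- For what it's worth, the two statements are equivalent up to a relabelling of the constant, since inverting the ratio flips the direction of the inequality:

```latex
\Lambda(x) = \frac{L(x \mid H_0)}{L(x \mid H_1)} \le k
\quad\Longleftrightarrow\quad
\frac{L(x \mid H_1)}{L(x \mid H_0)} \ge \frac{1}{k}
```

The article should still pick one convention and use it consistently in both the definition and the example.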
claim that "the test statistic can be shown to be a scaled Chi-square distributed random variable"
In the section titled "example", this article claims that "the test statistic can be shown to be a scaled Chi-square distributed random variable", but does not provide a source or any explanation of how this can be shown. — Preceding unsigned comment added by Rcorty (talk • contribs) 21:02, 19 May 2016 (UTC)
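- If the example's data are assumed i.i.d. normal (as the example appears to intend), the claim follows from the standard sampling distribution of the sample variance:

```latex
X_1,\dots,X_n \sim N(\mu,\sigma^2) \ \text{i.i.d.}
\quad\Longrightarrow\quad
\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1},
\quad\text{i.e.}\quad
S^2 \sim \frac{\sigma^2}{n-1}\,\chi^2_{n-1},
```

so the test statistic is a chi-square random variable scaled by $\sigma^2/(n-1)$. A citation for this (e.g. Cochran's theorem) would still improve the article.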