# Neyman–Pearson lemma

In statistics, the Neyman–Pearson lemma was introduced by Jerzy Neyman and Egon Pearson in a paper published in 1933. The lemma is part of the Neyman–Pearson theory of statistical testing, which introduced concepts such as errors of the second kind, the power function, and inductive behavior. The earlier Fisherian theory of significance testing postulated only one hypothesis. By introducing a competing hypothesis, the Neyman–Pearsonian flavor of statistical testing allows investigating the two types of errors. The trivial cases in which one always rejects or accepts the null hypothesis are of little interest, but they do show that one must not relinquish control over one type of error while calibrating the other. Neyman and Pearson accordingly restricted their attention to the class of all level $\alpha$ tests and then minimized the type II error, traditionally denoted by $\beta$ . Their seminal paper of 1933, including the Neyman–Pearson lemma, comes at the end of this endeavor: it not only shows the existence of tests with the most power that retain a prespecified level of type I error ($\alpha$ ), but also provides a way to construct such tests. The Karlin–Rubin theorem extends the Neyman–Pearson lemma to settings involving composite hypotheses with monotone likelihood ratios.

## Statement

Consider a test with hypotheses $H_{0}:\theta =\theta _{0}$ and $H_{1}:\theta =\theta _{1}$ , where the probability density function (or probability mass function) is $\rho (x\mid \theta _{i})$ for $i=0,1$ .

For any hypothesis test with rejection set $R$ , and any $\alpha \in [0,1]$ , we say that it satisfies condition $P_{\alpha }$ if

• $\alpha =Pr_{\theta _{0}}(X\in R)$ ; that is, the test has size $\alpha$ (the probability of falsely rejecting the null hypothesis is $\alpha$ ).
• $\exists \eta \geq 0$ such that ${\begin{aligned}x\in R\setminus A&\implies \rho (x\mid \theta _{1})>\eta \rho (x\mid \theta _{0})\\x\in R^{c}\setminus A&\implies \rho (x\mid \theta _{1})<\eta \rho (x\mid \theta _{0})\end{aligned}}$ where $A$ is a set that is ignorable under both hypotheses: $Pr_{\theta _{0}}(X\in A)=Pr_{\theta _{1}}(X\in A)=0$ .
• That is, we have a strict likelihood ratio test, except on an ignorable subset.

For any $\alpha \in [0,1]$ , let the set of level $\alpha$ tests be the set of all hypothesis tests with size at most $\alpha$ . That is, denoting a test's rejection region by $R$ , we require $Pr_{\theta _{0}}(X\in R)\leq \alpha$ .

Neyman-Pearson lemma — Existence:

If a hypothesis test satisfies the $P_{\alpha }$ condition, then it is a uniformly most powerful (UMP) test in the set of level $\alpha$ tests.

Uniqueness: If there exists a hypothesis test $R_{NP}$ that satisfies the $P_{\alpha }$ condition, with $\eta >0$ , then every UMP test $R$ in the set of level $\alpha$ tests satisfies the $P_{\alpha }$ condition with the same $\eta$ .

Further, the $R_{NP}$ test and the $R$ test agree with probability $1$ whether $\theta =\theta _{0}$ or $\theta =\theta _{1}$ .

In practice, the likelihood ratio is often used directly to construct tests — see likelihood-ratio test. However, it can also be used to suggest particular test statistics that might be of interest, or to suggest simplified tests — for this, one considers algebraic manipulation of the ratio to see whether there are key statistics in it related to the size of the ratio (i.e. whether a large statistic corresponds to a small ratio or to a large one).
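The reduction of the likelihood ratio to a single key statistic can be sketched numerically. The example below is hypothetical (not from the article): testing $H_{0}: X\sim \mathrm{Exp}(1)$ against $H_{1}: X\sim \mathrm{Exp}(2)$ , where the ratio is monotone in $T=\sum x_{i}$ , so the ratio test "$\Lambda > \eta$" collapses to a threshold test on $T$ :

```python
import math
import random

# A minimal sketch (hypothetical example, not from the article): test
# H0: X ~ Exponential(rate 1) against H1: X ~ Exponential(rate 2) from
# n i.i.d. observations. The likelihood ratio
#     rho(x | theta_1) / rho(x | theta_0) = 2^n * exp(-(2 - 1) * sum(x))
# depends on the data only through T = sum(x) and is decreasing in T, so
# "ratio > eta" reduces to a threshold test "T < c" on the key statistic T.
random.seed(1)
n, rate0, rate1 = 5, 1.0, 2.0
x = [random.expovariate(rate0) for _ in range(n)]

T = sum(x)
ratio = (rate1 / rate0) ** n * math.exp(-(rate1 - rate0) * T)

# For any eta > 0 the two rejection rules coincide:
eta = 1.0
c = (n * math.log(rate1 / rate0) - math.log(eta)) / (rate1 - rate0)
reject_by_ratio = ratio > eta
reject_by_statistic = T < c
```

Taking logarithms of the ratio is what makes the equivalence explicit; the same algebra underlies the normal-variance example later in the article.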

## Proof

Given any hypothesis test with rejection set $R$ , define its statistical power function $\beta _{R}(\theta )=Pr_{\theta }(X\in R)$ .

Existence:

Given some hypothesis test that satisfies the $P_{\alpha }$ condition, call its rejection region $R_{NP}$ (where NP stands for Neyman–Pearson).

For any level $\alpha$ hypothesis test with rejection region $R$ we have $[1_{R_{NP}}(x)-1_{R}(x)][\rho (x|\theta _{1})-\eta \rho (x|\theta _{0})]\geq 0$ except on some ignorable set $A$ .

Then integrate it over $x$ to obtain $0\leq [\beta _{R_{NP}}(\theta _{1})-\beta _{R}(\theta _{1})]-\eta [\beta _{R_{NP}}(\theta _{0})-\beta _{R}(\theta _{0})]$ .

Since $\beta _{R_{NP}}(\theta _{0})=\alpha$ and $\beta _{R}(\theta _{0})\leq \alpha$ , we find that $\beta _{R_{NP}}(\theta _{1})\geq \beta _{R}(\theta _{1})$ .

Thus the $R_{NP}$ rejection test is a UMP test in the set of level $\alpha$ tests.
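The Existence claim can be checked numerically. The setup below is assumed for illustration (it is not from the article): $H_{0}: X\sim {\mathcal {N}}(0,1)$ versus $H_{1}: X\sim {\mathcal {N}}(1,1)$ with one observation, comparing the one-sided NP region against another level $\alpha$ test, a two-sided region:

```python
from statistics import NormalDist

# Numerical illustration of the Existence claim, under assumed (not
# article-given) distributions: H0: X ~ N(0,1) vs H1: X ~ N(1,1), one
# observation, alpha = 0.05. The NP region {x > c_np} is compared with
# another level-alpha test, the two-sided region {|x| > c_ts}.
h0, h1 = NormalDist(0.0, 1.0), NormalDist(1.0, 1.0)
alpha = 0.05

c_np = h0.inv_cdf(1 - alpha)        # one-sided NP threshold
size_np = 1 - h0.cdf(c_np)          # equals alpha by construction
power_np = 1 - h1.cdf(c_np)

c_ts = h0.inv_cdf(1 - alpha / 2)    # two-sided threshold
size_ts = 2 * (1 - h0.cdf(c_ts))    # also equals alpha
power_ts = (1 - h1.cdf(c_ts)) + h1.cdf(-c_ts)

# The lemma guarantees power_np >= power_ts for every level-alpha test.
```

Both tests have size exactly $\alpha$ , yet the NP test has strictly higher power against $\theta _{1}$ , as the lemma predicts.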

Uniqueness:

For any other UMP level $\alpha$ test, with rejection region $R$ , we have, from the Existence part, $[\beta _{R_{NP}}(\theta _{1})-\beta _{R}(\theta _{1})]\geq \eta [\beta _{R_{NP}}(\theta _{0})-\beta _{R}(\theta _{0})]$ .

Since the $R$ test is UMP, the left side must be zero. Since $\eta >0$ the right side gives $\beta _{R}(\theta _{0})=\beta _{R_{NP}}(\theta _{0})=\alpha$ , so the $R$ test has size $\alpha$ .

Since the integrand $[1_{R_{NP}}(x)-1_{R}(x)][\rho (x|\theta _{1})-\eta \rho (x|\theta _{0})]$ is nonnegative, and integrates to zero, it must be exactly zero except on some ignorable set $A$ .

Since the $R_{NP}$ test satisfies the $P_{\alpha }$ condition, let the ignorable set in the definition of the $P_{\alpha }$ condition be $A_{NP}$ .

$R\setminus (R_{NP}\cup A_{NP})$ is ignorable, since for all $x\in R\setminus (R_{NP}\cup A_{NP})$ , we have $[1_{R_{NP}}(x)-1_{R}(x)][\rho (x|\theta _{1})-\eta \rho (x|\theta _{0})]=\eta \rho (x|\theta _{0})-\rho (x|\theta _{1})>0$ .

Similarly, $R_{NP}\setminus (R\cup A_{NP})$ is ignorable.

Define $A_{R}:=(R\Delta R_{NP})\cup A_{NP}$ (where $\Delta$ means symmetric difference). It is the union of three ignorable sets, thus it is an ignorable set.

Then we have $x\in R\setminus A_{R}\implies \rho (x|\theta _{1})>\eta \rho (x|\theta _{0})$ and $x\in R^{c}\setminus A_{R}\implies \rho (x|\theta _{1})<\eta \rho (x|\theta _{0})$ . So the $R$ rejection test satisfies the $P_{\alpha }$ condition with the same $\eta$ .

Since $A_{R}$ is ignorable, its subset $R\Delta R_{NP}\subset A_{R}$ is also ignorable. Consequently, the two tests agree with probability $1$ whether $\theta =\theta _{0}$ or $\theta =\theta _{1}$ .

## Example

Let $X_{1},\dots ,X_{n}$ be a random sample from the ${\mathcal {N}}(\mu ,\sigma ^{2})$ distribution where the mean $\mu$ is known, and suppose that we wish to test for $H_{0}:\sigma ^{2}=\sigma _{0}^{2}$ against $H_{1}:\sigma ^{2}=\sigma _{1}^{2}$ . The likelihood for this set of normally distributed data is

${\mathcal {L}}\left(\sigma ^{2}\mid \mathbf {x} \right)\propto \left(\sigma ^{2}\right)^{-n/2}\exp \left\{-{\frac {\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2\sigma ^{2}}}\right\}.$ We can compute the likelihood ratio to find the key statistic in this test and its effect on the test's outcome:

$\Lambda (\mathbf {x} )={\frac {{\mathcal {L}}\left({\sigma _{0}}^{2}\mid \mathbf {x} \right)}{{\mathcal {L}}\left({\sigma _{1}}^{2}\mid \mathbf {x} \right)}}=\left({\frac {\sigma _{0}^{2}}{\sigma _{1}^{2}}}\right)^{-n/2}\exp \left\{-{\frac {1}{2}}(\sigma _{0}^{-2}-\sigma _{1}^{-2})\sum _{i=1}^{n}(x_{i}-\mu )^{2}\right\}.$ This ratio depends on the data only through $\sum _{i=1}^{n}(x_{i}-\mu )^{2}$ . Therefore, by the Neyman–Pearson lemma, the most powerful test of these hypotheses for these data will depend only on $\sum _{i=1}^{n}(x_{i}-\mu )^{2}$ . Also, by inspection, we can see that if $\sigma _{1}^{2}>\sigma _{0}^{2}$ , then $\Lambda (\mathbf {x} )$ is a decreasing function of $\sum _{i=1}^{n}(x_{i}-\mu )^{2}$ . So we should reject $H_{0}$ if $\sum _{i=1}^{n}(x_{i}-\mu )^{2}$ is sufficiently large. The rejection threshold depends on the size of the test. In this example, the test statistic is a scaled chi-squared-distributed random variable — under $H_{0}$ , $\sum _{i=1}^{n}(x_{i}-\mu )^{2}/\sigma _{0}^{2}$ follows a chi-squared distribution with $n$ degrees of freedom — so an exact critical value can be obtained.
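A Monte Carlo sketch of this variance test, using illustrative values that are not taken from the article ($\mu =0$ , $n=10$ , $\sigma _{0}^{2}=1$ , $\sigma _{1}^{2}=4$ , $\alpha =0.05$ ):

```python
import random

# Monte Carlo sketch of the variance test above, with illustrative values
# not taken from the article: mu = 0, n = 10, H0: sigma^2 = 1 vs
# H1: sigma^2 = 4, alpha = 0.05. The test rejects when
# T = sum((x_i - mu)^2) exceeds a threshold c with P_{H0}(T > c) = alpha.
random.seed(0)
mu, sigma0, sigma1 = 0.0, 1.0, 2.0
n, alpha, reps = 10, 0.05, 20_000

def statistic(sigma):
    """One draw of T = sum((x_i - mu)^2) for a sample of size n."""
    return sum((random.gauss(mu, sigma) - mu) ** 2 for _ in range(n))

# Approximate the rejection threshold by the empirical (1 - alpha)
# quantile of T under H0.
null_draws = sorted(statistic(sigma0) for _ in range(reps))
c = null_draws[int((1 - alpha) * reps)]

# Estimated power of the test under H1.
power = sum(statistic(sigma1) > c for _ in range(reps)) / reps
# Under H0, T / sigma0^2 is chi-squared with n degrees of freedom, so the
# exact threshold is sigma0^2 times the 0.95 quantile of chi2(10),
# about 18.3; the Monte Carlo estimate c should land close to that.
```

With an exact chi-squared quantile available, one would of course use it in place of the simulated threshold; the simulation only illustrates that "size $\alpha$ , then maximize power" is a computable recipe.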

## Application in economics

A variant of the Neyman–Pearson lemma has found an application in the seemingly unrelated domain of the economics of land value. One of the fundamental problems in consumer theory is calculating the demand function of the consumer given the prices. In particular, given a heterogeneous land-estate, a price measure over the land, and a subjective utility measure over the land, the consumer's problem is to calculate the best land parcel that they can buy – i.e. the land parcel with the largest utility, whose price is at most their budget. It turns out that this problem is very similar to the problem of finding the most powerful statistical test, and so the Neyman–Pearson lemma can be used.

## Uses in electrical engineering

The Neyman–Pearson lemma is quite useful in electrical engineering, namely in the design and use of radar systems, digital communication systems, and signal processing systems. In radar systems, the lemma is applied by first setting the rate of missed detections to a desired (low) level and then minimizing the rate of false alarms, or vice versa. The rates of false alarms and missed detections cannot both be made arbitrarily low, let alone zero: reducing one necessarily increases the other. The same reasoning applies to many systems in signal processing.
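The radar tradeoff can be sketched with hypothetical numbers (none of the values below come from the article). For a single measurement, deciding between noise only and target present, fixing the false-alarm rate $P_{FA}$ determines the detection threshold, which in turn determines the detection probability $P_{D}$ :

```python
from statistics import NormalDist

# Radar-style sketch with hypothetical numbers: decide between
# H0: y ~ N(0, sigma^2) (noise only) and H1: y ~ N(A, sigma^2) (target
# present). Following the Neyman-Pearson approach, fix the false-alarm
# rate P_FA first; the threshold and the detection rate P_D then follow.
sigma, A, p_fa = 1.0, 3.0, 1e-3

c = sigma * NormalDist().inv_cdf(1 - p_fa)   # threshold set by P_FA alone
p_d = 1 - NormalDist(A, sigma).cdf(c)        # resulting detection rate

# Driving P_FA lower raises c and therefore lowers P_D: neither error
# rate can be pushed to zero without ruining the other.
```

Repeating the computation with a smaller `p_fa` shows `p_d` shrinking, which is exactly the tradeoff described above.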

## Uses in particle physics

The Neyman–Pearson lemma is applied to the construction of analysis-specific likelihood ratios, used, for example, to test for signatures of new physics against the nominal Standard Model prediction in proton–proton collision datasets collected at the LHC.