# Likelihood-ratio test

Not to be confused with the use of likelihood ratios in diagnostic testing.

In statistics, a likelihood ratio test is a statistical test used for comparing the goodness of fit of two models, one of which (the null model) is a special case of the other (the alternative model). The test is based on the likelihood ratio, which expresses how many times more likely the data are under one model than the other. This likelihood ratio, or equivalently its logarithm, can then be used to compute a p-value, or compared to a critical value to decide whether to reject the null model in favour of the alternative model. When the logarithm of the likelihood ratio is used, the statistic is known as a log-likelihood ratio statistic, and the probability distribution of this test statistic, assuming that the null model is true, can be approximated using Wilks’ theorem.

In the case of distinguishing between two models, each of which has no unknown parameters, use of the likelihood ratio test can be justified by the Neyman–Pearson lemma, which demonstrates that such a test has the highest power among all competitors.[1]

## Definition

### Simple hypotheses

A statistical model is often a parametrized family of probability density functions or probability mass functions ${\displaystyle f(x|\theta )}$. A simple-vs.-simple hypothesis test has completely specified models under both the null and alternative hypotheses, which for convenience are written in terms of fixed values of a notional parameter ${\displaystyle \theta }$:

${\displaystyle {\begin{aligned}H_{0}&:&\theta =\theta _{0},\\H_{1}&:&\theta =\theta _{1}.\end{aligned}}}$

Note that under either hypothesis, the distribution of the data is fully specified; there are no unknown parameters to estimate. The likelihood ratio test is based on the likelihood ratio, which is often denoted by ${\displaystyle \Lambda }$ (the capital Greek letter lambda). The likelihood ratio is defined as follows:[2][3]

${\displaystyle \Lambda (x)={\frac {L(\theta _{0}\mid x)}{L(\theta _{1}\mid x)}}={\frac {f(x\mid \theta _{0})}{f(x\mid \theta _{1})}}}$

or

${\displaystyle \Lambda (x)={\frac {L(\theta _{0}\mid x)}{\sup\{\,L(\theta \mid x):\theta \in \{\theta _{0},\theta _{1}\}\}}},}$

where ${\displaystyle \theta \mapsto L(\theta \mid x)}$ is the likelihood function, and ${\displaystyle \sup }$ is the supremum function. Note that some references may use the reciprocal as the definition.[4] In the form stated here, the likelihood ratio is small if the alternative model is better than the null model and the likelihood ratio test provides the decision rule as follows:

If ${\displaystyle \Lambda >c}$, do not reject ${\displaystyle H_{0}}$;
If ${\displaystyle \Lambda <c}$, reject ${\displaystyle H_{0}}$;
Reject with probability ${\displaystyle q}$ if ${\displaystyle \Lambda =c.}$

The values ${\displaystyle c,\;q}$ are usually chosen to obtain a specified significance level ${\displaystyle \alpha }$, through the relation

${\displaystyle q\cdot P(\Lambda =c\mid H_{0})+P(\Lambda <c\mid H_{0})=\alpha .}$

The Neyman–Pearson lemma states that this likelihood ratio test is the most powerful among all level ${\displaystyle \alpha }$ tests for this problem.[1]
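The decision rule above can be sketched concretely. In the following illustration (the normal model with known variance, the data, and the threshold ${\displaystyle c}$ are assumptions chosen for the example, not taken from any source), both hypotheses fully specify the distribution, so ${\displaystyle \Lambda }$ is a simple ratio of two likelihoods:

```python
import math

def normal_loglik(x, mu, sigma=1.0):
    """Log-likelihood of i.i.d. Normal(mu, sigma^2) data with known sigma."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (xi - mu)**2 / (2 * sigma**2) for xi in x)

def simple_lrt(x, theta0, theta1):
    """Likelihood ratio Lambda = L(theta0 | x) / L(theta1 | x),
    computed via the difference of log-likelihoods for numerical stability."""
    return math.exp(normal_loglik(x, theta0) - normal_loglik(x, theta1))

# Illustrative data that happen to favour theta1 = 1 over theta0 = 0.
x = [0.9, 1.4, 0.7, 1.2, 1.1]
lam = simple_lrt(x, theta0=0.0, theta1=1.0)
reject = lam < 0.1  # c = 0.1 is an arbitrary illustrative threshold
```

In practice ${\displaystyle c}$ would be calibrated from the null distribution of ${\displaystyle \Lambda }$ to achieve the desired level ${\displaystyle \alpha }$, rather than fixed ad hoc as here.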

### Composite hypotheses

A null hypothesis is often stated by saying the parameter ${\displaystyle \theta }$ is in a specified subset ${\displaystyle \Theta _{0}}$ of the parameter space ${\displaystyle \Theta }$.

${\displaystyle {\begin{aligned}H_{0}&:&\theta \in \Theta _{0}\\H_{1}&:&\theta \in \Theta _{0}^{\complement }\end{aligned}}}$

The likelihood function is ${\displaystyle L(\theta \mid x)=f(x\mid \theta )}$ (the probability density function or probability mass function), which is a function of the parameter ${\displaystyle \theta }$ with ${\displaystyle x}$ held fixed at the value that was actually observed, i.e., the data. The likelihood ratio test statistic is [5]

${\displaystyle \Lambda (x)={\frac {\sup\{\,L(\theta \mid x):\theta \in \Theta _{0}\,\}}{\sup\{\,L(\theta \mid x):\theta \in \Theta \,\}}}.}$

Here, the ${\displaystyle \sup }$ notation refers to the supremum function.

A likelihood ratio test is any test with critical region (or rejection region) of the form ${\displaystyle \{x\mid \Lambda \leq c\}}$ where ${\displaystyle c}$ is any number satisfying ${\displaystyle 0\leq c\leq 1}$. Many common test statistics such as the Z-test, the F-test, Pearson's chi-squared test and the G-test are tests for nested models and can be phrased as log-likelihood ratios or approximations thereof.
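As a sketch of this statistic in a concrete composite problem (the exponential model and the data below are assumed for illustration), the supremum over the full parameter space is attained at the maximum likelihood estimate, here ${\displaystyle {\hat {\lambda }}=1/{\bar {x}}}$:

```python
import math

def exp_loglik(x, rate):
    """Log-likelihood of i.i.d. Exponential(rate) data."""
    return len(x) * math.log(rate) - rate * sum(x)

def lrt_statistic(x, rate0):
    """Lambda = sup over Theta_0 of L / sup over Theta of L,
    for H0: rate = rate0 versus the unrestricted alternative.
    The unrestricted MLE of the rate is 1 / (sample mean)."""
    rate_hat = len(x) / sum(x)
    log_lambda = exp_loglik(x, rate0) - exp_loglik(x, rate_hat)
    return math.exp(log_lambda)

x = [0.4, 1.7, 0.9, 2.3, 0.6, 1.1]   # invented data
lam = lrt_statistic(x, rate0=1.0)    # Lambda always lies in (0, 1]
```

Because the null parameter set is contained in the full parameter space, the numerator can never exceed the denominator, so ${\displaystyle \Lambda \leq 1}$, with equality exactly when the MLE satisfies the null constraint.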

## Interpretation

Being a function of the data ${\displaystyle x}$, the likelihood ratio is therefore a statistic. The likelihood ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e., on what probability of Type I error is considered tolerable ("Type I" errors consist of the rejection of a null hypothesis that is true).

The numerator corresponds to the likelihood of the observed data under the null hypothesis. The denominator corresponds to the maximum likelihood of the observed data, with the parameters varying over the whole parameter space. Since the numerator cannot exceed the denominator, the likelihood ratio lies between 0 and 1. Low values of the likelihood ratio mean that the observed result was much less likely to occur under the null hypothesis than under the alternative. High values of the statistic mean that the observed outcome was nearly as likely to occur under the null hypothesis as under the alternative, and the null hypothesis cannot be rejected.

The likelihood-ratio test requires nested models – models in which the more complex one can be transformed into the simpler model by imposing a set of constraints on the parameters. If the models are not nested, then a generalization of the likelihood-ratio test can usually be used instead: the relative likelihood.

## Distribution: Wilks’ theorem

If the distribution of the likelihood ratio corresponding to a particular null and alternative hypothesis can be explicitly determined then it can directly be used to form decision regions (to accept or reject the null hypothesis). In most cases, however, the exact distribution of the likelihood ratio corresponding to specific hypotheses is very difficult to determine. A convenient result by Samuel S. Wilks says that as the sample size ${\displaystyle n}$ approaches ${\displaystyle \infty }$, the test statistic ${\displaystyle -2\log(\Lambda )}$ for a nested model will be asymptotically chi-squared distributed (${\displaystyle \chi ^{2}}$) with degrees of freedom equal to the difference in dimensionality of ${\displaystyle \Theta }$ and ${\displaystyle \Theta _{0}}$, when ${\displaystyle H_{0}}$ holds true.[6] This means that for a great variety of hypotheses, a practitioner can compute the likelihood ratio ${\displaystyle \Lambda }$ for the data and compare ${\displaystyle -2\log(\Lambda )}$ to the ${\displaystyle \chi ^{2}}$ value corresponding to a desired statistical significance as an approximate statistical test.
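The recipe can be sketched as follows. The fitted log-likelihood values below are invented for illustration; the closed-form chi-squared survival functions used here hold only for the stated degrees of freedom (for df = 1 via the complementary error function, for df = 2 via a simple exponential):

```python
import math

def chi2_sf(x, df):
    """Survival function P(X > x) of the chi-squared distribution,
    using closed forms available for df = 1 and df = 2 only."""
    if df == 1:
        return math.erfc(math.sqrt(x / 2))
    if df == 2:
        return math.exp(-x / 2)
    raise NotImplementedError("closed forms given here only for df = 1, 2")

# Suppose the fitted log-likelihoods were -504.2 (null) and -501.9
# (alternative), with one extra free parameter in the alternative (df = 1).
stat = -2 * (-504.2 - (-501.9))   # -2 log Lambda = 4.6
p_value = chi2_sf(stat, df=1)
reject = p_value < 0.05           # reject H0 at the 5% level
```

For general degrees of freedom one would use a library routine (e.g. a regularized incomplete gamma function) rather than these special cases.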

### Extensions

Wilks’ theorem assumes that the true but unknown values of the estimated parameters are in the interior of the parameter space. This is commonly violated in random or mixed effects models when one of the variance components is negligible relative to the others, so that its estimate lies on the boundary of the parameter space, or when the models are not properly nested. Pinheiro and Bates showed in 2000 that the true distribution of this likelihood ratio chi-square statistic could be substantially different from the naïve ${\displaystyle \chi ^{2}}$ – often dramatically so.[7] The naïve assumptions could give significance probabilities (p-values) that are far too large on average in some cases and far too small in others.

In general, to test random effects, they recommend using restricted maximum likelihood (REML). For fixed-effects testing, they say, “a likelihood ratio test for REML fits is not feasible, because” changing the fixed-effects specification changes the meaning of the mixed effects, and the restricted model is therefore not nested within the larger model.[7]

As a demonstration, they set either one or two random effects variances to zero in simulated tests. In those particular examples, the simulated p-values with k restrictions most closely matched a 50-50 mixture of ${\displaystyle \chi ^{2}(k)}$ and ${\displaystyle \chi ^{2}(k-1)}$. (With k = 1, ${\displaystyle \chi ^{2}(0)}$ is 0 with probability 1. This means that a good approximation was ${\displaystyle 0.5\chi ^{2}(1)}$.)[7]
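Under the 50-50 mixture approximation for a single variance component (k = 1), the corrected p-value is simply half the naïve ${\displaystyle \chi ^{2}(1)}$ p-value, since the ${\displaystyle \chi ^{2}(0)}$ component contributes no mass above zero. A minimal sketch (the statistic value 2.71 is an arbitrary illustration, not from the cited simulations):

```python
import math

def chi2_1_sf(x):
    """Survival function of chi-squared with 1 degree of freedom."""
    return math.erfc(math.sqrt(x / 2))

def p_boundary_mixture(stat):
    """p-value under the 0.5*chi2(0) + 0.5*chi2(1) mixture that arises
    when testing a single variance component on the boundary (k = 1)."""
    return 0.5 * chi2_1_sf(stat)

stat = 2.71                       # illustrative -2 log Lambda value
p_naive = chi2_1_sf(stat)         # naive chi2(1) p-value, ~0.10
p_mixture = p_boundary_mixture(stat)  # half the naive value, ~0.05
```

Note how the naïve p-value is twice too large here, which makes the test conservative: it fails to reject in borderline cases where the mixture reference distribution would reject.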

Pinheiro and Bates also simulated tests of different fixed effects. In one test of a factor with 4 levels (degrees of freedom = 3), they found that a 50-50 mixture of ${\displaystyle \chi ^{2}(3)}$ and ${\displaystyle \chi ^{2}(4)}$ was a good match for actual p-values obtained by simulation, and the error in using the naïve ${\displaystyle \chi ^{2}(3)}$ “may not be too alarming”.[7] However, in another test of a factor with 15 levels, they found a reasonable match to ${\displaystyle \chi ^{2}(18)}$ – 4 more degrees of freedom than the 14 that one would get from a naïve (inappropriate) application of Wilks’ theorem – and the simulated p-value was several times the naïve ${\displaystyle \chi ^{2}(14)}$ value. They conclude that for testing fixed effects, it is wise to use simulation. (They provided a “simulate.lme” function in their “nlme” package for S-PLUS and R to support doing that.)

To be clear: These limitations on Wilks’ theorem do not negate any power properties of a particular likelihood ratio test. The only issue is that a ${\displaystyle \chi ^{2}}$ distribution is sometimes not appropriate for determining the statistical significance of the result.

## Use

Each of the two competing models, the null model and the alternative model, is separately fitted to the data and the log-likelihood recorded. The test statistic (often denoted by D) is minus twice the logarithm of the likelihood ratio, i.e., twice the difference in the log-likelihoods:

${\displaystyle {\begin{aligned}D&=-2\ln \left({\frac {\text{likelihood for null model}}{\text{likelihood for alternative model}}}\right)\\[5pt]&=2\ln \left({\frac {\text{likelihood for alternative model}}{\text{likelihood for null model}}}\right)\\[5pt]&=2\times [\ln({\text{likelihood for alternative model}})-\ln({\text{likelihood for null model}})]\\[5pt]\end{aligned}}}$

The model with more parameters (here the alternative) will always fit at least as well – i.e., have the same or greater log-likelihood – as the model with fewer parameters (here the null). Whether the fit is significantly better, so that the alternative should be preferred, is determined by deriving the p-value of the difference D. Where the null hypothesis represents a special case of the alternative hypothesis, the probability distribution of the test statistic is approximately a chi-squared distribution with degrees of freedom equal to ${\displaystyle df_{\text{alt}}-df_{\text{null}}}$,[8] where ${\displaystyle df_{\text{alt}}}$ and ${\displaystyle df_{\text{null}}}$ are the numbers of free parameters of the alternative and null models, respectively.

Here is an example of use. If the null model has 1 parameter and a log-likelihood of −8024 and the alternative model has 3 parameters and a log-likelihood of −8012, then the test statistic is ${\displaystyle 2\times (-8012-(-8024))=24}$, and its p-value is that of a chi-squared value of 24 with ${\displaystyle 3-1=2}$ degrees of freedom, which is about ${\displaystyle 6\times 10^{-6}}$. Certain assumptions[6] must be met for the statistic to follow a chi-squared distribution, so empirical p-values are often computed instead.
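This arithmetic can be checked directly. For 2 degrees of freedom the chi-squared survival function has the closed form ${\displaystyle e^{-x/2}}$, so no special functions are needed:

```python
import math

ll_null, ll_alt = -8024.0, -8012.0
d = 2 * (ll_alt - ll_null)   # test statistic D = 24
df = 3 - 1                   # difference in free parameters = 2
p_value = math.exp(-d / 2)   # chi-squared survival function for df = 2
```

The resulting p-value is about ${\displaystyle 6\times 10^{-6}}$, matching the figure quoted in the text.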

## Examples

### Coin tossing

As an example, in the case of Pearson’s test, we might compare two coins to determine whether they have the same probability of coming up heads. Our observations can be put into a contingency table with rows corresponding to the coin and columns corresponding to heads or tails. The elements of the contingency table will be the number of times each coin came up heads or tails. The contents of this table are our observation X.

${\displaystyle {\begin{array}{c|cc}X&{\text{Heads}}&{\text{Tails}}\\\hline {\text{Coin 1}}&k_{\mathrm {1H} }&k_{\mathrm {1T} }\\{\text{Coin 2}}&k_{\mathrm {2H} }&k_{\mathrm {2T} }\end{array}}}$

Here Θ consists of the possible combinations of values of the parameters ${\displaystyle p_{\mathrm {1H} }}$, ${\displaystyle p_{\mathrm {1T} }}$, ${\displaystyle p_{\mathrm {2H} }}$, and ${\displaystyle p_{\mathrm {2T} }}$, which are the probability that coins 1 and 2 come up heads or tails. In what follows, ${\displaystyle i=1,2}$ and ${\displaystyle j=\mathrm {H,T} }$. The hypothesis space H is constrained by the usual constraints on a probability distribution, ${\displaystyle 0\leq p_{ij}\leq 1}$, and ${\displaystyle p_{i\mathrm {H} }+p_{i\mathrm {T} }=1}$. The space of the null hypothesis ${\displaystyle H_{0}}$ is the subspace where ${\displaystyle p_{1j}=p_{2j}}$. Writing ${\displaystyle n_{ij}}$ for the best values for ${\displaystyle p_{ij}}$ under the hypothesis H, the maximum likelihood estimate is given by

${\displaystyle n_{ij}={\frac {k_{ij}}{k_{i\mathrm {H} }+k_{i\mathrm {T} }}}.}$

Similarly, the maximum likelihood estimates of ${\displaystyle p_{ij}}$ under the null hypothesis ${\displaystyle H_{0}}$ are given by

${\displaystyle m_{ij}={\frac {k_{1j}+k_{2j}}{k_{\mathrm {1H} }+k_{\mathrm {2H} }+k_{\mathrm {1T} }+k_{\mathrm {2T} }}},}$

which does not depend on the coin i.

The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the constraints for the logarithm of the likelihood ratio to have the desired nice distribution. Since the constraint causes the two-dimensional H to be reduced to the one-dimensional ${\displaystyle H_{0}}$, the asymptotic distribution for the test will be ${\displaystyle \chi ^{2}(1)}$, the ${\displaystyle \chi ^{2}}$ distribution with one degree of freedom.

For the general contingency table, we can write the log-likelihood ratio statistic as

${\displaystyle -2\log \Lambda =2\sum _{i,j}k_{ij}\log {\frac {n_{ij}}{m_{ij}}}.}$
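A minimal sketch of this statistic for the 2×2 coin table follows; the counts are invented for illustration. The per-coin estimates ${\displaystyle n_{ij}}$ come from each row separately, while the pooled estimates ${\displaystyle m_{ij}}$ come from the column totals:

```python
import math

def coin_lrt(k1h, k1t, k2h, k2t):
    """-2 log Lambda = 2 * sum_{i,j} k_ij * log(n_ij / m_ij)
    for the 2x2 coin contingency table described above."""
    counts = {("1", "H"): k1h, ("1", "T"): k1t,
              ("2", "H"): k2h, ("2", "T"): k2t}
    row = {"1": k1h + k1t, "2": k2h + k2t}     # tosses per coin
    col = {"H": k1h + k2h, "T": k1t + k2t}     # pooled heads/tails
    n_total = k1h + k1t + k2h + k2t
    stat = 0.0
    for (i, j), k in counts.items():
        n_ij = k / row[i]          # MLE under H: per-coin probability
        m_ij = col[j] / n_total    # MLE under H0: pooled probability
        stat += 2 * k * math.log(n_ij / m_ij)
    return stat

# Coin 1: 43 heads, 57 tails; coin 2: 60 heads, 40 tails (invented counts).
stat = coin_lrt(43, 57, 60, 40)    # compare against chi-squared(1)
```

Since the statistic is referred to ${\displaystyle \chi ^{2}(1)}$, values above about 3.84 reject equal head probabilities at the 5% level; the counts above give a statistic near 5.8, so the null would be rejected here.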