# Cochran's theorem

In statistics, Cochran's theorem, devised by William G. Cochran,[1] is a theorem used to justify results relating to the probability distributions of statistics that are used in the analysis of variance.[2]

## Statement

Suppose U1, ..., UN are i.i.d. standard normally distributed random variables, and there exist symmetric matrices ${\displaystyle B^{(1)},B^{(2)},\ldots ,B^{(k)}}$ with ${\displaystyle \sum _{i=1}^{k}B^{(i)}=I_{N}}$. Further suppose that ${\displaystyle r_{1}+\cdots +r_{k}=N}$, where ri is the rank of ${\displaystyle B^{(i)}}$. If we write

${\displaystyle Q_{i}=\sum _{j=1}^{N}\sum _{\ell =1}^{N}U_{j}B_{j,\ell }^{(i)}U_{\ell }}$

so that the ${\displaystyle Q_{i}}$ are quadratic forms, then Cochran's theorem states that the Qi are independent, and each Qi has a chi-squared distribution with ri degrees of freedom.[1]

Less formally, the number of degrees of freedom ri is the number of linear combinations included in the sum of squares defining Qi, provided that these linear combinations are linearly independent.
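To make the statement concrete, here is a minimal numerical sketch (assuming NumPy; the matrices B1 and B2 below, the centering matrix and the mean-projection matrix, are one convenient choice, anticipating the example further down):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

# Two symmetric matrices summing to the identity:
# B1 is the centering matrix (rank N - 1),
# B2 projects onto the all-ones direction (rank 1).
J = np.ones((N, N)) / N
B1, B2 = np.eye(N) - J, J
assert np.allclose(B1 + B2, np.eye(N))

r1 = np.linalg.matrix_rank(B1)  # N - 1
r2 = np.linalg.matrix_rank(B2)  # 1
assert r1 + r2 == N             # the rank condition of the theorem

# Simulate the quadratic forms Q1 = U^T B1 U and Q2 = U^T B2 U.
U = rng.standard_normal((100_000, N))
Q1 = np.einsum('ij,jk,ik->i', U, B1, U)
Q2 = np.einsum('ij,jk,ik->i', U, B2, U)

# Cochran's theorem: Q1 ~ chi^2_{N-1}, Q2 ~ chi^2_1, independent.
print(Q1.mean(), Q2.mean())       # approximately N - 1 and 1
print(np.corrcoef(Q1, Q2)[0, 1])  # approximately 0
```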

### Proof

We first show that the matrices B(i) can be simultaneously diagonalized and that their non-zero eigenvalues are all equal to +1. We then use the vector basis that diagonalizes them to simplify the characteristic functions of the Qi and show their independence and distributions.[3]

Each of the matrices B(i) has rank ri and thus ri non-zero eigenvalues. For each i, the sum ${\displaystyle C^{(i)}\equiv \sum _{j\neq i}B^{(j)}}$ has rank at most ${\displaystyle \sum _{j\neq i}r_{j}=N-r_{i}}$. Since ${\displaystyle B^{(i)}+C^{(i)}=I_{N\times N}}$, and the rank of a sum of matrices is at most the sum of their ranks, C(i) also has rank at least N − ri, hence exactly N − ri.

Therefore B(i) and C(i) can be simultaneously diagonalized. This can be shown by first diagonalizing B(i). In this basis, it is of the form:

${\displaystyle {\begin{bmatrix}\lambda _{1}&0&0&\cdots &\cdots &&0\\0&\lambda _{2}&0&\cdots &\cdots &&0\\0&0&\ddots &&&&\vdots \\0&\vdots &&\lambda _{r_{i}}&&\\0&\vdots &&&0&\\0&\vdots &&&&\ddots \\0&0&\ldots &&&&0\end{bmatrix}}.}$

Thus the lower ${\displaystyle (N-r_{i})}$ rows are zero. Since ${\displaystyle C^{(i)}=I-B^{(i)}}$, in this basis the corresponding rows of C(i) contain an ${\displaystyle (N-r_{i})\times (N-r_{i})}$ identity block on the right, with zeros in the rest of these rows. But since C(i) has rank N − ri, it must be zero elsewhere. Thus it is diagonal in this basis as well. It follows that all the non-zero eigenvalues of both B(i) and C(i) are +1. Moreover, the above analysis can be repeated in the diagonal basis for ${\displaystyle C^{(1)}=B^{(2)}+\sum _{j>2}B^{(j)}}$. In this basis ${\displaystyle C^{(1)}}$ is the identity on an ${\displaystyle (N-r_{1})}$-dimensional subspace, so it follows that B(2) and ${\displaystyle \sum _{j>2}B^{(j)}}$ are simultaneously diagonalizable on this subspace (and hence, together with B(1), on the whole space). Iterating, it follows that all the B(i) are simultaneously diagonalizable.

Thus there exists an orthogonal matrix ${\displaystyle S}$ such that for all ${\displaystyle i}$, ${\displaystyle S^{\mathrm {T} }B^{(i)}S\equiv B^{(i)\prime }}$ is diagonal, where any entry ${\displaystyle B_{x,y}^{(i)\prime }}$ is equal to 1 for ${\displaystyle \sum _{j=1}^{i-1}r_{j}<x=y\leq \sum _{j=1}^{i}r_{j}}$ and is equal to 0 for any other indices.

Let ${\displaystyle U^{\prime }=S^{\mathrm {T} }U}$, so that each ${\displaystyle U_{i}^{\prime }}$ is a specific linear combination of all the ${\displaystyle U_{i}}$. Note that ${\displaystyle \sum _{i=1}^{N}(U_{i}^{\prime })^{2}=\sum _{i=1}^{N}U_{i}^{2}}$ because the orthogonal matrix ${\displaystyle S}$ preserves lengths (and has unit Jacobian, which justifies the change of variables below).
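As a small numerical check of the simultaneous diagonalization (a sketch assuming NumPy, reusing the illustrative projection matrices from the sketch in the statement section; any orthonormal eigenbasis of B1 serves as S here because B2 = I − B1):

```python
import numpy as np

N = 5
J = np.ones((N, N)) / N
B1, B2 = np.eye(N) - J, J   # the matrices from the earlier sketch

# Columns of S form an orthonormal eigenbasis of B1.
eigvals, S = np.linalg.eigh(B1)

D1 = S.T @ B1 @ S
D2 = S.T @ B2 @ S

# Both transforms are diagonal with entries 0 or 1, as the proof asserts.
assert np.allclose(D1, np.diag(np.round(np.diag(D1))))
assert np.allclose(D2, np.diag(np.round(np.diag(D2))))
print(np.round(np.diag(D1)), np.round(np.diag(D2)))
```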

The characteristic function of Qi is:

${\displaystyle {\begin{aligned}\varphi _{i}(t)={}&(2\pi )^{-N/2}\int du_{1}\int du_{2}\cdots \int du_{N}e^{itQ_{i}}\cdot e^{-{\frac {u_{1}^{2}}{2}}}\cdot e^{-{\frac {u_{2}^{2}}{2}}}\cdots e^{-{\frac {u_{N}^{2}}{2}}}\\={}&(2\pi )^{-N/2}\left(\prod _{j=1}^{N}\int du_{j}\right)e^{itQ_{i}}\cdot e^{-\sum _{j=1}^{N}{\frac {u_{j}^{2}}{2}}}\\={}&(2\pi )^{-N/2}\left(\prod _{j=1}^{N}\int du_{j}^{\prime }\right)e^{it\cdot \sum _{m=r_{1}+\cdots +r_{i-1}+1}^{r_{1}+\cdots +r_{i}}(u_{m}^{\prime })^{2}}\cdot e^{-\sum _{j=1}^{N}{\frac {{u_{j}^{\prime }}^{2}}{2}}}\\={}&(2\pi )^{-N/2}\left(\int e^{u^{2}(it-{\frac {1}{2}})}du\right)^{r_{i}}\left(\int e^{-{\frac {u^{2}}{2}}}du\right)^{N-r_{i}}\\={}&(1-2it)^{-r_{i}/2}\end{aligned}}}$

This is the Fourier transform of the chi-squared distribution with ri degrees of freedom. Therefore this is the distribution of Qi.
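The last equality in the display above is the standard Gaussian integral, which is worth spelling out (it converges because ${\displaystyle \operatorname {Re} ({\tfrac {1}{2}}-it)>0}$):

${\displaystyle \int _{-\infty }^{\infty }e^{u^{2}(it-{\frac {1}{2}})}\,du={\sqrt {\frac {2\pi }{1-2it}}},\qquad \int _{-\infty }^{\infty }e^{-{\frac {u^{2}}{2}}}\,du={\sqrt {2\pi }},}$

so that the ${\displaystyle r_{i}+(N-r_{i})=N}$ factors of ${\displaystyle {\sqrt {2\pi }}}$ cancel the prefactor ${\displaystyle (2\pi )^{-N/2}}$, leaving ${\displaystyle (1-2it)^{-r_{i}/2}}$.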

Moreover, the characteristic function of the joint distribution of all the Qis is:

${\displaystyle {\begin{aligned}\varphi (t_{1},t_{2},\ldots ,t_{k})&=(2\pi )^{-N/2}\left(\prod _{j=1}^{N}\int dU_{j}\right)e^{i\sum _{i=1}^{k}t_{i}\cdot Q_{i}}\cdot e^{-\sum _{j=1}^{N}{\frac {U_{j}^{2}}{2}}}\\&=(2\pi )^{-N/2}\left(\prod _{j=1}^{N}\int dU_{j}^{\prime }\right)e^{i\cdot \sum _{i=1}^{k}t_{i}\sum _{m=r_{1}+\cdots +r_{i-1}+1}^{r_{1}+\cdots +r_{i}}(U_{m}^{\prime })^{2}}\cdot e^{-\sum _{j=1}^{N}{\frac {{U_{j}^{\prime }}^{2}}{2}}}\\&=(2\pi )^{-N/2}\prod _{i=1}^{k}\left(\int e^{u^{2}(it_{i}-{\frac {1}{2}})}du\right)^{r_{i}}\\&=\prod _{i=1}^{k}(1-2it_{i})^{-r_{i}/2}=\prod _{i=1}^{k}\varphi _{i}(t_{i})\end{aligned}}}$

Since the joint characteristic function factorizes into the product of the characteristic functions of the individual ${\displaystyle Q_{i}}$, it follows that all the Qi are mutually independent.
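The factorization can also be checked empirically: a minimal sketch (assuming NumPy and the illustrative matrices from the statement section; the evaluation point (t1, t2) is arbitrary) compares the empirical joint characteristic function with the product formula.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
J = np.ones((N, N)) / N
B1, B2 = np.eye(N) - J, J

U = rng.standard_normal((200_000, N))
Q1 = np.einsum('ij,jk,ik->i', U, B1, U)
Q2 = np.einsum('ij,jk,ik->i', U, B2, U)

# Empirical joint characteristic function at a fixed (t1, t2) ...
t1, t2 = 0.3, -0.2
empirical = np.mean(np.exp(1j * (t1 * Q1 + t2 * Q2)))
# ... against the product of chi-squared characteristic functions.
theory = (1 - 2j * t1) ** (-(N - 1) / 2) * (1 - 2j * t2) ** (-1 / 2)
print(empirical, theory)  # agree to roughly 1/sqrt(200000)
```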

## Examples

### Sample mean and sample variance

If X1, ..., Xn are independent normally distributed random variables with mean μ and standard deviation σ, then

${\displaystyle U_{i}={\frac {X_{i}-\mu }{\sigma }}}$

is standard normal for each i. It is possible to write

${\displaystyle \sum _{i=1}^{n}U_{i}^{2}=\sum _{i=1}^{n}\left({\frac {X_{i}-{\overline {X}}}{\sigma }}\right)^{2}+n\left({\frac {{\overline {X}}-\mu }{\sigma }}\right)^{2}}$

(here ${\displaystyle {\overline {X}}}$ is the sample mean). To see this identity, multiply throughout by ${\displaystyle \sigma ^{2}}$ and note that

${\displaystyle \sum (X_{i}-\mu )^{2}=\sum (X_{i}-{\overline {X}}+{\overline {X}}-\mu )^{2}}$

and expand to give

${\displaystyle \sum (X_{i}-\mu )^{2}=\sum (X_{i}-{\overline {X}})^{2}+\sum ({\overline {X}}-\mu )^{2}+2\sum (X_{i}-{\overline {X}})({\overline {X}}-\mu ).}$

The third term is zero because it is equal to a constant times

${\displaystyle \sum ({\overline {X}}-X_{i})=0,}$

and the second term has just n identical terms added together. Thus

${\displaystyle \sum (X_{i}-\mu )^{2}=\sum (X_{i}-{\overline {X}})^{2}+n({\overline {X}}-\mu )^{2},}$

and hence

${\displaystyle \sum \left({\frac {X_{i}-\mu }{\sigma }}\right)^{2}=\sum \left({\frac {X_{i}-{\overline {X}}}{\sigma }}\right)^{2}+n\left({\frac {{\overline {X}}-\mu }{\sigma }}\right)^{2}=Q_{1}+Q_{2}.}$

Now the rank of B(2) is just 1 (it is the square of just one linear combination of the standard normal variables). The rank of B(1) can be shown to be n − 1, and thus the conditions for Cochran's theorem are met.

Cochran's theorem then states that Q1 and Q2 are independent, with chi-squared distributions with n − 1 and 1 degree of freedom respectively. This shows that the sample mean and sample variance are independent. This can also be shown by Basu's theorem, and in fact this property characterizes the normal distribution – for no other distribution are the sample mean and sample variance independent.[4]
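Both conclusions are easy to see in simulation; the following sketch (assuming NumPy and SciPy; the values of μ, σ and n are arbitrary) checks the correlation of the sample mean and variance and the chi-squared distribution of the rescaled variance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, sigma, n = 3.0, 2.0, 10
X = rng.normal(mu, sigma, size=(100_000, n))

xbar = X.mean(axis=1)          # sample mean of each row
s2 = X.var(axis=1, ddof=1)     # unbiased sample variance of each row

# Independence: the sample correlation is near zero (a necessary
# condition only; full independence is what the theorem guarantees).
print(np.corrcoef(xbar, s2)[0, 1])

# (n - 1) s^2 / sigma^2 ~ chi^2_{n-1}: Kolmogorov-Smirnov comparison.
print(stats.kstest((n - 1) * s2 / sigma**2, stats.chi2(df=n - 1).cdf))
```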

### Distributions

The result for the distributions is written symbolically as

${\displaystyle \sum \left(X_{i}-{\overline {X}}\right)^{2}\sim \sigma ^{2}\chi _{n-1}^{2},}$
${\displaystyle n({\overline {X}}-\mu )^{2}\sim \sigma ^{2}\chi _{1}^{2}.}$

Both of these random variables are proportional to the true but unknown variance σ2; thus their ratio does not depend on σ2 and, because they are statistically independent, the distribution of their ratio is given by

${\displaystyle {\frac {n\left({\overline {X}}-\mu \right)^{2}}{{\frac {1}{n-1}}\sum \left(X_{i}-{\overline {X}}\right)^{2}}}\sim {\frac {\chi _{1}^{2}}{{\frac {1}{n-1}}\chi _{n-1}^{2}}}\sim F_{1,n-1}}$

where F1,n − 1 is the F-distribution with 1 and n − 1 degrees of freedom (see also Student's t-distribution). The final step here is effectively the definition of a random variable having the F-distribution.
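A simulation sketch (assuming NumPy and SciPy; μ is treated as known, as in the numerator above, and the values of n and the sample count are arbitrary) compares the empirical ratio with the ${\displaystyle F_{1,n-1}}$ distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, n = 0.0, 1.0, 8
X = rng.normal(mu, sigma, size=(100_000, n))

xbar = X.mean(axis=1)
num = n * (xbar - mu) ** 2                               # ~ sigma^2 chi^2_1
den = np.sum((X - xbar[:, None]) ** 2, axis=1) / (n - 1)

# The sigma^2 factors cancel, so the ratio is F-distributed.
print(stats.kstest(num / den, stats.f(dfn=1, dfd=n - 1).cdf))
```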

### Estimation of variance

To estimate the variance σ2, one estimator that is sometimes used is the maximum likelihood estimator of the variance of a normal distribution

${\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum \left(X_{i}-{\overline {X}}\right)^{2}.}$

Cochran's theorem shows that

${\displaystyle {\frac {n{\widehat {\sigma }}^{2}}{\sigma ^{2}}}\sim \chi _{n-1}^{2}}$

and the properties of the chi-squared distribution show that

{\displaystyle {\begin{aligned}E\left({\frac {n{\widehat {\sigma }}^{2}}{\sigma ^{2}}}\right)&=E\left(\chi _{n-1}^{2}\right)\\{\frac {n}{\sigma ^{2}}}E\left({\widehat {\sigma }}^{2}\right)&=(n-1)\\E\left({\widehat {\sigma }}^{2}\right)&={\frac {\sigma ^{2}(n-1)}{n}}\end{aligned}}}
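so the maximum likelihood estimator is biased downward by the factor (n − 1)/n. A quick numerical confirmation (a sketch assuming NumPy; the values of σ and n are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, n = 2.0, 5
X = rng.normal(0.0, sigma, size=(1_000_000, n))

# Maximum likelihood estimator: (1/n) * sum (X_i - Xbar)^2 per row.
sigma2_mle = X.var(axis=1, ddof=0)

print(sigma2_mle.mean())          # approximately sigma^2 (n-1)/n = 3.2
print(sigma**2 * (n - 1) / n)
```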

## Alternative formulation

The following version is often seen when considering linear regression.[5] Suppose that ${\displaystyle Y\sim N_{n}(0,\sigma ^{2}I_{n})}$ is a multivariate normal random vector (here ${\displaystyle I_{n}}$ denotes the n-by-n identity matrix), and that ${\displaystyle A_{1},\ldots ,A_{k}}$ are all n-by-n symmetric matrices with ${\displaystyle \sum _{i=1}^{k}A_{i}=I_{n}}$. Then, on defining ${\displaystyle r_{i}=\operatorname {Rank} (A_{i})}$, any one of the following conditions implies the other two:

• ${\displaystyle \sum _{i=1}^{k}r_{i}=n,}$
• ${\displaystyle Y^{T}A_{i}Y\sim \sigma ^{2}\chi _{r_{i}}^{2}}$ (thus the ${\displaystyle A_{i}}$ are positive semidefinite)
• ${\displaystyle Y^{T}A_{i}Y}$ is independent of ${\displaystyle Y^{T}A_{j}Y}$ for ${\displaystyle i\neq j.}$
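In the regression setting, for example, one may take ${\displaystyle A_{1}}$ to be the hat matrix of a design matrix and ${\displaystyle A_{2}=I_{n}-A_{1}}$ the residual-maker. The following sketch (assuming NumPy; the Gaussian design matrix Z is an arbitrary full-rank choice) checks the rank condition and the resulting chi-squared scalings:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 20, 3
Z = rng.standard_normal((n, p))   # arbitrary design matrix, full rank a.s.

# Hat matrix and residual-maker: symmetric and summing to the identity.
H = Z @ np.linalg.solve(Z.T @ Z, Z.T)
A1, A2 = H, np.eye(n) - H

r1 = np.linalg.matrix_rank(A1)    # p
r2 = np.linalg.matrix_rank(A2)    # n - p
print(r1 + r2 == n)               # the first condition holds

# Hence Y^T A1 Y ~ sigma^2 chi^2_p and Y^T A2 Y ~ sigma^2 chi^2_{n-p},
# independently; check the means over many draws of Y.
sigma = 1.5
Y = sigma * rng.standard_normal((50_000, n))
Q1 = np.einsum('ij,jk,ik->i', Y, A1, Y)
Q2 = np.einsum('ij,jk,ik->i', Y, A2, Y)
print(Q1.mean() / sigma**2, Q2.mean() / sigma**2)  # approx p and n - p
```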