# Fieller's theorem

In statistics, Fieller's theorem allows the calculation of a confidence interval for the ratio of two means.

## Approximate confidence interval

The variables a and b may be measured in different units, so their standard errors cannot be combined directly. The most complete discussion of the problem is given by Fieller (1954).[1]

Fieller showed that if a and b are (possibly correlated) means of two samples with expectations ${\displaystyle \mu _{a}}$ and ${\displaystyle \mu _{b}}$, and variances ${\displaystyle \nu _{11}\sigma ^{2}}$ and ${\displaystyle \nu _{22}\sigma ^{2}}$ and covariance ${\displaystyle \nu _{12}\sigma ^{2}}$, and if ${\displaystyle \nu _{11},\nu _{12},\nu _{22}}$ are all known, then a (1 − α) confidence interval $(m_{L}, m_{U})$ for ${\displaystyle \mu _{a}/\mu _{b}}$ is given by

${\displaystyle (m_{L},m_{U})={\frac {1}{(1-g)}}\left[{\frac {a}{b}}-{\frac {g\nu _{12}}{\nu _{22}}}\mp {\frac {t_{r,\alpha }s}{b}}{\sqrt {\nu _{11}-2{\frac {a}{b}}\nu _{12}+{\frac {a^{2}}{b^{2}}}\nu _{22}-g\left(\nu _{11}-{\frac {\nu _{12}^{2}}{\nu _{22}}}\right)}}\right]}$

where

${\displaystyle g={\frac {t_{r,\alpha }^{2}s^{2}\nu _{22}}{b^{2}}}.}$

Here ${\displaystyle s^{2}}$ is an unbiased estimator of ${\displaystyle \sigma ^{2}}$ based on r degrees of freedom, and ${\displaystyle t_{r,\alpha }}$ is the ${\displaystyle \alpha }$-level deviate from the Student's t-distribution based on r degrees of freedom.

Three features of this formula are important in this context:

a) The expression inside the square root has to be positive; otherwise the bounds are complex and no real interval exists.

b) When g is very close to 1, the confidence interval is infinite.

c) When g is greater than 1, the overall divisor outside the square brackets is negative and the confidence interval is exclusive: it consists of the values outside the two computed bounds rather than between them.
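The interval above can be sketched in code. This is an illustrative implementation, not from the source: the function name and signature are assumptions, and the critical value $t_{r,\alpha}$ is passed in directly so the sketch needs only the standard library. It raises an error in the degenerate cases (a), (b), and (c) rather than returning an unbounded or exclusive interval.

```python
import math

def fieller_interval(a, b, v11, v22, v12, s2, t):
    """Fieller confidence interval (m_L, m_U) for mu_a / mu_b.

    a, b  : the two sample means
    v11*s2, v22*s2 : variances of a and b; v12*s2 : their covariance
    s2    : unbiased estimate of sigma^2
    t     : critical value t_{r, alpha} of Student's t with r d.f.
    """
    g = (t ** 2) * s2 * v22 / b ** 2
    if g >= 1:
        # Features (b) and (c): the divisor 1 - g is non-positive,
        # so the two-sided interval is unbounded or exclusive.
        raise ValueError("g >= 1: no finite two-sided interval")
    ratio = a / b
    radicand = (v11 - 2 * ratio * v12 + ratio ** 2 * v22
                - g * (v11 - v12 ** 2 / v22))
    if radicand < 0:
        # Feature (a): a negative radicand gives complex bounds.
        raise ValueError("negative radicand: interval is not real")
    half_width = t * math.sqrt(s2) * math.sqrt(radicand) / b
    centre = ratio - g * v12 / v22
    return (centre - half_width) / (1 - g), (centre + half_width) / (1 - g)
```

For example, with a = 10, b = 5, unit variance factors, zero covariance, s² = 0.25 and t = 2, the interval contains the point estimate a/b = 2 but, because of the 1/(1 − g) factor, is not centred on it.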

## Other methods

One problem is that, when g is not small, the confidence interval given by Fieller's theorem can blow up, becoming unbounded or exclusive. Andy Grieve has provided a Bayesian solution in which the intervals remain sensible, albeit wide.[2] Bootstrapping provides another alternative that does not require the assumption of normality.[3]
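A percentile bootstrap for the ratio of means can be sketched as follows. This is a generic illustration of the bootstrap alternative mentioned above, not the specific procedure of reference [3]; the function name and the example data are assumptions.

```python
import random

def bootstrap_ratio_ci(x, y, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for mean(x)/mean(y); assumes no normality."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_boot):
        # Resample each sample with replacement and record the ratio of means.
        xs = [rng.choice(x) for _ in x]
        ys = [rng.choice(y) for _ in y]
        ratios.append((sum(xs) / len(xs)) / (sum(ys) / len(ys)))
    ratios.sort()
    lo = ratios[int(n_boot * alpha / 2)]
    hi = ratios[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

Unlike Fieller's interval, the percentile interval is always finite and inclusive, but it can undercover when the denominator mean is close to zero, which is exactly the regime where g is large.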

## History

Edgar C. Fieller (1907–1960) first started working on this problem while in Karl Pearson's group at University College London, where he was employed for five years after graduating in Mathematics from King's College, Cambridge. He then worked for the Boots Pure Drug Company as a statistician and operational researcher before becoming deputy head of operational research at RAF Fighter Command during the Second World War, after which he was appointed the first head of the Statistics Section at the National Physical Laboratory.[4]