# Inverse-variance weighting

In statistics, inverse-variance weighting is a method of aggregating two or more random variables to minimize the variance of the weighted average. Each random variable is weighted in inverse proportion to its variance.

Given a sequence of independent observations ${\displaystyle y_{i}}$ with variances ${\displaystyle \sigma _{i}^{2}}$, the inverse-variance weighted average is given by[1]

${\displaystyle {\hat {y}}={\frac {\sum _{i}y_{i}/\sigma _{i}^{2}}{\sum _{i}1/\sigma _{i}^{2}}}.}$

The inverse-variance weighted average has the least variance among all weighted averages; its variance is

${\displaystyle D^{2}({\hat {y}})={\frac {1}{\sum _{i}1/\sigma _{i}^{2}}}.}$

If the variances of the measurements are all equal, then the inverse-variance weighted average becomes the simple average.
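The two formulas above can be sketched in a few lines of plain Python; the observations and variances below are hypothetical example values, not data from the source.

```python
# Sketch: inverse-variance weighted average of independent measurements.
# y holds the observations y_i, var their variances sigma_i^2 (made-up numbers).
y = [9.8, 9.9, 9.6]        # observations y_i
var = [0.01, 0.04, 0.25]   # variances sigma_i^2

weights = [1.0 / v for v in var]                            # w_i = 1 / sigma_i^2
y_hat = sum(w * yi for w, yi in zip(weights, y)) / sum(weights)
var_hat = 1.0 / sum(weights)                                # D^2(y_hat) = 1 / sum(1/sigma_i^2)
```

With equal variances the `weights` are all equal, and `y_hat` reduces to the simple average, as noted above.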

Inverse-variance weighting is typically used in statistical meta-analysis to combine the results from independent measurements.

## Context

Suppose an experimenter wishes to measure the value of a quantity, say the acceleration due to gravity of Earth, whose true value is ${\displaystyle \mu }$. A careful experimenter makes multiple measurements, which we denote with ${\displaystyle n}$ random variables ${\displaystyle X_{1},X_{2},...,X_{n}}$. If they are all noisy but unbiased, i.e., the measuring device does not systematically overestimate or underestimate the true value and the errors are scattered symmetrically, then the expectation value is ${\displaystyle E[X_{i}]=\mu }$ ${\displaystyle \forall i}$. The scatter in the measurements is characterised by the variance of the random variables, ${\displaystyle Var(X_{i}):=\sigma _{i}^{2}}$; if the measurements are performed under identical conditions, then all the ${\displaystyle \sigma _{i}}$ are the same, which we shall denote by ${\displaystyle \sigma }$.

Given the ${\displaystyle n}$ measurements, a typical estimator for ${\displaystyle \mu }$, denoted ${\displaystyle {\hat {\mu }}}$, is the simple average ${\displaystyle {\overline {X}}={\frac {1}{n}}\sum _{i}X_{i}}$. Note that this empirical average is also a random variable: its expectation value ${\displaystyle E[{\overline {X}}]}$ is ${\displaystyle \mu }$, but it too has a scatter. If the individual measurements are uncorrelated, the square of the error in the estimate is given by ${\displaystyle Var({\overline {X}})={\frac {1}{n^{2}}}\sum _{i}\sigma _{i}^{2}=\left({\frac {\sigma }{\sqrt {n}}}\right)^{2}}$. Hence, if all the ${\displaystyle \sigma _{i}}$ are equal, the error in the estimate decreases with increasing ${\displaystyle n}$ as ${\displaystyle 1/{\sqrt {n}}}$, so a larger number of observations is preferred.
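The ${\displaystyle 1/{\sqrt {n}}}$ scaling of the error can be checked with a small simulation; the values of `mu`, `sigma`, and the trial counts below are illustrative choices, not from the source.

```python
import random

# Sketch: the scatter of the simple average of n i.i.d. unbiased measurements
# shrinks as sigma / sqrt(n). All numeric values are illustrative.
random.seed(0)
mu, sigma = 9.81, 0.5          # true value and per-measurement scatter
scatters = {}
for n in (1, 4, 16, 64):
    # Empirical scatter of the n-sample mean over many repeated experiments
    means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n
             for _ in range(2000)]
    centre = sum(means) / len(means)
    scatters[n] = (sum((m - centre) ** 2 for m in means) / len(means)) ** 0.5
    print(n, round(scatters[n], 3), round(sigma / n ** 0.5, 3))
```

Each printed row shows the empirical scatter closely tracking the theoretical value ${\displaystyle \sigma /{\sqrt {n}}}$.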

Instead of ${\displaystyle n}$ repeated measurements with one instrument, if the experimenter makes ${\displaystyle n}$ measurements of the same quantity with ${\displaystyle n}$ different instruments of varying quality, then there is no reason to expect the different ${\displaystyle \sigma _{i}}$ to be the same: some instruments could be noisier than others. In the example of measuring the acceleration due to gravity, the different "instruments" could be measuring ${\displaystyle g}$ from a simple pendulum, from analysing projectile motion, etc. The simple average is no longer an optimal estimator, since the error in ${\displaystyle {\overline {X}}}$ might actually exceed the error in the least noisy measurement if the different measurements have very different errors. Instead of discarding the noisy measurements that increase the final error, the experimenter can combine all the measurements with appropriate weights, giving more importance to the least noisy measurements and less to the noisier ones. Given the knowledge of ${\displaystyle \sigma _{1}^{2},\sigma _{2}^{2},...,\sigma _{n}^{2}}$, an optimal estimator for ${\displaystyle \mu }$ is the weighted mean of the measurements ${\displaystyle {\hat {\mu }}={\frac {\sum _{i}w_{i}X_{i}}{\sum _{i}w_{i}}}}$, for the particular choice of weights ${\displaystyle w_{i}=1/\sigma _{i}^{2}}$. The variance of this estimator is ${\displaystyle Var({\hat {\mu }})={\frac {\sum _{i}w_{i}^{2}\sigma _{i}^{2}}{\left(\sum _{i}w_{i}\right)^{2}}}}$, which for the optimal choice of weights becomes ${\displaystyle Var({\hat {\mu }}_{\text{opt}})=\left(\sum _{i}\sigma _{i}^{-2}\right)^{-1}.}$

Note that since ${\displaystyle Var({\hat {\mu }}_{\text{opt}})<\min _{j}\sigma _{j}^{2}}$, the estimator has a scatter smaller than the scatter in any individual measurement. Furthermore, the scatter in ${\displaystyle {\hat {\mu }}_{\text{opt}}}$ decreases with adding more measurements, however noisier those measurements may be.
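These two properties can be verified numerically; the variances below are hypothetical, chosen so one instrument is much less noisy than the others.

```python
# Sketch: with unequal variances, inverse-variance weighting beats both the
# simple average and the single least-noisy measurement (illustrative numbers).
var = [0.01, 1.0, 4.0]                      # sigma_i^2 for three instruments
n = len(var)

var_simple = sum(var) / n ** 2              # Var of the simple average
var_opt = 1.0 / sum(1.0 / v for v in var)   # Var of the optimal estimator

assert var_opt < min(var)                   # smaller than any single measurement
assert min(var) < var_simple                # simple average is worse than the best instrument
```

Here the simple average is noisier than the best instrument alone, while the inverse-variance weighted estimator improves on even that instrument.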

## Derivation

Consider a generic weighted sum ${\displaystyle Y=\sum _{i}w_{i}X_{i}}$, where the weights ${\displaystyle w_{i}}$ are normalised such that ${\displaystyle \sum _{i}w_{i}=1}$. If the ${\displaystyle X_{i}}$ are all independent, the variance of ${\displaystyle Y}$ is given by ${\displaystyle Var(Y)=\sum _{i}w_{i}^{2}\sigma _{i}^{2}.}$ For optimality, we wish to minimise ${\displaystyle Var(Y)}$, which can be done by equating its gradient with respect to the weights to zero, while maintaining the constraint that ${\displaystyle \sum _{i}w_{i}=1}$. Using a Lagrange multiplier ${\displaystyle w_{0}}$ to enforce the constraint, we form the objective function ${\displaystyle Var(Y)=\sum _{i}w_{i}^{2}\sigma _{i}^{2}-w_{0}(\sum _{i}w_{i}-1)}$

For each ${\displaystyle k}$,

${\displaystyle 0={\frac {\partial }{\partial w_{k}}}Var(Y)=2w_{k}\sigma _{k}^{2}-w_{0},}$

which implies that ${\displaystyle w_{k}={\frac {w_{0}/2}{\sigma _{k}^{2}}}.}$

The main takeaway here is that ${\displaystyle w_{k}\propto 1/\sigma _{k}^{2}}$. Since ${\displaystyle \sum _{i}w_{i}=1}$,

${\displaystyle {\frac {2}{w_{0}}}=\sum _{i}{\frac {1}{\sigma _{i}^{2}}}:={\frac {1}{\sigma _{0}^{2}}}.}$

The individual normalised weights are ${\displaystyle w_{k}={\frac {1}{\sigma _{k}^{2}}}\left(\sum _{i}{\frac {1}{\sigma _{i}^{2}}}\right)^{-1}.}$

This extremum is indeed a minimum (by the second partial derivative test), since the variance is a convex quadratic function of the weights. Thus, the minimum variance of the estimator is given by ${\displaystyle Var(Y)=\sum _{i}{\frac {\sigma _{0}^{4}}{\sigma _{i}^{4}}}\sigma _{i}^{2}=\sigma _{0}^{4}\sum _{i}{\frac {1}{\sigma _{i}^{2}}}=\sigma _{0}^{4}{\frac {1}{\sigma _{0}^{2}}}=\sigma _{0}^{2}={\frac {1}{\sum _{i}1/\sigma _{i}^{2}}}.}$
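The derivation can be sanity-checked numerically: perturbing the optimal weights while keeping them normalised should only increase the variance. The variances below are hypothetical example values.

```python
# Sketch: confirm that w_k proportional to 1/sigma_k^2 minimises Var(Y)
# under sum(w) = 1, by comparing against small normalised perturbations.
var = [0.5, 1.0, 2.0]                        # illustrative sigma_i^2
inv = [1.0 / v for v in var]
w_opt = [x / sum(inv) for x in inv]          # w_k = (1/sigma_k^2) / sum(1/sigma_i^2)

def var_of(w):
    # Var(Y) = sum_i w_i^2 sigma_i^2 for independent X_i
    return sum(wi ** 2 * vi for wi, vi in zip(w, var))

v_min = var_of(w_opt)                        # equals 1 / sum(1/sigma_i^2)
for eps in (0.05, -0.05):
    w = [w_opt[0] + eps, w_opt[1] - eps, w_opt[2]]  # still sums to 1
    assert var_of(w) > v_min
```

Any such perturbation leaves the weights normalised but strictly increases the variance, consistent with the convexity argument above.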

## References

1. ^ Joachim Hartung; Guido Knapp; Bimal K. Sinha (2008). Statistical meta-analysis with applications. John Wiley & Sons. ISBN 978-0-470-29089-7.