# Inverse-variance weighting


In statistics, inverse-variance weighting is a method of aggregating two or more random variables to minimize the variance of the weighted average. Each random variable is weighted in inverse proportion to its variance.

Given a sequence of independent observations ${\displaystyle y_{i}}$ with variances ${\displaystyle \sigma _{i}^{2}}$, the inverse-variance weighted average is given by[1]

${\displaystyle {\hat {y}}={\frac {\sum _{i}y_{i}/\sigma _{i}^{2}}{\sum _{i}1/\sigma _{i}^{2}}}.}$

The inverse-variance weighted average has the least variance among all weighted averages of the observations; its variance is

${\displaystyle D^{2}({\hat {y}})={\frac {1}{\sum _{i}1/\sigma _{i}^{2}}}.}$

If the variances of the measurements are all equal, then the inverse-variance weighted average becomes the simple average.
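The two formulas above can be sketched in a few lines of Python. This is an illustrative implementation, not from the cited reference; the function name and example values are chosen here for demonstration:

```python
def inv_var_weighted(y, variances):
    """Combine independent measurements by inverse-variance weighting.

    Returns the weighted average y_hat and its variance, per the
    formulas y_hat = sum(y_i / s_i^2) / sum(1 / s_i^2) and
    Var(y_hat) = 1 / sum(1 / s_i^2).
    """
    # Each weight is the reciprocal of the measurement's variance.
    weights = [1.0 / v for v in variances]
    y_hat = sum(w * yi for w, yi in zip(weights, y)) / sum(weights)
    var_hat = 1.0 / sum(weights)
    return y_hat, var_hat


# Two measurements of the same quantity: the more precise one
# (variance 1) dominates the less precise one (variance 4).
y_hat, var_hat = inv_var_weighted([10.0, 12.0], [1.0, 4.0])
print(y_hat, var_hat)  # 10.4 0.8

# Equal variances reduce to the simple average.
y_eq, _ = inv_var_weighted([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])
print(y_eq)  # 2.0
```

Note that the combined variance 0.8 is smaller than either individual variance, which is the point of the method: pooling independent measurements always tightens the estimate.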

Inverse-variance weighting is typically used in statistical meta-analysis to combine the results from independent measurements.

## References

1. ^ Joachim Hartung; Guido Knapp; Bimal K. Sinha (2008). Statistical meta-analysis with applications. John Wiley & Sons. ISBN 978-0-470-29089-7.