Algebraic formula for the variance

In probability theory and statistics, several algebraic formulae are available for deriving the variance of a random variable. Which of these is useful depends on what is already known about the random variable; for example, it may be defined in terms of its probability density function or by construction from other random variables. The context here is the derivation of algebraic expressions for the theoretical variance of a random variable, in contrast to the estimation of the variance of a population from sample data, for which there are special considerations when implementing computational algorithms.

In terms of raw moments

If the raw moments E(X) and E(X^2) of a random variable X are known (where E(X) is the expected value of X), then Var(X) is given by

\operatorname{Var}(X) = \operatorname{E}(X^2) - [\operatorname{E}(X)]^2   .

The result is called the König–Huygens theorem in French-language literature[citation needed] and the Steiner translation theorem in German-language literature.[citation needed]
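
As a quick check of the identity, here is a minimal Python sketch (an illustration, not part of the original article) that computes the variance of a fair six-sided die both from the definition and from the raw moments, using exact rational arithmetic:

# Variance of a fair six-sided die, computed two ways.
from fractions import Fraction

faces = range(1, 7)
p = Fraction(1, 6)                         # each face has probability 1/6

e_x  = sum(p * x for x in faces)           # E(X)   = 7/2
e_x2 = sum(p * x * x for x in faces)       # E(X^2) = 91/6
var_definition  = sum(p * (x - e_x) ** 2 for x in faces)  # E[(X - E(X))^2]
var_raw_moments = e_x2 - e_x ** 2                         # formula above

assert var_definition == var_raw_moments == Fraction(35, 12)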

There is a corresponding formula for estimating the variance from sample data, which can be useful in hand calculations. It is a closely related identity, structured to give an unbiased estimate of the population variance:


\hat{\sigma}^2 = \frac{1}{N-1}\sum_{i=1}^N(x_i-\bar{x})^2 = \frac{N}{N-1}\left(\frac{1}{N}\left(\sum_{i=1}^N x_i^2\right) - \bar{x}^2\right)
\equiv \frac{1}{N-1}\left(\left(\sum_{i=1}^N x_i^2\right) - N \left(\bar{x}\right)^2\right)   .
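
The following Python sketch (an illustration, not from the original article) evaluates both forms of the estimator on a small data set and cross-checks them against the standard library:

import statistics

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(x)
x_bar = sum(x) / n

two_pass = sum((xi - x_bar) ** 2 for xi in x) / (n - 1)          # definition
one_pass = (sum(xi * xi for xi in x) - n * x_bar ** 2) / (n - 1) # identity above

assert abs(two_pass - one_pass) < 1e-12
assert abs(two_pass - statistics.variance(x)) < 1e-12            # library cross-check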

However, using these formulas can be unwise in practice with limited-precision floating-point arithmetic: subtracting two values of similar magnitude can lead to catastrophic cancellation,[1] and thus to a loss of significance when \operatorname{E}(X)^2 \gg \operatorname{Var}(X). Other, numerically stable algorithms exist for calculating the variance with floating-point numbers.
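
One such stable method is Welford's one-pass algorithm. The Python sketch below (an illustration; the large offset of 10^9 is chosen only to provoke cancellation) contrasts it with a naive use of the raw-moment identity:

def naive_variance(xs):
    # One-pass use of the raw-moment identity: accumulate x and x^2, then subtract.
    n = len(xs)
    s = s2 = 0.0
    for v in xs:
        s += v
        s2 += v * v
    return (s2 - s * s / n) / (n - 1)   # difference of two huge, nearly equal numbers

def welford_variance(xs):
    # Welford's update keeps a running mean and a running sum of squared deviations.
    mean = m2 = 0.0
    for n, v in enumerate(xs, start=1):
        delta = v - mean
        mean += delta / n
        m2 += delta * (v - mean)
    return m2 / (len(xs) - 1)

data = [1e9 + v for v in (4.0, 7.0, 13.0, 16.0)]  # true sample variance is 30
print(naive_variance(data))     # ruined by cancellation; may even be negative
print(welford_variance(data))   # 30.0 (up to rounding)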

Proof

The computational formula for the population variance follows in a straightforward manner from the linearity of expected values and the definition of variance:


\begin{align}
\operatorname{Var}(X)&=\operatorname{E}\left[(X - \operatorname{E}(X))^2\right]\\
                     &=\operatorname{E}\left[X^2 - 2X\operatorname{E}(X) + [\operatorname{E}(X)]^2\right]\\
                     &=\operatorname{E}(X^2) - \operatorname{E}[2X\operatorname{E}(X)] + [\operatorname{E}(X)]^2\\
                     &=\operatorname{E}(X^2) - 2\operatorname{E}(X)\operatorname{E}(X) + [\operatorname{E}(X)]^2\\
                     &=\operatorname{E}(X^2) - 2[\operatorname{E}(X)]^2 + [\operatorname{E}(X)]^2\\
                     &=\operatorname{E}(X^2) - [\operatorname{E}(X)]^2
\end{align}
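
The identity can also be verified symbolically. The following sketch (assuming the sympy library is available) checks it for a generic two-point distribution, where X takes the value a with probability p and the value b with probability 1 - p:

import sympy as sp

a, b, p = sp.symbols('a b p')
e_x  = p * a + (1 - p) * b               # E(X)
e_x2 = p * a**2 + (1 - p) * b**2         # E(X^2)
var_def = p * (a - e_x)**2 + (1 - p) * (b - e_x)**2   # E[(X - E(X))^2]

assert sp.expand(var_def - (e_x2 - e_x**2)) == 0      # both sides agree identically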

Generalization to covariance

This formula can be generalized to the covariance of two random variables Xi and Xj:

\operatorname{Cov}(X_i, X_j) = \operatorname{E}(X_iX_j) -\operatorname{E}(X_i)\operatorname{E}(X_j)
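
For instance, the sketch below (with an illustrative joint distribution chosen for this example) computes the covariance of two dependent Bernoulli variables from this identity:

# Joint pmf of (X, Y): the variables tend to take the same value.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

e_x  = sum(prob * x for (x, y), prob in joint.items())      # E(X)  = 0.5
e_y  = sum(prob * y for (x, y), prob in joint.items())      # E(Y)  = 0.5
e_xy = sum(prob * x * y for (x, y), prob in joint.items())  # E(XY) = 0.4

cov = e_xy - e_x * e_y   # 0.4 - 0.25 = 0.15 (up to float rounding)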

The same identity holds for the n by n covariance matrix of a random vector of length n:

 \operatorname{Var}(\mathbf{X}) = \operatorname{E}(\mathbf{X X^\top}) - \operatorname{E}(\mathbf{X})\operatorname{E}(\mathbf{X})^\top

and for the n by m cross-covariance matrix between two random vectors of length n and m:


\operatorname{Cov}(\textbf{X},\textbf{Y})=
\operatorname{E}(\mathbf{X Y^\top}) - \operatorname{E}(\mathbf{X})\operatorname{E}(\mathbf{Y})^\top

where expectations are taken element-wise and \mathbf{X}=(X_1,X_2,\ldots,X_n)^\top and \mathbf{Y}=(Y_1,Y_2,\ldots,Y_m)^\top are random column vectors of respective lengths n and m.
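
As a concrete check of both matrix identities, the following numpy sketch (an illustration, assuming numpy is available) replaces the expectations with sample averages over many draws, which yields the population (divide-by-N) form and can be compared against numpy's own estimators:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))   # 1000 draws of a random vector of length n = 3
Y = rng.normal(size=(1000, 2))   # 1000 draws of a random vector of length m = 2

mx, my = X.mean(axis=0), Y.mean(axis=0)

# E(X X^T) - E(X) E(X)^T and E(X Y^T) - E(X) E(Y)^T, with E taken as an average.
var_X  = (X[:, :, None] * X[:, None, :]).mean(axis=0) - np.outer(mx, mx)
cov_XY = (X[:, :, None] * Y[:, None, :]).mean(axis=0) - np.outer(mx, my)

# bias=True selects the 1/N convention used above; the cross block is 3 x 2.
assert np.allclose(var_X,  np.cov(X.T, bias=True))
assert np.allclose(cov_XY, np.cov(X.T, Y.T, bias=True)[:3, 3:])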

Note that these formulas suffer from the same loss of significance as the scalar formula for the variance if used to calculate estimates of the covariance.

References

  1. ^ Donald E. Knuth (1998). The Art of Computer Programming, Volume 2: Seminumerical Algorithms (3rd ed.). Boston: Addison-Wesley. p. 232.