# Algebraic formula for the variance

In probability theory and statistics, there are several algebraic formulae available for deriving the variance of a random variable. The usefulness of each depends on what is already known about the random variable; for example, a random variable may be defined in terms of its probability density function or by construction from other random variables. The context here is that of deriving algebraic expressions for the theoretical variance of a random variable, in contrast to questions of estimating the variance of a population from sample data, for which there are special considerations in implementing computational algorithms.

## In terms of raw moments

If the raw moments E(X) and E(X²) of a random variable X are known (where E(X) is the expected value of X), then Var(X) is given by

$\operatorname {Var} (X)=\operatorname {E} (X^{2})-[\operatorname {E} (X)]^{2}.$

This result is called the König–Huygens formula in French-language literature and the Steiner translation theorem in Germany.
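As a sketch of the raw-moment formula, the following computes Var(X) = E(X²) − [E(X)]² for a fair six-sided die; the die is a standard illustrative example, not taken from the text above.

```python
# Var(X) = E(X^2) - [E(X)]^2 for a fair six-sided die (illustrative example).
outcomes = [1, 2, 3, 4, 5, 6]
p = 1 / 6  # uniform probability of each face

e_x = sum(p * x for x in outcomes)       # E(X)   = 3.5
e_x2 = sum(p * x * x for x in outcomes)  # E(X^2) = 91/6
var_x = e_x2 - e_x ** 2                  # 91/6 - 3.5^2 = 35/12

print(e_x, e_x2, var_x)
```

The same value results from the definitional form E[(X − E(X))²], but the raw-moment version needs only the two moments.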

There is a corresponding formula for use in estimating the variance from sample data, which can be useful in hand calculations. It is a closely related identity, structured to give an unbiased estimate of the population variance:

${\hat {\sigma }}^{2}={\frac {1}{N-1}}\sum _{i=1}^{N}(x_{i}-{\bar {x}})^{2}={\frac {N}{N-1}}\left({\frac {1}{N}}\left(\sum _{i=1}^{N}x_{i}^{2}\right)-{\bar {x}}^{2}\right)\equiv {\frac {1}{N-1}}\left(\left(\sum _{i=1}^{N}x_{i}^{2}\right)-N\left({\bar {x}}\right)^{2}\right).$

However, using these formulas can be unwise in practice with limited-precision floating-point arithmetic: subtracting two values of similar magnitude can lead to catastrophic cancellation, causing a loss of significance when $\operatorname {E} (X)^{2}\gg \operatorname {Var} (X)$. There are several numerically stable algorithms for calculating variance with floating-point numbers.
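The cancellation problem can be seen with a small data set of the kind described above (a hypothetical example: small noise around a large mean), comparing the naive sum-of-squares formula with the numerically stable two-pass formula:

```python
# Loss of significance in the naive formula when E(X)^2 >> Var(X).
# Data: hypothetical small deviations around a large mean.
data = [1e8 + d for d in (4.0, 7.0, 13.0, 16.0)]
n = len(data)
mean = sum(data) / n

# Naive one-pass form: (sum x_i^2 - N*xbar^2) / (N - 1) -- cancellation-prone,
# since sum(x_i^2) and N*xbar^2 agree in their leading ~16 digits.
naive = (sum(x * x for x in data) - n * mean * mean) / (n - 1)

# Two-pass form: sum (x_i - xbar)^2 / (N - 1) -- numerically stable.
two_pass = sum((x - mean) ** 2 for x in data) / (n - 1)

print(naive, two_pass)  # the exact sample variance here is 30.0
```

With the offset removed, the deviations are (−6, −3, 3, 6), so the true sample variance is 90/3 = 30; the two-pass result recovers this, while the naive result can be visibly off. Single-pass stable alternatives such as Welford's algorithm achieve the same effect without a second sweep over the data.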

### Proof

The computational formula for the population variance follows in a straightforward manner from the linearity of expected values and the definition of variance:

${\begin{aligned}\operatorname {Var} (X)&=\operatorname {E} \left[(X-\operatorname {E} (X))^{2}\right]\\&=\operatorname {E} \left[X^{2}-2X\operatorname {E} (X)+[\operatorname {E} (X)]^{2}\right]\\&=\operatorname {E} (X^{2})-\operatorname {E} [2X\operatorname {E} (X)]+[\operatorname {E} (X)]^{2}\\&=\operatorname {E} (X^{2})-2\operatorname {E} (X)\operatorname {E} (X)+[\operatorname {E} (X)]^{2}\\&=\operatorname {E} (X^{2})-2[\operatorname {E} (X)]^{2}+[\operatorname {E} (X)]^{2}\\&=\operatorname {E} (X^{2})-[\operatorname {E} (X)]^{2}\end{aligned}}$

### Generalization to covariance

This formula can be generalized for covariance, with two random variables Xi and Xj:

$\operatorname {Cov} (X_{i},X_{j})=\operatorname {E} (X_{i}X_{j})-\operatorname {E} (X_{i})\operatorname {E} (X_{j})$

as well as for the n by n covariance matrix of a random vector of length n:

$\operatorname {Var} (\mathbf {X} )=\operatorname {E} (\mathbf {XX^{\top }} )-\operatorname {E} (\mathbf {X} )\operatorname {E} (\mathbf {X} )^{\top }$

and for the n by m cross-covariance matrix between two random vectors of length n and m:

$\operatorname {Cov} ({\textbf {X}},{\textbf {Y}})=\operatorname {E} (\mathbf {XY^{\top }} )-\operatorname {E} (\mathbf {X} )\operatorname {E} (\mathbf {Y} )^{\top }$

where expectations are taken element-wise and $\mathbf {X} =\{X_{1},X_{2},\ldots ,X_{n}\}$ and $\mathbf {Y} =\{Y_{1},Y_{2},\ldots ,Y_{m}\}$ are random vectors of respective lengths n and m.
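The scalar covariance identity can be checked numerically on a small joint distribution; the distribution below is a hypothetical example given as (x, y, probability) triples.

```python
# Checks Cov(X, Y) = E(XY) - E(X)E(Y) on a hypothetical joint distribution
# of two binary random variables, listed as (x, y, probability) triples.
joint = [(0, 0, 0.4), (0, 1, 0.1), (1, 0, 0.1), (1, 1, 0.4)]

e_x = sum(p * x for x, y, p in joint)       # E(X)  = 0.5
e_y = sum(p * y for x, y, p in joint)       # E(Y)  = 0.5
e_xy = sum(p * x * y for x, y, p in joint)  # E(XY) = 0.4

cov = e_xy - e_x * e_y                      # 0.4 - 0.25 = 0.15
print(cov)
```

The positive covariance reflects that the mass concentrates on the diagonal outcomes (0, 0) and (1, 1); the vector versions above apply the same identity entry by entry.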

Note that this formula suffers from the same loss of significance as the variance formula when used to estimate the covariance from data, so alternative, numerically stable algorithms should be used instead.
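A two-pass estimate that centers both series before multiplying avoids the cancellation, in the same way as for the variance. The data below are hypothetical, again with large offsets to provoke cancellation in the naive form:

```python
# Two-pass sample covariance: center both series first, avoiding the
# cancellation in the naive sum(x*y) - N*xbar*ybar form.
# Data values are hypothetical: linear trends around a large offset.
xs = [1e8 + v for v in (1.0, 2.0, 3.0, 4.0)]
ys = [1e8 + v for v in (2.0, 4.0, 6.0, 8.0)]
n = len(xs)

mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Deviations: x gives (-1.5, -0.5, 0.5, 1.5), y gives (-3, -1, 1, 3),
# so the sample covariance is (4.5 + 0.5 + 0.5 + 4.5) / 3 = 10/3.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / (n - 1)
print(cov)
```

Single-pass stable updates (a Welford-style recurrence extended to products of deviations) exist as well, for streaming data where two passes are impractical.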