# Method of moments (statistics)

In statistics, the method of moments is a method of estimating population parameters.

It starts by expressing the population moments (i.e., the expected values of powers of the random variable under consideration) as functions of the parameters of interest. Those expressions are then set equal to the sample moments. The number of such equations is the same as the number of parameters to be estimated. Those equations are then solved for the parameters of interest. The solutions are estimates of those parameters.

The method of moments was introduced by Pafnuty Chebyshev in 1887 in the proof of the central limit theorem. The idea of matching empirical moments of a distribution to the population moments dates back at least to Pearson.[citation needed]

## Method

Suppose that the problem is to estimate ${\displaystyle k}$ unknown parameters ${\displaystyle \theta _{1},\theta _{2},\dots ,\theta _{k}}$ characterizing the distribution ${\displaystyle f_{W}(w;\theta )}$ of the random variable ${\displaystyle W}$.[1] Suppose the first ${\displaystyle k}$ moments of the true distribution (the "population moments") can be expressed as functions of the ${\displaystyle \theta }$s:

{\displaystyle {\begin{aligned}\mu _{1}&\equiv \operatorname {E} [W]=g_{1}(\theta _{1},\theta _{2},\ldots ,\theta _{k}),\\[4pt]\mu _{2}&\equiv \operatorname {E} [W^{2}]=g_{2}(\theta _{1},\theta _{2},\ldots ,\theta _{k}),\\&\,\,\,\vdots \\\mu _{k}&\equiv \operatorname {E} [W^{k}]=g_{k}(\theta _{1},\theta _{2},\ldots ,\theta _{k}).\end{aligned}}}

Suppose a sample of size ${\displaystyle n}$ is drawn, resulting in the values ${\displaystyle w_{1},\dots ,w_{n}}$. For ${\displaystyle j=1,\dots ,k}$, let

${\displaystyle {\widehat {\mu }}_{j}={\frac {1}{n}}\sum _{i=1}^{n}w_{i}^{j}}$

be the j-th sample moment, an estimate of ${\displaystyle \mu _{j}}$. The method of moments estimator for ${\displaystyle \theta _{1},\theta _{2},\ldots ,\theta _{k}}$, denoted by ${\displaystyle {\widehat {\theta }}_{1},{\widehat {\theta }}_{2},\dots ,{\widehat {\theta }}_{k}}$, is defined as the solution (if one exists) to the equations:[citation needed]

{\displaystyle {\begin{aligned}{\widehat {\mu }}_{1}&=g_{1}({\widehat {\theta }}_{1},{\widehat {\theta }}_{2},\ldots ,{\widehat {\theta }}_{k}),\\[4pt]{\widehat {\mu }}_{2}&=g_{2}({\widehat {\theta }}_{1},{\widehat {\theta }}_{2},\ldots ,{\widehat {\theta }}_{k}),\\&\,\,\,\vdots \\{\widehat {\mu }}_{k}&=g_{k}({\widehat {\theta }}_{1},{\widehat {\theta }}_{2},\ldots ,{\widehat {\theta }}_{k}).\end{aligned}}}
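As a concrete illustration of this recipe (a hypothetical example, not drawn from the text above), the following sketch estimates the two parameters of a gamma distribution, for which ${\displaystyle \operatorname {E} [W]=k\theta }$ and ${\displaystyle \operatorname {E} [W^{2}]=k(k+1)\theta ^{2}}$. Matching these two population moments to the sample moments yields closed-form estimators.

```python
import numpy as np

# Illustrative method-of-moments fit for Gamma(shape=k, scale=theta):
#   E[W]   = k * theta
#   E[W^2] = k * (k + 1) * theta^2
# so the implied variance is E[W^2] - E[W]^2 = k * theta^2, giving
#   theta_hat = var / mu1   and   k_hat = mu1^2 / var.
rng = np.random.default_rng(0)
w = rng.gamma(shape=3.0, scale=2.0, size=100_000)  # true k=3, theta=2

mu1 = np.mean(w)        # first sample moment
mu2 = np.mean(w**2)     # second sample moment

var = mu2 - mu1**2      # implied variance k * theta^2
theta_hat = var / mu1   # scale estimate
k_hat = mu1**2 / var    # shape estimate
```

With a large sample, `k_hat` and `theta_hat` land close to the true values 3 and 2; the closed-form solution here is exactly the "solve the system for the parameters" step described above.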

The method of moments is fairly simple and yields consistent estimators (under very weak assumptions), though these estimators are often biased.

In some respects, when estimating parameters of a known family of probability distributions, this method was superseded by Fisher's method of maximum likelihood, because maximum likelihood estimators are asymptotically efficient (they have a higher probability of being close to the quantities being estimated) and are more often unbiased[citation needed].

However, in some cases the likelihood equations may be intractable without computers, whereas the method-of-moments estimators can be quickly and easily calculated by hand.

Estimates by the method of moments may be used as the first approximation to the solutions of the likelihood equations, and successive improved approximations may then be found by the Newton–Raphson method. In this way the method of moments can assist in finding maximum likelihood estimates.
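This refinement can be sketched as follows (a hedged example, assuming SciPy's `digamma` and `polygamma` functions; the gamma distribution is chosen here because its shape-parameter likelihood equation has no closed-form solution). The method-of-moments estimate of the gamma shape seeds a Newton–Raphson iteration on the profile score equation ${\displaystyle \log k-\psi (k)=\log {\bar {w}}-{\overline {\log w}}}$:

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(1)
w = rng.gamma(shape=3.0, scale=2.0, size=50_000)  # true k=3, theta=2

# Method-of-moments estimate of the shape k as the starting point.
mu1, mu2 = np.mean(w), np.mean(w**2)
k = mu1**2 / (mu2 - mu1**2)

# Newton-Raphson on the profile score for the gamma shape k:
#   f(k) = log(k) - digamma(k) - c = 0,
# where c = log(mean(w)) - mean(log(w)).
c = np.log(np.mean(w)) - np.mean(np.log(w))
for _ in range(20):
    f = np.log(k) - digamma(k) - c
    fprime = 1.0 / k - polygamma(1, k)  # derivative of f w.r.t. k
    k -= f / fprime

theta = np.mean(w) / k  # maximum likelihood estimate of scale given k
```

Because the moment estimate is already near the root, a handful of Newton steps suffice; a poor starting point could instead send the iteration outside the parameter space.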

In some cases, infrequent with large samples but less so with small samples, the estimates given by the method of moments fall outside the parameter space (as shown in the example below); it then makes no sense to rely on them. That problem never arises in the method of maximum likelihood[citation needed]. Also, estimates by the method of moments are not necessarily sufficient statistics, i.e., they sometimes fail to take into account all relevant information in the sample.

When estimating other structural parameters (e.g., parameters of a utility function, instead of parameters of a known probability distribution), appropriate probability distributions may not be known, and moment-based estimates may be preferred to maximum likelihood estimation.

## Examples

An example application of the method of moments is to estimate polynomial probability density functions. In this case, an approximating polynomial of order ${\displaystyle N}$ is defined on an interval ${\displaystyle [a,b]}$. The method of moments then yields a system of equations whose solution involves the inversion of a Hankel matrix.[2]

### Uniform distribution

Consider the uniform distribution on the interval ${\displaystyle [a,b]}$, ${\displaystyle U(a,b)}$. If ${\displaystyle W\sim U(a,b)}$ then we have

${\displaystyle \mu _{1}=\operatorname {E} [W]={\frac {1}{2}}(a+b)}$
${\displaystyle \mu _{2}=\operatorname {E} [W^{2}]={\frac {1}{3}}(a^{2}+ab+b^{2})}$

Solving these equations gives

${\displaystyle {\widehat {a}}=\mu _{1}-{\sqrt {3\left(\mu _{2}-\mu _{1}^{2}\right)}}}$
${\displaystyle {\widehat {b}}=\mu _{1}+{\sqrt {3\left(\mu _{2}-\mu _{1}^{2}\right)}}}$

where the signs of the square roots are fixed by the requirement ${\displaystyle {\widehat {a}}<{\widehat {b}}}$.

Given a set of samples ${\displaystyle \{w_{i}\}}$ we can use the sample moments ${\displaystyle {\widehat {\mu }}_{1}}$ and ${\displaystyle {\widehat {\mu }}_{2}}$ in these formulae in order to estimate ${\displaystyle a}$ and ${\displaystyle b}$.

Note, however, that this method can produce estimates that are incompatible with the observed sample. For example, the set of samples ${\displaystyle \{0,0,0,0,1\}}$ results in the estimates ${\displaystyle {\widehat {a}}={\frac {1}{5}}-{\frac {2{\sqrt {3}}}{5}},{\widehat {b}}={\frac {1}{5}}+{\frac {2{\sqrt {3}}}{5}}}$. Here ${\displaystyle {\widehat {b}}<1}$, so it is impossible for the set ${\displaystyle \{0,0,0,0,1\}}$ to have been drawn from ${\displaystyle U({\widehat {a}},{\widehat {b}})}$.
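This failure case can be reproduced numerically (a short sketch; the helper name `uniform_mom` is chosen here purely for illustration):

```python
import numpy as np

def uniform_mom(w):
    """Method-of-moments estimates (a_hat, b_hat) for U(a, b)."""
    w = np.asarray(w, dtype=float)
    mu1 = np.mean(w)           # first sample moment
    mu2 = np.mean(w**2)        # second sample moment
    half_width = np.sqrt(3.0 * (mu2 - mu1**2))
    return mu1 - half_width, mu1 + half_width

a_hat, b_hat = uniform_mom([0, 0, 0, 0, 1])
# b_hat = 1/5 + 2*sqrt(3)/5, which is less than 1, so the observed
# value 1 lies outside the estimated support [a_hat, b_hat].
```

Checking `b_hat < 1` confirms the estimated support excludes one of the observations, illustrating how moment estimates can fall outside the region consistent with the data.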