Effective sample size

In statistics, effective sample size is a notion defined for a sample from a distribution when the observations in the sample are correlated or weighted.[1]

Correlated observations

Suppose that a sample of ${\displaystyle n}$ observations ${\displaystyle y_{1},\ldots ,y_{n}}$ is drawn from a distribution with mean ${\displaystyle \mu }$ and standard deviation ${\displaystyle \sigma }$. Then ${\displaystyle \mu }$ is estimated by the sample mean:

${\displaystyle {\hat {\mu }}={\frac {1}{n}}\sum _{i=1}^{n}y_{i}.}$

If the observations are independent, the variance of ${\displaystyle {\hat {\mu }}}$ is given by

${\displaystyle \operatorname {Var} ({\hat {\mu }})={\frac {\sigma ^{2}}{n}}.}$

However, if the observations in the sample are positively correlated, then ${\displaystyle \operatorname {Var} ({\hat {\mu }})}$ is larger. For instance, if all observations in the sample are completely correlated (${\displaystyle \rho _{(i,j)}=1}$), then ${\displaystyle \operatorname {Var} ({\hat {\mu }})=\sigma ^{2}}$ regardless of ${\displaystyle n}$.

The effective sample size ${\displaystyle n_{\text{eff}}}$ is the unique value (not necessarily an integer) such that

${\displaystyle \operatorname {Var} ({\hat {\mu }})={\frac {\sigma ^{2}}{n_{\text{eff}}}}}$

${\displaystyle n_{\text{eff}}}$ is a function of the correlation between observations in the sample. Suppose that all the correlations are the same and nonnegative, i.e. if ${\displaystyle i\neq j}$, then ${\displaystyle \rho _{(i,j)}=\rho \geq 0}$. In that case, if ${\displaystyle \rho =0}$, then ${\displaystyle n_{\text{eff}}=n}$. Similarly, if ${\displaystyle \rho =1}$ then ${\displaystyle n_{\text{eff}}=1}$. More generally,

${\displaystyle n_{\text{eff}}={\frac {n}{1+(n-1)\rho }}}$
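This formula follows from expanding the variance of the sample mean: ${\displaystyle \operatorname {Var} ({\hat {\mu }})={\frac {1}{n^{2}}}\sum _{i,j}\operatorname {Cov} (y_{i},y_{j})={\frac {\sigma ^{2}}{n^{2}}}{\bigl (}n+n(n-1)\rho {\bigr )}={\frac {\sigma ^{2}}{n}}{\bigl (}1+(n-1)\rho {\bigr )}}$, and dividing ${\displaystyle \sigma ^{2}}$ by this variance gives ${\displaystyle n_{\text{eff}}}$. As a minimal numerical sketch (the function name is illustrative, not from any library):

```python
def effective_sample_size(n, rho):
    """Effective sample size for n equicorrelated observations
    with common pairwise correlation rho in [0, 1]."""
    return n / (1 + (n - 1) * rho)

# rho = 0 recovers n; rho = 1 collapses to a single effective observation.
print(effective_sample_size(100, 0.0))  # 100.0
print(effective_sample_size(100, 1.0))  # 1.0
print(effective_sample_size(100, 0.5))  # about 1.98
```

Note how quickly even moderate correlation erodes the effective sample: with ${\displaystyle \rho =0.5}$, a hundred observations carry the information of roughly two independent ones.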

The case where the correlations are not uniform is somewhat more complicated. Note that if the correlations are negative, the effective sample size may be larger than the actual sample size. Similarly, it is possible to construct correlation matrices that have ${\displaystyle n_{\text{eff}}>n}$ even when all correlations are positive. Intuitively, ${\displaystyle n_{\text{eff}}}$ may be thought of as the information content of the observed data.
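In the non-uniform case, the same expansion of ${\displaystyle \operatorname {Var} ({\hat {\mu }})}$ gives ${\displaystyle n_{\text{eff}}=n^{2}/\sum _{i,j}\rho _{(i,j)}}$, where the sum runs over all entries of the correlation matrix. A minimal sketch (the function name is illustrative):

```python
def ess_from_correlation(R):
    """Effective sample size implied by an n x n correlation matrix R,
    given as a list of rows: Var(mean) = sigma^2 * S / n^2, where S is
    the sum of all entries of R, so n_eff = n^2 / S."""
    n = len(R)
    total = sum(sum(row) for row in R)
    return n * n / total

# Negative correlation can push n_eff above the actual sample size:
R = [[1.0, -0.3],
     [-0.3, 1.0]]
print(ess_from_correlation(R))  # 4 / 1.4, about 2.86 > 2
```

With the identity matrix (independent observations) this reduces to ${\displaystyle n_{\text{eff}}=n}$, as expected.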

Weighted samples

If the data have been weighted, with each observation ${\displaystyle y_{i}}$ carrying a weight ${\displaystyle w_{i}}$, then the weighting can be understood as drawing some observations from the distribution with effectively 100% correlation with an earlier draw. In this case, the quantity is known as Kish's effective sample size:[2]

${\displaystyle n_{\text{eff}}={\frac {(\sum _{i=1}^{n}w_{i})^{2}}{\sum _{i=1}^{n}w_{i}^{2}}}}$
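Kish's formula can be computed directly from the weights. As a sketch (the name `kish_ess` is illustrative, not a library function):

```python
def kish_ess(weights):
    """Kish's effective sample size for a weighted sample:
    (sum of weights)^2 / (sum of squared weights)."""
    s = sum(weights)
    s2 = sum(w * w for w in weights)
    return s * s / s2

# Equal weights give n_eff = n; unequal weights shrink it.
print(kish_ess([1, 1, 1, 1]))    # 4.0
print(kish_ess([2, 1, 1, 0.5]))  # 3.24
```

Because the ratio is invariant to rescaling the weights, only their relative sizes matter: the more unequal the weights, the further ${\displaystyle n_{\text{eff}}}$ falls below ${\displaystyle n}$.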

References

1. ^ Tom Leinster (December 18, 2014). "Effective Sample Size".
2. ^