# Martingale central limit theorem

In probability theory, the central limit theorem says that, under certain conditions, the sum of many independent identically-distributed random variables, when scaled appropriately, converges in distribution to a standard normal distribution. The martingale central limit theorem generalizes this result for random variables to martingales, which are stochastic processes where the change in the value of the process from time t to time t + 1 has expectation zero, even conditioned on previous outcomes.

## Statement

Here is a simple version of the martingale central limit theorem: Let $X_{1},X_{2},\dots \,$ be a martingale with bounded increments; that is, suppose

$\operatorname {E} [X_{t+1}-X_{t}\vert X_{1},\dots ,X_{t}]=0\,,$ and

$|X_{t+1}-X_{t}|\leq k$ almost surely for some fixed bound k and all t. Also assume that $|X_{1}|\leq k$ almost surely.
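A minimal sketch of a process satisfying these hypotheses (the ±k coin-flip walk is an illustrative choice, not part of the theorem's statement): each increment has conditional mean zero and is bounded by k, and $|X_1| \leq k$.

```python
import random

def martingale_path(T, k=1.0, seed=0):
    """Simulate one path X_1, ..., X_T of a bounded-increment martingale:
    each increment X_{t+1} - X_t is +k or -k with equal probability, so
    it has conditional mean zero and satisfies |X_{t+1} - X_t| <= k."""
    rng = random.Random(seed)
    X = [rng.choice([-k, k])]                  # |X_1| <= k almost surely
    for _ in range(T - 1):
        X.append(X[-1] + rng.choice([-k, k]))  # zero-mean, bounded step
    return X

path = martingale_path(10)
```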

Define

$\sigma _{t}^{2}=\operatorname {E} [(X_{t+1}-X_{t})^{2}|X_{1},\ldots ,X_{t}],$ and let

$\tau _{\nu }=\min \left\{t:\sum _{i=1}^{t}\sigma _{i}^{2}\geq \nu \right\}.$ Then

${\frac {X_{\tau _{\nu }}}{\sqrt {\nu }}}$ converges in distribution to the normal distribution with mean 0 and variance 1 as $\nu \to +\infty$. More explicitly,

$\lim _{\nu \to +\infty }\operatorname {P} \left({\frac {X_{\tau _{\nu }}}{\sqrt {\nu }}}<x\right)=\Phi (x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}e^{-u^{2}/2}\,du\,.$

## The sum of variances must diverge to infinity

The statement of the above result implicitly assumes that, with probability 1, the variances sum to infinity:

$\sum _{t=1}^{\infty }\sigma _{t}^{2}=\infty \,.$

This ensures that, with probability 1,

$\tau _{\nu }<\infty \quad \forall \nu \geq 0\,.$

This condition is violated, for example, by a martingale that is defined to be zero almost surely for all time.

## Intuition on the result

The result can be intuitively understood by writing the ratio as a summation:

${\frac {X_{\tau _{\nu }}}{\sqrt {\nu }}}={\frac {X_{1}}{\sqrt {\nu }}}+{\frac {1}{\sqrt {\nu }}}\sum _{i=1}^{\tau _{\nu }-1}(X_{i+1}-X_{i})\,,\quad \forall \tau _{\nu }\geq 1\,.$

The first term on the right-hand side converges to zero almost surely, since $|X_{1}|\leq k$ is bounded while the denominator grows without bound. The second term is qualitatively similar to the summation formula for the central limit theorem in the simpler case of i.i.d. random variables. While the terms in the above expression are not necessarily i.i.d., they are uncorrelated and have zero mean. Indeed:

$\operatorname {E} [X_{i+1}-X_{i}]=0\,,\quad \forall i\in \{1,2,3,\dots \}\,,$

$\operatorname {E} [(X_{i+1}-X_{i})(X_{j+1}-X_{j})]=0\,,\quad \forall i\neq j\,.$

Both identities follow from the martingale property via the tower rule for conditional expectation: conditioning the increment $X_{j+1}-X_{j}$ (for $j>i$) on $X_{1},\dots ,X_{j}$ makes it vanish.
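This intuition can be checked numerically. The sketch below (a Monte Carlo illustration, not part of the formal statement) uses the ±1 walk, for which $\sigma_t^2 = 1$ and hence $\tau_\nu = \lceil \nu \rceil$, and observes that samples of $X_{\tau_\nu}/\sqrt{\nu}$ have mean near 0 and variance near 1:

```python
import math
import random
import statistics

def scaled_stopped_value(nu, rng):
    """For the +/-1 walk, sigma_t^2 = 1 for all t, so tau_nu = ceil(nu).
    Run the walk for that many steps and return X_{tau_nu} / sqrt(nu)."""
    steps = math.ceil(nu)
    x = sum(rng.choice([-1, 1]) for _ in range(steps))
    return x / math.sqrt(nu)

rng = random.Random(42)
samples = [scaled_stopped_value(400.0, rng) for _ in range(2000)]
print(round(statistics.mean(samples), 2))       # close to 0
print(round(statistics.variance(samples), 2))   # close to 1
```

With 2000 samples the empirical mean and variance approach the N(0, 1) values 0 and 1, consistent with the convergence in distribution stated above.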