# Kolmogorov's two-series theorem

In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.

## Statement of the theorem

Let ${\displaystyle \left(X_{n}\right)_{n=1}^{\infty }}$ be independent random variables with expected values ${\displaystyle \mathbf {E} \left[X_{n}\right]=\mu _{n}}$ and variances ${\displaystyle \mathbf {Var} \left(X_{n}\right)=\sigma _{n}^{2}}$, such that ${\displaystyle \sum _{n=1}^{\infty }\mu _{n}}$ converges in ℝ and ${\displaystyle \sum _{n=1}^{\infty }\sigma _{n}^{2}}$ converges in ℝ. Then ${\displaystyle \sum _{n=1}^{\infty }X_{n}}$ converges in ℝ almost surely.
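As a concrete illustration (our example, not part of the theorem's statement), take ${\displaystyle X_{n}=\varepsilon _{n}/n}$ with independent uniform random signs ${\displaystyle \varepsilon _{n}\in \{-1,+1\}}$: every ${\displaystyle \mu _{n}=0}$ and ${\displaystyle \sum \sigma _{n}^{2}=\sum n^{-2}<\infty }$, so the theorem guarantees almost sure convergence of the series. A minimal simulation sketch in Python (function and variable names are ours):

```python
import random

def partial_sums(n_terms, seed=0):
    """Partial sums S_N of X_n = sign_n / n with independent random signs.

    Here E[X_n] = 0 and Var(X_n) = 1/n^2, so both series in the
    two-series theorem converge; the theorem then says the sample
    path of S_N converges almost surely.
    """
    rng = random.Random(seed)
    s, sums = 0.0, []
    for n in range(1, n_terms + 1):
        s += rng.choice([-1.0, 1.0]) / n
        sums.append(s)
    return sums

sums = partial_sums(100_000)
# For a convergent path the late partial sums barely move: the
# spread of the last 1000 values is tiny compared to early swings.
tail = sums[-1000:]
print(max(tail) - min(tail))
```

Running this for several seeds shows the same qualitative picture: each realization settles to a (seed-dependent) limit, exactly the almost-sure convergence the theorem asserts.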

## Proof

Assume without loss of generality that ${\displaystyle \mu _{n}=0}$: otherwise replace ${\displaystyle X_{n}}$ by ${\displaystyle X_{n}-\mu _{n}}$, which changes the partial sums only by the convergent sequence ${\displaystyle \sum _{n=1}^{N}\mu _{n}}$. Set ${\displaystyle S_{N}=\sum _{n=1}^{N}X_{n}}$; we will show that ${\displaystyle \limsup _{N}S_{N}-\liminf _{N}S_{N}=0}$ with probability 1, i.e. that the sequence of partial sums converges almost surely.

For every ${\displaystyle m\in \mathbb {N} }$,

${\displaystyle \limsup _{N\to \infty }S_{N}-\liminf _{N\to \infty }S_{N}=\limsup _{N\to \infty }\left(S_{N}-S_{m}\right)-\liminf _{N\to \infty }\left(S_{N}-S_{m}\right)\leq 2\max _{k\in \mathbb {N} }\left|\sum _{i=1}^{k}X_{m+i}\right|}$

Thus, for every ${\displaystyle m\in \mathbb {N} }$ and ${\displaystyle \epsilon >0}$,

${\displaystyle {\begin{aligned}\mathbb {P} \left(\limsup _{N\to \infty }\left(S_{N}-S_{m}\right)-\liminf _{N\to \infty }\left(S_{N}-S_{m}\right)\geq \epsilon \right)&\leq \mathbb {P} \left(2\max _{k\in \mathbb {N} }\left|\sum _{i=1}^{k}X_{m+i}\right|\geq \epsilon \right)\\&=\mathbb {P} \left(\max _{k\in \mathbb {N} }\left|\sum _{i=1}^{k}X_{m+i}\right|\geq {\frac {\epsilon }{2}}\right)\\&\leq \limsup _{N\to \infty }4\epsilon ^{-2}\sum _{i=m+1}^{m+N}\sigma _{i}^{2}\\&=4\epsilon ^{-2}\lim _{N\to \infty }\sum _{i=m+1}^{m+N}\sigma _{i}^{2}\end{aligned}}}$

Here the second inequality follows from Kolmogorov's inequality, applied to the finite maxima over ${\displaystyle 1\leq k\leq N}$ and then letting ${\displaystyle N\to \infty }$.
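Kolmogorov's inequality, in the mean-zero form used above, states that ${\displaystyle \mathbb {P} \left(\max _{1\leq k\leq N}\left|S_{k}\right|\geq \lambda \right)\leq \lambda ^{-2}\sum _{k=1}^{N}\sigma _{k}^{2}}$ for independent mean-zero summands. This bound can be sanity-checked by Monte Carlo; the sketch below uses Gaussian summands purely for convenience, and all names are illustrative assumptions rather than anything from the proof:

```python
import random

def max_partial_sum_exceeds(sigmas, lam, trials=2000, seed=1):
    """Monte Carlo estimate of P(max_k |S_k| >= lam) for independent
    mean-zero Gaussian summands with the given standard deviations.

    Kolmogorov's inequality bounds this probability by
    lam^{-2} * sum(sigma_k^2).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s, running_max = 0.0, 0.0
        for sd in sigmas:
            s += rng.gauss(0.0, sd)
            running_max = max(running_max, abs(s))
        if running_max >= lam:
            hits += 1
    return hits / trials

sigmas = [1.0 / n for n in range(1, 51)]   # sigma_n = 1/n, first 50 terms
lam = 2.0
bound = sum(sd * sd for sd in sigmas) / lam**2
estimate = max_partial_sum_exceeds(sigmas, lam)
print(estimate, "<=", bound)
```

The empirical frequency stays below the theoretical bound; the inequality is typically far from tight, which does no harm in the proof since only the bound's convergence to 0 as ${\displaystyle m\to \infty }$ is needed.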

By the assumption that ${\displaystyle \sum _{n=1}^{\infty }\sigma _{n}^{2}}$ converges, the last term tends to 0 as ${\displaystyle m\to \infty }$, for every ${\displaystyle \epsilon >0}$. Hence ${\displaystyle \mathbb {P} \left(\limsup _{N}S_{N}-\liminf _{N}S_{N}\geq \epsilon \right)=0}$ for every ${\displaystyle \epsilon >0}$, and taking the union over ${\displaystyle \epsilon =1/j}$, ${\displaystyle j\in \mathbb {N} }$, gives ${\displaystyle \limsup _{N}S_{N}-\liminf _{N}S_{N}=0}$ with probability 1. The partial sums ${\displaystyle S_{N}}$ therefore converge almost surely. ∎
