# Kolmogorov's inequality

In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that bounds the probability that the partial sums of a finite collection of independent random variables exceed some specified value at any point. The inequality is named after the Russian mathematician Andrey Kolmogorov.

## Statement of the inequality

Let X1, ..., Xn : Ω → R be independent random variables defined on a common probability space (Ω, F, Pr), with expected value E[Xk] = 0 and variance Var[Xk] < +∞ for k = 1, ..., n. Then, for each λ > 0,

${\displaystyle \Pr \left(\max _{1\leq k\leq n}|S_{k}|\geq \lambda \right)\leq {\frac {1}{\lambda ^{2}}}\operatorname {Var} [S_{n}]\equiv {\frac {1}{\lambda ^{2}}}\sum _{k=1}^{n}\operatorname {Var} [X_{k}]={\frac {1}{\lambda ^{2}}}\sum _{k=1}^{n}{\text{E}}[X_{k}^{2}],}$

where Sk = X1 + ... + Xk.

The convenience of this result is that it bounds the worst-case deviation of a random walk over an entire time interval using only the variance of its value at the end of the interval.
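As a quick numerical illustration (a minimal sketch; the function name and parameter values are ours, not from the source), the probability on the left-hand side can be estimated by Monte Carlo for a simple ±1 random walk and compared against Var[S_n]/λ²:

```python
import random

random.seed(0)  # reproducible sketch

def estimate_max_deviation_prob(n, lam, trials):
    """Monte Carlo estimate of Pr(max_{1<=k<=n} |S_k| >= lam),
    where S_k = X_1 + ... + X_k and the X_i are iid +/-1 steps
    (so E[X_i] = 0 and Var[X_i] = 1, hence Var[S_n] = n)."""
    hits = 0
    for _ in range(trials):
        s = 0
        for _ in range(n):
            s += random.choice((-1, 1))
            if abs(s) >= lam:  # the partial-sum path crossed the level
                hits += 1
                break
    return hits / trials

n, lam = 50, 10.0
estimate = estimate_max_deviation_prob(n, lam, trials=20000)
bound = n / lam**2  # Var[S_n] / lam^2 = 50 / 100 = 0.5
print(f"empirical probability ~ {estimate:.3f}, Kolmogorov bound = {bound:.3f}")
```

The empirical probability comes out well below the bound, as expected: Kolmogorov's inequality is typically not tight, but it holds uniformly over the whole time horizon.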

## Proof

The following argument is due to Kareem Amin and employs discrete martingales. As argued in the discussion of Doob's martingale inequality, the sequence ${\displaystyle S_{1},S_{2},\dots ,S_{n}}$ is a martingale. Set ${\displaystyle S_{0}=0}$, and define ${\displaystyle (Z_{i})_{i=0}^{n}}$ as the sequence ${\displaystyle (S_{i})}$ stopped the first time ${\displaystyle |S_{j}|}$ reaches ${\displaystyle \lambda }$: let ${\displaystyle Z_{0}=0}$, and

${\displaystyle Z_{i+1}=\left\{{\begin{array}{ll}S_{i+1}&{\text{ if }}\displaystyle \max _{1\leq j\leq i}|S_{j}|<\lambda \\Z_{i}&{\text{ otherwise}}\end{array}}\right.}$

for all ${\displaystyle i}$. As a stopped martingale, ${\displaystyle (Z_{i})_{i=0}^{n}}$ is also a martingale. The increment ${\displaystyle S_{i}-S_{i-1}=X_{i}}$ is independent of ${\displaystyle S_{i-1}}$ and has mean zero, so ${\displaystyle {\text{E}}[S_{i-1}(S_{i}-S_{i-1})]={\text{E}}[S_{i-1}]\,{\text{E}}[X_{i}]=0}$, and the sum of squared increments telescopes:

${\displaystyle {\begin{aligned}\sum _{i=1}^{n}{\text{E}}[(S_{i}-S_{i-1})^{2}]&=\sum _{i=1}^{n}{\text{E}}[S_{i}^{2}-2S_{i}S_{i-1}+S_{i-1}^{2}]\\&=\sum _{i=1}^{n}{\text{E}}\left[S_{i}^{2}-2(S_{i-1}+S_{i}-S_{i-1})S_{i-1}+S_{i-1}^{2}\right]\\&=\sum _{i=1}^{n}{\text{E}}\left[S_{i}^{2}-S_{i-1}^{2}\right]-2{\text{E}}\left[S_{i-1}(S_{i}-S_{i-1})\right]\\&={\text{E}}[S_{n}^{2}]-{\text{E}}[S_{0}^{2}]={\text{E}}[S_{n}^{2}].\end{aligned}}}$

The same identity holds for ${\displaystyle (Z_{i})_{i=0}^{n}}$, whose increments are also martingale differences. Moreover, ${\displaystyle Z_{i}-Z_{i-1}}$ equals ${\displaystyle S_{i}-S_{i-1}}$ before the stopping time and ${\displaystyle 0}$ afterwards, so ${\displaystyle {\text{E}}[(Z_{i}-Z_{i-1})^{2}]\leq {\text{E}}[(S_{i}-S_{i-1})^{2}]}$. Since the event ${\displaystyle \max _{1\leq i\leq n}|S_{i}|\geq \lambda }$ occurs exactly when ${\displaystyle |Z_{n}|\geq \lambda }$ (the stopped sequence freezes at the first value of modulus at least ${\displaystyle \lambda }$, and otherwise ends at ${\displaystyle S_{n}}$ with ${\displaystyle |S_{n}|<\lambda }$), Chebyshev's inequality gives

${\displaystyle {\begin{aligned}{\text{Pr}}\left(\max _{1\leq i\leq n}|S_{i}|\geq \lambda \right)&={\text{Pr}}\left(|Z_{n}|\geq \lambda \right)\\&\leq {\frac {1}{\lambda ^{2}}}{\text{E}}[Z_{n}^{2}]={\frac {1}{\lambda ^{2}}}\sum _{i=1}^{n}{\text{E}}[(Z_{i}-Z_{i-1})^{2}]\\&\leq {\frac {1}{\lambda ^{2}}}\sum _{i=1}^{n}{\text{E}}[(S_{i}-S_{i-1})^{2}]={\frac {1}{\lambda ^{2}}}{\text{E}}[S_{n}^{2}]={\frac {1}{\lambda ^{2}}}{\text{Var}}[S_{n}].\end{aligned}}}$
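The stopped sequence used in the proof can also be simulated. The sketch below (illustrative names; a ±1 random walk is assumed for concreteness) freezes a copy of the walk once its modulus reaches λ and checks numerically that stopping can only shrink the second moment, i.e. E[Z_n²] ≤ E[S_n²] = Var[S_n]:

```python
import random

random.seed(1)  # reproducible sketch

def stopped_second_moments(n, lam, trials):
    """Simulate S_i with iid +/-1 steps alongside Z_i, the copy of S_i
    frozen once max_j |S_j| reaches lam; return Monte Carlo estimates
    of E[Z_n^2] and E[S_n^2]."""
    z_sum_sq = s_sum_sq = 0.0
    for _ in range(trials):
        s = z = 0
        frozen = False
        for _ in range(n):
            s += random.choice((-1, 1))
            if not frozen:
                z = s                    # Z follows S until the threshold...
                frozen = abs(z) >= lam   # ...then keeps its value forever
        z_sum_sq += z * z
        s_sum_sq += s * s
    return z_sum_sq / trials, s_sum_sq / trials

ez2, es2 = stopped_second_moments(n=30, lam=4, trials=20000)
print(f"E[Z_n^2] ~ {ez2:.2f} <= E[S_n^2] ~ {es2:.2f} (exact Var[S_n] = 30)")
```

For ±1 steps the stopped walk satisfies |Z_n| ≤ λ, so its second moment is far below Var[S_n] = n here; the proof only needs the inequality, which the simulation reflects.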

This inequality was generalized by Hájek and Rényi in 1955.