# Law of the iterated logarithm

Plot of $S_n/n$ (red), its standard deviation $1/\sqrt{n}$ given by the CLT (blue), and its bound $\sqrt{2\log\log n/n}$ given by the LIL (green). Notice the way the path randomly switches between the upper and the lower bound. Both axes are non-linearly transformed (as explained in the figure summary) to make this effect more visible.

In probability theory, the law of the iterated logarithm describes the magnitude of the fluctuations of a random walk. The original statement of the law of the iterated logarithm is due to A. Y. Khinchin (1924).[1] Another statement was given by A.N. Kolmogorov in 1929.[2]

## Statement

Let $\{Y_n\}$ be independent, identically distributed random variables with mean zero and unit variance. Let $S_n = Y_1 + \cdots + Y_n$. Then

$\limsup_{n \to \infty} \frac{S_n}{\sqrt{n \log\log n}} = \sqrt{2}, \qquad \text{a.s.},$

where “log” is the natural logarithm, “lim sup” denotes the limit superior, and “a.s.” stands for “almost surely”.[3][4]
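The statement can be probed numerically. The sketch below is a minimal simulation, assuming Rademacher $\pm 1$ increments (which have mean zero and unit variance); it tracks the ratio $|S_n|/\sqrt{2n\log\log n}$ along one long path, whose limit superior the LIL asserts is 1 almost surely. All variable names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10**6
# One sample path of a +/-1 (Rademacher) random walk: zero mean, unit variance.
steps = rng.choice([-1.0, 1.0], size=n)
s = np.cumsum(steps)  # s[k-1] = S_k

# LIL scaling sqrt(2 n log log n); start at n = 100 to keep log log n
# comfortably positive (it requires n > e) and skip the noisy early terms.
idx = np.arange(100, n + 1)
ratio = np.abs(s[idx - 1]) / np.sqrt(2 * idx * np.log(np.log(idx)))

# For a single long path, the running maximum of this ratio should hover
# near 1 rather than drift off to infinity.
print(ratio.max())
```

For a finite path the maximum is only a rough proxy for the limit superior, but it already shows the fluctuations staying on the $\sqrt{2n\log\log n}$ scale.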

## Discussion

The law of the iterated logarithm operates “in between” the law of large numbers and the central limit theorem. There are two versions of the law of large numbers, the weak and the strong, and both claim that the sums $S_n$, scaled by $n^{-1}$, converge to zero, respectively in probability and almost surely:

$\frac{S_n}{n} \ \xrightarrow{p}\ 0, \qquad \frac{S_n}{n} \ \xrightarrow{a.s.} 0, \qquad \text{as}\ \ n\to\infty.$

On the other hand, the central limit theorem states that the sums $S_n$ scaled by the factor $n^{-1/2}$ converge in distribution to a standard normal distribution. By Kolmogorov's zero-one law, for any fixed $M$, the probability of the event $\limsup_n \frac{S_n}{\sqrt{n}} > M$ is either 0 or 1. Then

$P(\limsup_n \frac{S_n}{\sqrt{n}} > M) \geq \limsup_n P(\frac{S_n}{\sqrt{n}} > M) = P(\mathcal{N}(0, 1) > M) > 0$
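The key step in this inequality, that $P(S_n/\sqrt{n} > M)$ stabilizes near the positive Gaussian tail $P(\mathcal{N}(0,1) > M)$, can be checked empirically. A minimal sketch, again assuming Rademacher $\pm 1$ increments, with sample sizes chosen only for illustration:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

M, paths, n = 1.0, 20000, 10**4
# S_n for many independent walks: a sum of n +/-1 steps via a binomial count.
s_n = 2.0 * rng.binomial(n, 0.5, size=paths) - n

# Empirical P(S_n / sqrt(n) > M) versus the CLT limit P(N(0,1) > M).
emp = np.mean(s_n / np.sqrt(n) > M)
limit = 0.5 * (1 - erf(M / sqrt(2)))
print(emp, limit)
```

Since this limit is strictly positive for every fixed $M$, the zero-one law forces the probability of $\limsup_n S_n/\sqrt{n} > M$ to be 1.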

so $\limsup_n \frac{S_n}{\sqrt{n}}=\infty$ with probability 1. An identical argument shows that $\liminf_n \frac{S_n}{\sqrt{n}}=-\infty$ with probability 1 as well. This implies that these quantities converge neither in probability nor almost surely:

$\frac{S_n}{\sqrt n} \ \stackrel{p}{\nrightarrow}\ c, \qquad \frac{S_n}{\sqrt n} \ \stackrel{a.s.}{\nrightarrow}\ c, \qquad \text{for any constant}\ c, \ \text{as}\ \ n\to\infty.$

The law of the iterated logarithm provides the scaling factor where the two limits become different:

$\frac{S_n}{\sqrt{n\log\log n}} \ \xrightarrow{p}\ 0, \qquad \frac{S_n}{\sqrt{n\log\log n}} \ \stackrel{a.s.}{\nrightarrow}\ 0, \qquad \text{as}\ \ n\to\infty.$

Thus, although the quantity $S_n/\sqrt{n\log\log n}$ is smaller in absolute value than any predefined ε > 0 with probability approaching one, it will nevertheless leave that interval infinitely often, and in fact will visit the neighborhoods of any point in the interval (−√2, √2) almost surely.
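The contrast between the three scalings can also be seen at a single large time $n$: $S_n/n$ is tiny, $S_n/\sqrt{n}$ has spread of order one, and $|S_n|/\sqrt{2n\log\log n}$ is typically below 1. A minimal sketch, assuming Rademacher $\pm 1$ increments; the path count and $n$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

paths, n = 2000, 10**5
# S_n for many independent walks, as a sum of n +/-1 steps.
s_n = 2.0 * rng.binomial(n, 0.5, size=paths) - n

lln = np.abs(s_n) / n                 # law of large numbers scale: near 0
clt = s_n / np.sqrt(n)                # CLT scale: spread of order 1
lil = np.abs(s_n) / np.sqrt(2 * n * np.log(np.log(n)))  # LIL scale

print(lln.max(), clt.std(), lil.max())
```

At fixed $n$ the LIL-scaled quantity looks small, which matches the convergence in probability to 0; the almost-sure fluctuations up to the bound only appear along a single path as $n$ varies.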

## Generalizations and variants

The law of the iterated logarithm (LIL) for a sum of independent and identically distributed (i.i.d.) random variables with zero mean and bounded increments dates back to Khinchin and Kolmogorov in the 1920s.

Since then, there has been a tremendous amount of work on the LIL for various kinds of dependent structures and for stochastic processes. Following is a small sample of notable developments.

- Hartman-Wintner (1940) generalized the LIL to random walks whose increments have zero mean and finite variance.
- Strassen (1964) studied the LIL from the point of view of invariance principles.
- Stout (1970) generalized the LIL to stationary ergodic martingales.
- Acosta (1983) gave a simple proof of the Hartman-Wintner version of the LIL.
- Wittmann (1985) generalized the Hartman-Wintner version of the LIL to random walks satisfying milder conditions.
- Vovk (1987) derived a version of the LIL valid for a single chaotic sequence (Kolmogorov random sequence). This is notable, as it lies outside the realm of classical probability theory.