Hewitt–Savage zero–one law

The Hewitt–Savage zero–one law is a theorem in probability theory, similar to Kolmogorov's zero–one law and the Borel–Cantelli lemma, that specifies that a certain type of event will either almost surely happen or almost surely not happen. It is sometimes known as the Hewitt–Savage law for symmetric events. It is named after Edwin Hewitt and Leonard Jimmie Savage.[1]

Statement of the Hewitt–Savage zero–one law

Let ${\displaystyle \left\{X_{n}\right\}_{n=1}^{\infty }}$ be a sequence of independent and identically-distributed random variables taking values in a measurable space ${\displaystyle \mathbb {X} }$. The Hewitt–Savage zero–one law says that any event whose occurrence or non-occurrence is determined by the values of these random variables, and whose occurrence or non-occurrence is unchanged by finite permutations of the indices, has probability either 0 or 1 (a "finite" permutation is one that leaves all but finitely many of the indices fixed).

Somewhat more abstractly, define the exchangeable sigma algebra or sigma algebra of symmetric events ${\displaystyle {\mathcal {E}}}$ to be the set of events (depending on the sequence of variables ${\displaystyle \left\{X_{n}\right\}_{n=1}^{\infty }}$) which are invariant under finite permutations of the indices in the sequence ${\displaystyle \left\{X_{n}\right\}_{n=1}^{\infty }}$. Then ${\displaystyle A\in {\mathcal {E}}\implies \mathbb {P} (A)\in \{0,1\}}$.

Since any finite permutation can be written as a product of transpositions, if we wish to check whether or not an event ${\displaystyle A}$ is symmetric (lies in ${\displaystyle {\mathcal {E}}}$), it is enough to check if its occurrence is unchanged by an arbitrary transposition ${\displaystyle (i,j)}$, ${\displaystyle i,j\in \mathbb {N} }$.
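The transposition criterion has a concrete finite analogue: an event that depends only on the multiset of values, rather than on their order, is unchanged when two indices are swapped. The following sketch illustrates this with a hypothetical symmetric event (the names `event`, `xs`, and the threshold `c` are illustrative, not part of the theorem):

```python
import random

random.seed(42)

# A sample path of 20 values standing in for a finite stretch of the sequence.
xs = [random.random() for _ in range(20)]
c = 10.0

def event(seq, threshold=c):
    # Hypothetical symmetric event: "the sum of the sequence exceeds threshold".
    # It depends only on the multiset of values, not on their order.
    return sum(seq) > threshold

# Apply an arbitrary transposition (i, j) to the indices.
i, j = 3, 17
swapped = list(xs)
swapped[i], swapped[j] = swapped[j], swapped[i]

# The event's occurrence is unchanged by the transposition.
print(event(xs) == event(swapped))  # True
```

Since every finite permutation factors into such transpositions, invariance under each transposition suffices for membership in ${\displaystyle {\mathcal {E}}}$.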

Examples

Example 1

Let the sequence ${\displaystyle \left\{X_{n}\right\}_{n=1}^{\infty }}$ take values in ${\displaystyle [0,\infty )}$. The event that the series ${\displaystyle \sum _{n=1}^{\infty }X_{n}}$ converges (to a finite value) is a symmetric event in ${\displaystyle {\mathcal {E}}}$: for a finite re-ordering, the convergence or divergence of the series (and, indeed, the numerical value of the sum itself) is independent of the order in which the terms are added, so the event's occurrence is unchanged under transpositions. Thus, the series either converges almost surely or diverges almost surely. If we assume in addition that the common expected value ${\displaystyle \mathbb {E} [X_{n}]>0}$ (which, by the random variables' non-negativity, amounts to assuming ${\displaystyle \mathbb {P} (X_{n}=0)<1}$), we may conclude that

${\displaystyle \mathbb {P} \left(\sum _{n=1}^{\infty }X_{n}=+\infty \right)=1,}$

i.e. the series diverges almost surely. This is a particularly simple application of the Hewitt–Savage zero–one law. In many situations, it can be easy to apply the Hewitt–Savage zero–one law to show that some event has probability 0 or 1, but surprisingly hard to determine which of these two extreme values is the correct one.
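A small simulation makes the conclusion tangible. Here the increments are drawn from an Exponential(1) distribution, a non-negative law with positive mean chosen purely for illustration; by the strong law of large numbers the partial sums grow without bound, consistent with the probability-one divergence that the zero–one law guarantees:

```python
import random

random.seed(0)

# i.i.d. non-negative increments X_n ~ Exponential(1), so E[X_n] = 1 > 0.
# The Hewitt–Savage zero–one law says P(sum X_n = +infinity) is 0 or 1;
# with E[X_n] > 0, the strong law of large numbers pins it at 1.
N = 100_000
partial_sum = sum(random.expovariate(1.0) for _ in range(N))

# S_N / N -> E[X_n] = 1 almost surely, so S_N -> +infinity.
print(partial_sum / N)  # close to 1
```

Note that the simulation only illustrates divergence; the dichotomy itself (probability exactly 0 or exactly 1, with no intermediate value possible) is what the theorem supplies.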

Example 2

Continuing with the previous example, define

${\displaystyle S_{N}=\sum _{n=1}^{N}X_{n},}$

which is the position at step N of a random walk with the iid increments Xn. The event { SN = 0 infinitely often } is invariant under finite permutations. Therefore, the zero–one law is applicable and one infers that the probability of a random walk with real iid increments visiting the origin infinitely often is either one or zero. Visiting the origin infinitely often is a tail event with respect to the sequence (SN), but the SN are not independent, and therefore Kolmogorov's zero–one law is not directly applicable here.[2]
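For a concrete instance, consider the simple symmetric random walk with increments Xn = ±1, each with probability 1/2 (a special case chosen for illustration). The zero–one law says the probability of infinitely many returns to the origin is 0 or 1; for this particular walk it is known to be 1 (the walk is recurrent), which the following sketch makes plausible:

```python
import random

random.seed(1)

# Simple symmetric random walk: i.i.d. increments X_n = +1 or -1, each w.p. 1/2.
# { S_N = 0 infinitely often } is a symmetric event, so its probability is 0 or 1;
# for this walk it equals 1, and returns to the origin keep accumulating.
steps = 1_000_000
position = 0
returns_to_origin = 0
for _ in range(steps):
    position += random.choice((-1, 1))
    if position == 0:
        returns_to_origin += 1

# The expected number of returns in N steps grows on the order of sqrt(N).
print(returns_to_origin)
```

By contrast, a walk with drift (e.g. P(Xn = 1) = 0.6) would return to the origin only finitely often almost surely, illustrating that the theorem decides only the dichotomy, not which side of it holds.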

References

1. ^ Hewitt, E.; Savage, L. J. (1955). "Symmetric measures on Cartesian products". Trans. Amer. Math. Soc. 80: 470–501. doi:10.1090/s0002-9947-1955-0076206-8.
2. ^ This example is from Shiryaev, A. (1996). Probability Theory (Second ed.). New York: Springer-Verlag. pp. 381–382.