# Borel–Cantelli lemma

In probability theory, the Borel–Cantelli lemma is a theorem about sequences of events. In general, it is a result in measure theory. It is named after Émile Borel and Francesco Paolo Cantelli, who stated the lemma in the first decades of the 20th century.[1][2] A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states that, under certain conditions, an event will have probability either zero or one. Accordingly, it is the best-known of a class of similar theorems known as zero–one laws. Other examples include Kolmogorov's zero–one law and the Hewitt–Savage zero–one law.

## Statement of lemma for probability spaces

Let E1,E2,... be a sequence of events in some probability space. The Borel–Cantelli lemma states:[3]

If the sum of the probabilities of the En is finite
${\displaystyle \sum _{n=1}^{\infty }\Pr(E_{n})<\infty ,}$
then the probability that infinitely many of them occur is 0, that is,
${\displaystyle \Pr \left(\limsup _{n\to \infty }E_{n}\right)=0.\,}$

Here, "lim sup" denotes the limit superior of the sequence of events, and each event is a set of outcomes. That is, lim sup En is the set of outcomes that belong to infinitely many of the events in the infinite sequence (En). Explicitly,

${\displaystyle \limsup _{n\to \infty }E_{n}=\bigcap _{n=1}^{\infty }\bigcup _{k=n}^{\infty }E_{k}.}$

The theorem therefore asserts that if the sum of the probabilities of the events En is finite, then the set of all outcomes that are "repeated" infinitely many times must occur with probability zero. Note that no assumption of independence is required.

### Example

Suppose (Xn) is a sequence of random variables with Pr(Xn = 0) = 1/n² for each n. The event that Xn = 0 for infinitely many n is the limit superior of the events [Xn = 0]. Since the sum ∑Pr(Xn = 0) = ∑1/n² converges to π²/6 ≈ 1.645 < ∞, the Borel–Cantelli lemma states that this event occurs with probability zero. Hence, the probability of Xn = 0 occurring for infinitely many n is 0. Almost surely (i.e., with probability 1), Xn is nonzero for all but finitely many n.
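This example can be illustrated by a small simulation. The following is a minimal Python sketch (the function name and parameters are illustrative, not from the source), drawing each Xn with Pr(Xn = 0) = 1/n² and counting how often zero occurs; a fixed seed is used for reproducibility.

```python
import random

# Simulate X_1, ..., X_N with Pr(X_n = 0) = 1/n^2.  The Borel-Cantelli
# lemma predicts that only finitely many X_n equal 0; the expected
# number of such n is sum(1/n^2) -> pi^2/6, roughly 1.645.

def count_zero_events(num_vars, rng):
    """Count how many indices n in 1..num_vars satisfy X_n = 0."""
    return sum(1 for n in range(1, num_vars + 1)
               if rng.random() < 1.0 / n**2)

rng = random.Random(0)  # fixed seed, illustrative only
trials = 500
counts = [count_zero_events(5_000, rng) for _ in range(trials)]
average = sum(counts) / trials
print(f"average number of indices with X_n = 0: {average:.3f}")
```

In each trial the count is a small finite number, and the average over trials is close to π²/6, consistent with the lemma's conclusion that infinitely many zeros occur with probability 0.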

## Proof[4]

Let ${\displaystyle [E_{n}]}$ denote the indicator function of the event ${\displaystyle E_{n}}$ (using Iverson bracket notation). Then, by the linearity of expectation

${\displaystyle \operatorname {E} \left(\sum _{n}[E_{n}]\right)=\sum _{n}\operatorname {E} ([E_{n}])=\sum _{n}\Pr(E_{n})<\infty }$

by hypothesis. This directly implies that

${\displaystyle \Pr \left(\sum _{n}[E_{n}]=\infty \right)=0}$

because otherwise

${\displaystyle \operatorname {E} \left(\sum _{n}[E_{n}]\right)\geq \int _{\{\sum _{n}[E_{n}]=\infty \}}\left(\sum _{n}[E_{n}]\right)\,\mathrm {d} \mathbb {\Pr } =\infty .}$

### Alternative proof[5]

Let (En) be a sequence of events in some probability space and suppose that the sum of the probabilities of the En is finite. That is, suppose:

${\displaystyle \sum _{n=1}^{\infty }\Pr(E_{n})<\infty .}$

Since the series converges, its tail sums must tend to zero; that is,

${\displaystyle \sum _{n=N}^{\infty }\Pr(E_{n})\rightarrow 0,\quad {\text{as }}N\to \infty .}$

Therefore:

${\displaystyle \inf _{N\geqslant 1}\sum _{n=N}^{\infty }\Pr(E_{n})=0.}$

It follows that

{\displaystyle {\begin{aligned}\Pr \left(\limsup _{n\to \infty }E_{n}\right)&=\Pr({\text{infinitely many of the }}E_{n}{\text{ occur}})\\[6pt]&=\Pr \left(\bigcap _{N=1}^{\infty }\bigcup _{n=N}^{\infty }E_{n}\right)\\&\leqslant \inf _{N\geqslant 1}\Pr \left(\bigcup _{n=N}^{\infty }E_{n}\right)\\&\leqslant \inf _{N\geqslant 1}\sum _{n=N}^{\infty }\Pr(E_{n})\\&=0\end{aligned}}}

## General measure spaces

For general measure spaces, the Borel–Cantelli lemma takes the following form:

Let μ be a (positive) measure on a set X, with σ-algebra F, and let (An) be a sequence in F. If
${\displaystyle \sum _{n=1}^{\infty }\mu (A_{n})<\infty ,}$
then
${\displaystyle \mu \left(\limsup _{n\to \infty }A_{n}\right)=0.\,}$

## Converse result

A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states: If the events En are independent and the sum of the probabilities of the En diverges to infinity, then the probability that infinitely many of them occur is 1. That is:

If ${\displaystyle \sum _{n=1}^{\infty }\Pr(E_{n})=\infty }$ and the events ${\displaystyle (E_{n})_{n=1}^{\infty }}$ are independent, then ${\displaystyle \Pr(\limsup _{n\rightarrow \infty }E_{n})=1.}$

The assumption of independence can be weakened to pairwise independence, but in that case the proof is more difficult.
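The divergence condition of the second lemma can also be illustrated numerically. The sketch below (illustrative names and a fixed seed, not from the source) simulates independent events En with Pr(En) = 1/n; since the harmonic series diverges, the lemma predicts that infinitely many En occur almost surely, which in a finite simulation appears as a count of occurrences that keeps growing like log N.

```python
import math
import random

# Independent events E_n with Pr(E_n) = 1/n: the sum of probabilities
# diverges (harmonic series), so the second Borel-Cantelli lemma says
# infinitely many E_n occur almost surely.  In a finite simulation the
# number of occurrences among E_1..E_N grows like log N without bound.

def count_occurrences(num_events, rng):
    """Count how many of the simulated events E_1..E_N occur."""
    return sum(1 for n in range(1, num_events + 1)
               if rng.random() < 1.0 / n)

rng = random.Random(1)  # fixed seed, illustrative only
results = {N: count_occurrences(N, rng) for N in (100, 10_000, 1_000_000)}
for N, c in results.items():
    print(f"N = {N:>9}: {c} events occurred (log N = {math.log(N):.1f})")
```

Unlike the convergent case above, the count does not settle down as N grows; this contrast is exactly the dichotomy between the two lemmas.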

### Example

The infinite monkey theorem is a special case of this lemma.

The lemma can be applied to give a covering theorem in Rn. Specifically (Stein 1993, Lemma X.2.1), if (Ej) is a sequence of Lebesgue measurable subsets of a compact set in Rn (with μ denoting Lebesgue measure) such that

${\displaystyle \sum _{j}\mu (E_{j})=\infty ,}$

then there is a sequence Fj of translates

${\displaystyle F_{j}=E_{j}+x_{j}\,}$

such that

${\displaystyle \limsup _{j\to \infty }F_{j}=\bigcap _{n=1}^{\infty }\bigcup _{k=n}^{\infty }F_{k}=\mathbb {R} ^{n}}$

apart from a set of measure zero.

## Proof[5]

Suppose that ${\displaystyle \sum _{n=1}^{\infty }\Pr(E_{n})=\infty }$ and the events ${\displaystyle (E_{n})_{n=1}^{\infty }}$ are independent. It suffices to show that the event that only finitely many of the En occur has probability 0; that is, it suffices to show that

${\displaystyle 1-\Pr(\limsup _{n\rightarrow \infty }E_{n})=0.\,}$

Noting that:

{\displaystyle {\begin{aligned}1-\Pr(\limsup _{n\rightarrow \infty }E_{n})&=1-\Pr \left(\{E_{n}{\text{ i.o.}}\}\right)=\Pr \left(\{E_{n}{\text{ i.o.}}\}^{c}\right)\\&=\Pr \left(\left(\bigcap _{N=1}^{\infty }\bigcup _{n=N}^{\infty }E_{n}\right)^{c}\right)=\Pr \left(\bigcup _{N=1}^{\infty }\bigcap _{n=N}^{\infty }E_{n}^{c}\right)\\&=\Pr \left(\liminf _{n\rightarrow \infty }E_{n}^{c}\right)=\lim _{N\rightarrow \infty }\Pr \left(\bigcap _{n=N}^{\infty }E_{n}^{c}\right)\end{aligned}}}

it is enough to show that ${\displaystyle \Pr \left(\bigcap _{n=N}^{\infty }E_{n}^{c}\right)=0}$ for every ${\displaystyle N}$. Since the ${\displaystyle (E_{n})_{n=1}^{\infty }}$ are independent:

{\displaystyle {\begin{aligned}\Pr \left(\bigcap _{n=N}^{\infty }E_{n}^{c}\right)&=\prod _{n=N}^{\infty }\Pr(E_{n}^{c})\\&=\prod _{n=N}^{\infty }(1-\Pr(E_{n}))\\&\leq \prod _{n=N}^{\infty }\exp(-\Pr(E_{n}))\\&=\exp \left(-\sum _{n=N}^{\infty }\Pr(E_{n})\right)\\&=0.\end{aligned}}}

This completes the proof. Alternatively, we can see that ${\displaystyle \Pr \left(\bigcap _{n=N}^{\infty }E_{n}^{c}\right)=0}$ by taking the negative of the logarithm of both sides to get:

{\displaystyle {\begin{aligned}-\log \left(\Pr \left(\bigcap _{n=N}^{\infty }E_{n}^{c}\right)\right)&=-\log \left(\prod _{n=N}^{\infty }(1-\Pr(E_{n}))\right)\\&=-\sum _{n=N}^{\infty }\log(1-\Pr(E_{n})).\end{aligned}}}

Since −log(1 − x) ≥ x for all 0 ≤ x < 1, the right-hand side is at least ${\displaystyle \sum _{n=N}^{\infty }\Pr(E_{n})}$, which diverges by our assumption that ${\displaystyle \sum _{n=1}^{\infty }\Pr(E_{n})=\infty .}$ Hence the left-hand side is infinite, so ${\displaystyle \Pr \left(\bigcap _{n=N}^{\infty }E_{n}^{c}\right)=0}$, as before.
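The elementary inequality used in this last step can be checked numerically; the following is a minimal Python sketch (the grid and variable names are illustrative):

```python
import math

# Verify -log(1 - x) >= x on a fine grid in [0, 1): this is the
# inequality used to bound -sum(log(1 - Pr(E_n))) below by
# sum(Pr(E_n)), which diverges.
xs = [i / 10_000 for i in range(10_000)]  # 0, 0.0001, ..., 0.9999
ok = all(-math.log(1.0 - x) >= x for x in xs)
print("inequality holds on the grid:", ok)
```

Equality holds only at x = 0; for x > 0 the Taylor expansion −log(1 − x) = x + x²/2 + x³/3 + … makes the strict inequality apparent.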

## Counterpart

Another related result is the so-called counterpart of the Borel–Cantelli lemma. It is a counterpart of the lemma in the sense that it gives a necessary and sufficient condition for the probability of the lim sup to be 1, replacing the independence assumption with the completely different assumption that ${\displaystyle (A_{n})}$ is monotone increasing for sufficiently large indices. This lemma says:

Let ${\displaystyle (A_{n})}$ be such that ${\displaystyle A_{k}\subseteq A_{k+1}}$, and let ${\displaystyle {\bar {A}}}$ denote the complement of ${\displaystyle A}$. Then the probability that infinitely many of the ${\displaystyle A_{k}}$ occur (which, by monotonicity, is the same as the probability that at least one ${\displaystyle A_{k}}$ occurs) is one if and only if there exists a strictly increasing sequence of positive integers ${\displaystyle (t_{k})}$ such that

${\displaystyle \sum _{k}\Pr(A_{t_{k+1}}\mid {\bar {A}}_{t_{k}})=\infty .}$

This simple result can be useful in problems involving, for instance, hitting probabilities for stochastic processes, where choosing the sequence ${\displaystyle (t_{k})}$ appropriately is usually the essential step.