Glivenko–Cantelli theorem


In the theory of probability, the Glivenko–Cantelli theorem, named after Valery Ivanovich Glivenko and Francesco Paolo Cantelli, describes the asymptotic behaviour of the empirical distribution function as the number of independent and identically distributed observations grows: the empirical distribution function converges uniformly to the true distribution function, almost surely. The uniform convergence of more general empirical measures is an important property of the Glivenko–Cantelli classes of functions or sets. Glivenko–Cantelli classes arise in Vapnik–Chervonenkis theory, with applications to machine learning. Applications can also be found in econometrics, in connection with M-estimators.

Assume that $X_{1},X_{2},\dots$ are independent and identically-distributed random variables in $\mathbb {R}$ with common cumulative distribution function $F(x)$ . The empirical distribution function for $X_{1},\dots ,X_{n}$ is defined by

$F_{n}(x)={\frac {1}{n}}\sum _{i=1}^{n}I_{[X_{i},\infty )}(x),$ where $I_{C}$ is the indicator function of the set $C$ . For every (fixed) $x$ , $F_{n}(x)$ is a sequence of random variables which converges to $F(x)$ almost surely by the strong law of large numbers; that is, $F_{n}$ converges to $F$ pointwise. Glivenko and Cantelli strengthened this result by proving uniform convergence of $F_{n}$ to $F$ .

Theorem

$\|F_{n}-F\|_{\infty }=\sup _{x\in \mathbb {R} }|F_{n}(x)-F(x)|\longrightarrow 0$ almost surely.

This theorem was published independently by Valery Glivenko and Francesco Cantelli in 1933.
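The theorem can be illustrated numerically. The sketch below assumes the uniform distribution on $[0,1]$, for which $F(x)=x$; the helper name `sup_deviation` is illustrative, not standard. Because $F_{n}$ is a right-continuous step function, the supremum of $|F_{n}(x)-F(x)|$ is attained at the jump points of $F_{n}$, which makes it computable exactly from the sorted sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def sup_deviation(n):
    # Draw n Uniform(0,1) samples; the true CDF is F(x) = x on [0, 1].
    # For the sorted sample x_(1) <= ... <= x_(n), the supremum of
    # |F_n(x) - F(x)| is attained at a jump of F_n, so it equals
    # max over i of max(i/n - x_(i), x_(i) - (i-1)/n).
    x = np.sort(rng.uniform(size=n))
    i = np.arange(1, n + 1)
    return float(max(np.max(i / n - x), np.max(x - (i - 1) / n)))

# The sup-norm deviation shrinks as n grows, as the theorem asserts.
for n in (100, 10_000, 1_000_000):
    print(n, sup_deviation(n))
```

The printed deviations decrease roughly like $n^{-1/2}$, consistent with the uniform convergence stated above.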

Remarks

• If $X_{n}$ is a stationary ergodic process, then $F_{n}(x)$ converges almost surely to $F(x)=E(1_{X_{1}\leq x})$ . The Glivenko–Cantelli theorem gives a stronger mode of convergence than this in the iid case.
• An even stronger uniform convergence result for the empirical distribution function is available in the form of an extended type of law of the iterated logarithm. See asymptotic properties of the empirical distribution function for this and related results.

Proof

For simplicity, consider the case of a continuous random variable $X$ . Fix $-\infty =x_{0}<x_{1}<\cdots <x_{m}=\infty$ such that $F(x_{j})-F(x_{j-1})={\frac {1}{m}}$ for $j=1,\dots ,m$ . Now for all $x\in \mathbb {R}$ there exists $j\in \{1,\dots ,m\}$ such that $x\in [x_{j-1},x_{j}]$ . Note that

${\begin{aligned}F_{n}(x)-F(x)&\leq F_{n}(x_{j})-F(x_{j-1})=F_{n}(x_{j})-F(x_{j})+1/m,\\F_{n}(x)-F(x)&\geq F_{n}(x_{j-1})-F(x_{j})=F_{n}(x_{j-1})-F(x_{j-1})-1/m.\end{aligned}}$ Therefore, almost surely

$\|F_{n}-F\|_{\infty }=\sup _{x\in \mathbb {R} }|F_{n}(x)-F(x)|\leq \max _{j\in \{1,\dots ,m\}}|F_{n}(x_{j})-F(x_{j})|+1/m.$ Since ${\textstyle \max _{j\in \{1,\dots ,m\}}|F_{n}(x_{j})-F(x_{j})|\to 0{\text{ a.s.}}}$ by the strong law of large numbers (it is a maximum over finitely many sequences, each converging to zero almost surely), for any integer $m$ we can almost surely find $N$ such that for all $n\geq N$, $\|F_{n}-F\|_{\infty }\leq 2/m$ ,

and since $m$ was arbitrary, this establishes that $\|F_{n}-F\|_{\infty }\to 0$ almost surely.
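The key step of the proof, controlling the supremum over all of $\mathbb {R}$ by the maximum over the $m$ partition points plus $1/m$, can be checked numerically. This is a sketch assuming the uniform distribution on $[0,1]$, where $F(x)=x$ and the equiprobable partition points are simply $x_{j}=j/m$; the dense evaluation grid only approximates the supremum from below, so the inequality must still hold for it.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 5_000, 20
sample = np.sort(rng.uniform(size=n))      # Uniform(0,1), so F(x) = x

def F_n(x):
    # Empirical distribution function, evaluated via binary search.
    return np.searchsorted(sample, x, side="right") / n

# Equiprobable partition points: F(x_j) - F(x_{j-1}) = 1/m.
xj = np.arange(1, m) / m
grid = np.linspace(0.0, 1.0, 100_001)      # dense grid approximating the sup

sup_dev = float(np.max(np.abs(F_n(grid) - grid)))
grid_dev = float(np.max(np.abs(F_n(xj) - xj)))

# The chaining bound from the proof:
#   ||F_n - F||_inf <= max_j |F_n(x_j) - F(x_j)| + 1/m
assert sup_dev <= grid_dev + 1 / m
print(sup_dev, grid_dev + 1 / m)
```

The supremum over the whole line is bounded by a quantity involving only finitely many points, which is what lets the strong law of large numbers finish the argument.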

Empirical measures

One can generalize the empirical distribution function by replacing the set $(-\infty ,x]$ with an arbitrary set $C$ from a class of sets ${\mathcal {C}}$ to obtain an empirical measure indexed by sets $C\in {\mathcal {C}}$ : $P_{n}(C)={\frac {1}{n}}\sum _{i=1}^{n}I_{C}(X_{i}),\quad C\in {\mathcal {C}},$ where $I_{C}(x)$ is the indicator function of the set $C$ .
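A minimal sketch of the empirical measure, assuming NumPy and sets specified by their indicator functions (the helper name `P_n` mirrors the notation above and is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.uniform(size=10_000)          # X_1, ..., X_n ~ Uniform(0, 1)

def P_n(indicator):
    # Empirical measure of a set C, passed in as its indicator function:
    # P_n(C) = (1/n) * #{i : X_i in C}.
    return float(np.mean(indicator(sample)))

# The sets (-inf, t] recover the empirical distribution function;
# any other measurable set is handled the same way.
half_line = P_n(lambda x: x <= 0.5)               # approximates P((-inf, 0.5]) = 0.5
interval = P_n(lambda x: (0.2 < x) & (x <= 0.7))  # approximates P((0.2, 0.7]) = 0.5
print(half_line, interval)
```

For each fixed set, $P_{n}(C)$ is just a sample mean of indicator values, so the strong law of large numbers applies set by set.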

A further generalization is the map induced by $P_{n}$ on measurable real-valued functions $f$ , which is given by

$f\mapsto P_{n}f=\int _{S}f\,dP_{n}={\frac {1}{n}}\sum _{i=1}^{n}f(X_{i}),\quad f\in {\mathcal {F}}.$ An important property of these classes is that the strong law of large numbers holds uniformly over ${\mathcal {F}}$ or ${\mathcal {C}}$ .
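The function-indexed map can be sketched the same way. This assumes $P$ is the uniform distribution on $[0,1]$, so that $Pf=\int _{0}^{1}f(x)\,dx$; the helper name `P_n_f` is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
sample = rng.uniform(size=100_000)         # P = Uniform(0, 1)

def P_n_f(f):
    # The empirical measure applied to a function: P_n f = (1/n) sum_i f(X_i).
    return float(np.mean(f(sample)))

# For P = Uniform(0,1), Pf is the integral of f over [0,1], so by the
# strong law of large numbers P_n f approaches it for each fixed f.
mean_x = P_n_f(lambda x: x)           # Pf = 1/2
mean_x2 = P_n_f(lambda x: x ** 2)     # Pf = 1/3
print(mean_x, mean_x2)
```

The Glivenko–Cantelli property asks for more than this pointwise-in-$f$ convergence: the error must vanish uniformly over the whole class at once.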

Glivenko–Cantelli class

Consider a set ${\mathcal {S}}$ with a sigma-algebra $A$ of Borel subsets and a probability measure $P$ defined on it. For a class of subsets,

${\mathcal {C}}\subset \{C:C{\mbox{ is measurable subset of }}{\mathcal {S}}\}$ and a class of functions

${\mathcal {F}}\subset \{f:{\mathcal {S}}\to \mathbb {R} ,f{\mbox{ is measurable}}\,\}$ define random variables

$\|P_{n}-P\|_{\mathcal {C}}=\sup _{C\in {\mathcal {C}}}|P_{n}(C)-P(C)|$ $\|P_{n}-P\|_{\mathcal {F}}=\sup _{f\in {\mathcal {F}}}|P_{n}f-Pf|$ where $P_{n}(C)$ is the empirical measure, $P_{n}f$ is the corresponding map, and

$\mathbb {E} f=\int _{\mathcal {S}}f\,dP=Pf$ , assuming that it exists.
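The random variable $\|P_{n}-P\|_{\mathcal {C}}$ can be approximated numerically for the class of half-lines. The sketch below assumes $P$ is the uniform distribution on $[0,1]$ (so $P((-\infty ,t])=t$) and takes the supremum over a fine grid of thresholds $t$ rather than over all of $\mathbb {R}$.

```python
import numpy as np

rng = np.random.default_rng(4)
sample = np.sort(rng.uniform(size=50_000))   # P = Uniform(0, 1)

# Approximate ||P_n - P||_C for C = {(-inf, t] : t in R}, taking the
# supremum over a grid of thresholds; here P((-inf, t]) = t.
t = np.linspace(0.0, 1.0, 2001)
Pn_t = np.searchsorted(sample, t, side="right") / sample.size
deviation = float(np.max(np.abs(Pn_t - t)))
print(deviation)   # small: consistent with this class being GC for P
```

By the classical theorem, this deviation tends to zero almost surely, so the class of half-lines is Glivenko–Cantelli for this $P$ (indeed for every $P$, as the Examples below note).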

Definitions

• A class ${\mathcal {C}}$ is called a Glivenko–Cantelli class (or GC class) with respect to a probability measure P if any of the following equivalent statements is true.
1. $\|P_{n}-P\|_{\mathcal {C}}\to 0$ almost surely as $n\to \infty$ .
2. $\|P_{n}-P\|_{\mathcal {C}}\to 0$ in probability as $n\to \infty$ .
3. $\mathbb {E} \|P_{n}-P\|_{\mathcal {C}}\to 0$ , as $n\to \infty$ (convergence in mean).
The Glivenko–Cantelli classes of functions are defined similarly.
• A class is called a universal Glivenko–Cantelli class if it is a GC class with respect to any probability measure P on (S,A).
• A class is called uniformly Glivenko–Cantelli if the convergence occurs uniformly over all probability measures P on (S,A):
$\sup _{P\in {\mathcal {P}}(S,A)}\mathbb {E} \|P_{n}-P\|_{\mathcal {C}}\to 0;\qquad \sup _{P\in {\mathcal {P}}(S,A)}\mathbb {E} \|P_{n}-P\|_{\mathcal {F}}\to 0.$

Theorem (Vapnik and Chervonenkis, 1968)

A class of sets ${\mathcal {C}}$ is uniformly GC if and only if it is a Vapnik–Chervonenkis class.

Examples

• Let $S=\mathbb {R}$ and ${\mathcal {C}}=\{(-\infty ,t]:t\in {\mathbb {R} }\}$ . The classical Glivenko–Cantelli theorem implies that this class is a universal GC class. Furthermore, by Kolmogorov's theorem,
$\sup _{P\in {\mathcal {P}}(S,A)}\|P_{n}-P\|_{\mathcal {C}}\sim n^{-1/2};$ that is, ${\mathcal {C}}$ is a uniformly Glivenko–Cantelli class.
• Let $P$ be a nonatomic probability measure on $S$ and ${\mathcal {C}}$ be the class of all finite subsets of $S$ . Because $A_{n}=\{X_{1},\ldots ,X_{n}\}\in {\mathcal {C}}$ while $P(A_{n})=0$ and $P_{n}(A_{n})=1$ , we have $\|P_{n}-P\|_{\mathcal {C}}=1$ for every $n$ , so ${\mathcal {C}}$ is not a GC class with respect to $P$ .
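The second example can be made concrete. The sketch assumes $P$ is the uniform distribution on $[0,1]$, which is nonatomic, so every finite set has probability zero:

```python
import numpy as np

rng = np.random.default_rng(5)
sample = rng.uniform(size=1_000)           # P = Uniform(0,1) is nonatomic

# Take A_n = {X_1, ..., X_n}, a finite subset, hence a member of the class.
# P(A_n) = 0 (a finite set is Lebesgue-null), but every sample point lies
# in A_n, so P_n(A_n) = 1 and the sup over the class never shrinks.
Pn_of_An = float(np.mean(np.isin(sample, sample)))   # always 1.0
P_of_An = 0.0                                        # measure of a finite set
print(Pn_of_An - P_of_An)   # 1.0 for every n: the class is not GC for P
```

The gap of 1 persists for every sample size, which is exactly why restricting the complexity of the class (e.g. via the VC condition above) is necessary for uniform convergence.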