# Convergence of random variables


In probability theory, there exist several different notions of convergence of random variables. The convergence of sequences of random variables to some limit random variable is an important concept in probability theory and in its applications to statistics and stochastic processes. The same concepts are known in more general mathematics as stochastic convergence and they formalize the idea that a sequence of essentially random or unpredictable events can sometimes be expected to settle down into a behaviour that is essentially unchanging when items far enough into the sequence are studied. The different possible notions of convergence relate to how such a behaviour can be characterised: two readily understood behaviours are that the sequence eventually takes a constant value, and that values in the sequence continue to change but can be described by an unchanging probability distribution.

## Background

"Stochastic convergence" formalizes the idea that a sequence of essentially random or unpredictable events can sometimes be expected to settle into a pattern. The pattern may for instance be

• Convergence in the classical sense to a fixed value, perhaps itself coming from a random event
• An increasing similarity of outcomes to what a purely deterministic function would produce
• An increasing preference towards a certain outcome
• An increasing "aversion" against straying far away from a certain outcome

Some less obvious, more theoretical patterns could be

• That the probability distribution describing the next outcome may grow increasingly similar to a certain distribution
• That the series formed by calculating the expected value of the outcome's distance from a particular value may converge to 0
• That the variance of the random variable describing the next event grows smaller and smaller.

These other types of patterns that may arise are reflected in the different types of stochastic convergence that have been studied.

While the above discussion has related to the convergence of a single sequence to a limiting value, the notion of the convergence of two sequences towards each other is also important, but this is easily handled by studying the sequence defined as either the difference or the ratio of the two.

For example, if the average of n uncorrelated random variables Yi, i = 1, ..., n, all having the same finite mean and variance, is given by

$X_n = \frac{1}{n}\sum_{i=1}^n Y_i\,,$

then as n tends to infinity, Xn converges in probability (see below) to the common mean, μ, of the random variables Yi. This result is known as the weak law of large numbers. Other forms of convergence are important in other useful theorems, including the central limit theorem.
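The weak law of large numbers can be checked with a short simulation. Below is a minimal sketch (assuming NumPy is available; the choice of Uniform(0, 1) variables for the Yi, the sample sizes, and the tolerance ε are purely illustrative) that estimates Pr(|Xn − μ| ≥ ε) for increasing n and shows it shrinking toward zero:

```python
# Illustrative sketch: empirical estimate of Pr(|X_n - mu| >= eps) where X_n is
# the mean of n Uniform(0, 1) variables, so mu = 0.5.  All constants are
# arbitrary choices made for the illustration.
import numpy as np

rng = np.random.default_rng(0)
mu, eps = 0.5, 0.05

for n in (10, 100, 1_000, 10_000):
    # 2000 independent replications of X_n = (1/n) * sum_{i=1}^{n} Y_i
    x_n = rng.uniform(0.0, 1.0, size=(2000, n)).mean(axis=1)
    print(n, np.mean(np.abs(x_n - mu) >= eps))  # shrinks toward 0 as n grows
```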

Throughout the following, we assume that (Xn) is a sequence of random variables, and X is a random variable, and all of them are defined on the same probability space $(\Omega, \mathcal{F}, P)$.

## Convergence in distribution

**Examples of convergence in distribution**

*Dice factory.* Suppose a new dice factory has just been built. The first few dice come out quite biased, due to imperfections in the production process. The outcome from tossing any of them will follow a distribution markedly different from the desired uniform distribution.

As the factory is improved, the dice become less and less loaded, and the outcomes from tossing a newly produced die will follow the uniform distribution more and more closely.
*Tossing coins.* Let Xn be the fraction of heads after tossing a fair coin n times. Then X1 has the Bernoulli distribution with expected value μ = 0.5 and variance σ² = 0.25. The subsequent random variables X2, X3, … will all be distributed binomially.

As n grows larger, this distribution gradually becomes more and more similar to the bell curve of the normal distribution. If we shift and rescale Xn appropriately, then $Z_n = \sqrt{n}(X_n-\mu)/\sigma$ converges in distribution to the standard normal, a result that follows from the celebrated central limit theorem.
*Graphic example.* Suppose { Xi } is an iid sequence of uniform U(−1, 1) random variables. Let $Z_n = \frac{1}{\sqrt{n}}\sum_{i=1}^n X_i$ be their (normalized) sums. Then, according to the central limit theorem, the distribution of Zn approaches the normal N(0, ⅓) distribution. This convergence is illustrated in the figure: as n grows larger, the probability density function of Zn gets closer and closer to the Gaussian curve.
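A minimal simulation sketch of this example (assuming NumPy; the replication count and the value of n are arbitrary choices) compares the empirical mean, variance, and a tail probability of Zn with those of the N(0, ⅓) limit:

```python
# Illustrative sketch: Z_n built from n Uniform(-1, 1) summands, compared with
# its N(0, 1/3) limit.  Replication count and n are arbitrary choices.
import math
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1_000, 20_000
z = rng.uniform(-1.0, 1.0, size=(reps, n)).sum(axis=1) / math.sqrt(n)

print(z.mean(), z.var())      # should be close to 0 and 1/3 respectively

# One tail probability compared with Pr(N(0, 1/3) > 1)
sigma = math.sqrt(1.0 / 3.0)
print(np.mean(z > 1.0), 0.5 * math.erfc(1.0 / (sigma * math.sqrt(2.0))))
```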

With this mode of convergence, we increasingly expect the next outcome in a sequence of random experiments to be better and better modeled by a given probability distribution.

Convergence in distribution is the weakest form of convergence, since it is implied by all other types of convergence mentioned in this article. However, convergence in distribution is very frequently used in practice; most often it arises from application of the central limit theorem.

### Definition

A sequence $X_1,\ X_2,\ldots$ of random variables is said to converge in distribution, or converge weakly, or converge in law to a random variable X if

$\lim_{n\to\infty} F_n(x) = F(x),$

for every number $x \in \mathbb{R}$ at which $F$ is continuous. Here $F_n$ and $F$ are the cumulative distribution functions of random variables $X_n$ and $X$, respectively.

The requirement that only the continuity points of $F$ should be considered is essential. For example, if $X_n$ are distributed uniformly on the intervals (0, 1/n), then this sequence converges in distribution to the degenerate random variable X = 0. Indeed, Fn(x) = 0 for all n when x ≤ 0, and Fn(x) = 1 for all x ≥ 1/n when n > 0. However, for this limiting random variable F(0) = 1, even though Fn(0) = 0 for all n. Thus the convergence of cdfs fails at the point x = 0 where F is discontinuous.
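A tiny numerical sketch of this example (plain Python; the test points and values of n are arbitrary choices) evaluates Fn at a few points and shows that convergence holds at the continuity points of F but fails at x = 0:

```python
# Illustrative sketch: F_n is the cdf of Uniform(0, 1/n), while F is the cdf of
# the constant 0 (so F(x) = 0 for x < 0 and F(x) = 1 for x >= 0).
def F_n(x, n):
    """cdf of the Uniform(0, 1/n) distribution evaluated at x."""
    return min(max(n * x, 0.0), 1.0)

for x in (-0.5, 0.0, 0.3):
    print(x, [F_n(x, n) for n in (1, 10, 100, 1_000)])
# x = -0.5 and x = 0.3 are continuity points of F: the values tend to F(x).
# x = 0 is the discontinuity point: F_n(0) = 0 for every n, yet F(0) = 1.
```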

Convergence in distribution may be denoted as

\begin{align} & X_n \ \xrightarrow{d}\ X,\ \ X_n \ \xrightarrow{\mathcal{D}}\ X,\ \ X_n \ \xrightarrow{\mathcal{L}}\ X,\ \ X_n \ \xrightarrow{d}\ \mathcal{L}_X, \\ & X_n \rightsquigarrow X,\ \ X_n \Rightarrow X,\ \ \mathcal{L}(X_n)\to\mathcal{L}(X),\\ \end{align}

where $\mathcal{L}_X$ is the law (probability distribution) of $X$. For example, if $X$ is standard normal we can write $X_n\,\xrightarrow{d}\,\mathcal{N}(0,\,1)$.

For random vectors {X1, X2, …} ⊂ Rk the convergence in distribution is defined similarly. We say that this sequence converges in distribution to a random k-vector X if

$\lim_{n\to\infty} \operatorname{Pr}(X_n\in A) = \operatorname{Pr}(X\in A)$

for every A ⊂ Rk which is a continuity set of X.

The definition of convergence in distribution may be extended from random vectors to more complex random elements in arbitrary metric spaces, and even to the “random variables” which are not measurable — a situation which occurs for example in the study of empirical processes. This is the “weak convergence of laws without laws being defined” — except asymptotically.[1]

In this case the term weak convergence is preferable (see weak convergence of measures), and we say that a sequence of random elements {Xn} converges weakly to X (denoted as Xn ⇒ X) if

$\operatorname{E}^*h(X_n) \to \operatorname{E}\,h(X)$

for all continuous bounded functions h(·).[2] Here E* denotes the outer expectation, that is the expectation of a “smallest measurable function g that dominates h(Xn)”.

### Properties

• Since F(a) = Pr(X ≤ a), the convergence in distribution means that the probability for Xn to be in a given range is approximately equal to the probability that the value of X is in that range, provided n is sufficiently large.
• In general, convergence in distribution does not imply that the sequence of corresponding probability density functions will also converge. As an example one may consider random variables with densities fn(x) = (1 − cos(2πnx))1{x∈(0,1)}. These random variables converge in distribution to a uniform U(0, 1), whereas their densities do not converge at all.[3] However, Scheffé's theorem states that convergence of the probability density functions implies convergence in distribution.[4]
• The portmanteau lemma provides several equivalent definitions of convergence in distribution. Although these definitions are less intuitive, they are used to prove a number of statistical theorems. The lemma states that {Xn} converges in distribution to X if and only if any of the following statements are true:
  • E[f(Xn)] → E[f(X)] for all bounded, continuous functions f;
  • E[f(Xn)] → E[f(X)] for all bounded, Lipschitz functions f;
  • lim sup Pr(Xn ∈ C) ≤ Pr(X ∈ C) for every closed set C;
  • lim inf Pr(Xn ∈ U) ≥ Pr(X ∈ U) for every open set U;
  • lim Pr(Xn ∈ A) = Pr(X ∈ A) for every continuity set A of X, i.e. every Borel set A with Pr(X ∈ ∂A) = 0.
• The continuous mapping theorem states that for a continuous function g(·), if the sequence {Xn} converges in distribution to X, then so does {g(Xn)} converge in distribution to g(X).
• Lévy’s continuity theorem: the sequence {Xn} converges in distribution to X if and only if the sequence of corresponding characteristic functions {φn} converges pointwise to the characteristic function φ of X (a numerical illustration appears after this list).
• Convergence in distribution is metrizable by the Lévy–Prokhorov metric.
• A natural link to convergence in distribution is Skorokhod's representation theorem.
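As an illustration of Lévy's continuity theorem, the following sketch (assuming NumPy; the summand distribution, replication count, and grid of t values are arbitrary choices) compares the empirical characteristic function of the Zn from the graphic example above with the characteristic function of its N(0, ⅓) limit:

```python
# Illustrative sketch of Lévy's continuity theorem: the empirical characteristic
# function of Z_n (sums of Uniform(-1, 1) variables scaled by 1/sqrt(n))
# approaches exp(-t^2 / 6), the characteristic function of N(0, 1/3).
import numpy as np

rng = np.random.default_rng(2)
t_grid = np.array([0.5, 1.0, 2.0, 4.0])
phi_limit = np.exp(-t_grid**2 / 6.0)     # characteristic function of N(0, 1/3)

for n in (2, 10, 100):
    z = rng.uniform(-1.0, 1.0, size=(50_000, n)).sum(axis=1) / np.sqrt(n)
    phi_n = np.array([np.mean(np.exp(1j * t * z)) for t in t_grid])
    print(n, np.round(np.abs(phi_n - phi_limit), 3))  # gaps shrink as n grows
```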

## Convergence in probability

**Examples of convergence in probability**

*Height of a person.* Consider the following experiment. First, pick a random person in the street. Let X be his/her height, which is ex ante a random variable. Then you start asking other people to estimate this height by eye. Let Xn be the average of the first n responses. Then (provided there is no systematic error) by the law of large numbers, the sequence Xn will converge in probability to the random variable X.
*Archer.* Suppose a person takes a bow and starts shooting arrows at a target. Let Xn be his score on the n-th shot. Initially he will be very likely to score zeros, but as time goes on and his archery skill increases, he will become more and more likely to hit the bullseye and score 10 points. After years of practice the probability that he hits anything but 10 will get smaller and smaller. Thus, the sequence Xn converges in probability to X = 10.

Note, however, that Xn does not converge almost surely. No matter how professional the archer becomes, there will always be a small probability of making an error. Thus the sequence {Xn} will never become stationary: there will always be non-perfect scores in it, even if they become increasingly less frequent.

The basic idea behind this type of convergence is that the probability of an “unusual” outcome becomes smaller and smaller as the sequence progresses.

The concept of convergence in probability is used very often in statistics. For example, an estimator is called consistent if it converges in probability to the quantity being estimated. Convergence in probability is also the type of convergence established by the weak law of large numbers.

### Definition

A sequence {Xn} of random variables converges in probability towards X if for all ε > 0

$\lim_{n\to\infty}\Pr\big(|X_n-X| \geq \varepsilon\big) = 0.$

Formally, pick any ε > 0 and any δ > 0. Let Pn be the probability that Xn is outside the ball of radius ε centered at X. Then for Xn to converge in probability to X there should exist a number Nδ such that for all n ≥ Nδ the probability Pn is less than δ.
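A minimal simulation sketch of this definition, based on the height-estimation example above (assuming NumPy; the height and noise distributions are arbitrary illustrative choices), estimates Pn = Pr(|Xn − X| ≥ ε) for increasing n:

```python
# Illustrative sketch: X is the (random) true height, each response is X plus
# independent unbiased noise, and X_n is the average of the first n responses.
import numpy as np

rng = np.random.default_rng(3)
reps, eps = 20_000, 1.0                            # eps in centimetres

X = rng.uniform(150.0, 200.0, size=reps)           # true heights, one per replication
for n in (1, 10, 100, 1_000):
    noise = rng.normal(0.0, 10.0, size=(reps, n))  # estimation errors by eye
    X_n = (X[:, None] + noise).mean(axis=1)
    print(n, np.mean(np.abs(X_n - X) >= eps))      # empirical P_n, shrinking with n
```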

Convergence in probability is denoted by adding the letter p over an arrow indicating convergence, or using the “plim” probability limit operator:

$X_n \ \xrightarrow{p}\ X,\ \ X_n \ \xrightarrow{P}\ X,\ \ \underset{n\to\infty}{\operatorname{plim}}\, X_n = X.$

For random elements {Xn} on a separable metric space (S, d), convergence in probability is defined similarly by[5]

$\forall\varepsilon>0, \Pr\big(d(X_n,X)\geq\varepsilon\big) \to 0.$

### Properties

• Convergence in probability implies convergence in distribution.[proof]
• Convergence in probability does not imply almost sure convergence.[proof]
• In the opposite direction, convergence in distribution implies convergence in probability only when the limiting random variable X is a constant.[proof]
• The continuous mapping theorem states that for every continuous function g(·), if  $X_n\xrightarrow{p}X$, then also  $g(X_n)\xrightarrow{p}g(X)$.
• Convergence in probability defines a topology on the space of random variables over a fixed probability space. This topology is metrizable by the Ky Fan metric:[6]
$d(X,Y) = \inf\!\big\{ \varepsilon>0:\ \Pr\big(|X-Y|>\varepsilon\big)\leq\varepsilon\big\}$
or
$d(X,Y)=\mathbb E\left[\min(|X-Y|, 1)\right]$.
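The following sketch (assuming NumPy; the distributions of X and Y are arbitrary choices, with Y a small perturbation of X) estimates both metrics from paired Monte Carlo samples:

```python
# Illustrative Monte Carlo estimates of the two metrics above.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, size=100_000)
y = x + rng.normal(0.0, 0.1, size=100_000)   # Y is close to X, so both distances are small

d = np.sort(np.abs(x - y))
# Empirical inf{eps > 0 : Pr(|X - Y| > eps) <= eps}: after sorting, the
# empirical tail probability Pr(|X - Y| > d[k]) is (len(d) - 1 - k) / len(d).
tail = (len(d) - 1 - np.arange(len(d))) / len(d)
print(d[np.argmax(tail <= d)])               # Ky Fan metric (first formula)
print(np.mean(np.minimum(d, 1.0)))           # E[min(|X - Y|, 1)] (second formula)
```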

## Almost sure convergence

**Examples of almost sure convergence**

*Example 1.* Consider an animal of some short-lived species. We record the amount of food that this animal consumes per day. This sequence of numbers will be unpredictable, but we may be quite certain that one day the number will become zero, and will stay zero forever after.
*Example 2.* Consider a man who tosses seven coins every morning. Each afternoon, he donates one pound to a charity for each head that appeared. The first time the result is all tails, however, he will stop permanently.

Let X1, X2, … be the daily amounts the charity receives from him.

We may be almost sure that one day this amount will be zero, and stay zero forever after that.

However, when we consider any finite number of days, there is a nonzero probability the terminating condition will not occur.

This is the type of stochastic convergence that is most similar to pointwise convergence known from elementary real analysis.

### Definition

To say that the sequence Xn converges almost surely or almost everywhere or with probability 1 or strongly towards X means that

$\operatorname{Pr}\!\left( \lim_{n\to\infty}\! X_n = X \right) = 1.$

This means that the values of Xn approach the value of X, in the sense (see almost surely) that events for which Xn does not converge to X have probability 0. Using the probability space $(\Omega, \mathcal{F}, P)$ and the concept of the random variable as a function from Ω to R, this is equivalent to the statement

$\operatorname{Pr}\Big( \omega \in \Omega : \lim_{n \to \infty} X_n(\omega) = X(\omega) \Big) = 1.$

Another, equivalent, way of defining almost sure convergence is as follows:

$\lim_{n\to\infty} \operatorname{Pr}\Big( \omega \in \Omega : \sup_{m\ge n} | X_m(\omega) - X(\omega) | \ge \varepsilon \Big) = 0 \quad\text{for all}\quad \varepsilon>0.$

Almost sure convergence is often denoted by adding the letters a.s. over an arrow indicating convergence:

$X_n \, \xrightarrow{\mathrm{a.s.}} \, X.$

For generic random elements {Xn} on a metric space (S, d), convergence almost surely is defined similarly:

$\operatorname{Pr}\Big( \omega\in\Omega:\, d\big(X_n(\omega),X(\omega)\big)\,\underset{n\to\infty}{\longrightarrow}\,0 \Big) = 1$
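A small simulation sketch of the equivalent definition above (assuming NumPy; here Xn is the running mean of iid Uniform(0, 1) draws, which converges almost surely to μ = 0.5 by the strong law of large numbers, and the tail supremum is necessarily truncated at a finite horizon) estimates the probability that the whole tail deviates by more than ε:

```python
# Illustrative sketch: Pr(sup_{m >= n} |X_m - mu| >= eps) for running means of
# Uniform(0, 1) draws.  Horizon N, path count, and eps are arbitrary choices.
import numpy as np

rng = np.random.default_rng(5)
paths, N, mu, eps = 1_000, 5_000, 0.5, 0.05

y = rng.uniform(0.0, 1.0, size=(paths, N))
running_mean = np.cumsum(y, axis=1) / np.arange(1, N + 1)
dev = np.abs(running_mean - mu)

for n in (10, 100, 1_000):
    tail_sup = dev[:, n - 1:].max(axis=1)   # sup over the truncated tail m >= n
    print(n, np.mean(tail_sup >= eps))      # tends to 0 as n grows
```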

### Properties

• Almost sure convergence implies convergence in probability, and hence implies convergence in distribution. It is the notion of convergence used in the strong law of large numbers.
• The concept of almost sure convergence does not come from a topology on the space of random variables. This means there is no topology on the space of random variables such that the almost surely convergent sequences are exactly the converging sequences with respect to that topology. In particular, there is no metric of almost sure convergence.

## Sure convergence

To say that the sequence of random variables (Xn) defined over the same probability space (i.e., a random process) converges surely or everywhere or pointwise towards X means

$\lim_{n\rightarrow\infty}X_n(\omega)=X(\omega), \, \, \forall \omega \in \Omega.$

where Ω is the sample space of the underlying probability space over which the random variables are defined.

This is the notion of pointwise convergence of a sequence of functions extended to a sequence of random variables. (Note that random variables themselves are functions.) Equivalently, the set of sample points on which the sequence converges to X is the whole sample space:

$\big\{\omega \in \Omega \, | \, \lim_{n \to \infty}X_n(\omega) = X(\omega) \big\} = \Omega.$

Sure convergence of a random variable implies all the other kinds of convergence stated above, but there is no payoff in probability theory from using sure convergence rather than almost sure convergence. The difference between the two only exists on sets with probability zero. This is why the concept of sure convergence of random variables is very rarely used.

## Convergence in mean

We say that the sequence Xn converges in the r-th mean (or in the Lr-norm) towards X, for some r ≥ 1, if r-th absolute moments of Xn and X exist, and

$\lim_{n\to\infty} \operatorname{E}\left( |X_n-X|^r \right) = 0,$

where the operator E denotes the expected value. Convergence in r-th mean tells us that the expectation of the r-th power of the difference between Xn and X converges to zero.

This type of convergence is often denoted by adding the letter Lr over an arrow indicating convergence:

$X_n \, \xrightarrow{L^r} \, X.$

The most important cases of convergence in r-th mean are:

• When Xn converges in r-th mean to X for r = 1, we say that Xn converges in mean to X.
• When Xn converges in r-th mean to X for r = 2, we say that Xn converges in mean square (or in quadratic mean) to X. This mode of convergence is also sometimes denoted[7]
$\underset{n \to \infty}{\operatorname{l.i.m.}} X_n = X.\,\!$

Convergence in the r-th mean, for r ≥ 1, implies convergence in probability (by Markov's inequality), while if r > s ≥ 1, convergence in r-th mean implies convergence in s-th mean. Hence, convergence in mean square implies convergence in mean.
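Both implications can be made explicit. Applying Markov's inequality to the non-negative random variable |Xn − X|^r gives, for every ε > 0,

$\Pr\big(|X_n-X|\geq\varepsilon\big) = \Pr\big(|X_n-X|^r\geq\varepsilon^r\big) \leq \frac{\operatorname{E}\left(|X_n-X|^r\right)}{\varepsilon^r} \to 0,$

while for r > s ≥ 1, Lyapunov's inequality (a consequence of Jensen's inequality) gives

$\left(\operatorname{E}\left(|X_n-X|^s\right)\right)^{1/s} \leq \left(\operatorname{E}\left(|X_n-X|^r\right)\right)^{1/r},$

so convergence of the r-th mean to zero forces convergence of the s-th mean to zero.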

## Properties

The chain of implications between the various notions of convergence is noted in the respective sections above. Using the arrow notation, it is:

$\begin{matrix} \xrightarrow{L^s} & \underset{s>r\geq1}{\Rightarrow} & \xrightarrow{L^r} & & \\ & & \Downarrow & & \\ \xrightarrow{a.s.} & \Rightarrow & \xrightarrow{\ p\ } & \Rightarrow & \xrightarrow{\ d\ } \end{matrix}$

These properties, together with a number of other special cases, are summarized in the following list:

• Almost sure convergence implies convergence in probability:[8][proof]
$X_n\ \xrightarrow{as}\ X \quad\Rightarrow\quad X_n\ \xrightarrow{p}\ X$
• Convergence in probability implies there exists a sub-sequence $(X_{k_n})$ which almost surely converges:[9]
$X_n\ \xrightarrow{p}\ X \quad\Rightarrow\quad X_{k_n}\ \xrightarrow{as}\ X$
• Convergence in probability implies convergence in distribution:[8][proof]
$X_n\ \xrightarrow{p}\ X \quad\Rightarrow\quad X_n\ \xrightarrow{d}\ X$
• Convergence in r-th order mean implies convergence in probability:
$X_n\ \xrightarrow{L^r}\ X \quad\Rightarrow\quad X_n\ \xrightarrow{p}\ X$
• Convergence in r-th order mean implies convergence in lower order mean, assuming that both orders are greater than or equal to one:
$X_n\ \xrightarrow{L^r}\ X \quad\Rightarrow\quad X_n\ \xrightarrow{L^s}\ X,$ provided r ≥ s ≥ 1.
• If Xn converges in distribution to a constant c, then Xn converges in probability to c:[8][proof]
$X_n\ \xrightarrow{d}\ c \quad\Rightarrow\quad X_n\ \xrightarrow{p}\ c,$ provided c is a constant.
• If Xn converges in distribution to X and the difference between Xn and Yn converges in probability to zero, then Yn also converges in distribution to X:[8][proof]
$X_n\ \xrightarrow{d}\ X,\ \ |X_n-Y_n|\ \xrightarrow{p}\ 0\ \quad\Rightarrow\quad Y_n\ \xrightarrow{d}\ X$
• If Xn converges in distribution to X and Yn converges in distribution to a constant c, then the joint vector (Xn, Yn) converges in distribution to (X, c):[8][proof]
$X_n\ \xrightarrow{d}\ X,\ \ Y_n\ \xrightarrow{d}\ c\ \quad\Rightarrow\quad (X_n,Y_n)\ \xrightarrow{d}\ (X,c)$ provided c is a constant.
Note that the condition that Yn converges to a constant is important; if it were to converge to a non-degenerate random variable Y, we would not be able to conclude that (Xn, Yn) converges to (X, Y).
• If Xn converges in probability to X and Yn converges in probability to Y, then the joint vector (Xn, Yn) converges in probability to (X, Y):[8][proof]
$X_n\ \xrightarrow{p}\ X,\ \ Y_n\ \xrightarrow{p}\ Y\ \quad\Rightarrow\quad (X_n,Y_n)\ \xrightarrow{p}\ (X,Y)$
• If Xn converges in probability to X, and if Pr(|Xn| ≤ b) = 1 for all n and some b, then Xn converges in r-th mean to X for all r ≥ 1. In other words, if Xn converges in probability to X and all random variables Xn are almost surely bounded above and below, then Xn converges to X also in any r-th mean.
• Almost sure representation. Usually, convergence in distribution does not imply convergence almost surely. However for a given sequence {Xn} which converges in distribution to X0 it is always possible to find a new probability space (Ω, F, P) and random variables {Yn, n = 0,1,…} defined on it such that Yn is equal in distribution to Xn for each n ≥ 0, and Yn converges to Y0 almost surely.[10]
• If for all ε > 0,
$\sum_n \mathbb{P} \left(|X_n - X| > \varepsilon\right) < \infty,$
then we say that Xn converges almost completely, or almost in probability towards X. When Xn converges almost completely towards X then it also converges almost surely to X. In other words, if Xn converges in probability to X sufficiently quickly (i.e. the above sequence of tail probabilities is summable for all ε > 0), then Xn also converges almost surely to X. This is a direct implication of the Borel–Cantelli lemma: summability of the tail probabilities implies that, with probability 1, the event |Xn − X| > ε occurs for only finitely many n.
• If Sn is a sum of n real independent random variables:
$S_n = X_1+\cdots+X_n \,$
then Sn converges almost surely if and only if Sn converges in probability.
• The dominated convergence theorem gives sufficient conditions for almost sure convergence to imply L1-convergence:
$\left. \begin{array}{ccc} X_n\xrightarrow{a.s.} X \\ \\ |X_n| < Y \\ \\ \mathrm{E}(Y) < \infty \end{array}\right\} \quad\Rightarrow \quad X_n\xrightarrow{L^1} X$
• A necessary and sufficient condition for L1 convergence is that $X_n\xrightarrow{P} X$ and the sequence (Xn) is uniformly integrable.

## Notes

1. ^ Bickel et al. 1998, A.8, page 475
2. ^
3. ^ Romano & Siegel 1985, Example 5.26
4. ^ Koro. "Scheffé's theorem". Retrieved 1 February 2013.
5. ^ Dudley 2002, Chapter 9.2, page 287
6. ^ Dudley 2002, p. 289
7. ^ Porat, B. (1994). Digital Processing of Random Signals: Theory & Methods. Prentice Hall. p. 19. ISBN 0-13-063751-3.
8. ^ van der Vaart 1998, Theorem 2.7
9. ^ Gut, Allan (2005). Probability: A graduate course. Springer. Theorem 3.4. ISBN 0-387-22833-0.
10. ^ van der Vaart 1998, Th.2.19

## References

• Bickel, Peter J.; Klaassen, Chris A.J.; Ritov, Ya’acov; Wellner, Jon A. (1998). Efficient and adaptive estimation for semiparametric models. New York: Springer-Verlag. ISBN 0-387-98473-9. LCCN QA276.8.E374.
• Billingsley, Patrick (1986). Probability and Measure. Wiley Series in Probability and Mathematical Statistics (2nd ed.). Wiley.
• Billingsley, Patrick (1999). Convergence of probability measures (2nd ed.). John Wiley & Sons. pp. 1–28. ISBN 0-471-19745-9.
• Dudley, R.M. (2002). Real analysis and probability. Cambridge, UK: Cambridge University Press. ISBN 0-521-80972-X.
• Grimmett, G.R.; Stirzaker, D.R. (1992). Probability and random processes (2nd ed.). Clarendon Press, Oxford. pp. 271–285. ISBN 0-19-853665-8.
• Jacobsen, M. (1992). Videregående Sandsynlighedsregning (Advanced Probability Theory) (3rd ed.). HCØ-tryk, Copenhagen. pp. 18–20. ISBN 87-91180-71-6.
• Ledoux, Michel; Talagrand, Michel (1991). Probability in Banach spaces. Berlin: Springer-Verlag. pp. xii+480. ISBN 3-540-52013-9. MR 1102015.
• Romano, Joseph P.; Siegel, Andrew F. (1985). Counterexamples in probability and statistics. Great Britain: Chapman & Hall. ISBN 0-412-98901-8. LCCN 1985 QA273.R58 1985.
• van der Vaart, Aad W.; Wellner, Jon A. (1996). Weak convergence and empirical processes. New York: Springer-Verlag. ISBN 0-387-94640-3. LCCN 1996 QA274.V33 1996.
• van der Vaart, Aad W. (1998). Asymptotic statistics. New York: Cambridge University Press. ISBN 978-0-521-49603-2. LCCN 1998 QA276.V22 1998.
• Williams, D. (1991). Probability with Martingales. Cambridge University Press. ISBN 0-521-40605-6.
• Wong, E.; Hájek, B. (1985). Stochastic Processes in Engineering Systems. New York: Springer–Verlag.