Convergence of random variables: Difference between revisions

From Wikipedia, the free encyclopedia
Correction of an incomplete definition
Simplification of a sentence and removal of an ambiguity.
We say that the sequence ''X''<sub>''n''</sub> converges towards ''X'' '''in distribution''', if
:[[mathematical limit|lim]]<sub>''n''&rarr;&infin;</sub> Pr(''X''<sub>''n''</sub> &le; ''a'') = Pr(''X'' &le; ''a'')
Before: for every [[real number]] ''a'' except values of ''a'' at which the cumulative distribution function of the limiting random variable ''X'' is not continuous.
After: for every [[real number]] ''a'' at which the cumulative distribution function of the limiting random variable ''X'' is continuous.


Essentially, this means that the likelihood that the value of ''X'' is in a given range is very similar to the likelihood that the value of ''X''<sub>''n''</sub> is in that range, provided ''n'' is large enough. This notion of convergence is used in the [[central limit theorem]]s.

Revision as of 23:21, 11 December 2002

In probability theory, several different notions of convergence of random variables are investigated. These are presented below. Throughout, we assume that (Xn) is a sequence of random variables, X is a random variable, and all of them are defined on the same probability space (Ω, Pr).

Convergence in distribution

We say that the sequence Xn converges towards X in distribution, if

limn→∞ Pr(Xn ≤ a) = Pr(X ≤ a)

for every real number a at which the cumulative distribution function of the limiting random variable X is continuous.

Essentially, this means that the likelihood that the value of X is in a given range is very similar to the likelihood that the value of Xn is in that range, provided n is large enough. This notion of convergence is used in the central limit theorems.
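As an illustrative sketch (the particular distributions are an assumption, not from the article): take X standard normal and Xn = X + 1/n, a deterministic shift. Then Pr(Xn ≤ a) = Φ(a − 1/n), which tends to Φ(a) at every a, so Xn converges to X in distribution.

```python
import math

def std_normal_cdf(x):
    # Phi(x), the standard normal CDF, computed via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Illustrative choice (not from the article): X ~ N(0, 1) and X_n = X + 1/n.
# Then Pr(X_n <= a) = Phi(a - 1/n), which approaches Phi(a) as n grows.
a = 0.5
limit = std_normal_cdf(a)
gaps = [abs(std_normal_cdf(a - 1.0 / n) - limit) for n in (1, 10, 100, 1000)]
print(gaps)  # the gap to the limiting CDF value shrinks toward 0
```

Since the CDF of a normal distribution is continuous everywhere, the convergence here holds at every real a.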

Convergence in probability

We say that the sequence Xn converges towards X in probability or weakly if

limn→∞ Pr(|Xn - X| > ε) = 0

for every ε > 0.

This means that if you pick a tolerance ε and choose n large enough, then Xn will lie within that tolerance of X with probability arbitrarily close to 1. This notion of convergence is used in the weak law of large numbers.
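A concrete sketch of this, under an assumed model (not from the article): if the error Xn − X is normal with mean 0 and standard deviation 1/n, then Pr(|Xn − X| > ε) = Pr(|Z| > nε) for Z standard normal, which can be evaluated exactly with the complementary error function and visibly tends to 0.

```python
import math

# Assumed model for illustration: X_n - X ~ N(0, 1/n^2).  Then
#   Pr(|X_n - X| > eps) = Pr(|Z| > n * eps) = erfc(n * eps / sqrt(2))
# for Z standard normal, which tends to 0 as n grows.
def tail_prob(n, eps):
    return math.erfc(n * eps / math.sqrt(2.0))

eps = 0.1
tails = [tail_prob(n, eps) for n in (1, 10, 100)]
print(tails)  # shrinks rapidly toward 0
```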

Convergence in probability implies convergence in distribution.

Almost sure convergence

We say that the sequence Xn converges almost surely or almost everywhere or with probability 1 or strongly towards X if

Pr ( {ω in Ω : limn→∞ Xn(ω) = X(ω)} ) = 1

This means that, with probability 1, the values of Xn approach the value of X. This notion of convergence is used in the strong law of large numbers.
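A minimal sketch, under an assumed sample space (not from the article): take Ω = [0, 1) with the uniform probability, Xn(ω) = ω^n, and X = 0. For every ω in [0, 1), ω^n → 0, so Xn → X at every outcome, hence almost surely.

```python
# Assumed sample space for illustration: Omega = [0, 1) with uniform
# probability, X_n(omega) = omega**n, X = 0.  Since omega**n -> 0 for
# every omega in [0, 1), X_n converges to X at every outcome.
omegas = [i / 100.0 for i in range(100)]   # grid of outcomes in [0, 1)
worst = max(w ** 1000 for w in omegas)     # largest value of X_1000 on the grid
print(worst)  # already tiny, even at the outcome closest to 1
```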

Almost sure convergence implies convergence in probability.

Convergence in mean

We say that the sequence Xn converges towards X in mean or in the L1 norm if

limn→∞ E(|Xn - X|) = 0

where E denotes the expected value.

This means that the expected absolute difference between Xn and X gets as small as desired if n is chosen big enough. This convergence is considered in Lp spaces (where p = 1).

Convergence in mean implies convergence in probability. However, there is no general relation between convergence in mean and almost sure convergence.
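The lack of a general relation can be seen from a standard counterexample (the construction is an illustration, not from the article): on Ω = [0, 1] with uniform probability, let Xn(ω) = n if ω ≤ 1/n and 0 otherwise. For each fixed ω > 0 we have Xn(ω) = 0 once n > 1/ω, so Xn → 0 almost surely; yet E|Xn − 0| = n · (1/n) = 1 for every n, so there is no convergence in mean.

```python
from fractions import Fraction

# Standard counterexample (illustrative): on Omega = [0, 1] with uniform
# probability, X_n(omega) = n if omega <= 1/n, else 0.  X_n -> 0 almost
# surely, but the L1 norm E|X_n| = n * Pr(omega <= 1/n) = n * (1/n) = 1.
def l1_norm(n):
    return Fraction(n, 1) * Fraction(1, n)   # value times its probability

norms = [l1_norm(n) for n in (1, 10, 100, 1000)]
print(norms)  # stays at 1 for every n: no convergence in mean
```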

Convergence in mean square

We say that the sequence Xn converges towards X in mean square or in the L2 norm if

limn→∞ E(|Xn - X|2) = 0.

This means that the expected squared difference between Xn and X gets as small as desired if n is chosen big enough. This convergence is considered in Lp spaces (where p = 2).

Convergence in mean square implies convergence in mean.
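This implication rests on the Cauchy-Schwarz (equivalently, Jensen's) inequality E|Y| ≤ √(E[Y²]), applied to Y = Xn − X: if the right side tends to 0, so must the left. A quick numeric check on an arbitrary discrete distribution (the values below are chosen purely for illustration):

```python
import math

# Cauchy-Schwarz / Jensen: E|Y| <= sqrt(E[Y^2]).  Applied to Y = X_n - X,
# this is why convergence in mean square forces convergence in mean.
# The discrete distribution below is an arbitrary illustration.
values = [-2.0, 0.5, 3.0]
probs = [0.2, 0.5, 0.3]
e_abs = sum(p * abs(v) for v, p in zip(values, probs))
e_sq = sum(p * v * v for v, p in zip(values, probs))
print(e_abs, math.sqrt(e_sq))  # first value never exceeds the second
```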