Uniform integrability

Uniform integrability is an important concept in real analysis, functional analysis and measure theory, and plays a vital role in the theory of martingales. The definition used in measure theory is closely related to, but not identical to, the definition typically used in probability.

Measure theoretic definition

Textbooks on real analysis and measure theory often use the following definition.[1][2]

Let  (X,\mathfrak{M}, \mu) be a positive measure space. A set \Phi\subset L^1(\mu) is called uniformly integrable if to each  \epsilon>0 there corresponds a  \delta>0 such that

 \left| \int_E f d\mu \right| < \epsilon

whenever f \in \Phi and \mu(E)<\delta.
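
For example, a single function f \in L^1(\mu) forms a uniformly integrable set by itself; this is the absolute continuity of the integral. As a worked instance: for each \epsilon > 0 there is a \delta > 0 such that

 \int_E |f| d\mu < \epsilon

whenever \mu(E) < \delta, and for a finite set \{f_1, \dots, f_k\} \subset L^1(\mu) one may take the smallest of the corresponding \delta's. Uniform integrability asks that a single \delta serve every f \in \Phi at once.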

Probability definition

In the theory of probability, the following definition applies.[3][4][5]

  • A class \mathcal{C} of random variables is called uniformly integrable (UI) if, given \epsilon > 0, there exists K \in [0,\infty) such that E\left(|X| I_{|X|\geq K}\right) \leq \epsilon for every X \in \mathcal{C}, where I_{|X|\geq K} is the indicator function of the event \{|X|\geq K\}.
  • An alternative definition involving two clauses may be presented as follows: A class \mathcal{C} of random variables is called uniformly integrable if:
    • There exists a finite M such that, for every X in \mathcal{C}, E(|X|)\leq M.
    • For every \epsilon > 0 there exists \delta > 0 such that, for every measurable A such that P(A)\leq \delta and every X in \mathcal{C}, E(|X|:A)\leq\epsilon.

The two probabilistic definitions are equivalent.[6]

Relationship between definitions

The two definitions are closely related. A probability space is a measure space with total measure 1. A random variable is a real-valued measurable function on this space, and the expectation of a random variable is defined as the integral of this function with respect to the probability measure.[7] Specifically,

Let  (\Omega, \mathcal{F}, P) be a probability space. Let the random variable X be a real-valued \mathcal{F}-measurable function. Then the expectation of X is defined by

E(X) = \int_\Omega X dP

provided that the integral exists.

Then the alternative probabilistic definition above can be rewritten in measure theoretic terms as: A set \mathcal{C} of real-valued functions is called uniformly integrable if:

  • There exists a finite M such that, for every X in \mathcal{C}, \int_\Omega |X| dP \leq M.
  • For every \epsilon > 0 there exists \delta > 0 such that, for every measurable A such that P(A)\leq \delta and for every X in \mathcal{C}, \int_A |X| dP \leq \epsilon.

Comparison of this definition with the measure theoretic definition given above shows that the measure theoretic definition requires only that each function be in L^1(\mu). In other words, \int_X f d\mu is finite for each f, but there is not necessarily an upper bound to the values of these integrals. In contrast, the probabilistic definition requires that the integrals have an upper bound.
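
For example, take Lebesgue measure \mu on [0,\infty) and the set \Phi = \{ f_n = I_{[0,n]} : n \geq 1 \}, where I_{[0,n]} denotes the indicator function of [0,n]. Given \epsilon > 0, the choice \delta = \epsilon gives

 \left| \int_E f_n d\mu \right| \leq \mu(E) < \epsilon

whenever \mu(E) < \delta, so \Phi is uniformly integrable in the measure theoretic sense, even though \int_X f_n d\mu = n is unbounded in n.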

One consequence of this is that uniformly integrable random variables (under the probabilistic definition) are tight. That is, for each \epsilon > 0, there exists a > 0 such that

 \int_{|X| > a} dP < \epsilon

for all X.[8]
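
This follows from Markov's inequality: if M bounds E(|X|) over the class, as in the first clause of the alternative definition, then

 \int_{|X| > a} dP = P(|X| > a) \leq \frac{E(|X|)}{a} \leq \frac{M}{a} < \epsilon

whenever a > M/\epsilon.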

In contrast, uniformly integrable functions (under the measure theoretic definition) are not necessarily tight.[9]
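
Here tightness over a domain of possibly infinite measure is understood in the sense used by Royden and Fitzpatrick: for each \epsilon > 0 there is a set E_0 of finite measure with \int_{X \setminus E_0} |f| d\mu < \epsilon for every f in the family. For example, on [0,\infty) with Lebesgue measure, the set \{ I_{[n,n+1]} : n \geq 1 \} is uniformly integrable in the measure theoretic sense, since \left| \int_E I_{[n,n+1]} d\mu \right| \leq \mu(E), but it is not tight: no set of finite measure captures all but \epsilon of the mass of every I_{[n,n+1]}, because the strips move off to infinity.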

In his book, Bass uses the term uniformly absolutely continuous to refer to sets of random variables (or functions) which satisfy the second clause of the alternative definition. However, this definition does not require each of the functions to have a finite integral.[10]

Related corollaries

The following results apply to the probabilistic definition.[11]

  • Definition 1 could be rewritten as the limit condition
\lim_{K \to \infty} \sup_{X \in \mathcal{C}} E\left(|X| I_{|X|\geq K}\right) = 0.
  • A non-UI sequence. Let \Omega = [0,1] \subset \mathbb{R}, and define
X_n(\omega) = \begin{cases}
  n, & \omega\in (0,1/n), \\
  0 , & \text{otherwise.} \end{cases}
Clearly X_n \in L^1, and indeed E(|X_n|) = 1 for all n. However,
E\left(|X_n| I_{|X_n|\ge K}\right) = 1 \text{ for all } n \ge K,
and comparing with Definition 1, it is seen that the sequence is not uniformly integrable.
[Figure: a non-UI sequence of random variables; the area under each strip is always equal to 1, but X_n \to 0 pointwise.]
  • Using Definition 2 (the two-clause alternative) in the above example, the first clause is satisfied, since the L^1 norms of all the X_n equal 1 and are therefore bounded. But the second clause does not hold: for any positive \delta there is an interval (0, 1/n) with measure less than \delta such that E[|X_m| : (0, 1/n)] = 1 for every m \ge n.
  • If X is a UI random variable, by splitting
E(|X|) = E\left(|X| I_{|X|\geq K}\right) + E\left(|X| I_{|X|<K}\right)
and bounding each of the two terms, it can be seen that a uniformly integrable random variable is always bounded in L^1.
  • If any sequence of random variables X_n is dominated by an integrable, non-negative Y: that is, for all ω and n,
\ |X_n(\omega)| \le |Y(\omega)|,\ Y(\omega)\ge 0,\ E(Y)< \infty,
then the class \mathcal{C} of random variables \{X_n\} is uniformly integrable.
  • A class of random variables bounded in L^p (p>1) is uniformly integrable.
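
The last statement can be verified directly: on the event \{|X| \geq K\} one has |X| \leq |X|^p / K^{p-1}, so if E(|X|^p) \leq M for every X in the class, then

 \sup_{X \in \mathcal{C}} E\left(|X| I_{|X|\geq K}\right) \leq \frac{M}{K^{p-1}} \longrightarrow 0 \quad \text{as } K \to \infty,

which is exactly the limit form of Definition 1.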

Relevant theorems

A class of random variables \{X_n\} \subset L^1(\mu) is uniformly integrable if and only if it is relatively compact for the weak topology \sigma(L^1, L^\infty); this is the Dunford–Pettis theorem.[12]

The family \{X_{\alpha}\}_{\alpha\in A} \subset L^1(\mu) is uniformly integrable if and only if there exists a non-negative increasing convex function G(t) such that

\lim_{t \to \infty} \frac{G(t)}{t} = \infty \text{ and } \sup_{\alpha} E(G(|X_{\alpha}|)) < \infty;

this is the de la Vallée-Poussin theorem.[13]
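
For example, taking G(t) = t^p with p > 1 recovers the corollary above that a class bounded in L^p is uniformly integrable, since then G(t)/t = t^{p-1} \to \infty and \sup_{\alpha} E(|X_{\alpha}|^p) < \infty is exactly boundedness in L^p. Similarly, G(t) = t \log^+ t shows that a class bounded in L \log L is uniformly integrable.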

Relation to convergence of random variables

  • A sequence \{X_n\} converges to X in the L^1 norm if and only if it converges in measure to X and it is uniformly integrable. In probability terms, a sequence of random variables converging in probability converges in the mean if and only if the sequence is uniformly integrable.[14] This is a generalization of the dominated convergence theorem.
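
The sequence X_n = n I_{(0,1/n)} from the example above shows that uniform integrability cannot be dropped: X_n \to 0 in probability (indeed pointwise), yet E(|X_n|) = 1 for every n, so X_n does not converge to 0 in the L^1 norm.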


References

  1. Rudin, Walter (1987). Real and Complex Analysis (3rd ed.). Singapore: McGraw–Hill. p. 133. ISBN 0-07-054234-1.
  2. Royden, H.L. and Fitzpatrick, P.M. (2010). Real Analysis (4th ed.). Boston: Prentice Hall. p. 93. ISBN 0-13-143747-X.
  3. Williams, David (1997). Probability with Martingales (Repr. ed.). Cambridge: Cambridge University Press. pp. 126–132. ISBN 978-0-521-40605-5.
  4. Gut, Allan (2005). Probability: A Graduate Course. Springer. pp. 214–218. ISBN 0-387-22833-0.
  5. Bass, Richard F. (2011). Stochastic Processes. Cambridge: Cambridge University Press. pp. 356–357. ISBN 978-1-107-00800-7.
  6. Gut 2005, p. 214.
  7. Bass 2011, p. 348.
  8. Gut 2005, p. 236.
  9. Royden and Fitzpatrick 2010, p. 98.
  10. Bass 2011, p. 356.
  11. Gut 2005, pp. 215–216.
  12. Dellacherie, C. and Meyer, P.A. (1978). Probabilities and Potential. New York: North-Holland Pub. Co. Chapter II, Theorem T25.
  13. Meyer, P.A. (1966). Probability and Potentials. New York: Blaisdell Publishing Co. p. 19, Theorem T22.
  14. Bogachev, Vladimir I. (2007). Measure Theory, Volume I. Berlin Heidelberg: Springer-Verlag. p. 268. doi:10.1007/978-3-540-34514-5_4. ISBN 3-540-34513-2.