In probability theory, Markov's inequality gives an upper bound for the probability that a non-negative function of a random variable is greater than or equal to some positive constant. It is named after the Russian mathematician Andrey Markov, although it appeared earlier in the work of Pafnuty Chebyshev (Markov's teacher), and many sources, especially in analysis, refer to it as Chebyshev's inequality (sometimes calling it the first Chebyshev inequality, while referring to Chebyshev's inequality as the second Chebyshev inequality) or Bienaymé's inequality.
Markov's inequality (like other similar inequalities) relates probabilities to expectations, and provides (frequently loose but still useful) bounds for the cumulative distribution function of a random variable.
An example of an application of Markov's inequality is the fact that (assuming incomes are non-negative) no more than 1/5 of the population can have more than 5 times the average income.
If X is a nonnegative random variable and a > 0, then

$$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}(X)}{a}.$$
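The income example above is the case $a = 5\operatorname{E}(X)$; as a short worked instance:

$$\operatorname{P}\bigl(X \geq 5\operatorname{E}(X)\bigr) \leq \frac{\operatorname{E}(X)}{5\operatorname{E}(X)} = \frac{1}{5}.$$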
Extended version for monotonically increasing functions
If φ is a monotonically increasing function from the nonnegative reals to the nonnegative reals, X is a random variable, a ≥ 0, and φ(a) > 0, then

$$\operatorname{P}(|X| \geq a) \leq \frac{\operatorname{E}(\varphi(|X|))}{\varphi(a)}.$$
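As a sketch of how the extended version is used, taking $\varphi(x) = x^2$ (a standard choice, assumed here purely for illustration) yields a second-moment tail bound:

$$\operatorname{P}(|X| \geq a) = \operatorname{P}(X^2 \geq a^2) \leq \frac{\operatorname{E}(X^2)}{a^2}.$$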
We separate the case in which the measure space is a probability space from the more general case because the probability case is more accessible for the general reader.
Proof

In the language of probability theory
For any event $E$, let $I_E$ be the indicator random variable of $E$, that is, $I_E = 1$ if $E$ occurs and $I_E = 0$ otherwise.
Using this notation, we have $I_{(X \geq a)} = 1$ if the event $X \geq a$ occurs, and $I_{(X \geq a)} = 0$ if $X < a$. Then, given $a > 0$,

$$a I_{(X \geq a)} \leq X,$$

which is clear if we consider the two possible values of $I_{(X \geq a)}$. If $X < a$, then $I_{(X \geq a)} = 0$, and so $a I_{(X \geq a)} = 0 \leq X$. Otherwise, we have $X \geq a$, for which $I_{(X \geq a)} = 1$ and so $a I_{(X \geq a)} = a \leq X$.
Since $\operatorname{E}$ is a monotonically increasing function, taking the expectation of both sides of an inequality cannot reverse it. Therefore,

$$\operatorname{E}(a I_{(X \geq a)}) \leq \operatorname{E}(X).$$

Now, using linearity of expectations, the left side of this inequality is the same as

$$a \operatorname{E}(I_{(X \geq a)}) = a \bigl(1 \cdot \operatorname{P}(X \geq a) + 0 \cdot \operatorname{P}(X < a)\bigr) = a \operatorname{P}(X \geq a).$$

Thus we have

$$a \operatorname{P}(X \geq a) \leq \operatorname{E}(X),$$
and since $a > 0$, we can divide both sides by $a$ to obtain $\operatorname{P}(X \geq a) \leq \operatorname{E}(X)/a$, as claimed.
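As a concrete sanity check of how loose (yet valid) the bound can be, here is a sketch with a unit-rate exponential variable, an illustrative assumption not required by the proof:

$$X \sim \operatorname{Exp}(1): \qquad \operatorname{E}(X) = 1, \qquad \operatorname{P}(X \geq a) = e^{-a} \leq \frac{1}{a} \quad \text{for all } a > 0.$$

The true tail decays exponentially while Markov's bound decays only like $1/a$, illustrating the remark above that such bounds are frequently loose but still useful.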
In the language of measure theory
In the measure-theoretic setting, the claim is that if $(X, \Sigma, \mu)$ is a measure space, $f$ is a measurable extended real-valued function, and $\varepsilon > 0$, then

$$\mu(\{x \in X : |f(x)| \geq \varepsilon\}) \leq \frac{1}{\varepsilon} \int_X |f| \, d\mu.$$

We may assume that the function $f$ is non-negative, since only its absolute value enters in the equation. Now, consider the real-valued function $s$ on $X$ given by

$$s(x) = \begin{cases} \varepsilon, & \text{if } f(x) \geq \varepsilon \\ 0, & \text{if } f(x) < \varepsilon. \end{cases}$$

Then $0 \leq s(x) \leq f(x)$. By the definition of the Lebesgue integral,

$$\int_X f(x) \, d\mu \geq \int_X s(x) \, d\mu = \varepsilon \, \mu(\{x \in X : f(x) \geq \varepsilon\}),$$

and since $\varepsilon > 0$, both sides can be divided by $\varepsilon$, obtaining

$$\mu(\{x \in X : f(x) \geq \varepsilon\}) \leq \frac{1}{\varepsilon} \int_X f \, d\mu.$$
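To see that the probability version is the special case, a sketch: take the measure space to be a probability space $(\Omega, \mathcal{F}, \operatorname{P})$ with $\mu = \operatorname{P}$, and let $f$ be a non-negative random variable; then

$$\operatorname{P}(\{\omega \in \Omega : f(\omega) \geq \varepsilon\}) \leq \frac{1}{\varepsilon} \int_\Omega f \, d\operatorname{P} = \frac{\operatorname{E}(f)}{\varepsilon},$$

which is exactly the statement given at the start of this article.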
Chebyshev's inequality

Chebyshev's inequality uses the variance to bound the probability that a random variable deviates far from its mean:

$$\operatorname{P}(|X - \operatorname{E}(X)| \geq a) \leq \frac{\operatorname{Var}(X)}{a^2}$$

for any a > 0. Here Var(X) is the variance of X, defined as:

$$\operatorname{Var}(X) = \operatorname{E}\bigl[(X - \operatorname{E}(X))^2\bigr].$$
Chebyshev's inequality follows from Markov's inequality by considering the random variable

$$(X - \operatorname{E}(X))^2$$

and the constant $a^2$, for which Markov's inequality reads

$$\operatorname{P}\bigl((X - \operatorname{E}(X))^2 \geq a^2\bigr) \leq \frac{\operatorname{E}\bigl((X - \operatorname{E}(X))^2\bigr)}{a^2} = \frac{\operatorname{Var}(X)}{a^2}.$$

This argument can be summarized (where "MI" indicates use of Markov's inequality):

$$\operatorname{P}(|X - \operatorname{E}(X)| \geq a) = \operatorname{P}\bigl((X - \operatorname{E}(X))^2 \geq a^2\bigr) \overset{\text{MI}}{\leq} \frac{\operatorname{E}\bigl((X - \operatorname{E}(X))^2\bigr)}{a^2} = \frac{\operatorname{Var}(X)}{a^2}.$$
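As a short worked instance, writing $\sigma^2 = \operatorname{Var}(X)$ and setting $a = k\sigma$ (the standard k-sigma parametrization, chosen here for illustration) gives

$$\operatorname{P}\bigl(|X - \operatorname{E}(X)| \geq k\sigma\bigr) \leq \frac{\sigma^2}{k^2 \sigma^2} = \frac{1}{k^2},$$

so, for example, at most $1/4$ of the probability mass can lie two or more standard deviations from the mean.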
- The "monotonic" result can be demonstrated by:
- The result that, for a nonnegative random variable X, the quantile function $Q_X$ of X satisfies

  $$Q_X(1 - p) \leq \frac{\operatorname{E}(X)}{p},$$

  the proof using

  $$p \leq \operatorname{P}\bigl(X \geq Q_X(1 - p)\bigr) \overset{\text{MI}}{\leq} \frac{\operatorname{E}(X)}{Q_X(1 - p)}.$$
- Let $M \succeq 0$ be a self-adjoint matrix-valued random variable and $a > 0$. Then

  $$\operatorname{P}(M \npreceq a \cdot I) \leq \frac{\operatorname{tr}(\operatorname{E}(M))}{a}$$

  can be shown in a similar manner.
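As a sketch of the "monotonic" result in use, take $\varphi(x) = e^{tx}$ with $t > 0$ (an exponential choice assumed here purely for illustration); applied to a random variable X, this yields a Chernoff-style bound

$$\operatorname{P}(X \geq a) = \operatorname{P}\bigl(e^{tX} \geq e^{ta}\bigr) \leq \frac{\operatorname{E}(e^{tX})}{e^{ta}} \quad \text{for any } t > 0,$$

which can then be optimized over the free parameter $t$.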
- Concentration inequality - a summary of tail bounds on random variables.
- "Markov and Chebyshev Inequalities". Retrieved 4 February 2016.
- E.M. Stein, R. Shakarchi, "Real Analysis, Measure Theory, Integration, & Hilbert Spaces", vol. 3, 1st ed., 2005, p.91