In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation; it is a simple corollary that the opposite is true of concave transformations.
Jensen's inequality generalizes the statement that the secant line of a convex function lies above the graph of the function, which is Jensen's inequality for two points: the secant line consists of weighted means of the convex function (for $t \in [0,1]$),

$$t\,\varphi(x_1) + (1-t)\,\varphi(x_2),$$

while the graph of the function is the convex function of the weighted means,

$$\varphi\big(t x_1 + (1-t) x_2\big).$$

Thus, Jensen's inequality for two points is

$$\varphi\big(t x_1 + (1-t) x_2\big) \le t\,\varphi(x_1) + (1-t)\,\varphi(x_2).$$
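As a quick numerical illustration (not part of the original argument), the two-point inequality can be spot-checked for a concrete convex function; the choice $\varphi(x) = x^2$ and the points below are purely illustrative.

```python
import numpy as np

# Hypothetical check of the two-point form of Jensen's inequality:
# phi(t*x1 + (1-t)*x2) <= t*phi(x1) + (1-t)*phi(x2) for t in [0, 1].
phi = lambda x: x ** 2          # an example convex function (assumption)
x1, x2 = -1.0, 3.0              # arbitrary points

for t in np.linspace(0.0, 1.0, 11):
    graph_value = phi(t * x1 + (1 - t) * x2)          # point on the graph
    secant_value = t * phi(x1) + (1 - t) * phi(x2)    # point on the secant line
    assert graph_value <= secant_value + 1e-12        # secant lies above the graph
```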
The classical form of Jensen's inequality involves several numbers and weights. The inequality can be stated quite generally using either the language of measure theory or (equivalently) probability. In the probabilistic setting, the inequality can be further generalized to its full strength.
In its finite form, Jensen's inequality states that for a real convex function $\varphi$, numbers $x_1, \dots, x_n$ in its domain, and positive weights $a_i$,

$$\varphi\!\left(\frac{\sum_i a_i x_i}{\sum_i a_i}\right) \le \frac{\sum_i a_i\,\varphi(x_i)}{\sum_i a_i}.$$

A common application has $x$ as a function of another variable (or set of variables) $t$, that is, $x_i = g(t_i)$. All of this carries directly over to the general continuous case: the weights $a_i$ are replaced by a non-negative integrable function $f(x)$, such as a probability distribution, and the summations are replaced by integrals:

$$\varphi\!\left(\int_{-\infty}^{\infty} g(x)\, f(x)\, dx\right) \le \int_{-\infty}^{\infty} \varphi\big(g(x)\big)\, f(x)\, dx,$$

where $\int_{-\infty}^{\infty} f(x)\, dx = 1$ and $f$ is a non-negative Lebesgue-integrable function. One may also want an estimate on $\varphi\!\left(\int_a^b g(x)\, dx\right)$ with $a, b \in \mathbb{R}$. In this case, the Lebesgue measure of $[a,b]$ need not be unity. However, by integration by substitution, the interval can be rescaled so that it has measure unity. Then Jensen's inequality can be applied to get

$$\varphi\!\left(\frac{1}{b-a}\int_a^b g(x)\, dx\right) \le \frac{1}{b-a}\int_a^b \varphi\big(g(x)\big)\, dx.$$
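A rough numerical sketch of the continuous form follows; the uniform density $f$ on $[0,1]$, the function $g(x) = \sin(3x)$, and the convex function $\varphi = \exp$ are all illustrative assumptions.

```python
import numpy as np

# Riemann-sum check of phi(∫ g(x) f(x) dx) <= ∫ phi(g(x)) f(x) dx,
# where f is a probability density (∫ f dx = 1).
x = np.linspace(0.0, 1.0, 100_001)
dx = x[1] - x[0]
f = np.ones_like(x)          # density of the uniform distribution on [0, 1]
g = np.sin(3.0 * x)          # an arbitrary measurable function (assumption)
phi = np.exp                 # a convex function (assumption)

lhs = phi(np.sum(g * f) * dx)        # phi of the weighted integral
rhs = np.sum(phi(g) * f) * dx        # weighted integral of phi(g)
assert lhs <= rhs + 1e-9
print(lhs, rhs)
```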
More generally, in the language of measure theory: let $(\Omega, A, \mu)$ be a probability space, let $g$ be a real-valued $\mu$-integrable function, and let $\varphi$ be a convex function on the real line; then

$$\varphi\!\left(\int_\Omega g\, d\mu\right) \le \int_\Omega \varphi \circ g\, d\mu.$$

In this probability setting, the measure $\mu$ is intended as a probability $\mathbb{P}$, the integral with respect to $\mu$ as an expected value $\mathbb{E}$, and the function $g$ as a random variable $X$, so that the inequality becomes

$$\varphi\big(\mathbb{E}[X]\big) \le \mathbb{E}\big[\varphi(X)\big].$$
Notice that equality holds if and only if $X$ is constant (a degenerate random variable) or if $\varphi$ is linear almost surely (that is, if there is a Borel-measurable set $A$ of full measure, $\mathbb{P}(X \in A) = 1$, such that $\varphi$ is a linear function on $A$, i.e., there exist $a, b \in \mathbb{R}$ such that $\varphi(x) = a x + b$ for all $x \in A$).
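A Monte Carlo sketch of the probabilistic statement $\varphi(\mathbb{E}[X]) \le \mathbb{E}[\varphi(X)]$ is given below; the standard normal distribution for $X$ and $\varphi = \exp$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=1_000_000)   # samples of X (assumed N(0, 1))
phi = np.exp                                          # a convex function (assumption)

lhs = phi(x.mean())          # phi(E[X]), estimated from the sample mean
rhs = phi(x).mean()          # E[phi(X)], estimated by Monte Carlo
assert lhs <= rhs            # Jensen: phi(E[X]) <= E[phi(X)]
print(lhs, rhs)              # roughly 1.0 vs. exp(1/2) ≈ 1.65
```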
General inequality in a probabilistic setting
More generally, let $T$ be a real topological vector space, and $X$ a $T$-valued integrable random variable. In this general setting, integrable means that there exists an element $\mathbb{E}[X]$ in $T$ such that for any element $z$ in the dual space of $T$: $\mathbb{E}\,|\langle z, X\rangle| < \infty$ and $\langle z, \mathbb{E}[X]\rangle = \mathbb{E}\big[\langle z, X\rangle\big]$. Then, for any measurable convex function $\varphi$ and any sub-$\sigma$-algebra $\mathfrak{G}$ of $\mathfrak{F}$:

$$\varphi\big(\mathbb{E}[X \mid \mathfrak{G}]\big) \le \mathbb{E}\big[\varphi(X) \mid \mathfrak{G}\big].$$

Here $\mathfrak{F}$ stands for the $\sigma$-algebra of the underlying probability space. This statement reduces to the previous one when $T$ is the real axis and $\mathfrak{G}$ is the trivial $\sigma$-algebra $\{\varnothing, \Omega\}$.
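A small Monte Carlo sketch of the conditional statement in the simplest setting $T = \mathbb{R}$, with $\mathfrak{G}$ generated by a discrete grouping variable; the distributions and the convex function below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
phi = np.square                                      # a convex function (assumption)

group = rng.integers(0, 3, size=300_000)             # G generates the sub-sigma-algebra
x = rng.normal(loc=group.astype(float), scale=1.0)   # X depends on the group (assumption)

for g in range(3):
    xs = x[group == g]
    lhs = phi(xs.mean())        # phi(E[X | G = g])
    rhs = phi(xs).mean()        # E[phi(X) | G = g]
    assert lhs <= rhs           # conditional Jensen holds on every atom of G
```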
A graphical "proof" of Jensen's inequality for the probabilistic case. The dashed curve along the X axis is the hypothetical distribution of X, while the dashed curve along the Y axis is the corresponding distribution of Y values. Note that the convex mapping Y(X) increasingly "stretches" the distribution for increasing values of X.
This is a proof without words of Jensen's inequality for n variables. Without loss of generality, the sum of the positive weights is 1. It follows that the weighted average of the points $(x_i, \varphi(x_i))$ lies in their convex hull, which lies above the graph of the function itself by the definition of convexity. The conclusion follows.
Jensen's inequality can be proved in several ways, and three different proofs corresponding to the different statements above will be offered. Before embarking on these mathematical derivations, however, it is worth analyzing an intuitive graphical argument based on the probabilistic case where $X$ is a real number (see figure). Assuming a hypothetical distribution of $X$ values, one can immediately identify the position of $\mathbb{E}[X]$ and its image $\varphi(\mathbb{E}[X])$ in the graph. Noticing that for convex mappings $Y = \varphi(X)$ the corresponding distribution of $Y$ values is increasingly "stretched out" for increasing values of $X$, it is easy to see that the distribution of $Y$ is broader in the interval corresponding to $X > X_0$ and narrower in $X < X_0$ for any $X_0$; in particular, this is also true for $X_0 = \mathbb{E}[X]$. Consequently, in this picture the expectation of $Y$ will always shift upwards with respect to the position of $\varphi(\mathbb{E}[X])$. A similar reasoning holds if the distribution of $X$ covers a decreasing portion of the convex function, or both a decreasing and an increasing portion of it. This "proves" the inequality, i.e.

$$\mathbb{E}[Y] = \mathbb{E}\big[\varphi(X)\big] \ge \varphi\big(\mathbb{E}[X]\big),$$
with equality when φ(X) is not strictly convex, e.g. when it is a straight line, or when X follows a degenerate distribution (i.e. is a constant).
Proof 1 (finite form)

If $\lambda_1$ and $\lambda_2$ are two arbitrary nonnegative real numbers such that $\lambda_1 + \lambda_2 = 1$, then convexity of $\varphi$ implies

$$\varphi(\lambda_1 x_1 + \lambda_2 x_2) \le \lambda_1\,\varphi(x_1) + \lambda_2\,\varphi(x_2) \quad \text{for all } x_1, x_2.$$
This can be easily generalized: if $\lambda_1, \dots, \lambda_n$ are nonnegative real numbers such that $\lambda_1 + \dots + \lambda_n = 1$, then

$$\varphi(\lambda_1 x_1 + \dots + \lambda_n x_n) \le \lambda_1\,\varphi(x_1) + \dots + \lambda_n\,\varphi(x_n)$$
for any $x_1, \dots, x_n$. This finite form of Jensen's inequality can be proved by induction: by the convexity hypothesis, the statement is true for $n = 2$. Suppose the statement is true for some $n$; one needs to prove it for $n + 1$. At least one of the $\lambda_i$ is strictly positive, say $\lambda_1$; if $\lambda_1 = 1$ the statement is trivial, so assume $\lambda_1 < 1$. Then, by the convexity inequality,

$$\varphi\!\left(\sum_{i=1}^{n+1} \lambda_i x_i\right) = \varphi\!\left(\lambda_1 x_1 + (1-\lambda_1)\sum_{i=2}^{n+1} \frac{\lambda_i}{1-\lambda_1}\, x_i\right) \le \lambda_1\,\varphi(x_1) + (1-\lambda_1)\,\varphi\!\left(\sum_{i=2}^{n+1} \frac{\lambda_i}{1-\lambda_1}\, x_i\right).$$

Since $\sum_{i=2}^{n+1} \frac{\lambda_i}{1-\lambda_1} = 1$, one can apply the induction hypothesis to the last term in the previous formula to obtain the result, namely the finite form of Jensen's inequality.
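The finite form just proved can also be spot-checked numerically with randomly drawn points and weights; the convex function $\varphi(x) = |x|^3$ below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
phi = lambda x: np.abs(x) ** 3          # an example convex function (assumption)

for _ in range(1000):
    n = rng.integers(2, 10)
    x = rng.normal(size=n)              # arbitrary points x_1, ..., x_n
    lam = rng.random(n)
    lam /= lam.sum()                    # nonnegative weights summing to 1
    lhs = phi(np.dot(lam, x))           # phi of the weighted mean
    rhs = np.dot(lam, phi(x))           # weighted mean of phi
    assert lhs <= rhs + 1e-12           # finite form of Jensen's inequality
```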
In order to obtain the general inequality from this finite form, one needs to use a density argument. The finite form can be rewritten as

$$\varphi\!\left(\int x\, d\mu_n(x)\right) \le \int \varphi(x)\, d\mu_n(x),$$

where $\mu_n$ is a measure given by an arbitrary convex combination of Dirac deltas, $\mu_n = \sum_{i=1}^{n} \lambda_i\, \delta_{x_i}$.
Since convex functions are continuous, and since convex combinations of Dirac deltas are weakly dense in the set of probability measures (as could be easily verified), the general statement is obtained simply by a limiting procedure.
Proof 2 (measure-theoretic form)

Let $g$ be a real-valued $\mu$-integrable function on a probability space $\Omega$, and let $\varphi$ be a convex function on the real numbers. Since $\varphi$ is convex, at each real number $x$ we have a nonempty set of subderivatives, which may be thought of as lines touching the graph of $\varphi$ at $x$, but which lie at or below the graph of $\varphi$ at all points (support lines of the graph).
Now, if we define

$$x_0 := \int_\Omega g\, d\mu,$$

then, because of the existence of subderivatives for convex functions, we may choose $a$ and $b$ such that

$$a x + b \le \varphi(x)$$

for all real $x$, and

$$a x_0 + b = \varphi(x_0).$$

But then we have that

$$a\, g(x) + b \le \varphi\big(g(x)\big)$$

for all $x \in \Omega$. Since we have a probability measure, the integral is monotone with $\mu(\Omega) = 1$, so that

$$\int_\Omega \varphi\big(g(x)\big)\, d\mu(x) \ge \int_\Omega \big(a\, g(x) + b\big)\, d\mu(x) = a \int_\Omega g(x)\, d\mu(x) + b = a x_0 + b = \varphi(x_0) = \varphi\!\left(\int_\Omega g\, d\mu\right),$$

as desired.
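The support-line idea behind this proof can be visualized numerically. The sketch below picks a subderivative of $\varphi$ at $x_0$ by a symmetric finite difference (assuming $\varphi$ is differentiable there); the choices $\varphi = \exp$, $g$ the identity, and the uniform measure on $[0, 2]$ are illustrative.

```python
import numpy as np

phi = np.exp                                  # a convex, differentiable function (assumption)
omega = np.linspace(0.0, 2.0, 10_001)         # Ω = [0, 2] with the uniform probability measure
g = omega                                     # g is the identity map (assumption)

x0 = g.mean()                                 # x0 = ∫ g dμ (≈ 1 here)
eps = 1e-6
a = (phi(x0 + eps) - phi(x0 - eps)) / (2 * eps)   # a subderivative of phi at x0
b = phi(x0) - a * x0                              # support line touches the graph at x0

assert np.all(a * omega + b <= phi(omega) + 1e-9)   # line stays (essentially) below the graph
assert np.isclose(a * x0 + b, phi(x0))              # equality at x0
assert phi(x0) <= phi(g).mean()                     # Jensen: phi(∫ g dμ) <= ∫ phi∘g dμ
```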
Proof 3 (general inequality in a probabilistic setting)
Let $X$ be an integrable random variable that takes values in a real topological vector space $T$. Since $\varphi \colon T \to \mathbb{R}$ is convex, for any $x, y \in T$ the quantity

$$\frac{\varphi(x + \theta\, y) - \varphi(x)}{\theta}, \qquad \theta > 0,$$

is decreasing as $\theta$ approaches $0^{+}$. In particular, the subdifferential of $\varphi$ evaluated at $x$ in the direction $y$ is well-defined by

$$(D\varphi)(x)\cdot y := \lim_{\theta \downarrow 0} \frac{\varphi(x + \theta\, y) - \varphi(x)}{\theta} = \inf_{\theta > 0} \frac{\varphi(x + \theta\, y) - \varphi(x)}{\theta}.$$
The subdifferential is linear in $y$ (establishing this is not immediate; it requires the Hahn–Banach theorem) and, since the infimum taken in the right-hand side of the previous formula is smaller than the value of the same term for $\theta = 1$, one gets

$$\varphi(x) \le \varphi(x + y) - (D\varphi)(x)\cdot y.$$
In particular, for an arbitrary sub-$\sigma$-algebra $\mathfrak{G}$ we can evaluate the last inequality when $x = \mathbb{E}[X \mid \mathfrak{G}]$ and $y = X - \mathbb{E}[X \mid \mathfrak{G}]$ to obtain

$$\varphi\big(\mathbb{E}[X \mid \mathfrak{G}]\big) \le \varphi(X) - (D\varphi)\big(\mathbb{E}[X \mid \mathfrak{G}]\big)\cdot \big(X - \mathbb{E}[X \mid \mathfrak{G}]\big).$$
Now, if we take the expectation conditioned on $\mathfrak{G}$ on both sides of the previous expression, we get the result since

$$\mathbb{E}\Big[(D\varphi)\big(\mathbb{E}[X \mid \mathfrak{G}]\big)\cdot \big(X - \mathbb{E}[X \mid \mathfrak{G}]\big) \,\Big|\, \mathfrak{G}\Big] = (D\varphi)\big(\mathbb{E}[X \mid \mathfrak{G}]\big)\cdot \mathbb{E}\Big[X - \mathbb{E}[X \mid \mathfrak{G}] \,\Big|\, \mathfrak{G}\Big] = 0,$$

by the linearity of the subdifferential in the $y$ variable, and the following well-known property of the conditional expectation:

$$\mathbb{E}\Big[\mathbb{E}[X \mid \mathfrak{G}] \,\Big|\, \mathfrak{G}\Big] = \mathbb{E}[X \mid \mathfrak{G}].$$
Information theory

If $p(x)$ is the true probability density for a random variable $X$ and $q(x)$ is another density, then applying Jensen's inequality to the random variable $Y(X) = q(X)/p(X)$ and the convex function $\varphi(y) = -\log(y)$ gives $\mathbb{E}[\varphi(Y)] \ge \varphi(\mathbb{E}[Y])$, that is,

$$-D(p\,\|\,q) = \int p(x)\,\log\!\frac{q(x)}{p(x)}\, dx \;\le\; \log\!\int p(x)\,\frac{q(x)}{p(x)}\, dx = \log\!\int q(x)\, dx = 0,$$

a result called Gibbs' inequality. It shows that the average message length is minimised when codes are assigned on the basis of the true probabilities $p$ rather than any other distribution $q$. The non-negative quantity $D(p\,\|\,q)$ is called the Kullback–Leibler divergence of $q$ from $p$.
Since $-\log(x)$ is a strictly convex function for $x > 0$, it follows that equality holds if and only if $p(x)$ equals $q(x)$ almost everywhere.
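A small numerical illustration of this fact follows; the two discrete distributions are arbitrary, and the divergence is computed directly from its defining sum.

```python
import numpy as np

def kl_divergence(p, q):
    """D(p || q) = sum_x p(x) * log(p(x) / q(x)) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q))

p = np.array([0.5, 0.3, 0.2])     # "true" distribution (illustrative)
q = np.array([0.4, 0.4, 0.2])     # a different coding distribution (illustrative)

assert kl_divergence(p, q) >= 0.0              # Gibbs' inequality, via Jensen
assert np.isclose(kl_divergence(p, p), 0.0)    # equality exactly when q = p
print(kl_divergence(p, q))
```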
Rao–Blackwell theorem

If $L$ is a convex function and $\mathfrak{G}$ a sub-sigma-algebra, then, from the conditional version of Jensen's inequality, we get

$$L\big(\mathbb{E}[\delta(X) \mid \mathfrak{G}]\big) \le \mathbb{E}\big[L(\delta(X)) \mid \mathfrak{G}\big] \quad\Longrightarrow\quad \mathbb{E}\Big[L\big(\mathbb{E}[\delta(X) \mid \mathfrak{G}]\big)\Big] \le \mathbb{E}\big[L(\delta(X))\big].$$
So if $\delta(X)$ is some estimator of an unobserved parameter $\theta$ given a vector of observables $X$, and if $T(X)$ is a sufficient statistic for $\theta$, then an improved estimator, in the sense of having a smaller expected loss $L$, can be obtained by calculating

$$\delta_1(X) = \mathbb{E}_{\theta}\big[\delta(X') \mid T(X') = T(X)\big],$$

the expected value of $\delta$ with respect to $\theta$, taken over all possible vectors of observations $X$ compatible with the same value of $T(X)$ as that observed. Further, because $T$ is a sufficient statistic, $\delta_1(X)$ does not depend on $\theta$, and hence becomes a statistic. This result is known as the Rao–Blackwell theorem.
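A Monte Carlo sketch of this improvement, under assumed Bernoulli(θ) observations with squared-error loss: the naive estimator $\delta(X) = X_1$ is conditioned on the sufficient statistic $T(X) = \sum_i X_i$, which yields the sample mean, and the expected loss drops accordingly. All modeling choices here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, trials = 0.3, 10, 100_000

x = rng.binomial(1, theta, size=(trials, n))       # observation vectors X (assumed Bernoulli)
delta = x[:, 0]                                    # naive estimator delta(X) = X_1
t = x.sum(axis=1)                                  # sufficient statistic T(X) = sum of X_i
delta_rb = t / n                                   # E[delta(X) | T(X)] = T(X)/n, the sample mean

loss = lambda est: np.mean((est - theta) ** 2)     # squared-error loss L
assert loss(delta_rb) <= loss(delta)               # Rao-Blackwellization does not increase risk
print(loss(delta), loss(delta_rb))                 # ≈ theta(1-theta) vs. theta(1-theta)/n
```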