Large deviations theory

In probability theory, the theory of large deviations concerns the asymptotic behaviour of remote tails of sequences of probability distributions. Some basic ideas of the theory can be traced back to Laplace and Cramér, although a clear, unified formal definition was introduced in 1966 by Varadhan.[1] Large deviations theory formalizes the heuristic ideas of concentration of measure and greatly generalizes the notion of convergence of probability measures.

Roughly speaking, large deviations theory concerns itself with the exponential decay of the probabilities of certain kinds of extreme or tail events as the number of observations grows arbitrarily large.

Introductory examples

An elementary example

Consider a sequence of independent tosses of a fair coin. The possible outcomes could be heads or tails. Let us denote the possible outcome of the i-th trial by $X_i$, where we encode head as $-1$ and tail as $1$. Now let $M_N$ denote the mean value after $N$ trials, namely

$$M_N = \frac{1}{N} \sum_{i=1}^{N} X_i.$$

Then $M_N$ lies between $-1$ and $1$. From the law of large numbers (and also from our experience) we know that as $N$ grows, $M_N$ becomes closer and closer to $0$ with increasing probability. Let us make this statement more precise. For a given value $0 < x < 1$, let us compute the probability $P(M_N > x)$ that $M_N$ is greater than $x$. Define

$$I(x) = \frac{1+x}{2} \ln(1+x) + \frac{1-x}{2} \ln(1-x).$$

Then by Chernoff's inequality it can be shown that $P(M_N > x) < \exp(-N\, I(x))$. This bound is rather sharp, in the sense that $I(x)$ cannot be replaced with a larger number which would yield a true inequality for all positive $N$. (However, the exponential bound can still be reduced by a subexponential factor on the order of $1/\sqrt{N}$.) At any rate, the probability $P(M_N > x)$ is decaying exponentially rapidly as $N$ grows large, at a rate depending on $x$.
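
This decay is easy to observe numerically. The following sketch (illustrative code, not from the original article; it assumes NumPy, and the choices of $x$ and the number of simulated batches are arbitrary) compares the empirical frequency of the event $\{M_N > x\}$ with the Chernoff bound $e^{-N I(x)}$:

    import numpy as np

    def rate(x):
        """Rate function I(x) for the fair coin with heads = -1, tails = +1."""
        return 0.5 * (1 + x) * np.log(1 + x) + 0.5 * (1 - x) * np.log(1 - x)

    rng = np.random.default_rng(0)
    x = 0.2                  # threshold for the tail event {M_N > x}
    trials = 200_000         # number of simulated batches for each N

    for N in (10, 50, 100, 200):
        # With tails encoded as +1, the number of tails k is Binomial(N, 1/2),
        # and the sample mean is M_N = (2k - N) / N.
        k = rng.binomial(N, 0.5, size=trials)
        m = (2 * k - N) / N
        empirical = (m > x).mean()
        bound = np.exp(-N * rate(x))
        print(f"N={N:4d}  empirical={empirical:.3e}  Chernoff bound={bound:.3e}")

As $N$ grows, both columns shrink exponentially and the ratio between them stays subexponential, in line with the sharpness remark above.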

Large deviations for sums of independent random variables

In the above example of coin-tossing we explicitly assumed that each toss is an independent trial, and that for each toss the probability of getting head or tail is the same. This makes the random variables $X_1, X_2, \ldots, X_N$ independent and identically distributed (i.i.d.). For i.i.d. variables whose common distribution satisfies a certain growth condition, large deviations theory states that the following limit exists:

$$\lim_{N\to\infty} \frac{1}{N} \ln P(M_N > x) = -I(x).$$

The function $I(x)$ is called the "rate function" or "Cramér function" or sometimes the "entropy function". Roughly speaking, the existence of this limit is what establishes the above-mentioned exponential decay and allows us to conclude that for large $N$, $P(M_N > x)$ takes the form:

$$P(M_N > x) \approx e^{-N I(x)},$$

which is the basic result of large deviations theory in this setting. Note that the strict inequality given in the elementary example above, as opposed to the asymptotic formula presented here, need not remain valid in more general settings.

In the i.i.d. setting, if we know the probability distribution of $X_i$, an explicit expression for the rate function can be obtained. This is given by a Legendre transform,

$$I(x) = \sup_{\theta > 0} \left[\theta x - \lambda(\theta)\right],$$

where the function $\lambda(\theta)$ is called the cumulant generating function (CGF), given by

$$\lambda(\theta) = \ln E\left[\exp(\theta X)\right].$$

Here $E$ denotes expectation value with respect to the probability distribution function of $X_i$, and $X$ is any one of the $X_i$s. If $X_i$ follows a Gaussian distribution, the rate function becomes a parabola with its apex at the mean of the Gaussian distribution.
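
For example, if $X$ follows the Gaussian distribution $\mathcal{N}(\mu, \sigma^2)$, its cumulant generating function is

$$\lambda(\theta) = \mu\theta + \frac{1}{2}\sigma^2\theta^2,$$

and maximizing $\theta x - \lambda(\theta)$ over $\theta$ gives $\theta^{*} = (x - \mu)/\sigma^2$ (positive whenever $x > \mu$), hence

$$I(x) = \frac{(x - \mu)^2}{2\sigma^2},$$

a parabola with its apex at the mean $x = \mu$, as stated.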

If the condition of independent identical distribution is relaxed, in particular if the variables $X_i$ are not independent but nevertheless satisfy the Markov property, the basic large deviations result stated above can be generalized.

Formal definition

Given a Polish space $\mathcal{X}$, let $\{\mathbb{P}_N\}$ be a sequence of Borel probability measures on $\mathcal{X}$, let $\{a_N\}$ be a sequence of positive real numbers such that $\lim_N a_N = \infty$, and finally let $I : \mathcal{X} \to [0, \infty]$ be a lower semicontinuous functional on $\mathcal{X}$. The sequence $\{\mathbb{P}_N\}$ is said to satisfy a large deviation principle with speed $\{a_N\}$ and rate $I$, iff for each Borel measurable set $E \subset \mathcal{X}$,

$$-\inf_{x \in E^{\circ}} I(x) \le \liminf_{N\to\infty} \frac{1}{a_N} \ln \mathbb{P}_N(E) \le \limsup_{N\to\infty} \frac{1}{a_N} \ln \mathbb{P}_N(E) \le -\inf_{x \in \bar{E}} I(x),$$

where $\bar{E}$ and $E^{\circ}$ denote respectively the closure and interior of $E$.
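
As an illustrative check (a sketch, using the coin-tossing example above rather than any new result): take $\mathcal{X} = [-1, 1]$, let $\mathbb{P}_N$ be the law of the mean $M_N$, and set the speed $a_N = N$. For the set $E = (x, 1]$ with $0 < x < 1$, the closure is $[x, 1]$, and since $I$ is continuous and increasing on $[0, 1]$, the infimum of $I$ over both the interior and the closure equals $I(x)$. The two bounds then collapse to

$$\lim_{N\to\infty} \frac{1}{N} \ln P(M_N > x) = -I(x),$$

recovering the limit stated earlier.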

Brief history

The first rigorous results concerning large deviations are due to the Swedish mathematician Harald Cramér, who applied them to model the insurance business. From the point of view of an insurance company, the earnings come in at a constant rate per month (the monthly premium), but the claims come randomly. For the company to be successful over a certain period of time (preferably many months), the total earnings should exceed the total claims. Thus to estimate the premium you have to ask the following question: "What should we choose as the premium $q$ such that over $N$ months the total claim $C = \Sigma X_i$ is less than $Nq$?" This is exactly the question addressed by large deviations theory. Cramér gave a solution to this question for i.i.d. random variables, where the rate function is expressed as a power series. The results we have quoted above were later obtained by Chernoff, among others. A very incomplete list of mathematicians who have made important advances would include S. R. S. Varadhan (who has won the Abel Prize), D. Ruelle and O. E. Lanford.

Applications

Principles of large deviations may be effectively applied to extract information from a probabilistic model. Thus, the theory of large deviations finds its applications in information theory and risk management. In physics, the best known application of large deviations theory arises in thermodynamics and statistical mechanics (in connection with relating entropy with the rate function).

Large deviations and entropy

The rate function is related to the entropy in statistical mechanics. This can be heuristically seen in the following way. In statistical mechanics the entropy of a particular macro-state is related to the number of micro-states which correspond to this macro-state. In our coin-tossing example the mean value $M_N$ could designate a particular macro-state, while a particular sequence of heads and tails which gives rise to a particular value of $M_N$ constitutes a particular micro-state. Loosely speaking, a macro-state having a larger number of micro-states giving rise to it has higher entropy, and a state with higher entropy has a higher chance of being realised in actual experiments. The macro-state with mean value zero (as many heads as tails) has the largest number of micro-states giving rise to it, and it is indeed the state with the highest entropy; in most practical situations we shall indeed obtain this macro-state for a large number of trials. The "rate function", on the other hand, measures the probability of appearance of a particular macro-state: the smaller the rate function, the higher the chance of a macro-state appearing. In our coin-tossing example the value of the rate function for mean value equal to zero is zero. In this way one can see the rate function as the negative of the entropy.
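
This heuristic can be made quantitative in the coin-tossing example (a sketch via Stirling's approximation, consistent with the formulas above). A mean value $M_N = x$ corresponds to $k = N(1+x)/2$ tosses equal to $+1$, so the number of micro-states is the binomial coefficient $\binom{N}{k}$, and every individual sequence has probability $2^{-N}$. Stirling's approximation gives $\frac{1}{N} \ln \binom{N}{k} \approx H(x)$, where

$$H(x) = -\frac{1+x}{2} \ln\frac{1+x}{2} - \frac{1-x}{2} \ln\frac{1-x}{2}$$

is the entropy per toss, and therefore

$$P(M_N = x) \approx \binom{N}{N(1+x)/2}\, 2^{-N} \approx e^{N(H(x) - \ln 2)} = e^{-N I(x)},$$

since $H(x) - \ln 2 = -I(x)$. The rate function is thus $\ln 2$ minus the entropy per toss, which is the precise sense in which it is the negative of the entropy.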

References

  1. ^ S. R. S. Varadhan, Asymptotic probability and differential equations, Comm. Pure Appl. Math. 19 (1966), 261–286.

Bibliography

  • S. R. S. Varadhan, Special invited paper: Large deviations, The Annals of Probability, 2008, Vol. 36, No. 2, 397–419. DOI: 10.1214/07-AOP348.
  • R. S. Ellis, Entropy, Large Deviations and Statistical Mechanics. Springer. ISBN 3-540-29059-1.
  • Alan Weiss and Adam Shwartz, Large Deviations for Performance Analysis. Chapman and Hall. ISBN 0-412-06311-5.
  • Amir Dembo and Ofer Zeitouni, Large Deviations Techniques and Applications. Springer. ISBN 0-387-98406-2.
  • M. I. Freidlin and A. D. Wentzell, Random Perturbations of Dynamical Systems. Springer. ISBN 0-387-98362-7.
