In probability theory, an empirical measure is a random measure arising from a particular realization of a (usually finite) sequence of random variables. The precise definition is found below. Empirical measures are relevant to mathematical statistics.
The motivation for studying empirical measures is that it is often impossible to know the true underlying probability measure P. We collect observations X_1, X_2, \dots, X_n and compute relative frequencies. We can estimate P, or a related distribution function F, by means of the empirical measure or empirical distribution function, respectively. These are uniformly good estimates under certain conditions. Theorems in the area of empirical processes provide rates of this convergence.
Let X_1, X_2, \dots be a sequence of independent identically distributed random variables with values in the state space S and common probability distribution P.
- The empirical measure P_n is defined for measurable subsets A of S and given by
  P_n(A) = \frac{1}{n} \sum_{i=1}^n I_A(X_i) = \frac{1}{n} \sum_{i=1}^n \delta_{X_i}(A)
- where I_A is the indicator function of A and \delta_X is the Dirac measure at X.
- For a fixed measurable set A, nP_n(A) is a binomial random variable with mean nP(A) and variance nP(A)(1 − P(A)).
- In particular, P_n(A) is an unbiased estimator of P(A).
- For a fixed partition A_1, \dots, A_k of S, the random variables nP_n(A_1), \dots, nP_n(A_k) form a multinomial distribution with event probabilities P(A_1), \dots, P(A_k).
- The covariance matrix of this multinomial distribution is \Sigma_{ij} = nP(A_i)(\delta_{ij} - P(A_j)).
- (P_n(c))_{c \in \mathcal{C}} is the empirical measure indexed by \mathcal{C}, a collection of measurable subsets of S.
To generalize this notion further, observe that the empirical measure P_n maps measurable functions f to their empirical mean,
  f \mapsto P_n f = \int_S f \, dP_n = \frac{1}{n} \sum_{i=1}^n f(X_i)
In particular, the empirical measure of A is simply the empirical mean of the indicator function, P_n(A) = P_n I_A.
For a fixed measurable function f, P_n f is a random variable with mean Pf and variance \frac{1}{n} P(f - Pf)^2.
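These definitions can be illustrated numerically. The following is a minimal sketch in which the distribution P, the set A, and the function f are arbitrary choices made for the example:

```python
import random

random.seed(0)
n = 10_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]  # X_1, ..., X_n iid from P = N(0, 1)

# Empirical measure of A = (1, infinity): P_n(A) = (1/n) * sum of I_A(X_i).
count = sum(1 for x in xs if x > 1.0)            # n * P_n(A), a Binomial(n, P(A)) variable
Pn_A = count / n                                 # unbiased estimator of P(A) ~ 0.1587

# Empirical mean of a measurable function f: P_n f = (1/n) * sum of f(X_i).
f = lambda x: x * x
Pn_f = sum(f(x) for x in xs) / n                 # estimates P f = E[X^2] = 1
```

Here nP_n(A) is exactly the number of sample points falling in A, matching the binomial description above.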
By the strong law of large numbers, P_n(A) converges to P(A) almost surely for fixed A. Similarly P_n f converges to Pf almost surely for a fixed measurable function f. The problem of uniform convergence of P_n to P was open until Vapnik and Chervonenkis solved it in 1968.
If the class \mathcal{C} (or \mathcal{F}) is Glivenko–Cantelli with respect to P, then P_n converges to P uniformly over c \in \mathcal{C} (or f \in \mathcal{F}). In other words, with probability 1 we have
  \|P_n - P\|_{\mathcal{C}} = \sup_{c \in \mathcal{C}} |P_n(c) - P(c)| \to 0
  \|P_n - P\|_{\mathcal{F}} = \sup_{f \in \mathcal{F}} |P_n f - Pf| \to 0
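For a finite class of sets the uniform deviation can be computed directly. A small simulation sketch, assuming P is the uniform distribution on [0, 1] and the class consists of a handful of intervals [0, t], so that P([0, t]) = t exactly:

```python
import random

random.seed(1)
# Assumed setup: P = Uniform[0, 1]; for c_t = [0, t] we know P(c_t) = t exactly.
ts = [0.1, 0.25, 0.5, 0.75, 0.9]

def sup_deviation(n):
    """sup over the finite class of |P_n(c_t) - P(c_t)|, for one sample of size n."""
    xs = [random.random() for _ in range(n)]
    return max(abs(sum(x <= t for x in xs) / n - t) for t in ts)

deviation = sup_deviation(100_000)  # shrinks toward 0 as the sample size grows
```

Any finite class of measurable sets is Glivenko–Cantelli, so the deviation above tends to 0 almost surely; the interesting cases are infinite classes such as the one in the next section.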
Empirical distribution function
The empirical distribution function provides an example of empirical measures. For real-valued iid random variables X_1, \dots, X_n it is given by
  F_n(x) = P_n((-\infty, x]) = P_n I_{(-\infty, x]}
In this case, empirical measures are indexed by the class \mathcal{C} = \{(-\infty, x] : x \in \mathbb{R}\}. It has been shown that \mathcal{C} is a uniform Glivenko–Cantelli class; in particular,
  \sup_F \|F_n - F\|_\infty \to 0
with probability 1.
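A sketch of this convergence for the empirical distribution function, assuming the sample is drawn from Uniform[0, 1], whose distribution function is F(x) = x on [0, 1]:

```python
import bisect
import random

random.seed(2)
n = 50_000
xs = sorted(random.random() for _ in range(n))   # iid Uniform[0, 1]; true cdf F(x) = x

def F_n(x):
    """Empirical distribution function: fraction of sample points <= x."""
    return bisect.bisect_right(xs, x) / n

# sup_x |F_n(x) - F(x)| is attained at the jump points of the step function F_n,
# so it suffices to check both one-sided gaps at each order statistic.
sup_dist = max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))
```

With probability 1, sup_dist tends to 0 as n grows, in line with the Glivenko–Cantelli theorem; for n = 50,000 it is typically on the order of 1/\sqrt{n}.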
- Vapnik, V.; Chervonenkis, A. (1968). "Uniform convergence of frequencies of occurrence of events to their probabilities". Dokl. Akad. Nauk SSSR 181.
- Billingsley, P. (1995). Probability and Measure (Third ed.). New York: John Wiley and Sons. ISBN 0-471-80478-9.
- Donsker, M. D. (1952). "Justification and extension of Doob's heuristic approach to the Kolmogorov–Smirnov theorems". Annals of Mathematical Statistics 23 (2): 277–281. doi:10.1214/aoms/1177729445.
- Dudley, R. M. (1978). "Central limit theorems for empirical measures". Annals of Probability 6 (6): 899–929. doi:10.1214/aop/1176995384. JSTOR 2243028.
- Dudley, R. M. (1999). Uniform Central Limit Theorems. Cambridge Studies in Advanced Mathematics 63. Cambridge, UK: Cambridge University Press. ISBN 0-521-46102-2.
- Wolfowitz, J. (1954). "Generalization of the theorem of Glivenko–Cantelli". Annals of Mathematical Statistics 25 (1): 131–138. doi:10.1214/aoms/1177728852. JSTOR 2236518.