Total variation distance of probability measures

From Wikipedia, the free encyclopedia

In probability theory, the total variation distance is a distance measure for probability distributions. It is an example of a statistical distance metric, and is sometimes just called "the" statistical distance.

Definition

The total variation distance between two probability measures P and Q on a sigma-algebra F of subsets of the sample space Ω is defined via[1]

    δ(P, Q) = sup_{A ∈ F} |P(A) − Q(A)|.

Informally, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event.
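On a small finite sample space the supremum in the definition can be computed directly by enumerating every event. A minimal sketch in Python (the distributions P and Q below are toy values chosen for illustration, not taken from the source):

```python
from itertools import chain, combinations

# Two toy distributions on the sample space {0, 1, 2} (illustrative values).
P = {0: 0.5, 1: 0.3, 2: 0.2}
Q = {0: 0.2, 1: 0.3, 2: 0.5}

def tv_distance(p, q):
    """Brute-force the definition: max over all events A of |P(A) - Q(A)|."""
    omega = list(p)
    # All subsets of the sample space, i.e. every possible event A.
    events = chain.from_iterable(
        combinations(omega, r) for r in range(len(omega) + 1)
    )
    return max(abs(sum(p[x] for x in a) - sum(q[x] for x in a)) for a in events)

print(tv_distance(P, Q))  # 0.3 (up to floating-point rounding)
```

Here the maximizing event is A = {0}, where the two distributions disagree most.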

Special cases

For a finite alphabet we can relate the total variation distance to the 1-norm of the difference of the two probability distributions as follows:[2]

    δ(P, Q) = (1/2) ‖P − Q‖₁ = (1/2) Σ_{x ∈ Ω} |P(x) − Q(x)|.
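The finite-alphabet case reduces to a one-line computation: half the 1-norm of the difference. A sketch with illustrative toy values:

```python
# Distributions given as dicts over the same finite alphabet (toy values).
P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.2, "b": 0.3, "c": 0.5}

# Total variation distance as half the 1-norm of P - Q.
tv = 0.5 * sum(abs(P[x] - Q[x]) for x in P)
print(tv)  # 0.3 (up to floating-point rounding)
```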

Similarly, for an arbitrary sample space Ω, a measure μ, and probability measures P and Q with Radon–Nikodym derivatives f and g with respect to μ, an equivalent definition of the total variation distance is

    δ(P, Q) = (1/2) ∫_Ω |f − g| dμ.
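For densities on the real line, the integral form can be approximated numerically. A sketch using two Gaussian densities N(0, 1) and N(1, 1) as an illustrative example (these particular distributions are an assumption, not from the source); in this case the exact value is 2Φ(1/2) − 1 = erf(1/(2√2)):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of the normal distribution N(mu, sigma^2)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (
        sigma * math.sqrt(2 * math.pi)
    )

# Riemann-sum approximation of (1/2) * integral of |f - g| over the real line,
# truncated to [-10, 11], for f = density of N(0,1) and g = density of N(1,1).
lo, hi, n = -10.0, 11.0, 200_000
dx = (hi - lo) / n
tv = 0.5 * sum(
    abs(normal_pdf(lo + i * dx, 0, 1) - normal_pdf(lo + i * dx, 1, 1)) * dx
    for i in range(n)
)
print(tv)  # close to 2*Phi(1/2) - 1 ≈ 0.3829
```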

Relationship with other concepts

The total variation distance is related to the Kullback–Leibler divergence by Pinsker's inequality:

    δ(P, Q) ≤ √(D_KL(P ‖ Q) / 2).
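Pinsker's inequality can be verified numerically on a small example; a sketch with toy distributions (illustrative values, KL divergence taken in nats):

```python
import math

# Toy distributions on a three-letter alphabet (illustrative values).
P = [0.5, 0.3, 0.2]
Q = [0.2, 0.3, 0.5]

# Total variation distance as half the 1-norm of the difference.
tv = 0.5 * sum(abs(p - q) for p, q in zip(P, Q))

# Kullback-Leibler divergence D_KL(P || Q) in nats.
kl = sum(p * math.log(p / q) for p, q in zip(P, Q) if p > 0)

# Pinsker's inequality: TV <= sqrt(KL / 2).
print(tv, math.sqrt(kl / 2), tv <= math.sqrt(kl / 2))
```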

References

  1. ^ Chatterjee, Sourav. "Distances between probability measures" (PDF). UC Berkeley. Archived from the original (PDF) on July 8, 2008. Retrieved 21 June 2013. 
  2. ^ Levin, David Asher; Peres, Yuval; Wilmer, Elizabeth Lee. Markov Chains and Mixing Times. American Mathematical Soc. ISBN 9780821886274.