# Total variation distance of probability measures


In probability theory, the total variation distance is a distance measure for probability distributions. It is an example of a statistical distance metric, and is sometimes called simply "the" statistical distance.

## Definition

The total variation distance between two probability measures P and Q on a sigma-algebra ${\displaystyle {\mathcal {F}}}$ of subsets of the sample space ${\displaystyle \Omega }$ is defined via[1]

${\displaystyle \delta (P,Q)=\sup _{A\in {\mathcal {F}}}\left|P(A)-Q(A)\right|.}$

Informally, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event.
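For a small finite sample space, the supremum in the definition can be evaluated directly by enumerating every event. The sketch below does this by brute force; the function name and the example distributions are illustrative, not part of the source.

```python
from itertools import combinations

def tv_brute_force(p, q):
    """Total variation distance via the definition: maximize |P(A) - Q(A)|
    over all events A, i.e. all subsets of the (finite) sample space.
    p, q: dicts mapping outcomes to probabilities."""
    outcomes = sorted(set(p) | set(q))
    best = 0.0
    # enumerate every subset A of the sample space
    for r in range(len(outcomes) + 1):
        for A in combinations(outcomes, r):
            diff = abs(sum(p.get(x, 0.0) for x in A) - sum(q.get(x, 0.0) for x in A))
            best = max(best, diff)
    return best

p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.4, "b": 0.4, "c": 0.2}
print(tv_brute_force(p, q))  # ~0.1, attained e.g. by the event A = {"a"}
```

This enumeration is exponential in the number of outcomes, so it only serves to illustrate the definition on toy examples.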

## Special cases

For a finite or countable alphabet we can relate the total variation distance to the 1-norm of the difference of the two probability distributions as follows:[2]

${\displaystyle \delta (P,Q)={\frac {1}{2}}\|P-Q\|_{1}={\frac {1}{2}}\sum _{x}\left|P(x)-Q(x)\right|\;.}$
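On a countable alphabet this half-of-the-1-norm formula gives a linear-time computation. A minimal sketch (the function name and sample distributions are assumptions for illustration):

```python
def tv_distance(p, q):
    """Total variation distance on a finite/countable alphabet:
    half the 1-norm of the difference of the two distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.4, "b": 0.4, "c": 0.2}
print(tv_distance(p, q))  # ~0.1
```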

Similarly, for an arbitrary sample space ${\displaystyle \Omega }$, a measure ${\displaystyle \mu }$ with respect to which both ${\displaystyle P}$ and ${\displaystyle Q}$ are absolutely continuous, and Radon–Nikodym derivatives ${\displaystyle f_{P}}$ and ${\displaystyle f_{Q}}$ with respect to ${\displaystyle \mu }$, an equivalent definition of the total variation distance is

${\displaystyle \delta (P,Q)={\frac {1}{2}}\|f_{P}-f_{Q}\|_{L_{1}(\mu )}={\frac {1}{2}}\int _{\Omega }\left|f_{P}-f_{Q}\right|d\mu \;.}$
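As a numerical illustration of the integral form, the sketch below approximates the total variation distance between two normal densities (taking ${\displaystyle \mu }$ to be Lebesgue measure on the line) with a midpoint Riemann sum; the helper names, grid, and truncation interval are choices for this example.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) with respect to Lebesgue measure."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def tv_continuous(f, g, lo, hi, n=200_000):
    """Approximate (1/2) * integral of |f - g| over [lo, hi]
    by a midpoint Riemann sum with n cells."""
    h = (hi - lo) / n
    total = sum(abs(f(lo + (i + 0.5) * h) - g(lo + (i + 0.5) * h)) for i in range(n))
    return 0.5 * total * h

# TV distance between N(0, 1) and N(1, 1), truncating the integral to [-10, 11]
tv = tv_continuous(lambda x: normal_pdf(x, 0.0, 1.0),
                   lambda x: normal_pdf(x, 1.0, 1.0), -10.0, 11.0)
print(tv)  # ~0.3829
```

For two normal densities with equal variance this has a closed form, ${\displaystyle \operatorname {erf} (|\mu _{1}-\mu _{2}|/(2{\sqrt {2}}\sigma ))}$, which the numerical value above matches to several decimal places.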

## Relationship with other concepts

The total variation distance is related to the Kullback–Leibler divergence by Pinsker's inequality, which bounds it from above:

${\displaystyle \delta (P,Q)\leq {\sqrt {{\tfrac {1}{2}}D_{\mathrm {KL} }(P\parallel Q)}}\;.}$
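The bound from Pinsker's inequality, ${\displaystyle \delta (P,Q)\leq {\sqrt {{\tfrac {1}{2}}D_{\mathrm {KL} }(P\parallel Q)}}}$, can be checked numerically on a finite example; the distributions and helper names below are illustrative assumptions.

```python
import math

def tv(p, q):
    """Total variation distance on a common finite support."""
    return 0.5 * sum(abs(p[x] - q[x]) for x in p)

def kl(p, q):
    """Kullback-Leibler divergence D_KL(P||Q) in nats;
    assumes q(x) > 0 wherever p(x) > 0."""
    return sum(p[x] * math.log(p[x] / q[x]) for x in p if p[x] > 0)

p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.4, "b": 0.4, "c": 0.2}
# Pinsker's inequality: delta(P, Q) <= sqrt(D_KL(P||Q) / 2)
print(tv(p, q) <= math.sqrt(kl(p, q) / 2))  # True
```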