# Wasserstein metric

In mathematics, the Wasserstein (or Vaserstein) metric is a distance function defined between probability distributions on a given metric space M.

Intuitively, if each distribution is viewed as a unit amount of "dirt" piled on M, the metric is the minimum "cost" of turning one pile into the other, which is assumed to be the amount of dirt moved times the mean distance it has to be moved. Because of this analogy, the metric is known in computer science as the earth mover's distance.

The name "Wasserstein distance" was coined by R. L. Dobrushin in 1970, after the Russian mathematician Leonid Vaseršteĭn who introduced the concept in 1969. Most English-language publications use the German spelling "Wasserstein" (attributed to the name "Vaserstein" being of German origin).

## Definition

Let (M, d) be a metric space for which every probability measure on M is a Radon measure (a so-called Radon space). For p ≥ 1, let Pp(M) denote the collection of all probability measures μ on M with finite pth moment: for some x0 in M,

${\displaystyle \int _{M}d(x,x_{0})^{p}\,\mathrm {d} \mu (x)<+\infty .}$

Then the pth Wasserstein distance between two probability measures μ and ν in Pp(M) is defined as

${\displaystyle W_{p}(\mu ,\nu ):=\left(\inf _{\gamma \in \Gamma (\mu ,\nu )}\int _{M\times M}d(x,y)^{p}\,\mathrm {d} \gamma (x,y)\right)^{1/p},}$

where Γ(μ, ν) denotes the collection of all measures on M × M with marginals μ and ν on the first and second factors respectively. (The set Γ(μ, ν) is also called the set of all couplings of μ and ν.)

The above distance is usually denoted Wp(μ, ν) (typically among authors who prefer the "Wasserstein" spelling) or ℓp(μ, ν) (typically among authors who prefer the "Vaserstein" spelling). The remainder of this article will use the Wp notation.

The Wasserstein metric may be equivalently defined by

${\displaystyle W_{p}(\mu ,\nu )^{p}=\inf \mathbf {E} {\big [}d(X,Y)^{p}{\big ]},}$

where E[Z] denotes the expected value of a random variable Z and the infimum is taken over all joint distributions of the random variables X and Y with marginals μ and ν respectively.
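For finite distributions the infimum over couplings is a finite-dimensional linear program. The following sketch (assuming SciPy is available; the support points and weights are made-up toy data) solves that program directly with `scipy.optimize.linprog` and checks the result against SciPy's built-in 1-D solver:

```python
# Sketch: W_1 between two finite distributions on the real line, computed
# as the linear program over couplings gamma with marginals u and v.
import numpy as np
from scipy.optimize import linprog
from scipy.stats import wasserstein_distance

x = np.array([0.0, 1.0, 3.0]); u = np.array([0.5, 0.3, 0.2])  # mu
y = np.array([1.0, 2.0]);      v = np.array([0.6, 0.4])       # nu

C = np.abs(x[:, None] - y[None, :])   # cost matrix d(x_i, y_j), here p = 1
m, n = C.shape

# Marginal constraints: row sums of gamma equal u, column sums equal v.
A_eq = []
for i in range(m):
    row = np.zeros((m, n)); row[i, :] = 1.0
    A_eq.append(row.ravel())
for j in range(n):
    col = np.zeros((m, n)); col[:, j] = 1.0
    A_eq.append(col.ravel())
b_eq = np.concatenate([u, v])

res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
w1_lp = res.fun
w1_ref = wasserstein_distance(x, y, u_weights=u, v_weights=v)  # same W_1
```

For these weights both computations give W1 = 0.9, which can also be read off as the area between the two cumulative distribution functions.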

## Examples

### Point masses (degenerate distributions)

Let ${\displaystyle \mu _{1}=\delta _{a_{1}}}$ and ${\displaystyle \mu _{2}=\delta _{a_{2}}}$ be two degenerate distributions (i.e. Dirac delta distributions) located at points ${\displaystyle a_{1}}$ and ${\displaystyle a_{2}}$ in ${\displaystyle \mathbb {R} }$. There is only one possible coupling of these two measures, namely the point mass ${\displaystyle \delta _{(a_{1},a_{2})}}$ located at ${\displaystyle (a_{1},a_{2})\in \mathbb {R} ^{2}}$. Thus, using the usual absolute value function as the distance function on ${\displaystyle \mathbb {R} }$, for any ${\displaystyle p\geq 1}$, the ${\displaystyle p}$-Wasserstein distance between ${\displaystyle \mu _{1}}$ and ${\displaystyle \mu _{2}}$ is

${\displaystyle W_{p}(\mu _{1},\mu _{2})=|a_{1}-a_{2}|.}$

By similar reasoning, if ${\displaystyle \mu _{1}=\delta _{a_{1}}}$ and ${\displaystyle \mu _{2}=\delta _{a_{2}}}$ are point masses located at points ${\displaystyle a_{1}}$ and ${\displaystyle a_{2}}$ in ${\displaystyle \mathbb {R} ^{n}}$, and we use the usual Euclidean norm on ${\displaystyle \mathbb {R} ^{n}}$ as the distance function, then

${\displaystyle W_{p}(\mu _{1},\mu _{2})=\|a_{1}-a_{2}\|_{2}.}$
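As a quick numerical sanity check (assuming SciPy), the built-in 1-D solver recovers |a1 − a2| for two single-atom samples, and because the unique coupling puts all mass at one point, the p-dependence cancels:

```python
# Sanity check: delta_2 vs delta_5 on R. The only coupling is the point
# mass at (2, 5), so the integral is |2 - 5|^p and W_p = |2 - 5| for all p.
from scipy.stats import wasserstein_distance

a1, a2 = 2.0, 5.0
w1 = wasserstein_distance([a1], [a2])        # p = 1 via SciPy
assert abs(w1 - abs(a1 - a2)) < 1e-12

for p in (1.0, 2.0, 3.5):                    # general p, by hand
    wp = (abs(a1 - a2) ** p) ** (1.0 / p)
    assert abs(wp - abs(a1 - a2)) < 1e-12
```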

### Normal distributions

Let ${\displaystyle \mu _{1}={\mathcal {N}}(m_{1},C_{1})}$ and ${\displaystyle \mu _{2}={\mathcal {N}}(m_{2},C_{2})}$ be two non-degenerate Gaussian measures (i.e. normal distributions) on ${\displaystyle \mathbb {R} ^{n}}$, with respective expected values ${\displaystyle m_{1}}$ and ${\displaystyle m_{2}\in \mathbb {R} ^{n}}$ and symmetric positive semi-definite covariance matrices ${\displaystyle C_{1}}$ and ${\displaystyle C_{2}\in \mathbb {R} ^{n\times n}}$. Then,[1] with respect to the usual Euclidean norm on ${\displaystyle \mathbb {R} ^{n}}$, the 2-Wasserstein distance between ${\displaystyle \mu _{1}}$ and ${\displaystyle \mu _{2}}$ is

${\displaystyle W_{2}(\mu _{1},\mu _{2})^{2}=\|m_{1}-m_{2}\|_{2}^{2}+\mathop {\mathrm {trace} } {\bigl (}C_{1}+C_{2}-2{\bigl (}C_{2}^{1/2}C_{1}C_{2}^{1/2}{\bigr )}^{1/2}{\bigr )}.}$

This result generalises the earlier example of the Wasserstein distance between two point masses (at least in the case ${\displaystyle p=2}$), since a point mass can be regarded as a normal distribution with covariance matrix equal to zero, in which case the trace term disappears and only the term involving the Euclidean distance between the means remains.
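The closed form above can be transcribed directly into code. The sketch below assumes SciPy; `gaussian_w2` is a hypothetical helper name, checked on two isotropic Gaussians, where the trace term reduces to n(σ1 − σ2)².

```python
# Closed-form 2-Wasserstein distance between N(m1, C1) and N(m2, C2).
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, C1, m2, C2):
    s = sqrtm(C2)                     # C2^{1/2}
    cross = sqrtm(s @ C1 @ s)         # (C2^{1/2} C1 C2^{1/2})^{1/2}
    cross = np.real(cross)            # drop tiny imaginary round-off
    val = np.sum((np.asarray(m1) - np.asarray(m2)) ** 2) \
          + np.trace(C1 + C2 - 2.0 * cross)
    return np.sqrt(max(val, 0.0))

# Isotropic check: N(0, I) vs N((3, 4), 4I) in R^2 gives
# W_2^2 = |m1 - m2|^2 + 2 (1 - 2)^2 = 25 + 2 = 27.
d = gaussian_w2(np.zeros(2), np.eye(2),
                np.array([3.0, 4.0]), 4.0 * np.eye(2))
```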

## Applications

The Wasserstein metric is a natural way to compare the probability distributions of two variables X and Y, where one variable is derived from the other by small, non-uniform perturbations (random or deterministic).

In computer science, for example, the metric W1 is widely used to compare discrete distributions, e.g. the color histograms of two digital images; see earth mover's distance for more details.

## Properties

### Metric structure

It can be shown that Wp satisfies all the axioms of a metric on Pp(M). Furthermore, convergence with respect to Wp is equivalent to the usual weak convergence of measures plus convergence of pth moments.

### Dual representation of W1

The following dual representation of W1 is a special case of the duality theorem of Kantorovich and Rubinstein (1958): when μ and ν have bounded support,

${\displaystyle W_{1}(\mu ,\nu )=\sup \left\{\left.\int _{M}f(x)\,\mathrm {d} (\mu -\nu )(x)\right|{\mbox{continuous }}f:M\to \mathbb {R} ,\mathrm {Lip} (f)\leq 1\right\},}$

where Lip(f) denotes the minimal Lipschitz constant for f.
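On a finite subset of R, the supremum over 1-Lipschitz functions is again a linear program. The sketch below (assuming SciPy, with toy weights) solves it and checks that the dual value matches the primal value from SciPy's 1-D solver:

```python
# Kantorovich-Rubinstein dual: maximise sum_i f_i (mu_i - nu_i) over
# functions f with |f_i - f_j| <= |x_i - x_j| (1-Lipschitz on the support).
import numpy as np
from scipy.optimize import linprog
from scipy.stats import wasserstein_distance

x = np.array([0.0, 1.0, 3.0])          # toy support points
mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.2, 0.5, 0.3])

n = len(x)
A_ub, b_ub = [], []
for i in range(n):
    for j in range(n):
        if i != j:                      # f_i - f_j <= d(x_i, x_j)
            row = np.zeros(n); row[i] = 1.0; row[j] = -1.0
            A_ub.append(row); b_ub.append(abs(x[i] - x[j]))

# Pin f_0 = 0: the objective is shift-invariant since sum(mu - nu) = 0.
bounds = [(0.0, 0.0)] + [(None, None)] * (n - 1)
res = linprog(-(mu - nu), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=bounds)
w1_dual = -res.fun
w1_primal = wasserstein_distance(x, x, u_weights=mu, v_weights=nu)
```

Here both sides give W1 = 0.5; an optimal dual function is f = (0, −1, −3), which has slope ±1 between consecutive support points.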

Compare this with the definition of the Radon metric:

${\displaystyle \rho (\mu ,\nu ):=\sup \left\{\left.\int _{M}f(x)\,\mathrm {d} (\mu -\nu )(x)\right|{\mbox{continuous }}f:M\to [-1,1]\right\}.}$

If the metric d is bounded by some constant C, then

${\displaystyle 2W_{1}(\mu ,\nu )\leq C\rho (\mu ,\nu ),}$

and so convergence in the Radon metric (identical to total variation convergence when M is a Polish space) implies convergence in the Wasserstein metric, but not vice versa.
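The bound can be checked numerically on a finite space, where ρ(μ, ν) reduces to Σi |μi − νi| (take f(xi) = sign(μi − νi)). A sketch assuming SciPy, with random toy distributions:

```python
# Numerical check of 2 W_1(mu, nu) <= C rho(mu, nu) on a three-point space.
import numpy as np
from scipy.stats import wasserstein_distance

x = np.array([0.0, 1.0, 3.0])
C = x.max() - x.min()                  # bound on the metric d on the support
rng = np.random.default_rng(0)
for _ in range(100):
    mu = rng.dirichlet(np.ones(3))
    nu = rng.dirichlet(np.ones(3))
    w1 = wasserstein_distance(x, x, u_weights=mu, v_weights=nu)
    rho = np.abs(mu - nu).sum()        # sup over f: M -> [-1, 1]
    assert 2.0 * w1 <= C * rho + 1e-9
ok = True
```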

### Separability and completeness

For any p ≥ 1, the metric space (Pp(M), Wp) is separable, and is complete if (M, d) is separable and complete.[2]