Wasserstein metric


In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space $M$. It is named after Leonid Vaseršteĭn.

Intuitively, if each distribution is viewed as a unit amount of earth (soil) piled on $M$, the metric is the minimum "cost" of turning one pile into the other, which is assumed to be the amount of earth that needs to be moved times the mean distance it has to be moved. This problem was first formalised by Gaspard Monge in 1781. Because of this analogy, the metric is known in computer science as the earth mover's distance.

The name "Wasserstein distance" was coined by R. L. Dobrushin in 1970, after learning of it in the work of Leonid Vaseršteĭn on Markov processes describing large systems of automata[1] (Russian, 1969). However the metric was first defined by Leonid Kantorovich in The Mathematical Method of Production Planning and Organization[2] (Russian original 1939) in the context of optimal transport planning of goods and materials. Some scholars thus encourage use of the terms "Kantorovich metric" and "Kantorovich distance". Most English-language publications use the German spelling "Wasserstein" (attributed to the name "Vaseršteĭn" being of German origin).

Definition

Let $(M, d)$ be a metric space for which every Borel probability measure on $M$ is a Radon measure (a so-called Radon space). For $p \in [1, \infty)$, let $P_p(M)$ denote the collection of all probability measures $\mu$ on $M$ with finite $p$-th moment, that is, there exists some $x_0$ in $M$ such that:

$$\int_M d(x, x_0)^p \, d\mu(x) < \infty.$$

The $p$-Wasserstein distance between two probability measures $\mu$ and $\nu$ in $P_p(M)$ is defined as

$$W_p(\mu, \nu) = \left( \inf_{\gamma \in \Gamma(\mu, \nu)} \int_{M \times M} d(x, y)^p \, d\gamma(x, y) \right)^{1/p},$$

where $\Gamma(\mu, \nu)$ denotes the collection of all measures on $M \times M$ with marginals $\mu$ and $\nu$ on the first and second factors respectively. (The set $\Gamma(\mu, \nu)$ is also called the set of all couplings of $\mu$ and $\nu$.)

The above distance is usually denoted $W_p(\mu, \nu)$ (typically among authors who prefer the "Wasserstein" spelling) or $\ell_p(\mu, \nu)$ (typically among authors who prefer the "Vaserstein" spelling). The remainder of this article will use the $W_p$ notation.

The Wasserstein metric may be equivalently defined by

$$W_p(\mu, \nu)^p = \inf \mathbb{E}\bigl[d(X, Y)^p\bigr],$$

where $\mathbb{E}[Z]$ denotes the expected value of a random variable $Z$ and the infimum is taken over all joint distributions of the random variables $X$ and $Y$ with marginals $\mu$ and $\nu$ respectively.

Intuition and connection to optimal transport

[Figure: two one-dimensional distributions $\mu$ and $\nu$, plotted on the x and y axes, and one possible joint distribution that defines a transport plan between them. The joint distribution/transport plan is not unique.]

One way to understand the above definition is to consider the optimal transport problem. That is, for a distribution of mass $\mu(x)$ on a space $X$, we wish to transport the mass in such a way that it is transformed into the distribution $\nu(x)$ on the same space; transforming the 'pile of earth' $\mu$ to the pile $\nu$. This problem only makes sense if the pile to be created has the same mass as the pile to be moved; therefore without loss of generality assume that $\mu$ and $\nu$ are probability distributions containing a total mass of 1. Assume also that there is given some cost function

$$c(x, y) \geq 0$$

that gives the cost of transporting a unit mass from the point $x$ to the point $y$. A transport plan to move $\mu$ into $\nu$ can be described by a function $\gamma(x, y)$ which gives the amount of mass to move from $x$ to $y$. You can imagine the task as the need to move a pile of earth of shape $\mu$ to the hole in the ground of shape $\nu$ such that at the end, both the pile of earth and the hole in the ground completely vanish. In order for this plan to be meaningful, it must satisfy the following properties:

$$\int \gamma(x, y) \, dy = \mu(x), \qquad \int \gamma(x, y) \, dx = \nu(y).$$

That is, the total mass moved out of an infinitesimal region around $x$ must be equal to $\mu(x) \, dx$ and the total mass moved into a region around $y$ must be $\nu(y) \, dy$. This is equivalent to the requirement that $\gamma$ be a joint probability distribution with marginals $\mu$ and $\nu$. Thus, the infinitesimal mass transported from $x$ to $y$ is $\gamma(x, y) \, dx \, dy$, and the cost of moving is $c(x, y) \, \gamma(x, y) \, dx \, dy$, following the definition of the cost function. Therefore, the total cost of a transport plan $\gamma$ is

$$\iint c(x, y) \, \gamma(x, y) \, dx \, dy = \int c(x, y) \, d\gamma(x, y).$$

The plan $\gamma$ is not unique; the optimal transport plan is the plan with the minimal cost out of all possible transport plans. As mentioned, the requirement for a plan to be valid is that it is a joint distribution with marginals $\mu$ and $\nu$; letting $\Gamma(\mu, \nu)$ denote the set of all such measures as in the first section, the cost of the optimal plan is

$$C = \inf_{\gamma \in \Gamma(\mu, \nu)} \int c(x, y) \, d\gamma(x, y).$$

If the cost of a move is simply the distance between the two points, then the optimal cost is identical to the definition of the $W_1$ distance.
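For discrete distributions this bookkeeping is just matrix arithmetic, and a candidate plan can be checked and priced directly. A minimal sketch in Python (the three-point example data is made up for illustration):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])            # support points
mu = np.array([0.5, 0.5, 0.0])           # pile to move
nu = np.array([0.0, 0.5, 0.5])           # target pile
cost = np.abs(x[:, None] - x[None, :])   # c(x, y) = |x - y|

# gamma[i, j] = mass moved from x[i] to x[j]: here, shift each
# half-unit of mass one step to the right.
gamma = np.array([[0.0, 0.5, 0.0],
                  [0.0, 0.0, 0.5],
                  [0.0, 0.0, 0.0]])

# A valid plan must have row sums mu and column sums nu.
assert np.allclose(gamma.sum(axis=1), mu)
assert np.allclose(gamma.sum(axis=0), nu)

# Total cost of this plan: sum of c(x, y) * gamma(x, y).
print(np.sum(cost * gamma))              # 1.0
```

This particular plan happens to be optimal (each mass element moves the minimum possible distance), so 1.0 is also the value of $W_1$ here; in general one must minimise over all feasible $\gamma$, which is the linear program treated in the Proof section below.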

Examples

Point masses (degenerate distributions)

Let $\mu_1 = \delta_{a_1}$ and $\mu_2 = \delta_{a_2}$ be two degenerate distributions (i.e. Dirac delta distributions) located at points $a_1$ and $a_2$ in $\mathbb{R}$. There is only one possible coupling of these two measures, namely the point mass $\delta_{(a_1, a_2)}$ located at $(a_1, a_2)$. Thus, using the usual absolute value function as the distance function on $\mathbb{R}$, for any $p \geq 1$, the $p$-Wasserstein distance between $\mu_1$ and $\mu_2$ is

$$W_p(\mu_1, \mu_2) = |a_1 - a_2|.$$

By similar reasoning, if $\mu_1 = \delta_{a_1}$ and $\mu_2 = \delta_{a_2}$ are point masses located at points $a_1$ and $a_2$ in $\mathbb{R}^n$, and we use the usual Euclidean norm on $\mathbb{R}^n$ as the distance function, then

$$W_p(\mu_1, \mu_2) = \|a_1 - a_2\|_2.$$

Normal distributions

Let $\mu_1 = \mathcal{N}(m_1, C_1)$ and $\mu_2 = \mathcal{N}(m_2, C_2)$ be two non-degenerate Gaussian measures (i.e. normal distributions) on $\mathbb{R}^n$, with respective expected values $m_1$ and $m_2 \in \mathbb{R}^n$ and symmetric positive semi-definite covariance matrices $C_1$ and $C_2$. Then,[3] with respect to the usual Euclidean norm on $\mathbb{R}^n$, the 2-Wasserstein distance between $\mu_1$ and $\mu_2$ is

$$W_2(\mu_1, \mu_2)^2 = \|m_1 - m_2\|_2^2 + \operatorname{trace}\Bigl(C_1 + C_2 - 2\bigl(C_2^{1/2} C_1 C_2^{1/2}\bigr)^{1/2}\Bigr).$$

This result generalises the earlier example of the Wasserstein distance between two point masses (at least in the case $p = 2$), since a point mass can be regarded as a normal distribution with covariance matrix equal to zero, in which case the trace term disappears and only the term involving the Euclidean distance between the means remains.
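The closed form is straightforward to evaluate numerically. A minimal sketch (the helper name `gaussian_w2` is ours, not a library function), using `scipy.linalg.sqrtm` for the matrix square roots:

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, C1, m2, C2):
    """2-Wasserstein distance between N(m1, C1) and N(m2, C2)."""
    cross = sqrtm(sqrtm(C2) @ C1 @ sqrtm(C2))
    cross = np.real(cross)  # sqrtm may return a tiny imaginary part
    d2 = (np.sum((np.asarray(m1) - np.asarray(m2)) ** 2)
          + np.trace(C1 + C2 - 2.0 * cross))
    return float(np.sqrt(max(d2, 0.0)))

# Equal covariances: the trace term vanishes and only the distance
# between the means remains, as noted above.
I = np.eye(2)
print(gaussian_w2([0.0, 0.0], I, [3.0, 4.0], I))  # 5.0
```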

One-dimensional distributions

Let $\mu, \nu$ be probability measures on $\mathbb{R}$, and denote their cumulative distribution functions by $F_\mu(x)$ and $F_\nu(x)$. Then the transport problem has an analytic solution: optimal transport preserves the order of probability mass elements, so the mass at quantile $q$ of $\mu$ moves to quantile $q$ of $\nu$. Thus, the $p$-Wasserstein distance between $\mu$ and $\nu$ is

$$W_p(\mu, \nu) = \left( \int_0^1 \bigl| F_\mu^{-1}(q) - F_\nu^{-1}(q) \bigr|^p \, dq \right)^{1/p},$$

where $F_\mu^{-1}$ and $F_\nu^{-1}$ are the quantile functions (inverse CDFs). In the case of $p = 1$, a change of variables leads to the formula

$$W_1(\mu, \nu) = \int_{\mathbb{R}} \bigl| F_\mu(x) - F_\nu(x) \bigr| \, dx.$$
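The $p = 1$ formula is what `scipy.stats.wasserstein_distance` computes for empirical distributions, which makes it easy to sanity-check numerically (a sketch; the two distributions below are an arbitrary choice):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 5000)   # samples from mu
b = rng.normal(1.0, 1.5, 5000)   # samples from nu

# SciPy evaluates W_1 between the two empirical distributions.
print(wasserstein_distance(a, b))

# Direct check of the formula: integrate |F_mu - F_nu| on a grid.
xs = np.linspace(-8.0, 10.0, 20001)
dx = xs[1] - xs[0]
F_a = np.searchsorted(np.sort(a), xs, side="right") / len(a)
F_b = np.searchsorted(np.sort(b), xs, side="right") / len(b)
print(np.sum(np.abs(F_a - F_b)) * dx)   # agrees to grid accuracy
```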

Applications

The Wasserstein metric is a natural way to compare the probability distributions of two variables X and Y, where one variable is derived from the other by small, non-uniform perturbations (random or deterministic).

In computer science, for example, the metric W1 is widely used to compare discrete distributions, e.g. the color histograms of two digital images; see earth mover's distance for more details.

In their paper 'Wasserstein GAN', Arjovsky et al.[4] use the Wasserstein-1 metric as a way to improve the original framework of Generative Adversarial Networks (GAN), to alleviate the vanishing gradient and the mode collapse issues. The special case of normal distributions is used in the Fréchet inception distance.

The Wasserstein metric has a formal link with Procrustes analysis, with application to chirality measures,[5] and to shape analysis.[6]

In computational biology, the Wasserstein metric can be used to compare persistence diagrams of cytometry datasets.[7]

The Wasserstein metric also has been used in inverse problems in geophysics.[8]

Properties

Metric structure

It can be shown that $W_p$ satisfies all the axioms of a metric on $P_p(M)$. Furthermore, convergence with respect to $W_p$ is equivalent to the usual weak convergence of measures plus convergence of the $p$-th moments.[9]

Dual representation of W1

The following dual representation of $W_1$ is a special case of the duality theorem of Kantorovich and Rubinstein (1958): when $\mu$ and $\nu$ have bounded support,

$$W_1(\mu, \nu) = \sup \left\{ \int_M f(x) \, d(\mu - \nu)(x) \;\middle|\; \text{continuous } f : M \to \mathbb{R},\ \operatorname{Lip}(f) \leq 1 \right\},$$

where Lip(f) denotes the minimal Lipschitz constant for f.

Compare this with the definition of the Radon metric:

$$\rho(\mu, \nu) = \sup \left\{ \int_M f(x) \, d(\mu - \nu)(x) \;\middle|\; \text{continuous } f : M \to [-1, 1] \right\}.$$

If the metric $d$ is bounded by some constant $C$, then

$$2 \, W_1(\mu, \nu) \leq C \, \rho(\mu, \nu),$$

and so convergence in the Radon metric (identical to total variation convergence when M is a Polish space) implies convergence in the Wasserstein metric, but not vice versa.

Proof

Discrete case: When $M$ is discrete, solving for the 1-Wasserstein distance is a problem in linear programming:

$$\begin{cases} \min_\gamma \sum_{x, y} C(x, y) \, \gamma(x, y) \\ \sum_y \gamma(x, y) = \mu(x) \\ \sum_x \gamma(x, y) = \nu(y) \\ \gamma \geq 0 \end{cases}$$

where $\gamma$ ranges over couplings of $\mu$ and $\nu$, and $C : M \times M \to [0, \infty)$ is a general "cost function".

By carefully writing the above equations as matrix equations, then using the duality theorem of linear programming, we obtain its dual problem[10]:

$$\begin{cases} \max_{f, g} \sum_x \mu(x) f(x) + \sum_y \nu(y) g(y) \\ f(x) + g(y) \leq C(x, y) \end{cases}$$

For the general case, the dual problem is found by converting sums to integrals.
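On a small discrete example the primal problem can be handed to any LP solver. A sketch using `scipy.optimize.linprog` (the four-point data is arbitrary), cross-checked against SciPy's one-dimensional routine:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import wasserstein_distance

x = np.arange(4.0)                       # support points
mu = np.array([0.4, 0.3, 0.2, 0.1])
nu = np.array([0.1, 0.2, 0.3, 0.4])
C = np.abs(x[:, None] - x[None, :])      # C(x, y) = |x - y|
n = len(x)

# Flatten gamma row-major; encode "row sums = mu, column sums = nu".
A_eq = np.vstack([np.kron(np.eye(n), np.ones(n)),    # sum_y gamma(x, y)
                  np.kron(np.ones(n), np.eye(n))])   # sum_x gamma(x, y)
b_eq = np.concatenate([mu, nu])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.fun)                               # 1.0
print(wasserstein_distance(x, x, mu, nu))    # 1.0, same value
```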

Theorem (Kantorovich–Rubinstein duality) — When the probability space is also a metric space $(M, d)$ and the cost function is the metric, $C(x, y) = d(x, y)$, then

$$W_1(\mu, \nu) = \frac{1}{K} \sup_{\|f\|_L \leq K} \Bigl( \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{y \sim \nu}[f(y)] \Bigr)$$

for any $K > 0$, where $\|f\|_L$ denotes the Lipschitz constant of $f$.
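The dual side can be checked on the same four-point example as above: with $K = 1$, maximise $\mathbb{E}_\mu[f] - \mathbb{E}_\nu[f]$ subject to the Lipschitz constraints $f(x_i) - f(x_j) \leq d(x_i, x_j)$, again as a linear program (a sketch, reusing the data from the primal example):

```python
import numpy as np
from scipy.optimize import linprog

x = np.arange(4.0)
mu = np.array([0.4, 0.3, 0.2, 0.1])
nu = np.array([0.1, 0.2, 0.3, 0.4])
n = len(x)

# One constraint f(x_i) - f(x_j) <= d(x_i, x_j) per ordered pair.
rows, b_ub = [], []
for i in range(n):
    for j in range(n):
        if i != j:
            row = np.zeros(n)
            row[i], row[j] = 1.0, -1.0
            rows.append(row)
            b_ub.append(abs(x[i] - x[j]))

# linprog minimises, so negate the objective E_mu[f] - E_nu[f].
# Pin f(x_0) = 0: the objective is invariant under constant shifts of f.
bounds = [(0.0, 0.0)] + [(None, None)] * (n - 1)
res = linprog(-(mu - nu), A_ub=np.array(rows), b_ub=np.array(b_ub),
              bounds=bounds)
print(-res.fun)   # 1.0, matching the primal LP value above
```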

Proof

It suffices to prove the case of $K = 1$. Start from the dual problem above:

$$W_1(\mu, \nu) = \sup_{f(x) + g(y) \leq d(x, y)} \mathbb{E}_{x \sim \mu}[f(x)] + \mathbb{E}_{y \sim \nu}[g(y)].$$

Then, for any choice of $g$, one can push the first term higher by setting $f(x) = \inf_y \bigl( d(x, y) - g(y) \bigr)$, making $f$ an infimal convolution of $-g$ with a cone. This implies $f(x) - f(y) \leq d(x, y)$ for any $x, y$, that is, $\|f\|_L \leq 1$.

Thus,

$$\begin{aligned} W_1(\mu, \nu) &= \sup_g \sup_{f(x) + g(y) \leq d(x, y)} \mathbb{E}_{x \sim \mu}[f(x)] + \mathbb{E}_{y \sim \nu}[g(y)] \\ &= \sup_g \sup_{\|f\|_L \leq 1,\; f(x) + g(y) \leq d(x, y)} \mathbb{E}_{x \sim \mu}[f(x)] + \mathbb{E}_{y \sim \nu}[g(y)] \\ &= \sup_{\|f\|_L \leq 1} \sup_{g,\; f(x) + g(y) \leq d(x, y)} \mathbb{E}_{x \sim \mu}[f(x)] + \mathbb{E}_{y \sim \nu}[g(y)]. \end{aligned}$$

Next, for any choice of $f$ with $\|f\|_L \leq 1$, $g$ is optimized by setting $g(y) = \inf_x \bigl( d(x, y) - f(x) \bigr)$, the largest value the constraint allows. Since $\|f\|_L \leq 1$ gives $d(x, y) - f(x) \geq -f(y)$ for every $x$, while taking $x = y$ attains $-f(y)$, this infimum equals $-f(y)$, which completes the proof.

The infimal convolution step is visually clear when the probability space is $\mathbb{R}$. One simply plots the curve of $-g$, then at each point draws a cone and takes the lower envelope of the cones as $f$, as shown in the diagram.

[Figure: infimal convolution of a cone with a curve $-g$.]

1D Example. When both $\mu$ and $\nu$ are distributions on $\mathbb{R}$, integration by parts gives

$$\mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{y \sim \nu}[f(y)] = \int f'(x) \bigl( F_\nu(x) - F_\mu(x) \bigr) \, dx,$$

so the supremum over $\|f\|_L \leq K$ is attained by any $f$ with

$$f'(x) = K \cdot \operatorname{sign}\bigl( F_\nu(x) - F_\mu(x) \bigr).$$
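As a numerical check of this optimiser (a sketch with $K = 1$; the two distributions are an arbitrary choice): build $f$ by integrating $\operatorname{sign}(F_\nu - F_\mu)$ on a grid, then compare $\mathbb{E}_\mu[f] - \mathbb{E}_\nu[f]$ with $W_1$ computed directly:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 4000)      # samples from mu
b = rng.exponential(1.0, 4000)      # samples from nu

xs = np.linspace(-6.0, 10.0, 16001)
dx = xs[1] - xs[0]
F_a = np.searchsorted(np.sort(a), xs, side="right") / len(a)
F_b = np.searchsorted(np.sort(b), xs, side="right") / len(b)

# The optimal 1-Lipschitz potential: f' = sign(F_nu - F_mu).
f = np.cumsum(np.sign(F_b - F_a)) * dx

# E_mu[f] - E_nu[f], evaluating f at the samples by interpolation.
gap = np.interp(a, xs, f).mean() - np.interp(b, xs, f).mean()
print(gap, wasserstein_distance(a, b))   # the two agree closely
```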

Equivalence of W2 and a negative-order Sobolev norm

Under suitable assumptions, the Wasserstein distance $W_2$ of order two is Lipschitz equivalent to a negative-order homogeneous Sobolev norm.[11] More precisely, if we take $M$ to be a connected Riemannian manifold equipped with a positive measure $\pi$, then we may define for $f : M \to \mathbb{R}$ the seminorm

$$\|f\|_{\dot{H}^1(\pi)}^2 = \int_M |\nabla f(x)|^2 \, \pi(dx)$$

and for a signed measure $\mu$ on $M$ the dual norm

$$\|\mu\|_{\dot{H}^{-1}(\pi)} = \sup \left\{ \left| \int_M f \, d\mu \right| \;\middle|\; \|f\|_{\dot{H}^1(\pi)} \leq 1 \right\}.$$

Then any two probability measures $\mu$ and $\nu$ on $M$ satisfy the upper bound

$$W_2(\mu, \nu) \leq 2 \, \|\mu - \nu\|_{\dot{H}^{-1}(\pi)}.$$

In the other direction, if $\mu$ and $\nu$ each have densities with respect to the standard volume measure on $M$ that are both bounded above by some $C < \infty$, and $M$ has non-negative Ricci curvature, then

$$\|\mu - \nu\|_{\dot{H}^{-1}(\pi)} \leq \sqrt{C} \, W_2(\mu, \nu).$$

Separability and completeness

For any $p \geq 1$, the metric space $(P_p(M), W_p)$ is separable, and is complete if $(M, d)$ is separable and complete.[12]

References

  1. ^ Vaserstein LN (1969). "Markov processes over denumerable products of spaces, describing large systems of automata" (PDF). Problemy Peredači Informacii. 5 (3): 64–72.
  2. ^ Kantorovich LV (1939). "Mathematical Methods of Organizing and Planning Production". Management Science. 6 (4): 366–422. doi:10.1287/mnsc.6.4.366. JSTOR 2627082.
  3. ^ Olkin I, Pukelsheim F (October 1982). "The distance between two random vectors with given dispersion matrices". Linear Algebra and Its Application. 48: 257–263. doi:10.1016/0024-3795(82)90112-4. ISSN 0024-3795.
  4. ^ Arjovsky M, Chintala S, Bottou L (July 2017). "Wasserstein Generative Adversarial Networks". International Conference on Machine Learning: 214–223.
  5. ^ Petitjean M (2002). "Chiral mixtures" (PDF). Journal of Mathematical Physics. 43 (8): 4147–4157. Bibcode:2002JMP....43.4147P. doi:10.1063/1.1484559.
  6. ^ Petitjean M (2004). "From shape similarity to shape complementarity: toward a docking theory". Journal of Mathematical Chemistry. 35 (3): 147–158. doi:10.1023/B:JOMC.0000033252.59423.6b. S2CID 121320315.
  7. ^ Mukherjee S, Wethington D, Dey TK, Das J (March 2022). "Determining clinically relevant features in cytometry data using persistent homology". PLOS Computational Biology. 18 (3): e1009931. arXiv:2203.06263. Bibcode:2022PLSCB..18E9931M. doi:10.1371/journal.pcbi.1009931. PMC 9009779. PMID 35312683.
  8. ^ Frederick, Christina; Yang, Yunan (2022-05-06). "Seeing through rock with help from optimal transport". Snapshots of Modern Mathematics from Oberwolfach. doi:10.14760/SNAP-2022-004-EN.
  9. ^ Clement P, Desch W (2008). "An elementary proof of the triangle inequality for the Wasserstein metric". Proceedings of the American Mathematical Society. 136 (1): 333–339. doi:10.1090/S0002-9939-07-09020-X.
  10. ^ Matoušek, Jiří; Gärtner, Bernd, "Duality of Linear Programming", Universitext, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 81–104, ISBN 978-3-540-30697-9, retrieved 2022-07-15
  11. ^ Peyre R (October 2018). "Comparison between W2 distance and Ḣ−1 norm, and localization of Wasserstein distance". ESAIM: Control, Optimisation and Calculus of Variations. 24 (4): 1489–1501. doi:10.1051/cocv/2017050. ISSN 1292-8119. (See Theorems 2.1 and 2.5.)
  12. ^ Bogachev VI, Kolesnikov AV (October 2012). "The Monge–Kantorovich problem: achievements, connections, and perspectives". Russian Mathematical Surveys. 67 (5): 785–890. Bibcode:2012RuMaS..67..785B. doi:10.1070/RM2012v067n05ABEH004808.
