# Bhattacharyya distance

In statistics, the Bhattacharyya distance measures the similarity of two probability distributions. It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples or populations.

It is not a metric, despite being called a "distance", since it does not obey the triangle inequality.

## History

Both the Bhattacharyya distance and the Bhattacharyya coefficient are named after Anil Kumar Bhattacharyya, a statistician who worked in the 1930s at the Indian Statistical Institute.[1] He developed the method to measure the distance between two non-normal distributions and illustrated this with the classical multinomial populations[2] as well as probability distributions that are absolutely continuous with respect to the Lebesgue measure.[3] The latter work appeared partly in 1943 in the Bulletin of the Calcutta Mathematical Society,[3] while the former part, despite being submitted for publication in 1941, appeared almost five years later in Sankhya.[2][1]

## Definition

For probability distributions ${\displaystyle P}$ and ${\displaystyle Q}$ on the same domain ${\displaystyle {\mathcal {X}}}$, the Bhattacharyya distance is defined as

${\displaystyle D_{B}(P,Q)=-\ln \left(BC(P,Q)\right)}$

where

${\displaystyle BC(P,Q)=\sum _{x\in {\mathcal {X}}}{\sqrt {P(x)Q(x)}}}$

is the Bhattacharyya coefficient for discrete probability distributions.

For continuous probability distributions, with ${\displaystyle P(dx)=p(x)dx}$ and ${\displaystyle Q(dx)=q(x)dx}$ where ${\displaystyle p(x)}$ and ${\displaystyle q(x)}$ are the probability density functions, the Bhattacharyya coefficient is defined as

${\displaystyle BC(P,Q)=\int _{\mathcal {X}}{\sqrt {p(x)q(x)}}\,dx}$.
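
As a concrete illustration, here is a minimal Python sketch of both cases (the function names, the grid-based approximation, and the example probabilities are ours, chosen purely for illustration):

```python
import numpy as np

def bhattacharyya_discrete(p, q):
    # BC(P, Q) = sum_x sqrt(P(x) Q(x)); D_B(P, Q) = -ln BC(P, Q).
    bc = np.sum(np.sqrt(np.asarray(p) * np.asarray(q)))
    return bc, -np.log(bc)

def bhattacharyya_continuous(p, q, x):
    # Riemann-sum approximation of the integral of sqrt(p(x) q(x))
    # over a uniform grid x, given density functions p and q.
    bc = np.sum(np.sqrt(p(x) * q(x))) * (x[1] - x[0])
    return bc, -np.log(bc)

bc, db = bhattacharyya_discrete([0.5, 0.3, 0.2], [0.4, 0.4, 0.2])
print(bc, db)   # similar distributions: BC near 1, D_B near 0
```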

More generally, given two probability measures ${\displaystyle P,Q}$ on a measurable space ${\displaystyle ({\mathcal {X}},{\mathcal {B}})}$, let ${\displaystyle \lambda }$ be a (σ-finite) measure such that ${\displaystyle P}$ and ${\displaystyle Q}$ are absolutely continuous with respect to ${\displaystyle \lambda }$, i.e. such that ${\displaystyle P(dx)=p(x)\lambda (dx)}$ and ${\displaystyle Q(dx)=q(x)\lambda (dx)}$ for probability density functions ${\displaystyle p,q}$ with respect to ${\displaystyle \lambda }$ defined ${\displaystyle \lambda }$-almost everywhere. Such a measure (indeed, even such a probability measure) always exists, e.g. ${\displaystyle \lambda ={\tfrac {1}{2}}(P+Q)}$. Then define the Bhattacharyya measure on ${\displaystyle ({\mathcal {X}},{\mathcal {B}})}$ by

${\displaystyle bc(dx|P,Q)={\sqrt {p(x)q(x)}}\,\lambda (dx)={\sqrt {{\frac {P(dx)}{\lambda (dx)}}(x){\frac {Q(dx)}{\lambda (dx)}}(x)}}\lambda (dx).}$

It does not depend on the choice of the measure ${\displaystyle \lambda }$: if we choose a measure ${\displaystyle \mu }$ such that ${\displaystyle \lambda }$ and another choice ${\displaystyle \lambda '}$ are both absolutely continuous with respect to ${\displaystyle \mu }$, i.e. ${\displaystyle \lambda (dx)=l(x)\mu (dx)}$ and ${\displaystyle \lambda '(dx)=l'(x)\mu (dx)}$, then

${\displaystyle P(dx)=p(x)\lambda (dx)=p'(x)\lambda '(dx)=p(x)l(x)\mu (dx)=p'(x)l'(x)\mu (dx)}$,

and similarly for ${\displaystyle Q}$. We then have

${\displaystyle bc(dx|P,Q)={\sqrt {p(x)q(x)}}\,\lambda (dx)={\sqrt {p(x)q(x)}}\,l(x)\,\mu (dx)={\sqrt {p(x)l(x)\,q(x)l(x)}}\,\mu (dx)={\sqrt {p'(x)l'(x)\,q'(x)l'(x)}}\,\mu (dx)={\sqrt {p'(x)q'(x)}}\,\lambda '(dx)}$.

We finally define the Bhattacharyya coefficient

${\displaystyle BC(P,Q)=\int _{\mathcal {X}}bc(dx|P,Q)=\int _{\mathcal {X}}{\sqrt {p(x)q(x)}}\,\lambda (dx)}$.

By the above, the quantity ${\displaystyle BC(P,Q)}$ does not depend on ${\displaystyle \lambda }$, and by the Cauchy–Schwarz inequality ${\displaystyle 0\leq BC(P,Q)\leq 1}$. In particular, if ${\displaystyle P(dx)=p(x)Q(dx)}$, i.e. ${\displaystyle P}$ is absolutely continuous with respect to ${\displaystyle Q}$ with Radon–Nikodym derivative ${\displaystyle p(x)={\frac {P(dx)}{Q(dx)}}(x)}$, then

${\displaystyle BC(P,Q)=\int _{\mathcal {X}}{\sqrt {p(x)}}Q(dx)=\int _{\mathcal {X}}{\sqrt {\frac {P(dx)}{Q(dx)}}}Q(dx)=E_{Q}\left[{\sqrt {\frac {P(dx)}{Q(dx)}}}\right]}$
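
This expectation form suggests a simple Monte Carlo estimate of ${\displaystyle BC(P,Q)}$: draw samples from ${\displaystyle Q}$ and average the square root of the density ratio. A minimal numpy sketch, with ${\displaystyle P={\mathcal {N}}(1,1)}$ and ${\displaystyle Q={\mathcal {N}}(0,1)}$ chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# BC(P, Q) = E_Q[sqrt(dP/dQ)]: sample from Q, average the square root
# of the density ratio p(x) / q(x).
x = rng.normal(0.0, 1.0, size=100_000)          # draws from Q
bc_mc = np.mean(np.sqrt(normal_pdf(x, 1.0, 1.0) / normal_pdf(x, 0.0, 1.0)))
print(bc_mc)   # ~0.8825 = exp(-1/8), cf. the normal closed form in Properties
```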

## Properties

${\displaystyle 0\leq BC\leq 1}$ and ${\displaystyle 0\leq D_{B}\leq \infty }$.

${\displaystyle D_{B}}$ does not obey the triangle inequality, though the Hellinger distance ${\displaystyle {\sqrt {1-BC(p,q)}}}$ does.

Let ${\displaystyle p\sim {\mathcal {N}}(\mu _{p},\sigma _{p}^{2})}$, ${\displaystyle q\sim {\mathcal {N}}(\mu _{q},\sigma _{q}^{2})}$, where ${\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}$ is the normal distribution with mean ${\displaystyle \mu }$ and variance ${\displaystyle \sigma ^{2}}$; then

${\displaystyle D_{B}(p,q)={\frac {1}{4}}{\frac {(\mu _{p}-\mu _{q})^{2}}{\sigma _{p}^{2}+\sigma _{q}^{2}}}+{\frac {1}{2}}\ln \left({\frac {\sigma _{p}^{2}+\sigma _{q}^{2}}{2\sigma _{p}\sigma _{q}}}\right)}$.
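
This closed form translates directly into code; a minimal sketch (the function name is ours):

```python
import numpy as np

def bhattacharyya_normal(mu_p, sigma_p, mu_q, sigma_q):
    # D_B between N(mu_p, sigma_p^2) and N(mu_q, sigma_q^2), per the formula above.
    var_sum = sigma_p ** 2 + sigma_q ** 2
    return (0.25 * (mu_p - mu_q) ** 2 / var_sum
            + 0.5 * np.log(var_sum / (2 * sigma_p * sigma_q)))

print(bhattacharyya_normal(1.0, 1.0, 0.0, 1.0))  # 0.125, i.e. BC = exp(-1/8)
```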

More generally, given two multivariate normal distributions ${\displaystyle p_{i}={\mathcal {N}}({\boldsymbol {\mu }}_{i},\,{\boldsymbol {\Sigma }}_{i})}$,

${\displaystyle D_{B}(p_{1},p_{2})={1 \over 8}({\boldsymbol {\mu }}_{1}-{\boldsymbol {\mu }}_{2})^{T}{\boldsymbol {\Sigma }}^{-1}({\boldsymbol {\mu }}_{1}-{\boldsymbol {\mu }}_{2})+{1 \over 2}\ln \,\left({\det {\boldsymbol {\Sigma }} \over {\sqrt {\det {\boldsymbol {\Sigma }}_{1}\,\det {\boldsymbol {\Sigma }}_{2}}}}\right)}$,

where ${\displaystyle {\boldsymbol {\Sigma }}={{\boldsymbol {\Sigma }}_{1}+{\boldsymbol {\Sigma }}_{2} \over 2}.}$[4] Note that the first term is one-eighth of the squared Mahalanobis distance between the means, taken with respect to the averaged covariance ${\displaystyle {\boldsymbol {\Sigma }}}$.
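
A sketch of the multivariate formula (a direct transcription, using `slogdet` for numerical stability; the function name is illustrative):

```python
import numpy as np

def bhattacharyya_mvn(mu1, cov1, mu2, cov2):
    # D_B between N(mu1, cov1) and N(mu2, cov2), per the formula above.
    cov = 0.5 * (cov1 + cov2)                 # Sigma = (Sigma_1 + Sigma_2) / 2
    diff = mu1 - mu2
    term_mean = 0.125 * diff @ np.linalg.solve(cov, diff)   # 1/8 * Mahalanobis^2
    logdet = np.linalg.slogdet(cov)[1]
    logdet1 = np.linalg.slogdet(cov1)[1]
    logdet2 = np.linalg.slogdet(cov2)[1]
    term_cov = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return term_mean + term_cov

mu1, mu2, cov = np.zeros(2), np.array([1.0, 0.0]), np.eye(2)
print(bhattacharyya_mvn(mu1, cov, mu2, cov))  # 0.125: reduces to the univariate case
```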

## Applications

The Bhattacharyya coefficient quantifies the "closeness" of two random statistical samples.

Given two sequences of samples drawn from distributions ${\displaystyle P}$ and ${\displaystyle Q}$, bin them into ${\displaystyle n}$ buckets, and let the relative frequency of samples from ${\displaystyle P}$ in bucket ${\displaystyle i}$ be ${\displaystyle p_{i}}$, and similarly for ${\displaystyle q_{i}}$; then the sample Bhattacharyya coefficient is

${\displaystyle BC(\mathbf {p} ,\mathbf {q} )=\sum _{i=1}^{n}{\sqrt {p_{i}q_{i}}},}$

which is an estimator of ${\displaystyle BC(P,Q)}$. The quality of the estimate depends on the number of buckets: too few overestimate ${\displaystyle BC(P,Q)}$, since coarse bins blur the differences between the distributions, while too many underestimate it, since many bins end up populated by only one of the samples.
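
A sketch of this histogram estimator (the bin count, sample sizes, and test distributions are arbitrary choices for illustration):

```python
import numpy as np

def bhattacharyya_from_samples(xs, ys, n_bins=30):
    # Shared bins covering the pooled range of both samples.
    lo, hi = min(xs.min(), ys.min()), max(xs.max(), ys.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(xs, bins=edges)
    q, _ = np.histogram(ys, bins=edges)
    p = p / p.sum()                      # relative frequencies p_i
    q = q / q.sum()                      # relative frequencies q_i
    return np.sum(np.sqrt(p * q))        # sample Bhattacharyya coefficient

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=10_000)
ys = rng.normal(1.0, 1.0, size=10_000)
print(bhattacharyya_from_samples(xs, ys))   # ~exp(-1/8) = 0.8825 in this case
```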

A common task in classification is estimating the separability of classes. When the two classes are normally distributed with equal covariances, the logarithmic term above vanishes and the Bhattacharyya distance reduces to one-eighth of the squared Mahalanobis distance, so the latter is a special case up to a multiplicative factor. When two classes have similar means but significantly different variances, however, the Mahalanobis distance between the means is close to zero, while the Bhattacharyya distance is not, as the sketch below illustrates.
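
A minimal numerical illustration of this last point, using the univariate normal formula from the Properties section (the specific means and variances are chosen for illustration):

```python
import numpy as np

# Two classes with identical means but very different variances: the
# Mahalanobis distance between the means is zero, yet the Bhattacharyya
# distance (only its variance term survives) still separates the classes.
mu, sigma_p, sigma_q = 0.0, 1.0, 5.0
mahalanobis_sq = (mu - mu) ** 2 / sigma_p ** 2   # 0: the means coincide
d_b = 0.5 * np.log((sigma_p**2 + sigma_q**2) / (2 * sigma_p * sigma_q))
print(mahalanobis_sq, d_b)                       # 0.0 vs ~0.478
```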

The Bhattacharyya coefficient is used in the construction of polar codes.[5]

The Bhattacharyya distance is used in feature extraction and selection,[6] image processing,[7] speaker recognition,[8] and phone clustering.[9]