# Talk:Bhattacharyya distance

WikiProject Statistics (Rated Start-class, Low-importance)

This article is within the scope of WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page or join the discussion.

WikiProject Mathematics (Rated Start-class, Low-importance)

This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Field: Probability and statistics

## Computer vision category

I removed this article from the computer vision category. The Bhattacharyya distance is probably useful in some parts of CV, but:

1. It is not a concept developed within CV or specific to CV.
2. There is no material in this article which relates it to CV.

--KYN 22:19, 27 July 2007 (UTC)

## Merger question

The articles do seem to be about the same thing. One problem may be the title to adopt. Should "distance" be used, considering that it isn't a "distance" in the metric sense? So what is the common usage? Melcombe (talk) 16:31, 15 May 2008 (UTC)

Google seems to prefer distance (13,000 versus 3,000 hits). Btyner (talk) 20:14, 10 May 2009 (UTC)
It is a measure of dissimilarity, even though it does not obey the triangle inequality (see Kailath's paper, referenced in the main article). Jrvz (talk) 16:54, 8 November 2010 (UTC)

The term "distance" certainly applies to the Bhattacharyya distance, it is a different type of distance than the traditional Euclidean distance between two points, but there are many other "distances" such as the Chess-board distance, the Chamfer distance or the Kullback-Leibler Distance (also called divergence). What the B.D. is measuring is how far apart two "things" are where the "things" are statistical distributions, the closer the distributions, the smaller the distance. "Bhattacharyya distance" is a term widely used in statistics and in other areas such as computer vision and image processing when it refers to the distance between distributions of classes that were obtained from an image. So, it is not directly related to C.V. but widely used there — Preceding unsigned comment added by Creyes (talkcontribs) 13:41, 12 February 2013 (UTC)

There is no article on the Bhattacharyya bound!! — Preceding unsigned comment added by 165.91.166.94 (talk) 22:56, 26 November 2006

Note: in the article's Definition section, the paragraph relating it to the Mahalanobis distance mentions figures, but none are present.

128.101.106.11 (talk) 23:37, 7 March 2013 (UTC)

## Error in simplified formula?

In the definition based on mean and variance, it says "where $\sigma_p$ is the variance of the p-th distribution". Is sigma really the variance? Shouldn't it be the standard deviation? Since $\sigma^2$ is used everywhere in the formula, I'm not sure, but I suspect it should read "where $\sigma_p^2$ is the variance of the p-th distribution". Can someone confirm or refute that, please?

82.67.136.228 (talk) 16:05, 5 April 2013 (UTC)

In the article referred to (which also says that sigma is the variance), this expression is derived from the more general expression involving covariance matrices. Since the covariance matrix is the generalization of the variance to higher dimensions, and reducing that general expression to one dimension yields the same formula with sigma as the standard deviation (and thus sigma squared as the variance), this must be an error in that article as well. Can someone please confirm that? (A sketch of the reduction is given below.)

Rokusottervanger (talk) 09:30, 15 March 2016 (UTC)
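
For reference, a sketch of that one-dimensional reduction, assuming the standard form of the Bhattacharyya distance between multivariate normals $\mathcal{N}(\mu_1,\Sigma_1)$ and $\mathcal{N}(\mu_2,\Sigma_2)$ with $\Sigma = \tfrac{1}{2}(\Sigma_1+\Sigma_2)$:

$$D_B = \frac{1}{8}(\mu_1-\mu_2)^\top\Sigma^{-1}(\mu_1-\mu_2) + \frac{1}{2}\ln\frac{\det\Sigma}{\sqrt{\det\Sigma_1\,\det\Sigma_2}}.$$

In one dimension $\Sigma_1 = \sigma_p^2$ and $\Sigma_2 = \sigma_q^2$, so $\Sigma = (\sigma_p^2+\sigma_q^2)/2$ and

$$D_B = \frac{1}{4}\,\frac{(\mu_p-\mu_q)^2}{\sigma_p^2+\sigma_q^2} + \frac{1}{2}\ln\frac{\sigma_p^2+\sigma_q^2}{2\sigma_p\sigma_q},$$

which matches the article's formula only when $\sigma_p$ and $\sigma_q$ denote standard deviations (so $\sigma_p^2$ and $\sigma_q^2$ are the variances).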

Using Maxima I can quickly derive the Bhattacharyya distance for normal distributions directly, like this:

```maxima
/* Normal density with mean m and standard deviation s. */
N[m,s](x) := 1/(s*sqrt(2*%pi)) * exp(-(x-m)**2/(2*s**2)) $
assume(sp > 0, sq > 0) $
-log(integrate(sqrt(N[mp,sp](x) * N[mq,sq](x)), x, minf, inf));
```

This gives

$$-\log\left(\frac{\sqrt{2}\,\sqrt{\mathit{sp}}\,\sqrt{\mathit{sq}}\;e^{-\frac{\mathit{mq}^2 - 2\,\mathit{mp}\,\mathit{mq} + \mathit{mp}^2}{4\,\mathit{sq}^2 + 4\,\mathit{sp}^2}}}{\sqrt{\mathit{sq}^2 + \mathit{sp}^2}}\right)$$

where $\mathit{sp}$ and $\mathit{sq}$ stand for the standard deviations. Substituting the usual symbols, i.e. $\sigma_p$ for $\mathit{sp}$, etc., and simplifying, this becomes

$$\frac{1}{2}\log\left(\frac{\sigma_p^2+\sigma_q^2}{2\sigma_p\sigma_q}\right)+\frac{1}{4}\frac{(\mu_p-\mu_q)^2}{\sigma_p^2+\sigma_q^2},$$

which is essentially the same formula as the one in the article (except that the first term is further rewritten so that the sigmas appear only squared, perhaps for numerical reasons). So I conclude that the variance is indeed written $\sigma^2$ there, as usual. Wherever the mistake comes from, it's an obvious typo, so I'm fixing it. — mwgamera (talk) 10:51, 16 March 2016 (UTC)
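
For anyone who wants to double-check that closed form numerically rather than symbolically, here is a small sketch using Maxima's quad_qagi quadrature routine (the two normals, with means 0 and 1 and standard deviations 1 and 2, are an arbitrary choice of mine):

```maxima
/* Density of a normal with mean m and standard deviation s. */
N(m, s, x) := exp(-(x-m)^2/(2*s^2)) / (s*sqrt(2*%pi));

/* Closed form from the article, with sigma^2 as the variance. */
DB(mp, sp, mq, sq) :=
    (1/4)*(mp - mq)^2/(sp^2 + sq^2)
  + (1/2)*log((sp^2 + sq^2)/(2*sp*sq));

/* Bhattacharyya coefficient by numerical quadrature; the two
   printed values should agree to quadrature accuracy. */
bc : quad_qagi(sqrt(N(0, 1, x) * N(1, 2, x)), x, minf, inf) $
print(-log(first(bc)), float(DB(0, 1, 1, 2)));
```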