Talk:Von Mises distribution
|WikiProject Mathematics||(Rated B-class, Low-importance)|
|WikiProject Statistics||(Rated B-class, Low-importance)|
Hello. There must be a multi-dimensional analog of the von Mises distribution. I'm thinking of something like a spray paint splotch on a basketball; that would be like a 2-d Gaussian blob wrapped onto a 2-sphere. What can be said about an N-d blob on an N-sphere? Have a great day, 188.8.131.52 18:38, 15 January 2006 (UTC)
- There is; it's called the von Mises–Fisher distribution, and its PDF is basically exp(κ*dot(x, mu)) with the appropriate normalization.
- Take a look at the directional statistics article. In fact, the generalization of the von Mises distribution to two angles is a distribution on the torus. See this article in PNAS and the references therein. Tomixdf (talk) 17:32, 9 October 2008 (UTC)
- Yes it would! It would be great if someone made one (a rose diagram, for example). It is tricky to superpose rose diagrams, but side-by-side plots with different parameter values would be useful. Also, the related wrapped Cauchy distribution deserves a page, no? It, too, is widely used for circular data and has some nice properties. Maybe I'll give it a shot (though I'm very busy and don't have much experience writing up stats pages). - Eliezg (talk) 17:41, 16 April 2008 (UTC)
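For the 3-d case of the analog discussed above (the von Mises–Fisher distribution on the 2-sphere), the normalizing constant has the closed form κ/(4π sinh κ). A quick numerical sanity check, as a sketch (the function name is mine, not from any article):

```python
import numpy as np
from scipy.integrate import quad

def vmf_density_cos(cos_t, kappa):
    """von Mises-Fisher density on the 2-sphere (p = 3), as a function of
    the cosine of the angle between x and the mean direction mu.
    Closed-form constant: C_3(kappa) = kappa / (4*pi*sinh(kappa))."""
    return kappa / (4 * np.pi * np.sinh(kappa)) * np.exp(kappa * cos_t)

# Sanity check: the density integrates to 1 over the sphere. With mu along
# the z-axis, integrate over the polar angle t (Jacobian sin t) and multiply
# by 2*pi for the azimuth.
kappa = 2.5
total, _ = quad(
    lambda t: vmf_density_cos(np.cos(t), kappa) * np.sin(t) * 2 * np.pi,
    0, np.pi,
)
print(total)  # should be very close to 1
```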
Numerically computing the CDF
While the CDF equation on the page is correct, it is not suitable for computing the CDF numerically, due to instabilities in the ratio of Bessel functions (it took me some hours to figure that out). Wouldn't it be useful to add a comment about that and provide a reference to a paper (G. W. Hill, 1977) that gives a numerically stable solution? I didn't want to mess with the article, as there are probably people who have strong opinions about what is and isn't supposed to be in these articles. 184.108.40.206 (talk) 15:39, 20 April 2009 (UTC)
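One way to see (and sidestep) the instability, as a sketch rather than Hill's algorithm itself: the Fourier-series CDF involves ratios I_j(κ)/I_0(κ), and the unscaled Bessel function I_0(κ) overflows in double precision for κ above roughly 700, while SciPy's exponentially scaled `ive` keeps the ratio finite:

```python
import numpy as np
from scipy import special, stats

def vonmises_cdf_series(x, kappa, terms=200):
    """CDF of the von Mises distribution (mu = 0) on [-pi, pi], from the
    Fourier series of the density. The exponentially scaled Bessel function
    ive() keeps the ratios I_j(kappa)/I_0(kappa) finite even for large
    kappa, where the unscaled iv() overflows."""
    j = np.arange(1, terms + 1)
    ratios = special.ive(j, kappa) / special.ive(0, kappa)
    return 0.5 + (x + 2 * np.sum(ratios * np.sin(j * x) / j)) / (2 * np.pi)

# Agrees with SciPy's built-in implementation at moderate kappa ...
print(vonmises_cdf_series(1.0, 2.0), stats.vonmises.cdf(1.0, 2.0))
# ... and stays finite where the naive ratio would be inf/inf
# (special.iv(0, 800) already overflows to inf):
print(vonmises_cdf_series(1.0, 800.0))
```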
In the section 'Limiting behavior' it is said that as k --> infinity the distribution becomes a normal distribution. While this is correct, I think it is somewhat misleading, as said normal distribution would have zero variance. I think it would be more appropriate to say that the distribution approaches a Dirac delta function.
- What that means is that as k grows larger, the difference between the von Mises and the normal distribution becomes smaller. If you are willing to settle for some small error in your calculations, it means there is a variance (larger than zero!) below which you can use the normal and von Mises distributions interchangeably. That useful information would be lost if we simply said it tended towards a delta function. PAR (talk) 19:02, 31 December 2009 (UTC)
Something's wrong; either I am or the article is. Let's say κ = 1, μ = 0. Then I calculate the moments of z to be ⟨z^n⟩ = I_n(κ)/I_0(κ).
Ok, that's wrong as a variance; the variance of z should be var(z) = ⟨|z|²⟩ − |⟨z⟩|² = 1 − (I_1(κ)/I_0(κ))².
I mean, there is something strange about the "circular variance" 1 − I_1(κ)/I_0(κ).
Does anyone know where this definition comes from or how it is used in analysis? It seems unnatural and mathematically intractable, unlike the first definition. I have never seen it used in any analysis of variance; it seems like every reference basically says "oh, and by the way, the circular variance is…" and then ignores it. Furthermore, if you use the first expression as the variance, then a sample statistic of (n/(n−1))(1 − z̄ z̄*), where overlines indicate sample averages, will be an unbiased estimator of the first variance, and this is a familiar kind of expression, similar to the relationship of standard deviation to variance in linear statistics. I'm pretty sure an unbiased estimator of the second expression is fairly crazy. PAR (talk) 18:49, 31 December 2009 (UTC)
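The two competing quantities are easy to compute and check by simulation. A sketch (the notation is mine): for μ = 0 the mean resultant length is R = I_1(κ)/I_0(κ), the ordinary variance of the complex variable z = e^{iθ} is 1 − R², and the directional-statistics "circular variance" is 1 − R:

```python
import numpy as np
from scipy import special

kappa = 1.0
# Mean resultant length R = |<z>| = I_1(kappa) / I_0(kappa); the scaled
# Bessel function ive() avoids overflow for large kappa.
R = special.ive(1, kappa) / special.ive(0, kappa)

var_z = 1 - R**2   # ordinary variance of z = exp(i*theta)
circ_var = 1 - R   # the "circular variance" of directional statistics

# Monte Carlo check of var(z) against the sample quantity 1 - |zbar|^2:
rng = np.random.default_rng(0)
z = np.exp(1j * rng.vonmises(0.0, kappa, size=200_000))
sample_var = 1 - np.abs(z.mean()) ** 2
print(R, var_z, circ_var, sample_var)
```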
No use of future tense in math articles, please
I find this page nice and informative. However, could someone sufficiently familiar with the subject eliminate the use of "will"? I think the future tense has no place in mathematics. Either it is, or it isn't. —Preceding unsigned comment added by 220.127.116.11 (talk) 13:15, 22 February 2010 (UTC)
Von Mises vs Wrapped Gaussian
Can anyone expand on "It may be thought of as a close approximation to the wrapped normal distribution"? While it is true that as k -> infinity the approximation becomes very good, for small k it is not clear. Yes, the shapes of the two distributions look similar, but I wouldn't describe the approximation as "close" for small k, based on looking at the graphs. Does anyone know of bounds on the error as a function of k? —Preceding unsigned comment added by 18.104.22.168 (talk) 18:03, 1 January 2011 (UTC)
- Actually, as k->0, the approximation gets good again. But in between, it's arguably not so good. —Preceding unsigned comment added by 22.214.171.124 (talk) 18:06, 1 January 2011 (UTC)
I edited the introduction. I'm not sure the wrapped normal distribution should be in the introduction at all, but if the two are compared, one should be clear about the case in which each is correct. The wrapped Gaussian, as the result of adding independent random angle increments (diffusion), is not stationary. Disordered materials with a preferred orientation, on the other hand, are to first order approximated by a diffusion process in a cosine potential; the stationary distribution of this process is the von Mises distribution. I guess the name "circular normal distribution" reflects the fact that it is the maximum entropy distribution on the circle. 126.96.36.199 (talk) 06:49, 25 August 2011 (UTC)
- I was wondering what you mean by "not stationary" for the addition of independent angle increments? Also, von Mises is maximum entropy for a fixed value of the mean of z, that is, a fixed value of the mean sine and cosine of the measured angle. As for comparing the two for various values of κ and σ: 1/κ = σ^2 when κ is large (or σ is small), but κ and σ are just parameters that happen to have a simple relationship in the limit of small angles, so this does not give a way to compare the two outside that limit. One way to compare them is to match the two cases with the same circular variance; the match is closer. I think the best way is to match two that have the same entropy, i.e. that convey the same uncertainty in information; again, I would expect the match to be closer. PAR (talk) 20:15, 25 August 2011 (UTC)
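Matching by circular variance rather than by 1/κ = σ² is straightforward to try numerically. A sketch (grid size and shift count are my choices): pick the wrapped normal with the same mean resultant length R, using R = exp(−σ²/2) for the wrapped normal, then compare the densities and entropies. Maximum entropy predicts the von Mises entropy comes out higher for the same mean of z:

```python
import numpy as np
from scipy import special, stats

kappa = 2.0
R = special.iv(1, kappa) / special.iv(0, kappa)  # mean resultant of von Mises
sigma2 = -2 * np.log(R)  # wrapped normal with the same R = exp(-sigma^2/2)

# Periodic grid with the endpoint excluded, so a plain Riemann sum is
# very accurate for these smooth periodic densities.
n = 4096
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
dx = 2 * np.pi / n

vm = stats.vonmises.pdf(x, kappa)
# Wrapped normal density: sum the N(0, sigma2) density over 2*pi shifts.
shifts = 2 * np.pi * np.arange(-10, 11)
wn = stats.norm.pdf(x[:, None] + shifts, scale=np.sqrt(sigma2)).sum(axis=1)

gap = np.abs(vm - wn).max()           # worst pointwise density difference
h_vm = -np.sum(vm * np.log(vm)) * dx  # differential entropies on the circle
h_wn = -np.sum(wn * np.log(wn)) * dx
print(gap, h_vm, h_wn)  # h_vm > h_wn, by the maximum entropy property
```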
The Von Miss distribution is a Maximum entropy probability distribution on the unit circle with a given mean. This fact should be added to the article somewhere, probably in the introduction. —Preceding unsigned comment added by 188.8.131.52 (talk) 23:00, 7 January 2011 (UTC)
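For reference, the maximum entropy property follows from a standard Lagrange multiplier argument; the constraints are normalization and a fixed mean of z = e^{iθ}, i.e. fixed ⟨cos θ⟩ and ⟨sin θ⟩ (a sketch):

```latex
\begin{align*}
&\text{Maximize } -\int_{-\pi}^{\pi} p(\theta)\ln p(\theta)\,d\theta
\quad \text{subject to} \\
&\qquad \int_{-\pi}^{\pi} p\,d\theta = 1, \qquad
\int_{-\pi}^{\pi} p\cos\theta\,d\theta = c_1, \qquad
\int_{-\pi}^{\pi} p\sin\theta\,d\theta = c_2. \\
&\text{Stationarity of the Lagrangian gives}\quad
-\ln p(\theta) - 1 + \lambda_0 + \lambda_1\cos\theta + \lambda_2\sin\theta = 0, \\
&\text{so}\quad
p(\theta) \propto \exp(\lambda_1\cos\theta + \lambda_2\sin\theta)
= \exp\!\bigl(\kappa\cos(\theta-\mu)\bigr), \\
&\text{with}\quad \kappa = \sqrt{\lambda_1^2+\lambda_2^2}, \qquad
\tan\mu = \lambda_2/\lambda_1, \qquad
\text{normalization } \tfrac{1}{2\pi I_0(\kappa)}.
\end{align*}
```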