WikiProject Statistics (Rated C-class, Mid-importance)
- Well, the two say exactly the same thing. Perhaps there is some reason why one way of saying it is preferable in a given context, but certainly if either is correct then so is the other. Michael Hardy 22:37, 18 March 2007 (UTC)
- Thanks for the comments. I was confused about that. —The preceding unsigned comment was added by 18.104.22.168 (talk) 04:09, 19 March 2007 (UTC).
I agree, but if one wants to be mathematically strict, V/2 is not defined, where V is a matrix. Matrix algebra defines the product of a non-zero scalar with a matrix, and thus (1/2) * V is well defined, but V/2 is not. In this sense V/2 is rather a convenient convention. So of the two equations above, the second (with the 1/2 outside of the trace) seems more mathematically sound. Having said this, I have used the first equation many times, simply out of convenience. KT
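As an aside, numerical software makes no distinction between the two notations: a minimal NumPy sketch (the matrix below is just a stand-in example), showing that "V / 2" is implemented as elementwise scalar division and so coincides with the strictly-defined scalar product (1/2) * V:

```python
import numpy as np

# A small symmetric positive-definite matrix standing in for V.
V = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Strict matrix algebra defines only the scalar product (1/2) * V;
# "V / 2" is a notational convention that software implements as
# elementwise scalar division, so the two coincide numerically.
assert np.allclose(0.5 * V, V / 2)
```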
I'm wondering if we should be discussing . The variable w (as opposed to W) hasn't been defined and W appears to be the input for which we're calculating the PDF, not the constant matrix which characterizes the distribution. 22.214.171.124 (talk) 14:10, 18 September 2008 (UTC)
- No. W is the random variable, NOT the argument to the pdf. Putting the r.v. in that role is a usual clumsy freshman mistake. Michael Hardy (talk) 18:46, 18 September 2008 (UTC)
- Using the lower case notation for the argument is inconsistent and confusing though. How about using something like instead (this would be in keeping with Wishart being the conjugate prior for a multivariate normal with unknown precision matrix) or for those who don't like Greek letters. Also note that the argument in Inverse-Wishart distribution is a capital letter. Shae (talk) 08:26, 10 June 2009 (UTC)
I'm assuming that the dagger is indicating the matrix transpose? If it is, then the notation should be changed so that it is consistent with the rest of the article: the superscript "T". Wtt (talk) 20:20, 27 July 2008 (UTC)
I just want to add a theorem that I had been looking for for some time and finally found.
Let $W = XX^\mathrm{T}$, an s-by-s matrix, where $X$ is an s-by-n ($n \ge s$) random Gaussian matrix with i.i.d. entries. Then
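The stated conclusion did not survive here, but on my reading (given the Prékopa "On Random Determinants" reference below) it is the classical expected-determinant identity E[det(XXᵀ)] = n(n−1)⋯(n−s+1) for an s-by-n matrix X with i.i.d. standard normal entries. A Monte Carlo sketch under that assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
s, n = 2, 4                      # X is s-by-n with i.i.d. N(0,1) entries, n >= s
n_samples = 200_000

# det(X X^T) for each draw; the identity gives
# E[det(X X^T)] = n * (n - 1) * ... * (n - s + 1), i.e. 12 for s=2, n=4.
X = rng.standard_normal((n_samples, s, n))
dets = np.linalg.det(X @ np.swapaxes(X, 1, 2))

expected = 1.0
for k in range(s):
    expected *= n - k

print(dets.mean(), expected)     # the sample mean should be close to 12
```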
Variance or Standard Deviation?
I'm wondering if the that turns up in Corollary 2 as part of shouldn't have a square on it? Granted is the sample variance but is it not which is the unknown population variance? I think this would be consistent with Corollary 1. Douglas mclean (talk) 08:27, 21 September 2009 (UTC)
That's another useful contribution! Coupla points:
- in the Bartlett decomposition, it should be , not , that is decomposed.
- the decomposition assumes .
Primrose61 (talk) 16:23, 28 December 2009 (UTC)
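To make the decomposition being discussed concrete, here is a sketch of the standard Bartlett construction for sampling W ~ W_p(V, n) with n ≥ p (the function name and test values are mine; the Monte Carlo check uses E[W] = nV):

```python
import numpy as np

def sample_wishart_bartlett(V, n, rng):
    """Draw W ~ W_p(V, n) via the Bartlett decomposition (requires n >= p).

    Write V = L L^T (Cholesky). Build a lower-triangular A with
    A[i, i] = sqrt(chi2(n - i)) and A[i, j] ~ N(0, 1) for i > j; then
    W = L A A^T L^T is Wishart with scale V and n degrees of freedom.
    """
    p = V.shape[0]
    L = np.linalg.cholesky(V)
    A = np.zeros((p, p))
    for i in range(p):
        A[i, i] = np.sqrt(rng.chisquare(n - i))   # d.f. n, n-1, ..., n-p+1
        for j in range(i):
            A[i, j] = rng.standard_normal()
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(1)
V = np.eye(2)
n = 5
draws = np.stack([sample_wishart_bartlett(V, n, rng) for _ in range(20_000)])
print(draws.mean(axis=0))   # E[W] = n * V, so roughly 5 * I here
```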
Is there a reason for switching from n to m as the d.f. parameter? (Starting with the first theorem.) If there isn't a reason for the change, I think it would be better to keep it as n throughout. I can make the change, but I thought I'd ask first, since it has apparently been sitting like this for a while.
Is the missing non-centrality parameter part of an editorial guideline to have separate pages for non-central distributions? — Preceding unsigned comment added by 126.96.36.199 (talk) 20:40, 20 March 2013 (UTC)
Probability density function
We might want to indicate the measure behind the density. It appears that this is the density with respect to Lebesgue measure on p + p(p−1)/2 = p(p+1)/2 dimensional real space, the variables corresponding to the diagonal together with the lower-triangular entries of the matrices. However, there are other, possibly more natural, measures to use as an underlying measure for the space of positive definite symmetric matrices: it could be some curved, invariant measure on the manifold of positive definite symmetric matrices viewed as a subset of p^2-dimensional space (think of how surface measure on the sphere, a 2-dimensional manifold sitting in 3-dimensional space, is more natural than, say, Lebesgue measure on the extended complex plane). Perhaps it would be symmetric in the eigenvalues, invariant under rotations, etc. Vinzklorthos (talk) 22:33, 6 January 2014 (UTC)
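To pin down the Lebesgue-measure reading in practice, a sketch of the Wishart log-density over the p(p+1)/2 free entries, sanity-checked against the p = 1 case, where W_1(1, n) reduces to chi-squared with n degrees of freedom (the function name and test point are mine):

```python
import math
import numpy as np

def wishart_logpdf(X, n, V):
    """Log-density of W_p(V, n) at X, taken with respect to Lebesgue
    measure on the p(p+1)/2 free entries (diagonal plus lower triangle)
    of a symmetric positive-definite matrix."""
    p = X.shape[0]
    # log of the multivariate gamma function Gamma_p(n/2)
    log_mvgamma = (p * (p - 1) / 4) * math.log(math.pi) + sum(
        math.lgamma(n / 2 + (1 - j) / 2) for j in range(1, p + 1))
    _, logdet_X = np.linalg.slogdet(X)
    _, logdet_V = np.linalg.slogdet(V)
    trace_term = np.trace(np.linalg.solve(V, X))
    return ((n - p - 1) / 2) * logdet_X - trace_term / 2 \
        - (n * p / 2) * math.log(2) - (n / 2) * logdet_V - log_mvgamma

# Sanity check: for p = 1 and V = 1, W_1(1, n) is chi-squared with n d.f.
n, x = 3, 2.0
chi2_logpdf = (n / 2 - 1) * math.log(x) - x / 2 \
    - (n / 2) * math.log(2) - math.lgamma(n / 2)
print(wishart_logpdf(np.array([[x]]), n, np.eye(1)), chi2_logpdf)
```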
- Aspects of Multivariate Statistical Theory (Wiley Series in Probability and Statistics) by Robb J. Muirhead
- Prékopa, A., On Random Determinants. I. Studia Sci. Math. Hung. 2 (1967), 125-132.