We say $\mathbf{X}$ follows an inverse Wishart distribution, denoted as $\mathbf{X}\sim\mathcal{W}^{-1}(\mathbf{\Psi},\nu)$, if its inverse $\mathbf{X}^{-1}$ has a Wishart distribution $\mathcal{W}(\mathbf{\Psi}^{-1},\nu)$. Important identities have been derived for the inverse-Wishart distribution.[2]
Suppose we wish to make inference about a covariance matrix $\mathbf{\Sigma}$ whose prior $p(\mathbf{\Sigma})$ has a $\mathcal{W}^{-1}(\mathbf{\Psi},\nu)$ distribution. If the observations $\mathbf{X}=[\mathbf{x}_1,\ldots,\mathbf{x}_n]$ are independent p-variate Gaussian variables drawn from a $N(\mathbf{0},\mathbf{\Sigma})$ distribution, then the conditional distribution $p(\mathbf{\Sigma}\mid\mathbf{X})$ has a $\mathcal{W}^{-1}(\mathbf{A}+\mathbf{\Psi},\,n+\nu)$ distribution, where $\mathbf{A}=\mathbf{X}\mathbf{X}^{\mathsf T}$.
Because the prior and posterior distributions belong to the same family, we say the inverse Wishart distribution is conjugate to the multivariate Gaussian.
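A minimal numerical sketch of this conjugate update (the dimension, the hyperparameters $\nu$, $\mathbf{\Psi}$ and the simulated data below are arbitrary assumptions; SciPy's `invwishart` takes the degrees of freedom and scale matrix in the same $(\nu,\mathbf{\Psi})$ roles as above):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
p, n = 3, 50                      # assumed dimension and number of observations
nu, Psi = 7.0, np.eye(p)          # assumed prior hyperparameters of W^{-1}(Psi, nu)

# Simulate a "true" covariance from the prior and n zero-mean Gaussian observations
Sigma_true = invwishart(df=nu, scale=Psi).rvs(random_state=rng)
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n).T   # p x n data matrix

# Conjugate update: Sigma | X ~ W^{-1}(A + Psi, n + nu) with A = X X^T
A = X @ X.T
post_df, post_scale = n + nu, A + Psi

# Posterior mean of Sigma is (A + Psi) / (n + nu - p - 1); compare with the truth
print(post_scale / (post_df - p - 1))
print(Sigma_true)
```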
Due to its conjugacy to the multivariate Gaussian, it is possible to marginalize out (integrate out) the Gaussian's parameter $\mathbf{\Sigma}$, using the formula $p(\mathbf{x})=\int p(\mathbf{x}\mid\mathbf{\Sigma})\,p(\mathbf{\Sigma})\,d\mathbf{\Sigma}$ and the linear algebra identity $\mathbf{v}^{\mathsf T}\mathbf{A}\mathbf{v}=\operatorname{tr}(\mathbf{A}\mathbf{v}\mathbf{v}^{\mathsf T})$:
(this is useful because the variance matrix $\mathbf{\Sigma}$ is not known in practice, but because $\mathbf{\Psi}$ is known a priori, and $\mathbf{A}$ can be obtained from the data, the right-hand side can be evaluated directly). The inverse-Wishart distribution as a prior can be constructed via existing transferred prior knowledge.[5]
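As a rough sanity check of this marginalization (a Monte Carlo sketch rather than the closed-form result; the hyperparameters and the test point below are arbitrary assumptions), the marginal density can be approximated by averaging the Gaussian density over draws of $\mathbf{\Sigma}$ from the prior:

```python
import numpy as np
from scipy.stats import invwishart, multivariate_normal

rng = np.random.default_rng(1)
p, nu, Psi = 3, 8.0, np.eye(3)           # assumed prior hyperparameters
x = np.array([0.5, -1.0, 0.2])           # an arbitrary observation

# p(x) = integral of N(x | 0, Sigma) p(Sigma) dSigma, estimated by Monte Carlo
Sigmas = invwishart(df=nu, scale=Psi).rvs(size=20000, random_state=rng)
densities = [multivariate_normal.pdf(x, mean=np.zeros(p), cov=S) for S in Sigmas]
print("Monte Carlo estimate of the marginal p(x):", np.mean(densities))
```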
The following is based on Press, S. J. (1982) "Applied Multivariate Analysis", 2nd ed. (Dover Publications, New York), after reparameterizing the degrees of freedom to be consistent with the p.d.f. definition above.
There appears to be a typo in the paper whereby the coefficient of is given as rather than , and the expression for the mean square inverse Wishart, corollary 3.1, should read
To show how the interacting terms become sparse when the covariance is diagonal, let $\mathbf{\Psi}=\mathbf{I}_{3\times 3}$ and introduce arbitrary parameters $u, v, w$:

$$\operatorname{E}(\mathbf{X}\otimes\mathbf{X}) = u\,\mathbf{\Psi}\otimes\mathbf{\Psi} + v\,\operatorname{vec}(\mathbf{\Psi})\operatorname{vec}(\mathbf{\Psi})^{\mathsf T} + w\,\mathbf{K}_{pp}\left(\mathbf{\Psi}\otimes\mathbf{\Psi}\right),$$
where $\operatorname{vec}(\cdot)$ denotes the matrix vectorization operator and $\mathbf{K}_{pp}$ the commutation matrix. The resulting second moment matrix is non-zero only in the entries involving correlations between the diagonal elements of $\mathbf{X}$; all other elements are mutually uncorrelated, though not necessarily statistically independent. The variances of the Wishart product are also obtained by Cook et al.[7] in the singular case and, by extension, to the full rank case.
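This sparsity pattern can be verified numerically. The sketch below (with an assumed dimension and degrees of freedom, and $\mathbf{\Psi}=\mathbf{I}_{3\times 3}$) estimates the covariance of $\operatorname{vec}(\mathbf{X})$ from inverse-Wishart draws:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)
p, nu = 3, 12.0                    # assumed dimension and degrees of freedom
Psi = np.eye(p)                    # diagonal scale matrix

# Estimate Cov(vec(X)) from a large sample of inverse-Wishart draws
Xs = invwishart(df=nu, scale=Psi).rvs(size=50000, random_state=rng)
vecs = Xs.reshape(len(Xs), -1)                 # each row is vec(X)
C = np.cov(vecs, rowvar=False)

# Apart from each element's own variance (and its mirror entry, X being symmetric),
# only the covariances between distinct diagonal elements x_ii, x_jj are non-zero.
np.set_printoptions(precision=3, suppress=True)
print(C)
```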
Muirhead[8] shows in Theorem 3.2.8 that if $\mathbf{A}$ is distributed as $\mathcal{W}_p(\nu,\mathbf{\Sigma})$ and $\mathbf{v}$ is an arbitrary vector, independent of $\mathbf{A}$, then $\frac{\mathbf{v}^{\mathsf T}\mathbf{A}\mathbf{v}}{\mathbf{v}^{\mathsf T}\mathbf{\Sigma}\mathbf{v}}\sim\chi^2_{\nu}$; one degree of freedom is relinquished if the sample mean is estimated from the data. Similarly, Bodnar et al. further find that $\frac{\mathbf{v}^{\mathsf T}\mathbf{\Sigma}^{-1}\mathbf{v}}{\mathbf{v}^{\mathsf T}\mathbf{A}^{-1}\mathbf{v}}\sim\chi^2_{\nu-p+1}$ and, setting $\mathbf{v}=(1,0,\ldots,0)^{\mathsf T}$, the marginal distribution of the leading diagonal element $x_{11}$ of $\mathbf{X}=\mathbf{A}^{-1}\sim\mathcal{W}^{-1}(\mathbf{\Psi},\nu)$, with $\mathbf{\Psi}=\mathbf{\Sigma}^{-1}$, is thus

$$f_{x_{11}}(x_{11}) = \frac{\left(\psi_{11}/2\right)^{\frac{\nu-p+1}{2}}}{\Gamma\!\left(\frac{\nu-p+1}{2}\right)}\; x_{11}^{-\frac{\nu-p+1}{2}-1}\, e^{-\frac{\psi_{11}}{2 x_{11}}}$$
and by rotating $\mathbf{v}$ end-around a similar result applies to all diagonal elements $x_{ii}$.
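A numerical illustration of this marginal (a sketch under assumed values of $p$, $\nu$ and $\mathbf{\Psi}$): for draws $\mathbf{X}\sim\mathcal{W}^{-1}(\mathbf{\Psi},\nu)$, the ratio $\psi_{11}/x_{11}$ should follow a $\chi^2_{\nu-p+1}$ distribution, equivalently $x_{11}$ follows an inverse-gamma distribution with shape $(\nu-p+1)/2$ and scale $\psi_{11}/2$:

```python
import numpy as np
from scipy.stats import invwishart, chi2, kstest

rng = np.random.default_rng(3)
p, nu = 4, 9.0                         # assumed dimension and degrees of freedom
Psi = np.diag([2.0, 1.0, 0.5, 3.0])    # an assumed (diagonal) scale matrix

# psi_11 / x_11 should be chi-squared with nu - p + 1 degrees of freedom
Xs = invwishart(df=nu, scale=Psi).rvs(size=50000, random_state=rng)
ratios = Psi[0, 0] / Xs[:, 0, 0]
print(kstest(ratios, chi2(df=nu - p + 1).cdf))   # large p-value: consistent
```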
A corresponding result in the complex Wishart case was shown by Brennan and Reed[9] and the uncorrelated inverse complex Wishart was shown by Shaman[10] to have diagonal statistical structure in which the leading diagonal elements are correlated, while all other elements are uncorrelated.
In the univariate case, $p=1$, the p.d.f. reduces to that of the inverse-gamma distribution, where $\Gamma(\cdot)$ is the ordinary Gamma function.
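A one-line numerical check of this univariate specialization (the values of $\nu$ and $\psi$ below are arbitrary): with $p=1$, the $\mathcal{W}^{-1}(\psi,\nu)$ density coincides with an inverse-gamma density with shape $\nu/2$ and scale $\psi/2$:

```python
import numpy as np
from scipy.stats import invwishart, invgamma

nu, psi = 5.0, 2.0                        # assumed scalar parameters
xs = np.linspace(0.1, 10.0, 200)

# With p = 1, W^{-1}(psi, nu) has the same density as InvGamma(shape=nu/2, scale=psi/2)
iw = invwishart(df=nu, scale=psi)
iw_pdf = np.array([iw.pdf(x) for x in xs])
ig_pdf = invgamma(a=nu / 2, scale=psi / 2).pdf(xs)
print(np.allclose(iw_pdf, ig_pdf))        # expected: True
```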
The inverse Wishart distribution is a special case of the inverse matrix gamma distribution when the shape parameter $\alpha=\frac{\nu}{2}$ and the scale parameter $\beta=2$.
Another generalization has been termed the generalized inverse Wishart distribution, $\mathcal{GW}^{-1}$. A positive definite matrix $\mathbf{X}$ is said to be distributed as $\mathcal{GW}^{-1}(\mathbf{\Psi},\nu,\mathbf{S})$ if $\mathbf{Y}=\mathbf{X}^{1/2}\mathbf{S}^{-1}\mathbf{X}^{1/2}$ is distributed as $\mathcal{W}^{-1}(\mathbf{\Psi},\nu)$. Here $\mathbf{X}^{1/2}$ denotes the symmetric matrix square root of $\mathbf{X}$, the parameters $\mathbf{\Psi}$ and $\mathbf{S}$ are positive definite matrices, and the parameter $\nu$ is a positive scalar larger than $2p$. Note that when $\mathbf{S}$ is equal to an identity matrix, $\mathcal{GW}^{-1}(\mathbf{\Psi},\nu,\mathbf{S})=\mathcal{W}^{-1}(\mathbf{\Psi},\nu)$. This generalized inverse Wishart distribution has been applied to estimating the distributions of multivariate autoregressive processes.[11]
When the scale matrix is an identity matrix, $\mathbf{\Psi}=\mathbf{I}$, and $\mathbf{\Phi}$ is an arbitrary orthogonal matrix, replacement of $\mathbf{X}$ by $\mathbf{\Phi}\mathbf{X}\mathbf{\Phi}^{\mathsf T}$ does not change the pdf of $\mathbf{X}$, so $\mathcal{W}^{-1}(\mathbf{I},\nu)$ belongs to the family of spherically invariant random processes (SIRPs) in some sense.[clarification needed]
Thus, an arbitrary p-vector $\mathbf{v}$ with unit length $\mathbf{v}^{\mathsf T}\mathbf{v}=1$ can be rotated into the vector $\mathbf{\Phi}\mathbf{v}=(1,0,\ldots,0)^{\mathsf T}$ without changing the pdf of $\mathbf{v}^{\mathsf T}\mathbf{X}\mathbf{v}$; moreover, $\mathbf{\Phi}$ can be a permutation matrix which exchanges diagonal elements. It follows that the diagonal elements of $\mathbf{X}$ are identically inverse chi squared distributed, with pdf $f_{x_{11}}$ in the previous section, though they are not mutually independent. The result is known in optimal portfolio statistics, as in Theorem 2 Corollary 1 of Bodnar et al.,[12] where it is expressed in the inverse form $\frac{\mathbf{v}^{\mathsf T}\mathbf{\Psi}\mathbf{v}}{\mathbf{v}^{\mathsf T}\mathbf{X}\mathbf{v}}\sim\chi^2_{\nu-p+1}$.
As is the case with the Wishart distribution, linear transformations of the distribution yield a modified inverse Wishart distribution. If $\mathbf{X}\sim\mathcal{W}^{-1}(\mathbf{\Psi},\nu)$ and $\mathbf{A}$ is a full-rank $p\times p$ matrix, then[13]

$$\mathbf{A}\mathbf{X}\mathbf{A}^{\mathsf T}\sim\mathcal{W}^{-1}\!\left(\mathbf{A}\mathbf{\Psi}\mathbf{A}^{\mathsf T},\nu\right).$$
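A Monte Carlo sketch of this transformation property (the dimension, $\nu$, $\mathbf{\Psi}$ and $\mathbf{A}$ below are arbitrary assumptions): the sample mean of $\mathbf{A}\mathbf{X}\mathbf{A}^{\mathsf T}$ over inverse-Wishart draws should approach $\mathbf{A}\mathbf{\Psi}\mathbf{A}^{\mathsf T}/(\nu-p-1)$, the mean of $\mathcal{W}^{-1}(\mathbf{A}\mathbf{\Psi}\mathbf{A}^{\mathsf T},\nu)$:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(4)
p, nu = 3, 10.0                                  # assumed dimension and degrees of freedom
Psi = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.0, 0.2],
                [0.0, 0.2, 1.5]])                # an assumed scale matrix
A = rng.normal(size=(p, p))                      # an (almost surely) full-rank transformation

# Empirical mean of A X A^T over draws X ~ W^{-1}(Psi, nu)
Xs = invwishart(df=nu, scale=Psi).rvs(size=50000, random_state=rng)
empirical = np.mean(A @ Xs @ A.T, axis=0)

# Theoretical mean of W^{-1}(A Psi A^T, nu) is A Psi A^T / (nu - p - 1)
print(empirical)
print(A @ Psi @ A.T / (nu - p - 1))
```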
^ O'Hagan, A.; Forster, J. J. (2004). Kendall's Advanced Theory of Statistics: Bayesian Inference. Vol. 2B (2nd ed.). Arnold. ISBN 978-0-340-80752-1.
^ Haff, L. R. (1979). "An identity for the Wishart distribution with applications". Journal of Multivariate Analysis. 9 (4): 531–544. doi:10.1016/0047-259x(79)90056-3.
^ Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Dunson, David B.; Vehtari, Aki; Rubin, Donald B. (2013). Bayesian Data Analysis (3rd ed.). Boca Raton: Chapman and Hall/CRC. ISBN 978-1-4398-4095-5.
^ Shahrokh Esfahani, Mohammad; Dougherty, Edward (2014). "Incorporation of Biological Pathway Knowledge in the Construction of Priors for Optimal Bayesian Classification". IEEE Transactions on Bioinformatics and Computational Biology. 11 (1): 202–218. doi:10.1109/tcbb.2013.143. PMID 26355519. S2CID 10096507.
^ Rosen, Dietrich von (1988). "Moments for the Inverted Wishart Distribution". Scandinavian Journal of Statistics. 15: 97–109 – via JSTOR.