Inverse-Wishart distribution

Inverse-Wishart
Notation: \mathcal{W}^{-1}({\mathbf\Psi},\nu)
Parameters: \nu > p-1 degrees of freedom (real); \mathbf{\Psi} > 0 scale matrix (positive definite)
Support: \mathbf{X} positive definite (p\times p)
pdf: \frac{\left|{\mathbf\Psi}\right|^{\frac{\nu}{2}}}{2^{\frac{\nu p}{2}}\Gamma_p(\frac{\nu}{2})} \left|\mathbf{X}\right|^{-\frac{\nu+p+1}{2}}e^{-\frac{1}{2}\operatorname{tr}({\mathbf\Psi}\mathbf{X}^{-1})}
Mean: \frac{\mathbf{\Psi}}{\nu - p - 1} for \nu > p + 1
Mode: \frac{\mathbf{\Psi}}{\nu + p + 1}[1]:406
Variance: see below

In statistics, the inverse Wishart distribution, also called the inverted Wishart distribution, is a probability distribution defined on real-valued positive-definite matrices. In Bayesian statistics it is used as the conjugate prior for the covariance matrix of a multivariate normal distribution.

We say \mathbf{X} follows an inverse Wishart distribution, denoted as  \mathbf{X}\sim \mathcal{W}^{-1}({\mathbf\Psi},\nu), if its inverse  \mathbf{X}^{-1} has a Wishart distribution  \mathcal{W}({\mathbf \Psi}^{-1}, \nu) . Important identities have been derived for the inverse-Wishart distribution.[2]

Density

The probability density function of the inverse Wishart is:


\frac{\left|{\mathbf\Psi}\right|^{\frac{\nu}{2}}}{2^{\frac{\nu p}{2}}\Gamma_p(\frac{\nu}{2})} \left|\mathbf{X}\right|^{-\frac{\nu+p+1}{2}}e^{-\frac{1}{2}\operatorname{tr}({\mathbf\Psi}\mathbf{X}^{-1})}

where \mathbf{X} and {\mathbf\Psi} are p\times p positive definite matrices, and Γp(·) is the multivariate gamma function.
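As a sanity check, the density can be evaluated directly from this formula and compared against SciPy's scipy.stats.invwishart, which uses the same (\Psi, \nu) parameterization. A minimal sketch; the example matrices are arbitrary:

```python
import numpy as np
from scipy.special import multigammaln
from scipy.stats import invwishart

def inv_wishart_pdf(X, Psi, nu):
    """Density of W^{-1}(Psi, nu) at a positive definite matrix X."""
    p = Psi.shape[0]
    # Normalizing constant, computed in log space for numerical stability.
    log_norm = (nu / 2) * np.log(np.linalg.det(Psi)) \
        - (nu * p / 2) * np.log(2) - multigammaln(nu / 2, p)
    # Kernel: |X|^{-(nu+p+1)/2} exp(-tr(Psi X^{-1}) / 2).
    log_kernel = -(nu + p + 1) / 2 * np.log(np.linalg.det(X)) \
        - 0.5 * np.trace(Psi @ np.linalg.inv(X))
    return np.exp(log_norm + log_kernel)

Psi = np.array([[2.0, 0.3], [0.3, 1.0]])
nu = 5.0
X = np.array([[1.0, 0.1], [0.1, 0.5]])
manual = inv_wishart_pdf(X, Psi, nu)
reference = invwishart.pdf(X, df=nu, scale=Psi)  # SciPy's implementation
```

Working in log space, as above, avoids overflow of \Gamma_p(\nu/2) and the determinant powers for larger p or \nu.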

Theorems

Distribution of the inverse of a Wishart-distributed matrix

If {\mathbf A}\sim \mathcal{W}({\mathbf\Sigma},\nu) and {\mathbf\Sigma} is of size p \times p, then \mathbf{X}={\mathbf A}^{-1} has an inverse Wishart distribution \mathbf{X}\sim \mathcal{W}^{-1}({\mathbf\Sigma}^{-1},\nu) .[3]
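This relationship is easy to check by Monte Carlo: draw Wishart samples with scipy.stats.wishart, invert them, and compare the empirical mean with the inverse-Wishart mean \mathbf{\Sigma}^{-1}/(\nu-p-1). A sketch with arbitrary example parameters:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
p, nu = 2, 10
Sigma = np.array([[1.0, 0.4], [0.4, 2.0]])

# Draw Wishart matrices and invert each one (np.linalg.inv is batched).
samples = wishart.rvs(df=nu, scale=Sigma, size=50_000, random_state=rng)
inv_samples = np.linalg.inv(samples)

# The inverses follow W^{-1}(Sigma^{-1}, nu), whose mean is
# Sigma^{-1} / (nu - p - 1) for nu > p + 1.
empirical = inv_samples.mean(axis=0)
theoretical = np.linalg.inv(Sigma) / (nu - p - 1)
```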

Marginal and conditional distributions from an inverse Wishart-distributed matrix

Suppose {\mathbf A}\sim \mathcal{W}^{-1}({\mathbf\Psi},\nu) has an inverse Wishart distribution. Partition the matrices  {\mathbf A} and  {\mathbf\Psi} conformably with each other


    {\mathbf{A}} = \begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{bmatrix}, \;
    {\mathbf{\Psi}} = \begin{bmatrix} \mathbf{\Psi}_{11} & \mathbf{\Psi}_{12} \\ \mathbf{\Psi}_{21} & \mathbf{\Psi}_{22} \end{bmatrix}

where {\mathbf A_{ij}} and {\mathbf \Psi_{ij}} are  p_{i}\times p_{j} matrices. Then:

i)  {\mathbf A_{11} } is independent of  {\mathbf A}_{11}^{-1}{\mathbf A}_{12} and  {\mathbf A}_{22\cdot 1} , where {\mathbf A_{22\cdot 1}} = {\mathbf A}_{22} - {\mathbf A}_{21}{\mathbf A}_{11}^{-1}{\mathbf A}_{12} is the Schur complement of  {\mathbf A_{11} } in  {\mathbf A} ;

ii)  {\mathbf A_{11} } \sim \mathcal{W}^{-1}({\mathbf \Psi_{11} }, \nu-p_{2}) ;

iii)  {\mathbf A}_{11}^{-1} {\mathbf A}_{12}| {\mathbf A}_{22\cdot 1} \sim MN_{p_{1}\times p_{2}}
( {\mathbf \Psi}_{11}^{-1} {\mathbf \Psi}_{12},  {\mathbf A}_{22\cdot 1} \otimes  {\mathbf \Psi}_{11}^{-1}) , where  MN_{p\times q}(\cdot,\cdot) is a matrix normal distribution;

iv)  {\mathbf A}_{22\cdot 1} \sim  \mathcal{W}^{-1}({\mathbf \Psi}_{22\cdot 1}, \nu) , where {\mathbf \Psi_{22\cdot 1}} = {\mathbf \Psi}_{22} - {\mathbf \Psi}_{21}{\mathbf \Psi}_{11}^{-1}{\mathbf \Psi}_{12}.
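Property (ii), for example, can be verified numerically: sample from the joint inverse-Wishart distribution, extract the upper-left block of each draw, and compare its empirical mean with the mean implied by \mathcal{W}^{-1}({\mathbf \Psi_{11}}, \nu-p_{2}). A sketch, assuming SciPy's invwishart and arbitrary example parameters:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
p1, p2 = 2, 1
nu = 12
Psi = np.array([[2.0, 0.3, 0.1],
                [0.3, 1.5, 0.2],
                [0.1, 0.2, 1.0]])

A = invwishart.rvs(df=nu, scale=Psi, size=100_000, random_state=rng)
A11 = A[:, :p1, :p1]  # upper-left p1 x p1 block of each draw

# Property (ii): A11 ~ W^{-1}(Psi_11, nu - p2), whose mean is
# Psi_11 / ((nu - p2) - p1 - 1).
empirical = A11.mean(axis=0)
theoretical = Psi[:p1, :p1] / ((nu - p2) - p1 - 1)
```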

Conjugate distribution

Suppose we wish to make inference about a covariance matrix {\mathbf{\Sigma}} whose prior {p(\mathbf{\Sigma})} has a \mathcal{W}^{-1}({\mathbf\Psi},\nu) distribution. If the observations \mathbf{X}=[\mathbf{x}_1,\ldots,\mathbf{x}_n] are independent p-variate Gaussian variables drawn from a N(\mathbf{0},{\mathbf \Sigma}) distribution, then the conditional distribution {p(\mathbf{\Sigma}|\mathbf{X})} has a \mathcal{W}^{-1}({\mathbf A}+{\mathbf\Psi},n+\nu) distribution, where {\mathbf{A}}=\mathbf{X}\mathbf{X}^T.

Because the prior and posterior distributions are the same family, we say the inverse Wishart distribution is conjugate to the multivariate Gaussian.
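The conjugate update amounts to adding the scatter matrix {\mathbf{A}}=\mathbf{X}\mathbf{X}^T to the scale and the sample size n to the degrees of freedom. A minimal sketch of the update using SciPy's invwishart and simulated zero-mean Gaussian data; all parameter values are arbitrary:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
p, nu = 2, 6
Psi = np.eye(p)  # prior scale (arbitrary choice)

# Simulate zero-mean Gaussian data with a known covariance.
Sigma_true = np.array([[1.0, 0.5], [0.5, 2.0]])
n = 500
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)  # rows = observations

# Conjugate update: with observations stacked as rows, the scatter matrix is
# A = X^T X, and the posterior is W^{-1}(Psi + A, nu + n).
A = X.T @ X
posterior = invwishart(df=nu + n, scale=Psi + A)

# The posterior mean (Psi + A) / (nu + n - p - 1) concentrates near Sigma_true.
post_mean = (Psi + A) / (nu + n - p - 1)
```

With n = 500 observations the data dominate the prior, so the posterior mean sits close to the true covariance.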

Due to its conjugacy to the multivariate Gaussian, it is possible to marginalize out (integrate out) the Gaussian's parameter \mathbf{\Sigma}.

P(\mathbf{X}|\mathbf{\Psi},\nu) = \int P(\mathbf{X}|\mathbf{\Sigma})P(\mathbf{\Sigma}|\mathbf{\Psi},\nu) d\mathbf{\Sigma} = \frac{|\mathbf{\Psi}|^{\frac{\nu}{2}}\Gamma_p\left(\frac{\nu+n}{2}\right)}{\pi^{\frac{np}{2}}|\mathbf{\Psi}+\mathbf{A}|^{\frac{\nu+n}{2}}\Gamma_p(\frac{\nu}{2})}

(This is useful because the covariance matrix \mathbf{\Sigma} is not known in practice, whereas {\mathbf\Psi} is specified a priori and {\mathbf A} can be computed from the data, so the right-hand side can be evaluated directly.) An inverse-Wishart prior can also be constructed from existing, transferred prior knowledge.[4]
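The closed-form marginal likelihood is convenient for model comparison. A sketch of its evaluation in log space (log determinants and scipy.special.multigammaln avoid overflow); the helper name is illustrative:

```python
import numpy as np
from scipy.special import multigammaln

def log_marginal_likelihood(X, Psi, nu):
    """log P(X | Psi, nu): zero-mean Gaussian likelihood with the covariance
    integrated out against a W^{-1}(Psi, nu) prior. Columns of X are observations."""
    p, n = X.shape
    A = X @ X.T  # scatter matrix of the observations
    return ((nu / 2) * np.log(np.linalg.det(Psi))
            + multigammaln((nu + n) / 2, p)
            - (n * p / 2) * np.log(np.pi)
            - ((nu + n) / 2) * np.log(np.linalg.det(Psi + A))
            - multigammaln(nu / 2, p))

Psi, nu = np.eye(2), 5.0
X = np.array([[0.3, -1.2, 0.5],
              [1.1, 0.2, -0.7]])  # p = 2 variables, n = 3 observations
ll = log_marginal_likelihood(X, Psi, nu)
```

A quick consistency check: with n = 0 every term cancels, so the marginal likelihood of an empty data set is 1 (log value 0).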

Moments

The following is based on Press, S. J. (1982) "Applied Multivariate Analysis", 2nd ed. (Dover Publications, New York), after reparameterizing the degrees of freedom to be consistent with the p.d.f. definition above.

The mean:[3]:85


E(\mathbf X) = \frac{\mathbf\Psi}{\nu-p-1}.

The variance of each element of \mathbf{X}:


\operatorname{Var}(x_{ij}) = \frac{(\nu-p+1)\psi_{ij}^2 + (\nu-p-1)\psi_{ii}\psi_{jj}}
{(\nu-p)(\nu-p-1)^2(\nu-p-3)}

The variance of the diagonal uses the same formula as above with i=j, which simplifies to:


\operatorname{Var}(x_{ii}) = \frac{2\psi_{ii}^2}{(\nu-p-1)^2(\nu-p-3)}.

The covariances of the elements of \mathbf{X} are given by:


\operatorname{Cov}(x_{ij},x_{kl}) = \frac{2\psi_{ij}\psi_{kl} + (\nu-p-1) (\psi_{ik}\psi_{jl} + \psi_{il}\psi_{kj})}{(\nu-p)(\nu-p-1)^2(\nu-p-3)}
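These moment formulas can be compared against Monte Carlo estimates from SciPy's invwishart sampler. A sketch; \nu is chosen large enough that the higher moments implicitly used by the empirical variance exist:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
p, nu = 2, 20
Psi = np.array([[2.0, 0.5], [0.5, 1.0]])

S = invwishart.rvs(df=nu, scale=Psi, size=200_000, random_state=rng)

# Closed-form mean and element variance from the formulas above.
mean_theory = Psi / (nu - p - 1)
def var_ij(i, j):
    num = (nu - p + 1) * Psi[i, j] ** 2 + (nu - p - 1) * Psi[i, i] * Psi[j, j]
    return num / ((nu - p) * (nu - p - 1) ** 2 * (nu - p - 3))

mean_emp = S.mean(axis=0)         # empirical mean matrix
var_emp = S[:, 0, 1].var()        # empirical variance of the (1,2) element
```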

Related distributions

A univariate specialization of the inverse-Wishart distribution is the inverse-gamma distribution. With p=1 (i.e. univariate), \alpha = \nu/2, \beta = \mathbf{\Psi}/2, and x=\mathbf{X}, the probability density function of the inverse-Wishart distribution becomes

p(x|\alpha, \beta) = \frac{\beta^\alpha\, x^{-\alpha-1} \exp(-\beta/x)}{\Gamma_1(\alpha)}.

i.e., the inverse-gamma distribution, where \Gamma_1(\cdot) is the ordinary Gamma function.
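This reduction can be verified numerically: with SciPy, invwishart evaluated on 1×1 matrices should agree with invgamma with shape \nu/2 and scale \Psi/2. A minimal sketch with arbitrary \nu and \Psi:

```python
import numpy as np
from scipy.stats import invwishart, invgamma

nu, psi = 5.0, 3.0
alpha, beta = nu / 2, psi / 2  # inverse-gamma shape and scale

x = np.linspace(0.2, 4.0, 50)
# Evaluate the inverse-Wishart density on 1x1 matrices [[xi]].
iw = np.array([invwishart.pdf(np.array([[xi]]), df=nu, scale=np.array([[psi]]))
               for xi in x])
ig = invgamma.pdf(x, a=alpha, scale=beta)
```

The two density curves coincide up to floating-point rounding.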

A generalization is the inverse multivariate gamma distribution.

Another generalization has been termed the generalized inverse Wishart distribution, \mathcal{GW}^{-1}. A  p \times p positive definite matrix \mathbf{X} is said to be distributed as \mathcal{GW}^{-1}(\mathbf{\Psi},\nu,\mathbf{S}) if \mathbf{Y} = \mathbf{X}^{1/2}\mathbf{S}^{-1}\mathbf{X}^{1/2} is distributed as \mathcal{W}^{-1}(\mathbf{\Psi},\nu). Here \mathbf{X}^{1/2} denotes the symmetric matrix square root of \mathbf{X}, the parameters \mathbf{\Psi},\mathbf{S} are  p \times p positive definite matrices, and the parameter \nu is a positive scalar larger than 2p. Note that when \mathbf{S} is equal to an identity matrix, \mathcal{GW}^{-1}(\mathbf{\Psi},\nu,\mathbf{S}) = \mathcal{W}^{-1}(\mathbf{\Psi},\nu). This generalized inverse Wishart distribution has been applied to estimating the distributions of multivariate autoregressive processes.[5]

A different type of generalization is the normal-inverse-Wishart distribution, essentially the product of a multivariate normal distribution with an inverse Wishart distribution.

References

  1. O'Hagan, A.; Forster, J. J. (2004). Kendall's Advanced Theory of Statistics: Bayesian Inference 2B (2nd ed.). Arnold. ISBN 0-340-80752-0.
  2. Haff, L. R. (1979). "An identity for the Wishart distribution with applications". Journal of Multivariate Analysis 9 (4): 531–544.
  3. Mardia, Kanti V.; Kent, J. T.; Bibby, J. M. (1979). Multivariate Analysis. Academic Press. ISBN 0-12-471250-9.
  4. Shahrokh Esfahani, Mohammad; Dougherty, Edward (2014). "Incorporation of Biological Pathway Knowledge in the Construction of Priors for Optimal Bayesian Classification". IEEE Transactions on Bioinformatics and Computational Biology 11 (1): 202–218.
  5. Triantafyllopoulos, K. (2011). "Real-time covariance estimation for the local level model". Journal of Time Series Analysis 32 (2): 93–107. doi:10.1111/j.1467-9892.2010.00686.x.