Generalized inverse Gaussian distribution

[Figure: probability density plots of GIG distributions]

Parameters: a > 0, b > 0, p real
Support: x > 0
PDF: f(x) = \frac{(a/b)^{p/2}}{2 K_p(\sqrt{ab})} x^{p-1} e^{-(ax + b/x)/2}
Mean: \frac{\sqrt{b}\ K_{p+1}(\sqrt{ab})}{\sqrt{a}\ K_{p}(\sqrt{ab})}
Mode: \frac{(p-1)+\sqrt{(p-1)^2+ab}}{a}
Variance: \left(\frac{b}{a}\right)\left[\frac{K_{p+2}(\sqrt{ab})}{K_p(\sqrt{ab})}-\left(\frac{K_{p+1}(\sqrt{ab})}{K_p(\sqrt{ab})}\right)^2\right]
MGF: \left(\frac{a}{a-2t}\right)^{p/2}\frac{K_p(\sqrt{b(a-2t)})}{K_p(\sqrt{ab})}
CF: \left(\frac{a}{a-2it}\right)^{p/2}\frac{K_p(\sqrt{b(a-2it)})}{K_p(\sqrt{ab})}

In probability theory and statistics, the generalized inverse Gaussian distribution (GIG) is a three-parameter family of continuous probability distributions with probability density function

f(x) = \frac{(a/b)^{p/2}}{2 K_p(\sqrt{ab})} x^{p-1} e^{-(ax + b/x)/2},\qquad x>0,

where K_p is a modified Bessel function of the second kind, a > 0, b > 0, and p is a real parameter. The distribution is used extensively in geostatistics, statistical linguistics, and finance. It was first proposed by Étienne Halphen.[1][2][3] It was rediscovered and popularised by Ole Barndorff-Nielsen, who named it the generalized inverse Gaussian distribution. It is also known as the Sichel distribution, after Herbert Sichel. Its statistical properties are discussed in Bent Jørgensen's lecture notes.[4]
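
For numerical work, the density and moments can be evaluated directly from the Bessel-function formulas above. A minimal sketch in Python, assuming SciPy's geninvgauss(p, b, scale) parametrization (density proportional to x^{p-1} e^{-b(x/s + s/x)/2}), which matches the (a, b, p) form of this article under b_scipy = \sqrt{ab} and scale = \sqrt{b/a}:

    import numpy as np
    from scipy.special import kv
    from scipy.stats import geninvgauss

    a, b, p = 2.0, 3.0, 0.5  # example values; any a, b > 0 and real p work

    def gig_pdf(x, a, b, p):
        """Density from the formula above, via the modified Bessel function K_p."""
        norm = (a / b) ** (p / 2) / (2 * kv(p, np.sqrt(a * b)))
        return norm * x ** (p - 1) * np.exp(-(a * x + b / x) / 2)

    x = np.linspace(0.1, 5.0, 50)
    rv = geninvgauss(p, np.sqrt(a * b), scale=np.sqrt(b / a))
    assert np.allclose(gig_pdf(x, a, b, p), rv.pdf(x))

    # Mean from the Bessel-ratio formula in the table above.
    mean = np.sqrt(b / a) * kv(p + 1, np.sqrt(a * b)) / kv(p, np.sqrt(a * b))
    assert np.isclose(mean, rv.mean())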

Differential equation

The density f satisfies the differential equation

2 x^2 f'(x) + f(x)\left[x(ax - 2p + 2) - b\right] = 0,\qquad f(1) = \frac{(a/b)^{p/2}\, e^{-(a+b)/2}}{2 K_p\left(\sqrt{ab}\right)}.
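
A quick symbolic check (an illustration, not part of the article) that the density satisfies this equation, using SymPy; the normalizing constant cancels, so the unnormalized density suffices:

    import sympy as sp

    x, a, b = sp.symbols('x a b', positive=True)
    p = sp.Symbol('p', real=True)

    # Unnormalized density: the constant prefactor cancels in the ODE.
    f = x ** (p - 1) * sp.exp(-(a * x + b / x) / 2)
    ode = 2 * x**2 * sp.diff(f, x) + f * (x * (a * x - 2 * p + 2) - b)
    assert sp.simplify(ode) == 0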


Special cases

The inverse Gaussian and gamma distributions are special cases of the generalized inverse Gaussian distribution for p = -1/2 and b = 0, respectively.[5] Specifically, an inverse Gaussian distribution of the form

 f(x;\mu,\lambda) = \left[\frac{\lambda}{2 \pi x^3}\right]^{1/2} \exp\left(-\frac{\lambda (x-\mu)^2}{2 \mu^2 x}\right)

is a GIG with a = \lambda/\mu^2, b = \lambda, and p=-1/2. A gamma distribution of the form

g(x;\alpha,\beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} x^{\alpha-1} e^{-\beta x}

is a GIG with a = 2 \beta, b = 0, and p = \alpha.

Other special cases include the inverse-gamma distribution, for a=0, and the hyperbolic distribution, for p=0.[5]
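
The inverse Gaussian special case can be spot-checked numerically. A sketch assuming SciPy's conventions, where invgauss(mu0, scale=lam) with mu0 = \mu/\lambda is the (\mu, \lambda) form given above:

    import numpy as np
    from scipy.stats import geninvgauss, invgauss

    mu, lam = 1.5, 2.0
    a, b, p = lam / mu**2, lam, -0.5   # GIG parameters for this (mu, lambda)

    x = np.linspace(0.1, 5.0, 50)
    gig = geninvgauss(p, np.sqrt(a * b), scale=np.sqrt(b / a))
    ig = invgauss(mu / lam, scale=lam)
    assert np.allclose(gig.pdf(x), ig.pdf(x))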

Entropy

The entropy of the generalized inverse Gaussian distribution is given by[citation needed]

H(f(x))=\frac{1}{2} \log \left(\frac{b}{a}\right)+\log \left(2 K_p\left(\sqrt{a b}\right)\right)-
(p-1) \frac{\left[\frac{d}{d\nu}K_\nu\left(\sqrt{ab}\right)\right]_{\nu=p}}{K_p\left(\sqrt{a b}\right)}+\frac{\sqrt{a b}}{2 K_p\left(\sqrt{a b}\right)}\left( K_{p+1}\left(\sqrt{a b}\right) + K_{p-1}\left(\sqrt{a b}\right)\right)

where \left[\frac{d}{d\nu}K_\nu\left(\sqrt{a b}\right)\right]_{\nu=p} is the derivative of the modified Bessel function of the second kind with respect to the order \nu, evaluated at \nu=p.
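
A numerical sanity check of this expression (an illustration; the order-derivative of the Bessel function is approximated here by a central difference, an assumption of this example):

    import numpy as np
    from scipy.special import kv
    from scipy.stats import geninvgauss

    a, b, p = 2.0, 3.0, 0.7
    w = np.sqrt(a * b)

    # Central-difference approximation to [d/dnu K_nu(w)] at nu = p.
    h = 1e-6
    dK = (kv(p + h, w) - kv(p - h, w)) / (2 * h)

    H = (0.5 * np.log(b / a) + np.log(2 * kv(p, w))
         - (p - 1) * dK / kv(p, w)
         + w * (kv(p + 1, w) + kv(p - 1, w)) / (2 * kv(p, w)))

    rv = geninvgauss(p, w, scale=np.sqrt(b / a))
    assert np.isclose(H, rv.entropy())   # entropy() integrates numerically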

Conjugate prior for Gaussian

The GIG distribution is conjugate to the normal distribution when serving as the mixing distribution in a normal variance-mean mixture.[6][7] Let the prior distribution for some hidden variable, say z, be GIG:


P(z|a,b,p) = \text{GIG}(z|a,b,p)

and let there be T observed data points, X=x_1,\ldots,x_T, with a normal likelihood function conditioned on z:


P(X|z,\alpha,\beta) = \prod_{i=1}^T N(x_i|\alpha+\beta z,z)

where N(x|\mu,v) denotes the normal distribution with mean \mu and variance v. The posterior for z, given the data, is then also GIG:


P(z|X,a,b,p,\alpha,\beta) = \text{GIG}(z|a+T\beta^2,\,b+S,\,p-\tfrac{T}{2})

where \textstyle S = \sum_{i=1}^T (x_i-\alpha)^2.[note 1]
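
The update is a one-line computation once S is accumulated. A minimal sketch, with the helper name gig_posterior chosen for illustration and parameters in the GIG(z|a,b,p) order used above:

    import numpy as np

    def gig_posterior(a, b, p, alpha, beta, x):
        """Posterior GIG parameters (a', b', p') for z given observations x."""
        x = np.asarray(x, dtype=float)
        T = x.size
        S = np.sum((x - alpha) ** 2)
        return a + T * beta**2, b + S, p - T / 2

    # Example with three observations:
    a_post, b_post, p_post = gig_posterior(a=2.0, b=3.0, p=0.5,
                                           alpha=0.0, beta=1.0,
                                           x=[0.8, 1.1, 1.4])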

Notes

  1. ^ Due to the conjugacy, these details can be derived without solving integrals, by noting that
    P(z|X,a,b,p,\alpha,\beta)\propto P(z|a,b,p)P(X|z,\alpha,\beta).
    Omitting all factors independent of z, the right-hand side can be simplified to give an unnormalized GIG distribution, from which the posterior parameters can be identified.


References

  1. ^ Seshadri, V. (1997). "Halphen's laws". In Kotz, S.; Read, C. B.; Banks, D. L. (eds.). Encyclopedia of Statistical Sciences, Update Volume 1. New York: Wiley. pp. 302–306.
  2. ^ Perreault, L.; Bobée, B.; Rasmussen, P. F. (1999). "Halphen Distribution System. I: Mathematical and Statistical Properties". Journal of Hydrologic Engineering 4 (3): 189. doi:10.1061/(ASCE)1084-0699(1999)4:3(189).
  3. ^ Étienne Halphen was the uncle of the mathematician Georges Henri Halphen.
  4. ^ Jørgensen, Bent (1982). Statistical Properties of the Generalized Inverse Gaussian Distribution. Lecture Notes in Statistics 9. New York–Berlin: Springer-Verlag. ISBN 0-387-90665-7. MR 0648107.
  5. ^ a b Johnson, Norman L.; Kotz, Samuel; Balakrishnan, N. (1994). Continuous Univariate Distributions. Vol. 1. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics (2nd ed.). New York: John Wiley & Sons. pp. 284–285. ISBN 978-0-471-58495-7. MR 1299979.
  6. ^ Karlis, Dimitris (2002). "An EM type algorithm for maximum likelihood estimation of the normal–inverse Gaussian distribution". Statistics & Probability Letters 57: 43–52.
  7. ^ Barndorff-Nielsen, O. E. (1997). "Normal inverse Gaussian distributions and stochastic volatility modelling". Scandinavian Journal of Statistics 24: 1–13.
