Inverse Gaussian distribution

Inverse Gaussian
[Plot of the probability density function: invGauss.svg]

Parameters    \mu > 0 (mean), \lambda > 0 (shape)
Support       x \in (0,\infty)
PDF           \left[\frac{\lambda}{2 \pi x^3}\right]^{1/2} \exp\left(\frac{-\lambda (x-\mu)^2}{2 \mu^2 x}\right)
CDF           \Phi\left(\sqrt{\frac{\lambda}{x}} \left(\frac{x}{\mu}-1 \right)\right) + \exp\left(\frac{2 \lambda}{\mu}\right) \Phi\left(-\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu}+1 \right)\right), where \Phi is the standard normal (standard Gaussian) distribution c.d.f.
Mean          \mu
Mode          \mu\left[\left(1+\frac{9 \mu^2}{4 \lambda^2}\right)^{1/2}-\frac{3 \mu}{2 \lambda}\right]
Variance      \frac{\mu^3}{\lambda}
Skewness      3\left(\frac{\mu}{\lambda}\right)^{1/2}
Ex. kurtosis  \frac{15 \mu}{\lambda}
MGF           \exp\left[\frac{\lambda}{\mu}\left(1-\sqrt{1-\frac{2\mu^2 t}{\lambda}}\right)\right]
CF            \exp\left[\frac{\lambda}{\mu}\left(1-\sqrt{1-\frac{2\mu^2 \mathrm{i} t}{\lambda}}\right)\right]

In probability theory, the inverse Gaussian distribution (also known as the Wald distribution) is a two-parameter family of continuous probability distributions with support on (0,∞).

Its probability density function is given by

f(x;\mu,\lambda) = \left[\frac{\lambda}{2 \pi x^3}\right]^{1/2} \exp\left(\frac{-\lambda (x-\mu)^2}{2 \mu^2 x}\right)

for x > 0, where \mu > 0 is the mean and \lambda > 0 is the shape parameter.
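As a quick numerical sanity check, the density can be evaluated directly and compared against SciPy (a sketch; it assumes SciPy's convention that invgauss(mu/lam, scale=lam) coincides with IG(mu, lam) in the parameterization used here):

    import numpy as np
    from scipy import stats

    def ig_pdf(x, mu, lam):
        # f(x; mu, lambda) = sqrt(lambda / (2*pi*x^3)) * exp(-lambda*(x - mu)^2 / (2*mu^2*x))
        return np.sqrt(lam / (2 * np.pi * x**3)) * np.exp(-lam * (x - mu)**2 / (2 * mu**2 * x))

    mu, lam = 2.0, 3.0
    x = np.linspace(0.1, 10.0, 50)
    # Assumed mapping: scipy.stats.invgauss(mu_s, scale=s) has mean mu_s*s and shape s
    assert np.allclose(ig_pdf(x, mu, lam), stats.invgauss.pdf(x, mu / lam, scale=lam))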

As λ tends to infinity, the inverse Gaussian distribution becomes more like a normal (Gaussian) distribution. The inverse Gaussian distribution has several properties analogous to a Gaussian distribution. The name can be misleading: it is an "inverse" only in that, while the Gaussian describes a Brownian motion's level at a fixed time, the inverse Gaussian describes the distribution of the time a Brownian motion with positive drift takes to reach a fixed positive level.
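This limit can be made concrete: with the variance \mu^3/\lambda from the table above, the standardized variable satisfies (a standard result, stated here without proof)

\frac{X - \mu}{\sqrt{\mu^3/\lambda}} \;\xrightarrow{d}\; N(0,1) \quad \text{as } \lambda \to \infty,

consistent with the skewness 3(\mu/\lambda)^{1/2} and excess kurtosis 15\mu/\lambda both vanishing in this limit.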

Its cumulant generating function (the logarithm of the moment generating function) is the functional inverse of the cumulant generating function of a Gaussian random variable.
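One way to make this precise (a sketch, using the first-passage construction discussed below): writing

K(t) = \ln \operatorname{E}\left[e^{tX}\right] = \frac{\lambda}{\mu}\left(1-\sqrt{1-\frac{2\mu^2 t}{\lambda}}\right)

and solving s = K(t) for t gives

K^{-1}(s) = \frac{s}{\mu} - \frac{s^2}{2\lambda},

a quadratic in s of the same form as a Gaussian cumulant generating function. Equivalently, for the Brownian motion X_t = \nu t + \sigma W_t considered below, optional stopping of the exponential martingale e^{\theta X_t - \psi(\theta) t}, where \psi(\theta) = \nu\theta + \tfrac{1}{2}\sigma^2\theta^2 is the cumulant generating function of X_1, gives for q > 0

\operatorname{E}\left[e^{-q T_\alpha}\right] = e^{-\alpha\, \psi^{-1}(q)},

where \psi^{-1} is the positive branch of the inverse of \psi.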

To indicate that a random variable X is inverse Gaussian-distributed with mean μ and shape parameter λ we write

X \sim IG(\mu, \lambda).

Properties

Summation

If X_i \sim IG(\mu_0 w_i, \lambda_0 w_i^2) for i = 1, 2, ..., n and all X_i are independent, then


S=\sum_{i=1}^n X_i
\sim
IG \left(  \mu_0 \sum w_i, \lambda_0 \left(\sum w_i \right)^2  \right).

Note that


\frac{\textrm{Var}(X_i)}{\textrm{E}(X_i)}= \frac{\mu_0^2 w_i^2 }{\lambda_0 w_i^2 }=\frac{\mu_0^2}{\lambda_0}

is constant for all i. This is a necessary condition for the summation property; without it, S would not be inverse Gaussian.
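A Monte Carlo sanity check of the summation property (a sketch; it assumes NumPy's wald(mean, scale) draws IG(mean, shape) variates):

    import numpy as np

    rng = np.random.default_rng(0)
    mu0, lam0 = 1.5, 2.0
    w = np.array([0.5, 1.0, 2.0])    # weights w_i
    n = 200_000

    # X_i ~ IG(mu0*w_i, lam0*w_i^2), summed over i for each of n replications
    S = sum(rng.wald(mu0 * wi, lam0 * wi**2, n) for wi in w)

    mu_S = mu0 * w.sum()             # mean of IG(mu0*sum(w), lam0*sum(w)^2)
    lam_S = lam0 * w.sum()**2
    print(S.mean(), mu_S)            # sample vs. theoretical mean
    print(S.var(), mu_S**3 / lam_S)  # sample vs. theoretical variance mu^3/lambda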

Scaling

For any t > 0 it holds that


X \sim IG(\mu,\lambda) \,\,\,\,\,\, \Rightarrow \,\,\,\,\,\, tX \sim IG(t\mu,t\lambda).
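The same kind of simulation check works for scaling (again assuming NumPy's wald(mean, scale) samples IG(mean, shape)):

    import numpy as np

    rng = np.random.default_rng(1)
    mu, lam, t = 1.2, 3.0, 2.5
    tX = t * rng.wald(mu, lam, 200_000)

    print(tX.mean(), t * mu)                  # mean of IG(t*mu, t*lam)
    print(tX.var(), (t * mu)**3 / (t * lam))  # variance mu^3/lambda with scaled parameters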

Exponential family

The inverse Gaussian distribution is a two-parameter exponential family with natural parameters -λ/(2μ²) and -λ/2, and natural statistics X and 1/X.
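Explicitly, taking the logarithm of the density puts it in exponential-family form:

\ln f(x;\mu,\lambda) = -\frac{\lambda}{2\mu^2}\,x \;-\; \frac{\lambda}{2}\,\frac{1}{x} \;+\; \frac{\lambda}{\mu} + \frac{1}{2}\ln\lambda - \frac{1}{2}\ln(2\pi) - \frac{3}{2}\ln x,

which exhibits the natural parameters -\lambda/(2\mu^2) and -\lambda/2 paired with the natural statistics x and 1/x.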

Differential equation


The probability density function f(x) = f(x;\mu,\lambda) satisfies

2\mu^2 x^2 f'(x) + \left(\lambda x^2 + 3\mu^2 x - \lambda\mu^2\right) f(x) = 0,
\qquad
f(1) = \sqrt{\frac{\lambda}{2\pi}}\, \exp\left(-\frac{\lambda(1-\mu)^2}{2\mu^2}\right).

Relationship with Brownian motion

The stochastic process Xt given by

X_0 = 0, \qquad X_t = \nu t + \sigma W_t

(where Wt is a standard Brownian motion and \nu > 0) is a Brownian motion with drift ν.

Then the first passage time to a fixed level \alpha > 0 by X_t is distributed according to an inverse Gaussian distribution:

T_\alpha = \inf\{ 0 < t \mid X_t=\alpha \} \sim IG(\tfrac\alpha\nu, \tfrac {\alpha^2} {\sigma^2}).\,
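A discretized simulation illustrates this (a sketch; the Euler grid slightly biases the hitting times, so the agreement is only approximate):

    import numpy as np

    rng = np.random.default_rng(2)
    nu, sigma, alpha = 1.0, 0.8, 2.0
    dt, n_paths, n_steps = 2e-3, 2_000, 5_000   # time horizon n_steps*dt = 10

    # Simulate X_t = nu*t + sigma*W_t on a grid; record the first crossing of alpha
    incr = nu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    paths = np.cumsum(incr, axis=1)
    crossed = (paths >= alpha).any(axis=1)
    T = (paths >= alpha).argmax(axis=1)[crossed] * dt

    mu_T, lam_T = alpha / nu, alpha**2 / sigma**2
    print(T.mean(), mu_T)             # IG mean alpha/nu
    print(T.var(), mu_T**3 / lam_T)   # IG variance mu^3/lambda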

When drift is zero

A common special case of the above arises when the Brownian motion has no drift. In that case, the parameter μ tends to infinity, and the first passage time for a fixed level α has probability density function

 f \left( x; 0, \left(\frac{\alpha}{\sigma}\right)^2 \right)
= \frac{\alpha}{\sigma \sqrt{2 \pi x^3}} \exp\left(-\frac{\alpha^2 }{2 x \sigma^2}\right).

This is a Lévy distribution with parameters c=\frac{\alpha^2}{\sigma^2} and \mu=0.
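This zero-drift density can be checked against SciPy's Lévy distribution (a sketch, assuming scipy.stats.levy implements the standard one-sided Lévy density with the given scale):

    import numpy as np
    from scipy import stats

    alpha, sigma = 2.0, 0.8
    c = alpha**2 / sigma**2
    x = np.linspace(0.05, 20.0, 100)

    pdf = alpha / (sigma * np.sqrt(2 * np.pi * x**3)) * np.exp(-alpha**2 / (2 * x * sigma**2))
    assert np.allclose(pdf, stats.levy.pdf(x, scale=c))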

Maximum likelihood

The model where


X_i \sim IG(\mu,\lambda w_i), \,\,\,\,\,\, i=1,2,\ldots,n

with all w_i known, (μ, λ) unknown and all X_i independent has the following likelihood function


L(\mu, \lambda)=
\left(      \frac{\lambda}{2\pi}   \right)^\frac n 2  
\left(      \prod^n_{i=1} \frac{w_i}{X_i^3}    \right)^{\frac{1}{2}} 
\exp\left(\frac{\lambda}{\mu}\sum_{i=1}^n w_i -\frac{\lambda}{2\mu^2}\sum_{i=1}^n w_i X_i - \frac\lambda 2 \sum_{i=1}^n w_i \frac1{X_i} \right).

Solving the likelihood equation yields the following maximum likelihood estimates


\hat{\mu}= \frac{\sum_{i=1}^n w_i X_i}{\sum_{i=1}^n w_i}, \,\,\,\,\,\,\,\, \frac{1}{\hat{\lambda}}= \frac{1}{n} \sum_{i=1}^n w_i \left( \frac{1}{X_i}-\frac{1}{\hat{\mu}} \right).

\hat{\mu} and \hat{\lambda} are independent and


\hat{\mu} \sim IG \left(\mu, \lambda \sum_{i=1}^n w_i \right)  \,\,\,\,\,\,\,\, \frac{n}{\hat{\lambda}} \sim \frac{1}{\lambda} \chi^2_{n-1}.
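These closed-form estimators are straightforward to compute; a sketch on simulated data (NumPy's wald(mean, scale) again assumed to draw IG(mean, shape) variates):

    import numpy as np

    rng = np.random.default_rng(3)
    mu, lam = 2.0, 5.0
    w = rng.uniform(0.5, 2.0, size=10_000)   # known weights w_i
    X = rng.wald(mu, lam * w)                # X_i ~ IG(mu, lam * w_i)

    mu_hat = np.sum(w * X) / np.sum(w)
    lam_hat = len(X) / np.sum(w * (1.0 / X - 1.0 / mu_hat))
    print(mu_hat, lam_hat)                   # should be close to (2.0, 5.0)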

Generating random variates from an inverse Gaussian distribution

The following algorithm may be used.[1]

Generate a random variate from a normal distribution with mean 0 and standard deviation 1


\displaystyle \nu = N(0,1).

Square the value


\displaystyle y = \nu^2

and use this relation


x = \mu + \frac{\mu^2 y}{2\lambda} - \frac{\mu}{2\lambda}\sqrt{4\mu \lambda y + \mu^2 y^2}.

Generate another random variate, this time sampled from a uniform distribution between 0 and 1


\displaystyle z = U(0,1).

If


z \le \frac{\mu}{\mu+x}

then return


\displaystyle
x

else return


\frac{\mu^2}{x}.

Sample code in Java:

    public double inverseGaussian(double mu, double lambda) {
        Random rand = new Random();
        double v = rand.nextGaussian();   // sample from a normal distribution with mean 0 and standard deviation 1
        double y = v * v;
        double x = mu + (mu * mu * y) / (2 * lambda)
                 - (mu / (2 * lambda)) * Math.sqrt(4 * mu * lambda * y + mu * mu * y * y);
        double test = rand.nextDouble();  // sample from a uniform distribution between 0 and 1
        if (test <= mu / (mu + x)) {      // accept x with probability mu/(mu + x)
            return x;
        }
        return (mu * mu) / x;             // otherwise return the other root mu^2/x
    }

To plot the Wald distribution in Python using matplotlib and NumPy:

    import matplotlib.pyplot as plt
    import numpy as np

    # np.random.wald takes the mean and the shape (scale) parameter
    h = plt.hist(np.random.wald(3, 2, 100000), bins=200, density=True)

    plt.show()

Related distributions

  • If  X \sim \textrm{IG}(\mu,\lambda)\, then  k X \sim \textrm{IG}(k \mu,k \lambda)\,
  • If  X_i \sim \textrm{IG}(\mu,\lambda)\, then  \sum_{i=1}^{n} X_i \sim \textrm{IG}(n \mu,n^2 \lambda)\,
  • If  X_i \sim \textrm{IG}(\mu,\lambda)\, for i=1,\ldots,n\, then  \bar{X} \sim \textrm{IG}(\mu,n \lambda)\,
  • If  X_i \sim \textrm{IG}(\mu_i,2 \mu^2_i)\, then  \sum_{i=1}^{n} X_i \sim \textrm{IG}\left(\sum_{i=1}^n \mu_i, 2 {\left( \sum_{i=1}^{n} \mu_i \right)}^2\right)\,

The convolution of a Wald distribution and an exponential (the ex-Wald distribution) is used as a model for response times in psychology.[2]
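An ex-Wald variate is simple to simulate as the sum of independent draws (a sketch; the parameter values are illustrative only):

    import numpy as np

    rng = np.random.default_rng(4)
    # Ex-Wald: inverse Gaussian (Wald) plus an independent exponential
    rt = rng.wald(0.4, 2.0, 10_000) + rng.exponential(0.1, 10_000)
    print(rt.mean())   # means add: 0.4 + 0.1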

History

This distribution appears to have been first derived by Schrödinger in 1915 as the time to first passage of a Brownian motion.[3] The name inverse Gaussian was proposed by Tweedie in 1945.[4] Wald re-derived this distribution in 1947 as the limiting form of a sample in a sequential probability ratio test. Tweedie investigated this distribution in 1957 and established some of its statistical properties.

Software

The R programming language has software for this distribution.[5]

Notes

  1. ^ Michael, John R.; Schucany, William R.; Haas, Roy W. (May 1976). "Generating Random Variates Using Transformations with Multiple Roots". The American Statistician 30 (2): 88–90. doi:10.2307/2683801. JSTOR 2683801.
  2. ^ Schwarz, W. (2001). "The ex-Wald distribution as a descriptive model of response times". Behavior Research Methods, Instruments, & Computers 33 (4): 457–469. PMID 11816448.
  3. ^ Schrödinger, E. (1915). "Zur Theorie der Fall- und Steigversuche an Teilchen mit Brownscher Bewegung" [On the theory of fall and rise experiments on particles with Brownian motion]. Physikalische Zeitschrift 16: 289–295.
  4. ^ Folks, J. L.; Chhikara, R. S. (1978). "The Inverse Gaussian Distribution and Its Statistical Application -- A Review". Journal of the Royal Statistical Society, Series B (Methodological) 40 (3): 263–289. doi:10.2307/2984691. JSTOR 2984691.
  5. ^ Giner, Goknur. "A monotonically convergent Newton iteration for the quantiles of any unimodal distribution, with application to the inverse Gaussian distribution".

