# Student's t-distribution

| | |
|---|---|
| Parameters | ${\displaystyle \nu >0}$ degrees of freedom (real) |
| Support | x ∈ (−∞, +∞) |
| PDF | ${\displaystyle \textstyle {\frac {\Gamma \left({\frac {\nu +1}{2}}\right)}{{\sqrt {\nu \pi }}\,\Gamma \left({\frac {\nu }{2}}\right)}}\left(1+{\frac {x^{2}}{\nu }}\right)^{-{\frac {\nu +1}{2}}}}$ |
| CDF | ${\displaystyle {\frac {1}{2}}+x\Gamma \left({\frac {\nu +1}{2}}\right){\frac {\,_{2}F_{1}\left({\frac {1}{2}},{\frac {\nu +1}{2}};{\frac {3}{2}};-{\frac {x^{2}}{\nu }}\right)}{{\sqrt {\pi \nu }}\,\Gamma \left({\frac {\nu }{2}}\right)}}}$ where 2F1 is the hypergeometric function |
| Mean | 0 for ${\displaystyle \nu >1}$, otherwise undefined |
| Median | 0 |
| Mode | 0 |
| Variance | ${\displaystyle \textstyle {\frac {\nu }{\nu -2}}}$ for ${\displaystyle \nu >2}$, ∞ for ${\displaystyle 1<\nu \leq 2}$, otherwise undefined |
| Skewness | 0 for ${\displaystyle \nu >3}$, otherwise undefined |
| Excess kurtosis | ${\displaystyle \textstyle {\frac {6}{\nu -4}}}$ for ${\displaystyle \nu >4}$, ∞ for ${\displaystyle 2<\nu \leq 4}$, otherwise undefined |
| Entropy | ${\displaystyle {\frac {\nu +1}{2}}\left[\psi \left({\frac {1+\nu }{2}}\right)-\psi \left({\frac {\nu }{2}}\right)\right]+\ln {\left[{\sqrt {\nu }}B\left({\frac {\nu }{2}},{\frac {1}{2}}\right)\right]}}$ (nats) |
| MGF | undefined |
| CF | ${\displaystyle \textstyle {\frac {K_{\nu /2}\left({\sqrt {\nu }}\,\vert t\vert \right)\cdot \left({\sqrt {\nu }}\,\vert t\vert \right)^{\nu /2}}{\Gamma (\nu /2)2^{\nu /2-1}}}}$ for ${\displaystyle \nu >0}$, where ${\displaystyle K_{\nu }(x)}$ is the modified Bessel function of the second kind[1] |

In probability and statistics, Student's t-distribution (or simply the t-distribution) is any member of a family of continuous probability distributions that arises when estimating the mean of a normally distributed population in situations where the sample size is small and the population standard deviation is unknown. It was developed by William Sealy Gosset under the pseudonym Student. Whereas a normal distribution describes a full population, t-distributions describe samples drawn from a full population; accordingly, the t-distribution for each sample size is different, and the larger the sample, the more the distribution resembles a normal distribution.

The t-distribution plays a role in a number of widely used statistical analyses, including Student's t-test for assessing the statistical significance of the difference between two sample means, the construction of confidence intervals for the difference between two population means, and in linear regression analysis. The Student's t-distribution also arises in the Bayesian analysis of data from a normal family.

If we take a sample of n observations from a normal distribution, then the t-distribution with ${\displaystyle \nu =n-1}$ degrees of freedom can be defined as the distribution of the location of the true mean, relative to the sample mean and divided by the sample standard deviation, after multiplying by the normalizing term ${\displaystyle {\sqrt {n}}}$. In this way, the t-distribution can be used to say how confident you are that any given range contains the true mean.

The t-distribution is symmetric and bell-shaped, like the normal distribution, but has heavier tails, meaning that it is more prone to producing values that fall far from its mean. This makes it useful for understanding the statistical behavior of certain types of ratios of random quantities, in which variation in the denominator is amplified and may produce outlying values when the denominator of the ratio falls close to zero. The Student's t-distribution is a special case of the generalised hyperbolic distribution.

## History and etymology

Statistician William Sealy Gosset, known as "Student"

In statistics, the t-distribution was first derived as a posterior distribution in 1876 by Helmert[2][3][4] and Lüroth.[5][6][7] The t-distribution also appeared in a more general form as Pearson Type IV distribution in Karl Pearson's 1895 paper.

In the English-language literature the distribution takes its name from William Sealy Gosset's 1908 paper in Biometrika under the pseudonym "Student".[8][9] Gosset worked at the Guinness Brewery in Dublin, Ireland, and was interested in the problems of small samples – for example, the chemical properties of barley where sample sizes might be as few as 3. One version of the origin of the pseudonym is that Gosset's employer preferred staff to use pen names when publishing scientific papers instead of their real name, so he used the name "Student" to hide his identity. Another version is that Guinness did not want their competitors to know that they were using the t-test to determine the quality of raw material.[10][11]

Gosset's paper refers to the distribution as the "frequency distribution of standard deviations of samples drawn from a normal population". It became well-known through the work of Ronald Fisher, who called the distribution "Student's distribution" and represented the test value with the letter t.[12][13]

## How Student's distribution arises from sampling

Let X1, ..., Xn be independent and identically distributed as N(μ, σ2), i.e. this is a sample of size n from a normally distributed population with expected value μ and variance σ2.

Let

${\displaystyle {\bar {X}}={\frac {1}{n}}\sum _{i=1}^{n}X_{i}}$

be the sample mean and let

${\displaystyle S^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}(X_{i}-{\bar {X}})^{2}}$

be the (Bessel-corrected) sample variance. Then the random variable

${\displaystyle {\frac {{\bar {X}}-\mu }{\sigma /{\sqrt {n}}}}}$

has a standard normal distribution (i.e. normal with expected value 0 and variance 1), and the random variable

${\displaystyle {\frac {{\bar {X}}-\mu }{S/{\sqrt {n}}}}}$

(where S has been substituted for σ) has a Student's t-distribution with n − 1 degrees of freedom.
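
A quick numerical check of this fact is straightforward. The sketch below (Python with numpy and scipy, which are assumptions of this illustration rather than anything prescribed by the article) simulates many samples of size n and compares the empirical quantiles of the resulting statistic with those of the t-distribution with n − 1 degrees of freedom.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 8, 100_000

# Draw `reps` samples of size n and form the t statistic for each one.
x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)                 # Bessel-corrected sample standard deviation
t_stat = (xbar - mu) / (s / np.sqrt(n))

# Empirical quantiles should match the t-distribution with n - 1 df.
for q in (0.05, 0.5, 0.95):
    print(q, np.quantile(t_stat, q), stats.t.ppf(q, df=n - 1))
```

Replacing S by the exact σ in the denominator would instead reproduce standard normal quantiles.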

## Definition

### Probability density function

Student's t-distribution has the probability density function given by

${\displaystyle f(t)={\frac {\Gamma ({\frac {\nu +1}{2}})}{{\sqrt {\nu \pi }}\,\Gamma ({\frac {\nu }{2}})}}\left(1+{\frac {t^{2}}{\nu }}\right)^{\!-{\frac {\nu +1}{2}}},\!}$

where ${\displaystyle \nu }$ is the number of degrees of freedom and ${\displaystyle \Gamma }$ is the gamma function. This may also be written as

${\displaystyle f(t)={\frac {1}{{\sqrt {\nu }}\,\mathrm {B} ({\frac {1}{2}},{\frac {\nu }{2}})}}\left(1+{\frac {t^{2}}{\nu }}\right)^{\!-{\frac {\nu +1}{2}}}\!,}$

where B is the Beta function.

For ${\displaystyle \nu }$ even,

${\displaystyle {\frac {\Gamma ({\frac {\nu +1}{2}})}{{\sqrt {\nu \pi }}\,\Gamma ({\frac {\nu }{2}})}}={\frac {(\nu -1)(\nu -3)\cdots 5\cdot 3}{2{\sqrt {\nu }}(\nu -2)(\nu -4)\cdots 4\cdot 2\,}}\cdot }$

For ${\displaystyle \nu }$ odd,

${\displaystyle {\frac {\Gamma ({\frac {\nu +1}{2}})}{{\sqrt {\nu \pi }}\,\Gamma ({\frac {\nu }{2}})}}={\frac {(\nu -1)(\nu -3)\cdots 4\cdot 2}{\pi {\sqrt {\nu }}(\nu -2)(\nu -4)\cdots 5\cdot 3\,}}\cdot \!}$

The probability density function is symmetric, and its overall shape resembles the bell shape of a normally distributed variable with mean 0 and variance 1, except that it is a bit lower and wider. As the number of degrees of freedom grows, the t-distribution approaches the normal distribution with mean 0 and variance 1.
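
The density is also easy to implement directly. The sketch below (numpy and scipy assumed) evaluates the formula through log-gamma for numerical stability, checks it against scipy's built-in density, and illustrates the convergence to the standard normal for large ${\displaystyle \nu }$.

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

def t_pdf(t, nu):
    """Student's t density from the formula above, via log-gamma for stability."""
    logc = gammaln((nu + 1) / 2) - 0.5 * np.log(nu * np.pi) - gammaln(nu / 2)
    return np.exp(logc - (nu + 1) / 2 * np.log1p(t ** 2 / nu))

x = np.linspace(-4, 4, 9)
print(np.allclose(t_pdf(x, 5), stats.t.pdf(x, df=5)))        # True

# As nu grows, the t density approaches the standard normal density.
print(np.max(np.abs(t_pdf(x, 1000) - stats.norm.pdf(x))))    # small
```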

The following images show the density of the t-distribution for increasing values of ${\displaystyle \nu }$. The normal distribution is shown as a blue line for comparison. Note that the t-distribution (red line) becomes closer to the normal distribution as ${\displaystyle \nu }$ increases.

Density of the t-distribution (red) compared with the standard normal distribution (blue) for 1, 2, 3, 5, 10, and 30 degrees of freedom.

### Cumulative distribution function

The cumulative distribution function can be written in terms of I, the regularized incomplete beta function. For t > 0,[14]

${\displaystyle F(t)=\int _{-\infty }^{t}f(u)\,du=1-{\tfrac {1}{2}}I_{x(t)}\left({\tfrac {\nu }{2}},{\tfrac {1}{2}}\right),}$

where

${\displaystyle x(t)={\frac {\nu }{t^{2}+\nu }}.}$

Other values would be obtained by symmetry. An alternative formula, valid for ${\displaystyle t^{2}<\nu }$, is[14]

${\displaystyle \int _{-\infty }^{t}f(u)\,du={\tfrac {1}{2}}+t{\frac {\Gamma \left({\tfrac {1}{2}}(\nu +1)\right)}{{\sqrt {\pi \nu }}\,\Gamma \left({\tfrac {\nu }{2}}\right)}}\,{}_{2}F_{1}\left({\tfrac {1}{2}},{\tfrac {1}{2}}(\nu +1);{\tfrac {3}{2}};-{\tfrac {t^{2}}{\nu }}\right),}$

where 2F1 is a particular case of the hypergeometric function.
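
The incomplete-beta form translates directly into code. A minimal sketch (scipy assumed; scipy.special.betainc is the regularized incomplete beta function) implements F(t) for t ≥ 0 and extends it to negative arguments by the symmetry F(−t) = 1 − F(t).

```python
import numpy as np
from scipy import stats
from scipy.special import betainc

def t_cdf(t, nu):
    """CDF from the regularized incomplete beta function; negative t by symmetry."""
    t = np.asarray(t, dtype=float)
    x = nu / (t ** 2 + nu)
    upper = 1 - 0.5 * betainc(nu / 2, 0.5, x)     # valid for t >= 0
    return np.where(t >= 0, upper, 1 - upper)

ts = np.array([-2.5, -0.3, 0.0, 1.0, 3.0])
print(np.allclose(t_cdf(ts, 7), stats.t.cdf(ts, df=7)))      # True
```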

For information on its inverse cumulative distribution function, see quantile function#Student's t-distribution.

### Special cases

Certain values of ${\displaystyle \nu }$ give an especially simple form.

• ${\displaystyle \nu =1}$
Distribution function:
${\displaystyle F(t)={\tfrac {1}{2}}+{\tfrac {1}{\pi }}\arctan(t).}$
Density function:
${\displaystyle f(t)={\frac {1}{\pi (1+t^{2})}}.}$
See Cauchy distribution
• ${\displaystyle \nu =2}$
Distribution function:
${\displaystyle F(t)={\tfrac {1}{2}}+{\frac {t}{2{\sqrt {2+t^{2}}}}}.}$
Density function:
${\displaystyle f(t)={\frac {1}{\left(2+t^{2}\right)^{\frac {3}{2}}}}.}$
• ${\displaystyle \nu =3}$
Density function:
${\displaystyle f(t)={\frac {6{\sqrt {3}}}{\pi \left(3+t^{2}\right)^{2}}}.}$
• ${\displaystyle \nu =\infty }$
Density function:
${\displaystyle f(t)={\frac {1}{\sqrt {2\pi }}}e^{-{\frac {t^{2}}{2}}}.}$
See Normal distribution
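
These closed forms are easy to confirm numerically; the sketch below (scipy assumed) checks the ${\displaystyle \nu =1}$ case against the standard Cauchy distribution and the ${\displaystyle \nu =2}$ distribution function against its closed form.

```python
import numpy as np
from scipy import stats

t = np.linspace(-5, 5, 11)

# nu = 1: the t-distribution reduces to the standard Cauchy distribution.
print(np.allclose(stats.t.pdf(t, df=1), stats.cauchy.pdf(t)))           # True
print(np.allclose(stats.t.cdf(t, df=1), 0.5 + np.arctan(t) / np.pi))    # True

# nu = 2: the closed-form distribution function given above.
print(np.allclose(stats.t.cdf(t, df=2),
                  0.5 + t / (2 * np.sqrt(2 + t ** 2))))                 # True
```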

## How the t-distribution arises

### Sampling distribution

Let x1, ..., xn be the numbers observed in a sample from a continuously distributed population with expected value μ. The sample mean and sample variance are given by:

{\displaystyle {\begin{aligned}{\bar {x}}&={\frac {x_{1}+\cdots +x_{n}}{n}},\\s^{2}&={\frac {1}{n-1}}\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}.\end{aligned}}}

The resulting t-value is

${\displaystyle t={\frac {{\bar {x}}-\mu }{s/{\sqrt {n}}}}.}$

The t-distribution with n − 1 degrees of freedom is the sampling distribution of the t-value when the samples consist of independent identically distributed observations from a normally distributed population. Thus for inference purposes t is a useful "pivotal quantity" in the case when the mean and variance (μ, σ2) are unknown population parameters, in the sense that the t-value has then a probability distribution that depends on neither μ nor σ2.

### Bayesian inference

Main article: Bayesian inference

In Bayesian statistics, a (scaled, shifted) t-distribution arises as the marginal distribution of the unknown mean of a normal distribution, when the dependence on an unknown variance has been marginalised out:[15]

{\displaystyle {\begin{aligned}p(\mu \mid D,I)=&\int p(\mu ,\sigma ^{2}\mid D,I)\,d\sigma ^{2}\\=&\int p(\mu \mid D,\sigma ^{2},I)\,p(\sigma ^{2}\mid D,I)\,d\sigma ^{2},\end{aligned}}}

where D stands for the data {xi}, and I represents any other information that may have been used to create the model. The distribution is thus the compounding of the conditional distribution of μ given the data and σ2 with the marginal distribution of σ2 given the data.

With n data points, if uninformative, or flat, location and scale priors ${\displaystyle p(\mu \mid \sigma ^{2},I)={\text{const}}}$ and ${\displaystyle p(\sigma ^{2}\mid I)\propto 1/\sigma ^{2}}$ can be taken for μ and σ2, then Bayes' theorem gives

{\displaystyle {\begin{aligned}p(\mu \mid D,\sigma ^{2},I)&\sim N({\bar {x}},\sigma ^{2}/n),\\p(\sigma ^{2}\mid D,I)&\sim \operatorname {Scale-inv-} \chi ^{2}(\nu ,s^{2}),\end{aligned}}}

a normal distribution and a scaled inverse chi-squared distribution respectively, where ${\displaystyle \nu =n-1}$ and

${\displaystyle s^{2}=\sum {\frac {(x_{i}-{\bar {x}})^{2}}{n-1}}.}$

The marginalisation integral thus becomes

{\displaystyle {\begin{aligned}p(\mu \mid D,I)&\propto \int _{0}^{\infty }{\frac {1}{\sqrt {\sigma ^{2}}}}\exp \left(-{\frac {1}{2\sigma ^{2}}}n(\mu -{\bar {x}})^{2}\right)\cdot \sigma ^{-\nu -2}\exp(-\nu s^{2}/2\sigma ^{2})\,d\sigma ^{2}\\&\propto \int _{0}^{\infty }\sigma ^{-\nu -3}\exp \left(-{\frac {1}{2\sigma ^{2}}}\left(n(\mu -{\bar {x}})^{2}+\nu s^{2}\right)\right)\,d\sigma ^{2}.\end{aligned}}}

This can be evaluated by substituting ${\displaystyle z=A/2\sigma ^{2}}$, where ${\displaystyle A=n(\mu -{\bar {x}})^{2}+\nu s^{2}}$, giving

${\displaystyle dz=-{\frac {A}{2\sigma ^{4}}}\,d\sigma ^{2},}$

so

${\displaystyle p(\mu \mid D,I)\propto A^{-{\frac {\nu +1}{2}}}\int _{0}^{\infty }z^{(\nu -1)/2}\exp(-z)\,dz.}$

But the z integral is now a standard Gamma integral, which evaluates to a constant, leaving

{\displaystyle {\begin{aligned}p(\mu \mid D,I)&\propto A^{-{\frac {\nu +1}{2}}}\\&\propto \left(1+{\frac {n(\mu -{\bar {x}})^{2}}{\nu s^{2}}}\right)^{-{\frac {\nu +1}{2}}}.\end{aligned}}}

This is a form of the t-distribution with an explicit scaling and shifting that will be explored in more detail in a further section below. It can be related to the standardised t-distribution by the substitution

${\displaystyle t={\frac {\mu -{\bar {x}}}{s/{\sqrt {n}}}}.}$

The derivation above has been presented for the case of uninformative priors for μ and σ2; but it will be apparent that any priors that lead to a normal distribution being compounded with a scaled inverse chi-squared distribution will lead to a t-distribution with scaling and shifting for P(μ | DI), although the scaling parameter corresponding to s2/n above will then be influenced both by the prior information and the data, rather than just by the data as above.
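
In practice this means that, under the flat priors above, the posterior for μ can be handled as a standard t-distribution shifted by ${\displaystyle {\bar {x}}}$ and scaled by ${\displaystyle s/{\sqrt {n}}}$. A minimal sketch (scipy assumed; the data are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.5, size=12)      # illustrative data
n = data.size
xbar, s = data.mean(), data.std(ddof=1)

# mu | D is a t(n - 1) distribution scaled by s/sqrt(n) and centred at xbar.
posterior = stats.t(df=n - 1, loc=xbar, scale=s / np.sqrt(n))
print(posterior.interval(0.95))           # central 95% credible interval for mu
```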

## Characterization

### As the distribution of a test statistic

Student's t-distribution with ${\displaystyle \nu }$ degrees of freedom can be defined as the distribution of the random variable T with[14][16]

${\displaystyle T={\frac {Z}{\sqrt {V/\nu }}}=Z{\sqrt {\frac {\nu }{V}}},}$

where

• Z is a standard normal random variable with expected value 0 and variance 1;
• V has a chi-squared distribution with ${\displaystyle \nu }$ degrees of freedom;
• Z and V are independent.

A different distribution is defined as that of the random variable defined, for a given constant μ, by

${\displaystyle (Z+\mu ){\sqrt {\frac {\nu }{V}}}.}$

This random variable has a noncentral t-distribution with noncentrality parameter μ. This distribution is important in studies of the power of Student's t-test.

#### Derivation

Suppose X1, ..., Xn are independent realizations of the normally-distributed, random variable X, which has an expected value μ and variance σ2. Let

${\displaystyle {\overline {X}}_{n}={\frac {1}{n}}(X_{1}+\cdots +X_{n})}$

be the sample mean, and

${\displaystyle S_{n}^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}\left(X_{i}-{\overline {X}}_{n}\right)^{2}}$

be an unbiased estimate of the variance from the sample. It can be shown that the random variable

${\displaystyle V=(n-1){\frac {S_{n}^{2}}{\sigma ^{2}}}}$

has a chi-squared distribution with ${\displaystyle \nu =n-1}$ degrees of freedom (by Cochran's theorem).[17] It is readily shown that the quantity

${\displaystyle Z=\left({\overline {X}}_{n}-\mu \right){\frac {\sqrt {n}}{\sigma }}}$

is normally distributed with mean 0 and variance 1, since the sample mean ${\displaystyle {\overline {X}}_{n}}$ is normally distributed with mean μ and variance σ2/n. Moreover, it is possible to show that these two random variables (the normally distributed one Z and the chi-squared-distributed one V) are independent. Consequently, Z and V satisfy the conditions of the characterization given above, so the pivotal quantity

${\displaystyle T\equiv {\frac {Z}{\sqrt {V/\nu }}}=\left({\overline {X}}_{n}-\mu \right){\frac {\sqrt {n}}{S_{n}}},}$

which differs from Z in that the exact standard deviation σ is replaced by the random variable Sn, has a Student's t-distribution as defined above. Notice that the unknown population variance σ2 does not appear in T, since it was in both the numerator and the denominator, so it canceled. Gosset intuitively obtained the probability density function stated above, with ${\displaystyle \nu }$ equal to n − 1, and Fisher proved it in 1925.[12]

The distribution of the test statistic T depends on ${\displaystyle \nu }$, but not μ or σ; the lack of dependence on μ and σ is what makes the t-distribution important in both theory and practice.
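
This characterization also gives the simplest recipe for generating t-distributed variates: draw Z and V independently and form ${\displaystyle Z/{\sqrt {V/\nu }}}$. A sketch (numpy and scipy assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
nu, reps = 6, 200_000

z = rng.standard_normal(reps)       # Z ~ N(0, 1)
v = rng.chisquare(nu, reps)         # V ~ chi-squared(nu), independent of Z
t_samples = z / np.sqrt(v / nu)     # T = Z / sqrt(V / nu)

# Kolmogorov-Smirnov comparison against the t-distribution with nu df.
print(stats.kstest(t_samples, stats.t(df=nu).cdf))
```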

### As a maximum entropy distribution

Student's t-distribution is the maximum entropy probability distribution for a random variate X for which ${\displaystyle E(\ln(\nu +X^{2}))}$ is fixed.[18]

## Properties

### Moments

For ${\displaystyle \nu >1}$, the raw moments of the t-distribution are

${\displaystyle E(T^{k})={\begin{cases}0&k{\text{ odd}},\quad 0<k<\nu ,\\{\frac {1}{{\sqrt {\pi }}\,\Gamma \left({\frac {\nu }{2}}\right)}}\left[\Gamma \left({\frac {k+1}{2}}\right)\Gamma \left({\frac {\nu -k}{2}}\right)\nu ^{\frac {k}{2}}\right]&k{\text{ even}},\quad 0<k<\nu .\end{cases}}}$

Moments of order ${\displaystyle \nu }$ or higher do not exist.[19]

The term for ${\displaystyle 0<k<\nu }$, k even, may be simplified using the properties of the gamma function to

${\displaystyle E(T^{k})=\nu ^{\frac {k}{2}}\,\prod _{i=1}^{k/2}{\frac {2i-1}{\nu -2i}}\qquad k{\text{ even}},\quad 0<k<\nu .}$

For a t-distribution with ${\displaystyle \nu }$ degrees of freedom, the expected value is 0 if ${\displaystyle \nu >1}$, and its variance is ${\displaystyle {\frac {\nu }{\nu -2}}}$ if ${\displaystyle \nu >2}$. The skewness is 0 if ${\displaystyle \nu >3}$ and the excess kurtosis is ${\displaystyle {\frac {6}{\nu -4}}}$ if ${\displaystyle \nu >4}$.
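
The product form of the even moments recovers these constants directly; the sketch below (numpy assumed) checks the variance and excess kurtosis for ${\displaystyle \nu =9}$.

```python
import numpy as np

def raw_even_moment(k, nu):
    """E(T^k) for even k with 0 < k < nu, from the product formula above."""
    i = np.arange(1, k // 2 + 1)
    return nu ** (k / 2) * np.prod((2 * i - 1) / (nu - 2 * i))

nu = 9
print(raw_even_moment(2, nu), nu / (nu - 2))                 # variance: both 9/7
kurt = raw_even_moment(4, nu) / raw_even_moment(2, nu) ** 2 - 3
print(kurt, 6 / (nu - 4))                                    # excess kurtosis: both 1.2
```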

### Monte Carlo sampling

There are various approaches to constructing random samples from the Student's t-distribution. The matter depends on whether the samples are required on a stand-alone basis, or are to be constructed by application of a quantile function to uniform samples; e.g., in the multi-dimensional applications basis of copula-dependency.[citation needed] In the case of stand-alone sampling, an extension of the Box–Muller method and its polar form is easily deployed.[20] It has the merit that it applies equally well to all real positive degrees of freedom, ν, while many other candidate methods fail if ν is close to zero.[20]
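
The following is a sketch of a polar-type generator along these lines. The specific transform, which replaces the factor −2 ln w of the Marsaglia polar method for normals by ν(w−2/ν − 1), is an assumption patterned on Bailey's construction rather than a quotation from [20]; as ν → ∞ it tends to −2 ln w and the normal method is recovered.

```python
import numpy as np
from scipy import stats

def t_polar(nu, size, rng=None):
    """Draw Student-t variates with nu > 0 degrees of freedom by a polar
    rejection scheme: accept (u, v) uniform in the unit disc, w = u^2 + v^2,
    and return u * sqrt(nu * (w**(-2/nu) - 1) / w)."""
    rng = rng or np.random.default_rng()
    out = np.empty(size)
    filled = 0
    while filled < size:
        u = rng.uniform(-1.0, 1.0, size)
        v = rng.uniform(-1.0, 1.0, size)
        w = u * u + v * v
        ok = (w > 0) & (w < 1)                    # points inside the unit disc
        u, w = u[ok], w[ok]
        t = u * np.sqrt(nu * (w ** (-2.0 / nu) - 1.0) / w)
        take = min(size - filled, t.size)
        out[filled:filled + take] = t[:take]
        filled += take
    return out

# Sanity check against scipy's t-distribution.
print(stats.kstest(t_polar(3.0, 100_000, np.random.default_rng(3)),
                   stats.t(df=3).cdf))
```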

### Integral of Student's probability density function and p-value

The function A(t|ν) is the integral of Student's probability density function, f(t) between −t and t, for t ≥ 0. It thus gives the probability that a value of t less than that calculated from observed data would occur by chance. Therefore, the function A(t|ν) can be used when testing whether the difference between the means of two sets of data is statistically significant, by calculating the corresponding value of t and the probability of its occurrence if the two sets of data were drawn from the same population. This is used in a variety of situations, particularly in t-tests. For the statistic t, with ν degrees of freedom, A(t|ν) is the probability that t would be less than the observed value if the two means were the same (provided that the smaller mean is subtracted from the larger, so that t ≥ 0). It can be easily calculated from the cumulative distribution function Fν(t) of the t-distribution:

${\displaystyle A(t|\nu )=F_{\nu }(t)-F_{\nu }(-t)=1-I_{\frac {\nu }{\nu +t^{2}}}\left({\frac {\nu }{2}},{\frac {1}{2}}\right),}$

where Ix(a, b) is the regularized incomplete beta function.

For statistical hypothesis testing this function is used to construct the p-value.
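
Concretely, the two-sided p-value is 1 − A(t|ν) = 2(1 − Fν(|t|)); a minimal sketch (scipy assumed):

```python
from scipy import stats

def two_sided_p(t, nu):
    """p = 1 - A(t|nu): probability of a |T| at least as large as observed."""
    return 2 * stats.t.sf(abs(t), df=nu)      # sf(t) = 1 - cdf(t)

print(two_sided_p(2.132, 4))                  # ~0.10; cf. the table of values below
```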

### Differential equation

The pdf of the t-distribution is a solution to the following differential equation:

${\displaystyle \left\{{\begin{array}{l}\left(\nu +x^{2}\right)f'(x)+(\nu +1)xf(x)=0,\\[6pt]\displaystyle f(1)={\frac {\nu ^{\nu /2}(\nu +1)^{-{\frac {\nu }{2}}-{\frac {1}{2}}}}{B\left({\frac {\nu }{2}},{\frac {1}{2}}\right)}}\end{array}}\right\}}$

where B(x,y) is a beta function.
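
One can confirm this numerically: the sketch below (numpy and scipy assumed) evaluates the residual (ν + x2)f′(x) + (ν + 1)x f(x) with a finite-difference derivative, which should vanish up to discretization error.

```python
import numpy as np
from scipy import stats

nu = 5.0
x = np.linspace(-4, 4, 2001)
f = stats.t.pdf(x, df=nu)
fprime = np.gradient(f, x)                    # numerical derivative of the pdf

residual = (nu + x ** 2) * fprime + (nu + 1) * x * f
print(np.max(np.abs(residual)))               # ~0 up to discretization error
```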

## Non-standardized Student's t-distribution

### In terms of scaling parameter σ, or σ2

Student's t distribution can be generalized to a three parameter location-scale family, introducing a location parameter ${\displaystyle \mu }$ and a scale parameter ${\displaystyle \sigma }$, through the relation

${\displaystyle X=\mu +\sigma T}$

or

${\displaystyle T={\frac {X-\mu }{\sigma }}}$

This means that ${\displaystyle {\frac {X-\mu }{\sigma }}}$ has a classic Student's t-distribution with ${\displaystyle \nu }$ degrees of freedom.

The resulting non-standardized Student's t-distribution has a density defined by[21]

${\displaystyle p(x\mid \nu ,\mu ,\sigma )={\frac {\Gamma ({\frac {\nu +1}{2}})}{\Gamma ({\frac {\nu }{2}}){\sqrt {\pi \nu }}\sigma }}\left(1+{\frac {1}{\nu }}\left({\frac {x-\mu }{\sigma }}\right)^{2}\right)^{-{\frac {\nu +1}{2}}}}$

Here, ${\displaystyle \sigma }$ does not correspond to a standard deviation: it is not the standard deviation of the scaled t distribution, which may not even exist; nor is it the standard deviation of the underlying normal distribution, which is unknown. ${\displaystyle \sigma }$ simply sets the overall scaling of the distribution. In the Bayesian derivation of the marginal distribution of an unknown normal mean ${\displaystyle \mu }$ above, ${\displaystyle \sigma }$ as used here corresponds to the quantity ${\displaystyle \scriptstyle {s/{\sqrt {n}}}}$, where

${\displaystyle s^{2}=\sum {\frac {(x_{i}-{\bar {x}})^{2}}{n-1}}.}$

Equivalently, the distribution can be written in terms of ${\displaystyle \sigma ^{2}}$, the square of this scale parameter:

${\displaystyle p(x\mid \nu ,\mu ,\sigma ^{2})={\frac {\Gamma ({\frac {\nu +1}{2}})}{\Gamma ({\frac {\nu }{2}}){\sqrt {\pi \nu \sigma ^{2}}}}}\left(1+{\frac {1}{\nu }}{\frac {(x-\mu )^{2}}{\sigma ^{2}}}\right)^{-{\frac {\nu +1}{2}}}}$

Other properties of this version of the distribution are:[21]

{\displaystyle {\begin{aligned}\operatorname {E} (X)&=\mu &{\text{for }}\,\nu >1,\\\operatorname {var} (X)&=\sigma ^{2}{\frac {\nu }{\nu -2}}&{\text{for }}\,\nu >2,\\\operatorname {mode} (X)&=\mu .\end{aligned}}}

This distribution results from compounding a Gaussian distribution (normal distribution) with mean ${\displaystyle \mu }$ and unknown variance, with an inverse gamma distribution placed over the variance with parameters ${\displaystyle a=\nu /2}$ and ${\displaystyle b=\nu \sigma ^{2}/2}$. In other words, the random variable X is assumed to have a Gaussian distribution with an unknown variance distributed as inverse gamma, and then the variance is marginalized out (integrated out). The reason for the usefulness of this characterization is that the inverse gamma distribution is the conjugate prior distribution of the variance of a Gaussian distribution. As a result, the non-standardized Student's t-distribution arises naturally in many Bayesian inference problems. See below.

Equivalently, this distribution results from compounding a Gaussian distribution with a scaled-inverse-chi-squared distribution with parameters ${\displaystyle \nu }$ and ${\displaystyle \sigma ^{2}}$. The scaled-inverse-chi-squared distribution is exactly the same distribution as the inverse gamma distribution, but with a different parameterization, i.e. ${\displaystyle \nu =2a,\sigma ^{2}=b/a}$.
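
This compounding is easy to check by simulation: draw a variance from the inverse gamma distribution with ${\displaystyle a=\nu /2}$ and ${\displaystyle b=\nu \sigma ^{2}/2}$, then a normal observation with that variance, and compare the marginal sample with the location-scale t-distribution above. A sketch (numpy and scipy assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
nu, mu, sigma2, reps = 5.0, 2.0, 1.5, 200_000

# Variance ~ InvGamma(a = nu/2, scale = nu*sigma2/2), then X ~ N(mu, variance).
var = stats.invgamma(a=nu / 2, scale=nu * sigma2 / 2).rvs(reps, random_state=rng)
x = rng.normal(mu, np.sqrt(var))

# Marginally, X should follow the t with df = nu, loc = mu, scale = sqrt(sigma2).
print(stats.kstest(x, stats.t(df=nu, loc=mu, scale=np.sqrt(sigma2)).cdf))
```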

### In terms of inverse scaling parameter λ

An alternative parameterization uses an inverse scaling parameter ${\displaystyle \lambda }$ (analogous to the way precision is the reciprocal of variance), defined by the relation ${\displaystyle \lambda ={\frac {1}{\sigma ^{2}}}}$. The density is then given by[22]

${\displaystyle p(x|\nu ,\mu ,\lambda )={\frac {\Gamma ({\frac {\nu +1}{2}})}{\Gamma ({\frac {\nu }{2}})}}\left({\frac {\lambda }{\pi \nu }}\right)^{\frac {1}{2}}\left(1+{\frac {\lambda (x-\mu )^{2}}{\nu }}\right)^{-{\frac {\nu +1}{2}}}.}$

Other properties of this version of the distribution are:[22]

{\displaystyle {\begin{aligned}\operatorname {E} (X)&=\mu \quad \quad \quad {\text{for }}\,\nu >1,\\{\text{var}}(X)&={\frac {1}{\lambda }}{\frac {\nu }{\nu -2}}\,\quad {\text{for }}\,\nu >2,\\{\text{mode}}(X)&=\mu .\end{aligned}}}

This distribution results from compounding a Gaussian distribution with mean ${\displaystyle \mu }$ and unknown precision (the reciprocal of the variance), with a gamma distribution placed over the precision with parameters ${\displaystyle a=\nu /2}$ and ${\displaystyle b=\nu /(2\lambda )}$. In other words, the random variable X is assumed to have a normal distribution with an unknown precision distributed as gamma, and then this is marginalized over the gamma distribution.

## Related distributions

• If X ~ t(ν) has a Student's t-distribution then X2 has an F-distribution: ${\displaystyle X^{2}\sim \mathrm {F} (\nu _{1}=1,\nu _{2}=\nu )}$
• The noncentral t-distribution generalizes the t-distribution to include a location parameter. Unlike the nonstandardized t-distributions, the noncentral distributions are not symmetric (the median is not the same as the mode).
• The discrete Student's t-distribution is defined by its probability mass function at r being proportional to:[23]
${\displaystyle \prod _{j=1}^{k}{\frac {1}{(r+j+a)^{2}+b^{2}}}\quad \quad r=\ldots ,-1,0,1,\ldots .}$
Here a, b, and k are parameters. This distribution arises from the construction of a system of discrete distributions similar to that of the Pearson distributions for continuous distributions.[24]
• One can generate Student-t samples by taking the ratio of a standard normal variable to the square root of a scaled chi-squared variable, as in the characterization above. If, instead of the normal distribution, e.g. the Irwin–Hall distribution is used in the numerator, one obtains overall a symmetric 4-parameter distribution which includes the normal, the uniform, the triangular, the Student-t and the Cauchy distributions. This is also more flexible than some other symmetric generalizations of the Gaussian distribution.

## Uses

### In frequentist statistical inference

Student's t-distribution arises in a variety of statistical estimation problems where the goal is to estimate an unknown parameter, such as a mean value, in a setting where the data are observed with additive errors. If (as in nearly all practical statistical work) the population standard deviation of these errors is unknown and has to be estimated from the data, the t-distribution is often used to account for the extra uncertainty that results from this estimation. In most such problems, if the standard deviation of the errors were known, a normal distribution would be used instead of the t-distribution.

Confidence intervals and hypothesis tests are two statistical procedures in which the quantiles of the sampling distribution of a particular statistic (e.g. the standard score) are required. In any situation where this statistic is a linear function of the data, divided by the usual estimate of the standard deviation, the resulting quantity can be rescaled and centered to follow Student's t-distribution. Statistical analyses involving means, weighted means, and regression coefficients all lead to statistics having this form.

Quite often, textbook problems will treat the population standard deviation as if it were known and thereby avoid the need to use the Student's t-distribution. These problems are generally of two kinds: (1) those in which the sample size is so large that one may treat a data-based estimate of the variance as if it were certain, and (2) those that illustrate mathematical reasoning, in which the problem of estimating the standard deviation is temporarily ignored because that is not the point that the author or instructor is then explaining.

#### Hypothesis testing

A number of statistics can be shown to have t-distributions for samples of moderate size under null hypotheses that are of interest, so that the t-distribution forms the basis for significance tests. For example, the distribution of Spearman's rank correlation coefficient ρ in the null case (zero correlation) is well approximated by the t-distribution for sample sizes above about 20.[citation needed]

#### Confidence intervals

Suppose the number A is so chosen that

${\displaystyle \Pr(-A<T<A)=0.9}$

when T has a t-distribution with n − 1 degrees of freedom. By symmetry, this is the same as saying that A satisfies

${\displaystyle \Pr(T<A)=0.95,}$

so A is the "95th percentile" of this probability distribution, or ${\displaystyle A=t_{(0.05,n-1)}}$. Then

${\displaystyle \Pr \left(-A<{\frac {{\overline {X}}_{n}-\mu }{S_{n}/{\sqrt {n}}}}<A\right)=0.9,}$

and this is equivalent to

${\displaystyle \Pr \left({\overline {X}}_{n}-A{\frac {S_{n}}{\sqrt {n}}}<\mu <{\overline {X}}_{n}+A{\frac {S_{n}}{\sqrt {n}}}\right)=0.9.}$

Therefore, the interval whose endpoints are

${\displaystyle {\overline {X}}_{n}\pm A{\frac {S_{n}}{\sqrt {n}}}}$

is a 90% confidence interval for μ. Therefore, if we find the mean of a set of observations that we can reasonably expect to have a normal distribution, we can use the t-distribution to examine whether the confidence limits on that mean include some theoretically predicted value – such as the value predicted on a null hypothesis.
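
A minimal sketch of this interval computation (scipy assumed; the data are invented for illustration):

```python
import numpy as np
from scipy import stats

data = np.array([9.2, 10.1, 9.8, 10.4, 9.5, 10.0, 9.9, 10.3])  # illustrative sample
n = data.size
xbar, s = data.mean(), data.std(ddof=1)

A = stats.t.ppf(0.95, df=n - 1)        # 95th percentile gives a 90% two-sided interval
half_width = A * s / np.sqrt(n)
print(xbar - half_width, xbar + half_width)
```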

It is this result that is used in the Student's t-tests: since the difference between the means of samples from two normal distributions is itself distributed normally, the t-distribution can be used to examine whether that difference can reasonably be supposed to be zero.

If the data are normally distributed, the one-sided (1 − a) upper confidence limit (UCL) of the mean can be calculated using the following equation:

${\displaystyle \mathrm {UCL} _{1-a}={\overline {X}}_{n}+t_{a,n-1}{\frac {S_{n}}{\sqrt {n}}}.}$

The resulting UCL is the greatest mean value that will occur for the given confidence level and sample size. In other words, ${\displaystyle {\overline {X}}_{n}}$ being the mean of the set of observations, the probability that the mean of the distribution is less than UCL1−a is equal to the confidence level 1 − a.

#### Prediction intervals

The t-distribution can be used to construct a prediction interval for an unobserved sample from a normal distribution with unknown mean and variance.
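
For a single future observation the standard interval is ${\displaystyle {\bar {x}}\pm t_{\alpha /2,\,n-1}\,s{\sqrt {1+1/n}}}$, the extra 1 under the square root accounting for the variability of the new observation itself. That formula is not derived in this section, so the sketch below (scipy assumed) should be read as an illustration of the standard result.

```python
import numpy as np
from scipy import stats

data = np.array([4.1, 5.3, 4.8, 5.0, 4.6, 5.2])   # illustrative sample
n = data.size
xbar, s = data.mean(), data.std(ddof=1)

# 95% prediction interval for one future observation.
t_crit = stats.t.ppf(0.975, df=n - 1)
half = t_crit * s * np.sqrt(1 + 1 / n)
print(xbar - half, xbar + half)
```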

### In Bayesian statistics

The Student's t-distribution, especially in its three-parameter (location-scale) version, arises frequently in Bayesian statistics as a result of its connection with the normal distribution. Whenever the variance of a normally distributed random variable is unknown and a conjugate prior placed over it that follows an inverse gamma distribution, the resulting marginal distribution of the variable will follow a Student's t-distribution. Equivalent constructions with the same results involve a conjugate scaled-inverse-chi-squared distribution over the variance, or a conjugate gamma distribution over the precision. If an improper prior proportional to σ−2 is placed over the variance, the t-distribution also arises. This is the case regardless of whether the mean of the normally distributed variable is known, is unknown distributed according to a conjugate normally distributed prior, or is unknown distributed according to an improper constant prior.

Related situations that also produce a t-distribution are:

• the marginal posterior distribution of the unknown mean of normally distributed data, as in the derivation above;
• the prior predictive and posterior predictive distribution of a new normally distributed data point, given a set of independent, identically normally distributed observations.

### Robust parametric modeling

The t-distribution is often used as an alternative to the normal distribution as a model for data, which often has heavier tails than the normal distribution allows for; see e.g. Lange et al.[25] The classical approach was to identify outliers and exclude or downweight them in some way. However, it is not always easy to identify outliers (especially in high dimensions), and the t-distribution is a natural choice of model for such data and provides a parametric approach to robust statistics.

A Bayesian account can be found in Gelman et al.[26] The degrees of freedom parameter controls the kurtosis of the distribution and is correlated with the scale parameter. The likelihood can have multiple local maxima and, as such, it is often necessary to fix the degrees of freedom at a fairly low value and estimate the other parameters taking this as given. Some authors[citation needed] report that values between 3 and 9 are often good choices. Venables and Ripley[citation needed] suggest that a value of 5 is often a good choice.

## Table of selected values

Most statistical textbooks list t-distribution tables. Nowadays, the better way to obtain a fully precise critical t-value or a cumulative probability is to use the statistical functions built into spreadsheets (TDIST and TINV) or an interactive calculating web page, which also spares the user from remembering parameter order and function names.

The following table lists a few selected values for t-distributions with ν degrees of freedom for a range of one-sided or two-sided critical regions. For an example of how to read this table, take the fourth row, which begins with 4; that means ν, the number of degrees of freedom, is 4 (and if we are dealing, as above, with n values with a fixed sum, n = 5). Take the fifth entry, in the column headed 95% for one-sided (90% for two-sided). The value of that entry is 2.132. Then the probability that T is less than 2.132 is 95% or Pr(−∞ < T < 2.132) = 0.95; this also means that Pr(−2.132 < T < 2.132) = 0.9.

This can be calculated by the symmetry of the distribution,

Pr(T < −2.132) = 1 − Pr(T > −2.132) = 1 − 0.95 = 0.05,

and so

Pr(−2.132 < T < 2.132) = 1 − 2(0.05) = 0.9.

Note that the last row also gives critical points: a t-distribution with infinitely many degrees of freedom is a normal distribution. (See Related distributions above).

The first column is the number of degrees of freedom.

Each column is headed by the one-sided confidence level; the corresponding two-sided level is given in parentheses.

| ν | 75% (50%) | 80% (60%) | 85% (70%) | 90% (80%) | 95% (90%) | 97.5% (95%) | 99% (98%) | 99.5% (99%) | 99.75% (99.5%) | 99.9% (99.8%) | 99.95% (99.9%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1.000 | 1.376 | 1.963 | 3.078 | 6.314 | 12.71 | 31.82 | 63.66 | 127.3 | 318.3 | 636.6 |
| 2 | 0.816 | 1.080 | 1.386 | 1.886 | 2.920 | 4.303 | 6.965 | 9.925 | 14.09 | 22.33 | 31.60 |
| 3 | 0.765 | 0.978 | 1.250 | 1.638 | 2.353 | 3.182 | 4.541 | 5.841 | 7.453 | 10.21 | 12.92 |
| 4 | 0.741 | 0.941 | 1.190 | 1.533 | 2.132 | 2.776 | 3.747 | 4.604 | 5.598 | 7.173 | 8.610 |
| 5 | 0.727 | 0.920 | 1.156 | 1.476 | 2.015 | 2.571 | 3.365 | 4.032 | 4.773 | 5.893 | 6.869 |
| 6 | 0.718 | 0.906 | 1.134 | 1.440 | 1.943 | 2.447 | 3.143 | 3.707 | 4.317 | 5.208 | 5.959 |
| 7 | 0.711 | 0.896 | 1.119 | 1.415 | 1.895 | 2.365 | 2.998 | 3.499 | 4.029 | 4.785 | 5.408 |
| 8 | 0.706 | 0.889 | 1.108 | 1.397 | 1.860 | 2.306 | 2.896 | 3.355 | 3.833 | 4.501 | 5.041 |
| 9 | 0.703 | 0.883 | 1.100 | 1.383 | 1.833 | 2.262 | 2.821 | 3.250 | 3.690 | 4.297 | 4.781 |
| 10 | 0.700 | 0.879 | 1.093 | 1.372 | 1.812 | 2.228 | 2.764 | 3.169 | 3.581 | 4.144 | 4.587 |
| 11 | 0.697 | 0.876 | 1.088 | 1.363 | 1.796 | 2.201 | 2.718 | 3.106 | 3.497 | 4.025 | 4.437 |
| 12 | 0.695 | 0.873 | 1.083 | 1.356 | 1.782 | 2.179 | 2.681 | 3.055 | 3.428 | 3.930 | 4.318 |
| 13 | 0.694 | 0.870 | 1.079 | 1.350 | 1.771 | 2.160 | 2.650 | 3.012 | 3.372 | 3.852 | 4.221 |
| 14 | 0.692 | 0.868 | 1.076 | 1.345 | 1.761 | 2.145 | 2.624 | 2.977 | 3.326 | 3.787 | 4.140 |
| 15 | 0.691 | 0.866 | 1.074 | 1.341 | 1.753 | 2.131 | 2.602 | 2.947 | 3.286 | 3.733 | 4.073 |
| 16 | 0.690 | 0.865 | 1.071 | 1.337 | 1.746 | 2.120 | 2.583 | 2.921 | 3.252 | 3.686 | 4.015 |
| 17 | 0.689 | 0.863 | 1.069 | 1.333 | 1.740 | 2.110 | 2.567 | 2.898 | 3.222 | 3.646 | 3.965 |
| 18 | 0.688 | 0.862 | 1.067 | 1.330 | 1.734 | 2.101 | 2.552 | 2.878 | 3.197 | 3.610 | 3.922 |
| 19 | 0.688 | 0.861 | 1.066 | 1.328 | 1.729 | 2.093 | 2.539 | 2.861 | 3.174 | 3.579 | 3.883 |
| 20 | 0.687 | 0.860 | 1.064 | 1.325 | 1.725 | 2.086 | 2.528 | 2.845 | 3.153 | 3.552 | 3.850 |
| 21 | 0.686 | 0.859 | 1.063 | 1.323 | 1.721 | 2.080 | 2.518 | 2.831 | 3.135 | 3.527 | 3.819 |
| 22 | 0.686 | 0.858 | 1.061 | 1.321 | 1.717 | 2.074 | 2.508 | 2.819 | 3.119 | 3.505 | 3.792 |
| 23 | 0.685 | 0.858 | 1.060 | 1.319 | 1.714 | 2.069 | 2.500 | 2.807 | 3.104 | 3.485 | 3.767 |
| 24 | 0.685 | 0.857 | 1.059 | 1.318 | 1.711 | 2.064 | 2.492 | 2.797 | 3.091 | 3.467 | 3.745 |
| 25 | 0.684 | 0.856 | 1.058 | 1.316 | 1.708 | 2.060 | 2.485 | 2.787 | 3.078 | 3.450 | 3.725 |
| 26 | 0.684 | 0.856 | 1.058 | 1.315 | 1.706 | 2.056 | 2.479 | 2.779 | 3.067 | 3.435 | 3.707 |
| 27 | 0.684 | 0.855 | 1.057 | 1.314 | 1.703 | 2.052 | 2.473 | 2.771 | 3.057 | 3.421 | 3.690 |
| 28 | 0.683 | 0.855 | 1.056 | 1.313 | 1.701 | 2.048 | 2.467 | 2.763 | 3.047 | 3.408 | 3.674 |
| 29 | 0.683 | 0.854 | 1.055 | 1.311 | 1.699 | 2.045 | 2.462 | 2.756 | 3.038 | 3.396 | 3.659 |
| 30 | 0.683 | 0.854 | 1.055 | 1.310 | 1.697 | 2.042 | 2.457 | 2.750 | 3.030 | 3.385 | 3.646 |
| 40 | 0.681 | 0.851 | 1.050 | 1.303 | 1.684 | 2.021 | 2.423 | 2.704 | 2.971 | 3.307 | 3.551 |
| 50 | 0.679 | 0.849 | 1.047 | 1.299 | 1.676 | 2.009 | 2.403 | 2.678 | 2.937 | 3.261 | 3.496 |
| 60 | 0.679 | 0.848 | 1.045 | 1.296 | 1.671 | 2.000 | 2.390 | 2.660 | 2.915 | 3.232 | 3.460 |
| 80 | 0.678 | 0.846 | 1.043 | 1.292 | 1.664 | 1.990 | 2.374 | 2.639 | 2.887 | 3.195 | 3.416 |
| 100 | 0.677 | 0.845 | 1.042 | 1.290 | 1.660 | 1.984 | 2.364 | 2.626 | 2.871 | 3.174 | 3.390 |
| 120 | 0.677 | 0.845 | 1.041 | 1.289 | 1.658 | 1.980 | 2.358 | 2.617 | 2.860 | 3.160 | 3.373 |
| ∞ | 0.674 | 0.842 | 1.036 | 1.282 | 1.645 | 1.960 | 2.326 | 2.576 | 2.807 | 3.090 | 3.291 |

The number at the beginning of each row in the table above is ν, which has been defined above as n − 1. The percentage along the top is 100%(1 − α). The numbers in the main body of the table are tα, ν. If a quantity T is distributed as a Student's t-distribution with ν degrees of freedom, then there is a probability 1 − α that T will be less than tα, ν. (Calculated as for a one-tailed or one-sided test, as opposed to a two-tailed test.)

For example, given a sample with sample variance 2 and sample mean 10, taken from a sample of 11 observations (10 degrees of freedom), using the formula

${\displaystyle {\overline {X}}_{n}\pm A{\frac {S_{n}}{\sqrt {n}}},}$

we can determine that at 90% confidence, we have a true mean lying below

${\displaystyle 10+1.37218{\frac {\sqrt {2}}{\sqrt {11}}}=10.58510.}$

In other words, on average, 90% of the times that an upper threshold is calculated by this method, this upper threshold exceeds the true mean.

And, still at 90% confidence, we have a true mean lying over

${\displaystyle 10-1.37218{\frac {\sqrt {2}}{\sqrt {11}}}=9.41490.}$

In other words, on average, 90% of the times that a lower threshold is calculated by this method, this lower threshold lies below the true mean.

Thus, at 80% confidence (calculated from 1 − 2 × (1 − 90%) = 80%), we have a true mean lying within the interval

${\displaystyle \left(10-1.37218{\frac {\sqrt {2}}{\sqrt {11}}},10+1.37218{\frac {\sqrt {2}}{\sqrt {11}}}\right)=(9.41490,10.58510).}$

In other words, on average, 80% of the times that upper and lower thresholds are calculated by this method, the true mean is both below the upper threshold and above the lower threshold. This is not the same thing as saying that there is an 80% probability that the true mean lies between a particular pair of upper and lower thresholds that have been calculated by this method; see confidence interval and prosecutor's fallacy.
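
The table entries and the worked example above can be reproduced with the quantile function; a sketch (scipy assumed):

```python
import numpy as np
from scipy import stats

# Table entry used earlier: 95% one-sided critical value with 4 degrees of freedom.
print(stats.t.ppf(0.95, df=4))                 # ~2.132

# Worked example: n = 11, sample mean 10, sample variance 2.
A = stats.t.ppf(0.90, df=10)                   # ~1.37218
print(10 + A * np.sqrt(2) / np.sqrt(11))       # ~10.58510
print(10 - A * np.sqrt(2) / np.sqrt(11))       # ~9.41490
```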

## Notes

1. ^ Hurst, Simon. The Characteristic Function of the Student-t Distribution, Financial Mathematics Research Report No. FMRR006-95, Statistics Research Report No. SRR044-95 Archived February 18, 2010, at the Wayback Machine.
2. ^ Helmert, F. R. (1875). "Über die Bestimmung des wahrscheinlichen Fehlers aus einer endlichen Anzahl wahrer Beobachtungsfehler". Z. Math. Phys., 20, 300–3.
3. ^ Helmert, F. R. (1876a). "Über die Wahrscheinlichkeit der Potenzsummen der Beobachtungsfehler und uber einige damit in Zusammenhang stehende Fragen". Z. Math. Phys., 21, 192–218.
4. ^ Helmert, F. R. (1876b). "Die Genauigkeit der Formel von Peters zur Berechnung des wahrscheinlichen Beobachtungsfehlers directer Beobachtungen gleicher Genauigkeit", Astron. Nachr., 88, 113–32.
5. ^ Lüroth, J (1876). "Vergleichung von zwei Werten des wahrscheinlichen Fehlers". Astron. Nachr. 87 (14): 209–20. Bibcode:1876AN.....87..209L. doi:10.1002/asna.18760871402.
6. ^ Pfanzagl, J.; Sheynin, O. (1996). "A forerunner of the t-distribution (Studies in the history of probability and statistics XLIV)". Biometrika. 83 (4): 891–898. doi:10.1093/biomet/83.4.891. MR 1766040.
7. ^ Sheynin, O. (1995). "Helmert's work in the theory of errors". Arch. Hist. Exact Sci. 49: 73–104. doi:10.1007/BF00374700.
8. ^ "Student" [William Sealy Gosset] (March 1908). "The probable error of a mean" (PDF). Biometrika. 6 (1): 1–25. doi:10.1093/biomet/6.1.1.
9. ^ "Student" (William Sealy Gosset), original Biometrika paper as a scan.
10. ^ M. Wendl (2016) Pseudonymous fame, Science, 351(6280), 1406.
11. ^ Mortimer, Robert G. (2005). Mathematics for Physical Chemistry, 3rd ed. Academic Press. ISBN 0-12-508347-5 (p. 326).
12. ^ a b Fisher, R. A. (1925). "Applications of "Student's" distribution" (PDF). Metron. 5: 90–104.
13. ^ Walpole, Ronald; Myers, Raymond; Myers, Sharon; Ye, Keying. (2002). Probability and Statistics for Engineers and Scientists, 7th edi. p. 237. Pearson Education. ISBN 81-7758-404-9
14. ^ a b c Johnson, N. L., Kotz, S., Balakrishnan, N. (1995) Continuous Univariate Distributions, Volume 2, 2nd Edition. Wiley, ISBN 0-471-58494-0 (Chapter 28).
15. ^ A. Gelman et al (1995), Bayesian Data Analysis, Chapman & Hall. ISBN 0-412-03991-5. p. 68
16. ^ Hogg & Craig (1978), Sections 4.4 and 4.8.
17. ^ Cochran, W. G. (April 1934). "The distribution of quadratic forms in a normal system, with applications to the analysis of covariance". Mathematical Proceedings of the Cambridge Philosophical Society. 30 (2): 178–191. Bibcode:1934PCPS...30..178C. doi:10.1017/S0305004100016595.
18. ^ Park, Sung Y.; Bera, Anil K. (2009). "Maximum entropy autoregressive conditional heteroskedasticity model" (PDF). Journal of Econometrics. Elsevier: 219–230. Retrieved 2011-06-02.
19. ^ See, for example, page 56 of Casella and Berger, Statistical Inference, 1990 Duxbury.
20. ^ a b Bailey, R. W. (1994). "Polar Generation of Random Variates with the t-Distribution". Mathematics of Computation. 62 (206): 779–781. doi:10.2307/2153537.
21. ^ a b Jackman, Simon (2009). Bayesian Analysis for the Social Sciences. Wiley. p. 507.
22. ^ a b Bishop, C.M. (2006). Pattern recognition and machine learning. Springer.
23. ^ Ord, J.K. (1972) Families of Frequency Distributions, Griffin. ISBN 0-85264-137-0 (Table 5.1)
24. ^ Ord, J.K. (1972) Families of Frequency Distributions, Griffin. ISBN 0-85264-137-0 (Chapter 5)
25. ^ Lange, Kenneth L., Roderick JA Little, and Jeremy MG Taylor. "Robust statistical modeling using the t distribution." Journal of the American Statistical Association 84.408 (1989): 881-896.
26. ^ Gelman, Andrew, et al. Bayesian data analysis, Chapter 12; Boca Raton, FL, USA: Chapman & Hall/CRC, 2014