# Truncated normal distribution

*Figures: probability density function and cumulative distribution function of the truncated normal distribution for different sets of parameters. In all cases, a = −10 and b = 10. Black: μ = −8, σ = 2; blue: μ = 0, σ = 2; red: μ = 9, σ = 10; orange: μ = 0, σ = 10.*

- Notation: ${\displaystyle \xi ={\frac {x-\mu }{\sigma }},\ \alpha ={\frac {a-\mu }{\sigma }},\ \beta ={\frac {b-\mu }{\sigma }},\ Z=\Phi (\beta )-\Phi (\alpha )}$
- Parameters: ${\displaystyle \mu \in \mathbb {R} }$; ${\displaystyle \sigma ^{2}\geq 0}$ (but see definition); ${\displaystyle a\in \mathbb {R} }$ — minimum value of ${\displaystyle x}$; ${\displaystyle b\in \mathbb {R} }$ — maximum value of ${\displaystyle x}$ (${\displaystyle b>a}$)
- Support: ${\displaystyle x\in [a,b]}$
- PDF: ${\displaystyle f(x;\mu ,\sigma ,a,b)={\frac {\varphi (\xi )}{\sigma Z}}}$[1]
- CDF: ${\displaystyle F(x;\mu ,\sigma ,a,b)={\frac {\Phi (\xi )-\Phi (\alpha )}{Z}}}$
- Mean: ${\displaystyle \mu +{\frac {\varphi (\alpha )-\varphi (\beta )}{Z}}\sigma }$
- Median: ${\displaystyle \mu +\Phi ^{-1}\left({\frac {\Phi (\alpha )+\Phi (\beta )}{2}}\right)\sigma }$
- Mode: ${\displaystyle {\begin{cases}a,&{\text{if }}\mu <a\\\mu ,&{\text{if }}a\leq \mu \leq b\\b,&{\text{if }}\mu >b\end{cases}}}$
- Variance: ${\displaystyle \sigma ^{2}\left[1-{\frac {\beta \varphi (\beta )-\alpha \varphi (\alpha )}{Z}}-\left({\frac {\varphi (\alpha )-\varphi (\beta )}{Z}}\right)^{2}\right]}$
- Entropy: ${\displaystyle \ln({\sqrt {2\pi e}}\,\sigma Z)+{\frac {\alpha \varphi (\alpha )-\beta \varphi (\beta )}{2Z}}}$
- MGF: ${\displaystyle e^{\mu t+\sigma ^{2}t^{2}/2}\left[{\frac {\Phi (\beta -\sigma t)-\Phi (\alpha -\sigma t)}{\Phi (\beta )-\Phi (\alpha )}}\right]}$

In probability and statistics, the truncated normal distribution is the probability distribution derived from that of a normally distributed random variable by bounding the random variable from either below or above (or both). The truncated normal distribution has wide applications in statistics and econometrics.

## Definitions

Suppose ${\displaystyle X}$ has a normal distribution with mean ${\displaystyle \mu }$ and variance ${\displaystyle \sigma ^{2}}$ and lies within the interval ${\displaystyle (a,b),{\text{ with }}-\infty \leq a<b\leq \infty }$. Then ${\displaystyle X}$ conditional on ${\displaystyle a<X<b}$ has a truncated normal distribution.

Its probability density function, ${\displaystyle f}$, for ${\displaystyle a\leq x\leq b}$, is given by

${\displaystyle f(x;\mu ,\sigma ,a,b)={\frac {1}{\sigma }}\,{\frac {\varphi ({\frac {x-\mu }{\sigma }})}{\Phi ({\frac {b-\mu }{\sigma }})-\Phi ({\frac {a-\mu }{\sigma }})}}}$

and by ${\displaystyle f=0}$ otherwise.

Here,

${\displaystyle \varphi (\xi )={\frac {1}{\sqrt {2\pi }}}\exp \left(-{\frac {1}{2}}\xi ^{2}\right)}$
is the probability density function of the standard normal distribution and ${\displaystyle \Phi (\cdot )}$ is its cumulative distribution function
${\displaystyle \Phi (x)={\frac {1}{2}}\left(1+\operatorname {erf} (x/{\sqrt {2}})\right).}$
By definition, if ${\displaystyle b=\infty }$, then ${\displaystyle \Phi \left({\tfrac {b-\mu }{\sigma }}\right)=1}$, and similarly, if ${\displaystyle a=-\infty }$, then ${\displaystyle \Phi \left({\tfrac {a-\mu }{\sigma }}\right)=0}$.
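The definition above needs nothing beyond the standard normal pdf and cdf. The following Python sketch (function name illustrative, not from any reference) evaluates the truncated density and checks numerically that it integrates to one over ${\displaystyle [a,b]}$:

```python
from statistics import NormalDist

STD = NormalDist()  # standard normal: provides .pdf and .cdf

def truncnorm_pdf(x, mu, sigma, a, b):
    """Density of a Normal(mu, sigma) truncated to [a, b]."""
    if not a <= x <= b:
        return 0.0
    Z = STD.cdf((b - mu) / sigma) - STD.cdf((a - mu) / sigma)
    return STD.pdf((x - mu) / sigma) / (sigma * Z)

# Sanity check: the density should integrate to 1 over [a, b]
# (composite trapezoid rule on a fine grid).
mu, sigma, a, b = 0.0, 2.0, -1.0, 3.0
n = 20_000
h = (b - a) / n
grid = [truncnorm_pdf(a + i * h, mu, sigma, a, b) for i in range(n + 1)]
integral = h * (sum(grid) - 0.5 * (grid[0] + grid[-1]))
```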

The above formulae show that when ${\displaystyle -\infty <a<b<+\infty }$ the scale parameter ${\displaystyle \sigma ^{2}}$ of the truncated normal distribution is allowed to assume negative values. The parameter ${\displaystyle \sigma }$ is in this case imaginary, but the function ${\displaystyle f}$ is nevertheless real, positive, and normalizable. The scale parameter ${\displaystyle \sigma ^{2}}$ of the untruncated normal distribution must be positive because the distribution would not be normalizable otherwise. The doubly truncated normal distribution, on the other hand, can in principle have a negative scale parameter (which is different from the variance, see summary formulae), because no such integrability problems arise on a bounded domain. In this case the distribution cannot be interpreted as an untruncated normal conditional on ${\displaystyle a<X<b}$, of course, but can still be interpreted as a maximum-entropy distribution with first and second moments as constraints, and has an additional peculiar feature: it presents two local maxima instead of one, located at ${\displaystyle x=a}$ and ${\displaystyle x=b}$.

## Properties

The truncated normal is one of two possible maximum entropy probability distributions for a fixed mean and variance constrained to the interval [a,b], the other being the truncated U.[2] Truncated normals with fixed support form an exponential family. Nielsen[3] reported closed-form formulas for calculating the Kullback–Leibler divergence and the Bhattacharyya distance between two truncated normal distributions when the support of the first distribution is nested within the support of the second.

### Moments

If the random variable has been truncated only from below, some probability mass has been shifted to higher values, giving a first-order stochastically dominating distribution and hence increasing the mean to a value higher than the mean ${\displaystyle \mu }$ of the original normal distribution. Likewise, if the random variable has been truncated only from above, the truncated distribution has a mean less than ${\displaystyle \mu .}$

Regardless of whether the random variable is bounded above, below, or both, the truncation is a mean-preserving contraction combined with a mean-changing rigid shift, and hence the variance of the truncated distribution is less than the variance ${\displaystyle \sigma ^{2}}$ of the original normal distribution.

#### Two-sided truncation[4]

Let ${\displaystyle \alpha =(a-\mu )/\sigma }$ and ${\displaystyle \beta =(b-\mu )/\sigma }$. Then:

${\displaystyle \operatorname {E} (X\mid a<X<b)=\mu +{\frac {\varphi (\alpha )-\varphi (\beta )}{Z}}\sigma }$

and

${\displaystyle \operatorname {Var} (X\mid a<X<b)=\sigma ^{2}\left[1-{\frac {\beta \varphi (\beta )-\alpha \varphi (\alpha )}{Z}}-\left({\frac {\varphi (\alpha )-\varphi (\beta )}{Z}}\right)^{2}\right],}$

where ${\displaystyle Z=\Phi (\beta )-\Phi (\alpha ).}$

Care must be taken in the numerical evaluation of these formulas, which can result in catastrophic cancellation when the interval ${\displaystyle [a,b]}$ does not include ${\displaystyle \mu }$. There are better ways to rewrite them that avoid this issue.[5]
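As an illustration, the two-sided moment formulas transcribe directly into Python (function name illustrative; this is the naive evaluation, which, as just noted, loses accuracy when ${\displaystyle [a,b]}$ lies far from ${\displaystyle \mu }$):

```python
from statistics import NormalDist

STD = NormalDist()  # standard normal pdf/cdf

def truncnorm_moments(mu, sigma, a, b):
    """Mean and variance of Normal(mu, sigma) truncated to [a, b],
    by direct transcription of the two-sided formulas.  When [a, b]
    is far from mu the differences below suffer catastrophic
    cancellation; dedicated libraries use stabler rewrites."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = STD.cdf(beta) - STD.cdf(alpha)
    d = (STD.pdf(alpha) - STD.pdf(beta)) / Z
    mean = mu + sigma * d
    var = sigma ** 2 * (
        1.0 - (beta * STD.pdf(beta) - alpha * STD.pdf(alpha)) / Z - d ** 2
    )
    return mean, var

# Symmetric truncation about mu keeps the mean at mu
# and shrinks the variance below sigma^2.
m, v = truncnorm_moments(0.0, 1.0, -1.0, 1.0)
```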

#### One-sided truncation (of lower tail)[6]

In this case ${\displaystyle \;b=\infty ,\;\varphi (\beta )=0,\;\Phi (\beta )=1,}$ then

${\displaystyle \operatorname {E} (X\mid X>a)=\mu +\sigma \varphi (\alpha )/Z,\!}$

and

${\displaystyle \operatorname {Var} (X\mid X>a)=\sigma ^{2}[1+\alpha \varphi (\alpha )/Z-(\varphi (\alpha )/Z)^{2}],}$

where ${\displaystyle Z=1-\Phi (\alpha ).}$
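The quantity ${\displaystyle \varphi (\alpha )/Z}$ appearing in these formulas is the inverse Mills ratio familiar from econometrics. A short Python sketch (function name illustrative), checked against the half-normal special case ${\displaystyle a=\mu }$:

```python
from statistics import NormalDist

STD = NormalDist()

def lower_truncated_moments(mu, sigma, a):
    """Mean and variance of Normal(mu, sigma) conditioned on X > a.
    lam is the inverse Mills ratio phi(alpha) / (1 - Phi(alpha))."""
    alpha = (a - mu) / sigma
    lam = STD.pdf(alpha) / (1.0 - STD.cdf(alpha))
    mean = mu + sigma * lam
    var = sigma ** 2 * (1.0 + alpha * lam - lam ** 2)
    return mean, var

# Truncating a standard normal at a = 0 gives the half-normal
# distribution: mean sqrt(2/pi), variance 1 - 2/pi.
m, v = lower_truncated_moments(0.0, 1.0, 0.0)
```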

#### One-sided truncation (of upper tail)

In this case ${\displaystyle \;a=\alpha =-\infty ,\;\varphi (\alpha )=0,\;\Phi (\alpha )=0,}$ then

${\displaystyle \operatorname {E} (X\mid X<b)=\mu -\sigma \varphi (\beta )/Z,}$

and

${\displaystyle \operatorname {Var} (X\mid X<b)=\sigma ^{2}[1-\beta \varphi (\beta )/Z-(\varphi (\beta )/Z)^{2}],}$

where ${\displaystyle Z=\Phi (\beta ).}$

Barr & Sherrill (1999) give a simpler expression for the variance of one-sided truncations, in terms of the chi-square CDF, which is implemented in standard software libraries. Bebu & Mathew (2009) provide formulas for (generalized) confidence intervals around the truncated moments.

##### A recursive formula

As for the non-truncated case, there is a recursive formula for the truncated moments.[7]

##### Multivariate

Computing the moments of a multivariate truncated normal is harder.

## Generating values from the truncated normal distribution

A random variate ${\displaystyle x}$ defined as ${\displaystyle x=\Phi ^{-1}(\Phi (\alpha )+U\cdot (\Phi (\beta )-\Phi (\alpha )))\sigma +\mu }$, with ${\displaystyle \Phi }$ the cumulative distribution function of the standard normal, ${\displaystyle \Phi ^{-1}}$ its inverse, and ${\displaystyle U}$ a uniform random number on ${\displaystyle (0,1)}$, follows the normal distribution truncated to the range ${\displaystyle (a,b)}$. This is simply the inverse transform method for simulating random variables. Although it is one of the simplest methods, it can either fail when sampling in the tail of the normal distribution,[8] or be much too slow.[9] Thus, in practice, one has to find alternative methods of simulation.
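A minimal Python sketch of the inverse transform method just described (function name illustrative), using only the standard library:

```python
import random
from statistics import NormalDist

STD = NormalDist()

def truncnorm_inverse_transform(mu, sigma, a, b, rng):
    """One draw from Normal(mu, sigma) truncated to (a, b):
    map a uniform U into [Phi(alpha), Phi(beta)] and push it
    through Phi^{-1}.  Accurate in the bulk, but Phi and its
    inverse lose precision when (a, b) is far in a tail."""
    Fa = STD.cdf((a - mu) / sigma)
    Fb = STD.cdf((b - mu) / sigma)
    u = rng.random()
    return mu + sigma * STD.inv_cdf(Fa + u * (Fb - Fa))

rng = random.Random(42)
xs = [truncnorm_inverse_transform(0.0, 1.0, -1.0, 2.0, rng) for _ in range(20_000)]
mean_est = sum(xs) / len(xs)  # should approach (phi(-1) - phi(2)) / Z ~ 0.2296
```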

One such truncated normal generator (implemented in MATLAB and in R as trandn.R) is based on an acceptance-rejection idea due to Marsaglia.[10] Despite the slightly suboptimal acceptance rate of Marsaglia (1964) in comparison with Robert (1995), Marsaglia's method is typically faster,[9] because it does not require the costly numerical evaluation of the exponential function.
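The tail sampler of Marsaglia (1964) is short enough to sketch. The following Python version (illustrative only, not the trandn implementation) draws a standard normal conditioned on ${\displaystyle X>a}$ for a standardized bound ${\displaystyle a>0}$: it proposes ${\displaystyle x={\sqrt {a^{2}-2\ln U_{1}}}}$ and accepts when ${\displaystyle xU_{2}<a}$.

```python
import math
import random

def marsaglia_tail(a, rng):
    """One standard-normal draw conditioned on X > a (requires a > 0),
    via Marsaglia's (1964) acceptance-rejection tail method."""
    while True:
        u1 = 1.0 - rng.random()          # in (0, 1], keeps log() finite
        u2 = rng.random()
        x = math.sqrt(a * a - 2.0 * math.log(u1))  # Rayleigh-type proposal
        if x * u2 < a:                   # accept with probability a / x
            return x

rng = random.Random(1)
a = 2.0
xs = [marsaglia_tail(a, rng) for _ in range(20_000)]
mean_est = sum(xs) / len(xs)  # should approach phi(2) / (1 - Phi(2)) ~ 2.373
```

The proposal density is proportional to ${\displaystyle x\,e^{-x^{2}/2}}$ on ${\displaystyle [a,\infty )}$, so thinning by ${\displaystyle a/x}$ leaves exactly the normal tail density.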

For more on simulating a draw from the truncated normal distribution, see Robert (1995), Lynch (2007), and Devroye (1986). The msm package in R has a function, rtnorm, that draws from a truncated normal; the truncnorm package in R provides similar functions.

Chopin (2011) proposed an algorithm inspired by the Ziggurat algorithm of Marsaglia and Tsang (1984, 2000), which is usually considered the fastest Gaussian sampler, and is also very close to Ahrens's algorithm (1995). Implementations can be found in C, C++, MATLAB and Python.

Sampling from the multivariate truncated normal distribution is considerably more difficult.[11] Exact or perfect simulation is only feasible in the case of truncation of the normal distribution to a polytope region.[11][12] In more general cases, Damien & Walker (2001) introduce a general methodology for sampling truncated densities within a Gibbs sampling framework. Their algorithm introduces one latent variable and, within a Gibbs sampling framework, it is more computationally efficient than the algorithm of Robert (1995).

## Notes

1. ^ "Lecture 4: Selection" (PDF). web.ist.utl.pt. Instituto Superior Técnico. November 11, 2002. p. 1. Retrieved 14 July 2015.
2. ^ Dowson, D.; Wragg, A. (September 1973). "Maximum-entropy distributions having prescribed first and second moments (Corresp.)". IEEE Transactions on Information Theory. 19 (5): 689–693. doi:10.1109/TIT.1973.1055060. ISSN 1557-9654.
3. ^ Frank Nielsen (2022). "Statistical Divergences between Densities of Truncated Exponential Families with Nested Supports: Duo Bregman and Duo Jensen Divergences". Entropy. 24 (3). MDPI: 421. Bibcode:2022Entrp..24..421N. doi:10.3390/e24030421. PMC 8947456. PMID 35327931.
4. ^ Johnson, Norman Lloyd; Kotz, Samuel; Balakrishnan, N. (1994). Continuous Univariate Distributions. Vol. 1 (2nd ed.). New York: Wiley. Section 10.1. ISBN 0-471-58495-9. OCLC 29428092.
5. ^ Fernandez-de-Cossio-Diaz, Jorge (2017-12-06), TruncatedNormal.jl: Compute mean and variance of the univariate truncated normal distribution (works far from the peak), retrieved 2017-12-06
6. ^ Greene, William H. (2003). Econometric Analysis (5th ed.). Prentice Hall. ISBN 978-0-13-066189-0.
7. ^ Eric Orjebin, https://people.smp.uq.edu.au/YoniNazarathy/teaching_projects/studentWork/EricOrjebin_TruncatedNormalMoments.pdf
8. ^ Kroese, D. P.; Taimre, T.; Botev, Z. I. (2011). Handbook of Monte Carlo methods. John Wiley & Sons.
9. ^ a b Botev, Z. I.; L'Ecuyer, P. (2017). "Simulation from the Normal Distribution Truncated to an Interval in the Tail". 10th EAI International Conference on Performance Evaluation Methodologies and Tools, 25–28 October 2016, Taormina, Italy. ACM. pp. 23–29. doi:10.4108/eai.25-10-2016.2266879. ISBN 978-1-63190-141-6.
10. ^ Marsaglia, George (1964). "Generating a variable from the tail of the normal distribution". Technometrics. 6 (1): 101–102. doi:10.2307/1266749. JSTOR 1266749.
11. ^ a b Botev, Z. I. (2016). "The normal law under linear restrictions: simulation and estimation via minimax tilting". Journal of the Royal Statistical Society, Series B. 79: 125–148. arXiv:1603.04166. doi:10.1111/rssb.12162. S2CID 88515228.
12. ^ Botev, Zdravko & L'Ecuyer, Pierre (2018). "Chapter 8: Simulation from the Tail of the Univariate and Multivariate Normal Distribution". In Puliafito, Antonio (ed.). Systems Modeling: Methodologies and Tools. EAI/Springer Innovations in Communication and Computing. Springer, Cham. pp. 115–132. doi:10.1007/978-3-319-92378-9_8. ISBN 978-3-319-92377-2. S2CID 125554530.
13. ^ Sun, Jingchao; Kong, Maiying; Pal, Subhadip (22 June 2021). "The Modified-Half-Normal distribution: Properties and an efficient sampling scheme". Communications in Statistics - Theory and Methods. 52 (5): 1591–1613. doi:10.1080/03610926.2021.1934700. ISSN 0361-0926. S2CID 237919587.