# Truncated distribution

[Figure: probability density functions of the truncated normal distribution for different sets of parameters. In all cases, a = −10 and b = 10. Black: μ = −8, σ = 2; blue: μ = 0, σ = 2; red: μ = 9, σ = 10; orange: μ = 0, σ = 10.]

In statistics, a truncated distribution is a conditional distribution that results from restricting the domain of some other probability distribution. Truncated distributions arise in practical statistics in cases where the ability to record, or even to know about, occurrences is limited to values which lie above or below a given threshold or within a specified range. For example, if the dates of birth of children in a school are examined, these would typically be subject to truncation relative to those of all children in the area, given that the school accepts only children in a given age range on a specific date. If only a direct approach to the school were used to obtain information, there would be no information about how many children in the locality had dates of birth before or after the school's cut-off dates.

Where sampling is such as to retain knowledge of items that fall outside the required range, without recording the actual values, this is known as censoring, as opposed to the truncation here.

## Definition

The following discussion is in terms of a random variable having a continuous distribution, although the same ideas apply to discrete distributions. Similarly, the discussion assumes that truncation is to a semi-open interval $x\in (a,b]$, but other possibilities can be handled straightforwardly.

Suppose we have a random variable, $X$, that is distributed according to some probability density function $f(x)$, with cumulative distribution function $F(x)$, both of which have infinite support. Suppose we wish to know the probability density of the random variable after restricting its support to lie between two constants, so that the support becomes $(a,b]$. That is to say, suppose we wish to know how $X$ is distributed given $a<X\leq b$.

$f(x\mid a<X\leq b)={\frac {g(x)}{F(b)-F(a)}},$ where $g(x)=f(x)$ for all $a<x\leq b$ and $g(x)=0$ everywhere else. That is, $g(x)=f(x)\cdot I(\{a<x\leq b\})$, where $I$ is the indicator function. Note that the denominator in the truncated distribution is constant with respect to $x$.

Notice that $f(x\mid a<X\leq b)$ is in fact a density:

$\int _{a}^{b}f(x\mid a<X\leq b)\,dx={\frac {1}{F(b)-F(a)}}\int _{a}^{b}f(x)\,dx=1$.
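As a sanity check, the defining formula can be verified numerically. The sketch below uses a standard normal with illustrative truncation points a = −1 and b = 2 (none of these values come from the text above); it builds the truncated density from $f$ and $F$ and confirms that it integrates to 1:

```python
import math

def norm_pdf(x, mu=0.0, sigma=1.0):
    # Normal density f(x)
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(x, mu=0.0, sigma=1.0):
    # Normal cumulative distribution function F(x)
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def truncated_pdf(x, a, b, mu=0.0, sigma=1.0):
    # f(x | a < X <= b) = g(x) / (F(b) - F(a)), with g(x) = 0 outside (a, b]
    if not (a < x <= b):
        return 0.0
    return norm_pdf(x, mu, sigma) / (norm_cdf(b, mu, sigma) - norm_cdf(a, mu, sigma))

# Midpoint-rule check that the truncated density integrates to 1 over (a, b]
a, b = -1.0, 2.0          # illustrative truncation points
n = 100_000
h = (b - a) / n
total = sum(truncated_pdf(a + (i + 0.5) * h, a, b) for i in range(n)) * h
print(round(total, 6))    # ≈ 1.0
```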

Truncated distributions need not have parts removed from both the top and the bottom. A truncated distribution where just the bottom of the distribution has been removed is as follows:

$f(x\mid X>y)={\frac {g(x)}{1-F(y)}},$ where $g(x)=f(x)$ for all $x>y$ and $g(x)=0$ everywhere else, and $F(x)$ is the cumulative distribution function.

A truncated distribution where the top of the distribution has been removed is as follows:

$f(x|X\leq y)={\frac {g(x)}{F(y)}}$ where $g(x)=f(x)$ for all $x\leq y$ and $g(x)=0$ everywhere else, and $F(x)$ is the cumulative distribution function.
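As a concrete illustration of one-sided truncation, consider an exponential distribution (an illustrative choice, not one made in the text above). Truncating it from below at $y$ reproduces the original density shifted to start at $y$, which is the memorylessness property:

```python
import math

lam = 1.5   # exponential rate (illustrative value)
y = 0.8     # lower truncation point (illustrative value)

def f(x):            # original exponential density
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def F(x):            # original cumulative distribution function
    return 1 - math.exp(-lam * x) if x >= 0 else 0.0

def f_lower(x):      # f(x | X > y) = g(x) / (1 - F(y))
    return f(x) / (1 - F(y)) if x > y else 0.0

# Memorylessness: the lower-truncated exponential equals the original
# density shifted to start at y.
x = 2.0
print(abs(f_lower(x) - f(x - y)) < 1e-12)  # True
```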

## Expectation of truncated random variable

Suppose we wish to find the expected value of a random variable distributed according to the density $f(x)$ and a cumulative distribution of $F(x)$ given that the random variable, $X$ , is greater than some known value $y$ . The expectation of a truncated random variable is thus:

$E(X\mid X>y)={\frac {\int _{y}^{\infty }xg(x)\,dx}{1-F(y)}},$ where again $g(x)=f(x)$ for all $x>y$ and $g(x)=0$ everywhere else.
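This formula can be checked numerically. For a normal distribution the truncated mean also has the well-known closed form $E(X\mid X>y)=\mu +\sigma \,\varphi (\alpha )/(1-\Phi (\alpha ))$ with $\alpha =(y-\mu )/\sigma$ (the inverse Mills ratio); the sketch below, with illustrative parameter values, compares that closed form against direct integration of the defining formula:

```python
import math

def phi(z):  # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):  # standard normal cumulative distribution function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma, y = 1.0, 2.0, 0.5   # illustrative parameters

# Closed form for the normal case
alpha = (y - mu) / sigma
closed = mu + sigma * phi(alpha) / (1 - Phi(alpha))

# Direct midpoint-rule integration of the defining formula,
# truncating the upper tail far out where the mass is negligible
n, upper = 200_000, mu + 12 * sigma
h = (upper - y) / n
num = sum((y + (i + 0.5) * h) * phi(((y + (i + 0.5) * h) - mu) / sigma) / sigma
          for i in range(n)) * h
expected = num / (1 - Phi(alpha))
print(abs(expected - closed) < 1e-5)  # True
```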

Letting $a$ and $b$ be the lower and upper limits respectively of support for the original density function $f$ (which we assume is continuous), properties of $E(u(X)|X>y)$ , where $u$ is some continuous function with a continuous derivative, include:

1. $\lim _{y\to a}E(u(X)\mid X>y)=E(u(X))$
2. $\lim _{y\to b}E(u(X)\mid X>y)=u(b)$
3. ${\frac {\partial }{\partial y}}[E(u(X)\mid X>y)]={\frac {f(y)}{1-F(y)}}[E(u(X)\mid X>y)-u(y)]$ and ${\frac {\partial }{\partial y}}[E(u(X)\mid X<y)]={\frac {f(y)}{F(y)}}[u(y)-E(u(X)\mid X<y)]$
4. $\lim _{y\to a}{\frac {\partial }{\partial y}}[E(u(X)\mid X>y)]=f(a)[E(u(X))-u(a)]$
5. $\lim _{y\to b}{\frac {\partial }{\partial y}}[E(u(X)\mid X>y)]={\frac {1}{2}}u'(b)$

These hold provided that the limits exist, that is: $\lim _{y\to c}u'(y)=u'(c)$, $\lim _{y\to c}u(y)=u(c)$ and $\lim _{y\to c}f(y)=f(c)$, where $c$ represents either $a$ or $b$.
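The derivative identity can be verified in a case where everything is available in closed form. For $X\sim \mathrm {Uniform} (0,1)$ and $u(x)=x^{2}$ (an illustrative choice), $E(u(X)\mid X>y)=(1+y+y^{2})/3$, so the identity can be checked against a finite-difference derivative:

```python
# Check the derivative property for X ~ Uniform(0, 1) with u(x) = x^2.
# Here E(X^2 | X > y) = (1 + y + y^2)/3 in closed form, so both sides
# of the identity can be compared directly.

def E_trunc(y):
    # E(X^2 | X > y) = (1 - y^3) / (3 (1 - y)) = (1 + y + y^2) / 3
    return (1 + y + y * y) / 3

y, eps = 0.4, 1e-6
lhs = (E_trunc(y + eps) - E_trunc(y - eps)) / (2 * eps)   # numerical derivative
f_y, F_y = 1.0, y             # uniform density and CDF on (0, 1)
rhs = f_y / (1 - F_y) * (E_trunc(y) - y * y)              # right-hand side
print(abs(lhs - rhs) < 1e-6)  # True
```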

## Examples

The truncated normal distribution is an important example.

The Tobit model employs truncated distributions. Other examples include the binomial distribution truncated at x = 0 and the Poisson distribution truncated at x = 0.

## Random truncation

Suppose we have the following setup: a truncation value, $t$, is selected at random from a density, $g(t)$, but this value is not observed. Then a value, $x$, is selected at random from the truncated distribution, $f(x|t)=Tr(x)$ . Suppose we observe $x$ and wish to update our belief about the density of $t$ given the observation.

First, by definition:

$f(x)=\int _{x}^{\infty }f(x\mid t)g(t)\,dt,$ and

$F(a)=\int _{-\infty }^{a}\left[\int _{x}^{\infty }f(x\mid t)g(t)\,dt\right]dx.$

Notice that $t$ must be greater than $x$, hence when we integrate over $t$, we set a lower bound of $x$. The functions $f(x)$ and $F(x)$ are the unconditional density and unconditional cumulative distribution function, respectively.

By Bayes' rule,

$g(t|x)={\frac {f(x|t)g(t)}{f(x)}},$ which expands to

$g(t|x)={\frac {f(x|t)g(t)}{\int _{x}^{\infty }f(x|t)g(t)dt}}.$

### Two uniform distributions (example)

Suppose we know that $t$ is uniformly distributed on $[0,T]$ and that $x\mid t$ is uniformly distributed on $[0,t]$. Let $g(t)$ and $f(x\mid t)$ be the densities of $t$ and of $x$ given $t$, respectively. Suppose we observe a value of $x$ and wish to know the distribution of $t$ given that value of $x$.

$g(t|x)={\frac {f(x|t)g(t)}{f(x)}}={\frac {1}{t(\ln(T)-\ln(x))}}\quad {\text{for }}x<t<T,$

since $f(x)=\int _{x}^{T}{\frac {1}{t}}\cdot {\frac {1}{T}}\,dt={\frac {\ln(T)-\ln(x)}{T}}$.
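The resulting posterior can be checked numerically: integrating $g(t\mid x)$ over $(x,T)$ should give 1. A minimal sketch with illustrative values $T=10$ and $x=2$:

```python
import math

T, x = 10.0, 2.0   # illustrative values with 0 < x < T

def g_post(t):
    # Posterior g(t | x) = 1 / (t * (ln T - ln x)) for t in (x, T)
    return 1.0 / (t * (math.log(T) - math.log(x))) if x < t < T else 0.0

# Midpoint-rule check:
# int_x^T dt / (t * (ln T - ln x)) = (ln T - ln x) / (ln T - ln x) = 1
n = 100_000
h = (T - x) / n
total = sum(g_post(x + (i + 0.5) * h) for i in range(n)) * h
print(round(total, 6))  # ≈ 1.0
```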