# Probability integral transform

In probability theory, the probability integral transform (also known as universality of the uniform) is the result that data values modeled as random variables from any given continuous distribution can be converted to random variables having a standard uniform distribution.[1] This holds exactly provided that the distribution being used is the true distribution of the random variables; if the distribution is one fitted to the data, the result will hold approximately in large samples.

The result is sometimes modified or extended so that the result of the transformation is a standard distribution other than the uniform distribution, such as the exponential distribution.

The transform was introduced by Ronald Fisher in his 1932 edition of the book Statistical Methods for Research Workers.[2]

## Applications

One use for the probability integral transform in statistical data analysis is to provide the basis for testing whether a set of observations can reasonably be modelled as arising from a specified distribution. Specifically, the probability integral transform is applied to construct an equivalent set of values, and a test is then made of whether a uniform distribution is appropriate for the constructed dataset. Examples of this are P–P plots and Kolmogorov–Smirnov tests.
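As a minimal sketch of this testing idea (the Exp(1) model and sample size are illustrative assumptions, not from the source): transform the observations through the hypothesized CDF and compare the result against a uniform distribution with a hand-rolled one-sample Kolmogorov–Smirnov statistic.

```python
import math
import random

random.seed(0)

# Hypothetical setup: data drawn from an Exp(1) distribution, which is also
# the model we wish to test.  Applying its CDF F(x) = 1 - exp(-x) to each
# observation should yield Uniform(0, 1) values if the model is correct.
data = [random.expovariate(1.0) for _ in range(1000)]
u = sorted(1.0 - math.exp(-x) for x in data)

# One-sample Kolmogorov-Smirnov statistic against Uniform(0, 1):
# the largest gap between the empirical CDF and the uniform CDF.
n = len(u)
d = max(max((i + 1) / n - ui, ui - i / n) for i, ui in enumerate(u))

# A small D (roughly below 1.36 / sqrt(n) ≈ 0.043 at the 5% level)
# is consistent with the hypothesized distribution.
print(round(d, 3))
```

In practice a library routine such as a packaged KS test would be used; the explicit statistic above just makes the role of the probability integral transform visible.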

A second use for the transformation is in the theory related to copulas which are a means of both defining and working with distributions for statistically dependent multivariate data. Here the problem of defining or manipulating a joint probability distribution for a set of random variables is simplified or reduced in apparent complexity by applying the probability integral transform to each of the components and then working with a joint distribution for which the marginal variables have uniform distributions.

A third use is based on applying the inverse of the probability integral transform to convert random variables from a uniform distribution to have a selected distribution: this is known as inverse transform sampling.

## Statement

Suppose that a random variable ${\displaystyle X}$ has a continuous distribution for which the cumulative distribution function (CDF) is ${\displaystyle F_{X}.}$ Then the random variable ${\displaystyle Y}$ defined as

${\displaystyle Y:=F_{X}(X)\,,}$

has a standard uniform distribution.

Equivalently, if ${\displaystyle \mu }$ is the uniform measure on ${\displaystyle [0,1]}$, the distribution of ${\displaystyle X}$ on ${\displaystyle \mathbb {R} }$ is the pushforward of ${\displaystyle \mu }$ under the quantile function ${\displaystyle F_{X}^{-1}}$.

## Proof

Given any continuous random variable ${\displaystyle X}$, define ${\displaystyle Y=F_{X}(X)}$. Given ${\displaystyle y\in [0,1]}$, if ${\displaystyle F_{X}^{-1}(y)}$ exists (i.e., if there exists a unique ${\displaystyle x}$ such that ${\displaystyle F_{X}(x)=y}$), then:

${\displaystyle {\begin{aligned}F_{Y}(y)&=\operatorname {P} (Y\leq y)\\&=\operatorname {P} (F_{X}(X)\leq y)\\&=\operatorname {P} (X\leq F_{X}^{-1}(y))\\&=F_{X}(F_{X}^{-1}(y))\\&=y\end{aligned}}}$

If ${\displaystyle F_{X}^{-1}(y)}$ does not exist, then it can be replaced in this proof by the function ${\displaystyle \chi }$, where we define ${\displaystyle \chi (0)=-\infty }$, ${\displaystyle \chi (1)=\infty }$, and ${\displaystyle \chi (y)\equiv \inf\{x:F_{X}(x)\geq y\}}$ for ${\displaystyle y\in (0,1)}$, with the same result that ${\displaystyle F_{Y}(y)=y}$. Thus, ${\displaystyle F_{Y}}$ is just the CDF of a ${\displaystyle \mathrm {Uniform} (0,1)}$ random variable, so that ${\displaystyle Y}$ has a uniform distribution on the interval ${\displaystyle [0,1]}$.

## Examples

For a first, illustrative example, let ${\displaystyle X}$ be a random variable with a standard normal distribution ${\displaystyle {\mathcal {N}}(0,1)}$. Then its CDF is

${\displaystyle \Phi (x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}{\rm {e}}^{-t^{2}/2}\,{\rm {d}}t={\frac {1}{2}}{\Big [}\,1+\operatorname {erf} {\Big (}{\frac {x}{\sqrt {2}}}{\Big )}\,{\Big ]},\quad x\in \mathbb {R} ,\,}$

where ${\displaystyle \operatorname {erf} }$ is the error function. Then the new random variable ${\displaystyle Y}$, defined by ${\displaystyle Y:=\Phi (X)}$, is uniformly distributed on ${\displaystyle [0,1]}$.
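This example can be checked empirically with a short sketch (the sample size is an arbitrary assumption): draw standard normal variates, apply ${\displaystyle \Phi }$ via the error function, and verify that the results have the mean ${\displaystyle 1/2}$ and variance ${\displaystyle 1/12}$ of a Uniform(0, 1) variable.

```python
import math
import random

random.seed(0)

def phi(x):
    """Standard normal CDF, written in terms of the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
ys = [phi(x) for x in xs]  # Y = Phi(X) should be Uniform(0, 1)

# Uniform(0, 1) has mean 1/2 and variance 1/12 ≈ 0.0833.
mean = sum(ys) / len(ys)
var = sum((y - mean) ** 2 for y in ys) / len(ys)
print(round(mean, 2), round(var, 3))  # near 0.5 and 0.083
```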

As a second example, if ${\displaystyle X}$ has an exponential distribution with unit mean, then its CDF is

${\displaystyle F(x)=1-\exp(-x),}$

and the immediate result of the probability integral transform is that

${\displaystyle Y=1-\exp(-X)}$

has a uniform distribution on ${\displaystyle [0,1]}$. Moreover, since ${\displaystyle Z=\exp(-X)=1-Y}$ and the standard uniform distribution is symmetric about ${\displaystyle 1/2}$,

${\displaystyle Z=\exp(-X)}$

also has a uniform distribution.
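Both claims can be verified numerically with a minimal sketch (sample size assumed for illustration): the transformed values ${\displaystyle Y}$ and their reflections ${\displaystyle Z=1-Y}$ should each behave like Uniform(0, 1) draws.

```python
import math
import random

random.seed(0)

xs = [random.expovariate(1.0) for _ in range(100_000)]
ys = [1.0 - math.exp(-x) for x in xs]  # Y = F(X), uniform by the transform
zs = [math.exp(-x) for x in xs]        # Z = 1 - Y, uniform by symmetry

# Each should have a sample mean near 1/2, and Z is exactly 1 - Y pointwise.
print(round(sum(ys) / len(ys), 2), round(sum(zs) / len(zs), 2))
```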