# User:SRevel/sandbox


I'm copying & pasting article content in here, so that I can clip bits I don't need, and add reminder notes (relevant to the SOA P-exam) or personal notes (scattered pedantry).

# Encoding a random variable (p.d.f., c.d.f., chi.f., m.g.f.)

The most natural way to describe a discrete random variable is to give its probability distribution function ${\displaystyle p_{X}:\mathbb {R} \rightarrow \mathbb {R} ^{+}}$, where ${\displaystyle p(x)}$ reports the probability of the event ${\displaystyle X=x}$. To describe a wider class of random variables, we interpret this p(x) as specifying the atoms of a finite real measure ${\displaystyle dp(x)}$ (on whatever space supports it, usually ${\displaystyle \mathbb {R} }$).

A complete description of a random variable ${\displaystyle X}$ may be given in the form of a cumulative distribution function, characteristic function, or moment-generating function, defined by

${\displaystyle F_{X}(x)=\int _{-\infty }^{x}p_{X}(u)\,du,}$
${\displaystyle \chi _{X}(t)=\mathbf {E} \left[e^{itX}\right],}$
${\displaystyle M_{X}(t)=\mathbf {E} \left[e^{tX}\right].}$

We drop the subscript as soon as we're sure Prof. Renalis is not looking.
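
A quick numerical sanity check I added (not from the article; the Exponential rate, evaluation points, and sample size are my own arbitrary choices): the empirical c.d.f., characteristic function, and m.g.f. of a simulated sample should match their closed forms.

```python
# Personal sanity check: for an Exponential(lam) sample, compare the empirical
# c.d.f., characteristic function and m.g.f. at a few points with closed forms.
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                      # arbitrary rate parameter
x = rng.exponential(1 / lam, 200_000)

t = 0.7                        # any t < lam keeps the m.g.f. finite
print(np.mean(x <= 1.0), 1 - np.exp(-lam * 1.0))            # F_X(1)
print(np.mean(np.exp(1j * t * x)), 1 / (1 - 1j * t / lam))  # chi_X(t)
print(np.mean(np.exp(t * x)), 1 / (1 - t / lam))            # M_X(t)
```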

# Special discrete random variables

## Binomial and Negative Binomial

Formally, 'trials' are i.i.d. binary random variables, with p.d.f. ${\displaystyle d\mathbb {P} \left[X=x\right]=(1-p)\delta _{0}+p\delta _{1}}$, so that ${\displaystyle p}$ is the probability of a success (${\displaystyle X=1}$).

Binomial: Given ${\displaystyle n}$ many trials, the probability of obtaining exactly ${\displaystyle k}$ successes is

${\displaystyle \mathbb {P} \left[X=k\right]={\binom {n}{k}}p^{k}(1-p)^{n-k}.}$

(It's the probability of obtaining k successes and then n-k failures, times the number of ways to rearrange where those k successes and n-k failures occur.)

A binomial rv has mean ${\displaystyle np}$ and variance ${\displaystyle np(1-p)}$.
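
A minimal numerical check of the p.m.f., mean, and variance (my own; n, p, k, and the seed are arbitrary choices):

```python
# Check the binomial p.m.f., mean and variance against a simulation.
import math
import numpy as np

n, p, k = 10, 0.3, 4
pmf = math.comb(n, k) * p**k * (1 - p)**(n - k)

rng = np.random.default_rng(1)
draws = rng.binomial(n, p, 500_000)
print(pmf, np.mean(draws == k))        # P[X = k]
print(n * p, draws.mean())             # mean np
print(n * p * (1 - p), draws.var())    # variance np(1-p)
```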

Given a required threshold of ${\displaystyle k}$ successes, the probability of needing exactly ${\displaystyle n}$ trials is

${\displaystyle {\binom {n-1}{k-1}}p^{k}(1-p)^{n-k}.}$

(It's like the binomial, except we can't rearrange the final success.)
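
A simulation check of this negative-binomial probability (my own illustration; p, k, n, and the 50-trial horizon are arbitrary):

```python
# Probability that exactly n trials are needed for k successes,
# checked by simulating runs of Bernoulli(p) trials.
import math
import numpy as np

p, k, n = 0.3, 3, 7
prob = math.comb(n - 1, k - 1) * p**k * (1 - p)**(n - k)

rng = np.random.default_rng(2)
trials = rng.random((200_000, 50)) < p          # 50 trials per run is plenty here
# index of the k-th success in each run (runs that never reach k are negligible)
needed = (trials.cumsum(axis=1) == k).argmax(axis=1) + 1
print(prob, np.mean(needed == n))
```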

# Stuff I haven't copypasted into place yet

There are particularly simple results for the moment-generating functions of distributions defined by the weighted sums of random variables.

The moment-generating function does not always exist even for real-valued arguments, unlike the characteristic function. There are relations between the behavior of the moment-generating function of a distribution and properties of the distribution, such as the existence of moments.

## Definition

In probability theory and statistics, the moment-generating function of a random variable X is

${\displaystyle M_{X}(t):=E\left[e^{tX}\right],\quad t\in \mathbb {R} ,}$

wherever this expectation exists.

${\displaystyle M_{X}(0)}$ always exists and is equal to 1.

A key problem with moment-generating functions is that moments and the moment-generating function may not exist, as the integrals need not converge absolutely. By contrast, the characteristic function always exists (because it is the integral of a bounded function on a space of finite measure), and thus may be used instead.

More generally, where ${\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{n})}$ is an n-dimensional random vector, one uses ${\displaystyle \mathbf {t} \cdot \mathbf {X} =\mathbf {t} ^{\mathrm {T} }\mathbf {X} }$ instead of tX:

${\displaystyle M_{\mathbf {X} }(\mathbf {t} ):=E\left(e^{\mathbf {t} ^{\mathrm {T} }\mathbf {X} }\right).}$

The reason for defining this function is that it can be used to find all the moments of the distribution.[1] The series expansion of ${\displaystyle e^{tX}}$ is:

${\displaystyle e^{tX}=1+tX+{\frac {t^{2}X^{2}}{2!}}+{\frac {t^{3}X^{3}}{3!}}+\cdots +{\frac {t^{n}X^{n}}{n!}}+\cdots .}$

Hence:

${\displaystyle M_{X}(t)=E(e^{tX})=1+tm_{1}+{\frac {t^{2}m_{2}}{2!}}+{\frac {t^{3}m_{3}}{3!}}+\cdots +{\frac {t^{n}m_{n}}{n!}}+\cdots ,}$

where ${\displaystyle m_{n}}$ is the nth moment.

If we differentiate ${\displaystyle M_{X}(t)}$ i times with respect to t and then set t = 0, we obtain the ith moment about the origin, ${\displaystyle m_{i}}$.
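
A small illustration of the series (my own choice of example: a standard normal, whose odd moments vanish and whose even moments are 1, 3, 15, ...); the truncated series should approximate ${\displaystyle M_{X}(t)=e^{t^{2}/2}}$ for small t:

```python
# Truncated moment series for N(0, 1) versus the closed-form m.g.f. exp(t^2/2).
import math

moments = [0, 1, 0, 3, 0, 15]          # m_1 .. m_6 of a standard normal
t = 0.5
series = 1 + sum(t**(n + 1) * m / math.factorial(n + 1)
                 for n, m in enumerate(moments))
print(series, math.exp(t**2 / 2))
```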

## Examples

| Distribution | Moment-generating function ${\displaystyle M_{X}(t)}$ | Characteristic function ${\displaystyle \varphi (t)}$ |
|---|---|---|
| Bernoulli ${\displaystyle P(X=1)=p}$ | ${\displaystyle 1-p+pe^{t}}$ | ${\displaystyle 1-p+pe^{it}}$ |
| Geometric ${\displaystyle (1-p)^{k-1}p}$ | ${\displaystyle {\frac {pe^{t}}{1-(1-p)e^{t}}}}$, for ${\displaystyle t<-\ln(1-p)}$ | ${\displaystyle {\frac {pe^{it}}{1-(1-p)e^{it}}}}$ |
| Binomial ${\displaystyle B(n,p)}$ | ${\displaystyle (1-p+pe^{t})^{n}}$ | ${\displaystyle (1-p+pe^{it})^{n}}$ |
| Poisson ${\displaystyle \mathrm {Pois} (\lambda )}$ | ${\displaystyle e^{\lambda (e^{t}-1)}}$ | ${\displaystyle e^{\lambda (e^{it}-1)}}$ |
| Uniform ${\displaystyle U(a,b)}$ | ${\displaystyle {\frac {e^{tb}-e^{ta}}{t(b-a)}}}$ | ${\displaystyle {\frac {e^{itb}-e^{ita}}{it(b-a)}}}$ |
| Normal ${\displaystyle N(\mu ,\sigma ^{2})}$ | ${\displaystyle e^{t\mu +{\frac {1}{2}}\sigma ^{2}t^{2}}}$ | ${\displaystyle e^{it\mu -{\frac {1}{2}}\sigma ^{2}t^{2}}}$ |
| Chi-squared ${\displaystyle \chi _{k}^{2}}$ | ${\displaystyle (1-2t)^{-k/2}}$ | ${\displaystyle (1-2it)^{-k/2}}$ |
| Gamma ${\displaystyle \Gamma (k,\theta )}$ | ${\displaystyle (1-t\theta )^{-k}}$ | ${\displaystyle (1-it\theta )^{-k}}$ |
| Exponential ${\displaystyle \mathrm {Exp} (\lambda )}$ | ${\displaystyle (1-t\lambda ^{-1})^{-1}}$ | ${\displaystyle (1-it\lambda ^{-1})^{-1}}$ |
| Multivariate normal ${\displaystyle N(\mu ,\Sigma )}$ | ${\displaystyle e^{t^{\mathrm {T} }\mu +{\frac {1}{2}}t^{\mathrm {T} }\Sigma t}}$ | ${\displaystyle e^{it^{\mathrm {T} }\mu -{\frac {1}{2}}t^{\mathrm {T} }\Sigma t}}$ |
| Degenerate ${\displaystyle \delta _{a}}$ | ${\displaystyle e^{ta}}$ | ${\displaystyle e^{ita}}$ |
| Laplace ${\displaystyle L(\mu ,b)}$ | ${\displaystyle {\frac {e^{t\mu }}{1-b^{2}t^{2}}}}$ | ${\displaystyle {\frac {e^{it\mu }}{1+b^{2}t^{2}}}}$ |
| Cauchy ${\displaystyle \mathrm {Cauchy} (\mu ,\theta )}$ | not defined | ${\displaystyle e^{it\mu -\theta \vert t\vert }}$ |
| Negative Binomial ${\displaystyle \mathrm {NB} (r,p)}$ | ${\displaystyle {\frac {((1-p)e^{t})^{r}}{(1-pe^{t})^{r}}}}$ | ${\displaystyle {\frac {((1-p)e^{it})^{r}}{(1-pe^{it})^{r}}}}$ |

So the characteristic function is a Wick rotation of the moment-generating function ${\displaystyle M_{X}(t)}$.
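
A spot check of one table row and of the Wick-rotation remark (my own; the Poisson row, with λ, t, and sample size chosen arbitrarily):

```python
# Empirical E[e^{tX}] and E[e^{itX}] for Poisson(lam) versus the table entries.
import numpy as np

rng = np.random.default_rng(3)
lam, t = 1.5, 0.4
x = rng.poisson(lam, 500_000)
print(np.mean(np.exp(t * x)), np.exp(lam * (np.exp(t) - 1)))            # M_X(t)
print(np.mean(np.exp(1j * t * x)), np.exp(lam * (np.exp(1j * t) - 1)))  # phi(t)
```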

## Calculation

The moment-generating function is given by the Riemann–Stieltjes integral

${\displaystyle M_{X}(t)=\int _{-\infty }^{\infty }e^{tx}\,dF(x)}$

where F is the cumulative distribution function.

If X has a continuous probability density function f(x), then ${\displaystyle M_{X}(-t)}$ is the two-sided Laplace transform of f(x).

${\displaystyle {\begin{aligned}M_{X}(t)&=\int _{-\infty }^{\infty }e^{tx}f(x)\,dx\\&=\int _{-\infty }^{\infty }\left(1+tx+{\frac {t^{2}x^{2}}{2!}}+\cdots +{\frac {t^{n}x^{n}}{n!}}+\cdots \right)f(x)\,dx\\&=1+tm_{1}+{\frac {t^{2}m_{2}}{2!}}+\cdots +{\frac {t^{n}m_{n}}{n!}}+\cdots ,\end{aligned}}}$

where ${\displaystyle m_{n}}$ is the nth moment.
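
A sketch of the calculation (my own; the normal parameters and t are arbitrary), evaluating the integral numerically and comparing with the closed form from the table:

```python
# Numerically evaluate the m.g.f. integral for N(mu, sigma^2) with scipy's quad
# and compare with the closed form exp(t*mu + sigma^2 * t^2 / 2).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma, t = 1.0, 2.0, 0.3
val, _ = quad(lambda x: np.exp(t * x) * norm.pdf(x, mu, sigma), -np.inf, np.inf)
print(val, np.exp(t * mu + 0.5 * sigma**2 * t**2))
```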

### Sum of independent random variables

If X1, X2, ..., Xn is a sequence of independent (and not necessarily identically distributed) random variables, and

${\displaystyle S_{n}=\sum _{i=1}^{n}a_{i}X_{i},}$

where the ${\displaystyle a_{i}}$ are constants, then the probability density function of ${\displaystyle S_{n}}$ is the convolution of the probability density functions of the scaled variables ${\displaystyle a_{i}X_{i}}$, and the moment-generating function of ${\displaystyle S_{n}}$ is given by

${\displaystyle M_{S_{n}}(t)=M_{X_{1}}(a_{1}t)M_{X_{2}}(a_{2}t)\cdots M_{X_{n}}(a_{n}t)\,.}$
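
A simulation check of the product formula (my own; two independent exponentials with arbitrary rates, weights, and t):

```python
# Empirical M_S(t) for S = a1*X1 + a2*X2 versus M_X1(a1*t) * M_X2(a2*t).
import numpy as np

rng = np.random.default_rng(4)
lam1, lam2, a1, a2, t = 2.0, 3.0, 0.5, 1.5, 0.4
x1 = rng.exponential(1 / lam1, 500_000)
x2 = rng.exponential(1 / lam2, 500_000)
s = a1 * x1 + a2 * x2

def exp_mgf(u, lam):
    """M.g.f. of Exponential(lam), valid for u < lam."""
    return 1 / (1 - u / lam)

print(np.mean(np.exp(t * s)), exp_mgf(a1 * t, lam1) * exp_mgf(a2 * t, lam2))
```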

### Vector-valued random variables

For vector-valued random variables X with real components, the moment-generating function is given by

${\displaystyle M_{X}(t)=E\left(e^{\langle t,X\rangle }\right)}$

where t is a vector and ${\displaystyle \langle \cdot ,\cdot \rangle }$ is the dot product.
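
A check of the vector case (my own; a bivariate normal with arbitrary mean, covariance, and t), comparing the empirical ${\displaystyle E\left(e^{\langle t,X\rangle }\right)}$ with the multivariate-normal entry from the table:

```python
# Monte Carlo estimate of E[exp(<t, X>)] for a bivariate normal
# versus exp(t.mu + t.Sigma.t / 2).
import numpy as np

rng = np.random.default_rng(5)
mu = np.array([0.5, -1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
t = np.array([0.2, 0.1])

X = rng.multivariate_normal(mu, Sigma, 500_000)
print(np.mean(np.exp(X @ t)), np.exp(t @ mu + 0.5 * t @ Sigma @ t))
```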

## Important properties

The most important property of the moment-generating function is that if two distributions have the same moment-generating function (finite on a common open interval around t = 0), then they are identical at all points. That is, if for all values of t,

${\displaystyle M_{X}(t)=M_{Y}(t),\,}$

then

${\displaystyle F_{X}(x)=F_{Y}(x)\,}$

for all values of x (or equivalently X and Y have the same distribution). This statement is not equivalent to "if two distributions have the same moments, then they are identical at all points", because in some cases the moments exist and yet the moment-generating function does not, since the limit

${\displaystyle \lim _{n\rightarrow \infty }\sum _{i=0}^{n}{\frac {t^{i}m_{i}}{i!}}}$

does not exist. This happens for the lognormal distribution.
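
A small illustration of the lognormal caveat (my own; for the standard lognormal ${\displaystyle X=e^{Z}}$, ${\displaystyle Z\sim N(0,1)}$, the moments are ${\displaystyle m_{n}=e^{n^{2}/2}}$): the series terms grow without bound for any t > 0, so the sum cannot converge.

```python
# Terms t^n * m_n / n! of the moment series for the standard lognormal:
# they blow up instead of tending to zero, so the series (and m.g.f.) diverge.
import math

t = 0.1
for n in [1, 5, 10, 20, 40]:
    term = t**n * math.exp(n**2 / 2) / math.factorial(n)
    print(n, term)
```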

### Calculations of moments

The moment-generating function is so called because if it exists on an open interval around t = 0, then it is the exponential generating function of the moments of the probability distribution:

${\displaystyle E\left(X^{n}\right)=M_{X}^{(n)}(0)={\frac {d^{n}M_{X}}{dt^{n}}}(0).}$

Here n is a nonnegative integer.
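
A symbolic check of this formula (my own choice of example: the Gamma(k, θ) m.g.f. from the table, whose nth moment is ${\displaystyle \theta ^{n}k(k+1)\cdots (k+n-1)}$):

```python
# Differentiate the Gamma(k, theta) m.g.f. n times at t = 0 and compare
# with the rising-factorial formula for its moments.
import sympy as sp

t, theta, k = sp.symbols('t theta k', positive=True)
M = (1 - t * theta) ** (-k)
for n in range(1, 4):
    deriv = sp.diff(M, t, n).subs(t, 0)
    expected = theta**n
    for j in range(n):
        expected = expected * (k + j)      # k(k+1)...(k+n-1)
    print(n, sp.simplify(deriv - expected))  # prints 0 if they agree
```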

## Other properties

Hoeffding's lemma provides a bound on the moment-generating function in the case of a zero-mean, bounded random variable.
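
A numerical check of the standard bound ${\displaystyle E\left[e^{tX}\right]\leq e^{t^{2}(b-a)^{2}/8}}$ for a zero-mean X supported on [a, b] (my own; a centred Bernoulli is the test case, p and t arbitrary):

```python
# Exact E[e^{tX}] for a centred Bernoulli versus the Hoeffding-lemma bound.
import numpy as np

p, t = 0.3, 1.2
values = np.array([1 - p, -p])            # X in {1-p, -p}: zero mean, a = -p, b = 1-p
probs = np.array([p, 1 - p])
mgf = np.sum(probs * np.exp(t * values))  # exact E[e^{tX}]
bound = np.exp(t**2 * (values.max() - values.min())**2 / 8)
print(mgf, bound)                         # mgf should not exceed bound
```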

## Relation to other functions

Related to the moment-generating function are a number of other transforms that are common in probability theory:

characteristic function
The characteristic function ${\displaystyle \varphi _{X}(t)}$ is related to the moment-generating function via ${\displaystyle \varphi _{X}(t)=M_{iX}(t)=M_{X}(it):}$ the characteristic function is the moment-generating function of iX or the moment generating function of X evaluated on the imaginary axis. This function can also be viewed as the Fourier transform of the probability density function, which can therefore be deduced from it by inverse Fourier transform.
cumulant-generating function
The cumulant-generating function is defined as the logarithm of the moment-generating function; some instead define the cumulant-generating function as the logarithm of the characteristic function, while others call this latter the second cumulant-generating function.
probability-generating function
The probability-generating function is defined as ${\displaystyle G(z)=E[z^{X}].\,}$ This immediately implies that ${\displaystyle G(e^{t})=E[e^{tX}]=M_{X}(t).\,}$
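
A symbolic check of the p.g.f. relation (my own; Poisson(λ) as the example, with its p.g.f. ${\displaystyle G(z)=e^{\lambda (z-1)}}$ and the m.g.f. from the table above):

```python
# Verify G(e^t) = M_X(t) for a Poisson(lam).
import sympy as sp

t, lam, z = sp.symbols('t lam z', positive=True)
G = sp.exp(lam * (z - 1))           # Poisson p.g.f.
M = sp.exp(lam * (sp.exp(t) - 1))   # Poisson m.g.f.
print(sp.simplify(G.subs(z, sp.exp(t)) - M))   # prints 0, i.e. G(e^t) = M_X(t)
```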

## References

1. Bulmer, M. G., Principles of Statistics, Dover, 1979, pp. 75–79.
• Casella, George; Berger, Roger. Statistical Inference (2nd ed.). pp. 59–68. ISBN 978-0-534-24312-8.