In probability theory, the probability distribution of the sum of two independent random variables is the convolution of their individual distributions. Many distributions have well-known convolutions; the following is a list of them. Each statement is of the form
\[ \sum_{i=1}^{n} X_i \sim Y \]
where \( X_1, X_2, \ldots, X_n \) are independent random variables (identically distributed whenever the summands' parameters do not depend on \( i \)).
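As a concrete illustration of the convolution statement, here is a minimal numerical sketch, assuming Python with NumPy and SciPy; the rate values and the truncation point are illustrative choices, not from the source. It convolves two truncated Poisson probability mass functions and compares the result with the closed form given in the list below.

import numpy as np
from scipy import stats

# The pmf of a sum of independent discrete variables is the convolution
# of the individual pmfs.  Convolve two truncated Poisson pmfs and
# compare with the closed form Poisson(lam1 + lam2) listed below.
lam1, lam2 = 2.0, 3.0
k = np.arange(60)  # truncation; the tail mass beyond 60 is negligible here

pmf_sum = np.convolve(stats.poisson.pmf(k, lam1), stats.poisson.pmf(k, lam2))
closed_form = stats.poisson.pmf(np.arange(pmf_sum.size), lam1 + lam2)

# Maximum discrepancy is at the level of floating-point/truncation error.
print(np.abs(pmf_sum - closed_form).max())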
Discrete distributions
\[ \sum_{i=1}^{n} \mathrm{Bernoulli}(p) \sim \mathrm{Binomial}(n, p) \]
\[ \sum_{i=1}^{n} \mathrm{Binomial}(n_i, p) \sim \mathrm{Binomial}\left(\sum_{i=1}^{n} n_i,\; p\right) \]
\[ \sum_{i=1}^{n} \mathrm{NegativeBinomial}(n_i, p) \sim \mathrm{NegativeBinomial}\left(\sum_{i=1}^{n} n_i,\; p\right) \]
\[ \sum_{i=1}^{n} \mathrm{Geometric}(p) \sim \mathrm{NegativeBinomial}(n, p) \]
\[ \sum_{i=1}^{n} \mathrm{Poisson}(\lambda_i) \sim \mathrm{Poisson}\left(\sum_{i=1}^{n} \lambda_i\right) \]
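The binomial identity above can also be checked by simulation; a minimal sketch, assuming NumPy and SciPy, with illustrative parameter and sample-size choices:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulation check that a sum of Binomial(n_i, p) variables
# is Binomial(sum n_i, p).
n_i = np.array([3, 5, 7])
p, reps = 0.4, 200_000

samples = rng.binomial(n_i, p, size=(reps, n_i.size)).sum(axis=1)

# Compare empirical frequencies of the sum with the claimed pmf.
support = np.arange(n_i.sum() + 1)
freq = np.bincount(samples, minlength=support.size) / reps
pmf = stats.binom.pmf(support, n_i.sum(), p)
print(np.abs(freq - pmf).max())  # small, and shrinks as reps grows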
Continuous distributions
\[ \sum_{i=1}^{n} \mathrm{Normal}(\mu_i, \sigma_i^2) \sim \mathrm{Normal}\left(\sum_{i=1}^{n} \mu_i,\; \sum_{i=1}^{n} \sigma_i^2\right) \]
\[ \sum_{i=1}^{n} \mathrm{Gamma}(\alpha_i, \beta) \sim \mathrm{Gamma}\left(\sum_{i=1}^{n} \alpha_i,\; \beta\right) \]
\[ \sum_{i=1}^{n} \mathrm{Exponential}(\theta) \sim \mathrm{Gamma}(n, \theta) \]
\[ \sum_{i=1}^{n} \chi^2(r_i) \sim \chi^2\left(\sum_{i=1}^{n} r_i\right) \]
\[ \sum_{i=1}^{r} N^2(0, 1) \sim \chi_r^2 \]
\[ \sum_{i=1}^{n} (X_i - \bar{X})^2 \sim \sigma^2 \chi_{n-1}^2 \qquad \text{where} \quad X_i \sim N(\mu, \sigma^2) \quad \text{and} \quad \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i \]
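The exponential-to-gamma identity lends itself to the same kind of simulation check; a minimal sketch, assuming NumPy and SciPy, and reading \( \theta \) as the scale (mean) parameter to match SciPy's convention (that reading of the article's parametrization is an assumption here):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulation check that a sum of n i.i.d. Exponential(theta) variables
# is Gamma(n, theta), with theta treated as the scale parameter.
n, theta = 4, 2.5
samples = rng.exponential(scale=theta, size=(100_000, n)).sum(axis=1)

# Kolmogorov-Smirnov test against Gamma(shape=n, scale=theta);
# a large p-value is consistent with the closed form.
print(stats.kstest(samples, stats.gamma(a=n, scale=theta).cdf))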
Example proof
There are various ways to prove the above relations. A straightforward technique is to use the moment generating function, which, when it exists, uniquely determines a distribution.
Proof that \( \sum_{i=1}^{n} \mathrm{Bernoulli}(p) \sim \mathrm{Binomial}(n, p) \)
Let
\[ X_i \sim \mathrm{Bernoulli}(p), \qquad 0 < p < 1, \qquad 1 \leq i \leq n, \]
and define
\[ Y = \sum_{i=1}^{n} X_i, \qquad Z \sim \mathrm{Binomial}(n, p). \]
The moment generating function of each \( X_i \) and of \( Z \) is
\[ M_{X_i}(t) = 1 - p + p e^t, \qquad M_Z(t) = \left(1 - p + p e^t\right)^n, \]
where \( t \) is within some neighborhood of zero. Then
\[ M_Y(t) = E\!\left(e^{t \sum_{i=1}^{n} X_i}\right) = E\!\left(\prod_{i=1}^{n} e^{t X_i}\right) = \prod_{i=1}^{n} E\!\left(e^{t X_i}\right) = \prod_{i=1}^{n} \left(1 - p + p e^t\right) = \left(1 - p + p e^t\right)^n = M_Z(t). \]
The expectation of the product equals the product of the expectations because the \( X_i \) are mutually independent.
Since \( Y \) and \( Z \) have the same moment generating function, they must have the same distribution.
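A numerical counterpart of this argument can be written as a minimal sketch, assuming NumPy, with illustrative parameter values: the empirical moment generating function of \( Y \) should match \( (1 - p + p e^t)^n \) for \( t \) near zero.

import numpy as np

rng = np.random.default_rng(3)

# Empirical MGF of Y = X_1 + ... + X_n for i.i.d. Bernoulli(p) draws,
# compared against the closed form M_Z(t) = (1 - p + p*e^t)^n.
n, p, reps = 10, 0.3, 500_000
y = rng.binomial(1, p, size=(reps, n)).sum(axis=1)

for t in (-0.5, 0.0, 0.5):
    empirical = np.exp(t * y).mean()
    closed = (1 - p + p * np.exp(t)) ** n
    print(t, empirical, closed)  # the two columns agree closely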