In probability theory, the probability distribution of the sum of two or more independent random variables is the convolution of their individual distributions. The term is motivated by the fact that the probability mass function or probability density function of a sum of independent random variables is the convolution of their corresponding probability mass functions or probability density functions, respectively. Many well-known distributions have simple convolutions. The following is a list of these convolutions. Each statement is of the form
\sum_{i=1}^{n} X_{i} \sim Y
where X_{1}, X_{2}, \dots, X_{n} are independent and identically distributed random variables. In place of X_{i} and Y, the names of the corresponding distributions and their parameters have been indicated.
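The convolution rule for probability mass functions can be checked directly for the first identity below (a sum of Bernoulli variables is binomial). A minimal NumPy sketch, not part of the original list; the helper `binom_pmf` is illustrative:

```python
import numpy as np
from math import comb

# PMF of Binomial(n, p) on k = 0..n; a Bernoulli(p) is Binomial(1, p).
def binom_pmf(n, p):
    return np.array([comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)])

# The PMF of a sum of independent variables is the convolution of their PMFs:
# convolving three Bernoulli(0.3) PMFs reproduces Binomial(3, 0.3) exactly.
pmf = binom_pmf(1, 0.3)
pmf_sum = np.convolve(np.convolve(pmf, pmf), pmf)
assert np.allclose(pmf_sum, binom_pmf(3, 0.3))
```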
Discrete distributions
\sum_{i=1}^{n} \mathrm{Bernoulli}(p) \sim \mathrm{Binomial}(n, p) \qquad 0 < p < 1, \quad n = 1, 2, \dots
\sum_{i=1}^{n} \mathrm{Binomial}(n_{i}, p) \sim \mathrm{Binomial}\left(\sum_{i=1}^{n} n_{i}, p\right) \qquad 0 < p < 1, \quad n_{i} = 1, 2, \dots
\sum_{i=1}^{n} \mathrm{NegativeBinomial}(n_{i}, p) \sim \mathrm{NegativeBinomial}\left(\sum_{i=1}^{n} n_{i}, p\right) \qquad 0 < p < 1, \quad n_{i} = 1, 2, \dots
\sum_{i=1}^{n} \mathrm{Geometric}(p) \sim \mathrm{NegativeBinomial}(n, p) \qquad 0 < p < 1, \quad n = 1, 2, \dots
\sum_{i=1}^{n} \mathrm{Poisson}(\lambda_{i}) \sim \mathrm{Poisson}\left(\sum_{i=1}^{n} \lambda_{i}\right) \qquad \lambda_{i} > 0
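The Poisson identity can also be verified by discrete convolution: on a truncated support the convolution of two Poisson PMFs matches the PMF of the Poisson with summed rates exactly, term by term. A small NumPy sketch (truncation at kmax = 40 is an illustrative choice; the tail mass there is negligible):

```python
import numpy as np
from math import exp, factorial

# PMF of Poisson(lam) on k = 0..kmax.
def poisson_pmf(lam, kmax):
    return np.array([exp(-lam) * lam**k / factorial(k) for k in range(kmax + 1)])

# Convolving Poisson(2) and Poisson(3) PMFs matches Poisson(5) on the
# shared support 0..40: every term of the convolution sum lies in range.
conv = np.convolve(poisson_pmf(2.0, 40), poisson_pmf(3.0, 40))
assert np.allclose(conv[:41], poisson_pmf(5.0, 40), atol=1e-12)
```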
Continuous distributions
\sum_{i=1}^{n} \mathrm{Normal}(\mu_{i}, \sigma_{i}^{2}) \sim \mathrm{Normal}\left(\sum_{i=1}^{n} \mu_{i}, \sum_{i=1}^{n} \sigma_{i}^{2}\right) \qquad -\infty < \mu_{i} < \infty, \quad \sigma_{i}^{2} > 0
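A Monte Carlo check of the normal case: means add and variances add. A minimal sketch with NumPy; the parameter values, seed, and tolerances are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
mus, sigmas = [1.0, -2.0, 0.5], [1.0, 2.0, 0.5]
# Sum of independent Normal(mu_i, sigma_i^2) draws should be
# Normal(sum mu_i, sum sigma_i^2) = Normal(-0.5, 5.25).
s = sum(rng.normal(m, sd, size=500_000) for m, sd in zip(mus, sigmas))
assert abs(s.mean() - (-0.5)) < 0.03   # statistical tolerance
assert abs(s.var() - 5.25) < 0.1
```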
\sum_{i=1}^{n} \mathrm{Cauchy}(a_{i}, \gamma_{i}) \sim \mathrm{Cauchy}\left(\sum_{i=1}^{n} a_{i}, \sum_{i=1}^{n} \gamma_{i}\right) \qquad -\infty < a_{i} < \infty, \quad \gamma_{i} > 0
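The Cauchy case is notable because the distribution has no mean or variance, so moment checks do not apply; the location and scale of the sum can instead be read off from sample quantiles (Cauchy(a, γ) has median a and quartiles a ± γ). An illustrative NumPy sketch with arbitrary parameters and seed:

```python
import numpy as np

rng = np.random.default_rng(5)
locs, scales = [0.0, 1.0], [1.0, 2.0]
# The sum should be Cauchy(0 + 1, 1 + 2) = Cauchy(1, 3).
s = sum(a + g * rng.standard_cauchy(400_000) for a, g in zip(locs, scales))
q25, q50, q75 = np.quantile(s, [0.25, 0.5, 0.75])
assert abs(q50 - 1.0) < 0.1             # median ~ location a
assert abs((q75 - q25) / 2 - 3.0) < 0.1  # half the IQR ~ scale gamma
```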
\sum_{i=1}^{n} \mathrm{Gamma}(\alpha_{i}, \beta) \sim \mathrm{Gamma}\left(\sum_{i=1}^{n} \alpha_{i}, \beta\right) \qquad \alpha_{i} > 0, \quad \beta > 0
\sum_{i=1}^{n} \mathrm{Exponential}(\theta) \sim \mathrm{Gamma}(n, \theta) \qquad \theta > 0, \quad n = 1, 2, \dots
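The exponential-to-gamma identity can be checked via moments: Gamma(n, θ) with θ as a scale parameter has mean nθ and variance nθ². A NumPy sketch; note the sketch assumes θ is the scale (NumPy's convention), whereas a rate parameterization would invert it:

```python
import numpy as np

rng = np.random.default_rng(7)
n, theta = 5, 2.0  # theta taken as the scale parameter, as in NumPy
# Row sums of n iid Exponential(theta) draws should behave like
# Gamma(n, theta): mean n*theta = 10, variance n*theta^2 = 20.
s = rng.exponential(theta, size=(300_000, n)).sum(axis=1)
assert abs(s.mean() - n * theta) < 0.1
assert abs(s.var() - n * theta**2) < 1.0
```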
\sum_{i=1}^{n} \chi^{2}(r_{i}) \sim \chi^{2}\left(\sum_{i=1}^{n} r_{i}\right) \qquad r_{i} = 1, 2, \dots
\sum_{i=1}^{r} N^{2}(0, 1) \sim \chi_{r}^{2} \qquad r = 1, 2, \dots
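The previous identity (a sum of r squared standard normals is chi-squared with r degrees of freedom) can likewise be checked by moments, since χ²_r has mean r and variance 2r. An illustrative NumPy sketch with r = 4 and an arbitrary seed:

```python
import numpy as np

rng = np.random.default_rng(3)
r = 4
# Row sums of r squared standard normal draws ~ chi^2_r,
# which has mean r = 4 and variance 2r = 8.
z2 = (rng.normal(0.0, 1.0, size=(300_000, r)) ** 2).sum(axis=1)
assert abs(z2.mean() - r) < 0.05
assert abs(z2.var() - 2 * r) < 0.3
```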
\sum_{i=1}^{n} (X_{i} - \bar{X})^{2} \sim \sigma^{2} \chi_{n-1}^{2},
where X_{1}, \dots, X_{n} is a random sample from N(\mu, \sigma^{2}) and \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_{i}.
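The sample-variance statement loses one degree of freedom because deviations are taken from the sample mean rather than μ, so σ²χ²_{n-1} has mean σ²(n − 1), not σ²n. A Monte Carlo sketch with NumPy; sample size, parameters, and tolerances are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu, sigma = 10, 3.0, 2.0
# 200,000 samples of size n from N(mu, sigma^2).
x = rng.normal(mu, sigma, size=(200_000, n))
# Sum of squared deviations from each sample's own mean.
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
# sigma^2 * chi^2_{n-1} has mean sigma^2*(n-1) = 36
# and variance 2*sigma^4*(n-1) = 288.
assert abs(ss.mean() - sigma**2 * (n - 1)) < 0.3
assert abs(ss.var() - 2 * sigma**4 * (n - 1)) < 10.0
```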