Bernoulli distribution

Parameters  0 < p < 1,\; p \in \R
Support     k \in \{0,1\}
PMF         \begin{cases} q = 1-p & \text{for } k = 0 \\ p & \text{for } k = 1 \end{cases}
CDF         \begin{cases} 0 & \text{for } k < 0 \\ q & \text{for } 0 \leq k < 1 \\ 1 & \text{for } k \geq 1 \end{cases}
Mean        p
Median      \begin{cases} 0 & \text{if } q > p \\ 0.5 & \text{if } q = p \\ 1 & \text{if } q < p \end{cases}
Mode        \begin{cases} 0 & \text{if } q > p \\ 0, 1 & \text{if } q = p \\ 1 & \text{if } q < p \end{cases}
Variance    p(1-p) = pq
Skewness    \frac{1-2p}{\sqrt{pq}}
Ex. kurtosis \frac{1-6pq}{pq}
Entropy     -q\ln(q) - p\ln(p)
MGF         q + pe^t
CF          q + pe^{it}
PGF         q + pz
Fisher information \frac{1}{p(1-p)}

In probability theory and statistics, the Bernoulli distribution, named after Swiss scientist Jacob Bernoulli,[1] is the probability distribution of a random variable which takes the value 1 with success probability p and the value 0 with failure probability q = 1 - p. It can be used to represent a coin toss, where 1 and 0 represent "heads" and "tails" (or vice versa), respectively. In particular, an unfair coin has p \neq 0.5.

The Bernoulli distribution is a special case of the two-point distribution, for which the two possible outcomes need not be 0 and 1.


[Figure: plot of the Bernoulli distribution probability mass function]

If X is a random variable with this distribution, we have:

 Pr(X=1) = 1 - Pr(X=0) = 1 - q = p.\!

The probability mass function f of this distribution, over possible outcomes k, is

 f(k;p) = \begin{cases} p & \text{if }k=1, \\[6pt]
1-p & \text {if }k=0.\end{cases}

This can also be expressed as

f(k;p) = p^k (1-p)^{1-k}\!\quad \text{for }k\in\{0,1\}.
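As an illustrative sketch (not part of the original article), the closed-form pmf can be evaluated directly; the function name bernoulli_pmf is our own choice:

```python
def bernoulli_pmf(k, p):
    """Probability mass f(k; p) = p^k * (1 - p)^(1 - k) for k in {0, 1}."""
    if k not in (0, 1):
        raise ValueError("k must be 0 or 1")
    return p ** k * (1 - p) ** (1 - k)

# The closed form agrees with the case-by-case definition:
p = 0.3
assert bernoulli_pmf(1, p) == p        # k = 1 gives p
assert bernoulli_pmf(0, p) == 1 - p    # k = 0 gives q = 1 - p
```

The single expression p^k (1-p)^{1-k} selects p when k = 1 and 1 − p when k = 0, which is exactly the case-by-case definition above.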

The Bernoulli distribution is a special case of the binomial distribution with n = 1.[2]

The kurtosis goes to infinity as p approaches 0 or 1, but for p = 1/2 the two-point distributions, including the Bernoulli distribution, attain a lower excess kurtosis than any other probability distribution, namely −2.

The Bernoulli distributions for 0 \le p \le 1 form an exponential family.

The maximum likelihood estimator of p based on a random sample is the sample mean.
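As a quick numerical check (our own sketch, not from the article, with an arbitrarily chosen true parameter), simulating Bernoulli draws and averaging them recovers p:

```python
import random

random.seed(0)
p_true = 0.7  # assumed true parameter for this illustration

# Draw 100,000 i.i.d. Bernoulli(p_true) samples.
sample = [1 if random.random() < p_true else 0 for _ in range(100_000)]

# The maximum likelihood estimate of p is simply the sample mean.
p_hat = sum(sample) / len(sample)
assert abs(p_hat - p_true) < 0.01
```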


The expected value of a Bernoulli random variable X is

\operatorname{E}[X] = p.

This follows because, for a Bernoulli distributed random variable X with P(X=1)=p and P(X=0)=q, we find

\operatorname{E}[X] = P(X=1)\cdot 1 + P(X=0)\cdot 0 = p \cdot 1 + q\cdot 0 = p.


The variance of a Bernoulli distributed X is

\operatorname{Var}[X] = pq = p(1-p)

We first find

\operatorname{E}[X^2] = P(X=1)\cdot 1^2 + P(X=0)\cdot 0^2 = p \cdot 1^2 + q\cdot 0^2 = p

From this follows

\operatorname{Var}[X] = \operatorname{E}[X^2]-\operatorname{E}[X]^2 = p-p^2 = p(1-p) = pq
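The mean and variance formulas above can be sanity-checked by simulation; this short Python sketch (our own, with an arbitrary p) compares empirical moments against p and p(1 − p):

```python
import random

random.seed(1)
p = 0.25            # arbitrary illustrative parameter
n = 200_000
xs = [1 if random.random() < p else 0 for _ in range(n)]

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n

# Empirical moments should be close to E[X] = p and Var[X] = p(1 - p).
assert abs(mean - p) < 0.01
assert abs(var - p * (1 - p)) < 0.01
```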


The skewness is \frac{q-p}{\sqrt{pq}}=\frac{1-2p}{\sqrt{pq}}. When we take the standardized Bernoulli distributed random variable \frac{X-\operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}} we find that this random variable attains \frac{q}{\sqrt{pq}} with probability p and attains -\frac{p}{\sqrt{pq}} with probability q. Thus we get

\begin{align}
\gamma_1 &= \operatorname{E} \left[\left(\frac{X-\operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}\right)^3\right] \\
&= p \cdot \left(\frac{q}{\sqrt{pq}}\right)^3 + q \cdot \left(-\frac{p}{\sqrt{pq}}\right)^3 \\
&= \frac{1}{\sqrt{pq}^3} \left(pq^3-qp^3\right) \\
&= \frac{pq}{\sqrt{pq}^3} (q-p) \\
&= \frac{q-p}{\sqrt{pq}}
\end{align}
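The closed-form skewness can likewise be checked empirically; in this sketch (our own, with p = 0.2 chosen for illustration), the standardized third moment of simulated draws approaches (q − p)/√(pq):

```python
import random
from math import sqrt

random.seed(3)
p, n = 0.2, 200_000
q = 1 - p
xs = [1 if random.random() < p else 0 for _ in range(n)]

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n

# Sample skewness: average of the standardized third power.
skew = sum(((x - mean) / sqrt(var)) ** 3 for x in xs) / n

# Closed form: (q - p) / sqrt(pq) = 0.6 / 0.4 = 1.5 for p = 0.2.
assert abs(skew - (q - p) / sqrt(p * q)) < 0.05
```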

Related distributions

  • If X_1,\dots,X_n are independent, identically distributed (i.i.d.) random variables, all Bernoulli distributed with success probability p, then
Y = \sum_{k=1}^n X_k \sim \mathrm{B}(n,p) (binomial distribution).

The Bernoulli distribution is simply \mathrm{B}(1,p).
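The Bernoulli-to-binomial relationship above can be sketched numerically (our own illustration, with arbitrary parameters): summing n i.i.d. Bernoulli(p) draws produces frequencies that match the B(n, p) pmf:

```python
import random
from collections import Counter
from math import comb

random.seed(2)
n, p, trials = 5, 0.4, 100_000   # illustrative parameters

# Y = X_1 + ... + X_n, each X_k ~ Bernoulli(p); tally Y over many trials.
counts = Counter(
    sum(1 if random.random() < p else 0 for _ in range(n))
    for _ in range(trials)
)

# Empirical frequencies should match the Binomial(n, p) pmf.
for k in range(n + 1):
    pmf = comb(n, k) * p ** k * (1 - p) ** (n - k)
    assert abs(counts[k] / trials - pmf) < 0.01
```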

References

  1. ^ James Victor Uspensky: Introduction to Mathematical Probability, McGraw-Hill, New York, 1937, p. 45.
  2. ^ McCullagh and Nelder (1989), Section 4.2.2.

