Hypergeometric distribution

From Wikipedia, the free encyclopedia
Parameters \begin{align} N &\in \{0, 1, 2, \dots\} \\ K &\in \{0, 1, 2, \dots, N\} \\ n &\in \{0, 1, 2, \dots, N\} \end{align}
Support k \in \{\max(0,\, n+K-N),\, \dots,\, \min(n,\, K)\}
pmf \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}}
CDF 1 - \frac{\binom{n}{k+1} \binom{N-n}{K-k-1}}{\binom{N}{K}} \,_3F_2\!\left[\begin{array}{c} 1,\ k+1-K,\ k+1-n \\ k+2,\ N+k+2-K-n \end{array}; 1\right], where \,_pF_q is the generalized hypergeometric function
Mean n \frac{K}{N}
Mode \left\lfloor \frac{(n+1)(K+1)}{N+2} \right\rfloor
Variance n \frac{K}{N} \frac{N-K}{N} \frac{N-n}{N-1}
Skewness \frac{(N-2K)(N-1)^{1/2}(N-2n)}{[nK(N-K)(N-n)]^{1/2}(N-2)}
Ex. kurtosis \frac{1}{nK(N-K)(N-n)(N-2)(N-3)} \Big[ (N-1)N^{2} \Big( N(N+1) - 6K(N-K) - 6n(N-n) \Big) + 6nK(N-K)(N-n)(5N-6) \Big]
MGF \frac{\binom{N-K}{n} \,_2F_1(-n, -K; N-K-n+1; e^{t})}{\binom{N}{n}}
CF \frac{\binom{N-K}{n} \,_2F_1(-n, -K; N-K-n+1; e^{it})}{\binom{N}{n}}

In probability theory and statistics, the hypergeometric distribution is a discrete probability distribution that describes the probability of k successes in n draws, without replacement, from a finite population of size N that contains exactly K successes, wherein each draw is either a success or a failure. In contrast, the binomial distribution describes the probability of k successes in n draws with replacement.

In statistics, the hypergeometric test uses the hypergeometric distribution to calculate the statistical significance of having drawn a specific k successes (out of n total draws) from the aforementioned population. The test is often used to identify which sub-populations are over- or under-represented in a sample. This test has a wide range of applications. For example, a marketing group could use the test to understand their customer base by testing a set of known customers for over-representation of various demographic subgroups (e.g., women, people under 30).


The following conditions characterize the hypergeometric distribution:

  • The result of each draw (the elements of the population being sampled) can be classified into one of two mutually exclusive categories (e.g. Pass/Fail or Female/Male or Employed/Unemployed).
  • The probability of a success changes on each draw, as each draw decreases the population (sampling without replacement from a finite population).

A random variable X follows the hypergeometric distribution if its probability mass function (pmf) is given by[1]

 P(X = k) = \frac{\binom{K}{k} \binom{N - K}{n-k}}{\binom{N}{n}},

where
  • N is the population size,
  • K is the number of success states in the population,
  • n is the number of draws,
  • k is the number of observed successes,
  • \textstyle {a \choose b} is a binomial coefficient.

The pmf is positive when \max(0, n+K-N) \leq k \leq \min(K,n).
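As a quick check, the pmf and its support can be implemented directly from the definition. The following is a minimal Python sketch (the function name is ours, not a standard API):

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(X = k): probability of k successes in n draws, without
    replacement, from a population of N containing K successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# The pmf is positive exactly on max(0, n+K-N) <= k <= min(n, K),
# and sums to 1 over that support.
N, K, n = 50, 5, 10
support = range(max(0, n + K - N), min(n, K) + 1)
total = sum(hypergeom_pmf(k, N, K, n) for k in support)
print(total)  # 1.0 up to floating-point rounding
```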

The pmf satisfies the recurrence relation[citation needed]

(k + 1) (N - K - (n - k - 1)) P(X = k + 1) = (K - k) (n - k) P(X = k),

together with the initial value

P(X = 0) = \frac{\binom{N - K}{n}}{\binom{N}{n}}.

Combinatorial identities[edit]

As one would expect, the probabilities sum up to 1:

 \sum_{0\leq k\leq n}    { {K \choose k} { N-K \choose n-k} \over {N \choose n} }  = 1

This is essentially Vandermonde's identity from combinatorics.

Also note the following identity holds:

 {{{K \choose k} {{N-K} \choose {n-k}}}\over {N \choose n}} = {{{n \choose k} {{N-n} \choose {K-k}}}\over {N \choose K}}.

This follows from the symmetry of the problem, but it can also be shown by expressing the binomial coefficients in terms of factorials and rearranging the latter.
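Both the normalization and the symmetry identity are easy to confirm numerically. A small sketch using exact rational arithmetic (the helper name `f` is ours):

```python
from fractions import Fraction
from math import comb

def f(k, N, K, n):
    # The hypergeometric pmf as an exact rational number.
    return Fraction(comb(K, k) * comb(N - K, n - k), comb(N, n))

N, K, n = 50, 5, 10
# Vandermonde's identity: the probabilities sum to exactly 1.
assert sum(f(k, N, K, n) for k in range(0, min(n, K) + 1)) == 1
# Symmetry: swapping the roles of K and n leaves the pmf unchanged.
assert all(f(k, N, K, n) == f(k, N, n, K) for k in range(0, min(n, K) + 1))
```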

Application and example[edit]

The classical application of the hypergeometric distribution is sampling without replacement. Think of an urn with two types of marbles, red ones and green ones. Define drawing a green marble as a success and drawing a red marble as a failure (analogous to the binomial distribution). If the variable N describes the number of all marbles in the urn (see contingency table below) and K describes the number of green marbles, then N − K corresponds to the number of red marbles. In this example, X is the random variable whose outcome is k, the number of green marbles actually drawn in the experiment. This situation is illustrated by the following contingency table:

              drawn    not drawn        total
green marbles k        K − k            K
red marbles   n − k    N + k − n − K    N − K
total         n        N − n            N

Now, assume (for example) that there are 5 green and 45 red marbles in the urn. Standing next to the urn, you close your eyes and draw 10 marbles without replacement. What is the probability that exactly 4 of the 10 are green? Note that although we are looking at success/failure, the data are not accurately modeled by the binomial distribution, because the probability of success on each trial is not the same, as the size of the remaining population changes as we remove each marble.

This problem is summarized by the following contingency table:

              drawn       not drawn             total
green marbles k = 4       K − k = 1             K = 5
red marbles   n − k = 6   N + k − n − K = 39    N − K = 45
total         n = 10      N − n = 40            N = 50

The probability of drawing exactly k green marbles can be calculated by the formula

 P(X=k) = f(k;N,K,n) = {{{K \choose k} {{N-K} \choose {n-k}}}\over {N \choose n}}.

Hence, in this example we calculate

 P(X=4) = f(4;50,5,10) = {{{5 \choose 4} {{45} \choose {6}}}\over {50 \choose 10}} = {5\cdot 8145060\over 10272278170} = 0.003964583\dots.

Intuitively we would expect it to be even more unlikely for all 5 marbles to be green.

 P(X=5) = f(5;50,5,10) = {{{5 \choose 5} {{45} \choose {5}}}\over {50 \choose 10}} = {1\cdot 1221759 \over 10272278170} = 0.0001189375\dots

As expected, drawing 5 green marbles is roughly 33 times less likely than drawing 4.
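These two probabilities, and their ratio, can be reproduced in a few lines of Python (the helper name `f` is ours):

```python
from math import comb

def f(k, N, K, n):
    # Hypergeometric pmf: k successes in n draws from N with K successes.
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

p4 = f(4, 50, 5, 10)  # probability of exactly 4 green marbles
p5 = f(5, 50, 5, 10)  # probability of all 5 green marbles
print(p4, p5, p4 / p5)
```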

Application to Texas Hold'em Poker[edit]

In Hold'em poker, players make the best hand they can by combining the two cards in their hand with the 5 community cards eventually turned up on the table. The deck has 52 cards, 13 of each suit. For this example, assume a player holds 2 clubs and 3 cards are showing on the table, 2 of which are also clubs. The player would like to know the probability that one of the next 2 cards to be shown is a club, completing the flush.

There are 4 clubs showing so there are 9 still unseen. There are 5 cards showing (2 in the hand and 3 on the table) so there are 52-5=47 still unseen.

The probability that exactly one of the next two cards turned is a club can be calculated using the hypergeometric distribution with k = 1, n = 2, K = 9 and N = 47 (about 31.6%).

The probability that both of the next two cards turned are clubs is obtained with k = 2, n = 2, K = 9 and N = 47 (about 3.3%).

The probability that neither of the next two cards turned is a club is obtained with k = 0, n = 2, K = 9 and N = 47 (about 65.0%); the flush is therefore completed with probability 1 − 0.650 ≈ 35.0%.
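The three poker probabilities above can be checked with the same pmf (the helper name `f` is ours):

```python
from math import comb

def f(k, N, K, n):
    # Hypergeometric pmf: k successes in n draws from N with K successes.
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

N, K, n = 47, 9, 2       # 47 unseen cards, 9 of them clubs, 2 cards to come
p_none = f(0, N, K, n)   # neither card is a club
p_one  = f(1, N, K, n)   # exactly one club
p_both = f(2, N, K, n)   # both cards are clubs
print(round(p_none, 3), round(p_one, 3), round(p_both, 3))  # 0.65 0.316 0.033
```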


Symmetries[edit]

Swapping the roles of green and red marbles:

 f(k;N,K,n) = f(n-k;N,N-K,n)

Swapping the roles of drawn and not drawn marbles:

 f(k;N,K,n) = f(K-k;N,K,N-n)

Swapping the roles of green and drawn marbles:

 f(k;N,K,n) = f(k;N,n,K)
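All three symmetries can be verified exactly with rational arithmetic (the helper name `f` is ours; the test values are our choice):

```python
from fractions import Fraction
from math import comb

def f(k, N, K, n):
    # Hypergeometric pmf as an exact rational number.
    return Fraction(comb(K, k) * comb(N - K, n - k), comb(N, n))

k, N, K, n = 4, 50, 5, 10
assert f(k, N, K, n) == f(n - k, N, N - K, n)   # green <-> red
assert f(k, N, K, n) == f(K - k, N, K, N - n)   # drawn <-> not drawn
assert f(k, N, K, n) == f(k, N, n, K)           # green <-> drawn
```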

Hypergeometric test[edit]

The hypergeometric test uses the hypergeometric distribution to measure the statistical significance of having drawn a sample consisting of a specific number of k successes (out of n total draws) from a population of size N containing K successes. In a test for over-representation of successes in the sample, the hypergeometric p-value is calculated as the probability of randomly drawing k or more successes from the population in n total draws. In a test for under-representation, the p-value is the probability of randomly drawing k or fewer successes.
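The one-sided p-values are simply tail sums of the pmf; a minimal sketch (the function names are ours):

```python
from math import comb

def pmf(k, N, K, n):
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def p_over(k, N, K, n):
    """Over-representation test: P(X >= k)."""
    return sum(pmf(j, N, K, n) for j in range(k, min(n, K) + 1))

def p_under(k, N, K, n):
    """Under-representation test: P(X <= k)."""
    return sum(pmf(j, N, K, n) for j in range(max(0, n + K - N), k + 1))

# Urn example: 4 of 10 draws were green, out of 5 greens among 50 marbles.
print(p_over(4, 50, 5, 10))  # ≈ 0.00408
```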

Relationship to Fisher's exact test[edit]


The test based on the hypergeometric distribution (hypergeometric test) is identical to the corresponding one-tailed version of Fisher's exact test.[2] Reciprocally, the p-value of a two-sided Fisher's exact test can be calculated as the sum of two appropriate hypergeometric tests (for more information see [3]).

Order of draws[edit]

The probability of drawing any sequence of white and black marbles (the hypergeometric distribution) depends only on the number of white and black marbles, not on the order in which they appear; i.e., it is an exchangeable distribution. As a result, the probability of drawing a white marble in the i^{\text{th}} draw is[4]

 P(W_i) = {\frac{K}{N}} .
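This position-independence is easy to see in a quick Monte Carlo simulation (the parameters and seed are our choice):

```python
import random

random.seed(42)
N, K, n_draws, trials = 50, 5, 10, 100_000
hits = [0] * n_draws  # success count for each draw position
for _ in range(trials):
    sample = random.sample(range(N), n_draws)  # sampling without replacement
    for i, marble in enumerate(sample):
        if marble < K:  # marbles 0..K-1 are the "white" ones
            hits[i] += 1
freqs = [h / trials for h in hits]
print(freqs)  # every position is close to K/N = 0.1
```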

Related distributions[edit]

Let X ~ Hypergeometric(K, N, n) and p=K/N.

  • If n=1 then X has a Bernoulli distribution with parameter p.
  • Let Y have a binomial distribution with parameters n and p; this models the number of successes in the analogous sampling problem with replacement. If N and K are large compared to n, and p is not close to 0 or 1, then X and Y have similar distributions, i.e., P(X \le k) \approx P(Y \le k).
  • If n is large, N and K are large compared to n, and p is not close to 0 or 1, then
P(X \le k) \approx \Phi \left( \frac{k-n p}{\sqrt{n p (1-p)}} \right)

where \Phi is the standard normal distribution function.
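A sketch comparing the three cumulative distribution functions under conditions where the approximations should hold (the parameters are our choice):

```python
from math import comb, erf, sqrt

def hyper_cdf(k, N, K, n):
    # Exact hypergeometric CDF via summing the pmf.
    return sum(comb(K, j) * comb(N - K, n - j) for j in range(0, k + 1)) / comb(N, n)

def binom_cdf(k, n, p):
    # Binomial CDF: sampling with replacement.
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(0, k + 1))

def phi(z):
    # Standard normal CDF.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

N, K, n = 1000, 300, 50   # N and K large compared to n; p = 0.3
p = K / N
k = 18
h = hyper_cdf(k, N, K, n)
b = binom_cdf(k, n, p)
g = phi((k - n * p) / sqrt(n * p * (1 - p)))  # the normal approximation above
print(h, b, g)  # all three are close; the normal one is the roughest
```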

The following table describes four distributions related to the number of successes in a sequence of draws:

                         With replacement                 Without replacement
Given number of draws    binomial distribution            hypergeometric distribution
Given number of failures negative binomial distribution   negative hypergeometric distribution

Tail bounds[edit]

Let X ~ Hypergeometric(K, N, n) and p=K/N. Then we can derive the following bounds:

\begin{align}
\Pr[X \le (p - t) n] &\le \exp(-n\,\text{D}(p-t\,\|\,p)) \le \exp(-2 t^2 n) \\
\Pr[X \ge (p + t) n] &\le \exp(-n\,\text{D}(p+t\,\|\,p)) \le \exp(-2 t^2 n)
\end{align}

where

\text{D}(a\,\|\,b) = a \log\frac{a}{b} + (1-a) \log\frac{1-a}{1-b}

is the Kullback–Leibler divergence between Bernoulli distributions with parameters a and b, and we use the fact that \text{D}(a\,\|\,b) \ge 2(a-b)^2.[5]
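A numerical sanity check of the first pair of bounds (the parameters are our choice):

```python
from math import comb, exp, log

def pmf(k, N, K, n):
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def kl(a, b):
    # Kullback-Leibler divergence between Bernoulli(a) and Bernoulli(b).
    return a * log(a / b) + (1 - a) * log((1 - a) / (1 - b))

N, K, n = 200, 80, 50
p, t = K / N, 0.1                 # p = 0.4
cut = round((p + t) * n)          # (p + t) n = 25
tail = sum(pmf(k, N, K, n) for k in range(cut, min(n, K) + 1))  # P[X >= (p+t)n]
bound_kl = exp(-n * kl(p + t, p))
bound_hoeffding = exp(-2 * t * t * n)
print(tail, bound_kl, bound_hoeffding)
assert tail <= bound_kl <= bound_hoeffding  # KL bound is the tighter of the two
```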

If n is close to N, it can be useful to apply symmetry to "invert" the bounds, which gives the following:[6]

\begin{align}
\Pr[X \le (p - t) n] &\le \exp\!\left(-(N-n)\,\text{D}\!\left(p+\tfrac{tn}{N-n}\,\middle\|\,p\right)\right) \le \exp\!\left(-2\tfrac{(tn)^2}{N-n}\right) \\
\Pr[X \ge (p + t) n] &\le \exp\!\left(-(N-n)\,\text{D}\!\left(p-\tfrac{tn}{N-n}\,\middle\|\,p\right)\right) \le \exp\!\left(-2\tfrac{(tn)^2}{N-n}\right)
\end{align}

Multivariate hypergeometric distribution[edit]

Multivariate Hypergeometric Distribution
Parameters c \in \mathbb{N} = \lbrace 0, 1, \ldots \rbrace
(K_1,\ldots,K_c) \in \mathbb{N}^c
N = \sum_{i=1}^c K_i
n \in \lbrace 0,\ldots,N\rbrace
Support \left\{ \mathbf{k} \in \mathbb{Z}_{0+}^c \, : \, \forall i\ k_i \le K_i , \sum_{i=1}^{c} k_i = n \right\}
pmf \frac{\prod_{i=1}^{c} \binom{K_i}{k_i}}{\binom{N}{n}}
Mean E(X_i) = \frac{n K_i}{N}
Variance \text{Var}(X_i) = \frac{K_i}{N} \left(1-\frac{K_i}{N}\right) n \frac{N-n}{N-1}
\text{Cov}(X_i,X_j) = -\frac{n K_i K_j}{N^2} \frac{N-n}{N-1}

The model of an urn with black and white marbles can be extended to the case where there are more than two colors of marbles. If there are Ki marbles of color i in the urn and you take n marbles at random without replacement, then the number of marbles of each color in the sample (k1,k2,...,kc) has the multivariate hypergeometric distribution. This has the same relationship to the multinomial distribution that the hypergeometric distribution has to the binomial distribution—the multinomial distribution is the "with-replacement" distribution and the multivariate hypergeometric is the "without-replacement" distribution.

The properties of this distribution are given in the adjacent table, where c is the number of different colors and N=\sum_{i=1}^{c} K_i is the total number of marbles.


Suppose there are 5 black, 10 white, and 15 red marbles in an urn. You reach in and randomly select six marbles without replacement. What is the probability that you pick exactly two of each color?

 P(2\text{ black}, 2\text{ white}, 2\text{ red}) = {{{5 \choose 2}{10 \choose 2} {15 \choose 2}}\over {30 \choose 6}} = 0.079575596816976

Note: When picking the six marbles without replacement, the expected number of black marbles is 6 × (5/30) = 1, the expected number of white marbles is 6 × (10/30) = 2, and the expected number of red marbles is 6 × (15/30) = 3; this matches the mean formula in the table, E(X_i) = n K_i / N.
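The calculation above generalizes directly from the pmf in the table; a short sketch (the function name is ours):

```python
from math import comb

def multivariate_hypergeom_pmf(ks, Ks):
    """P of drawing exactly ks[i] marbles of color i, for each i,
    when the urn contains Ks[i] marbles of color i."""
    N, n = sum(Ks), sum(ks)  # population size and total draws
    num = 1
    for K, k in zip(Ks, ks):
        num *= comb(K, k)
    return num / comb(N, n)

# 2 black, 2 white, 2 red from an urn of 5 black, 10 white, 15 red:
p = multivariate_hypergeom_pmf([2, 2, 2], [5, 10, 15])
print(p)  # ≈ 0.0795756
```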

References[edit]

  1. ^ Rice, John A. (2007). Mathematical Statistics and Data Analysis (Third ed.). Duxbury Press. p. 42. 
  2. ^ Rivals, I.; Personnaz, L.; Taing, L.; Potier, M.-C (2007). "Enrichment or depletion of a GO category within a class of genes: which test?". Bioinformatics 23 (4): 401–407. doi:10.1093/bioinformatics/btl633. PMID 17182697. 
  3. ^ K. Preacher and N. Briggs. "Calculation for Fisher's Exact Test: An interactive calculation tool for Fisher's exact probability test for 2 x 2 tables (interactive page)". 
  4. ^ http://www.stat.yale.edu/~pollard/Courses/600.spring2010/Handouts/Symmetry%5BPolyaUrn%5D.pdf
  5. ^ https://ahlenotes.wordpress.com/2015/12/08/hypergeometric_tail/
  6. ^ https://ahlenotes.wordpress.com/2015/12/08/hypergeometric_tail/

