Negative hypergeometric distribution

Parameters:
• ${\displaystyle N\in \left\{0,1,2,\dots \right\}}$ - total number of elements
• ${\displaystyle K\in \left\{0,1,2,\dots ,N\right\}}$ - total number of 'success' elements
• ${\displaystyle r\in \left\{0,1,2,\dots ,N-K\right\}}$ - number of failures when experiment is stopped

Support: ${\displaystyle k\in \left\{0,\,\dots ,\,K\right\}}$ - number of successes when experiment is stopped

PMF: ${\displaystyle {\frac {{{k+r-1} \choose {k}}{{N-r-k} \choose {K-k}}}{N \choose K}}}$

Mean: ${\displaystyle r{\frac {K}{N-K+1}}}$

Variance: ${\displaystyle r{\frac {(N+1)K}{(N-K+1)(N-K+2)}}\left[1-{\frac {r}{N-K+1}}\right]}$

In probability theory and statistics, the negative hypergeometric distribution describes the probabilities that arise when sampling without replacement from a finite population whose elements can each be classified into one of two mutually exclusive categories, such as Pass/Fail, Male/Female or Employed/Unemployed. As random selections are made from the population, each draw removes an element from the population, causing the probability of success to change with each draw. Unlike the standard hypergeometric distribution, which describes the number of successes in a sample of fixed size, the negative hypergeometric distribution draws samples until ${\displaystyle r}$ failures have been found, and describes the probability of finding ${\displaystyle k}$ successes in such a sample. In other words, the negative hypergeometric distribution describes the likelihood of ${\displaystyle k}$ successes in a sample with exactly ${\displaystyle r}$ failures.

Definition

There are ${\displaystyle N}$ elements, of which ${\displaystyle K}$ are defined as "successes" and the rest are "failures".

Elements are drawn one after the other, without replacements, until ${\displaystyle r}$ failures are encountered. Then, the drawing stops and the number ${\displaystyle k}$ of successes is counted. The negative hypergeometric distribution, ${\displaystyle NHG_{N,K,r}(k)}$ is the discrete distribution of this ${\displaystyle k}$.

The outcome requires that we observe ${\displaystyle k}$ successes in the first ${\displaystyle (k+r-1)}$ draws and that the ${\displaystyle (k+r){\text{-th}}}$ draw is a failure. The probability of the former can be found by direct application of the hypergeometric distribution ${\displaystyle (HG_{N,K,k+r-1}(k))}$, and the probability of the latter is simply the number of failures remaining ${\displaystyle (=N-K-(r-1))}$ divided by the size of the remaining population ${\displaystyle (=N-(k+r-1))}$. The probability of having exactly ${\displaystyle k}$ successes up to the ${\displaystyle r{\text{-th}}}$ failure (i.e. the drawing stops as soon as the sample includes the predefined number of ${\displaystyle r}$ failures) is then the product of these two probabilities:

${\displaystyle {\frac {{\binom {K}{k}}{\binom {N-K}{k+r-1-k}}}{\binom {N}{k+r-1}}}\cdot {\frac {N-K-(r-1)}{N-(k+r-1)}}={\frac {{{k+r-1} \choose {k}}{{N-r-k} \choose {K-k}}}{N \choose K}}.}$
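The product form on the left and the closed form on the right can be compared numerically; below is a minimal Python sketch, where the function names and the example values ${\displaystyle N=10}$, ${\displaystyle K=4}$, ${\displaystyle r=2}$ are illustrative choices, not from the source:

```python
from math import comb

def hg_pmf(k, N, K, n):
    """Hypergeometric pmf: probability of k successes in a sample of size n
    drawn without replacement from N elements of which K are successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def nhg_pmf(k, N, K, r):
    """Negative hypergeometric pmf in its closed form."""
    return comb(k + r - 1, k) * comb(N - r - k, K - k) / comb(N, K)

N, K, r = 10, 4, 2
for k in range(K + 1):
    # k successes in the first k + r - 1 draws, then a failure on draw k + r
    product = hg_pmf(k, N, K, k + r - 1) * (N - K - (r - 1)) / (N - (k + r - 1))
    assert abs(product - nhg_pmf(k, N, K, r)) < 1e-12
```

The loop confirms that the two expressions agree for every ${\displaystyle k}$ in the support.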

Therefore, a random variable follows the negative hypergeometric distribution if its probability mass function (pmf) is given by

${\displaystyle f(k;N,K,r)\equiv \Pr(X=k)={\frac {{{k+r-1} \choose {k}}{{N-r-k} \choose {K-k}}}{N \choose K}}\quad {\text{for }}k=0,1,2,\dotsc ,K}$

where

• ${\displaystyle N}$ is the population size,
• ${\displaystyle K}$ is the number of success states in the population,
• ${\displaystyle r}$ is the number of failures,
• ${\displaystyle k}$ is the number of observed successes,
• ${\displaystyle a \choose b}$ is a binomial coefficient.

By design the probabilities sum to 1. However, if we want to show this explicitly, we have:

${\displaystyle \sum _{k=0}^{K}\Pr(X=k)=\sum _{k=0}^{K}{\frac {{{k+r-1} \choose {k}}{{N-r-k} \choose {K-k}}}{N \choose K}}={\frac {1}{N \choose K}}\sum _{k=0}^{K}{{k+r-1} \choose {k}}{{N-r-k} \choose {K-k}}={\frac {1}{N \choose K}}{N \choose K}=1,}$

where we have used the identity

${\displaystyle {\begin{aligned}\sum _{j=0}^{k}{\binom {j+m}{j}}{\binom {n-m-j}{k-j}}&=\sum _{j=0}^{k}(-1)^{j}{\binom {-m-1}{j}}(-1)^{k-j}{\binom {k-n+m-1}{k-j}}\\&=(-1)^{k}{\binom {k-n-2}{k}}=(-1)^{k}{\binom {k-(n+1)-1}{k}}={\binom {n+1}{k}},\end{aligned}}}$

which can be derived using the binomial identity, ${\displaystyle {{n \choose k}=(-1)^{k}{k-n-1 \choose k}}}$, and the Chu–Vandermonde identity, ${\displaystyle \sum _{j=0}^{k}{\binom {m}{j}}{\binom {n-m}{k-j}}={\binom {n}{k}}}$, which holds for any complex values ${\displaystyle m}$ and ${\displaystyle n}$ and any non-negative integer ${\displaystyle k}$.

The relationship ${\displaystyle \sum _{j=0}^{k}{\binom {j+m}{j}}{\binom {n-m-j}{k-j}}={\binom {n+1}{k}}}$ can also be found by examination of the coefficient of ${\displaystyle x^{k}}$ in the expansion of ${\displaystyle {\frac {1}{(1-x)^{m+1}}}{\frac {1}{(1-x)^{n-m-k+1}}}={\frac {1}{(1-x)^{n-k+2}}}}$, using Newton's binomial series.
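Both the identity and the resulting normalization of the pmf can be spot-checked numerically; here is a small Python sketch, where the function name, the parameter ranges, and the values ${\displaystyle N=10}$, ${\displaystyle K=4}$, ${\displaystyle r=2}$ are arbitrary illustrative choices:

```python
from math import comb

def convolution_sum(m, n, k):
    """Left-hand side of the identity: sum_{j=0}^{k} C(j+m, j) * C(n-m-j, k-j)."""
    return sum(comb(j + m, j) * comb(n - m - j, k - j) for j in range(k + 1))

# Spot-check the identity against C(n+1, k) for small non-negative integers,
# keeping n - m >= k so every binomial coefficient is defined.
for m in range(4):
    for k in range(4):
        for n in range(m + k, m + k + 4):
            assert convolution_sum(m, n, k) == comb(n + 1, k)

# The same identity (with m = r - 1, n = N - 1, summed up to K) gives the
# normalization of the pmf: the probabilities over k = 0..K sum to 1.
N, K, r = 10, 4, 2
total = sum(comb(k + r - 1, k) * comb(N - r - k, K - k)
            for k in range(K + 1)) / comb(N, K)
assert abs(total - 1) < 1e-12
```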

Expectation

When counting the number ${\displaystyle k}$ of successes before ${\displaystyle r}$ failures, the expected number of successes is ${\displaystyle {\frac {rK}{N-K+1}}}$ and can be derived as follows.

${\displaystyle {\begin{aligned}E[X]&=\sum _{k=0}^{K}k\Pr(X=k)=\sum _{k=0}^{K}k{\frac {{{k+r-1} \choose {k}}{{N-r-k} \choose {K-k}}}{N \choose K}}={\frac {r}{N \choose K}}\left[\sum _{k=0}^{K}{\frac {(k+r)}{r}}{{k+r-1} \choose {r-1}}{{N-r-k} \choose {K-k}}\right]-r\\&={\frac {r}{N \choose K}}\left[\sum _{k=0}^{K}{{k+r} \choose {r}}{{N-r-k} \choose {K-k}}\right]-r={\frac {r}{N \choose K}}\left[\sum _{k=0}^{K}{{k+r} \choose {k}}{{N-r-k} \choose {K-k}}\right]-r\\&={\frac {r}{N \choose K}}\left[{{N+1} \choose K}\right]-r={\frac {rK}{N-K+1}},\end{aligned}}}$

where we have used the relationship ${\displaystyle \sum _{j=0}^{k}{\binom {j+m}{j}}{\binom {n-m-j}{k-j}}={\binom {n+1}{k}}}$, that we derived above to show that the negative hypergeometric distribution was properly normalized.
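The closed-form mean can be compared against a direct summation of ${\displaystyle k\Pr(X=k)}$ over the support; a minimal Python sketch with illustrative values ${\displaystyle N=10}$, ${\displaystyle K=4}$, ${\displaystyle r=2}$ (not from the source):

```python
from math import comb

def nhg_pmf(k, N, K, r):
    """Negative hypergeometric pmf."""
    return comb(k + r - 1, k) * comb(N - r - k, K - k) / comb(N, K)

N, K, r = 10, 4, 2
# mean by direct summation over the support k = 0..K
mean = sum(k * nhg_pmf(k, N, K, r) for k in range(K + 1))
# closed form r*K/(N - K + 1)
assert abs(mean - r * K / (N - K + 1)) < 1e-12
```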

Variance

The variance can be derived by the following calculation.

${\displaystyle {\begin{aligned}E[X^{2}]&=\sum _{k=0}^{K}k^{2}\Pr(X=k)=\left[\sum _{k=0}^{K}(k+r)(k+r+1)\Pr(X=k)\right]-(2r+1)E[X]-r^{2}-r\\&={\frac {r(r+1)}{N \choose K}}\left[\sum _{k=0}^{K}{{k+r+1} \choose {k+1}}{{N+1-(r+1)-k} \choose {K-k}}\right]-(2r+1)E[X]-r^{2}-r\\&={\frac {r(r+1)}{N \choose K}}\left[{{N+2} \choose K}\right]-(2r+1)E[X]-r^{2}-r={\frac {rK(N-r+Kr+1)}{(N-K+1)(N-K+2)}}\end{aligned}}}$

Then the variance is ${\displaystyle Var[X]=E[X^{2}]-\left(E[X]\right)^{2}={\frac {rK(N+1)(N-K-r+1)}{(N-K+1)^{2}(N-K+2)}}}$.
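The variance formula can likewise be checked by computing the second moment directly from the pmf; a short Python sketch with the same illustrative values ${\displaystyle N=10}$, ${\displaystyle K=4}$, ${\displaystyle r=2}$:

```python
from math import comb

def nhg_pmf(k, N, K, r):
    """Negative hypergeometric pmf."""
    return comb(k + r - 1, k) * comb(N - r - k, K - k) / comb(N, K)

N, K, r = 10, 4, 2
mean = sum(k * nhg_pmf(k, N, K, r) for k in range(K + 1))
second_moment = sum(k * k * nhg_pmf(k, N, K, r) for k in range(K + 1))
variance = second_moment - mean ** 2
# closed form r*K*(N+1)*(N-K-r+1) / ((N-K+1)^2 * (N-K+2))
closed_form = r * K * (N + 1) * (N - K - r + 1) / ((N - K + 1) ** 2 * (N - K + 2))
assert abs(variance - closed_form) < 1e-12
```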

Related distributions

If the drawing stops after a constant number ${\displaystyle n}$ of draws (regardless of the number of failures), then the number of successes has the hypergeometric distribution, ${\displaystyle HG_{N,K,n}(k)}$. The two functions are related in the following way:[1]

${\displaystyle NHG_{N,K,r}(k)=1-HG_{N,N-K,k+r}(r-1),}$

where both sides are understood as cumulative distribution functions: observing at most ${\displaystyle k}$ successes before the ${\displaystyle r{\text{-th}}}$ failure is the same event as observing at least ${\displaystyle r}$ failures among the first ${\displaystyle k+r}$ draws.
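This relationship between the two cumulative distribution functions can be verified numerically: at most ${\displaystyle k}$ successes before the ${\displaystyle r{\text{-th}}}$ failure occurs exactly when at least ${\displaystyle r}$ failures fall among the first ${\displaystyle k+r}$ draws. A minimal Python check with illustrative values ${\displaystyle N=10}$, ${\displaystyle K=4}$, ${\displaystyle r=2}$ (not from the source):

```python
from math import comb

def nhg_pmf(k, N, K, r):
    """Negative hypergeometric pmf."""
    return comb(k + r - 1, k) * comb(N - r - k, K - k) / comb(N, K)

def hg_cdf(x, N, K, n):
    """Hypergeometric CDF: P(at most x successes in a sample of size n)."""
    return sum(comb(K, i) * comb(N - K, n - i) / comb(N, n) for i in range(x + 1))

N, K, r = 10, 4, 2
for k in range(K + 1):
    nhg_cdf = sum(nhg_pmf(j, N, K, r) for j in range(k + 1))
    # Treat failures (N - K of them) as the "successes" of the hypergeometric
    # draw of size k + r; at most r - 1 of them must NOT occur.
    assert abs(nhg_cdf - (1 - hg_cdf(r - 1, N, N - K, k + r))) < 1e-12
```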

The negative hypergeometric distribution (like the hypergeometric distribution) deals with draws without replacement, so that the probability of success differs from draw to draw. In contrast, the negative binomial distribution (like the binomial distribution) deals with draws with replacement, so that the probability of success stays the same and the trials are independent. The following table summarizes the four distributions related to drawing items:

                                               With replacement                  Without replacement
# of successes in a fixed # of draws           binomial distribution             hypergeometric distribution
# of successes until a fixed # of failures     negative binomial distribution    negative hypergeometric distribution

References

1. Negative hypergeometric distribution in Encyclopedia of Math.