Coupon collector's problem

From Wikipedia, the free encyclopedia

[Figure: the number of coupons n versus the expected number of trials (i.e., time) E(T) needed to collect them all.]

In probability theory, the coupon collector's problem describes "collect all coupons and win" contests. It asks the following question: If each box of a brand of cereal contains a coupon, and there are n different types of coupons, what is the probability that more than t boxes need to be bought to collect all n coupons? An alternative statement is: Given n coupons, how many coupons do you expect you need to draw with replacement before having drawn each coupon at least once? The mathematical analysis of the problem reveals that the expected number of trials needed grows as $\Theta(n \log(n))$.[a] For example, when n = 50 it takes about 225[b] trials on average to collect all 50 coupons.
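
This figure is easy to check empirically. The following Python sketch (an illustration only, not from the article's sources; the helper name draws_to_collect is ours) estimates the expected number of trials for n = 50 by Monte Carlo simulation:

    import random

    def draws_to_collect(n):
        """Draw uniformly with replacement until all n coupon types are seen."""
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(n))  # each draw is one of the n types
            draws += 1
        return draws

    n, runs = 50, 10_000
    average = sum(draws_to_collect(n) for _ in range(runs)) / runs
    print(f"average draws over {runs} runs: {average:.1f}")  # typically near 225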

Solution

Calculating the expectation

Let time T be the number of draws needed to collect all n coupons, and let $t_i$ be the time to collect the i-th coupon after i − 1 coupons have been collected. Then $T = t_1 + t_2 + \cdots + t_n$. Think of T and $t_i$ as random variables. Observe that the probability of collecting a new coupon is $p_i = \frac{n - (i - 1)}{n} = \frac{n - i + 1}{n}$. Therefore, $t_i$ has geometric distribution with expectation $\frac{1}{p_i} = \frac{n}{n - i + 1}$. By the linearity of expectations we have:

$$\operatorname{E}(T) = \operatorname{E}(t_1) + \operatorname{E}(t_2) + \cdots + \operatorname{E}(t_n) = \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1} = n \cdot \left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n}\right) = n \cdot H_n.$$

Here $H_n$ is the n-th harmonic number. Using the asymptotics of the harmonic numbers, we obtain:

$$\operatorname{E}(T) = n \cdot H_n = n \log n + \gamma n + \frac{1}{2} + O(1/n),$$

where $\gamma \approx 0.5772156649$ is the Euler–Mascheroni constant.
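
As a quick numerical illustration (a minimal sketch using only the Python standard library; the function names are ours), the exact value $n \cdot H_n$ can be compared with the asymptotic approximation above:

    import math

    def expected_T(n):
        """Exact expectation: n times the n-th harmonic number H_n."""
        return n * sum(1 / i for i in range(1, n + 1))

    def approx_T(n):
        """Asymptotic approximation n log n + gamma*n + 1/2."""
        gamma = 0.5772156649015329  # Euler-Mascheroni constant
        return n * math.log(n) + gamma * n + 0.5

    for n in (10, 50, 1000):
        print(n, round(expected_T(n), 4), round(approx_T(n), 4))

For n = 50 this prints 224.9603 and 224.9619, matching the figures in note [b].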

Now one can use the Markov inequality to bound the desired probability:

$$\operatorname{P}(T \geq c \, n H_n) \leq \frac{1}{c}.$$
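
To see how loose this bound typically is, one can compare it against a simulated tail probability. The sketch below is only an illustration (the choices n = 20 and c = 1.5, and the helper name, are arbitrary assumptions of ours):

    import random

    def draws_to_collect(n):
        """Draws with replacement until all n coupon types are seen."""
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(n))
            draws += 1
        return draws

    n, c, runs = 20, 1.5, 20_000
    H_n = sum(1 / i for i in range(1, n + 1))
    threshold = c * n * H_n
    tail = sum(draws_to_collect(n) >= threshold for _ in range(runs)) / runs
    print(f"empirical tail {tail:.3f} <= Markov bound {1 / c:.3f}")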

It is also easy to see that the above can be modified slightly to handle the case when we have already collected some of the coupons. Let k be the number of coupons already collected; then:

$$\operatorname{E}(T_k) = \operatorname{E}(t_{k+1}) + \operatorname{E}(t_{k+2}) + \cdots + \operatorname{E}(t_n) = n \cdot \left(\frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n-k}\right) = n \cdot H_{n-k}.$$

And of course when $k = 0$ we recover the original result.
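
A short simulation (again only a sketch; remaining_draws is a name we introduce, and the parameters are arbitrary) confirms the modified formula. For n = 50 with k = 40 coupons already held, the remaining expectation is 50 · H_10 ≈ 146.45:

    import random

    def remaining_draws(n, k):
        """Draws needed to finish when k distinct coupons are already held."""
        seen, draws = set(range(k)), 0  # treat coupons 0..k-1 as collected
        while len(seen) < n:
            seen.add(random.randrange(n))
            draws += 1
        return draws

    n, k, runs = 50, 40, 10_000
    average = sum(remaining_draws(n, k) for _ in range(runs)) / runs
    exact = n * sum(1 / i for i in range(1, n - k + 1))  # n * H_{n-k}
    print(f"simulated {average:.2f} vs exact n*H_(n-k) = {exact:.2f}")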

Calculating the variance

Using the independence of the random variables $t_i$, we obtain:

$$\operatorname{Var}(T) = \operatorname{Var}(t_1) + \operatorname{Var}(t_2) + \cdots + \operatorname{Var}(t_n) = \frac{1 - p_1}{p_1^2} + \frac{1 - p_2}{p_2^2} + \cdots + \frac{1 - p_n}{p_n^2} < \frac{n^2}{n^2} + \frac{n^2}{(n-1)^2} + \cdots + \frac{n^2}{1^2} = n^2 \sum_{i=1}^{n} \frac{1}{i^2} < \frac{\pi^2}{6} n^2,$$

since $\sum_{i=1}^{\infty} \frac{1}{i^2} = \frac{\pi^2}{6}$ (see Basel problem).

Now one can use the Chebyshev inequality to bound the desired probability:

$$\operatorname{P}\big(|T - n H_n| \geq c \, n\big) \leq \frac{\pi^2}{6 c^2}.$$
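
The variance bound, and hence the Chebyshev estimate, can likewise be checked numerically. The following sketch (ours, with n = 30 chosen arbitrarily) compares the sample variance of simulated collection times against $(\pi^2/6) n^2$:

    import math
    import random

    def draws_to_collect(n):
        """Draws with replacement until all n coupon types are seen."""
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(n))
            draws += 1
        return draws

    n, runs = 30, 20_000
    samples = [draws_to_collect(n) for _ in range(runs)]
    mean = sum(samples) / runs
    variance = sum((x - mean) ** 2 for x in samples) / (runs - 1)
    bound = math.pi ** 2 / 6 * n ** 2
    print(f"sample variance {variance:.0f} vs bound (pi^2/6) n^2 = {bound:.0f}")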

Tail estimates

A different upper bound can be derived from the following observation. Let ${Z}_i^r$ denote the event that the $i$-th coupon was not picked in the first $r$ trials. Then:

$$\operatorname{P}\left[ {Z}_i^r \right] = \left(1 - \frac{1}{n}\right)^r \leq e^{-r/n}.$$

Thus, for $r = \beta n \log n$, we have $\operatorname{P}\left[ {Z}_i^r \right] \leq e^{(-\beta n \log n)/n} = n^{-\beta}$. Taking a union bound over all n coupons:

$$\operatorname{P}\left[ T > \beta n \log n \right] = \operatorname{P}\left[ \bigcup_i {Z}_i^{\beta n \log n} \right] \leq n \cdot \operatorname{P}\left[ {Z}_1^{\beta n \log n} \right] \leq n^{-\beta + 1}.$$
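
This tail bound is also easy to test empirically. The sketch below (our illustration; β = 2 and n = 20 are arbitrary choices) compares the simulated tail probability with $n^{-\beta+1}$:

    import math
    import random

    def draws_to_collect(n):
        """Draws with replacement until all n coupon types are seen."""
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(n))
            draws += 1
        return draws

    n, beta, runs = 20, 2.0, 20_000
    threshold = beta * n * math.log(n)
    tail = sum(draws_to_collect(n) > threshold for _ in range(runs)) / runs
    print(f"empirical tail {tail:.4f} <= bound n^(1-beta) = {n ** (1 - beta):.4f}")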

Extensions and generalizations

  • Donald J. Newman and Lawrence Shepp gave a generalization of the coupon collector's problem when m copies of each coupon need to be collected. Let $T_m$ be the first time m copies of each coupon are collected. They showed that the expectation in this case satisfies:

    $$\operatorname{E}(T_m) = n \log n + (m-1) \, n \log\log n + O(n), \quad \text{as } n \to \infty.$$

    Here m is fixed. When m = 1 we get the earlier formula for the expectation.
  • A common generalization, also due to Erdős and Rényi:

    $$\operatorname{P}\big(T_m < n \log n + (m-1) \, n \log\log n + cn\big) \to e^{-e^{-c}/(m-1)!}, \quad \text{as } n \to \infty.$$

  • In the general case of a nonuniform probability distribution, according to Philippe Flajolet et al.:[1]

    $$\operatorname{E}(T) = \int_0^\infty \left(1 - \prod_{i=1}^{m} \left(1 - e^{-p_i t}\right)\right) dt.$$

    This is equal to

    $$\operatorname{E}(T) = \sum_{q=0}^{m-1} (-1)^{m-1-q} \sum_{|J| = q} \frac{1}{1 - P_J},$$

    where m denotes the number of coupons to be collected and $P_J$ denotes the probability of getting any coupon in the set of coupons J (see the numerical check in the sketch below).
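
As a numerical check of the last formula (a minimal sketch assuming nothing beyond the Python standard library; the coupon probabilities chosen are arbitrary), the inclusion-exclusion sum can be compared with a direct simulation:

    import random
    from itertools import combinations

    def expected_T(p):
        """E(T) via the inclusion-exclusion formula; P_J = sum of p_i over J."""
        m = len(p)
        total = 0.0
        for q in range(m):  # subset sizes q = 0 .. m-1
            sign = (-1) ** (m - 1 - q)
            for J in combinations(range(m), q):
                P_J = sum(p[i] for i in J)
                total += sign / (1 - P_J)
        return total

    def simulate(p, runs=20_000):
        """Monte Carlo estimate of E(T) with nonuniform coupon probabilities."""
        m, total = len(p), 0
        for _ in range(runs):
            seen, draws = set(), 0
            while len(seen) < m:
                seen.add(random.choices(range(m), weights=p)[0])
                draws += 1
            total += draws
        return total / runs

    p = [0.5, 0.25, 0.15, 0.10]
    print(f"formula: {expected_T(p):.3f}, simulation: {simulate(p):.3f}")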

See also

Notes

  1. ^ Here and throughout this article, "log" refers to the natural logarithm rather than a logarithm to some other base. The use of Θ here invokes big O notation.
  2. ^ E(50) = 50(1 + 1/2 + 1/3 + ... + 1/50) = 224.9603, the expected number of trials to collect all 50 coupons. The approximation $n \log n + \gamma n + \frac{1}{2}$ for this expected number gives in this case $50 \log 50 + 50\gamma + \frac{1}{2} \approx 195.6011 + 28.8608 + 0.5 \approx 224.9619$.

References

  1. ^ Flajolet, Philippe; Gardy, Danièle; Thimonier, Loÿs (1992), "Birthday paradox, coupon collectors, caching algorithms and self-organizing search", Discrete Applied Mathematics, 39 (3): 207–229, CiteSeerX 10.1.1.217.5965, doi:10.1016/0166-218x(92)90177-c