An Essay towards solving a Problem in the Doctrine of Chances

From Wikipedia, the free encyclopedia

An Essay towards solving a Problem in the Doctrine of Chances is a work on the mathematical theory of probability by the Reverend Thomas Bayes, published in 1763,[1] two years after its author's death. It included a statement of a special case of what is now called Bayes' theorem. In 18th-century English, the phrase "doctrine of chances" meant the theory of probability. It had been introduced as the title of a book by Abraham de Moivre.

Bayes supposed a sequence of independent experiments, each having as its outcome either success or failure, the probability of success being some number p between 0 and 1. He then supposed p to be an uncertain quantity, whose probability of being in any interval between 0 and 1 is the length of the interval. In modern terms, p would be considered a random variable uniformly distributed between 0 and 1. Conditionally on the value of p, the trials resulting in success or failure are independent, but unconditionally (or "marginally") they are not. That is because if a large number of successes are observed, then p is more likely to be large, so that success on the next trial is more probable. The question Bayes addressed was: what is the conditional probability distribution of p, given the numbers of successes and failures so far observed? The answer is that its probability density function is

 f(p) = \frac{(n+1)!}{k!(n-k)!} p^k (1-p)^{n-k}\text{ for }0\le p \le 1

(and f(p) = 0 for p < 0 or p > 1), where k is the number of successes so far observed, and n is the number of trials so far observed. This is what today is called the Beta distribution with parameters k + 1 and n − k + 1.
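This density can be checked numerically (a sketch not in Bayes' essay; the values k = 3 and n = 10 are illustrative): it should integrate to 1 over [0, 1], and its mean, (k + 1)/(n + 2), is the value known from Laplace's rule of succession.

```python
from math import factorial

def posterior_density(p, k, n):
    """Bayes' posterior density for the success probability p after
    k successes in n trials, under a uniform prior:
    f(p) = (n+1)! / (k! (n-k)!) * p^k * (1-p)^(n-k)."""
    return factorial(n + 1) / (factorial(k) * factorial(n - k)) * p**k * (1 - p)**(n - k)

# Integrate f and p*f over [0, 1] with a midpoint Riemann sum.
k, n = 3, 10
steps = 100_000
dp = 1.0 / steps
total = sum(posterior_density((i + 0.5) * dp, k, n) * dp for i in range(steps))
mean = sum((i + 0.5) * dp * posterior_density((i + 0.5) * dp, k, n) * dp
           for i in range(steps))

print(round(total, 4))  # ≈ 1.0: the density is properly normalized
print(round(mean, 4))   # ≈ (k+1)/(n+2) = 4/12 ≈ 0.3333
```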

Outline

Bayes' preliminary results (Propositions 3, 4, and 5) imply the truth of the theorem that is named for him. In particular, Proposition 5 gives a simple description of conditional probability:

"If there be two subsequent events, the probability of the second b/N and the probability of both together P/N, and it being first discovered that the second event has also happened, from hence I guess that the first event has also happened, the probability I am right is P/b."

However, it does not appear that Bayes emphasized or focused on this finding. He presented his work as the solution to a problem:

"Given the number of times in which an unknown event has happened and failed [... Find] the chance that the probability of its happening in a single trial lies somewhere between any two degrees of probability that can be named."[2]

Bayes gave an example of a man trying to guess the ratio of "blanks" and "prizes" at a lottery. So far the man has watched the lottery draw ten blanks and one prize. Given these data, Bayes showed in detail how to compute the probability that the ratio of blanks to prizes is between 9:1 and 11:1 (the probability is low, about 7.7%). He went on to describe that computation after the man has watched the lottery draw twenty blanks and two prizes, forty blanks and four prizes, and so on. Finally, having drawn 10,000 blanks and 1,000 prizes, the probability reaches about 97%.[2]
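The 7.7% figure can be reproduced directly (a sketch under the essay's assumptions, not Bayes' own method of computation). With ten blanks and one prize observed under a uniform prior, the posterior for the probability p of drawing a blank is Beta(11, 2), whose density 132·p¹⁰(1 − p) has antiderivative 12p¹¹ − 11p¹². A blank-to-prize ratio between 9:1 and 11:1 corresponds to p between 9/10 and 11/12.

```python
def cdf_beta_11_2(p):
    """CDF of the Beta(11, 2) posterior (ten blanks, one prize, uniform
    prior): the antiderivative of 132 * p**10 * (1 - p)."""
    return 12 * p**11 - 11 * p**12

# Probability that the blank:prize ratio lies between 9:1 and 11:1,
# i.e. that p lies between 9/10 and 11/12.
prob = cdf_beta_11_2(11 / 12) - cdf_beta_11_2(9 / 10)
print(round(prob, 3))  # 0.077 — Bayes' "about 7.7%"
```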

Bayes' main result (Proposition 9) is the following in modern terms:

Assume a uniform prior distribution of the binomial parameter p. After observing m successes and n failures,

P(a < p < b \mid m, n) = \frac{\int_a^b {n+m \choose m} p^m (1-p)^n \, dp}{\int_0^1 {n+m \choose m} p^m (1-p)^n \, dp}.
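Since the binomial coefficient appears in both numerator and denominator, it cancels, and the ratio can be evaluated by integrating p^m (1 − p)^n numerically. A minimal sketch (the function name and the check values are illustrative, not from the essay):

```python
def prop9(a, b, m, n, steps=100_000):
    """Bayes' Proposition 9: P(a < p < b) after m successes and n failures
    under a uniform prior. The binomial coefficient cancels, leaving a
    ratio of integrals of p^m * (1-p)^n, computed by the midpoint rule."""
    def integral(lo, hi):
        h = (hi - lo) / steps
        return sum(((lo + (i + 0.5) * h) ** m) * ((1 - (lo + (i + 0.5) * h)) ** n) * h
                   for i in range(steps))
    return integral(a, b) / integral(0, 1)

# Sanity check: with equal numbers of successes and failures the posterior
# is symmetric about 1/2, so P(p < 1/2) is exactly one half.
print(round(prop9(0, 0.5, 2, 2), 4))  # → 0.5
```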

It is unclear whether Bayes was a "Bayesian" in the modern sense, that is, whether he was interested in Bayesian inference or merely in probability. Proposition 9 seems "Bayesian" in its presentation as a probability about the parameter p. However, Bayes stated his question in a manner that suggests a frequentist viewpoint: he supposed that a ball is thrown at random onto a square table (this table is often misrepresented as a billiard table, and the ball as a billiard ball, but Bayes never describes them as such), and considered further balls that fall to the left or right of the first ball with probabilities p and 1 − p. The algebra is of course identical no matter which view is taken.

Richard Price and the existence of God

Richard Price discovered Bayes' essay and its now-famous theorem in Bayes' papers after Bayes' death. He believed that Bayes' theorem helped prove the existence of God ("the Deity") and wrote the following in his introduction to the essay:

"The purpose I mean is, to shew what reason we have for believing that there are in the constitution of things fixt laws according to which things happen, and that, therefore, the frame of the world must be the effect of the wisdom and power of an intelligent cause; and thus to confirm the argument taken from final causes for the existence of the Deity. It will be easy to see that the converse problem solved in this essay is more directly applicable to this purpose; for it shews us, with distinctness and precision, in every case of any particular order or recurrency of events, what reason there is to think that such recurrency or order is derived from stable causes or regulations in nature, and not from any irregularities of chance." (Philosophical Transactions of the Royal Society of London, 1763)[2]

In modern terms this is an instance of the teleological argument.

Commentaries

  • G. A. Barnard (1958) "Studies in the History of Probability and Statistics: IX. Thomas Bayes' Essay Towards Solving a Problem in the Doctrine of Chances", Biometrika 45:293–295. (biographical remarks)
  • Stephen M. Stigler (1982). "Thomas Bayes' Bayesian Inference", Journal of the Royal Statistical Society, Series A, 145:250–258. (Stigler argues for a revised interpretation of the essay; recommended)
  • Isaac Todhunter (1865). A History of the Mathematical Theory of Probability from the time of Pascal to that of Laplace, Macmillan. Reprinted 1949, 1956 by Chelsea and 2001 by Thoemmes.
