
Bayes' theorem


The simple statement of Bayes' theorem.

In probability theory and its applications, Bayes' theorem shows how to determine inverse probabilities: knowing the conditional probability of B given A, what is the conditional probability of A given B? This can be done, but the answer also involves the so-called prior or unconditional probabilities of A and B.

This theorem is named for Thomas Bayes (pronounced /ˈbeɪz/ or "bays") and is often called Bayes' law or Bayes' rule. Bayes' theorem expresses the conditional probability, or "posterior probability", of a hypothesis H (i.e. its probability after evidence E is observed) in terms of the "prior probability" of H, the prior probability of E, and the conditional probability of E given H. It implies that evidence has a confirming effect if it is more likely given H than given not-H.[1] Bayes' theorem is valid in all common interpretations of probability, and it is commonly applied in science and engineering.[2] However, statisticians disagree about whether it can be used to reduce all statistical questions to problems of inverse probability: can competing scientific hypotheses be assigned prior probabilities?

The key idea is that the probability of an event A given an event B (for example, the probability that one has breast cancer given that one has tested positive in a mammogram) depends not only on the relationship between events A and B (that is, the accuracy of mammograms) but also on the marginal probability (or "simple probability") of each event. For instance, if mammograms are known to be 95% accurate, this could be due to 5.0% false positives, 5.0% false negatives (missed cases), or a mix of false positives and false negatives; the probability of a positive mammogram will be different in each case. Bayes' theorem allows one to calculate the conditional probability of having breast cancer, given a positive mammogram, in any of these cases. A point of great practical importance is worth noting here: if the prevalence of positive mammograms is, say, 5.0%, while the marginal probability of this type of cancer is closer to 1.0%, then a positive result is five times more common than the cancer itself, and the conditional probability that an individual with a positive result actually has cancer is rather small. Indeed, the conditional probability of breast cancer given a positive mammogram is at most 1.0%/5.0% = 20%, and possibly less if the probability of a positive mammogram given breast cancer is itself below 100% (i.e. there are false negatives). This shows the value of correctly understanding and applying Bayes' theorem.
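To make the lead example concrete, here is a minimal sketch (in Python, using the illustrative figures from the paragraph above: a 1.0% prevalence, a 5.0% rate of positive mammograms, and an assumed 100% sensitivity to obtain the upper bound):

```python
# Upper bound on P(cancer | positive mammogram) from the figures quoted above.
prevalence = 0.01        # P(cancer): marginal probability of the cancer
positive_rate = 0.05     # P(positive): marginal probability of a positive mammogram
sensitivity = 1.0        # P(positive | cancer); 100% yields the upper bound

# Bayes' theorem: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
posterior = sensitivity * prevalence / positive_rate
print(posterior)  # 0.2 -- at most 20%, and lower if there are false negatives
```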

The theorem

Illustration of Bayes' theorem by two 3-dimensional tree diagrams.

Simple form of theorem

Thomas Bayes addressed both the case of discrete probability distributions of data and the more complicated case of continuous probability distributions. In the discrete case, Bayes' theorem relates the conditional and marginal probabilities of events A and B, provided that the probability of B does not equal zero:

    P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}.

In Bayes' theorem, each probability has a conventional name:

  • P(A) is the prior probability (or "unconditional" or "marginal" probability) of A; it does not take into account any information about B.
  • P(A|B) is the conditional probability of A given B; it is also called the posterior probability because it depends upon the specified value of B.
  • P(B|A) is the conditional probability of B given A; it is also called the likelihood.
  • P(B) is the prior or marginal probability of B, and acts as a normalizing constant.

Bayes' theorem in this form gives a mathematical representation of how the conditional probability of event A given B is related to the converse conditional probability of B given A.

Via conditional probabilities

To derive Bayes' theorem, start from the definition of conditional probability. The probability of the event A given the event B is

    P(A|B) = \frac{P(A \cap B)}{P(B)}.

Equivalently, the probability of the event B given the event A is

    P(B|A) = \frac{P(A \cap B)}{P(A)}.

Rearranging and combining these two equations, we find

    P(A|B)\,P(B) = P(A \cap B) = P(B|A)\,P(A).

This lemma is sometimes called the product rule for probabilities. Dividing the left and right hand sides of this equation by P(B), provided that neither P(B) nor P(A) is 0, we obtain Bayes' theorem:

    P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}.

This lemma is symmetric in A and B, since A and B are arbitrarily-chosen symbols, and dividing by P(A), provided that it is non-zero, gives a statement of Bayes' theorem in which the two symbols have changed places.
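As a quick numerical check of this derivation, the following sketch (an arbitrary made-up joint distribution over two binary events, not from the article) verifies the product rule and the resulting form of Bayes' theorem:

```python
# A toy joint distribution over binary events A and B; the four probabilities
# are arbitrary but sum to 1.
p_joint = {(True, True): 0.12, (True, False): 0.28,
           (False, True): 0.18, (False, False): 0.42}

p_A = sum(p for (a, _), p in p_joint.items() if a)   # marginal P(A)
p_B = sum(p for (_, b), p in p_joint.items() if b)   # marginal P(B)
p_AB = p_joint[(True, True)]                         # P(A and B)

p_A_given_B = p_AB / p_B   # definition of conditional probability
p_B_given_A = p_AB / p_A

# Product rule: P(A|B) P(B) = P(A and B) = P(B|A) P(A)
assert abs(p_A_given_B * p_B - p_B_given_A * p_A) < 1e-12

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
assert abs(p_A_given_B - p_B_given_A * p_A / p_B) < 1e-12
```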

Alternative form

Bayes' theorem is often completed by noting that, according to the law of total probability,

    P(B) = P(B|A)\,P(A) + P(B|A^c)\,P(A^c),

where A^c is the complementary event of A (often called "not A").

This results in the analogous form:

    P(A|B) = \frac{P(B|A)\,P(A)}{P(B|A)\,P(A) + P(B|A^c)\,P(A^c)}.

More generally, the law of total probability states that given a partition \{A_i\} of the event space,

    P(B) = \sum_i P(B|A_i)\,P(A_i).

Thus, for any A_i in the partition, Bayes' theorem states that

    P(A_i|B) = \frac{P(B|A_i)\,P(A_i)}{\sum_j P(B|A_j)\,P(A_j)}.

Extensions

Theorems analogous to Bayes' theorem cover more than two events. For example:

    P(A|B,C) = \frac{P(A)\,P(B|A)\,P(C|A,B)}{P(B)\,P(C|B)}

This can be derived in a few steps from Bayes' theorem and the definition of conditional probability:

    P(A|B,C) = \frac{P(A,B,C)}{P(B,C)} = \frac{P(C|A,B)\,P(A,B)}{P(C|B)\,P(B)} = \frac{P(A)\,P(B|A)\,P(C|A,B)}{P(B)\,P(C|B)}.

Similarly,

    P(A|B,C) = \frac{P(B|A,C)\,P(A|C)}{P(B|C)}

can be regarded as a conditional Bayes' theorem and can be derived as follows:

    P(A|B,C) = \frac{P(A,B,C)}{P(B,C)} = \frac{P(B|A,C)\,P(A,C)}{P(B|C)\,P(C)} = \frac{P(B|A,C)\,P(A|C)}{P(B|C)}.

A general strategy is to work with a decomposition of the joint probability, and to marginalize (integrate or sum) over the variables that are not of interest. Depending on the form of the decomposition, it may be possible to prove that some integrals must be 1, and thus they fall out of the decomposition; exploiting this property can reduce the computations very substantially. A Bayesian network, for example, specifies a factorization of a joint distribution of several variables in which the conditional probability of any one variable given the remaining ones takes a particularly simple form (see Markov blanket).

For probability densities

There is also a version of Bayes' theorem for continuous distributions. It is somewhat harder to derive, since probability densities are not probabilities, so Bayes' theorem has to be established by a limit process.[4] Bayes originally used the theorem to find a continuous posterior distribution given discrete observations.[citation needed]

Bayes' theorem for probability densities is formally similar to the theorem for probabilities:

    f_X(x|Y=y) = \frac{f_Y(y|X=x)\,f_X(x)}{f_Y(y)}.

There is an analogous statement of the law of total probability, which is used in the denominator:

    f_Y(y) = \int_{-\infty}^{\infty} f_Y(y|X=\xi)\,f_X(\xi)\,d\xi.

As in the discrete case, the terms have standard names:

  • f_{X,Y}(x, y) is the joint probability density function of X and Y,
  • f_X(x|Y=y) is the posterior probability density function of X given Y = y,
  • f_Y(y|X=x) is (as a function of x) the likelihood function of Y given X = x, and
  • f_X(x) and f_Y(y) are the marginal probability density functions of X and Y respectively, where f_X(x) is the prior probability density function of X.


Bayes' theorem with continuous prior and posterior distributions

Suppose a continuous probability distribution with probability density function f_Θ is assigned to an uncertain quantity Θ. (In the conventional language of mathematical probability theory Θ would be a "random variable".) The probability that the event B will be the outcome of an experiment depends on Θ; it is P(B | Θ). As a function of Θ this is the likelihood function:

    L(\theta) = P(B \mid \Theta = \theta).

Then the posterior probability distribution of Θ, i.e. the conditional probability distribution of Θ given the observed data B, has probability density function

    f_\Theta(\theta \mid B) = \text{constant} \times L(\theta)\, f_\Theta(\theta),

where the "constant" is a normalizing constant so chosen as to make the integral of the function equal to 1, so that it is indeed a probability density function. This is the form of Bayes' theorem actually considered by Thomas Bayes.

In other words, Bayes' theorem says:

To get the posterior probability distribution, multiply the prior probability distribution by the likelihood function and then normalize.

More generally still, the new data B may be the value of an observed continuously distributed random variable X. The probability that it has any particular value is therefore 0. In such a case, the likelihood function is the value of a probability density function of X given Θ, rather than a probability of B given Θ:

    L(\theta) = f_X(x \mid \Theta = \theta).
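The "multiply the prior by the likelihood and normalize" recipe can be sketched numerically. The following is a crude grid approximation under assumptions chosen purely for illustration (not from the article): X given Θ = θ is Normal(θ, 1), the prior on Θ is uniform on [0, 2], and the observed value is x = 1.2:

```python
import math

def likelihood(x, theta):
    # Normal(theta, 1) density evaluated at the observed x
    return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2 * math.pi)

x_obs = 1.2
steps = 1000
dtheta = 2 / steps
thetas = [i * dtheta for i in range(steps + 1)]   # grid over [0, 2]
prior = [0.5] * len(thetas)                       # uniform density on [0, 2]

# Posterior ∝ prior × likelihood; divide by the approximate integral so the
# result is a proper density (it integrates to 1).
unnormalized = [p * likelihood(x_obs, t) for p, t in zip(prior, thetas)]
norm = sum(unnormalized) * dtheta
posterior = [u / norm for u in unnormalized]

assert abs(sum(posterior) * dtheta - 1) < 1e-9   # sanity check
```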

Bayes' rule

Bayes' theorem is often easier to apply, and to generalize, when expressed in terms of odds. It is then usually referred to as Bayes' rule,[citation needed] which is expressed in words as: posterior odds equals prior odds times likelihood ratio. The term Bayes factor is often used instead of likelihood ratio.

In terms of odds and likelihood ratio

Bayes' theorem can also be written neatly in terms of a likelihood ratio Λ and odds O as

    O(A|B) = O(A)\,\Lambda(A|B),

where

    O(A|B) = \frac{P(A|B)}{P(A^c|B)}

are the (posterior) odds of A given B,

    O(A) = \frac{P(A)}{P(A^c)}

are the (prior) odds of A by itself, and

    \Lambda(A|B) = \frac{P(B|A)}{P(B|A^c)}

is the likelihood ratio.

Note that the odds considered here are the odds for and against the event A: that is, the odds for A against A^c. One can more generally consider the odds for any event A_1 against any other event A_2. This results in the theorem

    O(A_1 : A_2 \mid B) = O(A_1 : A_2)\,\Lambda(A_1 : A_2 \mid B),

where O(A_1 : A_2 | B) are the posterior odds for A_1 against A_2 given B, O(A_1 : A_2) = P(A_1)/P(A_2) are the prior odds for A_1 against A_2, and

    \Lambda(A_1 : A_2 \mid B) = \frac{P(B \mid A_1)}{P(B \mid A_2)}

is the Bayes factor or likelihood ratio for A_1 against A_2 given the information B.

Sequential use of Bayes' rule

In terms of the odds between any two events A_1 and A_2, one can successively condition on events B and C, getting

    O(A_1 : A_2 \mid B, C) = O(A_1 : A_2) \cdot \frac{P(B \mid A_1)}{P(B \mid A_2)} \cdot \frac{P(C \mid A_1, B)}{P(C \mid A_2, B)}.

When A_1 and A_2 stand for a particular event A and its complement A^c respectively, we speak of the odds for and against A. The two-stage odds form of Bayes' rule then becomes

    O(A \mid B, C) = O(A) \cdot \frac{P(B \mid A)}{P(B \mid A^c)} \cdot \frac{P(C \mid A, B)}{P(C \mid A^c, B)}.

Both these versions of two-step Bayes' rule can be extended to an arbitrary number of steps.
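A minimal sketch of the odds form, assuming nothing beyond the formulas above (the helper function and the numbers are purely illustrative):

```python
def update_odds(odds, *bayes_factors):
    """Posterior odds = prior odds multiplied by one Bayes factor per observation."""
    for bf in bayes_factors:
        odds *= bf
    return odds

# Illustration: prior odds of 1:4 on some hypothesis, then two pieces of
# evidence with likelihood ratios 3 and 2 in its favour.
posterior_odds = update_odds(1 / 4, 3.0, 2.0)        # 1.5, i.e. odds of 3:2
probability = posterior_odds / (1 + posterior_odds)  # convert odds to probability
print(posterior_odds, probability)                   # 1.5 0.6
```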


A simple example of Bayes' theorem

Suppose there is a school with 60% boys and 40% girls as its students. The female students wear trousers or skirts in equal numbers; the boys all wear trousers. An observer sees a (random) student from a distance, and what the observer can see is that this student is wearing trousers. What is the probability this student is a girl? The correct answer can be computed using Bayes' theorem.

The event A is that the student observed is a girl, and the event B is that the student observed is wearing trousers. To compute P(A|B), we first need to know:

  • P(B|A), or the probability of the student wearing trousers given that the student is a girl. Since girls are as likely to wear skirts as trousers, this is 0.5.
  • P(A), or the probability that the student is a girl regardless of any other information. Since the observer sees a random student, meaning that all students have the same probability of being observed, and the fraction of girls among the students is 40%, this probability equals 0.4.
  • P(B), or the probability of a (randomly selected) student wearing trousers regardless of any other information. Since half of the girls and all of the boys are wearing trousers, this is 0.5×0.4 + 1.0×0.6 = 0.8.

Given all this information, the probability of the observer having spotted a girl given that the observed student is wearing trousers can be computed by substituting these values into the formula:

    P(A|B) = \frac{P(B|A)\,P(A)}{P(B)} = \frac{0.5 \times 0.4}{0.8} = 0.25.

Another, essentially equivalent way of obtaining the same result is as follows. Assume, for concreteness, that there are 100 students, 60 boys and 40 girls. Among these, 60 boys and 20 girls wear trousers. All together there are 80 trouser-wearers, of whom 20 are girls. Therefore the chance that a random trouser-wearer is a girl equals 20/80 = 0.25. Put in terms of Bayes' theorem: the probability that a student is a girl is 40/100 and the probability that any given girl wears trousers is 1/2, so the probability that a student is a trouser-wearing girl is their product, 20/100. But we know the student is wearing trousers, which restricts attention to the 80/100 trouser-wearers, giving a probability of (20/100)/(80/100) = 20/80.

It is often helpful when calculating conditional probabilities to create a simple table containing the number of occurrences of each outcome, or the relative frequencies of each outcome, for each of the independent variables. The table below illustrates the use of this method for the above girl-or-boy example.

           Girls   Boys   Total
Trousers      20     60      80
Skirts        20      0      20
Total         40     60     100
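The same computation, as a short Python sketch transcribing the numbers above:

```python
# Girl-or-boy example: P(girl | trousers) by Bayes' theorem.
p_girl = 0.4                         # P(A): 40% of the students are girls
p_trousers_given_girl = 0.5          # P(B|A): girls wear trousers or skirts equally
p_trousers = 0.5 * 0.4 + 1.0 * 0.6   # P(B): law of total probability, = 0.8

p_girl_given_trousers = p_trousers_given_girl * p_girl / p_trousers
print(p_girl_given_trousers)         # 0.25
```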

Application of the theorem

As a formal theorem, Bayes' theorem is valid in all common interpretations of probability. However, frequentist and Bayesian interpretations disagree about how (and to what) probabilities are assigned. In the Bayesian interpretation, probabilities are rationally coherent degrees of belief: degrees of belief in a proposition given a body of well-specified information.[2] Bayes' theorem can then be understood as specifying how an ideally rational person responds to evidence.[1] In the frequentist interpretation, probabilities are the frequencies of occurrence of random events as proportions of a whole. Though his name has become associated with subjective probability, Bayes himself interpreted the theorem in an objective sense.[5]

In 1946, Bayes' theorem was given additional prominence by a theorem (Cox's theorem)[6][7] by the physicist R.T. Cox which showed that any system of inference that fits certain requirements can be mapped onto probability.[2][8] Bayes' Theorem has since found a wide variety of applications in science and engineering.[2]

Further examples

Example 1: Drug testing

An example of the use of Bayes' theorem is the evaluation of drug test results. Suppose a certain drug test is 99% sensitive and 99% specific, that is, the test will correctly identify a drug user as testing positive 99% of the time, and will correctly identify a non-user as testing negative 99% of the time. This would seem to be a relatively accurate test, but Bayes' theorem can be used to demonstrate the relatively high probability of misclassifying non-users as users. Suppose further that a corporation decides to test its employees for drug use, and that only 0.5% of the employees actually use the drug. What is the probability that, given a positive drug test, an employee is actually a drug user? Let "D" stand for being a drug user and "N" for being a non-user, and let "+" be the event of a positive drug test. We need to know the following:

  • P(D), or the probability that the employee is a drug user, regardless of any other information. This is 0.005, since 0.5% of the employees are drug users. This is the prior probability of D.
  • P(N), or the probability that the employee is not a drug user. This is 1 − P(D), or 0.995.
  • P(+|D), or the probability that the test is positive, given that the employee is a drug user. This is 0.99, since the test is 99% sensitive.
  • P(+|N), or the probability that the test is positive, given that the employee is not a drug user. This is 0.01, since the test will produce a false positive for 1% of non-users.
  • P(+), or the probability of a positive test event, regardless of other information. This is 0.0149 or 1.49%, which is found by adding the probability that a true positive result will appear (= 99% x 0.5% = 0.495%) plus the probability that a false positive will appear (= 1% x 99.5% = 0.995%). This is the prior probability of +.

Given this information, we can compute the posterior probability P(D|+) of an employee who tested positive actually being a drug user:

    P(D|+) = \frac{P(+|D)\,P(D)}{P(+)} = \frac{0.99 \times 0.005}{0.0149} \approx 0.3322.

Despite the high sensitivity and specificity of the test, the low base rate of use means that most positive results are false positives: the probability that an employee who tests positive actually uses drugs is only about 33%, so it is in fact more likely that the employee is not a drug user. The rarer the condition for which we are testing, the greater the percentage of positive tests that will be false positives.

As a concrete example, assume the company has 1000 employees, 5 of whom (0.5%) are drug users. Since the test is 99% sensitive, virtually all of the 5 drug users test positive. However, 1% of the remaining 995 employees (≈ 10 non-users) receive false positive results. We thus have about 15 positive results in total, of which only about 5 (5/15 ≈ 33%) are genuine.

We could alternatively compute the posterior probability P(N|+) that an employee who tested positive is actually a non-user:

    P(N|+) = \frac{P(+|N)\,P(N)}{P(+)} = \frac{0.01 \times 0.995}{0.0149} \approx 0.6678.

The result shows that there is a 100%-33.22% = 66.78% chance that an employee who tested positive is a non-user, despite the specificity and sensitivity of the test being 99%.

We can get the same results by using Bayes' rule. The prior odds on an employee being a drug user are 199 to 1 against, as 0.5% = 1/200 and 99.5% = 199/200. The likelihood ratio or Bayes factor when an employee tests positive is 0.99/0.01 = 99:1 in favour of being a drug user: this is the ratio of the probability of a drug user testing positive to the probability of a non-user testing positive. The posterior odds on being a drug user are therefore 1×99 : 199×1 = 99:199, which is very close to 100:200 = 1:2. In round numbers, only one in three of those testing positive is actually a drug user.
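The drug-testing numbers, transcribed into a short sketch that computes the posterior both in probability form and in odds form:

```python
# Drug-test example: P(user | positive) via Bayes' theorem and via odds.
p_user = 0.005              # P(D): base rate of drug use among employees
p_pos_given_user = 0.99     # P(+|D): sensitivity
p_pos_given_nonuser = 0.01  # P(+|N): false-positive rate (1 - specificity)

# Law of total probability: P(+) = P(+|D) P(D) + P(+|N) P(N) = 0.0149
p_pos = p_pos_given_user * p_user + p_pos_given_nonuser * (1 - p_user)
p_user_given_pos = p_pos_given_user * p_user / p_pos
print(round(p_user_given_pos, 4))   # 0.3322 -- only about 33%

# Same result via Bayes' rule: 1:199 prior odds times a 99:1 Bayes factor.
posterior_odds = (p_user / (1 - p_user)) * (p_pos_given_user / p_pos_given_nonuser)
print(posterior_odds / (1 + posterior_odds))   # 0.3322...
```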

Example 2: Bayesian inference

Applications of Bayes' theorem often assume the philosophy underlying Bayesian probability that uncertainty and degrees of belief can be measured as probabilities.

We describe the marginal probability distribution of a variable A as the prior probability distribution or simply the "prior" distribution. The conditional distribution of A given the "data" B is the posterior probability distribution or just the "posterior" distribution.

Suppose we wish to know about the proportion r of voters in a large population who will vote "yes" in a referendum. Let n be the number of voters in a random sample (chosen with replacement, so that we have statistical independence) and let m be the number of voters in that random sample who will vote "yes". Suppose that we observe n = 10 voters and m = 7 say they will vote yes. From Bayes' theorem we can calculate the probability density function for r using

    f(r \mid n = 10, m = 7) = \frac{f(m = 7 \mid r, n = 10)\, f(r)}{\int_0^1 f(m = 7 \mid \rho, n = 10)\, f(\rho)\, d\rho}.

From this we see that, given the prior probability density function f(r) and the likelihood function L(r) = f(m = 7 | r, n = 10), we can compute the posterior probability density function f(r | n = 10, m = 7).

The prior probability density function f(r) summarizes what we know about the distribution of r in the absence of any observation. We provisionally assume in this case that the prior distribution of r is uniform over the interval [0, 1]; that is, f(r) = 1. If some additional background information is found, we should modify the prior accordingly, but before we have any observations, all outcomes are equally likely.

Under the assumption of random sampling, selecting voters is just like drawing random balls from an urn. The likelihood function L(r) = P(m = 7 | r, n = 10) for such a problem is just the probability of 7 successes in 10 trials for a binomial distribution:

    L(r) = \binom{10}{7}\, r^7 (1-r)^3.

As with the prior, the likelihood is open to revision; more complex assumptions will yield more complex likelihood functions. Maintaining the current assumptions, we compute the normalizing factor

    \int_0^1 \binom{10}{7}\, r^7 (1-r)^3 \, dr = \frac{1}{11},

and the posterior distribution for r is then

    f(r \mid m = 7, n = 10) = 11 \binom{10}{7}\, r^7 (1-r)^3 = 1320\, r^7 (1-r)^3

for 0 ≤ r ≤ 1.

One might be interested in the probability that more than half the voters will vote "yes". The prior probability that more than half the voters will vote "yes" is 1/2, by the symmetry of the uniform distribution. In comparison, the posterior probability that more than half the voters will vote "yes", i.e., the conditional probability given the outcome of the opinion poll – that seven of the 10 voters questioned will vote "yes" – is

    P(r > 1/2 \mid m = 7, n = 10) = \int_{1/2}^{1} 1320\, r^7 (1-r)^3 \, dr \approx 0.887,

which is about an "89% chance".
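The same posterior probability can be checked numerically. The sketch below integrates the posterior density 1320 r^7 (1−r)^3 over (1/2, 1] with a simple midpoint rule (the grid size is arbitrary):

```python
from math import comb

m, n = 7, 10   # 7 "yes" answers among 10 sampled voters

def likelihood(r):
    # Binomial probability of m successes in n trials with success probability r
    return comb(n, m) * r**m * (1 - r)**(n - m)

steps = 100_000
dr = 1 / steps
grid = [(i + 0.5) * dr for i in range(steps)]   # midpoints of (0, 1)

norm = sum(likelihood(r) for r in grid) * dr    # normalizing factor, ≈ 1/11
p_majority = sum(likelihood(r) for r in grid if r > 0.5) * dr / norm
print(round(p_majority, 3))   # ≈ 0.887, the "89% chance" quoted above
```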

Example 3: The Monty Hall problem

We are presented with three doors — red, green, and blue — from which to choose, one of which has a prize hidden behind it. Suppose we choose the red door. The host of the contest, who knows the location of the prize and will not open that door, opens the blue door and reveals that there is no prize behind it. He then asks if we wish to change from our initial choice of red. Will changing to green now improve our chances of winning the prize?

One may think, with two doors left unopened, that one has a 50:50 chance with either one, so there is no point for or against changing doors. However, this is not true.

Let A_r, A_g, and A_b denote the events that the prize is behind the red, green, and blue door, respectively. It is commonly assumed the prize is placed randomly (alternatively: our initial information concerning the location of the prize is completely neutral), hence

    P(A_r) = P(A_g) = P(A_b) = \frac{1}{3}.

Let us call "the host opens the blue door" proposition B. It is assumed the host choose at random when he has a choice, hence B must have probability 1/2.

  • When the prize is behind the red door, the host is free to open the green or the blue door. If he chooses between them uniformly at random (or if we have no information at all as to his choice), then P(B | A_r) = 1/2.
  • When the prize is behind the green door, the host must open the blue door. Thus P(B | A_g) = 1.
  • When the prize is behind the blue door, the host must open the green door. Thus P(B | A_b) = 0.

Thus, under the condition that we have chosen the red door, we get:

    P(A_r \mid B) = \frac{P(B \mid A_r)\,P(A_r)}{P(B)} = \frac{(1/2)(1/3)}{1/2} = \frac{1}{3},

    P(A_g \mid B) = \frac{P(B \mid A_g)\,P(A_g)}{P(B)} = \frac{(1)(1/3)}{1/2} = \frac{2}{3},

    P(A_b \mid B) = \frac{P(B \mid A_b)\,P(A_b)}{P(B)} = \frac{(0)(1/3)}{1/2} = 0.

So, we should change from the red to the green door for the higher probability of winning.

All of this assumes that whenever the host has a choice of two doors to open (that is, when the prize is behind the red door we chose, so that the rooms behind both the green and blue doors are empty), he opens either one with equal probability. If the host always opens the blue door in that situation, it makes no difference what we do. If the host never opens the blue door when the rooms behind both the green and blue doors are empty, then we will certainly get the prize by changing from red to green. Under these two alternative assumptions, "always" and "never", the unconditional probability that the host opens the blue door is 2/3 and 1/3 respectively, rather than 1/2 as above.

We can also reach these conclusions using Bayes' rule. Consider the odds on the car being behind the door initially chosen by the player (the red door). The prior odds on that door hiding the car are 2:1 against. If the car is behind that door, the host is equally likely to open either other door. If the car is not behind that door, it is equally likely to be behind either other door, so unconditionally the host is again equally likely to open either other door. Thus whether or not the car is behind the initially chosen (red) door, the host opens the blue door with probability 1/2. The Bayes factor is 0.5 : 0.5 = 1 : 1, so the posterior odds on the car being behind the initially chosen door remain unaltered at 2:1 against. The player had better switch.

This route shows that the identity of the door opened by the host (whether it is blue or green) does not change the chances that the car is behind the red door. The chances that it is behind the blue or green door, however, do change dramatically: one drops to zero, the other jumps to 2/3.
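The three posterior probabilities can be checked by direct computation under the stated assumptions (prize placed uniformly at random; the host never opens the chosen red door, never reveals the prize, and chooses evenly when two doors are available):

```python
from fractions import Fraction

# P(host opens blue | prize behind each door), with red as our initial choice.
half, one, zero = Fraction(1, 2), Fraction(1), Fraction(0)
p_blue_given_prize = {"red": half, "green": one, "blue": zero}

prior = Fraction(1, 3)                                        # uniform prize placement
p_blue = sum(p * prior for p in p_blue_given_prize.values())  # = 1/2

for door, p in p_blue_given_prize.items():
    posterior = p * prior / p_blue   # Bayes' theorem, door by door
    print(door, posterior)           # red 1/3, green 2/3, blue 0
```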

Historical remarks

Bayes' theorem was named after the Reverend Thomas Bayes (1702–61), who studied how to compute a distribution for the probability parameter of a binomial distribution (in modern terminology). His friend Richard Price edited and presented this work in 1763, after Bayes' death, as An Essay towards solving a Problem in the Doctrine of Chances.[9] The French mathematician Pierre-Simon Laplace reproduced and extended Bayes' results in 1774, apparently quite unaware of Bayes' work.[10]

Bayes presented his work as the solution to a problem:

Given the number of times in which an unknown event has happened and failed [... Find] the chance that the probability of its happening in a single trial lies somewhere between any two degrees of probability that can be named.[9]

Bayes gave an example of a man trying to guess the ratio of "blanks" and "prizes" at a lottery. So far the man has watched the lottery draw 10 blanks and one prize. Given these data, Bayes showed in detail how to compute the probability that the ratio of blanks to prizes is between 9:1 and 11:1 (the probability is low, about 7.7 percent). Bayes went on to describe that computation after the man has watched the lottery draw 20 blanks and two prizes, 40 blanks and four prizes, and so on. He ends with the lottery having drawn 10,000 blanks and 1,000 prizes, where the resulting probability that the ratio of blanks to prizes is between 9:1 and 11:1 is quite high, about 0.97.[9]

One of Bayes's results (Proposition five) gives a simple description of conditional probability, and it shows that conditional probability can be expressed independently of the order of events:

"If there be two subsequent events, the probability of the second b/N and the probability of both together P/N, and it being first discovered that the second event has also happened, from hence I guess that the first event has also happened, the probability I am right is P/b."

In modern terms, P/b = P(A|B), where A and B are the first and second subsequent events, i.e. the conditional probability of the first event given that the second event has happened. The expression says nothing about the order of occurrence: it measures correlation, not causation.

Bayes's preliminary results (in particular, Propositions three, four, and five) imply the truth of the theorem that is named for him, but it does not appear that Bayes emphasized or focused on that finding.

Bayes' main result (Proposition nine in the essay) is the following in modern terms: assume a uniform prior distribution of the binomial parameter p. After observing m successes and n failures, the probability that p lies between two values a and b is

    P(a < p < b \mid m, n) = \frac{\int_a^b \binom{m+n}{m}\, p^m (1-p)^n \, dp}{\int_0^1 \binom{m+n}{m}\, p^m (1-p)^n \, dp}.

The proposition is "Bayesian" in its presentation as a probability about the parameter p (which is itself a probability in this case). One may compute probabilities for experimental outcomes, of course, but one may also do it for a parameter that governs an experiment, and the same algebra and calculations are used to compute both.

Bayes stated his question in a way that may make assigning a probability distribution to a parameter palatable to a "frequentist". He supposed that a billiard ball is thrown at random onto a billiard table, and considered further billiard balls that fall above or below the first ball with probabilities p and q = (1 - p).

Stephen Fienberg describes the evolution from "inverse probability" at the time of Bayes and Laplace, a term still used by Harold Jeffreys (1939), to "Bayesian" in the 1950s.[11] Ironically, Ronald A. Fisher introduced the "Bayesian" label in a derogatory sense.[citation needed] It is unclear whether Bayes was Bayesian in the modern sense, that is, whether he was interested in inference or merely in probability; the essay of 1763 is more of a paper on probability.

Stephen Stigler suggested in 1983 that Bayes' theorem was discovered by Nicholas Saunderson some time before Bayes.[12] Edwards (1986) disputed that interpretation.[13]

Richard Price and the Existence of a Deity

Richard Price discovered Bayes's essay and its now-famous theorem in Bayes's papers after Bayes' death. He believed that Bayes' theorem helped prove the existence of God ("the Deity") and wrote the following in his introduction to the Essay:

The purpose I mean is, to shew what reason we have for believing that there are in the constitution of things fixt laws according to which things happen, and that, therefore, the frame of the world must be the effect of the wisdom and power of an intelligent cause; and thus to confirm the argument taken from final causes for the existence of the Deity. It will be easy to see that the converse problem solved in this essay is more directly applicable to this purpose; for it shews us, with distinctness and precision, in every case of any particular order or recurrency of events, what reason there is to think that such recurrency or order is derived from stable causes or regulations in nature, and not from any irregularities of chance. --Philosophical Transactions of the Royal Society of London, 1763.[9]

In modern terms this is an instance of the teleological argument.


References

  1. ^ a b Howson, Colin; Urbach, Peter (1993). Scientific Reasoning: The Bayesian Approach. Open Court. ISBN 9780812692341.
  2. ^ a b c d Jaynes, Edwin T. (2003). Probability theory: the logic of science. Cambridge University Press. ISBN 9780521592710.
  3. ^ Entry by Charles Sanders Peirce for the Century Dictionary [full citation needed].
  4. ^ Papoulis, Athanasios (1984), Probability, Random Variables, and Stochastic Processes, Second edition. New York: McGraw-Hill. (Section 7.3)
  5. ^ Earman, John (1992). "Bayes' Bayesianism". Bayes Or Bust?: A Critical Examination of Bayesian Confirmation Theory. MIT Press. ISBN 9780262050463.
  6. ^ R.T. Cox (1946), "Probability, Frequency, and Reasonable Expectation," Am. Jour. Phys., 14, 1–13
  7. ^ R.T. Cox (1961) The Algebra of Probable Inference, Johns Hopkins University Press, Baltimore, MD
  8. ^ Baron, Jonathan (1994). Thinking and Deciding (2 ed.). Oxford University Press. pp. 209–210. ISBN 0521437326.
  9. ^ a b c d Bayes, Thomas; Price, Richard (1763). "An Essay towards solving a Problem in the Doctrine of Chance. By the late Rev. Mr. Bayes, communicated by Mr. Price, in a letter to John Canton, M. A. and F. R. S." (PDF). Philosophical Transactions of the Royal Society of London. 53: 370–418. doi:10.1098/rstl.1763.0053.
  10. ^ Daston, Lorraine (1988). Classical Probability in the Enlightenment. Princeton Univ Press. p. 268. ISBN 0-691-08497-1.
  11. ^ Fienberg, Stephen E. (2006). When Did Bayesian Inference Become “Bayesian”?
  12. ^ Stephen M. Stigler (1983), "Who Discovered Bayes' Theorem?" The American Statistician 37(4):290–296.
  13. ^ A. W. F. Edwards (1986), "Is the Reference in Hartley (1749) to Bayesian Inference?", The American Statistician 40(2):109–110


Commentaries

  • G. A. Barnard (1958) "Studies in the History of Probability and Statistics: IX. Thomas Bayes' Essay Towards Solving a Problem in the Doctrine of Chances", Biometrika 45:293–295. (biographical remarks)
  • Daniel Covarrubias. "An Essay Towards Solving a Problem in the Doctrine of Chances". (an outline and exposition of Bayes' essay)
  • Stephen M. Stigler (1982). "Thomas Bayes' Bayesian Inference," Journal of the Royal Statistical Society, Series A, 145:250–258. (Stigler argues for a revised interpretation of the essay; recommended)
  • Isaac Todhunter (1865). A History of the Mathematical Theory of Probability from the time of Pascal to that of Laplace, Macmillan. Reprinted 1949, 1956 by Chelsea and 2001 by Thoemmes.
  • Eliezer S. Yudkowsky (2003). An Intuitive Explanation of Bayesian Reasoning (includes Java applets and biography)
