
Posterior probability

In Bayesian statistics, the posterior probability of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence is taken into account. Similarly, the posterior probability distribution is the distribution of an unknown quantity, treated as a random variable, conditional on the evidence obtained from an experiment or survey.

Definition

Let us have an a priori belief that the probability distribution function of an unknown quantity x is p(x), and an observation y with the likelihood p(y | x); then the posterior probability is defined as

$$p(x \mid y) = \frac{p(y \mid x)\, p(x)}{p(y)}.$$

The posterior probability can be written in the memorable form as

$$\text{Posterior probability} \propto \text{Likelihood} \times \text{Prior probability}.$$
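
This formula is simply the definition of conditional probability combined with the product rule; spelling the step out for completeness:

$$p(x \mid y) = \frac{p(x, y)}{p(y)} = \frac{p(y \mid x)\, p(x)}{p(y)},$$

where the denominator $p(y) = \int p(y \mid x)\, p(x)\, dx$ plays the role of the normalizing constant.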

Example

Suppose there is a mixed school in which 60% of the students are boys and 40% are girls. The girls wear trousers or skirts in equal numbers; the boys all wear trousers. An observer sees a (random) student from a distance; all the observer can see is that this student is wearing trousers. What is the probability that this student is a girl? The correct answer can be computed using Bayes' theorem.

The event A is that the student observed is a girl, and the event B is that the student observed is wearing trousers. To compute P(A|B), we first need to know:

  • P(A), or the probability that the student is a girl regardless of any other information. Since the observer sees a random student, meaning that all students have the same probability of being observed, and the percentage of girls among the students is 40%, this probability equals 0.4.
  • P(A'), or the probability that the student is a boy regardless of any other information (A' is the complementary event to A). This is 60%, or 0.6.
  • P(B|A), or the probability of the student wearing trousers given that the student is a girl. As they are as likely to wear skirts as trousers, this is 0.5.
  • P(B|A'), or the probability of the student wearing trousers given that the student is a boy. This is given as 1.
  • P(B), or the probability of a (randomly selected) student wearing trousers regardless of any other information. Since P(B) = P(B|A)P(A) + P(B|A')P(A') (via the law of total probability), this is 0.5×0.4 + 1×0.6 = 0.8.

Given all this information, the probability of the observer having spotted a girl given that the observed student is wearing trousers can be computed by substituting these values into Bayes' formula:

$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)} = \frac{0.5 \times 0.4}{0.8} = 0.25.$$
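
The same arithmetic as a short Python sketch (the variable names are chosen here purely for illustration):

    # School example: P(girl | trousers) via Bayes' theorem.
    p_girl = 0.4                 # P(A): prior probability the student is a girl
    p_boy = 0.6                  # P(A'): prior probability the student is a boy
    p_trousers_given_girl = 0.5  # P(B|A): girls wear trousers half the time
    p_trousers_given_boy = 1.0   # P(B|A'): boys always wear trousers

    # Law of total probability: P(B) = P(B|A)P(A) + P(B|A')P(A')
    p_trousers = p_trousers_given_girl * p_girl + p_trousers_given_boy * p_boy

    # Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
    p_girl_given_trousers = p_trousers_given_girl * p_girl / p_trousers
    print(round(p_girl_given_trousers, 2))  # 0.25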

Calculation

The posterior probability distribution of one random variable given the value of another can be calculated with Bayes' theorem by multiplying the prior probability distribution by the likelihood function, and then dividing by the normalizing constant, as follows:

$$f_{X \mid Y=y}(x) = \frac{f_X(x)\, L_{X \mid Y=y}(x)}{\int_{-\infty}^{\infty} f_X(u)\, L_{X \mid Y=y}(u)\, du}$$

gives the posterior probability density function for a random variable X given the data Y = y, where

  • $f_X(x)$ is the prior density of X,
  • $L_{X \mid Y=y}(x) = f_{Y \mid X=x}(y)$ is the likelihood function as a function of x,
  • $\int_{-\infty}^{\infty} f_X(u)\, L_{X \mid Y=y}(u)\, du$ is the normalizing constant, and
  • $f_{X \mid Y=y}(x)$ is the posterior density of X given the data Y = y.
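
As a concrete illustration of this recipe, the sketch below approximates a posterior density on a grid. The standard-normal prior, the unit-variance normal likelihood, and the observed value are assumptions made here purely for illustration:

    # Sketch: grid approximation of posterior = prior × likelihood / constant.
    import math

    def normal_pdf(x, mean, sd):
        return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    y = 1.5                                      # hypothetical observed data
    xs = [i * 0.01 - 5.0 for i in range(1001)]   # grid over [-5, 5], step 0.01

    prior = [normal_pdf(x, 0.0, 1.0) for x in xs]       # f_X(x): N(0, 1) prior
    likelihood = [normal_pdf(y, x, 1.0) for x in xs]    # L(x) = f_{Y|X=x}(y)

    unnormalized = [p * l for p, l in zip(prior, likelihood)]
    dx = 0.01
    normalizer = sum(unnormalized) * dx          # numerical integral (Riemann sum)
    posterior = [u / normalizer for u in unnormalized]

    # With these choices the posterior is analytically N(y/2, 1/2), mode y/2.
    mode = xs[max(range(len(xs)), key=lambda i: posterior[i])]
    print(round(mode, 2))  # 0.75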

Classification

In classification, posterior probabilities reflect the uncertainty of assigning an observation to a particular class (see also class-membership probabilities). While statistical classification methods by definition generate posterior probabilities, machine learning methods usually supply membership values that do not carry any probabilistic confidence. It is desirable to transform or rescale membership values into class-membership probabilities, since these are comparable across classifiers and, additionally, easier to use in post-processing; one common rescaling is sketched below.
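
The article does not prescribe a particular rescaling; the softmax transform used in the following sketch is one common choice, assumed here only for illustration:

    # Sketch: rescale raw membership scores so they are non-negative and
    # sum to one (softmax; chosen here as an illustration, not prescribed above).
    import math

    def softmax(scores):
        m = max(scores)                           # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    raw_scores = [2.0, 0.5, -1.0]  # hypothetical classifier outputs for three classes
    print(softmax(raw_scores))     # ≈ [0.79, 0.18, 0.04]

Note that such a rescaling only makes the scores comparable and sum to one; it does not by itself guarantee well-calibrated class-membership probabilities.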
