Conditional probability


In probability theory, a conditional probability measures the probability of an event given that (by assumption, presumption, assertion or evidence) another event has occurred.[1] If the events are A and B, this is said to be "the probability of A given B", commonly denoted by P(A|B), or sometimes P_B(A). When both A and B are categorical variables, a conditional probability table is typically used to represent the conditional probability.

The concept of conditional probability is one of the most fundamental and important concepts in probability theory.[2] But conditional probabilities can be quite slippery and require careful interpretation.[3] In statistical inference, the conditional probability is an update of the probability of an event based on new information.[3] Incorporating the new information can be done as follows:[1]

  • We start with a probability measure on a sample space, say (X, P).
  • Let the event of interest be A.
  • If we wish to measure the probability of the event A knowing that event B has occurred (or will occur), we need to examine event A as it is restricted to event B.
  • Since both A and B are events in the same sample space, A restricted to B is A \cap B.
  • Whenever P(B) > 0 under the original probability measure on (X, P), B becomes the sure event in the restricted space (B, P_B), so P_B(B) must equal 1.
  • To obtain P(A|B) = P_B(A) with P(B|B) = 1, we re-scale P(A \cap B) by dividing by P(B).
  • This results in P(A|B) = P(A \cap B)/P(B) whenever P(B) > 0; when P(B) = 0 the ratio is left undefined (see the definition with a σ-algebra below).
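
The construction above can be illustrated with a minimal sketch in Python, assuming an arbitrarily chosen finite sample space and weights (the specific numbers are illustrative only):

    from fractions import Fraction

    # An (arbitrary, illustrative) probability measure P on a finite sample space X.
    P = {
        "w1": Fraction(1, 4),
        "w2": Fraction(1, 4),
        "w3": Fraction(1, 3),
        "w4": Fraction(1, 6),
    }
    assert sum(P.values()) == 1           # P is a probability measure on X

    A = {"w1", "w3"}                      # event of interest
    B = {"w2", "w3", "w4"}                # conditioning event, P(B) > 0

    def prob(event):
        """Probability of an event under the original measure P."""
        return sum(P[w] for w in event)

    # Restrict to B and re-scale by P(B): this is the conditional measure P_B.
    P_B = {w: (P[w] / prob(B) if w in B else Fraction(0)) for w in P}

    assert sum(P_B.values()) == 1         # P_B(B) = 1: B is the sure event
    assert sum(P_B[w] for w in A) == prob(A & B) / prob(B)   # P(A|B) = P(A ∩ B)/P(B)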

Note: This approach results in a probability measure that is consistent with the original probability measure and satisfies all the Kolmogorov Axioms.

Note: The phraseology "evidence" or "information" is generally used in the Bayesian interpretation of probability. The conditioning event is interpreted as evidence for the conditioned event. That is, P(A) is the probability of A before accounting for evidence E, and P(A|E) is the probability of A after having accounted for evidence E, i.e. after having updated P(A). The updated probability is also meaningful under the frequentist interpretation, as the long-run relative frequency of A among trials in which E occurs.

Note: P(A|B) (the conditional probability of A given B) may or may not be equal to P(A) (the unconditional probability of A). If P(A|B) = P(A), A and B are said to be independent.

Definition

Illustration of conditional probabilities with an Euler diagram. The unconditional probability P(A) = 0.52, whereas the conditional probabilities are P(A|B1) = 1, P(A|B2) ≈ 0.75 and P(A|B3) = 0.
On a tree diagram, branch probabilities are conditional on the event associated with the parent node.
Venn Pie Chart describing conditional probabilities

Conditioning on an event

Kolmogorov definition

Given two events A and B from the sigma-field of a probability space, with P(B) > 0, the conditional probability of A given B is defined as the quotient of the joint probability of A and B and the probability of B:

P(A|B) = \frac{P(A \cap B)}{P(B)}

This may be visualized as restricting the sample space to B. The logic behind this equation is that if the outcomes are restricted to B, this set serves as the new sample space.

Note that this is a definition, not a theoretical result: the quantity P(A \cap B)/P(B) is simply denoted P(A|B) and called the conditional probability of A given B.
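
As an informal check of this definition, a conditional probability can be estimated by simulation: among simulated outcomes, keep only those in B (the restricted sample space) and take the fraction that also lie in A. A minimal Python sketch, using two fair coin flips as an illustrative choice of events:

    import random

    random.seed(0)
    trials = 200_000
    count_B = count_A_and_B = 0

    for _ in range(trials):
        first = random.random() < 0.5     # first fair coin flip is a head
        second = random.random() < 0.5    # second fair coin flip is a head
        if first or second:               # event B: at least one head
            count_B += 1
            if first:                     # event A: the first flip is a head
                count_A_and_B += 1

    # Relative frequency of A within the restricted sample space B;
    # the exact value is P(A ∩ B)/P(B) = (1/2)/(3/4) = 2/3.
    print(count_A_and_B / count_B)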

As an axiom of probability

Some authors, such as De Finetti, prefer to introduce conditional probability as an axiom of probability:

P(A \cap B) = P(A|B)P(B)

Although mathematically equivalent, this may be preferred philosophically; under major probability interpretations such as the subjective theory, conditional probability is considered a primitive entity. Further, this "multiplication axiom" introduces a symmetry with the summation axiom for mutually exclusive events:[4]

P(A \cup B) = P(A) + P(B) - \cancelto{0}{P(A \cap B)}
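
The multiplication form is convenient for sequential experiments. A small illustrative sketch in Python (the deck-of-cards setting is an assumption for illustration, not taken from the cited sources):

    from fractions import Fraction

    # P(both cards are aces) = P(A2 | A1) P(A1) for two draws without replacement
    # from a standard 52-card deck.
    p_first_ace = Fraction(4, 52)               # P(A1)
    p_second_given_first = Fraction(3, 51)      # P(A2 | A1): one ace already removed
    p_both_aces = p_second_given_first * p_first_ace

    print(p_both_aces)                          # 1/221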

Definition with σ-algebra

If P(B) = 0, then the simple definition of P(A|B) is undefined. However, it is possible to define a conditional probability with respect to a σ-algebra of such events (such as those arising from a continuous random variable).

For example, if X and Y are non-degenerate and jointly continuous random variables with density f_{X,Y}(x,y), then, if B has positive measure,


P(X \in A \mid Y \in B) =
\frac{\int_{y\in B}\int_{x\in A} f_{X,Y}(x,y)\,dx\,dy}{\int_{y\in B}\int_{x\in\Omega} f_{X,Y}(x,y)\,dx\,dy} .

The case where B has zero measure can be handled directly only when B = {y_0} is a single point, in which case


P(X \in A \mid Y = y_0) = \frac{\int_{x\in A} f_{X,Y}(x,y_0)\,dx}{\int_{x\in\Omega} f_{X,Y}(x,y_0)\,dx} .

If A has measure zero then the conditional probability is zero. An indication of why the more general case of zero measure cannot be dealt with in a similar way can be seen by noting that the limit, as all δyi approach zero, of


P(X \in A \mid Y \in \cup_i[y_i,y_i+\delta y_i]) \approxeq
\frac{\sum_{i} \int_{x\in A} f_{X,Y}(x,y_i)\,dx\,\delta y_i}{\sum_{i}\int_{x\in\Omega} f_{X,Y}(x,y_i) \,dx\, \delta y_i} ,

depends on their relationship as they approach zero. See conditional expectation for more information.
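
A numerical sketch of the single-point case in Python with NumPy; the joint density f(x, y) = x + y on the unit square is an arbitrary illustrative choice:

    import numpy as np

    # Illustrative joint density on the unit square: f(x, y) = x + y (it integrates to 1).
    def f(x, y):
        return x + y

    x = np.linspace(0.0, 1.0, 2001)
    y0 = 0.3                                   # conditioning value, Y = y0
    in_A = x <= 0.5                            # event A = {X <= 0.5}

    # P(X in A | Y = y0) = ∫_A f(x, y0) dx / ∫_Ω f(x, y0) dx
    numerator = np.trapz(np.where(in_A, f(x, y0), 0.0), x)
    denominator = np.trapz(f(x, y0), x)
    print(numerator / denominator)             # ≈ 0.34375 = (0.125 + 0.5·y0)/(0.5 + y0)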

Conditioning on a random variable

Conditioning on an event may be generalized to conditioning on a random variable. Let X be a discrete random variable taking values x_n, and let A be an event. The conditional probability of A given X is defined as the random variable:

P(A|X) \text{  taking on the value } P(A\mid X=x_n) \text{  if } X=x_n

More formally:

P(A|X)(\omega)=P(A\mid X=X(\omega)) .

The conditional probability P(A|X) is a function of X, e.g., if the function g is defined as

g(x)= P(A\mid X=x),

then

P(A|X) = g \circ X .

Note that P(A|X) and X are now both random variables. From the law of total probability, the expected value of P(A|X) is equal to the unconditional probability of A.
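
A minimal sketch of this fact for a discrete case (Python; the two-dice setup is illustrative only):

    from fractions import Fraction

    # X is the value of a fair die; A is the event that a second, independent fair die
    # shows a strictly larger value.  Then g(x) = P(A | X = x) = (6 - x)/6.
    def g(x):
        return Fraction(6 - x, 6)

    p_X = {x: Fraction(1, 6) for x in range(1, 7)}     # distribution of X

    # Law of total probability: E[P(A|X)] = Σ_x g(x) P(X = x) = P(A).
    expected = sum(g(x) * p for x, p in p_X.items())
    print(expected)                                    # 5/12

    # Direct check by enumerating both dice.
    direct = Fraction(sum(1 for x in range(1, 7) for y in range(1, 7) if y > x), 36)
    print(direct)                                      # 5/12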

Example

Suppose that somebody secretly rolls two fair six-sided dice, and we must predict the outcome.

  • Let A be the value rolled on die 1
  • Let B be the value rolled on die 2

What is the probability that A = 2? Table 1 shows the sample space. A = 2 in 6 of the 36 outcomes, thus P(A=2) = 6/36 = 1/6.

Table 1 (each cell shows the sum A + B)
 +     B=1   B=2   B=3   B=4   B=5   B=6
A=1     2     3     4     5     6     7
A=2     3     4     5     6     7     8
A=3     4     5     6     7     8     9
A=4     5     6     7     8     9    10
A=5     6     7     8     9    10    11
A=6     7     8     9    10    11    12

Suppose it is revealed that A+B ≤ 5. Table 2 shows that A+B ≤ 5 for 10 outcomes. For 3 of these, A = 2. So the probability that A = 2 given that A+B ≤ 5 is P(A=2 | A+B ≤ 5) = 3/10 = 0.3.

Table 2 (each cell shows the sum A + B; outcomes with A + B ≤ 5 are marked with *)
 +     B=1   B=2   B=3   B=4   B=5   B=6
A=1     2*    3*    4*    5*    6     7
A=2     3*    4*    5*    6     7     8
A=3     4*    5*    6     7     8     9
A=4     5*    6     7     8     9    10
A=5     6     7     8     9    10    11
A=6     7     8     9    10    11    12
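
These figures can be reproduced by direct enumeration of the 36 outcomes; a minimal Python sketch:

    from fractions import Fraction

    outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]   # 36 equally likely outcomes

    p_A = Fraction(sum(1 for a, b in outcomes if a == 2), len(outcomes))
    print(p_A)                                                      # 1/6

    restricted = [(a, b) for a, b in outcomes if a + b <= 5]        # the 10 outcomes with A + B <= 5
    p_A_given = Fraction(sum(1 for a, b in restricted if a == 2), len(restricted))
    print(p_A_given)                                                # 3/10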

Statistical independence

Events A and B are defined to be statistically independent if:

P(A \cap B) \ = \ P(A) P(B)
\Leftrightarrow P(A|B) \ = \ P(A)
\Leftrightarrow P(B|A) \ = \ P(B).

That is, the occurrence of A does not affect the probability of B, and vice versa. Although the derived forms may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined when P(A) or P(B) is 0; moreover, the preferred definition is symmetric in A and B.
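
As a small illustrative check in Python, the events "die 1 shows a 2" and "the two dice sum to 7" turn out to be independent, while "die 1 shows a 2" and "the sum is at most 5" (from the example above) are not:

    from fractions import Fraction

    outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]

    def prob(pred):
        return Fraction(sum(1 for o in outcomes if pred(o)), len(outcomes))

    A = lambda o: o[0] == 2             # die 1 shows a 2
    B = lambda o: o[0] + o[1] == 7      # the two dice sum to 7
    C = lambda o: o[0] + o[1] <= 5      # the two dice sum to at most 5

    print(prob(lambda o: A(o) and B(o)) == prob(A) * prob(B))   # True: A and B are independent
    print(prob(lambda o: A(o) and C(o)) == prob(A) * prob(C))   # False: A and C are not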

Common fallacies

These fallacies should not be confused with Robert K. Shope's 1978 "conditional fallacy", which deals with counterfactual examples that beg the question.

Assuming conditional probability is of similar size to its inverse

A geometric visualisation of Bayes' theorem. In the table, the values ax, ay, bx and by give the relative weights of each corresponding condition and case. The figures denote the cells of the table involved in each metric, the probability being the fraction of each figure that is shaded. This shows that P(A|X) P(X) = P(X|A) P(A) i.e. P(A|X) = P(X|A) P(A) / P(X). Similar reasoning can be used to show that P(B|X) = P(X|B) P(B) / P(X) etc.

In general, it cannot be assumed that P(A|B) ≈ P(B|A). This can be an insidious error, even for those who are highly conversant with statistics.[5] The relationship between P(A|B) and P(B|A) is given by Bayes' theorem:

P(B|A) = \frac{P(A|B) P(B)}{P(A)}.

That is, P(A|B) ≈ P(B|A) only if P(B)/P(A) ≈ 1, or equivalently, P(A) ≈ P(B).

Alternatively, noting that A \cap B = B \cap A and applying the definition of conditional probability in both orders:

P(A \cap B) = P(A|B)P(B)= P(B \cap A) = P(B|A)P(A)

Rearranging gives the result.
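
A standard numerical illustration of the gap is a rare-condition screening test; in the Python sketch below, the prevalence and accuracy figures are assumptions chosen only for illustration:

    # Hypothetical screening test; all figures below are assumptions for illustration.
    p_disease = 0.01              # P(D): prevalence
    p_pos_given_disease = 0.99    # P(+|D): sensitivity
    p_pos_given_healthy = 0.05    # P(+|not D): false-positive rate

    # Law of total probability for P(+), then Bayes' theorem for P(D|+).
    p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

    print(p_pos_given_disease)    # 0.99
    print(p_disease_given_pos)    # ≈ 0.167, far smaller than P(+|D)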

Assuming marginal and conditional probabilities are of similar size

In general, it cannot be assumed that P(A) ≈ P(A|B). These probabilities are linked through the formula for total probability:

P(A) \, = \, \sum_n P(A \cap B_n) \, = \, \sum_n P(A|B_n)P(B_n).

where the events (B_n) form a countable partition of the sample space (pairwise disjoint, with \cup_n B_n = \Omega). This fallacy may arise through selection bias.[6] For example, in the context of a medical claim, let SC be the event that a sequela (chronic disease) S occurs as a consequence of circumstance (acute condition) C. Let H be the event that an individual seeks medical help. Suppose that in most cases C does not cause S, so P(SC) is low. Suppose also that medical attention is sought only if S has occurred due to C. From experience of patients, a doctor may therefore erroneously conclude that P(SC) is high. The actual probability observed by the doctor is P(SC|H).
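
The total-probability link can be checked on the dice example above by partitioning on the value of die 2 (a minimal Python sketch):

    from fractions import Fraction

    # A = {the two dice sum to at most 5}; partition B_n = {die 2 shows n}, n = 1..6.
    def p_A_given_Bn(n):
        # Given die 2 = n, the sum is at most 5 exactly when die 1 <= 5 - n.
        return Fraction(max(0, 5 - n), 6)

    p_Bn = Fraction(1, 6)                                   # each B_n has probability 1/6
    p_A = sum(p_A_given_Bn(n) * p_Bn for n in range(1, 7))
    print(p_A)                                              # 5/18 = 10/36, the 10 outcomes of Table 2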

Over- or under-weighting priors

Not taking prior probability into account, partially or completely, is called base rate neglect. The reverse, insufficient adjustment from the prior probability, is called conservatism.

Formal derivation

Formally, P(A|B) is defined as the probability of A according to a new probability function on the sample space, such that outcomes not in B have probability 0 and the new function is consistent with the original probability measure.[7][8]

Let Ω be a sample space with elementary events {ω}. Suppose we are told the event B ⊆ Ω has occurred. A new probability distribution (denoted by the conditional notation) is to be assigned on {ω} to reflect this. For events in B, it is reasonable to assume that the relative magnitudes of the probabilities will be preserved. For some constant scale factor α, the new distribution will therefore satisfy:

\text{1. }\omega \in B : P(\omega|B) = \alpha P(\omega)
\text{2. }\omega \notin B : P(\omega|B) = 0
\text{3. }\sum_{\omega \in \Omega} {P(\omega|B)} = 1.

Substituting 1 and 2 into 3 to select α:


\begin{align}
\sum_{\omega \in \Omega} {P(\omega | B)} &= \sum_{\omega \in B} {\alpha P(\omega)} + \cancelto{0}{\sum_{\omega \notin B} 0} \\
&= \alpha \sum_{\omega \in B} {P(\omega)} \\
&= \alpha \cdot P(B) \\
\end{align}
\implies \alpha = \frac{1}{P(B)}

So the new probability distribution is

\text{1. }\omega \in B : P(\omega|B) = \frac{P(\omega)}{P(B)}
\text{2. }\omega \notin B : P(\omega| B) = 0

Now for a general event A,


\begin{align}
P(A|B) &= \sum_{\omega \in A \cap B} {P(\omega | B)} + \cancelto{0}{\sum_{\omega \in A \cap B^c} P(\omega|B)} \\
&= \sum_{\omega \in A \cap B} {\frac{P(\omega)}{P(B)}} \\
&= \frac{P(A \cap B)}{P(B)}
\end{align}
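
As a concrete check of this derivation (an illustrative instance, not taken from the cited sources), let Ω be the six equally likely faces of a fair die and let B be the event that the roll is even, so P(B) = 1/2 and α = 1/P(B) = 2. Then

P(\omega|B) = \frac{1/6}{1/2} = \frac{1}{3} \text{ for } \omega \in B, \qquad P(\omega|B) = 0 \text{ otherwise},

and for A = {1, 2, 3, 4}, summing over A \cap B = {2, 4} gives P(A|B) = 2 \cdot \tfrac{1}{3} = \tfrac{2}{3} = P(A \cap B)/P(B).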

References

  1. Gut, Allan (2013). Probability: A Graduate Course (2nd ed.). New York, NY: Springer. ISBN 978-1-4614-4707-8.
  2. Ross, Sheldon (2010). A First Course in Probability (8th ed.). Pearson Prentice Hall. ISBN 978-0-13-603313-4.
  3. Casella, George; Berger, Roger L. (2002). Statistical Inference. Duxbury Press. ISBN 978-0-534-24312-8.
  4. Gillies, Donald (2000). Philosophical Theories of Probability. Routledge. Chapter 4, "The subjective theory".
  5. Paulos, J. A. (1988). Innumeracy: Mathematical Illiteracy and its Consequences. Hill and Wang. ISBN 0-8090-7447-8 (p. 63 et seq.).
  6. Bruss, F. Thomas (March 2007). "Der Wyatt Earp Effekt". Spektrum der Wissenschaft.
  7. Casella, George; Berger, Roger L. (1990). Statistical Inference. Duxbury Press. ISBN 0-534-11958-1 (p. 18 et seq.).
  8. Grinstead and Snell's Introduction to Probability, p. 134.
