Berkson's paradox

From Wikipedia, the free encyclopedia


Berkson's paradox or Berkson's fallacy is a result in conditional probability and statistics which is counterintuitive for some people, and hence a veridical paradox. It is a complicating factor arising in statistical tests of proportions. Specifically, it arises when there is an ascertainment bias inherent in a study design.

It is often described in the fields of medical statistics or biostatistics, as in the original description of the problem by Joseph Berkson.

Statement

The result is that two independent events become conditionally dependent (negatively dependent) given that at least one of them occurs. Symbolically:

if 0 < P(A) < 1 and 0 < P(B) < 1,
and P(A|B) = P(A), i.e. they are independent,
then P(A|B, C) < P(A|C), where C = A ∪ B (i.e. the event that A or B occurs).

In words, given two independent events, if you only consider outcomes where at least one occurs, then they become negatively dependent.
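The statement can be checked empirically. Below is a minimal simulation sketch (the variable names and the choice of P(A) = P(B) = 1/2 are illustrative): sample two independent events many times, restrict to the outcomes where at least one occurred, and observe that A becomes less likely once B is known to have occurred.

```python
import random

random.seed(0)
p_a, p_b = 0.5, 0.5
trials = [(random.random() < p_a, random.random() < p_b) for _ in range(100_000)]

# Restrict to outcomes where A or B occurred (the conditioning event C = A ∪ B).
subset = [(a, b) for a, b in trials if a or b]

# P(A|C): fraction of the subset where A occurred.
p_a_given_c = sum(a for a, b in subset) / len(subset)

# P(A|B, C): among subset outcomes where B occurred, fraction where A occurred.
p_a_given_b_and_c = sum(a for a, b in subset if b) / sum(b for a, b in subset)

print(p_a_given_c)        # ≈ 2/3: inflated above the unconditional 1/2
print(p_a_given_b_and_c)  # ≈ 1/2: knowing B pulls P(A) back down
```

With these parameters, P(A|C) ≈ 2/3 while P(A|B, C) ≈ 1/2 = P(A), so the events are negatively dependent within the subset.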

Explanation

The cause is that the conditional probability of event A occurring, given that A or B occurs, is inflated: it is higher than the unconditional probability, because we have excluded the cases where neither occurs.

P(A|A ∪ B) > P(A),

i.e. the conditional probability is inflated relative to the unconditional one.

One can see this in tabular form as follows, where ~A means "not A"; the outcomes where at least one event occurs are the three cells other than ~A & ~B.

        A         ~A
B       A & B     ~A & B
~B      A & ~B    ~A & ~B

For instance, in a sample of 100 outcomes where A and B each occur independently half the time (so P(A) = P(B) = 1/2), one obtains:

        A     ~A
B       25    25
~B      25    25

So in 75 outcomes, either A or B occurs, of which 50 have A occurring, so

P(A|A ∪ B) = 50/75 = 2/3 > 1/2 = 50/100 = P(A).

Thus the probability of A is higher in the subset (of outcomes where it or B occurs), 2/3, than in the overall population, 1/2.

Berkson's paradox arises because the conditional probability of A given B within the subset equals the conditional probability of A given B in the overall population, while the unconditional probability of A within the subset is inflated relative to the overall population. Hence, within the subset, the presence of B decreases the conditional probability of A, back down to its overall unconditional probability:

P(A|B, A ∪ B) = P(A|B) = P(A)
P(A|A ∪ B) > P(A).
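Both identities can be verified directly from the counts in the 100-outcome table above; a short arithmetic sketch (the variable names are illustrative):

```python
# Counts from the 2x2 table: 25 outcomes in each cell.
a_and_b, a_only, b_only = 25, 25, 25
union = a_and_b + a_only + b_only   # 75 outcomes where A or B occurs

# Within the subset, conditioning on B restores the overall probability of A:
# P(A|B, A ∪ B) = 25/50 = 1/2 = P(A).
p_a_given_b_union = a_and_b / (a_and_b + b_only)

# But the unconditional probability within the subset is inflated:
# P(A|A ∪ B) = 50/75 = 2/3 > 1/2.
p_a_given_union = (a_and_b + a_only) / union

print(p_a_given_b_union, p_a_given_union)
```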

Examples

A classic illustration involves a retrospective study examining a risk factor for a disease in a statistical sample from a hospital in-patient population. If a control group is also ascertained from the in-patient population, a difference in hospital admission rates for the case sample and control sample can result in a spurious association between the disease and the risk factor.

As another example, suppose a collector has 1000 postage stamps, of which 300 are pretty and 100 are rare, with 30 being both pretty and rare. 10% of all her stamps are rare and 10% of her pretty stamps are rare, so prettiness tells nothing about rarity. She puts the 370 stamps which are pretty or rare on display. Just over 27% of the stamps on display are rare, but still only 10% of the pretty stamps are rare (and 100% of the 70 not-pretty stamps on display are rare). If an observer only considers stamps on display, he will observe a spurious negative relationship between prettiness and rarity as a result of the selection bias (that is, not-prettiness strongly indicates rarity in the display, but not in the total collection).
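The stamp figures can be checked with the same kind of arithmetic; a small sketch using the counts from the example:

```python
# Stamp collection from the example: 1000 total, 300 pretty, 100 rare,
# and 30 that are both pretty and rare.
total, pretty, rare, pretty_and_rare = 1000, 300, 100, 30

rare_overall = rare / total                   # 10% of all stamps are rare
rare_among_pretty = pretty_and_rare / pretty  # 10%: prettiness is uninformative

# On display: every stamp that is pretty or rare (inclusion-exclusion).
on_display = pretty + rare - pretty_and_rare  # 370
rare_on_display = rare / on_display           # just over 27%

# Among the not-pretty stamps on display, every one is rare.
not_pretty_on_display = on_display - pretty                                    # 70
rare_not_pretty_on_display = (rare - pretty_and_rare) / not_pretty_on_display  # 100%

print(rare_overall, rare_among_pretty, rare_on_display, rare_not_pretty_on_display)
```

The selection into the display set creates the spurious negative relationship: rarity looks almost three times as common among displayed stamps, even though it is unchanged among the pretty ones.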

References

  • Berkson, Joseph (June 1946). "Limitations of the Application of Fourfold Table Analysis to Hospital Data". Biometrics Bulletin. 2 (3): 47–53. doi:10.2307/3002000. JSTOR 3002000. (The paper is frequently miscited as Berkson, J. (1949) Biological Bulletin 2, 47–53.)