Berkson's paradox, also known as Berkson's bias, collider bias, or Berkson's fallacy, is a result in conditional probability and statistics which is often found to be counterintuitive, and hence a veridical paradox. It is a complicating factor arising in statistical tests of proportions. Specifically, it arises when there is an ascertainment bias inherent in a study design. The effect is related to the explaining away phenomenon in Bayesian networks, and conditioning on a collider in graphical models.
It is often described in the fields of medical statistics or biostatistics, as in the original description of the problem by Joseph Berkson.
The most common example of Berkson's paradox is a false observation of a negative correlation between two desirable traits, i.e., that members of a population which have some desirable trait tend to lack a second. Berkson's paradox occurs when this observation appears true when in reality the two properties are unrelated—or even positively correlated—because members of the population where both are absent are not equally observed. For example, a person may observe from their experience that fast food restaurants in their area which serve good hamburgers tend to serve bad fries and vice versa; but because they would likely not eat anywhere where both were bad, they fail to allow for the large number of restaurants in this category which would weaken or even flip the correlation.
Berkson's original illustration involves a retrospective study examining a risk factor for a disease in a statistical sample from a hospital in-patient population. Because samples are taken from a hospital in-patient population, rather than from the general public, this can result in a spurious negative association between the disease and the risk factor. For example, if the risk factor is diabetes and the disease is cholecystitis, a hospital patient without diabetes is more likely to have cholecystitis than a member of the general population, since the patient must have had some non-diabetes (possibly cholecystitis-causing) reason to enter the hospital in the first place. That result will be obtained regardless of whether there is any association between diabetes and cholecystitis in the general population.
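The hospital effect can be reproduced with a small Monte Carlo sketch. The rates below (10% diabetes, 5% cholecystitis, 15% other hospitalizing illness) are purely illustrative assumptions, not figures from Berkson's paper; the point is only that the two conditions are generated independently, yet appear negatively associated among in-patients.

```python
import random

random.seed(0)

# Hypothetical, purely illustrative rates: diabetes and cholecystitis
# are generated independently in the general population.
N = 100_000
people = []
for _ in range(N):
    diabetes = random.random() < 0.10
    cholecystitis = random.random() < 0.05
    other_illness = random.random() < 0.15
    # Anyone with any of the three conditions ends up in hospital.
    hospitalized = diabetes or cholecystitis or other_illness
    people.append((diabetes, cholecystitis, hospitalized))

def p_chole_given(diab, group):
    """P(cholecystitis | diabetes status) within a group."""
    rows = [p for p in group if p[0] == diab]
    return sum(p[1] for p in rows) / len(rows)

everyone = people
inpatients = [p for p in people if p[2]]

# In the general population both estimates are near 0.05 (independence);
# among in-patients, the non-diabetics show a much higher rate, because
# they needed some other reason to be in hospital at all.
print("general:    ", round(p_chole_given(True, everyone), 3),
      round(p_chole_given(False, everyone), 3))
print("in-patients:", round(p_chole_given(True, inpatients), 3),
      round(p_chole_given(False, inpatients), 3))
```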
An example presented by Jordan Ellenberg: Suppose Alex will only date a man if his niceness plus his handsomeness exceeds some threshold. Then nicer men do not have to be as handsome to qualify for Alex's dating pool. So, among the men that Alex dates, Alex may observe that the nicer ones are less handsome on average (and vice versa), even if these traits are uncorrelated in the general population. Note that this does not mean that men in the dating pool compare unfavorably with men in the population. On the contrary, Alex's selection criterion means that Alex has high standards. The average nice man that Alex dates is actually more handsome than the average man in the population (since even among nice men, the ugliest portion of the population is skipped). Berkson's negative correlation is an effect that arises within the dating pool: the rude men that Alex dates must have been even more handsome to qualify.
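A simulation along the same lines can illustrate the dating-pool effect. The uniform trait scores and the threshold of 1.2 are arbitrary modelling choices, not part of Ellenberg's example; what matters is that the traits are independent in the population but become negatively correlated after selection.

```python
import random

random.seed(1)

# Niceness and handsomeness drawn independently, so their population
# correlation is (approximately) zero; 1.2 is an arbitrary threshold.
N = 50_000
population = [(random.random(), random.random()) for _ in range(N)]
pool = [(n, h) for n, h in population if n + h > 1.2]

def correlation(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    return cov / (sx * sy)

print("population corr:", round(correlation(population), 3))  # near 0
print("dating-pool corr:", round(correlation(pool), 3))       # clearly negative
# The pool also skews handsome overall: selection raises both averages.
print("mean handsomeness, pool vs population:",
      round(sum(h for _, h in pool) / len(pool), 3),
      round(sum(h for _, h in population) / N, 3))
```

Note that the last line matches the text's observation: the selected pool compares favorably with the population even though, within the pool, the traits trade off against each other.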
As a quantitative example, suppose a collector has 1000 postage stamps, of which 300 are pretty and 100 are rare, with 30 being both pretty and rare. 30% of all his stamps are pretty and 10% of his pretty stamps are rare, so prettiness tells nothing about rarity. He puts the 370 stamps which are pretty or rare on display. Just over 27% of the stamps on display are rare (100/370), but still only 10% (30/300) of the pretty stamps are rare (and 100% of the 70 not-pretty stamps on display are rare). If an observer only considers stamps on display, they will observe a spurious negative relationship between prettiness and rarity as a result of the selection bias (that is, not-prettiness strongly indicates rarity in the display, but not in the total collection).
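The stamp arithmetic can be checked directly; a short sketch using only the counts given above:

```python
# The collector's counts, taken directly from the text.
total, pretty, rare, both = 1000, 300, 100, 30

# In the full collection, prettiness carries no information about rarity.
p_rare = rare / total                # 0.1
p_rare_given_pretty = both / pretty  # 0.1

# On display: every stamp that is pretty or rare (inclusion-exclusion).
on_display = pretty + rare - both    # 370
rare_on_display = rare               # all 100 rare stamps are displayed
p_rare_given_display = rare_on_display / on_display  # 100/370, just over 27%
p_rare_given_pretty_and_display = both / pretty      # still 0.1
not_pretty_on_display = on_display - pretty          # 70, and all of them rare

print(p_rare, p_rare_given_pretty)     # 0.1 0.1
print(round(p_rare_given_display, 3))  # 0.27
print(not_pretty_on_display)           # 70
```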
Two independent events become conditionally dependent given that at least one of them occurs. Symbolically:
- If 0 < P(A) < 1, 0 < P(B) < 1, and P(A | B) = P(A), then P(A | B, A U B) < P(A | A U B) and hence P(A | B, A U B) ≠ P(A | A U B).
- Event A and event B may or may not occur.
- P(A | B), a conditional probability, is the probability of observing event A given that B is true.
- Explanation: Events A and B are independent of each other.
- P(A | B, A U B) is the probability of observing event A given that both B and (A or B) occur. This can also be written as P(A | B ∩ (A U B)).
- Explanation: The probability of A given both B and (A or B) is smaller than the probability of A given (A or B).
In other words, given two independent events, if you consider only outcomes where at least one occurs, then they become conditionally dependent, as shown above.
There's a simpler, more general argument:
Given two events A and B with 0 < P(A) ≤ 1, we have 0 < P(A) ≤ P(A U B) ≤ 1. Multiplying both sides of the right-hand inequality by P(A), we get P(A)P(A U B) ≤ P(A). Dividing both sides of this by P(A U B) yields P(A) ≤ P(A) / P(A U B) = P(A ∩ (A U B)) / P(A U B) = P(A | A U B), i.e., P(A) ≤ P(A | A U B). When P(A U B) < 1 (i.e., when A U B is a set of less than full probability), the inequality is strict: P(A) < P(A | A U B), and hence, A and A U B are dependent.
Note that only two assumptions were used in the argument above: (i) 0 < P(A) ≤ 1, which is sufficient to imply P(A) ≤ P(A | A U B); and (ii) P(A U B) < 1, which together with (i) implies the strict inequality P(A) < P(A | A U B), and so the dependence of A and A U B. It is not necessary to assume that A and B are independent: the result holds for any events A and B satisfying (i) and (ii), including independent ones.
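The inequality P(A) ≤ P(A | A U B) can be sanity-checked numerically. The following sketch builds a small discrete sample space with arbitrary random weights (nothing here comes from the article; the space and events are made up for the check) and verifies the inequality, strict whenever A U B has less than full probability, over many random pairs of events.

```python
import random

random.seed(2)

# A small discrete sample space with arbitrary positive random weights,
# normalized so the probabilities sum to 1.
outcomes = list(range(8))
weights = [random.random() for _ in outcomes]
prob = {o: w / sum(weights) for o, w in zip(outcomes, weights)}

def P(event):
    """Probability of a set of outcomes."""
    return sum(prob[o] for o in event)

for _ in range(1000):
    A = {o for o in outcomes if random.random() < 0.5}
    B = {o for o in outcomes if random.random() < 0.5}
    union = A | B
    if not A or not union:
        continue  # need P(A) > 0 to condition meaningfully
    # Since A is a subset of A U B, P(A | A U B) = P(A) / P(A U B).
    p_A_given_union = P(A) / P(union)
    assert P(A) <= p_A_given_union + 1e-12
    if P(union) < 1 - 1e-9:
        # Strict inequality when A U B is a set of less than full probability.
        assert P(A) < p_A_given_union

print("inequality verified on 1000 random event pairs")
```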
The cause is that the conditional probability of event A occurring, given that it or B occurs, is inflated: it is higher than the unconditional probability, because we have excluded cases where neither occurs.

- P(A | A U B) > P(A): the conditional probability is inflated relative to the unconditional probability.
One can see this in tabular form as follows: the three cells other than "~A & ~B" are the outcomes where at least one event occurs (and ~A means "not A").

|      | A      | ~A      |
|------|--------|---------|
| B    | A & B  | ~A & B  |
| ~B   | A & ~B | ~A & ~B |
For instance, if one has a sample of 100 outcomes, and both A and B occur independently half the time (P(A) = P(B) = 1/2), one obtains:

|      | A  | ~A |
|------|----|----|
| B    | 25 | 25 |
| ~B   | 25 | 25 |

So in 75 outcomes, either A or B occurs, of which 50 have A occurring. By comparing the conditional probability of A to the unconditional probability of A:

P(A | A U B) = 50/75 = 2/3 > P(A) = 50/100 = 1/2
We see that the probability of A is higher (2/3) in the subset of outcomes where (A or B) occurs, than in the overall population (1/2). On the other hand, the probability of A given both B and (A or B) is simply the unconditional probability of A, P(A), since A is independent of B. In the numerical example, we have conditioned on being in the top row:

|      | A  | ~A |
|------|----|----|
| B    | 25 | 25 |

Here the probability of A is P(A | B, A U B) = 25/50 = 1/2.
Berkson's paradox arises because the conditional probability of A given B within the three-cell subset equals the conditional probability in the overall population, but the unconditional probability within the subset is inflated relative to the unconditional probability in the overall population; hence, within the subset, the presence of B decreases the conditional probability of A (back to its overall unconditional probability):

P(A | B, A U B) = P(A | B) = P(A) < P(A | A U B)
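The numerical example condenses into a few lines of Python; the 25-per-cell counts follow from a sample of 100 outcomes in which A and B occur independently half the time:

```python
# 2x2 cell counts for 100 outcomes with A and B independent,
# each occurring half the time: 25 outcomes per cell.
a_and_b, a_not_b, b_not_a, neither = 25, 25, 25, 25
total = a_and_b + a_not_b + b_not_a + neither  # 100

p_a = (a_and_b + a_not_b) / total                      # P(A) = 1/2
either = a_and_b + a_not_b + b_not_a                   # 75 outcomes with A or B
p_a_given_union = (a_and_b + a_not_b) / either         # P(A | A U B) = 50/75
p_a_given_b_and_union = a_and_b / (a_and_b + b_not_a)  # P(A | B, A U B) = 25/50

# Conditioning on (A or B) inflates P(A) from 1/2 to 2/3; further
# conditioning on B brings it back down to the unconditional 1/2.
print(p_a, round(p_a_given_union, 3), p_a_given_b_and_union)  # 0.5 0.667 0.5
```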
- Berkson, Joseph (June 1946). "Limitations of the Application of Fourfold Table Analysis to Hospital Data". Biometrics Bulletin. 2 (3): 47–53. doi:10.2307/3002000. JSTOR 3002000. PMID 21001024. (The paper is frequently miscited as Berkson, J. (1949) Biological Bulletin 2, 47–53.)
- Jordan Ellenberg, "Why are handsome men such jerks?"