Bonferroni correction


In statistics, the Bonferroni correction is a method used to counteract the problem of multiple comparisons. It is named after the Italian mathematician Carlo Emilio Bonferroni for its use of the Bonferroni inequalities,[1] but its modern usage is credited to Olive Jean Dunn, who first described the method in a pair of articles written in 1959 and 1961.[2][3]

Informal introduction

Statistical inference logic is based on rejecting the null hypothesis if the likelihood of the observed data under the null hypothesis is low. The problem of multiplicity arises from the fact that as we increase the number of hypotheses being tested, we also increase the likelihood of a rare event, and therefore the likelihood of incorrectly rejecting a null hypothesis (i.e., of making a Type I error).

The Bonferroni correction is based on the idea that if an experimenter is testing m hypotheses, then one way of maintaining the familywise error rate (FWER) is to test each individual hypothesis at a statistical significance level of 1/m times what it would be if only one hypothesis were tested.

So, if the desired significance level for the whole family of tests should be (at most) \alpha, then the Bonferroni correction would test each individual hypothesis at a significance level of \alpha/m. For example, if a trial is testing eight hypotheses with a desired \alpha = 0.05, then the Bonferroni correction would test each individual hypothesis at \alpha = 0.05/8 = 0.00625.
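As a quick numerical illustration, here is a minimal Python sketch of this thresholding; the eight p-values are invented for the example:

    # Bonferroni correction: test each of m hypotheses at level alpha/m.
    alpha = 0.05
    p_values = [0.001, 0.012, 0.03, 0.004, 0.09, 0.2, 0.006, 0.05]  # illustrative
    m = len(p_values)
    threshold = alpha / m  # 0.05 / 8 = 0.00625
    rejected = [i for i, p in enumerate(p_values) if p <= threshold]
    print(threshold)  # 0.00625
    print(rejected)   # [0, 3, 6] -- only p-values at or below 0.00625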

A statistically significant result is one that is unlikely to have occurred by chance if the null hypothesis is actually correct (i.e., if there is no difference among groups, no effect of treatment, and no relation among variables).


Definition

Let H_{1},...,H_{m} be a family of hypotheses and p_{1},...,p_{m} the corresponding p-values. Let I_{0} be the (unknown) subset of the true null hypotheses, having m_{0} members.

The familywise error rate is the probability of rejecting at least one of the members of I_{0}; that is, of making one or more Type I errors. The Bonferroni correction states that rejecting only those hypotheses with p_{i}\leq\frac{\alpha}{m} controls \mathit{FWER}\leq\alpha. The proof follows from Boole's inequality:

\mathit{FWER}=\Pr\left\{\bigcup_{i\in I_{0}}\left(p_{i}\leq\frac{\alpha}{m}\right)\right\}\leq\sum_{i\in I_{0}}\Pr\left(p_{i}\leq\frac{\alpha}{m}\right)\leq m_{0}\frac{\alpha}{m}\leq m\frac{\alpha}{m}=\alpha

This result does not require that the tests be independent.
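The bound can also be checked empirically. The following Monte Carlo sketch assumes the simplest setting, m independent true null hypotheses, under which each p-value is Uniform(0, 1); the numbers of tests and simulations are arbitrary choices:

    import random

    random.seed(0)
    alpha, m, n_sims = 0.05, 20, 100_000
    fwer_raw = fwer_bonf = 0
    for _ in range(n_sims):
        p = [random.random() for _ in range(m)]        # null p-values
        fwer_raw += any(pi <= alpha for pi in p)       # no correction
        fwer_bonf += any(pi <= alpha / m for pi in p)  # Bonferroni
    print(fwer_raw / n_sims)   # close to 1 - (1 - alpha)^m, about 0.64
    print(fwer_bonf / n_sims)  # at most alpha = 0.05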



We have used the fact that \sum_{i=1}^{m}\frac{\alpha}{m}=\alpha, but the correction can be generalized: it applies to any collection of significance levels a_{i} with \sum_{i=1}^{m}a_{i}=\alpha, as long as the weights are defined prior to the test.
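For instance, an experimenter might spend more of the error budget on a primary hypothesis. A sketch of this weighted variant, with hypothetical weights and p-values chosen for illustration:

    alpha = 0.05
    weights = [0.5, 0.25, 0.125, 0.125]    # fixed before seeing the data
    levels = [w * alpha for w in weights]  # the a_i, which sum to alpha
    p_values = [0.02, 0.015, 0.004, 0.30]  # illustrative
    rejected = [i for i, (p, a) in enumerate(zip(p_values, levels)) if p <= a]
    print(levels)    # [0.025, 0.0125, 0.00625, 0.00625]
    print(rejected)  # [0, 2]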

Confidence intervals

The Bonferroni correction can also be used to adjust confidence intervals. If one forms m confidence intervals and wishes to have an overall confidence level of 1-\alpha, then constructing each individual confidence interval at the level 1-\frac{\alpha}{m} is the analogous correction.
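A sketch of the interval adjustment for m sample means, using a normal approximation (with samples this small one would use t critical values instead; the data are invented):

    from math import sqrt
    from statistics import NormalDist, mean, stdev

    samples = [
        [5.1, 4.9, 5.3, 5.0, 5.2],
        [7.8, 8.1, 7.9, 8.3, 8.0],
        [2.9, 3.2, 3.1, 3.0, 2.8],
    ]
    alpha = 0.05
    m = len(samples)
    z = NormalDist().inv_cdf(1 - (alpha / m) / 2)  # critical value, level 1 - alpha/m
    for x in samples:
        half_width = z * stdev(x) / sqrt(len(x))
        print(f"{mean(x):.3f} +/- {half_width:.3f}")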

Alternatives

There are alternative procedures for controlling the familywise error rate. For example, the Holm–Bonferroni method and the Šidák correction are said to be uniformly more powerful test procedures than the Bonferroni correction.
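For comparison, a short sketch of the Holm–Bonferroni step-down rule, which works through the p-values from smallest to largest against progressively less strict thresholds and stops at the first failure (the p-values are again illustrative):

    def holm_bonferroni(p_values, alpha=0.05):
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        rejected = set()
        for k, i in enumerate(order):
            if p_values[i] <= alpha / (m - k):  # k-th smallest vs alpha/(m - k)
                rejected.add(i)
            else:
                break  # all remaining (larger) p-values fail as well
        return rejected

    print(holm_bonferroni([0.01, 0.04, 0.03, 0.005]))  # {0, 3}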

Criticism

The Bonferroni correction can be somewhat conservative if there are a large number of tests and/or the test statistics are positively correlated. The correction also comes at the cost of increasing the probability of producing false negatives, and consequently reducing statistical power.

Another criticism concerns the concept of a family of hypotheses: the statistical community has not reached a consensus on how such a family should be defined. Because there is no standard definition, test results may change dramatically depending solely on how the family of hypotheses is chosen.

All of these criticisms, however, apply to adjustments for multiple comparisons in general, and are not specific to the Bonferroni correction.

References

  1. Bonferroni, C. E. (1936). Teoria statistica delle classi e calcolo delle probabilità. Pubblicazioni del R. Istituto Superiore di Scienze Economiche e Commerciali di Firenze.
  2. Dunn, Olive Jean (1959). "Estimation of the Medians for Dependent Variables". Annals of Mathematical Statistics 30 (1): 192–197. JSTOR 2237135.
  3. Dunn, Olive Jean (1961). "Multiple Comparisons Among Means". Journal of the American Statistical Association 56 (293): 52–64. doi:10.1080/01621459.1961.10482090.
