Extensions of Fisher's method

From Wikipedia, the free encyclopedia

In statistics, extensions of Fisher's method are a group of approaches that allow approximately valid statistical inferences to be made when the assumptions required for the direct application of Fisher's method are not valid. Fisher's method is a way of combining the information in the p-values from different statistical tests so as to form a single overall test: this method requires that the individual test statistics (or, more immediately, their resulting p-values) should be statistically independent.

Dependent statistics[edit]

A principal limitation of Fisher's method is that it is designed to combine independent p-values, which makes it unreliable when the p-values are dependent. A number of methods have been developed to extend its applicability to this case.

Known covariance[edit]

Brown's method[edit]

Fisher showed that, for k independent p-values, minus twice the sum of their natural logarithms follows a χ2-distribution with 2k degrees of freedom: [1][2]

X = -2\sum_{i=1}^k \log_e(p_i) \sim \chi^2(2k) .
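As a minimal illustration, the statistic X and its p-value can be computed in a few lines of Python. Because the degrees of freedom 2k are even, the χ2 survival function has a closed form, so no statistics library is needed (the function name fisher_combined is ours, not standard):

```python
import math

def fisher_combined(pvalues):
    """Fisher's method for independent p-values.

    Returns X = -2 * sum(log(p_i)) and its p-value from the
    chi-square distribution with 2k degrees of freedom.
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # For even degrees of freedom 2k, the chi-square survival function
    # is exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!  (a closed form).
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return x, math.exp(-half) * total
```

For a single p-value the combined p-value reduces to that p-value itself, which is a convenient sanity check; for [0.1, 0.05, 0.5] the statistic is X ≈ 11.98 on 6 degrees of freedom.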

When the p-values are not independent, Brown proposed approximating the distribution of X by a scaled χ2-distribution, cχ2(k’), with k’ degrees of freedom and scale factor c.

The mean and variance of this scaled χ2 variable are:

\operatorname{E}[c\chi^2(k')] = ck' ,
\operatorname{Var}[c\chi^2(k')] = 2c^2k' .

The scale factor c and the degrees of freedom k’ are chosen so that these match the mean and variance of X; the approximation is therefore accurate up to the first two moments.
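The moment-matching step can be sketched as follows, assuming the full covariance matrix of the terms -2 log_e(p_i) is known (the function name brown_scale_df is illustrative; the p-value would then be obtained by referring X/c to a χ2(k’) distribution, e.g. with scipy.stats.chi2.sf):

```python
def brown_scale_df(cov):
    """Given the covariance matrix `cov` of the terms -2*log(p_i)
    (diagonal entries are 4, since Var[-2 log p] = 4 under the null),
    return the scale c and degrees of freedom k' that match the first
    two moments of X = sum_i -2*log(p_i).
    """
    k = len(cov)
    mean_x = 2.0 * k                      # E[X] = 2k under the null
    var_x = sum(sum(row) for row in cov)  # Var[X] = sum of all covariances
    c = var_x / (2.0 * mean_x)            # solves E[c chi2(k')] = c k' = E[X]
    k_prime = 2.0 * mean_x ** 2 / var_x   # solves Var[c chi2(k')] = 2 c^2 k' = Var[X]
    return c, k_prime
```

With zero off-diagonal covariances this recovers c = 1 and k’ = 2k, i.e. Fisher's original χ2(2k) distribution.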

Unknown covariance[edit]

Kost's method: t approximation[edit]

Kost and McDermott extended Brown's approach to the case where the covariance of the test statistics must be estimated, deriving an approximation to the covariance between the terms -2 log_e(p_i); see reference 2 for details.
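As commonly cited from reference 2, the covariance between -2 log_e(p_i) and -2 log_e(p_j) is approximated by a cubic polynomial in the correlation ρ of the underlying test statistics. A sketch, with coefficient values as reported in the literature (note they sum to 4, the exact covariance at ρ = 1); these off-diagonal estimates would then feed into Brown's moment-matching formulas:

```python
def kost_cov(rho):
    """Cubic approximation to cov(-2*log(p_i), -2*log(p_j)) as a
    function of the correlation rho between the underlying test
    statistics (coefficients as reported by Kost and McDermott).
    """
    return 3.263 * rho + 0.710 * rho ** 2 + 0.027 * rho ** 3
```

At ρ = 0 the approximation gives zero covariance, recovering the independent case.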


  1. ^ Brown, M. (1975). "A method for combining non-independent, one-sided tests of significance". Biometrics 31: 987–992. doi:10.2307/2529826. 
  2. ^ Kost, J.; McDermott, M. (2002). "Combining dependent P-values". Statistics & Probability Letters 60: 183–190. doi:10.1016/S0167-7152(02)00310-3.