Extensions of Fisher's method

From Wikipedia, the free encyclopedia

(Introductory block)

Dependent statistics

A principal limitation of Fisher's method is that it is designed to combine independent p-values only, which makes it unreliable for combining dependent p-values. To overcome this limitation, a number of methods have been developed to extend its utility.

Known covariance

Brown's method: Gaussian approximation

Fisher's method showed that −2 times the log-sum of k independent p-values, denoted X, follows a χ2-distribution with 2k degrees of freedom:

<math>X = -2\sum_{i=1}^{k}\log(p_i) \sim \chi^2(2k).</math>
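For illustration, the following is a minimal Python sketch of this statistic; the helper name <code>fisher_combine</code> and the example p-values are hypothetical and not part of the original method description.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2

def fisher_combine(p_values):
    """Combine independent p-values with Fisher's method."""
    p = np.asarray(p_values, dtype=float)
    k = p.size
    # Fisher's statistic: X = -2 * sum(log p_i) ~ chi^2 with 2k df under H0
    x = -2.0 * np.sum(np.log(p))
    combined_p = chi2.sf(x, df=2 * k)  # upper-tail probability of chi^2(2k)
    return x, combined_p

# Example with hypothetical p-values
print(fisher_combine([0.01, 0.20, 0.45]))
</syntaxhighlight>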

In the case that these p-values are not independent, Brown proposed the idea of approximating X using a scaled χ2-distribution, <math>c\chi^2(k')</math>, with k’ degrees of freedom.

The mean and variance of this scaled χ2 variable are:

<math>E(c\chi^2(k')) = ck'</math>

<math>\operatorname{Var}(c\chi^2(k')) = 2c^2 k'</math>

Matching these to the mean and variance of X, i.e. choosing <math>c = \operatorname{Var}(X)/(2E(X))</math> and <math>k' = 2E(X)^2/\operatorname{Var}(X)</math>, makes the approximation accurate up to the first two moments.[1][2]
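The following is a minimal Python sketch of this moment-matching step, assuming the covariance matrix of the −2 log(p_i) terms is known; the function name <code>brown_combine</code> and the example covariance matrix are illustrative assumptions, not taken from Brown's paper.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2

def brown_combine(p_values, cov):
    """Brown's approximation for dependent p-values, given the
    covariance matrix of the -2*log(p_i) terms."""
    p = np.asarray(p_values, dtype=float)
    k = p.size
    x = -2.0 * np.sum(np.log(p))       # same statistic as in Fisher's method
    mean_x = 2.0 * k                   # E(X): each -2*log(p_i) ~ chi^2(2), mean 2
    var_x = float(np.sum(cov))         # Var(X): sum of all covariance-matrix entries
    c = var_x / (2.0 * mean_x)         # scale chosen so that c*k' = E(X)
    k_prime = 2.0 * mean_x**2 / var_x  # degrees of freedom so that 2*c^2*k' = Var(X)
    return chi2.sf(x / c, df=k_prime)

# Example: three dependent tests with a hypothetical covariance matrix.
# Diagonal entries are Var(-2*log(p_i)) = 4; off-diagonals encode dependence.
cov = np.array([[4.0, 1.5, 0.5],
                [1.5, 4.0, 1.0],
                [0.5, 1.0, 4.0]])
print(brown_combine([0.01, 0.20, 0.45], cov))
</syntaxhighlight>

Under independence the covariance matrix is diagonal with entries 4, which gives c = 1 and k’ = 2k, so the procedure reduces to Fisher's method.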

Unknown covariance

Kost's method: t approximation

References

  1. ^ Brown, M. (1975). "A method for combining non-independent, one-sided tests of significance". Biometrics. 31: 987–992.
  2. ^ Kost, J.; McDermott, M. (2002). "Combining dependent P-values". Statistics & Probability Letters. 60: 183–190.