Revision as of 17:55, 22 September 2011

(Introductory block)

Dependent statistics

A principal limitation of Fisher's method is that it is designed to combine only independent p-values, which makes it unreliable for combining dependent p-values. A number of methods have been developed to overcome this limitation.

Known covariance

Brown's method: Gaussian approximation

Fisher's method showed that the log-sum of k independent p-values follows a χ²-distribution with 2k degrees of freedom:

    X = −2 Σᵢ₌₁ᵏ ln(pᵢ) ~ χ²(2k)

In the case that these p-values are not independent, Brown proposed approximating X using a scaled χ²-distribution, cχ²(k′), with k′ degrees of freedom. This approximation is accurate up to the first two moments.

[1]

[2]
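Brown's moment matching can be sketched in code. The following is a minimal illustration, not a reference implementation: the function name `browns_method` is an assumption, and the covariance matrix of the −2 ln(pᵢ) statistics is assumed to be known, as the section states. The scale c and degrees of freedom k′ are chosen so that c·χ²(k′) matches the mean and variance of X.

```python
import numpy as np
from scipy.stats import chi2

def browns_method(p_values, cov):
    """Combine dependent p-values via Brown's scaled chi-squared approximation.

    `cov` is the k x k covariance matrix of the -2*ln(p_i) statistics,
    assumed known. X is approximated by c * chi2(k') with c and k'
    chosen to match the first two moments of X.
    """
    p = np.asarray(p_values, dtype=float)
    k = len(p)
    X = -2.0 * np.sum(np.log(p))

    # Under the null, each -2*ln(p_i) ~ chi2(2), so E[X] = 2k
    # regardless of dependence; Var(X) is the sum of all covariances.
    mean_X = 2.0 * k
    var_X = float(np.sum(cov))

    c = var_X / (2.0 * mean_X)        # scale factor
    df = 2.0 * mean_X ** 2 / var_X    # k' degrees of freedom

    return chi2.sf(X / c, df)
```

When the p-values are in fact independent, cov is 4·I (each −2 ln pᵢ has variance 4), which gives c = 1 and k′ = 2k, so the result reduces to Fisher's combined p-value.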

Unknown covariance

Kost's method: t approximation

References

  1. ^ Brown, M. (1975). "A method for combining non-independent, one-sided tests of significance". Biometrics. 31: 987–992.
  2. ^ Kost, J.; McDermott, M. (2002). "Combining dependent P-values". Statistics & Probability Letters. 60: 183–190.