Familywise error rate


In statistics, the familywise error rate (FWER) is the probability of making one or more false discoveries, or type I errors, among all the hypotheses when performing multiple hypothesis tests.
FWER-controlling procedures (such as the Bonferroni correction) exert more stringent control over false discovery than false discovery rate (FDR) controlling procedures: they seek to reduce the probability of even one false discovery, as opposed to the expected proportion of false discoveries. Thus, FDR procedures have greater power at the cost of an increased rate of type I errors, i.e., rejecting a null hypothesis of no effect when it should be accepted.[1]

Definitions

Classification of m hypothesis tests

Suppose we have m null hypotheses, denoted by H_1, H_2, \ldots, H_m.
Using a statistical test, each hypothesis is declared significant or non-significant.
Summing the test results over the H_i gives the following table and the related random variables:

                           Null hypothesis is true   Alternative hypothesis is true   Total
Declared significant                  V                            S                    R
Declared non-significant              U                            T                  m - R
Total                                m_0                        m - m_0                 m

The FWER

The FWER is the probability of making even one type I error in the family,

\mathrm{FWER} = \Pr(V \ge 1),

or equivalently,

\mathrm{FWER} = 1 - \Pr(V = 0).

Thus, by assuring \mathrm{FWER} \le \alpha, the probability of making even one type I error in the family is controlled at level \alpha.
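
For intuition, the following is a minimal Python sketch of how quickly the FWER grows with the number of tests, assuming m independent tests of true null hypotheses, each performed at level \alpha, so that \Pr(V = 0) = (1 - \alpha)^m:

    # FWER for m independent tests of true nulls, each at level alpha:
    # Pr(V = 0) = (1 - alpha)^m, so FWER = 1 - (1 - alpha)^m.
    alpha = 0.05
    for m in [1, 5, 10, 50, 100]:
        fwer = 1 - (1 - alpha) ** m
        print(f"m = {m:3d}  FWER = {fwer:.3f}")

Already at m = 10 the FWER is about 0.40, which is why a correction is needed.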

A procedure controls the FWER in the weak sense if FWER control at level \alpha is guaranteed only when all null hypotheses are true (i.e., when m_0 = m, so the global null hypothesis is true).

A procedure controls the FWER in the strong sense if FWER control at level \alpha is guaranteed for any configuration of true and non-true null hypotheses (including the global null hypothesis).

The concept of a family

Within the statistical framework, there are several definitions for the term "family":

  • First, a distinction must be made between exploratory data analysis and confirmatory data analysis: in exploratory analysis, the family constitutes all inferences made and those that potentially could be made, whereas in confirmatory analysis the family includes only the inferences of interest specified prior to the study.
  • Hochberg & Tamhane (1987)[2] define "family" as "any collection of inferences for which it is meaningful to take into account some combined measure of error".
  • According to Cox (1982), a set of inferences should be regarded as a family:
  1. In order to take into account the selection effect due to data dredging
  2. To ensure the simultaneous correctness of a set of inferences so as to guarantee a correct overall decision

To summarize, a family is perhaps best defined by the potential selective inference that is being faced: a family is the smallest set of items of inference in an analysis that are interchangeable in terms of their meaning for the goal of the research, and from which a selection of results for action, presentation or highlighting could be made (Benjamini).

History

Tukey coined the terms "experimentwise error rate" and "per-experiment error rate" for the error rate that the researcher should use as a control level in a multiple-hypothesis experiment.

Since not all of the tests done in an experiment should constitute a single family (for example, in a multiple-stage experiment a separate family might be used for each stage), the terminology was changed by Miller to "family-wise error rate" (and was later adopted by Tukey as "batchwise" or "per batch").

Simultaneous inference vs. selective inference

Controlling the FWER is a form of simultaneous inference, in which all inferences made in a family are jointly corrected up to a pre-specified error rate. Depending on the definition of the family, the researcher might choose a different form of inference:

For example, simultaneous inference may be too conservative for certain large-scale problems currently being addressed by science. For such problems, a selective inference approach might be more suitable, since it assumes that any sub-group of hypotheses from the large-scale group can be viewed as a family. Selective inference is usually performed by controlling the FDR (false discovery rate). FDR-controlling procedures are more powerful (i.e., less conservative) than FWER-controlling procedures (such as the Bonferroni correction), at the cost of increasing the likelihood of false positives among the rejected hypotheses.
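
As an illustration of the selective-inference approach, here is a minimal Python sketch of the Benjamini-Hochberg FDR step-up procedure (reject the hypotheses with the r smallest p-values, where r is the largest k such that P_{(k)} \leq (k/m)q); the p-values are illustrative, not from any real study:

    # Benjamini-Hochberg: control the FDR at level q.
    def benjamini_hochberg(pvalues, q=0.05):
        m = len(pvalues)
        order = sorted(range(m), key=lambda i: pvalues[i])
        # Largest k (1-based) with P_(k) <= (k / m) * q.
        r = 0
        for k in range(1, m + 1):
            if pvalues[order[k - 1]] <= k * q / m:
                r = k
        reject = [False] * m
        for i in order[:r]:
            reject[i] = True
        return reject

    print(benjamini_hochberg([0.001, 0.012, 0.034, 0.20]))
    # [True, True, True, False]; Bonferroni at alpha = 0.05 would reject only two.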

Controlling procedures

The following is a concise review of some of the "old and trusted" solutions that ensure strong level \alpha FWER control, followed by some newer solutions. A good review of many of the available methods can be found in the book Multiple Comparison Procedures (Wiley, 1987) by Hochberg and Tamhane.

The Bonferroni procedure

Main article: Bonferroni correction
  • Denote by p_{i} the p-value for testing H_{i}.
  • Reject H_{i} if  p_{i} \leq \frac{\alpha}{m}.
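
A minimal Python sketch of this rule, with illustrative p-values:

    # Bonferroni: reject H_i when p_i <= alpha / m.
    def bonferroni(pvalues, alpha=0.05):
        m = len(pvalues)
        return [p <= alpha / m for p in pvalues]

    print(bonferroni([0.001, 0.012, 0.034, 0.20]))
    # [True, True, False, False], since alpha / m = 0.05 / 4 = 0.0125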

The Šidák procedure

Main article: Šidák correction
  • If the test statistics are independent, then testing each hypothesis at level  \alpha_{SID} = 1-(1-\alpha)^\frac{1}{m} is Šidák's multiple testing procedure.
  • This procedure is more powerful than Bonferroni, but the gain is small, and the procedure is far less general than Bonferroni's since it requires independence.
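
A minimal Python sketch of the Šidák threshold, compared with Bonferroni's \alpha/m:

    # Sidak: per-test level 1 - (1 - alpha)^(1/m), valid under independence.
    def sidak_threshold(alpha, m):
        return 1 - (1 - alpha) ** (1 / m)

    alpha, m = 0.05, 10
    print(sidak_threshold(alpha, m))  # ~0.00512, slightly above Bonferroni's 0.005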

Tukey's procedure

Main article: Tukey's range test
  • Tukey's procedure is only applicable for pairwise comparisons.
  • It assumes independence of the observations being tested, as well as equal variation across observations (homoscedasticity).
  • The procedure computes for each pair the studentized range statistic:  \frac{Y_{A}-Y_{B}}{SE}, where Y_{A} is the larger of the two means being compared, Y_{B} is the smaller, and SE is the standard error of the data in question.
  • Tukey's test is essentially a Student's t-test, except that it corrects for the family-wise error rate.
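
A hedged sketch using scipy.stats.tukey_hsd, available in recent SciPy versions; the three samples below are made-up data standing in for three groups:

    # Tukey's HSD over all pairwise comparisons; made-up sample data.
    from scipy.stats import tukey_hsd

    group_a = [24.5, 23.5, 26.4, 27.1, 29.9]
    group_b = [28.4, 34.2, 29.5, 32.2, 30.1]
    group_c = [26.1, 28.3, 24.3, 26.2, 27.8]

    res = tukey_hsd(group_a, group_b, group_c)
    print(res.pvalue)  # matrix of FWER-adjusted p-values, one per pair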

A correction with a similar framework is Fisher’s LSD (Least Significant Difference).

Some newer solutions for strong level \alpha FWER control follow:

Holm's step-down procedure (1979)

  • Start by ordering the p-values (from lowest to highest) P_{(1)} \ldots P_{(m)} and let the associated hypotheses be H_{(1)} \ldots H_{(m)}
  • Let R be the smallest k such that P_{(k)} > \frac{\alpha}{m+1-k}
  • Reject the null hypotheses H_{(1)} \ldots H_{(R-1)}. If R = 1, then none of the hypotheses are rejected.
  • This procedure is uniformly more powerful than the Bonferroni procedure.
  • It is worth noting that this procedure controls the family-wise error rate for all m hypotheses at level α in the strong sense because it is essentially a closed testing procedure, in which each intersection is tested using the simple Bonferroni test.
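
A minimal Python sketch of the step-down rule, with illustrative p-values:

    # Holm: step down through the ordered p-values, stopping at the
    # first k with P_(k) > alpha / (m + 1 - k).
    def holm(pvalues, alpha=0.05):
        m = len(pvalues)
        order = sorted(range(m), key=lambda i: pvalues[i])
        reject = [False] * m
        for rank, i in enumerate(order):         # rank 0 corresponds to k = 1
            if pvalues[i] > alpha / (m - rank):  # i.e. alpha / (m + 1 - k)
                break
            reject[i] = True
        return reject

    print(holm([0.014, 0.001, 0.034, 0.20]))
    # [True, True, False, False]; plain Bonferroni would reject only the 0.001.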

Hochberg's step-up procedure (1988)

Hochberg's step-up procedure (1988) is performed using the following steps:[3]

  • Start by ordering the p-values (from lowest to highest) P_{(1)} \ldots P_{(m)} and let the associated hypotheses be H_{(1)} \ldots H_{(m)}
  • For a given \alpha, let R be the largest k such that P_{(k)} \leq \frac{\alpha}{m+1-k}
  • Reject the null hypotheses H_{(1)} \ldots H_{(R)}
  • Hochberg's procedure is more powerful than Holm's.
  • Nevertheless, while Holm's procedure is based on the Bonferroni inequality, with no restriction on the joint distribution of the test statistics, Hochberg's is based on the Simes test (1987), so it holds only under independence (and also under some forms of positive dependence).
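
A minimal Python sketch of the step-up rule, with illustrative p-values chosen so that Hochberg rejects hypotheses Holm would not:

    # Hochberg: find the largest k with P_(k) <= alpha / (m + 1 - k)
    # and reject the k hypotheses with the smallest p-values.
    def hochberg(pvalues, alpha=0.05):
        m = len(pvalues)
        order = sorted(range(m), key=lambda i: pvalues[i])
        r = 0
        for k in range(1, m + 1):
            if pvalues[order[k - 1]] <= alpha / (m + 1 - k):
                r = k
        reject = [False] * m
        for i in order[:r]:
            reject[i] = True
        return reject

    print(hochberg([0.012, 0.030, 0.035, 0.040]))
    # [True, True, True, True]; Holm would stop after the first rejection.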

Dunnett's correction

Main article: Dunnett's test

Charles Dunnett (1955, 1966; not to be confused with Dunn) described an alternative alpha-error adjustment for the case in which k groups are compared to the same control group. Now known as Dunnett's test, this method is less conservative than the Bonferroni adjustment.
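
A hedged sketch using scipy.stats.dunnett, available in SciPy 1.11 and later; the samples are made-up data for two treatments and one control:

    # Dunnett's test: each treatment compared against the same control.
    from scipy.stats import dunnett

    control   = [10.1, 9.8, 10.3, 10.0, 9.9]
    treat_one = [10.9, 11.2, 10.7, 11.0, 10.8]
    treat_two = [10.2, 10.0, 10.4, 10.1, 10.3]

    res = dunnett(treat_one, treat_two, control=control)
    print(res.pvalue)  # adjusted p-values, one per treatment-vs-control comparison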

Scheffé's method

Main article: Scheffé's method

Closed testing procedure

Closed testing procedures control the familywise type I error rate if, in the closed testing procedure, all intersection hypotheses are tested using valid local level \alpha tests. Closed testing procedures are a flexible general class of testing procedures that includes, for example, the Bonferroni procedure and Holm's step-down procedure.
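
A minimal Python sketch of this idea, using a Bonferroni test as the local level \alpha test for every intersection; it enumerates all 2^m - 1 intersections, so it is only practical for small m:

    # Closed testing: reject H_i iff every intersection hypothesis
    # containing i is rejected by its local level-alpha test.
    from itertools import combinations

    def closed_testing(pvalues, alpha=0.05):
        m = len(pvalues)
        def local_bonferroni(subset):  # local Bonferroni test of the intersection
            return min(pvalues[i] for i in subset) <= alpha / len(subset)
        all_subsets = [s for size in range(1, m + 1)
                       for s in combinations(range(m), size)]
        return [all(local_bonferroni(s) for s in all_subsets if i in s)
                for i in range(m)]

    print(closed_testing([0.014, 0.001, 0.034, 0.20]))
    # [True, True, False, False], matching Holm's step-down on the same p-values.

With Bonferroni local tests this reproduces Holm's step-down procedure, which is precisely the shortcut relationship described above.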

Other procedures

Other advanced procedures that ensure strong level \alpha FWER control include the maximum modulus test.

There are also many alternatives to controlling the familywise error rate. Most notable is the false discovery rate, introduced by Benjamini and Hochberg in 1995, which addresses many of the problems of large-scale inference in a more practical way.

Example

Consider a randomized clinical trial for a new antidepressant drug using three groups:

  • Existing drug
  • New drug
  • Placebo

In such a design, the researcher might be interested in whether depressive symptoms (measured, for example, by a Beck Depression Inventory score) decreased to a greater extent for those using the new drug compared to the old drug. Further, one might be interested in whether any side effects (e.g., hypersomnia, decreased sex drive, and dry mouth) were observed. In such a case, two families would likely be identified:

  1. Effect of drug on depressive symptoms
  2. Occurrence of any side effects.

The researcher would assign an acceptable type I error rate, \alpha (usually 0.05), to each family and control for family-wise error using appropriate multiple comparison procedures:

  • For the first family, the effect of the antidepressant on depressive symptoms, pairwise comparisons among the groups might be jointly controlled using techniques such as Tukey's range test. A Bonferroni correction might also suffice here since there are only three tests (three comparisons of depressive symptoms).
  • In terms of the side-effect profile, since we have three comparisons for each side effect, allowing each of the 9 comparisons its own alpha of 0.05 would result in a 37% chance of making at least one type I error (i.e., 1 - 0.95^9 ≈ 0.37). With a total of 9 hypotheses, the Bonferroni correction might be too conservative in this case; a more powerful tool such as Tukey's range test or the Holm-Bonferroni method would probably be more suitable. For example, the researcher may divide \alpha by three (0.05/3 ≈ 0.0167) and allocate 0.0167 to each side effect's multiple comparison procedure. In the case of Tukey's range test, the critical value of q, the studentized range statistic, would thus be based on an \alpha value of 0.0167. The arithmetic is sketched below.
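
The arithmetic behind these numbers, as a small Python sketch:

    # 9 side-effect tests, each at alpha = 0.05, assuming independence:
    alpha, tests = 0.05, 9
    print(1 - (1 - alpha) ** tests)  # ~0.37, the chance of at least one type I error
    print(alpha / 3)                 # ~0.0167, the alpha allocated per side effect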

References

  1. ^ Shaffer, J. P. (1995). "Multiple Hypothesis Testing". Annual Review of Psychology 46: 561–584.
  2. ^ Hochberg, Y.; Tamhane, A. C. (1987). Multiple Comparison Procedures. New York: Wiley.
  3. ^ Hochberg, Yosef (1988). "A Sharper Bonferroni Procedure for Multiple Tests of Significance". Biometrika 75 (4): 800–802. doi:10.1093/biomet/75.4.800.
