Tukey's range test

From Wikipedia, the free encyclopedia

Tukey's test, also known as the Tukey range test, Tukey method, Tukey's honest significance test, Tukey's HSD (honest significant difference) test,[1] or the Tukey–Kramer method, is a single-step multiple comparison procedure and statistical test. It can be used on raw data or in conjunction with an ANOVA (post-hoc analysis) to find means that are significantly different from each other. Named after John Tukey,[2] it compares all possible pairs of means, and is based on a studentized range distribution (q), which is similar to the distribution of t from the t-test.[3] Tukey's HSD test should not be confused with the Tukey mean-difference test (also known as the Bland–Altman test).

Tukey's test compares the means of every treatment to the means of every other treatment; that is, it applies simultaneously to the set of all pairwise comparisons

\mu_i - \mu_j

and identifies any difference between two means that is greater than the expected standard error. The confidence coefficient for the set, when all sample sizes are equal, is exactly 1 − α. For unequal sample sizes, the confidence coefficient is greater than 1 − α. In other words, the Tukey method is conservative when there are unequal sample sizes.

Assumptions of Tukey's test

  1. The observations being tested are independent within and among the groups.
  2. The groups associated with each mean in the test are normally distributed.
  3. There is equal within-group variance across the groups associated with each mean in the test (homogeneity of variance).

The test statistic

Tukey's test is based on a formula very similar to that of the t-test. In fact, Tukey's test is essentially a t-test, except that it corrects for experiment-wise error rate (when there are multiple comparisons being made, the probability of making a type I error increases — Tukey's test corrects for that, and is thus more suitable for multiple comparisons than doing a number of t-tests would be).[3]

The formula for Tukey's test is:

 q_s = \frac{Y_A - Y_B}{SE},

where Y_A is the larger of the two means being compared, Y_B is the smaller of the two means being compared, and SE is the standard error of these means (with equal group sizes n, SE = \sqrt{MS_{within}/n}).

This q_s value can then be compared to a critical value, q_{critical}, from the studentized range distribution. If q_s is larger than q_{critical}, the two means are said to be significantly different.[3]
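
As a minimal sketch of this calculation (the data here are hypothetical, invented purely for illustration), the statistic can be computed with the standard library alone; the resulting q_s would then be compared against a tabulated critical value:

```python
from statistics import mean

# Hypothetical measurements for two treatment groups of equal size.
group_a = [24.5, 23.5, 26.4, 27.1, 29.9]
group_b = [28.4, 34.2, 29.5, 32.2, 30.1]

n = len(group_a)  # per-group sample size

def ss(g):
    """Sum of squared deviations from the group mean."""
    m = mean(g)
    return sum((x - m) ** 2 for x in g)

# Pooled within-group variance for these two groups.
ms_within = (ss(group_a) + ss(group_b)) / (2 * (n - 1))

# Standard error of a group mean: sqrt(MS_within / n).
se = (ms_within / n) ** 0.5

# q_s takes the larger mean minus the smaller, so it is non-negative.
y_large = max(mean(group_a), mean(group_b))
y_small = min(mean(group_a), mean(group_b))
q_s = (y_large - y_small) / se

print(round(q_s, 3))
```

If q_s exceeds the critical value q_{critical} for the chosen α, the number of groups, and the error degrees of freedom, the two means differ significantly.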

Since the null hypothesis for Tukey's test states that all means being compared are from the same population (i.e. μ1 = μ2 = μ3 = ... = μn), the means should be normally distributed (according to the central limit theorem). This gives rise to the normality assumption of Tukey's test.

The studentized range (q) distribution

The Tukey method uses the studentized range distribution. Suppose that we take a sample of size n from each of k populations with the same normal distribution N(μ, σ), that \bar{y}_{\min} is the smallest and \bar{y}_{\max} the largest of these sample means, and that S^2 is the pooled sample variance of these samples. Then the following random variable has a studentized range distribution:

q = \frac{(\overline{y}_{max} - \overline{y}_{min})}{S\sqrt{2/n}}
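
This construction can be illustrated by a small Monte Carlo sketch (my own illustration, not part of the article), sampling k groups from the same normal distribution and forming q exactly as defined above, with the \sqrt{2/n} normalization used here:

```python
import random
from statistics import mean

random.seed(42)

def simulate_q(k=3, n=5):
    """Draw k samples of size n from N(0, 1) and form the q statistic
    as defined above: (ymax - ymin) / (S * sqrt(2/n))."""
    groups = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(k)]
    means = [mean(g) for g in groups]
    # Pooled sample variance across all k groups.
    s2 = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups) / (k * (n - 1))
    return (max(means) - min(means)) / (s2 * 2 / n) ** 0.5

draws = sorted(simulate_q() for _ in range(20000))
q95 = draws[int(0.95 * len(draws))]  # empirical 95th percentile
print(round(q95, 2))
```

Note that because this follows the article's normalization (with the \sqrt{2} factor in the denominator), the simulated percentile is smaller by a factor of \sqrt{2} than the values in standard studentized range tables, matching the tabulation caveat discussed below.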


This statistic underlies the critical value of q, which depends on three factors:

  1. α (the Type I error rate, or the probability of rejecting a true null hypothesis)
  2. k (the number of populations)
  3. df (the degrees of freedom, N − k, where N is the total number of observations)

The distribution of q has been tabulated and appears in many textbooks on statistics and online. In some tables the distribution of q has been tabulated without the \sqrt{2} factor. To tell which convention a table uses, one can compare its entry for k = 2 with the critical value of Student's t-distribution for the same degrees of freedom and the same α. In addition, R offers a cumulative distribution function (ptukey) and a quantile function (qtukey) for q.
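
That k = 2 check can be sketched in Python (assuming SciPy ≥ 1.7, whose scipy.stats.studentized_range is an analogue of R's ptukey/qtukey): the standard tabulated studentized range quantile should equal \sqrt{2} times the corresponding two-sided Student's t critical value.

```python
import math
from scipy.stats import studentized_range, t

alpha, df = 0.05, 12

# Upper quantile of the studentized range for k = 2 groups
# (standard-table convention, i.e. including the sqrt(2) factor).
q_crit = studentized_range.ppf(1 - alpha, 2, df)

# Two-sided t critical value with the same alpha and df.
t_crit = t.ppf(1 - alpha / 2, df)

# With k = 2 the two agree up to the sqrt(2) factor; a table that
# omits the factor would match t_crit directly.
print(q_crit, math.sqrt(2) * t_crit)
```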

Confidence limits

The Tukey confidence limits for all pairwise comparisons with confidence coefficient of at least 1 − α are

\bar{y}_{i\bullet}-\bar{y}_{j\bullet} \pm \frac{q_{\alpha;k;N-k}}{\sqrt{2}}\widehat{\sigma}_\varepsilon \sqrt{\frac{2}{n}} \qquad i,j=1,\ldots,k\quad i\neq j.

Notice that the point estimator and the estimated variance are the same as those for a single pairwise comparison. The only difference between the confidence limits for simultaneous comparisons and those for a single comparison is the multiple of the estimated standard deviation.

Note also that the sample sizes must be equal when using the studentized range approach, and that \widehat{\sigma}_\varepsilon is the standard deviation of the entire design, not just that of the two groups being compared. It is possible to work with unequal sample sizes: in this case, one must calculate the estimated standard deviation for each pairwise comparison, as formalized by Clyde Kramer in 1956, so the procedure for unequal sample sizes is sometimes referred to as the Tukey–Kramer method, which is as follows:

\bar{y}_{i\bullet}-\bar{y}_{j\bullet} \pm \frac{q_{\alpha;k;N-k}}{\sqrt{2}}\widehat{\sigma}_\varepsilon \sqrt{\frac{1}{n_i} + \frac{1}{n_j}} \qquad i,j=1,\ldots,k\quad i\neq j,

where n_i and n_j are the sizes of groups i and j respectively, and the degrees of freedom for the whole design (N − k) are used.
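
A sketch of a Tukey–Kramer interval for one pairwise comparison (all summary numbers below are hypothetical, and SciPy ≥ 1.7 is assumed for the studentized range quantile):

```python
from scipy.stats import studentized_range

# Hypothetical summary statistics from a one-way design with k = 3 groups.
k, N = 3, 18                # number of groups and total observations
n_i, n_j = 7, 5             # unequal sizes of the two groups compared
ybar_i, ybar_j = 11.2, 8.1  # the two group sample means
mse = 4.0                   # pooled error variance (sigma_eps-hat squared)
alpha = 0.05

# Studentized range critical value with N - k error degrees of freedom.
q_crit = studentized_range.ppf(1 - alpha, k, N - k)

# Tukey-Kramer half-width: (q / sqrt(2)) * sigma_hat * sqrt(1/n_i + 1/n_j).
half_width = (q_crit / 2 ** 0.5) * mse ** 0.5 * (1 / n_i + 1 / n_j) ** 0.5

diff = ybar_i - ybar_j
lo, hi = diff - half_width, diff + half_width
print(f"{diff:.2f} +/- {half_width:.2f} -> ({lo:.2f}, {hi:.2f})")
```

If the resulting interval excludes zero, the pair of means is declared significantly different at the chosen family-wise level.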

Advantages and disadvantages

When doing all pairwise comparisons, this method is considered the best available when confidence intervals are needed or sample sizes are not equal. When sample sizes are equal and confidence intervals are not needed, Tukey's test is slightly less powerful than step-down procedures, but if those are not available Tukey's test is the next-best choice, and unless the number of groups is large, the loss in power will be slight. In the general case, when many or all contrasts might be of interest, Scheffé's method tends to give narrower confidence limits and is therefore the preferred method.

Notes

  1. ^ Lowry, Richard. "One Way ANOVA – Independent Samples". Vassar.edu. Retrieved 4 December 2008.
  2. ^ Tukey, John (1949). "Comparing Individual Means in the Analysis of Variance". Biometrics 5 (2): 99–114.
  3. ^ a b c Linton, L.R.; Harder, L.D. (2007). Biology 315 – Quantitative Biology Lecture Notes. University of Calgary, Calgary, AB.

Further reading

  • Douglas C. Montgomery (2013) "Design and Analysis of Experiments", eighth edition, Wiley, section 3.5.7.
