The studentized range, q, was presented by Newman (1939) and Keuls (1952), with earlier unpublished work by John Tukey. It is the base statistic for the studentized range distribution, which is used in multiple comparison procedures, such as the single-step Tukey's range test and Duncan's step-down procedure, and in constructing confidence intervals that remain valid after data snooping has occurred.
The value of the studentized range is most often represented by the variable q.
The studentized range computed from a list x_1, ..., x_n of numbers is given by

    q_n = \frac{\max\{x_1,\ldots,x_n\} - \min\{x_1,\ldots,x_n\}}{s},

where

    s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2

is the sample variance and

    \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i

is the sample mean.
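The definition above can be sketched directly in code. This is a minimal illustration using only the Python standard library; the function name `studentized_range` is chosen here for clarity and is not from any particular library.

```python
import math

def studentized_range(xs):
    """Return q_n = (max - min) / s for a list of numbers,
    where s is the sample standard deviation (n - 1 denominator)."""
    n = len(xs)
    xbar = sum(xs) / n                                   # sample mean
    s = math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))
    return (max(xs) - min(xs)) / s

# Example: range is 4 and s = sqrt(2.5), so q = 4 / sqrt(2.5) ≈ 2.53
print(studentized_range([1, 2, 3, 4, 5]))
```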
The critical value of q depends on three factors:
- α (the probability of rejecting a true null hypothesis)
- n (the number of observations or groups)
- ν (the number of degrees of freedom used in estimating the sample variance)
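The dependence of the critical value on these three factors can be illustrated by Monte Carlo simulation: draw the group means as independent standard normals and an independent chi-square-based variance estimate, and take an empirical quantile of the resulting ratios. This is only a sketch for intuition; published tables (e.g. Pearson & Hartley) or a statistical library should be used in practice. The function name `q_critical` and the parameter `k` for the number of means are notational choices made here.

```python
import math
import random

def q_critical(alpha, k, df, reps=20000, seed=0):
    """Monte Carlo approximation of the upper-alpha critical value of the
    studentized range for k normal means and df error degrees of freedom.
    Illustrative only; not a substitute for published tables."""
    rng = random.Random(seed)
    draws = []
    for _ in range(reps):
        z = [rng.gauss(0.0, 1.0) for _ in range(k)]       # k standardized means
        # independent variance estimate: chi-square with df d.f., divided by df
        s2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df)) / df
        draws.append((max(z) - min(z)) / math.sqrt(s2))
    draws.sort()
    return draws[int((1 - alpha) * reps)]

# Tables give q_{0.05; 2, 20} ≈ 2.95 and q_{0.05; 5, 20} ≈ 4.23;
# the simulated values should land near these.
print(q_critical(0.05, 2, 20), q_critical(0.05, 5, 20))
```

Note how the estimate for k = 5 exceeds the one for k = 2, matching the discussion below of how the critical value grows with the number of means.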
If X1, ..., Xn are independent identically distributed random variables that are normally distributed, the probability distribution of their studentized range is what is usually called the studentized range distribution. This probability distribution is the same regardless of the expected value and standard deviation of the normal distribution from which the sample is drawn: tables are available. It has applications to hypothesis testing and multiple comparisons. For example, Tukey's range test and Duncan's new multiple range test (MRT), which use the q statistic, can be applied as post-hoc analyses to determine between which two groups there is a significant difference after the null hypothesis has been rejected by analysis of variance.
When only two groups need to be compared, the studentized range distribution is similar to the Student's t distribution, differing only in that it takes into account the number of means under consideration. The more means under consideration, the larger the critical value is. This makes sense since the more means there are, the greater the likelihood that at least some differences between pairs of means will be large due to chance alone.
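In the two-group case the connection to Student's t is exact: the studentized range statistic equals √2 · |t|, so its critical value is √2 times the two-sided t critical value. The following arithmetic check uses the widely tabulated value t_{0.975, 20} ≈ 2.086; the match with the tabulated q_{0.05; 2, 20} ≈ 2.95 is an illustration, not a derivation.

```python
import math

# For k = 2 means, q = sqrt(2) * |t|, so the critical values are related by
# q_{alpha; 2, nu} = sqrt(2) * t_{1 - alpha/2, nu}.
t_crit = 2.086                    # t_{0.975, 20} from a standard t table
q_crit = math.sqrt(2) * t_crit    # ≈ 2.95, agreeing with q tables for k = 2
print(round(q_crit, 2))
```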
The concept is named after William Sealy Gosset, who wrote under the pseudonym "Student". The fact that the standard deviation is a sample standard deviation rather than the population standard deviation, and thus something that differs from one random sample to the next, is essential to the definition.
The variability in the value of the sample standard deviation introduces additional uncertainty into the values calculated. This complicates the problem of finding the probability distribution of any statistic that is studentized.
- John A. Rafter (2002). "Multiple Comparison Methods for Means". SIAM Review 44 (2): 259–278.
- Pearson & Hartley (1970, Section 14, Table 29)
- Pearson & Hartley (1970, Section 14.2)
- Pearson, E.S.; Hartley, H.O. (1970) Biometrika Tables for Statisticians, Volume 1, 3rd Edition, Cambridge University Press. ISBN 0-521-05920-8
- John Neter, Michael H. Kutner, Christopher J. Nachtsheim, William Wasserman (1996) Applied Linear Statistical Models, fourth edition, McGraw-Hill, page 726.
- John A. Rice (1995) Mathematical Statistics and Data Analysis, second edition, Duxbury Press, pages 451–452.