Wilcoxon signed-rank test

The Wilcoxon signed-rank test is a non-parametric statistical hypothesis test used when comparing two related samples, matched samples, or repeated measurements on a single sample to assess whether their population mean ranks differ (i.e. it is a paired difference test). It can be used as an alternative to the paired Student's t-test (also known as the t-test for matched pairs or the t-test for dependent samples) when the population cannot be assumed to be normally distributed.[1]

History

The test is named for Frank Wilcoxon (1892–1965) who, in a single paper, proposed both it and the rank-sum test for two independent samples (Wilcoxon, 1945).[2] The test was popularized by Sidney Siegel (1956) in his influential textbook on non-parametric statistics.[3] Siegel used the symbol T for a value related to, but not the same as, W. In consequence, the test is sometimes referred to as the Wilcoxon T test, and the test statistic is reported as a value of T.

Assumptions

  1. Data are paired and come from the same population.
  2. Each pair is chosen randomly and independently[citation needed].
  3. The data are measured at least on an ordinal scale (i.e., they cannot be nominal).

Test procedure

Let N be the sample size, i.e., the number of pairs. Thus, there are a total of 2N data points. For pairs i = 1, ..., N, let x_{1,i} and x_{2,i} denote the measurements.

H0: difference between the pairs follows a symmetric distribution around zero
H1: difference between the pairs does not follow a symmetric distribution around zero.
  1. For i = 1, ..., N, calculate |x_{2,i} − x_{1,i}| and sgn(x_{2,i} − x_{1,i}), where sgn is the sign function.
  2. Exclude pairs with |x_{2,i} − x_{1,i}| = 0. Let N_r be the reduced sample size.
  3. Order the remaining N_r pairs from smallest absolute difference to largest absolute difference, |x_{2,i} − x_{1,i}|.
  4. Rank the pairs, starting with the smallest as 1. Ties receive a rank equal to the average of the ranks they span. Let R_i denote the rank.
  5. Calculate the test statistic
    W = Σ_{i=1}^{N_r} [sgn(x_{2,i} − x_{1,i}) · R_i], the sum of the signed ranks.
  6. Under the null hypothesis, W follows a specific distribution with no simple expression. This distribution has an expected value of 0 and a variance of N_r(N_r + 1)(2N_r + 1)/6.
    W can be compared to a critical value from a reference table.[1]
    The two-sided test consists of rejecting H0 if |W| ≥ W_{critical, N_r}.
  7. As N_r increases, the sampling distribution of W converges to a normal distribution. Thus,
    For N_r ≥ 10, a z-score can be calculated as z = W / σ_W, where σ_W = √(N_r(N_r + 1)(2N_r + 1)/6).
    To perform a two-sided test, reject H0 if |z| > z_{critical} (for example, |z| > 1.96 at the 5% significance level).
    Alternatively, one-sided tests can be performed with either the exact or the approximate distribution. p-values can also be calculated. A code sketch of this procedure follows the list.
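The steps above translate almost line for line into code. The following is a minimal sketch in Python, assuming SciPy is available for tie-averaged ranking; the function name signed_rank_W is illustrative, not a library API:

```python
import math
from scipy.stats import rankdata  # assigns average ranks to ties by default

def signed_rank_W(x1, x2):
    """Minimal sketch of the signed-rank procedure described above."""
    # Steps 1-2: signed differences, excluding zero differences
    diffs = [b - a for a, b in zip(x1, x2) if b != a]
    n_r = len(diffs)
    # Steps 3-4: rank the absolute differences; ties get the average rank
    ranks = rankdata([abs(d) for d in diffs])
    # Step 5: W is the sum of the signed ranks
    W = sum(r if d > 0 else -r for d, r in zip(diffs, ranks))
    # Steps 6-7: standard deviation of W under H0 and the large-sample z-score
    sigma_W = math.sqrt(n_r * (n_r + 1) * (2 * n_r + 1) / 6)
    return W, n_r, W / sigma_W
```

The exact-distribution comparison against a reference table is omitted here; the sketch only returns the quantities needed for the normal approximation.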

Original test

Wilcoxon's original proposal used a different statistic. Denoted by Siegel as the T statistic, it is the smaller of the two sums of ranks of a given sign; in the example given below, T therefore equals 3 + 4 + 5 + 6 = 18. Low values of T are required for significance. As is clear from the example below, T is easier to calculate by hand than W, and the test is equivalent to the two-sided test described above; however, the distribution of the statistic under H0 has to be adjusted.
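A corresponding sketch of Siegel's T, under the same assumptions as the code above (SciPy for tie-averaged ranks; the helper name is illustrative):

```python
from scipy.stats import rankdata

def siegel_T(x1, x2):
    """Smaller of the two rank sums of a given sign (Siegel's T)."""
    diffs = [b - a for a, b in zip(x1, x2) if b != a]   # drop zero differences
    ranks = rankdata([abs(d) for d in diffs])           # average ranks for ties
    pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    neg = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(pos, neg)
```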

Example

  i    x_{2,i}   x_{1,i}   sgn   abs
  1      125       110       1    15
  2      115       122      –1     7
  3      130       125       1     5
  4      140       120       1    20
  5      140       140             0
  6      115       124      –1     9
  7      140       123       1    17
  8      125       137      –1    12
  9      140       135       1     5
 10      135       145      –1    10

ordered by absolute difference:

  i    x_{2,i}   x_{1,i}   sgn   abs   R_i   sgn · R_i
  5      140       140             0
  3      130       125       1     5   1.5        1.5
  9      140       135       1     5   1.5        1.5
  2      115       122      –1     7     3         –3
  6      115       124      –1     9     4         –4
 10      135       145      –1    10     5         –5
  8      125       137      –1    12     6         –6
  1      125       110       1    15     7          7
  7      140       123       1    17     8          8
  4      140       120       1    20     9          9
Here sgn denotes the sign of the difference, sgn(x_{2,i} − x_{1,i}); abs denotes its absolute value, |x_{2,i} − x_{1,i}|; and R_i is the rank of the absolute difference. Notice that pairs 3 and 9 are tied in absolute value. They would be ranked 1 and 2, so each gets the average of those ranks, 1.5. Pair 5 has a zero difference and is excluded, leaving N_r = 9. Summing the signed ranks gives W = 1.5 + 1.5 − 3 − 4 − 5 − 6 + 7 + 8 + 9 = 9.
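As a check, the hand computation can be reproduced in Python with SciPy's rankdata (variable names are illustrative):

```python
from scipy.stats import rankdata

# Data from the table above (second column is x_{2,i}, third is x_{1,i})
x2 = [125, 115, 130, 140, 140, 115, 140, 125, 140, 135]
x1 = [110, 122, 125, 120, 140, 124, 123, 137, 135, 145]

diffs = [b - a for a, b in zip(x1, x2) if b != a]   # pair 5 (zero difference) is dropped
ranks = rankdata([abs(d) for d in diffs])           # the tie (pairs 3 and 9) gets rank 1.5
W = sum(r if d > 0 else -r for d, r in zip(diffs, ranks))
T = min(sum(r for d, r in zip(diffs, ranks) if d > 0),
        sum(r for d, r in zip(diffs, ranks) if d < 0))
print(W, T)   # 9.0 18.0
```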

Effect size

To compute an effect size for the signed-rank test, one can use the rank correlation.

If the test statistic W is reported, the rank correlation r is equal to the test statistic W divided by the total rank sum S, that is, r = W/S.[4] Using the above example, the test statistic is W = 9. A sample size of 9 gives a total rank sum of S = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 = 45. Hence, the rank correlation is 9/45, so r = 0.20.

If the test statistic T is reported, an equivalent way to compute the rank correlation is with the difference in proportion between the two rank sums, which is the Kerby (2014) simple difference formula.[4] To continue with the current example, the sample size is 9, so the total rank sum is 45. T is the smaller of the two rank sums, so T is 3 + 4 + 5 + 6 = 18. From this information alone, the remaining rank sum can be computed, because it is the total sum S minus T, or in this case 45 - 18 = 27. Next, the two rank-sum proportions are 27/45 = 60% and 18/45 = 40%. Finally, the rank correlation is the difference between the two proportions (.60 minus .40), hence r = .20.
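Both routes give the same value for the worked example; a minimal check in Python (plain arithmetic, no library assumptions):

```python
# Effect size for the worked example (N_r = 9 pairs after dropping the zero difference)
W = 9.0                   # signed-rank statistic
T = 18.0                  # Siegel's T, the smaller rank sum
S = 9 * (9 + 1) / 2       # total rank sum: 45

r_from_W = W / S                     # 0.2
r_from_T = (S - T) / S - T / S       # Kerby simple difference: 0.6 - 0.4 = 0.2
print(r_from_W, r_from_T)            # both 0.2
```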

Implementations

  • ALGLIB includes an implementation of the Wilcoxon signed-rank test in C++, C#, Delphi, Visual Basic, etc.
  • The free statistical software R includes an implementation of the test as wilcox.test(x, y, paired=TRUE), where x and y are vectors of equal length.[5]
  • GNU Octave implements various one-tailed and two-tailed versions of the test in the wilcoxon_test function.
  • SciPy includes an implementation of the Wilcoxon signed-rank test in Python (a usage sketch follows this list).
  • Accord.NET includes an implementation of the Wilcoxon signed-rank test in C# for .NET applications.
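For the SciPy entry above, a brief usage sketch (the exact statistic convention reported by scipy.stats.wilcoxon depends on the arguments and the SciPy version, so no numbers are asserted here):

```python
from scipy.stats import wilcoxon

# Paired data from the worked example
x2 = [125, 115, 130, 140, 140, 115, 140, 125, 140, 135]
x1 = [110, 122, 125, 120, 140, 124, 123, 137, 135, 145]

# Paired test: pass the two related samples; by default, zero differences are dropped.
res = wilcoxon(x2, x1)
print(res.statistic, res.pvalue)
```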

See also

  • Mann–Whitney–Wilcoxon test (the variant for two independent samples)
  • Sign test (like the Wilcoxon test, but without assuming a symmetric distribution of the differences around the median, and without using the magnitude of the differences)

References

  1. Lowry, Richard. "Concepts & Applications of Inferential Statistics". Retrieved 24 March 2011.
  2. Wilcoxon, Frank (Dec 1945). "Individual comparisons by ranking methods" (PDF). Biometrics Bulletin. 1 (6): 80–83.
  3. Siegel, Sidney (1956). Non-parametric statistics for the behavioral sciences. New York: McGraw-Hill. pp. 75–83.
  4. Kerby, Dave S. (December 2014). "The simple difference formula: An approach to teaching nonparametric correlation". Comprehensive Psychology. 3. doi:10.2466/11.IT.3.1.
  5. Dalgaard, Peter (2008). Introductory Statistics with R. Springer Science & Business Media. pp. 99–100. ISBN 978-0-387-79053-4.
