Wilcoxon signed-rank test

Revision as of 15:18, 30 June 2015

The Wilcoxon signed-rank test is a non-parametric statistical hypothesis test used when comparing two related samples, matched samples, or repeated measurements on a single sample to assess whether their population mean ranks differ (i.e. it is a paired difference test). It can be used as an alternative to the paired Student's t-test, t-test for matched pairs, or the t-test for dependent samples when the population cannot be assumed to be normally distributed.[1]

The Wilcoxon signed-rank test is not the same as the Wilcoxon rank-sum test, although both are nonparametric and involve summation of ranks.

History

The test is named for Frank Wilcoxon (1892–1965) who, in a single paper, proposed both it and the rank-sum test for two independent samples (Wilcoxon, 1945).[2] The test was popularized by Sidney Siegel (1956)[3] in his influential textbook on non-parametric statistics. Siegel used the symbol T for a value related to, but not the same as, W. In consequence, the test is sometimes referred to as the Wilcoxon T test, and the test statistic is reported as a value of T.

Assumptions

  1. Data are paired and come from the same population.
  2. Each pair is chosen randomly and independently.
  3. The data are measured at least on an ordinal scale (cannot be nominal).

Test procedure

Let N be the sample size, i.e. the number of pairs. Thus, there are a total of 2N data points. For pairs i = 1, ..., N, let x1,i and x2,i denote the measurements.

H0: the difference between the pairs follows a symmetric distribution around zero
H1: the difference between the pairs does not follow a symmetric distribution around zero.
  1. For i = 1, ..., N, calculate |x2,i − x1,i| and sgn(x2,i − x1,i), where sgn is the sign function.
  2. Exclude pairs with |x2,i − x1,i| = 0. Let Nr be the reduced sample size.
  3. Order the remaining Nr pairs from smallest absolute difference to largest absolute difference, |x2,i − x1,i|.
  4. Rank the pairs, starting with the smallest as 1. Ties receive a rank equal to the average of the ranks they span. Let Ri denote the rank.
  5. Calculate the test statistic
    W = Σ sgn(x2,i − x1,i) · Ri (sum over i = 1, ..., Nr), the sum of the signed ranks.
  6. Under the null hypothesis, W follows a specific distribution with no simple closed-form expression. This distribution has an expected value of 0 and a variance of Nr(Nr + 1)(2Nr + 1)/6.
    W can be compared to a critical value from a reference table.[1]
    The two-sided test consists of rejecting H0 if |W| ≥ Wcritical,Nr.
  7. As Nr increases, the sampling distribution of W converges to a normal distribution. Thus,
    for Nr ≥ 10, a z-score can be calculated as z = W/σW, where σW = √(Nr(Nr + 1)(2Nr + 1)/6).
    If |z| > zcritical, then reject H0 (two-sided test).
    Alternatively, one-sided tests can be carried out with either the exact or the approximate distribution, and a p-value can also be calculated.
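The procedure above can be sketched in pure Python. This is a minimal illustration of steps 2–7, not a library implementation; the helper name signed_rank_W is chosen here for clarity, and the data are taken from the worked example below.

```python
from math import sqrt, copysign

def signed_rank_W(x1, x2):
    """Wilcoxon signed-rank statistic W for paired samples.

    Drops zero differences, ranks the absolute differences
    (ties get the average of the ranks they span), and sums
    the signed ranks. Also returns sigma_W and the z-score
    from the normal approximation.
    """
    diffs = [b - a for a, b in zip(x1, x2) if b - a != 0]   # step 2: exclude zeros
    diffs.sort(key=abs)                                     # step 3: order by |difference|
    n_r = len(diffs)
    ranks = [0.0] * n_r
    i = 0
    while i < n_r:                                          # step 4: average ranks over ties
        j = i
        while j + 1 < n_r and abs(diffs[j + 1]) == abs(diffs[i]):
            j += 1
        avg = (i + 1 + j + 1) / 2          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    W = sum(copysign(r, d) for r, d in zip(ranks, diffs))   # step 5: sum of signed ranks
    sigma_W = sqrt(n_r * (n_r + 1) * (2 * n_r + 1) / 6)     # step 6: sqrt of the variance
    return W, sigma_W, W / sigma_W                          # step 7: normal approximation

# Data from the worked example below (x1 = first measurement, x2 = second)
x1 = [110, 122, 125, 120, 140, 124, 123, 137, 135, 145]
x2 = [125, 115, 130, 140, 140, 115, 140, 125, 140, 135]
W, sigma_W, z = signed_rank_W(x1, x2)
```

Note that with Nr = 9 this example falls below the Nr ≥ 10 threshold, so in practice W would be compared to an exact critical value rather than relying on z.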

The T statistic used by Siegel is the smaller of the two sums of ranks of a given sign; in the example given below, therefore, T would equal 3 + 4 + 5 + 6 = 18. Low values of T are required for significance. As will be obvious from the example below, T is easier to calculate by hand than W, and the test is equivalent to the two-sided test described above (the distribution of the statistic under H0 has to be adjusted accordingly).
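The equivalence can be checked numerically: writing R+ and R− for the two rank sums, W = R+ − R− while the total rank sum is S = R+ + R− = Nr(Nr + 1)/2, so Siegel's T = min(R+, R−) = (S − |W|)/2. A small pure-Python check using the signed ranks from the example below:

```python
# Signed ranks from the worked example below (pairs 3, 9, 2, 6, 10, 8, 1, 7, 4)
signed_ranks = [1.5, 1.5, -3, -4, -5, -6, 7, 8, 9]

pos_sum = sum(r for r in signed_ranks if r > 0)   # R+ = 1.5 + 1.5 + 7 + 8 + 9 = 27
neg_sum = sum(-r for r in signed_ranks if r < 0)  # R- = 3 + 4 + 5 + 6 = 18
T = min(pos_sum, neg_sum)                         # Siegel's T = 18, as in the text

W = pos_sum - neg_sum                             # W = 9
S = pos_sum + neg_sum                             # total rank sum = 45
assert T == (S - abs(W)) / 2                      # T is recoverable from W and S
```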

Excluding zeros is not a statistically justified method and such an approach can lead to enormous calculation errors. A more stable method is:[4]

  • Calculate sgn(x2,i − x1,i) for all pairs, i = 1, ..., N (assume sgn(0) = 0)
  • Calculate sampling probabilities
  • For use normal approximation .

(Note that this value is undefined if all samples show a positive effect or if all samples show a negative effect. This is not the case with the test statistic W as originally defined.)

Example

i    x2,i  x1,i  sgn  |x2,i − x1,i|
1    125   110    1   15
2    115   122   −1    7
3    130   125    1    5
4    140   120    1   20
5    140   140    0    0
6    115   124   −1    9
7    140   123    1   17
8    125   137   −1   12
9    140   135    1    5
10   135   145   −1   10
ordered by absolute difference

i    x2,i  x1,i  sgn  |x2,i − x1,i|  Ri    sgn · Ri
5    140   140    0    0            (excluded)
3    130   125    1    5             1.5    1.5
9    140   135    1    5             1.5    1.5
2    115   122   −1    7             3     −3
6    115   124   −1    9             4     −4
10   135   145   −1   10             5     −5
8    125   137   −1   12             6     −6
1    125   110    1   15             7      7
7    140   123    1   17             8      8
4    140   120    1   20             9      9
sgn is the sign function, |x2,i − x1,i| is the absolute difference, and Ri is the rank. Notice that pairs 3 and 9 are tied in absolute value. They would be ranked 1 and 2, so each gets the average of those ranks, 1.5. The sum of the signed ranks is then W = 1.5 + 1.5 − 3 − 4 − 5 − 6 + 7 + 8 + 9 = 9.

Effect size

To compute an effect size for the signed-rank test, one can use the rank correlation.

If the test statistic W is reported, Kerby (2014) has shown that the rank correlation r is equal to the test statistic W divided by the total rank sum S, or r = W/S.[5] Using the above example, the test statistic is W = 9. The sample size of 9 has a total rank sum of S = (1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9) = 45. Hence, the rank correlation is 9/45, so r = .20.

If the test statistic T is reported, an equivalent way to compute the rank correlation is with the difference in proportion between the two rank sums, which is the Kerby (2014) simple difference formula.[5] To continue with the current example, the sample size is 9, so the total rank sum is 45. T is the smaller of the two rank sums, so T is 3 + 4 + 5 + 6 = 18. From this information alone, the remaining rank sum can be computed, because it is the total sum S minus T, or in this case 45 - 18 = 27. Next, the two rank-sum proportions are 27/45 = 60% and 18/45 = 40%. Finally, the rank correlation is the difference between the two proportions (.60 minus .40), hence r = .20.
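Both computations can be reproduced in a few lines of Python, a sketch using only the numbers reported in this section:

```python
# Rank correlation r as an effect size, per Kerby (2014)
n = 9                    # Nr from the worked example
S = n * (n + 1) // 2     # total rank sum = 45

# From the test statistic W:
W = 9
r_from_W = W / S         # 9/45 = 0.20

# From Siegel's T, via the simple difference formula:
# r is the difference between the two rank-sum proportions.
T = 18
other = S - T                  # remaining rank sum = 27
r_from_T = other / S - T / S   # 0.60 - 0.40 = 0.20
```

Either route gives the same effect size, r = .20.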

See also

  • Mann-Whitney-Wilcoxon test (the variant for two independent samples)
  • Sign test (Like Wilcoxon test, but without the assumption of symmetric distribution of the differences around the median, and without using the magnitude of the difference)

References

  1. ^ a b Lowry, Richard. "Concepts & Applications of Inferential Statistics". Retrieved 24 March 2011.
  2. ^ Wilcoxon, Frank (Dec 1945). "Individual comparisons by ranking methods" (PDF). Biometrics Bulletin. 1 (6): 80–83.
  3. ^ Siegel, Sidney (1956). Non-parametric statistics for the behavioral sciences. New York: McGraw-Hill. pp. 75–83.
  4. ^ Ikewelugo Cyprian Anaene Oyeka (Apr 2012). "Modified Wilcoxon Signed-Rank Test". Open Journal of Statistics: 172–176.
  5. ^ a b Kerby, D. S. (2014). "The simple difference formula: An approach to teaching nonparametric correlation". Innovative Teaching. 3 (1). doi:10.2466/11.IT.3.1.

Implementations

  • ALGLIB includes implementation of the Wilcoxon signed-rank test in C++, C#, Delphi, Visual Basic, etc.
  • The free statistical software R includes an implementation of the test as wilcox.test(x,y, paired=TRUE), where x and y are vectors of equal length.
  • GNU Octave implements various one-tailed and two-tailed versions of the test in the wilcoxon_test function.
  • SciPy includes an implementation of the Wilcoxon signed-rank test in Python as scipy.stats.wilcoxon.