Talk:Wilcoxon signed-rank test
WikiProject Statistics (Rated Start-class, High-importance)
It appears that some portion of this has been copied from a textbook: "The recommended cutoff varies from textbook to textbook — here we use 20 although some put it lower (10) or higher (25)." Is there a copyright violation happening here? 18.104.22.168 (talk) 04:39, 1 December 2009 (UTC)
- 1 Ordinal data
- 2 Assumptions for Wilcoxon signed-rank test?
- 3 One sample testing against hypothesis
- 4 Example wrong?
- 5 Conflict with Siegel & Castellan?
- 6 The W statistic
- 7 Assumptions for Wilcoxon signed-rank test - needs exception
- 8 Confidence Interval section is not good
- 9 A new section discussing the theory behind it and why it works?
- 10 "History" section
Ordinal data
I'm no expert, but I'm pretty sure that this test can also deal with ORDINAL data.
The test is in parts not well described; even I, as someone who teaches statistics, have difficulty understanding the test fully from the text. sigbert Mi Apr 11 08:09:57 CEST 2007
- NO! Since you are subtracting two values (e.g. pre vs. post) it has to be interval data, not just ordinal. --Statprof (talk) 17:18, 15 April 2008 (UTC)
- Wrong: As Statprof mentioned, subtracting values presupposes interval data: in your example it would presuppose that someone rated 10 is as far from someone rated 9 as someone rated, say, 3 is from someone rated 2. 22.214.171.124 (talk) —Preceding undated comment added 11:07, 2 March 2009 (UTC).
I would say that the test generally doesn't work for ordinal data. But if you have ordinal data and it is possible to assume that a change of two scale steps is always a greater change than a change of one scale step, a change of three steps always a greater change than a change of two steps, and so forth (a kind of ordinal data which is close to interval, i.e. semi-interval, even though it is not perfectly equidistant), then the test will work. It demands careful validation of the scale and rather heavy assumptions; to be on the safe side, a sign test could be preferable. //MG Stat. —Preceding unsigned comment added by 126.96.36.199 (talk) 06:16, 22 April 2010 (UTC)
An already referenced article says, 'the measures of XA and XB have the properties of at least an ordinal scale of measurement, so that it is meaningful to speak of "greater than," "less than," and "equal to."' , and lots of other pages (just one example) agree. To people claiming that 'subtraction requires interval data': the test uses the sign and the rank (not the value!) of each difference, which make sense for ordinal data. I'm changing the article. --asqueella (talk) 11:49, 24 March 2011 (UTC)
- The above comment by 188.8.131.52 makes a lot of sense, but I couldn't find a source for that. --asqueella (talk) 12:30, 24 March 2011 (UTC)
- All known WP:Reliable sources say it works for ordinal data, so the article should reflect that, not ill-informed speculation and WP:original research. Subtracting values does not require interval data. Defining a measure of effect size might do so, but this is a hypothesis test, not a measure of effect size. Qwfp (talk) 19:21, 13 April 2011 (UTC)
I agree with the notion that the test works on ordinal variables (essentially, if you have ordinal data, you skip one step). Reference for instance here: http://www.sussex.ac.uk/Users/grahamh/RM1web/WilcoxonHandoout2011.pdf. Emil, 14:02, 29.4.2013. — Preceding unsigned comment added by 184.108.40.206 (talk) 12:04, 29 April 2013 (UTC)
The test is based on ranks, so it is designed to deal with ordinal data. Potential confusion arises because the data that need to be ordinal are the differences between the matched observations, not the individual scores for each member of the pair. So if you have a sample of people and can rank them from least different to most different, then you have ordinal data and no problem. If, on the other hand, you have observations for these people (say) at pre and post on a five-point scale, you have to calculate the difference scores, and it is here that you encounter a problem if the measurements are only made on an ordinal scale. As Statprof points out, you have to calculate a difference between the pre and post measurements, and if the data are only ordinal you don't know if a change for person A from the 2nd to 4th categories is bigger/smaller/the same as a change from the 3rd to 5th category for person B. Or even, for that matter, a change from the 4th to 5th category for person C. SpeakSince (talk) 03:20, 6 June 2014 (UTC)
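For concreteness, here is a minimal sketch (made-up numbers, plain Python, not from any comment above) of the mechanics both sides are describing: the statistic is built only from the signs of the differences and the ranks of their absolute values, never from the raw magnitudes, and ranking the differences is exactly where the ordinal-scale subtlety enters.

```python
# Hypothetical paired data; every absolute difference is distinct,
# so the ranking is unambiguous (no ties to average).
pre  = [110, 122, 125, 120, 140, 124]
post = [125, 115, 130, 140, 141, 118]

# differences, dropping exact zeros as is standard practice
d = [b - a for a, b in zip(pre, post) if b != a]

# rank the absolute differences from smallest (rank 1) to largest
order = sorted(range(len(d)), key=lambda i: abs(d[i]))
rank = {i: r for r, i in enumerate(order, start=1)}

# only the signs and the ranks enter the statistic
w_plus  = sum(rank[i] for i, di in enumerate(d) if di > 0)
w_minus = sum(rank[i] for i, di in enumerate(d) if di < 0)
print(w_plus, w_minus)  # 14 7; they always sum to n(n+1)/2 = 21
```

Replacing any difference by another with the same sign and the same rank position leaves both sums unchanged, which is the crux of the ordinal-data argument.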
Assumptions for Wilcoxon signed-rank test?
What are the assumptions for the Wilcoxon signed-rank test? Unfortunately, the information and applications I find mostly contradict one another. Some statistical books state that one assumption of the test is that the distribution of the differences should be symmetric, but wouldn't this assumption only be true under the null? Thanks. —Preceding unsigned comment added by Fanny151984 (talk • contribs) 10:11, 23 August 2008 (UTC)
- For what it is worth, the official Matlab (version 2013b) software used by several engineers at large companies makes the following claim about the Wilcoxon signed-rank test: "The data are assumed to come from a continuous distribution, symmetric about its median." Their official documentation references Gibbons, J.D., and S. Chakraborti. Nonparametric Statistical Inference, 5th Ed., Boca Raton, FL: Chapman & Hall/CRC Press, Taylor & Francis Group, 2011; and Hollander, M., and D. A. Wolfe. Nonparametric Statistical Methods. Hoboken, NJ: John Wiley & Sons, Inc., 1999. This does not mean they are correct, but it is evidence to support the claim that this test assumes the data are symmetric about the median and, if it exists, the mean. 220.127.116.11 (talk) 23:07, 11 December 2014 (UTC)
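A quick simulation sketch (my own illustration, not from the Matlab documentation or the cited books) shows one consequence of the symmetry assumption: if the differences really are symmetric about 0, each sign is a fair coin flip given the ranks, so W+ (the positive-rank sum) averages n(n+1)/4.

```python
import random

random.seed(0)
n, trials = 15, 20000
total = 0
for _ in range(trials):
    # symmetric about 0 under H0; any symmetric distribution would do here
    d = [random.gauss(0, 1) for _ in range(n)]
    order = sorted(range(n), key=lambda i: abs(d[i]))
    # sum of the ranks attached to positive differences
    total += sum(r for r, i in enumerate(order, start=1) if d[i] > 0)

print(total / trials, n * (n + 1) / 4)  # both close to 60
```

With an asymmetric distribution that merely has median 0, the sign flips are no longer independent of the rank magnitudes, which is why symmetry (not just a zero median) appears in the assumption lists.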
One sample testing against hypothesis
Some statistics software (e.g. GraphPad Prism) claims to use the Wilcoxon signed-rank test for non-parametric one-sample testing, i.e. it compares the median of a single group to a hypothetical median. Prism distinguishes this from the more common two-sample version by calling that the Wilcoxon matched-pairs test.
N.B. The one sample test on the difference between matched pairs in two groups seems to be equivalent to a Wilcoxon signed rank test on those two groups and comparing to the null hypothesis that the median difference is equal to zero. Although in the one sample test you can compare to any hypothetical median, not just one equal to zero.
Example wrong?
The example describes the W+ statistic as a sum of the signed ranks. This contradicts the "Test Procedure" section (and my understanding of the W+ statistic), which says W+ is the minimum of the sum of ranks for positive differences and the sum of ranks for negative differences.
- Yeah, I was trying to figure out how the test could reject when the value is less than the critical value if it is this type of sum (i.e. if all differences were positive, the test would never reject, when it almost certainly should). Also, the previous section makes reference to the statistic converging (presumably in distribution) to the normal, but then doesn't say what the mean and SD are. 018 (talk) 18:19, 3 February 2010 (UTC)
In the example should we be looking the critical values for n=10 as the page says or should we use n=9 because we disregard the data point where the values are equal? —Preceding unsigned comment added by 18.104.22.168 (talk) 20:37, 14 October 2010 (UTC)
I've corrected the example using the sum of the signed ranks and the appropriate critical value, and I corrected the test procedure to match this. — Preceding unsigned comment added by Kastchei (talk • contribs) 02:06, 20 April 2012 (UTC)
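The disagreement above is easy to see on a toy example (numbers made up by me): both conventions start from the same two rank sums and differ only in the final reduction, so critical-value tables for one convention cannot be used with the other.

```python
# toy differences with distinct absolute values (no ties)
d = [3, -1, 4, -2, 6, 5, -7, 8, 9]

order = sorted(range(len(d)), key=lambda i: abs(d[i]))
rank = {i: r for r, i in enumerate(order, start=1)}
w_plus  = sum(rank[i] for i, di in enumerate(d) if di > 0)
w_minus = sum(rank[i] for i, di in enumerate(d) if di < 0)

t_min    = min(w_plus, w_minus)  # convention used with many small-sample tables
w_signed = w_plus - w_minus      # "sum of the signed ranks" convention
print(w_plus, w_minus, t_min, w_signed)  # 35 10 10 25
```

Since w_plus + w_minus is always n(n+1)/2, any one of these quantities determines the others; the conflict is purely about which one the tables and the decision rule refer to.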
Conflict with Siegel & Castellan?
22.214.171.124 commented on the article page: "The decision rule stated here is in error. According to Siegel & Castellan 1988, pp. 88-89, the stated decision rule is that if the calculated value is less than or equal to the critical value then the null hypothesis is rejected (not retained, as stated in the Wikipedia text)." MichaK (talk) 16:13, 25 October 2010 (UTC)
The W statistic
According to the page by Dr. Lowry, currently one of the external links at the end of this article, the test statistic W is computed as the sum of W+ and W-. This differs from this article, which uses the minimum of W+ and W-. Apparently this is an unresolved issue, as some descriptions use only W+, others use the minimum of W+ and W-, and yet others use the sum of W+ and W-. I emailed the author of that page about this. He said that using the sum of W+ and W- will converge to approximate the normal distribution with fewer comparisons, and that the method of using the minimum of W+ and W- dates to the time when the properties of the relevant sampling distributions had to be worked out laboriously by hand, and is only useful for small-sample cases (where n is less than about 10). He referred me to Mosteller & Rourke, Sturdy Statistics: Nonparametrics and Order Statistics, Addison-Wesley, 1973. I believe his argument is sound, and I think that the method of using the minimum of W+ and W- will tend to lead to more false positives simply because it relies on fewer observations. I propose that we change this article to describe the simpler and more robust method of computing W as the sum of W+ and W-. --Headlessplatter (talk) 18:25, 21 June 2011 (UTC)
I've changed the test procedure to clarify all the issues you've mentioned. This section and the example now match the procedure recommended by Dr. Lowry. — Preceding unsigned comment added by Kastchei (talk • contribs) 02:10, 20 April 2012 (UTC)
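Since the question of the normal approximation's mean and SD came up above, here are the standard null moments (a sketch of the usual textbook calculation: under H0 each rank independently gets a +/- sign with probability 1/2; the conventions are linked by W = W+ - W- = 2*W+ - n(n+1)/2, so their moments are interconvertible).

```python
import math

n = 20  # illustrative sample size

# moments of the positive-rank sum W+ under H0
mean_w_plus = n * (n + 1) / 4
var_w_plus  = n * (n + 1) * (2 * n + 1) / 24

# the signed-rank sum W = W+ - W- = 2*W+ - n(n+1)/2 therefore has
mean_w = 0.0
var_w  = n * (n + 1) * (2 * n + 1) / 6  # = 4 * var_w_plus

# z-score of a hypothetical observed W+ under the normal approximation
w_plus_obs = 150
z = (w_plus_obs - mean_w_plus) / math.sqrt(var_w_plus)
print(mean_w_plus, var_w_plus, var_w, round(z, 2))  # 105.0 717.5 2870.0 1.68
```

Because the statistics are linear transformations of one another, the resulting z-score (and p-value) is the same whichever convention is used; only the small-sample tables differ.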
Assumptions for Wilcoxon signed-rank test - needs exception
I have just added a proper citation for the assumptions of the Wilcoxon signed-rank test.
I am not sure this section is complete, since under the framework of randomization tests the Wilcoxon test has a different H0 (it addresses the effect of the treatment in changing the "distribution", not necessarily a shift of a location parameter).
Confidence Interval section is not good
The section on confidence intervals is complete garbage. This article covers the paired-sample test, but it looks like someone copied some text that was intended for a single-sample non-paired test, and even did a poor job at that. The notation doesn't make sense (e.g., D_i has a single subscript, but seems to define a 2-D matrix of differences), isn't consistent with the notation of the previous section, and the wording itself contains numerous typos, poor grammar, and awkward descriptions.
How to compute the confidence interval for the paired test is not at all obvious, and thus deserves coverage here. As it stands, the section that is there is doing a disservice and would be better removed. I would, however, like to have a reference for how it is done in the paired case. With the pairing in the test, I don't think it is as easy as using the Walsh averages of Zi.
-- Lonnie Chrisman 19:33, 8 December 2011 (UTC)
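For what it's worth, the construction I know of for the paired case (a sketch on made-up data, offered as a starting point rather than a fix for the article text) does go through Walsh averages, but of the paired differences d_i rather than of the raw observations: the Hodges-Lehmann point estimate is the median of the averages (d_i + d_j)/2, and the distribution-free confidence interval takes order statistics of that same sorted list, with cut points read off the signed-rank null distribution.

```python
# made-up paired data
pre  = [121, 114, 130, 119, 127, 135]
post = [130, 129, 131, 126, 139, 138]
d = [b - a for a, b in zip(pre, post)]        # paired differences

# Walsh averages of the differences: (d_i + d_j) / 2 for i <= j
walsh = sorted((d[i] + d[j]) / 2
               for i in range(len(d)) for j in range(i, len(d)))

m = len(walsh)                                 # n(n+1)/2 averages
# median of the Walsh averages = Hodges-Lehmann estimate of the shift
hl = walsh[m // 2] if m % 2 else (walsh[m // 2 - 1] + walsh[m // 2]) / 2
print(m, hl)  # 21 Walsh averages; point estimate 8.0
```

The CI endpoints would be walsh[k-1] and walsh[m-k] for a k determined by the exact (or approximate) null distribution of W+; I have left that step out since choosing k is exactly the part that needs a sourced treatment in the article.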
A new section discussing the theory behind it and why it works?
Hello, I'm not an expert in this area and after a little reading from (http://www.cis.uoguelph.ca/~wineberg/publications/ECStat2004.pdf) I discovered how this test works. I think the average user looking this up would like to know how the tables for small sample sizes are generated.
(1) Essentially the ranks generated are uniformly distributed provided the source distributions are continuously distributed. Being continuously distributed means the probability of tie is zero (not impossible, just zero probability). Discrete distributions introduce the possibility of ties; each tie drives the distribution of ranks to be less uniform.
(2) The distribution of the sum (or difference) of uniform random variables (RVs) is obtained via convolution. This follows from the basics of the study of RVs.
(3) So in the limit the distribution approaches a normal distribution. Note this is not required to perform the test, but it describes why, once the sample size is sufficiently large (e.g. 30), you can use the normal approximation. Otherwise the statistic is distributed according to the convolution of n uniform RVs.
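To make points (1)-(3) concrete, here is a sketch (my own illustration, not from the linked paper) of how an exact small-sample table entry can be generated by brute force: under H0 with no ties, each assignment of +/- signs to the ranks 1..n is equally likely, so for small n one can simply enumerate all 2^n of them.

```python
from itertools import product
from collections import Counter

n = 6
dist = Counter()
# each of the 2^n sign patterns on the ranks 1..n is equally likely under H0
for signs in product((False, True), repeat=n):
    w_plus = sum(r for r, plus in zip(range(1, n + 1), signs) if plus)
    dist[w_plus] += 1

total = 2 ** n
# exact lower-tail probability P(W+ <= 2), the kind of entry the tables list
p_tail = sum(c for w, c in dist.items() if w <= 2) / total
print(p_tail)  # 3/64 = 0.046875
```

This is why, for n = 6, a one-sided critical value of 2 gives a test at roughly the 5% level; for larger n the enumeration becomes expensive and the normal approximation from point (3) takes over.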
"History" section
In the history section it says that this test is also referred to as the "t-test for matched pairs" or the "t-test for dependent samples". Is this really true? Those phrases should refer to the parametric paired-samples t-test, which of course is completely different from this test: it uses the mean, variance and correlation, assumes a Gaussian distribution, etc. So if people do indeed refer to this test with those phrases, that would be either wrong or misleading, correct? Is this something that should be addressed in the article? Eflatmajor7th (talk) 22:32, 19 October 2013 (UTC)