Talk:Kruskal–Wallis one-way analysis of variance
WikiProject Statistics (Rated Start-class, Low-importance)
WikiProject Mathematics (Rated Start-class, Low-importance)
The formula of the test statistic K is unnecessarily complicated. There is a much simpler form:
Nijdam 21:17, 18 August 2006 (UTC)
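The simpler form being referred to here did not survive in the page text; presumably it is the commonly cited rank-sum expression (valid when there are no ties), with R_i the rank sum and n_i the size of group i, and N the total number of observations:

```latex
H = \frac{12}{N(N+1)} \sum_{i=1}^{g} \frac{R_i^2}{n_i} \; - \; 3(N+1)
```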
Does the Kruskal-Wallis test rely on an assumption of homoscedasticity (equal variances)? I've found conflicting references on the web that argue both sides.
It is stated that the Kruskal-Wallis test does not require the populations to be normal nor does it require them to have equal variability; the article then says that this is a limitation. This is very misleading, as these properties are usually seen as advantages, allowing an ANOVA-like analysis to be performed even when the assumptions of the parametric ANOVA are violated. The limitation is that non-parametric tests typically have less statistical power than parametric tests (i.e. they require some combination of larger sample sizes and effect sizes to reach the equivalent power of parametric tests).184.108.40.206 09:15, 9 May 2007 (UTC)
not testing medians
The Kruskal-Wallis test does not test the significance of differences in medians between samples; it tests the means. So this sentence is wrong: "In statistics, the Kruskal-Wallis one-way analysis of variance by ranks (named after William Kruskal and W. Allen Wallis) is a non-parametric method for testing equality of population medians among groups" —Preceding unsigned comment added by 220.127.116.11 (talk) 03:18, 4 October 2007 (UTC)
I'd just like to second this. This really does need to be changed in the main page. Every year we have large numbers of students who, though having been told not to rely on Wikipedia, report test results by stating that there was a significant difference in median x between groups. Can someone who knows how, change the damn site please? I am amazed it has been left for so long considering how often the page must be viewed. Cheers. — Preceding unsigned comment added by 18.104.22.168 (talk) 11:22, 22 November 2012 (UTC)
- Both the above comments are in error. The Kruskal-Wallis test is, in its most general application, a test of the null hypothesis that there is no stochastic dominance between any of the groups tested (i.e. H0: P(Xi > Xj) = 0.5 for all groups i and j, with HA: P(Xi > Xj) ≠ 0.5 for at least one i ≠ j). These hypotheses, and this test are not about means. I have cleaned up the article to refer correctly to stochastic dominance.--Lexy-lou (talk) 15:56, 23 July 2014 (UTC)
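The stochastic-dominance null described above can be made concrete with a small sketch. This is only an illustration; the function name `dominance_prob` and the toy data are mine, not from the article:

```python
from itertools import product

def dominance_prob(x, y):
    # Empirical estimate of P(X > Y) + 0.5 * P(X == Y),
    # counted over all cross-group pairs (a Mann-Whitney-style count).
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a, b in product(x, y))
    return wins / (len(x) * len(y))

# Under H0 every pairwise probability would be near 0.5;
# here group b stochastically dominates group a.
a = [1.2, 2.2, 3.4]
b = [4.0, 5.1, 6.3]
print(dominance_prob(a, b))  # 0.0: every value of a lies below every value of b
```

Comparing a group against itself returns exactly 0.5, matching the null hypothesis.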
not testing means or medians.
The null hypothesis is that all populations have the same distribution. Kruskal-Wallis assumes that the errors in observations are i.i.d. (in the same way that parametric ANOVA assumes i.i.d. errors; Kruskal-Wallis drops only the normality assumption). The test is designed to detect simple shifts in location (mean or median - same thing here) among the populations. If one starts allowing more complicated differences between distributions (e.g. changes in shape), then all bets are off. It's easy to construct examples of populations with equal medians, or examples of populations with equal means, that will lead to inflated K-W statistics and high probability of rejection of "equality".
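As a concrete sketch of what the statistic computes, here is a minimal Python implementation of H in its simplified rank-sum form. This is my own illustration; it assumes no tied values, so no tie correction is applied:

```python
def kruskal_H(*groups):
    # Kruskal-Wallis H via the simplified rank-sum form
    #   H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1),
    # where R_i is the rank sum of group i in the pooled sample.
    # Assumes no tied values across the pooled sample.
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # ranks 1..N
    N = len(pooled)
    s = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (N * (N + 1)) * s - 3 * (N + 1)

# Completely separated groups give a large H; well-mixed groups a small one.
print(kruskal_H([1, 2, 3], [4, 5, 6]))  # 27/7 ≈ 3.857
print(kruskal_H([1, 4, 5], [2, 3, 6]))  # 1/21 ≈ 0.048
```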
Not surprisingly, many web sites, software manuals, texts, etc. get carried away by the word "nonparametric" and make interpretations of the K-W statistic that simply aren't true.
As an extension of the Mann-Whitney test, the Kruskal-Wallis test does not check the equality of group medians (which the median test does) or group means (which ANOVA does). It checks the overall prevalence of values (see the Mann-Whitney test Wikipedia article). Sometimes this is called a "difference in location", though there is no consensus as to what the term "location" really means. 22.214.171.124 (talk) 07:52, 21 June 2011 (UTC)
- The Kruskal-Wallis test is indeed, in its most general application, a test of the null hypothesis that there is no stochastic dominance between any of the groups tested (i.e. H0: P(Xi > Xj) = 0.5 for all groups i and j, with HA: P(Xi > Xj) ≠ 0.5 for at least one i ≠ j). These hypotheses, and this test are not about means. I have cleaned up the article to refer correctly to stochastic dominance.--Lexy-lou (talk) 15:56, 23 July 2014 (UTC)
Error in first formula
As far as I can see, the (N-1) factor in the first formula should actually be (N-2).
The denominator is not (N-1)*N*(N+1); the correct one is (N-2)*(N-1)*N.
With this we get the same conclusions as in the article.
Also: it looks as though it should be Σ R²/n, not Σ nR².
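Whether the (N-1) factor is right can be checked numerically: for untied data, the ANOVA-style form (N-1) * sum n_i (rbar_i - rbar)^2 / sum (r_ij - rbar)^2 and the simplified rank-sum form agree exactly. A sketch comparing the two on toy data (the data and function names are mine):

```python
def ranks(groups):
    # Pooled ranks 1..N (assumes no tied values).
    pooled = sorted(v for g in groups for v in g)
    lookup = {v: i + 1 for i, v in enumerate(pooled)}
    return [[lookup[v] for v in g] for g in groups]

def H_simplified(groups):
    # H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)
    r = ranks(groups)
    N = sum(len(g) for g in r)
    s = sum(sum(g) ** 2 / len(g) for g in r)
    return 12.0 / (N * (N + 1)) * s - 3 * (N + 1)

def H_anova_form(groups):
    # (N-1) * between-group sum of squares of ranks / total sum of squares
    r = ranks(groups)
    N = sum(len(g) for g in r)
    grand = (N + 1) / 2.0  # mean of the ranks 1..N
    between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in r)
    total = sum((v - grand) ** 2 for g in r for v in g)
    return (N - 1) * between / total

data = [[1, 2, 3], [4, 5, 6]]
print(H_simplified(data), H_anova_form(data))  # both 27/7 ≈ 3.857
```

The agreement follows because, without ties, the total sum of squares of the ranks 1..N is fixed at (N³ - N)/12, so (N-1) divided by it gives exactly the 12/(N(N+1)) constant.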