
Talk:Wald–Wolfowitz runs test

From Wikipedia, the free encyclopedia

K-S-Test more powerful?


As far as I understand, this statement in the article is wrong because the W-W test and the K-S test apply to completely different kinds of statistics. While the W-W test applies to sequences, the K-S test is used to analyse the cumulative distribution of some quantity (e.g. the mass distribution in a star cluster), where the order does not matter (e.g. whether one counts brown dwarfs first or in random order). Furthermore, the K-S test does not make much sense for binary distributions like coin tosses, because it normally requires every event to have a different value (e.g. no two stars have exactly the same mass), and coin tossing yields only two different "values". In other words, the K-S test is not more powerful than the W-W test, just as a thermometer isn't more powerful than a ruler.--SiriusB 15:15, 19 September 2007 (UTC)

My understanding is the same as this commentator's. The KS test is useful, but not for the same thing as the WW test. — Preceding unsigned comment added by 136.177.20.13 (talk) 18:07, 10 June 2011 (UTC)

Missing: Application & Expectation


I found this article hard to understand. If I understand it correctly, N is supposed to be chosen and fixed ahead of time, so that you apply a runs test of a specific size, say 5? If this is correct, it could be a lot clearer.

What would help a lot is a small application example showing how to do this measurement.

The other thing that is absolutely necessary here is the expectation formula for populations (and possibly samples). Calculating the mean and variance for N = 6 over a sequence of 1000 values is meaningless if I do not know what the expected mean and variance are.

Additionally, there are no pointers to more/expanded information on this subject. —Preceding unsigned comment added by 68.36.34.97 (talk) 14:06, 15 August 2008 (UTC)

The N in the article is the length of the two-valued data sequence to be tested for the hypothesis that the elements of the sequence are mutually independent. So if you have a sequence of 1000 values, N = 1000. For the example sequence "++++---+++--++++++----", N = 22, N+ = 13, N− = 9. Then μ = 11.64, σ² = 4.88. There are 6 runs in the sequence, which is considerably less than the expected value μ: 6 = μ − 2.55σ. This is quite likely significant, depending on the confidence level required for rejecting the null hypothesis of mutual independence. Under that hypothesis, the probability of getting no more than 6 runs in a sequence of 13 pluses and 9 minuses is less than 1 percent.  --Lambiam 22:39, 19 August 2008 (UTC)
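The worked example above can be reproduced with a short script (a sketch; the sequence and the formulas μ = 2N+N−/N + 1 and σ² = (μ−1)(μ−2)/(N−1) are taken from the comment and the article, nothing else is assumed):

```python
seq = "++++---+++--++++++----"

N = len(seq)                 # 22
n_plus = seq.count("+")      # 13
n_minus = seq.count("-")     # 9

# A new run starts at every position where the symbol changes.
runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)   # 6

mu = 2 * n_plus * n_minus / N + 1        # expected number of runs, about 11.64
var = (mu - 1) * (mu - 2) / (N - 1)      # variance, about 4.88
z = (runs - mu) / var ** 0.5             # about -2.55

print(N, n_plus, n_minus, runs, round(mu, 2), round(var, 2), round(z, 2))
```

Running this confirms the quoted values: 6 runs sit about 2.55 standard deviations below the expectation of 11.64.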

Derive expected value


Would anyone care to derive the expected value? I have been unable to, and I can't find any reference showing the derivation, but all references agree with the expected value given in the article.

Here's what I got: 1) Run #1 starts with the first element. 2) Each time we consider the next element, there is a chance that it will be different from the previous element. The chance of this happening is: (N-/N) * (N+/N) + (N+/N) * (N-/N) = 2*(N+)*(N-)/N^2. So far, so good. There are N-1 steps at which a new run can start, so the expected total number of runs should be 1 + (N-1)*(2*(N+)*(N-)/N^2).

The equation in the article seems to act as though there are N opportunities for a new run to start.

I'm assuming that the variance comes from the binomial distribution. Right?

thanks 136.142.169.66 (talk) 18:48, 25 November 2008 (UTC)

Okay, I found a derivation in a textbook (Brunk 1975, Introduction to Mathematical Statistics), but it is too complex to write up just now. The author either looks at all N! permutations of possible sequences, or uses the theory of random sampling without replacement--the binomial method is close, but is only good if you have several possible outcomes, each with low probability. 136.142.169.66 (talk) 21:21, 25 November 2008 (UTC)
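The permutation approach mentioned above can be checked exhaustively for a small case. Under the null hypothesis every arrangement of the N+ pluses and N− minuses is equally likely, and the average number of runs over all arrangements agrees exactly with the article's 2N+N−/N + 1 (a sketch with N+ = 3, N− = 2 chosen small enough to enumerate, not Brunk's actual derivation):

```python
from fractions import Fraction
from itertools import permutations

def runs(seq):
    # A run count is 1 plus the number of symbol changes.
    return 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)

n_plus, n_minus = 3, 2
# All distinct arrangements of +++--: C(5, 2) = 10 sequences.
arrangements = set(permutations("+" * n_plus + "-" * n_minus))

mean_runs = Fraction(sum(runs(s) for s in arrangements), len(arrangements))
formula = Fraction(2 * n_plus * n_minus, n_plus + n_minus) + 1

print(mean_runs, formula)  # both 17/5
```

The exact match (17/5 = 3.4) supports the point that the article's expectation comes from sampling without replacement, where the N in the denominator is correct, rather than from N − 1 independent change opportunities.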
Assuming it was a Bernoulli process, the expectation for the runs would actually be 1 + 2(n-1)pq. This is an easy induction: it's 1 if n = 1, and on adding a new element the expectation increases by 1 with probability P(X_{n+1} ≠ X_n) = 2pq.
My guess is that the n/(n-1) correction factor comes either from a "simplification" of the formula or from some bias correction, or both. Any better explanation? --Barrfind (talk) 14:46, 17 February 2010 (UTC)

Text confusing


Quote: If +s and −s alternate randomly, the number of runs in the sequence N for which it is given that there are N+ occurrences of + and N− occurrences of – (so N = N+ + N−) is a random variable whose conditional distribution – given the observation of N+ positive runs and N− negative runs – is approximately normal with:

So, is N+ (N-) the number of +'s (-'s) or the number of _runs_ consisting of +'s (-'s)??

160.83.30.198 (talk) 09:20, 30 November 2010 (UTC)

This is not correct statistically, or at least the writer has made a few grammatical mistakes. As detailed above, it states that N = N+ + N−, given N+ and N−, is normal; however, this is a constant and has no distribution (beyond that of a constant). I believe what is meant is that the total number of simulations N (the length of the number string), given the number of changes (positive-to-negative or negative-to-positive switches), is normally distributed with the mean and variance as follows. Another important piece of information for this test is that the first and last simulations do not need to be known; in other words, all that is needed is the total number of switches, and not the specific N+ and N− as given in the wiki. — Preceding unsigned comment added by 70.27.78.27 (talk) 14:37, 6 April 2012 (UTC)