Talk:Unbiased estimation of standard deviation

WikiProject Mathematics (Rated Start-class, Low-priority; Field: Probability and statistics)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.

WikiProject Statistics (Rated Start-class, Low-importance)
This article is within the scope of WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page or join the discussion.
Added section on autocorrelated data

I have a pretty good background in applied stat, and lots of reference books, and I'm a member of ASA and can do online ASA journal searches, but with all of that I have never seen these bias equations anywhere other than in Law and Kelton. To derive them from Anderson isn't that tough, but finding which expressions in Anderson to use isn't so simple (which is why I put the equation numbers in the references).

What I'm not clear about, and deliberately slid over in this effort, is what is the effect of taking the square root of these bias expressions. Is the resulting PDF still chi? I've been doing some sims lately in support of some ANSI/IEEE nuclear standards development, and there is still a bit of bias that these expressions don't take out. I was already aware of the chi PDF and the small-N correction, but I could use some help in seeing how to apply that sort of transformation to the autocorr case. Any info would be appreciated, and of course should be added to this article.
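The chi PDF and small-n correction mentioned above can be checked numerically. The following is a minimal Python sketch of my own (not from the article), assuming i.i.d. normal data: it computes the standard bias factor c4(n) from the gamma function and confirms by simulation that E[s] = c4(n)·σ for small samples.

```python
import math
import random

def c4(n):
    # c4(n) = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2); for i.i.d. normal
    # samples of size n, E[s] = c4(n) * sigma when s uses the n-1 divisor.
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

def sample_sd(xs):
    # sample standard deviation with the usual n-1 divisor
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

random.seed(0)
n, reps = 5, 200_000
mean_s = sum(sample_sd([random.gauss(0, 1) for _ in range(n)])
             for _ in range(reps)) / reps
print(round(c4(n), 4))   # theoretical bias factor for n = 5, about 0.94
print(round(mean_s, 3))  # simulated E[s] for sigma = 1, close to c4(5)
```

Note the sizeable bias at n = 5: the plain sample standard deviation underestimates σ by about 6% on average, which is exactly what the c4 correction removes in the uncorrelated case.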

Given that intro texts don't deal at all with autocorr data, and that such data is common, there needs to be some treatment of the subject somewhere in Wikipedia. Rb88guy (talk) 20:46, 12 January 2009 (UTC)

There are a number of points here:
  • This material might eventually be better located in a separate article more obviously relevant to time series, particularly since the problem starts at the stage of estimating the variance, not the standard deviation.
  • The material presently here does not rely on the assumption of a normal distribution, and none is stated. If a chi-squared dist were appropriate to the autocorrelated case this would be a necessary requirement, so care would be needed in specifying assumptions. However, I believe the chi-squared dist does not hold, even if the true correlations were known, and certainly not if they are estimated from the data. It looks possible to get a formula for the variance of the estimated variance (at least if a normal distribution is assumed), which would be some guide to whether a chi-squared works.
  • There are other ways of estimating the variance of the sample mean which don't start with the ordinary sample variance; see for example Moran, P. A. P. (1975). "The estimation of standard errors in Monte Carlo simulation experiments". Biometrika, 62: 1–4. Also, I think an estimate can be found via spectral analysis, by estimating the spectral density at a frequency of zero.
Melcombe (talk) 16:16, 14 January 2009 (UTC)
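The effect being discussed can be seen concretely. The sketch below is my own illustration (not Melcombe's or the article's), assuming a stationary AR(1) process with known parameter φ and unit marginal variance, and using the standard result Var(x̄) = (σ²/n)·[1 + 2·Σ_{k=1}^{n-1}(1−k/n)·ρ_k] with ρ_k = φ^k. It shows how far the naive σ²/n is from the true variance of the sample mean under positive autocorrelation.

```python
import random

def ar1_series(phi, n, rng):
    # stationary AR(1): x_t = phi*x_{t-1} + e_t, with e_t ~ N(0, 1-phi^2)
    # so that the marginal variance of x_t is exactly 1
    x = rng.gauss(0, 1)
    sd_e = (1 - phi * phi) ** 0.5
    out = []
    for _ in range(n):
        x = phi * x + rng.gauss(0, sd_e)
        out.append(x)
    return out

def corrected_var_of_mean(phi, n):
    # Var(xbar) = (sigma^2/n) * [1 + 2*sum_{k=1}^{n-1} (1-k/n)*phi^k], sigma^2 = 1
    s = sum((1 - k / n) * phi ** k for k in range(1, n))
    return (1 + 2 * s) / n

rng = random.Random(1)
phi, n, reps = 0.6, 50, 20_000
means = [sum(ar1_series(phi, n, rng)) / n for _ in range(reps)]
mu = sum(means) / reps
emp_var = sum((m - mu) ** 2 for m in means) / (reps - 1)
print(1 / n)                          # naive sigma^2/n, far too small here
print(corrected_var_of_mean(phi, n))  # accounts for the autocorrelation
print(emp_var)                        # simulated Var(xbar), near corrected value
```

With φ = 0.6 and n = 50 the naive value understates Var(x̄) by roughly a factor of four, which is the kind of error the article's autocorrelation section is meant to prevent.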

A few minor tweaks

Sorry, forgot to mark a couple of these as minor. Adjusted equation spacing, indents. Changed my "N" to "n" as used in previous material. Rb88guy (talk) 15:49, 13 January 2009 (UTC)

Added plot of c4 vs n

And added a caption on earlier graph. Rb88guy (talk) 18:23, 13 January 2009 (UTC)

Added calcs for variance of mean

Using the observed (sample) variance. Thanks to Melcombe for catching my error in the Var[x-bar] expression, and fixing it. I have some nice R graphics that show how this stuff works only in the mean, that is, that the expected-behavior curve(s) pass through the mean values of many thousands of replicates of the various calculations. (There's a lot of scatter.) But it takes a lot of words to describe what's going on in the graphs, so I didn't think that was appropriate. On the other hand, when I do the same sims using the std dev, not the variance, there is still a bit of bias left. As a practical matter for someone trying to calibrate an instrument, removing almost all the bias in the std dev is presumably better than being off by a factor of two... Anyway, that part of this needs more work, and when something sensible is available it should be added here, to complete this section.

Also, the only thing about moving this to a TSA section is that lots of folks who need to be aware of this autocorr bias problem wouldn't think to look at TSA. They might think "How is calibrating this instrument a time series problem? It's just a pile of numbers." Assuming they even know what TSA is or what it can do for (or to) them. Rb88guy (talk) 20:24, 16 January 2009 (UTC)

This is somewhat in danger of becoming original research, which is not allowed here. However, you have put in refs for the results quoted, so that should be OK. To go further, one would need to consider that estimating the standard deviation unbiasedly is not central to the usual run of statistical theory, and that there may well be good reason for this. Would a better way of treating your "trying to calibrate an instrument" example be to say that what is wanted is a good interval estimate for the mean (i.e. a confidence interval, or whatever terminology is appropriate)? Looking directly at how to define the limits for the CI would combine the idea of getting the "right" estimate for the variance or standard deviation with making an adjustment to the "adjustment for sample size" entailed in the use of limits derived from the Student-t distribution. Indeed, in the uncorrelated case, the use of the Student-t distribution, instead of the normal distribution, might itself be thought of as making a correction for bias in the estimated standard deviation. If the real use of the standard deviation is to construct such CIs, you may be better off aiming your simulations at the properties of the CIs rather than the estimated standard deviations. If the CI is the real use, then a better home for this stuff might be in an article about CIs for the mean. Don't forget it is possible to put in links from several other articles to point to the right place, wherever it is. Melcombe (talk) 11:34, 19 January 2009 (UTC)
I agree about the research thing; I was hoping someone would know of a reference that takes this from variance to std dev, in the presence of autocorr. If that doesn't exist, I have to say that deriving it is, most likely, beyond my capabilities in stat, but if I did come up with something, then certainly that would be publishable (in a journal, not here). The measurement context that I'm thinking of is calibration of monitoring instruments, particularly for detection limits. There, the std dev of a mean isn't the issue, it's the std dev of the population of filtered, hence autocorrelated, measurements themselves. That std dev is used in Min Detectable Conc calcs. If autocorr isn't accounted for, then the calculated MDC will appear to be way smaller (better) than it really is. So, the remaining issue is that E[s] is not equal to SQRT( E[s^2] ), otherwise we could just take the square root of the "s^2" (and "Var[x-bar]") expressions in the article and everyone could live happily ever after...;) Rb88guy (talk) 16:17, 19 January 2009 (UTC)
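The point that E[s] is not equal to SQRT(E[s^2]) is just Jensen's inequality applied to the square root, and it is easy to demonstrate. Here is a small Python sketch of my own (assuming i.i.d. normal data, which is the simplest case): s² with the n−1 divisor is unbiased for σ², yet its square root is biased low.

```python
import math
import random

# For i.i.d. N(0, 1) data, s^2 (n-1 divisor) is unbiased for sigma^2 = 1,
# but by Jensen's inequality E[s] < sqrt(E[s^2]) = sigma, so s is biased low.
random.seed(2)
n, reps = 4, 200_000
s_vals, s2_vals = [], []
for _ in range(reps):
    xs = [random.gauss(0, 1) for _ in range(n)]
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)
    s2_vals.append(s2)
    s_vals.append(math.sqrt(s2))
mean_s2 = sum(s2_vals) / reps
mean_s = sum(s_vals) / reps
print(mean_s2)             # close to sigma^2 = 1: s^2 is unbiased
print(math.sqrt(mean_s2))  # close to 1
print(mean_s)              # noticeably below 1: E[s] = c4(4)*sigma, about 0.921
```

This is exactly why taking the square root of the (autocorrelation-corrected) variance expressions does not give an unbiased standard deviation, and why a separate small-sample factor is needed on top.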

Added material on estimating std devs

Well, I just felt that something needed to be added to bring this stuff back to the std dev from the variance, and it also ties back into the first (original) section (c4). Yes, I suppose some of this is "OR", but it must exist somewhere in the stat literature; this cannot possibly be novel. I'm hoping someone will know a reference for what I called \theta here, or maybe, if it actually doesn't already exist, someone will research it and publish it, and then that can be referenced here. In other words I won't struggle over the exact stuff I put here, but I do think there needs to be some discussion of the issue (that E[s] <> sqrt(E[s^2])). Consider my addition a strawman...Rb88guy (talk) 20:57, 30 January 2009 (UTC)

Small stuff changed, however

I think the tone of the intro is too negative: why not just delete the entire article that, apparently, is full of stuff no one even uses? I could see that, maybe, for the small c4 (and c2, which should also be put in here, along with some material on the Helmert PDF) correction, but the autocorr correction is significant. Anyway, as time permits I may try to come up with a more hopeful intro that might even encourage someone to read this article... PS: I've been dabbling with an article on where the c2, c4 factors come from; you might want to take a peek at the raw, far-from-finished stuff I have in my sandbox. Rb88guy (talk) 20:33, 25 February 2009 (UTC)

Your sandbox article looks great, though I would suggest that you make it explicitly clear that (using your notation) \hat{\sigma}_n = \hat{\sigma}_{n-1}, rather than having the reader have to deduce this. In fact why not do away with the subscript entirely and just use \hat{\sigma}? Btyner (talk) 01:17, 26 February 2009 (UTC)
Thanks, yep, that needs fixin' along with lots of other stuff. I was thinking of making this an article with a title including in some manner "Helmert's distribution of s", following Deming. Incidentally, there is a TON of useful stuff in that book! I don't usually do anything with sampling (as in survey sampling) so I hadn't even looked at it until recently. Rb88guy (talk) 02:22, 26 February 2009 (UTC)
You may be right about it being too negative, but it does say that it is an important theoretical problem, which makes it of interest to a moderately large group of individuals. However, if you can find some useful citations to real applications, then do include them later in the article, with a brief mention in the intro, remembering that it is meant to be short and readable. In your sandbox you are citing the first edition of Johnson & Kotz... have you seen the second edition, as referenced in this article presently, as it may contain material you haven't seen. Also, regarding your sandbox, you may want to make use of the existing article chi distribution (not presently mentioned), both because it is related and because you may be able to abbreviate some of what you want to say. Melcombe (talk) 10:22, 26 February 2009 (UTC)
I remember I noticed the bias of the standard deviation for small samples a couple of years ago and was quite disappointed not to find anything on Wikipedia. This article, which was added not long after, would have prevented me from wasting my time reinventing the wheel ;-). So I agree with the intro being too negative. This is true for the autocorrelation stuff, but c2 and c4 are relevant as well. I was processing test results with sample sizes ranging from 2 to 8, so the correction factors were significantly different from 1. Also, not to sound too skinflint, but sometimes even a 1% point of margin can represent a lot of money in some industries. -- Ryk V (talk) 00:23, 18 October 2009 (UTC)
Agreed "a 1% point of margin can represent a lot of money in some industries". But do they rely on having an unbiased estimate of standard deviation? Seems unlikely to me. Perhaps they need an estimate of a percentage point, but these problems are not equivalent unless they are prepared to accept some strong assumptions about distributional form. They would certainly be better off trying to solve the problem they actually face rather than some over-simplistic version chosen because it looks mathematically nice or because it looks similar to problems whose solution is known in simple form. Melcombe (talk) 16:39, 28 October 2009 (UTC)

Added a table of values for c4

This is one of my first edits, so don't hesitate to modify the table in any way you see fit. I just thought adding it was relevant, because calculating c4 with the main formula is not straightforward (you need to go to Particular values of the Gamma function to get a correct value, and you can't go very far). Adding some external sources from the web could also be a good idea, but I don't know which ones are acceptable. -- Ryk V (talk) 00:28, 18 October 2009 (UTC)

An alternative correction formula has been set down on Talk:Standard deviation but it has not made its way into this article yet. Melcombe (talk) 16:42, 28 October 2009 (UTC)
I corrected the values for c4(4), c4(6) and c4(100), for which the last digit was off by one. I can post the program I used, but I don't know if this is the place.. Ver Greeneyes (talk) 23:38, 27 September 2010 (UTC)
A first question is what are your calculations calculating? A first reading of the present text is that the table contains values calculated using the first few terms of the series expansion immediately above; but it would be possible to calculate "exact" values using routines for the (log) gamma function. Clearly these would be numerically different. I think we started off with values with fewer decimal places taken from published sources that may have corresponded to the first interpretation. The article needs to be clearer about exactly what the values represent. Melcombe (talk) 08:52, 28 September 2010 (UTC)
mpfr (an arbitrary precision floating point library) has a version of the Gamma function built into it, so I used the formula itself, but also compared it to the even/odd functions at the bottom of the table. They appear to be identical, but I didn't do the math to compare them. Ver Greeneyes (talk) 09:27, 28 September 2010 (UTC)
I have reworded the text to hopefully reflect the fact that the table contains "exact" values. This leaves open the question of whether an additional column containing values from the 4-term approximation is needed. Melcombe (talk) 10:51, 28 September 2010 (UTC)
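For anyone wanting to reproduce the table's "exact" values (including the corrected c4(4), c4(6) and c4(100) digits mentioned above), here is a short Python sketch of my own. It computes c4 via the log-gamma function, which stays numerically stable where the plain gamma function would overflow, and compares against a 4-term asymptotic expansion of the kind discussed above (the particular expansion used here, 1 − 1/(4n) − 7/(32n²) − 19/(128n³), is my assumption, not a quote from the article).

```python
import math

def c4_exact(n):
    # exact value via log-gamma: stable even for large n,
    # where math.gamma(n/2) itself would overflow
    return math.sqrt(2.0 / (n - 1)) * math.exp(math.lgamma(n / 2)
                                               - math.lgamma((n - 1) / 2))

def c4_series(n):
    # 4-term asymptotic expansion, handy for hand calculation at larger n
    return 1 - 1 / (4 * n) - 7 / (32 * n ** 2) - 19 / (128 * n ** 3)

for n in (4, 6, 100):
    print(n, round(c4_exact(n), 6), round(c4_series(n), 6))
```

At n = 4 the expansion is already good to about 2 units in the fourth decimal, and at n = 100 the two agree to six decimals, which supports treating the table as "exact" while still noting which formula produced it.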

Readers aren't expected to be educated in your field

Seems too many articles on Wikipedia suffer from the obscurity-of-knowledge problem, where whiz-kids who "know how things work" plop down "the perfect" analysis of the topic, using language and jargon that succinctly and professionally speaks to the ideas. Unfortunately, the end result is NOBODY ELSE CAN UNDERSTAND WHAT YOU JUST WROTE. You need to step back and ask yourself: does this topic read in any way as understandable to the uninitiated? If I had to explain this topic to my 8-year-old niece, what words would I use to keep it simple, and still make my point? Please add this much to at least the introduction. Thanks! (talk) —Preceding undated comment added 04:58, 21 January 2014 (UTC)


The conclusion from Cochran's theorem is that the square of the expression appearing at the beginning of "bias correction" has a chi-SQUARED distribution, not a chi distribution.

The expression itself has a chi distribution (which is what's relevant here).

13:55, 24 February 2014 (UTC) — Preceding unsigned comment added by (talk)
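Both statements in the exchange above can be checked numerically. The following is a small Python sketch of my own, assuming i.i.d. N(0, 1) data: the sum of squared deviations (n−1)s² has a chi-squared distribution with n−1 degrees of freedom (mean n−1), while its square root, sqrt(n−1)·s, has a chi distribution (mean √2·Γ(n/2)/Γ((n−1)/2), not sqrt(n−1)).

```python
import math
import random

# With i.i.d. N(0,1) data: (n-1)*s^2 ~ chi-squared with n-1 df (mean n-1),
# while sqrt(n-1)*s ~ chi with n-1 df (mean sqrt(2)*Gamma(n/2)/Gamma((n-1)/2)).
random.seed(3)
n, reps = 6, 100_000
q_vals, r_vals = [], []
for _ in range(reps):
    xs = [random.gauss(0, 1) for _ in range(n)]
    m = sum(xs) / n
    q = sum((x - m) ** 2 for x in xs)  # equals (n-1)*s^2
    q_vals.append(q)
    r_vals.append(math.sqrt(q))        # equals sqrt(n-1)*s
mean_q = sum(q_vals) / reps
mean_r = sum(r_vals) / reps
chi_mean = math.sqrt(2) * math.gamma(n / 2) / math.gamma((n - 1) / 2)
print(mean_q)    # near n-1 = 5, the chi-squared mean
print(mean_r)    # near the chi mean, noticeably below sqrt(5)
print(chi_mean)  # theoretical chi mean for 5 df
```

Dividing the chi mean by sqrt(n−1) recovers c4(n), which is exactly why the chi distribution, not the chi-squared, is "what's relevant here" for the bias correction.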