This is the talk page for discussing improvements to the Confidence interval article.
Grumpiness, pedantry and overly long introductions
I'm approaching this page as a non-statistician and I appreciate the hard work that has gone into the carefully crafted introduction on this page. However, as a reader I want to know from the first sentence of an article what this is and why I should care, and it doesn't quite achieve this. My background is critical appraisal of medical journal articles, and an accepted definition - however correct or otherwise - is "the range in which there is a 95% probability that the true value lies". I know that statisticians get a bee in their bonnet about the use of the word probability in this context and get all uppity about referring to the population variable as a random statistic, and to refer to this in the introduction just seems grumpy. My current favourite succinct explanation is "the range within which we can be 95% confident that the true value for the population lies". I think this or something similar should be the first sentence, as it makes the whole character of the article that much more approachable and understandable. This is a common flaw of most of the statistics articles, where too much effort is spent being "correct" before trying to make it understandable or engaging. Arfgab (talk) 13:48, 12 April 2014 (UTC)
- The problem with saying that a CI is "the range in which there is a 95% probability that the true value lies" is that this is simply incorrect. You may well be right in saying that it is an accepted definition in some circles, but it is nevertheless just a common misconception. The article's opening paragraph is not being pedantic; it is just carefully worded to avoid a common error. Exchanging this for "the range within which we can be 95% confident that the true value for the population lies" is not much better, because the word "confidence" invites the reader to misinterpret it as probability. The critical point is that the 95% refers to the procedure and the random intervals it produces, not to the results of any particular data set. Dezaxa (talk) 21:03, 18 April 2014 (UTC)
- Dezaxa is right to say 'simply incorrect', above. For a nice example of how a 75% confidence interval - constructed from particular observed data - can actually include the true parameter value with probability 1, see e.g. Ziheng Yang (2006), Computational Molecular Evolution, pp. 151-152.
The CI is such a commonly used statistic that I am amazed that there is no agreement as to what it means. The page is not much help for a non-statistician, and apparently not even for a statistician. Really now, what is a CI? Pcolley (talk) 22:23, 28 April 2014 (UTC)
- I agree it's amazing that there is so much confusion about this, especially because the answer is very clear and has been repeatedly stated by people above: a correctly specified 95% CI should contain the true value exactly 95% of the time if repeated experiments are made (frequentist CI). It is NOT, I repeat NOT !!! "the range in which there is a 95% probability that the true value lies", regardless of how much people would like to think that. The latter is called a Bayesian credible (or credibility) interval and has its own Wikipedia page: https://en.wikipedia.org/wiki/Credible_interval Incidentally, that page also gives the correct definition of a frequentist CI, although it is otherwise rather incomprehensibly written. FlorianHartig (talk) 07:04, 12 June 2014 (UTC)
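To make the repeated-sampling definition above concrete, here is a minimal simulation sketch (not from any of the posts above) in Python, assuming numpy and scipy are available; the true mean, standard deviation, sample size and number of replications are arbitrary illustrative choices.

```python
# Sketch of the frequentist coverage definition: draw many samples,
# build a 95% CI for the mean from each, and count how often the
# interval contains the true mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mu, sigma = 10.0, 2.0             # arbitrary "unknown" truth
n_reps, n = 100_000, 25
z = stats.norm.ppf(0.975)              # ~1.96 for a 95% interval
half_width = z * sigma / np.sqrt(n)    # sigma treated as known here

covered = 0
for _ in range(n_reps):
    xbar = rng.normal(true_mu, sigma, n).mean()
    covered += (xbar - half_width <= true_mu <= xbar + half_width)

print(covered / n_reps)   # ~0.95: 95% of the *intervals* cover true_mu
```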
Sorry, but there is no difference whatsoever between stating that an interval "contain[s] the true value exactly 95% of the time if repeated experiments are made", and stating that an interval is "the range in which there is a 95% probability that the true value lies". You're the one who is confused about confidence intervals and about frequentism. Bayesian credible intervals are beside the point, as they arise from a different understanding of "95% probability". FilipeS (talk) 11:39, 17 July 2014 (UTC)
- No, the two are quite different. One is a statement about how probable it is that a procedure will generate an interval covering a hypothesized value for a parameter, while the other is a statement about how probable it is that the parameter lies within an interval given particular data. To see how different these are, consider the following example. Suppose there is a population with some property that has an unknown mean μ, and which is distributed uniformly between μ-1 and μ+1. Now suppose we are interested in calculating a 50% confidence interval. This can easily be done simply by sampling the population twice and using the two values as the ends of the interval: this works because any sampled value is equally likely to be above or below μ, and so there is a 50% probability that two randomly selected values will lie on either side of μ. But does this mean that once you have your two values, there is a 50% chance that μ lies inside the interval? No. Clearly, the further apart the two values are, the more likely they are to cover the value of μ. In fact, if they are more than one unit apart, they are certain to cover μ. Conversely, if they are very close together, they are very unlikely to cover μ. The point is that conditionalizing on particular data is different from conditionalizing on unknown or future data from a procedure. Dezaxa (talk) 10:02, 16 August 2014 (UTC)
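A quick numerical check of this example (again a sketch, not from the original post; Python with numpy, and μ = 3 and the width cutoffs are arbitrary choices):

```python
# Two draws from uniform(mu-1, mu+1) form a 50% CI for mu, yet the
# conditional coverage, given how wide the realized interval is,
# ranges from near 0 to exactly 1.
import numpy as np

rng = np.random.default_rng(1)
mu, n_reps = 3.0, 200_000
x = rng.uniform(mu - 1, mu + 1, (n_reps, 2))
lo, hi = x.min(axis=1), x.max(axis=1)
covers = (lo <= mu) & (mu <= hi)
width = hi - lo

print(covers.mean())               # ~0.50: the procedure's advertised level
print(covers[width > 1.0].mean())  # 1.0: intervals wider than 1 always cover mu
print(covers[width < 0.1].mean())  # ~0.03: very narrow intervals almost never do
```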
- I agree with the above. Here is a less mathematical explanation. Let's say I set up traps to capture a lion in my garden. The traps are completely reliable: one trap will always activate and will always capture the lion. But before turning on my system I roll a 20-sided die. If a 7 shows up, my wife will run the operation and with total certainty will sabotage the result by activating the wrong trap to save the lion. So this setup will capture the lion 95% of the time. The probability of capturing the lion using this system is 95%. I have 95% confidence in the procedure. Even if I have a prior belief that the lion likes to hang out near the trap by the shed, the above statements are still correct.
- A trap is activated. The probability the lion got captured is 95%. Next I check which trap got activated and discover it is the trap near the tree at the top of the garden. It is at this point that I cannot assert that the lion is near this tree with 95% probability. It seems to me that if there is only one lion, and therefore I can run the experiment only once, I still have 95% confidence in the procedure. I don't have to have multiple lions. However, the only probability statement I can make - that the trap will work / has worked - applies before finding out which trap was actually activated. — Axel147 (talk) 20:40, 13 October 2014 (UTC)
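The trap analogy can be simulated in the same spirit (again a sketch, assuming the sabotage fires on a roll of 7 exactly as described above):

```python
# The *procedure* captures the lion on 19 of 20 die rolls (95%), but any
# single realized run either captured it or it didn't.
import random

random.seed(42)
n_runs = 100_000
captured = sum(random.randint(1, 20) != 7 for _ in range(n_runs))
print(captured / n_runs)   # ~0.95: long-run capture rate of the procedure
```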
I must be missing something. Imagine a hat with red and blue balls. 95% of the balls are red. The probability of picking a red ball is indeed 95%. Now, imagine this hat contains the 95% confidence intervals of the sample mean from all samples of some given size "n". To be clear, all of these confidence intervals are estimated from their respective sample data and thus will differ from one another. Nevertheless, 95% of those confidence intervals contain the population mean. Any confidence interval drawn from the hat has a 95% probability of containing the true population mean. So, my one sample of size "n" will produce a 95% confidence interval of the sample mean and the probability that this confidence interval contains the true population mean will be 95%. --220.127.116.11 (talk) 18:49, 24 February 2015 (UTC)
- Yes, you are missing something. Before you draw a ball, there is a 95% probability that it will be a red one. But once you've drawn it, it is either red or blue: there is no longer any probability. Even if you don't look at the ball and don't know what color you've drawn, on a frequentist understanding of probability the color of the ball in your hand is a fact about the world and there is no probability to it. A Bayesian might say that there is a 95% probability that the ball in your hand is red, because to him such a statement describes his state of evidence or belief, but a frequentist cannot say this. The same is true of confidence intervals. 95% of them cover the parameter, but once you've drawn one out of the hat, as it were, it either covers the parameter or it doesn't. To say that there is a 95% probability that a particular interval covers the parameter is to use Bayesian language. And this is not merely a linguistic or interpretative issue: if one does want to make a Bayesian statement of that kind, one would need to conditionalize on all the available information, i.e. on any prior information there might be and on the results of the experiment itself. Depending on what information is available, this might result in a probability very different from 95%. Dezaxa (talk) 13:51, 10 March 2015 (UTC)
A quick test as to the understandability of the current article
OK, not having done statistics for ages and having just read the article, have I got this right? (Second attempt, which is a sign to me that it's not currently as clear as it might be!)
1. You do some work on a sample of a population, and you come up with a result. The 95% CI is not directly a prediction of the range of figures that the 'true' result for the whole population is in; rather it is a prediction of how accurate your process is. (Something affected by, for example, the size of your sample.) Specifically, it says that if you repeatedly sample the population and work out the 95% CI each time, then the true result will tend to be in 95% of those (very probably different albeit with plenty of overlap) confidence intervals? (In each case, it either is or it isn't...)
2. Although the 'correct' figure very probably is not exactly in the middle of the CI, in terms of being able to say 'this result is probably "about right"', the narrower the CI for any given percentage, the better? Lovingboth (talk) 13:03, 28 January 2015 (UTC)
- Sorry, you might want to post that to a forum such as stats.stackexchange.com Fgnievinski (talk) 01:12, 29 January 2015 (UTC)
In the section "Meaning and Interpretation", there is the following definition: "Were this procedure to be repeated on multiple samples, the calculated confidence interval (which would differ for each sample) would encompass the true population parameter 90% of the time." The footnote immediately following this strongly suggests that it is taken literally from "Cox D.R., Hinkley D.V. (1974) Theoretical Statistics, Chapman & Hall, p49, p209". However, if you actually take a look at the book (e.g. on Google Books, where all relevant pages are freely available: https://books.google.de/books?id=ppoujo-BInsC), it turns out that this statement is not in the book at all. The footnote is therefore misleading, and I think somebody should change the article and at least make clear that the definition is not a quotation from the book but - at best - a summary or rephrasing of its content.
Utterly indecipherable to the lay-reader.
If the general public is your audience, this article is a complete failure. I'm a reader with an advanced degree and a well-rounded education, and I can't penetrate even the lede. 18.104.22.168 (talk) 15:24, 28 September 2015 (UTC)
- simple:Confidence interval? fgnievinski (talk) 16:01, 28 September 2015 (UTC)