Talk:Confidence interval

From Wikipedia, the free encyclopedia

After all this time, this article contains totally erroneous statements about what a confidence interval is, and how it is computed

For example, in the introduction:

"The level of confidence of the confidence interval would indicate the probability that the confidence range captures this true population parameter given a distribution of samples."

Not at all. (And this is not the only place where this mistake is made in this article.)

This is *not* what a confidence interval means. That is surely the reason that the carefully written first paragraph does not mention anything implying that confidence level means the probability of the parameter lying in it.

It is essential that those editing this article fully understand the correct definition of the subject of this article -- or else that they step aside and make way for those who do.

Otherwise, this article will continue to misinform many thousands of readers over a period of more and more years. Daqu (talk) 19:29, 27 February 2013 (UTC)

Signing in to add a "me, too" here. The confidence interval specifically says ONLY that you can say with X% confidence that subsequent measurements will fall within this same range. It says nothing directly about whether the true value is close to the value you happened to measure. See [1] Cellocgw (talk) 16:06, 15 August 2013 (UTC)

When computing the CI of a mean (assuming a Gaussian population), the multiplier is a critical value from the t distribution. With very large n, this converges on 1.96 for 95% confidence. For smaller n, the multiplier is higher than 1.96. This article is simply wrong. HarveyMotulsky (talk) 14:58, 14 February 2014 (UTC)
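HarveyMotulsky's point about the multiplier can be checked with a small simulation. The following is a hedged sketch (not from the article): the population parameters μ = 10, σ = 2, the sample size n = 5, and the seed are arbitrary assumptions. For n = 5 the correct 95% multiplier is the t critical value t(0.975, df=4) ≈ 2.776; using the normal value 1.96 makes the interval too narrow, so it under-covers.

```python
import math
import random
import statistics

# Monte Carlo sketch: coverage of mean +/- multiplier * SE for Gaussian
# samples of size n = 5. Assumed values of mu, sigma, and the seed are
# arbitrary illustrations, not anything from the discussion above.
random.seed(1)
mu, sigma, n, trials = 10.0, 2.0, 5, 100_000

def coverage(multiplier):
    """Fraction of simulated intervals that cover the true mean mu."""
    hits = 0
    for _ in range(trials):
        xs = [random.gauss(mu, sigma) for _ in range(n)]
        m = statistics.mean(xs)
        se = statistics.stdev(xs) / math.sqrt(n)  # stdev uses the n-1 denominator
        hits += (m - multiplier * se <= mu <= m + multiplier * se)
    return hits / trials

cov_normal = coverage(1.96)   # noticeably below 0.95 (about 0.88 in theory)
cov_t = coverage(2.776)       # close to 0.95
print(cov_normal, cov_t)
```

The gap closes as n grows, since the t distribution converges to the normal, which is exactly the convergence to 1.96 described above.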

Example in the "Conceptual basis" section

How would people feel if I worked on a new example for the "conceptual basis" section of the article? Currently, the example involves a measured value of a percentage. I find this confusing since the confidence level is also stated as a percentage. Maybe an alternate example with a different type of measurement could be used? Sirsparksalot (talk) 20:47, 12 September 2013 (UTC)

I am going to ask this question under my new username and see what people think (I used to edit under Sirsparksalot, now I use JCMPC). Please provide comments because I think that this is an important issue for this page. It seems as though the 90%/95% example that is given keeps flipping back and forth in a mini editing war. Personally, I feel like some of the problem is that the example given is confusing since it involves polling, which has "measured" values reported as percentages. Maybe it's my own view, but I find it easy to mix up which value is referring to the CI and which is referring to the poll results. Could we use an alternate example to help reduce some of this confusion, perhaps something such as a length measurement? JCMPC (talk) 16:29, 22 November 2013 (UTC)

I agree, please do, and if you can think of an example that would be illustrated by the bar chart that's already to the right of this section then that would be even better :-) Mmitchell10 (talk) 15:21, 21 December 2013 (UTC)

Grumpiness, pedantry and overly long introductions

I'm approaching this page as a non-statistician and I appreciate the hard work that has gone into the carefully crafted introduction on this page. However, as a reader I want to know from the first sentence of an article what this is and why I should care, and it doesn't quite achieve this. My background is critical appraisal of medical journal articles, and an accepted definition - however correct or otherwise - is "the range in which there is a 95% probability that the true value lies". I know that statisticians get a bee in their bonnet about use of the word probability in this context and get all uppity about referring to the population variable as a random statistic, and to refer to this in the introduction just seems grumpy. My current favourite succinct explanation is "the range within which we can be 95% confident that the true value for the population lies" [2]. I think this or something similar should be the first sentence, as it makes the whole character of the article that much more approachable and understandable. This is a common flaw of most of the statistics articles, where too much effort is spent being "correct" before trying to make it understandable or engaging. Arfgab (talk) 13:48, 12 April 2014 (UTC)

The problem with saying that a CI is "the range in which there is a 95% probability that the true value lies" is that this is simply incorrect. You may well be right in saying that it is an accepted definition in some circles, but it is nevertheless just a common misconception. The article's opening paragraph is not being pedantic, it is just carefully worded to avoid a common error. Exchanging this for "the range within which we can be 95% confident that the true value for the population lies" is not much better, because the word confidence invites the reader to misinterpret it as probability. The critical point is that the 95% refers to the procedure and the random intervals it produces, not to the results of any particular data set. Dezaxa (talk) 21:03, 18 April 2014 (UTC)
This isn't the place to discuss what a CI is, but rather than saying 'this is simply incorrect', I think it would be fairer to say that this is something which is disputed. Mmitchell10 (talk) 12:03, 19 April 2014 (UTC)
Dezaxa is right to say 'simply incorrect', above. For a nice example of how the 75% confidence interval - constructed from particular observed data - can actually include the true parameter value with probability 1, see e.g. Ziheng Yang (2006), Computational Molecular Evolution, pp. 151-152.

The CI is such a commonly used statistic that I am amazed that there is no agreement as to what it means. The page is not much help for a non-statistician and apparently not even to a statistician. Really now, what is a CI? Pcolley (talk) 22:23, 28 April 2014 (UTC)

I agree it's amazing that there is so much confusion about this, especially because the answer is very clear and has been repeatedly stated by people above: a correctly specified 95% CI should contain the true value exactly 95% of the time if repeated experiments are made (frequentist CI). It is NOT, I repeat NOT !!! "the range in which there is a 95% probability that the true value lies", regardless of how much people would like to think that. The latter is called a Bayesian credible / credibility interval and has its own Wikipedia page. Incidentally, that page also gives the correct definition of a frequentist CI, although it is otherwise incomprehensibly written. FlorianHartig (talk) 07:04, 12 June 2014 (UTC)

Sorry, but there is no difference whatsoever between stating that an interval "contain[s] the true value exactly 95% of the times if repeated experiments are made", and stating that an interval is "the range in which there is a 95% probability that the true value lies". You're the one who is confused, about confidence intervals and about frequentism. Bayesian credible intervals are beside the point, as they arise from a different understanding of "95% probability". FilipeS (talk) 11:39, 17 July 2014 (UTC)

No, the two are quite different. One is a statement about how probable it is that a procedure will generate an interval covering a hypothesized value for a parameter, while the other is a statement about how probable it is that the parameter lies within an interval given particular data. To see how different these are, consider the following example. Suppose there is a population with some property that has an unknown mean μ, and which is distributed uniformly between μ-1 and μ+1. Now suppose we are interested in calculating a 50% confidence interval. This can easily be done simply by sampling the population twice and using the two values as the ends of the interval: this works because any sampled value has a 50% probability of being above or below μ, and so there is a 50% probability that two randomly selected values will lie either side of μ. But does this mean that once you have your two values, there is a 50% chance that μ lies inside the interval? No. Clearly, the further apart the two values are, the more likely they are to cover the value of μ. In fact, if they are more than one unit apart, they are certain to cover μ. Conversely, if they are very close together, they are very unlikely to cover μ. The point is that conditionalizing on particular data is different from conditionalizing on unknown or future data from a procedure. Dezaxa (talk) 10:02, 16 August 2014 (UTC)
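The two-sample example above can be checked numerically. This is a rough sketch under assumed values (the true mean μ = 3.7, the seed, and the trial count are arbitrary): the procedure's unconditional coverage comes out near 50%, yet every interval wider than one unit covers μ, exactly as the argument claims.

```python
import random

# Simulate the two-sample 50% "confidence procedure" for a population
# uniform on [mu - 1, mu + 1]. The value of mu is an arbitrary assumption;
# in practice it would be unknown.
random.seed(0)
mu = 3.7
n_trials = 100_000

covered = 0                      # how often the interval contains mu
wide_covered = wide_total = 0    # same tally, restricted to wide intervals
for _ in range(n_trials):
    a = random.uniform(mu - 1, mu + 1)
    b = random.uniform(mu - 1, mu + 1)
    lo, hi = min(a, b), max(a, b)
    hit = lo <= mu <= hi
    covered += hit
    if hi - lo > 1.0:            # intervals wider than one unit
        wide_total += 1
        wide_covered += hit

print(covered / n_trials)        # close to 0.50: unconditional coverage
print(wide_covered / wide_total) # 1.0: wide intervals always cover mu
```

The contrast between the two printed numbers is the whole point: the 50% attaches to the procedure, not to any particular interval once its width is known.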
I agree with the above. Here is a less mathematical explanation. Let's say I set up traps to capture a lion in my garden. The traps are completely reliable: one trap will always activate and will always capture the lion. But before turning on my system I roll a 20-sided die. If a 7 shows up my wife will run the operation and with total certainty will sabotage the result by activating the wrong trap to save the lion. So this setup will capture the lion 95% of the time. The probability of capturing the lion using this system is 95%. I have 95% confidence in the procedure. Even if I have prior belief the lion likes to hang out near the trap by the shed the above statements are still correct.
A trap is activated. The probability the lion got captured is 95%. Next I check which trap got activated and discover it is the trap near the tree at the top of the garden. It is at this point that I cannot assert that the lion is near this tree with 95% probability. It seems to me that if there is only one lion, and therefore I can run the experiment only once, I still have 95% confidence in the procedure. I don't have to have multiple lions. However, the only probability statement I can make is that the trap will work / has worked, and only before finding out which trap was actually activated. — Axel147 (talk) 20:40, 13 October 2014 (UTC)
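The trap setup above is simple enough to simulate. A minimal sketch under the stated rules (the seed and trial count are arbitrary): the system succeeds unless the 20-sided die shows a 7, so the long-run capture rate is 19/20.

```python
import random

# Sketch of the lion-trap thought experiment: a d20 is rolled before each
# run; a 7 means the run is sabotaged, anything else means capture.
random.seed(2)
trials = 100_000
captured = sum(random.randint(1, 20) != 7 for _ in range(trials))
print(captured / trials)  # close to 19/20 = 0.95
```

As the comment argues, this 95% describes the procedure across repeated runs; it says nothing about where the lion is once a particular trap has fired.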

I must be missing something. Imagine a hat with red and blue balls. 95% of the balls are red. The probability of picking a red ball is indeed 95%. Now, imagine this hat contains the 95% confidence intervals of the sample mean from all samples of some given size "n". To be clear, all of these confidence intervals are estimated from their respective sample data and thus will differ from one another. Nevertheless, 95% of those confidence intervals contain the population mean. Any confidence interval drawn from the hat has a 95% probability of containing the true population mean. So, my one sample of size "n" will produce a 95% confidence interval of the sample mean and the probability that this confidence interval contains the true population mean will be 95%. -- (talk) 18:49, 24 February 2015 (UTC)

Yes, you are missing something. Before you draw a ball, there is a 95% probability that it will be a red one. But once you've drawn it, it is either red or blue: there is no longer any probability. Even if you don't look at the ball and don't know what color you've drawn, on a frequentist understanding of probability the color of the ball in your hand is a fact about the world and there is no probability to it. A Bayesian might say that there is a 95% probability that the ball in your hand is red, because to him such a statement describes his state of evidence or belief, but a frequentist cannot say this. The same is true of confidence intervals. 95% of them cover the parameter, but once you've drawn one out of the hat as it were, it either covers the parameter or it doesn't. To say that there is a 95% probability that a particular interval covers the parameter is to use Bayesian language. And this is not merely a linguistic or interpretative issue: if one does want to make a Bayesian statement of that kind, one would need to conditionalize on all the available information, i.e. any prior information that there might be and on the results of the experiment itself. Depending on what information is available, this might result in a probability very different from 95%. Dezaxa (talk) 13:51, 10 March 2015 (UTC)

New Misunderstandings section

Please add your darlings -- always with citations. Thanks. Fgnievinski (talk) 22:36, 16 September 2014 (UTC)

Most of the quoted 'misunderstandings' are the same issue, so are they all necessary? Also, it's not clear whether Wikipedia is saying the quotes are true statements or misunderstandings. Thanks Mmitchell10 (talk) 20:24, 17 September 2014 (UTC)
You are right. I've tried to address some of these issues in the last edit. Fgnievinski (talk) 16:10, 18 September 2014 (UTC)
While I agree with the general thrust of the section on misunderstandings, the wording of some of the examples is not clear, and some of the references are to blog posts, which are not acceptable as authoritative sources. Also, the referenced article "The Confidence Interval of the Mean" by Oakley Gordon is not correct. He says "The second misconception is to interpret the confidence interval above as stating that there is a 95% probability that the true value of the population mean is between 46.90 and 54.10. The correct interpretation is that there is a 95% probability that the confidence interval contains the true value of the population mean." Neither of these interpretations is correct. One cannot say of a particular computed sample interval that it has a 95% probability of covering the mean, only that 95% of such samples will cover the mean in a long run of samples. I will find some time in the next few days to tidy this up unless someone else does. Dezaxa (talk) 08:32, 20 October 2014 (UTC)
I've gone ahead and rewritten the section. I hope it is clearer now. A couple of the references are still to web based sources where I would prefer a traditional text book, but many of my books are quite old. If someone can add some up-to-date text book references, then please do so. I've also taken the liberty to add the phrase "from some future experiment" to the Meaning and Interpretation section and to remove the disputed tag. I believe that with this qualification, the second paragraph of that section does not contradict what is said in the Misunderstandings section. Dezaxa (talk) 06:44, 27 October 2014 (UTC)

A quick test as to the understandability of the current article

OK, not having done statistics for ages and having just read the article, have I got this right? (Second attempt, which is a sign to me that it's not currently as clear as it might be!)

1. You do some work on a sample of a population, and you come up with a result. The 95% CI is not directly a prediction of the range of figures that the 'true' result for the whole population is in; rather it is a prediction of how accurate your process is. (Something affected by, for example, the size of your sample.) Specifically, it says that if you repeatedly sample the population and work out the 95% CI each time, then the true result will tend to be in 95% of those (very probably different albeit with plenty of overlap) confidence intervals? (In each case, it either is or it isn't...)

2. Although the 'correct' figure very probably is not exactly in the middle of the CI, in terms of being able to say 'this result is probably "about right"', the narrower the CI for any given percentage the better? Lovingboth (talk) 13:03, 28 January 2015 (UTC)

Sorry, you might want to post that to a forum such as Fgnievinski (talk) 01:12, 29 January 2015 (UTC)