Talk:Accuracy and precision

Accuracy, TRUENESS and precision[edit]

Hi, I am German and just wanted to take a look at what you English-speaking people are writing about this topic. I think you are not consistent with the bible of metrology, the VIM (Vocabulaire international de métrologie): http://www.bipm.org/utils/common/documents/jcgm/JCGM_200_2008.pdf There, the three words are defined exactly and in accordance with ISO. Take a look and don't be afraid, it's written in French AND English :-) I think "measurement accuracy" is the generic umbrella term. "Precision" is described correctly here, but what the article calls "accuracy" should be called "trueness". I am not confident enough to change an English article, but you can take a look at Richtigkeit in the German Wikipedia. There I have put some graphics which show how accuracy, precision and trueness are described in the VIM. You can use them to change this article. Good luck! cu 2clap (talk) 17:58, 14 January 2011 (UTC)

Re accuracy, TRUENESS and precision 2[edit]

Hi, I am not a native English speaker either; I am Slovak, and a similar problem exists in my language. I agree with you. I was taught to use accuracy and precision as they are used in the article (translation is another problem). But according to ISO 5725, "accuracy (trueness and precision)" is the umbrella term for trueness and precision. So what the article calls "accuracy" should be called "trueness", although historically many people have used accuracy to mean trueness (incorrectly, according to ISO 5725). Another problem is translating the new terminology into other language versions of ISO. For instance, "accuracy (trueness and precision)" in the Slovak version of ISO is named "precision (trueness and conformity)". So the same word that is used for precision in the sense of the article (at some universities, for example) is now used for accuracy in the sense of ISO 5725 (and the best translation of the Slovak word presnosť is correctness). —Preceding unsigned comment added by 212.5.210.202 (talk) 10:41, 27 April 2011 (UTC)

-) — Preceding unsigned comment added by 86.45.42.149 (talk) 10:02, 5 December 2013 (UTC)

Positive start[edit]

Just what I was looking for. [[User:Nichalp|¶ ɳȉčḩåḽṗ | ]] 19:07, Nov 6, 2004 (UTC)

Gauge R&R[edit]

I don't see any references to GRR (Gauge Repeatability and Reproducibility), which is an efficient method of quantifying precision error. I'd like to start an article on that; does anyone oppose this suggestion? Thanks. Mlonguin 16:50, 16 March 2007 (UTC)

Seems like this is a narrow definition[edit]

This only works in the case that you have many samples. More generally, accuracy describes how close to the truth something is, and precision describes how closely you measure or describe something. Nroose 18:30, 7 May 2005 (UTC)

Yes, I don't like how precision is defined in the article as repeatability. Repeatability is closer to the idea of reliability. In the texts and sites that I've read, precision is defined in terms of the smallest increments of the number scale shown on (or by) the measuring device. So, a ruler whose smallest increments are mm allows the user to report measurements to that level of precision. I think the article is using the term for when the measuring "device" is the act of collecting a set of data that yields a statistical result. Reading down to the bottom of the article, I do find the more conventional usage of the term precision described after all. It would be good to find this usage mentioned at the top of the article. Also, the term valid used early in the article isn't necessarily the same as the familiar idea of validity in the social sciences. (In those disciplines, the concepts of reliability and validity are discussed in texts, with many subtypes of each.)

Arrow analogy[edit]

That values *average* close to the true value doesn't make those individual values accurate. It makes their *average* an accurate estimate of the true value, but that is not the same thing. If a meteorologist gets the temperature forecast wrong by +20 degrees one day, and -20 degrees the next, we don't say he's accurate because the average is right. --Calair 02:05, 26 May 2005 (UTC)

Well....I guess we need to be clear here. Is accuracy a property of an individual measurement, or of a population of measurements? Obviously you cannot define the precision of an individual measurement. I think accuracy is the same in this respect. ike9898 13:54, May 26, 2005 (UTC)
Actually, it is possible to talk meaningfully about the expected precision of individual measurements, from a probabilistic standpoint. It works something like this (I'm a bit rusty here, apologies in advance for any errors):
Let A be some property we want to measure. Let {M1, M2, ...} be a sequence of attempts to measure that property by some means, all of them under consistent conditions. For the moment I'm going to suppose we have a large number of Mi; I'll get back to this later.
Each Mi can be represented as the sum of three components:
Mi = A + S + Ei
Here, A is the true value of the property being measured, S is the systematic error, and Ei is the random error. S is the same for all Mi, defined as mean({Mi}) - A, and Ei is defined as Mi - A - S.
Ei varies according to i, but when the sequence {E1, E2, ...} is examined, it behaves as if all terms were generated from a single random distribution function E. (Basically, E is a function* on the real numbers**, such that the integral of E(x) over all real x is 1; the integral of E(x) between x=a and x=b is the probability that any given 'random number' from that distribution will lie between a and b.)
The precision of our measurements is then based on the variance of E. It happens that the mean value of this distribution is 0, because of how we defined S and hence Ei. This means that the variance of E is simply the integral (over all real x) of E(x)*x^2, and the expected precision of any individual measurement can be defined as the square root of that variance***.
*Possibly including Dirac delta functions.
**Technically, "real numbers multiplied by the units of measurement", but I'll ignore units here.
***Possibly multiplied by a constant.
Now, supposing the number of Mi is *not* large - perhaps as small as just one measurement. Expected precision of that measurement is then the answer to the question "if we took a lot more measurements of similar quantities, under the same conditions, what would the precision be?" (Actually *calculating* the answer to that question can be a tricky problem in itself.)
Time for an example. Suppose we're trying to weigh a rock on a digital scale. When there's no weight on the scale, it reads 0.0 grams. We test it with known weights of exactly 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, and 100.0 grams, and every time it gives the correct value, so we know it's very reliable for weights in the range 0-100 grams, and we know the rock's weight is somewhere within that range.
But, being a digital scale, it only gives a reading to the nearest multiple of 0.1 grams. Since a rock's weight is unlikely to be an exact multiple of 0.1 grams, this introduces an error. The error function from this cause is effectively a square function with value 10 between -0.05 & +0.05, and 0 outside that range, giving it a variance of 8.3e-4 and so a precision of about 0.03 grams. Which is to say, "for a randomly chosen rock, we can expect rounding to cause an error of about +/- 0.03 grams".
Getting back to the arrows, the precision of a given arrow is effectively how predictable that single arrow is - if you know that on average the archer aims 10 cm high of the bullseye, how close to that 10-cm-above-the-bullseye mark is the arrow likely to be? See Circular error probable for another example of this concept. When talking about a specific measurement or arrow, 'precision' is meaningfully applied before the fact rather than afterwards.
It's a bit like the difference between "the coin I'm about to toss has a 50% chance of being heads" and "the coin I've just tossed has a 0% chance of being heads, because I'm looking at it and it came up tails".
Whew, that was long; I hope it makes some sort of sense. --Calair 00:40, 27 May 2005 (UTC)
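To make the decomposition above concrete, here is a minimal Python sketch (hypothetical numbers, reusing the 0.1 g rounding example) that simulates Mi = A + S + Ei and recovers the systematic error and the precision from the sample:

    import numpy as np

    rng = np.random.default_rng(0)

    A = 37.2341   # hypothetical true value, in grams
    S = 0.05      # hypothetical systematic error, in grams
    n = 10_000    # number of simulated measurements

    # Random error: uniform rounding error in [-0.05, +0.05] g, as in the digital-scale example.
    E = rng.uniform(-0.05, 0.05, n)
    M = A + S + E                     # the measurements Mi = A + S + Ei

    S_hat = M.mean() - A              # estimate of the systematic error S
    precision = M.std(ddof=1)         # square root of the variance of E
    print(S_hat, precision)           # roughly 0.05 and 0.029, i.e. about +/- 0.03 g as stated above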

I don't like the arrow analogy. Analogies are supposed to make concepts clearer by reference to something that is easier to understand. I find that the arrow analogy is more confusing than the actual concepts of accuracy and precision. This is because the actual concepts are discussed in terms of single observations, but the analogy uses a distribution of observations. I don't see how precision can be modelled with a single arrow unless you show a very fat arrow being less precise than a very sharp arrow because the fat arrow covers a wider area at once, such as the difference between a ruler that shows only cm versus one that shows mm. As for using a set of arrows, any one of those arrows can be called accurate, but using an average position as a measure of the accuracy is kind of weak as far as the analogy goes. Also, I think the clustering of arrows together better describes the idea of reliability, rather than precision.

Agreed completely. The arrows seem to imply things, like you said, that aren't always true. When a machine is giving you output on the same item, usually it is the group that is meaningful, not the one value. The arrow analogy seems to totally neglect the problem of having accurate but imprecise values in a series. So if the bullseye is 1.532 and I get values of 1.5, 1.53, 1.532, and 2, I was accurate. The precision of the data varies, yet they are all 100% accurate. So the final value I derive from these data points will not be very precise, due to the wide range, but that doesn't mean the arrows didn't all hit their target; they did, with perfect accuracy. It's just that if this were a real-world analogy, we wouldn't know where the bullseye was and could only look at the values of the arrows (as if the arrows were different widths and we couldn't see the bullseye after the hit). The arrow analogy neglects this example, which I've worded poorly. I second the view that precision and repeatability need to be differentiated as well. Basically we need to distinguish between individual measurements being precise, and a string of data being precise in the sense of being repeatable (precise as a whole, each value being close to the others, but not necessarily each measurement being claimed to have a known precision of its own). --24.29.234.88 (talk) 11:39, 30 January 2009 (UTC)

The arrow analogy is completely wrong as it confuses Accuracy with lack of Bias. The top target is a perfect example. That shot pattern is not one of high accuracy because each shot is not close to the target value. mean squared error (MSE) = (Bias^2 + Variance). If the Bias is small and the Variance is small, then you do get low MSE (high Accuracy). The picture above showing a distribution and labeling Accuracy as the distance from the center of the distribution to the true value also confuses Bias with Accuracy. 165.112.165.99 (talk) 12:31, 11 April 2013 (UTC)

You are right, there is a problem with the traditional use of the term "accuracy" in metrology. ISO and BIPM have changed their terminology so that "accuracy" now means that a measurement has both "trueness" (a new term for "lack of bias") and "precision". See the new paragraph Terminology of ISO 5725. SV1XV (talk) 00:01, 12 April 2013 (UTC)
I agree that the arrow analogy is poor. In addition to the arguments above, when shooting at a target, an archer can see where the bullseye is, and will adjust his/her shot depending on where previous arrows have struck. What's more, the existing "target" section was rambling and poorly written. If a target analogy is useful (which I don't think it is), the section would need a complete rewrite anyway. So I have removed this section. --Macrakis (talk) 16:48, 5 July 2013 (UTC)

Precision, repeatability and reliability[edit]

I don't think precision and repeatability are quite the same. Repeatability unequivocally involves time, whereas precision may not. Also, there is no mention of reliability here.

Please consider the relation with time average and sample average - see Ergodic Theorem.

In measurement system analysis, at least in gauge calibration work, repeatability is the variation in measurements taken with the same operator, instrument, samples, and method at pretty much the same time. Reproducibility is the variation across operators, holding the instrument, samples, method, and time constant. (See [Gauge R&R].) Measurement systems might drift over time, but we still have the problem of making some judgement about some Platonic ideal truth that may not be knowable. Maybe precision is about the granularity of some representation of an ideal, like precision (arithmetic), while accuracy is some statement about the quality of the correspondence between the ideal and the representation. 70.186.213.30 18:14, 15 July 2006 (UTC)
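As a rough illustration of the repeatability/reproducibility distinction described above (not the full ANOVA-based Gauge R&R calculation), a small Python sketch with hypothetical readings:

    import statistics

    # Hypothetical gauge study: two operators each measure the same part three times.
    readings = {
        "operator_A": [10.02, 10.04, 10.03],
        "operator_B": [10.11, 10.09, 10.10],
    }

    # Repeatability: spread of repeated readings by the same operator (averaged across operators here).
    repeatability = statistics.mean(statistics.stdev(v) for v in readings.values())

    # Reproducibility: spread of the per-operator means.
    reproducibility = statistics.stdev(statistics.mean(v) for v in readings.values())

    print(f"repeatability ~ {repeatability:.3f}, reproducibility ~ {reproducibility:.3f}")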

Terminology according to ISO[edit]

The International Organization for Standardization (ISO) provides the following definitions

Accuracy: The closeness of agreement between a test result and the accepted reference value.

Trueness: The closeness of agreement between the average value obtained from a large series of test results and an accepted reference value.

Precision: The closeness of agreement between independent test results obtained under stipulated conditions.

Reference: International Organization for Standardization. ISO 5725. Accuracy (trueness and precision) of measurement methods and results. Geneva: ISO, 1994.

In the case of a random variable with a normal distribution, trueness can be interpreted as the closeness of its mean (i.e. expected value) to the reference value and the precision as the inverse of its standard error.

BIPM also adopts the ISO 5725-1 definitions. I added a new section and a relevant citation. However in the USA the term "trueness" is not widely used yet. SV1XV (talk) 00:06, 12 April 2013 (UTC)
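To illustrate the ISO 5725 interpretation sketched above, a minimal Python example (hypothetical values; "trueness" reported here as the bias of the mean and "precision" simply as the sample standard deviation, one common reading of the definitions):

    import statistics

    reference = 10.00                              # accepted reference value (hypothetical)
    results = [10.12, 10.09, 10.11, 10.10, 10.13]  # hypothetical test results

    mean = statistics.mean(results)
    bias = mean - reference              # small bias -> good trueness
    spread = statistics.stdev(results)   # small spread -> good precision

    print(f"mean = {mean:.3f}, bias (trueness) = {bias:+.3f}, spread (precision) = {spread:.3f}")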

Quantifying accuracy and precision[edit]

High precision is better than low precision. High deviation is worse than low deviation. How can you identify precision with deviation? Bo Jacoby 12:28, 11 August 2006 (UTC)

I found a phrase in Merriam-Webster's Unabridged that I like: minute conformity. That covers both senses of precision that people describe: degree of closeness in a set of measured values, and smallest increments of a measuring scale on a measuring device. Anyhow, I see in the article that standard deviation has become a standard way to quantify precision, though one could consider average absolute deviation. Now, because of the inverse relationship between precision and deviation, I suppose precision could be defined in those inverse terms. As deviation approaches infinity, precision approaches zero, and vice versa? 207.189.230.42 07:40, 10 December 2006 (UTC)
Numerically, precision is defined as the reciprocal of the variance. Jackzhp (talk) 01:26, 19 December 2007 (UTC)
Can you cite a reliable source for that claim?  --Lambiam 08:56, 19 December 2007 (UTC)
1. James Hamilton, "Time Series Analysis", 1994, page 355. 2. "Price Convexity and Skewness" by Jianguo Xu, Journal of Finance, 2007, vol. 62, issue 5, pages 2521-2552; on page 2525, precision is defined as the reciprocal of variance. 3. Some research papers in finance use precision-weighted portfolios. After you verify it, I guess you will change this article and variance. Jackzhp (talk) 22:50, 19 December 2007 (UTC)
Well, there are several problems with that. I concede that Hamilton states that "the reciprocal of the variance [...] is known as the precision". First, this is done in a specific context, Bayesian analysis. Definitely not all authors use this terminology. For example, S. H. Kim, Statistics and Decisions: An Introduction to Foundations (Chapman & Hall, 1992, ISBN 9780442010065), states that "high precision is tantamount to low variance" and introduces the notion of efficiency, which is proportional to the reciprocal of variance, as a measure of precision, but never defines precision as a numerical quantity. The author even writes "a lower bound exists on the precision" (page 155) where that lower bound is a lower bound on the variance, and not its reciprocal. Then, in contexts in which precision of measurement or manufacture is important, the standard deviation (and therefore variance) is a dimensioned quantity, like 0.52 mm, giving a variance of 0.27 mm2. Most people, including professional statisticians involved in quality control, might be puzzled on reading that some part is manufactured to a precision of 3.7 mm−2, and might not understand that this is supposed to mean something like ±0.52 mm. Other sources have incompatible definitions. Federal Standard 1037C offers this definition: "precision: 1. The degree of mutual agreement among a series of individual measurements, values, or results; often, but not necessarily, expressed by the standard deviation."[1] A web page given as a reference in our article has this to say: "There are several ways to report the precision of results. The simplest is the range (the difference between the highest and lowest results) often reported as a ± deviation from the average. A better way, but one that requires statistical analysis would be to report the standard deviation."[2] So I think it would be unwise to write, without any reservation, that "precision is defined as the reciprocal of the variance". At best we could write that some authors define it thusly in the context of estimation theory.  --Lambiam 11:21, 20 December 2007 (UTC)
So how about putting all of the definitions in the article and letting readers choose which one to use? At present, only one meaning is assumed in the article. Jackzhp (talk) 18:40, 21 December 2007 (UTC)
I have no objections in principle against an additional section "Accuracy and precision in estimation theory", in which it then could be stated (with citations) that some authors define precision as the reciprocal of variance.  --Lambiam 22:34, 21 December 2007 (UTC)
Definitions on these subjects vary more than anything I know of. Let me point out that NIST, in "Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, Technical Note 1297" [1], states that accuracy is a qualitative concept and hence should not be associated with numbers! Instead, use expressions like "standard uncertainty", which can be associated with numbers (and precision should not be used for accuracy). They also recommend not using the term "inaccuracy" at all. This document has a lot of additional definitions that pop up in discussions like this one. All of this can be found in appendix D of the NIST document. I believe the definitions are intended to be very general with regard to the area of interest. In my opinion the terms "random error" and "systematic error" are much more self-describing and intuitive. The terms "accuracy" and "precision" just lead to too much confusion. Wodan haardbard (talk) 13:06, 26 January 2009 (UTC)
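For comparison, a small Python sketch (hypothetical measurements) showing how the same data give "precision" as a standard deviation, a variance, or the reciprocal of the variance, the estimation-theory convention discussed above:

    import statistics

    # Hypothetical repeated measurements of a part dimension, in mm.
    measurements = [10.3, 9.8, 10.6, 9.5, 10.1, 10.4]

    sd = statistics.stdev(measurements)   # a "precision" figure as a standard deviation, in mm
    var = sd ** 2                         # variance, in mm^2
    reciprocal = 1 / var                  # "precision" in the estimation-theory sense, in mm^-2

    print(f"std dev = {sd:.2f} mm, variance = {var:.2f} mm^2, reciprocal of variance = {reciprocal:.1f} mm^-2")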

just a funny tag, contextually[edit]

free to delete :)

Very amusing.  :) Deleted, LOL. 02:25, capitalist 02:25, 24 August 2006 (UTC)

Second picture is confusing[edit]

The second picture (probability density) is confusing: if you look at the picture it seems that the precision increases with the standard deviation, which is not true. The same holds for the accuracy.

The picture should instead show the inverse of accuracy and precision.

formulas[edit]

Mathematical formulas might provide more insight. Can anyone do that? As2431 20:01, 5 December 2006 (UTC)

No formulas, just ambiguity[edit]

The more I analyze the sentences with terms "accuracy", "precision", "resolution", the less I understand the meaning of these terms.

What does it mean, to increase the accuracy? What is high accuracy?
What does it mean, to increase the precision? What is high precision?
What does it mean, to increase the resolution? What is high resolution? What is, for example, "600 nm resolution"?

What mathematical formulas can be written for these meaningless quantities? I consider all of these terms highly ambiguous, and I suggest avoiding their use altogether.

We should say random deviation, systematic deviation, lower limit of resolution, or, if you like, random error, systematic error, and so on, to characterise the performance.

Then the colleagues will not try to increase the errors. dima 02:56, 10 April 2007 (UTC)

P.S.: Do you know, what does it mean, for example, "to drift with the North wind"? What direction does one move at such a drift?

inaccurate textual description of accuracy formula under biostatistics?[edit]

In the section "Accuracy in biostatistics," the article says:

   That is, the accuracy is the proportion of false positives and true negatives in the population. It is a parameter of the test.

This is not, however, what the formula says. I think that "false positives" in the above statement ought to be replaced with "true positives." What do you think? Hsafer 04:55, 13 May 2007 (UTC)

I reworked it - hopefully should be less ambiguous now. --Calair 07:38, 13 May 2007 (UTC)
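For what it's worth, the corrected reading (accuracy as the proportion of true positives and true negatives among all results) is easy to state in code; a minimal Python sketch with hypothetical confusion-matrix counts:

    def accuracy(tp, tn, fp, fn):
        """Proportion of correct calls (true positives + true negatives) among all results."""
        return (tp + tn) / (tp + tn + fp + fn)

    # Hypothetical confusion-matrix counts for a diagnostic test.
    print(accuracy(tp=90, tn=850, fp=50, fn=10))  # 0.94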

What is accuracy, precision and trueness?[edit]

I am confused by the definitions given.

The AMC technical brief by the Analytical Methods Committee, No. 13, Sep 2003 (Royal Society of Chemistry 2003), "Terminology - the key to understanding analytical science. Part 1: Accuracy, precision and uncertainty" [3]

gives a different definition: according to them, accuracy is a combination of systematic and random errors; it is not just pure systematic error. Therefore trueness is used to represent the systematic errors, and precision the random ones.

Take note of that. Linas 193.219.36.45 09:17, 25 May 2007 (UTC)

AMC are using the paradigm used in ISO 5725, the VIM and others. Going back a ways, 'accurate' used to mean 'close to the truth' and 'precise' meant 'closely defined' (in English pretty much as in measurement, historically ). Somewhere in the '80's, someone - probably ISO TC/69, who are responsible for ISO statistical definitions - defined 'accuracy' as 'closeness of _results_ to the true value'. Individual results are subject to both random and systematic error, so that defined accuracy as incorporating both parts of error. Precision covers the expected size of random error well - that's essentially what it describes. But having defined 'accuracy' as including both, there was no term out there for describing the size of the systematic part. So 'Trueness' was born as a label for the systematic part of 'closeness'. As a result, for measurement, we _now_ have 'accuracy' including both trueness (systematic) and precision (random), and of course this is why the AMC uses the terms it does - they are the current ISO terms. However, things invariably get tricky when this way of looking at the problem collides with ordinary Engish usage or historical measurement usage, both of which tend to use 'accuracy' to refer to the systematic part of error. Of course, this leaves this page with a bit of a problem; it becomes important to decide which set of terms it's intended to use... SLR Ellison (talk) 22:46, 1 June 2014 (UTC)

Confused[edit]

"Further example, if a measure is supposed to be ten yards long but is only 9 yards, 35 inches measurements can be precise but inaccurate." I'm a bit confused as to what this is saying - can somebody clarify? --Calair 01:33, 9 June 2007 (UTC)

Is "accuracy in reporting" appropriate in this article?[edit]

This seems to me to be a very different sort of topic. All of the other sections are referring to the scientific concepts of accuracy and precision within the context of measurement (or statistical treatment of data). These terms have a fairly precise meaning in that context, but in the reporting context that is missing. Is there a reference that includes reporting in this context? (I have searched but cannot find one.) Until a citation can be made to make this fit in here, I will delete it. Anyone can feel free to revert it if they have a citation to add to make it work. 128.200.46.67 (talk) 23:02, 20 April 2008 (UTC)


Accurate but not precise[edit]

The lead states that "The results of calculations or a measurement can be accurate but not precise...", and there is a graphic lower down to illustrate this. However, the text in "Accuracy vs precision - the target analogy" contradicts this by stating that it is not possible to have reliable accuracy in individual measurements without precision (but you might be able to get an accurate estimate of the true value with multiple measurements).

One or other of these needs to be changed (I think the lead is wrong).

Possibly a distinction needs to be made between measurements and instruments. I would think (i.e. this may be POV/OR) that an instrument could be accurate but imprecise (if it was zeroed correctly but had a large random error) or precise but inaccurate (if it had a small random error but wasn't zeroed correctly).

But as "Accuracy vs precision - the target analogy" says, without precision, an individual measurment will only be accurate if you are lucky (and is pretty useless, because if you have no independent confirmation of the true value, and if you do have independent confirmation, the imprecise measure adds nothing new), while a set of measurements can merely be used to estimate the true value. 62.172.108.23 (talk) 11:23, 9 July 2008 (UTC)

Totally agree with you man. I've got some analytical chem textbook I could cite from around here somewhere.

Here's my view; advise if anyone disagrees.

The article states: "accuracy is the degree of closeness of a measured or calculated quantity to its actual (true) value." This seems relevant only when dealing with inaccurate measurements. So if the value is 1421 and I say 1.4E3, I am accurate. That I wasn't very close doesn't matter; that's a discussion of precision. Now if I say 2.1E3, I am inaccurate and equally imprecise. You could say my accuracy was way off, but that wouldn't hold true for an accurate number that was just even more imprecise, such as 1E3, even though that number subsumes the inaccurate value and as such is also quite far from the true value. Agree? So this quote only applies to inaccurate numbers or data, in my view. Of course, with a data string the discussion would depend on what the additional claims are, i.e. if the claim is that the average is something and the true value is 95% likely to be within a range, and it is, then again the data set was accurate, its precision notwithstanding, and it would be a fallacy to say it was inaccurate when nobody claimed the individual numbers had meaning prior to the statistical analysis.


For example, the bullseye graphic is somewhat misleading by exclusion. If I threw a dart the size of the room and it hit every point on the bullseye, that would be accurate. The fact that my next throw is way off with respect to the location of the prior 'perfect throw', but still covers the whole bullseye, doesn't affect the accuracy at all, only the precision. So noting that you can still be accurate, even perfectly so, and imprecise would be good. This is relevant to data, a string or otherwise, that has large confidence intervals or large ranges expressed by few significant figures.

Agree or disagree? (and yes, my spelling of precision is not very percise, but I only claim my spelling to be +/- 30% of letters correct, so I am accurate). --24.29.234.88 (talk) 11:21, 30 January 2009 (UTC)
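A tiny Python illustration of the significant-figure point above, using the same hypothetical numbers (the implied half-width is read from the last reported digit):

    from decimal import Decimal

    true_value = Decimal(1421)

    for reported in ("1.4E3", "2.1E3", "1E3"):
        d = Decimal(reported)
        # Half-width implied by the last reported digit, e.g. 1.4E3 -> +/- 50.
        half_width = Decimal(1).scaleb(d.as_tuple().exponent) / 2
        verdict = "covers" if abs(true_value - d) <= half_width else "misses"
        print(f"{reported}: +/- {half_width}, {verdict} the true value {true_value}")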

A Venn Diagram Like Assessment of “Precise and Not Accurate”[edit]

“A measurement system can be accurate but not precise (reproducible/repeatable), precise (reproducible/repeatable) but not accurate, neither, or both.”

Let us try to simplify the understanding of this statement a little by considering what happens when what we are measuring (the “element of reality”) takes a single real value (ultimately, it seems that this must in some sense be the case for all measurements in that we can construct some mathematical function associated with a characteristic function that embodies all the information of what we are trying to measure – assuming that what we are trying to measure can be embodied by some type of “event space”). THEN:

I) “(Not precise) AND (accurate)” implies that we get the true result BUT that what we measure is NOT reproducible (so we consistently get many DIFFERENT true results). It seems that in many cases, since what is being measured has some degree of static nature (in the sense that its value or probability is constant enough for us to want to measure it), this eventuality is NOT possible. There are probably “rational framing” issues here – for instance, a ruler on a spaceship at varying near-light speeds WILL have many different “true” lengths, BUT the experimental circumstances under which the measurements occurred were not like-for-like. So, we potentially have that “(Not precise) AND (accurate)” = Empty-Set (assuming that the True value of what we are measuring can take ONLY one value at a time).

II) “Precise AND (not accurate/reproducible)” implies that we get a reproducibly WRONG result. This is perfectly possible.

III) “Precise and accurate” is what we usually aim for.

IV) NOT precise and NOT accurate = NOT (precise OR accurate).

In the dart-board example, I imagine that the “truth” is meant to be represented by the centre of the dart board (i.e. this is the measure of “accuracy”) WHEREAS the precision is the tightness of clustering. BUT how on Earth can one justify one's measurement of a “true value” if there is low clustering WITHOUT using averages/statistics? Once we assume that what we're measuring can only take one value at a time (i.e. it is NOT multi-valued), then case (I) becomes the empty set of events.

BUT, IF what we are measuring CAN take many values AT THE SAME TIME (presumably in quantum mechanics, where in a sense it is not possible to deal with physical quantities at high temporal resolution, if they exist at all at such resolutions), then the “[quantised] best-guess location of an electron” would have several values, which the experimenter might NOT want to encode into the form of a statistic (even if that statistic were just an array of the best-guess locations). Still, this just seems like a weird thing to do, and my characteristic-function argument above would still result in a statistic whose measurement only yields one value.

Can anyone think of a reasonable “multivalued” measurement/element of reality? AnInformedDude (talk) 23:53, 5 February 2013 (UTC)

Accuracy and precision in logic level modeling and IC simulation[edit]

As described in the SIGDA Newsletter [Vol. 20, Number 1, June 1990], a common mistake in the evaluation of accurate models is to compare a logic simulation model to a transistor circuit simulation model. This is a comparison of differences in precision, not accuracy. Precision is measured with respect to detail and accuracy is measured with respect to reality. Another reference for this topic is "Logic Level Modelling", by John M. Acken, Encyclopedia of Computer Science and Technology, Vol. 36, 1997, pages 281–306.

I removed this sentence because I believe it can be understood only by whoever wrote it, and by those who have read the cited articles. The concept (which is not explained) might be interesting. If the author wants to make the sentence clear and reinsert it in the article, I will not object. Paolo.dL (talk) 14:53, 8 August 2008 (UTC)

The thing is, the quote Precision is measured with respect to detail and accuracy is measured with respect to reality is actually the best quote in the entire article. Precision may have the meaning ascribed in this article within the world of statistics, but I was always taught that the difference is that precision is the resolution of the measurement, as opposed to accuracy, which is its 'trueness'. Why can't we say this in the opening paragraph, and then follow it up with a more detailed definition including all the references to 'repeatable', etc? Blitterbug (talk) 11:22, 7 July 2009 (UTC)

Thickness of the lines on a ruler[edit]

This article needs a few simple "everyday life" examples. As it is written now, I can't figure out whether the fact that the lines marking off the millimetres on my ruler are not infinitely thin makes it inaccurate or imprecise. Roger (talk) 13:09, 15 July 2011 (UTC)

Misleading Probability density graphic[edit]

According to the probability density graphic, an increase in the distance labelled "accuracy" is NOT the same as higher accuracy. The same goes for precision. Shouldn't the distance labels be 1/Accuracy and 1/Precision? — Preceding unsigned comment added by 99.66.147.165 (talk) 01:00, 6 October 2011 (UTC)

Wrong merge[edit]

The logical congruency between this page and Precision and recall is not such that a merge is possible. Please consider removing the merge tag. Bleakgh (talk) 20:11, 3 June 2012 (UTC)

Target image[edit]

The target image with four holes around the center stated "High accuracy, low precision", however, without precision there is no accuracy of a series of measurements. Just because the average of four scattered or "bad shots" is in the center does not mean the cluster is accurate. If the repeatability of a group of measurements is poor (low precision) there can be no accuracy. Vsmith (talk) 16:27, 5 July 2013 (UTC)

The images and section have been removed; see Talk:Accuracy_and_precision#Arrow_analogy above. Vsmith (talk) 17:56, 5 July 2013 (UTC)

Clarification[edit]

I appreciate the depth of discussion (particularly how the terms have been used historically and in different disciplines) but got a bit lost. Maybe we could clarify the main conceptual distinction as: Accuracy = exactness; Precision = granularity. Then develop the discussion from this primary distinction, which is conceptual, rather than tied to any historical or disciplinary uses of the terms. Transportia (talk) 18:17, 18 January 2014 (UTC)


Philosophical Question[edit]

Does this difference make any sense at all? First of all, the difference between accuracy and precision as defined here can only make sense in terms of a definite and known goal. For instance, an archer might try to hit a target by shooting arrows at it. If he has a large scatter, then one can call him "imprecise" (although, e.g., in German this is a complete synonym of "inaccurate"). If he has a small scatter, then the archer might be called "precise". But if he systematically misses the target centre, we call him "inaccurate"? Well, this seems to me rather nonsense - "precise but inaccurate"?! ;)

If I understand right, you want to express the difference of the situation where, out of an ensemble of scattered points, the estimated mean is (e.g. statistically) closer to the/a "true" mean than the standard deviation (or some multiple of it, maybe) of this scatter field. This is really arbitrary! But in terms of statistics this is even complete nonsense - and probably related to terms like "bias". If you don't know the true target then you cannot tell anything about a systematic deviation. In experimental physics, you compensate for this by performing several independent experiments with different setups. But the only thing one can observe is that maybe two results of two different experiments are inconsistent within e.g. one standard deviation (arbitrariness!). But which experiment was more "true" is infeasible to determine. A third and a fourth etc. experiment might then point more to the one or to the other. But if you cannot figure out in your experiment the thing which might provoke a possible bias, you have no clue whether your experiment or the others have a bias.

But let's come back to the original question. In my eyes, accuracy and precision are by no means really different, just as systematic and stochastic/statistical uncertainties are not, as long as you have no information about which direction your systematic error goes, or about whether it is present at all.

  1. ^ http://physics.nist.gov/Pubs/guidelines/contents.html