Talk:Accuracy and precision


Accuracy, TRUENESS and precision

Hi, I am German and just wanted to take a look at what you English-speaking people are writing about this topic. I think you are not in line with the bible of metrology, the VIM (Vocabulaire international de métrologie): http://www.bipm.org/utils/common/documents/jcgm/JCGM_200_2008.pdf There, the three words are defined exactly and in accordance with ISO. Take a look and don't be afraid, it's written in French AND English :-) I think "measurement accuracy" is the generic term, something like the umbrella word. "Precision" is described correctly here, but what is here called "accuracy" should be called "trueness". I am not confident enough to change an English article, but you can take a look at Richtigkeit in the German Wikipedia. There, I have put some graphics which show how accuracy, precision and trueness are described in the VIM. You can use them to improve this article. Good luck! cu 2clap (talk) 17:58, 14 January 2011 (UTC)


Just putting my own independent comment below regarding 'precision'. KorgBoy (talk) 05:41, 20 March 2017 (UTC)

The discussion about precision should always begin or end with a discussion about whether or not 'precision' has units. In other words, is it measurable and convertible to something quantitative, like a number or value? What really confuses readers is that there's this word 'precision', but nobody seems to say whether it can be quantified like 'accuracy' or 'uncertainty'. For example, is precision the same as 'variance', or maybe a 'standard deviation'? If so, then it should be stated. Otherwise, tell people straight up whether precision is just a descriptive word, or whether it can have a number associated with it. KorgBoy (talk) 05:39, 20 March 2017 (UTC)
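To make that concrete, here is a minimal sketch (Python, with made-up readings and an assumed accepted reference value of 10.00 mm) of how the scatter of repeated measurements can be quantified as a sample standard deviation while the systematic offset is quantified separately as a bias; whether those two numbers are then labelled "precision" and "trueness" (or "accuracy") is exactly the terminological question raised above.

 import statistics
 # Hypothetical repeated readings of a reference standard whose accepted value is 10.00 mm.
 readings = [10.12, 10.09, 10.11, 10.10, 10.13]
 reference_value = 10.00
 mean_reading = statistics.mean(readings)
 bias = mean_reading - reference_value   # systematic offset ("trueness" in ISO 5725 terms)
 spread = statistics.stdev(readings)     # sample standard deviation (a quantitative measure of "precision")
 print(f"mean = {mean_reading:.3f}  bias = {bias:+.3f}  standard deviation = {spread:.3f}")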

Re Accuracy, TRUENESS and precision 2

Hi, I am not English either, I am Slovak, but a similar problem exists in my language too. I agree with you. I was taught to use accuracy and precision as they are used in the article (another problem is translation). But really, according to ISO 5725, accuracy (trueness and precision) is the umbrella term covering trueness and precision. So here "accuracy" should be called "trueness", but historically many people use accuracy to mean trueness (incorrectly, according to ISO 5725). Another problem is translating the new terminology into other language versions of ISO. For instance, accuracy (trueness and precision) is, in the Slovak version of ISO, named precision (trueness and conformity). So the same word that was used for precision (at some universities, for example) in the sense of this article is now used for accuracy in the sense of ISO 5725 (and the best translation of this word, presnost, is correctness). —Preceding unsigned comment added by 212.5.210.202 (talk) 10:41, 27 April 2011 (UTC)

-) — Preceding unsigned comment added by 86.45.42.149 (talk) 10:02, 5 December 2013 (UTC)

What are accuracy, precision and trueness?

I am confused by the definition given.

The AMC technical brief No. 13, Sep 2003, by the Analytical Methods Committee (Royal Society of Chemistry 2003), "Terminology - the key to understanding analytical science. Part 1: Accuracy, precision and uncertainty" [1], gives a different definition: according to them, accuracy is a combination of systematic and random errors; it is not just pure systematic error. Trueness is therefore used to represent the systematic errors, and precision the random ones.

Take note of that. Linas193.219.36.45 09:17, 25 May 2007 (UTC)

AMC are using the paradigm used in ISO 5725, the VIM and others. Going back a ways, 'accurate' used to mean 'close to the truth' and 'precise' meant 'closely defined' (in English pretty much as in measurement, historically). Somewhere in the '80s, someone - probably ISO TC/69, who are responsible for ISO statistical definitions - defined 'accuracy' as 'closeness of _results_ to the true value'. Individual results are subject to both random and systematic error, so that defined accuracy as incorporating both parts of error. Precision covers the expected size of random error well - that's essentially what it describes. But having defined 'accuracy' as including both, there was no term out there for describing the size of the systematic part. So 'trueness' was born as a label for the systematic part of 'closeness'. As a result, for measurement, we _now_ have 'accuracy' including both trueness (systematic) and precision (random), and of course this is why the AMC uses the terms it does - they are the current ISO terms. However, things invariably get tricky when this way of looking at the problem collides with ordinary English usage or historical measurement usage, both of which tend to use 'accuracy' to refer to the systematic part of error. Of course, this leaves this page with a bit of a problem; it becomes important to decide which set of terms it's intended to use... SLR Ellison (talk) 22:46, 1 June 2014 (UTC)

Clarification

I appreciate the depth of discussion (particularly how the terms have been used historically and in different disciplines) but got a bit lost. Maybe we could clarify the main conceptual distinction as: accuracy = exactness, precision = granularity, and then develop the discussion from this primary distinction, which is conceptual rather than tied to any historical or disciplinary uses of the terms. Transportia (talk) 18:17, 18 January 2014 (UTC)

Philosophical question

Does this difference make any sense at all? First of all, the difference between accuracy and precision as defined here can only make sense in terms of a definite and known goal. For instance, an archer might try to hit a target by shooting arrows at it. If he has a large scatter, then one can call him "imprecise" (although, e.g. in German, this is a complete synonym of "inaccurate"). If he has a small scatter, then the archer might be called "precise". But if he systematically misses the target center, we call him "inaccurate"? Well, this seems to me rather nonsense - "precise but inaccurate"?! ;)

If I understand right, you want to express the difference of the situation where, for an ensemble of scattered points, the (e.g. statistically) estimated mean is closer to the/a "true" mean than the standard deviation (or some multiple of it, maybe) of this scatter field. This is really arbitrary! And in terms of statistics it is arguably complete nonsense - and probably related to terms like "bias". If you don't know the true target, then you cannot tell anything about a systematic deviation. In experimental physics, you compensate for this by performing several independent experiments with different setups. But the only thing one can observe is that maybe the results of two different experiments are inconsistent within, e.g., one standard deviation (arbitrariness!). Which experiment was more "true" is infeasible to determine. A third and a fourth etc. experiment might then point more to the one or to the other. But if you cannot identify the thing in your experiment which might provoke a possible bias, you have no clue whether your experiment or the others have a bias.

But let's come back to the original question. In my eyes, accuracy and precision are by no means really different, just as systematic and stochastic/statistical uncertainties are not, as long as you have no information about which direction your systematic error goes in, or about whether it is present at all.

The two terms really are different. Accuracy is "bias" and precision is "variance". To actually measure the bias, there needs to be a "true" value to compare against. Still, I agree the terminology is confusing. Prax54 (talk) 22:14, 9 January 2015 (UTC)
The "philosohical question" is valid. For this reason (and for others as well) contemporary metrology has moved away from the traditional terms and uses "uncertainty". See "ISO/BIPM GUM: Guide to the Expression of Uncertainty in Measurement" (1995/2008) [2]. The Yeti 02:34, 10 January 2015 (UTC)
The question is valid, but in theory there is a decomposition of error into bias and variance. Considering that this article is confusing as it is now, it would be worth adding a discussion of uncertainty to the article to reflect the more recent guide you linked. Prax54 (talk) 03:46, 10 January 2015 (UTC)
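For what it's worth, the decomposition mentioned above can be written out explicitly. A standard way of putting it (a sketch in LaTeX, assuming a true value \mu exists and X denotes a measurement result) is

 \operatorname{E}\!\big[(X-\mu)^2\big] \;=\; \underbrace{\big(\operatorname{E}[X]-\mu\big)^2}_{\text{squared bias ("trueness")}} \;+\; \underbrace{\operatorname{E}\!\big[(X-\operatorname{E}[X])^2\big]}_{\text{variance ("precision")}}

i.e. the mean squared error splits into a systematic part and a random part; without a usable value for \mu, only the second term can be estimated from the data.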
The "philosophical question" is a physical question: The "true value" cannot be known, and thus without a true value, a term such as "bias" is meaningless. And this is precisely why uncertainty in a measurement is categorized according to the method used to quantify it (statistical or non-statistical), and it is precisely why it is the uncertainty in a measurement that is preferred to be quantified rather than the error in the measurement, which cannot be quantified without uncertainty. "Error" and "uncertainty" are strictly different terms. "Error" is a meaningful term only if a "true value" exists. Since the value of the measurand cannot be determined, in practice a conventional value is sometimes used. In such a case, where a reference value is used as an accepted "true value", the term "error" becomes meaningful and indeed a combined error can be decomposed into random and systematic components. But even in that case, quantifying "error" rather than "uncertainty" is un-necessary (although traditional) and inconsistent with the general case. The terms "accuracy" and "precision", along with a whole bunch of other words, such as "imprecision", "inaccuracy", "trueness" are strictly qualitative terms in the absence of a "true value", which seems to be really absent. Therefore, these naturally mixed-up terms should not be used as quantitative terms: There is uncertainty, and uncertainty alone. And, the preferred way to categorize it is as "statistically evaluated uncertainty" and "non-statistically evaluated uncertainty".
The article should make clear the distinction between "error" and "uncertainty", and then explain the terminology associated with these two terms. Currently it focuses only on "error", which it clearly states in the lead section. However, there are references to ISO 5725 and the VIM, which are among the standard documents in metrology, and which clearly prefer to evaluate uncertainty rather than error. The latest, corrected versions of these standard documents are at least 5 years old. The VIM still has issues with the 'true value', which was made clear in NIST's TN-1297. TN-1297 is a pretty good summary of only 25 pages. I think it is an elegant document that succeeds in explaining a confusing subject. Another good one is "Introduction to the evaluation of uncertainty", published in 2000 by Fathy A. Kandil of the National Physical Laboratory (NPL), UK (12 pages).
After pages and pages of discussion on the talk pages of the relevant articles, I think it is still (very) difficult for the general Wikipedia reader to obtain a clear understanding of the following terminology in common usage: precision, certainty, significant figures, number of significant figures, the right-most significant figure, accuracy, trueness, uncertainty, imprecision, arithmetic precision, implied precision, etc. After struggling with very many Wikipedia pages, it is highly probable that the reader will leave with more questions than answers: Is it precise or accurate, or both? Does the number of significant figures, or the right-most significant figure, indicate accuracy (or precision)? What is significant about significant figures? (In fact, I think I saw somebody complaining about people asking what was significant about significant figures, which really is the one and only significant question about significant figures.) There are horrendous inconsistencies in the terminology common to numerical analysis and metrology (and maybe other contexts), and it may be confusing to apply the terms to individual numbers and to sets of measurements.
WaveWhirler (talk) 19:39, 29 March 2015 (UTC)
There is already a WP article, Measurement uncertainty, which covers the ISO GUM "uncertainty" approach. The Yeti 16:40, 30 March 2015 (UTC)
Well, there are also Random_error and Systematic_error, but apparently they haven't invalidated this article so far (although I think this one should have invalidated them). By the way, Random_error and Systematic_error are suggested to be merged into Observational_error, which is pointed to as the "Main article" of Measurement_uncertainty#Random_and_systematic_errors, except under the name "Measurement error". Frankly speaking, all I see is a great effort resulting in a mess (as I have already pointed out in my first post in this section), which can hardly help a person who is not familiar with the terminology but wants to learn. That's the point.
Random error [VIM 3.13]: result of a measurement minus the mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions. The averaging operation eliminates a truly random error in the long run, as explained by the law of large numbers, and thus the average of infinitely many measurements (performed under repeatability conditions) does not contain random error. Consequently, subtracting the hypothetical mean of infinitely many measurements from the total error gives the random error. Random error is equal to error minus systematic error. Because only a finite number of measurements can be made, it is possible to determine only an estimate of random error.
Systematic error [VIM 3.14]: mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions minus the value of the measurand. Systematic error is equal to error minus random error. Like the value of the measurand, systematic error and its causes cannot be completely known. As pointed out in the GUM, the error of the result of a measurement may often be considered as arising from a number of random and systematic effects that contribute individual components of error to the error of the result. Although the term bias is often used as a synonym for the term systematic error, because systematic error is defined in a broadly applicable way in the VIM while bias is defined only in connection with a measuring instrument, we recommend the use of the term systematic error.
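Stated compactly (a sketch in LaTeX of the two definitions just quoted, writing x for a single result, \mu_\infty for the hypothetical mean of infinitely many measurements under repeatability conditions, and x_\text{true} for the value of the measurand):

 \text{error} = x - x_\text{true}, \qquad \text{random error} = x - \mu_\infty, \qquad \text{systematic error} = \mu_\infty - x_\text{true},

so that error = random error + systematic error, as the two VIM entries above imply.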
Here is the important part: the two titles "Measurement uncertainty" and "Accuracy and precision" refer to exactly the same subject matter, except that the terms precision and accuracy require a "true value" to be put at the bull's eye in those notorious figures used to explain the concepts of precision and accuracy in numerical analysis books, so that the concept of "error" can be defined and quantified relative to the bull's eye, and a nice scatter can then be obtained around a mean that is possibly not the bull's eye, which gives the definitions of "precision" and "accuracy". If those two articles were so badly in need of being separated, and a "true value" is required to make one of them valid, then the article should at least mention that.
All that bagful of terms can be (and quite commonly is) applied, with potential variations, to:
* individual number representations
* mathematical models/numerical methods
* sets of measurements
* Data acquisition (DAQ) measurement systems (i.e. expensive instruments whose manufacturers tend to supply specifications for their equipment that define its accuracy, precision, resolution and sensitivity, where those specifications may very well be written with incompatible terminologies that involve the very same terms.)
WaveWhirler (talk) 20:06, 30 March 2015 (UTC)
Matters are much more concrete in manufacturing. If you're making bolts that will be compatible with the nuts you're making and the nuts other manufacturers are making, you need to keep measuring them. You might use micrometers for this (among other things). You wouldn't measure every bolt you made; you'd sample them at reasonable intervals, established on good statistical principles. You'd also check your micrometers every so often with your own gauge blocks and suchlike, but you'd check those too; ultimately you'd send your best micrometers or gauge blocks to a calibration house or metrology lab, and they in turn would test their devices against others, establishing a trail of test results going all the way back to international standards. You'll find more about this in Micrometer#testing. The distinction between accuracy and precision is highly relevant to these processes and to communication among engineers and operators, and between manufacturers and metrologists. The terms are necessarily general and the relevant ISO standards and suchlike do tend to use rather abstract language, but we might do well to lean towards the practical rather than the philosophical here. NebY (talk) 17:19, 30 March 2015 (UTC)
I appreciate the short intro to bolt making, but the entire section of Micrometer#Testing, which is composed of more than 4500 characters, does not include a single instance of the word "precision", and neither does what you wrote up there. Although "the distinction between accuracy and precision [may be] highly relevant to these processes", I don't see how it is explained in the context of these processes (anywhere).
However, I get your point on "being practical", and in fact "we might do well to lean towards the practical rather than the philosophical here" sounds like quite an alright intention to me. Any sort of simplification can be preserved for the sake of providing a smoother introduction, but at the expense of (explicitly) noting the simplification, because this is Wikipedia.
Standards are not abstract, numbers are. That's why abstract mathematics (or pure mathematics) emphasizes that the representation of a number (in any numeral system) is not the number itself, any more than a company's sign is the actual company. And what you refer to as "philosophical" in this particular discussion is how metrology is practiced by NIST, whether for bolts or light or sound or anything else: each effect, random or systematic, identified as being involved in the measurement process is quantified either statistically (Type A) or non-statistically (Type B) to yield a "standard uncertainty component", and all components are combined using a first-order Taylor series approximation of the output function of the measurement (i.e. the law of propagation of uncertainty, or commonly the "root-sum-of-squares") to yield the "combined standard uncertainty". The terms precision and accuracy are better avoided:
"The term precision, as well as the terms accuracy, repeatability, reproducibility, variability, and uncertainty, are examples of terms that represent qualitative concepts and thus should be used with care. In particular, it is our strong recommendation that such terms not be used as synonyms or labels for quantitative estimates. For example, the statement "the precision of the measurement results, expressed as the standard deviation obtained under repeatability conditions, is 2 µΩ" is acceptable, but the statement "the precision of the measurement results is 2 µΩ" is not.
Although ISO 3534-1:1993, Statistics — Vocabulary and symbols — Part 1: Probability and general statistical terms, states that "The measure of precision is usually expressed in terms of imprecision and computed as a standard deviation of the test results", we recommend that to avoid confusion, the word "imprecision" not be used; standard deviation and standard uncertainty are preferred, as appropriate."
WaveWhirler (talk) 20:06, 30 March 2015 (UTC)
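As a concrete (if simplified) illustration of the GUM-style combination described above, here is a minimal Python sketch with made-up numbers for a measurand modelled as y = a*b; the standard uncertainties u_a and u_b stand for whatever Type A or Type B evaluations produced them, and the inputs are assumed uncorrelated.

 import math
 # Hypothetical input estimates and their standard uncertainties
 # (say u_a from a Type A evaluation, u_b from a Type B evaluation).
 a, u_a = 2.50, 0.02
 b, u_b = 4.00, 0.05
 # Measurement model: y = a * b
 y = a * b
 # Sensitivity coefficients (partial derivatives of the model).
 dy_da = b
 dy_db = a
 # Law of propagation of uncertainty (first order, uncorrelated inputs):
 # combined standard uncertainty as the root-sum-of-squares of the weighted components.
 u_c = math.sqrt((dy_da * u_a) ** 2 + (dy_db * u_b) ** 2)
 print(f"y = {y:.3f} with combined standard uncertainty u_c = {u_c:.3f}")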
I was addressing the OP's philosophical concerns about "true" values, which they'd expressed in terms of experimental physics. I didn't want to bore them further by explaining how accuracy and precision differ when using micrometers. NebY (talk) 18:25, 1 April 2015 (UTC)

Accuracy = (trueness, precision) reloaded

Hi. Could someone please add a source/reference for the first definition of accuracy given? It might be an outdated one, but if there is no source at all, I might be tempted to delete it, since for the second definition there IS a source (the VIM), which would then be the accepted and (only) valid one. --Cms metrology (talk) 18:29, 10 May 2017 (UTC)