Please, no more errors from your laboratory
‘Uncertainty’ is a major metrological concept, with learned tomes written on how to estimate this parameter, not least the Guide to the Expression of Uncertainty in Measurement (GUM). However, misuse of the terms ‘uncertainty’ and ‘error’ comes down simply to a lack of awareness of their definitions and meanings, compounded by the fact that both terms are in general use in the English language and have been hijacked by specific metrological definitions.
Examples of good and bad practice
Bad practice
This new technique is accurate within the limits of our errors ...
... could eliminate errors caused by rapid fluctuation of instrumental mass bias ...
Error bars in Fig. 1 represent the 1 SD uncertainties on the external reproducibility of 6 spot analyses ...
Good practice
This new technique is bias-free within the limits of our precision values obtained under intermediate measurement conditions ...
... could minimise measurement uncertainties caused by rapid fluctuation of instrumental mass bias ...
The range bars associated with data points in Figure 1 represent the repeatability measurement precision (expressed as 1s) of six spot analyses ...
Commentary
The current formal definition of uncertainty (VIM 3) is so full of metrological rectitude (see below) as to be almost uninterpretable by practising geoanalysts. An earlier (and now superseded) definition may be helpful in illuminating this concept:
Uncertainty: An estimate attached to a test result which characterises the range of values within which the true value is asserted to lie (ISO 3534-1: Statistics – Vocabulary and symbols).
So:
• The uncertainty is the ± value that should be attached to all measurements.
• It is recommended here that the ± value should represent the 95% confidence limits (although this must be stated).
• As much effort should be put into the estimation of the uncertainty as into the measurement itself (although this is rarely the case).
• The magnitude of the uncertainty is likely to dictate whether a measurement is fit for its intended purpose.
• The two contributors to the measurement uncertainty are the systematic error component and the random error component.
Systematic errors arise from contributions such as technique bias and unsuspected (and uncorrected) interference effects, and are measured by the bias of the technique. All analysts strive to eliminate bias from their measurements and may erroneously assume it to be absent. In fact, it is quite difficult to characterise the systematic error, although participation in proficiency testing and the comparative analysis of certified reference materials are possible ways of revealing its magnitude.
Random errors arise from statistical variations that affect all analytical measurements and may be estimated by making replicate measurements on a suitable matrix-matched sample under repeatability conditions.
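As a rough illustration of this estimate, the sketch below computes the repeatability precision (1s) of a set of replicate measurements and the resulting standard uncertainty of the mean. The replicate values are invented for the example:

```python
import statistics

# Hypothetical replicate measurements of the same matrix-matched sample,
# made under repeatability conditions (same analyst, instrument and
# procedure over a short timescale). Values are invented for illustration.
replicates = [10.12, 10.08, 10.15, 10.11, 10.09, 10.14]

n = len(replicates)
mean = statistics.mean(replicates)
s = statistics.stdev(replicates)   # sample standard deviation (1s): repeatability precision
u_random = s / n ** 0.5            # standard uncertainty of the mean of n replicates

print(f"mean = {mean:.3f}, s = {s:.4f}, u(random) = {u_random:.4f}")
```

Note that this captures only the random error contribution; any bias common to all six replicates is invisible to the standard deviation.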
In summary, uncertainty, which provides the range within which the ‘true value’ is estimated to lie at a stated confidence level, is a key parameter for use in the interpretation of a measurement and in deciding whether it is fit for purpose. An error is the difference between the value of a measurement and a reference value (such as the certified value of a certified reference material) and is quantified by the bias in the measurement. Two types of error contribute to the uncertainty: systematic errors and random errors. A simple estimate of the random error contribution to uncertainty may be made by making replicate measurements under repeatability conditions (q.v.). Systematic errors are more difficult to estimate because their sources are unsuspected, unidentified or erroneously assumed to be absent.
The preferred metrological way of estimating uncertainty is the so-called ‘bottom-up’ approach, which involves an estimation of the uncertainty contribution (sometimes referred to as the uncertainty budget) of every stage in the analytical process. A more pragmatic ‘top-down’ approach involves the systematic measurement of a range of certified reference materials of relevant matrix, with an assessment of the agreement between measured and certified values. An assessment of performance in a relevant proficiency testing scheme over a period of time also provides information on systematic errors.
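A minimal sketch of the ‘bottom-up’ combination step: following the GUM, standard uncertainty contributions from uncorrelated stages are combined in quadrature, and a coverage factor of k = 2 gives an expanded uncertainty with approximately 95% coverage for a near-normal distribution. The budget entries here are invented for illustration:

```python
import math

# Illustrative uncertainty budget: standard uncertainty contributions from
# each stage of a hypothetical analysis (all values invented).
budget = {
    "weighing": 0.02,
    "calibration": 0.05,
    "instrument drift": 0.03,
    "certified value of reference material": 0.04,
}

# GUM combination: uncorrelated components add in quadrature.
u_combined = math.sqrt(sum(u ** 2 for u in budget.values()))

# Expanded uncertainty: coverage factor k = 2 for ~95% coverage.
U = 2 * u_combined

print(f"combined standard uncertainty u_c = {u_combined:.4f}")
print(f"expanded uncertainty U (k = 2)    = {U:.4f}")
```

The quadrature sum assumes the budget components are independent; correlated contributions would require covariance terms.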
DEFINITIONS
Measurement uncertainty (uncertainty of measurement, uncertainty)
Non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used.
NOTE 1 Measurement uncertainty includes components arising from systematic effects, such as components associated with corrections and the assigned quantity values of measurement standards, as well as the definitional uncertainty. Sometimes estimated systematic effects are not corrected for but, instead, associated measurement uncertainty components are incorporated.
NOTE 2 The parameter may be, for example, a standard deviation called standard measurement uncertainty (or a specified multiple of it), or the half-width of an interval, having a stated coverage probability.
NOTE 3 In general, for a given set of information, it is understood that the measurement uncertainty is associated with a stated quantity value attributed to the measurand. A modification of this value results in a modification of the associated uncertainty. (VIM: 2.26)
Measurement error (error of measurement, error)
Measured quantity value minus a reference quantity value.
NOTE 1 The concept of ‘measurement error’ can be used both a) when there is a single reference quantity value to refer to, which occurs if a calibration is made by means of a measurement standard with a measured quantity value having a negligible measurement uncertainty or if a conventional quantity value is given, in which case the measurement error is known, and b) if a measurand is supposed to be represented by a unique true quantity value or a set of true quantity values of negligible range, in which case the measurement error is not known.
NOTE 2 Measurement error should not be confused with production error or mistake. (VIM: 2.16)
Systematic measurement error (systematic error of measurement, systematic error)
Component of measurement error that in replicate measurements remains constant or varies in a predictable manner.
NOTE 1 A reference quantity value for a systematic measurement error is a true quantity value, or a measured quantity value of a measurement standard of negligible measurement uncertainty, or a conventional quantity value.
NOTE 2 Systematic measurement error, and its causes, can be known or unknown. A correction can be applied to compensate for a known systematic measurement error.
NOTE 3 Systematic measurement error equals measurement error minus random measurement error. (VIM: 2.17)
Measurement bias (bias)
Estimate of a systematic measurement error. (VIM: 2.18)
Random measurement error (random error of measurement, random error)
Component of measurement error that in replicate measurements varies in an unpredictable manner.
NOTE 1 A reference quantity value for a random measurement error is the average that would ensue from an infinite number of replicate measurements of the same measurand.
NOTE 2 Random measurement errors of a set of replicate measurements form a distribution that can be summarized by its expectation, which is generally assumed to be zero, and its variance.
NOTE 3 Random measurement error equals measurement error minus systematic measurement error. (VIM: 2.19)
Confidence level
Confidence intervals are constructed at a confidence level, such as 95%, selected by the user.
What does this mean? It means that if the same population is sampled on numerous occasions and interval estimates are made on each occasion, the resulting intervals would bracket the true population parameter in approximately 95% of the cases. A confidence level stated as 1 − α can be thought of as the inverse of a significance level, α. (http://www.itl.nist.gov/div898/handbook/prc/section1/prc14.htm)
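This frequentist interpretation can be checked by simulation: repeatedly sample from a population with a known mean, construct a 95% confidence interval each time, and count how often the interval brackets the true mean. The sketch below uses invented population parameters:

```python
import random
import statistics

# Simulate the repeated-sampling meaning of a 95% confidence level.
random.seed(1)
true_mean, true_sd = 100.0, 5.0   # invented population parameters
n, trials = 30, 2000
t_crit = 2.045                    # approximate two-sided 95% t quantile, 29 degrees of freedom

covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
    m = statistics.mean(sample)
    half = t_crit * statistics.stdev(sample) / n ** 0.5
    if m - half <= true_mean <= m + half:
        covered += 1

# The fraction of intervals that bracket the true mean should be close to 0.95.
print(f"coverage = {covered / trials:.3f}")
```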
Index to terms
Concept | Metrological terms covered
The ambiguities associated with the use of ‘ppm’ |
What symbols are used to represent the properties of population and sample distributions |
Avoiding the use of the term ‘standards’ when referring to (certified) reference materials or calibrators | Reference material, certified reference material, standard reference material, calibrator, calibration, validation, measurement standard (étalon), verification
Explaining the difference between uncertainty and error | Measurement uncertainty, measurement error, systematic measurement error, measurement bias, random measurement error, confidence level
Distinguishing between repeatability, intermediate precision and reproducibility and discouraging the use of ‘internal precision’ | Measurement precision, repeatability condition of measurement, intermediate precision of measurement, reproducibility condition of measurement, repeatability, intermediate precision, reproducibility
Explaining the difference between accuracy, bias and trueness | Measurement accuracy, measurement trueness, measurement bias