
Publication Reference: EA-4/16 G:2003

EA guidelines on the expression of uncertainty in quantitative testing

PURPOSE

The purpose of this document is to harmonise the evaluation of uncertainties associated with
measurement and test results within EA. To achieve this, recommendations and advice are
given for the evaluation of those uncertainties.


Authorship

The EA Expert group on uncertainty of measurement prepared this document on behalf of
the EA Laboratory Committee.

Official Language

The text may be translated into other languages as required. The English language version
remains the definitive version.

Copyright

The copyright of this text is held by EA. The text may not be copied for resale.

Further information

For further information about this publication, contact your National member of EA. Please
check our website http://www.european-accreditation.org for up-to-date information.

Category: Application documents and Technical Advisory documents for Conformity
Assessment Bodies

EA-4/16 is a guidance document

Date of endorsement: November 2003

Date of implementation: November 2004

Transitional period: one year


CONTENTS

1 INTRODUCTION

2 SCOPE OF APPLICATION

3 POLICY STATEMENT

4 BRIEF SUMMARY OF THE GUM

5 TUTORIAL ON MEASUREMENT AND QUANTITATIVE TESTING

5.1 Requirements

5.2 Specific difficulties of uncertainty evaluation in testing

6 USE OF VALIDATION AND METHOD PERFORMANCE DATA FOR UNCERTAINTY EVALUATION

6.1 Sources of method performance and validation data

6.2 Data accumulated during validation and verification of a test method prior to application
in the testing environment

6.3 Interlaboratory study of test methods performance according to ISO 5725 or equivalent

6.4 Test or measurement process quality control data

6.5 Proficiency testing data

6.6 Significance of uncertainty contributions

6.7 Use of prior study data

7 REPORTING RESULTS OF A QUANTITATIVE TEST

8 STEPWISE IMPLEMENTATION OF THE UNCERTAINTY CONCEPT

9 ADVANTAGES OF UNCERTAINTY EVALUATION FOR TESTING LABORATORIES

10 REFERENCES

11 BIBLIOGRAPHY

12 APPENDIX


1 INTRODUCTION

The Guide to the Expression of Uncertainty in Measurement (GUM) [1] is recognised by
EA as the master document on measurement uncertainty. Therefore, consistency with
the GUM is generally required for specific guidance or recommendations for the
evaluation of measurement uncertainty in any field of application associated with EA
activity.

In general, the GUM is also applicable in testing, although there are decisive differences
between measurement and testing procedures. The very nature of some testing
procedures may make it difficult to apply the GUM strictly. Section 6 provides guidance
on how to proceed in such cases.
Wherever possible accredited testing laboratories are required, when reporting the
uncertainties associated with quantitative results, to do so in accordance with the GUM.
A basic requirement of the GUM is the use of a model for the evaluation of uncertainty.
The model should include all quantities that can contribute significantly to the uncertainty
associated with the test result. There are circumstances, however, where the effort
required to develop a detailed model is unnecessary. In such cases other identified
guidance should be adopted, and other methods, based for example on validation and
method performance data, should be used.
To ensure that clients benefit fully from laboratories' services, accredited testing
laboratories have developed appropriate principles for their collaboration with
clients. Clients have the right to expect that the test reports are factually correct, useful
and comprehensive. Depending on the situation, clients are also interested in quality
features, especially:
- the reliability of the results and a quantitative statement on this reliability, i.e.
  uncertainty
- the level of confidence of a conformity statement about the product that can be
  inferred from the testing result and the associated expanded uncertainty.
Other quality features such as repeatability, intermediate precision, reproducibility,
trueness, robustness and selectivity are also important for the characterisation of the
quality of a test method.

This document does not deal with the use of uncertainty in conformity assessment. In
general, the quality of a test result does not reflect the best achievable or the smallest
uncertainty. Section 2 defines the scope of application of this guide and Section 3
presents a policy statement jointly made by EUROLAB, EURACHEM and EA.
Sections 4, 5 and 6 are tutorial. Section 4 provides a brief summary of the
GUM. Section 5 summarises the existing requirements according to ISO/IEC 17025 [7]
and the strategy for the implementation of uncertainty evaluation. It also addresses
some difficulties associated with uncertainty evaluation in testing. Section 6 explains the
use of validation and method performance data for evaluating uncertainty in testing. EA
requirements on reporting the result of a measurement are given in Section 7. Guidance
on a stepwise implementation of uncertainty in testing is provided in Section 8. The
benefits of elaborating the uncertainty associated with the values obtained in quantitative
testing are indicated in Section 9.


2 SCOPE OF APPLICATION

This document is intended to provide guidance for the evaluation1 of uncertainty in
quantitative testing. Any test involving the determination of a numerical value of a
measurand or a characteristic is called quantitative testing. For the evaluation of
uncertainty in calibration, EA-4/02 [11] should be consulted.

3 POLICY STATEMENT

Extract from ILAC-G17:2002 Introducing the Concept of Uncertainty of Measurement in
Testing in Association with the Application of the Standard ISO/IEC 17025 [15]:
1. The statement of uncertainty of measurement should contain sufficient information for
comparative purposes;
2. The GUM and ISO/IEC 17025 form the basic documents but sector specific interpretations
may be needed;
3. Only uncertainty of measurement in quantitative testing is considered for the time being. A
strategy on handling results from qualitative testing has to be developed by the scientific
community;
4. The basic requirement should be either an estimation of the overall uncertainty, or
identification of the major components followed by an attempt to estimate their size and the
size of the combined uncertainty;
5. The basis for the estimation of uncertainty of measurement is the use of existing
experimental data (quality control charts, validation, round robin tests, PT, CRM,
handbooks etc.);
6. When using a standard test method there are three cases:
- when using a standardised test method which contains guidance on the uncertainty
  evaluation, testing laboratories are not expected to do more than to follow the
  uncertainty evaluation procedure as given in the standard2;
- if a standard gives a typical uncertainty of measurement for test results, laboratories
  are allowed to quote this figure if they can demonstrate full compliance with the test
  method;
- if a standard implicitly includes the uncertainty of measurement in the test results,
  there is no further action necessary2.

Testing laboratories should not be expected to do more than take notice of, and apply the
uncertainty-related information given in the standard, i.e. quote the applicable figure, or
perform the applicable procedure for uncertainty estimation. Standards specifying test
methods should be reviewed concerning estimation and statement of uncertainty of test
results, and revised accordingly by the standards organisation.

7. The required depth of the uncertainty estimations may be different in different technical
fields. Factors to be taken into account include:
- common sense;
- influence of the uncertainty of measurement on the result (appropriateness of the
  determination);
- appropriateness;
- classification of the degree of rigour in the determination of uncertainty of
  measurement.

1 The term evaluation has been used in preference to the term estimation. The former term is more
general and is applicable to different approaches to uncertainty evaluation. This choice is also made
to be consistent with the vocabulary used in the GUM.
2 The laboratories have to demonstrate full compliance with the test methods.


8. In certain cases it can be sufficient to report only the reproducibility;


9. When the estimation of the uncertainty of measurement is limited any report of the
uncertainty should make this clear;
10. There should be no development of new guides where usable guides already exist.

4 BRIEF SUMMARY OF THE GUM

The GUM is based on sound theory and provides a consistent and transferable
evaluation of measurement uncertainty and supports metrological traceability. The
following paragraphs provide a brief interpretation of the basic ideas and concepts.
Three levels in the GUM can be identified. These are basic concepts, recommendations
and evaluation procedures. Consistency requires the basic concepts to be accepted and
the recommendations to be followed. The basic evaluation procedure presented in the
GUM, the law of propagation of uncertainty, applies to linear or linearised models (see
below). It should be applied whenever appropriate, since it is straightforward and easy
to implement. However, for some cases more advanced methods such as the use of
higher-order expansion of the model or the propagation of probability distributions may
be required.
The basic concepts in uncertainty evaluation are:
- the knowledge about any quantity that influences the measurand is in principle
  incomplete and can be expressed by a probability density function (PDF) for the
  values attributable to the quantity based on that knowledge
- the expectation value of that PDF is taken as the best estimate of the value of the
  quantity
- the standard deviation of that PDF is taken as the standard uncertainty
  associated with that estimate
- the PDF is based on knowledge about a quantity that may be inferred from
  - repeated measurements (Type A evaluation)
  - scientific judgement based on all the available information on the possible
    variability of the quantity (Type B evaluation).

This document interprets the GUM as based on:
- a model formulated to account for the interrelation of the input quantities that
  influence the measurand
- corrections included in the model to account for systematic effects; such
  corrections are essential for achieving traceability to stated references (e.g.
  CRMs, reference measurement procedures, SI units)
- the reporting of the result of a measurement that specifies the value and a
  quantitative indication of the quality of that result
- the provision, when required, of an interval about the result of a measurement
  that may be expected to encompass a large fraction of the values that could
  reasonably be attributed to the measurand. This interval, often expressed in
  terms of an expanded uncertainty, is a very suitable quantitative indication of the
  quality of the result. The expanded uncertainty is often expressed as a multiple
  of the standard uncertainty. The multiplying factor is termed the coverage factor
  k (see Section 7).
The evaluation procedure comprises four parts:
- Derivation of the model of the measurement. Because in general this is the most
  difficult part of the evaluation, the use of a cause-and-effect relationship linking the
  input quantities to the measurand is recommended
- The provision of probability density functions (PDFs) for the input quantities to the
  model, given information about these quantities. In many cases in practice, it is
  necessary to specify only the expectation value and standard deviation of each
  PDF, i.e. the best estimate of each quantity and the standard uncertainty
  associated with that estimate
- Propagation of uncertainty. The basic procedure (the law of propagation of
  uncertainty) can be applied to linear or linearised models, but is subject to some
  restrictions. A working group of the Joint Committee for Guides in Metrology
  (JCGM) is preparing guidance for a more general method (the propagation of
  PDFs) that includes the law of propagation of uncertainty as a special case; a
  minimal numerical sketch of this propagation of PDFs follows this list
- Stating the complete result of a measurement by providing the best estimate of
  the value of the measurand, the combined standard uncertainty associated with
  that estimate and an expanded uncertainty (Section 7).
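
The propagation of PDFs mentioned above can be illustrated numerically. The following
minimal Python sketch (not part of the original EA text) propagates assumed input
distributions through a hypothetical additive model by Monte Carlo sampling; the model,
the distributions and all numerical values are placeholders chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 200_000  # number of Monte Carlo trials (illustrative choice)

# Hypothetical additive model Y = X1 + X2 with assumed PDFs for the inputs:
# X1 characterised by repeated observations (Type A, normal),
# X2 by a tolerance interval (Type B, rectangular).
x1 = rng.normal(loc=10.00, scale=0.05, size=n)   # best estimate 10.00, u = 0.05
x2 = rng.uniform(low=-0.10, high=0.10, size=n)   # rectangular, half-width 0.10

y = x1 + x2                                      # propagate the distributions

y_est = y.mean()                                 # best estimate of the measurand
u_y = y.std(ddof=1)                              # standard uncertainty
lo, hi = np.percentile(y, [2.5, 97.5])           # 95% coverage interval
print(f"y = {y_est:.3f}, u(y) = {u_y:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```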

The GUM [1] provides guidance on stating a complete result of a measurement in its
section 7, titled Reporting uncertainty. Section 7 in this document follows the
recommendations of the GUM and provides some more detailed guidance. Note that the
GUM permits the use of either the combined standard uncertainty uc(y) or the expanded
uncertainty U(y), i.e. the half width of an interval having a stated level of confidence, as a
measure of uncertainty. However, if the expanded uncertainty is used, one must state
the coverage factor k, which is equal to the value of U(y)/uc(y).
For the evaluation of the uncertainty associated with the measurand Y one needs only to
know:
- the model, Y = f(X1,..., XN),
- the best estimates xi of all input quantities Xi, and
- the uncertainties u(xi) associated with the xi and the correlation coefficients r(xi,xj)
  associated with the pairs xi and xj.
The best estimate xi is the expected value of the PDF for Xi, u(xi) is the standard
deviation of that PDF and r(xi,xj) is the ratio of the covariance between xi and xj and the
product of the standard deviations.
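
For reference (the formula is given in the GUM [1] itself rather than reproduced in this
document), the law of propagation of uncertainty combines these elements as

```latex
u_c^{2}(y) = \sum_{i=1}^{N} \left(\frac{\partial f}{\partial x_i}\right)^{2} u^{2}(x_i)
  + 2 \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \frac{\partial f}{\partial x_i}
    \frac{\partial f}{\partial x_j}\, u(x_i)\, u(x_j)\, r(x_i, x_j),
```

with the partial derivatives (sensitivity coefficients) evaluated at the best estimates xi.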
To state the combined standard uncertainty uc(y) associated with the measurement
result y, no further knowledge of the PDF is required. To state the half width of an
interval having a stated level of confidence, i.e. an expanded uncertainty, it is necessary
to know the PDF. This requires more knowledge since the two parameters, expectation
value and standard deviation, do not fully specify a PDF unless it is known to be
Gaussian.

Section 7 provides guidance on obtaining the expanded uncertainty in those cases
where a Gaussian PDF is not assumed for the measurand Y.

5 TUTORIAL ON MEASUREMENT AND QUANTITATIVE TESTING

5.1 Requirements
In principle, the standard ISO/IEC 17025 does not include new requirements
concerning measurement uncertainty but it deals with this subject in more detail than
the previous version of this standard:

5.4.6 Estimation of uncertainty of measurement

5.4.6.1 A calibration laboratory, or a testing laboratory performing its own calibrations,
shall have and shall apply a procedure to estimate the uncertainty of
measurement for all calibrations and types of calibrations.

5.4.6.2 Testing laboratories shall have and shall apply procedures for estimating
uncertainty of measurement. In certain cases the nature of the test method
may preclude rigorous, metrologically and statistically valid, calculation of
uncertainty of measurement. In these cases the laboratory shall at least
attempt to identify all the components of uncertainty and make a reasonable
estimation, and shall ensure that the form of reporting of the result does not
give a wrong impression of the uncertainty. Reasonable estimation shall be
based on knowledge of the performance of the method and on the
measurement scope and shall make use of, for example, previous experience
and validation data.

NOTE 1 The degree of rigour needed in an estimation of uncertainty of
measurement depends on factors such as:
- the requirements of the test method;
- the requirements of the client;
- the existence of narrow limits on which decisions on conformance to a
  specification are based.

NOTE 2 In those cases where a well-recognized test method specifies limits to
the values of the major sources of uncertainty of measurement and specifies the form
of presentation of calculated results, the laboratory is considered to have satisfied this
clause by following the test method and reporting instructions (see 5.10).

5.4.6.3 When estimating the uncertainty of measurement, all uncertainty components,
which are of importance in the given situation shall be taken into account
using appropriate methods of analysis.

NOTE 1 Sources contributing to the uncertainty include, but are not necessarily
limited to, the reference standards and reference materials used, methods and
equipment used, environmental conditions, properties and conditions of the item being
tested or calibrated, and the operator.

NOTE 2 The predicted long-term behaviour of the tested and/or calibrated item
is not normally taken into account when estimating the measurement uncertainty.

NOTE 3 For further information, see ISO 5725 and the Guide to the Expression
of Uncertainty in Measurement (see bibliography).

5.2 Specific difficulties of uncertainty evaluation in testing


The terms test result and measurement result correspond to two well-defined
concepts. In metrology the word measurand as defined in VIM [2, clause 2.6] is used
and in testing the word characteristic as defined in ISO 3534-2 [6] is preferred.


Measurand (VIM 2.6): particular quantity subject to measurement.
[(Measurable) quantity (VIM 1.1): attribute of a phenomenon, body or a substance that
may be distinguished qualitatively and determined quantitatively.]

Characteristic (ISO 3534): a property which helps to differentiate between items of a
given population.

The difference between the terminology used in measurement and testing activities
will be more clearly seen upon comparing the definitions of the two operations:

Measurement (VIM 2.1): set of operations having the object of determining a value of a
quantity.

Test (ISO/IEC Guide 2 [3]): technical operation that consists of the determination of one
or more characteristics of a given product, process or service according to a specified
procedure.

A measurand as defined by the VIM is therefore a particular case of a characteristic as
defined by ISO 3534-2, in the sense that a well-defined characteristic can be regarded as
a measurand. In particular, a quantitative characteristic is a quantity in the VIM
definition, and in the course of a test the value of that quantity will be determined by
measurement. It follows that the properties of measurement results and quantitative
test results can be expected to be identical. Further, in both cases an appropriate
definition of the measurand or of the characteristic is essential. Here, appropriate
means sufficiently detailed and related to the process of measuring or testing and
sometimes also related to the further use of the result.

There are, however, important differences in the practice of measurement (as seen in
calibration and in testing), and these affect the practice of uncertainty evaluation:

A measurement process typically yields a result that in principle is independent of the
measurement method apart from different uncertainties associated with different
methods. For example, temperature values indicated by a mercury thermometer and a
platinum resistance thermometer can be expected to be similar (to an extent dictated
by their associated uncertainties), but the uncertainty associated with the former value
will be much larger than that associated with the latter.

A test result typically depends on the method and on the specific procedure used to
determine the characteristic, sometimes strongly. In general, different test methods
may yield different results, because a characteristic is not necessarily a well-defined
measurand.

In measurement procedures, environmental and operational conditions will either be
maintained at standardised values or be measured in order to apply correction factors
and to express the result in terms of standardised conditions. For example, in
dimensional measurements the temperatures of workpieces will be measured in order
to correct the result for the effects of thermal expansion, and in gas flow measurement
pressure and temperature will either be maintained at specified values or measured
and used as a basis for correction.


Test methods are often determined by conventions. These conventions reflect different
concerns or aims:

the test must be representative of the real conditions of use of the product

the test conditions are often a compromise between extreme conditions of use

the test conditions must be easily reproducible in a laboratory

individual test conditions should control the variability in the test result.

To achieve the last aim, a nominal value and a tolerance for the relevant conditions are
defined. The test temperature is often specified, e.g. 38.0 °C ± 0.5 °C. However, not all
conditions can be controlled. This lack of knowledge introduces variability to the
results. A desirable feature of a test method is to control such variability.

For tests, an indicator (such as a physical quantity) is used to express the test results.
For instance, the ignition time is often used as an indicator for a burning test. The
uncertainty associated with the measurement of the ignition time adds variability to the
test results. However, this contribution to the variability is generally dwarfed by
contributions inherent in the test method and uncontrolled conditions, although this
aspect should be confirmed.

Testing laboratories should scrutinise all elements of the test method and the
conditions prevailing during its application in order to evaluate the uncertainty
associated with a test result.

In principle, the mathematical model describing the test procedure can be established
as proposed in the GUM. However, the derivation of the model may be infeasible for
economic or other reasons. In such cases alternative approaches may be used. In
particular, the major sources of variability can often be assessed by interlaboratory
studies as stated in ISO 5725 [8], which provides estimates of repeatability,
reproducibility and (sometimes) trueness of the method.

Despite the differences in terminology above, for the purposes of this document, a
quantitative test result is considered to be a measurement result in the sense used in
the GUM. The important distinction is that a comprehensive mathematical model, which
describes all the effects on the measurand, is less likely to be available in testing. The
evaluation of uncertainty in testing may therefore require the use of validation and
method performance studies as described in section 6.

6 USE OF VALIDATION AND METHOD PERFORMANCE DATA FOR UNCERTAINTY EVALUATION

6.1 Sources of method performance and validation data


The observed performance characteristics of test methods are often essential in
evaluating the uncertainty associated with the results (Section 4). This is particularly
true where the results are subject to important and unpredictable effects, which can
best be considered as random effects, or where the development of a comprehensive
mathematical model is impractical.
Method performance data also very frequently includes the effect of several sources of
uncertainty simultaneously and its use may accordingly simplify considerably the
process of uncertainty evaluation. Information on test method performance is typically
obtained from:
- data accumulated during validation and verification of a test method prior to its
  application in the testing environment
- interlaboratory studies according to ISO 5725
- accumulated quality control (that is, check sample) data
- proficiency testing schemes as described in EA-3/04 [10].
This section provides general guidance on the application of data from each of these
sources.
This section provides general guidance on the application of data from each of these
sources.

6.2 Data accumulated during validation and verification of a test method prior to
application in the testing environment
6.2.1 In practice, the fitness for purpose of test methods applied for routine testing is
frequently checked through method validation and verification studies. The data so
accumulated can inform the evaluation of uncertainty for test methods. Validation
studies for quantitative test methods typically determine some or all of the following
parameters:

Precision. Studies within a laboratory will obtain precision under repeatability conditions
and intermediate conditions, ideally over time and across different operators and types
of test item. The observed precision of a testing procedure is an essential component
of overall uncertainty, whether determined by a combination of individual variances or
by a study of the complete method in operation.

Bias. The bias of a test method is usually determined by studying relevant reference
materials or test samples. The aim is typically to identify and eliminate significant bias.
In general, the uncertainty associated with the determination of the bias is an important
component of overall uncertainty (a common formulation is sketched after this list of
parameters).

Linearity. Linearity is an important property of methods used to make measurements
over a range of values. Correction for significant non-linearity is often accomplished by
the use of non-linear calibration functions. Alternatively, the effect is avoided by the
choice of a restricted operating range. Any remaining deviations from linearity are
normally sufficiently accounted for by the use of overall precision data. If these
deviations are negligible compared with the uncertainties associated with calibration,
additional uncertainty evaluation is not required.

Capability of detection. The lower limit of operability of a test method may be
established. The value obtained is not directly relevant to the evaluation of uncertainty.
The uncertainty in the region at or near this lower limit is likely to be significant
compared with the value of the result, leading to practical difficulties in assessing and
reporting uncertainty. Reference to appropriate documentation on the treatment and
reporting of results in this region is accordingly recommended [13].

Selectivity and specificity. These terms relate to the ability of a test method to respond
to the appropriate measurand in the presence of interfering influences, and are
particularly important in chemical testing. They are, however, qualitative concepts and
do not directly provide uncertainty information, though the influence of interfering
effects may in principle be used in uncertainty evaluation [12].

Robustness or ruggedness. Many method development or validation protocols require
that the sensitivity to particular parameters be investigated directly. Ruggedness data
can therefore provide information on the effect of important parameters, and is
particularly important in establishing whether a given effect is significant [13].
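
As an illustration of the bias component referred to above (a common formulation found,
for example, in the EURACHEM/CITAC guide [12], given here as a sketch rather than a
requirement of this document): when bias is estimated from n replicate measurements of
a reference material whose certified value carries a standard uncertainty u_ref, and the
replicates have standard deviation s_obs, the standard uncertainty associated with the
estimated bias may be written as

```latex
u(\text{bias}) = \sqrt{\frac{s_{\text{obs}}^{2}}{n} + u_{\text{ref}}^{2}} .
```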


6.2.2 Experimental studies of method performance should be carried out carefully. In
particular:
- Representativeness is essential: as far as possible, studies should be conducted to
  provide a realistic survey of the number and range of effects operating during
  normal use of the method, as well as covering the range of values and sample
  types within the scope of the method. Estimates of precision covering a wide
  variety of sources of variation are particularly appropriate in this respect.
- Where factors are suspected to interact, the effect of interaction should be taken
  into account. This may be achieved either by ensuring random selection from
  different levels of interacting parameters, or by careful systematic design to obtain
  both variance and covariance information.
- In carrying out studies of overall bias, it is important that the reference materials
  and values are relevant to the materials under routine test.

Careful experimental design is accordingly invaluable in ensuring that all relevant
factors are duly considered and properly evaluated.

6.2.3 The general principles of applying validation and performance data to uncertainty
evaluation are similar to those applicable to the use of performance data (above).
However, it is likely that the performance data available will adequately cover fewer
contributions, and correspondingly further supplementary estimates will be required. A
typical procedure, illustrated by the sketch after this list, is:
- Compile a list of relevant sources of uncertainty. It is usually convenient to include
  any measured quantities held constant during a test, and to incorporate
  appropriate precision terms to account for the variability of individual
  measurements or the test method as a whole. A cause and effect diagram [13] is a
  very convenient way to summarise the uncertainty sources, showing how they
  relate to each other and indicating their influence on the uncertainty associated
  with the result
- Assemble the available method performance and calibration data
- Check to see which sources of uncertainty are adequately accounted for by the
  available data. It is not generally necessary to obtain separately the effects of all
  contributions; where several effects contribute to an overall performance figure, all
  such effects may be considered to be accounted for. Precision data covering a
  wide variety of sources of variation are therefore particularly useful as they will
  often encompass many effects simultaneously (but note that in general precision
  data alone are insufficient unless all other factors are assessed and shown to be
  negligible)
- For any sources of uncertainty not adequately covered by existing data, either
  seek additional information from the literature or existing data (certificates,
  equipment specifications, etc.), or plan experiments to obtain the required
  additional data.
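
A minimal sketch (not part of the original text) of how such a budget is commonly
combined once each remaining independent contribution has been expressed as a
standard uncertainty; the contribution names and values below are hypothetical
placeholders.

```python
import math

# Hypothetical uncertainty budget: each entry is an independent contribution
# already expressed as a standard uncertainty in the units of the result.
budget = {
    "precision (intermediate conditions)": 0.12,
    "bias correction": 0.05,
    "balance calibration": 0.02,
    "sample inhomogeneity": 0.08,
}

u_c = math.sqrt(sum(u**2 for u in budget.values()))  # combine in quadrature
U = 2 * u_c                                          # expanded uncertainty for k = 2

print(f"combined standard uncertainty u_c = {u_c:.3f}")
print(f"expanded uncertainty U (k = 2)    = {U:.3f}")
```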

6.3 Interlaboratory study of test methods performance according to ISO 5725 or
equivalent
6.3.1 Interlaboratory studies according to ISO 5725 typically provide the repeatability
standard deviation sr and reproducibility standard deviation sR (both as defined in ISO
3534-1 [5]) and may also provide an estimate of trueness (measured as bias with
respect to a known reference value). The application of these data to the evaluation of
uncertainty in testing is discussed in detail in ISO TS 21748 [9]. The general principles
are:


i) Establishing the relevance of method performance data to measurement results
from a particular measurement process. Section 6.2 of this document provides
details of the measures required.
ii) Establishing the relevance of method performance data to the test item by
identifying differences in sample treatment, sampling, or expected level of
response between the laboratory's test item and those test items examined in a
collaborative study. An adjustment of the reproducibility standard deviation to
take account of, for example, changes in precision with level of response may be
necessary.
iii) Identifying and evaluating the additional uncertainties associated with factors not
adequately covered by the interlaboratory study (see 6.3.2).
iv) Using the principles of the GUM to combine all the significant contributions to
uncertainty, including the reproducibility standard deviation (adjusted if
necessary), any uncertainty associated with the laboratory component of bias for
the test method, and uncertainties arising from additional effects identified in iii).

These principles are applicable to test methods that have been subjected to
interlaboratory study. For these cases, reference to ISO TS 21748 is recommended for
details of the relevant procedure. The EURACHEM/CITAC guide [12] also gives
guidance on the application of interlaboratory study data in chemical testing.

6.3.2 The additional sources (6.3.1 iii)) that may need particular consideration are:

- Sampling. Collaborative studies rarely include a sampling step. If the method used
  in-house involves sub-sampling, or the measurand is a bulk property of a small
  sample, the effects of sampling should be investigated and their effects included
- Pre-treatment. In most studies, samples are homogenised, and may additionally
  be stabilised, before distribution. It may be necessary to investigate and add the
  effects of the particular pre-treatment procedures applied in-house
- Method bias. Method bias is often examined prior to or during interlaboratory
  study, where possible by comparison with reference methods or materials. Where
  the bias itself, the standard uncertainties associated with the reference values
  used, and the standard uncertainty associated with the estimated bias are all small
  compared with the reproducibility standard deviation, no additional allowance need
  be made for the uncertainty associated with method bias. Otherwise, it will be
  necessary to make such allowance.
- Variation in conditions. Laboratories participating in a study may tend to steer their
  results towards the means of the ranges of the experimental conditions, resulting
  in underestimates of the ranges of results possible within the method definition.
  Where such effects have been investigated and shown to be insignificant across
  their full permitted range, however, no further allowance is required.
- Changes in sample type. The uncertainty arising from samples with properties
  outside the range covered by the study will need to be considered.

6.4 Test or measurement process quality control data


6.4.1 Many test or measurement processes are subject to control checks based on periodic
measurement of a stable, but otherwise typical, test item to identify significant
deviations from normal operation. Data obtained in this way over a long period of time
provide a valuable source of data for uncertainty evaluation. The standard deviation of
such a data set provides a combined estimate of variability arising from many potential
sources of variation. It follows that if applied in the same way as method performance
data (above), the standard deviation provides the basis for an uncertainty evaluation
that immediately accounts for the majority of the variability that would otherwise require
evaluation from separate effects.


6.4.2 Quality control (QC) data of this kind will not generally include sub-sampling, the effect
of differences between test items, the effects of changes in the level of response, or
inhomogeneity in test items. QC data should accordingly be applied with caution to
similar materials, and with due allowance for additional effects that may reasonably
apply.

6.4.3 Data points from QC data that gave rise to rejection of measurement and test results
and to corrective action should normally be eliminated from the data set before
calculating the standard deviation.

6.5 Proficiency testing data


6.5.1 Proficiency tests are intended to check periodically the overall performance of a
laboratory, and are best used for that purpose (EA-3/04 [10] and references cited
therein). A laboratory's results from its participation in proficiency tests can accordingly
be used to check the evaluated uncertainty, since that uncertainty should be
compatible with the spread of results obtained by that laboratory over a number of
proficiency test rounds.

6.5.2 In general, proficiency tests are not carried out sufficiently frequently to provide good
estimates of the performance of an individual laboratory's implementation of a test
method. Additionally, the nature of the test items circulated will typically vary, as will the
expected result. It is thus difficult to accumulate representative data for well-
characterised test items. Furthermore, many schemes use consensus values to assess
laboratory performance, which occasionally lead to apparently anomalous results for
individual laboratories. Their use for the evaluation of uncertainty is accordingly limited.
However, in the special case where
- the types of test items used in the scheme are appropriate to the types tested
  routinely,
- the assigned values in each round are traceable to appropriate reference values,
  and
- the uncertainty associated with the assigned value is small compared with the
  observed spread of results,
the dispersion of the differences between the reported values and the assigned values
obtained in repeated rounds provides a basis for an evaluation of the uncertainty
arising from those parts of the measurement procedure within the scope of the
scheme.
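
A minimal sketch (with hypothetical numbers, not taken from any real scheme) of the
evaluation described above: the dispersion of the differences between reported and
assigned values over repeated rounds is used as a preliminary estimate of the
uncertainty arising from the parts of the procedure exercised by the scheme.

```python
import numpy as np

# Hypothetical proficiency-test history: (reported value, assigned value) per round.
rounds = [
    (10.2, 10.0),
    ( 9.7, 10.1),
    (10.4, 10.3),
    ( 9.9, 10.0),
    (10.6, 10.2),
]

d = np.array([reported - assigned for reported, assigned in rounds])

# The root-mean-square deviation reflects both the spread and any systematic
# offset of the laboratory's results relative to the assigned values.
u_prelim = np.sqrt(np.mean(d**2))
print(f"mean deviation = {d.mean():+.3f}, preliminary u = {u_prelim:.3f}")
```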

6.5.3 Systematic deviation from traceable assigned values and any other sources of
uncertainty (such as those noted in connection with the use of interlaboratory study
data obtained in accordance with ISO 5725) must also be taken into account.

6.5.4 It is recognised that the above approach is relatively restricted. Recent guidance from
EUROLAB [14] suggests that proficiency testing data may have wider applicability in
providing a preliminary estimate of uncertainty in some circumstances.

6.6 Significance of uncertainty contributions


6.6.1 Not all the uncertainty sources identified during an uncertainty evaluation will make a
significant contribution to the combined uncertainty; indeed, in practice it is likely that
only a small number will. Those few clearly need careful study to obtain reliable
estimates of their contributions. A preliminary estimate of the contribution of each
component or combination of components to the uncertainty should therefore be made,
by judgement if necessary, and attention paid to those that are most significant.

6.6.2 In deciding whether an uncertainty contribution can be neglected, it is important to
consider:


- The relative sizes of the largest and the smaller contributions. For example, a
  contribution that is one fifth of the largest contribution will contribute at most 2% of
  the combined standard uncertainty (see the short check after this list)
- The effect on the reported uncertainty. It is imprudent to make approximations that
  materially affect the reported uncertainty or the interpretation of the result
- The degree of rigour justified for the uncertainty evaluation, taking into account the
  client and regulatory and other external requirements identified, for example,
  during contract review.
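
The 2% figure in the first point follows from combining contributions in quadrature:

```latex
\sqrt{u_1^{2} + (u_1/5)^{2}} = u_1\sqrt{1 + 1/25} \approx 1.0198\,u_1 ,
```

so a contribution one fifth of the size of the largest one increases the combined standard
uncertainty by only about 2%.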

6.7 Use of prior study data


In order to use the results of prior studies of the method to evaluate the uncertainty, it is
necessary to demonstrate the validity of applying prior study results. Typically, this will
consist of:
- Demonstration that a precision comparable to that obtained previously can be
  achieved
- Demonstration that the use of the bias data obtained previously is justified,
  typically through the determination of bias on relevant reference materials (see, for
  example, ISO Guide 33 [4]), by satisfactory performance on relevant proficiency
  schemes, or other interlaboratory comparisons
- Continued performance within statistical control as shown by regular QC sample
  results and the implementation of effective analytical quality assurance
  procedures.

Where the conditions above are met, and the method is operated within its scope and
field of application, it is normally acceptable to apply the data from prior studies
(including validation studies) directly to uncertainty evaluations in the laboratory in
question.

For methods operating within their defined scope, when the reconciliation stage shows
that all the identified sources have been included in the validation study or when the
contributions from any remaining sources have been shown to be negligible, the
reproducibility standard deviation sR may be used as the combined standard
uncertainty.

If there are any significant sources of uncertainty that are not included in the validation
study their contribution is evaluated separately and combined with sR to obtain the
overall uncertainty.
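
Written as a formula (a sketch consistent with the approach of ISO/TS 21748 [9]): if
u1, u2, ..., un denote the standard uncertainties of the significant contributions not
covered by the validation study, the combined standard uncertainty becomes

```latex
u_c(y) = \sqrt{s_R^{2} + \sum_{i=1}^{n} u_i^{2}} .
```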

7 REPORTING RESULTS OF A QUANTITATIVE TEST

A quantitative test always yields a value, which should preferably be expressed in SI
units. The guidance in this section should be followed if an associated uncertainty is
also to be reported (see ISO/IEC 17025 [7]).

7.1 Once the expanded uncertainty has been calculated for a specified level of confidence
(typically 95%), the test result y and the expanded uncertainty U should be reported as
y ± U and accompanied by a statement of confidence. This statement will depend on
the nature of the probability distribution; some examples are presented below.

All clauses below that relate to a 95% level of confidence require modification if a
different level of confidence is required.


7.1.1 Normal distribution

It is generally safe to assume a normal distribution from the viewpoint of providing a
coverage interval at the 95% level of confidence when the model is linear in the input
quantities and one of the following three possibilities applies:
1. There is a single, dominant contribution to the uncertainty, which arises from a
normal distribution, and the corresponding degrees of freedom exceed 30.
2. The three largest uncertainty contributions are of comparable size.
3. The three largest contributions are of comparable size, and the effective degrees
of freedom3 exceed 30.
Under these circumstances the following statement can be made:

The reported expanded uncertainty is based on a standard uncertainty multiplied by a
coverage factor k = 2, which for a normal distribution provides a level of confidence of
approximately 95%.

Note: Normality should NOT be assumed if the measurement model is significantly
non-linear in the region of interest, particularly if uncertainties in input values are large
compared with the input values themselves. Under these circumstances, reference to
more advanced texts, e.g. the GUM, is necessary.

3 The effective degrees of freedom can be estimated by one of the following:
- taking the degrees of freedom of a single, dominant contribution
- using the Welch-Satterthwaite formula given in the GUM and EA-4/02
- (approximately) taking the number of degrees of freedom for the largest contribution.
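
For reference, the Welch-Satterthwaite formula mentioned in the footnote is, as given in
the GUM,

```latex
\nu_{\text{eff}} = \frac{u_c^{4}(y)}{\displaystyle \sum_{i=1}^{N} \frac{u_i^{4}(y)}{\nu_i}} ,
```

where the ui(y) are the individual contributions to the combined standard uncertainty and
the νi their respective degrees of freedom.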

7.1.2 t-distribution

The t-distribution may be assumed if the conditions for normality (above) apply but the
degrees of freedom is less than 30. Under these circumstances the following statement
(in which the appropriate numerical values are substituted for XX and YY) can be
made:

The reported expanded uncertainty is based on a standard uncertainty multiplied by a
coverage factor k = XX, which for a t-distribution with νeff = YY effective degrees of
freedom provides a level of confidence of approximately 95%.
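
As a minimal sketch (not part of the original text), the coverage factor for a given number
of effective degrees of freedom can be read from a table of the t-distribution or computed,
for example with SciPy; the library choice and the value of νeff below are illustrative
assumptions.

```python
from scipy.stats import t

nu_eff = 8                    # effective degrees of freedom (example value)
k = t.ppf(0.975, df=nu_eff)   # two-sided 95% coverage factor of the t-distribution
print(f"k = {k:.2f} for nu_eff = {nu_eff}")   # prints: k = 2.31 for nu_eff = 8
```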

7.1.3 Dominant (non-normal) contributions in a Type B evaluation of uncertainty

If the uncertainty associated with the measurement result is dominated by a
contribution resulting from an input quantity that is non-normal and that contribution is
so large that a normal or t-distribution is not obtained when the quantity is convolved
with the remaining input quantities, special consideration should be given to obtaining a
coverage factor that will provide a level of confidence of approximately 95%. For an
additive model, i.e. when the measurand can be expressed as a linear combination of
the input quantities, the PDF for the measurand can be obtained by convolving, i.e
propagating, the PDFs for the input quantities. Even in this case, and almost always
when the model is non-linear, the mathematics required can, however, be difficult. A
practical approach is to make the assumption that the resulting distribution will be little
different in form from that of the dominant component.

In many cases a rectangular distribution will be assigned to a dominant non-normal
input quantity. In such a case a rectangular distribution can then be assigned to the
measurand. An expanded uncertainty at the 95% level of confidence can be obtained
by multiplying the combined uncertainty by 0.95 × √3 ≈ 1.65. Under these circumstances
the following statement can be made:

The reported expanded uncertainty is dominated by a single component of uncertainty
for which a rectangular probability distribution has been assumed. A coverage factor of
1.65 (= 0.95 × √3) has therefore been used in order to provide a level of confidence of
approximately 95%.
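
The factor 1.65 follows from the properties of the rectangular distribution: for a half-width
a the standard uncertainty is u = a/√3, and the symmetric interval containing 95% of the
distribution has half-width 0.95 a, so that

```latex
0.95\,a = 0.95\,\sqrt{3}\;u \approx 1.65\,u .
```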

7.2 For the purposes of this document the term approximately is interpreted
as meaning effectively or for most practical purposes.

7.3 Reference should also be made to the method by which the uncertainties
have been evaluated.

7.4 In some testing situations it may not be possible to evaluate metrologically sound
numerical values for each component of uncertainty; in such circumstances the means
of reporting should be such that this is clear. For example, if the uncertainty is based
only on repeatability, without consideration being given to other factors, then this should
be stated.

7.5 Unless sampling uncertainty has been fully taken into account, it should also be made
clear that the result and the associated uncertainty apply to the tested sample only and
do not apply to any batch from which the sample may have been taken.

7.6 The number of decimal digits in a reported uncertainty should always reflect practical
measurement capability. In view of the process for evaluating uncertainties, it is rarely
justified to report more than two significant digits. Often a single significant digit is
appropriate. Similarly, the numerical value of the result should be rounded so that the
last decimal digit corresponds to the last digit of the uncertainty. The normal rules of
rounding can be applied in both cases.

For example, if a result of 123.456 units is obtained, and an uncertainty of 2.27 units
has resulted from the evaluation, the use of two significant digits for the uncertainty
would give the rounded values 123.5 units ± 2.3 units.
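
A minimal sketch (not part of the original text) of this rounding rule; the helper name and
its arguments are hypothetical, chosen only for illustration.

```python
from math import floor, log10

def round_result(y, u, sig_digits=2):
    """Round the uncertainty u to sig_digits significant digits and y to match."""
    exponent = floor(log10(abs(u)))        # order of magnitude of u
    decimals = sig_digits - 1 - exponent   # decimal places to keep (may be negative)
    return round(y, decimals), round(u, decimals)

print(round_result(123.456, 2.27))         # -> (123.5, 2.3), as in the example above
```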

7.7 The test result can usually be expressed as y ± U. However there may be situations
where the upper and lower bounds are different; for example if cosine errors are
involved. If such differences are small then the most practical approach is to report the
expanded uncertainty as the larger of the two. However, if there is a significant
difference between the upper and lower values they should be evaluated and reported
separately. This may be achieved, for example, by determining the shortest coverage
interval at the desired level of confidence in the PDF for the measurand.

For example, for an uncertainty of +6.5 units and -6.7 units, for practical purposes
±6.7 units could simply be stated. However, if the values were +6.5 units and -9.8
units they should be separated, e.g. +6.5 units; -9.8 units.

8 STEPWISE IMPLEMENTATION OF THE UNCERTAINTY CONCEPT

It is recognised that the knowledge of mathematical modelling and the determination of
the various influence factors is generally different in different testing fields.


This aspect has to be taken into account when implementing ISO/IEC 17025.
Laboratories cannot in general be expected to initiate scientific research to assess the
uncertainties associated with their measurements and tests. The respective
requirements of the accreditation bodies should be adapted according to the current
state of knowledge in the respective testing field.

If a mathematical model as a basis for the evaluation of measurement uncertainty is
not available, laboratories can
- list those quantities and parameters that are expected to have a significant
  influence on the uncertainty and estimate their contribution to the overall
  uncertainty
- use data concerning repeatability or reproducibility that might be available from
  validation, internal quality assurance or interlaboratory comparisons
- refer to data or procedures given in the relevant testing standards
- combine the approaches mentioned above.

Laboratories should strive to refine their uncertainty evaluations, where appropriate,
taking into account for instance
- recent data from internal quality assurance in order to broaden the statistical
  basis for the uncertainty evaluation
- new data from the participation in interlaboratory comparisons or proficiency tests
- revisions of the relevant standards
- specific guidance documents for the respective testing field.

Consequently, accreditation bodies will be able to redefine their requirements
concerning measurement uncertainty according to the development of knowledge in
the field. In the long term differences in the requirements for different sectors on the
manner in which measurement uncertainty is evaluated will diminish. Laboratories
should, however, select the most suitable approach for their area and evaluate
measurement uncertainty to the extent appropriate to the intended use.

9 ADVANTAGES OF UNCERTAINTY EVALUATION FOR TESTING LABORATORIES

There are several advantages linked with the evaluation of measurement uncertainty in
testing, although the task can be time-consuming.

- Measurement uncertainty assists in a quantitative manner in important issues such
  as risk control and the credibility of test results
- A statement of measurement uncertainty can represent a direct competitive
  advantage by adding value and meaning to the result
- The knowledge of quantitative effects of single quantities on the test result
  improves the reliability of the test procedure. Corrective measures may be
  implemented more efficiently and hence become more cost-effective
- The evaluation of measurement uncertainty provides starting points for optimising
  the test procedures through a better understanding of the test process
- Clients such as product certification bodies need information on the uncertainty
  associated with results when stating compliance with specifications
- Calibration costs can be reduced if it can be shown from the evaluation that
  particular influence quantities do not substantially contribute to the uncertainty.

10 REFERENCES

[1] Guide to the Expression of Uncertainty in Measurement. BIPM, IEC, IFCC, ISO,
IUPAC, IUPAP, OIML. International Organization for Standardization, Printed in
Switzerland, ISBN 92-67-10188-9, First Edition, 1993. Corrected and reprinted 1995.

[2] International Vocabulary of Basic and General Terms in Metrology (VIM).
International Organization for Standardization, 1993 (under revision).

[3] ISO/IEC Guide 2:1996, Standardization and related activities - General vocabulary

[4] ISO Guide 33:2000, Uses of certified reference materials

[5] ISO 3534-1:1994, Statistics - Vocabulary and symbols - Part 1: Probability and
general statistical terms

[6] ISO 3534-2:1994, Statistics - Vocabulary and symbols - Part 2: Statistical quality
control

[7] ISO/IEC 17025:1999, General requirements for the competence of testing and
calibration laboratories

[8] ISO 5725:1994, Accuracy (trueness and precision) of measurement methods
and results

[9] ISO/TS 21748:2002, Guide to the use of repeatability, reproducibility and trueness
estimates in measurement uncertainty evaluation

[10] EA-3/04, Use of Proficiency Testing as a Tool for Accreditation in Testing (with
EUROLAB and EURACHEM), August 2001

[11] EA-4/02, Expression of the Uncertainty of Measurements in Calibration (including
supplements 1 and 2 to EA-4/02) (previously EAL-R2), December 1999

[12] EURACHEM/CITAC Guide CG 4, Quantifying Uncertainty in Analytical
Measurement (second edition), 2000

[13] EURACHEM, The Fitness for Purpose of Analytical Methods, ISBN 0-948926-12-0,
1998

[14] EUROLAB, Technical Report No. 1/2002, June 2002

[15] ILAC-G17:2002, Introducing the Concept of Uncertainty of Measurement in Testing
in Association with the Application of the Standard ISO/IEC 17025, November 2002


11 BIBLIOGRAPHY

AFNOR FD X 07-021, Métrologie et application de la statistique - Aide à la démarche
pour l'évaluation et l'utilisation de l'incertitude des mesures et des résultats d'essais
(1999) (Help with the process for the evaluation and the use of the uncertainty of
measurements and test results)

S L R Ellison, V Barwick, Accred. Qual. Assur. (1998) 3, 101-105.

12 APPENDIX

Inventory of documents (normative and non-normative, existing or in the process of
drafting) on measurement uncertainty (document established by CEN/WG 122 and the
EA expert group on uncertainty; synthesis prepared by Bernd Siebert).


Appendix: Alphabetic list of documents


CEAL Measurement uncertainty for environmental laboratories
CEN 12282 In vitro diagnostic medical devices- Measurement of quantities in samples of biological origin Description of
reference materials
CEN ISO 18153 In vitro diagnostic medical devices- Measurement of quantities in samples of biological origin Metrological traceability of
values for catalytic concentration of enzymes assigned to calibration and control materials.
CEN/ISO 17511 In vitro diagnostic medical devices- Measurement of quantities in samples of biological origin Metrological
traceability of values assigned to calibration and control materials.
CLAS Reference Document 5 General Guidelines for Evaluating and Expressing the Uncertainty of Accredited Laboratories' Measurement Results
DIN (DRAFT) 32646 Chemische Analyse - Erfassungs- und Bestimmungsgrenze als Verfahrenskenngrößen - Ermittlung in einem
Ringversuch unter Vergleichsbedingungen - Begriffe, Bedeutung, Vorgehensweise
DIN 1319 Teil 3, Teil 4 DIN 1319 Teil 3: Auswertung v. Messungen einer einzelnen Messgröße, Messunsicherheit;
DIN 1319 Teil 4: Behandlung von Unsicherheiten bei der Auswertung von Messungen
DIN 32645 Chemische Analytik -Nachweis-, Erfassungs- und Bestimmungsgrenze - Ermittlung unter Wiederholbedingungen -
Begriffe, Verfahren, Auswertung
DIN 51309 Kalibrierung von Drehmomentmessgeräten für statische Drehmomente (Februar 1998)
DIN 58932-3 Haematology - Determination of the concentration of blood corpuscles - Part 3: Determination of the concentration of
erythrocytes; Reference method
DIN 58932-4 Haematology- Determination of the concentration of blood corpuscles- Part 4: Determination of leucocytes; reference
method
DKD R 7-1 Kalibrierung elektronischer nichtselbsttätiger Waagen
DKD R 7-1 Blatt 1 bis 3 Kalibrierung elektronischer nichtselbsttätiger Waagen
EA-10/03 Calibration of Pressure Balances (July 1997)
EA-10/04 Uncertainty of Calibration Results in Force Measurement (August 1996)
EA-10/14 EA Guidelines on the Calibration of Static Torque Measuring Devices (June 2000)
EA-4/02 Expression of the uncertainty of measurement in Calibration
EA-4/02 / DKD-3, E1 Angabe der Messunsicherheit bei Kalibrierungen / Expression of the Uncertainty of Measurements in Calibration
EN 13274-1 to -8 Respiratory protective devices Methods of test Parts 1 to 8
EN 550 (1984), EN 552 (1984), EN 554 (1984), EN ISO 14967 (2000) and EN ISO 14160 (1998) Sterilization of medical devices (CEN/TC 204)
EN 875, EN 876, EN 895, EN 910, EN 1043-1, EN 1043-2, EN 1321, EN 1320, prEN ISO 17641-2, prEN ISO 17641-3 Destructive testing of welds (CEN/TC 121/SC 5)



EN 970, EN 1290, EN 1435, EN 1713, EN 1714 Non-destructive testing of welds (CEN/TC 121/WG 13)
EN ISO 14253-1 Geometrical product specification (GPS) - Inspection by measurement of workpieces and measuring equipment - Part 1: Decision rules for proving conformance or non-conformance with specifications
EN ISO 4259 Petroleum products - Determination and application of precision data in relation to methods of test
EN 12286 In vitro diagnostic medical devices - Measurement of quantities in samples of biological origin - Presentation of reference measurement procedures
EN 24185 Measurement of liquid flow in closed conduits - Weighing method (ISO 4185:1980)
EN 29104 Measurement of fluid flow in closed conduits - Methods of evaluating the performance of electromagnetic flow-meters for liquids
EN ISO 2922 Acoustics - Measurement of noise emitted by vessels on inland waterways and harbours
EN ISO 4871 Acoustics - Declaration and verification of noise emission values of machinery and equipment
EN ISO 5167 Measurement of fluid flow by means of pressure differential devices - Part 1: Orifice plates, nozzles and Venturi tubes inserted in circular cross-section conduits running full
EN ISO 6817 Measurement of conductive liquid flow in closed conduits - Methods using electromagnetic flow-meters (ISO 6817:1992)
EN ISO 9300 Measurement of gas flow by means of critical flow Venturi nozzles
EN ISO 8316 Measurement of liquid flow in closed conduits - Method by collection of the liquid in a volumetric tank (ISO 8316:1987)
ENV ISO 13530 Water quality - Guide to analytical quality control for water analysis (ISO/TR 13530:1997)
EURACHEM Quantifying Uncertainty in Analytical Measurement
EUROLAB EUROLAB Technical Report: Measurement Uncertainty - a collection for beginners
FD X 07-021 Fundamental standards - Metrology and statistical applications - Aid in the procedure for estimating and using uncertainty in measurements and test results (AFNOR)
GUM Guide to the Expression of uncertainty in measurement
Hanser Verlag Method for the estimation of uncertainty of hardness testing machines; PC file for the determination (NOTE: This is a comprehensive technical book, but not discussed in the context of this inventory.)
ISO TS 14253-2 GPS - Inspection by measurement of workpieces and measuring equipment - Part 2: Guide to the estimation of uncertainty in GPS measurement, in calibration equipment and in product verification
ISO 11200-ISO 11205 Acoustics - Determination of emission sound pressure levels of noise sources (series of standards in 6 parts)
ISO 11453 Statistical interpretation of data - Tests and confidence intervals relating to proportions (1996)
ISO 11843-1 Capability of detection - Part 1: Terms and definitions (1997)
ISO 11843-2 Capability of detection - Part 2: Methodology in the linear calibration case (2000)
ISO 13752 Air quality - Assessment of uncertainty of a measurement method under field conditions using a second method as reference (1998)


ISO 14111 Natural gas - Guidelines for traceability in analysis
ISO 15195 Clinical laboratory medicine - Requirements for reference measurement laboratories
ISO 16269-7 Statistical interpretation of data - Part 7: Median - Estimation and confidence interval (2001)
ISO 3095 Acoustics - Measurement of noise emitted by railbound vehicles
ISO 3534-1 Statistics - Vocabulary and symbols - Part 1: Probability and general statistical terms (1993)
ISO 3534-2 Statistics - Vocabulary and symbols - Part 2: Statistical quality control (1993)
ISO 3534-3 Statistics - Vocabulary and symbols - Part 3: Design of experiments (1999)
ISO 362 Acoustics - Measurement of noise emitted by accelerating road vehicles - Engineering method
ISO 3740-3747 Acoustics - Determination of sound power levels of noise sources using sound pressure (series of standards in 8 parts)
ISO 5479 Statistical interpretation of data - Tests for departure from the normal distribution (1997)
ISO 5725-1 Accuracy (trueness and precision) of measurement methods and results - Part 1: General principles and definitions (1994)
ISO 5725-2 Accuracy (trueness and precision) of measurement methods and results - Part 2: Basic method for the determination of repeatability and reproducibility of a standard measurement method (1994)
ISO 5725-3 Accuracy (trueness and precision) of measurement methods and results - Part 3: Intermediate measures of the precision of a standard measurement method (1994)
ISO 5725-4 Accuracy (trueness and precision) of measurement methods and results - Part 4: Basic method for the determination of the trueness of a standard measurement method (1994)
ISO 5725-5 Accuracy (trueness and precision) of measurement methods and results - Part 5: Alternative methods for the determination of the precision of a standard measurement method (1998)
ISO 5725-6 Accuracy (trueness and precision) of measurement methods and results - Part 6: Use in practice of accuracy values (1994)
ISO 6142 Gas analysis - Preparation of calibration gas mixtures - Gravimetric method
ISO 6143 Gas analysis - Comparison method for determining and checking the composition of calibration gas mixtures
ISO 6144, ISO 6145-1, ISO/TR 14167, ISO/DIS 14912, etc. Gas analysis - Volumetric methods and quality aspects (several documents)
ISO 6879 Air quality - Performance characteristics and related concepts for air quality measuring methods (1995)
ISO 6974-1 Natural gas - Determination of composition with defined uncertainty by gas chromatography - Part 1: Guidelines for tailored analysis
ISO 7574-1 to ISO 7574-4 Acoustics - Statistical methods for determining and verifying noise emission values of machinery and equipment (series of standards in 4 parts)
ISO 8466-1 Water quality - Calibration and evaluation of analytical methods and estimation of performance characteristics - Part 1: Statistical evaluation of the linear calibration function (1990)
ISO 8466-2 Water quality - Calibration and evaluation of analytical methods and estimation of performance characteristics - Part 2: Calibration strategy for non-linear second order calibration functions (1993)



ISO 9169 Air quality - Determination of performance characteristics of a measurement method (1996)
ISO 9614-1 to ISO 9614-3 Acoustics - Determination of sound power levels of noise sources using sound intensity (series of standards in 3 parts)
VIM International vocabulary of basic and general terms in metrology (1993)
ISO CD 7507-1 Petroleum and liquid petroleum products - Calibration of vertical cylindrical tanks - Part 1: Strapping Method
ISO DIS 11222 Air quality - Determination of the uncertainty of the time average of air quality measurements
ISO DIS 14956 Air quality - Evaluation of the suitability of a measurement procedure by comparison with a required measurement uncertainty
ISO TR 10017 Guidance on statistical techniques for ISO 9001:1994 (1999)
ISO TR 13425 Guide for the selection of statistical methods in standardization and specification (1995)
ISO TR 13530 Water quality - Guide to analytical quality control for water analysis (1997)
ISO TR 13843 Water quality - Guidance on validation of microbiological methods (2000)
ISO TR 20461 Bestimmung der Messunsicherheit von Volumenmessungen nach dem geometrischen Verfahren (Determination of the measurement uncertainty of volume measurements by the geometric method)
ISO/TR 5168 Measurement of fluid flow - Evaluation of uncertainties
ISO/TR 7066-1 Assessment of uncertainty in calibration and use of flow measurement devices - Part 1: Linear calibration relationships
M3003 (UKAS) The expression of uncertainty and confidence in measurement
NEN 3114 Accuracy of measurements - Terms and definitions (1990)
NEN 6303 Vegetable and animal oils and fats - Determination of repeatability and reproducibility of methods of analysis by interlaboratory tests (1988, in Dutch)
NEN 7777 Draft Environment - Performance characteristics of measurement methods (2001, in Dutch)
NEN 7778 Draft Environment - Equivalency of measurement methods (2001, in Dutch)
FD V 03-116 Analyse des produits agricoles et alimentaires - Guide d'application des données métrologiques (AFNOR) (Analysis of agricultural and food products - Guide to the application of metrological data)
NIST Technical Note 1297 Guidelines for evaluating and expressing uncertainty of NIST measurement results
NKO-PR2.8 (EA-4/02 in Dutch) Uitdrukken van de meetonzekerheid (vertaling van EAL-R2) (Expression of the measurement uncertainty; Dutch translation of EAL-R2)
NPR 2813 (NEN, Netherlands) Uncertainty of length measurement - Terms, definitions and guidelines
NPR 7779 Draft Environment - Evaluation of the uncertainty of measurement results (2002, in Dutch)
prEN ISO 15011-1, prEN ISO 15011-2, prEN ISO 15011-3, EN ISO 10882-1, EN ISO 10882-2 Health and safety in welding and allied processes (CEN/TC 121/SC 9)
prEN ISO 8655-1 prEN ISO 8655-1 Piston operated volumetric apparatus - terms; prEN ISO 8655-1 Piston operated volumetric apparatus - gravimetric test methods


prISO 11904-1 Acoustics - Determination of sound immissions from sound sources placed close to the ears - Part 1: Technique using microphones in real ears (MIRE technique)
SINAL DT-0002 Guida per la valutazione e la espressione dell'incertezza nelle misurazioni (Guide for the evaluation and expression of uncertainty in measurements)



SINAL DT-0002/1 Guida per la valutazione e la espressione dell'incertezza nelle misurazioni, esempi applicativi di valutazioni dell'incertezza nelle misurazioni elettriche (Guide for the evaluation and expression of uncertainty in measurements, application examples of uncertainty evaluation in electrical measurements)
SINAL DT-0002/3 Guida per la valutazione e la espressione dell'incertezza nelle misurazioni, avvertenze per la valutazione dell'incertezza nel campo dell'analisi chimica (Guide for the evaluation and expression of uncertainty in measurements, notes on the evaluation of uncertainty in the field of chemical analysis)
SINAL DT-0002/4 Guida per la valutazione e la espressione dell'incertezza nelle misurazioni, esempi applicativi di valutazione dell'incertezza nelle misurazioni chimiche (Guide for the evaluation and expression of uncertainty in measurements, application examples of uncertainty evaluation in chemical measurements)
SINAL DT-0002/5 Guida per la valutazione e la espressione dell'incertezza nelle misurazioni, esempio applicativo per misurazioni su materiali strutturali (Guide for the evaluation and expression of uncertainty in measurements, application example for measurements on structural materials)
SIT Doc-519 Introduzione ai criteri di valutazione dell'incertezza di misura nelle tarature (Introduction to the criteria for evaluating measurement uncertainty in calibrations)
SIT/Tec-003/01 Linea guida per la taratura di bilance (Guideline for the calibration of balances)
TELARC Technical Guide Number 5 Precision and Limits of Detection for Analytical Methods
UKAS Publ. ref: LAB12 The Expression of Uncertainty in Testing
VDI 2449 Part 3 Measurement methods test criteria - General method for the determination of the uncertainty of calibratable measurement methods
VDI/VDE 2620 Entwurf Unsichere Messungen und ihre Wirkung auf das Messergebnis (Dez. 1998) (Draft: Uncertain measurements and their effect on the measurement result, December 1998)
VDI/VDE 2622, Bl 2 Entw Kalibrieren von Messmitteln für elektrische Größen - Methoden zur Ermittlung der Messunsicherheit (Okt. 1999) (Draft sheet 2: Calibration of measuring equipment for electrical quantities - Methods for determining the measurement uncertainty, October 1999)
