
TECHNICAL SPECIFICATION ISO/TS 28037

First edition
2010-09-01

Determination and use of straight-line calibration functions

Détermination et utilisation des fonctions d'étalonnage linéaire

Reference number
ISO/TS 28037:2010(E)

© ISO 2010

PDF disclaimer
This PDF file may contain embedded typefaces. In accordance with Adobe's licensing policy, this file may be printed or viewed but
shall not be edited unless the typefaces which are embedded are licensed to and installed on the computer performing the editing. In
downloading this file, parties accept therein the responsibility of not infringing Adobe's licensing policy. The ISO Central Secretariat
accepts no liability in this area.
Adobe is a trademark of Adobe Systems Incorporated.
Details of the software products used to create this PDF file can be found in the General Info relative to the file; the PDF-creation
parameters were optimized for printing. Every care has been taken to ensure that the file is suitable for use by ISO member bodies. In
the unlikely event that a problem relating to it is found, please inform the Central Secretariat at the address given below.

COPYRIGHT PROTECTED DOCUMENT


© ISO 2010
All rights reserved. Unless otherwise specified, no part of this publication may be reproduced or utilized in any form or by any means,
electronic or mechanical, including photocopying and microfilm, without permission in writing from either ISO at the address below or
ISO's member body in the country of the requester.
ISO copyright office
Case postale 56 • CH-1211 Geneva 20
Tel. + 41 22 749 01 11
Fax + 41 22 749 09 47
E-mail [email protected]
Web www.iso.org
Published in Switzerland


Contents

Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Conventions and notation
5 Principles of straight-line calibration
   5.1 General
   5.2 Inputs to determining the calibration function
      5.2.1 Measurement data
      5.2.2 Associated uncertainties and covariances
   5.3 Determining the calibration function
   5.4 Numerical treatment
   5.5 Uncertainties and covariance associated with the calibration function parameters
   5.6 Validation of the model
   5.7 Use of the calibration function
   5.8 Determining the ordinary least squares best-fit straight line to data
6 Model for uncertainties associated with the yi
   6.1 General
   6.2 Calibration parameter estimates and associated standard uncertainties and covariance
   6.3 Validation of the model
   6.4 Organization of the calculations
7 Model for uncertainties associated with the xi and the yi
   7.1 General
   7.2 Calibration parameter estimates and associated standard uncertainties and covariance
   7.3 Validation of the model
   7.4 Organization of the calculations
8 Model for uncertainties associated with the xi and the yi and covariances associated with the pairs (xi, yi)
   8.1 General
   8.2 Calibration parameter estimates and associated standard uncertainties and covariance
9 Model for uncertainties and covariances associated with the yi
   9.1 General
   9.2 Calibration parameter estimates and associated standard uncertainties and covariance
   9.3 Validation of the model
   9.4 Organization of the calculations
10 Model for uncertainties and covariances associated with the xi and the yi
   10.1 General
   10.2 Calibration parameter estimates and associated standard uncertainties and covariance
   10.3 Validation of the model
11 Use of the calibration function
   11.1 Prediction
   11.2 Forward evaluation

Annexes
A (informative) Matrix operations
   A.1 General
   A.2 Elementary operations
      A.2.1 Matrix-vector multiplication
      A.2.2 Matrix-matrix multiplication
      A.2.3 Matrix transpose
      A.2.4 Identity matrix
      A.2.5 Inverse of a square matrix
   A.3 Elementary definitions
      A.3.1 Symmetric matrix
      A.3.2 Invertible matrix
      A.3.3 Lower-triangular and upper-triangular matrix
      A.3.4 Orthogonal matrix
   A.4 Cholesky factorization
      A.4.1 Cholesky factorization algorithms
      A.4.2 Interpretation of the Cholesky factorization of a covariance matrix
      A.4.3 Solution of a lower-triangular system
      A.4.4 Solution of an upper-triangular system
   A.5 Orthogonal factorization
      A.5.1 QR factorization
      A.5.2 RQ factorization
B (informative) Application of the Gauss-Newton algorithm to generalized distance regression
C (informative) Orthogonal factorization approach to solving the generalized Gauss-Markov problem
   C.1 General
   C.2 Calibration parameter estimates and associated standard uncertainties and covariance
   C.3 Validation of the model
D (informative) Provision of uncertainties and covariances associated with the measured x- and y-values
   D.1 General
   D.2 Response data 1
      D.2.1 General
      D.2.2 Measurement model for uncertainties and covariances associated with the yi
   D.3 Response data 2
   D.4 Stimulus data 1
   D.5 Stimulus data 2
   D.6 Stimulus and response data
E (informative) Uncertainties known up to a scale factor
F (informative) Software implementation of described algorithms
G (informative) Glossary of principal symbols

Bibliography


Foreword
ISO (the International Organization for Standardization) is a worldwide federation of national standards bodies (ISO
member bodies). The work of preparing International Standards is normally carried out through ISO technical
committees. Each member body interested in a subject for which a technical committee has been established has the
right to be represented on that committee. International organizations, governmental and non-governmental, in liaison
with ISO, also take part in the work. ISO collaborates closely with the International Electrotechnical Commission
(IEC) on all matters of electrotechnical standardization.

International Standards are drafted in accordance with the rules given in the ISO/IEC Directives, Part 2.

The main task of technical committees is to prepare International Standards. Draft International Standards adopted
by the technical committees are circulated to the member bodies for voting. Publication as an International Standard
requires approval by at least 75 % of the member bodies casting a vote.

In other circumstances, particularly when there is an urgent market requirement for such documents, a technical
committee may decide to publish other types of normative document:

— an ISO Publicly Available Specification (ISO/PAS) represents an agreement between technical experts in an ISO
working group and is accepted for publication if it is approved by more than 50 % of the members of the parent
committee casting a vote;

— an ISO Technical Specification (ISO/TS) represents an agreement between the members of a technical committee
and is accepted for publication if it is approved by 2/3 of the members of the committee casting a vote.

An ISO/PAS or ISO/TS is reviewed after three years in order to decide whether it will be confirmed for a further
three years, revised to become an International Standard, or withdrawn. If the ISO/PAS or ISO/TS is confirmed, it is
reviewed again after a further three years, at which time it must either be transformed into an International Standard
or be withdrawn.

Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights.
ISO shall not be held responsible for identifying any or all such patent rights.

ISO/TS 28037:2010 was prepared by Technical Committee ISO/TC 69, Applications of statistical methods, Subcom-
mittee SC 6, Measurement methods and results.


Introduction
Calibration is an essential part of many measurement procedures and often involves fitting to measured data a cali-
bration function that best describes the relationship of one variable to another. This Technical Specification considers
straight-line calibration functions that describe a dependent variable Y as a function of an independent variable X.
The straight-line relationship depends on the intercept A and the slope B of the line. A and B are referred to as the
parameters of the line. The purpose of a calibration procedure is to determine estimates a and b of A and B for a
particular measuring system under consideration on the basis of measurement data (xi , yi ), i = 1, . . . , m, provided
by the measuring system. The measurement data have associated uncertainty, which means there will be uncertainty
associated with a and b. This Technical Specification describes how a and b can be determined given the data and
the associated uncertainty information. It also provides a means for evaluating the uncertainties associated with these
estimates. The treatment of uncertainty in this Technical Specification is carried out in a manner consistent with
ISO/IEC Guide 98-3:2008, “Guide to the expression of uncertainty in measurement” (GUM).

Given the uncertainty information associated with the measurement data, an appropriate method can be specified
to determine estimates of the calibration function parameters. This uncertainty information may include quantified
covariance effects, relating to dependencies among some or all of the quantities involved.

Once the straight-line model has been fitted to the data, it is necessary to determine whether or not the model and
data are consistent with each other. In cases of consistency, the model so obtained can validly be used to predict a
value x of the variable X corresponding to a measured value y of the variable Y provided by the same measuring
system. It can also be used to evaluate the uncertainties associated with the calibration function parameters and the
uncertainty associated with the predicted value x.

The determination and use of a straight-line calibration function can therefore be considered to consist of five steps:

1 Obtaining uncertainty and covariance information associated with the measurement data – although dependent
on the particular area of measurement, examples are provided within this Technical Specification;

2 Providing best estimates of the straight-line parameters;

3 Validating the model, both in terms of the functional form (do the data reflect a straight-line relationship?) and statistically (is the spread of the data consistent with their associated uncertainties?), using a chi-squared test;

4 Obtaining the standard uncertainties and covariance associated with the estimates of the straight-line parameters;

5 Using the calibration function for prediction, that is, determining an estimate x of the X-variable and its associated uncertainty corresponding to a measured value y of the Y -variable and its associated uncertainty.

The above steps are shown diagrammatically in Figure 1.

The main aim of this Technical Specification is to consider steps 2 to 5. Therefore, as part of step 1, before using this
Technical Specification, the user will need to provide standard uncertainties, and covariances if relevant, associated
with the measured Y -values and, as appropriate, those associated with the measured X-values. Account should be
taken of the principles of the GUM in evaluating these uncertainties on the basis of a measurement model that is
specific to the area of concern.

ISO 11095:1996 [14] is concerned with linear calibration using reference materials. It differs from this Technical
Specification in the ways given in Table 1.

The numerical methods given are based on reference [6].

Inputs: measurement data (xi, yi), i = 1, . . . , m, and associated covariance matrix U; model Y = A + BX

Calibration: estimates a of A and b of B

Validation: model residuals and observed chi-squared value χ²obs

Uncertainty evaluation: standard uncertainties u(a) and u(b), and covariance cov(a, b)

Prediction: from a measured value y of Y and its associated standard uncertainty u(y), a predicted value x of X and its associated standard uncertainty u(x)

Figure 1 — Summary of the steps in the determination and use of straight-line calibration functions

Table 1 — Differences between ISO 11095:1996 and ISO/TS 28037:2010

| Feature | ISO 11095:1996 | ISO/TS 28037:2010 |
|---|---|---|
| Specifically addresses reference materials | Yes | More general |
| X-values assumed to be known exactly | Yes | More general uncertainty information |
| All measured values obtained independently | Yes | More general uncertainty information |
| Terminology aligned with GUM | No | Yes |
| Types of uncertainty structure treated | Two | Five, including the most general case |
| Only uncertainty associated with random errors | Yes | More general uncertainty information |
| Consistency test | ANOVA | Chi-squared |
| Uncertainty associated with predictions | Ad hoc | GUM compatible |



TECHNICAL SPECIFICATION ISO/TS 28037:2010(E)

Determination and use of straight-line calibration functions

1 Scope

This Technical Specification is concerned with linear, that is, straight-line, calibration functions that describe the
relationship between two variables X and Y , namely, functions of the form Y = A + BX. Although many of the
principles apply to more general types of calibration function, the approaches described exploit the simple form of the
straight-line calibration function wherever possible.

Values of the parameters A and B are determined on the basis of measured data points (xi, yi), i = 1, . . . , m. Various
cases are considered relating to the nature of the uncertainties associated with these data. No assumption is made
that the errors relating to the yi are homoscedastic (having equal variance), and similarly for the xi when the errors
are not negligible.

Estimates of the parameters A and B are determined using least squares methods. The emphasis of this Technical
Specification is on choosing the least squares method most appropriate for the type of measurement data, in particular
methods that reflect the associated uncertainties. The most general type of covariance matrix associated with the
measurement data is treated, but important special cases that lead to simpler calculations are described in detail.

For all cases considered, methods for validating the use of the straight-line calibration functions and for evaluating
the uncertainties and covariance associated with the parameter estimates are given.

The Technical Specification also describes the use of the calibration function parameter estimates and their associated
uncertainties and covariance to predict a value of X and its associated standard uncertainty given a measured value
of Y and its associated standard uncertainty.

NOTE 1 The Technical Specification does not give a general treatment of outliers in measurement data, although the validation
tests given can be used as a basis for identifying discrepant data.

NOTE 2 The Technical Specification describes a method to evaluate the uncertainties associated with the measurement data
in the case that those uncertainties are known only up to a scale factor (Annex E).

2 Normative references

The following referenced documents are indispensable for the application of this document. For dated references,
only the edition cited applies. For undated references, the latest edition of the referenced document (including any
amendments) applies.

ISO/IEC Guide 99:2007, International vocabulary of metrology — Basic and general concepts and associated terms
(VIM)

ISO/IEC Guide 98-3:2008, Uncertainty of measurement — Part 3: Guide to the expression of uncertainty in measure-
ment (GUM:1995)

3 Terms and definitions

For the purposes of this document, the terms and definitions given in ISO/IEC Guide 98-3:2008 and ISO/IEC Guide
99:2007 and the following apply.

A glossary of principal symbols is given in Annex G.


3.1
measured quantity value
quantity value representing a measurement result

[ISO/IEC Guide 99:2007 2.10]

3.2
measurement uncertainty
non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based
on the information used

[ISO/IEC Guide 99:2007 2.26]

3.3
standard measurement uncertainty
measurement uncertainty expressed as a standard deviation

[ISO/IEC Guide 99:2007 2.30]

3.4
covariance associated with two quantity values
parameter characterizing the interdependence of the quantity values being attributed to two measurands, based on
the information used

3.5
measurement covariance matrix
covariance matrix
matrix of dimension N × N associated with a vector estimate of a vector quantity of dimension N × 1, containing on
its diagonal the squares of the standard uncertainties associated with the respective components of the vector estimate
of the vector quantity, and, in its off-diagonal positions, the covariances associated with pairs of components of the
vector estimate of the vector quantity

NOTE 1 A covariance matrix Ux of dimension N × N associated with the vector estimate x of a vector quantity X has the representation

$$U_x = \begin{pmatrix} \mathrm{cov}(x_1, x_1) & \cdots & \mathrm{cov}(x_1, x_N) \\ \vdots & \ddots & \vdots \\ \mathrm{cov}(x_N, x_1) & \cdots & \mathrm{cov}(x_N, x_N) \end{pmatrix},$$

where cov(xi, xi) = u²(xi) is the variance (squared standard uncertainty) associated with xi and cov(xi, xj) is the covariance
associated with xi and xj; cov(xi, xj) = 0 if elements Xi and Xj of X are uncorrelated.

NOTE 2 Covariances are also known as mutual uncertainties.

NOTE 3 A covariance matrix is also known as a variance-covariance matrix.

NOTE 4 Definition adapted from ISO/IEC Guide 98-3:2008/Suppl. 1:2008, definition 3.11 [13].

3.6
measurement model
mathematical relation among all quantities known to be involved in a measurement

[ISO/IEC Guide 99:2007 2.48]

3.7
functional model
statistical model involving errors associated with the dependent variable


3.8
structural model
statistical model involving errors associated with the independent and dependent variables

3.9
calibration
operation that, under specified conditions, in a first step, establishes a relation between the quantity values with
measurement uncertainties provided by measurement standards and corresponding indications with associated mea-
surement uncertainties and, in a second step, uses this information to establish a relation for obtaining a measurement
result from an indication

NOTE 1 A calibration may be expressed by a statement, calibration function, calibration diagram, calibration curve, or
calibration table. In some cases, it may consist of an additive or multiplicative correction of the indication with associated
measurement uncertainty.

NOTE 2 Calibration should not be confused with adjustment of a measuring system, often mistakenly called ‘self-calibration’,
nor with verification of calibration.

NOTE 3 Often the first step alone in the above definition is perceived as being calibration.

[ISO/IEC Guide 99:2007 2.39]

3.10
probability distribution
⟨random variable⟩ function giving the probability that a random variable takes any given value or belongs to a given
set of values

NOTE 1 The probability on the whole set of values of the random variable equals 1.

NOTE 2 A probability distribution is termed univariate when it relates to a single (scalar) random variable, and multivariate
when it relates to a vector of random variables. A multivariate probability distribution is also described as a joint distribution.

NOTE 3 A probability distribution can take the form of a distribution function or a probability density function.

NOTE 4 Definition and note 1 adapted from ISO 3534-1:1993, definition 1.3 and ISO/IEC Guide 98-3:2008, definition C.2.3;
notes 2 and 3 adapted from ISO/IEC Guide 98-3:2008/Suppl. 1:2008, definition 3.1 [13].

3.11
normal distribution
probability distribution of a continuous random variable X having the probability density function

$$g_X(\xi) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{\xi - \mu}{\sigma}\right)^2\right],$$

for −∞ < ξ < +∞

NOTE 1 µ is the expectation and σ is the standard deviation of X.

NOTE 2 The normal distribution is also known as the Gaussian distribution.

NOTE 3 Definition and note 1 adapted from ISO 3534-1:1993, definition 1.37; note 2 adapted from ISO/IEC Guide 98-3:2008,
definition C.2.14.

3.12
t-distribution
probability distribution of a continuous random variable X having the probability density function

$$g_X(\xi) = \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\pi\nu}\,\Gamma(\nu/2)} \left(1 + \frac{\xi^2}{\nu}\right)^{-(\nu+1)/2},$$

for −∞ < ξ < +∞, with parameter ν, a positive integer, the degrees of freedom of the distribution, where

$$\Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t}\, \mathrm{d}t, \quad z > 0,$$

is the gamma function

[ISO/IEC Guide 98-3:2008/Suppl. 1:2008 3.5]

3.13
chi-squared distribution
χ2 distribution
probability distribution of a continuous random variable X having the probability density function

$$g_X(\xi) = \frac{\xi^{(\nu/2)-1}}{2^{\nu/2}\,\Gamma(\nu/2)} \exp\left(-\frac{\xi}{2}\right),$$

for 0 ≤ ξ < ∞, with parameter ν, a positive integer, where Γ is the gamma function

NOTE The sum of the squares of ν independent standardized normal variables is a χ2 random variable with parameter ν; ν is
then called the degrees of freedom.

3.14
positive definite matrix
matrix M of dimension n × n having the property z⊤M z > 0 for all non-zero vectors z of dimension n × 1

3.15
positive semi-definite matrix
matrix M of dimension n × n having the property z⊤M z ≥ 0 for all non-zero vectors z of dimension n × 1

4 Conventions and notation

For the purpose of this Technical Specification the following conventions and notations are adopted.

4.1 X is termed the independent variable and Y the dependent variable even when the knowledge of X and Y is
‘interchangeable’, as in Clause 7, for example.

4.2 The quantities A and B are termed the parameters of the straight-line calibration function Y = A + BX. A
and B are also used to denote (dummy) variables in expressions involving the calibration function parameters.

4.3 The quantities Xi and Yi are used as (dummy) variables to denote the co-ordinates of the ith data point.

4.4 The constants A∗ and B∗ are (unknown) values of A and B that specify the straight-line calibration function
Y = A∗ + B∗X for a particular measuring system under consideration.

4.5 The constants Xi∗ and Yi∗ are the (unknown) co-ordinates of the ith data point provided by the measuring
system satisfying Yi∗ = A∗ + B ∗ Xi∗ .

4.6 xi and yi are the measured values of the co-ordinates of the ith data point.

4.7 a and b are estimates of the calibration function parameters for the measuring system.

4.8 xi∗ and yi∗ are estimates of the co-ordinates of the ith data point satisfying yi∗ = a + bxi∗.


4.9 A vector of dimension m × 1 is denoted thus:

$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_m \end{pmatrix}, \qquad x^\top = \begin{pmatrix} x_1 & \ldots & x_m \end{pmatrix},$$

and a matrix of dimension m × n is denoted thus:

$$A = \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \ldots & a_{mn} \end{pmatrix}, \qquad A^\top = \begin{pmatrix} a_{11} & \ldots & a_{m1} \\ \vdots & \ddots & \vdots \\ a_{1n} & \ldots & a_{mn} \end{pmatrix}.$$

The dimension of the vector or matrix is always specified to avoid possible confusion.

4.10 ⊤ denotes transpose.

4.11 The zero matrix is denoted by 0 and the unit vector is denoted by 1.

4.12 Some symbols have more than one meaning. The context clarifies the usage.

4.13 Numbers displayed in tables to a fixed number of decimal places are correctly rounded representations of
numbers stored to higher precision, as would be the case in a spreadsheet, for example. Therefore, minor inconsistencies
may be perceived between displayed column sums and the column sums of the displayed numbers.

4.14 In some tables, a subclause number above a column or columns indicates where the formula is given for
determining the values below.

4.15 In the examples, while data values are provided to a given precision, the results of calculations are provided
to a higher precision to allow the user to compare results when undertaking the calculations.

5 Principles of straight-line calibration

5.1 General

5.1.1 This clause considers how a relationship Y = A + BX describing the dependent variable Y (also called
‘response’) as a function of the independent variable X (also called ‘stimulus’) can be determined from measurement
data. In the context of calibration, the measurement data arise when a measuring instrument specified by (unknown)
values A∗ and B∗ of the calibration function parameters is ‘stimulated’ by artefacts with calibrated values Xi, given
in standard units, of a property of the artefacts, and the corresponding ‘responses’ or indications Yi of the instrument
are recorded. The relationship provides the response Y of the system given an artefact with calibrated quantity X.
This process is termed ‘forward evaluation’. More useful in practice, the relationship allows a measured response y
of Y to be converted to an estimate x, in standard units, of the property X of an artefact. This process is termed
‘inverse evaluation’ or ‘prediction’.

5.1.2 The calibration of a measuring system should take into account measurement uncertainties, and, if present,
covariances associated with the measurement data. The output of a calibration procedure is a calibration function to
be used for prediction (and, if required, forward evaluation). The output also includes the standard uncertainties and
covariance associated with the estimates a and b of the parameters describing the calibration function, which are used
to evaluate the standard uncertainties associated with prediction (and forward evaluation).

5.2 Inputs to determining the calibration function

5.2.1 Measurement data

The information required to determine the straight-line calibration function are the measurement data and their
associated standard uncertainties and covariances. In this Technical Specification, the measurement data are denoted
by (xi, yi), i = 1, . . . , m, that is, m pairs of measured values of X and Y. It is assumed that m is at least two and
that the values of xi are not all equal to each other.

NOTE The uncertainties associated with the estimates a and b generally decrease as m increases. Therefore, calibration should
aim to use as many measured data points as is economically viable.

5.2.2 Associated uncertainties and covariances

The standard uncertainties associated with xi and yi are denoted by u(xi ) and u(yi ) respectively. The covariance
associated with xi and xj is denoted by cov(xi , xj ). Similarly, those associated with yi and yj , and with xi and yj ,
are denoted by cov(yi , yj ) and cov(xi , yj ), respectively. Annex D indicates how the uncertainties and covariances
associated with the measured response and stimulus variables can be evaluated and gives an interpretation of that
uncertainty information. The complete uncertainty information is represented by an array of elements (matrix) U of
dimension 2m × 2m holding the variances (squared standard uncertainties) u²(xi) and u²(yi) and the covariances:

$$U = \begin{pmatrix}
u^2(x_1) & \cdots & \mathrm{cov}(x_1, x_m) & \mathrm{cov}(x_1, y_1) & \cdots & \mathrm{cov}(x_1, y_m) \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\mathrm{cov}(x_m, x_1) & \cdots & u^2(x_m) & \mathrm{cov}(x_m, y_1) & \cdots & \mathrm{cov}(x_m, y_m) \\
\mathrm{cov}(y_1, x_1) & \cdots & \mathrm{cov}(y_1, x_m) & u^2(y_1) & \cdots & \mathrm{cov}(y_1, y_m) \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\mathrm{cov}(y_m, x_1) & \cdots & \mathrm{cov}(y_m, x_m) & \mathrm{cov}(y_m, y_1) & \cdots & u^2(y_m)
\end{pmatrix}.$$
For many applications, some or all covariances are taken as zero (see 5.3).

NOTE This Technical Specification is concerned with problems in which the u(xi ) or the u(yi ) are generally different.
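For implementation in software, the following sketch shows one way in which the matrix U might be assembled. It is illustrative only and not part of this Technical Specification: it assumes the numpy package, the hypothetical function name build_covariance_matrix, and the common situation in which the only non-zero covariances are the cov(xi, yi).

```python
import numpy as np

def build_covariance_matrix(ux, uy, cov_xy=None):
    """Assemble the 2m x 2m covariance matrix U of 5.2.2 from the standard
    uncertainties u(x_i), u(y_i) and, optionally, the covariances
    cov(x_i, y_i); all other covariances are taken as zero in this sketch."""
    ux = np.asarray(ux, dtype=float)
    uy = np.asarray(uy, dtype=float)
    m = ux.size
    U = np.zeros((2 * m, 2 * m))
    U[:m, :m] = np.diag(ux ** 2)     # variances u^2(x_i)
    U[m:, m:] = np.diag(uy ** 2)     # variances u^2(y_i)
    if cov_xy is not None:
        C = np.diag(np.asarray(cov_xy, dtype=float))
        U[:m, m:] = C                # cov(x_i, y_i)
        U[m:, :m] = C                # symmetric counterpart
    return U
```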

5.3 Determining the calibration function

5.3.1 The inputs to determining the calibration function are the measurement data and their associated uncertain-
ties and possibly covariances. Given parameters A and B, the inputs can be used to provide a measure of the departure
of the ith data point (xi , yi ) from the line Y = A + BX. The estimates a and b are determined by minimizing a sum
of squares of these departures, or a more general measure when any covariances are non-zero. How this is achieved
depends on the ‘uncertainty structure’ associated with the measurement data. This uncertainty structure relates to
the answers to the following questions:

i) Are the uncertainties associated with the measured values xi negligible?

ii) Are the covariances associated with pairs of measured values negligible?

5.3.2 The following cases, given in increasing order of complexity and depending on the answers to the questions
in 5.3.1, are considered in this Technical Specification:

a) The only uncertainties are associated with the measured values yi and all covariances associated with the data
are regarded as negligible (Clause 6);

b) Uncertainties are associated with the measured values xi and yi and all covariances associated with the data are
regarded as negligible (Clause 7);

c) Uncertainties are associated with the measured values xi and yi and the only covariances are associated with the
pairs (xi , yi ) (Clause 8);

d) The only uncertainties are associated with the measured values yi and the only covariances are associated with
the yi and the yj (i 6= j) (Clause 9);

e) The most general case in which there are uncertainties associated with the measured values xi and yi and
covariances associated with all pairs of values of the xi , the xj , the yk and the y` (Clause 10).


5.3.3 For each case in 5.3.2 are given

a) the prescribed measurement data and uncertainty structure,

b) the corresponding statistical model,

c) the least squares problem addressed,

d) the calculation steps,

e) properties of the statistical model,

f) validation of the model,

g) organization of the calculations for the computer, where appropriate,

h) a numerical algorithm, where appropriate, and

i) one or more worked examples.

5.4 Numerical treatment

In Annex C, a general approach to the most general case e) in 5.3.2 is given. It can be used to treat all the other
cases and uses sophisticated, numerically stable methods. The cases a) to c) in 5.3.2 can, however, be treated using
elementary operations, which can be implemented in a spreadsheet, for example. The cases d) and e) in 5.3.2 require
some matrix operations, which are straightforward to implement in a computer language supporting matrix arithmetic,
but are not well suited to spreadsheet calculations.

5.5 Uncertainties and covariance associated with the calibration function parameters

5.5.1 For all cases considered, estimates of the calibration function parameters can be expressed (explicitly or
implicitly) as functions of the measurement data. The principles of the GUM [ISO/IEC Guide 98-3:2008] can be
applied to propagate the uncertainties and covariances associated with the measurement data through these functions
to obtain those associated with these parameter estimates. In this way, the measurement data are used to provide
estimates a and b of the calibration function parameters, and to evaluate standard uncertainties u(a) and u(b) and
the covariance cov(a, b) associated with these estimates. For the cases a) and d) in 5.3.2, the propagation is exact
since the parameter estimates can be expressed as linear combinations of the inputs yi . For the other cases, in which
the parameter estimates cannot be so expressed, the propagation is approximate, based on a linearization about the
parameter estimates. For many purposes, the approximation incurred by the linearization will be sufficiently accurate.

NOTE When the propagation of uncertainty is approximate, and particularly if the uncertainties involved are large (for example,
in some areas of biological measurement), an approach based on the propagation of distributions can be employed. This approach
[ISO/IEC Guide 98-3:2008/Suppl. 1:2008] uses a Monte Carlo method (not treated in this Technical Specification).

5.5.2 The primary outputs in describing the straight-line calibration function are the parameter estimate vector a
of dimension 2 × 1 and the covariance matrix Ua of dimension 2 × 2 given by

$$a = \begin{pmatrix} a \\ b \end{pmatrix}, \qquad U_a = \begin{pmatrix} u^2(a) & \mathrm{cov}(a, b) \\ \mathrm{cov}(b, a) & u^2(b) \end{pmatrix}, \qquad (1)$$

where u(a) and u(b) are the standard uncertainties associated with a and b, respectively, and cov(a, b) = cov(b, a) is
the covariance associated with a and b.


5.6 Validation of the model

5.6.1 In determining the estimates a and b of the straight-line calibration function parameters, it is assumed
that the model Y = A + BX is valid and that the uncertainties associated with the measurement data give a credible
measure of the departure of the measurement data from a straight line. Once a and b have been determined, the actual
departure of the data points from the best-fit calibration function can be assessed against a predicted departure. This
comparison involves an aggregate measure of departure expressed in terms of the sum of squares χ2obs of m weighted
residuals, the ith weighted residual being a measure of the departure of the ith data point from the line, or, when the
covariance associated with the ith data point (xi , yi ) is non-zero, a more general form. If χ2obs is much bigger than
expected, on statistical grounds there is reason to call into question the validity of the model assumptions.

5.6.2 From a statistical viewpoint, the measurement data can be regarded as realizations of random variables. If
the probability distributions characterizing these random variables were known, it would be possible in principle to
determine the probability distribution for the aggregate measure of departure in 5.6.1. Then the probability could
be calculated that χ2obs , regarded as a draw from this aggregate distribution, exceeded any particular quantile of the
distribution. However, as the information about these quantities is often limited to the measured values themselves
and their associated variances (taken to be the expectations and variances, respectively, of the random variables
characterized by these distributions), there is insufficient information to determine the distribution for this measure.
Instead, the assessment of validity is performed assuming that the distributions for these quantities are normal. With
this assumption, which is henceforth taken to hold, at least for validation purposes, the distribution for this measure
is χ2ν with ν = m − 2 degrees of freedom. Accordingly, the probability that χ2obs exceeds any particular quantile of χ2ν
can be determined (see 6.3, 7.3, 9.3, 10.3). The 95 % quantile is used.

NOTE 1 If χ2obs exceeds the 95 % quantile of χ2ν , the straight-line calibration function can be regarded as not explaining the
data sufficiently well for practical purposes. In such a case, the data and associated uncertainties should be checked for possible
mistakes. A calibration function consisting of a polynomial in X of degree 2 or higher or some other mathematical form can be
entertained; such a consideration is beyond the scope of this Technical Specification.

NOTE 2 There is a possibility that the model is ‘too good’ in that the observed value χ2obs is significantly smaller than the
expected value. This possibility typically corresponds to the uncertainties associated with the measurement data being quoted
as too large, and is not considered further in this Technical Specification.
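As an illustration of the comparison described in 5.6.2, the following sketch (not part of this Technical Specification; it assumes the scipy package and a hypothetical function name) obtains the 95 % quantile of χ²ν and compares it with an observed chi-squared value.

```python
from scipy.stats import chi2

def model_accepted(chi2_obs, m):
    """Return True if chi2_obs does not exceed the 95 % quantile of the
    chi-squared distribution with nu = m - 2 degrees of freedom."""
    nu = m - 2
    return chi2_obs <= chi2.ppf(0.95, nu)
```

For m = 6 data points, ν = 4 and the 95 % quantile is approximately 9,488, the value used in the worked examples of Clause 6.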

5.6.3 In order to obtain as much value as possible from a calibration, it is desirable that input uncertainties are
derived prior to determining the calibration function parameters, rather than being evaluated once a fit to the data
has been determined, with associated uncertainties estimated from the data or known up to a scale factor. The latter
case is considered in Annex E.

5.6.4 If, in any particular case, the validation of the model fails, that is, χ2obs exceeds the 95 % quantile of χ2ν
(see 5.6.2), the calculated standard uncertainties u(a) and u(b) and covariance cov(a, b) (see 5.5.2) should be regarded
as unreliable, as should the uncertainty associated with a predicted value (see 5.7).

5.7 Use of the calibration function

5.7.1 The calibration function is typically used for prediction (inverse evaluation) where, given an estimate of Y and
its associated standard uncertainty, the corresponding value of X is estimated and its associated standard uncertainty
is evaluated. Evaluation of the latter uncertainty makes use of the standard uncertainties associated with the estimates
a and b as well as their associated covariance. See 11.1.

5.7.2 Forward evaluation where, given an estimate of X and its associated uncertainty, the corresponding value
of Y is obtained together with its associated standard uncertainty, is sometimes required, for example, when comparing
the calibrations of a set of similar instruments. See 11.2.

NOTE It is assumed that the conditions of measurement that held during the calibration hold at the time the calibration
function is subsequently used. Otherwise, either a new calibration would be necessary or an appropriate adjustment should
be made to take account of any change such as drift that might have occurred (and that any associated uncertainty is also
handled). Control charts can be useful for this purpose.


5.8 Determining the ordinary least squares best-fit straight line to data

5.8.1 The ordinary (unweighted) least squares best-fit straight line to the data is defined by the values a and b of
the parameters A and B that minimize
$$\sum_{i=1}^{m} (y_i - A - Bx_i)^2. \qquad (2)$$

These values satisfy the equations given by equating to zero the partial derivatives of first order of expression (2) with
respect to A and B.

5.8.2 a and b can be calculated in the following steps:


1 Set $x_0 = \dfrac{1}{m}\sum_{i=1}^{m} x_i$ and $y_0 = \dfrac{1}{m}\sum_{i=1}^{m} y_i$;

2 Set $\tilde{x}_i = x_i - x_0$ and $\tilde{y}_i = y_i - y_0$, $i = 1, \ldots, m$;

3 Set $b = \dfrac{\sum_{i=1}^{m} \tilde{x}_i \tilde{y}_i}{\sum_{i=1}^{m} \tilde{x}_i^2}$ and $a = y_0 - b x_0$.

5.8.3 The values x0 and y0 are such that the best-fit line to the translated data points $(\tilde{x}_i, \tilde{y}_i)$ passes through the
origin and has the same slope as the best-fit line to the original data points (xi, yi).

NOTE Mathematically, the best-fit parameters are determined by solving a pair of linear equations involving a matrix of
dimension 2 × 2. For the transformed data points, this matrix is diagonal, allowing the solution parameters to be determined
easily. Translating the data also has a beneficial effect in terms of numerical accuracy of the computed solution [4, page 33].
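The three steps of 5.8.2 translate directly into code. The following Python sketch is illustrative only; it assumes the numpy package and the hypothetical function name ols_straight_line.

```python
import numpy as np

def ols_straight_line(x, y):
    """Ordinary least squares best-fit straight line y = a + b*x (5.8.2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x0, y0 = x.mean(), y.mean()            # step 1: centroid of the data
    xt, yt = x - x0, y - y0                # step 2: translated data points
    b = np.sum(xt * yt) / np.sum(xt ** 2)  # step 3: slope ...
    a = y0 - b * x0                        # ... and intercept
    return a, b
```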

5.8.4 The methods described in the Clauses 6 to 10 below constitute extensions of the calculations in 5.8.2 that
take into account the prescribed uncertainty information.

6 Model for uncertainties associated with the yi

6.1 General

6.1.1 This clause considers the case 5.3.2 a), namely when the following information is provided for i = 1, . . . , m:

a) measurement data (xi , yi ), and

b) standard uncertainty u(yi ) associated with yi .

Annex D provides guidance on obtaining these uncertainties. All other uncertainties and covariances associated with
the data are regarded as negligible.

6.1.2 The case 5.3.2 a) corresponds to that described by the statistical model
$$y_i = A^* + B^* x_i + e_i, \quad i = 1, \ldots, m, \qquad (3)$$
where the ei are realizations of independent random variables with expectations zero and variances u2 (yi ) [9, page 1].
A∗ and B ∗ are the (unknown) values of the calibration function parameters for the measuring system for which a
calibration is required and which provides the measurement data. This model, having no uncertainty associated with
the xi , is known as a functional model.

6.1.3 Let wi = 1/u(yi ), i = 1, . . . , m. The estimates a and b are those that minimize the weighted sum of squares
$$\sum_{i=1}^{m} R_i^2 \equiv \sum_{i=1}^{m} w_i^2 (y_i - A - Bx_i)^2 \qquad (4)$$


with respect to A and B. This minimization problem is known as a weighted least squares (WLS) problem. These
estimates satisfy the equations given by equating to zero the partial derivatives of first order of expression (4) with
respect to A and B.

6.2 Calibration parameter estimates and associated standard uncertainties and covariance

6.2.1 Estimates a and b are calculated in steps 1 to 5; the standard uncertainties u(a) and u(b) and covariance cov(a, b) are evaluated in step 6:

1 Set $w_i = \dfrac{1}{u(y_i)}$, $i = 1, \ldots, m$, and $F^2 = \sum_{i=1}^{m} w_i^2$;

2 Set $g_0 = \dfrac{1}{F^2}\sum_{i=1}^{m} w_i^2 x_i$ and $h_0 = \dfrac{1}{F^2}\sum_{i=1}^{m} w_i^2 y_i$;

3 Set $g_i = w_i(x_i - g_0)$ and $h_i = w_i(y_i - h_0)$, $i = 1, \ldots, m$;

4 Set $G^2 = \sum_{i=1}^{m} g_i^2$;

5 Set $b = \dfrac{1}{G^2}\sum_{i=1}^{m} g_i h_i$ and $a = h_0 - b g_0$;

6 Set $u^2(a) = \dfrac{1}{F^2} + \dfrac{g_0^2}{G^2}$, $u^2(b) = \dfrac{1}{G^2}$ and $\mathrm{cov}(a, b) = -\dfrac{g_0}{G^2}$.

NOTE 1 Steps 1 to 5 are equivalent to the steps:

i) Set $w_i = \dfrac{1}{u(y_i)}$, $i = 1, \ldots, m$, $x_0 = \dfrac{\sum_{i=1}^{m} w_i^2 x_i}{\sum_{i=1}^{m} w_i^2}$ and $y_0 = \dfrac{\sum_{i=1}^{m} w_i^2 y_i}{\sum_{i=1}^{m} w_i^2}$;

ii) Set $\tilde{x}_i = x_i - x_0$ and $\tilde{y}_i = y_i - y_0$, $i = 1, \ldots, m$;

iii) Set $b = \dfrac{\sum_{i=1}^{m} w_i^2 \tilde{x}_i \tilde{y}_i}{\sum_{i=1}^{m} w_i^2 \tilde{x}_i^2}$ and $a = y_0 - b x_0$.

NOTE 2 Steps 1 to 5 determine the least squares solution to the system of equations

$$w_i a + w_i x_i b = w_i y_i, \quad i = 1, \ldots, m.$$

NOTE 3 If the u(yi ) are identical, so that the wi are identical, a and b are the same as those given by the ordinary least
squares best-fit line in 5.8.2.

NOTE 4 u2 (a), u2 (b) and cov(a, b) in step 6 are obtained by applying the law of propagation of uncertainty in ISO/IEC Guide
98-3:2008 to a and b as provided by steps 1 to 5.
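The following Python sketch carries out steps 1 to 6 of 6.2.1. It is illustrative only, assumes the numpy package, and the function name wls_straight_line is a hypothetical choice, not part of this Technical Specification.

```python
import numpy as np

def wls_straight_line(x, y, uy):
    """Weighted least squares straight line of 6.2.1 (steps 1 to 6).

    Returns the estimates a and b, the standard uncertainties u(a) and u(b),
    and the covariance cov(a, b)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(uy, dtype=float)     # step 1: weights
    F2 = np.sum(w ** 2)
    g0 = np.sum(w ** 2 * x) / F2              # step 2
    h0 = np.sum(w ** 2 * y) / F2
    g = w * (x - g0)                          # step 3
    h = w * (y - h0)
    G2 = np.sum(g ** 2)                       # step 4
    b = np.sum(g * h) / G2                    # step 5
    a = h0 - b * g0
    ua = np.sqrt(1.0 / F2 + g0 ** 2 / G2)     # step 6
    ub = np.sqrt(1.0 / G2)
    cov_ab = -g0 / G2
    return a, b, ua, ub, cov_ab
```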

6.2.2 The estimates a and b determined in 6.1.3 have the following properties [15] for data yi according to the
model (3):

i) The estimates a and b are given by linear combinations of the data yi .

ii) The estimates a and b can be regarded as realizations of random variables whose expectations are A∗ and B ∗ ,
respectively.

iii) The covariance matrix for the random variables in ii) is specified by u2 (a), u2 (b) and cov(a, b) calculated in 6.2.1.


Property i) states that a and b are derived using a linear estimation method. Property ii) states that the linear
estimation method is unbiased. Properties ii) and iii) jointly show that the estimation method is consistent in the
sense that as the number m of data points is increased, the estimates a and b converge to A∗ and B ∗ , respectively.

The estimation method of 6.1.3 has the following optimal property for data yi according to the model (3):

iv) The estimates ă and b̆ provided by any unbiased, linear estimation method can be regarded as realizations of
random variables whose variances are at least as large as those associated with the WLS estimation method.

Property iv) can be interpreted as follows. For constants c and d, the standard uncertainty u(că + db̆) associated with
a linear combination of the estimates ă and b̆ provided by any unbiased, linear estimation method is at least as great
as u(ca + db). Properties i) to iv) justify the use of least squares methods for data compatible with the model (3). Note
that in the use of this model statements are only made about the expectations and variances associated with the ei ;
the associated distributions are not further specified. If the additional assumption is made that the ei are realizations
of normally distributed random variables, then further properties associated with the WLS estimation method can be
made:

v) The random variables in ii) are characterized by a bivariate normal distribution centred on A∗ and B ∗ with
covariance matrix specified by u2 (a), u2 (b) and cov(a, b).

vi) The estimates a and b are maximum likelihood estimates, corresponding to the most likely values of A and B
that could have given rise to the observed measurement data yi .

vii) In the context of Bayesian inference, the state-of-knowledge distribution for A and B, given the observed mea-
surement data yi , is a bivariate normal distribution centred on a and b with covariance matrix specified by u2 (a),
u2 (b) and cov(a, b).

6.3 Validation of the model

If m > 2, the validity of the model can be partially tested using the weighted residuals ri (continued from 6.2.1):

7 Form $r_i = w_i(y_i - a - b x_i)$, $i = 1, \ldots, m$;

8 Form the observed chi-squared value $\chi^2_{\mathrm{obs}} = \sum_{i=1}^{m} r_i^2$ and degrees of freedom $\nu = m - 2$;

9 Check whether $\chi^2_{\mathrm{obs}}$ exceeds the 95 % quantile of $\chi^2_{\nu}$, and if it does reject the straight-line model.

NOTE The chi-squared test is based on an assumption that the ei in model (3) are realizations of independent normal random
variables.
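Steps 7 to 9 can be implemented as in the following sketch, which is illustrative only, assumes the numpy and scipy packages, and uses the hypothetical function name validate_wls_fit.

```python
import numpy as np
from scipy.stats import chi2

def validate_wls_fit(x, y, uy, a, b):
    """Validation of the weighted least squares fit (6.3, steps 7 to 9)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(uy, dtype=float)
    r = w * (y - a - b * x)                  # step 7: weighted residuals
    chi2_obs = np.sum(r ** 2)                # step 8: observed chi-squared value
    nu = x.size - 2                          #         degrees of freedom
    accept = chi2_obs <= chi2.ppf(0.95, nu)  # step 9: 95 % quantile test
    return r, chi2_obs, nu, accept
```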

6.4 Organization of the calculations

The calculations in 6.2.1 and 6.3 can be organized into one or two tableaux for implementation in a spreadsheet as in
Tables 2 and 3, which can be amalgamated into a single table or spreadsheet.

Table 2 — Data for the weighted least squares straight-line calibration function

x1 y1 u(y1 )
x2 y2 u(y2 )
.. .. ..
. . .
xm ym u(ym )


Table 3 — Organization of the calculations to determine the weighted least squares straight-line calibration
function

| wi | wi² | wi² xi | wi² yi | gi | hi | gi² | gi hi | ri | ri² |
|----|-----|--------|--------|----|----|-----|-------|----|-----|
|    |     | g0     | h0     |    |    |     | a     |    |     |
| w1 | w1² | w1² x1 | w1² y1 | g1 | h1 | g1² | g1 h1 | r1 | r1² |
| w2 | w2² | w2² x2 | w2² y2 | g2 | h2 | g2² | g2 h2 | r2 | r2² |
| ⋮  | ⋮   | ⋮      | ⋮      | ⋮  | ⋮  | ⋮   | ⋮     | ⋮  | ⋮   |
| wm | wm² | wm² xm | wm² ym | gm | hm | gm² | gm hm | rm | rm² |
|    | F² = Σwi² | Σwi² xi | Σwi² yi |  |  | G² = Σgi² | Σgi hi, b |  | χ²obs = Σri² |

The quantities g0 and h0 are obtained as in 6.2.1 steps 2 and 3, a and b as in 6.2.1 step 5, the ri as in 6.3 step 7 and χ²obs as in 6.3 step 8.


EXAMPLE (EQUAL WEIGHTS) Table 4 gives six data points and their associated standard uncertainties. The measured
values xi are taken to be exact and the standard uncertainty associated with the measured values yi is u(yi ) = 0,5. The weights
are therefore taken to be wi = 1/u(yi ) = 2,0, i = 1, . . . , 6.

Table 4 — Data representing six measurement points, equal weights

xi yi u(yi )
1,0 3,3 0,5
2,0 5,6 0,5
3,0 7,1 0,5
4,0 9,3 0,5
5,0 10,7 0,5
6,0 12,1 0,5

The best-fit straight-line parameters are calculated as in Table 5. From the table, g0 = 84,000/24,000 = 3,500,
h0 = 192,400/24,000 = 8,017, b = 123,000/70,000 = 1,757 and a = 8,017 − (1,757)(3,500) = 1,867.

Table 5 — Calculation tableau associated with the data in Table 4

| wi    | wi²   | wi² xi | wi² yi | gi     | hi     | gi²    | gi hi  | ri     | ri²   |
|-------|-------|--------|--------|--------|--------|--------|--------|--------|-------|
|       |       | g0 = 3,500 | h0 = 8,017 |  |   |        | a = 1,867 |     |       |
| 2,000 | 4,000 | 4,000  | 13,200 | −5,000 | −9,433 | 25,000 | 47,167 | −0,648 | 0,419 |
| 2,000 | 4,000 | 8,000  | 22,400 | −3,000 | −4,833 | 9,000  | 14,500 | 0,438  | 0,192 |
| 2,000 | 4,000 | 12,000 | 28,400 | −1,000 | −1,833 | 1,000  | 1,833  | −0,076 | 0,006 |
| 2,000 | 4,000 | 16,000 | 37,200 | 1,000  | 2,567  | 1,000  | 2,567  | 0,810  | 0,655 |
| 2,000 | 4,000 | 20,000 | 42,800 | 3,000  | 5,367  | 9,000  | 16,100 | 0,095  | 0,009 |
| 2,000 | 4,000 | 24,000 | 48,400 | 5,000  | 8,167  | 25,000 | 40,833 | −0,619 | 0,383 |
|       | F² = 24,000 | 84,000 | 192,400 | |    | G² = 70,000 | 123,000, b = 1,757 | | χ²obs = 1,665 |

The standard uncertainties and covariance associated with the fitted parameters can also be evaluated, using the formulæ
in 6.2.1, from information in Table 5:

u²(a) = 1/24,000 + (3,500)²/70,000, so that u(a) = 0,465;

u²(b) = 1/70,000, so that u(b) = 0,120;

cov(a, b) = −3,500/70,000 = −0,050.

The observed chi-squared value is χ²obs = 1,665 with ν = 4 degrees of freedom, as calculated in Table 5 using 6.3. Since χ²obs
does not exceed the 95 % quantile of χ²ν, namely 9,488, there is no reason to doubt the consistency of the straight-line model and
the data.

The data and fitted straight-line calibration function are displayed in Figure 2. Standard uncertainties associated with the yi
are illustrated in this figure (and in subsequent figures) by vertical lines, centred on yi and having extremities yi − u(yi) and yi + u(yi). The
weighted residuals are shown in Figure 3.
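For comparison, the hypothetical helpers sketched after 6.2.1 and 6.3 (wls_straight_line and validate_wls_fit, illustrative assumptions rather than part of this Technical Specification) reproduce the values in Table 5 when applied to the data of Table 4:

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [3.3, 5.6, 7.1, 9.3, 10.7, 12.1]
uy = [0.5] * 6

a, b, ua, ub, cov_ab = wls_straight_line(x, y, uy)
r, chi2_obs, nu, accept = validate_wls_fit(x, y, uy, a, b)

print(a, b)                  # approximately 1.867 and 1.757
print(ua, ub, cov_ab)        # approximately 0.465, 0.120 and -0.050
print(chi2_obs, nu, accept)  # approximately 1.665, 4, True
```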


Figure 2 — Data in Table 4 and fitted straight-line calibration function obtained in Table 5

Figure 3 — Weighted residuals ri obtained in Table 5


EXAMPLE (UNEQUAL WEIGHTS) Table 6 gives six data points and their associated standard uncertainties. The xi are
taken to be exact. The yi were obtained using two instrument settings, so that for larger values of X, the yi are less accurate.

Table 6 — Data representing six measurement points, unequal weights

xi yi u(yi )
1,0 3,2 0,5
2,0 4,3 0,5
3,0 7,6 0,5
4,0 8,6 1,0
5,0 11,7 1,0
6,0 12,8 1,0

The best fit straight-line parameters are calculated as in Table 7. From the table, g0 = 39,000/15,000 = 2,600,
h0 = 93,500/15,000 = 6,233, b = 65,000/31,600=2,057 and a = 6,233 − (2,057)(2,600) = 0,885.

Table 7 — Calculation tableau associated with the data in Table 6

wi   wi²   wi²xi   wi²yi   gi   hi   gi²   gihi   ri   ri²


2,600 6,233 a = 0,885
2,000 4,000 4,000 12,800 −3,200 −6,067 10,240 19,413 0,516 0,266
2,000 4,000 8,000 17,200 −1,200 −3,867 1,440 4,640 −1,398 1,955
2,000 4,000 12,000 30,400 0,800 2,733 0,640 2,187 1,088 1,183
1,000 1,000 4,000 8,600 1,400 2,367 1,960 3,313 −0,513 0,263
1,000 1,000 5,000 11,700 2,400 5,467 5,760 13,120 0,530 0,281
1,000 1,000 6,000 12,800 3,400 6,567 11,560 22,327 −0,427 0,182
15,000 39,000 93,500 31,600 65,000 b = 2,057 4,131

The standard uncertainties and covariance associated with the fitted parameters can be evaluated, using the formulæ in 6.2.1,
from information in Table 7:

u²(a) = 1/15,000 + (2,600)²/31,600, so that u(a) = 0,530;
u²(b) = 1/31,600, so that u(b) = 0,178;
u(a, b) = −2,600/31,600 = −0,082.

The observed chi-squared value is χ2obs = 4,131 with ν = 4 degrees of freedom, as calculated in Table 7 using 6.3. Since χ2obs
does not exceed the 95 % quantile of χ2ν , namely 9,488, this is no reason to doubt the consistency of the straight-line model and
the data.

The data and fitted straight-line calibration function are displayed in Figure 4. The weighted residuals are shown in Figure 5.
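
As an aside, running the illustrative wls_line sketch given after the equal-weights example with the Table 6 values (u(yi) = 0,5 for the first three points and 1,0 for the last three) reproduces a = 0,885, b = 2,057, u(a) = 0,530, u(b) = 0,178, cov(a, b) = −0,082 and χ²obs = 4,131.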


Figure 4 — Data in Table 6 and fitted straight-line calibration function obtained in Table 7

Figure 5 — Weighted residuals ri obtained in Table 7


7 Model for uncertainties associated with the xi and the yi

7.1 General

7.1.1 This clause considers the case 5.3.2 b), namely when the following information is provided for i = 1, . . . , m:

a) measurement data (xi , yi ),

b) standard uncertainty u(xi ) associated with xi , and

c) standard uncertainty u(yi ) associated with yi .

Annex D provides guidance on obtaining these uncertainties. All covariances associated with the data are regarded as
negligible.

7.1.2 The case 5.3.2 b) corresponds to that described by the statistical model

xi = Xi∗ + di , yi = Yi∗ + ei , Yi∗ = A∗ + B ∗ Xi∗ , i = 1, . . . , m, (5)

where the di and ei are realizations of independent random variables with expectations zero and variances u2 (xi )
and u2 (yi ), respectively. This model is known as a structural model. In the model, (xi , yi ) represent the measured
co-ordinates of the (unobserved) point (Xi∗ , Yi∗ ) lying on the line Y = A∗ + B ∗ X.

7.1.3 As the xi (in addition to the yi – see Clause 6) have associated uncertainties, account is also taken of them in
determining a straight-line calibration function. The problem of determining a and b in this context is one of weighted
orthogonal distance regression (weighted ODR) [3] or generalized distance regression (GDR) [2]. In the statistical
literature it is referred to as an errors-in-variables model [7], [9, page 50], [17, page 189]. The estimates a and b are
those that minimize the sum of squares
   Σi [ vi² (xi − Xi)² + wi² (yi − A − BXi)² ],                                                     (6)

with respect to A, B, and Xi , i = 1, . . . , m, for weights vi = 1/u(xi ) and wi = 1/u(yi ). Each solution estimate x∗i ,
along with a and b, specifies the estimate (x∗i , yi∗ ), yi∗ = a + bx∗i , of (Xi∗ , Yi∗ ) in model (5).

7.1.4 Given A and B, the values x∗i that minimize the sum of squares (6) with respect to Xi are given by

   x∗i = x∗i(A, B) = [u²(yi) xi + (yi − A) B u²(xi)] Ti,        Ti = 1/[u²(yi) + B² u²(xi)].        (7)

Using expressions (7) the optimization problem can be posed in terms of parameters A and B alone by replacing Xi
in expression (6) by x∗i (A, B) giving
   Σi { vi² [xi − x∗i(A, B)]² + wi² [yi − y∗i(A, B)]² },        y∗i(A, B) = A + B x∗i(A, B).        (8)

7.1.5 If

   Ri = Ri(A, B) = {−B [xi − x∗i(A, B)] + [yi − y∗i(A, B)]} Ti^(1/2),                               (9)

then the sum of squares (8) is equivalent to

   Σi Ri².

The term Ri has the following geometric interpretation. The normal vector to the line Y = A + BX is given by
(−B, 1)ᵀ/(1 + B²)^(1/2) and Ri is a weighted multiple of the signed component of (xi − x∗i(A, B), yi − y∗i(A, B))ᵀ in the
direction of the normal vector.


NOTE 1 In ordinary least squares (see 5.8) and weighted least squares (see Clause 6), the distance to the line is measured
‘vertically’, that is, in the Y -direction, reflecting the fact that the deviation of the measured point (xi , yi ) from the line can be
accounted for in terms of an error ei associated with yi , since xi is assumed to be known accurately. Weighted ODR addresses
the case where there are also uncertainties associated with the xi .

NOTE 2 Expressions (7) are given by equating to zero the partial derivatives of first order of expression (6) with respect to
A, B and the Xi .

NOTE 3 If u(xi) = 0 then, in expressions (7), x∗i(A, B) = xi (so that y∗i(A, B) = A + Bxi) and Ti = 1/u²(yi) = wi².
Consequently, Ri in expression (9) is given by Ri = wi(yi − A − Bxi). Thus, if u(xi) = 0 the term Ri is evaluated in the same
way as in expression (4) in 6.1.3.

NOTE 4 If u(xi) = u(yi) = ui, say, then x∗i(A, B) defines the point on the line Y = A + BX closest to (xi, yi) and

   Ti = 1/[ui² (1 + B²)],        Ri = (1/ui) {−B [xi − x∗i(A, B)] + [yi − y∗i(A, B)]}/(1 + B²)^(1/2).

Since the normal vector to the line is (−B, 1)ᵀ/(1 + B²)^(1/2), Ri is the weighted distance from the point (xi, yi) to the line
Y = A + BX.

7.1.6 Subclause 7.1.3 involves A, B and the Xi , i = 1, . . . , m, as variables in the minimization. The calculations
given in 7.2.1 perform this minimization in a two-stage iteration [2]:

(i) from approximations to a and b, determine the corresponding optimal x∗i , and

(ii) in terms of these x∗i , determine new approximations to a and b that will reduce the sum of squares (6).

NOTE No notational distinction is made between x∗i at a typical iteration and the final solution value.

7.2 Calibration parameter estimates and associated standard uncertainties and covariance

7.2.1 Estimates a and b are calculated in steps 1 to 6 below using the iterative scheme indicated in 7.1.6; the
standard uncertainties u(a) and u(b) and covariance cov(a, b) are evaluated in step 7 (see Annex B):

1 Obtain initial approximations e a and eb to a and b, for example, by determining the weighted least squares best-fit
line to the data (see 6.2.1 steps 1 to 5), ignoring the uncertainties associated with the xi ;

2 Set ti = 1/[u²(yi) + b̃² u²(xi)], x∗i = [xi u²(yi) + (yi − ã) b̃ u²(xi)] ti and zi = yi − ã − b̃ xi, i = 1, . . . , m;

3 Set fi = ti^(1/2), gi = fi x∗i and hi = fi zi, i = 1, . . . , m;

4 Determine the (unweighted) least squares solution δa and δb to the system of equations

   (δA) fi + (δB) gi = hi,        i = 1, . . . , m,

  that is

   i) Set F² = Σi fi², the sums here and below being over i = 1, . . . , m;

   ii) Set g0 = (1/F²) Σi fi gi and h0 = (1/F²) Σi fi hi;

   iii) Set g̃i = gi − g0 fi and h̃i = hi − h0 fi, i = 1, . . . , m;

   iv) Set G̃² = Σi g̃i²;

   v) Set δb = (1/G̃²) Σi g̃i h̃i and δa = h0 − (δb) g0;

5 Update the current approximations to the parameters and residuals: ã := ã + δa, b̃ := b̃ + δb, ri = h̃i − (δb) g̃i, i = 1, . . . , m;

6 Repeat steps 2 to 5 until convergence has been achieved. Set a = ã, b = b̃;

7 Set u²(a) = 1/F² + g0²/G̃², u²(b) = 1/G̃² and cov(a, b) = −g0/G̃², where g0, h0, etc., are the values calculated in step 4.

NOTE 1 The calculations in step 4 are similar to those in steps 1 to 5 in 6.2.1.

NOTE 2 In step 2, x∗i defines the point (x∗i , a + bx∗i ) on the current approximation to the best-fit straight-line calibration
function that is closest, as a weighted distance, to the measured data point (xi , yi ).

NOTE 3 In step 3, the calculated hi represents a value of the generalized distance Ri in expression (9) from the ith data point
to the current estimate of the straight-line calibration function. The algorithm is designed to minimize the sum of squares of
such distances.

NOTE 4 In step 4, the corrections δa and δb will generally decrease in magnitude by ultimately an approximately constant
factor from iteration to iteration. The size of the reduction depends largely on the uncertainties associated with the data: the
smaller these uncertainties are, the greater the reduction will be. The iterative scheme can be terminated when the magnitudes
of the corrections are judged to be negligible.

NOTE 5 The residuals calculated in step 5 are associated with the solution of the system of equations solved in step 4. At
convergence, the ri calculated in step 5 is the same as the hi calculated in step 3.

NOTE 6 Strictly only the residuals calculated in step 5 at the final iteration are required. However, for ease of presentation
in tableau format (Table 9 in 7.4), the residuals are calculated at each iteration.

NOTE 7 u2 (a), u2 (b) and cov(a, b) in step 7 are obtained by applying the law of propagation of uncertainty in ISO/IEC Guide
98-3:2008 to a and b as provided by steps 1 to 6.

7.2.2 While it is possible to derive the properties in 6.2.2 for the WLS estimation method, the fact that the estimates
a and b determined by minimizing the sum of squares (6) depend non-linearly on the data xi and yi means that the
corresponding properties for weighted ODR cannot be straightforwardly stated. The estimates a and b determined
in 7.1.3 have the following properties for data xi and yi according to the model (5):

i) The estimates a and b are given by non-linear functions of the data xi and yi .

ii) The estimates a and b can be regarded as realizations of random variables whose expectations are approximately
A∗ and B ∗ , respectively.

iii) The elements of the covariance matrix for the random variables in ii) are approximated by u2 (a), u2 (b) and
cov(a, b) calculated in 7.2.1.

The approximations in ii) and iii) will be more accurate for data having smaller associated uncertainties. However,
the estimation method has the following consistency property:

iv) For data satisfying the model (5), as the number m of data points increases, the estimates a and b converge to
A∗ and B ∗ , respectively [16].

By contrast, the WLS estimation method will generally underestimate the magnitude of the slope parameter [5] for
data generated according to the model (5).

If the additional assumption is made that the di and ei are realizations of normally distributed random variables, then
further properties associated with the weighted ODR estimation method can be stated:


v) The random variables in ii) are characterized approximately by a bivariate normal distribution centred on A∗ and
B ∗ with covariance matrix specified by u2 (a), u2 (b) and cov(a, b).

vi) The estimates a and b are maximum likelihood estimates, corresponding to the most likely values of A and B
that could have given rise to the observed measurement data xi and yi .

vii) In the context of Bayesian inference, the state-of-knowledge distribution for A and B, given the observed mea-
surement data xi and yi , is approximated by a bivariate normal distribution centred on a and b with covariance
matrix specified by u2 (a), u2 (b) and cov(a, b).

7.3 Validation of the model

If m > 2, the validity of the model can be partially tested using the weighted residuals ri calculated in step 5 in 7.2.1
at the final iteration, that is, at convergence (continued from 7.2.1):
8 Form the observed chi-squared value χ²obs = Σi ri², the sum being over i = 1, . . . , m, and degrees of freedom ν = m − 2;

9 Check whether χ2obs exceeds the 95 % quantile of χ2ν , and if it does reject the straight-line model.

NOTE The chi-squared test is based on an assumption that the di and ei in model (5) are realizations of independent normal
random variables and on a first order approximation.
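
The iterative scheme of 7.2.1 and the validation of 7.3 can be expressed compactly in a short script. The following sketch is illustrative only (Python with numpy; the name gdr_line and the convergence threshold are chosen here for convenience). Applied to the data of Table 10 in the example of 7.4, it reproduces a = 0,578 8, b = 2,159 7, u(a) = 0,476 4, u(b) = 0,135 5, cov(a, b) = −0,057 7 and χ²obs = 2,743:

import numpy as np

def gdr_line(x, ux, y, uy, tol=1.0e-7, max_iter=50):
    """Weighted ODR / GDR straight line following 7.2.1 and 7.3; illustrative sketch only."""
    # step 1: initial approximations from a WLS fit ignoring u(xi)
    w = 1.0 / uy
    F2 = np.sum(w**2)
    g0 = np.sum(w**2 * x) / F2
    h0 = np.sum(w**2 * y) / F2
    gw = w * (x - g0); hw = w * (y - h0)
    b = np.sum(gw * hw) / np.sum(gw**2)
    a = h0 - b * g0
    for _ in range(max_iter):
        t = 1.0 / (uy**2 + b**2 * ux**2)                  # step 2
        xs = (x * uy**2 + (y - a) * b * ux**2) * t
        z = y - a - b * x
        f = np.sqrt(t); g = f * xs; h = f * z             # step 3
        F2 = np.sum(f**2)                                 # step 4 i)
        g0 = np.sum(f * g) / F2; h0 = np.sum(f * h) / F2  # step 4 ii)
        gt = g - g0 * f; ht = h - h0 * f                  # step 4 iii)
        G2 = np.sum(gt**2)                                # step 4 iv)
        db = np.sum(gt * ht) / G2; da = h0 - db * g0      # step 4 v)
        a += da; b += db                                  # step 5
        r = ht - db * gt                                  # step 5: residuals ri
        if max(abs(da), abs(db)) < tol:                   # step 6
            break
    ua = np.sqrt(1.0 / F2 + g0**2 / G2)                   # step 7
    ub = np.sqrt(1.0 / G2); cov_ab = -g0 / G2
    chi2_obs = np.sum(r**2)                               # 7.3 step 8
    return a, b, ua, ub, cov_ab, chi2_obs

x = np.array([1.2, 1.9, 2.9, 4.0, 4.7, 5.9]); ux = np.full(6, 0.2)      # Table 10
y = np.array([3.4, 4.4, 7.2, 8.5, 10.8, 13.5]); uy = np.array([0.2, 0.2, 0.2, 0.4, 0.4, 0.4])
print(gdr_line(x, ux, y, uy))   # approximately (0.5788, 2.1597, 0.4764, 0.1355, -0.0577, 2.743)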

7.4 Organization of the calculations

The calculations in 7.2.1 and 7.3 can be organized into two sequences of tableaux, suitable for implementation in a
spreadsheet. The first tableau (Table 8), given approximations e a and eb (see 7.2.1 step 1), calculates the fi , gi and hi
(see 7.2.1 step 3). The second tableau (Table 9) uses these fi , gi and hi to calculate corrections δa and δb (see 7.2.1
step 4).

Table 8 — Calculations for determining the straight-line calibration function, given approximations ã and b̃ to the line parameter estimates a and b

                            7.2.1 step 1 or step 5        7.2.1 step 2             7.2.1 step 3
                                 ã        b̃
x1 u(x1 ) y1 u(y1 ) t1 x∗1 z1 f1 g1 h1
x2 u(x2 ) y2 u(y2 ) t2 x∗2 z2 f2 g2 h2
.. .. .. .. .. .. .. .. .. ..
. . . . . . . . . .
xm u(xm ) ym u(ym ) tm x∗m zm fm gm hm

Table 9 — Organization of the calculations to determine corrections δa and δb for the GDR straight-line
calibration function

            7.2.1 steps 4 ii), 4 iii)                 7.2.1 steps 4 v), 5       7.3 step 8
                   g0       h0                               δa
 f1²   f1g1   f1h1   g̃1   h̃1   g̃1²   g̃1h̃1   r1   r1²
 f2²   f2g2   f2h2   g̃2   h̃2   g̃2²   g̃2h̃2   r2   r2²
 ⋮      ⋮      ⋮      ⋮     ⋮     ⋮      ⋮      ⋮    ⋮
 fm²   fmgm   fmhm   g̃m   h̃m   g̃m²   g̃mh̃m   rm   rm²
 F² = Σfi²   Σfigi   Σfihi       G̃² = Σg̃i²   Σg̃ih̃i   δb   Σri²


EXAMPLE Table 10 gives six measured data points and their associated standard uncertainties.

Table 10 — Six measured data points and corresponding uncertainties

xi u(xi ) yi u(yi )
1,2 0,2 3,4 0,2
1,9 0,2 4,4 0,2
2,9 0,2 7,2 0,2
4,0 0,2 8,5 0,4
4,7 0,2 10,8 0,4
5,9 0,2 13,5 0,4

In order to determine initial approximations ã and b̃ (7.2.1 step 1), a weighted least squares straight-line calibration function is
determined. Following the scheme described in 6.2, the tableaux given in Tables 11 and 12 are obtained.

Table 11 — Data representing six measurement points

xi yi u(yi )
1,2 3,4 0,2
1,9 4,4 0,2
2,9 7,2 0,2
4,0 8,5 0,4
4,7 10,8 0,4
5,9 13,5 0,4

Table 12 — Calculation tableau associated with the data in Table 11 to determine initial approximations ã and b̃

wi   wi²   wi²xi   wi²yi   gi   hi   gi²   gihi   ri   ri²


2,573 3 6,186 7 a = 0,658 3
5,000 0 25,000 0 30,000 0 85,000 0 −6,866 7 −13,933 3 47,151 1 95,675 6 0,818 6 0,670 1
5,000 0 25,000 0 47,500 0 110,000 0 −3,366 7 −8,933 3 11,334 4 30,075 6 −1,700 6 2,892 0
5,000 0 25,000 0 72,500 0 180,000 0 1,633 3 5,066 7 2,667 8 8,275 6 1,557 7 2,426 4
2,500 0 6,250 0 25,000 0 53,125 0 3,566 7 5,783 3 12,721 1 20,627 2 −1,879 1 3,531 0
2,500 0 6,250 0 29,375 0 67,500 0 5,316 7 11,533 3 28,266 9 61,318 9 0,111 3 0,012 4
2,500 0 6,250 0 36,875 0 84,375 0 8,316 7 18,283 3 69,166 9 152,056 4 0,416 3 0,173 3
93,750 0 241,250 0 580,000 0 171,308 3 368,029 2 b = 2,148 3 9,705 2

The initial approximations are ã = 0,658 3 and b̃ = 2,148 3. Given these approximations, the first tableau (Table 13) of
the form in Table 8 can be calculated to obtain fi, gi and hi. The second tableau (Table 14) of the form in Table 9 then
calculates increments δa = −0,078 4 and δb = 0,011 1 (7.2.1 step 4). At the end of the iteration, the approximations ã and b̃
are updated (7.2.1 step 5):

   ã := ã + δa = 0,658 3 − 0,078 4 = 0,579 9;
   b̃ := b̃ + δb = 2,148 3 + 0,011 1 = 2,159 4.

With these updated values of ã and b̃, two new tableaux are formed (Tables 15 and 16) to determine further corrections
δa = −0,001 0 and δb = 0,000 2. The process is repeated a third time (Tables 17 and 18). In this case, the magnitudes of
the corrections are less than 0,000 05, which is judged to be negligible for the purpose, and the final approximations to the
parameter estimates are a = 0,578 8 and b = 2,159 7.

The standard uncertainties and covariance (7.2.1 step 7) associated with the fitted parameters can also be evaluated from
information in the final tableau (Table 18):
   u²(a) = 1/21,897 7 + (3,141 4)²/54,427 1, so that u(a) = 0,476 4;
   u²(b) = 1/54,427 1, so that u(b) = 0,135 5;
   u(a, b) = −3,141 4/54,427 1 = −0,057 7.

The observed chi-squared value is χ2obs = 2,743 with ν = 4 degrees of freedom, as calculated in Table 18 using 7.3. Since χ2obs
does not exceed the 95 % quantile of χ2ν , namely 9,488, this is no reason to doubt the consistency of the straight-line model and
the data.

The data points and weighted ODR straight-line calibration function are graphed in Figure 6. The graph also gives, for each
i, the location of (x∗i , yi∗ ), the point on the line closest in probabilistic terms to the data point (xi , yi ). The weighted residuals
are illustrated in Figure 7.


Table 13 — First iteration to determine fi, gi and hi, given ã and b̃

xi u(xi ) yi u(yi ) ti x∗i zi fi gi hi


0,658 3 2,148 3
1,200 0 0,200 0 3,400 0 0,200 0 4,452 2 1,262 6 0,163 7 2,110 0 2,664 2 0,345 5
1,900 0 0,200 0 4,400 0 0,200 0 4,452 2 1,769 9 −0,340 1 2,110 0 3,734 5 −0,717 6
2,900 0 0,200 0 7,200 0 0,200 0 4,452 2 3,019 2 0,311 6 2,110 0 6,370 6 0,657 5
4,000 0 0,200 0 8,500 0 0,400 0 2,901 9 3,812 6 −0,751 5 1,703 5 6,494 7 −1,280 2
4,700 0 0,200 0 10,800 0 0,400 0 2,901 9 4,711 1 0,044 7 1,703 5 8,025 3 0,076 1
5,900 0 0,200 0 13,500 0 0,400 0 2,901 9 5,941 6 0,166 7 1,703 5 10,121 4 0,284 0

Table 14 — First iteration to determine corrections δa and δb, given fi , gi and hi

fi²   figi   fihi   g̃i   h̃i   g̃i²   g̃ih̃i   ri   ri²
3,123 9 −0,043 7 δa = −0,078 4
4,452 2 5,621 6 0,729 0 −3,927 3 0,437 8 15,423 6 −1,719 3 0,481 4 0,231 8
4,452 2 7,879 9 −1,514 1 −2,857 0 −0,625 3 8,162 3 1,786 4 −0,593 5 0,352 3
4,452 2 13,442 2 1,387 4 −0,220 9 0,749 8 0,048 8 −0,165 6 0,752 3 0,565 9
2,901 9 11,063 6 −2,180 7 1,173 2 −1,205 7 1,376 4 −1,414 5 −1,218 7 1,485 2
2,901 9 13,671 0 0,129 7 2,703 8 0,150 6 7,310 8 0,407 3 0,120 6 0,014 5
2,901 9 17,241 6 0,483 8 4,799 9 0,358 5 23,038 7 1,720 8 0,305 2 0,093 1
22,062 2 68,919 9 −0,964 8 55,360 6 0,615 2 δb = 0,011 1 2,742 9

Table 15 — As Table 13 but for the second iteration

xi u(xi ) yi u(yi ) ti x∗i zi fi gi hi


0,579 9 2,159 4
1,200 0 0,200 0 3,400 0 0,200 0 4,414 6 1,287 3 0,228 9 2,101 1 2,704 7 0,480 8
1,900 0 0,200 0 4,400 0 0,200 0 4,414 6 1,792 2 −0,282 7 2,101 1 3,765 5 −0,594 1
2,900 0 0,200 0 7,200 0 0,200 0 4,414 6 3,036 5 0,357 9 2,101 1 6,379 9 0,751 9
4,000 0 0,200 0 8,500 0 0,400 0 2,885 8 3,821 2 −0,717 5 1,698 8 6,491 3 −1,218 9
4,700 0 0,200 0 10,800 0 0,400 0 2,885 8 4,717 7 0,070 9 1,698 8 8,014 2 0,120 5
5,900 0 0,200 0 13,500 0 0,400 0 2,885 8 5,944 8 0,179 6 1,698 8 10,098 8 0,305 1

Table 16 — As Table 14 but for the second iteration

fi²   figi   fihi   g̃i   h̃i   g̃i²   g̃ih̃i   ri   ri²
3,141 2 −0,000 3 δa = −0,001 0
4,414 6 5,682 7 1,010 3 −3,895 3 0,481 4 15,173 4 −1,875 1 0,482 3 0,232 6
4,414 6 7,911 7 −1,248 2 −2,834 4 −0,593 5 8,033 9 1,682 2 −0,592 8 0,351 4
4,414 6 13,404 7 1,579 8 −0,220 1 0,752 4 0,048 4 −0,165 6 0,752 5 0,566 2
2,885 8 11,027 1 −2,070 6 1,155 1 −1,218 4 1,334 2 −1,407 4 −1,218 7 1,485 2
2,885 8 13,614 3 0,204 6 2,678 1 0,120 9 7,172 0 0,323 8 0,120 3 0,014 5
2,885 8 17,155 5 0,518 3 4,762 6 0,305 6 22,682 4 1,455 3 0,304 4 0,092 7
21,901 2 68,796 1 −0,005 7 54,444 3 0,013 2 δb = 0,000 2 2,742 7

Table 17 — As Table 13 but for the third iteration

xi u(xi ) yi u(yi ) ti x∗i zi fi gi hi


0,578 8 2,159 7
1,200 0 0,200 0 3,400 0 0,200 0 4,413 8 1,287 5 0,229 6 2,100 9 2,705 0 0,482 3
1,900 0 0,200 0 4,400 0 0,200 0 4,413 8 1,792 4 −0,282 2 2,100 9 3,765 7 −0,592 8
2,900 0 0,200 0 7,200 0 0,200 0 4,413 8 3,036 6 0,358 2 2,100 9 6,379 5 0,752 5
4,000 0 0,200 0 8,500 0 0,400 0 2,885 5 3,821 2 −0,717 4 1,698 7 6,490 9 −1,218 7
4,700 0 0,200 0 10,800 0 0,400 0 2,885 5 4,717 6 0,070 8 1,698 7 8,013 7 0,120 3
5,900 0 0,200 0 13,500 0 0,400 0 2,885 5 5,944 7 0,179 2 1,698 7 10,098 0 0,304 4


Table 18 — As Table 14 but for the third iteration

fi²   figi   fihi   g̃i   h̃i   g̃i²   g̃ih̃i   ri   ri²
3,141 4 0,000 0 δa = 0,000 0
4,413 8 5,682 9 1,013 3 −3,894 7 0,482 3 15,168 5 −1,878 5 0,482 3 0,232 7
4,413 8 7,911 3 −1,245 4 −2,834 0 −0,592 8 8,031 5 1,680 0 −0,592 8 0,351 4
4,413 8 13,402 7 1,580 9 −0,220 2 0,752 5 0,048 5 −0,165 7 0,752 5 0,566 2
2,885 5 11,025 8 −2,070 2 1,154 8 −1,218 7 1,333 5 −1,407 3 −1,218 7 1,485 2
2,885 5 13,612 6 0,204 3 2,677 6 0,120 3 7,169 5 0,322 0 0,120 3 0,014 5
2,885 5 17,153 1 0,517 1 4,761 9 0,304 4 22,675 6 1,449 6 0,304 4 0,092 7
21,897 7 68,788 4 0,000 0 54,427 1 0,000 1 δb = 0,000 0 2,742 7

Figure 6 — Data in Table 10 and fitted straight-line calibration function obtained in Tables 11 to 18

Figure 7 — Weighted distances obtained in Table 18


8 Model for uncertainties associated with the xi and the yi and covariances associated
with the pairs (xi , yi )

8.1 General

8.1.1 This clause considers the case 5.3.2 c), namely when the following information is provided for i = 1, . . . , m:

a) measurement data (xi , yi ),

b) standard uncertainty u(xi ) associated with xi ,

c) standard uncertainty u(yi ) associated with yi , and

d) covariance cov(xi , yi ) associated with xi and yi .

Annex D provides guidance on obtaining these uncertainties and covariances. All other covariances associated with
the data are regarded as negligible.

8.1.2 The case 5.3.2 c) corresponds to that of the statistical model

xi = Xi∗ + di , yi = Yi∗ + ei , Yi∗ = A∗ + B ∗ Xi∗ , i = 1, . . . , m, (10)

where each pair (di , ei ) is a realization of a bivariate random variable with expectation (0, 0)> and covariance matrix
having diagonal elements u2 (xi ) and u2 (yi ) and off-diagonal elements cov(xi , yi ) = cov(yi , xi ), namely

u2 (xi )
 
cov(xi , yi )
,
cov(yi , xi ) u2 (yi )

that is independent of the other such random variables.

NOTE The assumption that the (di , ei ) are realizations of bivariate normal random variables is only needed for the validation
of the model (10).

8.2 Calibration parameter estimates and associated standard uncertainties and covariance

8.2.1 Algorithmically, this case can be handled by an extension (see Annex B) of the treatment in Clause 7. The
calculations are identical to those in that clause, except step 2 in 7.2.1 is replaced by

2 Set ti = 1/[u²(yi) − 2b̃ cov(xi, yi) + b̃² u²(xi)],

   x∗i = {[u²(yi) − b̃ cov(xi, yi)] xi − [cov(xi, yi) − b̃ u²(xi)](yi − ã)} ti and

   zi = yi − ã − b̃ xi, i = 1, . . . , m;

8.2.2 All the properties stated in 7.2.2 apply for data generated according to the model (10) and the remainder of
Clause 7 follows analogously.
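
For illustration only, the modification of 8.2.1 amounts to replacing the computation of ti and x∗i in step 2 of a sketch such as the one given after 7.3. Assuming an array uxy holding the covariances cov(xi, yi) (a name introduced here, not defined in this Technical Specification), the replacement might read:

import numpy as np

def gdr_step2_with_cov(x, ux, y, uy, uxy, a, b):
    """Step 2 of 7.2.1 as modified by 8.2.1; illustrative sketch with uxy = cov(xi, yi)."""
    t = 1.0 / (uy**2 - 2.0 * b * uxy + b**2 * ux**2)
    xs = ((uy**2 - b * uxy) * x - (uxy - b * ux**2) * (y - a)) * t
    z = y - a - b * x
    return t, xs, z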


9 Model for uncertainties and covariances associated with the yi

9.1 General

9.1.1 This clause considers the case 5.3.2 d), namely when the following information is provided for i = 1, . . . , m:

a) measurement data (xi , yi ),

b) standard uncertainty u(yi ) associated with yi , and

c) covariances cov(yi, yj) associated with the pairs (yi, yj), j = 1, . . . , m, j ≠ i.

9.1.2 The squared standard uncertainties and covariances comprise the covariance matrix
   Uy = [ u²(y1)          cov(y1, y2)     . . .   cov(y1, ym−1)    cov(y1, ym)   ]
        [ cov(y2, y1)     u²(y2)          . . .   cov(y2, ym−1)    cov(y2, ym)   ]
        [      ⋮               ⋮            ⋱          ⋮                ⋮        ]
        [ cov(ym−1, y1)   cov(ym−1, y2)   . . .   u²(ym−1)         cov(ym−1, ym) ]
        [ cov(ym, y1)     cov(ym, y2)     . . .   cov(ym, ym−1)    u²(ym)        ]

of dimension m × m associated with y = (y1, . . . , ym)ᵀ. Annex D provides guidance on obtaining these uncertainties
and covariances. All other uncertainties and covariances associated with the data are regarded as negligible.

9.1.3 The case 5.3.2 d) corresponds to that of the statistical model

yi = A∗ + B ∗ xi + ei , i = 1, . . . , m, (11)

where e = (e1 , . . . , em )> is a realization of a multivariate random variable with vector expectation equal to the zero
vector of dimension m × 1 and covariance matrix of dimension m × m equal to Uy [21].

9.1.4 Estimates a and b are those that minimize the generalized sum of squares [8]
 >  
y1 − (A + Bx1 ) y1 − (A + Bx1 )
.. −1  .. > −1
 Uy   = e Uy e, (12)
  
 . .
ym − (A + Bxm ) ym − (A + Bxm )
where e = y − A1 − Bx, with respect to A and B. The problem of determining a and b in this context is known as
one of Gauss-Markov regression (GMR) [2].

NOTE For the case where Uy is diagonal, the generalized sum of squares (12) simplifies to expression (4) in 6.1.3 leading to a
WLS problem.

9.2 Calibration parameter estimates and associated standard uncertainties and covariance

9.2.1 If Uy is positive definite, so that the lower-triangular Cholesky factor Ly of dimension m×m of Uy = Ly Ly >
exists [10] (see also A.4), estimates a and b of A and B can be calculated directly using the same general scheme
as in 6.2.1 after some preliminary calculations using matrix-vector operations. Otherwise more involved numerical
methods would be required. These operations transform the generalized sum of squares (12) into an ordinary sum of
squares (2) as in 5.8.1, that is, the problem becomes an unweighted least squares problem with no covariance.

9.2.2 Parameter estimates a and b are calculated in steps 1 to 7 below; the standard uncertainties u(a) and u(b)
and covariance cov(a, b) are evaluated in step 8:

1 Calculate the Cholesky factor Ly of dimension m × m of Uy = Ly Lyᵀ; see A.4.1;

2 Let 1 be the vector of ones of dimension m × 1. Solve the three lower-triangular systems of equations Ly f = 1,
  Ly g = x and Ly h = y, where f = (f1, . . . , fm)ᵀ, etc., for f, g and h; see A.4.3;


3 Set F² = Σi fi², the sums here and below being over i = 1, . . . , m;

4 Set g0 = (1/F²) Σi fi gi and h0 = (1/F²) Σi fi hi;

5 Set g̃i = gi − g0 fi and h̃i = hi − h0 fi, i = 1, . . . , m;

6 Set G̃² = Σi g̃i²;

7 Set b = (1/G̃²) Σi g̃i h̃i and a = h0 − b g0;

8 Set u²(a) = 1/F² + g0²/G̃², u²(b) = 1/G̃² and cov(a, b) = −g0/G̃².
9.2.3 The estimates a and b determined in 9.1.4 have the following properties [15] for data yi according to the
model (11):

i) The estimates a and b are given by linear combinations of the data yi .

ii) The estimates a and b can be regarded as realizations of random variables whose expectations are A∗ and B ∗ ,
respectively.

iii) The covariance matrix for the random variables in ii) is specified by u2 (a), u2 (b) and cov(a, b) calculated in 9.2.2.

Property i) states that a and b are derived using a linear estimation method. Property ii) states that the linear
estimation method is unbiased. Properties ii) and iii) jointly show that the estimation method is consistent in the
sense that as the number m of data points is increased, the estimates a and b converge to A∗ and B ∗ , respectively.

The estimation method of 9.1.4 has the following optimal property for data yi according to the model (11):

iv) The estimates ă and b̆ provided by any unbiased, linear estimation method can be regarded as realizations of
random variables whose variances are at least as large as those associated with the GMR estimation method.

Property iv) can be interpreted as follows. For constants c and d, the standard uncertainty u(că + db̆) associated with
a linear combination of the estimates ă and b̆ provided by any unbiased, linear estimation method is at least as great as
u(ca + db). Properties i) to iv) justify the use of least squares methods for data compatible with the model (11). Note
that in the use of this model statements are only made about the expectations and variances associated with the ei ;
the associated distributions are not further specified. If the additional assumption is made that the ei are realizations
of random variables characterized by a multivariate normal distribution, then further properties associated with the
GMR estimation method can be made:

v) The random variables in ii) are characterized by a bivariate normal distribution centred on A∗ and B ∗ with
covariance matrix specified by u2 (a), u2 (b) and cov(a, b).

vi) The estimates a and b are maximum likelihood estimates, corresponding to the most likely values of A and B
that could have given rise to the observed measurement data yi .

vii) In the context of Bayesian inference, the state-of-knowledge distribution for A and B, given the observed mea-
surement data yi , is a bivariate normal distribution centred on a and b with covariance matrix specified by u2 (a),
u2 (b) and cov(a, b).

NOTE 1 The properties listed above are the same as those for the WLS estimation method of 6.1.3 for data generated
according to the model (3).


NOTE 2 u2 (a), u2 (b) and cov(a, b) in step 8 are obtained by applying the law of propagation of uncertainty in ISO/IEC Guide
98-3:2008 to a and b as provided by steps 1 to 7.

9.3 Validation of the model

If m > 2, the validity of the model can be partially tested using the weighted residuals ri (continued from 9.2.2):

9 Form ri = h̃i − b g̃i, i = 1, . . . , m;

10 Form the observed chi-squared value χ²obs = Σi ri² and degrees of freedom ν = m − 2;

11 Check whether χ2obs exceeds the 95 % quantile of χ2ν , and if it does reject the straight-line model.

NOTE The chi-squared test is based on an assumption that the ei in model (11) are realizations of random variables characterized
by a multivariate normal distribution.

9.4 Organization of the calculations

The calculations in 9.2.2 and 9.3 can be organized into a number of tableaux as in Tables 19 to 21. Table 20 contains
the fi , gi and hi calculated in steps 1 and 2 in 9.2.2 in terms of the Cholesky factorization Ly of the covariance matrix
Uy . Table 21 uses these fi , gi and hi to calculate estimates a and b of the parameters of the straight-line calibration
function.
Table 19 — Data for the Gauss-Markov straight-line calibration function

x1 y1
x2 y2
.. ..
. .
xm ym

Table 20 — Initial calculations to determine the Gauss-Markov straight-line calibration function

9.2.2 steps 1, 2
f1 g1 h1
f2 g2 h2
.. .. ..
. . .
fm gm hm

Table 21 — Organization of the calculations to determine the Gauss-Markov straight-line calibration function

            9.2.2 steps 4, 5                      9.2.2 step 7      9.3 step 9    9.3 step 10
                  g0       h0                           a
 f1²   f1g1   f1h1   g̃1   h̃1   g̃1²   g̃1h̃1   r1   r1²
 f2²   f2g2   f2h2   g̃2   h̃2   g̃2²   g̃2h̃2   r2   r2²
 ⋮      ⋮      ⋮      ⋮     ⋮     ⋮      ⋮      ⋮    ⋮
 fm²   fmgm   fmhm   g̃m   h̃m   g̃m²   g̃mh̃m   rm   rm²
 F² = Σfi²   Σfigi   Σfihi       G̃² = Σg̃i²   Σg̃ih̃i   b   χ²obs = Σri²
e hi b


EXAMPLE Table 22 gives ten measured data points (xi , yi ) and the standard uncertainties associated with the yi .

The data are obtained using the model described in D.2.2 with uR = 1,0, uS,1 = 1,0 and uS,2 = 2,0.

Table 22 — Data representing ten measurement points, the yi having an associated covariance matrix

xi yi
1,0 1,3
2,0 4,1
3,0 6,9
4,0 7,5
5,0 10,2
6,0 12,0
7,0 14,5
8,0 17,1
9,0 19,5
10,0 21,0

The covariance matrix Uy of dimension 10 × 10 associated with the yi is

   Uy = [ 2,0  1,0  1,0  1,0  1,0  0,0  0,0  0,0  0,0  0,0 ]
        [ 1,0  2,0  1,0  1,0  1,0  0,0  0,0  0,0  0,0  0,0 ]
        [ 1,0  1,0  2,0  1,0  1,0  0,0  0,0  0,0  0,0  0,0 ]
        [ 1,0  1,0  1,0  2,0  1,0  0,0  0,0  0,0  0,0  0,0 ]
        [ 1,0  1,0  1,0  1,0  2,0  0,0  0,0  0,0  0,0  0,0 ]
        [ 0,0  0,0  0,0  0,0  0,0  5,0  4,0  4,0  4,0  4,0 ]
        [ 0,0  0,0  0,0  0,0  0,0  4,0  5,0  4,0  4,0  4,0 ]
        [ 0,0  0,0  0,0  0,0  0,0  4,0  4,0  5,0  4,0  4,0 ]
        [ 0,0  0,0  0,0  0,0  0,0  4,0  4,0  4,0  5,0  4,0 ]
        [ 0,0  0,0  0,0  0,0  0,0  4,0  4,0  4,0  4,0  5,0 ].

The Cholesky factor Ly of dimension 10 × 10 of Uy = Ly Lyᵀ, calculated using either algorithm described in A.4.1, is

   Ly = [ 1,414 2  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,707 1  1,224 7  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,707 1  0,408 2  1,154 7  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,707 1  0,408 2  0,288 7  1,118 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,707 1  0,408 2  0,288 7  0,223 6  1,095 4  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  2,236 1  0,000 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  1,788 9  1,341 6  0,000 0  0,000 0  0,000 0 ]
        [ 0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  1,788 9  0,596 3  1,201 9  0,000 0  0,000 0 ]
        [ 0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  1,788 9  0,596 3  0,369 8  1,143 5  0,000 0 ]
        [ 0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  1,788 9  0,596 3  0,369 8  0,269 1  1,111 4 ].

The vectors f , g and h in Table 23 are calculated according to step 2 in 9.2.2.

Table 23 — Initial calculation tableau associated with the data in Table 22

fi gi hi
0,707 1 0,707 1 0,919 2
0,408 2 1,224 7 2,816 9
0,288 7 1,732 1 4,416 7
0,223 6 2,236 1 3,957 8
0,182 6 2,738 6 5,696 3
0,447 2 2,683 3 5,366 6
0,149 1 1,639 8 3,652 2
0,092 5 1,849 0 4,428 4
0,067 3 2,219 8 5,320 8
0,052 9 2,646 3 5,536 0

The best fit straight-line parameters in Table 24 are calculated according to Table 21. From Table 24,
g0 = 4,404 8/1,071 4 = 4,111 1, h0 = 9,004 8/1,071 4 = 8,404 4, b = 54,218 5/24,629 6 = 2,201 4 and
a = 8,404 4 − (2,201 4)(4,111 1) = −0,645 6.


Table 24 — Calculation tableau associated with the data in Table 22

fi²   figi   fihi   g̃i   h̃i   g̃i²   g̃ih̃i   ri   ri²
4,111 1 8,404 4 a = −0,645 6
0,500 0 0,500 0 0,650 0 −2,199 9 −5,023 6 4,839 5 11,051 4 −0,180 9 0,032 7
0,166 7 0,500 0 1,150 0 −0,453 6 −0,614 2 0,205 8 0,278 6 0,384 4 0,147 7
0,083 3 0,500 0 1,275 0 0,545 3 1,990 6 0,297 3 1,085 4 0,790 2 0,624 5
0,050 0 0,500 0 0,885 0 1,316 8 2,078 5 1,734 0 2,737 0 −0,820 2 0,672 7
0,033 3 0,500 0 1,040 0 1,988 0 4,161 9 3,952 3 8,273 9 −0,214 5 0,046 0
0,200 0 1,200 0 2,400 0 0,844 7 1,608 0 0,713 6 1,358 3 −0,251 6 0,063 3
0,022 2 0,244 4 0,544 4 1,026 9 2,399 4 1,054 6 2,464 0 0,138 7 0,019 2
0,008 5 0,170 9 0,409 4 1,468 9 3,651 4 2,157 8 5,363 6 0,417 7 0,174 5
0,004 5 0,149 3 0,357 9 1,943 3 4,755 5 3,776 3 9,241 2 0,477 7 0,228 2
0,002 8 0,140 1 0,293 0 2,428 7 5,091 2 5,898 6 12,365 0 −0,255 2 0,065 1
1,071 4 4,404 8 9,004 8 24,629 6 54,218 5 b = 2,201 4 2,074 0

The standard uncertainties and covariance associated with a and b are evaluated from Table 24 using step 8 in 9.2.2:

   u²(a) = 1/1,071 4 + (4,111 1)²/24,629 6, so that u(a) = 1,272 6;
   u²(b) = 1/24,629 6, so that u(b) = 0,201 5;
   u(a, b) = −4,111 1/24,629 6 = −0,166 9.

The observed chi-squared value is χ2obs = 2,074 with 8 degrees of freedom, as calculated in Table 24 using 9.3. Since χ2obs does
not exceed the 95 % quantile of χ2ν , namely 15,507, this is no reason to doubt the consistency of the straight-line model and the
data.

The data points and fitted straight-line calibration function are shown in Figure 8. The weighted residuals are illustrated in
Figure 9.


Figure 8 — Data in Table 22 and fitted straight-line calibration function obtained in Table 24

Figure 9 — Weighted residuals ri calculated in Table 24


10 Model for uncertainties and covariances associated with the xi and the yi

10.1 General

10.1.1 This clause considers the case 5.3.2 e), namely the most general case in which all measurement data have
associated uncertainties and covariances. Annex D provides guidance on obtaining these uncertainties and covariances.

10.1.2 The standard uncertainties and covariances comprise the covariance matrix

   U = [ u²(x1)          . . .   cov(x1, xm)     cov(x1, y1)     . . .   cov(x1, ym) ]
       [    ⋮              ⋱         ⋮               ⋮              ⋱         ⋮      ]
       [ cov(xm, x1)     . . .   u²(xm)          cov(xm, y1)     . . .   cov(xm, ym) ]
       [ cov(y1, x1)     . . .   cov(y1, xm)     u²(y1)          . . .   cov(y1, ym) ]
       [    ⋮              ⋱         ⋮               ⋮              ⋱         ⋮      ]
       [ cov(ym, x1)     . . .   cov(ym, xm)     cov(ym, y1)     . . .   u²(ym)      ]

of dimension 2m × 2m associated with the vector (x1, . . . , xm, y1, . . . , ym)ᵀ of measurement data of dimension 2m × 1.

10.1.3 The case 5.3.2 e) corresponds to that of the statistical model

xi = Xi∗ + di , yi = Yi∗ + ei , Yi∗ = A∗ + B ∗ Xi∗ , i = 1, . . . , m, (13)

where the vector (d1 , . . . , dm , e1 , . . . , em )> of dimension 2m × 1 is a realization of a multivariate random variable with
vector expectation equal to the zero vector of dimension 2m × 1 and covariance matrix of dimension 2m × 2m equal
to U [21].

10.1.4 Estimates a and b are those that minimize the generalized sum of squares

   [ x1 − X1         ]ᵀ        [ x1 − X1         ]
   [    ⋮            ]         [    ⋮            ]
   [ xm − Xm         ]   U⁻¹   [ xm − Xm         ]   =   [ d ]ᵀ U⁻¹ [ d ],                          (14)
   [ y1 − (A + BX1)  ]         [ y1 − (A + BX1)  ]       [ e ]      [ e ]
   [    ⋮            ]         [    ⋮            ]
   [ ym − (A + BXm)  ]         [ ym − (A + BXm)  ]

where d = x − X and e = y − A1 − BX, with respect to A, B and Xi, i = 1, . . . , m. The problem of determining a
and b in this context is known as one of generalized Gauss-Markov regression (GGMR) [2].

10.2 Calibration parameter estimates and associated standard uncertainties and covariance

10.2.1 If U is positive definite, so that the lower-triangular Cholesky factor L of dimension 2m × 2m of U = LL>
exists [10] (also see A.4), estimates a and b of A and B can be calculated in an iterative scheme using matrix-
vector operations. Otherwise, more involved numerical methods would be required. These operations transform the
generalized sum of squares (14) into an ordinary sum of squares (2) as in 5.8.1, that is, the problem becomes an
unweighted least squares problem with no covariance. The iterative scheme also involves approximations x∗i , which
define the points (x∗i , A + Bx∗i ) on the line closest to the measured data points (xi , yi ), where closeness is measured
in terms of weighted distance, taking into account the uncertainty information specified by U .

10.2.2 Estimates a and b are calculated in steps 1 to 10 below using an iterative scheme based on that in 6.2.1;
the standard uncertainties u(a) and u(b) and covariance cov(a, b) are evaluated in step 11:

1 Obtain initial approximations t̃ = (x̃1, . . . , x̃m, ã, b̃)ᵀ to the parameters;


2 Calculate the vector of dimension 2m × 1,

   f = [ x1 − x̃1          ]
       [      ⋮            ]
       [ xm − x̃m          ]    =    [ x − x̃          ]
       [ y1 − (ã + b̃ x̃1) ]         [ y − ã1 − b̃ x̃  ],
       [      ⋮            ]
       [ ym − (ã + b̃ x̃m) ]

  and the (Jacobian) matrix of dimension 2m × (m + 2),

   J = [ −1    0   · · ·   0     0     0      0    ]
       [  0   −1   · · ·   0     0     0      0    ]
       [  ⋮    ⋮            ⋮     ⋮     ⋮      ⋮    ]
       [  0    0   · · ·  −1     0     0      0    ]
       [  0    0   · · ·   0    −1     0      0    ]
       [ −b̃    0   · · ·   0     0    −1    −x̃1   ]
       [  0   −b̃   · · ·   0     0    −1    −x̃2   ]
       [  ⋮    ⋮            ⋮     ⋮     ⋮      ⋮    ]
       [  0    0   · · ·  −b̃    0    −1    −x̃m−1 ]
       [  0    0   · · ·   0    −b̃   −1    −x̃m   ]

  or, in block form,

   J = [ −I     0    0  ]
       [ −b̃ I  −1   −x̃ ],

  where x̃ = (x̃1, . . . , x̃m)ᵀ, and ã and b̃ are extracted from the current estimate t̃ of the parameter vector;

3 Calculate the Cholesky factor L of dimension 2m × 2m of U = LLᵀ [10]; see A.4.1;

4 Solve the lower-triangular systems

   L f̃ = f   and   L J̃ = J,

  to determine the transformed vector f̃ of dimension 2m × 1 and transformed matrix J̃ of dimension 2m × (m + 2); see A.4.3;

5 Form the vector g = J̃ᵀ f̃ of dimension (m + 2) × 1 and matrix H = J̃ᵀ J̃ of dimension (m + 2) × (m + 2);

6 Determine the Cholesky factor M, a lower-triangular matrix of dimension (m + 2) × (m + 2) in H = M Mᵀ; see A.4.1;

7 Solve the lower-triangular system M q = −g to determine the vector q of dimension (m + 2) × 1; see A.4.3;

8 Solve the upper-triangular system Mᵀ δt = q to determine the correction vector δt of dimension (m + 2) × 1; see A.4.4;

9 Update the current approximations to the parameters: t̃ := t̃ + δt;

10 Repeat steps 2 to 9 until convergence has been achieved. Set a = ã and b = b̃ (elements m + 1 and m + 2 of t̃);

11 Partition M obtained in step 6 as

   M = [ M11   0   ]
       [ M21   M22 ],

  where

   M22 = [ m11   0   ]
         [ m21   m22 ]

  is the lower right lower-triangular submatrix of dimension 2 × 2 of M. Then

   u²(a) = (m22² + m21²)/(m11² m22²),    u²(b) = m11²/(m11² m22²) = 1/m22²    and    cov(a, b) = −m21/(m11 m22²).


NOTE 1 In step 1, initial approximations are provided by t̃ = (x1, . . . , xm, a0, b0)ᵀ, where a0 and b0 are the straight-line
parameter values determined by a weighted least squares fit to the data; see 6.2.1.

NOTE 2 In step 8, the correction vector δt will generally decrease in magnitude by ultimately an approximately constant
factor from iteration to iteration. The size of the reduction depends largely on the uncertainties associated with the data: the
smaller the uncertainties are, the greater the reduction will be. The iterative scheme can be terminated when the magnitude of
the correction is judged to be negligible.

NOTE 3 In step 8, the correction δt is given by the least squares solution of the matrix equation

   J̃ δt = −f̃,

the solution of which is defined by the normal equations

   H δt = J̃ᵀ J̃ δt = −J̃ᵀ f̃ = −g.

NOTE 4 Steps 5 to 8 solve the normal equations using Cholesky factorization. A numerically more stable approach is to
use a QR factorization [10] of J
e (see A.5.1). The scheme described in C.2 employs a QR factorization and avoids calculations
involving the inverse of L such as those in step 4.

NOTE 5 In matrix terms, the covariance matrix associated with the estimates a and b is

   Ua = M22⁻ᵀ M22⁻¹.

NOTE 6 A more general and numerically more stable approach to solving the generalized Gauss-Markov regression problem
is outlined in C.2. The above approach assumes that the matrix U is positive definite and does not represent any strong
correlation.

NOTE 7 u2 (a), u2 (b) and cov(a, b) in step 11 are obtained by applying the law of propagation of uncertainty in ISO/IEC
Guide 98-3:2008 to a and b as provided by steps 1 to 10.

10.2.3 The fact that the estimates a and b determined by minimizing the sum of squares (14) depend non-linearly
on the data xi and yi means that the properties for GGMR cannot be straightforwardly stated. The estimates a and
b determined in 10.1.4 have the following properties for data xi and yi according to the model (13):

i) The estimates a and b are given by non-linear functions of the data xi and yi .

ii) The estimates a and b can be regarded as realizations of random variables whose expectations are approximately
A∗ and B ∗ , respectively.

iii) The elements of the covariance matrix for the random variables in ii) are approximated by u2 (a), u2 (b) and
cov(a, b) calculated in 10.2.2.

The approximations above will be more accurate for data having smaller associated uncertainties. However, the
estimation method has the following consistency property:

iv) For data satisfying the model (13), as the number m of data points increases, the estimates a and b converge to
A∗ and B ∗ , respectively [16].

If the additional assumption is made that the di and ei are realizations of random variables characterized by a
multivariate normal distribution, then further properties associated with the GGMR estimation method can be stated:

v) The random variables in ii) are characterized approximately by a bivariate normal distribution centred on A∗ and
B ∗ with covariance matrix specified by u2 (a), u2 (b) and cov(a, b).

vi) The estimates a and b are maximum likelihood estimates giving the most likely values of A and B that could
have given rise to the observed measurement data xi and yi .

vii) In the context of Bayesian inference, the state-of-knowledge distribution for A and B, given the observed mea-
surement data xi and yi , is approximated by a bivariate normal distribution centred on a and b with covariance
matrix specified by u2 (a), u2 (b) and cov(a, b).


10.3 Validation of the model

If m > 2, the validity of the model can be partially tested using the weighted residuals f̃i (continued from 10.2.2):

12 Form the observed chi-squared value χ²obs = Σi f̃i², the sum being over i = 1, . . . , 2m, and degrees of freedom ν = m − 2;

13 Check whether χ2obs exceeds the 95 % quantile of χ2ν , and if it does reject the straight-line model.

NOTE The chi-squared test is based on an assumption that the di and ei in model (13) are realizations of random variables
characterized by a multivariate normal distribution and on a first order approximation.


EXAMPLE Table 25 gives seven measured data points (xi , yi ) obtained using the measurement models described in D.2
and D.4.

The covariance matrix associated with the yi is derived using the measurement model (D.1) with uS = 2,0 and uR = 1,0.

The data xi and associated covariance matrix are derived using the measurement model (D.2) with z1 = 50, z2 = 100, z3 = 200,
u(z1 ) = 0,5, u(z2 ) = u(z3 ) = 1,0, and uD,i = 0,5.

Table 25 — Data representing seven measurement points, the xi and yi having associated covariance matrices

xi yi
50,4 52,3
99,0 97,8
149,9 149,7
200,4 200,1
248,5 250,4
299,7 300,9
349,1 349,2

The covariance matrix Ux of dimension 7 × 7 associated with the xi is

   Ux = [ 0,50  0,00  0,25  0,00  0,25  0,00  0,25 ]
        [ 0,00  1,25  1,00  0,00  0,00  1,00  1,00 ]
        [ 0,25  1,00  1,50  0,00  0,25  1,00  1,25 ]
        [ 0,00  0,00  0,00  1,25  1,00  1,00  1,00 ]
        [ 0,25  0,00  0,25  1,00  1,50  1,00  1,25 ]
        [ 0,00  1,00  1,00  1,00  1,00  2,25  2,00 ]
        [ 0,25  1,00  1,25  1,00  1,25  2,00  2,50 ].

The Cholesky factor Lx of dimension 7 × 7 of Ux = Lx Lxᵀ, as calculated using either algorithm described in A.4.1, is

   Lx = [ 0,707 1  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,000 0  1,118 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,353 6  0,894 4  0,758 3  0,000 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,000 0  0,000 0  0,000 0  1,118 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,353 6  0,000 0  0,164 8  0,894 4  0,740 2  0,000 0  0,000 0 ]
        [ 0,000 0  0,894 4  0,263 8  0,894 4  0,211 5  0,731 9  0,000 0 ]
        [ 0,353 6  0,894 4  0,428 6  0,894 4  0,343 6  0,292 8  0,622 5 ].

The covariance matrix Uy of dimension 7 × 7 associated with the yi is

   Uy = [ 5,00  1,00  1,00  1,00  1,00  1,00  1,00 ]
        [ 1,00  5,00  1,00  1,00  1,00  1,00  1,00 ]
        [ 1,00  1,00  5,00  1,00  1,00  1,00  1,00 ]
        [ 1,00  1,00  1,00  5,00  1,00  1,00  1,00 ]
        [ 1,00  1,00  1,00  1,00  5,00  1,00  1,00 ]
        [ 1,00  1,00  1,00  1,00  1,00  5,00  1,00 ]
        [ 1,00  1,00  1,00  1,00  1,00  1,00  5,00 ].

The Cholesky factor Ly of dimension 7 × 7 of Uy = Ly Lyᵀ, as calculated using either algorithm described in A.4.1, is

   Ly = [ 2,236 1  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,447 2  2,190 9  0,000 0  0,000 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,447 2  0,365 1  2,160 2  0,000 0  0,000 0  0,000 0  0,000 0 ]
        [ 0,447 2  0,365 1  0,308 6  2,138 1  0,000 0  0,000 0  0,000 0 ]
        [ 0,447 2  0,365 1  0,308 6  0,267 3  2,121 3  0,000 0  0,000 0 ]
        [ 0,447 2  0,365 1  0,308 6  0,267 3  0,235 7  2,108 2  0,000 0 ]
        [ 0,447 2  0,365 1  0,308 6  0,267 3  0,235 7  0,210 8  2,097 6 ].

The covariance matrix U of dimension 14 × 14 is given by

   U = [ Ux   0  ]
       [ 0    Uy ].

NOTE For this example, there is correlation associated with each pair xi and xj and each pair yi and yj but no correlation
associated with the pair xi and yj , that is, cov(xi , yj ) = 0 for all i and j.


The Cholesky factor L of dimension 14 × 14 of U = LLᵀ is given by

   L = [ Lx   0  ]
       [ 0    Ly ].

The weighted least squares fit to the data (6.2.1 steps 1 to 5) gives approximations ã = 0,270 7 and b̃ = 1,001 1. The iterative
scheme is started with t̃ = (x1, . . . , x7, ã, b̃)ᵀ.

Table 26 gives the initial vector t̃0, the corrections δtk for the kth iteration, k = 1, . . . , 4, and the final estimate t̃ = t̃4.

Table 26 — Change in parameter vector t̃

   t̃0          δt1 × 10⁻²    δt2 × 10⁻⁴    δt3 × 10⁻⁶    δt4 × 10⁻⁸    t̃4
50,400 0 17,253 1 1,258 0 3,078 2 0,290 4 50,572 7
99,000 0 −43,150 1 −3,214 5 −6,320 1 −0,710 1 98,568 2
149,900 0 −29,164 1 −3,960 4 −3,888 9 −0,756 4 149,608 0
200,400 0 2,967 7 −10,762 9 −0,602 4 −1,716 5 200,428 6
248,500 0 24,039 4 −11,406 4 3,237 8 −1,706 4 248,739 3
299,700 0 −22,251 0 −15,776 7 −3,358 1 −2,611 0 299,475 9
349,100 0 −20,619 2 −16,621 7 −3,380 5 −2,742 9 348,892 1
0,270 7 7,504 0 −33,395 7 0,100 6 −5,301 9 0,342 4
1,001 1 0,011 0 0,211 3 0,007 6 0,033 7 1,001 2

The best estimates of A and B are a = 0,342 4 and b = 1,001 2.

At the final iteration the matrix M of dimension 9 × 9 is

   M = [  1,775 5    0,000 0   0,000 0    0,000 0    0,000 0    0,000 0    0,000 0     0,000 0     0,000 0   ]
       [  0,181 0    1,695 9   0,000 0    0,000 0    0,000 0    0,000 0    0,000 0     0,000 0     0,000 0   ]
       [ −0,424 6   −0,543 0   1,430 6    0,000 0    0,000 0    0,000 0    0,000 0     0,000 0     0,000 0   ]
       [  0,181 0    0,237 8   0,689 3    1,531 2    0,000 0    0,000 0    0,000 0     0,000 0     0,000 0   ]
       [ −0,424 6    0,505 3   0,099 9   −0,724 9    1,245 3    0,000 0    0,000 0     0,000 0     0,000 0   ]
       [  0,374 8   −0,560 7  −0,237 7   −0,426 9   −0,030 6    1,346 5    0,000 0     0,000 0     0,000 0   ]
       [ −0,230 8   −0,293 1  −0,827 1    0,093 2   −0,582 8   −0,971 1    0,832 9     0,000 0     0,000 0   ]
       [  0,051 3    0,048 2   0,097 1    0,002 2    0,064 5    0,092 7    0,389 9     0,676 2     0,000 0   ]
       [ −10,765 2  −3,038 1  −0,381 5   13,930 2   30,184 4   38,841 2  127,115 5   107,270 6   110,967 7   ],

so that M22 (10.2.2 step 11) is

   M22 = [ 0,676 2     0,000 0   ]
         [ 107,270 6   110,967 7 ].
The standard uncertainties and covariance associated with a and b as evaluated in step 11 in 10.2.2 are

   u²(a) = [(110,967 7)² + (107,270 6)²]/[(0,676 2)² (110,967 7)²], so that u(a) = 2,056 9;
   u²(b) = 1/(110,967 7)², so that u(b) = 0,009 0;
   cov(a, b) = −107,270 6/[(0,676 2)(110,967 7)²] = −0,012 9.

The observed chi-squared value is χ2obs = 1,772 with ν = 5 degrees of freedom, as calculated in step 12 in 10.3. Since χ2obs does
not exceed the 95 % quantile of χ2ν , namely 11,070, this is no reason to doubt the consistency of the straight-line model and the
data.
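
The iterative scheme of 10.2.2 and the validation of 10.3 for this example can be checked with the following sketch (illustrative only; Python with numpy, with names and the convergence threshold chosen for convenience). It reproduces a = 0,342 4, b = 1,001 2, u(a) = 2,056 9, u(b) = 0,009 0, cov(a, b) = −0,012 9 and χ²obs = 1,772:

import numpy as np

x = np.array([50.4, 99.0, 149.9, 200.4, 248.5, 299.7, 349.1])   # Table 25
y = np.array([52.3, 97.8, 149.7, 200.1, 250.4, 300.9, 349.2])
m = len(x)
Ux = np.array([[0.50, 0.00, 0.25, 0.00, 0.25, 0.00, 0.25],
               [0.00, 1.25, 1.00, 0.00, 0.00, 1.00, 1.00],
               [0.25, 1.00, 1.50, 0.00, 0.25, 1.00, 1.25],
               [0.00, 0.00, 0.00, 1.25, 1.00, 1.00, 1.00],
               [0.25, 0.00, 0.25, 1.00, 1.50, 1.00, 1.25],
               [0.00, 1.00, 1.00, 1.00, 1.00, 2.25, 2.00],
               [0.25, 1.00, 1.25, 1.00, 1.25, 2.00, 2.50]])
Uy = 1.0 + 4.0 * np.eye(m)                                  # diagonal 5,0, off-diagonal 1,0
U = np.block([[Ux, np.zeros((m, m))], [np.zeros((m, m)), Uy]])
L = np.linalg.cholesky(U)                                   # step 3 (U is constant, so computed once)

# step 1: initial approximations from a weighted least squares fit (equal weights here)
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
a = y.mean() - b * x.mean()
t = np.concatenate([x, [a, b]])

for _ in range(20):
    xt, a, b = t[:m], t[m], t[m + 1]
    f = np.concatenate([x - xt, y - (a + b * xt)])          # step 2: vector f
    J = np.zeros((2 * m, m + 2))                            # step 2: Jacobian J
    J[:m, :m] = -np.eye(m)
    J[m:, :m] = -b * np.eye(m)
    J[m:, m] = -1.0
    J[m:, m + 1] = -xt
    ft = np.linalg.solve(L, f)                              # step 4
    Jt = np.linalg.solve(L, J)
    g = Jt.T @ ft; H = Jt.T @ Jt                            # step 5
    M = np.linalg.cholesky(H)                               # step 6
    q = np.linalg.solve(M, -g)                              # step 7
    dt = np.linalg.solve(M.T, q)                            # step 8
    t = t + dt                                              # step 9
    if np.max(np.abs(dt)) < 1.0e-9:                         # step 10
        break

a, b = t[m], t[m + 1]
m11, m21, m22 = M[m, m], M[m + 1, m], M[m + 1, m + 1]       # step 11: elements of M22
ua = np.sqrt((m22**2 + m21**2) / (m11**2 * m22**2))
ub = 1.0 / m22
cov_ab = -m21 / (m11 * m22**2)
chi2_obs = np.sum(ft**2)                                    # 10.3 step 12 (at convergence)
print(a, b, ua, ub, cov_ab, chi2_obs)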


11 Use of the calibration function

The use of the calibration function for prediction and forward evaluation is independent of the method used to obtain
estimates of the calibration function parameters and evaluate their associated standard uncertainties and covariance.

11.1 Prediction

11.1.1 Consider that the following are prescribed, following an application of one of Clauses 6 to 10:

a) straight-line parameters estimates a and b, and standard uncertainties u(a) and u(b) and covariance cov(a, b)
associated with a and b, and

b) measured value y of Y and associated standard uncertainty u(y).

Consider that y has been obtained independently of the measurement data used to establish the calibration function.

11.1.2 The estimate x of X corresponding to y is


   x = (y − a)/b.                                                                                  (15)

11.1.3 The standard uncertainty u(x) associated with x is given by


1 y−a 1
c(a) = − , c(b) = − 2 , c(y) = ,
b b b
u2 (x) = c2 (a)u2 (a) + c2 (b)u2 (b) + 2c(a)c(b)cov(a, b) + c2 (y)u2 (y).

NOTE 1 The formula for u2 (x) is established using the law of propagation of uncertainty in ISO/IEC Guide 98-3:2008. It is
approximate, being based on a linearization of the formula (15). c(a), c(b) and c(y) represent sensitivity coefficients.

NOTE 2 For computational purposes, a matrix formulation may be advantageous:

   u²(x) = cᵀ [ u²(a)      cov(a, b)  0     ]
              [ cov(b, a)  u²(b)      0     ] c,        c = (c(a), c(b), c(y))ᵀ.
              [ 0          0          u²(y) ]

NOTE 3 In the case b = 0, that is, the best-fit line is y = a, which corresponds to an inadmissible calibration function,
prediction cannot be carried out.

NOTE 4 The validity of the standard uncertainty u(x) depends on the satisfaction of the relevant chi-squared test given in
Clauses 6 to 10.


EXAMPLE 1 Regarding the numerical example of weighted least squares (WLS) with known equal weights described in
Clause 6, the best fit straight-line parameters and their associated standard uncertainties and covariance are

a = 1,867, b = 1,757, u(a) = 0,465, u(b) = 0,120, cov(a, b) = −0,050.

Let y = 10,5 be an additional measured value of Y and u(y) = 0,5 its associated standard uncertainty.

From 11.1.2, an estimate of the value x of X corresponding to y is

x = (10,5 − 1,867)/1,757 = 4,913.

Using 11.1.3, the associated standard uncertainty u(x) is given by

   c(a) = −1/1,757 = −0,569,
   c(b) = −(10,5 − 1,867)/(1,757)² = −2,796,
   c(y) = 1/1,757 = 0,569,
   u²(x) = (−0,569)²(0,217) + (−2,796)²(0,014) + (2)(−0,569)(−2,796)(−0,050) + (0,569)²(0,5)² = 0,104,

so that u(x) = 0,322.

EXAMPLE 2 Regarding the numerical example of weighted least squares (WLS) with known unequal weights described in
Clause 6, the best fit straight-line parameters and their associated standard uncertainties and covariance are

a = 0,885, b = 2,057, u(a) = 0,530, u(b) = 0,178, cov(a, b) = −0,082.

Let y = 10,5 be an additional measured value of Y and u(y) = 1,0 its associated standard uncertainty.

From 11.1.2, an estimate of the value x of X corresponding to y is

x = (10,5 − 0,885)/2,057 = 4,674.

Using 11.1.3, the associated standard uncertainty u(x) is given by

   c(a) = −1/2,057 = −0,486,
   c(b) = −(10,5 − 0,885)/(2,057)² = −2,272,
   c(y) = 1/2,057 = 0,486,
   u²(x) = (−0,486)²(0,281) + (−2,272)²(0,032) + (2)(−0,486)(−2,272)(−0,082) + (0,486)²(1,0)² = 0,284,

so that u(x) = 0,533.

In this example and 11.1 EXAMPLE 1, the influence of the different uncertainties associated with the value of y can be seen in
the corresponding uncertainties associated with the respective values of x.
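
The prediction calculation of 11.1.2 and 11.1.3 is readily scripted; the following sketch is illustrative only (Python with numpy; predict_x is a name chosen here) and reproduces both examples above:

import numpy as np

def predict_x(y, uy, a, b, ua, ub, cov_ab):
    """Estimate x = (y - a)/b and u(x) following 11.1.2 and 11.1.3; illustrative sketch."""
    x = (y - a) / b
    c = np.array([-1.0 / b, -(y - a) / b**2, 1.0 / b])    # sensitivity coefficients c(a), c(b), c(y)
    V = np.array([[ua**2, cov_ab, 0.0],
                  [cov_ab, ub**2, 0.0],
                  [0.0, 0.0, uy**2]])
    return x, np.sqrt(c @ V @ c)

# EXAMPLE 1 (equal weights):   x = 4,913, u(x) = 0,322
print(predict_x(10.5, 0.5, 1.867, 1.757, 0.465, 0.120, -0.050))
# EXAMPLE 2 (unequal weights): x = 4,674, u(x) = 0,533
print(predict_x(10.5, 1.0, 0.885, 2.057, 0.530, 0.178, -0.082))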


11.2 Forward evaluation

Consider that the following are prescribed, following an application of one of Clauses 6 to 10:

a) straight-line parameters estimates a and b, and standard uncertainties u(a) and u(b) and covariance cov(a, b)
associated with a and b, and

b) measured value x of X and associated standard uncertainty u(x).

Consider that x has been obtained independently of the measurement data used to establish the calibration function.

11.2.1 The estimate y of Y corresponding to x is

y = a + bx. (16)

11.2.2 The standard uncertainty u(y) associated with y is given by

   c(a) = 1,   c(b) = x,   c(x) = b,
   u²(y) = c²(a) u²(a) + c²(b) u²(b) + 2 c(a) c(b) cov(a, b) + c²(x) u²(x).

NOTE 1 The formula for u²(y) is established using the law of propagation of uncertainty in ISO/IEC Guide 98-3:2008. It is
approximate, being based on a linearization of the formula (16). c(a), c(b) and c(x) represent sensitivity coefficients.

NOTE 2 For computational purposes, a matrix formulation may be advantageous:

   u²(y) = cᵀ [ u²(a)      cov(a, b)  0     ]
              [ cov(b, a)  u²(b)      0     ] c,        c = (c(a), c(b), c(x))ᵀ.
              [ 0          0          u²(x) ]

NOTE 3 The validity of the standard uncertainty u(y) depends on the satisfaction of the relevant chi-squared test given in
Clauses 6 to 10.


EXAMPLE Regarding the numerical example of weighted least squares (WLS) with known equal weights described in Clause 6,
the best fit straight-line parameters and their associated standard uncertainties and covariance are

a = 1,867, b = 1,757, u(a) = 0,465, u(b) = 0,120, cov(a, b) = −0,050.

Let x = 3,5 be an additional measured value of X and u(x) = 0,2 its associated standard uncertainty, and assume that
cov(x, a) = cov(x, b) = 0, that is, there is no correlation associated with x and a, and with x and b.

From 11.2.1, an estimate of the value y of Y corresponding to x is

y = 1,867 + (1,757)(3,5) = 8,017.

Using 11.2.2 the associated standard uncertainty u(y) is given by

u²(y) = 0,217 + (3,5)²(0,014) + (2)(3,5)(−0,050) + (1,757)²(0,2)² = 0,165,

so that u(y) = 0,406.
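
An analogous sketch for forward evaluation (11.2.1 and 11.2.2), again illustrative only and with the name evaluate_y chosen here, reproduces the example above:

import numpy as np

def evaluate_y(x, ux, a, b, ua, ub, cov_ab):
    """Estimate y = a + b*x and u(y) following 11.2.1 and 11.2.2; illustrative sketch."""
    c = np.array([1.0, x, b])                  # sensitivity coefficients c(a), c(b), c(x)
    V = np.array([[ua**2, cov_ab, 0.0],
                  [cov_ab, ub**2, 0.0],
                  [0.0, 0.0, ux**2]])
    return a + b * x, np.sqrt(c @ V @ c)

# EXAMPLE: y = 8,017, u(y) = 0,406
print(evaluate_y(3.5, 0.2, 1.867, 1.757, 0.465, 0.120, -0.050))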


Annex A
(informative)

Matrix operations

A.1 General

This annex describes matrix operations that are used in this Technical Specification.

A.2 Elementary operations

In the following operations, A is a matrix of dimension m × n with element A(i, j) = aij in the ith row and jth
column, B is a matrix of dimension n × k, C is a (square) matrix of dimension m × m and d a vector of dimension
n × 1 with jth element dj .

A.2.1 Matrix-vector multiplication

The matrix-vector product Ad is the vector e of dimension m × 1 with ith element ei defined by
   ei = Σj aij dj = ai1 d1 + ai2 d2 + · · · + ain dn,   the sum being over j = 1, . . . , n.

A.2.2 Matrix-matrix multiplication

The matrix product AB is the matrix of dimension m × k whose jth column is the product of A and the jth column
of B.

A.2.3 Matrix transpose

The transpose Aᵀ of the matrix A is the matrix of dimension n × m with element aij in the jth row and
ith column.

A.2.4 Identity matrix

The identity matrix of order m is the matrix I of dimension m × m such that I(j, j) = 1, j = 1, . . . , m, and all other
elements are zero.

A.2.5 Inverse of a square matrix

The inverse of C, if it exists, is denoted by C −1 and is the matrix of dimension m × m such that

CC −1 = C −1 C = I.

The transpose of C −1 is equal to the inverse of C > and is denoted by C −> .

A.3 Elementary definitions

In the following definitions, C is a (square) matrix of dimension m × m with element C(i, j) = cij in the ith row and
jth column.


A.3.1 Symmetric matrix

The matrix C is symmetric if cij = cji , i = 1, . . . , m, j = 1, . . . , m, that is, C = C > .

A.3.2 Invertible matrix

The matrix C is invertible if its inverse C −1 (see A.2.5) exists.

A.3.3 Lower-triangular and upper-triangular matrix

The matrix C is lower-triangular if cij = 0, i < j, and upper-triangular if cij = 0, i > j.

A.3.4 Orthogonal matrix

The matrix C is orthogonal if C > C = I.

A.4 Cholesky factorization

The Cholesky factorization of the symmetric positive definite matrix U of dimension m × m is a lower-triangular
matrix L of dimension m × m such that U = LL> [10].

A.4.1 Cholesky factorization algorithms

A.4.1.1 The following algorithm computes a lower-triangular matrix L such that U = LL> .

Initialization
for k = 1 : m
for j = k : m
L(j, k) := U (j, k)
end
end
for k = 2 : m
for j = 1 : k − 1
L(j, k) := 0
end
end
Factorization
for k = 1 : m
L(k, k) := √(L(k, k))
for j = k + 1 : m
L(j, k) := L(j, k)/L(k, k)
end
for j = k + 1 : m
for l = j : m
L(l, j) := L(l, j) − L(l, k)L(j, k)
end
end
end

NOTE To overwrite the lower-triangular elements U (i, j), i ≥ j of U with its Cholesky factorization, implement only the steps
in the Factorization stage of the algorithm in A.4.1.1, using U instead of L.
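
By way of illustration, the algorithm in A.4.1.1 may be transcribed into MATLAB as follows. This is a sketch only (the function name is chosen here); MATLAB's built-in function chol(U, 'lower') returns an equivalent factor for a symmetric positive definite U.

function L = cholesky_lower(U)
% Cholesky factorization U = L*L' of a symmetric positive definite matrix U,
% following the Initialization and Factorization stages of A.4.1.1.
m = size(U, 1);
L = tril(U);                           % initialization: lower triangle of U, zeros above
for k = 1:m
    L(k, k) = sqrt(L(k, k));
    for j = k+1:m
        L(j, k) = L(j, k) / L(k, k);
    end
    for j = k+1:m
        for l = j:m
            L(l, j) = L(l, j) - L(l, k) * L(j, k);
        end
    end
end
end

For example, cholesky_lower([4 2; 2 3]) returns the factor with rows (2, 0) and (1, √2).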


A.4.1.2 The calculations in A.4.1.1 can be re-organized to involve more vector-vector operations in order to improve
execution speed in computer languages that support vector and array operations. For example,

Initialization
for j = 1 : m
L(j, 1 : j) := U (j, 1 : j)
end
for j = 1 : m − 1
L(j, j + 1 : m) := 0
end
Factorization
for j = 1 : m
if j > 1
L(j : m, j) := L(j : m, j) − L(j : m, 1 : j − 1) (L(j, 1 : j − 1))>
end
L(j : m, j) := L(j : m, j)/√(L(j, j))
end

NOTE To overwrite the lower-triangular elements U (i, j), i ≥ j of U with its Cholesky factorization, implement only the steps
in the Factorization stage of the algorithm in A.4.1.2, using U instead of L.

A.4.2 Interpretation of the Cholesky factorization of a covariance matrix

A.4.2.1 Suppose Ei , i = 1, . . . , m are m independent random variables each with expectation zero and variance
one and let ei be a realization of Ei. Let

y1 = l11 e1 ,
y2 = l21 e1 + l22 e2 .

Then u2(y1) = l11² and u2(y2) = l21² + l22². The common dependence of y1 and y2 on e1 means that y1 and y2 have
associated correlation, with covariance cov(y1 , y2 ) = l11 l21 . Continuing, suppose

y3 = l31 e1 + l32 e2 + l33 e3 ,


...
ym = lm1 e1 + lm2 e2 + · · · + lmm em .

A.4.2.2 In matrix terms, y = Le, with L lower-triangular. The common dependence of y1 and y3 on e1 means
that there is correlation associated with y1 and y3 . Similarly, the common dependence of y2 and y3 on e1 and e2
means that there is correlation associated with y2 and y3 , and so on.

A.4.2.3 Given a covariance matrix U associated with data yi , the Cholesky factorization U = LL> calculates
the coefficients lij such that the covariance matrix can be explained by assuming that the yi are defined in A.4.2.1 as
realizations of linear combinations defined by lij of independent random variables Ei . In practice, covariance matrices
are often defined in terms of factorizations U = BB > and given U there are infinitely many factors B that can be used
to construct U . The Cholesky factorization, in which the linear combinations are represented by a lower-triangular
matrix, is unique up to the numerical sign of the columns of L.
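
The interpretation in A.4.2 can be illustrated by the following MATLAB sketch (the covariance matrix used is illustrative only), in which correlated values are generated as y = Le from independent realizations e:

U = [4 2 1;
     2 3 1;
     1 1 2];             % an illustrative symmetric positive definite covariance matrix
L = chol(U, 'lower');    % Cholesky factor such that U = L*L'
e = randn(3, 1);         % independent realizations with expectation zero and variance one
y = L * e;               % correlated realizations whose covariance matrix is U

Averaging y*y' over many such realizations reproduces U approximately.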

A.4.3 Solution of a lower-triangular system

A.4.3.1 If L is a lower-triangular matrix of dimension m × m such that L(j, j) 6= 0, j = 1, . . . , m, and x is a vector


of dimension m × 1, the following algorithm computes the vector y, where y is such that Ly = x, that is, y = L−1 x.


Initialization
for j = 1 : m
y(j) := x(j)
end
Solution
y(1) := y(1)/L(1, 1)
for j = 2 : m
for k = 1 : j − 1
y(j) := y(j) − L(j, k)y(k)
end
y(j) := y(j)/L(j, j)
end

NOTE To overwrite the vector x with the solution y, implement only the steps in the Solution stage of the algorithm in A.4.3.1,
using x instead of y.
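
A direct MATLAB transcription of the algorithm in A.4.3.1 might read as follows (a sketch only; in practice the same result is obtained with the built-in operation y = L\x):

function y = solve_lower(L, x)
% Solve L*y = x by forward substitution (A.4.3.1); L is lower-triangular with
% non-zero diagonal elements and x is a vector of matching dimension.
m = size(L, 1);
y = x;
y(1) = y(1) / L(1, 1);
for j = 2:m
    for k = 1:j-1
        y(j) = y(j) - L(j, k) * y(k);
    end
    y(j) = y(j) / L(j, j);
end
end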

A.4.3.2 The algorithm in A.4.3.1 can be applied to solve the matrix equation LY = X by successively applying
it to each column of X. The solution is mathematically given by Y = L−1 X.

A.4.4 Solution of an upper-triangular system

A.4.4.1 The solution of an upper-triangular system can be determined in terms of the transpose of a lower-
triangular matrix. If L is a lower-triangular matrix of dimension m × m such that L(j, j) 6= 0, j = 1, . . . , m, and x is
a vector of dimension m × 1, the following algorithm computes the vector y, where y is such that L> y = x, that is,
y = L−> x.

Initialization
for j = 1 : m
y(j) := x(j)
end
Solution
y(m) := y(m)/L(m, m)
for j = m − 1 : −1 : 1
for k = j + 1 : m
y(j) := y(j) − L(k, j)y(k)
end
y(j) := y(j)/L(j, j)
end

NOTE To overwrite the vector x with the solution y, implement only the steps in the Solution stage of the algorithm in A.4.4.1,
using x instead of y.

A.4.4.2 The algorithm in A.4.4.1 can be applied to solve the matrix equation L> Y = X by successively applying
it to each column of X. The solution is mathematically given by Y = L−> X.

A.5 Orthogonal factorization

Orthogonal matrices are combinations of rotations and reflections and have the property that pre-multiplication of a
vector by an orthogonal matrix does not change the magnitude of that vector (the square root of the sum of squares
of its elements). The columns of an orthogonal matrix can be regarded as defining a system of orthogonal axes. The
importance of orthogonal factorization techniques is that they allow matrix equations to be solved in a numerically
stable way. Algorithms for computing orthogonal factorizations of a matrix are described in references [1, 10, 20].


A.5.1 QR factorization

The QR factorization of a matrix A of dimension m × n, with m ≥ n, can be written as


 
A = QR = [Q1 Q2] [ R1
                   0  ] = Q1 R1,

where Q = [Q1 Q2] is an orthogonal matrix of dimension m × m, Q1 is the matrix consisting of the first n columns
of Q, with Q1>Q1 = I, and R1 is an upper-triangular matrix of dimension n × n.

NOTE The QR factorization of a matrix A of dimension m × n, with m < n, can also be obtained. In this Technical
Specification, since all matrices for which the calculation of the QR factorization is required have m ≥ n, the factorization is
not provided.
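
In MATLAB the factorization of A.5.1 can be obtained with the built-in function qr, as in the following sketch (the example matrix is illustrative):

A = [1 1; 1 2; 1 3; 1 4];      % example matrix of dimension m x n with m >= n
[Q, R] = qr(A);                % Q is m x m and orthogonal, R is m x n
n = size(A, 2);
Q1 = Q(:, 1:n);                % first n columns of Q
R1 = R(1:n, :);                % upper-triangular n x n block
% A is recovered, to rounding error, as Q1*R1 (and as Q*R).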

A.5.2 RQ factorization

A.5.2.1 The RQ factorization of a matrix B of dimension m × n, with m ≥ n, can be written as


 
B = T Z = [ T1
            T2 ] Z,

where Z is orthogonal and T2 is upper-triangular.

A.5.2.2 The RQ factorization of a matrix B of dimension m × n, with m < n, can be written as


 
B = T Z = [0 T2] [ Z1
                   Z2 ] = T2 Z2,

where Z is orthogonal and T2 is upper-triangular.


Annex B
(informative)

Application of the Gauss-Newton algorithm to generalized distance regression

B.1 This annex derives the algorithms in 7.2.1 and 8.2.1 using the Gauss-Newton algorithm.

B.2 The algorithms in 7.2.1 and 8.2.1 are particular implementations of the iterative Gauss-Newton algorithm [10]
for minimizing a sum of squares of non-linear functions:

F(A) = f1²(A) + f2²(A) + · · · + fm²(A),    A = (A1, . . . , An)>,    m ≥ n.

B.3 Let ã be an approximation to the solution parameters a and

f = [ f1(A)                   [ ∂f1/∂A1   · · ·   ∂f1/∂An
      ...        and    J =     ...        ...     ...
      fm(A) ]                   ∂fm/∂A1   · · ·   ∂fm/∂An ]

be, respectively, the vector of dimension m × 1 of function values and Jacobian matrix of dimension m × n of partial
derivatives of first order with respect to the parameters, evaluated at the approximation ã to the parameters.

B.4 Let p solve

J>J p = −J>f.    (B.1)

Then an updated estimate of the solution parameters is given by ã := ã + p.

B.5 For the algorithms in 7.2.1 and 8.2.1, A = (A, B)> and the function fi(A) is a measure of the generalized
distance from the ith data point (xi, yi) to the line y = A + Bx.

B.6 Let U i be the covariance matrix associated with the ith data point:

Ui = [ u2(xi)         cov(xi, yi)
       cov(yi, xi)    u2(yi)      ],

and let xi* ≡ xi*(A, B), known as the ith footpoint, solve

min (over x)   di2(x, A, B) = (xi − x, yi − A − Bx) Ui−1 (xi − x, yi − A − Bx)>,    (B.2)

a function of A and B.

B.7 If fi2(A, B) is defined by

fi2(A, B) = di2(xi*(A, B), A, B),

i.e., di2(x, A, B) evaluated at the solution xi*, then the values of A and B that minimize

F(A, B) = f1²(A, B) + f2²(A, B) + · · · + fm²(A, B)

determine the generalized distance regression best fit line. Implementation of the Gauss-Newton algorithm requires
the determination of the partial derivatives of first order of fi (A, B) with respect to A and B to form the Jacobian
matrix J .


B.8 Let n = (−B, 1)> be a vector orthogonal to the line y = A + Bx and suppose that xi* is a solution of the
footpoint problem (B.2). Setting xi = (xi, yi)>, xi* = (xi*, A + Bxi*)> and

ti = n>Ui n,    (B.3)

then, expressing function values and derivatives in terms of Ui, A, B, xi, yi and xi*,

fi(A, B) = ti−1/2 n>(xi − xi*),    (B.4)

and

∂fi/∂A = −ti−1/2 n>(0, 1)>,    ∂fi/∂B = −ti−1/2 n>(0, xi*)>.    (B.5)

B.9 The solution xi* of the footpoint problem (B.2) is given by

(pi, qi)> = Ui (−B, 1)>,    xi* = (−qi xi + pi(yi − A)) / (−qi + pi B).    (B.6)

NOTE Expressions (B.3), (B.4), (B.5) and (B.6) are defined in terms of Ui rather than its inverse Ui−1. There is no requirement
for Ui to be invertible, but n>Ui n must be non-zero.

B.10 The algorithms in 7.2.1 and 8.2.1 implement the Gauss-Newton algorithm using explicit expressions for
fi (A, B), ∂fi /∂A and ∂fi /∂B. Solving for the update step p in expression (B.1) is formulated as a problem of
determining the weighted least squares best fit straight-line (see 6.2.1 steps 1 to 5) for transformed data derived from
the measurement data (xi , yi ), associated covariance matrices U i and the current approximations to A and B.
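
As an illustration of B.2 to B.10, the following MATLAB function is a sketch of a single Gauss-Newton iteration that uses the explicit expressions (B.3) to (B.6). The function name and the data layout (U(:,:,i) holding the 2 × 2 covariance matrix of the ith point) are choices made here; also, for simplicity the normal equations (B.1) are solved directly, whereas the algorithms in 7.2.1 and 8.2.1 formulate the same update as a weighted least squares straight-line fit.

function [A, B] = gdr_gauss_newton_step(x, y, U, A, B)
% One Gauss-Newton iteration for generalized distance regression.
% x, y: measured data (vectors of length m); U(:,:,i): covariance matrix
% associated with the ith point; A, B: current approximations to the parameters.
m = numel(x);
f = zeros(m, 1);
J = zeros(m, 2);
n = [-B; 1];                                 % vector orthogonal to the line
for i = 1:m
    Ui = U(:, :, i);
    ti = n' * Ui * n;                        % expression (B.3)
    pq = Ui * n;                             % (p_i, q_i)', expression (B.6)
    xs = (-pq(2)*x(i) + pq(1)*(y(i) - A)) / (-pq(2) + pq(1)*B);   % footpoint x_i*
    f(i) = (n' * ([x(i); y(i)] - [xs; A + B*xs])) / sqrt(ti);     % expression (B.4)
    J(i, :) = [-1, -xs] / sqrt(ti);          % expression (B.5)
end
p = -((J' * J) \ (J' * f));                  % update step, equation (B.1)
A = A + p(1);
B = B + p(2);
end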


Annex C
(informative)

Orthogonal factorization approach to solving the generalized Gauss-Markov problem

C.1 General

The iterative algorithm described in 10.2.2 assumes that the covariance matrix U of dimension 2m × 2m is positive
definite and hence invertible. In particular, invertibility requires that all u(xi ) > 0 and u(yi ) > 0. In this annex a
general algorithm is described that is appropriate for all valid (symmetric positive semi-definite) covariance matrices
U . All that is required is that the covariance matrix can be factorized as U = BB > , where B is a matrix of dimension
2m×p (p ≥ m). Often covariance matrices are derived in terms of such a factorization. If U is invertible, B could be its
Cholesky factor. The algorithm proceeds similarly to that described in 10.2.2 and requires the calculation of residuals
f and Jacobian matrix J , but the increment δt is determined using two orthogonal factorizations. Mathematically,
δt minimizes

c>c = c1² + c2² + · · · + cp² subject to the constraints f = −J δt + Bc.
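
When B is square (p = 2m) and invertible, this constrained minimization reduces to an ordinary linear least squares problem, as the following MATLAB sketch indicates (an illustration only; the orthogonal factorization approach of C.2 is required when B is rectangular or rank-deficient):

function [delta_t, c] = ggm_increment(f, J, B)
% Increment delta_t minimizing c'*c subject to f = -J*delta_t + B*c, for the
% special case that B is square and invertible. Then c = B\(f + J*delta_t),
% and delta_t is the least squares solution of (B\J)*delta_t = -(B\f).
JB = B \ J;
fB = B \ f;
delta_t = -(JB \ fB);            % least squares solution via MATLAB's backslash
c = B \ (f + J * delta_t);       % corresponding value of c
end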

C.2 Calibration parameter estimates and associated standard uncertainties and covariance

Estimates a and b are calculated as in steps 1 to 9 below; the standard uncertainties u(a) and u(b) are evaluated in
step 10:

1 Obtain initial approximations t̃ = (x̃1, . . . , x̃m, ã, b̃)> to the parameters;

2 Calculate the vector of dimension 2m × 1,

f = ( x1 − x̃1, . . . , xm − x̃m, y1 − (ã + b̃x̃1), . . . , ym − (ã + b̃x̃m) )>
  = [ x − x̃
      y − ã1 − b̃x̃ ],

and the (Jacobian) matrix of dimension 2m × (m + 2),

J = [ −I     0    0
      −b̃I   −1   −x̃ ],

where I is the identity matrix of dimension m × m, 0 denotes a zero vector of dimension m × 1, −1 a vector of
dimension m × 1 with all elements equal to −1 and x̃ = (x̃1, . . . , x̃m)>; the first m rows of J contain the partial
derivatives of x1 − x̃1, . . . , xm − x̃m and the last m rows those of y1 − (ã + b̃x̃1), . . . , ym − (ã + b̃x̃m), taken with
respect to x̃1, . . . , x̃m, ã and b̃. The values ã and b̃ are extracted from the current estimate t̃ of the parameter vector;

3 Determine the QR factorization of J:

J = Q [ R1
        0  ],

where Q is an orthogonal matrix of dimension 2m × 2m and R1 is an upper-triangular matrix of dimension
(m + 2) × (m + 2); see A.5.1;

4 Form the matrix product Q> B and determine its RQ factorization

Q> B = T Z,

where T is a matrix of dimension 2m × p and Z is an orthogonal matrix of dimension p × p; see A.5.2;

5 Set f̃ = Q>f and partition f̃ and T:

f̃ = [ f̃1           T = [ T11   T12
      f̃2 ],               0     T22 ],

where f̃1 is a vector of dimension (m + 2) × 1, f̃2 is a vector of dimension (m − 2) × 1, T11 is a matrix of dimension
(m + 2) × (p − m + 2), T12 is a matrix of dimension (m + 2) × (m − 2) and T22 is an upper-triangular matrix of
dimension (m − 2) × (m − 2);

6 Solve the upper-triangular system T22 ẽ2 = f̃2 to determine the vector ẽ2 = (ẽ2,1, . . . , ẽ2,m−2)> of dimension
(m − 2) × 1; see A.4.4;

7 Solve the upper-triangular system R1 δt = T12 ẽ2 − f̃1 to determine the increment δt; see A.4.4;

8 Update the current approximations to the parameters: t̃ := t̃ + δt;

9 Repeat steps 2 to 8 until convergence has been achieved. Set a = ã and b = b̃ (elements m + 1 and m + 2 of t̃);

10 Let Ra be the lower right submatrix of dimension 2 × 2 of R1 and Ta the lower right submatrix of dimension
2 × 2 of T 11 . Solve the upper-triangular system

Ra Ka = Ta ,

for the upper triangular matrix Ka of dimension 2 × 2 (see A.4.4) and set Ua = Ka Ka > . Then

u2 (a) = Ua (1, 1), u2 (b) = Ua (2, 2) and cov(a, b) = Ua (1, 2).

NOTE 1 The approach described in C.2 represents the most general solution to determining linear calibration functions using
least squares methods. All other approaches described in this document can be solved as special cases.

NOTE 2 Steps 1, 2, 8 and 9 in C.2 are identical to, respectively, steps 1, 2, 9 and 10 in 10.2.2.

C.3 Validation of the model

If m > 2, the validity of the model can be partially tested using the elements of the vector ẽ2 (continued from C.2):

11 Form the observed chi-squared value χ2obs = (ẽ2,1)² + (ẽ2,2)² + · · · + (ẽ2,m−2)² and degrees of freedom ν = m − 2;

12 Check whether χ2obs exceeds the 95 % quantile of χ2ν , and if it does reject the straight-line model.

NOTE The chi-squared test is based on an assumption that the di and ei in model (13) are realizations of random variables
characterized by a multivariate normal distribution and on a first order approximation. Under this assumption the vector ẽ2
of dimension (m − 2) × 1 is associated with a multivariate Gaussian distribution with covariance matrix equal to the identity
matrix of dimension (m − 2) × (m − 2) so that χ2obs is associated with a χ2 distribution with m − 2 degrees of freedom.


EXAMPLE 1 The QR factorization approach can be applied to the numerical example described in Clause 10.

The covariance matrix Ux arises in factored form (see D.4) as

Ux = Bx Bx>,

where

Bx = [ 0,5  0,0  0,0  0,0  0,0  0,0  0,0  0,5  0,0  0,0
       0,0  0,5  0,0  0,0  0,0  0,0  0,0  0,0  1,0  0,0
       0,0  0,0  0,5  0,0  0,0  0,0  0,0  0,5  1,0  0,0
       0,0  0,0  0,0  0,5  0,0  0,0  0,0  0,0  0,0  1,0
       0,0  0,0  0,0  0,0  0,5  0,0  0,0  0,5  0,0  1,0
       0,0  0,0  0,0  0,0  0,0  0,5  0,0  0,0  1,0  1,0
       0,0  0,0  0,0  0,0  0,0  0,0  0,5  0,5  1,0  1,0 ].

The covariance matrix Uy = By By> also arises in factored form with

By = [ 2,0  0,0  0,0  0,0  0,0  0,0  0,0  1,0
       0,0  2,0  0,0  0,0  0,0  0,0  0,0  1,0
       0,0  0,0  2,0  0,0  0,0  0,0  0,0  1,0
       0,0  0,0  0,0  2,0  0,0  0,0  0,0  1,0
       0,0  0,0  0,0  0,0  2,0  0,0  0,0  1,0
       0,0  0,0  0,0  0,0  0,0  2,0  0,0  1,0
       0,0  0,0  0,0  0,0  0,0  0,0  2,0  1,0 ].

The complete covariance matrix U of dimension 14 × 14 is factorized as U = BB>, where B is the matrix of dimension 14 × 18

B = [ Bx   0
      0    By ].

For this example, the algorithm in C.2 is mathematically equivalent to that in 10.2.2 and the two approaches give very similar
numerical results.

EXAMPLE 2 Table C.1 gives seven measured data points (xi , yi ) obtained using the measurement models described in D.2
and D.5.

The covariance matrix associated with the yi is derived using the model (D.1) with uS = 2,0 and uR = 1,0, and is the same as
in Annex C EXAMPLE 1.

The data xi and associated covariance matrix are derived using the measurement model (D.3) with z1 = 50, z2 = 100, z3 = 200,
u(z1 ) = 0,5 and u(z2 ) = u(z3 ) = 1,0, so that
Ux = Bx Bx>,

where

Bx = [ 0,5  0,0  0,0
       0,0  1,0  0,0
       0,5  1,0  0,0
       0,0  0,0  1,0
       0,5  0,0  1,0
       0,0  1,0  1,0
       0,5  1,0  1,0 ].

The complete covariance matrix U of dimension 14 × 14 can be factorized as U = BB>, where B is the matrix of dimension 14 × 11

B = [ Bx   0
      0    By ].

For this example, the algorithm in 10.2.2 cannot be applied since U is not positive definite. The algorithm described in C.2 can
be used instead.

Table C.2 gives the initial vector t̃0, the corrections δt̃k for the kth iteration, k = 1, . . . , 5, and the final estimate t̃ = t̃5.


Table C.1 — Data representing seven measurement points, the xi and yi having associated covariance matrices

xi yi
50,5 47,1
99,7 98,4
150,2 153,7
199,5 194,0
249,9 251,9
299,2 297,5
349,7 349,0

Table C.2 — Change in parameter vector t̃

t̃0          δt̃1 × 10−2    δt̃2 × 10−4    δt̃3 × 10−6    δt̃4 × 10−8    δt̃5 × 10−10    t̃5
50,500 0 30,822 9 3,187 4 23,295 7 8,112 4 16,423 1 50,808 6
99,700 0 55,831 3 −13,836 5 26,113 6 −0,206 3 15,677 0 100,257 0
150,200 0 86,654 2 −10,649 1 49,409 3 7,906 1 32,100 2 151,065 5
199,500 0 −59,071 1 −48,597 6 −49,584 9 −44,790 4 −43,047 0 198,904 4
249,900 0 −28,248 2 −45,410 2 −26,289 1 −36,678 0 −26,623 7 249,613 0
299,200 0 −3,239 8 −62,434 1 −23,471 3 −44,996 7 −27,369 8 299,161 3
349,700 0 27,583 1 −59,246 7 −0,175 5 −36,884 3 −10,946 8 349,969 9
−1,852 8 −50,620 3 −140,085 6 −63,931 6 −100,943 2 −68,134 5 −2,373 1
1,004 2 0,173 8 0,857 1 0,321 7 0,610 8 0,372 2 1,006 0


Annex D
(informative)

Provision of uncertainties and covariances associated with the measured x- and y-values

D.1 General

This annex indicates how the uncertainties and covariances associated with the measured response and stimulus
measurement values can be obtained. The approach is based on the use of a measurement model of the processes un-
derlying the determination of response and stimulus data, and the application of the law of propagation of uncertainty
in ISO/IEC Guide 98-3:2008. Illustrative examples are used for this purpose.

D.2 Response data 1

D.2.1 General

D.2.1.1 Suppose the quantity Y representing instrument response can be expressed by the measurement model
Y = Y0 + E, (D.1)
where Y0 is a quantity realized by the indicated response and E a quantity representing a systematic effect. Suppose
that the knowledge of Y0 is encoded by a distribution with standard deviation uR . This distribution is typically based
on an analysis of a number of repeated indications of Y . Y0 is estimated by the average of these indications and
uR is the standard uncertainty associated with this estimate. Suppose that the knowledge of E is such that E has
expectation zero (that is, any necessary correction has been applied) and variance u2S (obtained from an understanding
dependent on the specific nature of the instrument).

D.2.1.2 It follows from expression (D.1) that, by applying the law of propagation of uncertainty in ISO/IEC Guide
98-3:2008, the standard uncertainty u(yi ) associated with a measured value yi of Y is given by
u2 (yi ) = u2S + u2R .

Moreover, the covariance associated with measured values yi and yj of Y is


cov(yi , yj ) = u2S .

D.2.1.3 Thus the covariance matrix in this case is


 2
uS + u2R u2S u2S

...
2
 uS uS + u2R
2
... u2S 
Uy =  .
 
.. .. .. ..
 . . . . 
u2S u2S ... u2S + u2R
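
A MATLAB sketch of this construction (values illustrative) is:

m = 7;                                  % number of measured values (illustrative)
uS = 2.0;                               % standard uncertainty of the common systematic effect
uR = 1.0;                               % standard uncertainty of the independent random effects
Uy = uS^2 * ones(m) + uR^2 * eye(m);    % covariance matrix of D.2.1.3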

D.2.2 Measurement model for uncertainties and covariances associated with the yi

D.2.2.1 The data used in the example in Clause 9 are derived from a measuring system where two groups of
measurements are made. Each group of measurements is subject to a different systematic effect with the two effects
being uncorrelated, that is,

Yi = Y0,i + E1,   i = 1, . . . , m1 < m,
Yi = Y0,i + E2,   i = m1 + 1, . . . , m,
where Y0,i is a quantity realized by the ith indicated response and E1 and E2 quantities representing systematic effects.
Suppose that the knowledge of Y0,i is such that Y0,i has variance u2R and that the knowledge of Ek is such that Ek
has expectation zero and variance u2S,k , for k = 1, 2.


D.2.2.2 The standard uncertainty u(yi) associated with a measured value yi of Yi is given by

u2(yi) = u2R + u2S,1,   i = 1, . . . , m1,
u2(yi) = u2R + u2S,2,   i = m1 + 1, . . . , m.

The covariances associated with measured values yi and yj are

cov(yi, yj) = u2S,1,   1 ≤ i ≤ m1, 1 ≤ j ≤ m1,
cov(yi, yj) = u2S,2,   m1 + 1 ≤ i ≤ m, m1 + 1 ≤ j ≤ m,
cov(yi, yj) = 0,       otherwise.

D.2.2.3 The covariance matrix in this case is

Uy = [ u2S,1 + u2R   · · ·   u2S,1         0             · · ·   0
       ...                   ...           ...                   ...
       u2S,1         · · ·   u2S,1 + u2R   0             · · ·   0
       0             · · ·   0             u2S,2 + u2R   · · ·   u2S,2
       ...                   ...           ...                   ...
       0             · · ·   0             u2S,2         · · ·   u2S,2 + u2R ].
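
A corresponding MATLAB sketch for the two-group structure of D.2.2.3 (values illustrative) is:

m = 7; m1 = 4;                          % m1 values in the first group, m - m1 in the second
uR = 1.0; uS1 = 2.0; uS2 = 1.5;         % illustrative uncertainties
Uy = uR^2 * eye(m);
Uy(1:m1, 1:m1)     = Uy(1:m1, 1:m1)     + uS1^2 * ones(m1);
Uy(m1+1:m, m1+1:m) = Uy(m1+1:m, m1+1:m) + uS2^2 * ones(m - m1);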

D.3 Response data 2

D.3.1 The measurement model is identical to that in expression (D.1) except that instead of the systematic effect
E being absolute, D is a relative systematic effect:
Y = Y0 (1 + D).

D.3.2 The treatment is analogous to that of D.2 except that now, using uD to denote the relative standard
uncertainty associated with an estimate of Y0 ,
u2 (yi ) = yi2 u2D + u2R ,
cov(yi , yj ) = yi yj u2D .

D.3.3 The covariance matrix in this case is

Uy = [ y1² u2D + u2R    y1 y2 u2D         · · ·   y1 ym u2D
       y2 y1 u2D        y2² u2D + u2R     · · ·   y2 ym u2D
       ...              ...               ...     ...
       ym y1 u2D        ym y2 u2D         · · ·   ym² u2D + u2R ].
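
A MATLAB sketch of the matrix of D.3.3 (values illustrative) is:

y  = [3.0; 5.2; 7.0; 9.1];              % measured y-values (illustrative)
uD = 0.01;                              % relative standard uncertainty of the systematic effect
uR = 0.1;                               % standard uncertainty of the random effects
Uy = uD^2 * (y * y') + uR^2 * eye(numel(y));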

D.4 Stimulus data 1

D.4.1 The data used in the example in Clause 10 are derived from the following measurement model, motivated by
practice in mass metrology where a number of masses are used to generate multiple calibration values xi . The stimulus
data xi are realizations of random variables Xi , i = 1, . . . , 7, defined in terms of random variables Zk , k = 1, 2, 3 and
Di , i = 1, . . . , 7:
X1 = Z1 + D 1 ,
X2 = Z2 + D2 ,
X3 = Z1 + Z2 + D3 ,
X4 = Z3 + D4 , (D.2)
X5 = Z1 + Z3 + D 5 ,
X6 = Z2 + Z3 + D 6 ,
X7 = Z1 + Z2 + Z3 + D7 .


The random variables Zk , k = 1, 2, 3, have expectations zk and variances u2 (zk ), while the Di have expectations zero
and variances u2D,i . (In mass calibration, the values zk are the calibrated values for the masses and u(zk ) the associated
uncertainties.)

D.4.2 The uncertainties u(zk ) and u(di ) are propagated through the measurement model in D.4.1 to those associated
with estimates xi of Xi using the law of propagation of uncertainty in ISO/IEC Guide 98-3:2008. The common
dependence of Xi on Zk means that some of the covariances are nonzero. The propagation is most easily described in
matrix terms. Let  
C = CD CZ
be the sensitivity matrix of dimension 7 × 10, where C D = I is the identity matrix of dimension 7 × 7 and
 
1 0 0
 0 1 0 
 
 1 1 0 
 
CZ =   0 0 1 .
 1 0 1 
 
 0 1 1 
1 1 1

D.4.3 Let S D be the diagonal matrix of dimension 7 × 7 with diagonal elements S D (i, i) = uD,i , i = 1, . . . , 7, and
S Z the diagonal matrix of dimension 3 × 3 with diagonal elements S Z (k, k) = u(zk ), k = 1, 2, 3. Set
 
Bx = [CD  CZ] [ SD   0
                0    SZ ] = [ SD   CZ SZ ].

D.4.4 Then the best estimate of X is given by x = C Z z of dimension 7 × 1 and the associated covariance matrix
of dimension 7 × 7 is given by
Ux = Bx Bx> = S2D + CZ S2Z CZ>.

The term S2D is the variance contribution arising from the Di while the second term is the contribution from the Zk.
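
A MATLAB sketch of the construction in D.4.2 to D.4.4 follows; the numerical values assigned to z, u(zk) and uD,i are assumptions chosen here to be consistent with the factor Bx quoted in Annex C EXAMPLE 1.

CZ = [1 0 0;
      0 1 0;
      1 1 0;
      0 0 1;
      1 0 1;
      0 1 1;
      1 1 1];                  % sensitivity of X1, ..., X7 to Z1, Z2, Z3
z  = [50; 100; 200];           % calibrated values z_k (assumed here)
uz = [0.5; 1.0; 1.0];          % u(z_k) (assumed here)
uD = 0.5 * ones(7, 1);         % u_D,i (assumed here)
SD = diag(uD);
SZ = diag(uz);
x  = CZ * z;                   % best estimates of the X_i
Bx = [SD, CZ * SZ];            % factor of dimension 7 x 10
Ux = Bx * Bx';                 % equals SD^2 + CZ*SZ^2*CZ'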

D.5 Stimulus data 2

D.5.1 The data used in Annex C EXAMPLE 2 are derived from the following measurement model related to that
described in D.4. The stimulus data xi are realizations of random variables Xi , i = 1, . . . , 7, defined in terms of
random variables Zk , k = 1, 2, 3:

X1 = Z1 ,
X2 = Z2 ,
X3 = Z1 + Z2 ,
X4 = Z3 , (D.3)
X5 = Z1 + Z3 ,
X6 = Z2 + Z3 ,
X7 = Z1 + Z2 + Z3 .

The random variables Zk have expectations zk and variances u2 (zk ), k = 1, 2, 3.

D.5.2 The uncertainties u(zk ) are propagated through the measurement model in D.5.1 to those associated with
estimates xi of Xi using the law of propagation of uncertainty in ISO/IEC Guide 98-3:2008. Following the notation
of D.4, the best estimate associated with Xi is given by x = C Z z of dimension 7 × 1 and the associated covariance
matrix of dimension 7 × 7 is given by

Ux = Bx Bx> = CZ S2Z CZ>,    Bx = CZ SZ.

In this case Ux is not invertible.


D.6 Stimulus and response data

D.6.1 Correlation, that is, non-zero covariances, associated with the measurement data xi and yi arises through
the presence of effects that are common to both.

D.6.2 Suppose X and Y can be expressed by the measurement model

X = X0 + T, Y = Y0 + T, (D.4)

where X0 , Y0 and T are independent random variables with expectations x0 , y0 and zero, and variances u2 (x0 ), u2 (y0 )
and u2 (t), respectively.

D.6.3 It follows from expression (D.4) that, by applying the law of propagation of uncertainty in ISO/IEC Guide
98-3:2008, the standard uncertainties u(xi ) and u(yi ) associated with measured values xi of X and yi of Y are given
by
u2 (xi ) = u2 (x0 ) + u2 (t), u2 (yi ) = u2 (y0 ) + u2 (t).
Moreover, the covariance associated with xi and yi is

cov(xi , yi ) = u2 (t).

D.6.4 If instead X and Y can be expressed by the measurement model

X = X0 + T, Y = Y0 − T,

the covariance associated with xi and yi is


cov(xi , yi ) = −u2 (t).
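
A MATLAB sketch of the corresponding covariance matrix for m points (under the assumption, made here for illustration, that different points are mutually independent and the x-values are ordered before the y-values as in Annex C) is:

m   = 5;                       % number of points (illustrative)
ux0 = 0.3; uy0 = 0.4;          % u(x0) and u(y0) (illustrative)
ut  = 0.2;                     % u(t), the common-effect contribution (illustrative)
Ux  = (ux0^2 + ut^2) * eye(m);
Uy  = (uy0^2 + ut^2) * eye(m);
Uxy = ut^2 * eye(m);           % cov(xi, yi) = u2(t); use -ut^2 for the model in D.6.4
U   = [Ux, Uxy; Uxy, Uy];      % ordering (x1, ..., xm, y1, ..., ym)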


Annex E
(informative)

Uncertainties known up to a scale factor

E.1 This annex describes a method to evaluate the uncertainties associated with the measurement data in the case
that those uncertainties are known only up to a scale factor.

E.2 This Technical Specification generally assumes that uncertainties associated with the measurement data are
provided. This is the situation if the quantities (variables) involved have been characterized according to the principles
of ISO/IEC Guide 98-3:2008 and ISO/IEC Guide 98-3:2008/Suppl. 1:2008 [13] in terms of probability distributions.
The measurement value is given by the expectation of the variable and its associated variance by the variance of the
variable.

NOTE In particular, regarding a measured value y as a realization of a variable characterized by a t-distribution with scale
parameter s and ν degrees of freedom (ν > 2), the standard uncertainty associated with y is given by u(y) = [ν/(ν − 2)]1/2 s,
the standard deviation of that distribution.

E.3 Since the calibration function is generally to be used in practical measurement, the evaluation of the uncer-
tainties associated with the calibration data should be as complete and rigorous as possible. The estimates of the
calibration function parameters and their associated uncertainties can then be used with confidence. A relaxation
of this consideration addressed in this Technical Specification is the case in which uncertainties are known up to a
multiplicative scaling constant. The commonest case [9, page 30] is that in which it is believed that the measured
y-values have essentially identical uncertainty, but their common standard uncertainty σ is unknown. (This is an
instance of the more general case where the covariance matrix U = σ2 U0, where U0 is given, but σ is unknown.) If
m > 2, it is possible in this regard to provide an estimate σ̂ of σ on the basis of the dispersion of the data points
about the fitted line. This estimate is known as the posterior estimate of σ, the qualification ‘posterior’ referring to
the fact that it can only be determined after a best-fit line has been obtained for the data.

E.4 The posterior estimate is determined using the same concepts as those used in model validation. By making the
further assumption that the input data are a realization of a multivariate normally distributed variable, the posterior
estimate σ̂ is chosen so that χ2obs is equal to m − 2, the expectation of the chi-squared distribution with m − 2 degrees
of freedom. No validation of the model and the data can be carried out in this case since the posterior estimate has
been chosen so that the validation criterion is automatically satisfied.

E.5 This method should therefore only be used with extreme caution. For instance, if a plot of the data indicates
that a straight-line calibration function is not appropriate, the method should not be used.

E.6 The parameter estimates a and b do not depend on the scale factor σ. An estimate of σ is only required to
evaluate the standard uncertainties u(a) and u(b) and covariance cov(a, b) associated with these estimates. For the
case of U known completely, u(a), u(b) and cov(a, b) can be evaluated from the data and U alone; no assumption about
the distributions associated with the data is necessary. With an assumption of normality, the parameter estimates
can be regarded as realizations of variables characterized by a certain bivariate distribution, as follows.

E.7 In the case where the data can be regarded as a realization of a multivariate normal distribution with known
covariance matrix U , the bivariate distribution in E.6 is normal with covariance matrix Ua with elements u2 (a), u2 (b)
and covariance cov(a, b) as in expression (1).

E.8 For the case in E.4 where the multivariate normal distribution has covariance matrix U = σ2 U0, where U0 is
known and σ is unknown, U0 is used in place of U in the calculations. The covariance matrix

Ua,0 = [ u20(a)        cov0(a, b)
         cov0(b, a)    u20(b)     ]

associated with the estimates of the straight-line calibration parameters can be calculated. If m > 2, the observed
chi-squared value (see 6.3) can be used to provide a posterior estimate of the scale factor associated with the input
uncertainties. Let χ2obs be calculated as in step 8 in 6.3, and set

σ̂2 = χ2obs/(m − 2).

E.9 The scale-adjusted covariance matrix

Ûa = [ û2(a)        ĉov(a, b)
       ĉov(b, a)    û2(b)     ]

is then given by

Ûa = σ̂2 Ua,0,

that is, the scale-adjusted standard uncertainties û(a) and û(b) and covariance ĉov(a, b) associated with the fitted
parameters are given by

û2(a) = σ̂2 u20(a),    û2(b) = σ̂2 u20(b),    ĉov(a, b) = σ̂2 cov0(a, b).    (E.1)

E.10 The estimates (E.1) are based on the fit to a finite number m of data points and for small m will underestimate
the variance of the distribution for the fitted parameters. For m > 4, a better estimate is determined [19, chapter 8]
using

σ̃2 = [(m − 2)/(m − 4)] × χ2obs/(m − 2) = χ2obs/(m − 4).

NOTE Under the assumption of normality, the parameter estimates are then associated with a bivariate t-distribution with
scale matrix Ûa and m − 2 degrees of freedom. For m > 4, the covariance matrix of that distribution is given by

Ũa = [ ũ2(a)        c̃ov(a, b)
       c̃ov(b, a)    ũ2(b)     ] = [(m − 2)/(m − 4)] Ûa = σ̃2 Ua,0,    (E.2)

where the inflating factor (m − 2)/(m − 4) accounts for the fact that σ is being estimated rather than known in advance.


EXAMPLE (UNKNOWN WEIGHTS) In this example, the xi are taken to be exact and the yi to have equal but unknown
standard uncertainties, and a posterior estimate of the uncertainties associated with the fitted parameters is evaluated from the
residuals of the fit. The fit is determined by taking the weights equal to unity (implying that the standard uncertainties u(yi )
are also nominally equal to unity). The data are given in Table E.1.

Table E.1 — Data representing six measurement points, with weights set to unity

xi yi u(yi )
1,000 3,014 1
2,000 5,225 1
3,000 7,004 1
4,000 9,061 1
5,000 11,201 1
6,000 12,762 1

The best fit straight-line parameters are calculated as in Table E.2. From the table, g0 = 21,000/6,000 = 3,500,
h0 = 48,267/6,000 = 8,044, b = 34,363/17,500 = 1,964 and a = 8,044 − (1,964)(3,500) = 1,172.

Table E.2 — Calculation tableau associated with the data in Table E.1

wi wi2 wi2 xi wi2 yi gi hi gi2 gi hi ri ri2


3,500 8,044 a = 1,172
1,000 1,000 1,000 3,014 −2,500 −5,031 6,250 12,576 −0,122 0,015
1,000 1,000 2,000 5,225 −1,500 −2,819 2,250 4,229 0,126 0,016
1,000 1,000 3,000 7,004 −0,500 −1,040 0,250 0,520 −0,059 0,003
1,000 1,000 4,000 9,061 0,500 1,017 0,250 0,508 0,035 0,001
1,000 1,000 5,000 11,201 1,500 3,157 2,250 4,735 0,211 0,045
1,000 1,000 6,000 12,762 2,500 4,718 6,250 11,794 −0,191 0,037
6,000 21,000 48,267 17,500 34,363 b = 1,964 0,116

The data and fitted straight-line calibration function are graphed in Figure E.1. The weighted residuals are illustrated in
Figure E.2. Because the u(yi ) are arbitrarily given the value unity, in this case the uncertainty bars greatly exceed the residuals
in magnitude.

Figure E.1 — Data in Table E.1 and fitted straight-line calibration function obtained in Table E.2

If it were known a priori that u(yi ) = 1, i = 1, . . . , m, then uncertainties associated with the fitted parameters would be
calculated from the information in Table E.2:

u2 (a) = 1/6,000 + (3,500)2 /17,500, so that u(a) = 0,931;


u2 (b) = 1/17,500, so that u(b) = 0,239;
cov(a, b) = −3,500/17,500 = −0,200.

Figure E.2 — Weighted residuals calculated using the fitted straight-line calibration function obtained in
Table E.2

Because these calculations are based on the arbitrary assignment u(yi) = 1, the posterior estimate σ̂ of u(yi) is required in
order to evaluate the uncertainties associated with the fitted parameters. From the table,

σ̂2 = χ2obs/(m − 2) = 0,116/4 = 0,029, or σ̂ = 0,171.

This value of σ̂ represents an estimate of the standard uncertainties u(yi) associated with the yi based on the observed chi-
squared value. Given this posterior estimate, the calculations can be repeated with u(yi) = 0,171. The estimates for a and b
would be unchanged, but the observed chi-squared value and uncertainties would be scaled as follows:

ri := ri/σ̂,

so that χ2obs/σ̂2 = m − 2 = 4, the expectation of the chi-squared distribution with 4 degrees of freedom. Using formulæ (E.1),
û2(a) = σ̂2 u20(a) = 0,867σ̂2 = 0,025, so that û(a) = 0,931σ̂ = 0,159;
û2(b) = σ̂2 u20(b) = 0,057σ̂2 = 0,002, so that û(b) = 0,239σ̂ = 0,041;
ĉov(a, b) = σ̂2 cov0(a, b) = −0,200σ̂2 = −0,006.

The elements of Ûa are those that would be evaluated if it were known, a priori, that u(yi) = σ̂. However, σ̂ is an estimate of
the standard uncertainties associated with the yi. For m > 4, an inflation factor of (m − 2)/(m − 4) can be incorporated into
the covariance matrix to account for the additional uncertainty that arises from the fact that σ̂ is an estimate derived from m
data points. Using formula (E.2),

Ũa = [(m − 2)/(m − 4)] Ûa = 2 [ 0,025   −0,006        [ 0,050   −0,012
                                −0,006    0,002  ]  =   −0,012    0,003 ],

so that ũ(a) = (0,050)1/2 = 0,225, ũ(b) = (0,003)1/2 = 0,058 and c̃ov(a, b) = −0,012.
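
The scale-factor adjustment in this example can be reproduced with the following MATLAB sketch, using the values quoted above (a sketch only; the variable names are chosen here):

m = 6;
chi2_obs = 0.116;                          % observed chi-squared value from Table E.2
Ua0 = [0.867, -0.200; -0.200, 0.057];      % covariance matrix obtained with u(yi) = 1
sigma2_hat = chi2_obs / (m - 2);           % posterior estimate of sigma^2 (see E.8)
Ua_hat = sigma2_hat * Ua0;                 % scale-adjusted covariance matrix, formula (E.1)
Ua_tilde = (m - 2) / (m - 4) * Ua_hat;     % inflated covariance matrix, formula (E.2)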


Annex F
(informative)

Software implementation of described algorithms

F.1 Software implementing the algorithms described in this Technical Specification for determining and using
straight-line calibration functions has been developed by the National Physical Laboratory (NPL) in the United
Kingdom. The software is available as a compressed ZIP folder from the web sites of NPL at www.npl.co.uk/
mathematics-scientific-computing/software-support-for-metrology/software-downloads-(ssfm) and the
International Organization for Standardization at standards.iso.org/iso/ts/28037/.

F.2 Software, developed in the MATLAB programming language [18], is provided in the form of M-files and html
files published using MATLAB Version 7.10.0 (R2010a). For users of MATLAB, the M-files may be run directly and
also modified to run the algorithms for different measurement data. For users who do not have access to MATLAB, the
software is best viewed as the provided html files. The software may be used as the basis for preparing implementations
of the algorithms in other programming languages. Within the files, calls are made to a number of MATLAB functions
that are also included with the software. For example, the function algm_gdr1_steps_2_to_5 implements steps 2 to 5 of
the calculation procedure for the case 5.3.2 b) (uncertainties are associated with the measured values xi and yi and all
covariances associated with the data are regarded as negligible) specified in 7.2.1. In addition, some use of MATLAB
built-in functions is made, such as for obtaining the Cholesky factorization of a matrix. MATLAB scripts (having
extension ‘.m’) and html files (‘.html’) are provided as follows:

— TS28037_WLS1 runs the numerical example of weighted least squares (WLS) with known equal weights described
in Clause 6, and performs the prediction described in 11.1 EXAMPLE 1 and forward evaluation described in 11.2;

— TS28037_WLS2 runs the numerical example of weighted least squares (WLS) with known unequal weights described
in Clause 6 and performs the prediction described in 11.1 EXAMPLE 2;

— TS28037_WLS3 runs the numerical example of weighted least squares (WLS) with unknown equal weights described
in Annex E;

— TS28037_GDR1 runs the numerical example of generalized distance regression (GDR) described in Clause 7;

— TS28037_GDR2 runs a numerical example to illustrate the algorithm for generalized distance regression (GDR)
described in Clause 8;

— TS28037_GMR runs the numerical example of Gauss-Markov regression (GMR) described in Clause 9;

— TS28037_GGMR1 runs the numerical example of generalized Gauss-Markov regression (GGMR) described in
Clause 10;

— TS28037_GGMR2 runs the numerical example of GGMR described in Clause 10 and Annex C EXAMPLE 1 using
the orthogonal factorization approach described in C.2;

— TS28037_GGMR3 runs the numerical example of GGMR described in Annex C EXAMPLE 2 using the orthogonal
factorization approach described in C.2.

While prediction and forward evaluation are implemented only in the scripts that solve WLS problems, the MATLAB
code corresponding to these uses of the calibration function may be copied and pasted into any of the provided scripts.

F.3 The software should be used in conjunction with this Technical Specification. It is strongly recommended that
users study this Technical Specification before running the software.

F.4 The software is provided with a software licence agreement (REF: MSC/L/10/001) and the use of the software
is subject to the terms laid out in that agreement. By running the MATLAB code, the user accepts the terms of the
agreement. Enquiries about the software should be directed to NPL at [email protected].


Annex G
(informative)

Glossary of principal symbols

A intercept of the straight-line calibration function

A∗ unknown value of A for a particular measuring system

a estimate of A

a vector (a, b)> of parameter estimates

B slope of the straight-line calibration function

B∗ unknown value of B for a particular measuring system

b estimate of B

cov(a, b) covariance associated with a and b

di xi − Xi∗ , a realization of a random variable with expectation zero and variance u2 (xi )

ei yi − Yi∗ , a realization of a random variable with expectation zero and variance u2 (yi )

L lower-triangular matrix

m number of measured points

ri weighted residual or weighted distance for the ith data point in terms of a and b

Ri weighted residual or weighted distance for the ith data point expressed algebraically in terms of A and
B

U covariance matrix of dimension 2m × 2m associated with measurement data (xi , yi ), i = 1, . . . , m

Ua covariance matrix of dimension 2 × 2 associated with a

Ux covariance matrix of dimension m × m associated with measurement data xi , i = 1, . . . , m

Uy covariance matrix of dimension m × m associated with measurement data yi , i = 1, . . . , m

uR standard deviation of random variable with distribution encoding knowledge of a random effect

uS standard deviation of random variable with distribution encoding knowledge of a systematic effect

u(z) standard uncertainty associated with z, with z denoting a, b, xi , yi , etc.

vi reciprocal of u(xi )

wi reciprocal of u(yi )

X independent (stimulus) variable

Xi ith independent (stimulus) variable


Xi∗ unknown value of the ith independent (stimulus) variable provided by a measuring system

x estimate of X (in the case of prediction) or measured value of X (forward evaluation)

xi ith measured value of X

x∗i estimate of ith independent (stimulus) variable

Y dependent (response) variable

Yi ith dependent (response) variable

Yi∗ unknown value of the ith dependent (response) variable provided by a measuring system

y measured value of Y (in the case of prediction) or estimate of Y (forward evaluation)

yi ith measured value of Y

yi∗ estimate of ith dependent (response) variable

ν degrees of freedom of a model, a chi-squared distribution or a t-distribution

σ standard deviation of a random variable characterized by a probability distribution

σ̂ posterior estimate of σ

χ2obs observed chi-squared value

χ2ν chi-squared distribution with ν degrees of freedom


Bibliography

[1] Anderson, E., Bai, Z., Bischof, C. H., Blackford, S., Demmel, J., Dongarra, J. J., Du Croz, J.,
Greenbaum, A., Hammarling, S., McKenney, A., and Sorensen, D. C. LAPACK Users' Guide, 3rd
ed., SIAM, Philadelphia, PA, 1999, http://www.netlib.org/lapack/lug/

[2] Bartholomew-Biggs, M., Butler, B. P., and Forbes, A. B. Optimisation algorithms for generalised
regression on metrology, In Advanced Mathematical and Computational Tools in Metrology IV (Singapore,
2000), P. Ciarlini, A. B. Forbes, F. Pavese, and D. Richter, Eds., World Scientific, pp. 21–31

[3] Boggs, P. T., Byrd, R. H., and Schnabel, R. B. A stable and efficient algorithm for nonlinear orthogonal
distance regression, SIAM J. Sci. Stat. Comput. 8, 6 (1987), 1052–1078

[4] Butler, B. P., Cox, M. G., Ellison, S. L. R., and Hardcastle, W. A., Eds., Statistics Software
Qualification: Reference Data Sets, Royal Society of Chemistry, Cambridge, 1996

[5] Carroll, R. J., Ruppert, D., and Stefanski, L. A. Measurement error in nonlinear models, Chapman &
Hall/CRC, Boca Raton, 1995

[6] Cox, M. G., Forbes, A. B., Harris, P. M., and Smith, I. M. The classification and solution of regression
problems for calibration, Tech. Rep. CMSC 24/03, National Physical Laboratory, Teddington, UK, 2003

[7] Draper, N. R., and Smith, H. Applied Regression Analysis, Wiley, New York, 1998, Third edition

[8] Forbes, A. B., Harris, P. M., and Smith, I. M. Generalised Gauss-Markov regression, In Algorithms for
Approximation IV (Huddersfield, UK, 2002), J. Levesley, I. Anderson, and J. C. Mason, Eds., University of
Huddersfield, pp. 270–277

[9] Fuller, W. A. Measurement Error Models, Wiley, New York, 1987

[10] Golub, G. H., and Van Loan, C. F. Matrix Computations, North Oxford Academic, Oxford, 1983

[11] ISO 3534-1:2006, Statistics – Vocabulary and symbols – Part 1: General statistical terms and terms used in
probability

[12] ISO 3534-2:2006, Statistics – Vocabulary and symbols – Part 2: Applied statistics

[13] ISO/IEC Guide 98-3/Suppl. 1, Uncertainty of measurement – Part 3: Guide to the expression of uncertainty
in measurement (GUM:1995) – Supplement 1: Propagation of distributions using a Monte Carlo method

[14] ISO 11095:1996, Linear calibration using reference materials

[15] Kendall, M. G., and Stuart, A. The Advanced Theory of Statistics, Volume 2: Inference and Relationship.
Charles Griffin, London, 1961.

[16] Kukush, A., and Van Huffel, S. Consistency of elementwise-weighted total least squares estimator in a
multivariate errors-in-variables model AX = B, Metrika 59, 1 (February 2004), 75–97

[17] Mardia, K. V., Kent, J. T., and Bibby, J. M. Multivariate Analysis, Academic Press, London, 1979

[18] MATLAB, http://www.mathworks.com/products/matlab/

[19] Migon, H. S. and Gamerman, D. Statistical Inference: An Integrated Approach, Arnold, London, 1999

[20] Paige, C. C. Fast numerically stable computations for generalized least squares problems, SIAM J. Numer.
Anal. 16 (1979), 165–171

[21] Strang, G., and Borre, K. Linear Algebra, Geodesy and GPS, Wiley, Wellesley-Cambridge Press, 1997


ICS 03.120.30
Price based on 63 pages
