Error Types and Error Propagation

For N measurements xᵢ with mean x̄, the standard deviation is

σ = sqrt[ Σᵢ (xᵢ − x̄)² / (N − 1) ].

The variance is the square of the standard deviation: v = σ².
B.1.3. Standard Error
The standard error, or error in the mean, is

σ_m = σ / √N .
This quantity is also referred to as the standard deviation of the mean, because it is an estimate of the standard deviation of the distribution of means that would be obtained if the mean were measured many times. Taking more measurements of a given quantity might not improve the standard deviation, but it should make the standard error smaller (scaling it as N^(−1/2)).
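As a concrete illustration, here is a short Python sketch (the data values are invented for the example) that computes the mean, standard deviation, and standard error of a set of repeated measurements:

import math

# Hypothetical repeated measurements of the same quantity (cm)
x = [5.31, 5.34, 5.32, 5.36, 5.33, 5.30, 5.35, 5.33]

N = len(x)
mean = sum(x) / N

# Standard deviation (N - 1 in the denominator, as above)
sigma = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (N - 1))

# Standard error = standard deviation of the mean
sigma_m = sigma / math.sqrt(N)

print(f"mean = {mean:.3f} cm, sigma = {sigma:.3f} cm, standard error = {sigma_m:.3f} cm")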
So how do you know whether the standard deviation or the standard error is the more important quantity? It depends on the question. If you want to know where a measurement is likely to fall compared to the mean value, the standard deviation tells you this. On the other hand, if you want to know how well you have determined the average value itself, you need to find the standard error (standard deviation of the mean). You will usually, but not always, be most interested in the latter.
As an example, let's say that everyone in the class is asked to take 20 measurements of the height of a certain lab TA. Each student can determine his or her own average value and standard deviation. The standard error will be an indication of the spread in the average values reported by all the students. It should be our best overall estimate of how well we know the TA's height.
One could also combine all the readings from all of the students into one large file and calculate its mean and standard deviation. These might be very similar to the values reported by individual students and have a similar spread in values. Only when you consider the new standard error would you realize that the measurement really has been improved by adding a lot more data.
B.1.4. Two Variables
We can readily extend the concept of the standard deviation to the measurement of two variables, where our N measurements of x and y are to be compared to the function y = f(x). The standard deviation for such measurements would be defined as

σ = sqrt[ Σᵢ (yᵢ − f(xᵢ))² / (N − m) ],

where m is the number of free parameters determined from the data. For a linear relation, with the intercept and slope determined from the data by a least-squares fit, m = 2.
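A minimal Python sketch of this calculation for a straight-line fit (the data values here are illustrative assumptions, not numbers from the text):

import math

# Hypothetical (x, y) measurements
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 2.1, 3.9, 6.2, 7.9]

N = len(x)
m = 2  # free parameters for a linear fit: slope and intercept

# Least-squares fit y = a + b*x
xbar = sum(x) / N
ybar = sum(y) / N
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

# Standard deviation of the residuals, with N - m in the denominator
sigma = math.sqrt(sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (N - m))
print(f"slope = {b:.3f}, intercept = {a:.3f}, sigma = {sigma:.3f}")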
B.2. Error Estimates
When we have made only a few observations, the laws of probability are not applicable to the determination of uncertainties. The number of observations in a student laboratory may be too small to justify using the standard deviation to estimate the uncertainty in a measurement. However, it is usually possible from an inspection of the measuring instruments to set limits on the range in which the true value is most likely to lie.
Consider a ruler graduated in centimeters with fine rulings in millimeters, as shown in Figure 2 (with only the 5 and 6 cm marks visible). We wish to determine the position of the arrow. We are certain the arrow is between 5.3 and 5.4 cm, and we should be able to estimate its position to a fraction of a division. A reasonable estimate might be 5.34 ± 0.02 cm. (In reading most scales we should attempt to estimate some fraction of the smallest division, usually between one half and one tenth of the scale division.) Uncertainties estimated in this way are referred to as external errors, i.e., estimating the uncertainties requires additional steps beyond making the measurements.
For a complete uncertainty analysis, both internal errors and external errors should be calculated, and checks should be made that the results are consistent. In our experiments students will usually be instructed to choose one particular method or the other.
C. Error Propagation
In many cases, the quantity that we wish to determine is derived from several measured quantities. For example, suppose that we have measured the quantities t ± δt and y ± δy (the δ's refer to the relatively small uncertainties in t and y). We have determined that

y = 5.32 ± 0.02 cm
t = 0.103 ± 0.001 s .    (1)

We can find g from the relation

g = g(y,t) = 2y/t² ,    (2)

which yields g = 10.02922 m/s². To find the uncertainty in g caused by the uncertainties in y and t, we consider separately the contribution due to the uncertainty in y and the contribution due to the uncertainty in t.
[Figure 2: Example ruler reading.]
Each contribution may be considered separately so long as the variables y and t are independent of each other. We denote the contribution due to the uncertainty in y by the symbol δg_y (read as "delta-g-y" or "uncertainty in g due to y"). The total error in g is obtained by combining the individual contributions in quadrature:

δg = sqrt[ (δg_y)² + (δg_t)² ] .    (3)

The basis of the quadrature addition is an assumption that the measured quantities have a Gaussian distribution about their mean values. (Distribution functions are described in Appendix VI.) When two (or more) independent Gaussians are added, the width of the new, combined distribution of values is given by this same quadrature rule. The δ's describe the width of this distribution. This rule is the same as the rule for adding the lengths of vectors that are independent (i.e., at right angles to each other).
This quadrature addition may be used when the function depends on more than two measured quantities. For a function f(a, b, ..., z),

δf = sqrt[ (δf_a)² + (δf_b)² + ... + (δf_z)² ] .    (4)

However, rather than blindly applying this formula, you may avoid needless computation by estimating separately the contribution to the uncertainty in the result from each of the individual variables and ignoring any terms that are much smaller than the largest terms. Because we add the squares of the individual contributions, relatively small terms have a very small effect on the total uncertainty.
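For instance (with made-up fractional contributions of 2% and 0.5%, chosen just to illustrate the point), the smaller term barely changes the quadrature sum:

\sqrt{(0.02)^2 + (0.005)^2} \approx 0.0206 \approx 0.02 .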
There are two methods by which one may calculate δg_y and δg_t, the contributions of y and t to the uncertainty in g. In each case, the basic idea is to determine by how much g would change if y (or t) were changed by its uncertainty.
C.1. Derivative Method
The variation of a function f with respect to a variable x is equivalent to taking the first term in the Taylor series expansion of f with respect to x:

δf_x = (∂f/∂x) δx .    (5)

The derivative in Eq. 5 (∂f/∂x) is a partial derivative. You may not have encountered partial derivatives yet in your math class. Simply put, when taking a partial derivative with respect to one variable, treat any other variables as constants.

Since each uncertain variable will increase, not decrease, the final uncertainty, we will usually quote the uncertainty in f due to the uncertainty in x as the absolute value of Eq. 5, i.e.,

δf_x = |∂f/∂x| δx .    (5a)
The individual contribution to the uncertainty in f from a measured uncertainty in x is the product of the uncertainty in x with the partial derivative of f with respect to x. The total error in f is obtained by combining the individual contributions in quadrature, as given in Eq. 4.
For the example given in Eq. 2, the corresponding formulae are

δg_t = |∂g/∂t| δt = |−4y/t³| δt    (6)
δg_y = |∂g/∂y| δy = |2/t²| δy    (7)
so that

δg = sqrt[ (4y/t³)² (δt)² + (2/t²)² (δy)² ] .    (8)

Substituting in the values from Eq. 1 yields a final answer:

δg = sqrt[ (4 × 5.32/0.103³)² (0.001)² + (2/0.103²)² (0.02)² ] = 19.8 cm/s² .
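A short Python sketch of this derivative-method calculation (using the values from Eq. 1; the variable names are my own):

import math

y, dy = 5.32, 0.02     # cm
t, dt = 0.103, 0.001   # s

g = 2 * y / t**2                      # Eq. 2, in cm/s^2

dg_t = abs(-4 * y / t**3) * dt        # Eq. 6
dg_y = abs(2 / t**2) * dy             # Eq. 7
dg = math.hypot(dg_y, dg_t)           # quadrature sum, Eqs. 3 and 8

print(f"g  = {g / 100:.5f} m/s^2")    # ~10.029 m/s^2
print(f"dg = {dg / 100:.3f} m/s^2")   # ~0.198 m/s^2 (19.8 cm/s^2)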
C.2. Computational Method
Equations 4 and 5 represent the beginning of the formal method of error propagation. It is often a good estimate if we instead calculate the variations directly, thereby avoiding the need to take derivatives. We can approximate Eq. 5a by a finite difference such as

δf_x = |f(x + δx, y, ...) − f(x, y, ...)| .    (9)
Consider Eq. 2. We replace Eqs. 6 and 7 by

δg_t = |g(t + δt, y) − g(t, y)|    (10)
     = |2y/(t + δt)² − 2y/t²|

and

δg_y = |g(t, y + δy) − g(t, y)|    (11)
     = |2(y + δy)/t² − 2y/t²| .
We again apply Equation 3 to obtain the total uncertainty in g. (For δt << t, Equations 6 and 10 are equivalent, and for δy << y, Equations 7 and 11 are equivalent.)
Note that in both methods, it is essential that the variations be performed separately and that the results be added in quadrature.
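The same example by the computational method, as a Python sketch (again with my own variable names), closely agrees with the derivative-method result:

import math

def g(y, t):
    return 2 * y / t**2   # Eq. 2

y, dy = 5.32, 0.02     # cm
t, dt = 0.103, 0.001   # s

dg_t = abs(g(y, t + dt) - g(y, t))   # Eq. 10
dg_y = abs(g(y + dy, t) - g(y, t))   # Eq. 11
dg = math.hypot(dg_y, dg_t)          # quadrature, Eq. 3

print(f"dg = {dg / 100:.3f} m/s^2")  # ~0.196 m/s^2 (derivative method gave ~0.198)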
C.3. Simple Error Propagation
Often you will simply add, subtract, multiply, or divide measured values, and it is helpful to know how to quickly calculate the associated errors.
Addition & Subtraction
If g = g(y,t) = y + t, then δg_y = δy and δg_t = δt, so that

(δg)² = (δg_y)² + (δg_t)² = (δy)² + (δt)² .
Stated more simply, if you are adding two values, simply add their associated uncertainties in quadrature to obtain the uncertainty in the sum. The same principle holds if you are subtracting two numbers; add their uncertainties in quadrature to find the uncertainty in the difference. There are no negative uncertainties; uncertainties are always positive numbers and always add.
Note also that if you are adding a constant, such as 1, or a quantity with a very small uncertainty to some other quantity, the result above shows that the final uncertainty is simply the uncertainty in that other quantity.
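A quick worked example (the two lengths and their uncertainties are invented for illustration): adding a = (10.0 ± 0.3) cm and b = (4.0 ± 0.4) cm,

\delta(a + b) = \sqrt{(0.3)^2 + (0.4)^2}\ \text{cm} = 0.5\ \text{cm}, \qquad a + b = (14.0 \pm 0.5)\ \text{cm} .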
Multiplication
If g = g(y,t) = y·t, then δg_y = t·δy and δg_t = y·δt, so that

(δg)² = (δg_y)² + (δg_t)² = t²(δy)² + y²(δt)² .
Division
If g = g(y,t) = y/t, then δg_y = δy/t and δg_t = (y/t²) δt, so that

(δg)² = (δg_y)² + (δg_t)² = (1/t)²(δy)² + (y/t²)²(δt)² .
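A small Python helper (my own sketch, not from the text) that applies these quadrature rules directly:

import math

def add_err(da, db):
    """Uncertainty in a + b or a - b."""
    return math.hypot(da, db)

def mul_err(a, da, b, db):
    """Uncertainty in a * b."""
    return math.hypot(b * da, a * db)

def div_err(a, da, b, db):
    """Uncertainty in a / b."""
    return math.hypot(da / b, a * db / b**2)

# Example with invented values: area of a rectangle
w, dw = 2.50, 0.02   # cm
h, dh = 4.00, 0.05   # cm
area = w * h
darea = mul_err(w, dw, h, dh)
print(f"area = {area:.2f} +/- {darea:.2f} cm^2")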
It's also worth pointing out that fractional or percentage uncertainties in multiplication and division behave much like absolute uncertainties in addition. In other words, if g = y·t,

(δg/g)² = (δ(yt)/(yt))² = (δy/y)² + (δt/t)² ,

with the same result holding if g = y/t.
If either y or t is a constant or has a relatively small fractional uncertainty, then it can be ignored and the total uncertainty is just due to the remaining term.
Furthermore, if one of the measured quantities is raised to a power, the fractional uncertainty due to that quantity is merely multiplied by that power before adding the result in quadrature. For our original example of g = 2y/t²,

(δg/g)² = [(2/t²) δy / (2y/t²)]² + [(4y/t³) δt / (2y/t²)]² = (δy/y)² + (2 δt/t)² .
For the values in our example, δy/y = 0.5% and δt/t = 1% (so 2δt/t = 2%), so we can see that the contribution from the uncertainty in y is negligible compared to the contribution from t. We can therefore conclude that the fractional uncertainty in our measured result for g is about 2%:

g = 10.0 ± 0.2 m/s² .
D. Significant Figures
Significant figures are those figures about which there exists no or very little uncertainty. In the example illustrated in Figure 2, the 5 and 3 are known exactly, while the 2 is known to some degree of certainty. Thus, the number has 3 significant figures. Care should be taken to distinguish between significant figures and decimal places. The scale reading could have been expressed as 0.0532 m, but it would still have 3 significant figures. The zeros preceding the 5 are place markers and are not significant figures. On the other hand, quoting the result as 5.320 cm would imply that the 2 is well known while the trailing 0 is also known, but with some degree of uncertainty. A trailing zero after the decimal point is thus considered to be significant.
Reporting Results (Measurement Intervals)
It is important to report your results with the correct number of significant figures. Suppose you have obtained from your calculations

g = 9.98328 m/s² with δg = 0.067695 m/s².
Begin by rounding the uncertainty in your result to one significant figure (or possibly 2), i.e., δg = 0.07 m/s². (Since the uncertainty only tells you how well you have measured your result, it doesn't make sense to quote an uncertainty to more than one or two significant figures.) Then quote g to the same number of decimal places, i.e.,

g = (9.98 ± 0.07) m/s² ,
not

g = (9.98328 ± 0.067695) m/s²

nor

g = (9.98328 ± 0.07) m/s²

nor

g = (10 ± 0.07) m/s² .
The correctly quoted example above, g = (9.98 ± 0.07) m/s², has a form sometimes referred to as a "measurement interval." If you are using scientific notation, always use the same power of 10 for both the quantity and its uncertainty. For example, quote

h = (6.4 ± 0.3) × 10⁻³⁴ J·s,

not

h = (6.4 × 10⁻³⁴) J·s ± (3 × 10⁻³⁵) J·s.

You must include the units.
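A small Python sketch (my own helper, not part of the text) that rounds an uncertainty to one significant figure and the value to the matching decimal place:

import math

def report(value, uncertainty):
    """Format value +/- uncertainty with the uncertainty rounded
    to one significant figure and the value rounded to match."""
    # Decimal place of the leading digit of the uncertainty
    exponent = math.floor(math.log10(abs(uncertainty)))
    decimals = max(0, -exponent)
    u = round(uncertainty, -exponent)
    v = round(value, -exponent)
    return f"{v:.{decimals}f} +/- {u:.{decimals}f}"

print(report(9.98328, 0.067695))   # prints: 9.98 +/- 0.07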