
Florida Institute of Technology © 2020 by J. Gering

Appendix C
Error Propagation And Comparison

Introduction

Error propagation has been a fixture in engineering and science for decades. Often it is used as
part of a failure analysis. It can also be used during the design process to determine the tolerances
of the parts of a system in order to avoid a system failure. Consequently, it is sometimes used in
lawsuits and liability claims when a structure or system fails catastrophically.

Typically you will calculate a final result from several measurements, and compare this result to
a theoretical prediction. But even if the theory is valid and careful measurements were made, the
experimental result will differ slightly from the prediction. For the moment, we ignore any error
in the theoretical value. Then the experiment is a success if the discrepancy between theory and
experiment is less than the total error in the final result. The total error in the final result is
determined by a specific method of combining the random errors in the quantities used to
calculate the final result. This method is called error propagation.

Derivation of the Error in a Final Result

We take the final result f as a function of two measured quantities, x and y, so we can write
f = f(x,y). Perhaps we measure the length and width of a rectangle. The product of the average
length and width gives the rectangle’s area. We want to calculate the error in this area due to the
errors in the length and width. Our goal is to determine the (square of the) standard deviation of
this function f.
\sigma_f^2 = \frac{1}{N} \sum_{i=1}^{N} \left( f_i - \bar{f} \right)^2     (1)

We assume the difference between any one value of f and its average is a sum of the
corresponding differences in each measured value, weighted by the rate at which a small change
in each measured quantity affects the final result. This weighting factor is the derivative of the
function with respect to each measured quantity. In other words, we perform a first-order Taylor
series expansion in Eqn. (2). Substituting Eqn. (2) into Eqn. (1) gives Eqn. (3).

f_i - \bar{f} \;\cong\; \frac{df}{dx}\left( x_i - \bar{x} \right) + \frac{df}{dy}\left( y_i - \bar{y} \right)     (2)

\sigma_f^2 = \frac{1}{N} \sum_{i=1}^{N} \left[ \left( x_i - \bar{x} \right)^2 \left( \frac{df}{dx} \right)^2 + \left( y_i - \bar{y} \right)^2 \left( \frac{df}{dy} \right)^2 + 2\left( x_i - \bar{x} \right)\left( \frac{df}{dx} \right)\left( y_i - \bar{y} \right)\left( \frac{df}{dy} \right) \right]     (3)

The last term in Eqn. (3) represents the degree to which fluctuations in both x and y together affect σf.
If x and y are independent measured quantities, then these sorts of correlations do not occur.
Usually this is the case, so the correlation term will be neglected.
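(In statistical language, the neglected piece is a covariance term: writing \sigma_{xy} = \frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y}), the last term of Eqn. (3) is 2\,\sigma_{xy}\,(df/dx)(df/dy), and for independent x and y this average tends to zero.)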


\sigma_f^2 = \frac{1}{N} \sum_{i=1}^{N} \left( x_i - \bar{x} \right)^2 \left( \frac{df}{dx} \right)^2 + \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \bar{y} \right)^2 \left( \frac{df}{dy} \right)^2     (4)

Recognizing the squares of the standard deviations of x and y gives Eqn. (5). This is the primary
equation for propagating errors. When three or more measured quantities go into a calculation, more
terms are added to the right-hand side of Eqn. (5).

\sigma_f^2 = \sigma_x^2 \left( \frac{df}{dx} \right)^2 + \sigma_y^2 \left( \frac{df}{dy} \right)^2     (5)
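For instance, writing out the case mentioned above, if a third measured quantity z with error σz enters the calculation, Eqn. (5) simply gains one more term:

\sigma_f^2 = \sigma_x^2 \left( \frac{df}{dx} \right)^2 + \sigma_y^2 \left( \frac{df}{dy} \right)^2 + \sigma_z^2 \left( \frac{df}{dz} \right)^2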

In practice, each measured quantity (x or y) is usually an average of several measurements. The
σx's and σy's are either estimated uncertainties or the standard deviations of those measurements.
Next we offer examples of how Eqn. (5) is used. You must use these equations in your data
analysis unless the laboratory manual states that a less sophisticated error analysis will suffice.
Though some texts suggest that systematic errors can be treated in the same way, this is a
debatable point. For our purposes, error propagation is only valid for random errors.
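To make Eqn. (5) concrete, here is a short Python sketch. It is an illustration only, not part of the laboratory procedure; the function name prop_error, the step size h, and the sample numbers are arbitrary choices. It approximates the derivatives numerically, so it works for any smooth f(x, y).

import math

def prop_error(f, x, y, sigma_x, sigma_y, h=1e-6):
    """Propagate random errors through f(x, y) using Eqn. (5).

    x and y are the averages; sigma_x and sigma_y are their errors.
    The derivatives df/dx and df/dy are approximated by central differences.
    """
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return math.sqrt((sigma_x * dfdx) ** 2 + (sigma_y * dfdy) ** 2)

# Example: area of a rectangle with length 2.55 +/- 0.02 cm and width 1.30 +/- 0.02 cm
area = lambda length, width: length * width
sigma_A = prop_error(area, 2.55, 1.30, 0.02, 0.02)
print(f"A = {2.55 * 1.30:.3f} +/- {sigma_A:.3f} cm^2")   # A = 3.315 +/- 0.057 cm^2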

Example: Subtraction

If f = x − y, then df/dx = 1 and df/dy = −1. Since these derivatives are squared, the minus
sign has no effect, and the same equation results for addition. So Eqn. (5) gives a single rule for
addition and subtraction: the uncertainties add in quadrature (i.e. one calculates the square root
of the sum of the squares).

\sigma_f = \sqrt{ \sigma_x^2 + \sigma_y^2 }     (6)
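For instance (the numbers here are purely illustrative), if x = 10.0 ± 0.3 and y = 4.0 ± 0.4, then f = x − y = 6.0 and

\sigma_f = \sqrt{(0.3)^2 + (0.4)^2} = 0.5

so f = 6.0 ± 0.5. Note that the combined error is less than the simple sum 0.3 + 0.4 = 0.7; that is the effect of adding in quadrature.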

Example: Multiplication

If f = xy, then df/dx = y and df/dy = x, so Eqn. (5) gives σf² = y²σx² + x²σy².
This can be put into a more convenient form by dividing throughout by f² = x²y² to obtain

\left( \frac{\sigma_f}{f} \right)^2 = \left( \frac{\sigma_x}{x} \right)^2 + \left( \frac{\sigma_y}{y} \right)^2     (7)

Multiplying each fractional error in Eqn. (7) by 100 gives a percent error. For division, the same
equation is found. So, for multiplication or division, percent uncertainties add in quadrature.

\%\sigma_f = \sqrt{ \left( \%\sigma_x \right)^2 + \left( \%\sigma_y \right)^2 }     (8)
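As a purely illustrative example, suppose a rectangle's length is 2.55 ± 0.02 cm (a 0.8% error) and its width is 1.30 ± 0.02 cm (a 1.5% error). The area is 3.32 cm², and Eqn. (8) gives

\%\sigma_A = \sqrt{(0.8)^2 + (1.5)^2} \approx 1.7\%

or about ±0.06 cm², the same result as applying Eqn. (5) directly.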


A Table of Error Propagation Formulas

Use the following formulas to compute the propagated error in a function f of two independent
variables (x and y). Here a is a constant and n is an integer. The errors in x and y may be estimated
errors or standard deviations, and x and y represent averages.

f(x,y)                      error in f, σf                                  percent error in f, %σf

x + y                       \sqrt{\sigma_x^2 + \sigma_y^2}                  \left[\sqrt{\sigma_x^2 + \sigma_y^2}/(x + y)\right] \times 100\%
x − y                       \sqrt{\sigma_x^2 + \sigma_y^2}                  \left[\sqrt{\sigma_x^2 + \sigma_y^2}/(x − y)\right] \times 100\%
ax                          a\sigma_x                                       \%\sigma_x
xy                          \sqrt{y^2\sigma_x^2 + x^2\sigma_y^2}            \sqrt{(\%\sigma_x)^2 + (\%\sigma_y)^2}
x/y                         \sqrt{\sigma_x^2/y^2 + x^2\sigma_y^2/y^4}       \sqrt{(\%\sigma_x)^2 + (\%\sigma_y)^2}
x^2                         2x\sigma_x                                      2(\%\sigma_x)
ax^n                        n a x^{n-1}\sigma_x                             n(\%\sigma_x)
\sqrt{x}                    \sigma_x/(2\sqrt{x})                            (1/2)\,\%\sigma_x
1/x                         \sigma_x/x^2                                    \%\sigma_x
\sin(x), x in radians       \sigma_x \cos(x)                                \sigma_x \cot(x) \times 100\%
\cos(x), x in radians       \sigma_x \sin(x)                                \sigma_x \tan(x) \times 100\%
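Every row of the table is a special case of Eqn. (5). As an optional illustration (not part of the original appendix; the function names and sample numbers are arbitrary), two of the rows written as short Python helpers:

import math

def err_power(a, n, x, sigma_x):
    """Error in f = a * x**n, from the table row n*a*x**(n-1)*sigma_x."""
    return abs(n * a * x ** (n - 1)) * sigma_x

def err_quotient(x, y, sigma_x, sigma_y):
    """Error in f = x / y: fractional errors add in quadrature, as in Eqn. (7)."""
    return abs(x / y) * math.hypot(sigma_x / x, sigma_y / y)

print(err_power(1.0, 2, 3.0, 0.1))          # x^2 row: 2*x*sigma_x, i.e. 0.6
print(err_quotient(10.0, 4.0, 0.3, 0.4))    # x/y row: about 0.26 for these numbers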



Error Comparison

Error propagation is a means to an end: comparing an experimental result to a theoretical
prediction. In a typical experiment you measure several quantities, calculate their errors,
calculate a final experimental result (xe) and propagate errors to find the error in the final result,
σe. Also, suppose theory predicts a result xt ± σt. To determine whether theory and experiment
agree, we perform two more calculations. We calculate the magnitude of the difference between
theory and experiment (sometimes called the discrepancy, d) and the propagated error in this
discrepancy, σd. Because the discrepancy is a difference, σd follows from the error propagation
formula for subtraction, Eqn. (6). If d is larger than σd, then xe and xt do not agree within the
limits of experimental error. But if d is less than or about the same size as σd, then the two values
do agree. To summarize:

How far off your results are:   d = \left| x_{\mathrm{experimental}} - x_{\mathrm{theoretical}} \right|

How far off your results are allowed to be:   \sigma_d = \sqrt{ \sigma_{\mathrm{exp.}}^2 + \sigma_{\mathrm{theo.}}^2 }

Results are within the limits of experimental error if:   d \le \sigma_d

No agreement within the limits of experimental error if:   d is significantly greater than \sigma_d
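As a sketch only (not part of the original appendix; the names and numbers are arbitrary), the whole comparison takes a few lines of Python:

import math

def compare(x_exp, sigma_exp, x_theo, sigma_theo):
    """Return the discrepancy d, its propagated error sigma_d, and whether they agree."""
    d = abs(x_exp - x_theo)
    sigma_d = math.hypot(sigma_exp, sigma_theo)   # subtraction rule, Eqn. (6)
    return d, sigma_d, d <= sigma_d

# Illustration: a measured g of 9.72 +/- 0.08 m/s^2 versus the accepted 9.81 m/s^2
d, sigma_d, agree = compare(9.72, 0.08, 9.81, 0.0)
print(f"d = {d:.2f}, sigma_d = {sigma_d:.2f}, agree within error: {agree}")
# prints: d = 0.09, sigma_d = 0.08, agree within error: False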

When the results do not agree, it does not mean the experiment was worthless. If you estimated
some errors, then perhaps some estimates were too small. Also, this error propagation procedure
has only been applied to random errors. If systematic errors are present, this analysis neglects
them. There is no clear-cut method of handling systematic errors. It is usually impractical in an
introductory teaching laboratory to have calibration standards or other ways of testing for
systematic errors. Still, it is sometimes possible to identify sources of systematic error due to
simplifications made in developing the theory. This course only requires students to make an
educated, qualitative estimate and/or discussion of systematic errors. However, categorization,
propagation, and comparison of random errors is required in the majority of the experiments in
this course.

If σd is only slightly smaller than d, the experiment is marginally successful. However, when d is
two or three times σd, then perhaps a procedural mistake or systematic error occurred. You must
uncover mistakes during the experiment, so perform the error analysis while you are in the
laboratory! Since mistakes are usually correctable, large differences between d and σd can affect
your grade.
