
ASSESSMENT OF EXPERIMENTAL ERRORS

When a measurement is made, it is usual practice to assign to it an experimental error. The


measured quantity is then expressed as x ± e, where the interval from x − e to x + e represents the
range within which the observer believes, with reasonable confidence, the "true" value lies. One
might assign a probability distribution to the true value of the quantity. The centre of the
distribution would be x, and e would be a measure of the width of the distribution. The simplest
kind of distribution would be a rectangular one of width 2e, as in figure 1. A more realistic
distribution would be similar to figure 2, the familiar “normal” distribution curve.

Fig. 1 implies that the true value is equally likely to be any value within the error limits, and
cannot be outside them. Fig. 2 implies that the true value is most likely to be at the measured
value, but has some probability of lying outside the error limits, though not far outside.

When we add two measurements, how do we calculate the error of the sum? Let us consider
$x_3 \pm e_3 = (x_1 \pm e_1) + (x_2 \pm e_2)$. The most simple-minded thing to do is simply to add the errors
($e_3 = e_1 + e_2$). This, however, is unrealistic: it gives an extreme limit of error and in any
conceivable physical situation overestimates the error. The error in the sum may instead be determined
by taking the "convolution" of all possible "true" values of x1 with all possible true values of x2.
The convolution sums the products of the probabilities over all pairs of values of x1 and x2 that
combine to give x3, and it is the true-value probability distribution for x3. The
convolution of two rectangular distributions is illustrated in Fig. 3.

Figure 3

Although the full width of the distribution is given by the sum of the errors, the width at half
maximum, which is a more realistic width, is a smaller quantity.
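As a rough numerical check of this picture, the following Python sketch (the grid and the unit half-widths are illustrative, not taken from the text) convolves two rectangular distributions and compares the full width of the result with its width at half maximum:

import numpy as np

# Two rectangular "true value" distributions with half-widths e1 and e2,
# sampled on a common grid; their convolution is the distribution of the sum.
e1, e2 = 1.0, 1.0
dx = 0.001
x = np.arange(-3.0, 3.0, dx)

p1 = np.where(np.abs(x) <= e1, 1.0 / (2 * e1), 0.0)
p2 = np.where(np.abs(x) <= e2, 1.0 / (2 * e2), 0.0)

p3 = np.convolve(p1, p2, mode="same") * dx   # triangular distribution

nonzero = x[p3 > 1e-9]
half = x[p3 >= 0.5 * p3.max()]
print("full width           :", nonzero[-1] - nonzero[0])  # ~ 2*(e1 + e2) = 4
print("width at half maximum:", half[-1] - half[0])        # ~ 2, i.e. smaller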

If one assumes the normal distribution (Fig. 2), and combines two measurements, one again finds
that the width at half maximum is less than the sum of the individual widths, as shown in Fig. 4.

Figure 4

In fact, the value of e3 (half width at half maximum) is given by $e_3 = \sqrt{e_1^2 + e_2^2}$. This result is exact
for the normal or Gaussian curve, and is the best approximation to use when the exact probability
distributions are not known. In the following, examples of error calculations are given.

Error Calculation Examples

Addition and subtraction of quantities such as d = a + b – c. For example:

d = (15.4 ± 0.20) + (2.22 ± 0.05) − (10.58 ± 0.15)

The absolute error in d is $e_d = \sqrt{e_a^2 + e_b^2 + e_c^2}$, the square root of the sum of the squares of the individual
absolute errors:

$e_d = \sqrt{(0.2)^2 + (0.05)^2 + (0.15)^2} = 0.25$

Thus d = 7.04 ± 0.25. This method of calculating errors assumes that the individual
measurements are independent. (Never express an error to more than two significant figures,
usually only one. A good rule is that if the first significant figure of the error is 3 or more,
express the error to only one figure.)
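A minimal Python sketch of this calculation, assuming independent measurements (the add_sub_error helper is introduced here for illustration, not taken from the text):

import math

# Absolute errors add in quadrature for sums and differences.
def add_sub_error(*errors):
    return math.sqrt(sum(e * e for e in errors))

a, ea = 15.40, 0.20
b, eb = 2.22, 0.05
c, ec = 10.58, 0.15

d = a + b - c
ed = add_sub_error(ea, eb, ec)
print(f"d = {d:.2f} +/- {ed:.2f}")   # d = 7.04 +/- 0.25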

If you have your errors as percentages, you must convert to absolute errors for addition
(subtraction) calculations.
e.g. 10.25 ± 1% + 4.81 ± 0.5%
= (10.25 ± 0.10) + (4.81 ± 0.024)
= 15.06 ± $\sqrt{(0.10)^2 + (0.024)^2}$
= 15.06 ± 0.10
= 15.06 ± 0.7%
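A short sketch of the conversion, rounding the intermediate absolute errors as the worked chain above does:

import math

x1, p1 = 10.25, 1.0   # value and % error
x2, p2 = 4.81, 0.5

e1 = round(x1 * p1 / 100, 2)   # 0.10, absolute error
e2 = round(x2 * p2 / 100, 3)   # 0.024

s = x1 + x2
es = math.sqrt(e1**2 + e2**2)
print(f"{s:.2f} +/- {es:.2f} ({100 * es / s:.1f}%)")   # 15.06 +/- 0.10 (0.7%)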

Multiplication and Division

Consider $d = \frac{ab}{c}$.

The relative error in d, Δd/d, is:

$$\frac{\Delta d}{d} = \sqrt{\left(\frac{\Delta a}{a}\right)^2 + \left(\frac{\Delta b}{b}\right)^2 + \left(\frac{\Delta c}{c}\right)^2}$$

The percentage error in d, (Δd/d) × 100%, leads to

$$\%e_d = \sqrt{(\%e_a)^2 + (\%e_b)^2 + (\%e_c)^2}$$

If errors in measurements are absolute, you must convert them to % errors.

Examples: a = 15.40 ± 0.20 (± 1.3%)
b = 2.22 ± 0.05 (± 2.3%)
c = 10.58 ± 0.25 (± 2.4%)

therefore $\%e_d = \sqrt{1.3^2 + 2.3^2 + 2.4^2} = 3.6\%$ (call it 4%)

$d = \frac{ab}{c} = 3.23 \pm 4\%$

or d = 3.23 ± 0.12 (calculated using 3.6%)
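The same computation as a Python sketch, rounding the percentage errors to one decimal as above (the pct helper is hypothetical, for illustration only):

import math

# Percentage errors add in quadrature for products and quotients.
def pct(value, abs_err):
    return 100 * abs_err / value

a, ea = 15.40, 0.20
b, eb = 2.22, 0.05
c, ec = 10.58, 0.25

pa = round(pct(a, ea), 1)   # 1.3 %
pb = round(pct(b, eb), 1)   # 2.3 %
pc = round(pct(c, ec), 1)   # 2.4 %

d = a * b / c
pd = math.sqrt(pa**2 + pb**2 + pc**2)
print(f"d = {d:.2f} +/- {pd:.1f}%")            # d = 3.23 +/- 3.6%
print(f"d = {d:.2f} +/- {d * pd / 100:.2f}")   # d = 3.23 +/- 0.12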

When doing a multistep calculation, calculate the errors for each step and carry them along.

e.g. $d = \sqrt{1 + \frac{a+b}{c}}$

First, add a + b and calculate the absolute error in the sum. Then calculate the error in (a+b)/c by
first converting your absolute errors into % errors, to get the % error in this quotient. Then add
the 1 (no error) to the quotient, making sure you use the absolute error in (a+b)/c as the error in the
result. Convert that absolute error into a % error and then take the error in the "root": ½ times the
% error of the whole quantity under it (see below for justification of this). The final error may be
expressed as an absolute or a % error, but often the absolute error is better. A sketch of this
bookkeeping follows.
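Here the a, b, c values from the earlier examples are reused (an assumption; the handout does not attach numbers to this case):

import math

a, ea = 15.40, 0.20
b, eb = 2.22, 0.05
c, ec = 10.58, 0.25

# Step 1: s = a + b; absolute errors add in quadrature.
s = a + b
es = math.sqrt(ea**2 + eb**2)

# Step 2: q = s / c; switch to % errors for the quotient.
q = s / c
pq = math.sqrt((100 * es / s)**2 + (100 * ec / c)**2)

# Step 3: t = 1 + q; the 1 is exact, so the absolute error is unchanged.
t = 1 + q
et = q * pq / 100

# Step 4: d = sqrt(t); the % error is halved for a square root.
d = math.sqrt(t)
pd = 0.5 * (100 * et / t)
print(f"d = {d:.3f} +/- {pd:.1f}% (+/- {d * pd / 100:.3f})")
# d = 1.633 +/- 0.8% (+/- 0.013)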

General Rules: (Errors should never have more than two figures, usually just one.) Quantities
should never have digits extending beyond the error limit. Thus:

15.47206 ± 3.325% should be expressed as

15.5 ± 3% or 15.5 ± .5

(Note that you may use the extra figures in a % error to calculate the absolute error, and vice-
versa, or to carry along into later calculations, but they should not appear in your final quoted
result.)

Errors in Functions:

1) Limiting values. This method evaluates the error (Δ) in a quantity by taking half the
difference between the maximum and minimum values of the function.

For example, if we have a value such as x ± e, then:

$$\Delta(\log(x)) = \frac{\log(x + e) - \log(x - e)}{2}$$
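A minimal sketch of the limiting-values method, taking log to mean log10 (an assumption; the handout does not specify the base):

import math

# Half the spread of f over [x - e, x + e]; for a monotonically
# increasing f this is exactly (f(x+e) - f(x-e)) / 2.
def limit_error(f, x, e):
    return (f(x + e) - f(x - e)) / 2

x, e = 15.40, 0.20
print(limit_error(math.log10, x, e))   # ~0.0056, the error in log10(x)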
2) For a function such as y = f(x), the analytic method uses the fundamental result of
calculus:

$$\delta y = \frac{df(x)}{dx}\,\delta x$$

where δx and δy represent the (small) variations in x and y, which can be substituted with the
uncertainties (Δ's) in those quantities:

$$\Delta y = \frac{df(x)}{dx}\,\Delta x$$
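A small sketch comparing the analytic estimate with the limiting-values estimate, using f(x) = ln(x) as an illustrative choice:

import math

x, dx = 15.40, 0.20

# Analytic: Delta_y = |d(ln x)/dx| * Delta_x = Delta_x / x.
analytic = dx / x
# Limiting values, for comparison.
limiting = (math.log(x + dx) - math.log(x - dx)) / 2
print(analytic, limiting)   # ~0.01299 vs ~0.01299: they agree for small dx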
Powers and Roots

Suppose x is raised to the power n: $y = x^n$. Using our result from above, we have:

$$\Delta y = n x^{n-1}\,\Delta x \qquad\text{or}\qquad \Delta y = n\,\frac{y}{x}\,\Delta x,$$

which can be rearranged to give:

$$\frac{\Delta y}{y} = n\,\frac{\Delta x}{x}$$

Thus, the percentage error in y is just n times the percentage (or relative) error in x.

So, if we have x = 15.40 ± 1.6% and n = 2:

$y = x^2 \pm 2 \times (1.6\%)$
y = 237 ± 3.2% (call it 3%)
or y = 237 ± 7

For n a fraction, the same rule holds. For n = 1/3,

$y = \sqrt[3]{x} = x^{1/3}$
$y = 2.488 \pm \tfrac{1}{3}(1.6\%)$
y = 2.488 ± 0.53%
y = 2.49 ± 0.01
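Both worked examples can be checked with a few lines of Python:

x, px = 15.40, 1.6   # value and its % error

for n in (2, 1 / 3):
    y = x**n
    py = abs(n) * px   # the % error scales by |n|
    print(f"n = {n}: y = {y:.4g} +/- {py:.2g}% (+/- {y * py / 100:.2g})")
# n = 2:   y = 237.2 +/- 3.2% (+/- 7.6)
# n = 1/3: y = 2.488 +/- 0.53% (+/- 0.013)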

For trigonometric functions such as y = sin(x), the uncertainty in sin x would be:

$$\Delta(\sin(x)) = |\cos(x)|\,\Delta x$$

where it is understood that the uncertainty in x (i.e. Δx) is in radians.
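A minimal sketch, with a hypothetical angle reading (the numbers are not from the text):

import math

x_deg, dx_deg = 30.0, 0.5   # hypothetical angle and reading error, in degrees
x, dx = math.radians(x_deg), math.radians(dx_deg)

y = math.sin(x)
dy = abs(math.cos(x)) * dx   # Delta_x must be in radians
print(f"sin(x) = {y:.3f} +/- {dy:.3f}")   # sin(x) = 0.500 +/- 0.008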

Errors in Measurements of a Special Nature

Example: You are trying to measure the spacing “d” of a periodic pattern such as a grating or
diffraction pattern, and you have many measurements (xn values) of the “spot” positions.

i.e. x0 = 1.25 ± 0.05 mm   (0.05 mm is your estimate of the error in reading each xn)
x1 = 1.94 ± 0.05 mm
x2 = 2.66 ± 0.05 mm
x3 = 3.39 ± 0.05 mm
⋮
x12 = 9.63 ± 0.05 mm

Taking the difference: x12 − x0 = (9.63 ± 0.05) − (1.25 ± 0.05) = 8.38 ± 0.07 mm.

Dividing by 12 (spacings), we have d = 0.698 ± 0.006 mm. Thus, while the individual spacings
were measured to ±0.07 mm, the average spacing is accurate to ±0.006 mm. However, it is also
important to closely examine each reading (xn) and check the individual differences (xn – xn-1) to
be sure that you have not missed any “spots”.
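A sketch of this calculation:

import math

# Average spacing from the first and last spot positions,
# with a reading error of 0.05 mm on each position.
x0, x12, e = 1.25, 9.63, 0.05   # mm

span = x12 - x0
e_span = math.sqrt(e**2 + e**2)   # ~0.07 mm
n_spacings = 12

d = span / n_spacings        # dividing by an exact integer
e_d = e_span / n_spacings    # scales the error the same way
print(f"d = {d:.3f} +/- {e_d:.3f} mm")   # d = 0.698 +/- 0.006 mm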

Note also that a number of sets of (x12 − x0) would be required to statistically reduce your error
limits. The error can be reduced by a factor of $1/\sqrt{n}$, where n is the number of independent
measurements of the quantity.

Graphs

Quite often relative error expressions contain a reference to the probable relative error, ΔS/S, in
the slope of a graphical plot. To determine this we can apply the relation

$$\frac{\Delta S}{S} = \frac{(S_{max} - S_{min})/2}{S}$$

where Smax and Smin are the most probable upper and lower limits of the slope and S is the best
estimate of the slope. Establishing the limits of the slope requires knowledge of the probable
error limits in the measured co-ordinates which make up the plot. The probable limits of the
slope are then set by the requirement that the limiting lines be drawn so as not to
exceed the most probable error limits of all the legitimate (acceptable) points on the plot. The
best estimate of the slope is obtained by drawing the line in such a manner that it most closely
approximates all the points. Such a line may, as a result, not pass through any of the measured
points, but it represents the best and simplest estimate of the relationship (or average, if it is linear)
between the dependent and independent variables.

The graph below illustrates these points for the case of measurements to verify Hooke's law. Note
the convention that the independent variable (force) is plotted on the x-axis and the dependent
variable (displacement) is plotted on the y-axis.

From F = kx, k = 1/slope = 1/S and ∆k/k = ∆S/S, giving ∆k = ±(∆S/S)⋅k.
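A sketch of this estimate with illustrative slope limits (the numbers are invented for the example; read Smax, Smin, and S from your own plot):

# Relative error in a slope from the steepest and shallowest acceptable
# lines (Smax, Smin) and the best-estimate slope S, for x plotted vs F.
S, S_max, S_min = 0.050, 0.054, 0.047   # slope = 1/k, e.g. in mm/N

rel_err = (S_max - S_min) / 2 / S
k = 1 / S
dk = rel_err * k
print(f"k = {k:.0f} +/- {dk:.0f} (dS/S = {100 * rel_err:.0f}%)")
# k = 20 +/- 1 (dS/S = 7%)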



Important Points!!!

Experimental error assessment is important. Be realistic and honest. Don’t expect experimental
errors to provide you with a way of excusing sloppy work. It is as bad to overestimate
experimental errors as to underestimate them.

When you compare your results with handbook values remember these points: The handbook
values are not sacred. The difference between your result and the handbook result is not an error
– it is a discrepancy. Your experimental error in your result may account for the discrepancy, or
it may not. There may be systematic errors in the experiment (badly calibrated voltmeter, for
example) that you cannot assess. You may or may not be able to come up with plausible
explanations for the discrepancies.

For those who wish to pursue further the topic of the treatment of experimental measurements
the following books are suggested:

Baird, D.C., Experimentation: An Introduction to Measurement Theory and Experiment Design,
Prentice-Hall, 1962.

Bevington, P.R., Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill,
1969.

Young, H.D., Statistical Treatment of Experimental Data, McGraw-Hill, 1962.
