
ERRORS

An error, also called an uncertainty or deviation, is the difference between a true (exact) value and the approximate value obtained in measured data.

SOURCES OF ERRORS

1. Systematic and random errors.

No measurement made is ever exact. The accuracy (correctness) and precision (number of significant figures) of a measurement are always limited by the degree of refinement of the apparatus used, by the skill of the observer, and by the basic physics in the experiment. In doing experiments we are trying to establish the best values for certain quantities, or trying to validate a theory. We must also give a range of possible true values based on our limited number of measurements.

Systematic error is the result of a mis-calibrated device, or a measuring technique which always makes the measured value larger (or smaller) than the "true" value. An example would be using a steel ruler at liquid nitrogen temperature to measure the length of a rod. The ruler contracts at low temperatures, so its divisions sit closer together than marked and every reading overestimates the true length. Careful design of an experiment will allow us to eliminate or to correct for systematic errors.

Even when systematic errors are eliminated there will remain a second type of variation in
measured values of a single quantity. These remaining deviations will be classed as random
errors, and can be dealt with in a statistical manner.

How can we estimate the uncertainty of a measured quantity? Several approaches can be used,
depending on the application.

A. Instrument Limit of Error (ILE) and Least Count

The least count is the smallest division that is marked on the instrument. Thus a meter stick will have a least count of 1.0 mm, and a digital stopwatch might have a least count of 0.01 sec.

The instrument limit of error, ILE for short, is the precision to which a measuring device can be read, and is always equal to or smaller than the least count. Very good measuring tools are calibrated against standards maintained by the National Institute of Standards and Technology.

2. Estimated Error

Often other uncertainties are larger than the ILE. We may try to balance a simple beam balance with masses that have an ILE of 0.01 grams, but find that we can vary the mass on one pan by as much as 3 grams without seeing a change in the indicator. We would use half of this range as the estimated uncertainty, giving an uncertainty of ±1.5 grams.
3. Round-off Errors and truncation errors

TYPES OF ERRORS

Relative and Absolute Errors

The uncertainty Δz in the measurement of a quantity z is called the absolute error, while Δz/z is called the relative error or fractional uncertainty. The percentage error is the fractional error multiplied by 100%. In practice, either the percentage error or the absolute error may be quoted. Thus in machining an engine part the tolerance is usually given as an absolute error, while electronic components are usually given a percentage tolerance.

The absolute error is the actual error in a quantity and has the same units as the quantity. Thus if c = (2.95 ± 0.07) m/s, the absolute error is

Δc = 0.07 m/s

and the relative error is

Δc/c = 0.07/2.95 ≈ 0.024, or about 2.4%.
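In code, these definitions might be sketched as follows (Python; the variable names are chosen only for illustration):

c = 2.95         # measured value (m/s)
dc = 0.07        # absolute error (m/s)

relative = dc / c            # relative (fractional) error, Δc/c
percentage = relative * 100  # percentage error

print(f"absolute error   = {dc} m/s")
print(f"relative error   = {relative:.3f}")
print(f"percentage error = {percentage:.1f} %")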

ERROR PROPAGATION

1. Addition and Subtraction: z = x + y     or    z = x - y


Derivation: We will assume that the uncertainties are arranged so as to make z as far from
its true value as possible.

Using average deviations, Δz = |Δx| + |Δy| in both cases.

With more than two numbers added or subtracted we continue to add the uncertainties.

Example: w = (4.52 ± 0.02) cm, x = (2.0 ± 0.2) cm, y = (3.0 ± 0.6) cm.
Find z = x + y - w and its uncertainty.

z = x + y - w = 2.0 + 3.0 - 4.52 = 0.48 cm

Δz = Δx + Δy + Δw = 0.2 + 0.6 + 0.02 = 0.82 cm, which we round to 0.8 cm

So z = (0.5 ± 0.8) cm.
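A minimal Python sketch of this rule, using the numbers above (variable names are illustrative):

w, dw = 4.52, 0.02   # cm
x, dx = 2.0, 0.2     # cm
y, dy = 3.0, 0.6     # cm

z = x + y - w                       # best value
dz = abs(dx) + abs(dy) + abs(dw)    # Δz = |Δx| + |Δy| + |Δw|

print(f"z = ({z:.1f} ± {dz:.1f}) cm")   # -> z = (0.5 ± 0.8) cm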

2. For multiplication by an exact number, multiply the uncertainty by the same exact number.

Example: The radius of a circle is x = (3.0 ± 0.2) cm. Find the circumference and its uncertainty.

C = 2πx = 18.850 cm

ΔC = 2πΔx = 1.257 cm (the factors of 2 and π are exact)

C = (18.8 ± 1.3) cm

Example: x = (2.0 ± 0.2) cm, y = (3.0 ± 0.6) cm. Find z = x - 2y and its uncertainty.

z = x - 2y = 2.0 - 2(3.0) = -4.0 cm

Δz = Δx + 2Δy = 0.2 + 1.2 = 1.4 cm

So z = (-4.0 ± 1.4) cm.
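In Python, the exact-factor rule for this example might be sketched as (illustrative names):

x, dx = 2.0, 0.2   # cm
y, dy = 3.0, 0.6   # cm

z = x - 2 * y          # best value
dz = dx + 2 * dy       # the exact factor 2 multiplies Δy

print(f"z = ({z:.1f} ± {dz:.1f}) cm")   # -> z = (-4.0 ± 1.4) cm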

 
3. Multiplication: z = x y   

Derivation: We can derive the relation for multiplication easily. Take the largest values for
x and y, that is

z + Δz = (x + Δx)(y + Δy) = xy + x Δy + y Δx + Δx Δy

Usually Δx << x and Δy << y, so the last term is much smaller than the other terms and can be neglected. Since z = xy,

Δz = y Δx + x Δy

Example: w = (4.52 ± 0.02) cm, x = (2.0 ± 0.2) cm. Find z = w x and its uncertainty.

z = w x = (4.52)(2.0) = 9.04 cm²

The fractional uncertainty is Δw/w + Δx/x = 0.02/4.52 + 0.2/2.0 = 0.1044, so Δz = 0.1044 × 9.04 = 0.944 cm², which we round to 0.9 cm².

So z = (9.0 ± 0.9) cm².
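A short Python sketch of the multiplication rule, using the same numbers (illustrative names):

w, dw = 4.52, 0.02   # cm
x, dx = 2.0, 0.2     # cm

z = w * x                     # product, in cm^2
dz = z * (dw / w + dx / x)    # Δz = z (Δw/w + Δx/x), equivalent to x Δw + w Δx

print(f"z = ({z:.1f} ± {dz:.1f}) cm^2")   # -> z = (9.0 ± 0.9) cm^2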

4. Division: z = x/y

 x  x  Dx Dy 
D     
 y y x y 

Exercise: x = (2.0 ± 0.2) cm, y = (3.0 ± 0.6) sec. Find z = x/y.
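One way to work this exercise in Python, using the division rule above (illustrative names):

x, dx = 2.0, 0.2   # cm
y, dy = 3.0, 0.6   # sec

z = x / y                     # quotient, in cm/sec
dz = z * (dx / x + dy / y)    # Δ(x/y) = (x/y)(Δx/x + Δy/y)

print(f"z = ({z:.2f} ± {dz:.2f}) cm/sec")   # -> z = (0.67 ± 0.20) cm/sec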


 

5. Products of powers: z = x^a y^b. The fractional uncertainties add, weighted by the magnitudes of the exponents: Δz/z = |a| Δx/x + |b| Δy/y.
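A sketch of this rule in Python, for an expression chosen here purely for illustration, z = x^2 y^3 (the expression and the reuse of earlier numbers are assumptions for the example only):

x, dx = 2.0, 0.2   # cm
y, dy = 3.0, 0.6   # cm

a, b = 2, 3                                     # exponents in the assumed z = x**a * y**b
z = x**a * y**b
dz = z * (abs(a) * dx / x + abs(b) * dy / y)    # Δz/z = |a| Δx/x + |b| Δy/y

print(f"z = {z:.0f} ± {dz:.0f}")   # -> z = 108 ± 86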

 
Exercise: w = (4.52 ± 0.02) cm, A = (2.0 ± 0.2) , y = (3.0 ± 0.6) cm. Find .
6.   Mixtures of multiplication, division, addition, subtraction, and powers.

If z is a function which involves several terms added or subtracted we must apply the above rules
carefully.  This is best explained by means of an example.

Exercise: w = (4.52 ± 0.02) cm, x = (2.0 ± 0.2) cm, y = (3.0 ± 0.6) cm. Find z = w x + y^2.
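One possible way to organize this calculation in Python, treating each term separately and then adding the term uncertainties (a sketch; the names and the term-by-term bookkeeping are choices made here for illustration):

w, dw = 4.52, 0.02   # cm
x, dx = 2.0, 0.2     # cm
y, dy = 3.0, 0.6     # cm

# first term: the product w*x, with Δ(wx) = wx (Δw/w + Δx/x)
t1 = w * x
dt1 = t1 * (dw / w + dx / x)

# second term: the power y**2, with Δ(y^2) = y^2 (2 Δy/y)
t2 = y ** 2
dt2 = t2 * 2 * dy / y

# the terms are added, so their uncertainties add
z = t1 + t2
dz = dt1 + dt2

print(f"z = ({z:.0f} ± {dz:.0f}) cm^2")   # -> z = (18 ± 5) cm^2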

Exercises

Calculate z and Δz for each of the following cases.

(i) z = (x - 2.5 y + w) for x = (4.72 ± 0.12) m, y = (4.4 ± 0.2) m, w = (15.63 ± 0.16) m.
(ii) z = (w x/y) for w = (14.42 ± 0.03) m/ , x = (3.61 ± 0.18) m, y = (650 ± 20) m/s.
(iii) z =  for x = (3.55 ± 0.15) m.
(iv) z = v (x y + w) with v = (0.644 ± 0.004) m, x = (3.42 ± 0.06) m, y = (5.00 ± 0.12) m, w = (12.13 ± 0.08) .
(v) z = A sin y for A = (1.602 ± 0.007) m/s, y = (0.774 ± 0.003) rad.
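For case (i), a minimal Python sketch under the same average-deviation rules (illustrative names; the final rounding is left to the reader):

x, dx = 4.72, 0.12    # m
y, dy = 4.4, 0.2      # m
w, dw = 15.63, 0.16   # m

z = x - 2.5 * y + w        # best value
dz = dx + 2.5 * dy + dw    # the exact factor 2.5 multiplies Δy

print(f"z = ({z:.2f} ± {dz:.2f}) m")   # -> z = (9.35 ± 0.78) m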
