Lecture 2 2023

KWAME NKRUMAH UNIVERSITY OF SCIENCE AND TECHNOLOGY

CHEMICAL ENGINEERING DEPARTMENT


CHE 357: EXPERIMENTAL DATA ANALYSIS
INSTRUCTOR: Dr. (Mrs.) Mizpah A. D. Rockson

LECTURE 2: Error Analysis

Learning Objectives
At the end of the lecture, the student is expected to be able to understand the following:

• Accuracy, precision, random errors, systematic errors


• Difference between uncertainty and error
• Propagation of errors and uncertainty in computations

2.1 Introduction

No physical quantity can be measured with perfect certainty; there are always errors in any measurement. This
means that if we measure some quantity and, then, repeat the measurement, we will almost certainly measure a
different value the second time. How, then, can we know the true value of a physical quantity? The short answer
is that we can’t. However, as we take greater care in our measurements and apply ever more refined
experimental methods, we can reduce the errors and, thereby, gain greater confidence that our measurements
approximate ever more closely the true value. Error analysis is the study of uncertainties in physical
measurements.

2.2 Accuracy and Precision

Experimental error is the difference between a measurement and the true value, or between two measured
values. Experimental error itself is characterized by its accuracy and precision.

Accuracy measures how close a measured value is to the true value or accepted value. Since a true or accepted
value for a physical quantity may be unknown, it is sometimes not possible to determine the accuracy of a
measurement.

Precision measures how closely two or more measurements agree with each other. Precision is sometimes
referred to as repeatability or reproducibility. A measurement which is highly reproducible tends to give values
which are very close to each other.

Figure 2.1 defines accuracy and precision by analogy to the grouping of arrows in a target.

Figure 2.1 Accuracy and Precision
Experimental errors are inherent in the measurement process and cannot be eliminated simply by repeating the
experiment no matter how carefully. There are two types of experimental errors: systematic errors and random
errors.

Systematic Errors (Bias Errors)

Systematic errors are errors that affect the accuracy of a measurement. Systematic errors result from a
measurement method that is inherently wrong. Systematic errors are one-sided errors, because, in the absence of
other types of errors, repeated measurements yield results that differ from the true or accepted value by the
same amount. The accuracy of measurements subject to systematic errors cannot be improved by repeating
those measurements.

Systematic errors cannot easily be analyzed by statistical analysis. Systematic errors can be difficult to detect,
but once detected can be reduced only by refining the measurement method or technique.

Common sources of systematic errors are faulty calibration of measuring instruments, poorly maintained
instruments, or faulty reading of instruments by the user. A common form of this last source of systematic error
is called parallax error, which results from the user reading an instrument at an angle resulting in a reading
which is consistently high or consistently low.

Random Errors

Random errors are errors that affect the precision of a measurement. Random errors are two-sided errors,
because, in the absence of other types of errors, repeated measurements yield results that fluctuate above and
below the true or accepted value. Measurements subject to random errors differ from each other due to random,
unpredictable variations in the measurement process. The precision of measurements subject to random errors
can be improved by repeating those measurements. Random errors are easily analyzed by statistical analysis.
Random errors can be easily detected, and can be reduced by repeating the measurement or by refining the
measurement method or technique.

Common sources of random errors are problems estimating a quantity that lies between the graduations (the
lines) on an instrument and the inability to read an instrument because the reading fluctuates during the
measurement.

2.3 Difference between uncertainty and error

Uncertainty results from random errors and describes the lack of precision. It may be expressed on a fractional
or percentage basis:
Fractional (relative) uncertainty = uncertainty / best value

Percentage uncertainty = (uncertainty / best value) × 100%

Error may be defined as the difference between the reported value and the true value:
𝐸𝑟𝑟𝑜𝑟 = 𝑅𝑒𝑝𝑜𝑟𝑡𝑒𝑑 𝑣𝑎𝑙𝑢𝑒 − 𝑇𝑟𝑢𝑒 𝑣𝑎𝑙𝑢𝑒

Error results from systematic errors and describes the lack of accuracy. The error may be reported as the
fractional error or the percentage error:

Fractional (relative) error = error / true value

Percentage error = (error / true value) × 100%

2.4 Estimating Experimental Uncertainty for a Single Measurement


Any measurement you make will have some uncertainty associated with it, no matter how precise your
measuring tool. How do you actually determine the uncertainty, and once you know it, how do you report it?

The uncertainty of a single measurement is limited by the precision and accuracy of the measuring instrument,
along with any other factors that might affect the ability of the experimenter to make the measurement.

2.4.1 Instrument Limit of Error (ILE) and Least Count

The least count is the smallest division that is marked on the instrument. Thus a meter stick will have a least
count of 1.0 mm, and a digital stopwatch might have a least count of 0.01 s.

The instrument limit of error, ILE for short, is the precision to which a measuring device can be read, and is
always equal to or smaller than the least count. Very good measuring tools are calibrated against standards
maintained by the National Standards Authority. The Instrument Limit of Error is generally taken to be the least
count or some fraction (1/2, 1/5, 1/10) of the least count. You may wonder which to choose, the least count or
half the least count, or something else. No hard and fast rules are possible; instead you must be guided by
common sense. If the space between the scale divisions is large, you may be comfortable in estimating to 1/5 or
1/10 of the least count. If the scale divisions are closer together, you may only be able to estimate to the nearest
1/2 of the least count, and if the scale divisions are very close you may only be able to estimate to the least
count. For some devices the ILE is given as a tolerance or a percentage. Resistors may be specified as having a
tolerance of 5%, meaning that the ILE is 5% of the resistor's value.

Example 2.1
Problem: For each of three scales (all in centimeters; the scale drawings are not reproduced here), determine
the least count and the ILE, and read the length of the gray rod.

Solution
Least count (cm) ILE (cm) Length (cm)
(a) 1 0.2 9.6
(b) 0.5 0.1 8.5
(c) 0.2 0.05 11.90

2.4.2 Average Deviation: Estimated Uncertainty by Repeated Measurements

The statistical method for finding a value with its uncertainty is to repeat the measurement several times, find
the average, and find either the average deviation or the standard deviation.

When a measurement is repeated several times, we see the measured values are grouped around some central
value. This grouping or distribution can be described with two numbers: the mean, which measures the central
value, and the standard deviation which describes the spread or deviation of the measured values about the
mean. The standard deviation is sometimes referred to as the root-mean-square deviation and measures how
widely spread the measured values are on either side of the mean.

The meaning of the standard deviation can be seen from Figure 2.2, which is a plot of data with a mean of 0.5.
SD represents the standard deviation. As seen in Figure 2.2, the larger the standard deviation, the more widely
spread the data is about the mean. For measurements which have only random errors, the standard deviation
means that 68% of the measured values are within s of the mean, 95% are within 2s of the mean, and 99% are
within 3s of the mean.

Figure 2.2: Measured Values of x

Consider, the measurement of the width of a piece of paper using a meter stick. Being careful to keep the meter
stick parallel to the edge of the paper (to avoid a systematic error which would cause the measured value to be
consistently higher than the correct value), the width of the paper is measured at a number of points on the
sheet, and the values obtained are entered in a data table. Note that the last digit is only a rough estimate, since
it is difficult to read a meter stick to the nearest tenth of a millimeter (0.01 cm).

Observation Width, cm
1 31.33
2 31.15
3 31.26
4 31.02
5 31.20

Average = (sum of observed widths) / (no. of observations) = 155.96 cm / 5 = 31.19 cm

This average is the best available estimate of the width of the piece of paper, but it is certainly not exact. We
would have to average an infinite number of measurements to approach the true mean value, and even then, we
are not guaranteed that the mean value is accurate because there is still some error from the measuring tool,
which can never be calibrated perfectly. So how do we express the uncertainty in our average value?

One way to express the variation among the measurements is to use the average deviation. This statistic tells us
on average (with 50% confidence) how much the individual measurements vary from the mean.

Average deviation: d̄ = (|x₁ − x̄| + |x₂ − x̄| + ⋯ + |xₙ − x̄|) / n
However, the standard deviation is the most common way to characterize the spread of a data set. The
standard deviation is always slightly greater than the average deviation, and is used because of its association
with the normal distribution that is frequently encountered in statistical analyses.
For the example above the average deviation is: 𝑑̅ = 0.086 𝑐𝑚 and the standard deviation is: 𝑠 = 0.12 𝑐𝑚

The significance of the standard deviation is this: if you now make one more measurement using the same meter
stick, you can reasonably expect (with about 68% confidence) that the new measurement will be within 0.12 cm
of the estimated average of 31.19 cm. In fact, it is reasonable to use the standard deviation as the uncertainty
associated with this single new measurement. However, the uncertainty of the average value is the standard
deviation of the mean, which is always less than the standard deviation.

Standard Deviation of the Mean (Standard Error)


When we report the average value of N measurements, the uncertainty we should associate with this average
value is the standard deviation of the mean, often called the standard error (SE).
Standard deviation of the mean, or standard error (SE): s_x̄ = s / √n
The standard error is smaller than the standard deviation by a factor of 1⁄√𝑛 . This reflects the fact that we
expect the uncertainty of the average value to get smaller when we use a larger number of measurements, N. In
the previous example, we find the standard error is 0.05 cm, where we have divided the standard deviation of
0.12 by √5. The final result should then be reported as:

𝐴𝑣𝑒𝑟𝑎𝑔𝑒 𝑃𝑎𝑝𝑒𝑟 𝑤𝑖𝑑𝑡ℎ = 31.19 ± 0.05 𝑐𝑚

When a reported value is determined by taking the average of a set of independent readings, the fractional
uncertainty is given by the ratio of the uncertainty divided by the average value. For this example,

Fractional uncertainty = uncertainty / average = 0.05 cm / 31.19 cm = 0.0016 ≈ 0.002

Note that the fractional uncertainty is dimensionless (the uncertainty in cm was divided by the average in cm).

The fractional uncertainty is also important because it is used in propagating uncertainty in calculations using
the result of a measurement.
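The paper-width statistics above can be reproduced with Python's standard library; note that `statistics.stdev` uses the sample (n − 1) formula, which matches the quoted value of 0.12 cm:

```python
import statistics

widths = [31.33, 31.15, 31.26, 31.02, 31.20]  # measured widths, cm
n = len(widths)

mean = statistics.mean(widths)                     # ~31.19 cm
avg_dev = sum(abs(w - mean) for w in widths) / n   # average deviation, ~0.086 cm
s = statistics.stdev(widths)                       # sample standard deviation, ~0.12 cm
se = s / n ** 0.5                                  # standard error, ~0.05 cm
frac = se / mean                                   # fractional uncertainty, ~0.002
```

The result would then be reported as width = 31.19 ± 0.05 cm, with the standard error (not the standard deviation) as the uncertainty of the average.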

2.5 Propagation of Uncertainties

Suppose two measured quantities x and y have uncertainties, ∆x and ∆y, determined by procedures described in
previous sections: we would report (x ± ∆x), and (y ± ∆y). From the measured quantities a new quantity, z, is
calculated from x and y. What is the uncertainty, ∆z, in z?

a. Addition and Subtraction: z = x + y or z = x – y

Using simpler average errors: ∆z = |∆x| + |∆y| + ⋯

Using standard deviations: ∆z = √((∆x)² + (∆y)² + ⋯)

Example: w = (4.52 ± 0.02) cm, x = (2.0 ± 0.2) cm, y = (3.0 ± 0.6) cm. Find z = x + y - w and its uncertainty.
Solution
z = x + y − w = 2.0 + 3.0 − 4.52 = 0.48 cm ≈ 0.5 cm
∆z = |∆x| + |∆y| + |∆w| = 0.2 + 0.6 + 0.02 = 0.82, rounding to 0.8 cm
∴ z = (0.5 ± 0.8) cm

Using standard deviations: ∆z = √(0.2² + 0.6² + 0.02²) = 0.63 ≈ 0.6 cm, ∴ z = (0.5 ± 0.6) cm
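A short sketch reproducing this example, comparing the simple-sum and quadrature estimates:

```python
# Values from the worked example, in cm
w, dw = 4.52, 0.02
x, dx = 2.0, 0.2
y, dy = 3.0, 0.6

z = x + y - w                             # 0.48 cm, reported as 0.5 cm

dz_simple = abs(dx) + abs(dy) + abs(dw)   # 0.82 -> reported as 0.8 cm
dz_quad = (dx**2 + dy**2 + dw**2) ** 0.5  # ~0.63 -> reported as 0.6 cm
```

The quadrature estimate is smaller because it assumes independent random errors that partially cancel, rather than all pushing in the same direction.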

b. For multiplication by an exact number, multiply the uncertainty by the same exact number.
Example: The radius of a circle is x = (3.0 ± 0.2) cm. Find the circumference and its uncertainty.
C = 2 π x = 18.850 cm
∆C = 2 π ∆x = 1.257 cm (The factors of 2 and π are exact)
C = (18.8 ± 1.3) cm
We round the uncertainty to two figures since it starts with a 1, and round the answer to match.

c. Multiplication and Division: z = x y or z = x/y


The same rule holds for multiplication, division, or combinations, namely add all the relative errors to get the
relative error in the result.
Using simpler average errors: ∆z/z = ∆x/x + ∆y/y + ⋯

Using standard deviations: ∆z/z = √((∆x/x)² + (∆y/y)² + ⋯)

Example: w = (4.52 ± 0.02) cm, x = (2.0 ± 0.2) cm. Find z = wx and its uncertainty.
z = w·x = (4.52)(2.0) = 9.04 cm²

Using simpler average errors: ∆z/z = 0.02/4.52 + 0.2/2.0 = 0.1044

So ∆z = 0.1044 × 9.04 cm² = 0.944 cm², which we round to 0.9 cm² → z = (9.0 ± 0.9) cm²
Using standard deviations: ∆z = 0.905 cm², and z = (9.0 ± 0.9) cm²
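The same example as a sketch, adding relative errors (simple) or their squares (quadrature):

```python
# Values from the worked example
w, dw = 4.52, 0.02   # cm
x, dx = 2.0, 0.2     # cm

z = w * x                                          # 9.04 cm^2

rel_simple = dw / w + dx / x                       # ~0.1044
rel_quad = ((dw / w) ** 2 + (dx / x) ** 2) ** 0.5  # ~0.1001

dz_simple = rel_simple * z                         # ~0.94 -> 0.9 cm^2
dz_quad = rel_quad * z                             # ~0.90 -> 0.9 cm^2
```
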

d. Products of powers: z = xᵐyⁿ
The results in this case are

Using simpler average errors: ∆z/z = |m|(∆x/x) + |n|(∆y/y) + ⋯

Using standard deviations: ∆z/z = √((m∆x/x)² + (n∆y/y)² + ⋯)

e. The law of propagation of uncertainty


All of the above examples of propagation of errors are special cases of the general formula.
Consider a calculated variable z that is a function of two measured variables x and y, then one writes:
𝑧 = 𝑧(𝑥, 𝑦)
If the uncertainties associated with x and y are ∆x and ∆y, respectively, the uncertainty, ∆z in z is:
Using simpler average errors: ∆z = |∂z/∂x|∆x + |∂z/∂y|∆y

Using standard deviations: ∆z = √((∂z/∂x)²(∆x)² + (∂z/∂y)²(∆y)²)

The general formula for error propagation based on standard deviations is known as the law of
propagation of uncertainty. This formula should be used if the errors are not stated as a result of
average deviations.
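When analytic partial derivatives are inconvenient, the law can be applied numerically; the helper below (a generic sketch, not part of the lecture) estimates each partial derivative with a central finite difference:

```python
def propagate(f, values, uncertainties, h=1e-6):
    """Law of propagation of uncertainty (quadrature form) with
    numerically estimated partial derivatives."""
    total = 0.0
    for i, (v, dv) in enumerate(zip(values, uncertainties)):
        step = h * max(abs(v), 1.0)
        hi = list(values)
        hi[i] = v + step
        lo = list(values)
        lo[i] = v - step
        partial = (f(*hi) - f(*lo)) / (2 * step)  # central difference
        total += (partial * dv) ** 2
    return total ** 0.5
```

For example, `propagate(lambda x, y: x + y, [2.0, 3.0], [0.2, 0.6])` gives about 0.63, matching √(0.2² + 0.6²) from rule (a).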
Example 1:

A temperature measuring device works according to resistance in a copper wire which is correlated as

R = Ro(1 + α(T − 20))

Where

Ro is the resistance at 20℃ = 6Ω ± 0.3%

α is the temperature coefficient of the resistance = 0.004℃−1 ± 1%

T is the temperature = 30℃ ± 1℃


Compute the resistance of the circuit and its uncertainty
Solution
The nominal resistance (best value) of the wire is given by

𝑅 = 6 Ω(1 + 0.004℃−1 (30 − 20)℃) = 6 Ω(1 + 0.04) = 6.24 Ω

Applying the law of propagation of uncertainty:

- It can be deduced that 𝑅 = 𝑅(𝑅𝑜 , 𝛼, 𝑇)


Taking partial derivatives:
∂R/∂Ro = 1 + α(T − 20) = 1 + 0.004 ℃⁻¹ (30 − 20) ℃ = 1.04

∂R/∂α = Ro(T − 20) = 6 Ω (30 − 20) ℃ = 60 Ω·℃

∂R/∂T = Ro·α = 6 Ω (0.004 ℃⁻¹) = 0.024 Ω·℃⁻¹
Computing values of uncertainties:

∆𝑅𝑜 = 0.3% 𝑜𝑓 𝑅𝑜 = 0.003 × 6 Ω = 0.018 Ω

∆∝= 1% 𝑜𝑓 0.004 ℃−1 = 0.01 × 0.004 ℃−1 = 0.00004 ℃−1

∆𝑇 = 1 ℃

Substituting the above into the law of propagation of uncertainty:

∆R = √((∂R/∂Ro)²(∆Ro)² + (∂R/∂α)²(∆α)² + (∂R/∂T)²(∆T)²)

∆R = √((1.04)²(0.018)² + (60)²(0.00004)² + (0.024)²(1)²) = 0.03 Ω, or (0.03 Ω / 6.24 Ω) × 100% = 0.49%

∴ R = 6.24 Ω ± 0.03 Ω, or 6.24 Ω ± 0.49%
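A sketch verifying the resistance example with the analytic partial derivatives from the text:

```python
Ro, dRo = 6.0, 0.003 * 6.0           # 6 ohm, +/- 0.3%
alpha, dalpha = 0.004, 0.01 * 0.004  # 1/degC, +/- 1%
T, dT = 30.0, 1.0                    # degC, +/- 1 degC

R = Ro * (1 + alpha * (T - 20))      # nominal resistance, 6.24 ohm

# Analytic partial derivatives, as computed in the text
dR_dRo = 1 + alpha * (T - 20)        # 1.04
dR_dalpha = Ro * (T - 20)            # 60 ohm*degC
dR_dT = Ro * alpha                   # 0.024 ohm/degC

dR = ((dR_dRo * dRo) ** 2
      + (dR_dalpha * dalpha) ** 2
      + (dR_dT * dT) ** 2) ** 0.5    # ~0.03 ohm
```

The temperature term dominates: (0.024 × 1)² contributes most of the total, so improving the thermometer would do more for the result than a better resistance calibration.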

Example 2:

Measuring g with a simple pendulum, L – Length, T- oscillation period

T = 2π(L/g)^(1/2)  ⟹  g = 4π²L/T²
Given

𝐿 = 92.95 ± 0.1 𝑐𝑚, 𝑇 = 1.936 ± 0.004 𝑠


Find g and its uncertainty.
The best value of g:
g = (4π² × 92.95 cm) / (1.936 s)² = 979 cm/s²

Taking partial derivatives:

∂g/∂L = 4π²/T² = 4π² × 1/1.936² = 10.533

∂g/∂T = −8π²L/T³ = −8π² × 92.95/1.936³ = −1011.400
Applying the law of propagation of uncertainty:

∆g = √((∂g/∂L)²(∆L)² + (∂g/∂T)²(∆T)²)

∆g = √((10.533)²(0.1)² + (−1011.400)²(0.004)²) = 4 cm/s²

∴ g = (979 ± 4) cm/s²
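A sketch verifying the pendulum example:

```python
import math

L, dL = 92.95, 0.1    # length, cm
T, dT = 1.936, 0.004  # period, s

g = 4 * math.pi ** 2 * L / T ** 2        # best value, ~979 cm/s^2

# Analytic partial derivatives, as computed in the text
dg_dL = 4 * math.pi ** 2 / T ** 2        # ~10.533
dg_dT = -8 * math.pi ** 2 * L / T ** 3   # ~-1011.4

dg = ((dg_dL * dL) ** 2 + (dg_dT * dT) ** 2) ** 0.5  # ~4 cm/s^2
```
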

2.6 Significant Figures

The significant figures of a number are the digits from the first nonzero digit on the left to either
(a) the last digit (zero or nonzero) on the right if there is a decimal point, or
(b) the last nonzero digit of the number if there is no decimal point.

For example,
2300 or 2.3 × 10³ has two significant figures.
2300. or 2.300 × 10³ has four significant figures.
2300.0 or 2.3000 × 10³ has five significant figures.
23,040 or 2.304 × 10⁴ has four significant figures.
0.035 or 3.5 × 10⁻² has two significant figures.
0.03500 or 3.500 × 10⁻² has four significant figures.

(Note: The number of significant figures is easily shown and seen if scientific notation is used.)

The number of significant figures in the reported value of a measured or calculated quantity provides an
indication of the precision with which the quantity is known: the more significant figures, the more precise is
the value. The number of significant figures suggests a rough estimate of the relative uncertainty.

The number of significant figures implies an approximate relative uncertainty


1 significant figure suggests a relative uncertainty of about 10% to 100%
2 significant figures suggest a relative uncertainty of about 1% to 10%
3 significant figures suggest a relative uncertainty of about 0.1% to 1%

Generally, if you report the value of a measured quantity with three significant figures, you indicate that the
value of the third of these figures may be off by as much as a half-unit. Thus, if you report a mass as 8.3 g (two
significant figures), you indicate that the mass lies somewhere between 8.25 and 8.35 g, whereas if you give the
value as 8.300 g (four significant figures) you indicate that the mass lies between 8.2995 and 8.3005 g.

Note, however, that this rule applies only to measured quantities or numbers calculated from measured
quantities. If a quantity is known precisely, like a pure integer (2) or a counted rather than measured quantity
(16 oranges), its value implicitly contains an infinite number of significant figures (5 cows really means
5.0000... cows).

When two or more quantities are combined by multiplication and/or division, the number of significant figures
in the result should equal the lowest number of significant figures of any of the multiplicands or divisors.

If the initial result of a calculation violates this rule, you must round off the result to reduce the number of
significant figures to its maximum allowed value, although if several calculations are to be performed in
sequence it is advisable to keep extra significant figures of intermediate quantities and to round off only the
final result.


The rule for addition and subtraction concerns the position of the last significant figure in the sum-that is, the
location of this figure relative to the decimal point.

The rule is: When two or more numbers are added or subtracted, the positions of the last significant figures of
each number relative to the decimal point should be compared. Of these positions, the one farthest to the left is
the position of the last permissible significant figure of the sum or difference.
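Both rules can be sketched in Python; the `round_sig` helper and the example numbers below are illustrative, not from the lecture:

```python
import math

def round_sig(x, n):
    """Round x to n significant figures (illustrative helper)."""
    if x == 0:
        return 0.0
    return round(x, n - 1 - int(math.floor(math.log10(abs(x)))))

# Multiplication/division: keep the lowest number of significant figures
# of any factor, e.g. 3.57 (3 sf) x 4.286 (4 sf) -> 3 sf.
product = round_sig(3.57 * 4.286, 3)   # 15.30102 -> 15.3

# Addition/subtraction: keep the leftmost "last significant figure"
# position, e.g. 1530 (tens place) + 2.56 -> round the sum to the tens place.
total = round(1530 + 2.56, -1)         # 1532.56 -> 1530.0
```
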

Finally, a rule of thumb for rounding off numbers in which the digit to be dropped is a 5 is always to make the
last digit of the rounded-off number even:
1.35 ===> 1.4
1.25 ===> 1.2

2.7 Reporting of Measurements


Experimental uncertainties should be rounded to one or two significant figures: if the leading figure in the
uncertainty is a 1, use two significant figures; otherwise, use one significant figure.

The last significant figure in any stated answer should usually be of the same order of magnitude (in the same
decimal position) as the uncertainty.

Example
Round off z = 12.0349 cm and ∆z = 0.153 cm.
Since ∆z begins with a 1, we round off ∆z to two significant figures:
∆z = 0.15 cm.
Hence, round z to have the same number of decimal places: z = (12.03 ± 0.15) cm.
