PHY101 SuB Lecture 1


“To Err Is Human, to Explain and Correct Is Divine...”

PHY101: Errors in experimental measurements
Supratik Banerjee, Dept. of Physics, IIT Kanpur
Measurements in experiments
➢ In Physics experiments, we very often measure physical quantities (or measurables) such as
length, mass, temperature, speed, force, the universal constants
(Avogadro's number, the universal gas constant, etc.) and many other things.
➢ Some of them can be measured directly by using an instrument (primary measurements)
e.g. the length, breadth and depth of a book-shelf can be measured using a scale or more
popularly using a measuring tape, the temperature of our body or any material can be
measured using a thermometer.
➢ Some of them can be measured using a combination of measurements of some other
quantities together with an exact law of physics that relates the target quantity to the
measured ones. For example, to measure the density of a liquid we measure its
mass (M) and its volume (V), and the density is then given by M/V. Similarly, to
calculate the speed of a force-free body we measure its displacement (L) in a time T, and
the speed is given by L/T.
➢ If the body is not moving uniformly, then we have to plot L vs. T and the speed is given
by the slope of the plot (true in 1D; for more than one dimension, plot each component separately), as sketched below.
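As a rough illustration (not from the original slides), the following Python sketch uses made-up numbers to compute a density from a measured mass and volume, and to estimate a speed as the slope of a least-squares straight-line fit of displacement versus time:

    import numpy as np

    # Indirect measurement: density from measured mass and volume (hypothetical values)
    M = 250.0          # mass in g
    V = 200.0          # volume in cm^3
    print(f"density = {M / V:.3f} g/cm^3")

    # Speed as the slope of a straight-line fit of displacement L vs. time T
    T = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # time in s
    L = np.array([0.0, 2.1, 3.9, 6.2, 8.0])   # displacement in m
    slope, intercept = np.polyfit(T, L, 1)    # least-squares fit of degree 1
    print(f"speed ~ {slope:.2f} m/s")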
Definition and sources of errors
➢ In a measurement process (direct or indirect), finally we obtain one or more
numerical values => measured values
➢ If the experimentalist is ‘happy’ with this value, then no question of error arises.
➢ Error comes into the picture when the measurer is not ‘happy’ with the measured
value.
➢ Error = expected value − measured value
➢ But what can be the reason(s) for the measurer NOT to be happy with the
measured values?
○ Maybe the experimenter has calculated/obtained the value theoretically in advance and the
measured value does not match it
○ Maybe she already knows that one or more instruments are not functioning properly
○ Maybe she has realized that she was very casual in taking the measurements
○ Maybe she has noticed unwanted environmental fluctuations during the experiment
Classification of experimental errors

Experimental errors in primary measurements

○ Systematic errors: due to known or identified sources; can be methodically handled
○ Random errors: due to unidentified or arbitrary measurement issues; cannot be easily handled
○ Blunders: due to human errors that cannot be controlled or corrected
Systematic errors
➢ Systematic errors affect the experimental results in a uniform way and can, in
principle, be eliminated by repairing the (clearly identifiable) source of error.
Systematic errors can broadly be divided into four categories:

○ Instrumental error: caused by improper calibration of the instrument, e.g. if a thermometer reads 102 °C
in boiling water at normal atmospheric pressure, or if the minimum count (least count) of a 30 cm
metal scale has changed due to thermal effects. This type of error can be avoided by a proper re-calibration.
○ Observational error: caused by the observer’s eyes not being perpendicular to the plane of the
scale but at an angle (parallax); it can be avoided just by consciously correcting the position.
○ Steady environmental modification: caused by a sudden (but steady) modification of the environment
during an experiment, e.g. an electrical brownout. This can be corrected by re-calculating all the ‘reference’
values.
○ Theoretical simplification: introduced due to simplistic assumptions used to calculate a quantity from a
theoretical model, e.g. at constant volume and temperature, the true pressure of the surrounding
atmosphere deviates from the one obtained using the ideal gas law. This type of error can only be reduced
by the use of more sophisticated (more realistic) models.
Random errors
➢ Random errors affect the experimental result in an arbitrary way (not in a uniform
manner) and cannot be totally eliminated because the sources of error are often
unidentified (or only partially identified) or haphazard. These types of errors can be classified
into two categories:

○ Random observational error: erroneous reading of the measured value from a scale (a meter scale or an
ammeter) due to varying incorrect positions of the eye, or due to a measurement beyond the least count of
the scale. Also caused by varying reaction times (in starting and stopping a watch) when measuring a time interval.
○ Irregular ambient fluctuations: caused by unpredictable and irregular fluctuations of the ambient
conditions. For example, in an experiment verifying Boyle’s law, the experimental values are affected by
fluctuations of the temperature of the atmosphere.

➢ Although they cannot be eliminated, unlike systematic errors, random errors can be estimated using
different procedures, and the relative importance of the error on the measured value can
also be quantified to assess the credibility of the measurement.
Blunders
➢ Blunders are caused by gross mistakes whose sources are clearly identified but cannot
be systematically corrected, for example the wrong tabulation of a measured value, or misreading the
scale due to lack of concentration or confusion between repetitive digits (like .484 vs .848).
➢ Blunders should be identified by taking multiple measurements and by having one person’s
measurements checked (or repeated) by another person.
➢ Blunders should not be included in the error analysis of experimental data.

Exercise
➢ Find and classify the different sources of error while measuring the time period
of oscillation of a metallic ball using a stopwatch, and while measuring the current in a
circuit using an ammeter.
Precision, trueness and accuracy
➢ Experimentally measured data with small random errors are said to have high ‘precision’,
while data with small systematic errors are said to have high ‘trueness’.
➢ Data have high ‘accuracy’ only when they have both high precision and high trueness. Data
with low accuracy are said to have high ‘uncertainty’.
Estimation of random errors
➢ Random instrumental error: An estimated instrumental error is the maximum
amount of deviation of a measured value from the real value due to the
imperfection of the measuring instrument and the observer.
➢ A reasonable estimate of this type of error is given by the effective least count
(ELC) (and not the nominal least count) of the instrument, e.g. for measuring
the length of your mobile phone you use a 15 cm or 30 cm scale, and hence the
associated random error will be 0.1 cm.
➢ However, in a null-point electrical measurement, if the deflection of the
galvanometer seems to remain zero over the range 351 Ω to 360 Ω, the ELC is 10 Ω
even though the nominal least count may be 1 Ω. Similarly, if two consecutive
calibrated 0.1 A marks of an ammeter are far enough apart that one half of the
division can be read, then the ELC will be 0.05 A.
Estimation of random errors
➢ Statistical error: This type of error is caused by unpredictable and irregular
fluctuations of the experimental conditions. The reasons for such fluctuations are
often obscure.
➢ Statistical error ⇒ non-reproducibility of an experiment ⇒ if a variable x is
measured n times in a real experiment, we obtain a set of n values (x₁, x₂, ..., xₙ)
which are not necessarily all equal.
➢ Since the values fluctuate due to some random factors, the measured values are
expected to be distributed evenly below and above the ‘correct value’.
➢ A possible candidate for a representative value of the ensemble is the
“arithmetic mean” (x̄) or ‘average’, since the total moment of all the values
with respect to the mean vanishes ⇒ Σ(xᵢ − x̄) = 0.
➢ x̄ may or may not be equal to any xᵢ and may also have a non-realistic value.
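A minimal Python check of this property, using made-up repeated readings, could look like the following; the sum of the deviations from the mean comes out as zero up to round-off:

    import numpy as np

    # Hypothetical repeated measurements of the same quantity
    x = np.array([9.81, 9.79, 9.83, 9.80, 9.82])
    x_bar = x.mean()                 # arithmetic mean
    print(np.sum(x - x_bar))         # ~ 0, up to floating-point round-off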
Estimation of random errors
➢ The total deviation of the measured values with respect to the mean always vanishes
⇒ it cannot represent the total random error involved in the measurement.
➢ An estimate of the statistical error in n measurements is given by the quantity Σ|xᵢ − x̄|.
➢ The average statistical error involved in one single measurement is then given by the
quantity Σ|xᵢ − x̄|/n.
➢ However, a better estimate of the statistical error involved in a single measurement is
given by the standard deviation (σ), where

σ = √[ Σ(xᵢ − x̄)² / (n − 1) ]

➢ Finally, the measurement can be expressed as x̄ ± σ.
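A short Python sketch of these estimates, assuming the 1/(n − 1) convention for the standard deviation and using made-up readings, might look like this:

    import numpy as np

    x = np.array([9.81, 9.79, 9.83, 9.80, 9.82])   # hypothetical repeated measurements
    x_bar = x.mean()
    mad = np.mean(np.abs(x - x_bar))   # average statistical error per measurement
    sigma = x.std(ddof=1)              # standard deviation with the 1/(n-1) convention
    print(f"mean abs. deviation = {mad:.4f}")
    print(f"x = {x_bar:.3f} +/- {sigma:.3f}")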


Estimation of random errors

➢ If the measured values follow a Gaussian distribution and the sample size is ≥ 10,
there is only a 68.3% probability that a measured value lies within x̄ ± σ.
➢ However, the probability increases considerably if the measurement is expressed
as x̄ ± 2σ (95.4%) or x̄ ± 3σ (99.7%).
➢ In case we have a very large population (of size N), we often measure the value of
a certain variable for a smaller subset of the population (of size n), called a
sample.
➢ Evidently, the sample mean is not, in general, equal to the population mean.
➢ It is therefore better to take a relatively large number of samples and calculate the
mean of each sample.
➢ For independent measurements, the sample mean varies with a standard deviation
called the standard error (= σ/√n) < σ ⇒ the sample mean can be considered a
random variable with much less variability (i.e. smaller statistical error)
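A possible numerical illustration (with an assumed Gaussian population, not data from the lecture) is to draw many samples of size n and compare the spread of their means with σ/√n:

    import numpy as np

    rng = np.random.default_rng(0)
    sigma, n, n_samples = 2.0, 25, 10000   # hypothetical population spread, sample size, no. of samples

    # Draw many samples of size n and record each sample mean
    means = rng.normal(loc=10.0, scale=sigma, size=(n_samples, n)).mean(axis=1)

    print(f"spread of sample means      : {means.std(ddof=1):.3f}")
    print(f"standard error sigma/sqrt(n): {sigma / np.sqrt(n):.3f}")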
Propagation of errors
➢ Let us now calculate the errors associated with a variable M = f(x, y, z), where x, y and z
are primarily measured quantities.
➢ By definition, we can write

dM = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz
➢ If the errors (δ) in x, y and z are known and they are very small compared to the values of x, y
and z, then we can also write δ ≈ d, and therefore we get

δM ≈ |∂f/∂x| δx + |∂f/∂y| δy + |∂f/∂z| δz,

where the expression on the r.h.s. gives the maximum possible error from all the measurements.
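As an illustration of this maximum-error formula, here is a small Python sketch for the earlier density example ρ = M/V, with hypothetical values for the measurements and their errors:

    # Hypothetical primary measurements and their small errors
    M, dM = 250.0, 0.5      # mass in g
    V, dV = 200.0, 1.0      # volume in cm^3

    rho = M / V
    # Partial derivatives of rho = M/V:  d(rho)/dM = 1/V,  d(rho)/dV = -M/V**2
    d_rho = abs(1.0 / V) * dM + abs(-M / V**2) * dV   # maximum possible propagated error
    print(f"rho = {rho:.4f} +/- {d_rho:.4f} g/cm^3")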
Propagation of errors
➢ For linear combinations the errors add up, e.g. if f(x, y, z) = ax + by − cz, then

δf = |a| δx + |b| δy + |c| δz
➢ But for a multiplicative function, e.g. f(x, y, z) = xy/z, the fractional errors add up:

δf/f = δx/x + δy/y + δz/z
➢ For a power law, where f(x) = A xᵇ, we get

δf/f = |b| δx/x

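A hypothetical worked example of the power-law rule (not from the slides) is the volume of a sphere, V = (4/3)πr³, for which b = 3:

    import math

    # Hypothetical power-law example: volume of a sphere, V = (4/3)*pi*r**3, i.e. b = 3
    r, dr = 2.00, 0.02                  # radius and its error, in cm
    V = (4.0 / 3.0) * math.pi * r**3

    frac = 3 * dr / r                   # dV/V = |b| * dr/r
    print(f"V = {V:.2f} cm^3, relative error = {frac:.1%}, dV = {V * frac:.2f} cm^3")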
Propagation of errors
➢ The aforesaid formalism makes sense when we know the amount of error (or at
least its upper bound) involved in measuring the independent quantities, e.g. in
the case of systematic errors or random instrumental errors.
➢ When we consider the statistical error, then one has to calculate the standard
deviation of the dependent variable f(x, y, z) in terms of the standard deviations of
the independent variables.
➢ If f(x, y, z) = ax + by − cz, then one can show that

σf² = a² σx² + b² σy² + c² σz² + 2ab cov(x, y) − 2ac cov(x, z) − 2bc cov(y, z)
➢ If x, y and z are independent, the covariance terms vanish and

σf² = a² σx² + b² σy² + c² σz²
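One way to convince oneself of this quadrature rule is a quick Monte Carlo check with hypothetical, independent Gaussian variables; the simulated spread of f should match the formula:

    import numpy as np

    rng = np.random.default_rng(1)
    a, b, c = 2.0, 3.0, 1.5                 # hypothetical coefficients
    sx, sy, sz = 0.10, 0.05, 0.20           # standard deviations of x, y, z
    N = 100_000

    # Independent Gaussian fluctuations around some central values
    x = rng.normal(5.0, sx, N)
    y = rng.normal(2.0, sy, N)
    z = rng.normal(4.0, sz, N)
    f = a * x + b * y - c * z

    print(f"simulated sigma_f : {f.std(ddof=1):.4f}")
    print(f"quadrature formula: {np.sqrt(a**2*sx**2 + b**2*sy**2 + c**2*sz**2):.4f}")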