Lecture 2 - Characteristics of MS
Performance Characteristics
◼ Static
◼ Dynamic
Static Characteristics of Measurement Systems
• The characteristics of a measuring instrument that do not vary with respect to time are called static characteristics.
• Sometimes, these quantities or parameters may vary slowly with respect to time.
✓ Accuracy:
▪ Accuracy is the closeness with which the measured value approaches an
accepted standard value or the true value of the measured quantity.
▪ Inaccuracy is the extent to which a reading might be wrong.
▪ Accuracy of the measured signal depends upon the following factors:
• Intrinsic accuracy of the instrument itself;
• Accuracy of the observer;
• Variation of the signal to be measured; and
• Whether or not the quantity is being truly impressed upon the instrument
✓ Precision:
▪ Precision is the closeness with which individual measurements are distributed
about the mean value.
▪ Precision is a measure of the reproducibility of the measurements, i.e., precision
is a measure of the degree to which successive measurements differ from one
another.
▪ Answers the question: how much do the measurements vary from trial to trial?
Accuracy vs Precision
The accuracy represents the degree of correctness of the
measured value with respect to the true value, while the precision
represents the degree of repeatability of several independent
measurements of the desired input at the same reference
conditions.
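The distinction can be sketched numerically: accuracy compares the readings against the true value, while precision looks only at their spread. The readings and true value below are hypothetical.

```python
# Sketch: distinguishing accuracy from precision with hypothetical
# repeated readings of a quantity whose true value is 100.0.
true_value = 100.0
readings = [100.1, 99.9, 100.0, 100.2, 99.8]

mean = sum(readings) / len(readings)
accuracy_error = abs(mean - true_value)   # closeness to the true value (accuracy)
spread = (sum((r - mean) ** 2 for r in readings) / len(readings)) ** 0.5  # scatter (precision)

print(f"mean = {mean:.2f}, error = {accuracy_error:.2f}, spread = {spread:.2f}")
```

A low spread with a large error would indicate a precise but inaccurate instrument, e.g. one with a systematic offset.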
Repeatability
▪ Describes the closeness of output readings when the same input is applied
repetitively over a short period of time, with the same measurement conditions,
same instrument and observer, same location and same conditions of use
maintained throughout
Reproducibility
▪ Describes the closeness of output readings for the same input when there are
changes in the method of measurement, observer, measuring instrument,
location, conditions of use and time of measurement.
Resolution
▪ The resolution of an instrument is the smallest change in the measured value to
which the instrument will respond.
▪ Thus, the resolution or discrimination of any instrument is the smallest change
in the input signal (quantity under measurement) which can be detected by the
instrument.
▪ For a digital instrument, it depends on the number of digits on the display.
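For a digital display, the resolution is the step represented by the last digit. The example below assumes a hypothetical 3½-digit meter reading up to 1.999 V:

```python
# Sketch: resolution of a digital display as the smallest step it can show.
# Assumed example: a meter reading up to 1.999 V with 3 decimal places.
full_scale = 1.999       # volts (hypothetical 3 1/2-digit meter)
decimal_places = 3
resolution = 10 ** -decimal_places   # smallest change in the last digit

print(f"resolution = {resolution} V")   # 0.001 V, i.e. 1 mV
```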
▪ Range or Span
▪ The range of an instrument refers to the minimum and maximum values of
the input variable for which it has been designed to measure.
▪ The range chosen should be such that readings are large enough to achieve
the required precision.
▪ Bandwidth
▪ The bandwidth of an instrument is the difference between the minimum and
maximum frequencies for which it has been designed.
▪ If the signal is outside the bandwidth of the instrument, it will not be able to
follow changes in the quantity being measured.
▪ Sensitivity
▪ The sensitivity of measurement is a measure of the change in instrument
output that occurs when the quantity being measured changes by a given
amount. The ratio of the change in output to the corresponding change in
input is defined as static sensitivity K.
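The definition of static sensitivity can be illustrated with hypothetical numbers, e.g. a thermocouple whose output rises 0.5 mV for a 10 °C rise in temperature:

```python
# Sketch: static sensitivity K as the ratio of output change to input change.
# The thermocouple figures below are assumed values for illustration.
delta_output_mv = 0.5    # change in instrument output (mV)
delta_input_c = 10.0     # corresponding change in measured quantity (deg C)

K = delta_output_mv / delta_input_c   # static sensitivity, mV per deg C
print(f"K = {K} mV/degC")             # K = 0.05 mV/degC
```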
Tolerance
Tolerance is a term that is closely related to accuracy and defines the maximum error
that is to be expected in some value.
While it is not, strictly speaking, a static characteristic of measuring instruments, it is
mentioned here because the accuracy of some instruments is sometimes quoted as a
tolerance value.
When used correctly, tolerance describes the maximum deviation of a manufactured
component from some specified value.
For instance, crankshafts are machined with a diameter tolerance quoted as so many
microns (10⁻⁶ m), and electric circuit components such as resistors have tolerances of
perhaps 5%.
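The resistor example can be turned into a simple acceptance check; the measured value below is hypothetical:

```python
# Sketch: checking whether a manufactured component lies within its quoted
# tolerance. Assumed example: a 100-ohm resistor with a 5% tolerance.
nominal_ohms = 100.0
tolerance = 0.05                      # 5% quoted by the manufacturer
measured_ohms = 103.2                 # hypothetical measured value

deviation = abs(measured_ohms - nominal_ohms)
within_tolerance = deviation <= nominal_ohms * tolerance
print(within_tolerance)               # True: 3.2 ohm deviation < 5 ohm limit
```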
Dead-band
Is the largest change of input to which the system does not respond (no change in
output value).
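A dead-band can be modeled as a threshold below which input changes are simply ignored; the width used here is an assumed value:

```python
# Sketch: a dead-band means input changes smaller than some threshold
# produce no change in output. The width below is a hypothetical value.
DEAD_BAND = 0.5   # largest input change the system ignores (assumed units)

def output_change(input_change):
    """Return the output change for a given input change, with a dead-band."""
    if abs(input_change) <= DEAD_BAND:
        return 0.0                    # inside the dead-band: no response
    return input_change               # outside: the system follows the input

print(output_change(0.3))   # 0.0  (no response)
print(output_change(2.0))   # 2.0  (system responds)
```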
Threshold, Hysteresis, Drift, Backlash, Linearity
Dynamic Characteristics
The characteristics of instruments used to measure quantities or parameters
that vary quickly with respect to time are called dynamic characteristics.
The dynamic characteristics of a measuring instrument describe its behavior
between the time a measured quantity changes value and the time when the
instrument output attains a steady value in response.
Speed of response
This is defined as the rapidity with which a measurement system responds to
changes in the measured quantity.
Measuring lag
It is the retardation or delay in the response of a measurement system to changes
in the measured quantity.
Fidelity
This is the degree to which a measurement system indicates changes in the
measured quantity without any dynamic error.
Dynamic error
It is the difference between the true value of the quantity being measured
changing with time and the value indicated by the measurement system.
The dynamic characteristics of a measurement system are governed
by its dynamic classification.
1. Zero order instruments (potentiometer)
2. First Order Instruments (thermocouple)
3. Second Order Instruments (accelerometer)
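As a sketch of the first-order case, a thermocouple subjected to a step change in temperature approaches the new value exponentially; the time constant and step size below are assumed values:

```python
import math

# Sketch: a first-order instrument (e.g. a thermocouple) responding to a
# step change in the measured quantity. Parameters are hypothetical.
tau = 2.0            # time constant in seconds (assumed)
step = 100.0         # size of the step change in the measured quantity

def response(t):
    """Indicated value t seconds after the step (first-order model)."""
    return step * (1.0 - math.exp(-t / tau))

# After one time constant the instrument indicates ~63.2% of the step.
print(round(response(tau), 1))    # 63.2
```

A zero-order instrument (e.g. a potentiometer) would indicate the step instantly, while a second-order instrument (e.g. an accelerometer) can also oscillate about the final value.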
Calibration of instruments
Calibration is the process of configuring an instrument to
provide a result for a sample within an acceptable range.
When do instruments need to be calibrated?
Before major critical measurements that require highly accurate data,
the instrument should remain unused between calibration and the test.
Measurement Errors
There is no measurement that can be made with perfect accuracy therefore there will
always be some errors.
An absolute error is defined as the difference between the measured and absolute (true)
value.
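The definition of absolute error, and the relative error derived from it, can be sketched with hypothetical values:

```python
# Sketch: absolute error as the difference between measured and true values,
# using hypothetical numbers.
true_value = 50.0
measured_value = 49.7

absolute_error = abs(measured_value - true_value)
relative_error = absolute_error / true_value      # often quoted as a percentage
print(round(absolute_error, 3))   # 0.3
print(f"{relative_error:.1%}")    # 0.6%
```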
Errors in measurement systems can be divided into those that arise during the
measurement process and those that arise due to later corruption of the measurement
signal by induced noise during transfer of the signal from the point of measurement to
some other point.
It is extremely important in any measurement system to reduce errors to the minimum
possible level and then to quantify the maximum remaining error that may exist in any
instrument output reading.
Errors are usually classified into:
1. Gross errors
2. Systematic errors
3. Random Errors
Gross Errors
These errors occur because of mistakes in observing readings, in using
instruments, and in recording and calculating measurement results.
This class of errors mainly covers human mistakes in reading instruments and
calculating measurement results.
The experimenter may grossly misread the scale due to an oversight or transpose the
reading while recording.
Sometimes, the gross errors may also occur due to improper selection of the
instrument.
More than one reading should be taken for the quantity under measurement.
Choose the most suitable instrument, based on the range of values to be measured.
Systematic Errors
These are errors in the output readings of a measurement
system that are consistently on one side of the correct
reading, that is, either all the errors are positive or they
are all negative.
Systematic errors in the output of many instruments are
due to factors inherent in the manufacture of the
instrument arising out of tolerances in the components of
the instrument.
Systematic Errors can emanate from different sources.
Sources of Systematic Errors
The main sources of systematic error in the output of
measuring instruments can be summarized as:
1. Effect of environmental disturbances, often called
modifying inputs
2. Instrumental errors
3. Observation errors
4. Changes in characteristics due to wear in instrument
components over a period of time
5. Resistance of connecting leads
Errors due to Environmental Inputs
The environmental conditions surrounding the measurement system can
lead to errors.
In different environments, the characteristics of measuring instruments vary
to some extent and cause measurement errors.
System designers are therefore charged with the task of either reducing the
susceptibility of measuring instruments to environmental inputs or,
alternatively, quantifying the effect of environmental inputs and correcting
for them in the instrument output reading.
Minimizing Environment Errors
Keeping conditions, e.g. temperature, as nearly constant
as possible
Instrumental errors:
These are due to;
Inherent shortcomings in the instruments; these may be due to construction,
calibration and operation of the measuring devices.
Misuse of the instruments; a good instrument used in a wrong way.
Loading effects, e.g. use of a voltmeter on a high-resistance circuit.
They can be reduced by;
✓ Calibration before measurement.
✓ Proper use of the measuring equipment
✓ Continuously monitoring for faults by checking for erratic behavior,
reproducibility and stability of results.
✓ Loading effects should be considered while planning any measurement.
Observational Errors
This type of error occurs due to the observer while taking meter readings.
❖ Systematic errors can frequently develop over a period of time because of wear in
instrument components.
❖ Recalibration often provides a full solution to this problem.
Connecting Leads
Not only should they be of adequate cross-section so that their resistance is minimized,
but they should be adequately screened if they are thought likely to be subject to electrical
or magnetic fields that could otherwise cause induced noise.
Random Errors
Errors caused by unpredictable variations in the measurement system,
e.g. electrical noise.
They can be reduced by calculating the average of a number of
repeated measurements, provided that the measured quantity
remains constant during the process of taking the repeated
measurements.
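This averaging effect can be demonstrated with simulated noisy readings; the noise level and true value below are assumed for illustration:

```python
import random

# Sketch: averaging repeated readings reduces random error, provided the
# measured quantity stays constant. The noise here is simulated.
random.seed(0)                        # fixed seed for repeatability
true_value = 10.0
readings = [true_value + random.gauss(0, 0.5) for _ in range(100)]

single_error = abs(readings[0] - true_value)
mean_error = abs(sum(readings) / len(readings) - true_value)
print(f"error of one reading:     {single_error:.3f}")
print(f"error of mean of 100:     {mean_error:.3f}")
```

With zero-mean random noise, the error of the mean shrinks roughly as the square root of the number of readings.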
Common sources of noise in Instrumentation systems
Capacitive (electrostatic) coupling
Noise due to multiple earths
This is a condition caused when machinery drawing large currents,
connected to the same earth plane, causes the potential to
vary between different points on the earth plane.
Noise in the form of voltage transients
When motors and other electrical equipment are switched on and off,
large changes of power consumption suddenly occur in the electricity
supply system.
Electrochemical Potentials:
Statistical analysis of measurements
Since random errors vary from trial to trial, we have to use
statistical or data-processing methods to reduce them.
The most important statistical operators are:
1. Mean - Sum of all values divided by the number of
quantities.
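The definition of the mean translates directly into code; the values below are hypothetical repeated measurements:

```python
# Sketch: the mean of a set of repeated measurements (hypothetical values).
values = [4.1, 3.9, 4.0, 4.2, 3.8]
mean = sum(values) / len(values)   # sum of all values / number of quantities
print(round(mean, 2))              # 4.0
```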
Self Assessment
1. Outline the necessity of having units in measurements and also briefly
describe different classes of SI units.
5. Distinguish between gross error, systematic error and random error with
examples. What are the methods for their elimination/reduction?