Unit 1
The science of measurement is known as metrology. Measurement is carried out to determine whether a manufactured component meets its requirements. The quantities measured are mainly length, mass, time, angle, temperature, squareness, roundness, roughness, parallelism, etc. To measure any quantity there must be a unit in which to measure and express it.
Measurement is defined as the act or process of obtaining a quantitative comparison between a predefined standard and a measurand (an unknown magnitude); in other words, it is the process of associating numbers with physical quantities and phenomena.
Measurement is fundamental to the sciences; to engineering, construction, and other technical fields; and to almost all everyday activities.
Hysteresis:
It is the difference between the indications of a measuring instrument when the same value of the measured quantity is reached by increasing or by decreasing that quantity. The phenomenon of hysteresis is due to the presence of dry friction as well as to the properties of elastic elements. It results in the loading and unloading curves of the instrument being separated by a difference called the hysteresis error, and in the pointer not returning completely to zero when the load is removed. Hysteresis is particularly noted in instruments having elastic elements. In materials, the phenomenon is due mainly to the presence of internal stresses, and it can be reduced considerably by proper heat treatment.
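As a rough illustration, the hysteresis error at each test point can be computed as the gap between the loading and unloading readings. A minimal Python sketch, with assumed readings that stand in for a real calibration run:

```python
# A minimal sketch (illustrative data, not from any real instrument):
# the hysteresis error at each test point is the difference between the
# reading taken while loading and the reading taken while unloading.

loading   = [0.00, 2.10, 4.15, 6.20, 8.10, 10.00]  # readings, input increasing
unloading = [0.30, 2.45, 4.50, 6.50, 8.30, 10.00]  # readings, input decreasing

hysteresis_errors = [u - l for l, u in zip(loading, unloading)]
max_hysteresis = max(abs(e) for e in hysteresis_errors)

print(f"hysteresis error per point: {hysteresis_errors}")
print(f"maximum hysteresis error:   {max_hysteresis}")
```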
Sensitivity:
It is defined as the ratio of the linear movement of the pointer on the instrument (output) to the change in the measured variable (input) causing that motion. For example, if the pointer moves 10 mm for a pressure change of 1 bar, the sensitivity is 10 mm/bar.
The sensitivity of an instrument should be high, and the instrument's range should not greatly exceed the value to be measured; however, some margin should be kept for accidental overloads.
Resolution:
Resolution, also called discrimination, is defined as the smallest increment of the input signal that a measuring system is capable of displaying.
Threshold:
If the instrument input is increased very gradually from zero, there will be some minimum value below which no output change can be detected. This minimum value defines the threshold of the instrument.
The main differences between threshold and resolution are:
• Resolution defines the smallest measurable input change, while threshold defines the smallest measurable input.
• The threshold is measured when the input is varied from zero, while the resolution is measured when the input is varied from an arbitrary non-zero value.
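To make the distinction concrete, here is a toy Python model of an instrument display; the threshold, resolution, and rounding behaviour are all illustrative assumptions, not properties of any real instrument:

```python
# A toy model (assumed values) contrasting threshold and resolution:
# the instrument shows no output until the input exceeds the threshold,
# and then displays the input rounded to the nearest resolution step.

THRESHOLD = 0.05   # smallest measurable input (assumed)
RESOLUTION = 0.10  # smallest displayable input change (assumed)

def displayed_value(true_input: float) -> float:
    if abs(true_input) < THRESHOLD:
        return 0.0                      # below threshold: no detectable output
    return round(true_input / RESOLUTION) * RESOLUTION  # quantised display

for x in (0.03, 0.06, 0.14, 0.17):
    print(f"input {x:0.2f} -> display {displayed_value(x):0.2f}")
```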
Linearity
Linearity is an indicator of the consistency of measurements over the entire range of measurements. In
general, it is a good indicator of performance quality of a sensor, but on its own, it can be a misleading
indicator. In simple terms, linearity tells us how well the instrument measurement corresponds to
reality.
Drift
Drift can be defined (VIM) as a slow change in the response of a gauge. It is a particular concern for instruments used as comparators in calibration: short-term drift can be a problem for comparator measurements, and the cause is frequently heat build-up in the instrument during the time of measurement.
Zero stability
The zero stability is a measure of the meter's sensitivity and is determined by the manufacturer. The flow accuracy is defined as the zero stability divided by the mass flowrate. Each meter size and type has a pressure-drop characteristic curve that is prepared by the manufacturer.
Loading effect
When an instrument of lower sensitivity is used, the act of measurement itself disturbs the quantity being measured, so the measurement it makes is incorrect; this effect is known as the loading effect. Loading effect in a voltmeter: when a voltmeter is connected across a resistor to measure voltage, the voltmeter draws current for its own working, which alters the very voltage being measured.
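A minimal Python sketch of this effect on a resistive voltage divider; all component values, including the voltmeter's input resistance, are assumed for illustration:

```python
# A minimal sketch of the voltmeter loading effect on a resistive divider.
# All component values are assumed for illustration.

V_SUPPLY = 10.0      # supply voltage (V)
R1 = 10_000.0        # upper divider resistor (ohm)
R2 = 10_000.0        # resistor whose voltage we measure (ohm)
R_METER = 100_000.0  # voltmeter input resistance (ohm)

def parallel(a: float, b: float) -> float:
    return a * b / (a + b)

v_true = V_SUPPLY * R2 / (R1 + R2)                   # voltage without the meter
r2_loaded = parallel(R2, R_METER)                    # meter in parallel with R2
v_measured = V_SUPPLY * r2_loaded / (R1 + r2_loaded)

print(f"true voltage:     {v_true:.3f} V")      # 5.000 V
print(f"measured voltage: {v_measured:.3f} V")  # ~4.762 V
print(f"loading error:    {100 * (v_measured - v_true) / v_true:.2f} %")
```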
System response
When the measured variable of a measuring instrument in control system technology changes from one steady-state value to a second steady-state value, the input is a step signal, and the response shown by the output of the measuring instrument is called the step response.
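As an illustration, many instruments (a thermometer, for instance) are well approximated by a first-order model, whose step response rises exponentially toward the new steady-state value; the time constant and step size below are assumed:

```python
# A minimal sketch of the step response of a first-order instrument,
# y(t) = y_final * (1 - exp(-t / tau)); TAU and STEP_VALUE are assumed.

import math

TAU = 2.0           # time constant in seconds (assumed)
STEP_VALUE = 100.0  # second steady-state value (assumed units)

def step_response(t: float) -> float:
    """Output of a first-order system after a step input at t = 0."""
    return STEP_VALUE * (1.0 - math.exp(-t / TAU))

# Output reaches ~63.2% of the final value at t = TAU, ~99.3% at t = 5*TAU.
for t in (0.0, TAU, 2 * TAU, 5 * TAU):
    print(f"t = {t:4.1f} s -> output = {step_response(t):6.2f}")
```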
Dead Zone
Definition:
It is the largest change of input quantity for which there is no output of the instrument.
Dead Time
Definition:
Time required by an instrument to begin to respond to the change in a measurand.
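The two definitions can be contrasted with a toy Python model; the dead-zone width and dead-time delay below are assumed values, and the response outside them is idealized:

```python
# A toy model (assumed numbers) contrasting dead zone and dead time.

DEAD_ZONE = 0.5  # largest input change producing no output (assumed units)
DEAD_TIME = 0.2  # delay before the instrument begins to respond (s, assumed)

def output(input_change: float, elapsed_time: float) -> float:
    if elapsed_time < DEAD_TIME:
        return 0.0               # instrument has not yet started to respond
    if abs(input_change) <= DEAD_ZONE:
        return 0.0               # change lies inside the dead zone
    return input_change          # idealised response otherwise

print(output(0.4, 1.0))  # 0.0 -> inside the dead zone
print(output(2.0, 0.1))  # 0.0 -> within the dead time
print(output(2.0, 1.0))  # 2.0 -> instrument responds
```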
Measurement methods
These are the methods of comparison used in the measurement process. In precision measurement, various methods of measurement are adopted depending upon the accuracy required and the amount of permissible error.
1. Direct method
2. Indirect method
3. Fundamental (absolute) method
4. Comparative method
5. Transposition method
6. Coincidence method
7. Deflection method
8. Complementary method
9. Contact method
3. Fundamental (absolute) method:
It is based on the measurement of the base quantities used to define the quantity: either measuring a quantity directly in accordance with the definition of that quantity, or measuring it indirectly by direct measurement of the quantities linked with its definition.
4. Comparative method:
In this method the value of the quantity to be measured is compared with a known value of the same quantity, or of another quantity practically related to it. Only the deviations from a master gauge are determined, e.g., with dial indicators or other comparators.
5. Transposition method:
In this method the value of the quantity to be measured is first balanced by an initial known value A of the same quantity; the quantity to be measured is then put in place of that known value and balanced again by another known value B. If the indicating position is the same in both cases, the value of the quantity is √(A·B), e.g., determining a mass by means of a balance and known weights.
6. Coincidence method:
This is a differential method of measurement in which a very small difference between the value of the quantity to be measured and a reference is determined by the observation of the coincidence of scale marks or signals, e.g., reading a vernier calliper.
7. Deflection method:
In this method the value of the quantity to be measured is indicated directly by the deflection of a pointer on a calibrated scale.
8. Complementary method:
In this method the value of the quantity to be measured is combined with a known value of the same quantity. The combination is so adjusted that the sum of these two values is equal to a predetermined comparison value. For example, determination of the volume of a solid by liquid displacement.
i) The standard used for comparison purposes must be accurately defined and commonly accepted.
ii) The apparatus used and the method adopted must be provable.
• The function of this element is to manipulate the signal presented to it while preserving the original nature of the signal.
• It is not necessary that a variable manipulation element should follow the variable conversion element.
• Some non-linear processes like modulation, detection, sampling, filtering, chopping etc. are performed on the signal to bring it to the desired form to be accepted by the next stage of the measurement system. This process of conversion is called 'signal conditioning'.
• The term signal conditioning includes many other functions in addition to variable conversion & variable manipulation.
• In fact, the element that follows the primary sensing element in any instrument or measurement system is called the 'signal conditioning element'.
• When the elements of an instrument are physically separated, it becomes necessary to transmit data from one to another. The element that performs this function is called a 'data transmission element'.
• The information about the quantity under measurement has to be conveyed to the personnel handling the instrument or the system for monitoring, control, or analysis purposes.
• This function is done by the data presentation element.
• In case data is to be monitored, visual display devices are needed.
• These devices may be analog or digital indicating instruments like ammeters, voltmeters etc.
• In case data is to be recorded, recorders like magnetic tapes, high-speed cameras & TV equipment, CRTs, and printers may be used.
• For control & analysis purposes, microprocessors or computers may be used.
• The final stage in a measurement system is known as the 'terminating stage'.
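As a rough illustration of these stages, the Python sketch below chains a primary sensing element, a variable manipulation (amplification) element, a data transmission element, and a data presentation element; the thermocouple-like sensitivity and gain figures are assumed purely for illustration:

```python
# A minimal sketch of the measurement-system stages described above,
# chained as simple functions. All numbers and conversions are assumed.

def primary_sensing(temperature_c: float) -> float:
    """Thermocouple-like sensor: temperature -> small voltage (assumed 40 uV/degC)."""
    return temperature_c * 40e-6

def variable_manipulation(volts: float) -> float:
    """Amplify the signal without changing its nature (assumed gain of 1000)."""
    return volts * 1000.0

def data_transmission(volts: float) -> float:
    """Carry the signal to the terminating stage (lossless here for simplicity)."""
    return volts

def data_presentation(volts: float) -> str:
    """Indicate the measured value (display scaled back to degC)."""
    return f"{volts / 0.040:.1f} degC"

signal = primary_sensing(25.0)
signal = variable_manipulation(signal)
signal = data_transmission(signal)
print(data_presentation(signal))  # -> "25.0 degC"
```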
Static performance characteristics
2. Static Calibration
Static calibration refers to the procedure in which all inputs except one are held constant; the input under study is varied over a range of constant values and the corresponding outputs are recorded. Instruments are manufactured based on the property of irreversibility or directionality, which implies that a change in an input quantity will cause a corresponding change in the output. A calibration standard must be at least ten times more accurate than the instrument to be calibrated.
Static performance characteristics include the linearity of the instrument, the static sensitivity of the instrument, the repeatability of results, hysteresis, resolution, and the readability of the results.
Static performance characteristics influence data acquisition if the instruments are not properly
calibrated prior to the measurement. Understanding quality control and quality assurance procedures for
handling equipment is essential for the PE exam.
3. Linearity
If the relationship between the output and input can be expressed by the equation Q0 = P + R·Q1, where P and R are constants, then the instrument is considered linear. Linearity is never fully achieved in real-world situations, and the deviations from the ideal are referred to as linearity tolerances. For example, 3% independent linearity means that the output will remain within the values set by two parallel lines spaced ±3% of the full-scale output from the idealized line. If the input-output relationship is not linear for an instrument, it may still be approximated to a linear form when the instrument is used over a very restricted range.
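A minimal Python sketch of such a linearity check, using assumed calibration data: the least-squares line Q0 = P + R·Q1 is fitted and the deviations are compared against a ±3% of full-scale band. The fitted slope R is also the static sensitivity discussed next:

```python
# A minimal sketch of checking independent linearity against a +/-3% of
# full-scale band. Calibration data are assumed for illustration.

import numpy as np

q_in  = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])  # input values (assumed)
q_out = np.array([0.1, 2.2, 4.0, 6.1, 7.9, 10.0])  # output readings (assumed)

# Fit Q0 = P + R*Q1 by least squares; R is also the static sensitivity.
R, P = np.polyfit(q_in, q_out, 1)

deviations = q_out - (P + R * q_in)
full_scale = q_out.max() - q_out.min()
tolerance = 0.03 * full_scale                       # +/-3% of full-scale output

print(f"fitted line: Q0 = {P:.3f} + {R:.3f} * Q1")
print(f"max deviation: {np.abs(deviations).max():.3f} "
      f"(tolerance +/-{tolerance:.3f})")
print("within 3% independent linearity:",
      bool(np.abs(deviations).max() <= tolerance))
```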
4. Static Sensitivity
Sensitivity = ∆Q0/∆Q1, i.e., the slope of the calibration curve (the constant R in the linear equation above).
If the sensitivity varies with operating conditions such as temperature, the variation is referred to as sensitivity drift or scale-factor drift.
5. Repeatability Error
When an instrument is used to measure the same or an identical input many times and at different time
intervals, the output is never the same; it deviates from the recorded values. This deviation from the
ideal value is referred to as repeatability error.
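Repeatability is commonly quantified as the spread (standard deviation) of repeated readings of the same input. A minimal Python sketch, with assumed readings:

```python
# A minimal sketch: repeatability quantified as the standard deviation
# of repeated readings of the same input. Data are assumed.

import statistics

readings = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03]  # same input, repeated

mean_reading = statistics.mean(readings)
repeatability = statistics.stdev(readings)           # sample standard deviation

print(f"mean reading:  {mean_reading:.3f}")
print(f"repeatability: +/-{repeatability:.3f}")
```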
6. Hysteresis-Threshold Resolution
When testing an instrument for repeatability, it is often noted that the output values for the same inputs do not coincide when those inputs are approached through continuously ascending and then continuously descending values. This occurs because of hysteresis, which is caused by internal friction, sliding and external friction, and free play in mechanisms. Hysteresis can be largely eliminated by taking readings corresponding to the ascending and descending values of the input and calculating their arithmetic mean.
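A minimal sketch of this averaging trick, with assumed ascending and descending readings at the same test points:

```python
# A minimal sketch of the averaging described above: pair the ascending
# and descending readings at each test point and take their arithmetic
# mean. Readings are assumed for illustration.

ascending  = [0.00, 2.10, 4.15, 6.20, 8.10]
descending = [0.30, 2.45, 4.50, 6.50, 8.30]

corrected = [(a + d) / 2 for a, d in zip(ascending, descending)]
print(corrected)  # hysteresis-compensated estimates at each test point
```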
Professional engineers who work with measurements and instrumentation should understand calibration
procedures of various instruments for proper data acquisition.
1. Systematic errors: The type of error which affects the results of the experiment always in the same direction, i.e., makes the obtained result always higher or always lower than the true value, is known as a systematic error. In fact, all instrumental errors are systematic. If the graduations of a meter scale are faulty, or if the measurements are carried out with a scale at a temperature other than that at which it was calibrated, a systematic error will be introduced.
(i) Instrumental errors, examples of which are the zero error of a screw gauge or vernier calliper, the end error in a meter bridge, etc.
(ii) Personal errors, which arise from the individual bias or habits of the observer.
(iii) Errors due to external causes, such as changes in temperature, pressure, velocity, height, etc.
Systematic errors are usually determinate, so they can be eliminated by taking proper precautions or can be corrected for. However, when the source of such errors cannot be properly identified, the experiment is repeated by different methods.
2. Random or accidental errors: The results of several measurements of the same quantity by the same observer under identical conditions do not in general show exact agreement but differ from one another by a small amount. The instrument may be a very good and sensitive one and the observer very careful, yet such small differences in the results generally occur. No definite cause for such errors can be traced; their sources are unknown and uncontrollable. Such errors are, therefore, purely accidental in nature and are termed random or accidental errors. An error that occurs randomly, and whose causes are unknown and indeterminate, is called a random error.
3. Gross errors: These are large errors that occur due to carelessness or undue haste on the part of the observer, and are also termed mistakes. The wrong recording of some data may be cited as an example. Mistakes obviously do not follow any law and can be avoided only by constant vigilance and careful observation by the observer.
In all measurements, even after minimizing systematic and random errors, errors of observation inherent in the manufacture of the instrument used remain present. The scale of a measuring instrument is divided by the manufacturer only to its limit of reliability and no further. We already know that the smallest change that we can clearly detect with the instrument is called its least count.
This gives the worst possible error which might occur in measurements with that instrument. So in all measurements, the degree of accuracy attainable is limited by the least counts of the different instruments used. For example, a meter scale is usually graduated in millimeters; hence the greatest error which might be committed at each scale reading is 1 mm. Since measuring the length of a rod involves readings at both of its ends, the limits of error add up to 2 mm, i.e., 0.2 cm.
The result of measurement of the length of a rod should therefore be expressed as: length of the rod = 22.4 ± 0.2 cm. This is the scientific method of recording a reading with its limits of error. It means that the length of the rod lies between 22.2 cm and 22.6 cm. Such errors are known as errors of observation or permissible errors.
Therefore, in general, if the measured value of a quantity is x and the limits of error are ∆x, then the reading should be written as x ± ∆x, which means that the value of the quantity lies between x + ∆x and x − ∆x.
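A small Python helper (hypothetical, purely for illustration) that formats a reading in this x ± ∆x convention:

```python
# A minimal sketch of recording a reading with its limits of error,
# following the x +/- dx convention above (values assumed).

def report(x: float, dx: float, unit: str = "cm") -> str:
    lo, hi = x - dx, x + dx
    return f"{x:g} ± {dx:g} {unit} (i.e., between {lo:g} {unit} and {hi:g} {unit})"

print(report(22.4, 0.2))  # -> 22.4 ± 0.2 cm (i.e., between 22.2 cm and 22.6 cm)
```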