Module 1 Unit 1 21
METROLOGY
MEASUREMENT
“Whatever exists, exists in some amount”
• The determination of the amount is what measurement is all
about.
• Measurement is a process of obtaining a quantitative
comparison between a predefined standard and a
measurand.
OR
• Measurement is the process of comparing quantitatively
an unknown magnitude with a predefined standard.
• The word measurand is used to designate the input quantity
to the measuring process.
FUNDAMENTAL MEASURING PROCESS
The International System of Units, or SI System.
Derived Units in SI
U.S. SYSTEM vs. METRIC SYSTEM
Metrology (from Ancient Greek metron (measure) and
logos (study of)) is the science of measurement.
Metrology includes all theoretical and practical aspects of
measurement.
Tertiary standard:
Used for reference purposes in laboratories and
workshops, and for comparison with the working
standard.
STANDARDS OF MEASUREMENTS
† There are two standard measurement systems in use
throughout the world: the English and the metric (yard and
meter).
† Owing to the advantages of the metric system, most countries
have adopted the metric standard, with the meter as the
fundamental unit of linear measurement.
Length can be measured by
1. Line standard
2. End standard
3. Wavelength standard
LINE STANDARD
CHARACTERISTICS OF LINE STANDARDS
1. Scales can be accurately engraved.
Example: A steel rule can be read to about ±0.2 mm of the true
dimension.
2. A scale is quick and easy to use over a wide range of
measurements.
3. The scale markings are not subject to wear, although
significant wear on the leading end results in
"undersizing".
4. Scales are subject to the parallax effect, which is a source of
both positive and negative reading errors.
5. Scales are not convenient for close-tolerance length
measurements except in conjunction with microscopes.
IMPERIAL STANDARD YARD
† The imperial standard yard is a bronze bar of one-inch
square cross-section and 38 inches long.
† A round recess is cut one inch from each end, extending
down to the central plane of the bar.
† A gold plug 0.1 inch in diameter, with three lines engraved
transversely and two lines longitudinally, is inserted into each of
these holes so that the lines lie in the neutral plane.
† The yard is then defined as the distance between the two central
transverse lines of the plugs at 62 °F.
† The gold-plug lines are kept in the neutral plane because, when
the bar bends as a beam, the neutral plane remains unaffected in length.
1 yard = 0.9144 meter
INTERNATIONAL PROTOTYPE METER
† This is the distance between the central portions of the two
lines engraved on the polished surface of a bar of pure
platinum (90%)-iridium (10%) alloy, which is non-oxidizable
and retains a good polished surface.
† The bar is kept at 0 °C and under normal atmospheric
pressure.
† It is supported by two rollers of at least 1 cm diameter,
symmetrically situated in the same horizontal plane at a
distance of 571 mm, so as to give minimum deflection.
† It has the shape of a winged section (Tresca cross-section) with a
web whose surface lines lie on the neutral axis.
† The shape gives maximum rigidity.
† The overall width and depth are 16 mm each.
† This standard is kept at the BIPM (Bureau International des
Poids et Mesures, the International Bureau of Weights and
Measures) at Sèvres, near Paris.
† On this basis one yard was equal to 0.91439841 m. As the
American yard was longer by four parts in a million, the
international yard was adopted as 0.9144 m.
AIRY POINTS
In order to minimize even the slightest error in the neutral axis due
to the end supports, the supports must be placed such that
the slope at the ends is zero and the flat end faces of the bar
are mutually parallel.
Sir G. B. Airy showed that this condition is obtained
when the distance d between the supports is

d = L / √(n² − 1)

where
n → number of supports
L → length of the bar
For a simply supported beam (n = 2), the expression becomes

d = L / √3 = 0.577 L
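Airy's condition, support spacing d = L/√(n² − 1), is easy to evaluate numerically. A minimal sketch (the function name is ours):

```python
import math

def airy_spacing(length, n_supports=2):
    """Support spacing d = L / sqrt(n^2 - 1) for zero slope at the bar ends."""
    return length / math.sqrt(n_supports ** 2 - 1)

# A 1 m bar resting on two supports:
d = airy_spacing(1.0)
print(round(d, 3))  # 0.577
```

For two supports this reproduces the 0.577 L spacing quoted for a simply supported bar.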
Calibration
The calibration of all instruments is important, for it affords the opportunity to check
the instrument against a known standard and subsequently to reduce errors in accuracy.
Calibration procedures involve a comparison of the particular instrument with either (1)
a primary standard, (2) a secondary standard with a higher accuracy than the instrument
to be calibrated, or (3) a known input source. For example, a flowmeter might be
calibrated by (1) comparing it with a standard flow-measurement facility of the National
Institute of Standards and Technology (NIST), (2) comparing it with another flowmeter
of known accuracy, or (3) directly calibrating with a primary measurement such as
weighing a certain amount of water in a tank and recording the time elapsed for this
quantity to flow through the meter.
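Method (3), direct calibration against a primary measurement, amounts to simple arithmetic. A sketch with hypothetical values:

```python
# Direct calibration of a flowmeter: weigh the water collected in a tank
# and record the elapsed time (all values hypothetical).
mass_kg = 25.0       # water collected in the tank
elapsed_s = 50.0     # time for this quantity to flow through the meter
true_flow = mass_kg / elapsed_s      # primary measurement, kg/s
meter_reading = 0.52                 # flow indicated by the meter, kg/s
error_pct = 100.0 * (meter_reading - true_flow) / true_flow
print(f"true flow {true_flow} kg/s, meter error {error_pct:.1f}%")
```

The resulting error figure is exactly the kind of correction a calibration check establishes before the meter's readings are accepted.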
The importance of calibration cannot be overemphasized because it is calibration that
firmly establishes the accuracy of the instruments. Rather than accept the reading of an
instrument, it is usually best to make at least a simple calibration check to be sure of the
validity of the measurements.
Standards
In order that investigators in different parts of the country and different parts of the
world may compare the results of their experiments on a consistent basis, it is necessary
to establish certain standard units of length, weight, time, temperature, and electrical
quantities. NIST has the primary responsibility for maintaining these standards in the
United States. The meter and the kilogram are considered fundamental units upon
which, through appropriate conversion factors, the English system of length and mass
is based. At one time, the standard meter was defined as the length of a platinum-iridium
bar maintained under very accurately controlled conditions at the International Bureau of
Weights and Measures in Sèvres, France. Similarly, the kilogram was defined in terms of a
platinum-iridium mass maintained at this same bureau. The conversion factors for the
English and metric systems in the United States are fixed by law as
1 meter = 39.37 inches
1 pound-mass = 453.59237 grams
Standards of length and mass are maintained at NIST for calibration purposes. In 1960
the General Conference on Weights and Measures defined the standard meter in terms
of the wavelength of the orange-red light of a krypton-86 lamp. The standard meter was
thus 1 meter = 1,650,763.73 wavelengths of this light. In 1983 the definition of the meter
was changed to the distance light travels in 1/299,792,458 of a second. For the
measurement, light from a helium-neon laser illuminates iodine, which fluoresces at a
highly stable frequency.
The inch is exactly defined as
1 inch = 2.54 centimeters
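The legally fixed factors quoted above can be captured directly in code; a small sketch (constant and function names are ours):

```python
# Conversion factors fixed by law, as quoted in the text
CM_PER_INCH = 2.54            # exact: 1 inch = 2.54 cm
GRAMS_PER_LBM = 453.59237     # exact: 1 lbm = 453.59237 g

def inches_to_cm(inches):
    return inches * CM_PER_INCH

def lbm_to_grams(lbm):
    return lbm * GRAMS_PER_LBM

print(inches_to_cm(1.0))      # 2.54
print(lbm_to_grams(1.0))      # 453.59237
```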
Standard units of time are established in terms of known frequencies of oscillation of
certain devices. One of the simplest devices is a pendulum. A torsional vibrational
system may also be used as a standard of frequency. Prior to the introduction of quartz
oscillator–based mechanisms, torsional systems were widely used in clocks and
watches. Ordinary 60-hertz (Hz) line voltage may be used as a frequency standard under
certain circumstances. The fundamental unit of time, the second (s), has been defined in
the past as 1/86,400 of a mean solar day. The solar day is measured as the time interval between
two successive transits of the sun across a meridian of the earth. The time interval varies
with location of the earth and time of year; however, the mean solar day for one year is
constant. The solar year is the time required for the earth to make one revolution around
the sun. The mean solar year is 365 days 5 h 48 min 48 s. The above definition of the
second is quite exact but is dependent on astronomical observations in order to establish
the standard. In October 1967 the Thirteenth General Conference on Weights and
Measures adopted a definition of the second as the duration of 9,192,631,770 periods
of the radiation corresponding to the transition between the two hyperfine levels of the
ground state of the cesium-133 atom. This standard can be readily duplicated
in standards laboratories throughout the world.
The generalized measurement system may be considered in three stages:
1. A detector-transducer stage, which detects the physical variable and performs either
a mechanical or an electrical transformation to convert the signal into a more usable
form. In the general sense, a transducer is a device that transforms one physical effect
into another. In most cases, however, the physical variable is transformed into an
electric signal because this is the form of signal that is most easily measured. The signal
may be in digital or analog form. Digital signals offer the advantage of easy storage in
memory devices, or manipulations with computers.
2. Some intermediate stage, which modifies the direct signal by amplification, filtering,
or other means so that a desirable output is available.
3. A final or terminating stage, which acts to indicate, record, or control the variable
being measured. The output may also be digital or analog. As an example of a
measurement system, consider the measurement of a low voltage signal at a low
frequency. The detector in this case may be just two wires and possibly a resistance
arrangement, which are attached to appropriate terminals. Since we want to indicate or
record the voltage, it may be necessary to perform some amplification. The
amplification stage is then stage 2, designated above. The final stage of the
measurement system may be either a voltmeter or a recorder that operates in the range
of the output voltage of the amplifier.
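The three stages of the low-voltage example can be mimicked as a chain of functions; the sensitivity and gain below are assumed purely for illustration:

```python
# Hypothetical three-stage chain for a low-voltage measurement
def detector_transducer(signal_mv):
    """Stage 1: wires/resistance arrangement pick up a small voltage (mV -> V)."""
    return signal_mv / 1000.0

def amplifier(volts, gain=100.0):
    """Stage 2: amplification so a desirable output is available."""
    return gain * volts

def indicator(volts):
    """Stage 3: voltmeter/recorder operating in the amplifier's output range."""
    return f"{volts:.2f} V"

# A 25 mV input signal passes through all three stages:
print(indicator(amplifier(detector_transducer(25.0))))  # "2.50 V"
```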
Consider the simple bourdon-tube pressure gage shown in Fig.1. This gage offers a
mechanical example of the generalized measurement system. In this case the bourdon
tube is the detector-transducer stage because it converts the pressure signal into a
mechanical displacement of the tube. The intermediate stage consists of the gearing
arrangement, which amplifies the displacement of the end of the tube so that a relatively
small displacement at that point produces as much as three- quarters of a revolution of
the center gear. The final indicator stage consists of the pointer and the dial
arrangement, which, when calibrated with known pressure inputs, gives an indication
of the pressure signal impressed on the bourdon tube. A schematic diagram of the
generalized measurement system is shown in Fig. 1.
We have already discussed the meaning of frequency response and observed that in
order for a system to have good response, it must treat all frequencies the same within
the range of application so that the ratio of output-to-input amplitude remains the same
over the frequency range desired. We say that the system has linear frequency response
if it follows this behavior.
Amplitude response pertains to the ability of the system to react in a linear way to
various input amplitudes. In order for the system to have linear amplitude response, the
ratio of output-to-input amplitude should remain constant over some specified range of
input amplitudes. When this linear range is exceeded, the system is said to be
overdriven, as in the case of a voltage amplifier where too high an input voltage is used.
Overdriving may occur with both analog and digital systems.
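Overdriving can be illustrated with a simple saturating-amplifier model (the gain and limits are assumed):

```python
def amplifier(v_in, gain=10.0, v_limit=12.0):
    """Linear gain until the output saturates at +/- v_limit (overdriven)."""
    v_out = gain * v_in
    return max(-v_limit, min(v_limit, v_out))

# Within the linear amplitude range the output/input ratio is constant:
print(amplifier(0.5) / 0.5)   # 10.0
# Overdriven: the ratio falls, i.e. the amplitude response is no longer linear
print(amplifier(2.0) / 2.0)   # 6.0
```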
Some form of analysis must be performed on all experimental data. The analysis may
be a simple verbal appraisal of the test results, or it may take the form of a complex
theoretical analysis of the errors involved in the experiment and matching of the data
with fundamental physical principles. Even new principles may be developed in order
to explain some unusual phenomenon. Our discussion in this chapter will consider the
analysis of data to determine errors, precision, and general validity of experimental
measurements. The correspondence of the measurements with physical principles is
another matter, quite beyond the scope of our discussion. Some methods of graphical
data presentation will also be discussed. The interested reader should consult the
monograph by Wilson [4] for many interesting observations concerning correspondence
of physical theory and experiment.
The experimentalist should always know the validity of data. The automobile test
engineer must know the accuracy of the speedometer and gas gage in order to express
the fuel-economy performance with confidence. A nuclear engineer must know the
accuracy and precision of many instruments just to make some simple radioactivity
measurements with confidence. In order to specify the performance of an amplifier, an
electrical engineer must know the accuracy with which the appropriate measurements
of voltage, distortion, and so forth, have been conducted. Many considerations enter
into a final determination of the validity of the results of experimental data, and we
wish to present some of these considerations in this chapter. Errors will creep into all
experiments regardless of the care exerted. Some of these errors are of a random nature,
and some will be due to gross blunders on the part of the experimenter. Bad data due to
obvious blunders may be discarded immediately. But what of the data points that just
“look” bad? We cannot throw out data because they
do not conform with our hopes and expectations unless we see something obviously
wrong.
Types of Errors
At this point we mention some types of errors that may cause uncertainty in an
experimental measurement. First, there can always be those gross blunders in apparatus
or instrument construction which may invalidate the data. Hopefully, the careful
experimenter will be able to eliminate most of these errors. Second, there may be certain
fixed errors which will cause repeated readings to be in error by roughly the same
amount but for some unknown reason. These fixed errors are sometimes called
systematic errors, or bias errors. Third, there are the random errors, which may be
caused by personal fluctuations, random electronic fluctuations in the apparatus or
instruments, various influences of friction, and so forth. These random errors usually
follow a certain statistical distribution, but not always. In many instances it is very
difficult to distinguish between fixed errors and random errors. The experimentalist may
sometimes use theoretical methods.
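The distinction between fixed and random errors can be illustrated by simulating readings that carry both a bias and random scatter (all values assumed):

```python
import random

random.seed(1)
TRUE_VALUE = 100.0
BIAS = 0.5        # fixed (systematic) error, identical in every reading
readings = [TRUE_VALUE + BIAS + random.gauss(0.0, 0.2) for _ in range(1000)]

mean = sum(readings) / len(readings)
# Averaging suppresses the random error but leaves the bias untouched:
print(round(mean - TRUE_VALUE, 2))
```

The residual offset that survives averaging is the systematic error; only calibration against a standard, not repetition, can remove it.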
Our discussions in this chapter have considered a variety of topics: statistical analysis,
uncertainty analysis, curve plotting, and least squares, among others. With these tools
the reader is equipped to handle a variety of circumstances that may occur in
experimental investigations. As a summary to this chapter let us now give an
approximate outline of the manner in which one would go about analyzing a set of
experimental data:
1. Examine the data for consistency.
No matter how hard one tries, there will always be some data points that appear to be
grossly in error. If we add heat to a container of water, the temperature must rise, and
so if a particular data point indicates a drop in temperature for a heat input, that point
might be eliminated. In other words, the data should follow commonsense consistency,
and points that do not appear proper should be eliminated. If very many data points fall
in the category of “inconsistent,” perhaps the entire experimental procedure should be
investigated for gross mistakes or miscalculation.
2. Perform a statistical analysis of data where appropriate.
A statistical analysis is only appropriate when measurements are repeated several
times. If this is the
case, make estimates of such parameters as standard deviation, and so forth. In those
cases where the uncertainty of the data is to be prescribed by statistical analysis, a
calculation should be performed using the t-distribution. This may be used to determine
levels of confidence and levels of significance. The number of measurements to be
performed may be determined for different levels of confidence.
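Step 2's calculation can be sketched with the standard library; the t-value below is taken from tables for n − 1 = 4 degrees of freedom at 95% confidence, and the readings are hypothetical:

```python
import math
import statistics

readings = [9.81, 9.79, 9.84, 9.80, 9.82]   # hypothetical repeated measurements
n = len(readings)
mean = statistics.mean(readings)
s = statistics.stdev(readings)               # sample standard deviation
t_95 = 2.776                                 # t-table value, 4 dof, 95% confidence
half_width = t_95 * s / math.sqrt(n)
print(f"{mean:.3f} +/- {half_width:.3f} (95% confidence)")  # 9.812 +/- 0.024
```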
3. Estimate the uncertainties in the results.
We have discussed uncertainties at length. Hopefully, these calculations will have been
performed in advance, and the investigator will already know the influence of different
variables by the time the final results are obtained.
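For step 3, the common root-sum-square (Kline-McClintock) propagation can be sketched; the measured quantities and their uncertainties below are hypothetical:

```python
import math

# Uncertainty in P = V * I by root-sum-square propagation
V, w_V = 100.0, 2.0     # volts and its uncertainty
I, w_I = 10.0, 0.2      # amperes and its uncertainty

P = V * I
# Partial derivatives: dP/dV = I, dP/dI = V
w_P = math.sqrt((I * w_V) ** 2 + (V * w_I) ** 2)
print(P, round(w_P, 2))   # 1000.0 28.28
```

Performing this calculation in advance shows which variable dominates the overall uncertainty, which is exactly the influence the investigator should know before the final results are obtained.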
4. Anticipate the results from theory.
Before trying to obtain correlations of the experimental data, the investigator should
carefully review the theory appropriate to the subject and try to glean some information
that will indicate the trends the results may take. Important dimensionless groups,
pertinent functional relations, and other information may lead to a fruitful interpretation
of the data. This step is particularly important in determining the graphical form(s) to
be selected for presentation of data.
5. Correlate the data.
The word “correlate” is subject to misinterpretation. In the context here we mean that
the experimental investigator should make sense of the data in terms of physical theories
or on the basis of previous experimental work in the field. Certainly, the results of the
experiments should be analyzed to show how they conform to or differ from previous
investigations or standards that may be employed for such measurements.
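A least-squares straight-line fit, mentioned among the tools above, can be written in a few lines (the data points are illustrative):

```python
def least_squares_line(xs, ys):
    """Slope m and intercept b minimizing the sum of squared residuals."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

m, b = least_squares_line([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.8])
print(round(m, 2), round(b, 2))  # 1.94 0.15
```

Comparing the fitted slope and intercept with the values predicted by theory (step 4) is one concrete way of correlating the data.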