Unit 1
Measurement
Measurement is the act, or the result, of a quantitative comparison between a
predetermined standard and an unknown magnitude.
Range
It represents the region between the lowest and the highest values of the quantity that an instrument can measure.
Scale sensitivity
It is defined as the ratio of a change in scale reading to the corresponding change in the quantity being measured. It denotes the smallest change in the measured variable to which the instrument responds.
True or actual value
It is the actual magnitude of a signal input to a measuring system which can only
be approached and never evaluated.
Accuracy
It is defined as the closeness with which the reading approaches an accepted
standard value or true value.
Precision
It is the degree of reproducibility among several independent measurements of
the same true value under specified conditions. It is usually expressed in terms of
deviation in measurement.
Repeatability
It is defined as the closeness of agreement among a number of consecutive
measurements of the output for the same value of input under the same operating
conditions. It may be specified in terms of units for a given period of time.
Reliability
It is the ability of a system to perform and maintain its function in routine
circumstances. It also refers to the consistency of a set of measurements or of a measuring instrument, and is often used to describe a test.
Systematic Errors
A constant uniform deviation of the operation of an instrument is known as
systematic error. Instrumental errors, environmental errors and observational errors are examples of systematic errors.
Random Errors
Some errors remain even after systematic and instrument errors have been reduced or at
least accounted for. The causes of such errors are unknown and hence the errors are
called random errors.
Calibration
Calibration is the process of determining and adjusting an instrument's accuracy
to make sure it is within the manufacturer's specifications.
GENERAL CONCEPT
Introduction to Metrology
The word metrology is derived from two Greek words: 'metro', meaning measurement, and 'logy', meaning science. Metrology is the science of precision measurement. For the engineer, it is the science of measurement of lengths and angles and of all related quantities such as width, depth, diameter and straightness, with high accuracy. Metrology demands a thorough knowledge of certain basic mathematical and physical principles. The development of industry largely depends on engineering metrology. Metrology is concerned with the establishment, reproduction,
conservation and transfer of units of measurements and their standards. Irrespective of
the branch of engineering, all engineers should know about various instruments and
techniques.
Introduction to Measurement
Measurement is defined as the process of numerical evaluation of a dimension or
the process of comparison with standard measuring instruments. The elements of
measuring system include the instrumentation, calibration standards, environmental
influence, human operator limitations and features of the work-piece. The basic aim of
measurement in industries is to check whether a component has been manufactured to
the requirement of a specification or not.
Types of Metrology
Legal Metrology
'Legal metrology' is that part of metrology which treats units of measurements,
methods of measurements and the measuring instruments, in relation to the technical and
legal requirements. The activities of the service of 'Legal Metrology' are:
(i)
(ii)
(iii)
Dynamic Metrology
'Dynamic metrology' is the technique of measuring small variations of a
continuous nature. The technique has proved very valuable, and a record of continuous
measurement, over a surface, for instance, has obvious advantages over individual
measurements of an isolated character.
Deterministic metrology
Deterministic metrology is a new philosophy in which part measurement is
replaced by process measurement. The new techniques such as 3D error compensation
by CNC (Computer Numerical Control) systems and expert systems are applied, leading
to fully adaptive control. This technology is used for very high precision manufacturing.
The importance of measurement in research (by which accurate and reliable information can be obtained) was emphasized by Galileo and Goethe. Measurement is essential for solving almost all technical problems in the field of engineering in general, and in production engineering and experimental design in particular. The design engineer should not only check his design from the point of view of strength or economical production, but should also keep in mind how the dimensions specified can be checked or measured.
Methods of Measurement
4. Comparative method
5. Transposition method
6. Coincidence method
7. Deflection method
8. Complementary method
9. Contact method
5. Transposition method:
It is a method of measurement by direct comparison in which the value of the
quantity measured is first balanced by an initial known value A of the same quantity, and
then the value of the quantity measured is put in place of this known value and is
balanced again by another known value B. If the position of the element indicating
equilibrium is the same in both cases, the value of the quantity to be measured is √(A·B). For example, the determination of a mass by means of a balance and known weights, using the Gauss method of double weighing.
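As a brief sketch of why the two transposed weighings give the geometric mean: on a balance with unequal arm lengths, X·l1 = A·l2 for the first weighing and B·l1 = X·l2 for the second, so X² = A·B. The short example below checks this numerically; the arm lengths, the unknown mass and the function name are invented purely for illustration.

    import math

    def gauss_double_weighing(a, b):
        """Return the unknown mass from the two balancing known weights A and B."""
        return math.sqrt(a * b)

    # Assumed example: a balance with unequal arms l1 = 100 mm and l2 = 102 mm,
    # and an unknown mass X = 50 g.
    l1, l2, x_true = 100.0, 102.0, 50.0
    a = x_true * l1 / l2   # known weights balancing X when X sits on the l1 arm
    b = x_true * l2 / l1   # known weights balancing X after transposing it to the l2 arm
    print(gauss_double_weighing(a, b))   # ~50.0, independent of the arm ratio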
6. Coincidence method:
It is a differential method of measurement in which a very small difference
between the value of the quantity to be measured and the reference is determined by the
observation of the coincidence of certain lines or signals. For example, measurement with a vernier calliper or a micrometer.
7. Deflection method:
In this method the value of the quantity to be measured is directly indicated by a
deflection of a pointer on a calibrated scale.
8. Complementary method:
In this method the value of the quantity to be measured is combined with a
known value of the same quantity. The combination is so adjusted that the sum of these
two values is equal to a predetermined comparison value. For example, determination of
the volume of a solid by liquid displacement.
9. Method of measurement by substitution:
It is a method of direct comparison in which the value of a quantity to be
measured is replaced by a known value of the same quantity, so selected that the effects
produced in the indicating device by these two values are the same.
10. Method of null measurement:
It is a method of differential measurement. In this method the difference between
the value of the quantity to be measured and the known value of the same quantity with
which it is compared is brought to zero.
GENERALIZED MEASUREMENT SYSTEM
Standards
The standard deviation (σ) is used as an index of precision. The less the scattering, the more precise the instrument. Thus, the lower the value of σ, the more precise the instrument.
Accuracy
Accuracy is the degree to which the measured value of the quality characteristic
agrees with the true value. The difference between the true value and the measured value
is known as error of measurement. It is practically difficult to measure exactly the true
value and therefore a set of observations is made whose mean value is taken as the true
value of the quality measured.
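As a minimal numerical sketch of these two ideas (the repeated readings and the true value below are invented for illustration): the standard deviation of a set of readings indicates precision, while the deviation of their mean from the true value indicates accuracy.

    import statistics

    # Assumed repeated readings of a dimension whose true value is 25.000 mm
    readings = [25.012, 25.011, 25.013, 25.010, 25.012]
    true_value = 25.000

    mean = statistics.mean(readings)
    sigma = statistics.stdev(readings)   # scatter about the mean: lower sigma = more precise
    error = mean - true_value            # error of measurement: closer to zero = more accurate
    print(f"mean = {mean:.4f} mm, sigma = {sigma:.4f} mm, error = {error:.4f} mm")
    # This set is precise (small sigma) but not accurate (the mean sits about 0.012 mm high).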
Distinction between Precision and Accuracy
Accuracy is very often confused with precision, though the two are quite different. The distinction between precision and accuracy will become clear from the following
example. Several measurements are made on a component by different types of
instruments (A, B and C respectively) and the results are plotted. In any set of
measurements, the individual measurements are scattered about the mean; the closeness of the measurements to one another indicates the precision, while the closeness of their mean to the true value indicates the accuracy.
To achieve the required accuracy, the five basic elements of the measuring system are analyzed and steps taken to eliminate the errors arising from them. This analysis of the five basic metrology elements can be composed into the acronym SWIPE, for convenient reference, where:
S - STANDARD
W - WORKPIECE
I - INSTRUMENT
P - PERSON
E - ENVIRONMENT
SENSITIVITY
Sensitivity may be defined as the rate of displacement of the indicating device of
an instrument, with respect to the measured quantity. In other words, sensitivity of an
instrument is the ratio of the scale spacing to the scale division value. For example, if on
a dial indicator, the scale spacing is 1.0 mm and the scale division value is 0.01 mm, then
sensitivity is 100. It is also called the amplification factor or gearing ratio. If we now consider the sensitivity over the full range of instrument reading with respect to the measured quantity, the sensitivity at any value of y is dx/dy, where dx and dy are small increments of the scale reading x and the measured quantity y; that is, the sensitivity is the slope of the instrument characteristic curve at that value of y.
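A small sketch of the two views of sensitivity just described; the dial-indicator figures repeat the example above, while the two calibration points are assumed purely for illustration.

    # Sensitivity as scale spacing / scale division value (dial indicator example above)
    scale_spacing_mm = 1.0          # distance between graduation lines
    scale_division_value_mm = 0.01  # change in measured quantity per division
    print(scale_spacing_mm / scale_division_value_mm)   # 100, the amplification factor

    # Sensitivity as the local slope dx/dy of the instrument characteristic,
    # estimated here from two assumed neighbouring calibration points.
    y1, x1 = 2.00, 20.0   # measured quantity (mm), scale displacement (mm)
    y2, x2 = 2.01, 21.1
    print((x2 - x1) / (y2 - y1))    # local sensitivity = slope of the curve near y1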
The sensitivity may be constant or variable along the scale. In the first case we get linear transmission and in the second non-linear transmission. Sensitivity refers to the ability of a measuring device to detect small differences in the quantity being measured.
Readability
Readability refers to the ease with which the readings of a measuring instrument can be read. It is the susceptibility of a measuring device to have its indications converted into a meaningful number. Fine and widely spaced graduation lines ordinarily
improve the readability. If the graduation lines are very finely spaced, the scale will be
more readable by using the microscope; however, with the naked eye the readability will
be poor. To make micrometers more readable they are provided with vernier scale. It can
also be improved by using magnifying devices.
Calibration
The calibration of any measuring instrument is necessary to measure the quantity
in terms of standard unit. It is the process of framing the scale of the instrument by
applying some standardized signals. Calibration is a pre-measurement process, generally
carried out by manufacturers. It is carried out by making adjustments such that the read
out device produces zero output for zero measured input. Similarly, it should display an
output equivalent to the known measured input near the full scale input value. The
accuracy of the instrument depends upon the calibration. Constant use of instruments
affects their accuracy. If the accuracy is to be maintained, the instruments must be
checked and recalibrated if necessary. The schedule of such calibration depends upon the
severity of use, environmental conditions, accuracy of measurement required etc. As far
as possible calibration should be performed under environmental conditions which are
very close to the conditions under which the actual measurements are carried out. If the
output of a measuring system is linear and repeatable, it can be easily calibrated.
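A minimal sketch of this zero and full-scale adjustment for a linear, repeatable system; the raw readings, the 10-unit range and the helper function are assumptions made for illustration, not a prescribed procedure.

    def make_two_point_calibration(raw_zero, raw_full, full_scale_input):
        """Build a correction for a linear instrument from its readings at zero
        input and at a known full-scale input."""
        gain = full_scale_input / (raw_full - raw_zero)
        return lambda raw: (raw - raw_zero) * gain

    # Assumed example: the readout shows 0.03 at zero input and 9.87 when a known
    # full-scale input of 10.00 units is applied.
    calibrate = make_two_point_calibration(raw_zero=0.03, raw_full=9.87, full_scale_input=10.00)
    print(calibrate(0.03))   # 0.0  -> zero output for zero measured input
    print(calibrate(9.87))   # 10.0 -> correct output near the full-scale input
    print(calibrate(5.00))   # intermediate readings corrected on the same line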
Repeatability
It is the ability of the measuring instrument to repeat the same results when measurements of the same quantity are carried out by the same observer, with the same instrument, under the same conditions, without any change in location, without change in the method of measurement, and within short intervals of time. It may be expressed quantitatively in terms of the dispersion of the results.
Reproducibility
It is the closeness of agreement between the results of measurements of the same quantity when the individual measurements are carried out under changed conditions, e.g. by different observers, with different instruments, at different locations, or over long intervals of time.
Errors in Measurement
It is never possible to measure the true value of a dimension; there is always some
error. The error in measurement is the difference between the measured value and the
true value of the measured dimension.
Error in measurement = Measured value - True value
The error in measurement may be expressed or evaluated either as an absolute
error or as a relative error.
Absolute Error
True absolute error:
It is the algebraic difference between the result of measurement and the
conventional true value of the quantity measured.
Apparent absolute error:
If a series of measurements is made, then the algebraic difference between one of the results of measurement and the arithmetical mean is known as the apparent absolute error.
Relative Error:
It is the quotient of the absolute error and the value of comparison used for the calculation of that absolute error. This value of comparison may be the true value, the conventional true value or the arithmetic mean of a series of measurements. The accuracy of measurement, and hence the error, depends upon many factors, such as:
- calibration standard
- workpiece
- instrument
- person
- environment, etc.
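The error definitions above can be illustrated numerically; the series of readings and the conventional true value below are assumed purely for illustration.

    import statistics

    measurements = [9.98, 10.02, 10.01, 9.99, 10.03]   # assumed series of readings
    conventional_true_value = 10.00
    mean = statistics.mean(measurements)

    for m in measurements:
        true_abs_error = m - conventional_true_value   # result minus conventional true value
        apparent_abs_error = m - mean                  # result minus arithmetic mean of the series
        relative_error = true_abs_error / conventional_true_value
        print(f"{m:.2f}: true abs {true_abs_error:+.3f}, "
              f"apparent abs {apparent_abs_error:+.3f}, relative {relative_error:+.4f}")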
1.7.2 Types of Errors
1. Systematic Error
These errors include calibration errors, errors due to variation in atmospheric conditions, variation in contact pressure, etc. If properly analyzed, these errors can be determined and reduced or even eliminated; hence they are also called controllable errors. All other systematic errors can be controlled in magnitude and sense except personal error. These errors result from an irregular procedure that is consistent in action. They are repetitive in nature and are of constant and similar form.
2. Random Error
These errors are caused by variation in the position of the setting standard and the workpiece, by displacement of the lever joints of instruments, and by backlash and friction. The specific cause, magnitude and sense of these errors cannot be determined from a knowledge of the measuring system or the conditions of measurement. These errors are non-consistent and hence are called random errors.
3. Environmental Error
These errors are caused by the effect of the surrounding temperature, pressure and humidity on the measuring instrument. External factors like nuclear radiation, vibrations and magnetic fields also lead to errors. Temperature plays an important role where high precision is required. For example, while using slip gauges, the gauges may acquire human body temperature through handling, whereas the work is at 20 °C. A 300 mm length can then go into error by about 5 microns, which is quite a considerable error. To avoid errors of this kind, all metrology laboratories and standards rooms worldwide are maintained at 20 °C.
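The slip gauge example can be checked with the usual linear expansion relation ΔL = α·L·ΔT. The sketch below assumes a steel gauge with α ≈ 11.5 × 10⁻⁶ per °C warmed about 1.5 °C above the 20 °C reference by brief handling; a larger temperature rise increases the error proportionally.

    # Linear thermal expansion: delta_L = alpha * L * delta_T
    alpha_steel = 11.5e-6   # assumed coefficient of expansion for steel, per deg C
    length_mm = 300.0       # gauge length from the example above
    delta_t = 1.5           # assumed temperature rise above 20 deg C from handling

    error_mm = alpha_steel * length_mm * delta_t
    print(f"expansion error = {error_mm * 1000:.1f} microns")   # about 5 microns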
LIMITS AND FITS
In the early days of manufacture, mating parts were fitted together by trial and error until the required type of fit was obtained. These methods demanded craftsmanship of a high
order and a great deal of very fine work was produced. Present day standards of quantity
production, interchangeability, and continuous assembly of many complex components,
could not exist under such a system, neither could many of the exacting design
requirements of modern machines be fulfilled without the knowledge that certain
dimensions can be reproduced with precision on any number of components. Modern
mechanical production engineering is based on a system of limits and fits which, while not in itself ensuring the necessary accuracy of manufacture, forms a schedule of specifications to which manufacturers can adhere.
In order that a system of limits and fits may be successful, following conditions
must be fulfilled:
1. The range of sizes covered by the system must be sufficient for most purposes.
2. It must be based on some standards; so that everybody understands alike and a
given dimension has the same meaning at all places.
3. For any basic size it must be possible to select from a carefully designed range
of fit the most suitable one for a given application.
4. Each basic size of hole and shaft must have a range of tolerance values for
each of the different fits.
5. The system must provide for both unilateral and bilateral methods of applying
the tolerance.
6. It must be possible for a manufacturer to use the system to apply either a hole-based or a shaft-based system as his manufacturing requirements may need.
7. The system should cover the whole range of work, from high-class tool and gauge work down to rough work where very wide limits of size are permissible.
Nominal Size and Basic Dimensions
Nominal size: A 'nominal size' is the size which is used for the purpose of general identification. Thus the nominal size of a hole and shaft assembly is 60 mm, even though the basic size of the hole may be 60 mm and the basic size of the shaft 59.5 mm.
Hole basis system:
'Hole basis system' is one in which the limits on the hole are kept constant and the variations necessary to obtain the classes of fit are arranged by varying those on the shaft.
Shaft basis system:
'Shaft basis system' is one in which the limits on the shaft are kept constant
and the variations necessary to obtain the classes of fit are arranged by varying the
limits on the holes. In present-day industrial practice the hole basis system is used because a great many holes are produced by standard tooling, for example reamers, drills, etc., whose size is not adjustable. The shaft sizes are then more readily varied about the basic size by means of turning or grinding operations. Thus the hole basis system results in a considerable reduction in the number of reamers and other precision tools required, as compared with a shaft basis system, where the non-adjustable nature of reamers, drills, etc. means that a great variety of sizes of these tools would be required to produce the different classes of holes needed for one class of shaft in order to obtain the different fits.
Systems of Specifying Tolerances
The tolerance or the error permitted in manufacturing a particular dimension
may be allowed to vary either on one side of the basic size or on either side of the
basic size. Accordingly, two systems of specifying tolerances exist.
1. Unilateral system
2. Bilateral system.
In the unilateral system, tolerance is applied only in one direction.
Examples: 40.0 +0.04/+0.02 or 40.0 -0.02/-0.04.
In the bilateral system, the tolerance is applied on both sides of the basic size, for example 40.0 +0.02/-0.02.
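A short sketch of how such tolerance specifications translate into limits of size; the helper function and the printed values simply re-use the 40.0 mm examples above for illustration.

    def limits_of_size(basic_size, upper_deviation, lower_deviation):
        """Return (upper limit, lower limit, tolerance) for a toleranced dimension."""
        upper = basic_size + upper_deviation
        lower = basic_size + lower_deviation
        return upper, lower, upper - lower

    # Unilateral: both deviations lie on the same side of the basic size.
    print(limits_of_size(40.0, +0.04, +0.02))   # upper 40.04, lower 40.02, tolerance 0.02
    # Bilateral: the deviations lie on either side of the basic size.
    print(limits_of_size(40.0, +0.02, -0.02))   # upper 40.02, lower 39.98, tolerance 0.04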
Such a system of specifying tolerances and fits makes interchangeable manufacture possible, which has the following advantages:
1. The assembly of mating parts is easier, since any component picked up from its lot will assemble with any other mating part from another lot without additional fitting and machining.
2. It enhances the production rate.
3. The standardization of machine parts and manufacturing methods is facilitated.