1) Explain the generalized measurement system with its three stages.
• We know that some physical quantities such as length and mass can be directly measured using measuring instruments.
• However, the direct measurement of physical quantities such as temperature, force, and pressure is not possible. In such
situations, measurements can be performed using a transducer, wherein one form of energy/signal that is not directly
measurable is transformed into another easily measurable form.
• Calibration of the input and output values needs to be carried out to determine the output for all values of input.
• A measuring instrument essentially comprises three basic physical elements. Each of these elements is recognized by a
functional element.
• Each physical element in a measuring instrument consists of a component or a group of components that perform certain functions in the measurement process.
• Hence, the measurement system is described in a more generalized way. A generalized measurement system essentially consists of three stages:
1. Primary detector–transducer stage
2. Intermediate modifying stage
3. Output or terminating stage
PRIMARY DETECTOR–TRANSDUCER STAGE
• The main function of the primary detector–transducer stage is to sense the input signal and transform it into an analogous signal, which can be easily measured.
• The input signal is a physical quantity such as pressure, temperature, velocity, heat, or intensity of light. The device used for detecting the input signal is known as a transducer or sensor.
• The transducer converts the sensed input signal into a detectable signal, which may be electrical, mechanical, optical, thermal, etc. The generated signal is further modified in the second stage.
• The transducer should have the ability to detect only the input quantity to be measured and exclude all other signals.
• For example, if bellows are employed as transducers for pressure measurement, they should detect signals pertaining only to pressure; other irrelevant input signals or disturbances should not be sensed.
• However, in practice, the transducers used are seldom sensitive only to the signals of the quantity being measured.
INTERMEDIATE MODIFYING STAGE
• In the intermediate modifying stage of a measurement system, the transduced signal is modified and amplified appropriately with the help of conditioning and processing devices before passing it on to the output stage for display.
• Signal conditioning (by noise reduction and filtering) is performed to enhance the condition of the signal obtained in the first stage, in order to increase the signal-to-noise ratio.
• If required, the obtained signal is further processed by means of integration, differentiation, addition, subtraction, digitization, modulation, etc.
• It is important to remember here that, in order to obtain an output that is analogous to the input, the characteristics of the input signals should be transformed with true fidelity.
OUTPUT OR TERMINATING STAGE
• The output or terminating stage of a measurement system presents the value of the output that is analogous to the input value.
• The output value is provided by indicating or recording (for subsequent evaluation by human beings or a controller), or a combination of both.
• The indication may be provided by a scale and pointer, a digital display, or a cathode ray oscilloscope.
• Recording may be in the form of an ink trace on a paper chart or a computer printout. Other methods of recording include punched paper tapes, magnetic tapes, or video tapes. Alternatively, a camera could be used to photograph a cathode ray oscilloscope trace. Table 12.1 gives some examples of the three stages of a generalized measurement system.
• Thus, measurement of physical quantities such as pressure, force, and temperature, which cannot be measured directly, can be performed by an indirect method of measurement.
• This can be achieved by using a transduced signal to move the pointer on a scale or by obtaining a digital output.
2) Explain the general process of calibration.
Calibration is the process of configuring an instrument to provide accurate measurements by
comparing it with a standard. The steps involved are:
1. Preparation:
o Select the instrument to be calibrated and a standard reference with known accuracy.
o Ensure the environment is stable (e.g., temperature, humidity).
2. Comparison:
o Measure the output of the instrument while applying known input values from the standard.
o Record the readings and compare them against the reference standard.
3. Adjustment:
o Adjust the instrument to minimize any deviation from the standard values.
o This can involve mechanical adjustments, recalibrating software, or replacing components.
4. Verification:
o Re-test the instrument to confirm it now produces accurate readings.
5. Documentation:
o Record the calibration process, including input values, output values, and adjustments made.
o Maintain a calibration certificate for audit purposes.
Importance: Calibration ensures accuracy and reliability, reduces errors, and ensures compliance with
standards.
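As a minimal sketch of the comparison and adjustment steps, the snippet below fits a straight-line (gain/offset) correction from recorded readings against a standard and then verifies it. The standard and instrument values are hypothetical, and a simple linear error model is assumed.

```python
# Hypothetical calibration data: known inputs applied from the standard
# and the instrument's corresponding indicated readings.
standard_values  = [0.0, 25.0, 50.0, 75.0, 100.0]   # true values from the standard
instrument_reads = [1.2, 26.0, 50.9, 75.8, 100.7]   # values indicated by the instrument

n = len(standard_values)
mean_x = sum(instrument_reads) / n
mean_y = sum(standard_values) / n

# Least-squares straight-line fit: corrected = gain * reading + offset
gain = (sum((x - mean_x) * (y - mean_y)
            for x, y in zip(instrument_reads, standard_values))
        / sum((x - mean_x) ** 2 for x in instrument_reads))
offset = mean_y - gain * mean_x

def corrected(reading):
    """Apply the fitted calibration correction to a raw reading."""
    return gain * reading + offset

# Verification: residual deviation after correction should be near zero
for x, y in zip(instrument_reads, standard_values):
    print(f"reading {x:6.2f} -> corrected {corrected(x):6.2f} (standard {y:6.2f})")
```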
3) Explain various kinds of errors in measurement.
• Errors in measurement are broadly classified as static errors and dynamic errors.
• Static error of a measuring instrument is the numerical difference between the true value of a quantity and its value as obtained by measurement; i.e., repeated measurement of the same quantity gives different indications.
• Dynamic error is the difference between the true value of a quantity changing with time and the value indicated by the instrument.
• Static errors are categorized as gross errors (human errors), systematic errors, and random errors.
1. GROSS ERROR
• These errors are mainly due to human mistakes in reading or using instruments, or errors in recording observations. Errors may also occur due to incorrect adjustment of instruments and computational mistakes. These errors cannot be treated mathematically.
• The complete elimination of gross errors is not possible, but they can be minimized. Some of these errors are easily detected while others may be elusive.
2. SYSTEMATIC ERROR
• Systematic errors are biases in measurement which lead to a situation where the mean of many separate measurements differs significantly from the actual value of the measured attribute.
• Incorrect measuring technique: for example, one might make an incorrect scale reading because of parallax error.
• Bias of the experimenter: the experimenter might consistently read an instrument incorrectly, or might let knowledge of the expected value of a result influence the measurements.
3. RANDOM ERROR
• These errors are due to unknown causes, not determinable in the ordinary process of making measurements. Such errors are normally small and follow the laws of probability. Random errors can thus be treated mathematically.
• For example, suppose a voltage is being monitored by a voltmeter which is read at 15-minute intervals. Although the instrument operates under ideal environmental conditions and is accurately calibrated before measurement, it still gives readings that vary slightly over the period of observation. This variation cannot be corrected by any method of calibration or any other known method of control.
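Because random errors follow the laws of probability, repeated readings can be treated statistically: the mean is taken as the best estimate and the scatter quantifies the random error. A minimal sketch, using hypothetical repeated voltmeter readings:

```python
import statistics

# Hypothetical repeated readings of the same voltage at 15-minute intervals
readings = [100.2, 99.8, 100.1, 99.9, 100.3, 100.0, 99.7, 100.1]  # volts

best_estimate = statistics.mean(readings)      # arithmetic mean of the readings
spread = statistics.stdev(readings)            # sample standard deviation
std_error = spread / len(readings) ** 0.5      # standard error of the mean

print(f"best estimate = {best_estimate:.3f} V +/- {std_error:.3f} V")
```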
4) Define the Static and Dynamic characteristics of a measuring instrument.
• Some static characteristics are sensitivity, span, accuracy, resolution, threshold, tolerance, linearity, hysteresis, drift,
cross-sensitivity etc.
• The dynamic characteristics are Dynamic Error, Speed of Response, Fidelity, Lag.
• Accuracy: It is the degree of closeness of the readings from an instrument to the true value. Accuracy is always measured relative to the true or actual value.
• Precision: It is the degree of closeness of a reading to the previous readings. An instrument is said to be precise when there is a negligible difference between successive readings.
• Sensitivity: It is the ratio of the change in the output of an instrument to the corresponding change in the value of the quantity being measured. For example, if a 1 °C change in temperature moves a recorder pen by 0.5 mm, the sensitivity is 0.5 mm/°C.
• Linearity: It is the ability of an instrument to reproduce the input characteristics linearly (as a straight line). An instrument is said to be linear when equal increments in input produce equal increments in output over the specified range.
• Resolution: It is the smallest change in the input value that produces a perceptible change in the output of an instrument.
• Repeatability: It defines how consistently an instrument gives the same output when the same input is applied again and again under the same conditions.
• Range: The lowest and highest values of the quantity for which the instrument is designed to function. The span is equal to the maximum value minus the minimum value.
• Tolerance: It is the maximum allowable error in a measurement, specified in terms of certain values.
• Hysteresis: It is the phenomenon in which an instrument shows different output values for the same input during loading and unloading conditions.
• Dead Zone: It is the largest range of values of the measured variable to which the instrument does not respond.
• Drift: It is an undesired change in the output of a measured variable over a period of time that is unrelated to changes in the input, operating conditions, or load.
• Threshold: If the input to the instrument is gradually increased from zero, a minimum value of that input is required before the output can be detected. This minimum value of the input is defined as the threshold of the instrument.
5) Explain working and applications of the Pyrometer.
Working Principle of Pyrometer
• Pyrometers are temperature-measuring devices that determine an object's temperature from the electromagnetic radiation emitted by the object.
• They are available in different spectral ranges. Based on the spectral range, pyrometers are classified into 1-color pyrometers, 2-color pyrometers, and high-speed pyrometers.
• The basic principle of the pyrometer is that it measures the object's temperature by sensing the heat/radiation emitted from the object without making contact with it.
• It records the temperature level depending upon the intensity of radiation emitted. The pyrometer has two basic components, an optical system and a detector, which are used to measure the surface temperature of the object.
• When the surface temperature of an object is to be measured with the pyrometer, the optical system captures the energy emitted from the object.
• The radiation is then sent to the detector, which is very sensitive to the incoming radiation. The output of the detector indicates the temperature level of the object: the radiation level analysed by the detector is directly related to the object's temperature.
• Every object at a temperature above absolute zero (−273.15 °C) emits radiation.
• This emitted radiation is referred to as infrared, which lies beyond visible red light in the electromagnetic spectrum. The radiated energy is used for detecting the temperature of the object and is converted into electrical signals with the help of a detector.
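As a rough numeric illustration of how a radiation signal maps to temperature, the sketch below inverts the Stefan–Boltzmann law, E = εσT⁴, for an assumed emissivity. Real pyrometers rely on calibrated detector characteristics over a specific spectral band, so the values here are only illustrative.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def temperature_from_radiation(emitted_power_w_m2, emissivity=1.0):
    """Estimate surface temperature (kelvin) from total radiated power.
    Inverts E = emissivity * sigma * T^4, assuming a grey-body target."""
    return (emitted_power_w_m2 / (emissivity * SIGMA)) ** 0.25

# Hypothetical detector-inferred radiated power of 46 kW/m^2
print(temperature_from_radiation(46_000))        # ~949 K for an ideal blackbody
print(temperature_from_radiation(46_000, 0.8))   # ~1003 K if emissivity is 0.8
```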
Applications
• To measure the temperature of moving objects or constant objects from a greater distance.
• In metallurgy industries
• In smelting industries
• Hot air balloons to measure the heat at the top of the balloon
• Steam boilers to measure steam temperature
• To measure the temperature of liquid metals and highly heated materials.
• To measure furnace temperature.
6) Explain the Tachometers with the neat and labelled diagram.
A Tachometer is a device used to measure the operating speed of an engine in revolutions per minute (RPM); it is useful for planes, cars, and other types of vehicles.
• These gauges come in analog and digital forms. A tachometer indicates the engine speed, which plays a vital role in determining the engine's power output. It helps measure the rotation speed of a shaft or disk, most often that of a machine.
• Speed is usually measured in rotations per minute (RPM) and sometimes in revolutions per second (RPS).
• We can use tachometers to view the RPMs on cars, boats, motorcycles, and other machines with engines. There are several types of tachometers, including mechanical, electronic, and magnetic.
• A digital tachometer helps measure and indicate the speed of a rotating object. It may use an optical encoder to determine the velocity of the rotating shaft, and is useful in automobiles, medical-instrument applications, and more (see the sketch below).
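A minimal sketch of how a digital tachometer derives RPM from encoder pulses counted over a fixed gate time; the pulses-per-revolution count and gate time below are hypothetical.

```python
def rpm_from_pulses(pulse_count, pulses_per_rev, gate_time_s):
    """Convert encoder pulses counted during a fixed gate time to shaft RPM."""
    revolutions = pulse_count / pulses_per_rev   # revolutions during the gate
    return revolutions / gate_time_s * 60.0      # revolutions per minute

# Hypothetical: 60-slot encoder disk, 1.0 s gate time, 3000 pulses counted
print(rpm_from_pulses(3000, 60, 1.0))            # -> 3000.0 RPM
```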
Explain the capacitive transducer with its principle of operation.
• The capacitive transducer is used for measuring displacement, pressure, and other physical quantities. It is a passive transducer, which means it requires external power for operation.
• The capacitive transducer works on the principle of variable capacitance. The capacitance of the transducer changes for several reasons, such as a change in the overlapping area of the plates, a change in the distance between the plates, or a change in the dielectric constant.
• The capacitive transducer contains two parallel metal plates separated by a dielectric medium, which may be air, a solid material, gas, or liquid. In a normal capacitor the distance between the plates is fixed, but in a capacitive transducer the distance between them varies.
• The capacitive transducer uses the electrical quantity of capacitance to convert mechanical movement into an electrical signal. The input quantity causes a change of capacitance, which is directly measured by the capacitive transducer.
• Capacitive transducers measure both static and dynamic changes. Displacement can be measured directly by connecting the measuring devices to the movable plate of the capacitor. The transducer works in both contacting and non-contacting modes.
Principle of Operation
• The capacitance between the plates of a capacitor is given by C = εA/d = ε₀εᵣA/d, where A is the overlapping area of the plates in m², d is the distance between the two plates in metres, ε is the permittivity of the medium in F/m, εᵣ is the relative permittivity, and ε₀ is the permittivity of free space.
• The change in capacitance occurs because of physical variables like displacement, force, pressure, etc. The capacitance of the transducer also changes with variation in its dielectric constant, which is usually caused by the measurement of liquid or gas level.
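A minimal numeric sketch of the parallel-plate relation C = ε₀εᵣA/d, showing how a small displacement (change in plate gap) changes the capacitance; the plate area and gap are hypothetical.

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def capacitance(area_m2, gap_m, eps_r=1.0):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d, in farads."""
    return EPS0 * eps_r * area_m2 / gap_m

# Hypothetical transducer: 4 cm^2 plates with a 1 mm air gap
c0 = capacitance(4e-4, 1.0e-3)
c1 = capacitance(4e-4, 0.9e-3)   # movable plate displaced 0.1 mm closer
print(f"C changes from {c0 * 1e12:.2f} pF to {c1 * 1e12:.2f} pF")
```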
11) Explain the Vernier Caliper with neat and labelled sketch.
Vernier caliper:
• A vernier caliper is a precision measuring instrument that can be used to measure the external and internal dimensions of
an object.
• It consists of a main scale and a vernier scale.
• The main scale is graduated in millimetres or inches, and the vernier scale is graduated in smaller units.
• The difference between one main scale division and one vernier scale division is called the vernier constant (the least count of the instrument).
• To use a vernier caliper, the object to be measured is placed between the jaws of the caliper.
• The main scale is then read to the nearest millimeter or inch, and the vernier scale is read to the nearest vernier constant.
• The two readings are then added together to get the final measurement (see the sketch below).
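A small worked sketch of the reading rule (final reading = main-scale reading + coinciding vernier division × least count), with hypothetical scale values:

```python
def vernier_reading(main_scale_mm, coinciding_division, least_count_mm=0.02):
    """Combine the main-scale and vernier-scale readings into one measurement."""
    return main_scale_mm + coinciding_division * least_count_mm

# Hypothetical: main scale shows 12 mm, the 4th vernier line coincides,
# and the least count is 0.02 mm (a common metric vernier caliper)
print(vernier_reading(12.0, 4))   # -> 12.08 mm
```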
Explain the sine bar and its use for angle measurement.
Sine bar:
• A sine bar is a precision measuring tool that is used to measure angles and to set workpieces at a given angle.
• It consists of a hardened, precision ground bar with two precision ground cylinders fixed at the ends.
• The distance between the centers of the cylinders is precisely controlled, and the top of the bar is parallel to a
line through the centers of the two rollers.
• The distance between the two rollers is chosen to be a whole number (for ease of later calculations) and forms the hypotenuse of a triangle when in use.
• Sine bars are used in conjunction with slip gauge blocks to measure angles.
• The sine bar is placed on a surface plate, and the workpiece is placed on top of the sine bar. Slip gauge blocks
are then placed between the workpiece and the sine bar until the workpiece is at the desired angle.
• The height of the slip gauge blocks is then measured, and the angle can be calculated using the sine function.
• Sine bars are also used to set workpieces at a given angle.
• For example, a sine bar can be used to set a workpiece at a 45-degree angle to a surface plate.
• This is done by placing the sine bar on the surface plate and then placing the workpiece on top of the sine bar.
Slip gauge blocks are then placed between the workpiece and the sine bar until the workpiece is at the desired
angle.
• Sine bars are an essential tool for precision machining and measurement.
• They are used in a wide variety of applications, including:
• Angle measurement
• Workpiece positioning
• Tool setting
• Inspection
• Sine bars are available in a variety of sizes and accuracies.
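The calculation rests on sin θ = h/L, where h is the slip-gauge stack height and L is the centre distance between the rollers. A minimal sketch with a hypothetical 200 mm sine bar:

```python
import math

def sine_bar_angle(gauge_height_mm, centre_distance_mm):
    """Angle set by a sine bar: theta = asin(h / L), returned in degrees."""
    return math.degrees(math.asin(gauge_height_mm / centre_distance_mm))

def gauge_height_for(angle_deg, centre_distance_mm):
    """Slip-gauge height needed to set a given angle: h = L * sin(theta)."""
    return centre_distance_mm * math.sin(math.radians(angle_deg))

# Hypothetical 200 mm sine bar
print(sine_bar_angle(50.0, 200.0))     # ~14.48 degrees from a 50 mm gauge stack
print(gauge_height_for(45.0, 200.0))   # ~141.42 mm of slip gauges for 45 degrees
```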
15) Explain Limits and Fits.
• Limits and fits are a system of standardized tolerances used in engineering and manufacturing to ensure
proper mating and interchangeability of parts.
• The concept of limits and fits allows for the controlled variation in dimensions between mating parts to
achieve desired functionality and performance.
• In this system, each part is assigned a tolerance zone, which defines the acceptable range of dimensions
for that part.
• The tolerance zone is bounded by two limits, the upper limit and the lower limit, which specify the largest and smallest allowable dimensions for the part. (For an external feature such as a shaft, the upper limit corresponds to the maximum material condition; for an internal feature such as a hole, it is the lower limit that does.)
• The fit refers to the degree of tightness or clearance between two mating parts.
• It determines how closely the dimensions of the two parts need to match in order to achieve the desired functionality. Depending on the limits, the fit may be a clearance fit, a transition fit, or an interference fit (see the sketch below).
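A minimal sketch of how the fit follows from the limit dimensions, using hypothetical hole and shaft limits:

```python
def fit_type(hole_low, hole_high, shaft_low, shaft_high):
    """Classify the fit between a hole and a shaft from their limits (mm)."""
    max_clearance = round(hole_high - shaft_low, 3)
    min_clearance = round(hole_low - shaft_high, 3)
    if min_clearance >= 0:
        kind = "clearance fit"       # shaft is always smaller than the hole
    elif max_clearance <= 0:
        kind = "interference fit"    # shaft is always larger than the hole
    else:
        kind = "transition fit"      # may be either, depending on actual sizes
    return kind, min_clearance, max_clearance

# Hypothetical 25 mm pairing: hole 25.000-25.021 mm, shaft 24.980-24.993 mm
print(fit_type(25.000, 25.021, 24.980, 24.993))
# -> ('clearance fit', 0.007, 0.041)
```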
16) Explain Process planning sheet with the help of example.
A process planning sheet is a document used in manufacturing and production environments to outline the
steps, operations, and resources required to produce a specific product or part. It serves as a comprehensive
guide for the manufacturing process, helping to ensure consistency, efficiency, and quality in production.
• The content and format of a process planning sheet may vary depending on the specific needs and practices of
an organization. However, it generally includes the following information:
• Product Information • Bill of Materials (BOM) • Routing • Operation Details • Work Instructions • Tooling and
Equipment • Inspection and Quality Control • Workforce and Labor • Production Timeline • Notes and
Additional Information
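As an illustration, a hypothetical (abridged) process planning sheet for a simple stepped shaft might look like this; the part, machines, and times are invented for the example:

Part: Stepped shaft SS-01 | Material: Mild steel bar, Ø30 × 120 mm | Quantity: 50

Op. No. | Operation              | Machine          | Tooling                     | Std. time (min)
10      | Face and centre ends   | Centre lathe     | Facing tool, centre drill   | 5
20      | Turn Ø25 over 60 mm    | Centre lathe     | Single-point turning tool   | 8
30      | Drill Ø10 through hole | Drilling machine | Ø10 twist drill             | 4
40      | Final inspection       | Inspection bench | Vernier caliper, plug gauge | 3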
17) Explain the difference in the Hole Basis System and Shaft Basis system.
The shaft and hole basis system is a commonly used method in engineering and manufacturing to establish the tolerances
and fits between mating shafts and holes.
• It provides a systematic approach for designing and specifying the dimensions of the shaft and hole to achieve desired clearances or interferences.
• In the shaft basis system, the shaft is considered the primary reference, and the tolerances are applied to the shaft
dimensions.
• The hole is then manufactured to accommodate the shaft within specified tolerances.
• This system is suitable when the shaft's function is critical or requires a specific size or performance.
• In the hole basis system, the hole is considered the primary reference, and the tolerances are applied to the hole
dimensions.
• The shaft is then manufactured to fit within the specified hole tolerances.
• This system is commonly used when the hole's function is critical or requires a specific size or performance. In practice, the hole basis system is generally preferred, because holes are produced with standard fixed-size tools (drills, reamers) and different fits are obtained more economically by varying the shaft size.
18) Classify the limit gauges. Explain plug gauge.
Limit gauges are precision measuring instruments used to check the dimensions of a part against predetermined limits.
• They are commonly employed in quality control and inspection processes to verify whether a part is within acceptable tolerances.
• Limit gauges are classified according to the feature they check: plug gauges (for holes and bores), ring gauges (for external cylindrical surfaces), snap gauges (for external dimensions such as shaft diameters), and taper gauges (for tapers).
• Plug Gauges: Plug gauges are used to check the internal dimensions of holes or bores.
• They consist of a cylindrical body with a precisely machined diameter.
• The gauge is inserted into the hole; if it fits smoothly within the acceptable tolerance range, the hole is considered within specification.
• A double-ended plug gauge has a GO end made to the lower limit of the hole and a NO-GO end made to the upper limit: the hole is accepted if the GO end enters and the NO-GO end does not (see the sketch below).
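A minimal sketch of the GO/NO-GO decision a double-ended plug gauge embodies, with hypothetical hole limits:

```python
def check_hole(hole_diameter_mm, low_limit_mm, high_limit_mm):
    """Simulate a double-ended plug gauge check.
    The GO end is made to the low limit, the NO-GO end to the high limit."""
    go_enters = hole_diameter_mm >= low_limit_mm      # GO end must enter
    nogo_enters = hole_diameter_mm >= high_limit_mm   # NO-GO end must NOT enter
    return go_enters and not nogo_enters

# Hypothetical hole tolerance: 25.000 - 25.021 mm
for d in (24.995, 25.010, 25.030):
    print(d, "accept" if check_hole(d, 25.000, 25.021) else "reject")
```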
19) Classify various kinds of Comparators. Explain any one.
Comparators are broadly classified as mechanical, optical, electrical (electronic), and pneumatic comparators. The toolmaker's microscope, an optical measuring and comparison instrument, is explained below.
Toolmaker's Microscope — Working Principle:
• Placing the Object: The object to be measured or inspected is placed on the stage of the microscope.
• Placing the Object: The object to be measured or inspected is placed on the stage of the microscope.
• Adjusting the Focus: The operator adjusts the focus and fine-tunes the image clarity using the focus knobs.
• Magnification Selection: The appropriate objective lens is selected to achieve the desired level of magnification.
• Observation and Measurement: The operator observes the magnified image of the object through the eyepiece and
performs measurements using a reticle or graticule.
• Precision Alignment: The toolmaker's microscope allows for precise alignment and positioning of the object, enabling
accurate measurement of dimensions, angles, and other features.
Advantages:
• High Magnification: Toolmaker's microscopes offer high magnification capabilities, enabling precise measurement and
inspection of small objects and intricate details.
• Versatility: They can be used for measuring linear dimensions, angles, radii, and surface roughness of objects.
• Non-contact Measurement: Since the object is observed through the microscope without physical contact, there is
minimal risk of damage or distortion.
• Precise Measurement: The fine adjustment controls and reticle/graticule allow for accurate measurement and inspection
of small features.
• Illumination: The built-in illumination system provides focused and bright lighting, ensuring clear visibility of the object.
Limitations:
• Limited Depth of Field: Toolmaker's microscopes have a limited depth of field, making it challenging to focus on objects with varying heights or uneven surfaces.
• Skill-dependent: Precise measurements require skill and experience in aligning the object, focusing the microscope, and interpreting the reticle markings accurately.
• Restricted to Small Objects: The microscope's small stage size may limit the measurement of larger objects.
Applications
• Inspection of Precision Components
• Dimensional Measurement
• Thread Measurement
• Surface Finish Analysis
• Alignment and Positioning
• Tool Wear Analysis
• Microscopic Welding and Soldering
20) Explain the Autocollimator and its applications.
Autocollimators are optical instruments used for measuring small angular displacements and the angular alignment of optical
components.
• They find applications in various fields, including metrology, optics, and mechanical engineering.
• The light source emits a collimated beam towards the target surface.
• The collimated light reflects off the target and returns to the autocollimator.
• The returning light is focused by the objective lens and forms an image of the target.
• The eyepiece magnifies the image, and the user can observe it through the eyepiece.
• By analyzing the position or movement of the reflected image, angular displacements and misalignments can be measured.
Advantages:
1. Non-contact measurement of small angular displacements.
2. High sensitivity and accuracy when measuring small angles.
3. Relatively simple optical construction and operation.
Limitations:
1. Restricted Field of View: Autocollimators have a limited field of view, meaning they can only measure angular displacements within a specific range.
2. Sensitivity to Vibrations: Vibrations or movements in the environment can affect the accuracy of the measurements, requiring stable and controlled conditions.
3. Limited Range: Autocollimators may not be suitable for measuring extremely large angular displacements.
Applications:
1. Optical Alignment: Autocollimators are commonly used to align optical components such as mirrors, lenses, prisms, and telescopes.
2. Surface Flatness Measurement: They can be used to assess the flatness and parallelism of surfaces by measuring the angular deviation from a reference plane.
3. Machine Tool Alignment: Autocollimators aid in aligning and calibrating machine tools, ensuring accurate and precise machining operations.
4. Metrology and Quality Control: Autocollimators play a vital role in metrology laboratories for calibration, verification, and quality control of angular measurements.
5. Optical Testing: They are used for evaluating the quality and performance of optical systems, including reflectors, diffraction gratings, and wavefront measurement devices.
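A small sketch of the underlying geometry: tilting the reflector by an angle θ deviates the returned beam by 2θ, so the image shifts by d = 2fθ in the focal plane of an objective of focal length f, giving θ = d/(2f). The shift and focal length below are hypothetical.

```python
import math

def tilt_from_image_shift(shift_mm, focal_length_mm):
    """Reflector tilt in arc-seconds from the image displacement d = 2*f*theta."""
    theta_rad = shift_mm / (2.0 * focal_length_mm)   # small-angle approximation
    return math.degrees(theta_rad) * 3600            # radians -> arc-seconds

# Hypothetical: 0.01 mm image shift observed with a 500 mm focal-length objective
print(f"{tilt_from_image_shift(0.01, 500):.2f} arc-seconds")   # ~2.06 arc-seconds
```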
21) Explain the Optical profile projectors and list their applications.
Working Principle:
• Illumination: The object is placed on the stage and illuminated from underneath or through the side
with a bright light source.
• Projection: The profile projector projects a magnified image of the object onto a viewing screen.
• Measurement: The operator uses the calibrated reticle on the screen to measure the dimensions,
angles, and features of the projected image.
• Comparison: The measured dimensions are compared against predetermined specifications or
reference values for quality control and inspection purposes.
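Since the screen shows a magnified profile of the part, a true dimension is recovered by dividing the on-screen measurement by the lens magnification. A minimal sketch with hypothetical values:

```python
def true_dimension(screen_measurement_mm, magnification):
    """Recover the actual part dimension from the projected image size."""
    return screen_measurement_mm / magnification

# Hypothetical: a feature measures 47.5 mm on the screen with a 10x lens
print(true_dimension(47.5, 10))   # -> 4.75 mm actual size
```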
Advantages:
• High Magnification
• Non-contact Measurement
• Quick and Easy
• Versatility
• Visual Comparison
Limitations:
• Limited to 2D Measurements
• Subject to Operator Skill
• Magnification Distortion
Applications:
• Quality Control
• Tool and Die Making
• Automotive Industry
• Aerospace Industry
• Research and Development