
Measurement

Measurement is the process of determining the size, quantity, or degree of something, typically using specific instruments or standardized units. It involves comparing an object or phenomenon with a reference or known standard.
OR
Measurement refers to the act or process of quantifying an attribute or property, such as
length, mass, time, temperature, or capacity, using units of measurement to provide a
numerical value.
OR
In a broader sense, measurement can also be seen as a method of collecting data or
information that allows for comparison, analysis, or description of an object or event in
objective terms.

Calibration: Calibration is the process of comparing the measurements of an instrument or system to a known standard or reference to ensure its accuracy. It typically involves adjusting the instrument so that its readings match the true value provided by a standard. Calibration is essential for maintaining precision and consistency in measurements across various instruments.
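As a concrete illustration, here is a minimal Python sketch of a simple two-point calibration, in which a linear correction (gain and offset) is derived from instrument readings taken against two known reference values. All numbers below are hypothetical.

```python
# A minimal sketch of two-point calibration: the instrument is read
# against two known reference values, and a linear correction (gain
# and offset) is derived so that corrected readings match the standard.
# All numeric values are hypothetical.

def make_calibration(raw_low, raw_high, true_low, true_high):
    """Return a function that corrects raw readings to true values."""
    gain = (true_high - true_low) / (raw_high - raw_low)
    offset = true_low - gain * raw_low
    return lambda raw: gain * raw + offset

# Example: a thermometer reads 1.2 °C in an ice bath (true 0 °C) and
# 98.6 °C in boiling water (true 100 °C at standard pressure).
correct = make_calibration(raw_low=1.2, raw_high=98.6,
                           true_low=0.0, true_high=100.0)

print(round(correct(50.0), 2))  # corrected value of a raw 50.0 °C reading
```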

Error in measurement
Error in measurement refers to the difference between the measured value and the true
value of the quantity being measured. It can arise due to various factors, such as limitations
of the measuring device, environmental conditions, or human error. Errors in measurement
can occur in several ways, and they are generally categorized as systematic, random, or
gross errors. Below are some common sources of errors:

1. Instrumental Errors
Poor Calibration: If the measuring instrument is not properly calibrated, it may give
consistently incorrect results.
Wear and Tear: Over time, instruments may wear out or lose precision, affecting accuracy.
Parallax Error: This occurs when the position of the observer affects the reading of the
measurement (e.g., not viewing a scale directly from above).
Instrument Resolution: Limited resolution of the instrument, where it cannot detect small
changes in the measured quantity, leads to errors.

2. Environmental Errors
Temperature Variations: Instruments and measured objects can expand or contract with
temperature changes, affecting the readings.
Pressure and Humidity: Changes in atmospheric conditions, such as humidity and air
pressure, can affect certain measurements (e.g., in electrical resistance or chemical
reactions).
Magnetic or Electromagnetic Fields: External fields can interfere with electrical or magnetic
instruments, causing incorrect readings.

3. Observational Errors

Human Error: Inaccurate observation, like reading an analogue scale incorrectly or pressing
the wrong button on digital equipment, can lead to errors.
Reaction Time: In manual measurements, a slight delay in recording the measurement can
result in errors, especially in time-sensitive experiments.
Judgment Errors: In cases where subjective judgment is required (e.g., visual estimations),
inconsistencies in human perception can lead to errors.

4. Procedural Errors
Improper Use of Equipment: Misusing or mishandling equipment, such as using the wrong
range or improper setup, can introduce errors.
Incorrect Sampling: Errors can occur if the sample being measured does not represent the
entire system (e.g., taking an unrepresentative temperature reading).
Improper Experimental Setup: Setting up the experiment incorrectly or introducing external
factors can cause deviations in the measurements.

5. Theoretical Errors
Simplified Assumptions: Inaccurate theoretical models or assumptions about how an
experiment should proceed can lead to discrepancies between the true value and the
measured value.
Neglecting Minor Effects: Ignoring small but non-negligible factors, such as friction or air
resistance, can result in inaccurate measurements.

6. Transcription or Data Handling Errors


Recording Errors: Mistakes in manually recording data can lead to incorrect conclusions.
Calculation Mistakes: Errors in calculations, whether manual or software-based, can
propagate incorrect measurements.
Minimizing these errors requires proper calibration, attention to environmental conditions,
correct usage of instruments, and careful observation.
Methods of Measurement
Direct and Indirect Methods
Direct and indirect measurement methods are approaches used to quantify physical
quantities, but they differ in how the measurement is obtained.
1. Direct Measurement:
Definition: In direct measurement, the quantity is measured directly using an instrument
designed for that purpose.
Examples:
Measuring length with a ruler or tape measure.
Measuring weight with a balance or scale.
Measuring temperature with a thermometer.
Advantages: Simple and often more accurate as it involves direct interaction with the
quantity being measured.
Disadvantages: Not always feasible for complex or inaccessible quantities.
2. Indirect Measurement:
Definition: In indirect measurement, the quantity is not measured directly. Instead, it is
calculated based on other directly measured quantities and mathematical relationships.
Examples:
Determining the height of a building by measuring its shadow and using trigonometry.
Measuring the density of an object by determining its mass and volume and applying the
formula.
Calculating distance by using speed and time (Distance = Speed × Time).
Advantages: Useful for measuring quantities that are difficult or impossible to measure
directly.
Disadvantages: Less accurate than direct methods due to potential errors in the
measurement of the related quantities or assumptions in the mathematical models.
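To make the idea concrete, here is a minimal Python sketch of the indirect examples above (height from a shadow, density from mass and volume, distance from speed and time); all input values are hypothetical.

```python
# A minimal sketch of indirect measurement: quantities that are hard to
# measure directly are computed from directly measured ones.
# All input values are hypothetical.

import math

# Height of a building from its shadow length and the sun's elevation angle.
shadow_length = 12.0   # m, measured directly with a tape
sun_elevation = 55.0   # degrees, measured with an inclinometer
height = shadow_length * math.tan(math.radians(sun_elevation))

# Density from directly measured mass and volume.
mass = 2.7             # kg
volume = 0.001         # m^3
density = mass / volume  # kg/m^3

# Distance from speed and time.
speed = 20.0           # m/s
time = 30.0            # s
distance = speed * time  # m

print(f"height = {height:.2f} m, density = {density:.0f} kg/m^3, "
      f"distance = {distance:.0f} m")
```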

Primary, Secondary, and Tertiary Measurements


In the context of measurement, primary, secondary, and tertiary measurements refer to
different levels of measurement processes, based on their accuracy, reference standards,
and how directly they measure the quantity of interest. These terms are often used in
metrology and scientific instrumentation, and they indicate the degree of separation from
the fundamental definition of the quantity being measured.
1. Primary Measurement
Definition:
A primary measurement directly measures a physical quantity without any need for
reference to another standard or calibration. It uses fundamental laws of physics and is
based on direct observation or measurement of the quantity itself.
Key Features:
No dependency on other measurements.
Considered the most accurate and fundamental type of measurement.
Involves little to no estimation, interpolation, or correction.
Examples:
Time measurement using an atomic clock, which is based directly on the transition
frequency of atoms such as caesium-133 (a fundamental physical property).
Mass measurement using a Kibble balance (formerly known as a watt balance), which
directly links mass to fundamental physical constants like Planck’s constant.
Length measurement using interferometry, where the length is determined based on the
wavelength of light, a fundamental constant.
Advantages:
Highly accurate, based on fundamental constants or properties of nature.
Used as reference standards in laboratories for calibration purposes.
Disadvantages:
Often requires sophisticated and expensive equipment.
May not be practical for everyday or industrial applications.
2. Secondary Measurement
Definition:
A secondary measurement is a measurement that relies on calibration against a primary
standard. It does not directly measure the quantity of interest but is calibrated using a
known reference or standard. Secondary measurements are typically used for practical
purposes where direct primary measurement is not feasible.
Key Features:
Requires reference to a primary standard for calibration.
Relies on comparison methods, i.e., the instrument or method is calibrated against a known
reference.
Commonly used in industrial, laboratory, and everyday settings because they are more
practical and cost-effective than primary measurements.
Examples:
Thermometer calibration: A thermometer may be calibrated against a primary reference
such as the triple point of water (exactly 0.01°C).
Pressure gauges: Calibrated against a manometer, which serves as a primary or higher-order
standard.
Mass scales: Calibrated using a set of standard weights that are traceable to a primary mass
standard (e.g., a kilogram standard).
Advantages:
Practical and cost-effective for routine measurements.
Widely applicable in industries and laboratories where high accuracy is needed but primary
measurement is impractical.
Disadvantages:
Less accurate than primary measurements due to potential calibration errors.
Dependent on the accuracy of the reference standard used for calibration.
3. Tertiary Measurement
Definition:
A tertiary measurement is based on calibration from a secondary standard. It involves
further measurement steps and comparisons, making it more indirect and potentially less
accurate than primary or secondary measurements. These are typically used for everyday
applications where extreme precision is not necessary.
Key Features:
Calibrated using a secondary standard.
Less accurate than both primary and secondary measurements but more suitable for large-
scale or general-purpose applications.
Often used in routine, non-critical measurements in industries, commerce, and daily life.
Examples:
Household thermometers: Calibrated based on secondary standards in laboratories, used
for general purposes like checking body temperature.
Commercial weighing scales: These scales are calibrated with secondary standards but may
not meet the precision required for scientific purposes.
Routine voltage measurement: Using multimeters that are calibrated against secondary
electrical standards.
Advantages:
Inexpensive and easy to use.
Sufficient for general purposes and non-critical applications.
Disadvantages:
Least accurate compared to primary and secondary measurements.
Relies on multiple layers of calibration, which can introduce errors.

Accuracy, precision, and resolution


Accuracy, precision, and resolution are fundamental concepts in measurement, crucial for
understanding how well a measuring system or instrument performs. While they are often
confused, each term refers to a different aspect of measurement quality.

1. Accuracy
Accuracy refers to how close a measured value is to the true or accepted value of the
quantity being measured. In simple terms, it is the degree of correctness or closeness to the
true value.
Key Features:
Indicates how close a single measurement (or the average of several measurements) is to
the actual or true value.
High accuracy means the measurement is close to the true value.
A measurement can be accurate even if it is imprecise, but only if the errors cancel out over
multiple measurements.
Example:
If you are measuring the weight of a standard 10 kg object, and your scale shows 10.01 kg,
the measurement is quite accurate since it’s close to the true weight.
Factors Affecting Accuracy:
Calibration: Proper calibration of instruments with known standards improves accuracy.
Systematic Errors: Errors that consistently occur in the same direction (e.g., an improperly
calibrated instrument can introduce bias).
Environmental Factors: Temperature, humidity, and other environmental conditions can
affect the accuracy of instruments.
2. Precision
Precision refers to how consistently repeated measurements produce the same results,
regardless of whether they are close to the true value. It reflects the degree of repeatability
or reproducibility of the measurement.
Key Features:
High precision means that repeated measurements give similar results, but those results
may or may not be close to the true value.
Precision is independent of accuracy. A highly precise measurement may still be inaccurate
if all the repeated values are consistently far from the true value.
Precision is often quantified by statistical measures like standard deviation or variance.
Example:
If you repeatedly measure the same object’s weight, and the scale reads 9.50 kg every time,
the measurements are precise (consistent), but they are not accurate (the true value is 10
kg).
Factors Affecting Precision:
Instrument stability: The stability of the measuring device can influence precision.
Instruments that fluctuate during use will have poor precision.
User technique: In manual measurements, differences in technique can affect the
consistency of results.
Random Errors: Random variations or noise can affect precision, introducing inconsistencies
between measurements.
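The distinction between accuracy and precision can be made concrete with a short Python sketch: the bias of the mean quantifies accuracy, while the standard deviation quantifies precision. The readings below are hypothetical and mirror the 10 kg example above.

```python
# A minimal sketch separating accuracy from precision for a set of
# repeated readings of a known 10 kg reference mass (values hypothetical).

import statistics

true_value = 10.0                          # kg, the reference standard
readings = [9.51, 9.49, 9.50, 9.52, 9.50]  # kg, repeated measurements

mean = statistics.mean(readings)
bias = mean - true_value            # accuracy: closeness to the true value
spread = statistics.stdev(readings)  # precision: repeatability of readings

print(f"mean = {mean:.3f} kg, bias = {bias:+.3f} kg, "
      f"std dev = {spread:.3f} kg")
# Small std dev but large bias: the scale is precise, yet not accurate.
```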

3. Resolution
Resolution refers to the smallest difference or change in a quantity that an instrument can
detect or display. It is the fineness or granularity of the measurement.
Key Features:
Resolution does not affect the accuracy or precision directly, but it defines the smallest
measurable increment of a quantity.
Higher resolution means the instrument can detect smaller changes in the measurement.
An instrument with low resolution may not be able to detect small variations or changes,
even if it is accurate and precise.
Example:
A digital thermometer that reads temperatures to the nearest 0.1°C has a higher resolution
than one that only reads to the nearest 1°C.
If you have a scale with a resolution of 0.01 kg, it will detect weight changes as small as 0.01
kg, whereas a scale with a resolution of 0.1 kg will only detect changes of 0.1 kg or more.
Factors Affecting Resolution:
Design of the instrument: Digital or analogue instruments are designed with a specific
resolution based on the smallest measurable interval.
Display limitations: The display or output of the instrument may limit the resolution (e.g., a
digital multimeter may round to the nearest digit).
Signal-to-noise ratio: In instruments measuring analogue signals, noise can limit the
effective resolution by masking small changes.
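A minimal Python sketch of resolution, showing how an instrument's smallest increment hides changes finer than that step (values hypothetical):

```python
# A minimal sketch of resolution: an instrument can only report values
# in steps of its smallest increment, so finer changes are invisible.

def quantize(value, resolution):
    """Round a true value to the nearest step the instrument can display."""
    return round(value / resolution) * resolution

true_weight = 10.037  # kg, hypothetical true value

print(f"{quantize(true_weight, 0.01):.2f} kg")  # 10.04: 0.01 kg scale sees it
print(f"{quantize(true_weight, 0.1):.1f} kg")   # 10.0: 0.1 kg scale misses it
```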

Measurement Devices
1. Temperature Measurement (Thermocouple & Optical
Pyrometer)
1.1 Thermocouple
A thermocouple is a temperature-sensing device that works on the principle of the Seebeck
effect, which describes how a voltage is generated when two dissimilar metals are joined at
two different junctions and exposed to different temperatures. Thermocouples are widely
used in industries due to their wide temperature range, simplicity, and durability.

Basic Components of a Thermocouple:


1. Two Dissimilar Metal Wires: The thermocouple consists of two wires made from different
metals or metal alloys.
2. Two Junctions:
Hot Junction: The point where the two metals are joined, exposed to the temperature that
needs to be measured.
Cold Junction (Reference Junction): The other end of the metals, kept at a known or
reference temperature (usually at ambient or a controlled temperature).

Working Principle (Seebeck Effect)


The functioning of a thermocouple is based on the Seebeck effect, which states that when
two different metals are joined at two points, and there is a temperature difference
between the junctions, a voltage (thermoelectric EMF) is generated between them. The
magnitude of this voltage is related to the temperature difference between the hot and cold
junctions.
Step-by-Step Process:
1. Temperature Difference Creates Voltage:
When the hot junction (placed in the environment where temperature is to be measured)
experiences a higher temperature compared to the cold junction, electrons at the hot
junction have more energy and move more freely.
Due to this temperature difference, electrons diffuse between the two dissimilar metals,
causing a thermoelectric voltage (EMF) to be generated.

2. Voltage Measurement:
The generated voltage is proportional to the difference in temperature between the hot
junction and the cold junction.
The thermocouple generates a small voltage, typically in the range of microvolts to
millivolts, depending on the metals used and the temperature gradient.
3. Temperature Calculation:
The voltage generated by the thermocouple is measured by a voltmeter or a specialized
thermocouple measurement system. Using the known Seebeck coefficient (which varies
depending on the materials used) and a calibration table or formula, the temperature at the
hot junction can be calculated.
4. Cold Junction Compensation:
In practice, the cold junction temperature is usually at ambient temperature or controlled.
However, to improve accuracy, the cold junction temperature is often measured separately
and compensated for to get the exact temperature at the hot junction.
Modern thermocouple systems use an electronic cold junction compensation system to
account for this.
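As an illustrative sketch (not a production thermocouple routine), the linearized relation EMF ≈ S × (T_hot − T_cold) can be inverted in a few lines of Python. Real systems use standardized polynomial tables rather than a single constant; the Seebeck coefficient below is only an approximate value for a type K thermocouple.

```python
# A minimal sketch of converting thermocouple voltage to temperature
# using the linearized Seebeck relation EMF = S * (T_hot - T_cold).
# Real systems use standardized polynomial tables; the constant below
# is an approximation for a type K thermocouple.

S = 41e-6  # V/°C, approximate type K Seebeck coefficient

def hot_junction_temp(emf_volts, cold_junction_c):
    """Cold-junction-compensated hot junction temperature (linear model)."""
    return cold_junction_c + emf_volts / S

# Example: 8.2 mV measured with the reference junction at 25 °C.
print(f"{hot_junction_temp(8.2e-3, 25.0):.1f} °C")
```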
1.2 Optical Pyrometer
The technique of measuring the temperature of an object without touching it is called
pyrometric measurement. This is a non-contact type of measurement used in various
industrial applications.

Optical Pyrometer
In an optical pyrometer, a brightness comparison is made to measure the temperature. The
change in the colour of a body as its temperature grows is taken as the measure of
temperature. The device compares the brightness produced by the radiation of the object
whose temperature is to be measured with that of a reference source. The reference
brightness is produced by a lamp whose intensity can be adjusted until it becomes equal to
the brightness of the source object. At a given wavelength, the light intensity emitted by an
object depends on its temperature. Once the brightness is matched, the current passing
through the lamp is measured using a multimeter; when calibrated, its value is proportional
to the temperature of the source. The main parts of an optical pyrometer are:
An eyepiece (for the observer) on the left side and an optical objective lens on the right.
A reference lamp, powered by a battery.
A rheostat to change the current and hence the brightness of the lamp filament.
An absorption screen, fitted between the optical lens and the reference bulb, to extend the
temperature range that can be measured.
A red filter, placed between the eyepiece and the reference bulb, to narrow the band of
wavelengths.

Working
Radiation from the source is captured by the optical objective lens, which focuses the
thermal radiation onto the reference bulb. The observer watches the process through the
eyepiece and adjusts it so that the reference lamp filament is in sharp focus and
superimposed on the image of the temperature source. The observer then changes the
rheostat setting, which changes the current in the reference lamp and hence its intensity.
Three conditions can be observed:
1. The filament is dark, i.e. cooler than the temperature source.
2. The filament is bright, i.e. hotter than the temperature source.
3. The filament disappears, i.e. its brightness equals that of the temperature source. At this
point, the current flowing in the reference lamp is measured; when calibrated, its value is a
measure of the temperature of the source.
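As an illustration of this final step, the following Python sketch interpolates a temperature from the measured lamp current using a calibration table; all table values are hypothetical.

```python
# A minimal sketch of the final step of an optical pyrometer reading:
# the lamp current at the brightness match is converted to temperature
# by interpolating a calibration table. The table values are hypothetical.

calibration = [            # (lamp current in A, source temperature in °C)
    (0.30, 800),
    (0.40, 1000),
    (0.52, 1200),
    (0.66, 1400),
]

def temperature_from_current(current):
    """Linearly interpolate temperature from the calibration table."""
    for (i0, t0), (i1, t1) in zip(calibration, calibration[1:]):
        if i0 <= current <= i1:
            return t0 + (t1 - t0) * (current - i0) / (i1 - i0)
    raise ValueError("current outside calibrated range")

print(f"{temperature_from_current(0.46):.0f} °C")  # brightness matched at 0.46 A
```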
Advantages
1. The simple assembly of the device makes it easy to use.
2. It provides high accuracy, typically within ±5 degrees Celsius.
3. No direct body contact is needed between the optical pyrometer and the object, so
it can be used in a wide variety of applications.
4. As long as the image of the object whose temperature is to be measured fills the
field of view of the pyrometer, the distance between them is not a problem. The
device can therefore be used for remote sensing.
5. The device can be used not only to measure temperature but also to view the heat
produced by the object or source. Optical pyrometers can measure and view
wavelengths of 0.65 microns or less. A radiation pyrometer, by contrast, can be
used for high-heat applications and can measure wavelengths between 0.70 and
20 microns.
6. It can measure the temperature of moving objects.
7. The temperature of live electrical equipment can be measured, which is very
difficult with contact-type measurement.
8. The instrument can be used where physical access is difficult, such as small ducts
or objects at roof height.

Disadvantages
1. Since the measurement is based on light intensity, the device can be used only in
applications with a minimum temperature of about 700 degrees Celsius.
2. The device is not useful for obtaining continuous temperature readings at small
intervals.

2. Pressure Measurement (Manometer & Bourdon Tube)


2.1 U-Tube Manometer
The simplest form of manometer consists of a U-shaped glass tube containing liquid. It is
used to measure gauge pressure and is the primary instrument used in the workshop for
calibration.

The working principle of a manometer is that one end is connected to the source of pressure,
while the other end is left open to the atmosphere. If the applied pressure is greater than
atmospheric pressure, the liquid in that column will be forced down. The level of the filling
liquid in the leg where the pressure is applied, i.e. the left leg of the tube, drops, while that in
the right-hand leg rises. A scale fitted between the tubes enables us to measure this
displacement.
Let us assume that the pressure we are measuring, applied to the left-hand side of the
manometer, has a constant value. The liquid will stop moving only when the pressure exerted
by the column of liquid of height h is sufficient to balance the pressure applied to the left side
of the manometer, i.e. when the head pressure produced by the column h is equal to the
pressure to be measured.
Knowing the height of the column of liquid, h, and the density of the filling liquid, ρ, we can
calculate the value of the applied pressure, P:
P = ρ × g × h
Typical filling liquids commonly used in manometers and their densities:
1. Water (ρ = 1000 kg/m³)
2. Oil (ρ between 800 and 950 kg/m³)
3. Mercury (ρ = 13600 kg/m³)
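The relation P = ρ g h translates directly into a few lines of Python; the column height in the example below is hypothetical.

```python
# A minimal sketch of the manometer relation P = ρ * g * h: the gauge
# pressure balancing a liquid column of height h.

g = 9.81  # m/s^2, standard gravity

def gauge_pressure(height_m, density_kg_m3):
    """Gauge pressure (Pa) supported by a liquid column of given height."""
    return density_kg_m3 * g * height_m

# Example: a 0.25 m difference in mercury levels (ρ = 13600 kg/m³).
p = gauge_pressure(0.25, 13600)
print(f"{p:.0f} Pa ({p / 1e5:.3f} bar)")
```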
Advantages:
1. It comes at a low cost.
2. Its construction is simple.
3. It has good sensitivity.
4. Its accuracy is good too.
5. It is suitable for application at low pressure.
6. It is simple in its operation.
7. It does not require calibration against any standard values.
8. It can be used for a variety of liquids.
Disadvantages:
1. It is large and bulky in size.
2. It cannot work without proper levelling.
3. It does not have any fixed reference.
4. It is subject to errors caused by condensation.
5. It has no over-range protection.
6. It has a very small operating range.
7. It has almost no portability because of its fragility.
8. It can also have errors due to temperature changes.
9. Its response is very slow, so fluctuating pressures cannot be measured with it.
Applications
1. It is used to measure the pressure of the fluids using mechanical properties of fluids.
2. It is also used to measure vacuum.
3. It is also used to measure the flow of the fluid.
4. It is used to measure the filter pressure drop of the fluids.
5. It is also used for meter calibrations.
6. It is used for leak testing.
7. It is also used to measure the liquid level present in a tank.

2.2 Bourdon Tube Pressure Gauge


A Bourdon tube pressure gauge consists of an elastic tube that is bound or welded on one
side into a socket. A variation in pressure results in deflection of the tube. Patented in 1849
by the French engineer Eugène Bourdon, the Bourdon pressure gauge was commended for
its accuracy, sensitivity, and linearity relative to other techniques for measuring pressure.
Most likely, every mechanical dial-type pressure gauge you have ever seen depends on the
principle of the Bourdon tube.
The Bourdon tube pressure gauge works by measuring the amount of change in a coiled or
semicircular metal tube caused by a pressurized fluid inside it. It relies on the principle that
a flattened tube tends to regain its circular cross-section when pressurized. Bourdon tube
pressure gauges are used for the measurement of relative pressures ranging between
0.6 and 7,000 bar. They are classified as mechanical pressure measuring instruments and
hence operate without any electrical power.

Types of Bourdon Tube Pressure Gauges


a. C-shaped Bourdon Tube
b. Spiral Bourdon Tube
c. Helical Bourdon Tube

Bourdon Tube Pressure Gauge Construction and Working Principle


It essentially consists of a C-shaped hollow tube, one end of which is fixed and connected to
the pressure tapping, while the other end is left free. The cross-section of the tube is
elliptical. When pressure is applied, the elliptical tube (the Bourdon tube) attempts to take
on a circular cross-section. Stress is thereby created, and the tube tries to straighten.
Consequently, the free end of the tube moves, the amount of movement depending upon
the magnitude of the pressure.
The applied pressure is proportional to the displacement of the free (closed) end of the
tube. A deflecting and indicating mechanism is connected to the free end; it rotates the
pointer and indicates the pressure reading. The materials used are typically phosphor
bronze, brass, and beryllium copper. For a two-inch overall diameter of the C-tube, the
useful travel of the free end is approximately ⅛ inch. Although C-type tubes are the most
commonly used, tubes of other shapes, such as helical, bent, or spiral tubes, are also used.
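As a sketch of the linearity described above, the applied pressure can be inferred from the free-end travel with a simple scale factor; the full-scale values below are hypothetical.

```python
# A minimal sketch of the Bourdon gauge's linearity assumption: the
# free-end displacement is proportional to applied pressure, so the
# reading follows from a single scale factor. Values are hypothetical.

full_scale_pressure = 10.0    # bar, pressure at maximum deflection
full_scale_travel = 3.175e-3  # m (~1/8 inch free-end travel at full scale)

def pressure_from_travel(travel_m):
    """Applied pressure inferred from the measured free-end displacement."""
    return full_scale_pressure * travel_m / full_scale_travel

print(f"{pressure_from_travel(1.6e-3):.2f} bar")  # about half of full travel
```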
Advantages
1. The construction and operation of the Bourdon tube are quite simple, so no skilled
operator is required to handle it.
2. The most distinctive advantage of the Bourdon tube over other types of pressure
gauges is its applicability to both low- and high-pressure systems.
3. The Bourdon tube does not require any electrical power for its operation.
4. It provides good accuracy in real-time applications.
5. Its cost is comparatively lower than that of other types of pressure gauges, and it
has a long life.
Disadvantages
1. Bourdon tubes respond slowly to changes in pressure.
2. They are sensitive to vibrations and shocks.
3. They are sometimes subject to hysteresis.
4. Amplification is required, since the displacement of the free end of the tube is
small.
