Unit 5 Notes FME
Error in measurement
Error in measurement refers to the difference between the measured value and the true
value of the quantity being measured. It can arise due to various factors, such as limitations
of the measuring device, environmental conditions, or human error. Errors in measurement
can occur in several ways, and they are generally categorized as systematic, random, or
gross errors. Below are some common sources of errors:
1. Instrumental Errors
Poor Calibration: If the measuring instrument is not properly calibrated, it may give
consistently incorrect results.
Wear and Tear: Over time, instruments may wear out or lose precision, affecting accuracy.
Parallax Error: This occurs when the position of the observer affects the reading of the
measurement (e.g., not viewing a scale directly from above).
Instrument Resolution: Limited resolution of the instrument, where it cannot detect small
changes in the measured quantity, leads to errors.
2. Environmental Errors
Temperature Variations: Instruments and measured objects can expand or contract with
temperature changes, affecting the readings.
Pressure and Humidity: Changes in atmospheric conditions, such as humidity and air
pressure, can affect certain measurements (e.g., in electrical resistance or chemical
reactions).
Magnetic or Electromagnetic Fields: External fields can interfere with electrical or magnetic
instruments, causing incorrect readings.
3. Observational Errors
Human Error: Inaccurate observation, like reading an analogue scale incorrectly or pressing
the wrong button on digital equipment, can lead to errors.
Reaction Time: In manual measurements, a slight delay in recording the measurement can
result in errors, especially in time-sensitive experiments.
Judgment Errors: In cases where subjective judgment is required (e.g., visual estimations),
inconsistencies in human perception can lead to errors.
4. Procedural Errors
Improper Use of Equipment: Misusing or mishandling equipment, such as using the wrong
range or improper setup, can introduce errors.
Incorrect Sampling: Errors can occur if the sample being measured does not represent the
entire system (e.g., taking an unrepresentative temperature reading).
Improper Experimental Setup: Setting up the experiment incorrectly or introducing external
factors can cause deviations in the measurements.
5. Theoretical Errors
Simplified Assumptions: Inaccurate theoretical models or assumptions about how an
experiment should proceed can lead to discrepancies between the true value and the
measured value.
Neglecting Minor Effects: Ignoring seemingly minor but non-negligible factors, such as friction or air
resistance, can result in inaccurate measurements.
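To make the distinction between systematic and random errors concrete, here is a minimal Python sketch that simulates repeated readings of a known 10 kg mass; the bias and noise values are invented purely for illustration.

    import random

    TRUE_VALUE = 10.0       # true mass in kg (known only because this is a simulation)
    SYSTEMATIC_BIAS = -0.50 # e.g. a poorly calibrated scale reading consistently low
    NOISE_STD = 0.05        # random scatter from environmental/observational effects

    def take_reading():
        # One simulated measurement: true value + constant bias + random noise
        return TRUE_VALUE + SYSTEMATIC_BIAS + random.gauss(0, NOISE_STD)

    readings = [take_reading() for _ in range(10)]
    mean = sum(readings) / len(readings)

    # Averaging reduces the random error, but the systematic bias remains:
    print(f"mean of 10 readings = {mean:.3f} kg (true value {TRUE_VALUE} kg)")

Averaging many readings shrinks the random component, but the systematic bias, being a calibration problem, cannot be removed by repetition alone.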
Accuracy, Precision and Resolution
1. Accuracy
Accuracy refers to how close a measured value is to the true or accepted value of the
quantity being measured. In simple terms, it is the degree of correctness or closeness to the
true value.
Key Features:
Indicates how close a single measurement (or the average of several measurements) is to
the actual or true value.
High accuracy means the measurement is close to the true value.
A set of measurements can be accurate on average even if individually imprecise, provided the
random errors cancel out over multiple measurements.
Example:
If you are measuring the weight of a standard 10 kg object, and your scale shows 10.01 kg,
the measurement is quite accurate since it’s close to the true weight.
Factors Affecting Accuracy:
Calibration: Proper calibration of instruments with known standards improves accuracy.
Systematic Errors: Errors that consistently occur in the same direction (e.g., an improperly
calibrated instrument can introduce bias).
Environmental Factors: Temperature, humidity, and other environmental conditions can
affect the accuracy of instruments.
2. Precision
Precision refers to how consistently repeated measurements produce the same results,
regardless of whether they are close to the true value. It reflects the degree of repeatability
or reproducibility of the measurement.
Key Features:
High precision means that repeated measurements give similar results, but those results
may or may not be close to the true value.
Precision is independent of accuracy. A highly precise measurement may still be inaccurate
if all the repeated values are consistently far from the true value.
Precision is often quantified by statistical measures such as standard deviation or variance (see the sketch at the end of this section).
Example:
If you repeatedly measure the same object’s weight, and the scale reads 9.50 kg every time,
the measurements are precise (consistent), but they are not accurate (the true value is 10
kg).
Factors Affecting Precision:
Instrument stability: The stability of the measuring device can influence precision.
Instruments that fluctuate during use will have poor precision.
User technique: In manual measurements, differences in technique can affect the
consistency of results.
Random Errors: Random variations or noise can affect precision, introducing inconsistencies
between measurements.
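As a rough illustration of how accuracy and precision are quantified, the sketch below (referenced earlier) computes the mean and standard deviation of repeated readings; the sample values are invented to show high precision with poor accuracy.

    import statistics

    TRUE_VALUE = 10.0  # kg, accepted value of the standard mass

    # Invented repeated readings: tightly clustered (precise) but offset (inaccurate)
    readings = [9.51, 9.49, 9.50, 9.52, 9.50]

    mean = statistics.mean(readings)
    spread = statistics.stdev(readings)

    # Accuracy: closeness of the mean to the true value
    print(f"accuracy : mean {mean:.3f} kg vs true {TRUE_VALUE} kg (offset {mean - TRUE_VALUE:+.3f} kg)")
    # Precision: scatter of the readings about their own mean
    print(f"precision: standard deviation {spread:.3f} kg")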
3. Resolution
Resolution refers to the smallest difference or change in a quantity that an instrument can
detect or display. It is the fineness or granularity of the measurement.
Key Features:
Resolution does not affect the accuracy or precision directly, but it defines the smallest
measurable increment of a quantity.
Higher resolution means the instrument can detect smaller changes in the measurement.
An instrument with low resolution may not be able to detect small variations or changes,
even if it is accurate and precise.
Example:
A digital thermometer that reads temperatures to the nearest 0.1°C has a higher resolution
than one that only reads to the nearest 1°C.
If you have a scale with a resolution of 0.01 kg, it will detect weight changes as small as 0.01
kg, whereas a scale with a resolution of 0.1 kg will only detect changes of 0.1 kg or more.
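The effect of resolution can be mimicked in code by rounding a reading to the instrument's smallest increment; a minimal sketch follows (the weight value is invented):

    def displayed_reading(true_value, resolution):
        # Round a value to the nearest multiple of the instrument's resolution
        return round(true_value / resolution) * resolution

    weight = 10.057  # kg, hypothetical actual weight

    print(f"{displayed_reading(weight, 0.01):.2f} kg")  # 10.06 kg on a 0.01 kg scale
    print(f"{displayed_reading(weight, 0.1):.1f} kg")   # 10.1 kg on a 0.1 kg scale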
Factors Affecting Resolution:
Design of the instrument: Digital or analogue instruments are designed with a specific
resolution based on the smallest measurable interval.
Display limitations: The display or output of the instrument may limit the resolution (e.g., a
digital multimeter may round to the nearest digit).
Signal-to-noise ratio: In instruments measuring analogue signals, noise can limit the
effective resolution by masking small changes.
Measurement Devices
1. Temperature Measurement (Thermocouple & Optical Pyrometer)
1.1 Thermocouple
A thermocouple is a temperature-sensing device that works on the principle of the Seebeck
effect, which describes how a voltage is generated when two dissimilar metals are joined at
two different junctions and exposed to different temperatures. Thermocouples are widely
used in industries due to their wide temperature range, simplicity, and durability.
Working:
1. Voltage Generation:
When the two junctions are held at different temperatures, a small emf is generated in the
circuit due to the Seebeck effect.
2. Voltage Measurement:
The generated voltage is proportional to the difference in temperature between the hot
junction and the cold junction.
The thermocouple generates a small voltage, typically in the range of microvolts to
millivolts, depending on the metals used and the temperature gradient.
3. Temperature Calculation:
The voltage generated by the thermocouple is measured by a voltmeter or a specialized
thermocouple measurement system. Using the known Seebeck coefficient (which varies
depending on the materials used) and a calibration table or formula, the temperature at the
hot junction can be calculated.
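As a simplified sketch of this step, the code below converts a measured thermocouple voltage to temperature using a single constant Seebeck coefficient (roughly 41 µV/°C for a type K thermocouple near room temperature). Real instruments use polynomial calibration tables rather than this linear assumption.

    SEEBECK_UV_PER_C = 41.0  # approx. type K Seebeck coefficient, µV/°C (linear assumption)

    def hot_junction_temp_c(measured_uv, cold_junction_c=0.0):
        # Linear approximation: temperature difference = voltage / Seebeck coefficient
        return cold_junction_c + measured_uv / SEEBECK_UV_PER_C

    # e.g. 8200 µV (8.2 mV) measured with the cold junction held at 0 °C
    print(f"{hot_junction_temp_c(8200.0):.0f} °C")  # ~200 °C under this approximation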
4. Cold Junction Compensation:
In practice, the cold junction temperature is usually at ambient temperature or controlled.
However, to improve accuracy, the cold junction temperature is often measured separately
and compensated for to get the exact temperature at the hot junction.
Modern thermocouple systems use an electronic cold junction compensation system to
account for this.
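Continuing the same linear assumption as the previous sketch, cold junction compensation can be approximated by adding the voltage equivalent of the separately measured cold junction temperature before converting; the figures are illustrative only.

    SEEBECK_UV_PER_C = 41.0  # same illustrative linear Seebeck coefficient as above

    def compensated_temp_c(measured_uv, cold_junction_c):
        # Add the voltage 'lost' at the cold junction, then convert the
        # 0 °C-referenced total back to a hot-junction temperature
        cj_equivalent_uv = cold_junction_c * SEEBECK_UV_PER_C
        return (measured_uv + cj_equivalent_uv) / SEEBECK_UV_PER_C

    # e.g. 7200 µV measured while an on-board sensor reports the cold junction at 25 °C
    print(f"{compensated_temp_c(7200.0, 25.0):.0f} °C")  # ~201 °C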
1.2 Optical Pyrometer
The technique that allows the temperature of an object to be measured without touching it is
called pyrometric measurement. This is a non-contact type of measurement and is used in
various industrial applications.
In an optical pyrometer, the temperature is measured by making a brightness comparison; the
change in colour and brightness of an object as its temperature grows is used as the basis of
the measurement. The device compares the brightness produced by the radiation of the object
whose temperature is to be measured with that of a reference source: a lamp whose brightness
can be adjusted until its intensity equals that of the source object. For a given wavelength,
the light intensity emitted by an object depends on its temperature. Once the brightness is
matched, the current passing through the lamp is measured with a multimeter; when calibrated,
its value is proportional to the temperature of the source. The main parts of an optical
pyrometer are:
An eyepiece (for the observer) on the left side and an optical objective lens on the right.
A reference lamp, powered by a battery.
A rheostat to change the current and hence the brightness of the lamp filament.
An absorption screen, fitted between the optical lens and the reference bulb, to extend the
temperature range that can be measured.
A red filter, placed between the eyepiece and the reference bulb, to narrow the band of
wavelengths.
Working
The radiation from the source is emitted and captured by the optical objective lens, which
focuses the thermal radiation onto the reference bulb. The observer watches the process
through the eyepiece and adjusts it so that the reference lamp filament is in sharp focus
and superimposed on the image of the temperature source. The observer then changes the
rheostat setting, which changes the current in the reference lamp and, in turn, its
intensity. The filament can then appear in three different ways:
1. The filament is dark, i.e. cooler than the temperature source.
2. The filament is bright, i.e. hotter than the temperature source.
3. The filament disappears, i.e. the filament and the temperature source are equally bright.
At this point, the current flowing in the reference lamp is measured; when calibrated, its
value is a measure of the temperature of the source.
Advantages
1. Its simple construction makes the device easy to use.
2. It provides high accuracy, typically within ±5 °C.
3. No direct contact is needed between the optical pyrometer and the object, so it can
   be used in a wide variety of applications.
4. As long as the image of the object whose temperature is to be measured fills the
   field of view of the instrument, the distance between the two is not a problem.
   Thus, the device can be used for remote sensing.
5. The device can be used not only to measure the temperature of an object/source but
   also to view the heat it produces. Optical pyrometers measure and view wavelengths
   of 0.65 microns or less, whereas a radiation pyrometer, used for high-heat
   applications, measures wavelengths between 0.70 and 20 microns.
6. It can measure the temperature of moving objects.
7. The temperature of electrical items can be measured, which is very difficult with
   contact-type measurement.
8. The instrument can be used where physical access is difficult, e.g. small ducts or
   objects at roof height.
Disadvantages
1. As the measurement is based on light intensity, the device can be used only in
   applications with a minimum temperature of about 700 degrees Celsius.
2. The device is not suitable for obtaining continuous temperature readings at short
   intervals.
2. Pressure Measurement (Manometer)
The working principle of a manometer is that one end is connected to the source of pressure,
while the other end is left open to the atmosphere. If the applied pressure is greater than
atmospheric pressure, the fluid in the column on that side is forced down. The level of the
filling liquid in the leg where the pressure is applied, i.e. the left leg of the tube,
drops, while that in the right-hand leg rises. A scale fitted between the tubes enables this
displacement to be measured.
Let us assume that the pressure we are measuring, applied to the left-hand side of the
manometer, is of constant value. The liquid stops moving when the pressure exerted by the
liquid column is sufficient to balance the applied pressure, i.e. when the head pressure
produced by the column of height h equals the pressure to be measured.
Knowing the height of the liquid column, h, and the density of the filling liquid, ρ, we can
calculate the value of the applied pressure, P:
P = ρ × g × h
where g ≈ 9.81 m/s² is the acceleration due to gravity; with ρ in kg/m³ and h in metres, P is
obtained in pascals (Pa).
Typical filling liquids commonly used in manometers, and their densities:
1. Water (ρ = 1000 kg/m³)
2. Oil (ρ between 800 and 950 kg/m³)
3. Mercury (ρ = 13600 kg/m³)
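A short worked example applying P = ρ × g × h with the densities listed above; the 0.25 m column height and the mid-range oil density are chosen for illustration.

    G = 9.81  # acceleration due to gravity, m/s²

    def gauge_pressure_pa(density_kg_m3, height_m):
        # Pressure exerted by a liquid column: P = rho * g * h
        return density_kg_m3 * G * height_m

    # A 0.25 m displacement read off the scale, for each common filling liquid
    for name, rho in [("water", 1000), ("oil", 875), ("mercury", 13600)]:
        print(f"{name:8s}: {gauge_pressure_pa(rho, 0.25):8.0f} Pa")

The same column height corresponds to very different pressures depending on the filling liquid, which is why a dense liquid like mercury suits higher pressures while water or oil suits sensitive low-pressure work.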
Advantages:
1. It comes at a low cost.
2. Its construction is simple.
3. It has good sensitivity.
4. Its accuracy is good too.
5. It is suitable for low-pressure applications.
6. It is simple in its operation.
7. It does not need to be calibrated against any standard.
8. It can be used for a variety of liquids.
Disadvantages:
1. It is large and bulky.
2. It cannot work without proper levelling.
3. It does not have any fixed reference.
4. It is subject to errors caused by condensation.
5. It has no over-range protection.
6. It has a very small operating range.
7. It is not portable because of its fragility.
8. It can also show errors due to temperature changes.
9. Its response is very slow, so fluctuating pressures cannot be measured.
Applications
1. It is used to measure the pressure of fluids using the mechanical properties of fluids.
2. It is also used to measure vacuum.
3. It is also used to measure the flow of the fluid.
4. It is used to measure the filter pressure drop of the fluids.
5. It is also used for meter calibrations.
6. It is used for leak testing.
7. It is also used to measure the liquid level present in a tank.