312 - Presentation2

The document discusses the measurement of errors, defining absolute and relative errors, and explaining the concept of limiting errors in measurements. It categorizes errors into gross, systematic, and random errors, detailing their origins and effects on measurement accuracy. Additionally, it highlights the importance of calibration and the loading effects that can distort measurements in practical applications.

MEASUREMENT OF ERRORS
In practice, it is impossible to measure the exact value of the measurand. There is always some difference between the measured value and the true (absolute) value of the unknown quantity, which may be very small or large. The difference between the true value and the measured value of the unknown quantity is known as the absolute error of the measurement.
If δA is the absolute error, and Am and A are the measured and true values of the unknown quantity, then δA may be expressed as
δA = Am − A
Sometimes, δA is denoted by ε0.
The relative error is the ratio of the absolute error to the true value of the unknown quantity to be measured:
relative error, εr = δA/A = ε0/A = Absolute error / True value
When the absolute error ε0 (= δA) is small, i.e., when the difference between the true value A and the measured value Am of the unknown quantity is negligible, the relative error may be approximated as εr = ε0/Am, and the percentage error expressed as
percentage error = εr × 100 = (ε0/Am) × 100
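The error definitions above can be collected into a small sketch (Python, with illustrative function names that are not from the text):

```python
def absolute_error(measured, true):
    """delta_A = A_m - A: difference between measured and true value."""
    return measured - true

def relative_error(measured, true):
    """epsilon_r = delta_A / A: absolute error over the true value."""
    return (measured - true) / true

def percentage_error(measured, true):
    """epsilon_r * 100, with the measured value in the denominator
    (valid only when the absolute error is small, so A ~ A_m)."""
    return (measured - true) / measured * 100.0

# Capacitor figures from a later example: 205.3 uF measured, 201.4 uF true
print(round(relative_error(205.3e-6, 201.4e-6), 4))  # 0.0194
```

Note that the percentage-error helper uses Am rather than A in the denominator, matching the approximation above; the two agree only when ε0 is small.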
The measured value of the unknown quantity may be more than or less than the true value of the measurand. Manufacturers therefore have to specify the deviations from the nominal value of a particular quantity in order to enable the purchaser to make a proper selection according to their requirements. The limits of these deviations from specified values are defined as limiting or guarantee errors. A quantity having a specified magnitude Am and a maximum or limiting error ±δA must have a magnitude between the limits
Am − δA and Am + δA
or, A = Am ± δA
For example, suppose the measured value of a resistance of 100 Ω has a limiting error of ±0.5 Ω. Then the true value of the resistance lies between the limits 100 ± 0.5 Ω, i.e., between 99.5 Ω and 100.5 Ω.

Example: A 0-25 A ammeter has a guaranteed accuracy of 1 percent of full scale reading. The current
measured by this instrument is 10 A. Determine the limiting error in percentage.
Solution: The magnitude of the limiting error of the instrument, from the equation above,
δA = εr × A = 0.01 × 25 = 0.25 A
The magnitude of the current being measured is 10 A. The relative error at this current is
εr = δA/Am = 0.25/10 = 0.025
Therefore, the current being measured lies between the limits
A = Am(1 ± εr) = 10(1 ± 0.025) = 10 ± 0.25 A
The limiting error = (0.25/10) × 100 = 2.5%
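The limiting-error arithmetic in the ammeter example can be checked with a short sketch (the helper name is our own, not from the text):

```python
def limiting_error_percent(full_scale, accuracy_fraction, reading):
    """Limiting error referred to the actual reading, in percent.

    delta_A is fixed by the full-scale accuracy, so the relative
    error grows as the reading drops below full scale.
    """
    delta_a = accuracy_fraction * full_scale  # 0.01 * 25 = 0.25 A
    return delta_a / reading * 100.0

print(limiting_error_percent(25, 0.01, 10))  # 2.5
```

The same helper reproduces the later voltmeter example: limiting_error_percent(250, 0.02, 150) gives 3.33%. This is why instruments are best used in the upper part of their scale.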

Example: The inductance of an inductor is specified as 20 H ± 5 percent by a manufacturer. Determine the limits of inductance between which it is guaranteed.
Solution: Relative error, εr = Percentage error/100 = 5/100 = 0.05
Limiting value of inductance, A = Am ± δA = Am ± εr·Am = Am(1 ± εr)
= 20(1 ± 0.05) = 20 ± 1 H
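The guaranteed limits Am(1 ± εr) can likewise be computed directly (a minimal sketch; the function name is our own):

```python
def guaranteed_limits(nominal, percent_error):
    """A = Am(1 ± εr): lower and upper guaranteed values."""
    eps = percent_error / 100.0
    return nominal * (1 - eps), nominal * (1 + eps)

lo, hi = guaranteed_limits(20, 5)  # the 20 H ± 5% inductor above
print(round(lo, 6), round(hi, 6))  # 19.0 21.0
```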
Example: A 0-250 V voltmeter has a guaranteed accuracy of 2% of full-scale reading. The voltage measured by the voltmeter is 150 volts. Determine the limiting error in percentage.
Solution: The magnitude of the limiting error of the instrument,
δA = εr × V = 0.02 × 250 = 5.0 V
The magnitude of the voltage being measured is 150 V.
The percentage limiting error at this voltage = (5.0/150) × 100% = 3.33%
Example: The measured value of a resistance is 10.25 Ω, whereas its true value is 10.22 Ω. Determine the absolute error of the measurement.
Solution: Measured value, Am = 10.25 Ω
True value, A = 10.22 Ω
Absolute error, δA = Am − A = 10.25 − 10.22 = 0.03 Ω
Example: The measured value of a capacitor is 205.3 μF, whereas its true value is 201.4 μF. Determine the relative error.
Solution: Measured value, Am = 205.3 × 10⁻⁶ F
True value, A = 201.4 × 10⁻⁶ F
Absolute error, ε0 = Am − A = 205.3 × 10⁻⁶ − 201.4 × 10⁻⁶ = 3.9 × 10⁻⁶ F
Relative error, εr = ε0/A = (3.9 × 10⁻⁶)/(201.4 × 10⁻⁶) = 0.0194 or 1.94%
Example: A wattmeter reads 25.34 watts. The absolute error in the measurement is −0.11 watt. Determine the true value of power.
Solution: Measured value, Am = 25.34 W
Absolute error, δA = −0.11 W
True value, A = Measured value − Absolute error
= 25.34 − (−0.11) = 25.45 W
TYPES OF ERRORS
Errors may originate in a variety of ways. They are categorised into three main types:
 Gross error
 Systematic error
 Random error
1. Gross Error: These errors occur because of human mistakes (which cannot be subjected to mathematical treatment) in reading instruments and in recording and calculating measurement results. One common gross error is committed through improper use of the measuring instrument. Any indicating instrument changes the conditions of a complete circuit to some extent when connected to it, so the reading of the measured quantity is altered by the method used. For example, Figure 13(a) and (b) show two possible connections of the voltage and current coils of a wattmeter.
In Figure 13(a), the connection shown is used when the applied voltage is high and the current flowing in the circuit is low, while the connection shown in Figure 13(b) is used when the applied voltage is low and the current flowing in the circuit is high. If these wattmeter connections are used in the opposite order, an error is liable to enter the wattmeter reading.
Another example of this type of error is the use of a well-calibrated voltmeter for measurement of voltage across a resistance of very high value. The same voltmeter, when connected in a low-resistance circuit, may give a more dependable reading, because the resistance of the voltmeter itself is then very high in comparison. This shows that the voltmeter has a loading effect on the circuit, which alters the original situation during the measurement.
A pictorial illustration of different types of gross errors is shown in Figure 14.

Figure 14: Different types of gross errors
2. Systematic Error: These are errors that remain constant or change according to a definite law on repeated measurement of the given quantity. These errors can be evaluated, and their influence on the results of measurement can be eliminated by the introduction of proper corrections.
There are two types of systematic errors:
 Instrumental error
 Environmental error
Instrumental errors are inherent in the measuring instruments because of their mechanical structure and the calibration or operation of the apparatus used. For example, in a D'Arsonval movement, friction in the bearings of various components may cause incorrect readings. Improper zero adjustment has a similar effect. Poor construction, irregular spring tension and variations in the air gap may also cause instrumental errors. Calibration error may also result in the instrument reading either too low or too high.
Such instrumental errors may be avoided by
 Selecting a proper measuring device for the particular application
 Calibrating the measuring device or instrument against a standard
 Applying correction factors after determining the magnitude of the instrumental errors
Environmental errors are much more troublesome, as these errors change with time in an unpredictable manner. They are introduced by using an instrument in conditions different from those in which it was assembled and calibrated. Change in temperature is the major cause of such errors (it affects dimensions, resistivity and the spring effect of the material). Other environmental changes, such as humidity, altitude, the earth's magnetic field, gravity, and stray electric and magnetic fields, also affect the results given by the instruments.
These errors can be eliminated or reduced by taking the following precautions:
 Use the measuring instrument in the same atmospheric conditions in which it was assembled and calibrated.
 If the above precaution is not possible, the deviation in local conditions must be determined and suitable compensations applied to the instrument reading.
 Automatic compensation for such deviations, employing sophisticated devices, is also possible.
3. Random Errors: These errors are of variable magnitude and sign and do not obey any known law. The presence of random errors becomes evident when different results are obtained on repeated measurements of one and the same quantity. The effect of random errors is minimised by measuring the given quantity many times under the same conditions and calculating the arithmetic mean of the results obtained.
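The averaging described above can be sketched as follows (the readings are illustrative, not taken from the text):

```python
# Repeated readings of one and the same quantity; the scatter
# between them is the random error.
readings = [10.03, 9.98, 10.01, 9.99, 10.02, 9.97]

# The arithmetic mean is taken as the best estimate of the true value:
# individual random deviations of opposite sign tend to cancel.
mean = sum(readings) / len(readings)
print(round(mean, 2))  # 10.0
```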
SYSTEMATIC OR RANDOM ERROR

• Measurement errors could be systematic or random.
• As far as systematic errors are concerned, recalibration at a suitable frequency is important to minimize errors due to drift in instrument characteristics.
• The use of proper and rigorous calibration procedures is essential in order to ensure that recalibration achieves its intended purpose;
  – to reflect the importance of getting these procedures right.
LOADING EFFECTS
Ideally, an element used for signal sensing, conditioning, transmission and detection should not change or distort the original signal. The sensing element should extract no energy, or as little energy as possible, from the process so as not to change the parameter being measured. Under practical conditions, however, the introduction of any element in a system invariably results in extraction of energy from the system, thereby distorting the original signal. This distortion may take the form of attenuation, waveform distortion, phase shift, etc., and consequently ideal measurement becomes impossible.
The incapability of the system to faithfully measure the input signal in undistorted form is called the loading effect. This results in loading error.
Loading effects in a measurement system occur not only in the detector-transducer stage but in the signal conditioning and signal presentation stages as well. The loading problem is carried right down to the basic elements themselves. The loading effect may arise from both electrical and mechanical elements, due to the impedances of the various elements connected in a system. Mechanical impedances may be treated similarly to electrical impedances.
Sometimes the loading effect occurs because measuring instruments are connected in an improper way. Suppose a voltmeter is connected in parallel with a very high resistance. Although the voltmeter's own resistance is high, it is no longer negligible compared with the resistance it is connected across, so the circuit current changes. This is the loading effect of a voltmeter connected in parallel with a very high resistance. Similarly, an ammeter has a very low resistance, so if an ammeter is connected in series with a very low resistance, the total resistance of the circuit changes and, consequently, the circuit current also changes.
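The voltmeter loading effect can be quantified with a simple voltage-divider model (all component values here are illustrative assumptions):

```python
def loaded_voltage(v_source, r_series, r_measured, r_voltmeter):
    """Voltage indicated across r_measured when a voltmeter of finite
    resistance r_voltmeter is connected in parallel with it."""
    # The meter forms a parallel combination with the measured resistor,
    # drawing current and pulling the measured voltage down.
    r_parallel = r_measured * r_voltmeter / (r_measured + r_voltmeter)
    return v_source * r_parallel / (r_series + r_parallel)

# 10 V source, two 1 kohm resistors in series: true mid-point voltage is 5 V.
ideal = loaded_voltage(10.0, 1e3, 1e3, 1e12)   # near-ideal meter
loaded = loaded_voltage(10.0, 1e3, 1e3, 10e3)  # 10 kohm meter loads the circuit
print(round(ideal, 2), round(loaded, 2))  # 5.0 4.76
```

A 10 kΩ meter thus reads about 4.8% low in this circuit; the higher the meter resistance relative to the resistance it is placed across, the smaller the loading error.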
PRINCIPLES OF CALIBRATION
• Calibration consists of comparing the output of the instrument or sensor under test against the output of an instrument of known accuracy when the same input (the measured quantity) is applied to both instruments.
• Calibration ensures that the measuring accuracy of all instruments and sensors used in a measurement system is known over the whole measurement range, provided that the calibrated instruments and sensors are used in environmental conditions that are the same as those under which they were calibrated.
• Instruments used as a standard in calibration procedures are usually chosen to be of greater inherent accuracy than the process instruments that they are used to calibrate.
• Because such instruments are only used for calibration purposes, greater accuracy can often be achieved by specifying a type of instrument that would be unsuitable for normal process measurements.
• The characteristics of any instrument change over a period of time. Thus, instrument calibration has to be repeated at prescribed intervals.
• Changes in instrument characteristics are caused by factors such as mechanical wear and the effects of dirt, dust, fumes, chemicals, and temperature change in the operating environment.
• To a great extent, the magnitude of the drift in characteristics depends on the amount of use an instrument receives and hence on the amount of wear and the length of time that it is subjected to the operating environment.
• However, some drift also occurs even in storage as a result of aging effects in components within the instrument.
• Practical experimentation has to be applied to determine the required frequency of instrument recalibration.
CONTROL OF CALIBRATION ENVIRONMENT

• Any instrument used as a standard in calibration procedures


must be kept solely for calibration duties and must never be
used for other purposes.

• Most particularly, it must not be regarded as a spare instrument


that can be used for process measurements if the instrument
normally used for that purpose breaks down.

• Proper provision for process instrument failures must be made


by keeping a spare set of process instruments. Standard
calibration instruments must be totally separate.
 Calibration procedures that relate in any way to measurements used for quality control functions are controlled by the international standard ISO 9000.
 One of the clauses in ISO 9000 requires that all persons using calibration equipment be adequately trained. Training must be adequate and targeted at the particular needs of the calibration systems involved.
 The manager in charge of the calibration function is clearly responsible for ensuring that this condition is met.
 People must understand what they need to know and, especially, why they must have this information.
CALIBRATION CHAIN AND TRACEABILITY
• The calibration facilities provided within the instrumentation department of a company provide the first link in the calibration chain. Instruments used for calibration at this level are known as working standards.
• Such working standard instruments are kept by the instrumentation department of a company solely for calibration duties.
• It can be assumed that they will maintain their accuracy over a reasonable period of time, because use-related deterioration in accuracy is largely eliminated. However, over the longer term, the characteristics of even such standard instruments will drift, mainly due to aging effects in components within them.
• Therefore, over this longer term, a program must be instituted for calibrating working standard instruments at appropriate intervals of time against instruments of yet higher accuracy.
• The instrument used for calibrating working standard instruments is known as a secondary reference standard.
• The establishment of a company standards laboratory to provide a calibration
facility of the required quality is economically viable only in the case of very large
companies where large numbers of instruments need to be calibrated across
several factories.
• In the case of small to medium size companies, the cost of buying and maintaining
such equipment is not justified. Instead, they would normally use the calibration
service provided by various companies that specialize in offering a standards
laboratory.
• Such standards laboratories are closely monitored by national standards
organizations. Although each different country has its own structure for the
maintenance of standards, each of these different frameworks tends to be
equivalent in its effect in ensuring that the requirements of ISO/IEC 17025 are
met.
• This provides confidence that the goods and services that cross national
boundaries from one country to another have been measured by properly
calibrated instruments.
• Calibration has a chain-like structure.
• There must be clear evidence to show that there is no break in this chain.
• Knowledge of the full chain of instruments involved in the calibration procedure is known as traceability and is specified as a mandatory requirement in satisfying the ISO 9000 standard.
CALIBRATION RECORDS
• An essential element in the maintenance of measurement systems and the operation of calibration procedures is the provision of full documentation.
• This must give a full description of the measurement requirements throughout the workplace, the instruments used, and the calibration system and procedures operated. Individual calibration records for each instrument must be included within this.
• This documentation is a necessary part of the quality manual.
• Instruments specified for each measurement situation must be listed next. This list must be accompanied by full instructions about the proper use of the instruments concerned.
• These instructions will include details about any environmental control or other special precautions that must be taken to ensure that the instruments provide measurements of sufficient accuracy to meet the measurement limits defined.
• Documentation must specify procedures that are to be
followed if an instrument is found to be outside the calibration
limits.
• This may involve adjustment, redrawing its scale, or
withdrawing an instrument, depending on the nature of the
discrepancy and the type of instrument involved.
• Instruments withdrawn will either be repaired or be scrapped.
In the case of withdrawn instruments, care must be taken to
prevent them from being put back into use accidentally.
