
VINAYAKA MISSION UNIVERSITY

V.M.K.V ENGINEERING COLLEGE

DEPARTMENT OF ELECTRICAL AND ELECTRONICS ENGINEERING

Year: III Sem: VI

Subject: Measurements & Instrumentation

Unit -1

1. INTRODUCTION

MEASUREMENTS:
The measurement of a given quantity is essentially an act, or the result, of
comparison between the quantity (whose magnitude is unknown) & a predefined
standard. Since two quantities are compared, the result is expressed in numerical
values.
BASIC REQUIREMENTS OF MEASUREMENT:
i) The standard used for comparison purposes must be accurately defined &
should be commonly accepted
ii) The apparatus used & the method adopted must be provable.

MEASURING INSTRUMENT:
It may be defined as a device for determining the value or magnitude of a
quantity or variable.

ELEMENTS OF A GENERALIZED MEASUREMENT SYSTEM:

Most measurement systems contain three main functional elements:
i) Primary sensing element
ii) Variable conversion element &
iii) Data presentation element.

[Block diagram: Measured quantity -> Primary sensing element -> Variable
conversion element -> Variable manipulation element -> Data transmission
system -> Data presentation element; a data storage/playback element is
attached to the chain.]
Primary sensing element:
The quantity under measurement makes its first contact with the primary sensing
element of a measurement system.
i.e., the measurand (the unknown quantity which is to be measured) is first detected
by the primary sensor, which gives an output in a different analogous form.
This output is then converted into an electrical signal by a transducer (a device
which converts energy from one form to another).
The first stage of a measurement system is therefore known as the detector-transducer
stage.

Variable conversion element:
The output of the primary sensing element may be an electrical signal of any form; it
may be a voltage, a frequency or some other electrical parameter.
For the instrument to perform the desired function, it may be necessary to convert this
output to some other suitable form.
Variable manipulation element:
The function of this element is to manipulate the signal presented to it while
preserving the original nature of the signal.
It is not necessary that a variable manipulation element should follow the variable
conversion element.
Some non-linear processes like modulation, detection, sampling, filtering, chopping
etc., are performed on the signal to bring it to the desired form to be accepted by
the next stage of the measurement system.
This process of conversion is called signal conditioning.
The term signal conditioning includes many other functions in addition to variable
conversion & variable manipulation.
In fact, the element that follows the primary sensing element in any instrument or
measurement system is called the signal conditioning element.

When the elements of an instrument are actually physically separated, it becomes


necessary to transmit data from one to another. The element that performs this function
is called a data transmission element.

The information about the quantity under measurement has to be conveyed to the
personnel handling the instrument or the system for monitoring, control, or analysis
purposes.
This function is performed by the data presentation element.
In case data is to be monitored, visual display devices are needed.
These devices may be analog or digital indicating instruments like ammeters,
voltmeters etc.
In case data is to be recorded, recorders like magnetic tapes, high speed camera &
TV equipment, CRT and printers may be used.
For control & analysis purposes, microprocessors or computers may be used.
The final stage in a measurement system is known as the terminating stage.
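The chain of elements described above can be sketched as a composition of simple functions. The sketch below is illustrative only: the thermocouple-style sensor, its 0.04 mV/°C constant and the amplifier gain are assumptions invented for the demo, not part of the text.

```python
# Illustrative sketch of the generalized measurement chain: each stage is a
# function, and the system is their composition. All constants are assumed.

SENSITIVITY_MV_PER_C = 0.04  # primary sensing element constant (assumed)
AMPLIFIER_GAIN = 250.0       # variable manipulation element gain (assumed)

def primary_sensing(temp_c):
    """Primary sensing element: temperature -> analogous mV signal."""
    return SENSITIVITY_MV_PER_C * temp_c

def variable_manipulation(signal_mv):
    """Variable manipulation element: amplify the signal (output in volts)."""
    return signal_mv * AMPLIFIER_GAIN / 1000.0

def data_presentation(signal_v):
    """Data presentation element: invert the known chain to display
    the measurand in engineering units (deg C)."""
    return signal_v * 1000.0 / (AMPLIFIER_GAIN * SENSITIVITY_MV_PER_C)

reading = data_presentation(variable_manipulation(primary_sensing(100.0)))
print(reading)  # displays the measured temperature, 100.0
```

A real system would insert data transmission and storage stages between manipulation and presentation; they are omitted here to keep the composition readable.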

STATIC & DYNAMIC CHARACTERISTICS


The performance characteristics of an instrument are mainly divided into two
categories:
i) Static characteristics
ii) Dynamic characteristics
Static characteristics:
The set of criteria defined for instruments that are used to measure quantities
which vary slowly with time or are mostly constant, i.e., do not vary with time,
is called static characteristics.
The various static characteristics are:
i) Accuracy
ii) Precision
iii) Sensitivity
iv) Linearity
v) Reproducibility
vi) Repeatability
vii) Resolution
viii) Threshold
ix) Drift
x) Stability
xi) Tolerance

xii) Range or span
Accuracy:
It is the degree of closeness with which the reading approaches the true
value of the quantity to be measured. The accuracy can be expressed in following
ways:

a) Point accuracy:
Such accuracy is specified at only one particular point of the scale.
It does not give any information about the accuracy at any other point on the scale.

b) Accuracy as percentage of scale span:

When an instrument has a uniform scale, its accuracy may be expressed in terms
of the scale range.

c) Accuracy as percentage of true value:

The best way to conceive the idea of accuracy is to specify it in terms of the
true value of the quantity being measured.

Precision:
It is the measure of reproducibility i.e., given a fixed value of a quantity,
precision is a measure of the degree of agreement within a group of measurements.
The precision is composed of two characteristics:

a) Conformity:
Consider a resistor having a true value of 2385692 Ω, which is being measured by
an ohmmeter. The observer can only read consistently a value of 2.4 MΩ due to the
non-availability of a proper scale. The error created due to the limitation of the
scale reading is a precision error.
b) Number of significant figures:
The precision of the measurement is obtained from the number of significant
figures, in which the reading is expressed. The significant figures convey the actual
information about the magnitude & the measurement precision of the quantity. The
precision can be mathematically expressed as:

    P = 1 - |Xn - X̄n| / X̄n

Where, P = precision
Xn = value of the nth measurement
X̄n = average value of the set of measurement values
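Taking the usual form of this expression, P = 1 − |Xn − X̄n| / X̄n, a quick numerical illustration can be given. The set of readings below is assumed for the demo.

```python
# Numerical illustration of the precision expression P = 1 - |Xn - xbar| / xbar.
# The repeated readings are assumed values, not taken from the text.

readings = [101.0, 103.0, 102.0, 98.0, 102.0]   # assumed measurement set
xbar = sum(readings) / len(readings)             # average value of the set

def precision(xn, xbar):
    """Precision of a single measurement relative to the set average."""
    return 1 - abs(xn - xbar) / xbar

for xn in readings:
    print(xn, round(precision(xn, xbar), 4))
```

A reading equal to the average gives P = 1; the further a reading lies from the average, the lower its precision figure.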

Sensitivity:

The sensitivity denotes the smallest change in the measured variable to which the
instrument responds. It is defined as the ratio of the change in the output of an
instrument to the change in the value of the quantity to be measured.
Mathematically it is expressed as,

    Sensitivity = change in output / change in input

Thus, if the calibration curve is linear, the sensitivity of the instrument is the
slope of the calibration curve. If the calibration curve is not linear, then the
sensitivity varies with the input. Inverse sensitivity or deflection factor is
defined as the reciprocal of sensitivity:

    Inverse sensitivity (deflection factor) = 1 / sensitivity
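For a linear calibration curve the slope can be taken between any two calibration points. The pressure-sensor figures below are assumed for illustration.

```python
# Sensitivity as the slope of a linear calibration curve:
# ratio of the change in output to the change in input. Values assumed.

input_kpa = [0.0, 20.0, 40.0, 60.0]     # applied inputs (assumed)
output_mv = [0.0, 10.0, 20.0, 30.0]     # corresponding outputs (assumed, linear)

# delta(output) / delta(input) over the full range
sensitivity = (output_mv[-1] - output_mv[0]) / (input_kpa[-1] - input_kpa[0])
deflection_factor = 1.0 / sensitivity    # inverse sensitivity

print(sensitivity)        # 0.5 mV per kPa
print(deflection_factor)  # 2.0 kPa per mV
```

For a non-linear curve the same ratio would have to be evaluated locally, since the slope (and hence the sensitivity) changes with the input.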

Linearity:
The linearity is defined as the ability of an instrument to reproduce the input
characteristics symmetrically & linearly. It is assessed by comparing the actual
calibration curve with an idealized straight line.

Reproducibility:
It is the degree of closeness with which a given value may be repeatedly measured. It
is specified in terms of scale readings over a given period of time.

Repeatability:
It is defined as the variation of scale readings; it is random in nature.

Drift:
Drift may be classified into three categories:

a) Zero drift:

If the whole calibration gradually shifts due to slippage, permanent set, or undue
warming up of electronic tube circuits, zero drift sets in.

b) Span drift or sensitivity drift:

If there is a proportional change in the indication all along the upward scale, the
drift is called span drift or sensitivity drift.
c) Zonal drift:
In case the drift occurs over only a portion of the span of an instrument, it is
called zonal drift.

Resolution:
If the input is slowly increased from some arbitrary input value, it will be found
that the output does not change at all until a certain increment is exceeded. This
increment is called resolution.
Threshold:
If the instrument input is increased very gradually from zero there will be some
minimum value below which no output change can be detected. This minimum value
defines the threshold of the instrument.
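The difference between the two definitions can be made concrete with a toy model of an indicator that only registers input changes larger than a fixed step. The 0.5-unit step is an assumed figure, not from the text.

```python
# Toy model distinguishing threshold from resolution: a simulated indicator
# whose output changes only in multiples of a fixed step. STEP is assumed.

STEP = 0.5  # smallest increment the instrument responds to (assumed)

def indicated(true_input):
    """Quantise the input: the output moves only in multiples of STEP."""
    return STEP * int(true_input / STEP)

# Threshold: starting from zero, there is no output until the input
# reaches STEP.
assert indicated(0.4) == 0.0      # below threshold, no detectable output
assert indicated(0.6) == 0.5      # threshold exceeded, output appears

# Resolution: from an arbitrary reading, the output is unchanged until the
# increment exceeds STEP.
assert indicated(3.1) == indicated(3.4)   # increment of 0.3 < STEP: no change
assert indicated(3.1) != indicated(3.7)   # increment of 0.6 > STEP: change
```

Threshold is thus the minimum detectable input starting from zero, while resolution is the minimum detectable change from any operating point.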
Stability:

It is the ability of an instrument to retain its performance throughout its
specified operating life.
Tolerance:
The maximum allowable error in the measurement is specified in terms of
some value which is called tolerance.
Range or span:
The minimum & maximum values of a quantity that an instrument is designed to
measure define its range; the algebraic difference between them is called the span.

Dynamic characteristics:
The set of criteria defined for instruments that measure quantities which change
rapidly with time is called dynamic characteristics.

The various dynamic characteristics are:

i) Speed of response
ii) Measuring lag
iii) Fidelity
iv) Dynamic error

Speed of response:
It is defined as the rapidity with which a measurement system responds to changes in
the measured quantity.

Measuring lag:
It is the retardation or delay in the response of a measurement system to changes in
the measured quantity. The measuring lags are of two types:
a) Retardation type:
In this case the response of the measurement system begins immediately after the
change in measured quantity has occurred.
b) Time delay lag:
In this case the response of the measurement system begins after a dead time
following the application of the input.

Fidelity:
It is defined as the degree to which a measurement system indicates changes in the
measured quantity without dynamic error.
Dynamic error:
It is the difference between the true value of the quantity changing with time & the
value indicated by the measurement system if no static error is assumed. It is also
called measurement error.

ERRORS IN MEASUREMENT
The types of errors are as follows:
i) Gross errors
ii) Systematic errors
iii) Random errors

Gross Errors:
The gross errors mainly occur due to carelessness or lack of experience of a human
being.
These errors also occur due to incorrect adjustment of instruments.
These errors cannot be treated mathematically.
These errors are also called personal errors.

Ways to minimize gross errors:

The complete elimination of gross errors is not possible, but one can minimize them
in the following ways:
Taking great care while taking the reading, recording the reading & calculating
the result.
Without depending on only one reading, at least three or more readings must be
taken, preferably by different persons.

Systematic errors:
A constant uniform deviation of the operation of an instrument is known as a
systematic error.
The systematic errors are mainly due to the shortcomings of the instrument & the
characteristics of the material used in the instrument, such as defective or worn
parts, ageing effects, environmental effects, etc.

Types of Systematic errors:

There are three types of Systematic errors as:


i) Instrumental errors
ii) Environmental errors
iii) Observational errors

Instrumental errors:
These errors can be mainly due to the following three reasons:
a) Short comings of instruments:
These are because of the mechanical structure of the instruments.
For example: friction in the bearings of various moving parts, irregular spring
tensions, reduction in tension due to improper handling, hysteresis, gear backlash,
stretching of the spring, variations in air gap, etc.
Ways to minimize this error:
These errors can be avoided by the following methods:
selecting a proper instrument and planning the proper procedure for the measurement
recognizing the effect of such errors and applying the proper correction factors
calibrating the instrument carefully against a standard

b) Misuse of instruments:
A good instrument, if used in an abnormal way, gives misleading results.
Poor initial adjustment, improper zero setting, using leads of high resistance
etc., are examples of misusing a good instrument.
Such things do not cause permanent damage to the instrument, but they definitely
cause serious errors.

c) Loading effects:
Loading effects arising from an improper way of using the instrument cause serious
errors.
The best example of such a loading-effect error is connecting a well-calibrated
voltmeter across two points of a high-resistance circuit: the meter draws current
and disturbs the voltage it is trying to measure, so the reading is misleading.
The same voltmeter connected in a low-resistance circuit gives an accurate reading.
Ways to minimize this error:
Thus the errors due to the loading effect can be avoided by using an instrument
intelligently and correctly.
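The voltmeter example above can be worked numerically. The sketch below models a simple voltage divider with the meter's finite resistance in parallel with the measured branch; all component values are assumed for illustration.

```python
# Worked example of the loading effect: a voltmeter of finite resistance
# across a divider disturbs the very voltage it measures. Values assumed.

def divider_reading(r1, r2, r_meter, v_supply):
    """Voltage across r2 when a meter of resistance r_meter is connected."""
    r2_loaded = (r2 * r_meter) / (r2 + r_meter)   # r2 in parallel with meter
    return v_supply * r2_loaded / (r1 + r2_loaded)

V = 10.0
RM = 1e6            # 1 Mohm voltmeter resistance (assumed)

# High-resistance circuit: the true value across r2 would be 5 V.
high = divider_reading(1e6, 1e6, RM, V)
# Low-resistance circuit: same meter, same true value of 5 V.
low = divider_reading(100.0, 100.0, RM, V)

print(round(high, 3))  # about 3.333 V -- a serious loading error
print(round(low, 3))   # very nearly 5 V -- loading negligible
```

The same meter is "good" in one circuit and badly misleading in the other, which is exactly why the text stresses using the instrument intelligently.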

Environmental errors:
These errors are due to conditions external to the measuring instrument.
The various factors resulting in these environmental errors are temperature changes,
pressure changes, thermal emfs, ageing of equipment and the frequency sensitivity
of an instrument.
Ways to minimize this error:
The various methods which can be used to reduce these errors are:
i) Using the proper correction factors and using the information supplied by the
manufacturer of the instrument
ii) Using the arrangement which will keep the surrounding conditions constant
iii) Reducing the effect of dust & humidity on the components by hermetically
sealing the components in the instruments
iv) The effects of external fields can be minimized by using the magnetic or electro
static shields or screens
v) Using the equipment which is immune to such environmental effects.
Observational errors:
These are the errors introduced by the observer.
There are many sources of observational errors, such as parallax error while
reading a meter, wrong scale selection, etc.
Ways to minimize this error:
To eliminate such errors one should use instruments with mirrors, knife-edged
pointers, etc.
The systematic errors can be subdivided as static and dynamic errors. The static errors
are caused by the limitations of the measuring device while the dynamic errors are
caused by the instrument not responding fast enough to follow the changes in the
variable to be measured.
Random errors:

Some errors still remain after the systematic and instrumental errors are reduced
or at least accounted for.
The causes of such errors are unknown and hence these errors are called random
errors.

Ways to minimize this error:
The only way to reduce these errors is by increasing the number of observations
and using statistical methods to obtain the best approximation of the reading.

STATISTICAL EVALUATION OF MEASUREMENT DATA

Out of the various possible errors, the random errors cannot be determined in the
ordinary process of measurement.
Such errors are treated mathematically.
The mathematical analysis of the various measurements is called statistical
analysis of the data.
For such statistical analysis, the same reading is taken a number of times,
generally using different observers, different instruments & different ways of
measurement.
The statistical analysis helps to determine analytically the uncertainty of the
final test results.
Arithmetic mean & median:
When a number of readings of the same measurement are taken, the most likely value
from the set of measured values is the arithmetic mean of the readings taken.
The arithmetic mean value can be mathematically obtained as,

    X̄ = (X1 + X2 + X3 + ... + Xn) / n

where X1, X2, ... Xn are the individual readings and n is their number.

This mean is very close to the true value if the number of readings is very large.
But when the number of readings is large, calculation of the mean value is
complicated. In such a case, a median value is obtained, which is a close
approximation to the arithmetic mean value. For a set of measurements X1, X2,
X3 ... Xn written down in ascending order of magnitude, the median value is
given by,

    Xmedian = X(n+1)/2    (for odd n; for even n, the mean of the two middle values)
Average deviation:

The average deviation is defined as the sum of the absolute values of the
deviations divided by the number of readings. This is also called mean deviation.

    D̄ = (|d1| + |d2| + ... + |dn|) / n = Σ|di| / n

where di = Xi - X̄ = deviation of the ith reading
Xi = value of the ith reading
X̄ = arithmetic mean

Standard deviation:
The amount by which the n measurement values are spread about the mean is
expressed by the standard deviation. It is also called the root mean square
deviation.
The standard deviation is defined as the square root of the sum of the individual
deviations squared, divided by the number of readings. It is denoted as σ:

    σ = √( (d1² + d2² + ... + dn²) / n ) = √( Σ di² / n )

In practice, for a small number of readings (less than 20), the denominator in the
above equation is taken as n - 1 rather than n.

Variance:

The variance is the mean square deviation, so it is the square of the standard
deviation. It is denoted as V:

    V = σ²
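The average deviation, standard deviation and variance can be computed together. The sketch below uses an assumed set of readings and, since n < 20, uses the n − 1 denominator for the standard deviation as recommended above.

```python
# Average deviation, standard deviation (n-1 denominator, since n < 20)
# and variance for a set of readings. The readings are assumed values.

import math

readings = [147.2, 147.4, 147.9, 148.1, 147.1, 147.5]   # assumed data
n = len(readings)
xbar = sum(readings) / n                       # arithmetic mean

deviations = [x - xbar for x in readings]      # d_i = X_i - xbar
avg_dev = sum(abs(d) for d in deviations) / n  # average (mean) deviation
std_dev = math.sqrt(sum(d * d for d in deviations) / (n - 1))
variance = std_dev ** 2                        # mean square deviation

print(round(xbar, 3), round(avg_dev, 3), round(std_dev, 3), round(variance, 3))
```

Note the variance printed here is the square of the n − 1 standard deviation; with the plain n denominator both figures would be slightly smaller.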

STANDARD & CALIBRATION
CALIBRATION
Calibration is the process of making an adjustment or marking a scale so that the
readings of an instrument agree with the accepted & certified standard.
In other words, it is the procedure for determining the correct values of the
measurand by comparison with measured or standard ones.
Calibration offers a guarantee to the device or instrument that it is operating
with the required accuracy, under the stipulated environmental conditions.
The calibration procedure involves steps like visual inspection for various
defects, installation according to the specifications, zero adjustment, etc.
The standard device with which the comparison is made is called the standard
instrument. The instrument which is unknown & is to be calibrated is called the
test instrument. Thus in calibration, the test instrument is compared with the
standard instrument.

Types of calibration methodologies:


There are two methodologies for obtaining the comparison between test instrument &
standard instrument. These methodologies are
i) Direct comparisons
ii) Indirect comparisons

Direct comparisons:
In a direct comparison, a source or generator applies a known input to the meter
under test.
The ratio of what the meter indicates to the known generator value gives the
meter's error.
In such a case the meter is the test instrument while the generator is the
standard instrument.
The deviation of the meter from the standard value is compared with the allowable
performance limit.
With the help of direct comparison, a generator or source can also be calibrated.
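A minimal sketch of the direct-comparison check described above: a source of known value drives the meter under test, and the deviation from the known value is compared with an allowable performance limit. The known input value and the 1 % limit are assumed figures.

```python
# Direct-comparison calibration check sketch. The known input applied by
# the standard source and the allowable error limit are assumed figures.

KNOWN_INPUT = 5.000        # value applied by the standard source/generator
ALLOWED_ERROR_PCT = 1.0    # allowable performance limit (assumed)

def check_meter(indicated):
    """Return the percentage error of the meter under test and whether it
    falls within the allowable performance limit."""
    error_pct = (indicated - KNOWN_INPUT) / KNOWN_INPUT * 100.0
    return error_pct, abs(error_pct) <= ALLOWED_ERROR_PCT

print(check_meter(5.02))   # 0.4 % error: within the limit, passes
print(check_meter(5.40))   # 8 % error: outside the limit, fails
```

Calibrating a generator by direct comparison runs the same logic in reverse: a standard meter measures the generator's output, and the deviation from the nominal setting is judged against the limit.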
Indirect comparisons:
In an indirect comparison, the test instrument is compared with the response of a
standard instrument of the same type, i.e., if the test instrument is a meter, the
standard instrument is also a meter; if the test instrument is a generator, the
standard instrument is also a generator, & so on.
If the test instrument is a meter, then the same input is applied to the test
meter as well as the standard meter.
In the case of generator calibration, the outputs of the test generator as well as
the standard are set to the same nominal levels.
Then a transfer meter is used which measures the outputs of both the standard and
the test generator.
Standard
All instruments are calibrated at the time of manufacture against measurement
standards.
A standard of measurement is a physical representation of a unit of measurement.
A standard means a known, accurate measure of a physical quantity.

The different standards of measurement are classified as:


i) Primary standard:
Closest to the true value.
Protected in the national laboratory.
ii) Sub-standard:
Almost as close to the true value.
It is compared with the primary standard.
Considered an accurate one.
iii) Secondary standard:
Somewhat close to the true value.
It is compared with the sub-standard.
Available in the market.
iv) International standards:
Closest to the true value.
Common standards used all over the world.

*****************************************

