Physical Quantity - Data
• By definition, measurement is the process of determining the
amount, degree or capacity by comparison with the accepted
standards of the system of units being used; OR
• A method to obtain information regarding the physical values of
the variable.
• Two types of units are used in science and
engineering:
– Fundamental units (or quantities)
• E.g. meter (length), kilogram (mass), second (time)
– Derived units (or quantities), i.e. all units which
can be expressed in terms of fundamental units
• E.g. The volume of a substance is proportional to its length (l),
breadth (b) and height (h), or V = l × b × h.
• So, the derived unit of volume (V) is the cubic meter (m3).
Fundamental (Basic) Units
Quantity                    Unit               Unit Symbol
Length                      Meter              m
Mass                        Kilogram           kg
Time                        Second             s
Electric current            Ampere             A
Thermodynamic temperature   Kelvin             K
Luminous intensity          Candela            cd
Quantity of substance       Mole               mol

Supplementary Units
Plane angle                 Radian             rad
Solid angle                 Steradian          sr

Derived Units
Area                        Square meter       m2
Volume                      Cubic meter        m3
Velocity                    Meter per second   m/s
– Length
– Mass
– Time
• A standard is a physical representation of a unit of
measurement.
• It is used for obtaining the values of the physical
properties of other equipment by comparison
methods; e.g.
– The fundamental unit of mass in the SI system is
the kilogram, originally defined as the mass of a cubic
decimeter of water at its temperature of maximum
density, 4 °C.
• Direct comparison
– Easy to do, but less accurate
• e.g. measuring a steel bar directly against a scale
• Indirect comparison
– Uses a calibrated system consisting of several devices to
convert, process (amplify or filter) and
display the output
• e.g. measuring force from strain gauges located in a structure
ELECTRONIC INSTRUMENT
• Basic elements of an electronic instrument:
Transducer → Signal Modifier → Indicating Device
1) Transducer
- converts a non-electrical signal into an electrical signal
- e.g. a pressure sensor detects pressure and converts it to an
electrical signal for display at a remote gauge.
2) Signal modifier
- converts the input signal into a signal suitable for the indicating device
3) Indicating device
- indicates the value of the quantity being measured
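The three-element chain can be sketched in code. This is a minimal illustration, not from the slides: the sensor scale factor (10 mV/kPa) and the amplifier gain (100) are assumed values for a hypothetical pressure-measurement instrument.

```python
# Sketch of the three basic elements of an electronic instrument.
# The scale factor and gain below are illustrative assumptions.

def transducer(pressure_kpa):
    """Convert a non-electrical quantity (pressure) into a voltage.
    Assumes a hypothetical sensor outputting 10 mV per kPa."""
    return pressure_kpa * 0.010  # volts

def signal_modifier(voltage):
    """Amplify the weak sensor signal for the indicating device.
    Assumes a gain of 100."""
    return voltage * 100.0

def indicating_device(voltage):
    """Display the measured value, converted back to pressure units."""
    pressure = voltage / (0.010 * 100.0)  # undo sensor scale and gain
    return f"{pressure:.1f} kPa"

reading = indicating_device(signal_modifier(transducer(250.0)))
print(reading)
```

Each stage matches one block of the diagram: the transducer converts, the signal modifier conditions, and the indicating device displays.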
• Active Instruments
– the quantity being measured simply modulates the
magnitude of some external power source.
• Passive Instruments
– the instrument output is entirely produced by the quantity
being measured.
• e.g. in a passive pressure gauge, the pressure of the fluid is
translated into a movement of a pointer against a scale.
• The energy expended in moving the pointer is
derived entirely from the change in pressure being
measured: there are no other energy inputs to
the system.
• An analogue instrument gives an output that
varies continuously with the quantity being
measured and can take an infinite number of values;
e.g. a deflection-type pressure gauge.
• A digital instrument has an output that varies
in discrete steps and can only take a finite number
of values; e.g. a revolution counter.
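The difference can be shown in a short sketch: an analogue quantity takes any value, while a digital instrument rounds it to the nearest step. The 1 mV step size here is an assumed resolution for a hypothetical digital voltmeter.

```python
# An analogue quantity can take any value; a digital instrument can
# only display discrete steps. The 1 mV step is an assumed value.
true_voltage = 8.93462    # the continuously varying analogue quantity
step = 0.001              # assumed resolution of the digital display (1 mV)

digital_reading = round(true_voltage / step) * step
print(f"analogue: {true_voltage} V  ->  digital: {digital_reading:.3f} V")
```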
Applications
• Strain measurements
• Non-destructive testing
• Automotive sensors: accelerometer, oxygen sensor, airflow
sensor, oil pressure, CO sensor, water temperature
FUNCTION AND ADVANTAGES
• The 3 basic functions of instrumentation:
– Indicating – visualize the process/operation
– Recording – observe and save the measurement readings
– Controlling – control the measurement and process
Noise and Interference
• The measurement chain: Process or Test → Sensor or
Transducer → Amp → Signal Conditioner → ADC
Converter → PC (computation and data storage)
• A Controller closes the loop, providing control over the
process or experiment.
PERFORMANCE CHARACTERISTICS
• Performance characteristics - characteristics that show the
performance of an instrument.
– E.g. accuracy, precision, resolution, sensitivity.
• They allow users to select the most suitable instrument for a
specific measuring job.
• Two basic types of characteristics:
– Static – measuring a constant process condition.
– Dynamic – measuring a varying process condition.
• Accuracy – the degree of exactness (closeness) of a
measurement compared to the expected (true) value.
• Resolution – the smallest change in a measured variable to
which an instrument will respond.
• Precision – a measure of the consistency or repeatability of
measurements, i.e. successive readings do not differ.
• Sensitivity – the ratio of the change in the output (response) of an
instrument to the change in the input or measured variable.
• Expected value – the design value, or the most probable value
that one expects to obtain.
• Error – the deviation of the measured value from the expected
(true) value.
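These definitions can be illustrated numerically. The expected value and readings below are invented for illustration, and the precision formula used (1 − |reading − mean| / mean) is one common textbook form:

```python
# Numerical illustration of error, accuracy and precision.
# All values below are assumed, not from the slides.
expected = 50.0                       # expected (true) value, e.g. 50 V
readings = [49.7, 50.2, 49.9, 50.1]   # successive measured values

# Error: deviation of the measured value from the expected value
errors = [r - expected for r in readings]

# Accuracy (percent): closeness of a reading to the expected value
accuracy_pct = [100 * (1 - abs(e) / expected) for e in errors]

# Precision: consistency of successive readings about their own mean
mean = sum(readings) / len(readings)
precision = [1 - abs(r - mean) / mean for r in readings]

print(round(errors[0], 2), round(accuracy_pct[0], 1), round(precision[0], 4))
```

Note that a set of readings can be precise (tightly clustered) without being accurate (close to the expected value).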
• Accuracy, Precision, Resolution, and Significant
Figures
– Accuracy (A) and Precision
• A measurement accuracy of 1% defines how close the measurement is
to the actual measured quantity.
• Precision is not the same as the accuracy of a measurement, but they
are related.
Measurement precision depends on the smallest change that can be observed in
the measured quantity. For a digital voltmeter reading 8.935 V, a 1 mV change will
be indicated on the display. For the analog instrument, 50 mV is the smallest
change that can be noted.
a) If the measured quantity increases or decreases by 1 mV, the reading
becomes 8.936 V or 8.934 V respectively. Therefore, the voltage is
measured with a precision of 1 mV.
b) The pointer position can be read to within one-fourth of the smallest scale
division. Since the smallest scale division represents 0.2 V, one-fourth of
the scale division is 50 mV.
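The two precision figures reduce to one line of arithmetic each; as a quick check (the 1 mV digit and the 0.2 V division come from the example above):

```python
# (a) Digital voltmeter: the last display digit changes in 1 mV steps.
digital_precision = 0.001  # V

# (b) Analog voltmeter: readable to 1/4 of the smallest 0.2 V division.
analog_precision = 0.2 / 4  # V

print(digital_precision, analog_precision)
```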
Resolution
The measurement precision of an instrument defines the smallest
change in measured quantity that can be observed. This smallest
observable change is the resolution of the instrument.
Significant Figures
The number of significant figures indicates the precision
of a measurement.
In a.c. circuits the impedance of the instrument varies with frequency, and thus the
loading effect of the instrument can change.
Example:
Calculate the power dissipated by the voltmeter and by resistor R in Figure 10.9 when
(a) R = 250 Ω, (b) R = 2 MΩ. Assume that the voltmeter sensitivity (sometimes called
the figure of merit) is 10 kΩ/V.
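Figure 10.9 is not reproduced here, so a full solution cannot be given. The sketch below shows only the general method under assumed circuit values (a supply E driving R through a series resistor, with the voltmeter across R): the meter's input resistance is sensitivity × range, and the power in each element follows from P = V²/R. Only the 10 kΩ/V sensitivity comes from the problem statement; everything else is an assumption.

```python
# Loading-effect sketch. The supply, range and series resistor are
# ASSUMED values; only the sensitivity is from the problem statement.
SENSITIVITY = 10_000   # ohm per volt (given)
V_RANGE = 50           # assumed voltmeter range, volts
E = 100.0              # assumed supply voltage, volts
R_SERIES = 10_000.0    # assumed resistance in series with R, ohms

def dissipation(R):
    """Power in the voltmeter and in R, with the meter across R."""
    r_meter = SENSITIVITY * V_RANGE         # meter input resistance
    r_par = R * r_meter / (R + r_meter)     # R in parallel with the meter
    v = E * r_par / (R_SERIES + r_par)      # voltage across the pair
    return v**2 / r_meter, v**2 / R         # (P_meter, P_R)

for R in (250.0, 2e6):
    p_meter, p_r = dissipation(R)
    print(f"R = {R:g} ohm: P_meter = {p_meter:.3e} W, P_R = {p_r:.3e} W")
```

The pattern to notice: when R is small compared with the meter resistance, the meter barely disturbs the circuit; when R is comparable to it (2 MΩ against 500 kΩ here), the loading effect is severe.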
2. Systematic Error – due to shortcomings of the instrument
(such as defective or worn parts, ageing, or effects of the
environment on the instrument).
• In general, systematic errors can be subdivided into static and dynamic
errors.
– Static – caused by limitations of the measuring device or the
physical laws governing its behavior.
– Dynamic – caused by the instrument not responding fast
enough to follow the changes in the measured variable.
• Environmental errors
- due to external conditions affecting the
measurement, including surrounding conditions
such as changes in temperature, humidity,
barometric pressure, etc.
- to avoid these errors:
(a) use air conditioning
(b) seal certain components inside the instrument
(c) use magnetic shields
• Observational errors
- e.g. an observer who tends to hold his head too far to
the left while reading the position of the needle on the scale.
3) Random error
- due to unknown causes; occurs when all systematic
errors have been accounted for
- an accumulation of small effects; must be considered when a high
degree of accuracy is required
- can be reduced by
(a) increasing the number of readings
(b) using statistical means to obtain the best
approximation of the true value
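Points (a) and (b) can be illustrated with simulated readings: averaging many readings pulls the result toward the true value as the random errors cancel. The true value and error range below are invented for the illustration.

```python
import random

# Reducing random error by taking many readings and averaging.
# The readings are simulated, not real measurement data.
random.seed(1)
true_value = 100.0  # assumed true value of the measured quantity

def take_readings(n):
    # each reading carries a random error of up to +/-0.5 units
    return [true_value + random.uniform(-0.5, 0.5) for _ in range(n)]

for n in (5, 500):
    mean = sum(take_readings(n)) / n
    print(f"{n:4d} readings: mean = {mean:.3f}")
```

The mean of many readings is the statistical best approximation of the true value; its scatter shrinks as the number of readings grows.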
Systematic Errors versus Random Errors
• Systematic Errors
– Instrumental errors: friction, zero positioning
– Environmental errors: temperature, humidity, pressure
– Observational errors
• Random Errors
Dynamic Characteristics
• Dynamic – measuring a varying process condition.
• Instruments rarely respond instantaneously to changes in the
measured variables due to such things as mass, thermal
capacitance, fluid capacitance or electrical capacitance.
• Pure delay in time is often encountered where the instrument
waits for some reaction to take place.
• Such industrial instruments are nearly always used for
measuring quantities that fluctuate with time.
• Therefore, the dynamic and transient behavior of the
instrument is important.
• The dynamic behavior of an instrument is determined
by subjecting its primary element (sensing element)
to known and predetermined variations in
the measured quantity.
• The three most common variations in the measured
quantity are:
– Step change
– Linear change
– Sinusoidal change
• Step change – the primary element is subjected to an
instantaneous and finite change in the measured variable.
• Linear change – the primary element follows a
measured variable changing linearly with time.
• Sinusoidal change – the primary element follows a
measured variable whose magnitude changes in
accordance with a sinusoidal function of constant amplitude.
• The dynamic performance characteristics of an
instrument are:
– Speed of response – the rapidity with which an instrument
responds to changes in the measured quantity.
– Dynamic error – the difference between the true and
measured values, with no static error.
– Lag – the delay in the response of an instrument to changes in
the measured variable.
– Fidelity – the degree to which an instrument indicates the
changes in the measured variable without dynamic error
(faithful reproduction).
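Speed of response, lag and dynamic error can all be seen in the step response of a first-order instrument model (e.g. a bare thermometer bulb). The 2-second time constant below is an assumed value for illustration.

```python
import math

# First-order model of an instrument's step response.
# The time constant is an assumed, illustrative value.
TAU = 2.0  # time constant in seconds (assumed)

def indicated(t, step=1.0):
    """Instrument output at time t after a unit step in the input."""
    return step * (1.0 - math.exp(-t / TAU))

for t in (0.0, TAU, 3 * TAU, 5 * TAU):
    dynamic_error = 1.0 - indicated(t)   # true value minus indicated value
    print(f"t = {t:4.1f} s: output = {indicated(t):.3f}, "
          f"dynamic error = {dynamic_error:.3f}")
```

After one time constant the instrument indicates about 63% of the step; the dynamic error shrinks toward zero as the output settles, and a smaller time constant means a faster speed of response.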
Standard
• A standard is a known, accurate measure of a physical quantity.
• Standards are used to determine the values of other physical
quantities by the comparison method.
• The international standards are preserved at the International
Bureau of Weights and Measures (BIPM) at Sèvres, near Paris.
• Four categories of standard:
– International Standard
– Primary Standard
– Secondary Standard
– Working Standard
• International Std
– Defined by international agreement
– Represents the closest possible accuracy attainable with current science and
technology
• Primary Std
– Maintained at the National Std Lab (different for every country)
– Function: the calibration and verification of secondary stds
– Each lab has its own secondary stds, which are periodically checked and
certified by the National Std Lab.
– For example, in Malaysia, this function is carried out by SIRIM.
• Secondary Standard
– Secondary standards are basic reference standards used by measurement and
calibration laboratories in industry.
– Each industry has its own secondary standard.
– Each laboratory periodically sends its secondary standard to the National
standards laboratory for calibration and comparison against the primary
standard.
– After comparison and calibration, the National Standards Laboratory returns
the secondary standard to the particular industrial laboratory with a
certification of its measuring accuracy in terms of a primary standard.
• Working Std
– Used to check and calibrate lab instruments for accuracy and performance.
– For example, manufacturers of electronic components such as capacitors,
resistors and many more use a working standard for checking
the values of the components being manufactured.
INSTRUMENT APPLICATION GUIDE
• Analog Multimeter
• Digital Multimeter
LECTURE REVIEW
• Define the terms accuracy, error, precision, resolution, expected value and
sensitivity.
• State the three major categories of error.
• A person using an ohmmeter reads the measured value as 470 Ω when
the actual value is 47 Ω. What kind of error does this represent?
• State the classifications of standards.
• What are primary standards? Where are they used?
• What is the difference between secondary standards and working
standards?
• State the three basic elements of an electronic instrument.