
Advanced Instrumentation

Advanced Instrumentation – “VII Electrical”

Class Notes

Ganesh Adhikari
[email protected]
[email protected]
Course contents:
1. Measurements and Calibration

2. Digital Instrumentation

3. Introduction to Biomedical Instrumentation

4. Introduction to Fiber-optic Instrumentation

5. Analytical & Testing Instrumentation

6. Microprocessor based Instrumentation

7. Microcontroller/Embedded system Instrumentation


References:
1. Mazidi M.A., The 8051 Microcontroller and Embedded Systems, Pearson Education
2. Cromwell L., Biomedical Instrumentation and Measurements, Pearson Education
3. Lopez-Higuera J.M., Optical Fibre Sensing Technology, John Wiley & Sons
4. Wolf S. and Smith R.F.M., Student Reference Manual for Electronic Instrumentation Laboratories, Prentice Hall
5. Doebelin E.O., Measurement Systems: Application and Design, McGraw-Hill
6. Prasad R., Electronic Measurement and Instrumentation, Khanna Publishers
Measurements and Calibration

What is Measurement?
Measurements and Calibration
Introduction

Measurement techniques have been of immense importance since the start of human civilization.

Measurements were first needed to regulate the transfer of goods in barter trade, in order to ensure that exchanges were fair.

The industrial revolution during the nineteenth century brought about a rapid development of new instruments and measurement techniques to satisfy the needs of industrialized production techniques.

The massive growth in the application of computers to industrial process control and monitoring tasks has greatly expanded the requirement for instruments to measure, record and control process variables.
Measurement Units

• The very first measurement units were those used in barter trade to quantify the amounts being exchanged and to establish clear rules about the relative values of different commodities.

• Such early systems of measurement were based on whatever was available as a measuring unit.

• An internationally agreed set of standard units has been defined, and strong efforts are being made to encourage the adoption of this system throughout the world.
Table: Definition of standard Units
Measurement system Design
In this section we will look at the main considerations in designing a measurement system. First, we will learn that a measurement system usually consists of several separate components, although only one component might be involved for some very simple measurement tasks.
Elements of Measurement System
• A measuring system exists to provide information about the physical value of some variable being measured.
• In simple cases, the system can consist of only a single unit that gives an output reading or signal according to the magnitude of the unknown variable applied to it.
• In more complex measurement situations, a measuring system consists of several separate elements, as shown in the figure below.
The first element in any measuring system is the primary sensor: this gives an output that is a function of the measurand (the input applied to it).
Variable conversion elements are needed where the output variable of the primary transducer is in an inconvenient form and has to be converted to a more convenient form. For instance, the displacement-measuring strain gauge has an output in the form of a varying resistance. The resistance change is converted into a voltage change by a bridge circuit, which is a typical example of a variable conversion element.
Signal processing elements exist to improve the quality of the output of a measurement system in some way. The electronic amplifier is a very common type of signal processing element. It is used to amplify low-amplitude outputs from the primary transducer or variable conversion elements, thus improving the sensitivity and resolution of the measurement.
Signal transmission is needed when the observation or application point of the output of a measurement system is some distance away from the site of the primary transducer.
The final optional element is the point of signal utilization, where the measured signal is used. In some cases, this element is omitted altogether because the measurement is used as part of an automatic control scheme, and the transmitted signal is fed directly to the control system.
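To make the element chain concrete, here is a minimal Python sketch of a measurement system built from the elements above, assuming a strain-gauge displacement sensor read through a bridge circuit and an amplifier; all numerical values and function names are illustrative assumptions, not figures from the notes.

```python
# Minimal sketch of a measurement chain: primary sensor -> variable conversion
# -> signal processing -> utilization point. Values are illustrative only.

def primary_sensor(displacement_mm):
    """Primary sensor: resistance (ohm) as a function of the measurand."""
    r_nominal, gauge_factor = 120.0, 0.5          # assumed values
    return r_nominal + gauge_factor * displacement_mm

def variable_conversion(resistance_ohm):
    """Variable conversion element: bridge circuit turns resistance into a small voltage."""
    r_nominal, excitation_v = 120.0, 5.0
    return excitation_v * (resistance_ohm - r_nominal) / (4 * r_nominal)

def signal_processing(bridge_v, gain=1000.0):
    """Signal processing element: amplifier improves sensitivity and resolution."""
    return gain * bridge_v

def measurement_chain(displacement_mm):
    """Whole chain, delivering the signal to the utilization point."""
    return signal_processing(variable_conversion(primary_sensor(displacement_mm)))

print(measurement_chain(2.0))   # amplified output voltage for a 2 mm displacement
```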
Choosing appropriate measuring instrument
• The starting point in choosing the most suitable instrument to use for measurement of a particular quantity in a manufacturing plant or other system is the specification of the instrument characteristics required, especially parameters like the desired measurement accuracy, resolution, sensitivity and dynamic performance.

• It is also essential to know the environmental conditions that the instrument will be subjected to, as some conditions will immediately eliminate the possibility of using certain types of instrument or will require expensive protection of the instrument.

• It should also be noted that protection reduces the performance of some instruments, especially in terms of their dynamic characteristics.
Static Characteristics of Instrument
• If we have a thermometer in a room and its reading shows a temperature of 20°C, then it does not really matter whether the true temperature of the room is 19.5°C or 20.5°C. Such small variations around 20°C are too small to affect whether we feel warm enough or not. Our bodies cannot discriminate between such close levels of temperature, and therefore a thermometer with an inaccuracy of ±0.5°C is perfectly adequate.
• If we had to measure the temperature of a certain chemical process, however, a variation of 0.5°C might have a significant effect on the rate of reaction or even the products of the process. A measurement inaccuracy much less than ±0.5°C is therefore clearly required.
Accuracy and Inaccuracy (Measurement Uncertainty)

• The accuracy of an instrument is a measure of how close the reading of the instrument is to the correct value.

• In practice, it is more usual to quote the inaccuracy or measurement uncertainty value rather than the accuracy value of an instrument.

• Inaccuracy or measurement uncertainty is the extent to which a reading might be wrong, and is often quoted as a percentage of the full-scale reading of an instrument.
Example
• A pressure gauge with a measurement range of 0-10 bar has a quoted inaccuracy of ±1.0% of full-scale reading.
• What is the maximum measurement error expected for this instrument?
• What is the likely measurement error expressed as a percentage of the output reading, if this pressure gauge is measuring a pressure of 1 bar?
Solution:
• The maximum error expected in any measurement reading is 1.0% of the full-scale reading, which is 10 bar for this particular instrument. Hence the maximum likely error is 1.0% × 10 bar = 0.1 bar.
• The maximum measurement error is a constant value related to the full-scale reading of the instrument, irrespective of the magnitude of the quantity that the instrument is actually measuring. In this case, as worked out above, the magnitude of the error is 0.1 bar. Thus, when measuring a pressure of 1 bar, the maximum possible error is 0.1 bar, which is 10% of the measured value.
• The above example carries a very important message.

• Since the maximum measurement error in an instrument is usually related to the full-scale reading of the instrument, measuring quantities that are substantially less than the full-scale reading means that the possible measurement error is amplified.

• For this reason, it is an important system design rule that instruments are chosen such that their range is appropriate to the spread of values being measured, so that the best possible accuracy is maintained in instrument readings.

• Clearly, if we are measuring pressures with expected values between 0 and 1 bar, we would not use an instrument with a measurement range of 0-10 bar (a short numerical sketch follows below).
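The following short Python sketch reproduces the arithmetic of the pressure-gauge example above and shows how the same ±0.1 bar error becomes a larger fraction of the reading as the measured value falls below full scale (the 5 bar case is an added illustrative point).

```python
# Hedged sketch of the worked example: a 0-10 bar gauge with +/-1% of
# full-scale inaccuracy. The absolute error is constant, so its relative
# size grows as the measured value falls below full scale.

def max_error(full_scale, pct_fs):
    """Maximum expected error, constant over the whole range."""
    return full_scale * pct_fs / 100.0

full_scale = 10.0                        # bar
err = max_error(full_scale, 1.0)         # 0.1 bar
for reading in (10.0, 5.0, 1.0):
    print(f"reading {reading} bar -> error {100 * err / reading:.0f}% of reading")
# 10 bar -> 1%, 5 bar -> 2%, 1 bar -> 10%
```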
Precision/Repeatability/Reproducibility
• Precision is a term that describes an instrument's degree of freedom from random errors. If a large number of readings are taken of the same quantity by a high-precision instrument, then the spread of readings will be very small.
• Precision is often, though incorrectly, confused with accuracy. High precision does not imply anything about measurement accuracy: a high-precision instrument may have low accuracy. Low-accuracy measurements from a high-precision instrument are normally caused by a bias in the measurement, which is removable by recalibration.
• Repeatability describes the closeness of output readings when the same input is applied repetitively over a short period of time, with the same measurement conditions, same instrument and observer, same location and same conditions of use maintained throughout.
• Reproducibility describes the closeness of output readings for the same input when there are changes in the method of measurement, observer, measuring instrument, location, conditions of use and time of measurement.
Tolerance
 Tolerance is a term that is closely related to accuracy and defines the maximum error that is to be expected in some value.

 Tolerance describes the maximum deviation of a manufactured component from some specified value.

Example
• A packet of resistors bought in an electronics shop gives the nominal resistance value as 1000 Ω and the manufacturing tolerance as ±5%. If one resistor is chosen at random from the packet, what are the minimum and maximum resistance values that this particular resistor is likely to have?
Solution
• The minimum likely value is 1000 Ω − 5% = 950 Ω. The maximum likely value is 1000 Ω + 5% = 1050 Ω.
Range or Span
The range or span of an instrument defines the minimum and
maximum value of a quantity that the instrument is designed to
measure.
Example
• A particular micrometer is designed to measure dimensions between 50 and 75 mm. What is the measurement range?
Solution:
The measurement range is simply the difference between the maximum and minimum measurements. Thus, in this case, the range is 75 − 50 = 25 mm.
Sensitivity of measurement
• The sensitivity of measurement is a measure of the change in instrument output that occurs when the quantity being measured changes by a given amount.

• Thus, sensitivity is the ratio:

Sensitivity = scale deflection / value of measurand producing the deflection

• Example: if an instrument's output changes from 307 Ω to 314 Ω as the measured temperature rises from 200°C to 230°C, then

Sensitivity = (314 – 307) / (230 – 200) = 7 / 30 ≈ 0.233 Ω/°C
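A minimal Python sketch of the sensitivity calculation above; interpreting the numbers as a resistance-versus-temperature characteristic is inferred only from the Ω/°C unit in the example.

```python
# Sensitivity = change in output / change in measured quantity,
# using the figures from the example above.

def sensitivity(output_change, input_change):
    """Sensitivity = scale deflection / value of measurand producing it."""
    return output_change / input_change

s = sensitivity(314 - 307, 230 - 200)
print(f"sensitivity = {s:.3f} ohm/degC")   # ~0.233 ohm/degC
```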
Calibration
 Every instrument has at least one input and one output.

 For a pressure sensor, the input would be some fluid pressure and the output would (most likely) be an electronic signal.

 For a variable-speed motor drive, the input would be an electronic signal and the output would be electric power to the motor.

 To calibrate an instrument means to check and adjust (if necessary) its response so that the output accurately corresponds to its input throughout a specified range.

 Calibration is the process of adjusting an instrument or equipment to meet the manufacturer's specifications.
Why calibrate an instrument?
• Virtually all equipment degrades in some fashion over time, and electronic equipment, a mainstay of today's manufacturing processes, is no exception.
• As components age, they lose stability and drift from their published specifications.
• Even normal handling can adversely affect calibration, and rough handling can throw a piece of equipment completely out of calibration even though it may appear physically intact.
• Continuing calibration assures that the equipment continues to meet the specifications required at installation; it should be checked frequently thereafter.
• Calibration is also required after any maintenance, to ensure that the equipment still conforms to the required calibration data.
Calibration and Ranging
• Calibration and ranging are two tasks associated with establishing an accurate correspondence between an instrument's input signal and its output signal.
• To calibrate an instrument means to check and adjust (if necessary) its response so the output accurately corresponds to its input throughout a specified range.
• In order to do this, one must expose the instrument to an actual stimulus of precisely known quantity.
For a pressure gauge, indicator or transmitter, this would mean subjecting the pressure instrument to known fluid pressures and comparing the instrument's response against those known pressure quantities. One cannot perform a true calibration without comparing the instrument's response to known physical stimuli.
• To range an instrument means to set the lower and upper range values so it responds with the desired sensitivity to changes in input.
• For example, a pressure transmitter set to a range of 0 to 200 PSI (0 PSI = 4 mA output; 200 PSI = 20 mA output) could be re-ranged to respond on a scale of 0 to 150 PSI (0 PSI = 4 mA; 150 PSI = 20 mA), as shown in the sketch below.
• In analog instruments, re-ranging could (usually) only be accomplished by recalibration, since the same adjustments were used to achieve both purposes.
• In digital instruments, calibration and ranging are typically separate adjustments (i.e. it is possible to re-range a digital transmitter without having to perform a complete recalibration), so it is important to understand the difference.
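The ranging example above can be expressed as a simple linear mapping. The sketch below is illustrative only and assumes an ideal 4-20 mA transmitter; re-ranging just changes the lower and upper range values.

```python
# Illustrative sketch of ranging for an ideal 4-20 mA pressure transmitter.
# Re-ranging only changes the lower/upper range values (lrv, urv).

def transmitter_output_ma(pressure_psi, lrv=0.0, urv=200.0):
    """Map a pressure to a 4-20 mA signal for the given range values."""
    span = urv - lrv
    return 4.0 + 16.0 * (pressure_psi - lrv) / span

print(transmitter_output_ma(100.0))                       # 12.0 mA on a 0-200 PSI range
print(transmitter_output_ma(100.0, lrv=0.0, urv=150.0))   # ~14.67 mA after re-ranging to 0-150 PSI
```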
Necessity for Calibration
Instrument calibration ensures a continuous workflow. Although there are several reasons for calibrating instruments, the following are important from the subject's point of view:

 With a new instrument
 When a specified time period or number of operating hours has elapsed
 Before and after a critical measurement
 As indicated by the manufacturer
 When an instrument has had a shock or vibration that potentially may have put it out of calibration
 After a sudden change in weather
 Whenever observations appear questionable
Principle of calibration
• Calibration consists of comparing the output of the instrument or sensor under test against the output of an instrument of known accuracy when the same input (the measured quantity) is applied to both instruments.
• This procedure is carried out for a range of inputs covering the whole measurement range of the instrument or sensor.
• Calibration ensures that the measuring accuracy of all instruments and sensors used in a measurement system is known over the whole measurement range, provided that the calibrated instruments and sensors are used in environmental conditions that are the same as those under which they were calibrated.
Types of Calibration:
There are two types of calibration methods
1. Direct comparison calibration method:
• In this method, a source applies a known input to the meter under test. Comparing what the meter indicates with the known source value gives the meter's error. In this case the meter is the UUT (unit under test), while the source is the standard instrument.
Standard instrument: source. Test instrument: meter to be calibrated.
2. Indirect comparison calibration method:
• In this method, the response of the UUT is compared with the response of a standard instrument of the same type; if the test instrument is a meter, then the standard instrument is also a meter.
Calibration Procedure:
The following procedure describes the process of calibrating different types of instruments. The simplest calibration procedure for an analog, linear instrument is the so-called zero-and-span method. The method is as follows:
• Apply the lower-range value stimulus to the instrument and wait for it to stabilize.
• Move the “zero” adjustment until the instrument registers accurately at this point.
• Apply the upper-range value stimulus to the instrument and wait for it to stabilize.
• Move the “span” adjustment until the instrument registers accurately at this point.
• Repeat steps 1 through 4 as necessary to achieve good accuracy at both ends of the range (a numerical sketch of this two-point correction is given at the end of this section).
An improvement over this crude procedure is to check the instrument's response at several points between the lower- and upper-range values.
The procedure for calibrating a “smart” digital transmitter, also known as digital trimming, is a bit different.
Unlike the zero and span adjustments of an analog instrument, the low and high trim functions of digital instruments are typically non-interactive.
That is, adjusting the high trim function has no effect on the low trim function and vice versa.
This process normally involves trimming the digital circuit of the analog-to-digital converter in the smart transmitter; that is, it is the sensor reading after the A/D conversion which is trimmed, not the sensor hardware.
• Sensor trim is used to correct the digital reading as seen on the device's local indicator (LCD) and received over digital communication.
• For instance, if the pressure is 0 bar but the transmitter reading shows 0.03 bar, then sensor trim is used to adjust it back to 0 bar.
Trimming the sensor of a smart instrument consists of these four general steps:
1. Apply the lower-range value stimulus to the instrument; wait for it to stabilize.
2. Execute the “low” sensor trim function.
3. Apply the upper-range value stimulus to the instrument; wait for it to stabilize.
4. Execute the “high” sensor trim function.
The calibration of inherently nonlinear instruments is much more challenging than for linear instruments. Two adjustments (zero and span) are no longer sufficient, because more than two points are necessary to define a curve. Examples of nonlinear instruments include expanded-scale electric meters, square-root characterizers and position-characterized control valves. Every nonlinear instrument will have its own recommended calibration procedure.
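As a rough illustration of the zero-and-span idea (and of a non-interactive correction of low and high points), the sketch below derives an offset and gain correction from readings taken at known low and high reference stimuli; the 0.03 bar and 10.08 bar readings are assumed example values, not data from the notes.

```python
# Minimal sketch of a two-point (zero-and-span) correction for a linear
# instrument: derive offset and gain from readings at known low and high
# reference stimuli so that corrected readings agree with the references.

def derive_zero_span(low_ref, low_reading, high_ref, high_reading):
    """Return (offset, gain) such that: corrected = gain * reading + offset."""
    gain = (high_ref - low_ref) / (high_reading - low_reading)
    offset = low_ref - gain * low_reading
    return offset, gain

def correct(reading, offset, gain):
    """Apply the zero (offset) and span (gain) corrections to a raw reading."""
    return gain * reading + offset

# Assumed example: reference pressures 0 and 10 bar read as 0.03 and 10.08 bar.
offset, gain = derive_zero_span(0.0, 0.03, 10.0, 10.08)
print(correct(0.03, offset, gain))   # ~0.0 bar after correction
print(correct(5.0, offset, gain))    # corrected mid-range reading
```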
Process Instrumentation
• Measurement and control is the brain and nervous system of any modern plant.
• Measurement and control systems monitor and regulate processes that otherwise would be difficult to operate efficiently and safely while meeting the requirements for high quality and low cost.
• Process instrumentation and control (also known as process measurement and control, process automation or just instrumentation) is needed in modern industrial processes for a business to remain profitable.
• It improves product quality, reduces plant emissions, minimizes human error and reduces operating costs, among many other benefits.
Environmental Instrumentation
Environmental issues such as climate change, pollution and air quality have a very high profile with both government and the public. Accurate measurements play a vital role in gauging the scale of anthropogenic and naturally driven effects, pollutant concentrations and the development of strategies to mitigate short- and long-term impacts.
 Mobile Remote Atmospheric Monitoring system
 Controlled Flow Air sampler
 Environmental Chamber for Gaseous Exposure Testing
 Auto System for calibration of Vehicle Emission Monitors
Power plant instrumentation

1. Temperature measurement instruments – eg. Thermocouple, RTD


2. Pressure measurement instruments – eg. Bourdon Tube, Pressure
Transducers
3. Flow measurement instruments – eg. Turbine flow meter,
Electromagnetic flow meter
4. Level measurement instruments – eg. Float type level sensor, Radar
level sensor
5. Electrical instrumentation - eg. Voltage & Current transformer,
Wattmeter
6. Boiler instrumentation – eg. Steam pressure transmitter, Fuel flow meter
7. Turbine instrumentation – eg. Speed sensor, Pressure & Flow sensor
8. Emission monitoring instrument – eg. Opacity monitors, Gas analyzer
9. Control & Automation system – eg. Distributed control system (DCS),
PLC
10. Safety instrumentation – eg. Pressure relief valve, Trip & Alarm system
etc.
Automobile Instrumentation

1. Speed related instruments – eg. Odometer, Tachometer


2. Fuel & energy management instruments – eg. Fuel Gauge, Battery
charge indicator
3. Engine & Performance monitoring instruments – eg. Engine
temperature gauge, Engine warning light
4. Electrical & Lighting instruments – eg. Dashboard illumination,
Headlight & Indicators lights
5. Driver assistance & Safety instrument – eg. Parking sensor, Reverse camera
display
6. Comfort & Entertainment instruments – eg. Climate control display,
Infotainment screen
7. Advanced driver assistance systems (ADAS) – eg. Heads up display (HUD),
Navigation display
8. Transmission & drive-train instruments – eg. Gear shift indicator, 4WD/AWD
Mode indicators
9. Autonomous driving & Monitoring systems (Modern Car) – eg. Autonomous
mode status, Traffic sign recognition
10. Miscellaneous Indicators – eg. Seat belt warning, Fuel cap warning etc.
Digital Instrumentation
Data Acquisition System
• Data acquisition is the process of sampling signals that measure real-world physical conditions and converting the resulting samples into digital numeric values that can be manipulated by a computer.
• A data acquisition system consists of many components that are integrated to:
• Sense physical variables (use of transducers)
• Condition the electrical signal to make it readable by an A/D board
• Convert the signal into a digital format acceptable by a computer
• Process, analyze, store and display the acquired data with the help of software
Components of DAS
Data Logger
• A data logger automatically makes a record of the readings of instruments located at different parts of a plant.
• Data loggers measure and record data effortlessly, as quickly, as often and as accurately as desired.
• These devices measure the electrical output from transducers, provide plant performance computations and logic analysis of alarm conditions, pass information (readings) to a computer for further processing, etc.
• They are therefore used in power generation plants, petrochemical installations, real-time processing plants, etc.
Characteristics of data loggers
• Modularity
• Reliability and ruggedness
• Accuracy
• Management tool
• Ease of use
Application of data logger
 Weather station recording, e.g. wind speed, wind direction, temperature, relative humidity
 Hydrographic recording, e.g. water level, depth, water flow, pH, conductivity
 Soil moisture level
 Gas pressure
 Environmental monitoring
Input scanner
It is an automatic sequencing switch which selects each signal in turn. Modern data loggers have input scanners which can scan at a rate of 150 inputs per second.
Characteristics of an input scanner may be:
 Low closed-circuit resistance
 High open-circuit resistance
 Low contact potential
 Negligible interaction between the switch, the outgoing signal and the input signal
 Short operating time
 Negligible contact bounce
 Long operating life
Signal amplifier and conditioner
Amplification for gain adjustment, i.e. a low-level signal is amplified up to a 5 V output.
Characteristics are:
 Precise and stable DC gain
 High SNR
 High CMRR
 Low DC drift
 Low output impedance
 High input impedance
 Good linearity
 Wide bandwidth
The conditioner is for scaling a linear transducer or correcting the curvature of a non-linear transducer, i.e. the signal is changed to a more linear form suitable for digital analysis.
Characteristics are:
 Linear scaling
 Correcting the curvature of non-linear transducers
 It may include sample-and-hold circuits
A/D converters
Converts analog samples into digital data.
Characteristics are
 Resolution
 Accuracy
 Conversion time
 Full scale output voltage
 Linearity
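As a small illustration of how the resolution characteristic listed above is usually expressed, the sketch below computes the step size (LSB) of an ideal ADC; the 12-bit, 5 V figures are assumptions for illustration only.

```python
# Sketch of ADC resolution: an ideal n-bit converter divides its full-scale
# range into 2**n steps; the LSB size is the smallest distinguishable change.

def adc_lsb_volts(full_scale_v, n_bits):
    """Smallest voltage step of an ideal n-bit ADC."""
    return full_scale_v / (2 ** n_bits)

def adc_code(v_in, full_scale_v, n_bits):
    """Ideal conversion of an input voltage to an output code."""
    code = int(v_in / adc_lsb_volts(full_scale_v, n_bits))
    return max(0, min(code, 2 ** n_bits - 1))       # clamp to valid codes

print(adc_lsb_volts(5.0, 12))        # ~1.22 mV per step for a 12-bit, 5 V ADC
print(adc_code(1.0, 5.0, 12))        # code produced by a 1 V input
```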
Recorder
The output from a data logger may be recorded on any of the following: typewriter, strip printer, digital tape recorder, punched tape, computer (hard drive), magnetic tapes, etc.
Characteristics are:
 Speed
 Memory
 Writing technique (serial/parallel)

Programmer
 Controls all units of data conversion and their parameters
 Microcontroller or microprocessor based system
 Basic units: mainframes, front panel assembly, power supply units, scanner controller, input
Operations performed by the programmer
 Set amplifier gain
 Set linearity factor
 Set high and low alarm values
 Start A/D conversion
 Record reading of a channel
 Identify channel and time of recording
 Display recording
 Reset logger
 A high-level scanner adds intelligence and integration, while a low-level scanner captures raw data and deals with it.
 A typical data logger unit provides 60 channels of data in a 20 × 40 × 60 cm box weighing about 20 kg. Most manufacturers offer local or remote add-on scanners to expand to about 100 channels.
 Scan rates are modest (1-20 channels per second).
 The signal processing capability is limited to simple functions such as (mx + b) scaling, time averaging of single channels, group averaging of several channels, and alarm signaling when preset limits are exceeded.
 Most units do allow interfacing to computers, where versatile processing is possible.
 This class of data logger utilizes a built-in microprocessor to control the interval of operation and to carry out calculations through a single amplifier and A/D converter, which is automatically ranged and gain-switched under program control.
 Multiplexers are available in both general-purpose (two wire) and low-level (two wires plus shield) versions.
 Millivolt-level signals, such as those from thermocouples, generally use the low-level version.
 Electro-mechanical reed switches are used frequently in such scanners, since speed requirements are modest but low noise is important.
 Since thermocouples are very common in data logger applications, reference-junction compensation and linearization options are always available.
 The microprocessor also stores the equations which curve-fit the thermocouple tables for each thermocouple type.
 The system amplifier and A/D converter are crucial elements for overall system accuracy.
 The microprocessor sets the amplifier gain to a proper value as each channel is sampled.
 The A/D converters are often of the dual-slope or voltage-to-frequency type, since the required speed is modest and these types give good noise rejection.
 Readout is obtained by means of a built-in digital indicator and two-color printers whose format is selected by front-panel programming.
Data Archiving and storage
Data archiving
Data archiving is the process of moving data that is no longer actively used to a separate data storage device for long-term retention, where it can still be readily accessed if required. Data archives consist of older data that is still important and necessary for future reference, as well as data that must be retained for regulatory compliance. Referential integrity should be maintained.
 Data archives are indexed and have search capabilities so that files and parts of files can be easily located and retrieved.
 Data archives are often confused with data backups, which are copies of data. Data backups are used to restore data in case it is corrupted or destroyed. In contrast, data archives protect older information that is not needed for everyday operations but may occasionally need to be accessed.
Data storage
Storage factors:
 Speed with which data can be accessed
 Cost per unit of data
 Reliability
−Data loss on power failure or system crash
Sample and Hold Circuit
Quantization:
• It is the process of converting an input function having continuous values to an output having only discrete values.
Binary coding:
• It is the method of assigning a binary equivalent number to each discrete level.
Sampling rate:
• The analog signal is continuous in time, and it is necessary to convert this to a flow of digital values.
• It is therefore required to define the rate at which new digital values are sampled from the analog signal.
• The rate of new values is called the sampling rate or sampling frequency of the converter.
• A continuously varying band-limited signal can be sampled, and the original signal can then be exactly reproduced from the discrete-time values by an interpolation formula.
• However, this faithful reproduction is only possible if the sampling rate is higher than twice the highest frequency of the signal.
• This is essentially what is embodied in the Shannon-Nyquist sampling theorem.
• Since a practical ADC cannot make an instantaneous conversion, the input value must necessarily be held constant during the time the converter performs a conversion (called the conversion time).
• An input circuit called a sample and hold performs this task, in most cases by using a capacitor to store the analog voltage at the input and using an electronic switch or gate to disconnect the capacitor from the input.
• Many ADC integrated circuits include the sample-and-hold subsystem internally.
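A hedged Python sketch tying the ideas above together: a band-limited signal is sampled above the Nyquist rate, held, and quantized to discrete levels. All frequencies, bit counts and voltages are illustrative assumptions.

```python
# Sketch of sampling, sample-and-hold and quantization for a band-limited
# signal. The sampling rate is chosen well above twice the highest signal
# frequency, as required by the Shannon-Nyquist theorem.

import math

SIGNAL_FREQ_HZ = 50.0          # highest frequency in the analog signal (assumed)
SAMPLE_RATE_HZ = 500.0         # > 2 * 50 Hz, so exact reconstruction is possible
N_BITS, FULL_SCALE_V = 8, 5.0  # assumed converter parameters

def analog_signal(t):
    """Band-limited test signal swinging between 0 and 5 V."""
    return 2.5 + 2.5 * math.sin(2 * math.pi * SIGNAL_FREQ_HZ * t)

def sample_and_hold(t):
    """Value held constant while the ADC performs its conversion."""
    return analog_signal(t)

def quantize(v):
    """Assign the nearest discrete level (binary coding of the held value)."""
    lsb = FULL_SCALE_V / (2 ** N_BITS)
    return min(int(v / lsb), 2 ** N_BITS - 1)

samples = [quantize(sample_and_hold(n / SAMPLE_RATE_HZ)) for n in range(10)]
print(samples)                 # digital codes for the first 10 sampling instants
```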
Introduction to Biomedical Instrumentation
 A biomedical instrumentation system can be viewed as an information-gathering system, although it sometimes also includes monitoring and control devices.
 It can generally be classified into two major types: clinical and research.
 Clinical instrumentation is basically devoted to the diagnosis, care and treatment of patients,
 whereas research instrumentation is used primarily in the search for new knowledge pertaining to the various systems that compose the human organism.
 The overall system, which includes both the human organism and the instrumentation required for measurement of the human, is called the man-instrument system.
 The concept of the man-instrument system applies to both clinical and research instrumentation.
 Measurements in which biomedical instrumentation is employed can be divided into two categories: in vivo and in vitro.
 An in vivo measurement is one that is made on or within
the living organism itself.
 An example would be a device inserted into the blood
stream to measure the pH of the blood directly.
 An in vitro measurement is one performed outside the
body, even though it relates to the functions of the body.
 An example of an in vitro measurement would be the
measurement of the pH of a sample of blood that has been
drawn from a patient.
Component of the man-instrument system
The basic components of the systems are essentially the same as in any
instrumentation system. The only real difference is in having a living human being as
the subject.
The system components are given below.
1. The Subject
• The subject is the human being on whom the measurements are made. It is the subject who makes this system different from other instrumentation systems.
2. Stimulus
• In many measurements, the response to some form of external stimulus is required. The
instrumentation used to generate and present this stimulus to the subject is a vital part of the
man-instrument system whenever responses are measured. The stimulus may be visual,
auditory or direct electrical stimulation of some part of the nervous system.
3. The transducer
• Transducer is defined as a device capable of converting one form of energy or signal to
another. In the man- instrument system, each transducer is used to produce an electric signal
that is an analog of the phenomenon being measured.
4. Signal conditioning equipment
The part of the instrumentation system that amplifies, modifies or in any other way changes the electric output of the transducer is called the signal conditioning equipment. The purpose of the signal conditioning equipment is to process the signal from the transducer in order to satisfy the functions of the system and to prepare signals suitable for operating the display or recording equipment.
5. Display equipment
• To be meaningful, the electrical output of the signal-conditioning equipment must
be converted into a form that can be perceived by one of man’s senses and that
can convey the information obtained by the measurement in a meaningful way. In
the man-instrumentation system, the display equipment may include a graphic
pen recorder that produces a permanent record of the data.
6. Recording, Data-processing and Transmission Equipment
• It is often necessary, or at least desirable, to record the measured information for
possible later use or to transmit it from one location to another.
7. Control Devices
• Where it is necessary or desirable to have automatic control of the stimulus, transducers, or any other part of the man-instrument system, a control system is incorporated. This system usually consists of a feedback loop in which part of the output from the signal-conditioning or display equipment is used to control the operation of the system in some way.
The major physiological systems of the body are:
 The biochemical systems
 The cardiovascular system
 The respiratory system
 The nervous system
Resting and Action potentials
 Bioelectric potentials are signals associated with nerve conduction, brain activity, heartbeat, muscle activity and so on.
 Bioelectric potentials are actually ionic voltages produced as a result of the electrochemical activity of certain special types of cells.
 Surrounding the cells of the body are the body fluids. These fluids are conductive solutions containing charged atoms known as ions. The principal ions are sodium (Na+), potassium (K+) and chloride (Cl-).
 When a cell is not sending a signal, it is "at rest". When the cell is at rest, the inside of the membrane is negative relative to the outside.
 Although the concentrations of the different ions attempt to balance out on both sides of the membrane, they cannot, because the cell membrane allows only some ions to pass through channels (ion channels).
 At rest, potassium ions (K+) can cross through the membrane easily. Also at rest, chloride ions (Cl-) and sodium ions (Na+) have a more difficult time crossing the membrane, while the negatively charged protein molecules inside the cell cannot cross it at all.
 Finally, when all these forces balance out, the difference in voltage between the inside and the outside of the neuron is the resting potential, and it is maintained until some kind of disturbance upsets the equilibrium.
 The resting membrane potential of a neuron ranges from about −60 to −100 mV. At rest, the membrane allows K+ ions and Cl- ions to diffuse along their concentration gradients.
 Hence, potassium ions move into the cell through the membrane to restore the neutrality of the cell.
 But as Na+ ions cannot move through the membrane, the concentration of Na+ ions outside much exceeds the K+ ion concentration within the cell.
 Hence, a potential develops across the membrane due to this imbalance of ions.
 Since the measurement of the membrane potential is generally made from inside the cell with respect to the body fluids, the resting potential of a cell is given as negative.
 When a section of the cell membrane is excited by the flow of ionic current or by some form of externally applied energy, the membrane changes its characteristics and begins to allow some of the Na+ ions to enter the cell.
 This movement of Na+ into the cell constitutes an ionic current flow that further reduces the barrier of the membrane to Na+.
 The net result is an avalanche effect in which Na+ literally rushes into the cell to try to reach a balance with the ions outside.
 At the same time, K+ ions, which were in higher concentration inside the cell during the resting state, try to leave the cell but are unable to move as rapidly as the Na+.
 As a result, the cell has a slightly positive potential on the inside due to the imbalance of K+.
 This potential is known as the action potential and is approximately +20 mV.
 A cell that has been excited and that displays an action potential is said to be depolarized; the process of changing from the resting state to the action potential is called depolarization.
 During depolarization of a cell, Na+ ions rush into the cell while K+ ions attempt to leave the depolarized cell during the action potential.
 By an active process, called a sodium pump, the Na+ ions are quickly transported to the outside of the cell, and the cell again becomes polarized and assumes its resting potential.
 This process is called repolarization.
 A typical action-potential waveform begins at the resting potential, depolarizes and returns to the resting potential after repolarization.
 The time scale for the action potential depends on the type of cell producing the potential.
 When a cell is excited and generates an action potential, ionic currents begin to flow.
 This process can in turn excite neighboring cells or adjacent areas of the same cell.
 The rate at which an action potential moves down a fiber or is propagated from cell to cell is called the propagation rate.
The bioelectric potentials
• To measure bioelectric potentials, a transducer capable of converting ionic potentials and currents into electric potentials and currents is required.
• Such a transducer consists of two electrodes, which measure the ionic potential difference between their respective points of application.
• Thus the bioelectric electrodes are simply electrical terminals or contact points from which voltages can be obtained at the surface of the body.
• Also, the purpose of the electrolyte paste or jelly often used in such measurements might be assumed to be only the reduction of skin impedance, in order to lower the overall input impedance of the system.
• Devices that convert ionic potentials into electronic potentials are called electrodes.
• In electrodes used for the measurement of bioelectric potentials, the electrode potential occurs at the interface of a metal and an electrolyte, whereas in biochemical transducers both membrane barriers and metal-electrolyte interfaces are used.
Bio-potential electrodes:
• A wide variety of electrodes can be used to measure bioelectric events, but
nearly all can be classified as belonging to one of three basic types:
Micro-electrode:
• Microelectrodes are electrodes with tips sufficiently small to penetrate a single
cell in order to obtain readings from within the cell.
• The tip must be small enough to permit penetration without damaging the cell.
• This action is usually complicated by the difficulty of accurately positioning an electrode with respect to the cell. Microelectrodes are generally of two types: metal and micropipette.
• A commercial type of micro-electrode is shown below.
Micro-Electrodes used to measure bioelectric potential near or within a single
cell.
Biochemical transducer
• An electrode potential is generated either at a metal-electrolyte interface or across a semipermeable membrane separating two different concentrations of ions that can diffuse through the membrane.
• Both methods are used in transducers designed to measure the concentration of an ion or of a certain gas dissolved in blood or some other liquid.
• It is impossible to have a single electrode interface to a solution; a second electrode is required to act as a reference.
• The usual method of measuring concentrations of ions or gases is to use one electrode that is sensitive to the substance or ion being measured and to choose, as the second or reference electrode, a type that is insensitive to that substance.
Reference electrodes
• Hydrogen electrodes can be used as reference electrode, as this
electrode is assigned a potential of zero volts.
• Hydrogen electrodes can be built and are available
commercially.
• Since measurement of electrochemical concentrations simply
requires a change of potential proportional to a change in
concentrations, the electrode potential of the reference
electrode can be any amount as long as it is stable and does not
respond to any possible changes in the composition of the
solutions being measured.
• Thus, the search for a good reference electrode is essentially a
search for the most stable electrode available.
• Two types of electrodes have interfaces sufficiently stable to serve as reference electrodes: the silver-silver chloride electrode and the calomel electrode.
pH electrode
 The most important indication of chemical balance in the body is the pH of the blood and other body fluids.
 The pH is directly related to the hydrogen ion concentration in a fluid.
 Specifically, it is the logarithm of the reciprocal of the H+ ion concentration. In equation form, pH = log10(1/[H+]) = −log10[H+].
• Because a thin glass membrane allows passage of only hydrogen ions in the form of H3O+, a glass electrode provides a “membrane” interface for hydrogen.
• This principle is illustrated in the figure.
• Inside the glass bulb is a highly acidic buffer solution.
• Measurement of the potential across the glass interface is achieved by placing a silver-silver chloride electrode in the solution inside the glass bulb and a calomel or silver chloride reference electrode in the solution in which the pH is being measured.
• In the measurement of pH, and in fact in any electrochemical measurement, each of the two electrodes required to obtain the measurement is called a half-cell.
• The type of glass used for the membrane has much to do with the pH response of the electrode.
• Special hygroscopic glass that readily absorbs water provides the best pH response.
Blood gas electrode
• Among the more important physiological chemical measurements are the partial pressures of oxygen and carbon dioxide in the blood.
• The partial pressure of a dissolved gas is the contribution of that
gas to the total pressure of all dissolved gases in the blood.
• The partial pressure of a gas is proportional to the quantity of
that gas in the blood.
• The effectiveness of both the respiratory and cardiovascular
system is reflected in these important parameters.
• The partial pressure of oxygen, Po2, often called oxygen tension, can be measured both in vitro and in vivo.
• A fine piece of platinum or some noble metal wire, embedded in
glass for insulation purposes, with only the tips exposed, is
placed in an electrolyte into which oxygen is allowed to diffuse.
 If a voltage of about 0.7V is applied between the platinum wire and a
reference electrode with the platinum wire negative, reduction of the
oxygen takes place at the platinum cathode.
 As a result, an oxidation-reduction current proportional to the partial
pressure of the diffused oxygen can be measured.
 The electrolyte is generally sealed into the chamber that holds the
platinum wire and the reference electrode by means of a membrane
across which the dissolved oxygen can diffuse from the blood.
 The measurement of the partial pressure of carbon dioxide, Pco2
makes use of the fact that there is a linear relationship between the
logarithm of the Pco2 and the pH of the solution.
 Since other factors also influence the pH, measurement of Pco2 is essentially accomplished by surrounding a pH electrode with a membrane selectively permeable to CO2.
Bio-potential amplifier
• The basic function of bio-potential amplifiers is to increase the amplitude of a weak signal of biological origin.
• They typically process voltages, but in some cases also process currents.
• Amplifiers adequate to measure these signals have to satisfy very specific requirements.
• They have to provide amplification selective to the physiological signal, reject superimposed noise and interference signals, and guarantee protection from damage through voltage and current surges for both the patient and the electronic equipment.
• Amplifiers featuring these specifications are known as bio-potential amplifiers.
Basic requirements and features of bio-potential amplifiers are (a sketch of the CMRR requirement follows below):
 High input impedance: greater than 10 MΩ
 Safety: offer protection of the patient from any hazard of electric shock
 Low output impedance: to drive any external load with minimum distortion
 High gain: greater than 1000
 High common-mode rejection ratio: to suppress the effect of noise at the inputs
 Proper isolation: provides the best possible separation of signals and interference
 Easy calibration and adjustment
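As a small illustration of the common-mode rejection requirement listed above, the sketch below computes CMRR in decibels from an assumed differential gain and common-mode gain; the numbers are illustrative, not specifications from the notes.

```python
# Sketch of the common-mode rejection ratio (CMRR) of a differential
# (bio-potential) amplifier: how much the wanted differential signal is
# favoured over common-mode interference such as power-line pickup.

import math

def cmrr_db(differential_gain, common_mode_gain):
    """CMRR in dB for the given differential and common-mode gains."""
    return 20 * math.log10(differential_gain / common_mode_gain)

print(cmrr_db(differential_gain=2000.0, common_mode_gain=0.02))   # 100 dB
```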
Blood pressure measurement (Invasive and Noninvasive)
• The heart's pumping cycle is divided into two major parts: systole and diastole.
• Systole is defined as the period of contraction of the heart muscles, at which time blood is pumped into the pulmonary artery and the aorta.
• Diastole is the period of dilation of the heart cavities as they fill with blood.
• The heart pumps about 5 liters of blood per minute.
• Systolic (maximum) blood pressure in the normal adult is in the range of 95 to 140 mm Hg, with 120 mm Hg being average.
• Normal diastolic blood pressure (the lowest pressure between beats) ranges from 60 to 90 mm Hg, with 80 mm Hg being about average.
• This pressure is usually measured in the brachial artery in the arm.
• The blood pressure can be measured in two ways: indirect measurement and direct measurement.
Indirect measurement (non-invasive measurement)
• In the indirect method, with the application of a cuff pressure on the upper arm between the systolic and diastolic pressures, sounds heard through a stethoscope are used to identify the systolic and diastolic pressures.
• This method provides information only about the systolic and diastolic pressures and gives no indication of the pressure variation with time, whereas the direct method provides continuous blood pressure monitoring.
• In the indirect method a device called a sphygmomanometer is used to measure the blood pressure.
• In this instrument an inflatable cuff is used, which can be inflated by a rubber bulb, and the pressure in the cuff is measured by a manometer.
• The inflatable cuff contains a rubber bladder and can be deflated slowly by a needle valve.
• The sphygmomanometer works on the principle that when the cuff is placed on the upper arm and inflated, arterial blood can flow past the cuff only when the arterial pressure exceeds the pressure in the cuff.
• When the cuff is inflated to a pressure that only partially occludes the brachial artery, turbulence is generated in the blood as it spurts through the tiny arterial opening during each systole.
• The sounds generated by this turbulence, the Korotkoff sounds, can be heard through a stethoscope.
• To obtain a blood pressure measurement with a sphygmomanometer and a stethoscope, the pressure cuff on the upper arm is first inflated to a pressure well above the systolic pressure.
• At this point no sounds can be heard through the stethoscope, which is placed over the brachial artery, for that artery has been collapsed by the pressure of the cuff.
• The pressure in the cuff is then gradually reduced.
• As soon as the cuff pressure falls below the systolic pressure, small amounts of blood spurt past the cuff and Korotkoff sounds begin to be heard through the stethoscope.
• The pressure of the cuff that is indicated on the manometer when the first Korotkoff sound is heard is recorded as the systolic blood pressure.
• As the pressure in the cuff continues to drop, the Korotkoff sounds continue until the cuff pressure is no longer sufficient to occlude the vessel during any part of the cycle.
• Below this pressure the Korotkoff sounds disappear, marking the diastolic blood pressure.
Direct measurement
• In direct measurement a surgical operation is needed to place a catheter within the blood vessel.
• The method of measurement is of two types:
• A sterile saline solution is introduced into the catheter so that the fluid pressure is transmitted to a transducer outside the body.
• A pressure transducer is placed within the catheter so that continuous recording of blood pressure may be achieved.
• The pressure transducer used in the catheter is of three types: capacitive, inductive and linear variable differential transformer (LVDT) type.
Note:
Catheter: a thin flexible tube inserted into the body to permit introduction or withdrawal of fluids or to keep a passageway open.
Cardiac output and heart sound measurement
• The blood flow at any point in the circulatory system is the volume of blood that passes that point during a unit of time.
• It is normally measured in milliliters per minute or liters per minute.
• Blood flow is highest in the pulmonary artery and the aorta, where these blood vessels leave the heart.
• The flow at these points, called the cardiac output, is between 3.5 and 5 liters/min in a normal adult at rest.
• From the cardiac output or the blood flow in a given vessel, a number of other characteristic variables can be calculated.
• The cardiac output divided by the number of heartbeats per minute gives the amount of blood that is ejected during each heartbeat, or the stroke volume.
• Blood flow is a function of the blood pressure and the flow resistance of the blood vessels, in the same way as electric current flow depends on voltage and resistance.
• The velocity of blood flowing through a vessel is not constant throughout the cross section of the vessel but is a function of the distance from the vessel wall.
A thin layer of blood actually adheres to the wall, resulting in zero velocity at this place, whereas the highest velocity occurs at the center of the vessel.

The resulting “velocity profile” is shown in the figure.

Some blood flow meters do not actually measure the blood flow but measure the mean velocity of the blood.
• The technique of listening to sounds produced by the organs and vessels of the body is called auscultation.
• In spite of its widespread use, however, auscultation is rather subjective, and the amount of information that can be obtained by listening to the sounds of the heart depends largely on the skill, experience and hearing ability of the physician.
• The heart sounds heard by the physician through the stethoscope actually occur at the time of closure of major valves in the heart.
• With each heartbeat, the normal heart produces two distinct sounds that are audible in the stethoscope, often described as “lub-dub”.
• The “lub” is caused by the closure of the atrioventricular valves, which permit flow of blood from the atria into the ventricles but prevent flow in the reverse direction.
• Normally, this is called the first heart sound, and it occurs just before ventricular systole.
• The “dub” part of the heart sounds is called the second heart sound and is caused by the closing of the semilunar valves, which release blood into the pulmonary and systemic circulation systems.
• These valves close at the end of systole, just before the atrioventricular valves reopen.
• This second heart sound occurs about the time of the end of the T wave of the electrocardiogram.
Electrocardiography (ECG)
• The electrocardiogram (ECG) is a graphical recording or display of the time-variant voltages produced by the myocardium during the cardiac cycle.
• It is, in effect, the measurement of the biopotentials generated by the muscles of the heart.
• The shape and polarity of each of its features vary with the location of the measuring electrodes with respect to the heart, and a cardiologist normally bases the diagnosis on readings taken from several electrode locations.
• The electrocardiogram is used clinically in diagnosing various diseases and conditions associated with the heart.
• To the clinician, the shape and duration of each feature of the ECG are significant.
• The waveform, however, depends greatly upon the lead configuration used.
Some normal values for the amplitudes and durations of important ECG features:
• For this diagnosis, a cardiologist would typically look first at the heart rate. The normal value lies in the range of 60 to 100 beats per minute.
• A slower rate than this is called bradycardia (slow heart) and a higher rate, tachycardia (fast heart).
• He would then see if the cycles are evenly spaced. If not, an arrhythmia may be indicated.
• If the P-R interval is greater than 0.2 second, it can suggest blockage of the AV node.
• If one or more of the basic features of the ECG is missing, a heart block of some sort might be indicated.
• The technique usually employed, not only in electrocardiography but also in the measurement of other bioelectric signals, is the use of a differential amplifier.
• To record an ECG, a number of electrodes, usually five, are affixed to the body of the patient.
• The electrodes are connected to the ECG machine by the same number of electrical wires.
• These wires, and in a more general sense the electrodes to which they are connected, are usually called leads.
ECG recording
The connecting wires for the patient electrodes originate at the end of a patient cable, the other end of which plugs into the ECG recorder.
The wires from the electrodes connect to the lead selector switch, which also incorporates the resistors.
• A pushbutton allows the insertion of a standard voltage of 1 mV to standardize or calibrate the recorder.
• Changing the setting of the lead selector switch introduces an artifact on the recorded trace.
• A special contact on the lead selector switch turns off the amplifier momentarily whenever this switch is moved and turns it on again after the artifact has passed.
• From the lead selector switch the ECG signal goes to a preamplifier, a differential amplifier with high common-mode rejection.
• The preamplifier also provides a switch to set the sensitivity or gain.
• By means of this adjustment, the sensitivity of the ECG recorder can be set so that the standardization voltage of 1 mV causes a pen deflection of 10 mm.
• The preamplifier is followed by a dc amplifier called the pen amplifier, which provides the power to drive the pen motor that records the actual ECG trace.
• A position control on the pen amplifier makes it possible to center the pen on the recording paper.
• The modern ECG recorder uses heat-sensitive paper, and the pen is actually an electrically heated stylus that can be actuated by a push button, allowing the operator to mark a coded indication of the lead being recorded at the margin of the electrocardiogram.
• Normally, electrocardiograms are recorded at a paper speed of 25 mm/s (a short sketch using this figure follows at the end of this section).
• The power switch of an ECG recorder has three positions. In the ON position the power to the amplifier is turned on, but the paper is not running. In order to start the paper drive, the switch must be placed in the RUN position.
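As a small worked illustration of the standard 25 mm/s paper speed mentioned above, the sketch below estimates heart rate from the distance between successive R waves on the recorded trace; the 20 mm spacing is an assumed example.

```python
# Heart rate from the ECG paper: at 25 mm/s, the distance between successive
# R waves converts directly into the R-R interval in seconds.

def heart_rate_bpm(rr_distance_mm, paper_speed_mm_per_s=25.0):
    """Estimate heart rate from the R-R distance measured on the paper."""
    rr_interval_s = rr_distance_mm / paper_speed_mm_per_s
    return 60.0 / rr_interval_s

rate = heart_rate_bpm(20.0)          # 20 mm between R waves -> 0.8 s -> 75 bpm
print(f"{rate:.0f} beats per minute ({'normal' if 60 <= rate <= 100 else 'abnormal'})")
```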
Electroencephalogram (EEG)
• The recorded representation of the bioelectric potentials generated by the neuronal activity of the brain is called the electroencephalogram (EEG).
• It is recorded by placing suitable surface electrodes on the scalp.
• The EEG has a very complex pattern, which is much more difficult to recognize than the ECG.
• Since clinical EEG measurements are obtained from electrodes placed on the surface of the scalp, these waveforms represent a very gross summation of potentials that originate from an extremely large number of neurons in the vicinity of the electrodes.

EEG potentials have random-appearing waveforms with peak-to-peak amplitudes ranging from less than 10 µV to over 100 µV.
• The required bandwidth for adequately handling the EEG signal is from below 1 Hz to over 100 Hz.
• EEG is recorded simultaneously from an array of many electrodes.
• Every channel has an individual very sensitive amplifier with differential input
and adjustable gain in a wide range.
• Its frequency response may be selected by using passive filters.
• The preamplifier used in EEG must have a high gain and low noise
characteristics because the EEG potentials are small in amplitude.
• In addition, the amplifier must have high common-mode rejection to
minimize stray interference signals from power lines and other electrical
equipment (a short numerical example follows this list).
• Placement of electrodes on the scalp is commonly dictated by the
requirement of the measurement to be made.
• A standard pattern called the 10-20 electrode placement system, is generally
used.
• The writing part of an EEG machine is usually a direct-writing ink recorder.
• Paper drive is provided by a synchronous motor.
• An accurate and stable paper drive mechanism is essential, and it is normal
practice to have several paper speeds available for selection.
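To make the common-mode rejection requirement mentioned above concrete, here is a minimal illustrative calculation; the gain figures are hypothetical and not taken from the notes.

#include <stdio.h>
#include <math.h>

/* Common-mode rejection ratio of a differential amplifier, in dB.
   The example gain figures are hypothetical. */
int main(void)
{
    double diff_gain = 10000.0;    /* differential gain Ad */
    double cm_gain   = 0.1;        /* common-mode gain Acm */
    double cmrr_db   = 20.0 * log10(diff_gain / cm_gain);
    printf("CMRR = %.0f dB\n", cmrr_db);   /* 100 dB for these figures */
    return 0;
}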
Electromyographic (EMG)
• The bioelectric potential associated with muscle activity constitutes the
electromyogram (EMG).
• These potentials may be measured at the surface of the body near the muscle of
interest or directly from the muscle by penetrating the skin with needle
electrodes.
• Since most EMG measurements are intended to obtain an indication of the
amount of activity of a given muscle, or group of muscles, rather than of an
individual muscle fiber, the pattern is usually a summation of the individual
action potentials from the fibers constituting the muscle or muscles being
measured.
• The EMG potentials from a muscle or group of muscles produce a noise like
waveform that varies in amplitude with the amount of muscular activity.
• Peak amplitudes vary from 50µV to about 1mV, depending on the location of
the measuring electrodes with respect to the muscle and the activity of the
muscle.
• A frequency response from about 10 Hz to well over 3000 Hz is required for
faithful reproduction.
 EMG is usually recorded by using surface electrodes or, more often,
needle electrodes inserted directly into the muscle.
 These electrodes pick up the potentials produced by the contracting
muscle fibers.
 The signal can be amplified and displayed on the CRT screen.
 It is also applied to an audio-frequency (AF) power amplifier connected to a
loudspeaker.
 A trained EMG interpreter can diagnose various muscular disorders by
listening to the sounds produced when the muscle potentials are
fed to the loudspeaker.
Pacemaker systems
• A device capable of generating artificial pacing impulses and
delivering them to the heart is known as pacemaker system
(commonly called a pacemaker) and consists of a pulse generator and
appropriate electrodes.
• Pacemakers are available in a variety of forms.
• Internal pacemakers may be permanently implanted in patients
whose SA node has failed to function properly or who suffer
from permanent heart block because of a heart attack.
• An internal pacemaker is defined as one in which the entire system is
inside the body.
• In contrast, an external pacemaker usually consists of an externally
worn pulse generator connected to electrodes located on or within
the myocardium.
• External pacemakers are used on patients with temporary heart
irregularities.
X-ray generation
X-rays are generated when fast moving electrons are suddenly
decelerated by impinging on a target.
An x-ray tube is basically a high-vacuum diode with a heated cathode
located opposite a target anode.
This diode is operated in the saturated mode with a fairly low
cathode temperature so that the current through the tube does not
depend on the applied anode voltage.
The target is usually made of tungsten, which has a high melting
point.
 The intensity of the X-rays depends on the current through the tube.
 This current can be varied by varying the heater current, which in
turn controls the cathode temperature.
 The wavelength of the X-rays depends on the target material and the
velocity of the electrons hitting the target.
 It can be varied by varying the target voltage of the tube.
 When the electrons strike the target, only a small part of their energy
is converted into X-rays; most of it is dissipated as heat.
X-ray Machine
 To obtain an X-ray image of a certain part of the body, the region to
be examined must be positioned between the x-ray tube and
the imaging device.
 The use of x-rays as a diagnostic tool is based on the fact that
various components of the body have different densities for
the rays.
 When x-rays from a point source penetrate a body section,
the internal structure of the body absorbs varying amounts of
the radiation.
 The radiation that leaves the body, therefore, has a spatial
intensity variation that is an image of the internal structure of
the body.
 When this intensity distribution is visualized by a suitable
device, a shadow image is generated that corresponds to the
internal structure of the body.
• Bones and foreign bodies, especially metallic ones, and air-filled cavities show up
well on these images because they have a much higher or much lower density than
the surrounding tissue.
• X-rays normally cannot be detected directly by the human senses; thus indirect
methods of visualization must be used to give an image of the intensity distribution
of the x-rays that have passed through the body of a patient.
• Three different techniques are commonly used: fluoroscopy, X-ray films and image
intensifiers.
CT Scanning
• Computed tomography (CT) is an imaging procedure that uses special
x-ray equipment to create detailed pictures or scans of areas inside
the body with greater definition and clarity than could ever be
attained by conventional methods.
• It is also called computerized tomography and computerized axial
tomography (CAT).
• Each picture created during a CT procedure shows the organs, bones
and other tissues in a thin “slice” of the body.
• The entire series of pictures produced in CT is like a loaf of sliced
bread.
• Computer programs are used to create both types of pictures.
• The cross-sectional images generated during a CT scan can be
reformatted in multiple planes and can even generate three
dimensional images.
• These images can be viewed on a computer monitor printed on film
or transferred to a CD or DVD.
Ultrasound imaging
• Ultrasound imaging (sonography) uses high-frequency sound waves to view
inside the body.
• Because ultrasound images are captured in real-time, they can also show
movement of the body’s internal organs as well as blood flowing through
the blood vessels.
• Unlike x-ray imaging, there is no ionizing radiation exposure associated with
ultrasound imaging.
• In an ultrasound exam, a transducer (probe) is placed directly on the skin or
inside a body opening.
• A thin layer of gel is applied to the skin so that the ultrasound waves are
transmitted from the transducer through the gel into the body.
• The ultrasound image is produced based on the reflection of the waves off
of the body structures.
• The strength (amplitude) of the sound signal and the time it takes for the
wave to travel through the body provide the information necessary to
produce an image.
• Ultrasound imaging is a medical tool that can help a physician evaluate,
diagnose and treat medical conditions.
• Such a system has a 64-element phased-array transducer at the
very front end.
• After the necessary analog preprocessing, the signals are digitized and
sent to the FPGAs.
• The system performs digital beam forming by properly delaying and
weighting the signals coming from all 64 channels, and then adding them
to get a coherent output (a short sketch of this step follows the list).
• The amplitude and phase modulation parameters are then extracted
to obtain the intensity value.
• Since the beam formed data is in polar format, a scan conversion is
required before displaying the pixels on the TV raster display.
• Also ultrasound images are greatly corrupted with noise, hence some
image filtering operations are also done.
• The image frames are transformed and coded before storing them in
the hard disk of the PC.
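The delay-and-sum step described in this list can be sketched in a few lines of C. This is only an illustrative model; the channel count matches the 64-element array mentioned above, but the buffer size, delays and weights are hypothetical, and the real system performs this step in FPGAs rather than in software.

#define CHANNELS 64

/* Delay-and-sum beamforming: each channel's samples are shifted by its
   focusing delay, weighted (apodized), and summed into one coherent output. */
double beamform_sample(const double samples[CHANNELS][1024],
                       const int delay[CHANNELS],
                       const double weight[CHANNELS],
                       int t)
{
    double sum = 0.0;
    for (int ch = 0; ch < CHANNELS; ch++) {
        int idx = t + delay[ch];                     /* apply this channel's focusing delay */
        if (idx >= 0 && idx < 1024)
            sum += weight[ch] * samples[ch][idx];    /* weight, then add coherently */
    }
    return sum;
}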
Magnetic Resonance imaging (MRI)
• Magnetic resonance imaging (MRI) is a medical imaging technique that
uses magnetism, radio waves and a computer to produce images of
body structures.
• A CT scanner uses ionizing radiation (x-rays) to acquire its images,
making it a good tool for dense tissue (bone) exams.
• MRI, on the other hand, uses radio-frequency signals to acquire its
images, and is best suited for soft tissue (especially useful in brain,
heart, muscle and cancer) exams.
• MRI is a non-invasive technique with excellent soft tissue contrast.
However, it is a slow process and relatively expensive.
• Magnetic imaging is based on the nuclear properties of hydrogen
atoms in the body.
• If RF pulses of the same frequency as the precessing nuclei are applied
at right angles to the main static magnetic field, the hydrogen nuclei in the
tissues get disturbed.
• They absorb energy and change their orientation with respect to the
main magnetic field.
• Now if the RF field is switched off, the hydrogen nuclei go back to their low-energy
state, emitting the energy they had received.
• All this process is known as nuclear magnetic resonance.
• The emitted energy can be detected digitized, amplified, encoded
and transformed by computer into cross-sectional images.
• The MRI images are accurate for visualization of tumors,
inflammatory and vascular abnormalities.
• The MRI scanner is a tube surrounded by a giant circular magnet.
• The patient is placed on the moveable bed that is inserted into the
magnet.
• The magnet creates a strong magnetic field that aligns the protons of
hydrogen atoms, which are then exposed to a beam of radio waves.
• This spins the various protons of the body, and they produce a faint
signal that is detected by the receiver portion of the MRI scanner.
• The receiver information is converted to digital signal by the ADC and
processed by a computer, and an image is produced.
Introduction to Fiber Optical Instrumentation
• An optical fiber is a dielectric waveguide that operates at optical
frequencies.
• This fiber waveguide is normally cylindrical in form.
• It confines electromagnetic energy in the form of light to within its
surfaces and guides the light in a direction parallel to its axis.
• Optical fibers are made of glass and are used to carry signals in the form
of pulses of light over distances up to 50 km without the need for
repeaters.
• These signals may be coded voice communications or computer data.
Measurements of Attenuation
Three basic methods are available for determining attenuation in a fiber:
 cutback technique
 insertion-loss method
 use of an OTDR
Cutback technique
 The cutback technique is a destructive method requiring access
to both ends of the fiber, as illustrated in the figure below.
 Measurements may be made at one or more specific wavelengths.
 To find the transmission loss, the optical power is first measured at
the output (or far end) of the fiber.
 Then, without disturbing the input conditions, the fiber is cut a
few meters from the source, and the output power at this near end is
measured.
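From those two power readings the fiber attenuation follows directly. The C sketch below shows the usual cutback calculation; the power values and the length of fiber removed are made-up numbers, not measurements from the notes.

#include <stdio.h>
#include <math.h>

/* Cutback method: attenuation per km from far-end and near-end power readings. */
int main(void)
{
    double p_far  = 25e-6;    /* optical power at the far end, watts (hypothetical) */
    double p_near = 100e-6;   /* optical power at the near (cut) end, watts */
    double length_km = 2.0;   /* length of fiber removed by the cut, km */

    double loss_db = 10.0 * log10(p_near / p_far);   /* total loss in dB */
    double alpha   = loss_db / length_km;            /* attenuation in dB per km */

    printf("Total loss = %.2f dB, attenuation = %.2f dB/km\n", loss_db, alpha);
    return 0;
}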
Insertion-loss method
• For cabling with connectors, one cannot use the cutback method.
• In this case, one commonly uses an insertion-loss technique.
• This is a non-destructive method and is less accurate than the cutback
method, but it is intended for field measurements to give the total attenuation
of a cable assembly in decibels.
• The basic setup is shown in the figure, where the launch and detector couplings
are made through connectors.
• The wavelength-tunable light source is coupled to a short length of
fiber that has the same basic characteristics as the fiber to be tested.
• To carry out the attenuation test, the connector of the short launching
fiber is attached to the connector of the receiving system and
the launch power P(λ) is recorded.
• The attenuation of the cable in decibels is then given by the expression below.
This attenuation is the sum of the loss of the cabled fiber and the connector
between the launch connector and the cable.
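The expression itself did not survive in these notes; the standard insertion-loss relation, comparing the recorded launch power with the power received through the cable assembly at the same wavelength, is

A(λ) = 10 log10 [ P_launch(λ) / P_received(λ) ]   (dB)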
Optical Time Domain Reflectometer (OTDR)
 The light scattered and reflected back along the fiber is collected by a
detector.
 Using a signal processing block, the signal is converted into a proper
form for display.
Fiber Refractive Index Profile Measurement
• The refractive index profile of the fiber core is an important parameter
that determines the characteristics of an optical fiber.
• Some main transmission properties of optical fibers, such as bandwidth,
dispersion and so on, depend on the design of the fiber’s refractive
index profile, which is the basis of the optical waveguide’s constitution.
• From the design of optical fibers to their fabrication, the refractive index profile
is a very important basic parameter, and often the practical fiber’s refractive
index profile is the key that determines whether or not the above-mentioned
transmission performance can achieve the expected goal.
• Therefore, the exact measurement of a finished fiber’s refractive index
profile is absolutely necessary.
Fiber Optic sensing (FOS)
• A fiber-optic sensor (or optical sensor) system consists of an optical source (laser, LED,
laser diode etc.), optical fiber, a sensing or modulator element transducing the
measurand to an optical signal, an optical detector and processing electronics
(oscilloscope, optical spectrum analyzer etc.).
• Fiber-optic sensors are often loosely grouped into two basic classes referred to as
intrinsic, or all-fiber, sensors and extrinsic, or hybrid, sensors.
• An intrinsic fiber-optic sensor has a sensing region within the fiber, and light never
goes out of the fiber.
• In extrinsic sensors, light has to leave the fiber and reach the sensing region
outside, and then comes back to the fiber.
The inherent advantages of fiber-optic sensors are:
 Since the optical fiber is a dielectric medium, it has harsh-environment capability:
immunity to strong EMI (electromagnetic interference), high temperature, chemical
corrosion, high pressure and high voltage.
 Very small size, passive and low power, chemically inert (no contamination, not
subject to corrosion etc.)
 Long-distance operation, and
 Multiplexed or distributed measurement.
The major disadvantages are its high cost and end-user unfamiliarity.
To date, the most highlighted application fields of fiber-optic sensors are in large
composite and concrete structures, the electrical power industry, medicine, chemical
sensing, and the gas and oil industry. A wide range of environmental parameters such
as position, vibration, strain, temperature, humidity, viscosity, chemicals, pressure,
current, electric field and several other environmental factors have been widely
monitored.
Transduction technique based on intensity modulation
 Optical intensity or optical amplitude modulated sensors are those in which the
intensity of an optical radiation is modulated (in the optical transducer) according
to the measurand value.
 As optical intensity changes are very easy to detect with a photodetector, this
kind of sensor is both technically simple and potentially low-cost.
 Intensity-modulated techniques can be used for temperature measurement.
 Based on the temperature dependence of blackbody radiation emission, the
optical power emitted by a blackbody placed at the end of a fiber can be measured
at the opposite end of the fiber at two fixed wavelengths; the blackbody
emission spectrum can thus be reconstructed and hence the corresponding
temperature can be calculated, as shown in the figure below (a numerical sketch of
this two-wavelength idea follows).
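A minimal numerical sketch of that two-wavelength idea, using Wien's approximation of the blackbody spectrum, is given below. The two wavelengths and the measured intensity ratio are illustrative assumptions, not values from the notes.

#include <stdio.h>
#include <math.h>

/* Two-colour (ratio) pyrometry under Wien's approximation:
   I(lambda,T) ~ lambda^-5 * exp(-C2 / (lambda * T)),  with C2 = h*c/k.
   From the measured ratio R = I(l1)/I(l2) the temperature follows as
   T = C2 * (1/l1 - 1/l2) / (5*ln(l2/l1) - ln(R)). */
int main(void)
{
    const double C2 = 1.4388e-2;      /* second radiation constant, m*K */
    double l1 = 0.9e-6, l2 = 1.1e-6;  /* the two fixed wavelengths, m (assumed) */
    double ratio = 0.5;               /* measured I(l1)/I(l2) (hypothetical) */

    double temperature = C2 * (1.0 / l1 - 1.0 / l2)
                       / (5.0 * log(l2 / l1) - log(ratio));
    printf("Estimated blackbody temperature: %.0f K\n", temperature);
    return 0;
}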
Encoding Based Position Sensing
Distributed optical sensing
 Continuous, spatially distributed sensing of the measurand is of great interest
for a very wide range of real applications.
 This is possible because of the intrinsic properties of distributed optical fiber sensors
(DOFS), in which the fiber acts simultaneously both as the optical channel and as the
distributed optical transducer.
 In a distributed sensor, the whole optical fiber is the sensor itself.
Why choose distributed sensing?
 Distributed sensing replaces complex integration of thousands of sensor with
one optical fiber system.
 Optical fiber is cheap, light, pliable, and immune to electromagnetic
interference (EMI), which makes it a cost-effective flexible and an inert sensor
medium.
 With distributed fiber sensing, one can measure the strain or temperature of
the test article at not just one location, or a few locations, but at hundreds of
locations with a single fiber optic sensor.
 This unique technology enables strain or temperature measurements at every
point along a simple optical fiber with gage lengths as small as few millimeters.
 Instead of needing two or three wires per sensing point, only one optical
fiber is required.
Optical Amplifier
 A relatively high-power beam of light from pump source is mixed with
the input signal.
 The input signal and the excitation light must be at significantly
different wavelengths.
 The mixed light is guided into a section of active medium with erbium
(one kind of element) ions included in the core.
 This high-powered light beam excites the erbium ions to their higher
energy state.
 When the photons from the pump light meet the excited erbium
atoms, the erbium atoms give up some of their energy to the signal
and return to their lower-energy state.
 A significant point is that the erbium gives up its energy in the form
of additional photons which are exactly in the same phase and
direction as the signal being amplified, so the signal is amplified
along its direction of travel only.
 Thus all of the additional signal power is guided in the same mode as
the incoming signal.
(Repeaters)
Assignments:
 X-Ray spectroscopy
 Mass spectroscopy
 Ionizing radiation spectroscopy
 Nuclear radiation for Instrument
 Scanning Tunnelling Microscopy
 Non Destructive Testing (NDT)

Note: Required to submit within one week’s time frame
Analytical and Testing Instrumentation
Spectroscopy
 Studying the properties of matter through its interaction with different frequency
components of the electromagnetic spectrum.
 With light, you aren’t looking directly at the molecule (the matter) but at its “ghost”:
you observe the light’s interaction with the different degrees of freedom of the
molecule.
 Each type of spectroscopy, using a different light frequency, gives a different picture:
the spectrum.
 If matter is exposed to electromagnetic radiation, the radiation can be absorbed,
transmitted, reflected, scattered, or it can undergo photoluminescence.
 This helps in studying the properties of matter.
 Absorption of a selective frequency results in the jumping of electrons from
the valence band to the conduction band.
 The energy required for jumping from the valence band to the conduction
band is supplied by the absorbed radiation, and the particular
frequency is determined by

E = hν

where E is the energy difference between the two bands, h is Planck’s constant
and ν is the frequency of the radiation.
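As a quick numerical illustration of E = hν (the 500 nm wavelength is an arbitrary example, not a value from the notes):

#include <stdio.h>

/* Photon energy E = h*nu = h*c/lambda for a 500 nm photon. */
int main(void)
{
    const double h = 6.626e-34;      /* Planck's constant, J*s */
    const double c = 2.998e8;        /* speed of light, m/s */
    double lambda = 500e-9;          /* wavelength, m (arbitrary example) */

    double nu = c / lambda;                   /* frequency, Hz */
    double energy_j = h * nu;                 /* energy in joules */
    double energy_ev = energy_j / 1.602e-19;  /* energy in electron-volts */

    printf("nu = %.3e Hz, E = %.3e J = %.2f eV\n", nu, energy_j, energy_ev);
    return 0;
}

This gives roughly 4.0e-19 J, or about 2.5 eV, for a visible photon.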
A molecule possesses three types of internal energy states:
 Electronic
 Vibrational
 Rotational
 Different spectroscopic techniques operate over different limited
frequency ranges, depending upon the processes involved and the magnitudes
of the energy changes.
Infrared (IR) Spectroscopy
 It is the absorption measurement of different IR frequencies by a sample
positioned in the path of an IR beam.
 The main goal is to determine the chemical functional groups in the sample, as
different functional groups absorb characteristic frequencies of IR radiation.
 Using a double-beam method, one beam is passed through the sample under
investigation and the other through a reference.
 An example of a source of IR radiation is a tungsten lamp.
 The purpose of the reference is to eliminate absorption caused by CO2,
temperature, humidity and water vapour in the air, or absorption from the bonds in the
solvent.
 IR radiation is absorbed by the sample when it has the same frequency
as any of the natural bond frequencies in the sample molecules.
 Other frequencies simply pass through the sample.
 Using an optical chopper, the sample and reference beams are
alternately focused on the detector.
 The sample cell contains a solution of the substance being tested,
usually very dilute.
 The solvent is chosen so that it does not absorb in the wavelength range
we are interested in; the reference cell contains the pure solvent.
UV-Visible Spectroscopy
• Wavelengths of UV light are shorter than the wavelengths of IR radiation.
• A UV spectrometer measures the amount of light absorbed at each wavelength of the
UV region.
• The absorption of UV light results in electronic transitions; electrons are promoted
from low-energy ground-state orbitals to higher-energy excited-state orbitals.
• It has the same basic design as an IR spectrometer (with the IR source replaced by a UV
source).
• Examples of sources of UV radiation are hydrogen and deuterium discharge
lamps.
• A diagram of the components of a typical spectrometer is shown in the following
diagram.
• A beam of light from a visible and/or UV light source is separated into its
component wavelengths by a prism or diffraction grating.
• Each monochromatic (single-wavelength) beam is in turn split; one beam passes
through a small transparent container (cuvette) containing a solution of the compound
being studied in a transparent solvent.
• The other beam, the reference, passes through an identical cuvette containing only
the solvent.
• The intensities of these light beams are then measured by electronic detectors and
compared.
X-ray Spectroscopy
 It is a method to investigate atomic local structure as well as
electronic states.
 X-rays interact with all the electrons in matter when their energy
exceeds the binding energy of the electrons.
 The X-rays excite or ionize the electrons to a higher unoccupied
electronic state.
 The study of this process is XAS (X-ray Absorption Spectroscopy).
 The other type of X-ray spectroscopy is fluorescence spectroscopy.
 X-ray spectroscopy is used for amount detection and identification of
the elements present in a sample.
Mass spectrometer (MS)
 The MS is a tool for directly determining the molecular weight of a
sample, the composition of compounds, and for investigating reaction
mechanisms.
 In order to measure the characteristics of individual molecules, the
MS converts them to ions, so that they can be moved about and
manipulated by external electric and magnetic fields.
 The general operation of a mass spectrometer is (a small illustration follows
this list):
 Create gas-phase ions
 Separate the ions in space or time based on their mass-to-charge
ratio
 Measure the quantity of ions of each mass-to-charge ratio.
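One common way of separating ions in time is the time-of-flight principle: an ion of charge ze accelerated through a potential V gains kinetic energy zeV, so lighter ions cross the drift region faster. The C sketch below only illustrates this separation step; the accelerating voltage, drift length and ion masses are assumed values, and the notes do not name a particular analyzer type.

#include <stdio.h>
#include <math.h>

/* Time-of-flight separation: t = L * sqrt(m / (2*z*e*V)).
   Heavier ions (larger m/z) take longer to cross the drift tube. */
int main(void)
{
    const double e = 1.602e-19;   /* elementary charge, C */
    const double u = 1.661e-27;   /* atomic mass unit, kg */
    double V = 2000.0;            /* accelerating voltage, volts (assumed) */
    double L = 1.0;               /* drift length, metres (assumed) */
    double masses_u[] = { 100.0, 200.0, 400.0 };  /* example ion masses in u */

    for (int i = 0; i < 3; i++) {
        double m = masses_u[i] * u;
        double t = L * sqrt(m / (2.0 * 1.0 * e * V));   /* singly charged, z = 1 */
        printf("m/z = %5.0f  ->  flight time = %.2f us\n", masses_u[i], t * 1e6);
    }
    return 0;
}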
Microprocessor Based Instrumentation
Microprocessor:
A microprocessor is a multipurpose, programmable, clock-driven, register-based
electronic device, fabricated using integrated-circuit technology ranging from SSI to VLSI,
that reads binary instructions from a storage device called memory, accepts
binary data as input, processes the data according to those instructions and
provides the result as output.
Instrumentation System:
An instrumentation system is defined as the assembly of various instruments and other
components interconnected to measure, analyze and control physical quantities
such as electrical, thermal, mechanical ones etc.
Microprocessor based Instrument System:
 Any instrumentation system centered on a microprocessor is known as
microprocessor based system.
 The logical and computational power of the microprocessor has extended the capabilities of
many basic instruments, improving accuracy and efficiency of use.
 The microprocessor is a versatile device for use in any instrumentation system. Examples
are ATMs, automatic washing machines, fuel control, ovens etc.
Open Loop and Close Loop Microprocessor Based System
Any instrumentation system can be controlled by a microprocessor in two ways: as an open-loop
control system or as a closed-loop control system.
Closed loop control system
ADC Interfacing
 In many applications, an analog device has to be interfaced to digital systems.
 But the digital device cannot accept the analog signal directly.
 So, the analog signals are converted to equivalent digital signal (data) using analog
to digital converter (ADC).
 The ADC0809 is an 8-bit successive approximation type ADC with an inbuilt 8-
channel multiplexer.
 A simple schematic for interfacing ADC 0809/ADC 0808 with 8085 microprocessor
is shown below.
 The ADC can be either memory-mapped or IO mapped in the system.
 Here, the ADC is IO-mapped in the system.
 The chip select signals for IO-mapped devices are generated by using a 3-to-8
decoder.
 The address lines A4, A5 and A6 are used as inputs to the decoder.
 The address line A7 and the control signal IO/M̅ are used as enables for the decoder.
 The decoder generates eight chip select signals (IOCS-0 to IOCS-7) and, of these, three
chip select signals are used for the ADC interface.
 The chip select signal IOCS-6 is used to give the Start of Conversion (SOC) signal to the ADC
along with a channel address.
 The chip select IOCS-5 is used to enable the tri-state buffer provided for interfacing
EOC with the data bus.
 The chip select signal IOCS-7 is inverted and used to enable the output buffer of
the ADC whenever the digital data has to be read from the ADC.
 The output clock signal of the 8085 microprocessor is divided by a suitable clock divider
circuit and used as the clock signal for the ADC.
 A separate voltage source has to be provided to give accurate reference voltage
levels.
 The End of Conversion (EOC) signal of the ADC is connected to the bus line D0 of the
system through a tri-state buffer, so that the processor can check for a valid EOC
before reading the output buffer of the ADC.
The working of the ADC 0809 with the 8085 is as follows:
 First the processor selects a channel by sending an address, and the SOC pulse is
asserted high and then low.
 Once the address of the channel and the SOC pulse are applied, the ADC will start converting
the signal at the selected channel.
 Then the processor keeps on polling the status of EOC to verify whether it is set to
one. (When the conversion is completed by the ADC 0809, the EOC is set to one.)
 When the processor finds a valid EOC, it will read the digital value from the output
buffer of the ADC (a short sketch of this sequence follows).
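The polling sequence above can be summarized in a C-style sketch. The port addresses and the out_port/in_port helpers are hypothetical stand-ins for the IO-mapped accesses described in the notes; on a real 8085 board this would be written with IN and OUT instructions.

#include <stdint.h>

/* Hypothetical IO helpers standing in for the 8085 OUT/IN instructions. */
extern void    out_port(uint8_t port, uint8_t value);
extern uint8_t in_port(uint8_t port);

#define ADC_SOC_PORT   0x60   /* IOCS-6: channel address + SOC (assumed address) */
#define ADC_EOC_PORT   0x50   /* IOCS-5: EOC status on bit D0 (assumed address) */
#define ADC_DATA_PORT  0x70   /* IOCS-7: ADC output buffer (assumed address) */

/* Read one conversion from the selected ADC channel (0..7) by polling EOC. */
uint8_t adc_read(uint8_t channel)
{
    /* Writing the channel number places the channel address on the bus; the IO
       write strobe to IOCS-6 forms the SOC pulse (asserted high, then low). */
    out_port(ADC_SOC_PORT, channel);

    /* Poll the EOC flag on data-bus line D0 until the conversion completes. */
    while ((in_port(ADC_EOC_PORT) & 0x01) == 0)
        ;                                   /* busy-wait */

    return in_port(ADC_DATA_PORT);          /* read the 8-bit digital result */
}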
Digital to Analog Conversion (DAC):
In many applications, the microprocessor has to produce analog signals for
controlling certain analog devices.
 Basically, the microprocessor system can produce only digital signals.
 In order to convert the digital signal to an analog signal, a Digital-to-Analog Converter
(DAC) has to be employed.
 The DAC 0800 can be interfaced to the 8085 system bus by using an 8-bit latch, and the
latch can be enabled by using one of the chip select signals generated for IO devices.
 A simple schematic for interfacing the DAC 0800 with the 8085 is shown in the figure below.
 In this schematic the DAC 0800 is interfaced to the system bus using an 8-bit latch, the
74LS273.
 The 3-to-8 decoder 74LS138 is used to generate chip select signals for IO devices.
 The address lines A4, A5 and A6 are used as inputs to the decoder.
 The address line A7 and the control signal IO/M̅ are used as enables for the decoder.
 The decoder will generate eight chip select signals, and of these the signal IOCS-7 is used
as the enable for the latch of the DAC.
 In order to convert a digital data word to an analog value, the processor has to load the data
into the latch.
 The latch will hold the previous data until the next data is loaded.
 The DAC will take a definite time to convert the data.
 The software should take care of loading successive data only after this conversion
time.
 The DAC 0800 produces a current output, which is converted to a voltage output
using an I-to-V converter (a short sketch of this sequence follows).
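A brief C-style sketch of that sequence is given below. The latch port address, the delay routine and the ramp pattern are illustrative assumptions only.

#include <stdint.h>

extern void out_port(uint8_t port, uint8_t value);   /* hypothetical OUT helper */
extern void delay_us(unsigned int microseconds);     /* hypothetical settle delay */

#define DAC_LATCH_PORT 0x70   /* IOCS-7 enables the 74LS273 latch (assumed address) */

/* Generate a slow ramp at the DAC output: each byte written to the latch is
   converted to a current by the DAC 0800 and then to a voltage by the I-to-V stage. */
void dac_ramp(void)
{
    for (unsigned int code = 0; code <= 255; code++) {
        out_port(DAC_LATCH_PORT, (uint8_t)code);  /* load the latch with new data */
        delay_us(10);   /* wait longer than the DAC settling time before the next value */
    }
}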
Keyboard Interface
 A common method of entering programs into a microcomputer is through a
keyboard, which consists of a set of switches.
 Basically, each switch will have two normally open metal contacts.
 These two contacts can be shorted by a metal plate supported by a spring.
 On pressing the key, the metal plate will short the contacts, and on releasing
the key the contacts will open again.
 The processor has to perform the following three major tasks to get
meaningful data from a keyboard:
 Sense a key actuation
 Debounce the key
 Decode the key
 The three major tasks mentioned above can be performed by
software when a keyboard is connected through ports to the 8085
processor.
 Consider a simple keyboard in which the keys are arranged in rows
and columns as shown in the figure below.
 The rows are connected to the port-A lines of the 8255 and the columns are
connected to the port-B lines of the same chip.
 The rows and columns are normally tied high.
 At the intersection of a row and a column, a key is placed such that
pressing the key will short the row and the column.
 A key actuation is sensed by sending a low to all the rows through port-A.
 Pressing a key will short the row and column to which it is connected.
 So, the column to which the key is connected will be pulled low.
 Therefore, the columns are read through port-B to see whether any of
the normally high columns are pulled low by a key actuation.
 If they are, then the rows can be checked individually to determine the row in
which the key is down.
 For checking each row, the scan code of the type shown in the table above is
output to port-A one by one.
 This process of sensing a key actuation is called key scanning.
• A key press has to be accepted only after debouncing.
• Normally, the key bounces for 10 to 30 milliseconds when it is pressed and
released.
• The bouncing time depends on the type of the key.
• When this bounce occurs, it may appear to the microcomputer that the same key
has been actuated several times instead of just once.
• This problem can be eliminated by scanning the row in which the pressed key was
detected again after 10 to 20 milliseconds and then verifying whether the same key is still
down.
• If it is, then the key actuation is valid. This process is called debouncing.
• After debouncing, the code for the key has to be generated.
• Each key can be individually identified by the port-A output value (row code) and the port-B
input value (column code).
• The next step is to translate the row and column code into a more popular code
such as hexadecimal or ASCII.
• This can easily be accomplished by a program (a short sketch follows).
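The scan-debounce-decode sequence can be outlined in C as follows. The 8255 port addresses, the 4 x 4 matrix size and the helper routines are assumptions made for illustration; the notes describe the general method rather than a specific program.

#include <stdint.h>

extern void    out_port(uint8_t port, uint8_t value);  /* hypothetical 8255 port-A write */
extern uint8_t in_port(uint8_t port);                   /* hypothetical 8255 port-B read  */
extern void    delay_ms(unsigned int ms);               /* hypothetical delay routine     */

#define PORT_A 0x00   /* rows (outputs), assumed address */
#define PORT_B 0x01   /* columns (inputs), assumed address */
#define ROWS   4
#define COLS   4

/* Scan a 4x4 matrix keyboard; return row*COLS+col of a debounced key, or -1. */
int scan_keyboard(void)
{
    for (int row = 0; row < ROWS; row++) {
        uint8_t scan = (uint8_t)~(1u << row);   /* drive only this row low */
        out_port(PORT_A, scan);

        uint8_t cols = in_port(PORT_B);
        for (int col = 0; col < COLS; col++) {
            if (!(cols & (1u << col))) {         /* a column pulled low: key pressed */
                delay_ms(20);                    /* wait out the contact bounce */
                if (!(in_port(PORT_B) & (1u << col)))   /* still down -> valid key */
                    return row * COLS + col;     /* row/column code to be translated */
            }
        }
    }
    return -1;   /* no key pressed */
}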
Interfacing 7-segment display
• The 7-segment LED is the most popular display device used for single-board
microcomputers.
• Each 7-segment LED will have seven Light Emitting Diodes (LEDs) arranged in the
form of small rectangular segments and another LED as a dot point in a single
package.
• In common cathode type, all the cathode terminals of LEDs are internally shorted
and one/two pins are provided for external connections.
• The anodes of the LEDs are terminated on separate pins for external connections.
• The pin configuration and the internal connection of common cathode 7-segment
LED is shown in figure below.
• The display codes for the LEDs can be generated by using the BCD-to-7-segment
decoder IC 7447.
• When a BCD code is sent to the input of the 7447, it outputs lows on the segments needed to
display the number represented by the BCD code.
• A simple schematic to interface a common-anode 7-segment LED to an 8085 system
using a port device is shown in the figure below; a segment-code sketch follows.
• This circuit connection is referred to as a static display, because current is
passed through the display at all times.
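When the segment pattern is supplied directly from an output port instead of through a 7447 decoder, a lookup table such as the one below can generate the display codes in software. The patterns assume a common-anode display with segments a through g on data bits 0 to 6 (active low); the exact values depend on the wiring, so treat them as illustrative.

#include <stdint.h>

extern void out_port(uint8_t port, uint8_t value);   /* hypothetical port write */

#define DISPLAY_PORT 0x40   /* port driving the segment lines (assumed address) */

/* Active-low segment patterns for digits 0-9 on a common-anode display,
   bit0 = segment a ... bit6 = segment g (assumed wiring). */
static const uint8_t seg_code[10] = {
    0xC0, 0xF9, 0xA4, 0xB0, 0x99,   /* 0 1 2 3 4 */
    0x92, 0x82, 0xF8, 0x80, 0x90    /* 5 6 7 8 9 */
};

/* Show one decimal digit on the static 7-segment display. */
void display_digit(uint8_t digit)
{
    if (digit < 10)
        out_port(DISPLAY_PORT, seg_code[digit]);
}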
Classification of Embedded system
 Small-scale embedded systems
 Medium-scale embedded systems
 Sophisticated embedded systems
Design Challenges in Embedded system
IC Technology
 IC technology involves the manner in which we map a digital (gate-level)
implementation onto an IC.
 An IC, often called a chip, is a semiconductor device consisting of a set of
connected transistors and other devices.
 A number of different processes exist to build semiconductors; the most popular of
them is Complementary Metal Oxide Semiconductor (CMOS).
 IC technologies differ by how customized the IC is for a particular design.
 IC technology is independent of processor technology; any type of processor can
be mapped to any type of IC technology, as shown in the figure below.
 The bottom layers form the transistors.
 The middle layers form logic components.
 The top layers connect these components with wires.
 One way to create these layers is by depositing photo-sensitive chemicals on the
chip surface and then shining light through masks to change regions of the
chemicals.
 Thus, the task of building the layers is actually one of designing appropriate masks.
 A set of masks is often called a layout.
 The narrowest line that we can create on a chip is called the feature size, which
today is well below one micrometer.
 For each IC technology, all layers must eventually be built to get a working IC; the
question is who builds each layer and when.
Interfacing Keyboard with 8051
7 Segment Display interfacing with 8051