Lecture Notes On Laboratory Instrumentation and Techniques: Mathew Folaranmi Olaniyan
DEDICATION
This book is dedicated to Almighty God and my children (Olamide, Ajibola
and Oluwatobi)
PREFACE
This book is written out of the author’s several years of professional and
academic experience in Medical Laboratory Science.
The textbook is well-planned to extensively cover the working principle
and uses of laboratory instruments. Common Laboratory techniques
(including principle and applications) are also discussed. Descriptive
diagrams/schematics for better understanding are included.
Teachers and students pursuing courses in different areas of Laboratory
Science, Basic and medical/health sciences at undergraduate and
postgraduate levels will find the book useful. Researchers and interested
readers will also find the book educative and interesting.
TABLE OF CONTENTS
TITLE PAGE………………………………………………………………………………… i
DEDICATION………………………………………………………………………………. ii
PREFACE…………………………………………………………………………………… iii
TABLE OF CONTENTS…………………………………………………………………….. iv
CHAPTER ONE
BASIC CONCEPTS……………………………………………………………………….. 1
CHAPTER TWO
AUTOCLAVE………………………………………………………………………………….. 3
CHAPTER THREE
CENTRIFUGES……………………………………………………………………………. 10
CHAPTER FOUR
WEIGHING BALANCE…………………………………………………………………... 14
CHAPTER FIVE
LABORATORY WATERBATHS……………………………………………………… 20
CHAPTER SIX
ANAEROBIC JARS………………………………………………………………………… 23
CHAPTER SEVEN
MICROSCOPE……………………………………………………………………………… 28
CHAPTER EIGHT
SPECTROPHOTOMETER
CHAPTER NINE
FLAME PHOTOMETERS
CHAPTER TEN
ION-SELECTIVE ELECTRODES
CHAPTER ELEVEN
HOT AIR/BOX OVEN…………………………………………………………………… 79
CHAPTER TWELVE
ELISA READER…………………………………………………………………………… 83
CHAPTER THIRTEEN
REFRIGERATOR…………………………………………………………………………… 88
CHAPTER FOURTEEN
LABORATORY MIXER…………………………………………………….…………… 100
CHAPTER FIFTEEN
POLYMERASE CHAIN REACTION (PCR) MACHINE………………..………. 101
CHAPTER SIXTEEN
LABORATORY INCUBATOR…………………………………………………………. 105
CHAPTER SEVENTEEN
CHAPTER TWO
AUTOCLAVE
Achieving high and even moisture content in the steam-air environment is
important for effective autoclaving. The ability of air to carry heat is
directly related to the amount of moisture present in the air. The more
moisture present, the more heat can be carried, so steam is one of the most
effective carriers of heat. Steam therefore also results in the efficient killing
of cells, and the coagulation of proteins. When you cook beef at home, for
example, it can become tough when roasted in a covered pan in the oven.
But just add a little water in the bottom of the pan, and you will find that
the meat will be tender! The temperature is the same and the time of
roasting is the same, but the result is different. Now (as in an autoclave)
add another parameter, pressure. By putting this same roast in a pressure
cooker you can reduce the time it takes to cook this roast by at least three
quarters, and you still get just as tender a finished product.
How does killing occur? Moist heat is thought to kill microorganisms by
causing coagulation of essential proteins. Another way to explain this is
that when heat is used as a sterilizing agent, the vibratory motion of every
molecule of a microorganism is increased to levels that induce the cleavage
of intramolecular hydrogen bonds between proteins. Death is therefore
caused by an accumulation of irreversible damage to all metabolic
functions of the organism.
Death rate is directly proportional to the concentration of microorganisms
at any given time. The time required to kill a known population of
microorganisms in a specific suspension at a particular temperature is
referred to as thermal death time (TDT). All autoclaves operate on a
time/temperature relationship; increasing the temperature decreases TDT,
and lowering the temperature increases TDT.
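Because the text above treats killing as a first-order process (death rate proportional to the surviving population), survival under moist heat can be sketched as exponential decay. The short Python sketch below illustrates this using a decimal reduction time (D-value, the minutes needed for a 10-fold drop at a fixed temperature); the spore load and D-value are hypothetical, not taken from the text.

```python
import math

def survivors(n0: float, d_value: float, minutes: float) -> float:
    """Survivors after `minutes` at the temperature for which the
    D-value (minutes per 10-fold reduction) applies."""
    return n0 * 10 ** (-minutes / d_value)

n0 = 1e6    # initial spore load (hypothetical)
d121 = 1.5  # D-value at 121 degrees C, in minutes (hypothetical)

for t in (0, 5, 10, 15):
    print(f"t = {t:>2} min -> {survivors(n0, d121, t):10.3g} survivors")

# A rough thermal-death-time estimate: time for the population to fall
# below one survivor at this temperature.
print(f"approximate TDT: {d121 * math.log10(n0):.1f} min")
```

Raising the temperature shortens the D-value, which is the time/temperature trade-off described above.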
What is the standard temperature and pressure of an autoclave? Processes
conducted at high temperatures for short time periods are preferred over
lower temperatures for longer times. Some standard
temperatures/pressures employed are 115 °C/10 p.s.i., 121 °C/15 p.s.i., and
132 °C/27 p.s.i. (p.s.i. = pounds per square inch). In our university autoclave,
autoclaving generally involves heating in saturated steam under a pressure
of approximately 15 psi, to achieve a chamber temperature of at least 121 °C
(250 °F), but in other applications in industry, for example, other
combinations of time and temperature are sometimes used.
Please note that after loading and starting the autoclave, the processing
time is measured after the autoclave reaches normal operating conditions
of 121 °C (250 °F) and 15 psi pressure, NOT simply from the time you push
the "on" button.
How does the autoclave itself work? Basically, steam enters the chamber
jacket, passes through an operating valve and enters the rear of the
chamber behind a baffle plate. It flows forward and down through the
chamber and the load, exiting at the front bottom. A pressure regulator
maintains jacket and chamber pressure at a minimum of 15 psi, the
pressure required for steam to reach 121 °C (250 °F). Overpressure
protection is provided by a safety valve. The conditions inside are
thermostatically controlled so that heat (more steam) is applied until 121 °C
is achieved, at which time the timer starts, and the temperature is
maintained for the selected time.
Mode of operation
◾ Ensure steam is in direct contact with the material being sterilized (i.e.
loading of items is very important).
◾ Create a vacuum in order to displace all the air initially present in the
autoclave and replace it with steam.
Test for the efficacy of an Autoclave(quality assurance)
There are physical, chemical, and biological indicators that can be used to
ensure that an autoclave reaches the correct temperature for the correct
amount of time. If a non-treated or improperly treated item can be
confused for a treated item, then there is the risk that they will become
mixed up, which, in some areas such as surgery, is critical.
Chemical indicators on medical packaging and autoclave tape change color
once the correct conditions have been met, indicating that the object inside
the package, or under the tape, has been appropriately processed.
Autoclave tape is only a marker that steam and heat have activated the dye.
The marker on the tape does not indicate complete sterility. A more
difficult challenge device, named the Bowie-Dick device after its inventors,
is also used to verify a full cycle. This contains a full sheet of chemical
indicator placed in the center of a stack of paper. It is designed specifically
to prove that the process achieved full temperature and time required for a
normal minimum cycle of 274 °F (about 134 °C) for 3.5–4 minutes.
To prove sterility, biological indicators are used. Biological indicators
contain spores of a heat-resistant bacterium, Geobacillus
stearothermophilus. If the autoclave does not reach the right temperature,
the spores will germinate when incubated and their metabolism will
change the color of a pH-sensitive chemical. Some physical indicators
consist of an alloy designed to melt only after being subjected to a given
temperature for the relevant holding time. If the alloy melts, the change will
be visible.
Some computer-controlled autoclaves use an F0 ("F-nought") value to
control the sterilization cycle. The F0 value expresses the cycle's lethality as
the equivalent number of minutes of sterilization at 121 °C (250 °F) and
100 kPa (15 psi) above atmospheric pressure. Since exact temperature control is
difficult, the temperature is monitored, and the sterilization time adjusted
accordingly.
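As a rough illustration of the F0 concept, the sketch below accumulates equivalent minutes at 121 °C from a monitored temperature trace, assuming the commonly used form F0 = Σ Δt × 10^((T − 121)/z) with z = 10 °C. The temperature trace is invented for illustration.

```python
def f0(temps_c, dt_min=1.0, t_ref=121.0, z=10.0):
    """Accumulated lethality: equivalent minutes at 121 degrees C."""
    return sum(dt_min * 10 ** ((t - t_ref) / z) for t in temps_c)

# One reading per minute: heat-up, hold near 121 C, cool-down (hypothetical).
trace = [100, 110, 118] + [121] * 15 + [115, 105]
print(f"F0 = {f0(trace):.1f} equivalent minutes at 121 C")
```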
Application of autoclave
Sterilization autoclaves are widely used in microbiology, medicine,
podiatry, tattooing, body piercing, veterinary science, mycology, funeral
homes, dentistry, and prosthetics fabrication. They vary in size and
function depending on the media to be sterilized.
Typical loads include laboratory glassware, other equipment and waste,
surgical instruments, and medical waste. A notable recent and increasingly
popular application of autoclaves is the pre-disposal treatment and
sterilization of waste material, such as pathogenic hospital waste. Machines
in this category largely operate under the same principles as conventional
autoclaves in that they are able to neutralize potentially infectious agents
by utilizing pressurized steam and superheated water. A new generation of
waste converters is capable of achieving the same effect without a pressure
vessel to sterilize culture media, rubber material, gowns, dressing, gloves,
etc. It is particularly useful for materials which cannot withstand the higher
temperature of a hot air oven.
Autoclaves are also widely used to cure composites and in the
vulcanization of rubber. The high heat and pressure that autoclaves allow
help to ensure that the best possible physical properties are repeatably
attainable. The aerospace industry and sparmakers (for sailboats in
particular) have autoclaves well over 50 feet (15 m) long, some over 10 feet
(3.0 m) wide.
Other types of autoclave are used to grow crystals under high temperatures
and pressures. Synthetic quartz crystals used in the electronic industry are
grown in autoclaves. Packing of parachutes for specialist applications may
be performed under vacuum in an autoclave which allows the parachute to
be warmed and inserted into the minimum volume.
CHAPTER THREE
CENTRIFUGES
A laboratory centrifuge is a piece of laboratory equipment, driven by a
motor, which spins liquid samples at high speed. There are various types of
centrifuges, depending on the size and the sample capacity. Like all other
centrifuges, laboratory centrifuges work by the sedimentation principle,
where the centripetal acceleration is used to separate substances of greater
and lesser density.
A centrifuge is a device for separating two or more substances from each
other by using centrifugal force. Centrifugal force is the tendency of an
object traveling around a central point to continue in a linear motion and
fly away from that central point.
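In practice the separating force is usually reported as relative centrifugal force (RCF) in multiples of g. A minimal sketch, using the widely quoted conversion RCF = 1.118 × 10^-5 × r × rpm² with the rotor radius r in centimetres (the radius and speeds below are hypothetical):

```python
def rcf(rpm: float, radius_cm: float) -> float:
    """Relative centrifugal force in multiples of g."""
    return 1.118e-5 * radius_cm * rpm ** 2

radius = 8.5  # cm, hypothetical rotor radius
for rpm in (1000, 3000, 12000):
    print(f"{rpm:>6} rpm -> {rcf(rpm, radius):8.0f} x g")
```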
Care and Maintenance of centrifuges
Mechanical stress
Always ensure that loads are evenly balanced before a run.
Always observe the manufacturer's maximum speed and sample density ratings.
Always observe speed reductions when running high-density solutions,
plastic adapters, or stainless steel tubes.
Many rotors are made from titanium or aluminum alloy, chosen for their
advantageous mechanical properties. While titanium alloys are quite
corrosion-resistant, aluminum alloys are not. When corrosion occurs, the
metal is weakened and less able to bear the stress from the centrifugal
force exerted during operation. The combination of stress and corrosion
causes the rotor to fail more quickly and at lower stress levels than an
uncorroded rotor.
[Figures: 1 - a tabletop micro laboratory centrifuge; 2 - a laboratory
macro/bench centrifuge; 4 - an Eppendorf laboratory centrifuge]
CHAPTER FOUR
WEIGHING BALANCE
Balances are designed to meet the specific weighing requirements of the
laboratory working environment. They come in precision designs, with
operating characteristics that allow quick and accurate measurements.
Many models can also transfer data to a computer for further analysis, and
some provide piece-count and hopper functions. Used for precision
weighing applications in laboratories, they offer excellent value for the
money invested, and are available in both standard and custom-tuned
specifications. The range includes analytical balances, general-purpose
electronic balances, laboratory balances and precision weighing balances.
The history of balances and scales dates back to Ancient Egypt. A simplistic
equal-arm balance on a fulcrum that compared two masses was the
standard. Today, scales are much more complicated and have a multitude
of uses. Applications range from laboratory weighing of chemicals to
weighing of packages for shipping purposes.
To fully understand how balances and scales operate, there must be an
understanding of the difference between mass and weight.
Mass is a constant unit of the amount of matter an object possesses. It stays
the same no matter where the measurement is taken. The most common
units for mass are the kilogram and gram.
Weight is the heaviness of an item. It is the product of the item's mass,
which is constant, and the local acceleration due to gravity. The weight of
an object at the top of a mountain will be less than the weight of the same
object at the bottom due to gravity variations. A unit of measurement for
weight is the newton. A newton takes into account the mass of an object
and the local gravity and gives the total force, which is weight.
Although mass and weight are two different entities, the process of
determining both weight and mass is called weighing.
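The mass/weight distinction can be made concrete with a one-line calculation: weight in newtons is mass times the local gravitational acceleration, so the same mass weighs slightly less where g is smaller. The g values below are approximate.

```python
mass_kg = 1.000  # a one-kilogram mass: constant everywhere

locations = {
    "sea level":       9.81,  # m/s^2, approximate
    "mountain summit": 9.77,  # g is slightly lower at altitude
}

for place, g in locations.items():
    print(f"{place:>15}: weight = {mass_kg * g:.2f} N (mass still {mass_kg} kg)")
```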
Balance and Scale Terms
Accuracy The ability of a scale to provide a result that is as close as
possible to the actual value. The best modern balances have an accuracy of
better than one part in 100 million when one-kilogram masses are
compared.
Calibration The comparison between the output of a scale or balance
against a standard value. Usually done with a standard known weight and
adjusted so the instrument gives a reading in agreement.
Capacity The heaviest load that can be measured on the instrument.
Precision Amount of agreement between repeated measurements of the
same quantity; also known as repeatability. Note: A scale can be extremely
precise but not necessarily be accurate.
Readability This is the smallest division at which the scale can be read. It
can range from 0.1 g down to 0.0000001 g. Readability designates the number
of places after the decimal point to which the scale can be read.
Tare The act of removing a known weight of an object, usually the
weighing container, to zero a scale. This means that the final reading will be
of the material to be weighed and will not reflect the weight of the
container. Most balances allow taring to 100% of capacity.
Balance and Scale Types
Analytical Balance These are most often found in a laboratory or places
where extreme sensitivity is needed for the weighing of items. Analytical
balances measure mass. Chemical analysis is always based upon mass so
the results are not based on gravity at a specific location, which would
affect the weight. Generally, capacity for an analytical balance ranges from 1
g to a few kilograms, with precision and accuracy often exceeding one part
in 10^6 at full capacity. There are several important parts to an analytical
balance. A beam arrest is a mechanical device that prevents damage to the
delicate internal devices when objects are being placed or removed from
the pan. The pan is the area on a balance where an object is placed to be
weighed. Leveling feet are adjustable legs that allow the balance to be
brought to the reference position. The reference position is determined by
the spirit level, leveling bubble, or plumb bob that is an integral part of the
balance. Analytical balances are so sensitive that even air currents can
affect the measurement.
To protect against this they must be covered by a draft shield. This is a
plastic or glass enclosure with doors that allows access to the pan.
Analytical balances offer the highest accuracy, meeting the demands of
analytical weighing processes. They come equipped with provisions for
eliminating interfering ambient effects and deliver repeatable weighing
results with a fast response. They are recommended for analytical
applications requiring precision performance and durability. An automatic
internal calibration mechanism helps keep the balance calibrated at all
times, providing optimum weighing accuracy: such balances calibrate
themselves automatically at startup, at preset time intervals, or whenever
temperature changes require it.
Equal Arm Balance/Trip Balance This is the modern version of the
ancient Egyptian scales. This scale incorporates two pans on opposite sides
of a lever. It can be used in two different ways. The object to be weighed
can be placed on one side and standard weights are added to the other pan
until the pans are balanced. The sum of the standard weights equals the
mass of the object. Another application for the scale is to place one item
on each pan and adjust one side until both pans are level. This is
convenient in applications such as balancing tubes for centrifugation, where
two objects must be exactly the same weight.
Platform Scale This type of scale uses a system of multiplying levers. It
allows a heavy object to be placed on a load bearing platform. The weight is
then transmitted to a beam that can be balanced by moving a counterpoise,
which is an element of the scale that counterbalances the weight on the
platform. This form of scale is used for applications such as the weighing of
drums or even the weighing of animals in a veterinary office.
Spring Balance This balance utilizes Hooke's Law which states that the
stress in the spring is proportional to the strain. Spring balances consist of
a highly elastic helical spring of hard steel suspended from a fixed point.
The weighing pan is attached at the lowest point of the spring. An indicator
shows the weight measurement and no manual adjustment of weights is
necessary. An example of this type of balance would be the scale used in a
grocery store to weigh produce.
Top-Loading Balance This is another balance used primarily in a
laboratory setting. They usually can measure objects weighing around 150–
5000 g. They offer less readability than an analytical balance, but allow
measurements to be made quickly thus making it a more convenient choice
when exact measurements are not needed. Top-loaders are also more
economical than analytical balances. Modern top-loading balances are
electric and give a digital readout in seconds.
Torsion Balance Measurements are based on the amount of twisting of a
wire or fiber. Many microbalances and ultra-microbalances, that weigh
fractional gram values, are torsion balances. A common fiber type is quartz
crystal.
Triple-Beam Balance This type of balance is less sensitive than a top-
loading balance. They are often used in a classroom situation because of
ease of use, durability and cost. They are called triple-beam balances
because they have three decades of weights that slide along individually
calibrated scales. The three decades are usually in graduations of 100g, 10g
and 1g. These scales offer much less readability but are adequate for many
weighing applications.
Precision weighing balances are laboratory-standard high-precision
balances based on the latest process technology, typically offering a best
displayed increment of 0.001 g (1 mg) at the maximum capacity available.
They suit applications demanding more than a standard balance and help
simplify complex laboratory measurements, including determining the
difference between initial and residual weights. Built-in calculation of the
density of solids and liquids also eliminates the need for time-consuming
manual calculation and data logging. Standard features include a protective
in-use cover and security bracket, working capacities from 0.1 mg to 230 g,
a pan size of 90 mm, an accuracy of 0.1 mg, internal calibration, a backlit
LCD display, a standard RS-232C interface and a hanger for below-balance
weighing.
Balance and Scale Care and Use
A balance has special use and care procedures just like other measuring
equipment. Items to be measured should be at room temperature before
weighing. A hot item will give a reading less than the actual weight due to
convection currents that make the item more buoyant. And, if your balance
is enclosed, warm air in the case weighs less than air of the same volume at
room temperature.
Another important part of using a balance is cleaning. Scales are exposed to
many chemicals that can react with the metal in the pan and corrode the
surface. This will affect the accuracy of the scale.
Also, keep in mind that a potentially dangerous situation could occur if a
dusting of chemicals is left on the balance pan. In many lab and classroom
situations, more than one person uses a single scale for weighing. It would
be impossible for each person to know what everyone else has been
weighing. There is a chance that incompatible chemicals could be brought
into contact if left standing or that someone could be exposed to a
dangerous chemical that has not been cleaned from the balance. To avoid
damaging the scale or putting others in danger, the balance should be kept
extremely clean. A camel's hair brush can be used to remove any dust that
can spill over during weighing.
Calibration is another care issue when it comes to scales. A scale cannot be
accurate indefinitely; they must be rechecked for accuracy. There are
weight sets available that allow users to calibrate the scale themselves or
the scales can be calibrated by hiring a professional to calibrate them on
site.
The correct weight set needs to be chosen when calibrating a scale. The
classes of weight sets start from Class One, which provides the greatest
precision, proceed through Classes Two, Three, Four and F, and finally go
down to Class M, which is for weights of average precision. Weight sets have
class tolerance factors; as a general rule, the tolerance of the calibration
weight should be smaller than the readability of the scale.
A scale should be calibrated at least once a year or per manufacturer’s
guidelines. It can be done using calibration weight sets or can be calibrated
by a professional. The readability of the scale will determine which weight
set is appropriate for calibrating the scale.
What is the difference between accuracy and precision?
Accuracy tells how close a scale gets to the real value. An inaccurate scale is
giving a reading not close to the real value. Precision and accuracy are
independent properties. A precise scale will give the same reading multiple
times after weighing the same item, yet a precise scale can still be
inaccurate, repeatedly giving values that are far from the actual value. For
instance, a scale that reads 5.2 g three times in a row for the same item is
very precise, but if the item actually weighs 6.0 g the scale is not accurate.
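The same example can be quantified: the offset of the mean reading from the true value measures inaccuracy (bias), while the spread of repeated readings measures imprecision. A small sketch using the readings from the example above:

```python
from statistics import mean, stdev

true_mass = 6.0             # g, the item's actual mass
readings = [5.2, 5.2, 5.2]  # the precise-but-inaccurate scale above

print(f"mean reading:         {mean(readings):.2f} g")
print(f"bias (inaccuracy):    {mean(readings) - true_mass:+.2f} g")
print(f"spread (imprecision): {stdev(readings):.2f} g")
```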
CHAPTER FIVE
LABORATORY WATERBATHS
Most general laboratory water baths go from ambient up to 80 °C or 99 °C.
Boiling baths will boil water at 100 °C (under normal conditions). Any
baths that work above 100 °C need a liquid in them such as oil. Any
baths that work below ambient need an internal or external cooling
system. If they work below the freezing point of water they need to be
filled with a suitable anti-freeze solution. If you require very accurate
control, choose a stirred laboratory bath with an accurate thermostat /
circulator.
Water baths are used in industrial clinical laboratories, academic facilities,
government research laboratories and environmental applications, as well as
in food technology and wastewater plants. Because water retains heat so well,
using water baths was one of the very first means of incubation.
Applications include sample thawing, bacteriological examinations,
warming reagents, coliform determinations and microbiological assays.
Different types of water baths may be required depending on the
application. Below is a list of the different types of commercially available
water baths and their general specifications.
Types of Laboratory Water Baths:
Unstirred water baths are the cheapest laboratory baths and have the
least accurate temperature control because the water is only circulated by
convection and so is not uniformly heated.
Stirred water baths have more accurate temperature control. They can
either have an in-built pump/circulator or a removable immersion
thermostat / circulator (some of which can pump the bath liquid externally
into an instrument and back into the bath).
Circulating Water Baths Circulating water baths (also called stirrers ) are
ideal for applications when temperature uniformity and consistency are
critical, such as enzymatic and serologic experiments. Water is thoroughly
circulated throughout the bath resulting in a more uniform temperature.
Non-Circulating Water Baths This type of water bath relies primarily on
convection instead of water being uniformly heated. Therefore, it is less
accurate in terms of temperature control. In addition, there are add-ons
that provide stirring to non-circulating water baths to create more uniform
heat transfer.
Shaking water baths have a speed controlled shaking platform tray
(usually reciprocal motion i.e. back and forwards, although orbital motion
is available with some brands) to which adaptors can be added to hold
different vessels.
Cooled water baths are available as either an integrated system with the
cooling system (compressor, condenser, etc.) built into the laboratory
water baths or using a standard water bath as above using an immersion
thermostat / circulator with a separate cooling system such as an
immersion coil or liquid circulated from a circulating cooler. The
immersion thermostat used must be capable of controlling at the below-
ambient temperature you require.
Laboratory water baths working below 4 °C should be filled with a liquid
which does not freeze.
Immersion thermostats which state they control bath temperatures below
room ambient temperature need a cooling system as mentioned above to
reach these temperatures. The immersion thermostat will then heat when
required to maintain the set below ambient temperature.
Boiling water baths are usually designed with an analogue control to regulate
water temperature from simmering up to boiling, and have a water-level
device so that the bath does not run dry and an over-temperature cut-out
thermostat fitted to or near the heating element. The flat lids usually have a
number of holes (up to 150 mm diameter, for instance) with concentric rings
which can be removed to accommodate different sizes of flask. Specify the
number of holes you require (usually a choice of 4, 6 or 12).
Cooling circulators vary in size, cooling capacity / heat removal,
temperature accuracy, flow rate, etc. and are used to cool laboratory water
baths or remove heat from another piece of equipment by circulating the
cooled water / liquid through it and back to the circulator.
Construction and Dimensions:
Laboratory water baths usually have stainless steel interiors and either
chemically resistant plastic or epoxy coated steel exteriors. Controllers are
either analogue or digital.
Bath dimensions can be a bit misleading when litre capacity is quoted
because it depends how high you measure (water baths are never filled to
the top). To compare different bath volumes it is best to compare the
internal tank dimensions.
Laboratory Water Bath Accessories:
Lift-off or hinged plastic (depending on bath temperature) or stainless steel
lids are available as well as different racks to hold tubes, etc. Lids with
holes with concentric rings are available for boiling water baths to hold
different size flasks.
Care and maintenance
It is not recommended to use a water bath with moisture-sensitive or
pyrophoric reactions. Do not heat a bath fluid above its flash point.
The water level should be regularly monitored, and topped up with distilled
water only. This is required to prevent salts from depositing on the heater.
Disinfectants can be added to prevent growth of organisms.
Raise the temperature to 90 °C or higher once a week for half an hour for
the purpose of decontamination.
Markers tend to come off easily in water baths. Use water resistant ones.
If application involves liquids that give off fumes, it is recommended to
operate water bath in fume hood or in a well-ventilated area.
The cover is closed to prevent evaporation and to help reach high
temperatures.
Set up on a steady surface away from flammable materials.
CHAPTER SIX
ANAEROBIC JARS
Method of use
1a. The culture: The culture media plates are placed inside the jar, stacked
one on top of the other.
1b. Indicator system: Pseudomonas aeruginosa, inoculated onto a nutrient
agar plate, is kept inside the jar along with the other plates. This bacterium
needs oxygen to grow (it is an aerobe). A growth-free culture plate at the end
of the process indicates a successful anaerobiosis. However, P. aeruginosa
possesses a denitrification pathway. If nitrate is present in the media, P.
aeruginosa may still grow under anaerobic conditions.
2. Six-sevenths of the air inside is pumped out and replaced with either pure
hydrogen or a 10% CO2 + 90% H2 mixture. The catalyst (palladium) acts
and the oxygen is used up in forming water with the hydrogen. The
manometer registers this as a fall in the internal pressure of the jar.
3. Hydrogen is pumped in to fill up the jar so that the pressure inside equals
atmospheric pressure. The jar is now incubated at desired temperature
settings.
DESCRIPTION
The jar (McIntosh and Fildes' anaerobic jar), about 20 cm × 12.5 cm, is made
of metal. Its parts are as follows:
1. The body, made of metal (airtight)
2. The lid, also metal, which can be placed in an airtight fashion
3. A screw going through a curved metal strip to secure and hold the lid in
place
4. A thermometer to measure the internal temperature
5. A pressure gauge to measure the internal pressure (or a side tube
attached to a manometer)
6. Another side tube for evacuation and introduction of gases (connected to
a gas cylinder or a vacuum pump)
7. A wire cage hanging from the lid to hold a catalyst that makes hydrogen
react with oxygen without the need of any ignition source
Gas-pak
Gas-pak is a method used in the production of an anaerobic environment. It
is used to culture bacteria which die or fail to grow in presence of oxygen
(anaerobes).These are commercially available, disposable sachets
containing a dry powder or pellets, which, when mixed with water and kept
in an appropriately sized airtight jar, produce an atmosphere free of
elemental oxygen gas (O2). They are used to produce an anaerobic culture
in microbiology. It is a much simpler technique than the McIntosh and
Fildes' anaerobic jar, where one needs to pump gases in and out.
Constituents of gas-pak sachets
1.Sodium borohydride - NaBH4
2.Sodium bicarbonate - NaHCO3
3.Citric acid - C3H5O(COOH)3
4.Cobalt chloride - CoCl2 (catalyst)
The addition of a palladium catalyst may be required to initiate the reaction.
Reactions
NaBH4 + 2 H2O = NaBO2 + 4 H2↑
C3H5O(COOH)3 + 3 NaHCO3 + [CoCl2] = C3H5O(COONa)3 + 3 CO2 + 3 H2O +
[CoCl2]
2 H2 + O2 + [Catalyst] = 2 H2O + [Catalyst]
Consumption of oxygen
These chemicals react with water to produce hydrogen and carbon dioxide,
along with sodium citrate (C3H5O(COONa)3) and water as byproducts.
The hydrogen then combines with oxygen over a catalyst, such as
palladiumised alumina (supplied separately), to form water.
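To give a feel for the borohydride reaction's scale, the sketch below computes the hydrogen volume released per gram of NaBH4, using the 1 : 4 mole ratio from the first equation above. The sachet mass is hypothetical and the molar volume (about 24.5 L/mol near 25 °C, 1 atm) is an approximation.

```python
M_NABH4 = 37.83    # g/mol for NaBH4
MOLAR_VOL = 24.5   # L/mol for an ideal gas near 25 C, 1 atm (approximate)

def hydrogen_litres(nabh4_grams: float) -> float:
    """Litres of H2 from NaBH4 + 2 H2O -> NaBO2 + 4 H2."""
    return 4 * nabh4_grams / M_NABH4 * MOLAR_VOL

print(f"1.0 g NaBH4 -> about {hydrogen_litres(1.0):.1f} L of H2")
```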
Culture method
The medium, the gas-pak sachet (opened and with water added) and an
indicator are placed in an air-tight gas jar which is incubated at the desired
temperature. The indicator tells whether the environment was indeed
oxygen free or not.
The chemical indicator generally used for this purpose is a "chemical
methylene blue solution" that has never been exposed to elemental oxygen
since its synthesis. It is colored deep blue on oxidation in the presence of
atmospheric oxygen in the jar, but will become colorless when the oxygen is
gone and anaerobic conditions are achieved.
CHAPTER SEVEN
MICROSCOPE
Working Principle and Parts of a Compound Microscope (with
Diagrams)
The most commonly used microscope for general purposes is the standard
compound microscope. It magnifies the size of the object by a complex
system of lens arrangement. It has a series of two lenses: (i) the objective
lens close to the object to be observed and (ii) the ocular lens or eyepiece,
through which the image is viewed by eye. Light from a light source (mirror
or electric lamp) passes through a thin transparent object. The objective
lens produces a magnified ‘real image’ (first image) of the object. This image
is again magnified by the ocular lens (eyepiece) to obtain a magnified
‘virtual image’ (final image), which can be seen by eye through the
eyepiece. As light passes directly from the source to the eye through the
two lenses, the field of vision is brightly illuminated. That is why it is called
a bright-field microscope.
These are the parts, which support the optical parts and help in their
adjustment for focusing the object
The components of the mechanical parts are as follows:
1. Base:
The whole microscope rests on this base. The mirror, if present, is fitted to it.
2. Pillars:
3. Inclination joint:
4. Arm:
It is a curved structure held by the pillars. It holds the stage, body tube, fine
adjustment and coarse adjustment.
5. Body Tube:
It is usually a vertical tube holding the eyepiece at the top and the revolving
nosepiece with the objectives at the bottom. The length of the body tube is
called the ‘mechanical tube length’ and is usually 140-180 mm (mostly 160
mm).
6. Draw Tube:
It is the upper part of the body tube, slightly narrower, into which the
eyepiece is slipped during observation.
7. Coarse Adjustment:
It is a knob with rack and pinion mechanism to move the body tube up and
down for focusing the object in the visible field. As rotation of the knob
through a small angle moves the body tube through a long distance relative
to the object, it can perform coarse adjustment. In modern microscopes, it
moves the stage up and down and the body tube is fixed to the arm.
8. Fine Adjustment:
It is a relatively smaller knob. Its rotation through a large angle can move
the body tube only through a small vertical distance. It is used for fine
adjustment to get the final clear image. In modern microscopes, fine
adjustment is done by moving the stage up and down by the fine
adjustment.
9. Stage:
10. Mechanical Stage (Slide Mover):
Mechanical stage consists of two knobs with rack and pinion mechanism.
The slide containing the object is clipped to it and moved on the stage in
two dimensions by rotating the knobs, so as to focus the required portion
of the object.
11. Revolving Nosepiece:
It is a rotatable disc at the bottom of the body tube with three or four
objectives screwed to it. The objectives have different magnifying powers.
Based on the required magnification, the nosepiece is rotated, so that only
the objective specified for the required magnification remains in line with
the light path.
These parts are involved in passing the light through the object and
magnifying its size.
1. Light Source:
Modern microscopes have in-built electric light source in the base. The
source is connected to the mains through a regulator, which controls the
brightness of the field. But in old models, a mirror is used as the light
source. It is fixed to the base by a binnacle, through which it can be rotated,
so as to converge light on the object. The mirror is plane on one side and
concave on the other.
Only plane side of the mirror should be used, as the condenser converges
the light rays.
(i) Daylight: plane or concave (plane is easier)
2. Diaphragm:
If light coming from the light source is brilliant and all the light is allowed
to pass to the object through the condenser, the object gets brilliantly
illuminated and cannot be visualized properly. Therefore, an iris
diaphragm is fixed below the condenser to control the amount of light
entering into the condenser.
3. Condenser:
If the condenser has such a numerical aperture that it sends light through the
object with an angle sufficiently large to fill the aperture of the back lens of
the objective, the objective shows its highest numerical aperture. Most
common condensers have a numerical aperture of 1.25.
If the numerical aperture of the condenser is smaller than that of the
objective, the peripheral portion of the back lens of the objective is not
illuminated and the image has poor visibility. On the other hand, if the
numerical aperture of condenser is greater than that of the objective, the
back lens may receive too much light resulting in a decrease in contrast.
There are three types of condensers as follows:
4. Objective:
Resolving power is the ability of the objective to resolve each point on the
minute object into widely spaced points, so that the points in the image can
be seen as distinct and separate from one another, so as to get a clear,
un-blurred image.
It may appear that very high magnification can be obtained by using more
number of high-power lenses. Though possible, the highly magnified image
obtained in this way is a blurred one. That means, each point in the object
cannot be found as widely spaced distinct and separate point on the image.
A wide pencil of light passing through the object ‘resolves’ the points in the
object into widely spaced points on the lens, so that the lens can produce
these points as distinct and separate on the image. Here, the lens gathers
more light.
On the other hand, a narrow pencil of light cannot ‘resolve’ the points in the
object into widely spaced points on the lens, so that the lens produces a
blurred image. Here, the lens gathers less light. Thus, the greater is the
width of the pencil of light entering into the objective, the higher is its
‘resolving power’.
The numerical aperture of an objective is its light-gathering capacity, which
depends on the sine of the angle θ (half the angle of the cone of light
entering the objective) and the refractive index of the medium existing
between the object and the objective:
n.a. = n sin θ
Where,
n = Refractive index of the medium between the object and the objective,
and θ = half the aperture angle.
For air, the value of ‘n’ is 1.00. When the space between the lower tip of the
objective and the slide carrying the object is air, the rays emerging through
the glass slide into this air are bent or refracted, so that some portion of
them does not pass into the objective. Thus, the loss of some light rays
reduces the numerical aperture and decreases the resolving power.
However, when this space is filled with an immersion oil, which has greater
refractive index (n=1.56) than that of air (n=1.00), light rays are refracted
or bent more towards the objective. Thus, more light rays enter into the
objective and greater resolution is obtained. In oil immersion objective,
which provides the highest magnification, the size of the aperture is very
small.
Therefore, it needs bending of more rays into the aperture, so that the
object can be distinctly resolved. That is why, immersion oils, such as cedar
wood oil and liquid paraffin are used to fill the gap between the object and
the objective, while using oil-immersion objective.
The smaller is the wavelength of light (λ), the greater is its ability to resolve
the points on the object into distinctly visible finer details in the image.
Thus, the smaller is the wavelength of light, the greater is its resolving
power.
The limit of resolution of an objective (d) is the distance between any two
closest points on the microscopic object, which can be resolved into two
separate and distinct points on the enlarged image.
Points with their in-between distance less than ‘d’ or objects smaller than
‘d’ cannot be resolved into separate points on the image. If the resolving
power is high, points very close to each other can be seen as clear and
distinct.
Thus, the limit of resolution (the distance between the two resolvable
points) is smaller. Therefore, smaller objects or finer details can be seen
when ‘d’ is smaller. A smaller ‘d’ is obtained by increasing the resolving
power, which in turn is obtained by using a shorter wavelength of light (λ)
and a greater numerical aperture.
d = λ / (2 × n.a.)
Where λ is the wavelength of the light used. If λ (green) = 0.55 µ and
n.a. = 1.30, then d = λ / (2 × 1.30) = 0.55 / 2.60 = 0.21 µ.
Therefore, the smallest detail that can be seen by a typical light
microscope has a dimension of approximately 0.2 µ. Smaller objects or
finer details than this cannot be resolved in a compound microscope.
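A quick numeric check of the resolution formula, reproducing the 0.21 µ figure above and comparing it with a typical dry objective (the 0.95 aperture is a common textbook value, not from the text):

```python
def resolution_um(wavelength_um: float, numerical_aperture: float) -> float:
    """Limit of resolution d = wavelength / (2 * n.a.), as in the text."""
    return wavelength_um / (2 * numerical_aperture)

lam = 0.55  # green light, micrometres
for name, na in (("dry objective", 0.95), ("oil immersion", 1.30)):
    print(f"{name}: d = {resolution_um(lam, na):.2f} um")
```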
5. Eyepiece:
The eyepiece is a drum which fits loosely into the draw tube. It magnifies
the real image formed by the objective into a still more greatly magnified
virtual image to be seen by the eye.
Usually, each microscope is provided with two types of eyepieces with
different magnifying powers (X10 and X25). Depending upon the required
magnification, one of the two eyepieces is inserted into the draw tube
before viewing. Three varieties of eyepieces are usually available.
They are the Huygenian, the hyperplane and the compensating. Among
them, the Huygenian is very widely used and efficient for low
magnification. In this eyepiece, two simple plano-convex lenses are fixed,
one above and the other below the image plane of the real image formed by
the objective.
The convex surfaces of both the lenses face downward. The lens towards
the objective is called ‘field lens’ and that towards eye, ‘eye lens’. The rays
after passing through the eye lens come out through a small circular area
known as the Ramsden disc or eye point, where the image is viewed by the
eye.
Total magnification:
Mt = Mob × Moc
Where,
Mt = Total magnification, Mob = magnification of the objective, and Moc =
magnification of the ocular (eyepiece).
If the magnification obtained by the objective (Mob) is 100 and that by the
ocular (Moc) is 10, then total magnification (Mt) = Mob × Moc = 100 × 10
= 1000. Thus, an object of 1 µ will appear as 1000 µ.
CHAPTER EIGHT
SPECTROPHOTOMETER
What are the different types of Spectrophotometers?
Even though double-beam instruments are easier and more stable for
comparison measurements, single-beam instruments can have a large
dynamic range and are also simpler to handle and more compact.
Light source, diffraction grating, filter, photo detector, signal processor and
display are the various parts of the spectrophotometer. The light source
provides all the wavelengths of visible light while also providing
wavelengths in ultraviolet and infra red range. The filters and diffraction
grating separate the light into its component wavelengths so that very
small range of wavelength can be directed through the sample. The sample
compartment permits the entry of no stray light while at the same time
without blocking any light from the source. The photo detector converts
the amount of light which it had received into a current which is then sent
to the signal processor which is the soul of the machine. The signal
processor converts the simple current it receives into absorbance,
transmittance and concentration values which are then sent to the display.
In the simplest measurement, the intensity of the measurement light beam,
I0, is first measured without the sample set. Then the sample is set in the
path of the measurement light beam, and the intensity of the light beam
after it passes through the sample, It, is measured.
For solution samples, a cell containing solvent is set in the path of the
measurement light beam, and the intensity of the light beam after it passes
through the cell, I0, is measured. Next, a cell containing a solution produced
by dissolving the sample in the solvent is set in the path of the measurement
light beam, and the intensity of the light beam after it passes through the
cell, It, is measured. The transmittance, T, is given by equation (1), but with
solution samples, it is more common to use the absorbance, Abs, which is
given by equation (2):
T = It / I0 ... (1)
Abs = -log10(T) = εcL ... (2)
Here, ε is the sample’s absorption coefficient, c is its concentration, and L is
the cell’s optical path length.
This solvent-blank method eliminates the influence of reflection
from the cell surface and absorption by the solvent, and ensures that only
the absorption due to the sample is measured.
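A minimal sketch of equations (1) and (2), with hypothetical detector readings for the reference and sample beams:

```python
import math

def transmittance(i0: float, it: float) -> float:
    """T = It / I0, equation (1)."""
    return it / i0

def absorbance(i0: float, it: float) -> float:
    """Abs = -log10(T), equation (2)."""
    return -math.log10(transmittance(i0, it))

i0, it = 1000.0, 350.0  # hypothetical beam intensities
print(f"T   = {transmittance(i0, it):.3f} ({100 * transmittance(i0, it):.1f} %)")
print(f"Abs = {absorbance(i0, it):.3f}")
```

Because absorbance is proportional to concentration (Abs = εcL), it is the quantity normally used for quantitative work.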
3. Light Source
The desirable properties of a light source are as follows:
a) Brightness across a wide wavelength range
b) Stability over time
c) A long service life
d) Low cost
Although there are no light sources that have all of these properties, the
most commonly used light sources at the moment are the halogen lamps
used for the visible and near-infrared regions and the deuterium lamps
used for the ultraviolet region. Apart from these, xenon flash lamps are
sometimes used.
(1) Halogen Lamp
[Figure: emission intensity distribution of a halogen lamp (3,000 K)]
The principle for light emission is the same as that for a standard
incandescent bulb. Electric current is supplied to a filament, the filament
becomes hot, and light is emitted. The bulb in a halogen lamp is filled with
inert gas and a small amount of a halogen. While the tungsten used as the
filament evaporates due to the high temperature, the halide causes the
tungsten to return to the filament. This helps create a bright light source
with a long service life. The emission intensity distribution of a halogen
lamp can be approximated using Planck’s law of radiation. It has relatively
high levels of each of the properties a) to d) mentioned above.
(2) Deuterium Lamp
4. Monochromator
5. Sample Compartment
In a double-beam spectrophotometer, the monochromatic light that leaves
the spectrometer is split into two beams before it enters the sample
compartment, and both beams pass through the compartment. A
spectrophotometer in which only one beam passes through the sample
compartment is called a "single-beam spectrophotometer".
In a standard configuration, the sample compartment contains cell holders
that hold square cells with optical path lengths of 10 mm. The various
accessories are attached by replacing these cell holder units or by replacing
the entire sample compartment. Among spectrophotometers of medium or
higher grade that use photomultipliers, which will be described later, as
detectors, there are models for which large sample compartments are
made available in order to allow the analysis of large samples or the
attachment of large accessories.
6.Detector
The light beams that pass through the sample compartment enter the
detector, which is the last element in the spectrophotometer.
[Figure: spectral sensitivity characteristics of a photomultiplier]
A photomultiplier is a detector that uses the fact that photoelectrons are
discharged from a photoelectric surface when it is subjected to light (i.e.,
the external photoelectric effect). The photoelectrons emitted from the
photoelectric surface repeatedly cause secondary electron emission in
sequentially arranged dynodes, ultimately producing a large output for a
relatively small light intensity. The most important feature of a
photomultiplier is that it achieves a significantly high level of sensitivity
that cannot be obtained with other optical sensors. If there is sufficient
light intensity, this feature is not particularly relevant, but as the light
intensity decreases, this feature becomes increasingly useful. For this
reason, photomultipliers are used in high-grade instruments. The spectral
sensitivity characteristics of a photomultiplier are mainly determined by
the material of the photoelectric surface.
[Figure: spectral sensitivity characteristics of a silicon photodiode]
COLORIMETER
Design of Colorimeter
Working Principle
Applications
Besides being used for basic research in chemistry laboratories,
colorimeters have many practical applications such as testing water quality
by screening chemicals such as chlorine, fluoride, cyanide, dissolved
oxygen, iron, molybdenum, zinc and hydrazine. They are also used to
determine the concentrations of plant nutrients such as ammonia, nitrate
and phosphorus in soil or hemoglobin in blood.
Notes
Colour choice.
Why use a red filter on a blue solution? (This example explains why a red
filter is used on a blue solution, but the reasoning can be developed for any
other colour.) The blue solution appears blue because all the other colours
have been preferentially absorbed, and red is the most strongly absorbed of
all. Remember that if you want to see how the changes in the solution are
progressing, you need to study a colour that changes: if the blue is always
passing unhindered, then red must be used. This is very counter-intuitive.
However, if you want to convince yourself, shine a bright white light
through a tank of water with a little milk in it. It appears blue; if you go to
the end and look at the light source, it will be reddish in tinge.
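The rule can be summarised as: measure with the colour the solution absorbs most strongly, which is roughly the complement of the colour it appears. The blue/red pairing comes from the text; the other pairings below are common rule-of-thumb complements and are approximate.

```python
# Approximate solution-colour -> filter-colour pairings (rule of thumb).
FILTER_FOR = {
    "blue solution":   "red filter",        # from the example in the text
    "yellow solution": "blue filter",
    "red solution":    "blue-green filter",
    "green solution":  "red-violet filter",
}

for solution, filt in FILTER_FOR.items():
    print(f"{solution:>16} -> use a {filt}")
```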
Uses
CHAPTER NINE
FLAME PHOTOMETERS
The neutral atoms are obtained by introduction of sample into flame. Hence
the name flame photometry.
When a solution of a metallic salt is sprayed as fine droplets into a flame,
the heat of the flame dries the droplets, leaving a fine residue of salt. This
fine residue converts into neutral atoms.
Due to the thermal energy of the flame, the atoms get excited and thereafter
return to the ground state. In this process of returning to the ground state,
excited atoms emit radiation of a specific wavelength. This wavelength of
radiation emitted is specific for every element.
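In quantitative flame photometry, the emission intensity at the element's characteristic wavelength is compared against standards of known concentration. A hypothetical sketch of that calibration step (the standards, readings and the assumption of a linear response are all illustrative):

```python
def linear_fit(xs, ys):
    """Least-squares slope m and intercept c for y = m*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

std_conc = [0.0, 2.0, 4.0, 6.0]      # standards, mmol/L (hypothetical)
intensity = [0.0, 21.0, 39.5, 60.5]  # instrument readings (hypothetical)

m, c = linear_fit(std_conc, intensity)
unknown_reading = 30.0
print(f"unknown concentration ~ {(unknown_reading - c) / m:.2f} mmol/L")
```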
1. Burner
2. Monochromators
3. Detectors
4. Recorder and display.
Burner: This is the part which produces excited atoms. Here the sample
solution is sprayed into a fuel and oxidant combination. A homogeneous
flame of stable intensity is produced.
Fuel and oxidants: Fuel and an oxidant are required to produce a flame such
that the sample converts to neutral atoms and gets excited by heat energy.
Monochromators:
Recorders and display: These are the devices that read out the signal from
the detectors.
2. The sample must be introduced as a solution in fine droplets. Many
metallic salts, soils, plant materials and other compounds are insoluble in
common solvents. Hence, they cannot be analyzed by this method.
CHAPTER TEN
ION-SELECTIVE ELECTRODES
[Figures: the equipment needed for an ion analyser measurement, and a
simple laboratory set-up for ion-selective electrode calibration and
measurement]
a) Applications
Ion-selective electrodes are used in a wide variety of applications for
determining the concentrations of various ions in aqueous solutions. The
following is a list of some of the main areas in which ISEs have been used.
Pollution Monitoring: CN, F, S, Cl, NO3 etc., in effluents, and natural waters.
Agriculture: NO3, Cl, NH4, K, Ca, I, CN in soils, plant material, fertilisers and
feedstuffs.
Salt content of meat, fish, dairy products, fruit juices, brewing solutions.
Education and Research: Wide range of applications.
b) Advantages.
9) ISEs are one of the few techniques which can measure both positive and
negative ions.
11) ISEs can be used in aqueous solutions over a wide temperature range.
Crystal membranes can operate in the range 0°C to 80°C and plastic
membranes from 0°C to 50°C.
a) The pH Electrode
i.e. pH = 7 means a concentration of 1×10^-7 moles per litre. (To be more
precise, the term ‘concentration’ should really be replaced by ‘activity’ or
‘effective concentration’. This is an important factor in ISE measurements.
The difference between activity and concentration is explained in more
detail later, but it may be noted here that in dilute solutions they are
essentially the same.)
Because of the need for equilibrium conditions there is very little current
flow and so this potential difference can only be measured relative to a
separate and stable reference system which is also in contact with the test
solution, but is unaffected by it.
A sensitive, high impedance millivolt meter or digital measuring system
must be used to measure this potential difference accurately.
The electrode response follows the Nernst equation:
E = E0 + (2.303RT/nF) × log(A)
Where E = the total potential (in mV) developed between the sensing and
reference electrodes, E0 = a constant characteristic of the particular
ISE/reference pair (it includes the sum of all the liquid junction potentials in
the electrochemical cell, see later), R = the gas constant, T = the absolute
temperature, n = the charge on the ion, F = the Faraday constant, and
A = the activity of the measured ion.
Note that 2.303RT/nF is the slope of the line (from the straight-line plot of
E versus log(A), which is the basis of ISE calibration graphs) and this is an
important diagnostic characteristic of the electrode - generally the slope
gets lower as the electrode gets old or contaminated, and the lower the
slope the higher the errors on the sample measurements.
For practical use in measuring pH, it is not normally necessary for the
operator to construct a calibration graph and interpolate the results for
unknown samples. Most pH electrodes are connected directly to a special
pH meter which performs the calibration automatically. This determines
the slope mathematically and calculates the unknown pH value for
immediate display on the meter.
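What the meter does internally can be sketched from the Nernst relation above. The example below computes the theoretical slope (about 59.16 mV per decade for a singly charged ion at 25 °C) and then applies a two-point buffer calibration; the millivolt readings are invented for illustration.

```python
R, F = 8.314, 96485.0  # gas constant J/(mol K), Faraday constant C/mol

def nernst_slope_mv(temp_c: float, n: int = 1) -> float:
    """Theoretical electrode slope 2.303*R*T/(n*F), in mV per decade."""
    return 2.303 * R * (temp_c + 273.15) / (n * F) * 1000.0

print(f"theoretical slope at 25 C: {nernst_slope_mv(25.0):.2f} mV/decade")

# Two-point calibration against pH 4 and pH 7 buffers (hypothetical mV):
e4, e7 = 165.0, -12.3
slope = (e4 - e7) / (7 - 4)  # measured mV per pH unit
e0 = e7 + slope * 7          # extrapolated potential at pH 0

e_sample = 70.0              # hypothetical sample reading, mV
print(f"sample pH ~ {(e0 - e_sample) / slope:.2f}")
```

A slope much below the theoretical value would suggest an aged or contaminated electrode, as noted above.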
These basic principles are exactly the same for all ISEs. Thus it would
appear that all can be used as easily and rapidly as the pH electrode: i.e.
simply by calibrating the equipment by measuring two known solutions,
then immersing the electrodes in any test solution and reading the answer
directly from a meter. Whilst it is certainly true that some other ions can be
measured in this simple fashion, it is not the case for most. Unfortunately,
some ISE advertising material tends to gloss over this fact and gives the
reader a rather rosy view of the capabilities of this technique. There are
several factors which can cause difficulties when ISE technology is applied
to the measurement of other ions. These are listed below and discussed in
more detail in later sections. Nevertheless, it must be stressed here that as
long as these difficulties are recognised and steps are taken to overcome
them, then ISEs can still be a very useful and cost-effective analytical tool.
b) Differences Between pH and Other Ion-Selective Electrodes
ii) Most ISEs have a much lower linear range and higher detection limit
than the pH electrode. Many show a curved calibration line in the region
10^-5 to 10^-7 moles/l and very few can be used to determine concentrations
below 1×10^-7 moles/l. Thus, for low-concentration samples, it may be
necessary to construct a calibration graph with several points in order to
define the slope more precisely in the non-linear range.
vi) Some ISEs will only work effectively over a narrow pH range.
a) General Discussion
Ion selective electrodes come in various shapes and sizes. Each
manufacturer has its own distinctive features, but very few give details of
the internal construction of the electrode or composition of the ion-
selective membranes. These are the most important factors which control
the performance of the electrode, and are often kept as closely guarded
trade secrets. Nevertheless, there are certain features that are common to
all. All consist of a cylindrical tube, generally made of a plastic material,
between 5 and 15 mm in diameter and 5 to 10 cm long. An ion-selective
membrane is fixed at one end so that the external solution can only come
into contact with the outer surface, and the other end is fitted with a low
noise cable or gold plated pin for connection to the millivolt measuring
device. In some cases the internal connections are completed by a liquid or
gel electrolyte, in others by an all-solid-state system.
ANIONS: Bromide (Br-), Chloride (Cl-), Cyanide (CN-), Fluoride (F-), Iodide
(I-), Nitrate (NO3-), Nitrite (NO2-), Perchlorate (ClO4-), Sulphide (S2-),
Thiocyanate (SCN-).
The manner in which these different membranes select and transport the
particular ions is highly variable and in many cases highly complex. It is far
beyond the scope of this work to explain in detail the exact mechanism for
each ion. Moreover, it is not necessary for the analyst to understand these
mechanisms in order to use the electrodes satisfactorily. Nevertheless, it
may be of interest to the general reader to give some indication of these
processes. There are two main types of membrane material, one based on a
solid crystal matrix, either a single crystal or a polycrystalline compressed
pellet, and one based on a plastic or rubber film impregnated with a
complex organic molecule which acts as an ion-carrier. The development of
these organic membranes was based on biological research which revealed
that some antibiotics and vitamins can induce cationic permeation through
cell membranes. One example of each membrane type is described below
as an illustration of the range of technologies employed.
b) Crystal-Membrane Electrodes e.g. Fluoride.
The Fluoride electrode is a typical example of the first type. Here the
membrane consists of a single lanthanum fluoride crystal which has been
doped with europium fluoride to reduce the bulk resistivity of the crystal. It
is 100% selective for F- ions and is only interfered with by OH- which reacts
with the lanthanum to form lanthanum hydroxide, with the consequent
release of extra F- ions. This interference can be eliminated by adding a pH
buffer to the samples to keep the pH in the range 4 to 8 and hence ensure a
low OH- concentration in the solutions.
c) Impregnated-PVC-Membrane Electrodes e.g. Potassium.
The Potassium electrode was one of the earliest developed and simplest
examples of the second type. The membrane is usually in the form of a thin
disc of PVC impregnated with the macrocyclic antibiotic valinomycin. This
compound has a hexagonal ring structure with an internal cavity which is
almost exactly the same size as the diameter of the K⁺ ion. Thus it can form
complexes with this ion and preferentially conducts it across the
membrane. Unfortunately it is not 100% selective and can also conduct
small numbers of sodium and ammonium ions. Thus these can cause errors
in the potassium determination if they are present in high concentrations.
The majority of other ISEs suffer from similar limitations (see later section
on ‘interference’).
d) Care and Maintenance of ISEs.
When handling ISEs, care should be taken to avoid damaging the
membrane surface. If the electrodes are in frequent use then they can
simply be left hanging in the electrode holder with the membrane surface
open to the air but protected by a clean dry beaker. For prolonged storage
in a cupboard or drawer, the membrane should be protected by covering
with the rubber or plastic cap which is normally provided with the
electrode. After extensive use the membranes may become coated with a
deposit or scoured with fine scratches which may cause a slow or reduced
response (low slope) or unstable readings.
Crystal membranes can be regenerated by washing with alcohol and/or
gently polishing with fine emery paper to remove any deposit or
discoloration, then thoroughly washing with de-ionised water to remove
any debris. After this, they may require soaking in the concentrated
standard solution for several hours before a stable reading can be re-
established. It must be noted, however, that prolonged immersion of crystal
membranes in aqueous solutions will eventually cause a build up of
oxidation products on the membrane surface and thus inhibit performance
and shorten the active life. Conversely, PVC membranes should not even be
touched, let alone polished, and can often be regenerated by prolonged
(several days) soaking in the standard solution, after removing any deposit
with a fine jet of water or rinsing in alcohol.
REFERENCE ELECTRODES
In order to measure the change in potential difference across the
ion-selective membrane as the ionic concentration changes, it is
necessary to include in the circuit a stable reference voltage which
acts as a half-cell from which to measure the relative deviations.
a) The Silver / Silver Chloride Single Junction Reference Electrode.
The most common and simplest reference system is the silver /
silver chloride single junction reference electrode. This generally
consists of a cylindrical glass tube containing a 4 Molar solution of
KCl saturated with AgCl. The lower end is sealed with a porous
ceramic frit which allows the slow passage of the internal filling
solution and forms the liquid junction with the external test
solution. Dipping into the filling solution is a silver wire coated with
a layer of silver chloride (it is chloridised) which is joined to a low-
noise cable which connects to the measuring system.
In electrochemical terms, the half-cell can be represented by:
Ag / AgCl (Satd), KCl (Satd)
and the electrode reaction is:
AgCl (s) + e⁻ = Ag (s) + Cl⁻
The electrode potential for this half-cell is + 0.2046 V relative to the
Standard Hydrogen Electrode at 25°C.
b) Double Junction Reference Electrodes.
One problem with reference electrodes is that, in order to ensure a
stable voltage, it is necessary to maintain a steady flow of electrolyte
through the porous frit. Thus there is a gradual contamination of the
test solution with electrolyte ions. This can cause problems when
trying to measure low levels of K, Cl, or Ag, or when using other ISEs
with which these elements may cause interference problems. In
order to overcome this difficulty the double junction reference
electrode was developed. In this case the silver / silver chloride cell
described above forms the inner element and this is inserted into an
outer tube containing a different electrolyte which is then in contact
with the outer test solution through a second porous frit. The outer
filling solution is said to form a "salt bridge" between the inner
reference system and the test solution and is chosen so that it does
not contaminate the test solution with any ions which would affect
the analysis.
Commonly used outer filling solutions are:
• potassium nitrate - for Br, Cd, Cl, Cu, CN, I, Pb, Hg, Ag, S, SCN
• sodium chloride - for K
• ammonium sulphate - for NO3
• magnesium sulphate - for NH4
Note that double junction reference electrodes are named after their
outer filling solutions.
One disadvantage with double junction reference electrodes is that
they introduce an extra interface between two electrolytes and thus
give the opportunity for an extra liquid junction potential to
develop.
c) Liquid Junction Potentials.
It must be noted that the standard voltage given by a reference
electrode is only correct if there is no additional voltage supplied by
a liquid junction potential formed at the porous plug between the
filling solution and the external test solution. Liquid junction
potentials can appear whenever two dissimilar electrolytes come
into contact.
At this junction, a potential difference will develop as a result of the
tendency of the smaller and faster ions to move across the boundary
more quickly than those of lower mobility. These potentials are
difficult to reproduce, tend to be unstable, and are seldom known
with any accuracy, so steps must be taken to minimise them. Using
4 Molar KCl as the inner filling solution has the advantage that the
K⁺ and Cl⁻ ions have nearly equal mobilities and hence form an equi-
transferrent solution. Also, in the single junction electrodes, the
electrolyte concentration is much higher than that of the sample
solution thus ensuring that the major portion of the current is
carried by these ions. A third factor in minimising the junction
potential is the fact that there is a small but constant flow of
electrolyte out from the electrode thus inhibiting any back-diffusion
of sample ions - although this is less important with modern gel
electrolytes.
As indicated above, all these problems are doubled when double
junction reference electrodes are used and an additional problem
arises in the case of the last three listed above (Sodium Chloride,
Ammonium Sulphate, Magnesium Sulphate) because the filling
solutions are not equi-transferrent and hence have a stronger
tendency to form liquid junction potentials. It must be noted here
that Nico2000 Ltd have recently introduced a novel Lithium Acetate
reference electrode which overcomes most of these problems and
can be used with all the ELIT range of ISEs. This is because it
contains ions which are very nearly equi-transferrent and which do
not interfere with any of the commonly used ISEs.
It must be noted that the E0 factor in the Nernst equation is the sum
of all the liquid junction potentials present in the system and any
variation in this during analyses can be a major source of potential
drift and error in measurements.
d) Combination Electrodes
The majority of pH electrodes are produced in the form of
combination electrodes in which the reference system is housed in
the same cylindrical body as the sensor head. This produces a
simple, compact unit for immersing in the test solution and has the
added advantage that the two cells are in close proximity (with the
reference cell normally completely surrounding the sensor element)
- thus minimising the effect of any stray electrostatic fields or any
inhomogeneity in the test solution.
The main disadvantage of this arrangement is the fact that it is the
reference element which is the most likely to cause problems or fail,
long before the ISE head does, but the whole unit has to be replaced
when failure does occur.
In contrast to pH electrodes, some ISEs are produced as mono-
electrodes for use with separate reference systems. One reason for
this is because ISE membranes have a far lower impedance than pH
sensors and are less susceptible to stray electrostatic fields. Thus it
is not necessary to screen the sensor head by surrounding it with
the reference system. More importantly, the membranes and
internal construction of ISEs are generally far more expensive than
pH sensors and it is much more cost-effective to have separate units
in which the reference system can be replaced independently from
the ISE.
e) Multiple Electrode Heads: Separable Combinations.
A new concept for combination electrodes has recently been
introduced. Both the ISEs and the reference electrodes are made in
the form of 8mm diameter tubes fitted with a gold plated plug-in
connector. These can be inserted separately into special multiple
electrode heads which are fitted with the cables and connectors for
attaching to the measuring system. The rigid plastic head ensures
that the ISE and reference system remain firmly linked together at a
regular distance apart during operation, but either one can easily be
replaced in the event of failure or need to change the analysis.
Moreover, the replacement electrodes are relatively inexpensive
compared to conventional electrodes because they do not
incorporate the expensive low-noise cables.
The ELIT Electrode Head System
A practical and cost-effective way to combine Ion Selective and
Reference Electrodes
ELIT Electrode Heads are manufactured from a robust plastic
material and fitted with low noise cables and connectors which are
compatible with any standard mV/pH/ion meter.
The standard version, for use with an ELIT Ion Analyser / Computer
Interface, has a BNC plug, but DIN, US or S7 versions are available if
required.
The sockets on the head and the pins on the plug-in electrodes are
gold plated to assure good contact.
Advantages of this 'electrode combination' over conventional
combination electrodes:
• Use of one reference electrode for several ion-selective electrodes.
• Replacement of a defective reference system without sacrificing the more expensive ISE.
• The expensive low-noise cable and connector are attached to the re-usable head and do not need to be replaced if the ISE becomes defective.
• The ISE is less expensive than conventional types with cable and connectors permanently attached.
• The ISE can be stored dry and the RE wet.
• The increased distance between the ISE and the reference system reduces electrical interference and increases the precision of measurement.
MEASURING PROCEDURES
a) Adding ISAB
One simple way to avoid adding ISAB (Ionic Strength Adjustment Buffer) is
to dilute the samples to a level where the activity effect is insignificant.
But this requires a knowledge of the Ionic Strength of the samples, and
care must be taken to avoid diluting
so much that the measurements would fall within the non-linear range of
the electrode. In some applications, where only the approximate
concentration of the samples is required, or the differences between
samples are more important than the actual concentrations, the effect of
the ionic strength can often be ignored. Alternatively, if the highest possible
precision and accuracy is required then using the Sample Addition or
Standard Addition methods may be a better solution than adding ISAB.
If it is decided that ISAB should be added then the most important factor is
that it should be added equally to standards and samples.
Thus if the ISE instructions say, for example, that ISAB should be added
at "2% v/v", it must be recognised that this is only an approximate
recommendation, and it will be more convenient to add 2 ml of ISAB to
100 ml of sample and standard (or 1 ml to 50 ml) rather than adding 2 ml
to 98 ml.
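As a quick check of why the convenient "2 ml into 100 ml" practice is acceptable, the short Python sketch below (using the volumes from the example above) shows the actual percentage obtained; what matters is not the exact figure but that it is identical for every standard and sample:

```python
# 2 ml of ISAB added to 100 ml gives 2/102 = 1.96% v/v rather than a
# nominal 2.00%, but the same addition is made to every solution, so
# the standards and the samples remain directly comparable.
v_isab, v_solution = 2.0, 100.0
print(100.0 * v_isab / (v_solution + v_isab))   # 1.96 (% v/v)
```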
The time at which the measurements are taken can also vary depending on
the characteristics of the particular type of electrode system being used
and the balance between time constraints and precision requirements. In
some cases it is best to wait for a stable mV reading (this can take several
minutes). In others it is better to take all readings after a pre-specified time
after immersion. Generally, the mV reading changes rapidly in the first 10
or 20 seconds as the ISE membrane equilibrates with the solution, then
more slowly and exponentially as the reference electrode liquid junction
potential stabilises.
Always taking a reading after say 1 or 2 minutes (depending on which
electrode system is being used) should ensure that all are taken in the
shallow part of the stabilisation curve where only small and insignificant
changes are occurring. A third alternative is to observe the drift in reading
as the electrodes equilibrate after immersion and then take a reading at the
point where the direction of drift is definitely reversed - i.e. a different
electrochemical process begins to dominate - but this last effect is not
common.
The electrode tips must be rinsed by spraying with a jet of deionised water
and gently dabbed dry with a low-lint laboratory tissue between
measurements. For the most precise results, it may help to minimise
hysteresis effects if the electrodes are soaked in deionised water for 20 or
30 seconds after rinsing, before every measurement, so that each new
reading is approached from the same direction (always from low
concentration to high, as recommended for calibration measurements,
which also minimises cross-contamination) rather than being dependent on
the last sample measured.
a) Direct Potentiometry
Direct potentiometry is the simplest and most widely used method of using
ISEs as described above in the Basic Theory and Calibration sections of this
work. Simply measure the electrode response in an unknown solution and
read the concentration directly from the calibration graph (either manually
or using special computer graphics and calculations - see later) or from the
meter display on a self-calibrating ion meter.
A big advantage of this method is that it can be used to measure large
batches of samples covering a wide range of concentrations very rapidly
without having to change range, recalibrate or make any complicated
calculations. Moreover, if ISAB is not being used, it is not necessary to
measure the volume of the samples or standards. Quite acceptable results
can be obtained for some elements by simply dangling the electrodes in a
river or pond or effluent outflow without the need to take samples in small
beakers.
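To illustrate how such a calibration graph can be handled numerically, here is a minimal Python sketch that fits E = E0 + S·log10(C) to a set of standards and reads an unknown off the fitted line; the concentrations and millivolt readings are invented for the example, not taken from any real electrode:

```python
import numpy as np

# Hypothetical standards (mol/L) and their measured potentials (mV)
std_conc = np.array([1e-4, 1e-3, 1e-2, 1e-1])
std_mV   = np.array([231.5, 173.2, 115.8, 57.9])

# Least-squares fit of E = E0 + S*log10(C) over the linear range
S, E0 = np.polyfit(np.log10(std_conc), std_mV, 1)

def read_concentration(sample_mV):
    """Read an unknown straight off the fitted calibration line."""
    return 10 ** ((sample_mV - E0) / S)

print(read_concentration(150.0))   # ~2.5e-3 mol/L for these numbers
```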
b) Incremental Methods
•Sample Addition,
•Sample Subtraction.
•Once the approximate concentration for the samples is known, the
calibration (slope) can be "fine tuned" by analysing a standard with a
concentration that lies within the range of the samples (and is at the same
temperature) and then adjusting the slope and re-calculating the results
until the standard gives the correct answer. This "fine tune" procedure is
very quick and easy using the ELIT ISE/pH Ion Analyser Software.
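For the incremental methods themselves, the underlying arithmetic is the known-addition calculation: the sample concentration is recovered from the potential change observed after spiking with a standard. A hedged Python sketch, with illustrative volumes, slope and potential change:

```python
def known_addition(C_std, V_std, V_sample, delta_E_mV, slope_mV):
    """Recover the sample concentration from the potential change seen
    after adding V_std of standard C_std to V_sample of sample."""
    r = 10 ** (delta_E_mV / slope_mV)
    return C_std * V_std / ((V_sample + V_std) * r - V_sample)

# e.g. 100 ml of sample spiked with 10 ml of 0.01 mol/L standard,
# giving an 8.1 mV rise on a 59.2 mV/decade electrode:
print(known_addition(0.01, 10.0, 100.0, 8.1, 59.2))   # ~2.0e-3 mol/L
```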
c) Potentiometric Titrations
This method can also be used to extend the range of ions measurable by
ISEs. For example, aluminium cannot be measured by direct potentiometry
but it can be titrated by reacting with sodium fluoride and monitoring the
reaction using a fluoride electrode. It can also be used for elements for
which it is difficult to maintain stable standard solutions, or which are so
toxic that it is undesirable to handle concentrated standard solutions. For
example, cyanide solutions can be titrated against a hypochlorite solution,
which reacts with the cyanide ions and effectively removes them
from solution. The amount of cyanide in the original solution is
proportional to the amount of hypochlorite used from the start of the
titration until the end-point when there is no further change in the cyanide
electrode potential.
CHAPTER ELEVEN
HOT AIR/BOX OVEN
Hot air ovens are electrical devices used for sterilization by dry heat.
They were originally developed by Pasteur. The oven uses dry heat to
sterilize articles. Generally, they can be operated from 50 to 300 °C (122
to 572 °F), with a thermostat controlling the temperature; modern units
are digitally controlled to maintain the set temperature. Their double-
walled insulation keeps the heat in and conserves energy, the inner layer
being a poor conductor and the outer layer being metallic. The oven is a
double-walled chamber made of steel and is used in the sterilization of
pharmaceutical products and other materials.
TYPES OF OVEN
- Laboratory Oven.
- High Temperature Lab Oven.
- Industrial Oven.
- Top Loading Annealing Oven.
- Pharmaceutical Oven.
- Vacuum Oven.
- Bench Oven.
DISADVANTAGES
1. This method is not suitable for surgical dressings.
2. This method is not suitable for most medicaments, rubber and plastic
goods, because the articles are exposed to a very high temperature for
a long period.
3. Dry heat penetrates slowly and unevenly.
4. Dry heat requires long exposure times to effectively achieve sterility.
5. Dry heat requires higher temperatures than many items can safely be
exposed to.
6. Dry heat requires specialized packaging materials that can sustain
integrity under high heat conditions.
7. Dry heat may require different temperatures and exposure times,
depending on the type of item being sterilized.
PRECAUTIONS
1. Glass apparatus must be wrapped with clean cloth or filter paper,
and containers must be plugged with non-absorbent cotton wool.
2. The articles and substances to be sterilised should not be placed on
the floor of the oven, as it receives direct heat and becomes much
hotter.
3. The oven should not be overloaded with the materials meant for
sterilisation.
4. There should be sufficient space between the articles, so that
there is uniform distribution of heat.
CHAPTER TWELVE
ELISA READER
The basic principle of ELISA readers is a set of special filters for only 5-6
standard wavelengths, which suit almost all ELISA kits (the wavelength
required depends on the substrate type). Always check your kit's
instructions against the reader's filters (or the substrate's absorbance
spectrum). For instance, you can find the absorbance maximum, and hence
the highest sensitivity of your ELISA photometer, by recording the
absorbance spectrum of your coloured substrate in the plate reader.
ELISA Reader vs. Spectrophotometer
The major difference between the ELISA plate reader and the
spectrophotometer is that ELISA readers are commonly used for intensity
measurements on a large number of samples, where a very small volume of
sample can also be used. The spectrophotometer is more sophisticated and
more detailed, as discussed below:
Spectrophotometer
• The instrument is more accurate than the ELISA microplate reader.
• It can measure at any wavelength.
• It can record a spectrum.
• It can measure kinetics continuously.
• It offers a full monochromator, more sensitive detection, and anisotropy
measurements. The only disadvantage with this instrument is that you can
easily pick up artifact spectra (from unclean samples or misaligned optics),
so you need to know your instrument well, use the recommended
correction-factor files for the instrument, and run as many controls as
possible on all the components of the solution you may be using.
ELISA Reader
• The ELISA reader, commonly referred to as the microplate reader, is
much faster than a spectrophotometer.
• The instrument measures multiple samples at the same time.
• Smaller volumes can be used, such as 50-200 µl per well in 96-well plates.
• It can be equipped with a robot for screening too.
The microplate reader is used for reading the results of ELISA tests. This
technique has a direct application in immunology and serology.
Among other applications, it confirms the presence of antibodies or
antigens of an infectious agent in an organism, antibodies from a vaccine,
or auto-antibodies, for example in rheumatoid arthritis.
OPERATION PRINCIPLES
The microplate reader is a specialized spectrophotometer. Unlike the
conventional spectrophotometer, which facilitates readings over a wide
range of wavelengths, the microplate reader has filters or diffraction
gratings that limit the wavelength range to that used in ELISA, generally
between 400 and 750 nm (nanometres). Some readers operate in the
ultraviolet range and carry out analyses between 340 and 700 nm. The
optical system used by many manufacturers employs optic fibres to supply
light to the microplate wells containing the samples. The light beam
passing through the sample has a diameter ranging between 1 and 3 mm. A
detection system detects the light coming from the sample, amplifies the
signal and determines the sample's absorbance. A reading system then
converts this into data allowing interpretation of the test result. Some
microplate readers use double-beam light systems.
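The absorbance determination mentioned above is an application of the Beer-Lambert law to the detected light intensities. Here is a minimal Python sketch with made-up numbers (note that in a microplate well the optical path length is set by the fill volume, not by a fixed 1 cm cuvette):

```python
import math

def absorbance(I_incident, I_transmitted):
    """Absorbance (optical density) from the detected intensities."""
    return math.log10(I_incident / I_transmitted)

def concentration(A, molar_absorptivity, path_length_cm):
    """Beer-Lambert law A = epsilon * l * c, solved for c."""
    return A / (molar_absorptivity * path_length_cm)

A = absorbance(1000.0, 250.0)           # 0.602 absorbance units
print(concentration(A, 18000.0, 0.5))   # ~6.7e-5 mol/L (illustrative values)
```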
Test samples are located in specially designed plates with a specific
number of wells where the procedure or test is carried out. Plates of 8
columns by 12 rows, with a total of 96 wells, are common. There are also
plates with a greater number of wells. For specialized applications, the
current trend is to increase the number of wells (384-well plates) to reduce
the amount of reagents and samples used and to achieve greater
throughput. The location of the optical sensors of the microplate reader
varies depending on the manufacturer: these can be located above the
sample plate, or directly underneath the plate's wells. Nowadays
microplate readers have controls regulated by microprocessors;
connection interfaces to information systems; and quality and process
control programs which, by means of a computer, allow complete test
automation.
4. Next, a secondary antibody, called the conjugate, is added. This
harbours an enzyme which will react with a substrate to produce a change
of colour at a later step.
5. Then begins a second period of incubation, during which this conjugate
binds to the antigen-antibody complex in the wells.
6. After the incubation, a new washing cycle is done to remove unbound
conjugate from the wells.
7. A substrate is added. The enzyme reacts with the substrate and causes
the solution to change in colour. This indicates how much antigen-antibody
complex is present at the end of the test.
8. Once the incubation time is completed, a reagent is added to stop the
enzyme-substrate reaction and to prevent further changes in colour. This
reagent is generally a diluted acid.
9. Finally, the plate is read by the microplate reader. The resulting values
are used to determine the specific amounts or the presence of antigens or
antibodies in the sample.
Note: Some of the wells are used for standards and controls. Standards
allow the cut-off points to be defined. The standards and controls are of
known quantities and are used for measuring the success of the test,
evaluating data against known concentrations for each control. The
process described above is common, although there are many ELISA tests
with test-specific variants.
CHAPTER THIRTEEN
REFRIGERATOR
1. The compressor compresses the refrigerant vapor, raising its pressure, and
pushes it into the coils on the outside of the refrigerator.
2. When the hot gas in the coils meets the cooler air temperature of the
kitchen, it becomes a liquid.
3. Now in liquid form at high pressure, the refrigerant cools down as it
flows into the coils inside the freezer and the fridge.
4. The refrigerant absorbs the heat inside the fridge, cooling down the air.
Refrigeration Basics
You can describe something as cold and everyone will know what you mean, but cold really only
means that something contains less heat than something else. All there really is, is greater and
lesser amounts of heat. The definition of refrigeration is The Removal and Relocation of Heat. So
if something is to be refrigerated, it is to have heat removed from it. If you have a warm can of
pop at say 80 degrees Fahrenheit and you would prefer to drink it at 40 degrees, you could place
it in your fridge for a while, heat would somehow be removed from it, and you could eventually
enjoy a less warm pop. (Oh, all right, a cold pop.) But let's say you placed that 40 degree pop in the
freezer for a while and when you removed it, it was at 35 degrees. See what I mean: even "cold"
objects have heat content that can be reduced to a state of "less heat content". The limit to this
process would be to remove all heat from an object. This would occur if an object were cooled to
Absolute Zero, which is -273 °C or -460 °F. Scientists have come close to creating this temperature
under laboratory conditions, and strange things like electrical superconductivity occur.
Heat can be transferred by three methods: radiation, conduction and convection. The latter two
are used extensively in the design of refrigeration equipment. If you place two
objects together so that they remain touching, and one is hot and one is cold, heat will flow from
the hot object into the cold object. This is called conduction. This is an easy concept to grasp and
is rather like gravitational potential, where a ball will try to roll down an inclined plane. If you
were to fan a hot plate of food it would cool somewhat. Some of the heat from the food would be
carried away by the air molecules. When heat is transferred by a substance in the gaseous state
the process is called convection. And if you kicked a glowing hot ember away from a bonfire, and
you watched it glowing dimmer and dimmer, it is cooling itself by radiating heat away. Note that
an object doesn't have to be glowing in order to radiate heat, all things use combinations of these
methods to come to equilibrium with their surroundings. So you can see that in order to
refrigerate something, we must find a way to expose our object to something that is colder than
itself and nature will take over from there. We are getting closer to talking about the actual
mechanics of a refrigerating system, but there are some other important concepts to discuss first.
The States of Matter
They are of course: solid, liquid and gas. It is important to note that heat must be added to a
substance to make it change state from solid to liquid and from liquid to a gas. It is just as
important to note that heat must be removed from a substance to make it change state from a gas
to a liquid and from a liquid to a solid.
The Magic of Latent Heat
Long ago it was found that we needed a way to quantify heat. Something
more precise than "less heat" or "more heat" or "a great deal of heat" was
required. This was a fairly easy task to accomplish. They took 1 Lb. of
water and heated it 1 degree Fahrenheit. The amount of heat that was
required to do this was called 1 BTU (British Thermal Unit). The
refrigeration industry has long since utilized this definition. You can for
example purchase a 6000 BTUH window air conditioner. This would be a
unit that is capable of relocating 6000 BTU's of heat per hour. A larger
unit capable of 12,000 BTUH could also be called a one Ton unit. There
are 12,000 BTU's in 1 Ton.
To raise the temperature of 1 LB of water from 40 degrees to 41 degrees would take 1 BTU. To
raise the temperature of 1 LB of water from 177 degrees to 178 degrees would also take 1 BTU.
However, if you tried raising the temperature of water from 212 degrees to 213 degrees you
would not be able to do it. Water boils at 212 degrees and would prefer to change into a gas
rather than let you get it any hotter. Something of utmost importance occurs at the boiling point
of a substance. If you did a little experiment and added 1 BTU of heat at a time to 1 LB of water,
you would notice that the water temperature would increase by 1 degree each time. That is until
you reached 212 degrees. Then something changes. You would keep adding BTU's, but the water
would not get any hotter! It would change state into a gas and it would take 970 BTU's to
vaporize that pound of water. This is called the Latent Heat of Vaporization and in the case of
water it is 970 BTU's per pound.
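Putting the definitions above together, here is a quick Python check of how much heat must be relocated to boil away a pound of 40 degree water:

```python
# Sensible heat: 1 BTU per lb per degree F, from 40 F up to 212 F,
# then 970 BTU/lb of latent heat to vaporize it at the boiling point.
mass_lb = 1.0
sensible_BTU = mass_lb * (212 - 40) * 1.0   # 172 BTU
latent_BTU   = mass_lb * 970.0              # 970 BTU
print(sensible_BTU + latent_BTU)            # 1142 BTU in total
```

At 12,000 BTU per hour, a one Ton unit could relocate those 1,142 BTU in under six minutes; the point to notice is how much the latent portion dwarfs the sensible portion.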
So what! you say. When are you going to tell me how the refrigeration effect works? Well hang in
there, you have just learned about 3/4 of what you need to know to understand the process.
What keeps that beaker of water from boiling when it is at room temperature? If you say it's
because it is not hot enough, sorry but you are wrong. The only thing that keeps it from boiling is
the pressure of the air molecules pressing down on the surface of the water.
When you heat that water to 212 degrees and then continue to add heat, what you are doing is
supplying sufficient energy to the water molecules to overcome the pressure of the air and allow
them to escape from the liquid state. If you took that beaker of water to outer space where there
is no air pressure the water would flash into a vapor. If you took that beaker of water to the top of
Mt. Everest where there is much less air pressure, you would find that much less heat would be
needed to boil the water. (it would boil at a lower temperature than 212 degrees). So water boils
at 212 degrees at normal atmospheric pressure. Lower the pressure and you lower the boiling
point. Therefore we should be able to place that beaker of water under a bell jar and have a
vacuum pump extract the air from within the bell jar and watch the water come to a boil even at
room temperature. This is indeed the case!
A liquid requires heat to be added to it in order to overcome the air pressure pressing down
on its surface if it is to evaporate into a gas. We just learned that if the pressure above the liquid's
surface is reduced it will evaporate more easily. We could look at it from a slightly different angle
and say that when a liquid evaporates it absorbs heat from the surrounding area. So, finding some
fluid that evaporates at a handier boiling point than water (i.e. lower) was one of the first steps
required for the development of mechanical refrigeration.
Chemical Engineers spent years experimenting before they came up with the perfect chemicals
for the job. They developed a family of hydrofluorocarbon refrigerants which had extremely low
boiling points. These chemicals would boil at temperatures below 0 degrees Fahrenheit at
atmospheric pressure. So finally, we can begin to describe the mechanical refrigeration process.
Main Components
There are 4 main components in a mechanical refrigeration system. Any components beyond
these basic 4 are called accessories. The compressor is a vapor compression pump which uses
pistons or some other method to compress the refrigerant gas and send it on its way to the
condenser. The condenser is a heat exchanger which removes heat from the hot compressed gas
and allows it to condense into a liquid. The liquid refrigerant is then routed to the metering
device. This device restricts the flow by forcing the refrigerant to go through a small hole which
causes a pressure drop. And what did we say happens to a liquid when the pressure drops? If you
said it lowers the boiling point and makes it easier to evaporate, then you are correct. And what
happens when a liquid evaporates? Didn't we agree that the liquid will absorb heat from the
surrounding area? This is indeed the case and you now know how refrigeration works. This
component where the evaporation takes place is called the evaporator. The refrigerant is then
routed back to the compressor to complete the cycle. The refrigerant is used over and over again
absorbing heat from one area and relocating it to another. Remember the definition of
refrigeration? (the removal and relocation of heat).
Heat Transfer Rates
One thing that we would like to optimize in the refrigeration loop is the rate of heat transfer.
Materials like copper and aluminum are used because they have very good thermal conductivity.
In other words heat can travel through them easily. Increasing surface area is another way to
improve heat transfer. Have you noticed that small engines have cooling fins formed into the
casting around the piston area? This is an example of increasing the surface area in order to
increase the heat transfer rate. The hot engine can more easily reject the unwanted heat through
the large surface area of the fins exposed to the passing air. Refrigeration heat transfer devices
such as air cooled condensers and evaporators are often made out of copper pipes with
aluminum fins and further enhanced with fans to force air through the fins.
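To put numbers on "good thermal conductivity", here is a small Python sketch of Fourier's law for steady conduction through a flat wall; the conductivities are typical handbook values and the geometry is invented for the comparison:

```python
def conduction_rate_W(k, area_m2, dT_K, thickness_m):
    """Fourier's law for a flat wall: Q = k * A * dT / L (watts)."""
    return k * area_m2 * dT_K / thickness_m

# Copper (k ~ 400 W/m.K) versus mild steel (k ~ 50 W/m.K), same wall:
print(conduction_rate_W(400.0, 0.1, 10.0, 0.002))   # 200000 W
print(conduction_rate_W(50.0, 0.1, 10.0, 0.002))    # 25000 W
```

Doubling the area A, which is exactly what fins do, doubles the rate of heat transfer for the same temperature difference.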
Metering Device
We will now take a closer look at the individual components of the system. We will start with the
metering device. There are several types but all perform the same general function which is to
cause a pressure drop. There should be a full column of high pressure liquid refrigerant (in the
liquid line) supplying the inlet of the metering device. When it is forced to go through a small
orifice it loses a lot of the pressure it had on the upstream side of the device. The liquid
refrigerant is sort of misted into the evaporator. So not only is the pressure reduced, the surface
area of the liquid is vastly increased. It is hard to try and light a log with a match but chop the log
into toothpick sized slivers and the pile will go up in smoke easily. The surface area of zillions of
liquid droplets is much greater than the surface area of the column of liquid in the pipe feeding
the metering device. The device has this name because it meters the flow of refrigerant into the
evaporator. The next graphic shows a capillary line metering device. This is a long small tube
which has an inside diameter much smaller than a pencil lead. You can imagine the large pressure
drop when the liquid from a 1/4" or 3/8" or larger pipe is forced to go through such a small
opening.
The capillary line has no moving parts and can not respond to changing conditions like a
changing thermal load on the evaporator. I have also added a few labels showing the names of
some of the pipes.
The Evaporator
The metering device has sprayed low pressure droplets of refrigerant into the evaporator. The
evaporator could be the forced air type and could be constructed of many copper tubes which
conduct heat well. To further enhance heat transfer the pipes could have aluminum fins pressed
onto them. This vastly increases the surface area that is exposed to the air. And this type of
evaporator could have a fan motor sucking air through the fins. The evaporator would be capable
of reducing the temperature of air passing through the fins and this is a prime example of the
refrigeration effect. If that evaporator was located in a walk in cooler, the air would be blown out
into the box and would pick up heat from the product; let's say it is a room full of eggs. The flow
of heat would be egg core/egg shell/circulating air/aluminum fins/copper evaporator
pipe/liquid droplet of refrigerant. The droplet of refrigerant has the capability of absorbing a
large quantity of heat because it is under conditions where it is just about ready to change state
into a gas. We have lowered its pressure, we have increased surface areas and now we are
adding heat to it. Just like water, refrigerants also have ratings for Latent Heats of vaporization in
BTU's per LB. When heat is picked up from the air stream, the air is by definition cooled and is
blown back out into the box to take another pass over the eggs and pick up more heat. This
process continues until the eggs are cooled to the desired temperature and then the refrigeration
system shuts off and rests. But what about our droplet of refrigerant?
By now it might have picked up so much heat that it just couldn't stand it anymore, and it has
evaporated into a gas. It has served its purpose and is subjected to a suction coming from the
outlet pipe of the evaporator. This pipe is conveniently called the suction line. Our little quantity
of gas joins lots of other former droplets and they all continue on their merry way to their next
destination.
The Compressor
The compressor performs 2 functions. It compresses the gas (which now contains heat from the
eggs) and it moves the refrigerant around the loop so it can perform its function over and over
again. We want to compress it because that is the first step in forcing the gas to go back into a
liquid form. This compression process unfortunately adds some more heat to the gas, but at least
this process is also conveniently named: the Heat of Compression. The graphic shows a
reciprocating compressor, which means that it has piston(s) that go up and down. On the down
stroke refrigerant vapor is drawn into the cylinder. On the upstroke those vapors are compressed.
There are thin valves that act like check valves and keep the vapors from going back where they
came from. They open and close in response to the refrigerant pressures being exerted on them
by the action of the piston. The hot compressed gas is discharged out the...you guessed it:
discharge line. It continues towards the last main component.
The Condenser
The condenser is similar in appearance to the evaporator. It utilizes the same features to effect
heat transfer as the evaporator does. However, this time the purpose is to reject heat so that the
refrigerant gas can condense back into a liquid in preparation for a return trip to the evaporator.
If the hot compressed gas was at 135 degrees and the air being sucked through the condenser
fins was at 90 degrees, heat will flow downhill like a ball wants to roll down an inclined plane and
be rejected into the air stream. Heat will have been removed from one place and relocated to
another as the definition of refrigeration describes. As long as the compressor is running it will
impose a force on the refrigerant to continue circulating around the loop and continue removing
heat from one location and rejecting it into another area.
Superheat and Slugging
There is another very common type of metering device called a TX Valve. Its full name is
Thermostatic Expansion Valve, and you will be thankful to know that its short form is TXV. (It
can also be called TEV.) This valve has the additional capability of modulating the refrigerant flow.
This is a nice feature because if the load on the evaporator changes the valve can respond to the
change and increase or decrease the flow accordingly. The next graphic shows this type of
metering device and you will note that another component has been added along with it.
The TXV has a sensing bulb attached to the outlet of the evaporator. This bulb senses the suction
line temperature and sends a signal to the TXV allowing it to adjust the flow rate. This is
important because if not all the refrigerant in the evaporator changes state into a gas, there
would be liquid refrigerant content returning down the suction line to the compressor. That
could be disastrous to the compressor. A liquid can not be compressed and if a compressor tries
to compress a liquid something is going to break and it's not going to be the liquid. The
compressor can suffer catastrophic mechanical damage. This unwanted situation is called liquid
slugging. The flow rate through a TXV is set so that not only is all the liquid changed to
a gas, but there is an additional 10 degree safety margin to ensure that all the liquid is changed to
a gas. This is called Superheat. At a given temperature any liquid and vapor combination will
always be at a specific pressure. There are charts of this relationship called PT Charts which
stands for Pressure/Temperature Chart. Now if all the liquid droplets in an evaporator have
changed state into a gas, and they still have 1/4 of the evaporator to travel through, this gas will
pick up more heat from the load being imposed on the evaporator and even though it is at the
same pressure, it will become hotter than the PT Chart says it should be. This heat increase over
and above the normal PT relationship is called superheat. It can only take place when there is no
liquid in the immediate area, and this phenomenon is used to create an insurance policy of sorts.
Usually TXVs are set to maintain 10 degrees of superheat, and by definition that means that the
gas returning to the compressor is at least 10 degrees away from the risk of having any liquid
content. A compressor is a vapor compression pump and must not attempt to compress liquid.
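The superheat check a technician performs can be sketched in a few lines of Python. The PT-chart fragment below is hypothetical, invented for illustration rather than taken from any real refrigerant's chart:

```python
# Hypothetical PT-chart fragment: saturation pressure (psig) -> temp (F)
pt_chart = {60: 34.0, 65: 38.0, 70: 41.0, 75: 44.0}

def superheat_F(suction_pressure_psig, suction_line_temp_F):
    """Superheat = measured suction-line temperature minus the
    saturation temperature read off the PT chart at that pressure."""
    return suction_line_temp_F - pt_chart[suction_pressure_psig]

# 70 psig with a 51 F suction line gives the 10 F of superheat
# that a TXV is typically set to maintain:
print(superheat_F(70, 51.0))   # 10.0
```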
That extra component that got added in along with the TX Valve is called a receiver. When the
TXV reduces the flow there has to be somewhere for the unneeded refrigerant to go, and the
receiver is it. Note that there is a dip tube on the outlet side to ensure that liquid is what is fed
into the liquid line. Liquid must be provided to the TXV, not a mixture of liquid and gas: the basic
premise is to change a liquid to a gas, so you don't want to waste any of the evaporator's capacity
by injecting useless vapor into it. The line that comes from the condenser and goes to the receiver
is also given a name; it's called the condensate line.
Accessories
Even though there are only 4 basic components to a refrigeration system there are numerous
accessories that can be added. The next graphic shows a liquid line filter and a sight glass. The
filter catches unwanted particles such as welding slag, copper chips and other unwanted debris
and keeps them from clogging up important devices such as TX Valves. It has another function as
well. It contains a desiccant which absorbs minute quantities of water which hopefully wasn't in
the system in the first place. The sight glass is a viewing window which allows a mechanic to see
if a full column of liquid refrigerant is present in the liquid line.
Earlier we discussed heat transfer rates and mentioned surface area as one of the factors. Let's
put some fins on our condenser and evaporator. While we are at it, let's also add a couple of fan
motors to move air through those fins. They are conveniently called the condenser fan motor and
evaporator fan motor.
To make our cyberspace refrigeration system a little more realistic, let's separate the evaporator
from the compressor section and put it inside an insulated box. The leftover components can now
be called a Condensing Unit. The insulated box does not conduct heat well: if we lower the
temperature of a refrigerated product inside the box, we want to slow down the rate of thermal
gain from the rest of the world outside the box. Oil has been added to the compressor sump to
keep the moving parts inside the compressor lubricated. The suction line returning to the
compressor has been sloped to aid in returning oil to the compressor; the oil is slowly depleted
from the sump by getting entrained in the refrigerant, and proper piping practices must be used
to ensure its return. Also notice that the liquid line has been made smaller. The same quantity of
refrigerant can be contained in a much smaller pipe when it is in the liquid form. The suction line
has been connected to its proper place on the evaporator: the bottom. Consider the direction of
flow: the liquid refrigerant (which probably contains oil stolen from the compressor) enters the
top of the evaporator and now has gravity on its side to return the oil to where it belongs (just
like the sloped suction line).
Consider the heat flow within the insulated box. The evaporator is constantly recirculating air in
a forced convection loop around the box. As the cold air passes over the product to be
refrigerated, once again we see a thermal transfer taking place. If there were a bunch of boxes of
warm eggs placed in the cooler some of their heat content would be picked up by the cold air and
that air is sucked back into the evaporator. We know what happens then. The heat is transferred
through the fins, through the tubing, and into the refrigerant and carried away. That same air has
been cooled and is once again discharged back over the product. The next graphic shows this
loop and the pink and blue colors represent air with more heat content and less heat content
respectively.
The next graphic is a more pictorial representation of what an actual installation might look like.
CHAPTER FOURTEEN
LABORATORY MIXER
CHAPTER FIFTEEN
POLYMERASE CHAIN REACTION (PCR ) MACHINE
PCR stands for Polymerase Chain Reaction, a technique often used in
biological and chemical labs. A thermal cycler, or PCR machine, can
produce copies of a specific DNA segment ranging from thousands to
millions in number.
This machine, also called a DNA amplifier, can serve various purposes
such as gene analysis, the evolutionary study of organisms (phylogeny),
and the diagnosis of various long-term diseases with the help of DNA
structure.
Also, it is used in the field of forensic science, in arriving at results
based on fingerprint patterns and in paternity testing. Thanks are due to
Kary B. Mullis, who invented the PCR technique in 1985!
The basic function of this machine is to copy sections of DNA, and this is
performed through a heating cycle. The temperature first rises to 95
degrees Celsius, which melts the DNA by separating the two strands. Then,
as the temperature lowers, the primers bind to the 3′ end of each target
sequence. The primers are able to perform this task with the aid of Taq
DNA polymerase and free nucleotides.
This process continues so that there are two partially double-stranded
DNA molecules at the end of the first cycle. The same process is repeated
again and again, producing thousands to millions of copies of the
particular target sequence.
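The repetition is easy to quantify: each cycle at best doubles the target, so n cycles yield up to 2^n copies. Here is a minimal Python sketch (the efficiency parameter is an illustrative refinement, since real reactions run below 100% per cycle):

```python
def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Ideal PCR doubles the target every cycle: N = N0 * (1 + E)**n."""
    return initial_copies * (1 + efficiency) ** cycles

print(pcr_copies(1, 30))        # ~1.07e9 copies after 30 ideal cycles
print(pcr_copies(1, 30, 0.9))   # ~2.3e8 copies at 90% per-cycle efficiency
```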
Though these thermal cyclers are essential tools for biologists, they are
not very easy to afford. In spite of its price, the DNA amplifier is
extensively used in laboratories, universities, public school facilities,
health centers and forensic departments.
To detect and quantify DNA samples, this type of machine is widely used
for amplification. Such a (real-time) thermal cycler uses DNA-binding dyes
and fluorescent reporters as its method of probing.
This is also used to carry out amplification methods that help identify the
flanking sequences of different genomic inserts; this is DNA amplification
from a known sequence into an unknown sequence.
Reverse Transcriptase PCR Machine
This makes use of the standard Polymerase Chain Reaction process to
produce billions of DNA copies from an RNA template.
Initially, DNA or RNA amplification involved cloning the selected segment
using bacteria, which took weeks. With PCR, it takes only a few hours to
produce millions of copies of the DNA sequence of interest.
CHAPTER SIXTEEN
LABORATORY INCUBATOR
An incubator comprises a transparent chamber and the equipment that
regulates its temperature, humidity, and ventilation. For years, the
principal uses for the controlled environment provided by incubators
included hatching poultry eggs and caring for premature or sick infants, but
a new and important application has recently emerged, namely, the
cultivation and manipulation of microorganisms for medical treatment and
research. This article will focus on laboratory (medical) incubators.
The first incubators were used in ancient China and Egypt, where they
consisted of fire-heated rooms in which fertilized chicken eggs were placed
to hatch, thereby freeing the hens to continue laying eggs. Later, wood
stoves and alcohol lamps were used to heat incubators. Today, poultry
incubators are large rooms, electrically heated to maintain temperatures
between 99.5 and 100 degrees Fahrenheit (37.5 and 37.8 degrees Celsius).
Fans are used to circulate the heated air evenly over the eggs, and the
room's humidity is set at about 60 percent to minimize the evaporation of
water from the eggs. In addition, outside air is pumped into the incubator
to maintain a constant oxygen level of 21 percent, which is normal for fresh
air. As many as 100,000 eggs may be nurtured in a large commercial
incubator at one time, and all are rotated a minimum of 8 times a day
throughout the 21-day incubation period.
During the late nineteenth century, physicians began to use incubators to
help save the lives of babies born after a gestation period of less than 37
weeks (an optimal human pregnancy lasts 280 days, or 40 weeks). The first
infant incubator, heated by kerosene lamps, appeared in 1884 at a Paris
women's hospital.
In 1933, American Julius H. Hess designed an electrically heated infant
incubator (most are still electrically heated today). Modern baby
incubators resemble cribs, save that they are enclosed. Usually, the covers
are transparent so that medical personnel can observe babies continually.
In addition, many incubators are made with side wall apertures into which
long-armed rubber gloves can be fitted, enabling nurses to care for the
babies without removing them. The temperature is usually maintained at
between 88 and 90 degrees Fahrenheit (31 to 32 degrees Celsius). Entering
air is passed through a HEPA (high-efficiency particulate air) filter, which
cleans and humidifies it, and the oxygen level within the chamber is
adjusted to meet the particular needs of each infant. Incubators in neonatal
units, centers that specialize in caring for premature infants, are frequently
equipped with electronic devices for monitoring the infant's temperature
and the amount of oxygen in its blood.
Laboratory (medical) incubators were first utilized during the twentieth
century, when doctors realized that they could be used to identify
pathogens (disease-causing bacteria) in patients' bodily fluids and thus
diagnose their disorders more accurately. After a sample has been
obtained, it is transferred to a Petri dish, flask, or some other sterile
container and placed in a rack inside the incubator. To promote pathogenic
growth, the air inside the chamber is humidified and heated to body
temperature (98.6 degrees Fahrenheit or 37 degrees Celsius). In addition,
these incubators provide the amount of atmospheric carbon dioxide or
nitrogen necessary for the cell's growth. As this carefully conditioned air
circulates around it, the microorganism multiplies, enabling easier and
more certain identification.
A related use of incubators is tissue culture, a research technique in which
clinicians extract tissue fragments from plants or animals, place these
explants in an incubator, and monitor their subsequent growth. The
temperature within the incubator is maintained at or near that of the
organism from which the explant was derived. Observing explants in
incubators gives scientists insight into the operation and interaction of
particular cells; for example, it has enabled them to understand cancerous
cells and to develop vaccines for polio, influenza, measles, and mumps. In
addition, tissue culture has allowed researchers to detect disorders
stemming from the lack of particular enzymes.
Incubators are also used in genetic engineering, an extension of tissue
culturing in which scientists manipulate the genetic materials in explants,
sometimes combining DNA from discrete sources to create new organisms.
While such applications as sperm banks, cloning, and eugenics trouble
many contemporary observers, genetic material has already been
manipulated to measurable positive effect—to make insulin and other
biologically essential proteins, for example. Genetic engineering can also
improve the nutritional content of many fruits and vegetables and can
increase the resistance of various crops to disease. It is in the field of bio-
technology that incubators' greatest potential lies.
Raw Materials
Three main types of materials are necessary to manufacture an incubator.
The first is stainless steel sheet metal of a common grade, usually .02 to .04
inch (.05 to .1 centimeter) thick. Stainless steel is used because it resists
rust and corrosion that might be caused by both naturally occurring
environmental agents and by whatever is placed inside the unit. The next
category of necessary components includes items purchased from outside
suppliers: nuts, screws, insulation, motors, fans, and other miscellaneous
items. The third type of necessary material is the electronics package.
The bed has a rectangular block with an open "V" on its topside,
while the ram has a knife-edged blade with a radius at its cutting
edge. The ram's descent into the open bottom "V" is controlled; the
depth at which the blade enters the bed controls the angle at which
the sheet metal is bent. A simple straight-edge serves as a back-
gauge.
Assembling the cabinets
5 Next, the components of both chamber and case are fitted together,
some with sheet metal screws. Others are joined by means of spot
welding, a process in which separate pieces of material are fused
with pressure and heat.
6 Other components are arc welded using one of three methods. In
the first method, known as MIG (metal-arc inert gas) welding, a coil
of thin wire is threaded through a hand-held gun. A hose is connected
from a tank of inert gas (usually argon) to the tip of the gun's nozzle.
A machine generating electrical current is attached to the wire in the
gun and the work piece. When the gun's trigger is pulled, the wire
rod moves, feeding toward the work piece, and the gas is released,
creating an atmosphere at the point where the wire arcs with the
metal. This allows the joining of the parts.
7 The second arc welding method is known as stick welding. In this
process, a thin rod approximately 12 inches long, .187 inch thick (30
centimeters long, .47 centimeter thick), and coated with a flux
material is placed into a hand-held holder. This holder is attached to
a machine generating an electrical charge. Also connected to the
machine is a grounding cable that has an end clamped to the part to
be welded. When the rod is close to the parts, an arc is struck,
generating intense heat that melts the rod and flux. The flux acts as a
cleanser, allowing the rod material to adhere to both pieces of metal.
The welder drags the rod along the seams of the metal while
maintaining its distance from the seam to allow the arc to remain
constant.
8 The third arc welding method used to assemble the incubator is TIG
(tungsten-arc inert gas) welding, a combination of stick and MIG
welding. In this process, a stationary tungsten rod without any flux is
inserted into a hand-held gun. Inert gas flows from a tank through
the gun's nozzle. When the trigger is pulled, the gas creates an
atmosphere; as the tungsten rod strikes its arc, the two parts fuse
together without any filler metal.
Painting the incubator
9 At this point, the case may be painted to further provide surface
protection, both inside and outside (the inner chamber is never
painted). The box is spray painted, usually with an electrostatically
charged powder paint. This process requires that a small electrical
charge be applied so that it will attract the powder particles, which
have been given an opposite charge. After the case is sprayed, it is
moved into an oven that melts the powder particles, causing them to
adhere to the freshly cleaned metal surface. This process is very
clean, efficient, and environmentally friendly, and the high-quality
paint resists most laboratory spills.
Insulating or jacketing the chamber
10 Next, the inner-chamber is wrapped with insulation (either
blanket batting or hard-board), placed inside the case, and secured. If
the unit is water-jacketed, the jacket is placed inside the case and the
chamber inside the jacket. A sheet metal door is constructed using
methods similar to that mentioned above.
Assembling the control panel
11 While the sheet metal cabinetry is being fabricated, a control
panel is being assembled elsewhere in the factory. Following detailed
electrical prints, electricians fasten different colored wire of varying
thicknesses to electrical devices. The color scheme helps technicians
to diagnose problems quickly, and the various thicknesses allow for
safe and efficient transfer of lower and higher voltages. Purchased
electrical devices such as fuse blocks, switches, terminal blocks, and
relays adhere to strict electrical codes. Finally, the wires from the
control panel are attached to the control devices (on/off switches or
micro-processors) and the electro-mechanical devices (fan motor,
lights, and heaters).
Final assembly, testing, and cleaning
12 The incubator now has its inner glass and the outer solid door
attached, and shelving and supplemental features are installed. Each
unit is 100 percent functionally tested. The parameters for each test
are set to verify the unit's performance against advertised
specifications or the customer's requests, whichever is more
stringent. Problems are corrected, and the equipment is retested. A
copy of the test result is kept on file and the original sent to the
customer.
13 The incubator receives a thorough cleaning inside and out.
Shelves are removed and packaged separately, and the doors are
taped closed. A brace is positioned under the door to help prevent
sagging. Next, each unit is secured to a wooden skid and a corrugated
cardboard box is placed around the case. Packing filler is put in-
between the carton and the case. Finally, the product is shipped.
Quality Control
No single set of quality standards is accepted across the entire incubator
manufacturing industry. Some areas of the country may require UL
(Underwriters Laboratories) electrical approval, but those standards apply
only to the electro-mechanical devices being used. During the sheet metal work,
manufacturers utilize in-house inspection processes that can vary widely,
from formal first-piece inspection to random lot sampling inspection. Some
companies may keep records of their findings, while others do not. Almost
without exception, manufacturers do performance-level testing before
shipment as described above.
The Future
While hospitals will always need neonatal incubators, the bio-technological
industry is where the growth market lies for this product. Growth chamber
type incubators will need to control temperature and relative humidity to
more precise settings, as microbiologists and researchers investigate new
ways to improve our health and well-being.
CHAPTER SEVENTEEN
MICROTOMES
A microtome is a tool used to cut extremely thin slices of material, known
as sections.
Cryomicrotome
For the cutting of frozen samples, many rotary microtomes can be adapted
to cut in a liquid-nitrogen-cooled chamber, in a so-called cryomicrotome
setup. The reduced temperature increases the hardness of the sample, for
example by taking it through a glass transition, which allows the
preparation of semi-thin sections. However, both the sample temperature
and the knife temperature must be controlled in order to optimise the
resulting section thickness.
Ultramicrotome
An ultramicrotome cuts extremely thin (ultrathin) sections, typically for
electron microscopy. A ribbon of ultrathin sections prepared by
room-temperature ultramicrotomy may be seen floating on water in the boat
of the diamond knife used to cut them, the knife blade forming the edge at
the upper end of the trough.
Vibrating microtome
The vibrating microtome operates by cutting using a vibrating blade,
allowing the resultant cut to be made with less pressure than would be
required for a stationary blade. The vibrating microtome is usually used for
difficult biological samples. The cut thickness is usually around 30-500 µm
for live tissue and 10-500 µm for fixed tissue.
Saw microtome
The saw microtome is designed especially for hard materials such as teeth
or bones. A microtome of this type has a recessed rotating saw which slices
through the sample. The minimum cut thickness is approximately 30 µm, and
cuts can be made from comparatively large samples.
Laser microtome
The laser microtome is an instrument for contact-free slicing. Prior
preparation of the sample through embedding, freezing or chemical fixation
is not required, thereby minimizing artifacts from preparation methods.
Alternatively, this design of microtome can also be used for very hard
materials, such as bones or teeth, as well as some ceramics. Depending on
the properties of the sample material, the achievable thickness is between
10 and 100 µm.
The device operates using the cutting action of an infrared laser. As the
laser emits radiation in the near infrared, a wavelength regime in which it
can interact with biological materials, sharp focusing of the beam within
the sample produces a focal point of very high intensity, up to TW/cm². The
non-linear interaction of the light in the focal region introduces material
separation in a process known as photodisruption. By limiting the laser
pulse durations to the femtosecond range, the energy expended at the target
region is precisely controlled, limiting the interaction zone of the cut to
under a micrometre; outside this zone, the ultra-short beam application
time introduces minimal to no thermal damage to the remainder of the
sample.
The laser radiation is directed onto a fast scanning-mirror-based optical
system which allows three-dimensional positioning of the beam crossover
whilst allowing beam traversal to the desired region of interest. The
combination of high power with a high raster rate allows the scanner to cut
large areas of sample in a short time. The laser microtome also permits
laser microdissection of internal areas in tissues, cellular structures,
and other types of small features.
MICROTOME KNIFE
The knife is the critical component used to cut uniform thin serial
sections of the tissue. Various types of knives are used with different
microtomes. For routine purposes a wedge (C-type) knife, plane on both
faces, is used; its length varies from 100 mm to 350 mm. Microtome knives
are made of good-quality high-carbon steel, tempered at the cutting edge.
Hardness of the knife is essential to obtain good tissue sections.
Sharpening of the microtome knife - To achieve good sections the knife
should be very sharp. The knife is fitted into a knife back (a honing
guide) for sharpening, which can be done manually or with an automatic
machine.
Honing - This is done to remove nicks and irregularities from the knife
edge. Coarse and fine honing are done using different abrasives.
Stropping - The purpose of stropping is to remove the "burr" formed during
honing and to polish the cutting edge.
Other types of knives are diamond and glass knives. These knives are very
expensive and are used for ultramicrotomy.
Disposable knives – Nowadays disposable microtome blades are widely used.
Two types of disposable blades are available.
1. Low-profile blade - usually used for cutting small and soft biopsies,
such as kidney and liver biopsies.
2. High-profile blade - used for any tissue, such as myometrium, breast
tumor or skin.
Advantages
1. No time is spent in sharpening, honing or stropping the knife.
2. Resistant to both corrosion and heat.
3. Hardness of the blade is comparable to that of a steel knife.
Disadvantages
1. Relatively expensive.
2. Disposable blades are not as rigid as a steel knife.
Care of the Microtome Knife
Store the knife in its box when not in use.
The knife should be cleaned with xylene before and after use.
When the knife is being stored for a long time, it should be smeared with
grease or a good grade of light oil.
The knife edge should not be touched.
The knife edge should never be allowed to become badly nicked; it is
advisable to use a separate knife for cutting hard tissue like bone.
The above points are important if a reusable knife is being used.
Points to remember
1. For routine histopathology the rotary microtome is used.
2. The ultramicrotome is used to cut semi-thin or ultrathin sections.
3. A traditional knife requires honing and stropping to smooth the cutting
edge.
4. Disposable knives are expensive but do not need honing or stropping.
5. The knife edge is spoiled if tissue has not been properly decalcified.
CHAPTER EIGHTEEN
ENZYME-LINKED IMMUNOSORBENT ASSAYS (ELISAs) TECHNIQUE
Enzyme-Linked Immunosorbent Assays (ELISAs) are the most widely used
type of immunoassay. ELISA is a rapid test used for detecting or
quantifying antibodies or antigens of viruses, bacteria and other agents.
ELISA is so named because the test technique involves the use of an
enzyme system and an immunosorbent.
The ELISA method for measuring antigen (Ag)-antibody (Ab) reactions is
increasingly used in the detection of antigens (infectious agents) or
antibodies because of its simplicity and sensitivity. It is as sensitive as
radioimmunoassay (RIA) and requires only microlitre quantities of test
reagents. It has now been widely applied to the detection of a variety of
antibodies and antigens, such as hormones, toxins, and viruses.
Some Salient Features
1. The ELISA test has high sensitivity and specificity.
2. The result of an ELISA test can be read visually (qualitatively) or
measured with a plate reader (quantitatively).
3. A large number of tests can be done at one time. ELISAs are designed
specifically for screening large numbers of specimens at a time, making
them suitable for use in surveillance and centralized blood transfusion
services.
4. Reagents used for ELISA are stable and can be distributed to district
and rural laboratories, but as ELISAs require sophisticated equipment
and skilled technicians to perform the tests, their use is limited to
certain circumstances.
Materials needed in ELISA Testing
The enzyme system consists of:
1. An enzyme - horseradish peroxidase or alkaline phosphatase - which is
labelled, or linked, to a specific antibody.
2. A specific substrate:
- o-phenylenediamine dihydrochloride (OPD) for peroxidase
- p-nitrophenyl phosphate (pNPP) for alkaline phosphatase
The substrate is added after the antigen-antibody reaction. The enzyme
catalyses (usually hydrolyses) the substrate to give a coloured end point
(a yellow compound in the case of alkaline phosphatase). The intensity of
the colour gives an indication of the amount of bound antibody or antigen.
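Because colour intensity scales with the amount of bound antibody or antigen, quantitative results are read from a standard curve. A minimal sketch, assuming hypothetical calibrator concentrations and absorbance readings and a simple linear fit over the working range:

```python
import numpy as np

# Hypothetical calibrators: known antigen concentrations (ng/mL)
# and the absorbances they produced in the plate reader.
std_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
std_abs  = np.array([0.05, 0.22, 0.41, 0.78, 1.52])

# Fit a straight line (absorbance = slope*conc + intercept)
# over the linear working range of the assay.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def conc_from_absorbance(a):
    """Invert the standard curve to estimate concentration."""
    return (a - intercept) / slope

# Unknown sample absorbance (hypothetical reading).
print(conc_from_absorbance(0.60))  # about 15 ng/mL for these numbers
```

Real assays often use four-parameter logistic fits rather than a straight line, but the principle of inverting a calibration curve is the same.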
CHAPTER NINETEEN
MICROSCOPY TECHNIQUE
Microscopy is the technical field of using microscopes to view objects and
areas of objects that cannot be seen with the naked eye (objects that are
not within the resolution range of the normal eye). There are three well-
known branches of microscopy: optical, electron, and scanning probe
microscopy.
Optical and electron microscopy involve the diffraction, reflection, or
refraction of electromagnetic radiation/electron beams interacting with
the specimen, and the collection of the scattered radiation or another signal
in order to create an image. This process may be carried out by wide-field
irradiation of the sample (for example standard light microscopy and
transmission electron microscopy) or by scanning of a fine beam over the
sample (for example confocal laser scanning microscopy and scanning
electron microscopy). Scanning probe microscopy involves the interaction
of a scanning probe with the surface of the object of interest. The
development of microscopy revolutionized biology, gave rise to the field of
histology and so remains an essential technique in the life and physical
sciences.
Optical microscopy
Optical or light microscopy involves passing visible light transmitted
through or reflected from the sample through a single or multiple lenses to
allow a magnified view of the sample. The resulting image can be detected
directly by the eye, imaged on a photographic plate or captured digitally.
The single lens with its attachments, or the system of lenses and imaging
equipment, along with the appropriate lighting equipment, sample stage
and support, makes up the basic light microscope. A recent development is
the digital microscope, which uses a CCD camera to image the specimen of
interest. The image is shown on a computer screen, so eyepieces are
unnecessary.
Limitations
Limitations of standard optical microscopy (bright-field microscopy) lie in
three areas:
- The technique can only image dark or strongly refracting objects
effectively.
- Diffraction limits resolution to approximately 0.2 micrometres, which
limits the practical magnification to about 1500×.
- Out-of-focus light from points outside the focal plane reduces image
clarity.
Live cells in particular generally lack sufficient contrast to be studied
successfully, since the internal structures of the cell are colourless and
transparent. The most common way to increase contrast is to stain the
different structures with selective dyes, but this often involves killing and
fixing the sample. Staining may also introduce artifacts, apparent structural
details that are caused by the processing of the specimen and are thus not
legitimate features of the specimen. In general, these techniques make use
of differences in the refractive index of cell structures. It is comparable to
looking through a glass window: you (bright field microscopy) don't see the
glass but merely the dirt on the glass. There is a difference, as glass is a
denser material, and this creates a difference in phase of the light passing
through. The human eye is not sensitive to this difference in phase, but
clever optical solutions have been thought out to change this difference in
phase into a difference in amplitude (light intensity).
Techniques
In order to improve specimen contrast or highlight certain structures in a
sample, special techniques must be used. A huge selection of microscopy
techniques is available to increase contrast or to label a sample. Four
examples of transillumination techniques used to generate contrast in a
sample of tissue paper:
- Bright field illumination: sample contrast comes from absorbance of light
in the sample.
- Cross-polarized light illumination: sample contrast comes from rotation
of polarized light through the sample.
- Dark field illumination: sample contrast comes from light scattered by
the sample.
- Phase contrast illumination: sample contrast comes from interference of
different path lengths of light through the sample.
Bright field microscopy
Bright field microscopy is the simplest of all the light microscopy
techniques. Sample illumination is via transmitted white light, i.e.
illuminated from below and observed from above. Limitations include low
contrast of most biological samples and low apparent resolution due to the
blur of out of focus material. The simplicity of the technique and the
minimal sample preparation required are significant advantages.
Oblique illumination
The use of oblique (from the side) illumination gives the image a 3-
dimensional appearance and can highlight otherwise invisible features. A
more recent technique based on this method is Hoffmann's modulation
contrast, a system found on inverted microscopes for use in cell culture.
Oblique illumination suffers from the same limitations as bright field
microscopy (low contrast of many biological samples; low apparent
resolution due to out of focus objects).
Dark field microscopy
Dark field microscopy is a technique for improving the contrast of
unstained, transparent specimens. Dark field illumination uses a carefully
aligned light source to minimize the quantity of directly transmitted
(unscattered) light entering the image plane, collecting only the light
scattered by the sample. Dark field can dramatically improve image
contrast – especially of transparent objects – while requiring little
equipment setup or sample preparation. However, the technique suffers
from low light intensity in the final image of many biological samples, and
continues to be affected by low apparent resolution.
Rheinberg illumination is a special variant of dark field illumination in
which transparent, colored filters are inserted just before the condenser,
so that light rays at high aperture are differently colored than those at
low aperture (e.g. the background to the specimen may be blue while the
object appears self-luminous red); a diatom viewed this way is a classic
demonstration. Other color combinations are possible, but their
effectiveness is quite variable.
Dispersion staining
Dispersion staining is an optical technique that results in a colored image of
a colorless object. This is an optical staining technique and requires no
stains or dyes to produce a color effect. There are five different microscope
configurations used in the broader technique of dispersion staining. They
include brightfield Becke line, oblique, darkfield, phase contrast, and
objective stop dispersion staining.
Phase contrast microscopy
More sophisticated techniques show proportional differences in optical
density. Phase contrast is a widely used technique that shows differences
in refractive index as differences in contrast. It was developed by the
Dutch physicist Frits Zernike in the 1930s (for which he was awarded the
Nobel Prize in 1953). The nucleus in a cell, for example, will show up
darkly against the surrounding cytoplasm. Contrast is excellent; however,
the technique is not for use with thick objects, and frequently a halo is
formed even around small objects, which obscures detail. The system
consists of a circular annulus in the condenser, which produces a cone of
light. This cone is superimposed on a similarly sized ring within the phase
objective. Every objective has a different ring size, so for every
objective another condenser setting has to be chosen. The ring in the
objective has special optical properties: first of all, it reduces the
intensity of the direct light, but more importantly it creates an
artificial phase difference of about a quarter wavelength. As the physical
properties of this direct light have changed, interference with the
diffracted light occurs, resulting in the phase contrast image. One
disadvantage of phase-contrast microscopy is halo formation (the halo
light ring).
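The phase delay that this technique converts into amplitude differences can be written explicitly. For a specimen feature of thickness t and refractive index n_s in a medium of index n_m, the standard relation is shown below; the numbers chosen for a nucleus in cytoplasm are illustrative, not taken from the text:

\[
\Delta\varphi = \frac{2\pi\,(n_s - n_m)\,t}{\lambda}
\approx \frac{2\pi\,(1.38 - 1.33)\times 5\ \mu\text{m}}{0.55\ \mu\text{m}}
\approx 2.9\ \text{rad},
\]

a delay of roughly half a wavelength, invisible to the eye until the phase ring converts it into an intensity difference.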
Differential interference contrast microscopy
Superior, and much more expensive, is the use of interference contrast.
Differences in optical density show up as differences in relief. A nucleus
within a cell will actually show up as a globule in the most often used
differential interference contrast system, according to Georges Nomarski.
However, it has to be kept in mind that this is an optical effect, and the
relief does not necessarily resemble the true shape. Contrast is very good
and the condenser aperture can be used fully open, thereby reducing the
depth of field and maximizing resolution. The system consists of a special
prism (Nomarski prism, Wollaston prism) in the condenser that splits light
into an ordinary and an extraordinary beam. The spatial difference between
the two beams is minimal (less than the maximum resolution of the
objective). After passage through the specimen, the beams are reunited by
a similar prism in the objective.
Fluorescence microscopy
Growth of protein crystals results in both protein and salt crystals, both
colorless and microscopic. Recovering the protein crystals requires
imaging, which can be done using the intrinsic fluorescence of the protein
or by transmission microscopy. Both methods require an ultraviolet
microscope, as protein absorbs light at 280 nm; protein also fluoresces at
approximately 353 nm when excited with 280 nm light.
Since fluorescence emission differs in wavelength (color) from the
excitation light, an ideal fluorescent image shows only the structure of
interest that was labeled with the fluorescent dye. This high specificity led
to the widespread use of fluorescence light microscopy in biomedical
research. Different fluorescent dyes can be used to stain different biological
structures, which can then be detected simultaneously, while still being
specific due to the individual color of the dye.
To block the excitation light from reaching the observer or the detector,
filter sets of high quality are needed. These typically consist of an excitation
filter selecting the range of excitation wavelengths, a dichroic mirror, and
an emission filter blocking the excitation light. Most fluorescence
microscopes are operated in the Epi-illumination mode (illumination and
detection from one side of the sample) to further decrease the amount of
excitation light entering the detector.
An example of fluorescence microscopy today is two-photon or multi-
photon imaging. Two photon imaging allows imaging of living tissues up to
a very high depth by enabling greater excitation light penetration and
reduced background emission signal. A recent development using this
technique is superpenetration multiphoton microscopy, which allows imaging
at greater depths than two-photon or multiphoton imaging alone by
incorporating adaptive optics into the system. The approach, pioneered by
the Cui Lab at the Howard Hughes Medical Institute and recently reported by
Boston University, focuses light through static and dynamic strongly
scattering media; by utilizing adaptive optics, it provides the wavefront
control needed for transformative impacts on deep-tissue imaging.
Confocal microscopy
Confocal microscopy uses a scanning point of light and a pinhole to prevent
out of focus light from reaching the detector. Compared to full sample
illumination, confocal microscopy gives slightly higher resolution, and
significantly improves optical sectioning. Confocal microscopy is, therefore,
commonly used where 3D structure is important.
Deconvolution
Fluorescence microscopy is a powerful technique to show specifically
labeled structures within a complex environment and to provide three-
dimensional information of biological structures.
However, this information is blurred by the fact that, upon illumination, all
fluorescently labeled structures emit light, irrespective of whether they are
in focus or not. So an image of a certain structure is always blurred by the
contribution of light from structures that are out of focus. This
phenomenon results in a loss of contrast especially when using objectives
with a high resolving power, typically oil immersion objectives with a high
numerical aperture.
However, blurring is not caused by random processes, such as light
scattering, but can be well defined by the optical properties of the image
formation in the microscope imaging system. If one considers a small
fluorescent light source (essentially a bright spot), light coming from this
spot spreads out further from our perspective as the spot becomes more
out of focus. Under ideal conditions, this produces an "hourglass" shape of
this point source in the third (axial) dimension. This shape is called the
point spread function (PSF) of the microscope imaging system. Since any
fluorescence image is made up of a large number of such small fluorescent
light sources, the image is said to be "convolved by the point spread
function".
Knowing this point spread function means that it is possible to reverse this
process to a certain extent by computer-based methods commonly known
as deconvolution microscopy. There are various algorithms available for 2D
or 3D deconvolution. They can be roughly classified into nonrestorative and
restorative methods. While the nonrestorative methods can improve
contrast by removing out-of-focus light from focal planes, only the
restorative methods can actually reassign light to its proper place of origin.
Processing fluorescent images in this manner can be an advantage over
directly acquiring images without out-of-focus light, such as images from
confocal microscopy, because light signals otherwise eliminated become
useful information. For 3D deconvolution, one typically provides a series of
images taken from different focal planes (called a Z-stack) plus the
knowledge of the PSF, which can be derived either experimentally or
theoretically from knowing all contributing parameters of the microscope.
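As a concrete illustration of restorative deconvolution, here is a minimal sketch using the Richardson-Lucy algorithm as implemented in scikit-image. The Gaussian PSF and the two-spot test image are stand-ins; a real workflow would use a measured or theoretically derived PSF as described above.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

# Stand-in PSF: a small normalized Gaussian kernel.
x = np.arange(-3, 4)
g = np.exp(-x**2 / 2.0)
psf = np.outer(g, g)
psf /= psf.sum()

# Synthetic "true" image: two bright point sources on a dark field.
truth = np.zeros((64, 64))
truth[20, 20] = truth[40, 44] = 1.0

# Blur with the PSF to mimic out-of-focus light spreading.
blurred = convolve2d(truth, psf, mode="same")

# Iteratively reassign light toward its place of origin (30 iterations).
restored = richardson_lucy(blurred, psf, 30)
```

The restored image concentrates the blurred light back toward the two original spots, which is exactly the "reassignment of light to its proper place of origin" that distinguishes restorative methods.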
Super-Resolution microscopy
An example of super-resolution microscopy is the imaging of Her3 and Her2,
targets of the breast cancer drug Trastuzumab, within a cancer cell. A
multitude of super-resolution microscopy techniques that circumvent the
diffraction barrier have been developed in recent times. This is mostly
achieved by imaging a sufficiently static sample multiple times and either
modifying the excitation light or observing stochastic changes in the
image. Knowledge of, and chemical control over, fluorophore photophysics
is at the core of these techniques, by which resolutions of ~20 nanometers
are regularly obtained.
Serial time-encoded amplified microscopy
Serial time-encoded amplified microscopy (STEAM) is an imaging method
that provides ultrafast shutter speed and frame rate by using optical image
amplification to circumvent the fundamental trade-off between sensitivity
and speed, and a single-pixel photodetector to eliminate the need for a
detector array and its readout-time limitations. The method is at least
1000 times faster than state-of-the-art CCD and CMOS cameras.
Consequently, it is potentially useful for a broad range of scientific,
industrial, and biomedical applications that require high image acquisition
rates, including real-time diagnosis and evaluation of shockwaves,
microfluidics, MEMS, and laser surgery.
Extensions
Most modern instruments provide simple solutions for micro-photography
and electronic image recording. However, such capabilities are not always
present, and the more experienced microscopist will in many cases still
prefer a hand-drawn image to a photograph, because a microscopist with
knowledge of the subject can accurately convert a three-dimensional image
into a precise two-dimensional drawing, whereas in a photograph or other
image-capture system only one thin plane is ever in good focus. The
creation of careful and accurate micrographs requires a microscopical
technique using a monocular eyepiece. It is essential that both eyes are
open and that the eye that is not observing down the microscope is instead
concentrated on a sheet of paper on the bench beside the microscope. With
practice, and without moving the head or eyes, it is possible to accurately
record the observed details by tracing around the observed shapes while
simultaneously "seeing" the pencil point in the microscopical image.
Practicing this technique also establishes good general microscopical
technique. It is always less tiring to observe with the microscope focused so
that the image is seen at infinity and with both eyes open at all times.
X-ray microscopy
Resolution depends on the wavelength of the illumination. Electron
microscopy, developed since the 1930s, uses electron beams instead of
light; because of the much smaller wavelength of the electron beam, its
resolution is far higher.
Though less common, X-ray microscopy has also been developed, since the
late 1940s. The resolution of X-ray microscopy lies between that of light
microscopy and electron microscopy.
Electron microscope
Until the invention of sub-diffraction microscopy, the wavelength of the
light limited the resolution of traditional microscopy to around 0.2
micrometers. To gain higher resolution, an electron beam with a far
smaller wavelength is used in electron microscopes.
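The wavelength advantage can be quantified with the de Broglie relation; the 100 kV figure below is an illustrative accelerating voltage, not one taken from the text:

\[
\lambda = \frac{h}{\sqrt{2 m_e e V}} \approx \frac{1.226\ \text{nm}}{\sqrt{V/\text{volt}}},
\qquad
V = 100\ \text{kV} \;\Rightarrow\; \lambda \approx 0.0039\ \text{nm},
\]

about five orders of magnitude shorter than visible light (a relativistic correction shortens it slightly further).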
Transmission electron microscopy (TEM) is quite similar to compound light
microscopy: an electron beam is sent through a very thin slice of the
specimen. The resolution limit in 2005 was around 0.05 nanometer and has
not increased appreciably since that time.
Scanning electron microscopy (SEM) visualizes details on the surfaces of
specimens and gives a very nice 3D view. It gives results much like those of
the stereo light microscope. The best resolution for SEM in 2011 was 0.4
nanometer.
Electron microscopes equipped for X-ray spectroscopy can provide
qualitative and quantitative elemental analysis.
Scanning probe microscopy
This is a sub-diffraction technique. Examples of scanning probe
microscopes are the atomic force microscope (AFM), the scanning tunneling
microscope, the photonic force microscope and the recurrence tracking
microscope. All such methods use the physical contact of a solid probe tip
to scan the surface of an object, which is supposed to be almost flat.
Ultrasonic force
Ultrasonic Force Microscopy (UFM) has been developed in order to
improve the details and image contrast on "flat" areas of interest where
AFM images are limited in contrast. The combination of AFM-UFM allows a
near field acoustic microscopic image to be generated. The AFM tip is used
to detect the ultrasonic waves and overcomes the limitation of wavelength
that occurs in acoustic microscopy. By using the elastic changes under the
AFM tip, an image of much greater detail than the AFM topography can be
generated.
Ultrasonic force microscopy allows the local mapping of elasticity in atomic
force microscopy by the application of ultrasonic vibration to the cantilever
or sample. In an attempt to analyze the results of ultrasonic force
microscopy in a quantitative fashion, a force-distance curve measurement
is done with ultrasonic vibration applied to the cantilever base, and the
results are compared with a model of the cantilever dynamics and tip-
sample interaction based on the finite-difference technique.
Ultraviolet microscopy
Ultraviolet microscopes have two main purposes. The first is to utilize the
shorter wavelength of ultraviolet electromagnetic energy to improve the
image resolution beyond that of the diffraction limit of standard optical
microscopes. This technique is used for non-destructive inspection of
devices with very small features such as those found in modern
semiconductors. The second application for UV microscopes is contrast
enhancement, where the response of individual samples is enhanced
relative to their surroundings due to the interaction of light with the
molecules within the sample itself. One example is in the growth of protein
crystals. Protein crystals are formed in salt solutions. As salt and protein
crystals are both formed in the growth process, and both are commonly
transparent to the human eye, they cannot be differentiated with a
standard optical microscope. As the tryptophan of protein absorbs light at
280 nm, imaging with a UV microscope with 280 nm bandpass filters
makes it simple to differentiate between the two types of crystals. The
protein crystals appear dark while the salt crystals are transparent.
Infrared microscopy
The term infrared microscopy refers to microscopy performed at infrared
wavelengths. In the typical instrument configuration a Fourier-transform
infrared (FTIR) spectrometer is combined with an optical microscope and an
infrared detector. The infrared detector can be a single-point detector, a
linear array or a 2D focal-plane array. The FTIR provides the ability to
perform chemical analysis via infrared spectroscopy, while the microscope
and the point or array detector enable this chemical analysis to be
spatially resolved, i.e. performed at different regions of the sample. As
such, the technique is also called infrared microspectroscopy. It is
frequently used for infrared chemical imaging, where the image contrast is
determined by the response of individual sample regions to particular IR
wavelengths selected by the user, usually specific IR absorption bands and
associated molecular resonances. A key limitation of conventional infrared
microspectroscopy is that the spatial resolution is diffraction-limited;
specifically, it is limited to a figure related to the wavelength of the
light. For practical IR microscopes, the spatial resolution is limited to
1-3× the wavelength, depending on the specific technique and instrument
used. For mid-IR wavelengths, this sets a practical spatial resolution
limit of ~3-30 μm.
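A quick check of those figures using the rule just stated, with a representative mid-IR wavelength of 10 µm:

\[
\Delta x \approx (1\text{-}3)\times\lambda = (1\text{-}3)\times 10\ \mu\text{m} = 10\text{-}30\ \mu\text{m},
\]

consistent with the ~3-30 µm practical range quoted, the lower bound corresponding to the shorter mid-IR wavelengths near 3 µm.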
Digital holographic microscopy
In digital holographic microscopy (DHM), interfering wave fronts from a
coherent (monochromatic) light-source are recorded on a sensor. The
image is digitally reconstructed by a computer from the recorded
hologram. Besides the ordinary bright field image, a phase shift image is
created.
DHM can operate both in reflection and transmission mode. In reflection
mode, the phase shift image provides a relative distance measurement and
thus represents a topography map of the reflecting surface. In transmission
mode, the phase shift image provides a label-free quantitative
measurement of the optical thickness of the specimen. Phase shift images
of biological cells are very similar to images of stained cells and have
successfully been analyzed by high content analysis software.
A unique feature of DHM is the ability to adjust focus after the image is
recorded, since all focus planes are recorded simultaneously by the
hologram. This feature makes it possible to image moving particles in a
volume or to rapidly scan a surface. Another attractive feature is DHM's
ability to use low-cost optics, correcting optical aberrations in software.
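Numerical refocusing works because a recorded complex field can be propagated to any plane in software. Below is a minimal sketch of angular-spectrum propagation with NumPy; it assumes a complex `field` has already been reconstructed from the hologram, and the grid size, pixel pitch, wavelength and refocus distance are all illustrative values:

```python
import numpy as np

def refocus(field, z, wavelength, pixel_pitch):
    """Propagate a complex optical field a distance z (angular spectrum method)."""
    n = field.shape[0]  # assume a square n-by-n field
    # Spatial frequencies of the sampled field (cycles per metre).
    fx = np.fft.fftfreq(n, d=pixel_pitch)
    fx, fy = np.meshgrid(fx, fx)
    # Longitudinal wavenumber; evanescent components are suppressed.
    arg = (1.0 / wavelength) ** 2 - fx**2 - fy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Illustrative values: 512x512 field, 3.45 µm pixels, 633 nm laser,
# refocused 150 µm downstream of the recorded plane.
field = np.ones((512, 512), dtype=complex)  # placeholder reconstructed field
refocused = refocus(field, z=150e-6, wavelength=633e-9, pixel_pitch=3.45e-6)
```

Scanning z over a range of values is what lets DHM bring any plane of a recorded volume into focus after the fact.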
Scientists have been working on practical designs and prototypes for X-ray
holographic microscopes, although development of a suitable laser source
has been protracted.
Amateur microscopy
Amateur Microscopy is the investigation and observation of biological and
non-biological specimens for recreational purposes. Collectors of minerals,
insects, seashells, and plants may use microscopes as tools to uncover
features that help them classify their collected items. Other amateurs may
be interested in observing the life found in pond water and of other
samples. Microscopes may also prove useful for water quality assessment
for people who keep a home aquarium. Photographic
documentation and drawing of the microscopic images are additional tasks
that augment the spectrum of tasks of the amateur. There are even
competitions for photomicrograph art. Participants of this pastime may
either use commercially prepared microscopic slides or engage in the task
of specimen preparation.
While microscopy is a central tool in the documentation of biological
specimens, it is, in general, insufficient to justify the description of a new
species based on microscopic investigations alone. Often genetic and
biochemical tests are necessary to confirm the discovery of a new species.
A laboratory and access to academic literature, which is specialized and in
general not available to amateurs, are necessities. There is, however, one
huge advantage that amateurs have over professionals: time to explore
their surroundings. Often, advanced amateurs team up with professionals
to validate their findings and (possibly) describe new species.
CHAPTER TWENTY
THE HISTOLOGICAL MICROSCOPE
Histology is the study of tissues and cells. Given the small size of these
structures, which are too small to be clearly discerned with the naked eye,
the histologist requires systems that magnify the image: microscopes.
There are currently many types of microscope, but the bright-field optical
microscope is the basic study tool, and as such the student should become
thoroughly familiar with it.
To begin with, there are a number of facts that the student must know:
(a) The human eye, like any other optical system, has a limit: below a
certain separation it is not able to distinguish two points as separate,
seeing them instead as a single item. This minimum separation at which two
points can still be recognized as separate is known as the "resolution
limit". This limitation is what has driven the search for systems that
enlarge the image.
(b) The development of the first lenses in the 17th century led to the
emergence of the first rudimentary microscopes. Since then, microscopes
have evolved into the current instruments. Despite this development,
optical microscopes have a resolution limit, determined by the nature of
light, that restricts the useful magnification.
(c) Nowadays even the best optical microscopes do not exceed about
1000-1500× magnification, and there is a resolution limit of 0.2 µm
(0.0002 mm).
(d) Simple microscopes are those that have one lens, or a single set of
lenses; colloquially, a magnifying glass.
(e) Compound microscopes are those that have two lenses or sets of lenses,
known as the objective lenses and the eyepieces. Current laboratory
microscopes are compound microscopes.
(f) In a compound microscope it is essential that the sample to be observed
is very thin, so that light can pass through it and reach the objective
lens.
(g) Light passes through the different tissues of a section in much the
same way, so it is almost impossible to distinguish the different
components of an unstained section. This is why it is necessary to stain
the sections in order to distinguish their components.
CONCEPTS AND DEFINITIONS:
-Magnification: the number of times that a microscope system can increase
the size of the image of an object.
-Resolving power: the ability of an optical system to display two points
very near each other as separate elements.
-Resolution limit: the smallest distance at which a microscope can display
two close points as separate elements rather than as a single point. This
parameter depends on the wavelength of the light (its energy) and on the
numerical aperture of the lens used. In practical terms, in a conventional
optical microscope with a 100× objective and a 15× eyepiece (i.e. about
1500× magnification) it is 0.2 µm. It is calculated using the formula
LR = λ / (2·NA), where λ represents the wavelength and NA the numerical
aperture (see the worked example after this list).
-Numerical aperture: the light-gathering ability of the lens (it measures
the cone of light that an objective can accept). It is unique to each
objective.
-Depth of field: the distance between the most separated parts of an object
(along the optical axis of the microscope) that can be seen without
changing the focus. This distance is larger in low-magnification objectives
and smaller in high-magnification ones.
-Eyepiece: the lens assembly that forms the enlarged final image that we
observe.
-Revolving nosepiece: current microscopes usually carry several objectives,
arranged on a wheel called the nosepiece (revolver). To select the desired
objective, the nosepiece is rotated to the appropriate position.
Microscopes may be equipped with 3-, 4- or 5-position nosepieces.
-Objective: the device containing the set of lenses that captures the light
from the sample and generates the first enlargement; various magnifications
exist.
-Stage: the surface on which the sample is placed for observation.
-Köhler illumination system: allows the light beam to be centred on the
optical axis of the microscope. (Not all microscopes have this system.)
-Condenser: allows the beam of light to be focused on a small area of the
sample. (Not visible in the schematic.)
-Aperture diaphragm: allows the contrast of the image to be adjusted.
-Light intensity control: many current microscopes have this device.
-Stage translation knobs: controls that allow the sample to be moved across
the length and breadth of the stage (the X and Y axes).
-Fine (micrometric) focus control: allows fine focusing of the sample
through slight displacements of the stage (along the Z axis).
-Coarse focus control: allows approach to the focal plane through large
displacements of the stage along the (vertical) Z axis.
-Stand: the metal housing that serves as support for the rest of the
system. The stand attaches to the base, forming a solid and stable block.
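As referenced above, here is a worked example of the resolution-limit formula, taking green light and the numerical aperture of a typical oil-immersion objective (the specific values are illustrative):

\[
LR = \frac{\lambda}{2\,NA} = \frac{0.55\ \mu\text{m}}{2 \times 1.4} \approx 0.2\ \mu\text{m},
\]

which is the 0.2 µm practical limit quoted throughout these notes.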
Different sets of lenses and other optical elements define the light path
of the microscope.
1. The light necessary for the operation of the microscope is generated by
a bulb.
2. The diameter of the beam is restricted by an aperture (a metal plate
with a hole).
3. The condenser lens condenses the beam onto the sample.
4. The objective produces the first enlargement of the image; the
magnification depends on the objective chosen.
5. A prism changes the direction of the light to permit comfortable
observation at a right angle.
6. The eyepiece produces the second and final enlargement of the image.
As the diagram shows, the optical elements present in a microscope are many
and varied, and can be divided into two categories: those intended to
generate, modulate, focus and condense light (diaphragms, condensers,
apertures, etc.) and those intended to enlarge the image (objectives and
eyepieces).
Without a doubt, the most important are the objective and the eyepiece.
-Objectives: microscopes usually carry several objectives on the nosepiece,
four-position nosepieces being the most common. The variety of objectives
on the market is large, though the four most common in laboratory
microscopes are 4×, 10×, 40× and 100×, the last being an immersion
objective.
-Eyepieces: a conventional laboratory microscope has one eyepiece (if
monocular) or two identical eyepieces (if binocular). The most common
eyepieces are 10×, although some manufacturers offer other eyepieces
(15× or 5×, for example) for special jobs.
Taking into account that the total magnification of a microscope is the
product of the objective and eyepiece magnifications, it follows that the
magnification range of a conventional optical microscope is 40× to 1000×,
as the sketch below tabulates.
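A trivial sketch of that product rule, tabulating the usual objective/eyepiece combinations (assuming the common 10× eyepiece named above):

```python
# Total magnification = objective x eyepiece.
objectives = [4, 10, 40, 100]
eyepiece = 10  # the most common eyepiece
for obj in objectives:
    print(f"{obj}x objective -> {obj * eyepiece}x total")
# 4x -> 40x ... 100x -> 1000x: the 40x-1000x range quoted above.
```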
In binocular microscopes one or both eyepieces can be adjustable, allowing
proper binocular vision even for spectacle wearers, whose optical
correction can be compensated by adjusting the eyepieces. In a monocular
microscope this is not necessary.
IN THE MAJORITY OF CASES THERE IS NO NEED TO WEAR GLASSES WITH THE
MICROSCOPE.
At low magnification this phenomenon (the refraction of light at the
glass-air interfaces) barely affects observation, but it becomes a problem
when we use a 100× objective, mainly because at such magnification the
preparation must be very close to the objective, and the change of
direction of the light passing from glass (the slide) to air and again into
glass (the lens) makes it impossible to focus the sample correctly. To
avoid this, a small drop of a special oil (immersion oil) is placed between
the objective lens and the top of the sample. Its refractive index is
similar to that of glass, so the light does not change direction at this
interface, and as a result the sample can be focused without any problems.
If, after studying the sample with the 100× objective, it is necessary to
return to the 40× objective, remember that the sample still carries a drop
of immersion oil that could soil that objective, so it must be cleaned with
a handkerchief or a piece of soft paper soaked in cleaning solution.
Once the observation is finished, remove the preparation and clean it as
just described, as well as the 100× objective.
At the end of the observation session, both the microscope (objectives and
stage) and the preparations should be completely clean.
We should also check that the microscope is properly disconnected, and make
sure the lowest-power objective is left in place to facilitate the next use
of the microscope.
PREPARATION OF SAMPLES
Now that we know that the optical microscope obtains a magnified image by
passing light through a sample and a set of lenses, it is clear that we
cannot study an organ or a piece of a body under the microscope directly,
since light could not cross it. It is necessary to obtain samples fine
enough for light to pass through, which can then be studied under the
microscope.
How are fine samples made? The answer is obvious: fine cuts must be made of
the organ to be studied. This is not easy, as the student may verify by
trying to cut a piece of fresh meat with a very sharp knife: thin slices
cannot be obtained. Fresh meat has neither the consistency nor the hardness
required for sufficiently fine cuts; if, however, we freeze the meat or let
it dry, it does become possible to cut it, since both processes increase
its hardness and consistency.
As the organs to be studied have a consistency similar to the piece of
fresh meat in the example, their hardness and consistency must be increased
artificially. There are different methods, the most widely used being
paraffin embedding. It is not a simple process, and it involves numerous
steps.
1. FIXATION
Biological material, from the moment of death, undergoes a process of
degradation known as putrefaction, due both to endogenous causes
(autolysis) and to exogenous ones (bacterial attack). Clearly this
degradation makes the study of biological structures under the microscope
progressively more difficult (more degradation with more time).
To avoid this degradation it is necessary to stabilize the structures and
make them unavailable to such degradation; for this, chemicals known as
"fixatives" are used. The chemical nature of fixatives is varied, but they
tend to be molecules with several active groups that bind to different
molecules of the cell, creating an interconnected molecular network against
which bacterial and enzymatic attacks are unsuccessful, thus preserving the
cell structure.
For conventional optical microscopy, fixatives based on formaldehyde are
often used, either in buffer solutions or mixed with other chemical species
(as with picric acid in Bouin's fixative).
How the fixative is applied to the sample depends on the circumstances of
collection.
-In human (and animal) specimens obtained post-mortem (necropsy) or by
direct extraction (biopsy), the fixing method is simple: the sample is
immersed in a container filled with the fixative. The fixative must diffuse
through the tissues to perform its action. At times, depending on the
nature of the tissues, this penetration is not complete and a fixation
gradient appears, the peripheral areas being better fixed and the central
areas worse.
-In the case of experimental animals, and to avoid this fixation gradient,
the perfusion method is commonly used. The idea is simple: fixative liquid
is injected into the cardiovascular system so that it circulates throughout
the body, making fixation homogeneous in all tissues. Typically the
fixative is injected at the level of the heart ventricle or the aorta, and
at the level of the atrium the blood must be drained, to prevent
overpressure of the system causing rupture of capillaries.
In either case the piece is left for a long time to ensure that the
fixative completes its effect. At this point the piece is inserted into a
sample cassette (a small perforated plastic box), which allows samples to
be moved easily from one container to another while maintaining the
integrity of the piece.
After fixation the piece should be cleaned with frequent water baths, first
tap water and then distilled water, to remove all traces of fixative, which
could react with chemical substances that must be used later.
2. EMBEDDING (INCLUSION)
Embedding is the process by which the hardness and consistency of the piece
are increased to permit its sectioning. This is achieved by embedding the
piece in a substance of suitable hardness and consistency; paraffin is
usually used for this process. The challenge is to replace the water inside
and outside the cells with paraffin.
Paraffin is liquid above about 60 °C and solid below this temperature,
which facilitates the diffusion of paraffin through the tissues while it is
liquid. However, another problem must be overcome: paraffin is highly
hydrophobic, i.e. it cannot be mixed with water or with substances in
aqueous media. Therefore the next step the samples must undergo is the
removal of their water: dehydration.
Dehydration of the samples is achieved by gradual replacement of the water
with ethanol, through successive baths of increasing ethanol concentration,
starting with 50% or 70% ethanol, passing through 96% ethanol, and
concluding with several baths of absolute (100%) ethanol.
The dehydrated piece still cannot be passed into paraffin, since ethanol is
not miscible with paraffin. An intermediary agent is used, i.e. a substance
miscible both with ethanol and with paraffin. The commonly used
intermediary is xylene, in which the piece undergoes several baths to
completely replace the ethanol.
With the piece in xylene, it usually passes through a bath of a 50%
xylene-paraffin mixture to favour the penetration of the paraffin. After
this bath come several baths in pure paraffin, until the paraffin has
penetrated the entire piece. All these paraffin baths take place in an oven
at about 60 °C to keep the paraffin liquid. In some laboratories this whole
process is automated using a device (a tissue-processing robot) that moves
the samples from one fluid to another following a preset program, as
sketched below.
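Such a preset program can be pictured as a simple data table. A hedged sketch of one possible schedule, matching the bath sequence described above; the steps are illustrative, not a validated protocol (real times depend on tissue type and size):

```python
# Each step: (reagent, purpose). Ordering follows the text above.
schedule = [
    ("70% ethanol",          "begin dehydration"),
    ("96% ethanol",          "continue dehydration"),
    ("100% ethanol x3",      "complete dehydration"),
    ("xylene x2",            "clear: replace ethanol (miscible with paraffin)"),
    ("xylene-paraffin 50%",  "ease paraffin penetration"),
    ("paraffin x3 (~60 C)",  "infiltrate tissue with liquid paraffin"),
]
for reagent, purpose in schedule:
    print(f"{reagent:22s} - {purpose}")
```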
Once the paraffin has had time to penetrate the tissues, the next step is
to form a block of the piece and paraffin that can be used to obtain
sections on the microtome.
The easiest way is to use a mold into which the paraffin is poured and the
processed sample is introduced, letting the whole cool so that it
solidifies. In many laboratories an embedding station is used for this
purpose. These stations have a tank of liquid paraffin (kept at about
60 °C) from which paraffin can be dispensed onto the mold through a tap;
there is also a warm chamber where the molds are stored, together with a
hot plate and a cold plate.
The dynamics are simple: the mold is placed under the paraffin tap and
filled with liquid paraffin; the piece is placed and oriented, and finally
the base of the embedding cassette is added. The whole is then moved
carefully onto the cold plate to achieve a rapid solidification that does
not form crystals; once solidified, the mold is removed, leaving a block
ready to cut.
4. MOUNTING THE SECTIONS
To view the sections under the microscope they must be mounted on a thin
sheet of glass (a slide).
To do this, the sections are first separated, one by one or in small
ribbons, so that they can be mounted on the slide.
The sections come off the microtome wrinkled, owing to friction between the
blade and the block, so they must be stretched for correct observation.
To achieve this the sections are floated on a bath of lukewarm water (about
35-36 °C). With the heat, the paraffin expands, stretching the section
until it is completely smooth. When the sections are stretched, they are
simply picked up onto the slides. A thin layer of an adhesive substance has
previously been spread on the slide surface; it secures the section to the
slide and prevents it from coming off in the subsequent processes.
Poly-L-lysine is used as the adhesive substance in many laboratories.
Once the sections are on the slide, they are left to dry (usually in an
oven at 35-36 °C).
5. STAINING
Finely cut biological material is basically transparent, so nothing can be
distinguished when it is observed under the microscope. This is why samples
must be stained in order to distinguish the cells and tissues.
Staining protocols are performed as sequential baths in various dyes and
reagents. There are different ways of doing this:
-Slides can be laid on a rack over a tray, and the different dyes and
reagents applied with a pipette.
-Several slides can be placed in special glass racks that are moved from
one bucket to another, each bucket containing the relevant dye or reagent.
-The staining of racks can be automated, using a robot that changes the
rack from bath to bath at preset times.
Whatever the method used, the staining protocol (the sequence of steps)
depends on what one wants to display in the processed tissue.
Staining protocols (techniques) are very numerous and varied, and whole
books have been written on them. At the end of this topic we will summarize
the most common techniques used in histology.
Among all staining techniques, one stands out above the others as by far
the most widely used around the world: the hematoxylin-eosin technique.
Hematoxylin is a dye mixture (there are different variations) that is basic
in nature, so it binds to acidic substances. In the cell, the most acidic
area is the nucleus, since it is full of nucleic acids (DNA), so the
nucleus stains a blue-violet colour with hematoxylin.
Eosin is a pink-red dye, dissolved in ethanol, that is acidic in nature, so
it binds to the basic (high pH) structures of the tissues. The structures
with the highest pH in tissue are proteins, because of the sulphur and
nitrogen bridges they contain. Thus, in samples processed with this
technique, the cytoplasm and the extracellular matrix, both rich in
proteins, preferentially stain pink.
The whole staining process can be divided into three phases (sketched in
code after this list):
-The first phase consists of dewaxing and hydration of the sections. To
dewax, the sections are bathed in xylene, which dissolves the paraffin;
hydration is a sequence of steps that reverses the dehydration.
-With the sample in an aqueous medium, the chosen staining protocol
follows. A protocol usually consists of a sequence of baths in dyes, rinses
and reagents that is specific to each technique.
-The third phase is the permanent mounting of the specimens for observation
under the microscope. A mounting medium is placed on top of the sections to
fix and protect them: a liquid substance that hardens (polymerizes) in air,
plus a very thin sheet of glass (the coverslip), together forming a stable
and durable whole. The mounting medium is usually a hydrophobic substance,
so the sections must be dehydrated before mounting, following a protocol
similar to that used during embedding.
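As mentioned above, the three phases can be written down as one linear sequence of baths. A hedged sketch of a generic hematoxylin-eosin run, with bath names following the text; it is illustrative only, not a validated protocol:

```python
# Generic H&E sequence: dewax/hydrate, stain, dehydrate/mount.
protocol = [
    # Phase 1: dewaxing and hydration
    ("xylene",                      "dissolve the paraffin"),
    ("100% -> 70% ethanol series",  "hydrate (reverse of dehydration)"),
    ("water",                       "reach an aqueous medium"),
    # Phase 2: staining
    ("hematoxylin",                 "nuclei stain blue-violet"),
    ("water rinse",                 "remove excess dye"),
    ("eosin",                       "cytoplasm and matrix stain pink"),
    # Phase 3: dehydration and permanent mounting
    ("70% -> 100% ethanol series",  "dehydrate"),
    ("xylene",                      "clear before the hydrophobic mountant"),
    ("mounting medium + coverslip", "stable, durable permanent preparation"),
]
for bath, purpose in protocol:
    print(f"{bath:30s} {purpose}")
```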
The mounted preparation is left to dry for a few hours and kept in the dark
to prevent light from degrading the colours.
ELECTRON MICROSCOPY
Using electrons in a microscope involves a series of problems:
-First, it must be taken into account that electrons are electrically
charged, so if the electrons of the beam encounter atoms along their path
they are deflected, by electrostatic interaction, by the electron clouds of
those atoms, making it impossible to obtain a coherent image. This is why
the interior of the microscope must be kept under the most complete vacuum
possible.
-Secondly, for a microscope to behave as such, lenses are needed to change
the path of the beam. The problem is that optical lenses are useless for
this purpose, so the electron microscope must be given lenses of another
kind. The way to bend a beam of electrons is with a magnetic field, which
is why the lenses of an electron microscope are electromagnetic coils that
generate such fields.
-Third, there is the thickness of the section. If a section intended for
optical microscopy (5 to 10 µm thick) were used, the amount of biological
material within that thickness would make the image incomprehensible. This
is why sections for this type of microscopy must be much finer, ranging
between 40 and 50 nm (0.04-0.05 µm). Nor can glass slides be used (however
thin), since the electron beam would not pass through them; instead a thin
metal grid (usually copper) serves as the support, the section resting on
the filaments of the grid and remaining suspended over the spaces between
them.
-Next, the problem of contrast ("staining") must be solved. Biological
samples present very similar, and in general very low, contrast to
electrons, so contrast must be increased. The dyes used in optical
microscopy are useless in electron microscopy (they offer hardly any
contrast to electrons), so other contrasting substances are necessary.
Basically, uranyl acetate is used: this molecule binds chemically to the
biological material, and thanks to the large number of electrons of the
uranium atom, wherever there is an accumulation of biological material
(with uranyl acetate chemically bound to it) the beam electrons do not
traverse the sample, being deflected by the electron cloud, whereas where
the concentration of biological material is low the electrons pass through
more easily. The final image is thus composed of the presence or absence of
electrons.
-The last problem to be solved is obtaining the image. The human eye is not
able to see electrons, so a system must be devised to obtain a visible
image.
A piece of plastic attached to the blade forms a cavity at its end, which
is filled with water; the sections float there once they are cut, and from
there they are captured with the sample grids.
With the ultramicrotome, thicker sections (1-2 µm) can also be obtained for
study under conventional (bright-field) optical microscopy; these sections
are known as "semi-thin".
As the secondary electrons arrive at each cell of the detector matrix in
different numbers, owing to the angle at which they are generated, the
image shows light and dark areas that reflect the three-dimensional surface
of the sample, giving additional information about tissues beyond what can
be obtained with a conventional optical microscope or even a TEM.
Dark-field device:
In this device the light does not fall perpendicularly on the sample but
tangentially, so only light scattered by the sample is directed toward the
objective; areas in which there is no material appear completely black,
hence the name "dark field".
Fluorescence microscope:
This microscope uses ultraviolet light instead of white light. This light
causes fluorescence in certain substances (fluorochromes) that are used as
dyes. In this type of microscope the background is black, and the
fluorochrome-labelled structures emit their own light.
Confocal microscope:
The confocal microscope is an optical microscope with special features and
of recent emergence. Its benefits are unique: for example, thick specimens
can be examined, from which the microscope obtains images not of the whole
thickness of the sample but of optical sections of small thickness (in the
style of computerized axial tomography), which computer programs can later
rebuild into a three-dimensional structure displayed on a computer screen.
It also allows the study of live samples (cell cultures) over time, which
makes it ideal for understanding certain biological processes.
It is a microscope of recent arrival, not because of the optical complexity
of its construction, which was already in operation in the 1950s, but
because of the complexity of the hardware and computer software necessary.
The confocal microscope has several laser emitters, which are the light
sources used. Each of these lasers has a different wavelength and strikes
the sample, where it excites fluorochromes (which act as "dyes"); each
fluorochrome responds at a given wavelength, allowing multiple labels on
the same sample and revealing different structures in different colours.
Monochromatic techniques
These techniques use a single dye that stains all tissues alike; differentiation is achieved thanks to the different nature of the tissues. Thus an epithelium formed by a continuous layer of cells stains more intensely than the connective tissue, where fibres (which stain less) coexist with scattered cells.
These techniques include:
-Aniline blue
-Toluidine blue
-Heidenhain's hematoxylin
-Neutral red
These techniques are often performed on paraffin sections of between 5 and 10 µm.
Trichrome techniques
As the name indicates, these techniques use a combination of three colours. A characteristic of these techniques is that they tend to stain the connective tissue differentially, since one of the dyes tends to have affinity for the fibres (collagen) of the extracellular matrix of this tissue.
These techniques are often used, like hematoxylin-eosin, for the topographic study of organs, with the added value of the ease of recognition of the connective tissue.
The most commonly used trichrome techniques are:
-Masson's trichrome
-Mallory's trichrome
These techniques are often performed on paraffin sections of between 5 and 10 µm.
PAS-H technique
This technique is based on the combined use of periodic acid together with the Schiff reagent.
This combination selectively stains mucopolysaccharides. These are found in the basal laminae of epithelia, in mucous secretions and in the glycocalyx of microvilli, so these elements are selectively stained a pink-red colour.
It is usually combined with hematoxylin, which stains nuclei a blue-violet colour and allows better location of the elements stained with PAS.
This technique is usually performed on paraffin sections of between 5 and 10 µm.
This is why the technique is preferably used in the study of the mucous secretions of the digestive tract, to differentiate neutral mucous secretions (pink-red) from acidic mucous secretions (blue).
This technique is usually performed on paraffin sections of between 5 and 10 µm.
Gordon-Sweets technique
This technique is based on the use of silver salts which, in combination with the other reagents used in the technique, selectively stain the reticular fibres of the connective tissue black.
This technique is usually performed on paraffin sections of between 5 and 10 µm.
Sudan IV technique
Sudan IV is a fat-soluble dye, so it is very suitable for staining fatty elements such as adipocytes.
To perform this technique, no organic solvent should be used during the processing of the tissues, because it would dissolve the fatty elements that are to be stained; this makes it impossible to embed the samples in paraffin, so the samples are usually cut by freezing, at a thickness of approximately 30 µm, with the cryostat.
Tomato lectin technique
Lectins are proteins that bind selectively to certain sugars; tomato lectin binds N-acetylglucosamine which, in the liver for example, is found on macrophages (Kupffer cells) and on the endothelial walls of vessels (sinusoids).
Tomato lectin combined with a marker molecule (such as biotin) is used in histology. The biotin can then be revealed in a developing process. The process is simple: the labelled lectin is placed over the section, sufficient time is allowed for the lectin to bind its sugar, and the preparation is then developed and viewed under the microscope.
This technique can be performed both on sections of about 30 µm made by freezing in the cryostat and on paraffin sections of 5 to 10 µm.
Immunohistochemical techniques are based on the natural phenomenon of the specific recognition of one molecule (antigen) by another (antibody), known as the immune reaction. If a labelled antibody is used, the result is that the label will be found wherever the antigen is (given the specificity of the antigen-antibody reaction).
The bioindustry now offers many labelled antibodies against a wide variety of antigens.
Nissl technique
This technique is commonly used in the study of nervous tissue rather than hematoxylin-eosin. It is a topographic technique that shows the distribution of the neuronal somas in the neuropil.
The dye used in this technique is toluidine blue, applied after pre-processing in potassium dichromate. Cresyl violet is used as the dye in some variants.
Neuronal somas are stained dark blue while the neuropil appears almost white. Glial cells (their cell bodies) are also stained dark blue, and the different types can be distinguished by their shape and location.
In sufficiently thin sections (up to 5-7 µm thick), at high magnifications (400-1,000x), accumulations known as Nissl bodies can be seen in the cytoplasm of the neuronal body (soma); these correspond to stacks of cisternae of the rough endoplasmic reticulum.
This technique is usually performed on paraffin sections of between 5 and 10 µm.
This involves various circumstances that must be taken into account:
-Firstly, it is obvious that complete neurons cannot be seen in a section of 5, 10 or 30 µm thickness; much thicker sections (100-150 µm) are needed.
-Secondly, only a small proportion of neurons should be stained, since if they were all stained, with all their elements, one neuron could not be distinguished from another.
Both circumstances are met in the Golgi technique, in which the nervous tissue is impregnated in the block with a solution of silver nitrate after hardening in potassium dichromate. This process manages to impregnate a very small proportion of the total number of neurons (approximately 0.03%). The mechanism by which some neurons are stained and others are not is still under discussion today.
The Golgi technique is done in block, i.e. with the complete brain (or a portion of it). The sections have to be thick (100-150 µm), so the piece is not embedded in paraffin or cut with a rotary microtome. The piece impregnated with the Golgi technique may be embedded in celloidin (purified collodion), or simply held in place with gelatine or even paraffin. For cutting, other less sophisticated devices are often used, such as a vibratome, a sliding microtome or even a simple barber's razor blade on a hand microtome.
As a result of this technique, complete neurons are observed under the microscope, and the winding course of their dendrites can be followed using the stage movement and the focus knob. As silver nitrate cannot cross the myelin sheath, only unmyelinated axons will be seen.
Glial cells and blood vessels can also be viewed.
OTHER MICROTOMES
In addition to the rotary (Minot) microtome for paraffin sectioning, already explained, other microtomes are used in routine practice for the development of specific techniques in histology.
Ultramicrotome
In essence this instrument follows the mechanical model of the rotary microtome, but with considerably greater precision, in order to obtain the much thinner sections (40-50 nm) used in transmission electron microscopy. A different type of harder blade, made of specially treated glass or even with a diamond edge, is used in this microtome.
Cryostat
The cryostat is, in essence, a rotary microtome inside a refrigerated cabinet. It is obviously used to cut samples at low temperatures, for techniques that require the maintenance of biological activity in the samples, for example histoenzymatic techniques and immunohistochemistry.
Samples are frozen, after treatment with a cryoprotectant, and cut, the sections being mounted on slides, on which the histochemical or immunohistochemical treatment is usually carried out.
Vibratome
The vibratome, as its name suggests, is based on a vibrating arm and produces rather thick sections (between 25 and 500 µm). It is typically used for histoenzymatic and immunohistochemical techniques, since it does not require embedding the pieces in paraffin, although currently the cryostat is used more for these techniques.
Occasionally, the vibratome has been used to make sections for the Golgi technique.
Hand microtome
It is the simplest of the microtomes: simply a screw that raises the sample above a flat surface, over which a sharp blade is slid to make the cuts. Its cutting range goes from 20 to 150 µm and it is widely used in plant histology. In animal histology it is used for the thicker sections of the Golgi technique.
ARTIFACTS OF STAINING
As the student has seen, the technique is complex and involves many steps, so it can happen that in any of these steps a distortion or small defect is produced, which is then found during the study of the sample under the microscope; this is what is known as a staining artifact.
In themselves, artifacts have no histological value, but when they are found during the study of preparations the student may confuse them with elements of the tissue, which makes it advisable for the student to be aware of their existence.
BANDING
The banded appearance shown by some preparations is typically caused by an incorrect angle of incidence of the blade.
At other times banding occurs when trying to cut too finely a tissue that does not have sufficient hardness and/or consistency.
BUBBLES
Some preparations show small bubbles of air trapped in the mounting medium.
This artifact usually occurs when the mounting medium is very dense and solidifies quickly.
NICKS
Nicks are produced by small imperfections in the edge of the blade when cutting on the microtome.
These nicks cause the dragging and destruction of a straight band of tissue whose width corresponds to the width of the imperfection in the blade edge, running across the whole of the tissue in the direction of the cut.
FOLDS
Folds often occur during the mounting of the section on the slide.
Sometimes it is a simple ripple of the tissue; sometimes it is a large fold folded onto itself.
CHAPTER TWENTY ONE
SPECTROPHOTOMETRY
The absorption of light by a coloured solution is described by Beer's law,

log(I0/I) = k·c·l

where I0 is the intensity of transmitted light using the pure solvent, I is the intensity of the transmitted light when the coloured compound is added, c is the concentration of the coloured compound, l is the distance the light passes through the solution, and k is a constant. If the light path l is a constant, as is the case with a spectrophotometer, Beer's law may be written
A = -log T = k'c

where k' is a new constant and T = I/I0 is the transmittance of the solution. There is thus a logarithmic relationship between transmittance and the concentration of the coloured compound, while absorbance is directly proportional to concentration. The instrument is operated as follows:
1. The instrument must have been warmed up for at least 15 min prior to use. The power switch doubles as the zeroing control.
2. Set the desired wavelength with the wavelength control.
3. With the sample cover closed, use the zero control to adjust the meter needle to "0" on the % transmittance scale (with no sample in the instrument the light path is blocked, so the photometer reads no light at all).
4. Wipe the tube containing the reference solution with a lab wipe and place it into the sample holder. Close the cover and use the light control knob to set the meter needle to "0" on the absorbance scale.
5. Remove the reference tube, wipe off the first sample or standard tube, insert it and close the cover. Read and record the absorbance, not the transmittance (a calculation based on these readings is sketched below).
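As a minimal sketch of the arithmetic behind these readings (in Python, with invented readings; the helper names are our own, not part of any instrument software), the fragment below converts % transmittance to absorbance via A = 2 - log10(%T) and scales an unknown against a single standard using A = kc:

```python
import math

def absorbance_from_percent_t(percent_t):
    """Convert % transmittance to absorbance: A = -log10(T) = 2 - log10(%T)."""
    return 2.0 - math.log10(percent_t)

def concentration_from_standard(a_unknown, a_standard, c_standard):
    """With A = k*c at a fixed light path, A_u/A_s = C_u/C_s."""
    return c_standard * (a_unknown / a_standard)

# Invented example: a 100 mg/dL standard reads 40 %T, the unknown reads 56 %T.
a_std = absorbance_from_percent_t(40.0)   # ~0.398
a_unk = absorbance_from_percent_t(56.0)   # ~0.252
print(round(concentration_from_standard(a_unk, a_std, 100.0), 1))  # ~63.3 mg/dL
```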
Why use a reference solution? Can't you just use a water blank? A proper
reference solution contains color reagent plus sample buffer. The
difference between the reference and a sample is that the concentration of
the measurable substance in the reference solution is zero. The reference
tube transmits as much light as is possible with the solution you are using.
A sample tube with any concentration of the measurable substance absorbs
more light than the reference, transmitting less light to the photometer. In
order to obtain the best readability and accuracy, the scale is set to read
zero absorbance (100% transmission) with the reference in place. Now you
can use the full scale of the spectrophotometer. If you use a water blank as
a reference, you might find that the solution alone absorbs so much light
relative to distilled water that the usable scale is compressed, and the
accuracy is very poor.
Most test substances in water are colorless and undetectable to the human
eye. To test for their presence we must find a way to "see" them. A
colorimeter or spectrophotometer can be used to measure any test
substance that is itself colored or can be reacted to produce a color. In fact
a simple definition of colorimetry is "the measurement of color" and a
colorimetric method is "any technique used to evaluate an unknown color
in reference to known colors". In a colorimetric chemical test the intensity
of the color from the reaction must be proportional to the concentration of
the substance being tested. Some reactions have limitations or variances
inherent to them that may give misleading results. Most limitations or
variances are discussed with each particular test instruction. In the most
basic colorimetric method the reacted test sample is visually compared to a
known color standard. However, the eyesight of the analyst,
inconsistencies in the light sources, and the fading of color standards limit
accurate and reproducible results.
To avoid these sources of error, a colorimeter or spectrophotometer can be
used to photoelectrically measure the amount of colored light absorbed by
a colored sample in reference to a colorless sample (blank). A colorimeter
is generally any tool that characterizes color samples to provide an
objective measure of color characteristics. In chemistry, the colorimeter is
an apparatus that allows the absorbance of a solution at a particular
frequency (color) of visual light to be determined. Colorimeters hence
make it possible to ascertain the concentration of a known solute, since it is
proportional to the absorbance.
Global Water's spectrophotometers use either a tungsten or xenon
flashlamp as the source of white light. The white light passes through an
entrance slit and is focused on a ruled grating consisting of 1200 lines/mm.
The grating causes the light to be dispersed into its various component
wavelengths. The monochromator design allows the user to select which
specific wavelength of interest will be passed through the exit slit and into
the sample. The use of mirrors and additional filters prevents light of
undesired wavelengths (higher-order diffraction, stray light) from reaching the sample. A photodetector measures the amount of light that passes through the sample.
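As a rough sketch of how such a grating selects wavelengths, the fragment below applies the standard grating equation d·sin(θ) = m·λ to a 1200 lines/mm grating; the numbers are purely illustrative and are not specifications of any particular instrument:

```python
import math

LINES_PER_MM = 1200
D_NM = 1e6 / LINES_PER_MM  # groove spacing in nm (~833.3 nm)

def diffraction_angle_deg(wavelength_nm, order=1):
    """Angle at which a wavelength leaves the grating: d*sin(theta) = m*lambda."""
    s = order * wavelength_nm / D_NM
    if abs(s) > 1:
        raise ValueError("this order/wavelength is not diffracted")
    return math.degrees(math.asin(s))

# Rotating the grating so the exit slit sits at this angle selects ~500 nm light:
print(round(diffraction_angle_deg(500), 1))  # ~36.9 degrees (first order)
```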
COLORIMETRY
Apparatus:
1. Light source
2. Filter (wavelength selector)
3. Cuvette to hold the sample solution
4. Photocell (detector)
5. Galvanometer or digital readout
Beer-Lambert's Laws:
Beer's Law: It = I0·e^(-K·C)
Lambert's Law: It = I0·e^(-k·t)
Combined: It/I0 = e^(-K·C·t)
where,
K, k = constants
C = concentration of the absorbing solution
t = thickness of the absorbing layer (light path)
It, I0 = transmitted and incident light intensities
Steps for operating the photoelectric colorimeter:
2. Fill two of the cuvettes with blank solution to about three-fourths and place one in the cuvette slot.
5. Take the test solution in another cuvette and read the optical density.
8. From the graph, the concentration of the test (unknown) solution can be calculated (see the sketch below).
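A minimal sketch of steps 5-8, assuming invented standard readings and a linear (Beer's-law) response; the least-squares helper is our own, not part of any colorimeter software:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept of y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

concs = [25, 50, 100, 200]        # standard concentrations (e.g. mg/dL), invented
ods = [0.11, 0.20, 0.41, 0.79]    # corresponding optical densities, invented

m, b = fit_line(concs, ods)
od_unknown = 0.33
print(round((od_unknown - b) / m, 1))  # concentration read off the standard curve
```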
Table for choosing the wavelength of absorption:
CHAPTER TWENTY TWO
ELECTROPHORESIS
What is Electrophoresis & Its Principle
Charged molecules under the influence of an electric field migrate towards the oppositely charged electrodes: molecules with a positive charge move towards the cathode, and negatively charged molecules move towards the anode. The migration is due to the charge on the molecules and the potential applied across the electrodes.
The sample under test is placed at one end of the paper, near one of the electrodes. When electricity is applied, the molecules start moving towards their respective electrodes, but the movement is influenced by the molecular weight of the molecules. So when a mixture is placed on the electrophoresis paper or gel, different bands are seen along the paper after the process. This is due to the differential rate of movement of molecules based on their weight.
Molecules with a higher molecular weight move more slowly, while those with a small weight move faster. The size of the molecule also influences the movement: bigger molecules experience more friction in motion than smaller ones. These molecules therefore migrate at different speeds and to different lengths based on their charge, mass and shape.
Electrons driven to the cathode leave the electrode and participate in a reduction reaction with water, generating hydrogen gas and hydroxide ions. In the meantime, at the positive anode an oxidation reaction occurs: electrons released from water molecules enter the electrode, generating oxygen gas and free protons (which immediately form hydroxonium ions with water molecules). The number of electrons leaving the cathode equals the number of electrons entering the anode. As mentioned, the two buffer chambers are interconnected such that charged particles can migrate between the two chambers. These particles are driven by the electric potential between the two electrodes. Negatively charged ions, called anions, move towards the positively charged anode, while positively charged ions, called cations, move towards the negatively charged cathode.
Different ions migrate at different speeds dictated by their sizes and by the
number of charges they carry. As a result, different ions can be separated
from each other by electrophoresis. It is very important to understand the
basic physics describing the dependence of the speed of the ion as a
function of the number of charges on the ion, the size of the ion, the
magnitude of the applied electric field and the nature of the medium in
which the ions migrate. By understanding these basic relationships, the
principles of the many different specific electrophoresis methods become
comprehensible.
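Those relationships can be illustrated with a toy model. The sketch below assumes the ion behaves as a sphere, so that its friction follows Stokes' law (f = 6πηr) and the drift velocity is v = qE/f; all values are illustrative:

```python
import math

E_CHARGE = 1.602e-19  # elementary charge, C

def drift_velocity(charges, field_v_per_m, radius_m, viscosity_pa_s=1e-3):
    """Drift speed v = qE/f with Stokes friction f = 6*pi*eta*r (sphere model)."""
    q = charges * E_CHARGE
    f = 6 * math.pi * viscosity_pa_s * radius_m
    return q * field_v_per_m / f  # m/s

# Same +1 charge, different sizes, in a 3000 V/m field:
print(drift_velocity(1, 3000, 0.2e-9))  # small ion: faster
print(drift_velocity(1, 3000, 0.5e-9))  # larger ion: more friction, slower
```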
There are two broad classes of electrophoresis:
1. Slab electrophoresis
2. Capillary electrophoresis
The slab method is the classical method, widely used on an industrial scale. It is slow, time-consuming and bulky, yet it is the main method available for the separation of proteins such as enzymes, hormones and antibodies, and of nucleic acids (DNA and RNA). Slab electrophoresis is further divided into three types based on the principle used for separation:
a. Zone electrophoresis
b. Isoelectric focusing
c. Immunoelectrophoresis
Zone electrophoresis: Here the charged particles are separated into different zones or bands. It may be carried out on two supports:
1. Paper electrophoresis
2. Gel electrophoresis
In paper electrophoresis the coloured bands are identified, and the nature of the sample recognised, by comparison with a standard. For a sample of serum, 5 protein bands can be separated by paper electrophoresis.
In gel electrophoresis the separation is more efficient than in the paper type, as the rate of movement is slower and the area of separation is larger owing to the thickness of the gel. The sample is applied and subjected to an electric field, which leads to the separation of the molecules. These molecules form bands and can be recognised by staining and comparison with standard sample bands. The method is more effective than paper; for instance, from a serum sample about 15 protein bands can be isolated.
Immunoelectrophoresis: This method combines the principles of electrophoresis with immune reactions. First the proteins are separated on the electrophoresis paper; then antibodies are allowed to diffuse through the paper and react with the separated protein molecules in the bands.
Applications of electrophoresis:
1. For the separation and identification of proteins such as enzymes, hormones and antibodies (e.g. serum proteins).
2. For the analysis of nucleic acid molecules in RNA and DNA studies. These long-chain molecules can be analysed only after separation by electrophoresis. This helps to determine the size of, or breaks in, the DNA or RNA molecule.
The gel electrophoresis instrumentation
CHAPTER TWENTY THREE
POLYMERASE CHAIN REACTION (PCR)
The polymerase chain reaction (PCR) can be used for the selective
amplification of a specific segment (target sequence or amplicon) of a DNA
molecule. The DNA to be amplified can theoretically be present in a very
small amount—even as a single molecule. The PCR reaction is carried out in
vitro and, as such, it does not require a host organism. The size of the DNA
region amplified during the PCR reaction typically falls within the range of
100 bp to 10 kbp.
PCR is based on the reaction scheme described as follows. First, a heat-
induced denaturation of the target DNA sequence (template, panel a) is
performed in order to separate the constituent complementary DNA
strands. Then, short single-stranded DNA molecules (oligonucleotide
primers) are added that are complementary to the flanking regions of the
target sequence. Cooling of the sample allows annealing (panel b).
Subsequently, the strand elongation activity of a DNA polymerase enzyme
leads to the formation of new DNA strands (so-called primer extension
products) starting from the 3’ end of the annealed primers (panel c). After
repeated heat denaturation and cooling, the primers will be able to anneal
both to the original template molecules and to the primer extension
products (panel d). In the latter case, the length of the nascent DNA strand
will be limited by the primer extension product now serving as a template
strand. This way, the resulting “end-product” strands will comprise the
DNA segment defined by the template and the flanking primers (panel e). In
further denaturation–annealing–synthesis cycles, the end-product strands
will serve as templates for the synthesis of additional end-product strands.
Therefore, the amount of these molecules will grow exponentially with the
number of reaction cycles (panel f). Thus, the final result of the reaction
will be a large amount of end-product molecules comprising the sequence
flanked by the predefined primers.
This highlights one of the crucial advantages of the PCR technique: via the
design of primers, we can fully control which segment of the template DNA
will be amplified—with only a few practical limitations.
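The exponential growth described above can be made concrete with a short sketch: assuming ideal doubling in every cycle (or an efficiency below 1 for imperfect doubling), the copy number after n cycles is initial × (1 + efficiency)^n. The numbers are illustrative:

```python
def amplicon_copies(initial_copies, cycles, efficiency=1.0):
    """End-product copies after PCR; efficiency = 1 means perfect doubling."""
    return initial_copies * (1 + efficiency) ** cycles

print(amplicon_copies(1, 30))        # one template molecule -> ~1.07e9 copies
print(amplicon_copies(1, 30, 0.9))   # 90 % per-cycle efficiency -> ~2.3e8 copies
```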
In order to successfully perform the reaction described above, the
following solution components are necessary:
a. DNA molecules that serve as template for the reaction. The amount of
the template can be very low—in principle, the reaction can start
even from a single template molecule. Another advantage of PCR is
that the selective amplification of the desired DNA segment can be
accomplished even using a heterogeneous DNA sample as template.
b. A pair of oligonucleotides serving as primers. The 3’ ends of the
oligonucleotides must be able to anneal to the corresponding strands
of the template. A further advantage of PCR is that the 5’ end of the
applied primers may contain segments that do not anneal to the
original template. These regions of the primers may be specific
engineered sequences or even contain labelling or other
modifications, which will be present in the end product and thus
facilitate its further analysis and/or processing. (For instance,
recognition sites of restriction endonucleases can be incorporated in
order to facilitate the subsequent cloning of the PCR product.)
c. The DNA polymerase enzyme catalysing DNA synthesis. As the heat-
induced denaturation of the template is required during each cycle,
heat stable polymerases are usually applied that originate from
thermophilic organisms (e.g. Thermus aquaticus (Taq) or Pyrococcus
furiosus (Pfu) DNA polymerase).
d. Deoxyribonucleoside triphosphate (dNTP) molecules that serve as
building blocks for the DNA strands to be synthesised. These include
dATP (deoxyadenosine triphosphate), dGTP (deoxyguanosine
triphosphate), dTTP (deoxythymidine triphosphate), and dCTP
(deoxycytidine triphosphate).
e. A buffer providing optimal reaction conditions for the activity of DNA
polymerase. Among other components, PCR buffers contain bivalent
cations (e.g. Mg2+ or Mn2+).
For an effective polymerase chain reaction, it is necessary to change the temperature of the solution rapidly, cyclically and over a wide range. This can be achieved by using a programmable instrument containing a
thermoblock equipped with a Peltier cell. To achieve effective heat
exchange, PCR reactions are performed in thin-walled plastic tubes in small
reaction volumes (typically, in the order of 10-200 μl). The caps of the PCR
tubes are constantly held at high temperature by heating the lid of the
thermoblock, in order to prevent condensation of the reaction mixture in
the upper part of the tubes. In the absence of a heated lid, oil or wax can be
layered on top of the aqueous PCR samples in order to prevent
evaporation.
The programmed heat profile of a PCR reaction generally consists of
the following steps:
a. Initial denaturation of the template, performed at high temperature
(typically, around 95°C).
b. Denaturation: Heat-induced separation of the strands of double-
stranded DNA molecules at high temperature (typically, around
95°C).
c. Annealing: Cooling of the reaction mixture to a temperature around
45-65°C in order to facilitate the annealing of the oligonucleotide
primers to complementary stretches on template DNA molecules.
d. DNA synthesis: This step takes place at a temperature around the
optimum of the heat-stable DNA polymerase (typically, 72°C), for a
time period dictated by the length of the DNA segment to be
amplified (typically, 1 minute per kilo-base pair).
e. Steps (b)-(d) are repeated typically 20-35 times, depending on the application.
f. Final DNA synthesis step: After completion of the cycles consisting of steps (b)-(d), this step is performed at a temperature identical to that of step (d) (72°C), in order to produce complementary strands for all remaining single-stranded DNA molecules. (The whole profile is sketched below.)
PCR reactions are widely applied in diverse areas of biology and medical
science. In the following, we list a few examples.
a. Molecular cloning, production of recombinant DNA constructs and
hybridisation probes, mutagenesis, sequencing;
b. Investigation of gene function and expression;
c. Medical diagnostics: identification of genotypes, hereditary
disorders, pathogenic agents;
d. Forensic investigations: identification of samples and individuals
based on DNA fingerprints (unique individual DNA sequence
patterns);
e. Evolutionary biology, molecular evolution, phylogenetic
investigations, analysis of fossils.
CHAPTER TWENTY FOUR
FLUOROMETRY/SPECTROFLUOROMETRY
Fluorescence
When an illuminated system emits light of a wavelength different from that of the incident light, the phenomenon is termed fluorescence; it takes place as soon as the light is absorbed and ceases as soon as the illumination is stopped. Fluorescence is usually seen at moderate temperature in liquid solution.
The intensity of the photoluminescence spectrum depends on the excitation
wavelength, although its spectral position does not. The photoluminescence spectrum
appears at longer wavelengths than the excitation spectrum. This phenomenon arises
because the excitation process requires an amount of energy equal to electronic
energy change plus a vibrational energy increase; conversely each deexcitation yields
the electronic excitation energy minus a vibrational energy increase.
In analytical work, fluorescence is important because of the fact that the intensity of
light emitted by a fluorescent material depends upon the concentration of that
material. In this manner, the measurement of fluorescence intensity permits the
quantitative determination of traces of many inorganic species.
The intensity of fluorescence is proportional to both the intensity of the exciting light and the concentration of the fluorescing material, at low concentrations (10^-4 to 10^-7 M) and within narrow limits.
Factors affecting fluorescence:
• Polarity of the solvent: The polarity of the solvent affects fluorescence and phosphorescence. Solvents containing heavy atoms, or solutes with such atoms in their structures, decrease the fluorescence.
• Presence of dissolved oxygen: The presence of dissolved oxygen often reduces the emission intensity of a fluorescent solution, probably due to photochemically induced oxidation of the fluorescent material. Quenching also occurs as a result of the paramagnetic properties of molecular oxygen.
The use of fluorescence in quantitative analysis is based on the fact that there should be a definite relationship (preferably linear) between concentration and the intensity of fluorescence. On the basis of theory as well as experiment, such a linear relationship has actually been found to exist, and it is related to the familiar Lambert-Beer law, from which it can be derived as follows. According to the Lambert-Beer law,

I = I0·e^(-a·c·l)     (1)

where I0 = intensity of the incident light, I = intensity of the transmitted light, a = absorption coefficient, c = concentration and l = light path length.
The intensity of fluorescence (F) emitted can be obtained by multiplying the amount of light absorbed by the quantum yield φ, where φ is the ratio of light emitted to light absorbed. Thus

F = φ·I0·(1 - e^(-a·c·l))     (2)

Reabsorption and scattering of light have not been taken into consideration in the above equation. It is also assumed that both absorption and emission occur due to a single molecular species and that the degree of dissociation does not vary with concentration. Now e^(-acl) can be expanded as the exponential series

e^(-a·c·l) = 1 - a·c·l + (a·c·l)²/2! - ...     (3)

If the magnitude of acl is small, all terms after the first two can be neglected. Hence

F = φ·I0·(1 - 1 + a·c·l) = φ·I0·a·c·l

For a given fluorescent compound, solvent and temperature, and in a cell of definite dimensions, all other factors are constant, giving simply

F = Kc     (4)
Calculation of Results
It is very important for the analyst to run a blank and two standards of known composition that cover the range of concentration expected. The blank must be kept relatively low, and the blank reading should be subtracted from all other readings. Then

Fs = KCs     (5)
Fu = KCu     (6)

where the subscripts s and u denote the standard and the unknown respectively, so that Cu = Cs·(Fu/Fs). The standard should be very close to the unknown in composition.
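A minimal sketch of this calculation with invented readings; it simply applies equations (5) and (6) after blank subtraction:

```python
def unknown_concentration(f_unknown, f_standard, f_blank, c_standard):
    """C_u = C_s * (F_u - F_blank) / (F_s - F_blank), valid in the linear range."""
    return c_standard * (f_unknown - f_blank) / (f_standard - f_blank)

# Invented readings on an arbitrary fluorescence scale:
print(unknown_concentration(f_unknown=42.0, f_standard=55.0,
                            f_blank=2.0, c_standard=1.0e-6))  # ~7.5e-7 mol/L
```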
Fluorimetry
The phenomenon of fluorescence was discovered and published by Sir John Frederick William Herschel in the mid-1800s. He observed that, when illuminated with white
light, a solution of quinine emitted a strange blue light perpendicular to the direction
of the illumination, even though it remained colourless when observed facing the light
source.
Photons of a given wavelength are absorbed by the fluorophore and excite some of its
electrons. The system remains in this excited state for only a few nanoseconds and
then relaxes into its ground state. (Note that light travels 30 centimetres in a single
nanosecond.) When returning from the excited state to the ground state, the electron
may emit a photon. This is known as fluorescent emission. The wavelength of the
absorbed photon is always shorter than that of the emitted photon (i.e. the energy of
the emitted light is lower than that of the absorbed one). This phenomenon, the so-
called Stokes shift, is an important attribute of fluorescence both in theory and
practice.
The fluorimeter
The Stokes shift facilitates the creation of highly sensitive methods of detection of
fluorescence. As the wavelengths of the exciting and detected (emitted) light differ,
the background created by the exciting light can be minimised by using a proper
setup. There are two ways to prevent the exciting light from getting into the detector:
(1) Measurements are often carried out in a geometric arrangement in which the
detection of emission is perpendicular to the exciting beam of light.
(2) Light filters are placed between the light source and the sample and also between
the sample and the detector. Light of only a certain wavelength range can pass
through these filters. Photons of the exciting light leaving the sample will not reach
the detector as they are absorbed by the emission filter. In many cases,
monochromators are used instead of filters. Their advantage is that the selected
wavelength can be set rather freely and more precisely compared to filters that are
set to a given interval and adjustments can only be made by replacing them.
Scheme of a monochromator: From white (wide-spectrum) light, the monochromator
is able to select light within a given narrow spectrum. White light is projected onto a
prism splitting it to its components, effectively creating a rainbow behind it. On its
way to the sample, light must pass through a small slit and therefore only a small part
of the spectrum (a practically homogenous light beam) reaches it. Wavelength of the
light leaving the monochromator can be changed by rotating the prism as this will let
a different part of the rainbow through the slit.
This double protection of the detector from the exciting light is necessary due to the
fact that the intensity of fluorescent light is usually two or three orders of magnitude
smaller than that of the exciting light. This means that even if only 1 or 0.1 % of the
exciting light reaches the detector, half of the detected signal intensity would arise
from the exciting light and only the other half from the emission of the sample. This
would result in a 50 % background signal level, as the detector is unable to
distinguish photons based on their wavelength.
Fluorophores
The shape of the emission spectrum is usually independent of the excitation wavelength. Due to the Stokes shift, however, the emission spectrum is shifted towards the red compared to the excitation spectrum, and the shapes of the two spectra are usually near mirror images of each other.
The intensity of fluorescence of a molecule is sensitive to its environment. Emission
intensity is significantly affected by the pH and the polarity of the solvent as well as
the temperature. Usually, an apolar solvent and a decrease in temperature will
increase the intensity. The immediate environment of the fluorophore is an important
factor, too. Another molecule or group moving close to the fluorophore can change
the intensity of fluorescence. Due to these attributes, fluorimetry is well suited to the
study of different chemical reactions and/or conformational changes, aggregation and
dissociation. In proteins, two amino acids have side chains with significant
fluorescence: tryptophan and tyrosine. The fluorescence of these groups in a protein
is called the intrinsic fluorescence of the protein. Tryptophan is a relatively rare
amino acid; most proteins contain only one or a few tryptophans. Tyrosine is much
more frequent; there are usually five to ten times more tyrosines in a protein than
tryptophans. On the other hand, the fluorescence intensity of tryptophan is much
higher than that of tyrosine.
Extinction (A) and emission (B) spectra of tryptophan, tyrosine and phenylalanine. (Note that the three amino acids shown display markedly different fluorescence intensities. For visibility, the emission spectra shown in panel B were normalised to their individual maxima.)
The spectra in figure above clearly show that the fluorescence of tryptophan can be
studied specifically even in the presence of tyrosines, since if the excitation is set to
295 nm and the detection of emission is set to 350 nm, the fluorescence of tyrosine
can be neglected. Both the intensity of the fluorescence and the shape of the emission
spectrum are sensitive to the surroundings of the side chain, which often changes
upon conformational changes of the protein. Tryptophan fluorimetry is therefore
suitable to detect conformational changes of enzymes and other proteins. It can also
be applied to detect the binding of ligands to proteins as well as the di- or
multimerisation of proteins, provided that the reaction results in a change in the
surroundings of a tryptophan side chain. The environment of tryptophans obviously
changes on unfolding of proteins. Consequently, fluorescence is well suited also for
following denaturation of proteins.
Tryptophan and tyrosine fluorescence is not the only way to detect and investigate
proteins using fluorescence. There are proteins that undergo post-translational
modifications including the covalent isomerisation of three amino acids that makes
them fluorescent. The first such protein discovered was the green fluorescent protein
(GFP), which is expressed naturally in the jellyfish Aequorea victoria (phylum
Cnidaria). Since then, fluorescent proteins were isolated from many other species. A
large number of recombinantly modified forms of GFP were created in the last 20
years, all different in their fluorescence and colour. The intrinsic fluorescence of GFP
can be used to label proteins. If we create a chimera from the genes of GFP and
another protein of interest—in other words, we attach the gene of GFP to the 5’ or 3’
end of the gene encoding the other protein—this construct will be transcribed and
translated into a protein that will have GFP fused to it at its N- or C-terminus. Thus, if
using an appropriate vector we transform an organism and introduce this new gene
into it, its product will show a green fluorescence when excited. Through this
phenomenon, we can easily locate proteins on the tissue, cellular or subcellular levels.
As a variety of differently coloured fluorescent proteins are at our disposal, we can
even measure colocalisation of labelled proteins in vivo. The application of fluorescent
proteins in biology was such a significant technological breakthrough that its
pioneers were awarded a Nobel prize in 2008.
Left panel: Molecular structure of GFP. GFP has a typical β-barrel structure. The fluorophore is formed by the covalent isomerisation of three amino acids located in the centre of the protein (coloured orange in the figure). This core alone is not fluorescent; it becomes fluorescent only when surrounded by the β-barrel in its native conformation.
Right panel: Proteins and other biological molecules can also be made fluorescent by using extrinsic modifications. We can attach extrinsic fluorophores to biomolecules by either covalent or non-covalent bonds.
Covalent attachment of fluorophores is most often achieved by using the reactive side
chains of cysteines. To this end, researchers use fluorophores that bear
iodoacetamido or maleimido groups that alkylate the sulfhydryl group of cysteine
side chains under appropriate conditions.
Proteins can also form complexes with fluorescent substrates or inhibitors via non-covalent bonds. There are also fluorophores that can bind to certain regions of proteins with high affinity. For example, 8-anilinonaphthalene-1-sulfonic acid (ANS) binds specifically to hydrophobic regions of proteins and becomes strongly fluorescent when bound. We can take advantage of this phenomenon in experiments. A change in the amount of hydrophobic surface can occur in conjunction with structural changes induced by the binding of a ligand. Thus, addition of the ligand may cause a decrease in the amount of protein-bound ANS, and so the binding of the ligand can be studied by measuring the changes in the fluorescence of ANS. In this way the binding constant of the protein and the ligand, as well as the kinetics of the binding, can be examined in a simple yet quantitative manner.
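As a sketch of the quantitative side of such an experiment, the fragment below evaluates a simple 1:1 binding isotherm, F = Fmax·[L]/(Kd + [L]); the parameter values are invented, and a real analysis would fit this curve to measured fluorescence data:

```python
def predicted_fluorescence(ligand_conc, f_max, kd):
    """Hyperbolic 1:1 binding isotherm: F = F_max * [L] / (Kd + [L])."""
    return f_max * ligand_conc / (kd + ligand_conc)

# With Kd = 10 uM, half-maximal fluorescence is expected at 10 uM ligand:
for conc_um in (1, 10, 100):
    print(conc_um, round(predicted_fluorescence(conc_um, f_max=100.0, kd=10.0), 1))
```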
Labelling of double-stranded DNA can also be achieved, for example, with ethidium
bromide in vitro. When intercalated between the bases of DNA, the fluorescence of
ethidium bromide will rise markedly. Formerly, visualisation of DNA in agarose gel
electrophoresis was generally achieved using ethidium bromide.
The dye was mixed into the agarose gel to form a complex with the DNA passing
through it. When illuminated by ultraviolet light, the ethidium bromide accumulated
in the DNA becomes visible due to its fluorescence. As ethidium bromide is
carcinogenic, nowadays rather non-carcinogenic alternatives (e.g. SYBR Safe) are
used.
CHAPTER TWENTY FIVE
LYOPHILISATION (FREEZE-DRYING)
Protein (or any other non-volatile molecule) samples can be concentrated
by evaporating water and other volatile compounds from the sample. In
principle, this could be achieved by simply heating the sample. However,
most proteins would unfold during such a simple evaporation process. In
order to prevent the denaturation of proteins, the sample is transferred
into a glass container and is frozen as quickly as possible, usually by
immersing the outside of the container into liquid nitrogen. Moreover, the
container is rotated in order to spread and freeze the sample on a large
surface area. The glass container with the sample is then placed into an
extremely low-pressure space (vacuum) that contains a cooling coil as well.
The cooling coil acts as a condenser. The temperature of the coil is usually
lower than -50°C. Volatile compounds of the frozen sample will evaporate
(sublimate) in the vacuum. The process of evaporation (in this case,
sublimation) absorbs heat. This effect keeps the sample frozen. Evaporated
molecules are captured from the gas phase by the cooling coil, forming a
frozen layer on it. At the end of the process, proteins and other non-volatile
compounds of the sample remain in the container in a solid form. This
process does not cause irreversible denaturation of proteins. Thus, it is a
method frequently used not only to concentrate but also to preserve
proteins or other sensitive biomolecules for long-term storage. Such
samples can usually be stored for years without a significant deterioration
of quality. However, before lyophilisation it is very important to carefully
consider all non-volatile compounds of the initial sample as these will
concentrate along with the proteins. Non-volatile acids or bases can cause
extreme pH, and the presence of salts can result in very high ionic strength
when the sample is resolubilised.
CHAPTER TWENTY SIX
OSMOMETRY
An osmometer is a device for measuring the osmotic strength of a solution,
colloid, or compound.
There are several different techniques employed in osmometry. Vapor pressure depression osmometers, for example, determine the concentration of osmotically active particles that reduce the vapor pressure of a solution.
CHAPTER TWENTY SEVEN
TURBIDIMETRY AND NEPHELOMETRY
Turbidimetry and nephelometry are two techniques based on the elastic scattering of radiation by a suspension of colloidal particles: (a) in turbidimetry the detector is placed in line with the source and the decrease in the radiation's transmitted power is measured; (b) in nephelometry the scattered radiation is measured at an angle of 90° to the source. The similarity of turbidimetry to the measurement of absorbance, and of nephelometry to the measurement of fluorescence, is evident in these instrumental designs. In fact, turbidity can be measured using a UV/Vis spectrophotometer, and a spectrofluorimeter is suitable for nephelometry.
NEPHELOMETER
CHAPTER TWENTY EIGHT
CONDUCTOMETRY, POLAROGRAPHY AND COULOMETRY
Conductometry is the measurement of electrolytic conductivity to monitor the progress of a chemical reaction. Conductometry has notable applications in analytical chemistry, where conductometric titration is a standard technique. In usual analytical chemistry practice, the term conductometry is used as a synonym of conductometric titration, while the term conductimetry is used to describe non-titrative applications. Conductometry is often applied to determine the total conductance of a solution or to analyse the end point of titrations that involve ions.
Conductive measurements began as early as the 18th century when Henry
Cavendish and Andreas Baumgartner noticed that salt and mineral waters from
Bad Gastein in Austria conducted electricity. As such, using conductometry to
determine water purity, which is often used today to test the effectiveness of
water purification systems, began in 1776. Friedrich Kohlrausch further
developed conductometry in the 1860s when he applied alternating current to
water, acids, and other solutions. It was also around this time when Willis
Whitney, who was studying the interactions of sulfuric acid and chromium
sulfate complexes, found the first conductometric endpoint. These findings culminated in potentiometric titrations and the first instrument for volumetric analysis, by Robert Behrend in 1883, while titrating chloride and bromide with
HgNO3. This development allowed for testing the solubility of salts and
hydrogen ion concentration, as well as acid/base and redox titrations.
Conductometry was further improved with the development of the glass
electrode, which began in 1909.
Conductometric titration
Conductometric titration is a type of titration in which the electrolytic
conductivity of the reaction mixture is continuously monitored as one reactant
is added. The equivalence point is the point at which the conductivity undergoes
a sudden change. Marked increases or decrease in conductance are associated
with the changing concentrations of the two most highly conducting ions—the
hydrogen and hydroxyl ions. The method can be used for titrating coloured
solutions or homogeneous suspension (e.g.: wood pulp suspension), which
cannot be used with normal indicators.
Acid-base titrations and redox titrations are often performed in which common
indicators are used to locate the end point e.g., methyl orange, phenolphthalein
for acid base titrations and starch solutions for iodometric type redox process.
However, electrical conductance measurements can also be used as a tool to
locate the end point, e.g., in a titration of a HCl solution with the strong base
NaOH.
As the titration progresses, the protons are neutralized to form water by the
addition of NaOH. For each amount of NaOH added, an equivalent amount of
hydrogen ions is removed. Effectively, the mobile H+ cation is replaced by the
less-mobile Na+ ion, and the conductivity of the titrated solution as well as the
measured conductance of the cell fall. This continues until the equivalence
point is reached, at which one obtains a solution of sodium chloride, NaCl. If
more base is added, an increase in conductivity or conductance is observed,
since more ions Na+ and OH− are being added and the neutralization reaction
no longer removes an appreciable amount of H+. Consequently, in the titration
of a strong acid with a strong base, the conductance has a minimum at the
equivalence point. This minimum can be used, instead of an indicator dye, to
determine the endpoint of the titration. The conductometric titration curve is
a plot of the measured conductance or conductivity values as a function of the
volume of the NaOH solution added. The titration curve can be used to
graphically determine the equivalence point.
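A toy illustration of locating that conductance minimum from titration data (all values invented):

```python
# Conductance falls as H+ is consumed, then rises past the equivalence point,
# so the minimum of the curve marks the end point.
volumes_ml = [0, 2, 4, 6, 8, 10, 12, 14]
conductance = [9.0, 7.4, 5.8, 4.2, 2.7, 2.9, 4.4, 5.9]  # arbitrary units

i_min = min(range(len(conductance)), key=conductance.__getitem__)
print(f"equivalence point near {volumes_ml[i_min]} mL of NaOH")  # -> 8 mL
```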
For a reaction between a weak acid and a weak base, the conductivity first decreases a little as the few available H+ ions are used up. It then increases slightly up to the equivalence-point volume, due to the contribution of the salt cation and anion. (In the case of a strong acid and strong base this contribution is negligible and is not considered there.) After the equivalence point is reached, the conductivity increases rapidly due to the excess OH- ions.
POLAROGRAPHY
Polarography is a subclass of voltammetry where the working electrode is a
dropping mercury electrode (DME) or a static mercury drop electrode (SMDE),
which are useful for their wide cathodic ranges and renewable surfaces.
Principle: Polarography is a voltammetric measurement whose response is
determined by combined diffusion/convection mass transport. The simple
principle of polarography is the study of solutions or of electrode processes by
means of electrolysis with two electrodes, one polarizable and one
unpolarizable, the former formed by mercury regularly dropping from a
capillary tube. Polarography is a specific type of measurement that falls into the
general category of linear-sweep voltammetry where the electrode potential is
altered in a linear fashion from the initial potential to the final potential. As a
linear sweep method controlled by convection/diffusion mass transport, the
current vs. potential response of a polarographic experiment has the typical
sigmoidal shape. What makes polarography different from other linear sweep
voltammetry measurements is that polarography makes use of the dropping
mercury electrode (DME) or the static mercury drop electrode.
A plot of current vs. potential in a polarography experiment shows current oscillations corresponding to the drops of Hg falling from the capillary. If one connected the maximum current of each drop, a sigmoidal shape would result. The limiting current (the plateau on the sigmoid) is called the diffusion current, because diffusion is the principal contribution to the flux of electroactive material at this point in the life of the Hg drop.
COULOMETRY
Coulometry is the name given to a group of techniques in analytical chemistry that determine the amount of matter transformed during an electrolysis reaction by measuring the amount of electricity (in coulombs) consumed or produced. It is named after Charles-Augustin de Coulomb.
There are two basic categories of coulometric techniques. Potentiostatic
coulometry involves holding the electric potential constant during the reaction
using a potentiostat. The other, called coulometric titration or amperostatic
coulometry, keeps the current (measured in amperes) constant using an
amperostat.
Potentiostatic coulometry is a technique most commonly referred to as "bulk
electrolysis". The working electrode is kept at a constant potential and the
current that flows through the circuit is measured. This constant potential is
applied long enough to fully reduce or oxidize all of the electroactive species in a
given solution. As the electroactive molecules are consumed, the current also
decreases, approaching zero when the conversion is complete. The sample mass,
molecular mass, number of electrons in the electrode reaction, and number of
electrons passed during the experiment are all related by Faraday's laws. It
follows that, if three of the values are known, then the fourth can be calculated.
Bulk electrolysis is often used to unambiguously assign the number of electrons
consumed in a reaction observed through voltammetry. It also has the added
benefit of producing a solution of a species (oxidation state) which may not be
accessible through chemical routes. This species can then be isolated or further
characterized while in solution.
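The Faraday's-law bookkeeping mentioned above can be sketched as follows: with charge Q, molar mass M and n electrons per molecule, the mass transformed is m = Q·M/(nF). The example numbers are illustrative:

```python
FARADAY = 96485.0  # C/mol

def mass_transformed(charge_c, molar_mass_g_mol, n_electrons):
    """Mass transformed in electrolysis: m = Q*M / (n*F)."""
    return charge_c * molar_mass_g_mol / (n_electrons * FARADAY)

# e.g. 50 C passed in a two-electron reduction of a 63.5 g/mol species:
print(round(mass_transformed(50.0, 63.5, 2), 4), "g")  # ~0.0165 g
```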
The rate of such reactions is not determined by the concentration of the
solution, but rather the mass transfer of the electroactive species in the solution
to the electrode surface. Rates will increase when the volume of the solution is
decreased, the solution is stirred more rapidly, or the area of the working
electrode is increased. Since mass transfer is so important the solution is stirred
during a bulk electrolysis. However, this technique is generally not considered a
hydrodynamic technique, since a laminar flow of solution against the electrode
is neither the objective nor outcome of the stirring.
The extent to which a reaction goes to completion is also related to how much
greater the applied potential is than the reduction potential of interest. In the
case where multiple reduction potentials are of interest, it is often difficult to set
an electrolysis potential a "safe" distance (such as 200 mV) past a redox event.
The result is incomplete conversion of the substrate, or else conversion of some
of the substrate to the more reduced form. This factor must be considered when
analyzing the current passed and when attempting to do further
analysis/isolation/experiments with the substrate solution.
An advantage to this kind of analysis over electrogravimetry is that it does not
require that the product of the reaction be weighed. This is useful for reactions
where the product does not deposit as a solid, such as the determination of the
amount of arsenic in a sample from the electrolysis of arsenous acid (H3AsO3)
to arsenic acid (H3AsO4).
Coulometric titrations use a constant current system to accurately quantify the
concentration of a species. In this experiment, the applied current is equivalent
to a titrant. Current is applied to the unknown solution until all of the unknown
species is either oxidized or reduced to a new state, at which point the potential
of the working electrode shifts dramatically. This potential shift indicates the
endpoint. The magnitude of the current (in amperes) and the duration of the
current (seconds) can be used to determine the moles of the unknown species in
solution. When the volume of the solution is known, then the molarity of the
unknown species can be determined.
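A minimal sketch of that calculation (moles = I·t/(zF), molarity = moles/volume), with invented values:

```python
FARADAY = 96485.0  # C/mol

def moles_titrated(current_a, time_s, z_electrons):
    """Moles converted by a constant current: n = I*t / (z*F)."""
    return current_a * time_s / (z_electrons * FARADAY)

moles = moles_titrated(current_a=0.010, time_s=386, z_electrons=1)
volume_l = 0.050
print(f"{moles:.2e} mol -> {moles / volume_l:.2e} mol/L")  # ~4.0e-05 mol, 8.0e-04 M
```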
Coulometer
A device for determining the amount of a substance released during electrolysis
by measuring the electrical charge that results from the electrolysis.
Coulometers can be used to detect and measure trace amounts of substances
such as water.
CHAPTER TWENTY NINE
RADIOIMMUNOASSAY (RIA)
Radioimmunoassay (RIA) is a sensitive method for measuring very small
amounts of a substance in the blood. Radioactive versions of a substance, or
isotopes of the substance, are mixed with antibodies and inserted in a sample of
the patient's blood. The same non-radioactive substance in the blood takes the
place of the isotope in the antibodies, thus leaving the radioactive substance
free.
The amount of free isotope is then measured to see how much of the original
substance was in the blood. This isotopic measuring method was developed in
1959 by two Americans, biophysicist Rosalyn Yalow (1921-2011) and physician Solomon A. Berson (1918-1972).
Yalow and Berson developed the first radioisotopic technique to study blood
volume and iodine metabolism. They later adapted the method to study how the
body uses hormones, particularly insulin, which regulates sugar levels in the
blood. The researchers proved that Type II (adult onset) diabetes is caused by
the inefficient use of insulin. Previously, it was thought that diabetes was caused
only by a lack of insulin. In 1959 Yalow and Berson perfected their measurement
technique and named it radioimmunoassay (RIA). RIA is extremely sensitive. It
can measure one trillionth of a gram of material per milliliter of blood. Because
of the small sample required for measurement, RIA quickly became a standard
laboratory tool.
Radioimmunoassay (RIA) is an in vitro assay that measures the presence of an
antigen with very high sensitivity. Basically any biological substance for which a
specific antibody exists can be measured, even in minute concentrations. RIA
has been the first immunoassay technique developed to analyze nanomolar and
picomolar concentrations of hormones in biological fluids.
Radioimmunoassay (RIA) method
The target antigen is labeled radioactively and bound to its specific antibodies (a
limited and known amount of the specific antibody has to be added). A sample,
for example a blood-serum, is then added in order to initiate a competitive
reaction of the labeled antigens from the preparation, and the unlabeled
antigens from the serum-sample, with the specific antibodies. The competition
for the antibodies will release a certain amount of labeled antigen. This amount
is proportional to the ratio of labeled to unlabeled antigen. A binding curve can
then be generated which allows the amount of antigen in the patient's serum to
be derived.
That means that as the concentration of unlabeled antigen is increased, more of
it binds to the antibody, displacing the labeled variant. The bound antigens are
then separated from the unbound ones, and the radioactivity of the free antigens
remaining in the supernatant is measured. A binding curve can be generated
using a known standard, which allows the amount of antigens in the patient's
serum to be derived.
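As a simplified sketch of reading such a standard curve: counts of bound label fall as unlabeled antigen rises, and an unknown's counts are located between two standards and interpolated. Real RIA curves are usually fitted on a logarithmic scale; all numbers here are invented:

```python
# (concentration in ng/mL, bound counts in cpm), counts decreasing with antigen:
standards = [(0.1, 9000), (1.0, 6500), (10.0, 3400), (100.0, 1200)]

def antigen_from_counts(cpm):
    """Linear interpolation between the two standards that bracket the reading."""
    for (c_lo, cpm_lo), (c_hi, cpm_hi) in zip(standards, standards[1:]):
        if cpm_hi <= cpm <= cpm_lo:
            frac = (cpm_lo - cpm) / (cpm_lo - cpm_hi)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("counts outside the standard curve")

print(round(antigen_from_counts(5000), 2))  # ng/mL, between the 1 and 10 standards
```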
Radioimmunoassay is an old assay technique but it is still a widely used assay
and continues to offer distinct advantages in terms of simplicity and sensitivity.
Needed substances and equipment:
1. Specific antiserum to the antigen to be measured
2. Availability of a radioactive labeled form of the antigen
3. A method in which the antibody-bound tracer can be separated from the
unbound tracer
4. An instrument to count radioactivity
Radioactivity:
125-I labels are usually applied although other isotopes such as C14 and H3
have also been used. Usually, high specific activity radio-labeled (125-I) antigen
is prepared by iodination of the pure antigen on its tyrosine residue(s) by
chloramine-T or peroxidase methods and then separating the radio-labeled
antigen from free-isotope by gel-filtration or HPLC. Other important
components of RIA are the specific antibody against the antigen and pure
antigen for use as the standard or calibrator.
Separation techniques:
Double antibody, charcoal, cellulose, chromatography, or solid-phase techniques
are applied to separate bound and free radio-labeled antigen. Most frequently
used is the double antibody technique combined with polyethylene glycol. The
bound or free fraction is counted in a gamma counter.
Concomitantly, a calibration or standard curve is generated with samples of
known concentrations of the unlabeled standards. The amount of antigen in an
unknown sample can be calculated from this curve.
Sensitivity:
The sensitivity can be improved by decreasing the amount of radioactively-
labeled antigen and/or antibody. The sensitivity can also be improved by the so-
called disequilibrium incubation. In this case radioactively labeled antigen is
added after initial incubation of antigen and antibody.
Troubleshooting:
The antibody must be specific for the antigen under investigation (other
antigens must not cross-react with the antibody). If any cross-reactivity is
observed, selection of a different antibody is advised or the antibody needs to be
purified from the cross-reacting antigen by affinity chromatography.
CHAPTER THIRTY
AUTOANALYZERS/AUTOMATED ANALYSERS
An automated analyser is a medical laboratory instrument designed to measure
different chemicals and other characteristics in a number of biological samples
quickly, with minimal human assistance.
Many methods of introducing samples into the analyser have been invented.
This can involve placing test tubes of sample into racks, which can be moved
along a track, or inserting tubes into circular carousels that rotate to make the
sample available. Some analysers require samples to be transferred to sample
cups. However, the effort to protect the health and safety of laboratory staff has
prompted many manufacturers to develop analysers that feature closed tube
sampling, preventing workers from direct exposure to samples. The automation
of laboratory testing does not remove the need for human expertise (results
must still be evaluated by medical technologists and other qualified clinical
laboratory professionals), but it does help with error reduction, staffing
shortages, and safety.
Routine biochemistry analysers
These are machines that process a large portion of the samples going into a
hospital or private medical laboratory. Automation of the testing process has
reduced testing time for many analytes from days to minutes. The history of
discrete sample analysis for the clinical laboratory began with the introduction
of the "Robot Chemist" invented by Hans Baruch and introduced commercially
in 1959. These analysers perform tests on whole blood, serum, plasma, or urine
samples to determine concentrations of analytes (e.g., cholesterol, electrolytes,
glucose, calcium), to provide certain hematology values (e.g., hemoglobin
concentrations, prothrombin times), and to assay certain therapeutic drugs
(e.g., theophylline), which helps diagnose and treat numerous diseases,
including diabetes, cancer, HIV, sexually transmitted diseases, hepatitis,
kidney conditions, infertility, and thyroid problems.
The AutoAnalyzer profoundly changed the character of the chemical testing
laboratory by allowing significant increases in the numbers of samples that
could be processed. The design, based on separating a continuously flowing
stream with air bubbles, largely replaced slow, clumsy, and error-prone manual
methods of analysis.
The types of tests required include enzyme levels (such as many of the liver
function tests), ion levels (e.g. sodium and potassium), and other tell-tale
chemicals (such as glucose, serum albumin, or creatinine). Simple ions are often
measured with ion selective electrodes, which let one type of ion through, and
measure voltage differences. Enzymes may be measured by the rate they change
one coloured substance to another; in these tests, the results for enzymes are
given as an activity, not as a concentration of the enzyme. Other tests use
colorimetric changes to determine the concentration of the chemical in question.
Turbidity may also be measured.
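As a worked illustration of reporting an enzyme as an activity rather than a concentration, the sketch below converts a measured rate of absorbance change into international units per litre via the Beer-Lambert law. Every number here (the rate, the volumes) is invented for the example; the molar absorptivity is the standard value for NADH at 340 nm:

# Kinetic (rate) assay: activity in U/L from the absorbance change
# per minute, using A = epsilon * l * c (Beer-Lambert law).
delta_a_per_min = 0.030   # measured absorbance change per minute
epsilon = 6220.0          # L mol^-1 cm^-1, NADH at 340 nm
path_cm = 1.0             # cuvette light path
total_vol_ml = 1.0        # reaction mixture volume
sample_vol_ml = 0.02      # serum volume added to the reaction

# Micromoles of substrate converted per minute per litre of sample (U/L).
rate_umol_per_l_min = delta_a_per_min / (epsilon * path_cm) * 1e6
activity_u_per_l = rate_umol_per_l_min * (total_vol_ml / sample_vol_ml)
print(f"{activity_u_per_l:.0f} U/L")   # about 241 U/L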
Principles of operation
After the tray is loaded with samples, a pipette aspirates a precisely measured
aliquot of sample and discharges it into the reaction vessel; a measured volume
of diluent rinses the pipette. Reagents are dispensed into the reaction vessel.
After the solution is mixed (and incubated, if necessary), it is either passed
through a colorimeter, which measures its absorbance while it is still in its
reaction vessel, or aspirated into a flow cell, where its absorbance is measured
by a flow-through colorimeter. The analyzer then calculates the analyte’s
chemical concentrations.
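A hedged sketch of that final calculation step for an endpoint colorimetric test: the Beer-Lambert law converts the measured absorbance into the analyte concentration. The absorptivity and the reading below are invented calibration values, not those of any particular instrument:

# Endpoint colorimetric calculation: A = epsilon * l * c, so
# c = A / (epsilon * l).
absorbance = 0.45    # read by the (flow-through) colorimeter
epsilon = 1.8e4      # L mol^-1 cm^-1 (illustrative value)
path_cm = 1.0        # light path through the flow cell

conc_mol_per_l = absorbance / (epsilon * path_cm)
print(f"{conc_mol_per_l * 1e6:.1f} umol/L")   # 25.0 umol/L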
Operating steps
The operator loads sample tubes into the analyzer; reagents may need to be
loaded or may already be stored in the instrument. A bar-code scanner will read
the test orders off the label on each test tube, or the operator may have to
program the desired tests. After the required test(s) are run, the results can be
displayed on-screen, printed out, stored in the analyzer’s internal memory,
and/or transferred to a computer.
Reported problems
Operators should be aware of the risk of exposure to potentially infectious
bloodborne pathogens during testing procedures and should use universal
precautions, including wearing gloves, face shields or masks, and gowns.
Cell counters
Automated cell counters sample the blood, and quantify, classify, and describe
cell populations using both electrical and optical techniques. Electrical analysis
involves passing a dilute solution of the blood through an aperture across which
an electrical current is flowing. The passage of cells through the current changes
the impedance between the terminals (the Coulter principle).
A lytic reagent is added to the blood solution to selectively lyse the red cells
(RBCs), leaving only white cells (WBCs), and platelets intact. Then the solution is
passed through a second detector. This allows the counts of RBCs, WBCs, and
platelets to be obtained. The platelet count is easily separated from the WBC
count by the smaller impedance spikes they produce in the detector due to their
lower cell volumes.
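The platelet/WBC separation described above amounts to pulse-height discrimination, which the toy sketch below illustrates. The pulse values and the volume cutoff are invented numbers, not any manufacturer's settings:

# Impedance spike amplitude scales with cell volume (Coulter
# principle), so a volume threshold separates small platelets from
# larger white cells.
pulses_fl = [8, 11, 250, 9, 320, 7, 180, 12]   # pulse heights ~ volume (fL)
PLT_WBC_CUTOFF_FL = 30                          # hypothetical cutoff

plt_count = sum(1 for p in pulses_fl if p < PLT_WBC_CUTOFF_FL)
wbc_count = sum(1 for p in pulses_fl if p >= PLT_WBC_CUTOFF_FL)
print(f"PLT events: {plt_count}, WBC events: {wbc_count}")   # 5 and 3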
Optical detection may be utilised to gain a differential count of the populations
of white cell types. A dilute suspension of cells is passed through a flow cell,
which passes cells one at a time through a capillary tube past a laser beam. The
reflectance, transmission and scattering of light from each cell is analysed by
sophisticated software giving a numerical representation of the likely overall
distribution of cell populations.
Some of the latest hematology instruments can also report cell population data,
consisting of leukocyte morphological information that may be used to flag
cell abnormalities suggestive of certain diseases.
Reticulocyte counts can now be performed by many analysers, giving an
alternative to time-consuming manual counts. Many automated reticulocyte
counts, like their manual counterparts, employ the use of a supravital dye such
as new methylene blue to stain the red cells containing reticulin prior to
counting. Some analysers have a modular slide maker which is able to both
produce a blood film of consistent quality and stain the film, which is then
reviewed by a medical laboratory professional.
Coagulometers
Automated coagulation machines, or coagulometers, measure the ability of blood
to clot by performing any of several types of tests, including partial
thromboplastin times, prothrombin times (and the calculated INRs commonly
used for therapeutic evaluation), lupus anticoagulant screens, D-dimer assays,
and factor assays.
Coagulometers require blood samples that have been drawn in tubes containing
sodium citrate as an anticoagulant. These are used because the mechanism
behind the anticoagulant effect of sodium citrate is reversible. Depending on the
test, different substances can be added to the blood plasma to trigger a clotting
reaction. The progress of clotting may be monitored optically by measuring the
absorbance of a particular wavelength of light by the sample and how it changes
over time.
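A minimal sketch of the optical monitoring just described, assuming the clotting time is read as the first sample point where absorbance rises a set amount above baseline; the trace and threshold are invented for illustration:

# Optical clot detection: absorbance is sampled over time; clot
# formation shows up as a rise above the baseline signal.
times_s = [0, 2, 4, 6, 8, 10, 12, 14]
absorbance = [0.10, 0.10, 0.11, 0.11, 0.18, 0.35, 0.50, 0.52]
baseline = absorbance[0]
THRESHOLD = 0.05   # rise above baseline taken as clot formation

clot_time_s = next(t for t, a in zip(times_s, absorbance)
                   if a - baseline > THRESHOLD)
print(f"Clotting time: {clot_time_s} s")   # 8 s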
Automatic erythrocyte sedimentation rate (ESR) readers, while not strictly
analysers, should preferably comply with the 2011-published CLSI (Clinical and
Laboratory Standards Institute) document "Procedures for the Erythrocyte
Sedimentation Rate Test" (H02-A5) and with the ICSH (International Council for
Standardization in Haematology) "ICSH review of the measurement of the
erythrocyte sedimentation rate". Both indicate the only reference method, the
Westergren method, which explicitly specifies the use of blood diluted with
sodium citrate, in 200 mm pipettes of 2.55 mm bore. After 30 or 60 minutes in a
vertical position, with no draughts, vibration, or direct sunlight allowed, an
optical reader determines how far the red cells have fallen by detecting the
level.
Miscellaneous analysers
Some tests and test categories are unique in their mechanism or scope, and
require a separate analyser for only a few tests, or even for only one test. Other
tests are esoteric in nature—they are performed less frequently than other tests,
and are generally more expensive and time-consuming to perform. Even so, the
current shortage of qualified clinical laboratory professionals has spurred
manufacturers to develop automated systems for even these rarely performed
tests.
Analysers that fall into this category include instruments that perform:
DNA labeling and detection
Osmolarity and osmolality measurement
Measurement of glycosylated haemoglobin (haemoglobin A1C), and
Aliquotting and routing of samples throughout the laboratory
CHAPTER THIRTY ONE
SOLVENT EXTRACTION
Solvent extraction, also called liquid-liquid extraction, is a technique for
separating the components of a liquid solution. The technique depends upon the
selective dissolving of one or more constituents of the solution into a
suitable immiscible liquid solvent. It is
particularly useful industrially for separation of the constituents of a mixture
according to chemical type, especially when methods that depend upon different
physical properties, such as the separation by distillation of substances of
different vapor pressures, either fail entirely or become too expensive.
Put another way, solvent extraction is the separation of materials of different
chemical types and solubilities by selective solvent action: some materials are
more soluble in one solvent than in another, so there is a preferential
extractive action. It is used to refine petroleum products, chemicals,
vegetable oils, and vitamins.
Extraction takes advantage of the relative solubilities of solutes in immiscible
solvents. If the solutes are in an aqueous solution, an organic solvent that is
immiscible with water is added. The solutes will dissolve either in the water or
in the organic solvent. If the relative solubilities of the solutes differ in the two
solvents, a partial separation occurs. The upper, less dense solvent layer can
then be drawn off and separated from the lower, denser layer.
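To illustrate why extraction is often repeated, here is a small sketch using a distribution (partition) coefficient K = C(organic)/C(aqueous); K and the volumes are invented, and the fraction of solute left in the water after each equal-volume extraction follows from a simple mass balance:

# Fraction remaining in the aqueous phase after one extraction:
# 1 / (1 + K * V_org / V_aq), applied repeatedly with fresh solvent.
K = 4.0        # solute favours the organic phase 4:1
v_aq = 100.0   # mL of aqueous solution
v_org = 50.0   # mL of fresh organic solvent per extraction

remaining = 1.0
for n in range(1, 4):
    remaining /= 1.0 + K * v_org / v_aq
    print(f"after extraction {n}: {remaining:.1%} of solute left in water")
# Three 50 mL extractions leave ~3.7%, far less than a single 150 mL
# extraction would (1 / (1 + 4 * 1.5) = ~14%).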
Industrial plants using solvent extraction require equipment for carrying out the
extraction itself (extractor) and for essentially complete recovery of the solvent
for reuse, usually by distillation.
The petroleum refining industry is the largest user of extraction. In refining
virtually all automobile lubricating oil, the undesirable constituents such as
aromatic hydrocarbons are extracted from the more desirable paraffinic and
naphthenic hydrocarbons. By suitable catalytic treatment of lower boiling
distillates, naphthas rich in aromatic hydrocarbons such as benzene, toluene,
and the xylenes may be produced. The latter are separated from paraffinic
hydrocarbons with suitable solvents to produce high-purity aromatic
hydrocarbons and high-octane gasoline. Other industrial applications include so-
called sweetening of gasoline by extraction of sulfur-containing compounds;
separation of vegetable oils into relatively saturated and unsaturated glyceride
esters; recovery of valuable chemicals in by-product coke oven plants;
pharmaceutical refining processes; and purifying of uranium.
Solvent extraction is carried out regularly in the laboratory as a commonplace
purification procedure in organic synthesis, and in analytical separations in
which the extraordinary ability of certain solvents preferentially to remove one
or more constituents from a solution quantitatively is exploited. Batch
extractions of this sort, on a small scale, are usually done in separatory funnels,
where the mechanical agitation is supplied by handshaking of the funnel.
Soxhlet Extractor
A Soxhlet extractor has three main sections: a percolator (boiler and reflux),
which circulates the solvent; a thimble (usually made of thick filter paper),
which retains the solid to be laved; and a siphon mechanism, which periodically
empties the thimble.
Assembly of Soxhlet Extractor
1. The source material containing the compound to be extracted is placed inside
the thimble.
2. The thimble is loaded into the main chamber of the Soxhlet extractor.
3. The extraction solvent to be used is placed in a distillation flask.
4. The flask is placed on the heating element.
5. The Soxhlet extractor is placed atop the flask.
6. A reflux condenser is placed atop the extractor.
Operation
The solvent is heated to reflux. The solvent vapour travels up a distillation arm,
and floods into the chamber housing the thimble of solid. The condenser ensures
that any solvent vapour cools, and drips back down into the chamber housing
the solid material. The chamber containing the solid material slowly fills with
warm solvent. Some of the desired compound dissolves in the warm solvent.
When the Soxhlet chamber is almost full, the chamber is emptied by the siphon.
The solvent is returned to the distillation flask. The thimble ensures that the
rapid motion of the solvent does not transport any solid material to the still pot.
This cycle may be allowed to repeat many times, over hours or days.During each
cycle, a portion of the non-volatile compound dissolves in the solvent. After
many cycles the desired compound is concentrated in the distillation flask. The
advantage of this system is that instead of many portions of warm solvent being
passed through the sample, just one batch of solvent is recycled.
After extraction the solvent is removed, typically by means of a rotary
evaporator, yielding the extracted compound. The non-soluble portion of the
extracted solid remains in the thimble, and is usually discarded.
CHAPTER THIRTY TWO
CHROMATOGRAPHY
A technique for analysis of chemical substances. The term chromatography
literally means color writing, and denotes a method by which the substance to
be analyzed is poured into a vertical glass tube containing an adsorbent, the
various components of the substance moving through the adsorbent at different
rates of speed, according to their degree of attraction to it, and producing bands
of color at different levels of the adsorption column. The term has been extended
to include other methods utilizing the same principle, although no colors are
produced in the column.
The mobile phase of chromatography refers to the fluid that carries the mixture
of substances in the sample through the adsorptive material. The stationary or
adsorbent phase refers to the solid material that takes up the particles of the
substance passing through it. Kaolin, alumina, silica, and activated charcoal have
been used as adsorbing substances or stationary phases.
The technique is a valuable tool for the research biochemist and is readily
adaptable to investigations conducted in the clinical laboratory. For example,
chromatography is used to detect and identify in body fluids certain sugars and
amino acids associated with inborn errors of metabolism.
Adsorption chromatography: that in which the stationary phase is an
adsorbent. Adsorption chromatography is probably one of the oldest types of
chromatography around. It utilizes a mobile liquid or gaseous phase that is
adsorbed onto the surface of a stationary solid phase. The equilibration
between the mobile and stationary phases accounts for the separation of
different solutes.
Affinity chromatography is based on a highly specific biologic interaction such
as that between antigen and antibody, enzyme and substrate, or receptor and
ligand. Any of these substances, covalently linked to an insoluble support or
immobilized in a gel, may serve as the sorbent allowing the interacting
substance to be isolated from relatively impure samples; often a 1000-fold
purification can be achieved in one step. This is the most selective type of
chromatography employed. It utilizes the specific interaction between one kind
of solute molecule and a second molecule that is immobilized on a stationary
phase. For example, the immobilized molecule may be an antibody to some
specific protein. When a solution containing a mixture of proteins is passed by
this molecule, only the specific protein reacts with this antibody, binding it
to the stationary phase. This protein is later extracted by changing the ionic
strength
or pH.
Column chromatography: the technique in which the various solutes of a
solution are allowed to travel down a column, the individual components being
adsorbed by the stationary phase. The most strongly adsorbed component will
remain near the top of the column; the other components will pass to positions
farther and farther down the column according to their affinity for the
adsorbent. If the individual components are naturally colored, they will form a
series of colored bands or zones.
Gel-filtration chromatography (gel-permeation chromatography): another name
for exclusion chromatography (see molecular sieve chromatography, below).
Ion exchange chromatography: that utilizing ion exchange resins, to which are
coupled either cations or anions that will exchange with other cations or anions
in the material passed through their meshwork. In this type of chromatography,
a resin (the stationary solid phase) is used to covalently attach anions or
cations onto it. Solute ions of the opposite charge in the mobile liquid phase
are attracted to the resin by electrostatic forces.
Molecular sieve chromatography (exclusion chromatography): also known as
gel permeation or gel filtration, this type of chromatography lacks an
attractive interaction between the stationary phase and solute. The liquid or
gaseous phase passes through a porous gel which separates the molecules
according to their size. The pores are normally small and exclude the larger
solute molecules, but allow smaller molecules to enter the gel, causing them to
flow through a larger volume. This causes the larger molecules to pass through
the column at a faster rate than the smaller ones.
Paper chromatography: a form of chromatography in which a sheet of blotting
paper, usually filter paper, is substituted for the adsorption column. After
separation of the components as a consequence of their differential migratory
velocities, they are stained to make the chromatogram visible. In the clinical
laboratory, paper chromatography is employed to detect and identify sugars and
amino acids.
Partition chromatography: a process of separation of solutes utilizing the
partition of the solutes between two liquid phases, namely the original solvent
and the film of solvent on the adsorption column. This form of chromatography
is based on a thin film formed on the surface of a solid support by a liquid
stationary phase. Solutes equilibrate between the mobile phase and the
stationary liquid.
Thin-layer chromatography is that in which the stationary phase is a thin
layer of an adsorbent such as silica gel coated on a flat plate. It is otherwise
similar to paper chromatography.
Column chromatography (an example of chromatography)
1. Feed Injection
The feed is injected into the mobile phase. The mobile phase flows through the
system by the action of a pump (older analytical chromatography used capillary
action or gravity to move the mobile phase).
2. Separation in the Column
As the sample flows through the column, its different components will adsorb to
the stationary phase to varying degrees. Those with strong attraction to the
support move more slowly than those with weak attraction. This is how the
components are separated.
3. Elution from the Column
After the sample is flushed or displaced from the stationary phase, the different
components will elute from the column at different times. The components with
the least affinity for the stationary phase (the most weakly adsorbed) will elute
first, while those with the greatest affinity for the stationary phase (the most
strongly adsorbed) will elute last; a small sketch after step 4 illustrates
this ordering.
4. Detection
The different components are collected as they emerge from the column. A
detector analyzes the emerging stream by measuring a property which is related
to concentration and characteristic of chemical composition. For example, the
refractive index or ultraviolet absorbance is measured.
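As a toy illustration of steps 2 and 3 above: retention grows with affinity for the stationary phase, so sorting components by affinity gives their elution order. The affinity numbers here are arbitrary, invented for the example:

# Weakest-adsorbed species elutes first, strongest-adsorbed last.
affinities = {"solute B": 1.2, "solute C": 2.0, "solute A": 3.5}
elution_order = sorted(affinities, key=affinities.get)
print(" -> ".join(elution_order))   # solute B -> solute C -> solute A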
Example
The figure below shows a simple separation by chromatography. A continuous
flow of solvent carries a solution of solutes A and B down a column. (a) As the
solvent carries the two solutes down the column, we begin to see some
separation of the solution. (b) and (c) At later points in time, it can be seen
that solute B is moving at a much faster rate than A. In (d), solute B emerges
first, while solute A finally emerges in (e). Thus, solute A has a greater
affinity for the
stationary phase than solute B. By varying the pH of the solvent or temperature
of the column, the output of the column can be significantly altered, such as the
timing of when individual species emerge.
Chromatography - The Chromatogram
The information that can be obtained
The level of complexity of the sample is indicated by the number of peaks
which appear.
Qualitative information about the sample composition is obtained by
comparing peak positions with those of standards.
Quantitative assessment of the relative concentrations of components is
obtained from peak area comparisons.
Column performance is indicated by comparison with standards.
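The quantitative point above can be shown with a short sketch: relative concentrations are estimated from each peak's share of the total area (assuming equal detector response, which real work corrects for with calibration standards; the areas are invented):

# Relative amounts from integrated peak areas.
peak_areas = {"component 1": 1250.0, "component 2": 3750.0}
total_area = sum(peak_areas.values())
for name, area in peak_areas.items():
    print(f"{name}: {area / total_area:.0%} of total peak area")
# component 1: 25%, component 2: 75%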
In paper chromatography there is what is known as the stationary phase, which
is the absorbent chromatography paper, and the mobile phase, which is a liquid
solvent (or mixture of solvents) used to carry the sample solutes under
analysis along the paper. Usually, one uses chromatography to find out the
components of a sample, which are separated depending on how soluble they are
in particular solvents and hence how far they travel along the chromatography
paper. Samples are usually of organic matter (not ionic salts) which dissolve
in certain polar solvents (namely water) or non-polar (organic) solvents.
Principle
In order to make the technique more scientific rather than a mere
interpretation by sight, the retention factor (Rf value for short) is applied
in chromatography. A particular compound will travel the same distance along
the stationary phase in a specific solvent (or solvent mixture), given that
other experimental conditions are kept constant. In other words, every compound
(dye, pigment, organic substance, etc.) has a specific Rf value for every
specific solvent and solvent concentration. Rf values come in very handy for
identification, because one can compare the Rf values of the unknown sample
(or its constituents) with the Rf values of known compounds.
Calculation
The Rf value is defined as the ratio of the distance moved by the solute (i.e.
the dye or pigment under test) to the distance moved by the solvent (known as
the solvent front) along the paper, where both distances are measured from the
common origin or application baseline, that is, the point where the sample is
initially spotted on the paper.
Because the solvent front always travels farther than the solute, Rf values
are always between 0 (one extreme, where the solute remains fixed at its
origin) and 1 (the other extreme, where the solute is so soluble that it moves
as far as the solvent).
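A minimal worked example of the definition above, with illustrative distances measured from the application baseline:

# Rf = (distance moved by solute) / (distance moved by solvent front)
solute_distance_mm = 33.0    # origin to the centre of the dye zone
solvent_front_mm = 50.0      # origin to the solvent front

rf = solute_distance_mm / solvent_front_mm
print(f"Rf = {rf:.2f}")   # 0.66, dimensionless, between 0 and 1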
Rf values do not have units, since each is a ratio of distances. Because
solvent mixtures are often applied, Rf values are usually written as in the
following examples:
Rf = 0.66 (60% Ethanol) - if % is given it is assumed that the mixture is in
water hence 60% ethanol 40% water.
Rf = 0.78 (Ethanol-Methanol mixture {1:2}) - a mixture of 1 part Ethanol
and 2 parts Methanol
Rf = 0.25 (Ethanol-Methanoic Acid-Acetone mixture {4:3:1}) - a mixture of
4 parts Ethanol, 3 parts Methanoic Acid and 1 part Acetone. Note that
mixture compounds with larger proportions are placed first in the mixture
sequence.
Rf Values for Identification
Note that different compounds can have the same Rf value for a particular
solvent, but are unlikely to have similar Rf values for a number (2-4) of
different solvents. Therefore, the more different solvents (or mixtures) are
used, the more Rf values are obtained, and so the more conclusive the
identification is. Identification relies on comparing a number of Rf values of
the unknown sample with known Rf values of a number of known dyes.
Environment Conditions
As mentioned before, the Rf value of a particular pure dye or analyte in a
particular solvent (or mixture) is constant if the following experimental
conditions are kept unaltered:
i. Temperature
ii. Chromatography medium, i.e. the same type and grade of chromatography
paper
iii. Solvent concentration and purity
iv. Amount of sample spotted on Chromatography medium
If the same grade of Chromatography medium is used (typically Grade 1 CHR or
3 MM CHR) and the room temperature of the experiment does not fluctuate too
much, the remaining critical variable to be observed is the amount of dye
spotted. Large amounts tend to form elongated zones with an uneven distribution
of dye along the zone. Spots that are too dilute make the separated dye
difficult to see. Trial and error is involved in finding the approximate ideal
amount to be spotted.
Problems with dye zones when determining Rf values
In the ideal scenario, the zone of the dye or component that has moved along
the chromatography paper is a small, compact, disc-like spot. In the real
world, the zones can be elongated (streak-like), and this raises the problem of
where the measurement should be taken to calculate the Rf value: from the top,
the centre, or the bottom of the zone. In practice the zone length can vary
from 4 to 40 mm. By definition, the Rf value is taken from the distance to the
centre of the zone. This is, however, prone to visual estimation errors, so the
best way to locate the centre is to measure the following two distances:
a. the measurement from the origin to the top edge of the zone,
b. the measurement from the origin to the bottom edge of the zone
The diagram below shows the distances to be measured in order to calculate
these Rf values.
By definition the actual Rf is that of the centre (b), but all three should be
compared and analysed. Shorter, compact zones give more accurate results,
while elongated streak-like zones (especially those starting from the origin)
should be discarded, as in such cases the Rf values are not reliable. Zones
with an uneven distribution of dye or atypical shapes should also be discarded,
and Rf values in other solvents that give good zones should be sought. The
reference Rf value should be calculated from at least 3 different runs.
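The centre-of-zone procedure above can be written out as a short sketch; the run measurements are invented, and the mean over three runs follows the recommendation just given:

# Centre of the zone = midpoint of the top- and bottom-edge
# measurements; Rf = centre / solvent-front distance.
def rf_from_zone(top_mm, bottom_mm, solvent_front_mm):
    centre_mm = (top_mm + bottom_mm) / 2.0
    return centre_mm / solvent_front_mm

# (top edge, bottom edge, solvent front) for three separate runs:
runs = [(36.0, 30.0, 50.0), (35.0, 29.0, 48.0), (37.0, 31.0, 51.0)]
rfs = [rf_from_zone(*r) for r in runs]
print(f"reference Rf = {sum(rfs) / len(rfs):.2f}")   # about 0.66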
Specific Rf values of dyes and compounds obtained in the lab
Rf value results can be obtained in this way for various dyes and compounds,
either known ones or those isolated from inks, markers, etc., and tabulated
per dye and solvent. Note that the smaller the standard deviation is, the more
accurate the results are. The method is standardised as much as possible to
provide reproducible and reliable results.
CHAPTER THIRTY THREE
FLOW CYTOMETRY
Flow cytometry is a technology that is used to analyse the physical and
chemical characteristics of particles in a fluid as they pass through at least
one laser. Cell components are fluorescently labelled and then excited by the
laser to emit light at varying wavelengths.
The fluorescence can be measured to determine various properties of single
particles, which are usually cells. Up to thousands of particles per second can be
analysed as they pass through the liquid stream. Examples of the properties
measured include the particle’s relative granularity, size and fluorescence
intensity as well as its internal complexity. An optical-to-electronic coupling
system is used to record the way in which the particle emits fluorescence and
scatters incident light from the laser.
Three main systems make up the flow cytometer instrument and these are the
fluidics, the optics and the electronics. The purpose of the fluidics system is to
transport the particles in a stream of fluid to the laser beam where they are
interrogated. Any cell or particle that is 0.2 to 150 μm in size can be analyzed. If
the cells are from solid tissue, they require disaggregation before they can be
analyzed. Although cells from animals, plants, bacteria, yeast or algae are usually
measured, other particles such as chromosomes or nuclei can also be examined.
Some particles such as marine algae are naturally fluorescent, but in general,
fluorescent labels are required to tag components of the particle. The section of
the fluid stream that contains the particles is referred to as the sample core.
The optics system is made up of lasers which illuminate the particles present in
the stream as they pass through and scatter light from the laser. Any
fluorescent molecules that are on the particle emit fluorescence, which is
detected by carefully positioned lenses. Generally, scattered light is measured
at two different angles, and fluorescence from six or more fluorochromes can
be detected. Optical filters and beam
splitters then direct the light signals to the relevant detectors, which emit
electronic signals proportional to the signals that hit them. Data can then be
collected on each particle or event and the characteristics of those events or
particles are determined based on their fluorescent and light scattering
properties.
The electronics system is used to change the light signals detected into
electronic pulses that a computer can process. The data can then be studied to
ascertain information about a large number of cells over a short period.
Information on the heterogeneity and different subsets within cell populations
can be identified and measured. Some instruments have a sorting feature in the
electronics system that can be used to charge and deflect particles so that
certain cell populations can be sorted for further analysis.
The data are usually presented in the form of single parameter histograms or as
plots of correlated parameters, which are referred to as cytograms. Cytograms
may display data in the form of a dot plot, a contour plot or a density plot.
Flow cytometry provides rapid analysis of multiple characteristics of single cells.
The information obtained is both qualitative and quantitative. Whereas in the
past flow cytometers were found only in larger academic centers, advances in
technology now make it possible for community hospitals to use this
methodology. Contemporary flow cytometers are much smaller, less expensive,
more user-friendly, and well suited for high-volume operation. Flow cytometry
is used for immunophenotyping of a variety of specimens, including whole
blood, bone marrow, serous cavity fluids, cerebrospinal fluid, urine, and solid
tissues. Characteristics that can be measured include cell size, cytoplasmic
complexity, DNA or RNA content, and a wide range of membrane-bound and
intracellular proteins.
Schematic diagram of a flow cytometer.
A single cell suspension is hydrodynamically focused with sheath fluid to
intersect an argon-ion laser. Signals are collected by a forward angle light scatter
detector, a side-scatter detector (1), and multiple fluorescence emission
detectors (2–4). The signals are amplified and converted to digital form for
analysis and display on a computer screen.
General Principles
Flow cytometry measures optical and fluorescence characteristics of single cells
(or any other particle, including nuclei, microorganisms, chromosome
preparations, and latex beads). Physical properties, such as size (represented by
forward angle light scatter) and internal complexity (represented by right-angle
scatter) can resolve certain cell populations. Fluorescent dyes may bind or
intercalate with different cellular components such as DNA or RNA. Additionally,
antibodies conjugated to fluorescent dyes can bind specific proteins on cell
membranes or inside cells. When labeled cells are passed by a light source, the
fluorescent molecules are excited to a higher energy state. Upon returning to
their resting states, the fluorochromes emit light energy at longer wavelengths.
The use of multiple fluorochromes, each with similar excitation wavelengths and
different emission wavelengths (or “colors”), allows several cell properties to be
measured simultaneously. Commonly used dyes include propidium iodide,
phycoerythrin, and fluorescein, although many other dyes are available. Tandem
dyes with internal fluorescence resonance energy transfer can create even
longer wavelengths and more colors.
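A toy sketch of how the measured signals are used downstream: forward scatter (FSC) tracks size and side scatter (SSC) internal complexity, so a simple rectangular gate can pick out a population. The events and the gate bounds below are invented and not from any real instrument:

# Rectangular gating on forward scatter (size) and side scatter
# (internal complexity).
events = [
    {"fsc": 120, "ssc": 40},    # small, low complexity
    {"fsc": 400, "ssc": 300},   # large, high complexity
    {"fsc": 130, "ssc": 55},
]
LYMPH_GATE = {"fsc": (80, 200), "ssc": (0, 100)}   # hypothetical bounds

def in_gate(event, gate):
    return all(lo <= event[k] <= hi for k, (lo, hi) in gate.items())

gated = [e for e in events if in_gate(e, LYMPH_GATE)]
print(f"{len(gated)} of {len(events)} events fall inside the gate")   # 2 of 3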
Applications
Flow cytometry is used to perform several procedures including:
Cell counting
Cell sorting
Detection of biomarkers
Protein engineering
Flow cytometry has numerous applications in science, including those relevant
to healthcare. The technology has been widely used in the diagnosis of health
conditions, particularly diseases of the blood such as leukemia, although it is
also commonly used in various other fields of clinical practice as well as
in basic research and clinical trials.
Some examples of the fields this technology is used in include molecular biology,
immunology, pathology, marine science and plant biology. In medicine, flow
cytometry is a vital laboratory process used in transplantation, oncology,
hematology, genetics and prenatal diagnosis. In marine biology, the abundance
and distribution of photosynthetic plankton can be analysed.
Flow cytometry can also be used in the field of protein engineering, to help
identify cell surface protein variants.