
COURSE TITLE: MEDICAL IMAGING TECHNIQUES AND DATA

ANALYSIS
COURSE CODE: BIO3005
COURSE TYPE: LT
MODULE NO. 2_PART 1
COURSE INSTRUCTOR: DR. N. VIGNESH

EMPLOYEE ID: 100589

DESIGNATION: ASSISTANT PROFESSOR

DEPARTMENT: SCHOOL OF BIOENGINEERING AND TECHNOLOGY


X-rays
• X-rays were discovered by the German physicist Wilhelm Conrad Röntgen in 1895.

• X-rays are widely used in visualizing the internal anatomy of humans.

• Radiological examination is one of the most important diagnostic aids available in medical practice.

• It is based on the fact that various anatomical structures of the body have different densities for the
X-rays.

• When X-rays from a point source penetrate a section of the body, the internal body structures absorb
varying amounts of the radiation.

• The radiation that leaves the body has a spatial intensity variation.

• The X-ray intensity distribution is visualized by a suitable device like a photographic film.

• A shadow image is generated that corresponds to the X-ray density of the organs in the body section.
Chest radiograph
• Radiography exploits the capability of X-rays to penetrate matter, coupled with the differential absorption observed in various materials.
• It also exploits their ability to produce luminescence and their effect on photographic emulsions.
• X-ray picture is called a radiograph.
• Chest radiographs are mainly taken for the examination of the lungs and the heart.
• Because of the air enclosed in the respiratory tract, the larger bronchi are seen as negative contrast.
• Pulmonary vessels are seen as a positive contrast against the air-filled lung tissue.
• Different types of lung infection are accompanied by changes in the location, size and extent of the
shadow.
Angiography
• Heart examinations are performed by taking frontal and lateral films.
• The evaluation is performed partly by calculating the total heart volume and partly on the basis of any
changes in shape.
• For visualization of the rest of the circulatory system and for the special examinations of the heart, use is
made of injectable, water-soluble organic compounds of iodine.
• A contrast medium is injected into an artery or vein, usually through a catheter placed in the vessel.
• Larger organs of the body can be examined by visualizing the associated vessels and this technique is
called angiography.
• The examination is designated according to the organ examined.
• Coronary angiography means examination of coronary vessels of the heart. Cerebral angiography – brain
examination.
• The entire gastro-intestinal tract can be imaged by using an
emulsion of barium sulphate as a contrast medium.
• It is swallowed or administered to diagnose common
pathological conditions such as ulcers, tumours or
inflammatory conditions.
• Negative and positive contrast media are used for
visualizing the spinal canal, the examination being known as
myelography.
• The central nervous system is usually examined by
pneumography
Nature of X-rays

• X-rays are electromagnetic radiation located at the short-wavelength end of the electromagnetic spectrum.

• The X-rays in the medical diagnostic region have wavelengths of the order of an angstrom.

• They propagate with the speed of light and are unaffected by electric and magnetic fields.

• The wavelength of the X-rays depends on the voltage with which the radiation is produced: the higher the tube voltage, the shorter the minimum wavelength. Hence, the maximum energy of the X-rays can be determined from the voltage (a minimal sketch follows).
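As an illustrative sketch (not part of the original slides), the relation between tube voltage and the shortest emitted wavelength (the Duane-Hunt limit, lambda_min = hc/eV) can be evaluated as follows; the function name and the example voltages are only for demonstration.

```python
# Sketch: minimum X-ray wavelength and maximum photon energy for a given tube voltage.
# Assumes the Duane-Hunt relation lambda_min = h*c / (e*V); names are illustrative.
H = 6.626e-34         # Planck constant, J*s
C = 3.0e8             # speed of light, m/s
E_CHARGE = 1.602e-19  # electron charge, C

def min_wavelength_angstrom(tube_kv):
    """Shortest wavelength (in angstroms) produced at a tube voltage of tube_kv kilovolts."""
    volts = tube_kv * 1e3
    lam_m = H * C / (E_CHARGE * volts)   # metres
    return lam_m * 1e10                  # 1 m = 1e10 angstrom

for kv in (30, 100, 200):
    print(f"{kv} kV -> lambda_min ≈ {min_wavelength_angstrom(kv):.3f} Å, E_max = {kv} keV")
```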
Properties of X-rays
• Because of short wavelength and extremely high energy, X-rays are able to penetrate through
materials which readily absorb and reflect visible light.
• X-rays are absorbed when passing through matter.
• The extent of absorption depends upon the density of the matter.
• X-rays produce secondary radiation in all matter through which they pass.
• The secondary radiation is composed of scattered radiation, characteristic radiation and
electrons.
• X-rays produce ionization in gases and influence the electric properties of liquids and solids.
• Fluorescence phenomenon is observed with X-rays, as seen with visible light.
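The statement above, that the extent of absorption depends on the density of the matter traversed, can be illustrated with the standard exponential attenuation law; this is a minimal sketch, and the attenuation coefficients used are placeholders rather than measured values.

```python
# Sketch: exponential attenuation of an X-ray beam, I = I0 * exp(-mu * x).
# The attenuation coefficients below are illustrative placeholders, not measured data.
import math

def transmitted_intensity(i0, mu_per_cm, thickness_cm):
    """Intensity remaining after the beam passes through a material of given thickness."""
    return i0 * math.exp(-mu_per_cm * thickness_cm)

# A denser material (higher mu) absorbs more for the same thickness.
print(transmitted_intensity(1.0, mu_per_cm=0.2, thickness_cm=5.0))  # soft-tissue-like
print(transmitted_intensity(1.0, mu_per_cm=3.0, thickness_cm=5.0))  # bone-like
```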
Production of X-rays
• X-rays are produced whenever electrons collide at very high speed with matter and are thus
suddenly stopped.
• X-rays are produced in a specially constructed glass tube, which basically comprises:
a. A source for the production of electrons.
b. An energy source to accelerate the electrons.
c. A free electron path.
d. A means of focusing the electron beam.
e. A device to stop the electrons.
Stationary Anode Tube (SAT)
• The normal tube is a vacuum diode in which electrons are generated by thermionic emission from
the filament of the tube.
• The electron stream is electrostatically focused on a target on the anode by means of a suitably
shaped cathode cup.
• The kinetic energy of the electrons impinging on the target is converted into X-rays.
• Parameters in SAT: Tube current (depends on the filament temperature) and the tube voltage
(depends on the primary voltage).
• Cathode block with the filament inside, is made from nickel or stainless steel.
• The filament is a closely wound helix of tungsten wire and the target is a composite of tungsten and
copper.
• Tungsten is chosen as a target due to high
atomic number (74) and high melting point
(3400⁰C).
• To withstand heavy thermal loads.
• Copper being an excellent thermal conductor,
performs the vital function of carrying the heat
rapidly away from the tungsten target.
• The heat flows through the anode to the outside
of the tube.
• Oil is provided as coolant.
• The electrodes carry high voltages and are shielded.
• The shield is made shockproof by an earthing arrangement.
Rotating Anode Tube
• X-ray tube is a limiting factor in generating higher tube voltages and more penetrating
radiations.
• This is primarily due to the heat generated at the anode.
• The heat capacity of the anode is a function of the focal spot area.
• Therefore, the power that can be absorbed is increased if the effective area of the focal spot is widened, which is accomplished by rotating the anode.
• Rotation ensures controlled heating of the anode: the electron bombardment is continually moved to a fresh area of the target before any one area overheats.
• The anode is a disk of tungsten or an alloy of tungsten
and 10% rhenium.
• The anode rotates at a speed of 3,000-3,600 or
9,000-10,000 rpm.
• The tungsten disk has an angular orientation of 5-20⁰.
• The design of anode helps to limit the power density
incident on the physical focal spot, while creating a
small effective focal spot.
• The heat produced during an exposure is spread over
a large surface of anode.
• Increased heat-loading capacity enhances the intensity
and power levels of X-ray.
• The rotor is made from copper and molybdenum.
• Molybdenum is chosen because, like tungsten, it has a very high melting point.
X-ray machine
• There are two parts of the circuit.
• One of them is for producing high voltage, which is applied to the tube’s anode and comprises a high voltage
step-up transformer followed by rectification.
• A voltage selector switch facilitates change in voltage between exposures.
• The second part of the circuit concerns the control of heating of the X-ray tube filament.
• The filament is heated with a 6-12 V AC supply at a current of 3-5 A.
• The filament current is controlled by a rheostat.
• A preferred method of providing high voltage direct current to the anode of the X-ray tube is by use of bridge
rectifier.
• Moving coil meters are used for making current measurements (mA).
X-ray classification

• X-ray tubes are classified on the basis of their application for diagnostic or therapeutic purposes.

• For diagnostic applications, it is usual to employ high milliamperes and short exposure times.

• In contrast, high voltage and relatively lower mA are necessary for therapeutic uses.
High voltage generation
• Voltages in the range of 30-200 kV are required for the production of X-rays and they are
generated by high voltage transformer.
• A high ratio step up transformer is used so that the voltages applied to primary winding
are small in comparison to those taken from the secondary winding.
• The voltage ratio is 1:500.
• An input of 250 V produces an output of 125 kV.
• High tension transformer assembly is immersed in special oil, which provides a high
level of insulation.
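A minimal sketch of the step-up relation implied by the 1:500 voltage ratio quoted above (250 V in, 125 kV out); the ideal-transformer formula V_secondary = V_primary × turns ratio is assumed and losses are ignored.

```python
# Sketch: ideal step-up transformer relation V_secondary = V_primary * turns_ratio.
def secondary_voltage(v_primary, turns_ratio):
    """Ideal (lossless) secondary voltage for a given primary voltage and turns ratio."""
    return v_primary * turns_ratio

print(secondary_voltage(250, 500))  # 125000 V = 125 kV, as in the example above
```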
Types of high voltage generation

• Self-rectified circuit (one pulse).
• Full-wave rectification X-ray circuit (two pulse).
• Three-phase power for X-ray generation.
• Six-rectifier circuit (six pulse).
• Twelve-rectifier circuit (twelve pulse).
High voltage generation – self-rectified circuit
• It is one pulse X-ray generation system.
• The high voltage is produced by using a step-up transformer whose primary is connected to an
auto-transformer.
• The secondary of the High Tension (HT) transformer is directly connected to the anode of the X-ray
tube.
• This arrangement is termed as self-rectification and it is used in mobile and dental X-ray units.
• These machines have maximum tube currents of about 20 mA and a voltage of about 100 kV.
• Parallel combination of a diode and a resistance is applied in series with the primary coil of the HT
transformer.
• Self-rectification reduces the cost and complexity of X-ray machines.
High voltage generation – Full-wave rectification
• It is a two-pulse X-ray generation system.
• Limitation of the self-rectified circuit: no X-rays are produced during the non-conducting half-cycle of the X-ray tube, so part of the exposure time is wasted.
• The full-wave rectification circuit produces X-rays during each half-cycle of the applied
sinusoidal 50 Hz mains supply voltage.
• Anode is positive with respect to the cathode over both the half-cycles.
• Full-wave rectified circuits are used in the medium and high-capacity X-ray units.
• These are most commonly employed for diagnostic X-ray examination.
Limitations of single phase X-ray circuits
• The intensity of radiation produced is lower because no radiation is generated during a large
portion of the exposure time.
• When the tube voltage is appreciably lower than the peak voltage, the X-rays produced are of
low energy and get mostly transformed into heat at the anode.
• A considerable part of the radiation produced is absorbed by the filter or the tube housing and
produces a poor quality image.
• The deficiencies of the single-phase system can be overcome by using three-phase power in
X-ray machines.
• Three phase supply can result in steady power to the X-ray tube instead of pulsating power.
High tension cable
• In view of the very high voltages applied to the X-ray tube, it is necessary to use special highly insulated cables
for its connections to the generator.

• The centre of the cable comprises three conductors individually insulated for the low filament voltages and
surrounded by semi-conducting rubber.

• This, in turn, is surrounded by non-conducting rubber which provides the insulation against the high voltage
also carried by the centre conductors.

• The cable is shielded with a woven copper braiding and it is earthed.

• Final protective layer covering is made using vinyl or some other plastic.

• The grounded metal braid serves as a safety path to ground for the high voltage.

• The effect of cable capacitance is that the energy is stored during the conduction period of the rectifiers and the
energy is delivered to the tube during the non-conducting period.
Collimator
• In order to increase the image contrast and to reduce the dose to the patient, the X-ray beam must be
limited to the area of interest.
• The dosage of X-ray is manipulated using collimators and grids.
• The collimator is placed between the X-ray tube and the patient and it consists of a sheet of lead with a
circular or rectangular hole of suitable size.
• In some cases, collimator may consist of four adjustable lead strips which can be moved relative to each
other.
• The collimator keeps the X-ray dose at the smallest practicable value and further increases image contrast by reducing the amount of scattered radiation reaching the film.
• Hence, the smallest usable field size and the collimator together minimize the loss of contrast due to scattered radiation.
Grid
• Grids are inserted between the patient and the film cassette in order to reduce the loss of contrast
due to scattered radiation.

• A grid consists of thin lead strips separated by spacers of a low attenuation material.

• The lead strips are designed so that the primary radiation from the X-ray focus will pass between them while the scattered radiation from the object is largely attenuated.

• Grid lines can obscure fine image detail; this is avoided by using moving grids.
Exposure timing systems
• A timer is used in X-ray machine to initiate and terminate the X-ray exposure.

• The timer controls the X-ray contactor which in turn, controls the voltage to the primary of the high voltage
transformer.

• Timers vary in their methods of operation – simple mechanical timer to microcontroller based electronic timer.

• The simple mechanical timers are spring-driven (hand-operated type) and electronic timers use thyristors.

• High voltage X-ray generators have very small exposure time and require precise control by electronic timer.

• Electronic and digital timers make use of an oscillator, a counter and associated logic.

• The reference oscillator feeds its frequency to an AND circuit, and the counter logic acts as a flip-flop switch that disconnects the oscillator from the counter to terminate the X-ray exposure (a minimal sketch of this scheme follows).
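A minimal sketch of the oscillator/counter timing scheme described above: pulses from a reference oscillator are counted, and the exposure is terminated once a preset count is reached. The oscillator frequency and exposure value below are hypothetical.

```python
# Sketch of a digital exposure timer: count reference-oscillator pulses and
# terminate the exposure once the preset count (exposure time) is reached.
# The frequency and exposure values are hypothetical.

def exposure_preset(oscillator_hz, exposure_s):
    """Number of oscillator pulses to count before terminating the exposure."""
    return round(oscillator_hz * exposure_s)

def run_exposure(oscillator_hz, exposure_s):
    preset = exposure_preset(oscillator_hz, exposure_s)
    count = 0
    x_ray_on = True                      # contactor closed, exposure started
    while x_ray_on:
        count += 1                       # one oscillator pulse passes the AND gate
        if count >= preset:
            x_ray_on = False             # flip-flop disconnects the oscillator: exposure ends
    print(f"Exposure terminated after {count} pulses ({count / oscillator_hz * 1000:.1f} ms)")

run_exposure(oscillator_hz=100_000, exposure_s=0.05)  # 50 ms exposure -> 5000 pulses
```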
Automatic exposure control
• Radiographic practice is based on the selection of appropriate X-ray exposure factors according to patient size, shape and physical condition.

• The standard operating procedure involves the introduction of Anatomically Programmed Radiography (APR) by combining all
the primary controls of the generator and the Automatic Exposure Control.

• The use of machine stored parameters results in better quality of radiographs.

• Methods of automatic exposure control: photocell and ionization chamber.

• In the photocell-based method, a fluorescent detector is placed on the exit side of the patient and behind the radiographic
cassette.

• The system monitors the X-ray intensity transmitted through the film screen system.

• Alternatively, an ionization chamber is placed between the patient and the cassette.

• The signal from the chamber is amplified and used to control a high speed relay which terminates the exposure when a pre-set density level is reached.
X-ray visualization
• X-rays cannot be detected or visualized directly by human senses.
• Indirect methods for X-ray visualization include X-ray films, fluorescent screens, X-ray image
intensifier television system.
• X-rays have much shorter wavelength than light, but react with photographic emulsions in a
similar fashion.
• The blackening of the photographic film is proportional to the X-ray intensity.
• Intensifying screens consist of a layer of fluorescent material and cover the X-ray emulsion film in a light-tight cassette.
• The screens increase the sensitivity of the X-ray film significantly.
Fluorescent screens
• X-rays are converted into a visual image on a fluorescent screen and it can be viewed directly.
• It facilitates a dynamic radiological study of the human anatomy.
• The fluorescent screen consists of a plastic base coated with a thin layer of fluorescent material,
zinc cadmium sulphide.
• In turn, the fluorescent screen is bonded to a lead-glass plate.
• Zinc cadmium sulphide emits light at 550 nm, which corresponds to the green part of the spectrum.
• Fluoroscopy requires imaging in dark room due to the faint intensity of emitted light radiation.
X-ray image intensifier
• Large glass tube with input screen
converting X-ray image into light image.
• The light image is transmitted to
photo-cathode which converts light image
into an equivalent electron image.
• Image intensification takes place because of
the very small output screen size and
electron magnification in the tube.
• X-ray image intensifiers use a thin layer of cesium iodide, which has higher X-ray absorption than zinc cadmium sulphide.
• The output window, through which the light image is examined, is flat and allows image transfer through a large numerical aperture objective.

• X-ray image intensifier system is coupled to a closed circuit television and video recording facilities.

• The combination of X-ray image intensifier and TV system must control the X-ray generator so that the displayed image density remains constant.

• This is done by automatic dose rate control, also known as automatic brightness control.

• The exposure control circuits drive the voltage and current of the generator (XG).

• If the image intensifier is switched to a higher magnification, the current in the X-ray tube is increased in
inverse proportion to the diameter on the X-ray screen.

• At the same time, the lead diaphragm output size between the patient and the X-ray tube is reduced to cut down
on the area of patient exposure.

• Diaphragms and photo-camera can be opened up, if the light output is not sufficient.
• In video fluoroscopic X-ray systems, the detector embodies the key technology.
• The CCD (charge coupled device) camera offers improvements in image quality.
• The introduction of selenium as a photoconductor produces images directly in digital format.
• Optical image is provided by the cesium iodide input screen, and it is directly detected by
a high resolution amorphous silicon photo-diode matrix and a thin film transistor array.
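As a supplementary sketch (not stated in the slides), the intensification arising from the very small output screen can be quantified as a minification gain, (input diameter / output diameter)², multiplied by the flux gain from electron acceleration; the dimensions and flux gain used below are illustrative, not specifications of a particular tube.

```python
# Sketch: brightness gain of an X-ray image intensifier
# = minification gain (d_in/d_out)^2 x flux gain from electron acceleration.
# All numbers are illustrative.

def brightness_gain(input_diameter_mm, output_diameter_mm, flux_gain):
    minification_gain = (input_diameter_mm / output_diameter_mm) ** 2
    return minification_gain * flux_gain

# Example: 230 mm CsI input screen, 25 mm output screen, flux gain of 50
print(brightness_gain(230, 25, 50))   # ≈ 4200x overall brightness gain
```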
Dental X-ray machines
• X-rays are the only media available to detect location of the teeth, their internal condition and the degree
of decay at an early stage.
• Since the object-film distance is low, and the tissue and bone thickness are limited, an X-ray machine of
low power is adequate to obtain the radiograph with sufficient contrast.
• Most dental units have a fixed tube voltage, in the region of 50 kV, and a fixed tube current of 7 mA.
• The system combines the high voltage transformer and X-ray tube in a small case.
• No high voltage cables are required.
• A third electrode, called a grid, placed between the anode and cathode electrodes, releases electrons at high velocity in the self-rectified circuit.
• As a result, the ratio of soft to hard X-rays decreases.
Portable X-ray unit
• Portable and mobile X-ray units are necessary during surgical procedures.
• A portable unit can be dismantled, packed into a small case and conveniently carried to the site.
• The tube head is constructed so that the X-ray tube and the high voltage generator are enclosed in one earthed metal tank filled with oil.
• The X-ray tube is a small stationary anode type, operating in a self-rectifying mode and connected directly across the secondary winding of the transformer.
• The X-ray machine contains fewer components such as mains voltage compensator, combined voltage
and current switch and time selector.
• Current supply is limited to 15 A.
• Maximum radiographic output is in the range of 15-20 mA and 90-95 kV.
Physical parameters for X-ray detectors
• Detector quantum efficiency (DQE): The DQE describes the efficiency of a detector (the
percentage of quanta for a given dose that actually contributes to the image). It is a function of
dose and spatial frequency.
• Dynamic range: Range from minimum to maximum radiation intensity that can be displayed in
terms of either differences in signal intensity or density differences in conventional film.
• Modulation transfer function: how the contrast of the image component is transmitted as a
function of its size or its spatial frequency.
• Contrast resolution: It is the smallest detectable contrast for a given detail size that can be
shown by the imaging system with different intensity or the whole dynamic range.
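A minimal sketch relating the DQE description above to the commonly used definition DQE = (SNR_out / SNR_in)²; the definition is standard, but the example numbers below are invented for illustration.

```python
# Sketch: detective quantum efficiency as DQE = (SNR_out / SNR_in)^2.
# Example values are invented for illustration.

def dqe(snr_out, snr_in):
    """Fraction of incident quanta that effectively contributes to the image."""
    return (snr_out / snr_in) ** 2

print(dqe(snr_out=20.0, snr_in=31.6))  # ≈ 0.4, i.e. ~40% of quanta are used effectively
```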
Digital radiography
• In both radiography and fluoroscopy, there are definitive advantages of having a digital image
stored in a computer.
• This allows image processing for better displayed images, the use of lower doses, avoiding
repeat radiography and opening up of the possibility of digital storage with a PACS (Picture
Archiving and Communication System).
• It helps in remote image viewing and these have vast possibilities of image-related processing.
• Digital X-ray imaging systems consist of X-ray imaging transducer, Data collection and
processing and data display, storage and processing.
Figure: comparison of digital radiography and conventional radiography.
Digital subtraction angiography
• Digital subtraction angiography is developed to study the diseases of circulatory system.
• In this technique, a pre-injection image is acquired, the injection of contrast agent is then
performed.
• Images of the opacified vessels are acquired and subtracted.
• This technique enhances contrast and provides increased contrast sensitivity.
• It is applicable for observing small contrast changes in vessels that have been masked by
the large anatomical background signal.
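A minimal sketch of the subtraction step described above, using logarithmic subtraction of the pre-injection (mask) image from the post-injection image so that static anatomy cancels and only the opacified vessels remain; the arrays below are synthetic placeholders.

```python
# Sketch: digital subtraction angiography as log-subtraction of a pre-contrast
# mask image from a post-contrast image. The arrays below are synthetic.
import numpy as np

def dsa_subtract(mask, contrast):
    """Return the subtracted image; static anatomy cancels, opacified vessels remain."""
    eps = 1e-6                                   # avoid log(0)
    return np.log(mask + eps) - np.log(contrast + eps)

mask = np.full((4, 4), 0.8)                      # uniform background transmission
contrast = mask.copy()
contrast[1:3, 1:3] = 0.4                         # region opacified by contrast medium
print(dsa_subtract(mask, contrast).round(2))     # non-zero only where contrast arrived
```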
Figure: digital subtraction angiography example images.
Digital mammographic X-ray equipment
• Mammography is an X-ray imaging procedure used for the examination of the female breast.
• It is primarily used for diagnosis of breast cancer and in the guidance of needle biopsies.
• The female breast is highly radiation-sensitive.
• The radiation dosage during mammography should be kept as low as possible.
• Mammography is required to achieve better spatial resolution than other types of film or screen radiographs.
• The X-ray tube has a molybdenum target and a beryllium window.
• Radiographs are taken at 28-35 kV and mammographic units operate at low peak voltages.
Mammography types
• Digital mammography systems work based on two methods broadly
• Indirect detection: It is a scintillator X-ray conversion. Here, a scintillator converts
X-rays into visible light that is in turn picked up by a solid state detector.
• Direct detection: It is an electronic X-ray conversion that aims for higher resolution. It employs an amorphous selenium layer to convert X-rays into electron-hole pairs for sensing by a transistor array.
• Direct detection converts X-rays into electrical signals.
Left, a dedicated mammography system; right, anode angle in a mammography system, where anode angles of 0° (A) and 16° (B) require tube tilts of 24° and 6°, respectively.
COURSE TITLE: MEDICAL IMAGING
TECHNIQUES AND DATA ANALYSIS
COURSE CODE: BIO3005
COURSE TYPE: LT
MODULE NO. 1_PART 2
COURSE INSTRUCTOR: DR. N. VIGNESH

EMPLOYEE ID: 100589

DESIGNATION: ASSISTANT PROFESSOR

DEPARTMENT: SCHOOL OF BIOENGINEERING AND TECHNOLOGY

Laser scanning confocal microscopes
(LSCM)
► Optical sections are produced in the laser scanning confocal microscope by scanning the specimen point by point
with a laser beam
► Laser is focused in the specimen, and the unwanted fluorescence is removed from above and below the focal plane
of interest using a spatial filter.
► The power of the confocal approach lies in the ability to image structures at discrete levels within an intact
biological specimen.
► There are two major advantages of using the LSCM in preference to conventional epifluorescence light
microscopy.
► First is the reduction of glare coming from out-of-focus structures in the specimen
► Second is the increase in both lateral (in the X and Y directions: 0.14 μm) and axial (in the Z direction: 0.23 μm) resolution.
► Image quality of some relatively thin specimens, for example, chromosome spreads and the leading lamellipodium of cells growing in tissue culture (<0.2 μm thick), is not dramatically improved by the LSCM.
► Whereas thicker specimens such as fluorescently labelled multicellular embryos can only be imaged using the
LSCM.
3D reconstruction using LSCM
• For successful confocal imaging, a minimum number of photons
should be used to efficiently excite each fluorescent probe labelling
the specimen, and as many of the emitted photons from the
fluorochromes as possible should make it through the light path of
the instrument to the detector.
• The LSCM has found many different applications in biomedical
imaging.
• Some of these applications have been made possible by the ability of
the instrument to produce a series of optical sections at discrete steps
through the specimen.
• This Z series of optical sections collected with a confocal microscope are all in register with each other, and can be merged together to form a single projection of the image (Z projection) or a 3D representation of the image (3D reconstruction).
Optical configuration of a laser scanning confocal microscope.
Setup of the confocal laser scanning microscope (CLSM) and the image-processing software (BioSPA) used to perform the spatial population-growth analysis over time. (a) Experimental setup for analyzing microbial colonization and further growth under flow in situ using a confocal laser scanning microscopy-surface topography imaging approach.
Processing of multiple fluorescent labels

• Multiple-label images can be collected from a specimen labelled with more than one fluorescent probe using multiple laser light sources for excitation.
• Since all of the images collected at different excitation wavelengths
are in register it is relatively easy to combine them into a single
multi-colored image.
• Here any overlap of staining is viewed as an additive color change.
• Most confocal microscopes are able to routinely image three or four different wavelengths simultaneously (a minimal channel-merging sketch follows the figure caption below).

Cell morphology on the surface of modules. (A) Confocal laser scanning microscope
(CLSM) images of live (green) and dead (red) HUVECs spread on the surface of modules
after 4 and 7 days. (B) Actin cytoskeleton of HUVECs spread on the surface of modules
after 7 days at low magnification and high magnification. (C) SEM images of collagen
modules at low magnification and high magnification.
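A minimal sketch of combining two in-register channels acquired at different excitation wavelengths into a single multi-coloured (RGB) image, with overlapping staining appearing as an additive colour; the arrays are synthetic placeholders, not real confocal data.

```python
# Sketch: merging two in-register confocal channels into an RGB composite.
# Overlapping staining appears as an additive colour (here red + green = yellow).
import numpy as np

def merge_channels(red_ch, green_ch):
    """Stack two single-channel images into one RGB image (values in 0..1)."""
    blue_ch = np.zeros_like(red_ch)
    return np.clip(np.dstack([red_ch, green_ch, blue_ch]), 0.0, 1.0)

red = np.zeros((64, 64));   red[:, :40] = 1.0    # probe 1 signal (synthetic)
green = np.zeros((64, 64)); green[:, 24:] = 1.0  # probe 2 signal (synthetic)
rgb = merge_channels(red, green)                 # columns 24-39 appear yellow
print(rgb.shape)                                 # (64, 64, 3)
```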
Scan speed of confocal microscope

• The scanning speed of most laser scanning systems is around one full frame per second.
• This is designed for collecting images from fixed and
brightly labelled fluorescent specimens.
• Such scan speeds are not optimal for living specimens,
and laser scanning instruments are available that scan
at faster rates for more optimal live cell imaging.
• In addition to point scanning, swept field scanning rapidly moves a µm-thin beam of light horizontally and vertically through the specimen.
Principle of operation of the high-speed z-scanning confocal microscope. (a) Due to the high axial scanning speed enabled by the optofluidic lens, several axial scans are recorded within a pixel exposure. (b) The sum of all photons arriving from different focal planes forms an image with an extended depth of field (EDOF). (c) By using a fast acquisition card and appropriate synchronization, photons arriving at different times can be sorted according to their corresponding focal plane.
Spinning disk confocal microscope

• The spinning disk confocal microscope employs a different scanning system from the LSCM.
• Rather than scanning the specimen with a single beam, multiple
beams scan the specimen simultaneously, and optical sections are
viewed in real time.
• Modern spinning disk microscopes have been improved significantly
by the addition of laser light sources and high-quality CCD detectors
to the instrument.
• Spinning disk systems are generally used in experiments where
high-resolution images are collected at a fast rate (high spatial and
temporal resolution), and are used to follow the dynamics of
fluorescently labelled proteins in living cells
Electron microscopy
• The two basic types of electron microscopy are scanning electron microscopy, for viewing surfaces of
bulk specimens, and transmission electron microscopy, for scrutinizing internal as well as external
features of extremely thin specimens.
• The scanning electron microscope is so named because a fine probe of electrons is scanned across the
surface of a specimen to generate an image with a three-dimensional appearance.
• In the transmission electron microscope, the electrons are transmitted through the specimen to reveal a
two-dimensional image of the interior of cells.
• Although these instruments operate in completely different ways, both use accelerated electrons and
electromagnetic lenses to generate images.
• Both types of electron microscope require high vacuum so that electrons, having little mass, can travel
down the column of the microscope without being scattered by air molecules.
• Images are recorded either on photographic films or captured digitally using computer interfaces.
This modern TEM has capabilities for scanning
transmission electron microscopy (STEM) and
X-ray microanalysis.
Transmission electron microscopy (TEM)
• The conventional transmission electron microscope generates a beam
of electrons in the electron gun, usually by heating a tungsten
filament and applying a high voltage of negative potential in the
range of -75 000 V.
• The heat and negative high voltage cause ejection of the valence
electrons of tungsten near the tip of the V-shaped filament.
• A highly negative, cup-like device, termed a shield, forces the
electrons into a cloud near the filament tip.
• An aperture in the shield permits some of the electrons to exit
towards an anode plate.
• The electrons, which are travelling at about half the speed of light,
then enter the magnetic fields of the first and second condenser
lenses, which focus the electrons onto the specimen.
Working principle of TEM
• The lenses in electron microscopes are windings of copper wire, or solenoids, which generate a magnetic field when an
electrical current is run through the wire. When electrons enter the magnetic fields of the condenser lenses, they come to
focus at certain distances, or focal lengths, from the lens.
• By adjusting the current running through the condenser lens coil, the focal length is adjusted so that the electrons
illuminate the specimen in the area being studied in the TEM.
• After electrons strike cellular organelles in the specimen, they are deflected to various degrees, depending upon the mass
of the cellular component. Areas of great mass deflect the electrons to such an extent that they are effectively removed
from the optical axis of the microscope.
• Such areas, where electrons have been subtracted, appear dark when ultimately projected onto the viewing screen.
• Areas that have less mass, and so scatter the electrons to lesser degrees, appear brighter on the viewing screen.
• As a result of the electron scattering by the specimen, a highly detailed image forms in the objective lens.
• This image is then further magnified up to one million times by the remaining three to four lenses of the TEM.
• Even though each lens may magnify only 100 times or less, the final magnification is the product of all of the lenses and can quickly reach high values (see the sketch below).
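A minimal sketch of the statement that the final magnification is the product of the individual lens magnifications; the per-lens values used are illustrative only, not those of a specific instrument.

```python
# Sketch: total TEM magnification as the product of the individual lens magnifications.
# The per-lens values are illustrative.
from math import prod

def total_magnification(lens_mags):
    """Overall magnification from a list of per-lens magnifications."""
    return prod(lens_mags)

# e.g. objective x40, intermediate x25, two projector stages x20 and x20
print(total_magnification([40, 25, 20, 20]))   # 400000x
```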
• The electromagnetic imaging lenses are constructed in the same way
as the illuminating or condenser lenses. Imaging lenses are used to
focus and magnify the image formed as a result of the interaction of
the electrons with the specimen.
• Ultimately, the electrons are projected onto a viewing screen coated
with a phosphorescent material that glows when struck by electrons.
• Greater numbers of electrons (from areas of the specimen having less
density) generate brighter areas on the screen.
• Since photographic emulsions are sensitive to electrons as well as to
photons, the image is recorded by lifting up the viewing screen and
allowing the electrons to strike a photographic film placed
underneath.
• After development, such images are termed electron micrographs.
• Besides high magnification, which refers to an increase in the size of
an object, TEMs produce images having high resolution.
• High-resolution images have tremendous detail and can withstand
the high magnification that makes the details visible
Scanning electron microscopy (SEM)
• The electron gun and the lenses in the scanning electron microscope
are nearly identical to the TEM.
• However, instead of forming an image, the SEM lenses focus the
beam of electrons to a very tiny spot on the specimen.
• Each of the three condenser lenses in the SEM makes the spot of
electrons progressively smaller.
• When the beam of electrons strikes the specimen, it causes the
ejection of low-energy, secondary electrons from the surface of the
specimen.
• Secondary electrons convey information about the surface of the
specimen and are used to generate the image.
• In contrast to the TEM, which illuminates the specimen area under
examination with a single, large spot, the SEM scans a tiny spot of
electrons across the specimen.
Parts of a scanning electron microscope
(SEM) and the typical signals that are
recorded from bone. BSE backscattered
electrons, SE secondary electrons, EDX
energy-dispersive X-ray spectroscopy
Working principle of SEM
• Various numbers of secondary electrons will be generated, depending
primarily upon topography, angle of entry of the beam into the specimen and
thickness of raised portions of the specimen.
• The secondary (image-forming) electrons are strongly attracted to a secondary
electron detector that is placed at a positive high voltage of around 12 000 V.
• When the secondary electrons strike a phosphorescent coating on the detector,
this generates a burst of light that travels down a light guide and into a
photomultiplier, where photons are converted to photoelectrons and the weak
signal amplified.
• Areas of the specimen that generate large numbers of secondary electrons
appear very bright when viewed on the display screen of the SEM, whereas
areas with few electrons are dark.
• The various shades of grey seen in an image give the impression of depth,
much like a black and white photograph conveys three-dimensionality.
• Two scanning events take place simultaneously, but in different locations in
the SEM.
SEM imaging

For each point on the specimen, there is a corresponding point on the viewing screen, the brightness of which depends upon the number of secondary electrons generated by the specimen. The size of the area scanned on the viewing screen is fixed, based on the size of the screen (15 cm × 20 cm, for example). However, the size of the area scanned on the specimen is completely variable and under the control of the operator. If one scans a line across the specimen that is 1 cm in length and then displays this line on a 20 cm monitor, the resulting line is magnified 20×. A 20 μm line scanned on the specimen, when displayed at 20 cm, would represent a magnification of 10 000× (see the sketch below).
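A minimal sketch of the magnification arithmetic in the paragraph above: magnification is simply the displayed length divided by the length scanned on the specimen.

```python
# Sketch: SEM magnification = displayed length / length scanned on the specimen.
def sem_magnification(display_length_m, scanned_length_m):
    return display_length_m / scanned_length_m

print(sem_magnification(0.20, 0.01))     # 1 cm scan shown on a 20 cm monitor -> 20x
print(sem_magnification(0.20, 20e-6))    # 20 um scan shown on 20 cm -> 10000x
```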
Applications
• Electron microscopy is a technology for examining the extremely
fine detail or ultrastructure of biological specimens.
• This methodology opened a totally new vista to the human eye when
biological ultrastructure was revealed as a dynamic and
architecturally complex arrangement of macromolecules under the
direction of genetic units termed genes.
• Complex subcomponents of the cell, termed organelles, were clearly
visualized and biochemical activities associated with each structure.
• Pathogenic microorganisms, the viruses, were finally captured on
film
• The structural organization of DNA into the chromosome was
elucidated using electron microscopy and the electron microscope
provided insight into many disease processes.
Scanning Transmission Electron Microscopy (STEM)
• The STEM combines features of both the TEM and SEM to produce transmission images obtained with a scanning
probe.
• Magnification is changed by changing the area scanned on the specimen and post specimen lenses are not strictly
needed in a STEM.
• In STEM mode, the final image resolution depends on use of a high brightness source to produce a small focused
probe with high current density.
• Images in the STEM are produced while scanning the beam over the specimen.
• STEM offers elastic dark-field imaging as well as inelastic dark-field imaging.
• Electrons transmitted through the specimen can be detected on a detector on the axis of the microscope (bright
field detector (BF)), or on an annular detector sensing electrons scattered through a range of angles (annular dark
field detector (ADF) or annular bright field (ABF)).
• The main applications of a STEM are high spatial resolution imaging and microanalysis.
• The scanning transmission electron microscope (STEM) is one of the most useful tools in many areas of atomic-scale materials science
and nano-characterization.
• A STEM has the ability to generate local maps of the chemical composition and electronic structure at atomic resolution, even in complex
or unknown samples.
• Like the conventional TEM, the STEM primarily uses transmitted electrons to form an image.
• However, like the scanning electron microscope (SEM) a STEM scans a very small probe over a sample.
• In its basic form the STEM consists of an electron source, several lenses to focus these electrons to form a small probe, a scanning unit to
scan this probe across the sample, and a detector that collects a signal after the electrons have interacted with the specimen.
• An image is formed by recording a signal of interest as a function of the probe position.
• The image therefore relates to the part of the sample that the probe interacts with at each position.
A typical method for preparing biological tissues would
employ glutaraldehyde and osmium fixatives. After the
specimens have been dehydrated in ethanol, they are
infiltrated with a liquid, epoxy plastic that is then
hardened in an oven. At this point, the water throughout
the preserved cells has been replaced completely with
hard plastic and it is now possible to cut extremely thin
slices. This is accomplished using an ultramicrotome
that advances the specimen over a glass or diamond
knife and cuts ultrathin sections from the
plastic-embedded specimen. The sections are retrieved
onto a specimen carrier, or grid, stained for contrast
using heavy metals such as uranyl acetate and lead
citrate, and placed into the TEM for viewing.
Atomic Force Microscopy

► AFM is mainly used for the analysis of non-conducting solids or insulating materials.
► A force detecting cantilever stylus is used as a scanning probe.
► Deflection of cantilever is detected by optical sensors and the
movement of cantilever is controlled by piezoelectric tube.
► The piezoelectric tube facilitates the scanning of the sample with tip.
► The atomic force is controlled by tip and cantilever.
► Classically, the tip used is a diamond, while the cantilever is a metal sheet.
► A typical AFM consists of a cantilever with a small tip (probe) at the
free end, a laser, a 4-quadrant photodiode and a scanner.
► The surface characteristics can be explored with very accurate
resolution in a range of 100 μm to less than 1 μm.
AFM – Sensing tip

SEM images of microfabricated silicon AFM cantilevers and their tip shapes.

• In AFM, a tip is used for imaging. It is generally made of silicon or silicon nitride.
• It approaches the sample in a range of interatomic distances (around 10 Å).
• The tip is commonly 3-15 microns in length.
• It is attached to the end of the spring cantilever.
• The cantilever is around 100-500 microns in length.
• When the tip, which is attached to the free end of the cantilever, comes very close to the surface, attractive and repulsive forces due to the interactions between the tip and the sample surface cause a negative or positive bending of the cantilever.
• This bending is detected with the help of a laser beam.
AFM – Cantilever system

The cantilever can be thought of as a spring. The magnitude of the force generated between the tip and the surface depends on the spring constant (stiffness) of the cantilever and on the distance between the tip and the surface. This force can be characterized with Hooke's Law (a minimal sketch follows below).

If the spring constant of the cantilever is less than that of the surface, the cantilever bends and this deflection is monitored. As the tip travels across the sample, it moves up and down according to the surface properties of the sample (e.g. topography). These fluctuations arise from the interactions (electrostatic, magnetic, capillary, van der Waals) between the tip and the sample. The displacement of the tip is measured and a topographical image is obtained.
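A minimal Hooke's-law sketch of the force implied by a measured cantilever deflection; the spring constant and deflection values are arbitrary examples, not data from a specific probe.

```python
# Sketch: cantilever force from Hooke's law, F = k * x.
# Spring constant and deflection values are arbitrary examples.
def cantilever_force(spring_constant_n_per_m, deflection_m):
    """Restoring force (N) for a cantilever of given stiffness and deflection."""
    return spring_constant_n_per_m * deflection_m

# A 0.1 N/m cantilever deflected by 2 nm exerts about 0.2 nN
print(cantilever_force(0.1, 2e-9))   # 2e-10 N
```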
AFM imaging

• Generally the AFM probe does not move; instead, the sample is moved in the x, y and z directions by a piezoelectric material.
• Piezoelectric materials (piezocrystals) are ceramic materials
that can enlarge or shrink when a voltage is applied.
• By this way, very precise movements in the x,y,z directions
can be possible. (Position can be controlled in nanometer
resolution).
• A laser beam is focused onto the back of the cantilever. It can
be reflected back to a 4-quadrant photodiode detector.
• By the help of this position sensitive photodiode, the bending
of the cantilever can be measured precisely.
• The cantilever deflects according to the atomic force
variations between tip and the sample and thereby the
detector measures the deflection.
• The created image is a topographical illustration of the
sample surface.
AFM principle – Force-Distance Curve

• When the tip approaches the surface of the sample, van der Waals forces cause attraction.
• In the non-contact region, the distance between the probe and the surface is around tens to hundreds of angstroms.
• The effective forces are attractive.
• However, as the distance becomes as short as a chemical bond length, i.e. a few angstroms, the repulsive Coulombic forces become dominant.
• In the contact region, the total force is positive (repulsive) because of the interaction between the positive nuclei and the overlap of the electron shells (Pauli principle).
Constant Height (of Scanner)
• In this mode, the spatial variation of the cantilever deflection is used directly to
generate the topographic data set because the height of the scanner is fixed as it
scans.
• Constant-height mode is often used for taking atomic-scale images of atomically
flat surfaces, where the cantilever deflections and thus variations in applied force
are small.
• Constant-height mode is also essential for recording real-time images of changing
surfaces, where high scan speed is essential.
Constant Force
• In this mode, the deflection of the cantilever can be used as input to a feedback
circuit that moves the scanner up and down in z, responding to the topography by
keeping the cantilever deflection constant.
• With the cantilever deflection held constant, the total force applied to the sample
is constant.
• In this mode, the image is generated from the scanner’s z-motion. The scanning
speed is thus limited by the response time of the feedback circuit.
• Constant-force mode is generally preferred for most applications.
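A minimal sketch of the constant-force feedback idea described above: at each pixel the scanner z-position is adjusted until the measured deflection returns to the setpoint, and the recorded z-motion forms the image. The gain, the deflection model and the synthetic topography are placeholders; a real controller is more elaborate.

```python
# Sketch of constant-force feedback: adjust scanner z so that the cantilever
# deflection stays at the setpoint; the recorded z-motion is the topographic image.
# Gain values and the synthetic topography are placeholders.

def constant_force_scan(topography, setpoint=1.0, gain=0.5, iterations=5):
    z = 0.0
    image = []
    for height in topography:                     # one surface height per pixel
        for _ in range(iterations):               # feedback settles at each pixel
            deflection = setpoint + (height - z)  # deflection grows if surface is higher
            error = deflection - setpoint
            z += gain * error                     # move scanner to cancel the error
        image.append(z)                           # scanner z-motion forms the image
    return image

print(constant_force_scan([0.0, 1.0, 2.0, 1.0, 0.0]))  # z tracks the topography
```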
Operating mode – Contact mode

• In contact mode, the tip is in soft physical contact with the surface.
• The tip is able to move above the surface at a specific height or under a constant force.
• The movement is strongly influenced by frictional and adhesive forces that can cause damage to the sample. When the spring constant of the cantilever is less than that of the surface, the cantilever bends.
• The force on the tip is repulsive.
• By maintaining a constant cantilever deflection (using the
feedback loops) the force between the probe and the sample
remains constant and an image of the surface is obtained.
• At very small tip-sample distances (a few angstroms) a very
strong repulsive force appears between the tip and sample
atoms.
Operating mode – Non-contact mode

• In this mode the tip does not touch the sample; instead, it oscillates above the surface during the scan.
• It uses feedback loop to monitor changes in the amplitude
due to attractive Van der Waals forces so the surface
topography can be monitored.
• A polarization interaction between atoms: An
instantaneous polarization of an atom induces a
polarization in nearby atoms – and therefore an attractive
interaction.
• The forces cause a change in the oscillation amplitude,
resonance frequency and phase of the cantilever.
• The amplitude is utilized for feedback mechanism.
• Vertical motion of the piezo-scanner is utilized as a
height image.
• It is better for soft samples.
Operating mode – Tapping mode

• This mode eliminates the frictional force by intermittently contacting the surface and oscillating with sufficient amplitude to prevent the tip from being trapped by adhesive forces.
• This mode of operation is less destructive than contact mode.
• The cantilever oscillates near its resonance frequency. An electronic feedback loop keeps the oscillation amplitude constant so that a constant tip-sample interaction is maintained during the scan.
Comparison between the three
scanning modes: damage to the
sample

• Contact mode imaging (left) is heavily influenced by frictional and adhesive forces, and
can damage samples and distort image data.
• Non-contact imaging (center) generally provides low resolution and can also be hampered
by the contaminant (e.g., water) layer which can interfere with oscillation.
• Tapping Mode imaging (right) takes advantages of the two above. It eliminates frictional
forces by intermittently contacting the surface and oscillating with sufficient amplitude to
prevent the tip from being trapped by adhesive meniscus forces from the contaminant layer.
Comparison between AFM and Electronic Microscopes

Optical and electron microscopes can easily generate two-dimensional images of a sample surface, with a magnification as large as 1000X for an optical microscope, and a few hundred thousand times (~100,000X) for an electron microscope.
However, these microscopes cannot measure the vertical dimension (z-direction) of the
sample, the height (e.g. particles) or depth (e.g. holes, pits) of the surface features.
AFM, which uses a sharp tip to probe the surface features by raster scanning, can image the surface topography with extremely high magnifications, up to 1,000,000X, comparable to or even better than electron microscopes.
The measurement of an AFM is made in three dimensions, the horizontal X-Y plane
and the vertical Z dimension. Resolution (magnification) at Z-direction is normally higher
than X-Y.
Scanning Tunneling Microscope
• A Scanning Tunneling Microscope (STM) is an instrument
used to image and study the electronic properties of surfaces
at the atomic scale.
• In an STM, an atomically sharp conducting tip is brought
very close to the surface of the sample under study – the
distance between the two is on the order of a nanometer.
• The space between the tip and the sample, which is usually
vacuum, forms a potential barrier.
• When a bias voltage is applied between the tip and the
sample, a tunneling current flows between the two.
• The tunneling current depends on the tip-to-sample distance.
• It also depends on the local density of states (LDOS) of the sample over the energies within eV (the electron charge times the bias voltage V) of the Fermi energy.
• STM is thus a powerful experimental technique for
measuring the surface topography
• By applying a voltage between the tip and the sample, a small electric current (0.01 nA-50 nA) can flow from the sample to the tip or in the reverse direction, although the tip is not in physical contact with the sample. This phenomenon is called electron tunneling.
• Piezoelectric transducers are used to control the lateral position r = (x, y) of the tip as well as its height z.
• The piezoelectric transducer controlling height is also connected to a feedback control unit which depends
on the tunneling current.
• The tip is grounded and the bias voltage is the voltage applied to the sample.
• There are four (non-independent) tunable parameters during measurement: the tip height (z), the tip position r = (x, y), the bias voltage, and the tunneling current.
• STM is operated in several modes such as topography, spectroscopy and spectroscopic imaging
Topography

STM topography (a) and current (b) images of a sample of highly oriented pyrolytic graphite (HOPG). The correspondence between the topography and current images is clear. The scanning parameters are the following: current setpoint 1.0 nA, bias voltage 80 mV, integral gain 1, tip velocity 0.18 μm/s.

• In constant current mode, the tip is scanned across the surface of the sample and the height of the tip is
controlled by a feedback loop in such a way as to maintain a constant tunneling current.
• In this case, the height of the tip records the profile of the sample’s surface which contains the
corrugations of the atomic lattice.
• In constant height mode the tip is again scanned across the surface of the sample, this time at a constant
height, and the tunneling current is measured at each position.
• Using the dependence of the current on the tip-to-sample distance, surface profile of the sample can be
deduced.
• In both cases, sample voltage is kept constant.
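A minimal sketch of the distance dependence exploited in constant-height mode: the tunneling current falls off roughly exponentially with the tip-to-sample gap, I(d) ≈ I0·exp(-2κd). The decay constant used is a typical textbook value, not a measured one.

```python
# Sketch: exponential dependence of tunneling current on tip-sample distance,
# I(d) ≈ I0 * exp(-2 * kappa * d). kappa ~ 1 per angstrom is a typical textbook value.
import math

def tunneling_current(i0_na, distance_angstrom, kappa_per_angstrom=1.0):
    """Tunneling current (same units as i0_na) at the given tip-sample distance."""
    return i0_na * math.exp(-2.0 * kappa_per_angstrom * distance_angstrom)

# Increasing the gap by 1 Å reduces the current by nearly an order of magnitude.
print(tunneling_current(1.0, 5.0))   # ~4.5e-5 of the reference current
print(tunneling_current(1.0, 6.0))   # ~6.1e-6
```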
Spectroscopy and imaging

• Scanning Tunneling Spectroscopy (STS) measures the electronic LDOS (Local Density Of States) of the sample.
• LDOS can be measured by keeping the tip at a constant height and position and measuring the tunneling conductance.
• In Spectroscopic Imaging STM (SI-STM) the topography and STS modes are combined.
• As in the topographic mode of operation, the tip is scanned across the sample’s surface at constant voltage.
• At each pixel in the topographic map the tip freezes its position and sweeps voltage.
• The spatial variation of the LDOS is mapped.
• The lower the temperature at which STM measurements are performed, the better the resolution.
• Because of the exponential dependence of the tunneling current on the tip-to-sample distance, STMs are extremely sensitive
to the effects of vibrations
COURSE TITLE: MEDICAL IMAGING TECHNIQUES AND
DATA ANALYSIS
COURSE CODE: BIO3005
COURSE TYPE: LT
MODULE NO. 2_PART 2
COURSE INSTRUCTOR: DR. N. VIGNESH

EMPLOYEE ID: 100589

DESIGNATION: ASSISTANT PROFESSOR

DEPARTMENT: SCHOOL OF BIOENGINEERING AND TECHNOLOGY


Dual energy X ray absorptiometry
• Dual energy X ray absorptiometry (DXA) is an X ray imaging technique primarily used to derive the
mass of one material in the presence of another through knowledge of their unique X ray attenuation at
different energies.

• DXA is an extremely accurate and precise method for quantifying bone mineral density (BMD) and mass, and for body composition assessment.

• Two images are made from the attenuation of low and high average X ray energy.

• DXA is a special imaging modality that is not typically available with general use X ray systems because
of the need for special beam filtering and near perfect spatial registration of the two attenuations.
• DXA is an extension of an earlier imaging technique called dual
energy photon absorptiometry (DPA).
• The DXA technique differs from DPA only in that DPA uses the attenuation of monochromatic emissions from a radioisotope, while DXA uses polychromatic X ray spectra for each image, centred at different energies.
• DXA’s primary commercial application has been to measure bone
mineral density to assess fracture risk and to diagnose osteoporosis.
• The X ray energies used are optimized for bone density assessment.
• For osteoporosis diagnosis, the lumbar spine, proximal hip and,
sometimes, the distal forearm are scanned.
• DXA is one of the most accurate and precise methods for
quantifying BMD and mass in vivo.
• Bone mineral mass, primarily consisting of hydroxyapatite, is the
mineral component of bone that is left after a bone is defleshed,
lipids extracted and ashed. The nature of the DXA system is that it
creates a planar (two-dimensional) image that is the combination of
low and high energy attenuations.
• Although density is typically thought of as a mass per unit volume,
DXA can only quantify the bone density as a mass per unit area,
since it uses planar images and cannot measure the bone depth.
• In contrast, the measurement of bone density using a computed
tomography (CT) system, called quantitative computed tomography
(QCT), can measure the true volume and volumetric bone density.
• Bone size varies as a function of age.
• Thus, DXA bone density values increase from birth to adulthood,
primarily because the bones become larger.
Lateral vertebral assessment is used to better visualize
vertebral fractures. The left image is the single energy
representation. The dual energy view of the same spine
is shown on the right. Fractures can be classified using
scoring methods reflecting the severity of the fracture.
THREE COMPARTMENT MODEL OF BODY COMPOSITION

• DXA defines the composition of the body as three materials having specific X ray attenuation properties: bone mineral, lipid (triglycerides, phospholipid membranes, etc.) and lipid free soft tissue.
• However, adipose tissue also contains lipid free mass, such as water and proteins.
• The non-lipid soft tissue mass (STM) is the sum of body water,
protein, glycerol and soft tissue mineral mass.
• The model forces all tissue types into these three components.
• This limitation is true for most composition models that cannot
represent the body as a true three dimensional volume.
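A minimal sketch of how two attenuation measurements at low and high energy, as described at the start of this module, can be solved for two unknown areal densities (e.g. bone mineral and soft tissue); the mass-attenuation coefficients used are illustrative placeholders, not calibrated DXA values.

```python
# Sketch: dual-energy decomposition. Measured log-attenuations at a low and a high
# energy are solved (a 2x2 linear system) for the areal densities of bone mineral
# and soft tissue. The attenuation coefficients are illustrative placeholders.
import numpy as np

# rows: [low energy, high energy]; columns: [bone, soft tissue] (cm^2/g, placeholders)
MU = np.array([[0.60, 0.25],
               [0.30, 0.20]])

def areal_densities(log_atten_low, log_atten_high):
    """Return [bone, soft tissue] areal densities (g/cm^2) for one pixel."""
    return np.linalg.solve(MU, np.array([log_atten_low, log_atten_high]))

# A pixel containing 1.0 g/cm^2 of bone mineral and 20 g/cm^2 of soft tissue:
measured = MU @ np.array([1.0, 20.0])
print(areal_densities(*measured))   # recovers [1.0, 20.0]
```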
X-ray attenuation in DXA
• X rays in the energy range used for DXA interact with tissue using three processes: photoelectric absorption,
Compton (inelastic) scattering and coherent (elastic) scattering.
• Coherent scattering occurs when X rays pass close to an atom and cause ‘bound’ electrons to vibrate (resonate) at a
frequency corresponding to that of the X ray photon.
• The electron re-radiates this energy in all directions and at exactly the same frequency as the incoming photons
without absorption.
• Although a certain amount of elastic scattering occurs at all X ray energies, it never accounts for more than 10% of
the total interaction processes in diagnostic radiology. Compton scattering occurs when the incoming photon loses
some of its energy to the electron and then continues in a new direction (i.e. it is scattered) but with increased
wavelength and, hence, with decreased energy.
• Compton scatter creates two major problems in X ray imaging.
• First, it reduces the contrasts in the image unless it is removed by collimation before the detector.
• Second, it presents a radiation risk to the personnel using the equipment.
• Attenuation by the photoelectric effect occurs when a photon interacts with the atom by ejecting an electron from its
orbit or shell around a nucleus.
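As a supplementary sketch of the Compton process described above (the scattered photon continues with increased wavelength and decreased energy), the standard Compton shift formula Δλ = (h/m_e c)(1 − cos θ) can be evaluated; this formula is standard physics and is not taken from the slides.

```python
# Sketch: Compton wavelength shift, delta_lambda = (h / (m_e * c)) * (1 - cos(theta)).
import math

H = 6.626e-34        # Planck constant, J*s
M_E = 9.109e-31      # electron mass, kg
C = 3.0e8            # speed of light, m/s
COMPTON_WAVELENGTH = H / (M_E * C)   # ≈ 2.43e-12 m

def compton_shift_m(theta_deg):
    """Increase in photon wavelength after scattering through theta_deg degrees."""
    return COMPTON_WAVELENGTH * (1.0 - math.cos(math.radians(theta_deg)))

print(compton_shift_m(90.0))    # ≈ 2.43e-12 m (0.0243 Å)
print(compton_shift_m(180.0))   # maximum (backscatter) shift, ≈ 4.85e-12 m
```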
Limitations of DXA
1. Bone volume projected into pixel is not known.

2. Fan beam magnification.

3. Two compartments and only two.

4. Lack of standardization.

5. Degenerative changes.
DXA systems measure bone density in units of grams per unit area since DXA does not have the ability to measure tissue thickness.
Thus, DXA systems cannot tell the difference between thick low density bone and thin high density bone.
Back Scatter X-ray Imaging

• In contrast to conventional transmission X-ray radiography and computed tomography (CT), the X-ray backscatter technique utilizes the scattered radiation caused by the Compton scattering effect.
• As the Compton scattering effect depends on the electron density in the scattering object, low-atomic-number (Z) materials (e.g. Al, Perspex, composites and water) produce predominantly scattered radiation compared with heavy metals such as steel (Fe), copper (Cu) and lead (Pb).
• The main limitations of this technique are the fixed
irradiation geometry and a single-viewing direction.
• The two ends of the slit collimator are equipped with a spring
loaded system for controlling the passage of backscattered
X-ray beam by adjusting the gap between upper and lower
part of the collimator.
• At the same time the resolution in the X-ray backscatter
image can be controlled by varying the slit width of the
camera.
• Furthermore, by varying the distance between the collimator
and the DDA, the magnification in a backscatter image can
be optimized.
• Apart from that, the present design of the X-ray
backscattering camera requires some additional lead
shielding in order to avoid undesired scattered radiation and
to protect the detector electronics.
• X-ray backscatter camera for nondestructive testing (NDT).
• Modelling of radiation techniques basically consist of four components: (i) the radiation source, (ii) the interaction
of radiation with material, (iii) the detection of the radiation, and (iv) the geometry of object under investigation.
• An X-ray backscatter imaging technique has been presented making use of a specially designed twisted slit scatter
camera.
• In order to achieve high backscatter intensities from a test object, it is necessary to optimize the backscatter system
parameters namely the angle between source and slit camera, the slit-collimator system, the shielding between the
source and the scatter camera, and the type of detector
Rotational angiography (3DRA)

• Congenital heart disease (CHD) is often associated with complex anatomical anomalies.
• High-resolution imaging modalities are particularly helpful.
• During the last decade, the three-dimensional rotational angiography (3DRA) has emerged as a
new facility for diagnostic and interventional procedures.
• 3DRA is performed by a C-arm of the angiography-system equipped with a flat detector.
• It generates a 3D volume data set from a single C-arm rotation (at least 180°) around the patient
during a continuous injection of contrast dye.
• It provides a precise view of cardiovascular and surrounding structures in various projections
and can be used for 3D-guidance additionally.
• The 3DRA, also called flat detector computed tomography (FD-CT) or cone beam CT, was
developed in the 1990s and initially used for neuroradiology procedures. Furthermore, the
benefit of this imaging tool was described for coil embolization of cerebral aneurysm, cardiac
electrophysiology, valve replacement and liver tumor embolization
• The three-dimensional model derived from 3DRA
demonstrates the vascular configuration and the spatial
relationship between cardiovascular structures and
surrounding tissue very precisely.
• This is because the reconstructed 3D models enable the visualization of anatomy and complex spatial relationships from any desired angle of view. Additionally, comparative measurements of vessel diameters in 3DRA and biplane angiography show high accuracy both in settings with low blood flow and in regions with more pulsatile flow (e.g. the aorta).
• Image quality depends on various factors that can mutually
influence each other.
• The poor temporal resolution requires a constant injection of contrast dye into the region of interest over several seconds with a suitable delay.
• Moreover, heart movement, spontaneous breathing and the long acquisition time of at least 5 s may result in artefacts and inferior image quality.
Rotational angiography in the right ventricle in a 3-month-old boy with hypoplastic left heart syndrome.
• 3DRA is an invasive imaging tool that requires adequate sedation and a vascular access, which involves certain risks such as infection, vessel damage and hemorrhage.
• Moreover, the 3D model generated from 3DRA has to be processed and visualized in a proper way, which is mostly done immediately after acquisition during cardiac catheterization.
• This is time consuming, interrupts the course of catheterization and needs specially trained staff to adapt the imaging data.
• Rotational angiography includes important flow information with high temporal and spatial resolution.
• In the hands of an experienced investigator, the immediate availability of this information during the dynamic process of interventional catheterization of complex congenital heart disease gives 3DRA great importance in selected cases.
3D-guidance for stenting the arterial duct in a newborn with hypoplastic left heart syndrome. Color code: light green: ductus arteriosus; red: aorta; violet: right ventricle; light blue: pulmonary artery.
COURSE TITLE: MEDICAL IMAGING TECHNIQUES AND DATA ANALYSIS
COURSE CODE: BIO3005
COURSE TYPE: LT
MODULE NO. 1_PART 1
COURSE INSTRUCTOR: DR. N. VIGNESH
EMPLOYEE ID: 100589
DESIGNATION: ASSISTANT PROFESSOR
DEPARTMENT: SCHOOL OF BIOENGINEERING AND TECHNOLOGY
TOPICS IN MODULE NO. 1
► Principles of optical microscopy.
► Different types of microscopy techniques: brightfield, darkfield, phase contrast, reflected interference contrast, fluorescence, confocal.
► Basic principles, techniques, advantages and limitations of the electron microscope: transmission electron microscope (TEM) and scanning electron microscope (SEM).
► Principles of atomic force microscopy and scanning probe microscopy.
Introduction
► Biochemical analysis is frequently accompanied by microscopic examination of tissue,
cell or organelle preparations.
► Microscopic examinations are used in many different applications.
► Evaluation of the integrity of samples during an experiment.
► Mapping of the spatial distribution and other fine details of macromolecules within
cells.
► Direct measurement of biochemical events within living tissues.
► There are two fundamentally different types of microscope: the light microscope and
the electron microscope.
► Light microscopes use a series of glass lenses to focus light in order to form an image
whereas electron microscopes use electromagnetic lenses to focus a beam of electrons.
► Light microscopes are able to magnify to a maximum of approximately 1500 times
whereas electron microscopes are capable of magnifying to a maximum of
approximately 200 000 times.
Resolution – A reliable tool of microscopy evaluation
► Magnification is not the best measure of a microscope.
► Rather, resolution, the ability to distinguish between two closely spaced points in a specimen, is a much more reliable estimate of a microscope's utility.
► Standard light microscopes have a lateral resolution limit of about 0.5 micrometers (µm) for routine analysis.
► In contrast, electron microscopes have a lateral resolution of up to 1 nanometer (nm).
► Both living and dead specimens are viewed with a light microscope, often in real color, whereas only dead ones are viewed with an electron microscope, and never in real color.
► Computer enhancement methods have improved upon the 0.5 µm resolution limit of the light microscope down to 20 nm resolution for specialized applications (example: total internal reflection fluorescence microscopy, TIRF).
Light and electron microscopy. Schematic that
compares the path of light through a compound light
microscope (LM) with the path of electrons through a
transmission electron microscope (TEM).
Key differences between LM and TEM
• Light from a lamp (LM) or a beam of electrons from an electron gun (TEM) is focussed at the specimen by a glass condenser lens (LM) or electromagnetic lenses (TEM).
• For the LM the specimen is mounted on a glass slide with a coverslip placed on top, and for the TEM the specimen is placed on a copper or gold electron microscope grid.
• The image is magnified with an objective lens, glass in the LM and an electromagnetic lens in the TEM, and projected onto a detector with the eyepiece lens in the LM or the projector lens in the TEM.
• The detector can be the eye or a digital camera in the LM, or a phosphorescent viewing screen or digital camera in the TEM.
Challenges in live cell imaging
► Applications of the microscope in biomedical research may be relatively simple and routine.
► For example, a quick check of the status of a preparation or of the health of cells growing in a plastic dish in tissue culture.
► Here, a simple bench-top light microscope is perfectly adequate.
► On the other hand, the application may be more involved, for example, measuring the concentration of calcium in a living embryo over a millisecond timescale.
► Here a more advanced light microscope (often called an imaging system) is required.
► Images may be required from specimens of vastly different sizes and magnifications.
► The study of living cells may require time resolution ranging from days, for example when imaging neuronal development or disease processes, to milliseconds, for example when imaging cell signaling events.
Live cell imaging and applications
The light microscope
► The simplest form of light microscope consists of a single glass lens mounted in a metal frame – a
magnifying glass.
► Here the specimen requires very little preparation, and is usually held close to the eye in the hand.
► Focusing of the region of interest is achieved by moving the lens and the specimen relative to one
another.
► All modern light microscopes are made up of more than one glass lens in combination.
► The major components are the condenser lens, the objective lens and the eyepiece lens, and such instruments are therefore called compound microscopes.
► Each of these components is in turn made up of combinations of lenses, which are necessary to
produce magnified images with reduced artifacts and aberrations.
► For example, chromatic aberration occurs when different wavelengths of light are separated and
pass through a lens at different angles (formation of rainbow color).
► All modern lenses are now corrected to some degree in order to avoid this problem.
Two basic types of compound light microscope. An
upright light microscope (a) and an inverted light
microscope (b). Note how there is more room
available on the stage of the inverted microscope
(b). This instrument is set up for microinjection
with a needle holder to the left of the stage.
Components of light microscope
► The main components of the compound light microscope include a light source that is focused at the
specimen by a condenser lens.
► Light that either passes through the specimen (transmitted light) or is reflected back from the
specimen (reflected light) is focused by the objective lens into the eyepiece lens.
► The image is either viewed directly by eye in the eyepiece or it is most often projected onto a
detector, for example photographic film or, more likely, a digital camera.
► The part of the microscope that holds all of the components firmly in position is called the stand.
► There are two basic types of compound light microscope stand – an upright and an inverted
microscope.
► The light source is below the condenser lens in the upright microscope and the objectives are above
the specimen stage.
► The inverted microscope is engineered so that the light source and the condenser lens are above the
specimen stage, and the objective lenses are beneath it.
► This allows additional room for manipulating the specimen directly on the stage, for example, for the microinjection of macromolecules into tissue culture cells, for in vitro fertilization of eggs or for viewing developing embryos over time.
Light source – Illumination stage
► The correct illumination of the specimen is critical for achieving high-quality images
and photomicrographs. This is achieved using a light source.
► Typically light sources are mercury lamps, xenon lamps, lasers or light-emitting diodes
(LEDs).
► Light from the light source passes into the condenser lens, which is mounted beneath
the microscope stage in an upright microscope in a bracket that can be raised and
lowered for focusing.
► Light source is above the stage in the case of an inverted microscope.
► The condenser focusses light from the light source and illuminates the specimen with
parallel beams of light.
► Koehler illumination: A correctly positioned condenser lens produces illumination that
is uniformly bright and free from glare across the viewing area of the specimen.
► Condenser misalignment and an improperly adjusted condenser aperture diaphragm are major sources of poor images in the light microscope.
Position of light source
Specimen stage
► The specimen stage is a mechanical device that is finely engineered to hold the
specimen firmly in place.
► Any movement or vibration will be detrimental to the final image.
► The stage enables the specimen to be moved and positioned in fine and smooth
increments, both horizontally and transversely, in the X and the Y directions, for
locating a region of interest.
► The stage is moved vertically in the Z direction for focusing the specimen; in inverted microscopes the objectives themselves are moved and the stage remains fixed.
► There are usually coarse and fine focusing controls for low magnification and high
magnification viewing respectively.
► The fine focus control can be moved in increments of 1 µm or better in the best research microscopes.
Objective lens
► The objective lens is responsible for producing the magnified image, and can be the most expensive component of the light microscope.
► Objectives are available in many different varieties.
► The lens description may include the manufacturer, magnification (4X, 10X, 20X, 40X, 60X, 100X), immersion requirements (air, oil or water), coverslip thickness (usually 0.17 mm) and often more-specialized optical properties of the lens.
► In addition, lens corrections for optical artifacts such as chromatic aberration and flatness of field may also be included in the lens description.
► Objective lenses can either be dry (glass/air/coverslip) or immersion lenses (glass/oil or water/coverslip).
► As a rule of thumb, most objectives below 40X are air (dry) objectives, and those of 40X and above are immersion (oil, glycerol or water).
► Dipping lenses are specially designed to work without a coverslip, and are dipped directly into water or tissue culture medium. These are used for physiological experiments.
Principles of imaging with an optical
microscope: (a) ray diagram of the simplest
two lens microscope; (b) definition of
parameters of an objective lens; (c) point
spread function with the first minimum at
r=W/2; (d) Airy patterns in the image plane.
"F" marks the foci of lenses. "R" shows the
separation distance between the centers of
Airy discs.
Numerical aperture (NA)
► The numerical aperture (NA) is always marked on the lens. This is a number usually between 0.04
and 1.4.
► The NA is a measure of the ability of a lens to collect light from the specimen.
► Lenses with a low NA collect less light than those with a high NA.
► Resolution varies inversely with NA, which implies that higher NA objectives yield the best
resolution.
► A resolution of 0.2 µm can only be achieved using a 100X plan-apochromat oil immersion lens with an NA of 1.4.
► Should there be a choice between two lenses of the same magnification, then it is usually best to
choose the one of higher NA.
► The shorter the wavelengths of illuminating light the higher the resolving power of the microscope.
► The limit of resolution for a microscope that uses visible light is about 300 nm with a dry lens (in air)
and 200 nm with an oil immersion lens.
► By using ultraviolet light (UV) as a light source the resolution can be improved to 100 nm because of
the shorter wavelength of the light (200–300 nm).
► The lateral resolution is usually higher than the axial resolution for any given objective lens
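The resolution figures quoted above follow from the Rayleigh criterion. The sketch below uses d_lateral ≈ 0.61·λ/NA and the common axial estimate d_axial ≈ 2·λ·n/NA²; the wavelengths, NA values and refractive index are illustrative assumptions, broadly consistent with the numbers in the bullets.

```python
def lateral_resolution_nm(wavelength_nm: float, na: float) -> float:
    """Rayleigh lateral resolution limit: d = 0.61 * lambda / NA."""
    return 0.61 * wavelength_nm / na

def axial_resolution_nm(wavelength_nm: float, na: float, refractive_index: float) -> float:
    """Commonly used axial (z) resolution estimate: d_z = 2 * lambda * n / NA^2."""
    return 2.0 * wavelength_nm * refractive_index / na ** 2

# Assumed cases: green light with a dry lens, an oil-immersion lens (n ~ 1.52), and UV light.
for label, wl, na, n in [("dry lens, 550 nm", 550, 0.95, 1.00),
                         ("oil lens, 550 nm", 550, 1.40, 1.52),
                         ("oil lens, 300 nm", 300, 1.40, 1.52)]:
    print(f"{label}: lateral ~{lateral_resolution_nm(wl, na):.0f} nm, "
          f"axial ~{axial_resolution_nm(wl, na, n):.0f} nm")
```

Note that the axial values come out several times larger than the lateral ones, matching the statement above that lateral resolution is higher than axial resolution.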
Eye-piece
► The eyepiece (sometimes referred to as the ocular) works in combination with the objective lens to further magnify the image and allow it to be detected by eye or, more usually, to project the image into a digital camera for recording purposes.
► Eyepieces usually magnify by 10X, since an eyepiece of higher magnification merely enlarges the image with no improvement in resolution.
► There is an upper boundary to the useful magnification of the collection of lenses in a microscope.
► For each objective lens the magnification can be increased beyond a point where it is impossible to resolve any more detail in the specimen.
► Any magnification above this point is often called empty magnification (see the sketch after this list).
► In addition to the human eye and photographic film, there are two types of electronic detectors employed on modern light microscopes.
► The first is area detectors that actually form an image directly, for example video cameras and charge-coupled devices (CCDs).
► Point detectors can be used to measure intensities in the image, for example photomultiplier tubes (PMTs) and photodiodes.
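A rough sketch of the empty-magnification idea mentioned above. The rule of thumb used here, that useful total magnification tops out at roughly 500 to 1000 times the objective NA, is a common optics guideline and not a figure from these notes, so treat the threshold as an assumption.

```python
def total_magnification(objective_mag: float, eyepiece_mag: float) -> float:
    """Total magnification of a compound microscope = objective x eyepiece."""
    return objective_mag * eyepiece_mag

def is_empty_magnification(total_mag: float, na: float, limit_factor: float = 1000.0) -> bool:
    """Flag magnification above ~limit_factor * NA as 'empty' (rule-of-thumb threshold)."""
    return total_mag > limit_factor * na

# Hypothetical combinations: a 40X / NA 0.75 dry objective with different eyepieces.
for eyepiece in (10, 20, 40):
    m = total_magnification(40, eyepiece)
    print(f"40X objective + {eyepiece}X eyepiece -> {m:.0f}X total, empty: {is_empty_magnification(m, na=0.75)}")
```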
Ray-paths in the transmitted-light microscope showing
(A) the field set of planes that are conjugate with the
object and the final image and (B) the aperture set of
planes conjugate with the filament and the apertures of
condenser and objective lenses.
Optical contrast
► Most cells and tissues are colorless and almost transparent, and lack contrast when
viewed in a light microscope.
► Therefore to visualize any details of cellular components, it is necessary to introduce
contrast into the specimen.
► This is achieved either by optical means using a specific configuration of microscope
components, or by staining the specimen with a dye.
► More usually, using a combination of optical and staining methods.
► Contrast is achieved optically by introducing various elements into the light path of the
microscope and using lenses and filters.
► These change the pattern of light passing through the specimen and the optical system.
► This can be as simple as adding a piece of colored glass or a neutral density filter into
the illuminating light path; by changing the light intensity; or by adjusting the diameter
of a condenser aperture.
► All of these operations are adjusted until an acceptable level of contrast is achieved for imaging.
Bright field microscopy
► The most basic mode of the light microscope is called brightfield (bright background),
which can be achieved with the minimum of optical elements.
► Contrast in brightfield images is usually produced by the color of the specimen itself.
► Bright-field is therefore used most often to collect images from pigmented tissues or
histological sections or tissue culture cells that have been stained with colorful dyes.
► In brightfield transmitted microscopy the contrast of the specimen is mainly produced
by the different absorption levels of light.
► The choice of appropriate optical equipment and correct illumination settings is vital for
the best contrast.
► Brightfield is operated in Köhler alignment - the aperture stop should be closed to
approximately 80% of the numerical aperture (NA) of the objective.
Brightfield image of living mouth epithelial cells on a slide before and after digital contrast optimisation of the archived image, and corresponding histograms.
• For specimens lacking natural differences in internal absorption of light, like living cells, the acquired image has flat contrast.
• To better visualize the existing image features, subsequent digital contrast optimization procedures may be applied.
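One simple form of the digital contrast optimization referred to above is a linear stretch of the image histogram. The NumPy sketch below is illustrative only; the percentile limits and the synthetic 'flat-contrast' image are assumptions, not part of the original material.

```python
import numpy as np

def contrast_stretch(image: np.ndarray, low_pct: float = 1.0, high_pct: float = 99.0) -> np.ndarray:
    """Linearly rescale intensities between two percentiles to the full 0-255 range."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    stretched = (image.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return np.clip(stretched * 255.0, 0, 255).astype(np.uint8)

# Synthetic flat-contrast image: values crowded into a narrow band, as for unstained living cells.
rng = np.random.default_rng(0)
flat = rng.normal(loc=120, scale=5, size=(256, 256)).clip(0, 255)
enhanced = contrast_stretch(flat)
print("before:", flat.min(), flat.max(), "  after:", enhanced.min(), enhanced.max())
```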
Brightfield images and light path
Representative bright-field microscopy images depicting the metabolic activity of MCF7 cultures before (control) and at indicated times after 8-Gy irradiation. Metabolic activity was measured by the ability of the cells to convert the yellow MTT to its purple formazan metabolite, appearing as dark granules and crystals. TB: Trypan blue.
Factors in brightfield microscopy
► The spectral compositions of differing light sources are not identical and, furthermore, are determined by temperature.
► Color temperature is of tremendous significance with regard to digital image acquisition and display.
► It influences both color perception and color display to a critical degree.
► Correcting color tinge(s) in true-color images is known as white balance.
► Three correction factors are calculated based on the pixels within a reference section, one for each of the three color components Red (R), Green (G) and Blue (B).
► The average intensity for each pixel n within the section is calculated as I_n = (R + G + B)/3 (a sketch of this correction follows below).
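A hedged sketch of the white-balance correction just described, interpreted in a gray-world style: the average intensity I_n = (R + G + B)/3 over a reference section defines one gain factor per color channel. The NumPy usage, the reference-patch idea and all pixel values are assumptions for illustration.

```python
import numpy as np

def white_balance(image_rgb: np.ndarray, reference_patch: np.ndarray) -> np.ndarray:
    """Scale each color channel so that the reference patch becomes neutral gray.

    For the patch, the per-pixel average intensity I = (R + G + B) / 3 is computed;
    each channel's correction factor is mean(I) / mean(channel).
    """
    patch = reference_patch.astype(np.float64).reshape(-1, 3)
    channel_means = patch.mean(axis=0)   # mean R, G, B over the reference section
    gray_level = channel_means.mean()    # mean of (R + G + B) / 3 over the section
    gains = gray_level / channel_means   # one correction factor per channel
    balanced = image_rgb.astype(np.float64) * gains
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Illustrative use: a slightly yellow-tinged image, with one corner taken as the reference section.
rng = np.random.default_rng(1)
img = np.clip(rng.normal([140, 135, 110], 10, size=(128, 128, 3)), 0, 255)
corrected = white_balance(img, reference_patch=img[:32, :32])
print("channel means before:", img.reshape(-1, 3).mean(axis=0))
print("channel means after: ", corrected.reshape(-1, 3).mean(axis=0))
```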
Darkfield microscopy
► Darkfield illumination produces images of brightly illuminated objects on a black background.
► The working principle is circular oblique illumination.
► This technique has traditionally been used for viewing the outlines of objects in liquid media such as living spermatozoa, microorganisms or cells growing in tissue culture.
► For lower magnifications, a simple darkfield setting on the condenser will be sufficient.
► For more critical darkfield imaging at a higher magnification, a darkfield condenser with a darkfield objective lens will be required.
► Light is directed to the specimen in such a way that no direct light enters the objective.
► Visibility is achieved because of the diffraction or reflection of light by the particles.
• If there is no light-scattering particle, the image is dark; if there is something that diffracts or reflects the light, those scattered beams can enter the objective and are visible as bright white structures on a black background.
• This contrast method is especially used to visualize scattering objects like small freshwater micro-organisms, diatoms and fibers.
• Almost all upright microscopes can be easily equipped for darkfield illumination.
• To ensure that no direct light enters the objective, the numerical aperture (NA) of the condenser has to be about 15% higher than the NA of the objective.
• Darkfield objectives are designed with an internal iris diaphragm to reduce the NA to the appropriate amount for darkfield observation.
Reflected darkfield
► Within reflected-light microscopy applications, darkfield illumination is a very common contrast technique.
► To achieve reflected darkfield, the number of special adaptations on the microscope is more sophisticated.
► The central light stop is located within a cube of the reflected-light attachment.
► The special brightfield/darkfield objective guides the illumination light in an outer ring to the specimen.
► Only the scattered light from the specimen runs through the normal central part of the objective as image-forming light rays.
Light path for darkfield compared to brightfield set up in transmitted and reflected illumination
Phase contrast microscopy
► Phase contrast is used for viewing unstained cells growing in tissue culture and
for testing cell and organelle preparations for lysis.
► The method images differences in the refractive index of cellular structures.
► Light that passes through thicker parts of the cell is held up relative to the light
that passes through thinner parts of the cytoplasm.
► It requires a specialized phase condenser and phase objective lenses.
► Each phase setting of the condenser lens is matched with the phase setting of
the objective lens.
► These are usually numbered as Phase 1, Phase 2 and Phase 3, and are found on
both the condenser and the objective lens.
Working principle of Phase Contrast
• Light that travels through part of a specimen and is not absorbed by amplitude objects will not produce a clearly visible image.
• The intensity remains the same, but the phase is changed compared to the light travelling through the surrounding areas.
• This phase shift, about a quarter of a wavelength for a cultured cell, is not visible to our eyes.
• Additional optical elements are needed to convert this difference into an intensity shift.
• These optical elements create a contrast in which un-deviated and deviated light are ½ wavelength out of phase, which results in destructive interference.
• This means that details of the cell appear dark against a lighter background in positive phase contrast (a phase-shift sketch follows below).
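The quarter-wavelength figure quoted above can be checked with a small optical-path-difference sketch: phase shift (in wavelengths) = (n_cell - n_medium)·t / λ. The refractive indices, cell thickness and wavelength below are typical textbook-style assumptions, not values from these notes.

```python
def phase_shift_in_wavelengths(n_specimen: float, n_medium: float,
                               thickness_um: float, wavelength_um: float) -> float:
    """Optical path difference (n_specimen - n_medium) * t expressed as a fraction of one wavelength."""
    return (n_specimen - n_medium) * thickness_um / wavelength_um

# Assumed illustrative values: cultured cell (n ~ 1.36) in medium (n ~ 1.335),
# about 5 um thick, imaged with green light (0.55 um).
frac = phase_shift_in_wavelengths(n_specimen=1.36, n_medium=1.335, thickness_um=5.0, wavelength_um=0.55)
print(f"phase shift ~ {frac:.2f} wavelengths (~{frac * 360:.0f} degrees)")
```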
Phase Contrast – Destructive interference
• For phase contrast microscopy two elements are needed.
• One is a ring slit insert for the condenser; the other is a set of special objectives that contain a phase plate.
• Both elements are located at the so-called back focal planes.
• Due to the optical principles of phase contrast, it gives good contrast in transmitted light when the living specimens are unstained and thin.
• A specimen should not be more than 10 µm thick.
• For thicker specimens the dark contrast is valid.
• A typical application is to determine how many cells within an adherent cell culture are undergoing mitosis or have entered cell death pathways.
Reflected interference contrast microscopy (RIC)
► Light alone is not enough: an equally distributed light source does not produce clear shadows, and this causes reduced visibility of three-dimensional structures.
► Our human vision is tuned to see three dimensions and is well trained to interpret structures if they are illuminated more or less from one point.
► The resulting dark and bright areas at the surface of a structure allow us to easily recognize and identify them.
► Therefore, a contrast method displays differences in a structure as a pattern of bright and dark areas.
► Even though the structures are displayed two-dimensionally, they look three-dimensional.
► RIC is a form of interference microscopy that produces images with a shadow relief.
► The simplest way to achieve a contrast method that results in a relief-like image is by using oblique illumination.
Differential interference contrast microscopy: (a) diagram and (b) texture of an oil-in-water emulsion.
• In transmitted microscopy the oblique condenser has an adjustable slit that can be rotated.
• After Köhler alignment of the microscope this slit is inserted in the light path.
• This results in illumination from one side, so that specimens that vary in thickness and density are contrasted.
• Rotating the slit enables us to highlight structures from every side.
• The contrast itself is produced by the complete thickness of the specimen, and the resolution of the image is limited due to the oblique illumination.
Nomarski Differential Interference Contrast
• To overcome the limitations of oblique contrast, Nomarski Differential Interference Contrast (DIC) is commonly used for high-resolution images.
• The benefit of this method is that the relief-like image is only contrasted at the focus area.
• In addition, using infrared light (mostly around 700 or 900 nm) instead of white light, this technique allows a very deep look of more than 100 µm into thick sections.
• It is widely used in neurobiological research.
• Nomarski DIC creates an amplified contrast of phase differences, which occur when light passes through material with different refractive indices.
Image generation in DIC
• To achieve transmitted Nomarski DIC images, four optical elements are needed: a polarizer, two prisms and an analyzer.
• In contrast, in the reflected DIC setup only one DIC prism (the slider) is required.
• The wave vibration direction of the light is unified by a polarizer located between the light source and the condenser.
• In the condenser, a special insert, a Wollaston prism (matching the magnification of the objective), divides every light ray into two, called the ordinary and the extraordinary ray.
• At the specimen, the ray that passes through e.g. a cell part is delayed compared to the one passing through the surrounding medium.
• This results in a phase shift between both rays, which are recombined with the help of a second prism located at the objective revolver.
• To create an easily observable pseudo-3D image, a prism can be moved in and out to enhance the phase shift between the ordinary and extraordinary ray.
Key points to remember in DIC
• The DIC method does not require special objectives, but the prism has to match the objective used.
• Suitable prisms are available for most fluorite and apochromat objectives.
• DIC contrast is offered with different shearing values for every optimization: a high-contrast set-up that is best for very thin specimens, a standard prism slider for general use, and a high-resolution slider for thick specimens like C. elegans or zebrafish embryos.
• In principle it combines the oblique illumination of a specimen, where the refraction differences at various parts of the specimen shape are used for contrast enhancement.
• It allows semi-transparent specimen structures to be analyzed in a manner difficult to achieve using brightfield microscopy.
Zernike phase contrast microscopy and differential interference contrast microscopy and their imaging results on unstained cheek cells: (a) Zernike phase contrast microscopy; (b) Zernike phase contrast microscopic image of unstained cheek cells; (c) DIC microscopy; (d) DIC microscopic image of unstained cheek cells.
Fluorescence microscopy
• Fluorescence microscopy is currently the most widely used contrast technique, since it gives superior signal-to-noise ratios for many applications.
• To identify the distribution of a specific protein within a tissue, a fluorochrome can be used to mark the protein via an antibody.
• Fluorescent molecules act like light sources that are located within specific areas of a specimen.
• These indicators require energy to emit light, and this is given to the fluorochrome by the excitation light provided by the microscope light source.
• Light of longer wavelength emitted by the excited fluorophore is then imaged.
• Fluorescence is popular because of the ability to achieve highly specific labelling of cellular compartments.
Components of fluorescence microscopy
• Combinations of filters are used that are specific for the excitation and emission characteristics of the fluorophore of interest.
• There are usually three main filters: an excitation filter, a dichromatic mirror (often called a dichroic) and a barrier (emission) filter, mounted in a single housing above the objective lens.
• For example, the commonly used fluorophore fluorescein is optimally excited at a wavelength of 488 nm and emits maximally at 518 nm (see the sketch after this list).
• Chromatic mirrors and filters can be designed to pass two or three specific wavelength bands for imaging specimens labelled with two or more fluorochromes (multiple labelling).
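A small sketch of how the excitation filter, dichroic and barrier filter are matched to a fluorophore, as shown above. Only the fluorescein peaks (488 nm excitation, 518 nm emission) come from the text; the filter band edges and the FilterSet structure are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FilterSet:
    excitation_band: tuple[float, float]  # nm range passed towards the specimen
    dichroic_cutoff: float                # nm; reflects shorter, transmits longer wavelengths
    emission_band: tuple[float, float]    # nm range passed on to the detector

def matches(filters: FilterSet, excitation_peak_nm: float, emission_peak_nm: float) -> bool:
    """Check that a fluorophore's peaks fall into the right bands of the filter set."""
    ex_ok = filters.excitation_band[0] <= excitation_peak_nm <= filters.excitation_band[1]
    em_ok = filters.emission_band[0] <= emission_peak_nm <= filters.emission_band[1]
    # the dichroic must separate the (reflected) excitation from the (transmitted) emission
    dichroic_ok = excitation_peak_nm < filters.dichroic_cutoff < emission_peak_nm
    return ex_ok and em_ok and dichroic_ok

# Hypothetical FITC-style filter cube checked against fluorescein (488 nm ex / 518 nm em).
fitc_cube = FilterSet(excitation_band=(470, 495), dichroic_cutoff=505, emission_band=(510, 550))
print(matches(fitc_cube, excitation_peak_nm=488, emission_peak_nm=518))  # True
```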
Extrinsic fluorescence
1. Frequently, molecules of interest for biochemical
studies are non-fluorescent. In many of these cases,
an external fluorophore can be introduced into the
system by chemical coupling or non-covalent
binding.
2. A common non-conjugating extrinsic chromophore for proteins is 1-anilino-8-naphthalene sulphonate (ANS), which emits only weak fluorescence in a polar environment, i.e. in aqueous solution.
3. When bound to hydrophobic patches on proteins, its fluorescence emission is significantly increased and the spectrum shows a hypsochromic shift.
4. o-Phthalaldehyde is a non-fluorescent compound that reacts with primary amines and β-mercaptoethanol to yield a highly sensitive fluorophore.
5. The intrinsic fluorescence of nucleic acids is very
weak and the required excitation wavelengths are
too far in the UV region to be useful for practical
applications.
6. SYBR Green is a fluorophore for DNA; it is less hazardous than ethidium bromide, and its fluorescence emission in water is very weak but increases about 30-fold upon binding to DNA.
Immunofluorescence microscopy
• Immunofluorescence microscopy is used to map the
spatial distribution of macromolecules in cells and
tissues.
• The method takes advantage of the highly specific
binding of antibodies to proteins.
• Antibodies are raised to the protein of interest and
labelled with a fluorescent probe.
• This probe is then used to label the protein of interest
in the cell and can be imaged using fluorescence
microscopy.
• The antibody to the protein of interest (primary
antibody) is further labelled with a second antibody
carrying the fluorescent tag (secondary antibody).
• Such a protocol gives a higher fluorescent signal than using a single fluorescently labelled antibody.
Fluorescence In Situ Hybridisation
• A related technique, fluorescence in situ hybridisation (FISH), employs the specificity of fluorescently labelled DNA or RNA sequences.
• The nucleic acid probes are hybridised to chromosomes, nuclei or cellular preparations.
• Regions that bind the probe are imaged using fluorescence microscopy.
• Many different probes can be labelled with different fluorochromes in the same preparation.
• Multiple-colour FISH is used extensively for clinical diagnoses of inherited genetic diseases.
• This technique has been applied to rapid screening of chromosomal and nuclear abnormalities in inherited diseases, for example, Down's syndrome.
Fluorescence resonance energy transfer (FRET)
Fluorescence recovery after photobleaching (FRAP)
• If a fluorophore is exposed to high-intensity radiation, it may be irreversibly damaged and lose its ability to emit fluorescence.
• Intentional bleaching of a fraction of fluorescently labelled molecules in a membrane can be used to monitor the motion of labelled molecules in certain (two-dimensional) compartments.
• The time-dependent monitoring of the recovery allows determination of the diffusion coefficient (a fitting sketch follows below).
• Incorporation of phospholipids labelled with NBD (e.g. NBD-phosphatidylethanolamine, Fig. 12.13b) into a biological or artificial membrane helps to study the rate of diffusion across the plasma membrane through FRAP.
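A hedged sketch of how a diffusion coefficient can be extracted from a FRAP recovery curve. The single-exponential recovery, the circular-spot relation D ≈ 0.224·w²/t_1/2 and every numerical value below are illustrative assumptions, not a prescription from these notes.

```python
import numpy as np

def halftime_from_recovery(t: np.ndarray, f: np.ndarray) -> float:
    """Estimate the half-recovery time t_1/2 from a normalized FRAP curve.

    f is normalized so that f = 0 immediately after the bleach; the late-time
    plateau below 1.0 reflects an immobile fraction of labelled molecules.
    """
    plateau = f[-10:].mean()             # late-time average taken as the mobile plateau
    idx = np.argmax(f >= 0.5 * plateau)  # first time point reaching half recovery
    return float(t[idx])

def diffusion_coefficient(spot_radius_um: float, t_half_s: float) -> float:
    """Circular-spot estimate (Soumpasis-type): D ~ 0.224 * w^2 / t_1/2."""
    return 0.224 * spot_radius_um ** 2 / t_half_s

# Synthetic recovery curve with assumed parameters: spot radius 1.5 um,
# true t_1/2 = 2 s, 80% mobile fraction, single-exponential recovery.
t = np.linspace(0, 30, 301)
f = 0.8 * (1 - np.exp(-np.log(2) * t / 2.0))
t_half = halftime_from_recovery(t, f)
print(f"t_1/2 ~ {t_half:.2f} s, D ~ {diffusion_coefficient(1.5, t_half):.3f} um^2/s")
```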