Basic Science Techniques in Clinical Practice
Hiten Patel
Manit Arya
Iqbal Shergill
Contents
1. Research Governance (S.J. Vyas, M. Arya, I.S. Shergill, and H.R.H. Patel)
3. Immunohistochemistry (Philippa Munson)
5. Flow Cytometry (P. Erotocritou, M. Arya, S.N. Shukla, and H.R.H. Patel)
Index
INTRODUCTION
The Department of Health (DOH), United Kingdom, regulates
the conduct of medical practice in the country. Its remit extends
beyond clinical practice, as it also defines and formulates the
criteria for performing research. Indeed, research governance
(RG) is in essence research regulation, with the DOH largely
assuming the role of regulator.
Research governance oversees a broad range of regulations,
principles and standards of good practice that exist to achieve,
and continuously improve, research quality across all aspects of
health care in the United Kingdom and worldwide.1
The DOH is responsible for setting health and social care
policy in England. The department’s work sets standards and
drives modernization across all areas of the National Health
Service (NHS), social care, and public health. As well as policy
documents, the DOH also issues guidance on implementation of
policies. The research governance framework (RGF) for health
and social care defines the broad principles of good RG and is
key to ensuring that health and social care research is conducted
to high scientific and ethical standards. The RGF was first issued
in March 2001 and updated in April 2005.2,3
These guidelines and principles of RG apply to everyone con-
nected to health-care research, whether a chief investigator, care
professional, researcher, their employer(s), or support staff. They
apply to any health-related research which involves humans,
their tissues, and/or data.1
Examples of such research include the following:
CONCLUSION
Regulation of research has evolved considerably since medieval
times, when most research and discovery was far less controlled
and was instead the product of the relentless curiosity and pursuit
of the human mind. This applies especially to medical discoveries
such as antiseptic surgery (Lister), penicillin (Fleming), and the
smallpox vaccine (Jenner). One wonders whether, had such strict
regulations existed, it would have been possible to make these
landmark medical innovations. Arguably, had mankind not taken
those risks then, we would not have seen so many of the advances
of modern medicine we see today. Alternatively, we have become
defensive, protecting ourselves from the speculated and feared
side effects of new discovery; it is, in a sense, a fear and insecurity
of the unknown. Nevertheless, in today's day and age, regulation
of research is equally important in order to direct and organize it,
while providing maximum protection to participants and not
compromising their safety.
References
1. www3.imperial.ac.uk/clinicalresearchoffice/researchgovernance
2. www.dh.gov.uk/PublicationsAndStatistics/Publications/PublicationsPolicyAndGuidance
3. Research Governance Framework for Health and Social Care, 2nd ed. Department of Health, United Kingdom; 2005.
4. Guidelines for Good Clinical Practice: ICH Guidelines.
Selection of Subjects
Study subjects should be selected to ensure that they are repre-
sentative of the population to which the results of the study will
be applied (target population). For example, studies including
only patients referred to one particular hospital or volunteers
may not always provide a representative sample of a more general
population. The source of subjects and the inclusion and exclu-
sion criteria need to be clearly defined.
Confounding
In a study population, there could be differences in the charac-
teristics of the subjects, such as age and sex, which may affect
outcome, and which may also be related to the exposure of inter-
est. For example, in comparing the operative mortality between
two surgical techniques, the differences between the outcomes
of the two operations could be due to the procedures. It could
also be due to differences in the preoperative patient character-
istics, which could also affect the choice of the technique. In
statistical terms, the effects of operation and patient character-
istics are said to be confounded (see Figure 2.1). It is important
to consider potential confounders at the design stage so that they
can be allowed for in the analysis.
FIGURE 2.1. Confounding: patient characteristics may be related both to the type of operation and to mortality.
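To make the idea concrete, the following minimal sketch (in Python, with invented counts) shows how a crude comparison of two operations can suggest a difference in mortality that disappears once patients are stratified by a characteristic related both to the choice of operation and to the outcome.

    # Hypothetical illustration of confounding; all counts are invented.
    # Crude mortality looks worse for technique B, yet within each risk
    # stratum the two techniques perform identically: the apparent
    # difference is driven entirely by the case mix.
    strata = {
        # stratum: (deaths_A, operations_A, deaths_B, operations_B)
        "low-risk patients":  (2, 100, 1, 50),
        "high-risk patients": (8, 50, 16, 100),
    }

    crude_A = sum(v[0] for v in strata.values()) / sum(v[1] for v in strata.values())
    crude_B = sum(v[2] for v in strata.values()) / sum(v[3] for v in strata.values())
    print(f"Crude mortality: A = {crude_A:.1%}, B = {crude_B:.1%}")

    for name, (d_a, n_a, d_b, n_b) in strata.items():
        print(f"{name}: A = {d_a / n_a:.1%}, B = {d_b / n_b:.1%}")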
Sample Size
When planning a study it is important to estimate the number
of subjects required. Otherwise it could be impossible to tell if
a study has a good chance of producing worthwhile results.
In general, the larger a study is the greater its power and preci-
sion. Power is concerned with testing for effects (for example a
difference between two treatments) and is defined as the proba-
bility that a study will be able to detect a clinically important
effect if it really exists. Precision of a sample estimate is deter-
mined by the width of a confidence interval for the estimate.
Statistical formulae are available to calculate sample size for
a study.1
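As an illustration only (not taken from the cited tables, whose corrections may differ slightly), the sketch below applies the standard normal-approximation formula for comparing two proportions; the example event rates are invented.

    # A sketch of the usual normal-approximation sample-size formula
    # for comparing two proportions.
    from scipy.stats import norm

    def n_per_group(p1, p2, alpha=0.05, power=0.80):
        """Approximate subjects needed per group to detect p1 versus p2."""
        z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance level
        z_beta = norm.ppf(power)            # desired power
        variability = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_beta) ** 2 * variability / (p1 - p2) ** 2

    # e.g. to detect a fall in an event rate from 10% to 5% with 80% power:
    print(round(n_per_group(0.10, 0.05)))   # roughly 430 per group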
Bias
The presence of bias in a study may affect the validity of its find-
ings. Steps should be taken at the design stage to avoid bias. The
most commonly occurring biases are described below:
Writing a Protocol
A study design should start with the writing of a protocol. It
should lay out systematically the various stages of the study
described above and include strategies for data collection and
achieving completeness, data entry, storage and validation of
data, a broad statistical analysis plan, and the responsibilities of
the study personnel. An analysis plan ensures that analysis for
the study is performed in an objective way, thus avoiding data
dredging, which may show spurious relationships.
Blinding
Blinding is a technique used to minimize response bias. By concealing
the treatment allocation from both the patient and the assessors, it
is possible to eliminate response bias; this is known as double blinding.
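A minimal sketch, assuming the allocation list is prepared in advance by someone independent of recruitment (for example, a trial pharmacy) so that neither patients nor assessors can predict or identify the next assignment: randomly permuted blocks keep the groups balanced while the order stays unpredictable. The block size, arm labels, and seed below are illustrative choices, not part of the chapter.

    # Randomly permuted blocks: each block contains equal numbers of each arm.
    import random

    def block_randomisation(n_patients, block_size=4, arms=("A", "B"), seed=2024):
        rng = random.Random(seed)            # fixed seed so the list is reproducible
        per_arm = block_size // len(arms)
        allocation = []
        while len(allocation) < n_patients:
            block = list(arms) * per_arm     # e.g. ["A", "B", "A", "B"]
            rng.shuffle(block)
            allocation.extend(block)
        return allocation[:n_patients]

    print(block_randomisation(12))           # e.g. ['B', 'A', 'A', 'B', ...]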
Placebos
If there is no existing standard beneficial treatment, then it is
reasonable to give the control group placebos instead of any
active treatment. Placebos are identical in appearance to the
active treatment, but are pharmacologically inactive. Placebos
are used because the act of taking a treatment may itself have
some benefit to the patient, so that part of any benefit observed
in the treatment group could be due to the knowledge/belief that
they had taken a treatment. Moreover, for a study to be double
blind, it is necessary for the two treatments to be indistinguish-
able. When the control treatment is an alternative active treat-
ment rather than a placebo, the different treatments should still
be indistinguishable, if possible.
Protocol Violations
A common problem relates to patients who have not followed the
protocol—for example patients who receive the wrong treatment
or do not take their treatment, known as noncompliers. If this
does occur, it is advisable to keep all randomized patients in the
trial and analyze groups as randomized. This is known as an
intention-to-treat analysis.
Types of Trials
Crossover Trials
The most common alternative is the crossover trial, in which all
patients receive both the intervention and the control treatment,
one after the other.
Observational Studies
In observational studies, associations between health outcomes
and exposure to risk factors or preventative measures are
observed in subjects without any planned intervention. Observa-
tional studies may be classified as 1) descriptive (includes case
report/series and cross-sectional) or 2) analytic (includes cohort
and case-control studies). Descriptive observational studies are
used to describe disease patterns and trends. Often these studies
are used to generate hypotheses and plan health programs. Ana-
lytic studies may be used to estimate or test relationships between
an exposure to a risk factor or a preventative measure, and a
health outcome.
Case Report/Series
A case report is a detailed profile of a single patient, reported by
one or more clinicians. For example, a report was published on
a 40-year-old woman who developed pulmonary embolism after
use of an oral contraceptive.4
A case series is an expanded version of a case report that
includes a series of patients with a given health condition. For
example, a study was conducted on 12 children who had received
the measles-mumps-rubella (MMR) vaccine and were referred to
a gastroenterology unit and had a history of normal development
followed by a pervasive developmental disorder (PDD).5
Cross-Sectional Study
In this type of study design, all information on a group of sub-
jects is collected at a single time point. There is no typical format,
and each study may be designed to meet the need of the
researcher. This study design includes subjects who are exposed
to the risk factor or preventative measure under investigation, as
well as those who are not. The outcome of interest is compared
between the exposed and unexposed groups. This design is par-
ticularly suitable to estimate the prevalence of a disease and to
examine health trends. It may also be used to examine the asso-
ciation between a risk factor or a preventative measure and a
health outcome. An example is the Health Survey for England,
conducted to monitor health trends and to estimate the prevalence
of certain diseases and of the risk factors associated with these
outcomes.6
Cohort Study
In a cohort study, a group of subjects is identified according to
the research objectives and followed over the study period, until
the subjects drop out, have the event of interest, or reach the end
of the study period. The event rate is compared between the
group of subjects exposed to the risk factor, or any preventative
measure under investigation, and the unexposed group. A cohort
study may be either prospective or retrospective; in a retrospective
cohort study, the exposure and outcome data have already been
collected before the study begins.
Case-Control Study
Case-control studies are always retrospective. They examine how
past exposure to risk factors contributes to current health
conditions. A group of subjects with the outcome of interest is
recruited according to some prespecified inclusion criteria
(cases). Another group of subjects who have not experienced the
outcome of interest is recruited as controls. Exposure to the risk
factor of interest is then compared between the cases and con-
trols. Cases and controls may be matched on important con-
founding characteristics. For example, a matched case-control
study was used to investigate the association between the MMR
vaccine and PDD.8 The exposure of interest was the MMR vaccine.
Subjects with a diagnosis of PDD, while registered with a general
practitioner (GP), were recruited as cases. Subjects with no diag-
nosis of PDD, matched on age, sex, and GP to the cases, were
recruited as controls. The conclusion was that there was no
association between MMR vaccine and PDD.
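Although the chapter does not specify the analysis, exposure among cases versus controls is commonly summarised as an odds ratio from a 2x2 table; the sketch below uses invented counts purely for illustration.

    # Invented 2x2 table: exposure among cases and controls.
    exposed_cases, unexposed_cases = 40, 60
    exposed_controls, unexposed_controls = 25, 75

    odds_cases = exposed_cases / unexposed_cases
    odds_controls = exposed_controls / unexposed_controls
    odds_ratio = odds_cases / odds_controls
    print(f"Odds ratio = {odds_ratio:.1f}")   # 2.0: the odds of exposure are twice as high among cases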
References
1. Machin D, Campbell M, Fayers P, Pinol A. Sample Size Tables for
Clinical Studies, 2nd ed. London: Blackwell Science; 1997.
INTRODUCTION
Immunohistochemistry describes the localization of antigens in
histological and cytological preparations using antibodies. It is
now recognized as an essential element and a major tool, both
in diagnostic and research-orientated cellular pathology. The
technique involves the detection of specific or highly selective
cellular epitopes with an antibody and appropriate labelling
system. Immunohistochemistry can be performed on cytological
preparations, frozen sections and paraffin-embedded histological
sections.
The first port of call for any researcher wishing to perform
immunohistochemistry should be the local immunohistoche-
mistry laboratory. Immunohistochemistry is a speciality that is
constantly changing and improving, and the local immunohisto-
chemistry laboratory will have more experience and more up-to-
date methods than most research laboratories. This should
therefore be done, even when in possession of an existing proto-
col from other researchers. It is also wise to refrain from pur-
chasing any reagents without first consulting the laboratory and
to ask the histology laboratory to perform the section cutting.
Although microtomy can be learnt in a few weeks, it is a skill
that takes years to perfect and poor section quality can severely
affect the interpretation of the immunohistochemistry.
BASIC IMMUNOHISTOCHEMISTRY
The basics of immunohistochemistry involve the use of an anti-
body to detect a specific epitope, which is then visualized using
a detection system and chromogen.
Fixation, be it with alcohol or formalin, will mask some anti-
gens to a certain extent. When this occurs, some form of antigen
retrieval will be needed to re-expose the antigen. This has to take
place before applying the primary antibody (Table 3.1).
The basic immunohistochemical staining sequence: fixation/processing/embedding → section cutting/microtomy → dewaxing sections and taking to water → antigen retrieval → peroxidase block → primary antibody → secondary antibody → tertiary layer → chromogen → counterstain → dehydrate, clear and mount sections.
METHODOLOGY
Immunohistochemistry methodology starts with the process of
fixation. The length of time a sample spends in fixative is very
important as under- or over-fixation can lead to problems with
proteolytic antigen retrieval.
Samples should not be kept in fixative indefinitely (24 hours
is optimal for formalin). If there is a delay between sample collection
and processing, samples should be taken to the histology laboratory
for processing once they have spent 24 hours in fixative.
Fixation
Fixation is essential for tissue and antigen preservation. The
most important reactions that take place are those that stabilize
the proteins. The general principle is that the fixatives form
cross-links between proteins, thereby stabilizing the cytoskeletal
structure.
Formaldehyde (formalin) is the fixative of choice for routine
histology; therefore, any retrospective studies using patient
samples will involve the use of immunohistochemistry on
formalin-fixed, paraffin-embedded blocks. The aldehydes form
cross-links between protein molecules, the reaction being with
the basic amino acid lysine.
These cross-linking “methylene bridges” ensure that the
structures of intracytoplasmic proteins are not significantly
altered. However, they can also have the effect of “masking”
antigens, therefore, tissue that has been fixed in formalin will
generally require some form of antigen “unmasking,” i.e., antigen
retrieval.
Alcoholic fixatives are generally used for frozen sections or
cytological preparations, as they are poor penetrators of tissue.
They preserve most proteins in a relatively undenatured state,
although some antigen retrieval may be necessary. It is wise to
refrain from fixing frozen sections in glutaraldehyde or parafor-
maldehyde, as these will usually mask the antigen, and effective
antigen retrieval is very difficult to perform on frozen sections.
Antigen Retrieval
Antigen retrieval is the method by which antigens that have been
masked through fixation and processing are unmasked prior to
immunostaining. There are two main methods for antigen
retrieval: proteolytic and heat-mediated. Of these two, heat-
mediated antigen retrieval (HMAR) is the most effective on the
majority of antigens; however, the correct antigen retrieval
method must be determined for each antibody.
Proteolytic digestion generally utilizes trypsin, chymotrypsin,
protease, or proteinase K enzymes. The majority of these need
to be used at 37°C, with correctly prepared pH solutions. The
main pitfall of proteolytic digestion is that the digestion time has
to be tailored to the fixation time, i.e., the longer a tissue has been
in fixative, the longer the digestion time required.
Antibodies
The vast majority of primary antibodies available for use on
human tissue are made in either rabbits or mice. It is essential
to check (before purchasing) that the required primary antibody
is available in a form that works on formalin-fixed, paraffin-embedded
tissue. Again, liaison with the local immunohistochemistry labo-
ratory will be helpful. Generally speaking, monoclonal murine
antibodies are preferable to polyclonal antibodies, as they tend
to be more specific. See Animal Tissue section for more specific
guidelines on selecting antibodies for tissue other than human.
Avidin-Biotin Methods
These methods were first described by Heggeness and Ash3 (1977)
and utilize the high affinity of the glycoprotein avidin for biotin,
a low molecular weight vitamin.3 Avidin is present in egg white
and is composed of four subunits that form a tertiary structure
possessing four specific binding sites for biotin. Egg-white avidin
contains some oligosaccharide residues that possess an affinity
for some tissue components. As a result, a similar molecule,
streptavidin (extracted from the culture broth of the bacterium
Streptomyces avidinii), is generally used as this molecule does not
contain the oligosaccharide residues.
Biotin (vitamin H) can be conjugated to both antibody and
enzyme molecules. Up to 200 molecules of biotin can be conju-
gated to one antibody molecule, often with the aid of spacer
arms. By spacing the biotin molecules, streptavidin is given room
to bind and is able to maximize its strong affinity for biotin.
In a streptavidin-biotin complex, the streptavidin and biotinyl-
ated enzyme are supplied as two separate reagents, which need
to be added together 30 minutes before use. The streptavidin can
be added in slight excess so that the biotinylated enzyme does
not saturate all of the biotin-binding sites. Either peroxidase or
alkaline phosphatase can be used as the enzyme label.
[Figure: avidin-biotin detection, showing a streptavidin plus biotinylated peroxidase complex and peroxidase-labelled streptavidin bound to the antigen.]
Polymer-Based Methods
Polymer-based methods can be either a two- or three-layer
system. In the two-layer system, the secondary antibody is part
of the polymer molecule. The secondary antibody is conjugated
to the polymer as are a large number of enzyme molecules. The
EnVision kit, available from Dako, U.K., is an example of a two-
layer polymer system. Vyberg and Neilsen4 (1998) reported com-
parable sensitivity between the Dako kit and a three-stage
avidin-biotin system (Figure 3.3).4
In the three-layer system, the secondary antibody is applied
unconjugated, then the tertiary antibody which is conjugated to
the polymer along with enzyme molecules. The Novolink Polymer
kit, available from Novocastra Laboratories, U.K., is an example
of a three-layer polymer system (Figure 3.4).
The main advantage of polymer-based systems is that
they can be used on tissues containing a lot of endogenous
biotin without producing background staining. (See Background
section.)
[Figure: polymer-based detection, with the polymer carrying peroxidase molecules bound via the antibody layers to the antigen.]
Animal Tissue
When using animal tissue, the detection system components
need to be chosen with care. First, the primary antibody must be
able to recognize that particular species; this information will be
on the specification sheet. The secondary antibody then needs to
recognize the primary antibody without cross-reacting with the
host tissue. For example, if the host tissue is mouse, the primary
antibody should be raised in another animal, e.g., rat. The sec-
ondary antibody will then be raised in another animal, e.g.,
rabbit, and be anti-rat.
Occasionally, the only available primary antibody will be
raised in the same species as the host tissue. In these circum-
stances, there are commercial kits available to facilitate these
staining procedures.
TROUBLESHOOTING/OPTIMIZATION
Background
The major causes of background staining in immunocytochem-
istry are hydrophobic and ionic interactions and endogenous
enzyme activity. Hydrophobicity is a property shared by most
proteins and confers stability on the tertiary structure of pep-
tides. Hydrophobic interactions may also take place between
different protein molecules and impart stability to immune complexes.
Endogenous Biotin
Biotin is a vitamin and coenzyme that is found in liver, kidney,
and a variety of other tissues. Biotin binds specifically and with
a very high affinity to avidin and streptavidin. Its endogenous
activity is most pronounced in frozen sections. It is possible to
block endogenous biotin with successive incubations of 0.1%
avidin and 0.01% biotin. The avidin blocks the endogenous
biotin, and the dilute biotin blocks any free sites left on the
avidin. Alternatively, if the tissue under investigation contains
large amounts of endogenous biotin, a different detection system
can be used, e.g., a polymer-based method, which does not use
biotin in its labelling system.
Washing
Thorough washing of slides in between the various steps of an
immunohistochemical technique is essential. Most protocols use
a tris buffered saline (TBS), although phosphate buffered saline
(PBS) can be used. As mentioned above, the addition of a deter-
gent to the buffer wash will help reduce some background stain-
ing; however, this should not be used when staining frozen
sections, as there is a higher risk of section detachment when
dealing with these specimens. It is worth remembering that it is
difficult to over-wash sections, whereas under-washing them will
result in dirty, patchy staining.
References
1. Shi SR, Keye ME, Kalra KL. Antigen retrieval in formalin-fixed
paraffin-embedded tissues: an enhancement method for immun-
ohistochemical staining based on microwave oven heating of
sections. J Histochem Cytochem 1991;39:741–748.
2. Singh N, Wotherspoon AC, Miller KD, Isaacson PG. The effect of for-
malin fixation time on the immunocytochemical detection of antigen
using the microwave. J Pathol (Suppl.) 1993;382A.
3. Heggeness MH, Ash JF. Use of the avidin-biotin complex for the localization of actin and myosin with fluorescence microscopy. J Cell Biol 1977;73:783.
4. Vyberg M, Neilsen S. Dextran polymer conjugate two-step visualisa-
tion system for immunocytochemistry: a comparison of EnVision+
INTRODUCTION
The fundamental aspect of cell culturing is to understand the
type of cells that the investigator wishes to grow. There are many
differences between the cell types; however, the easiest method
of categorizing them is into primary cells and cell lines.
There are two types of cultured cells:
• Primary cells
• Cell lines
Primary Cells
These are cells derived directly from tissue samples/biopsies;
they are heterogeneous in nature, with a variable growth fraction.
The cells are extracted directly from the tissue and grown in
specified optimal cell culture media.
Cell Lines
These are cells subcultured from primary cells that have been
manipulated in the laboratory so that they last longer and can
go through more cell passages (with a growth fraction of 80% or
more) than the original primary cells before they change their
morphology. On the whole, they tend to be more resilient than
primary cells. However, due to their manipulation, they may not
mimic in-vivo cells as closely as the unmanipulated primary cells.
Subcultured cells separated from primary cells create homogeneous
cell lineages. Specific properties are acquired by the process of
cloning or physical cell separation, thus leading to so-called cell
strains.
• Gloves
• Sterilized equipment
• Laminar flow cabinet
• Single-use pipettes
• Resterilization of equipment
• Gentamicin
• Streptomycin
• Penicillin G
• Amphotericin B (antifungal)
FIGURE 4.1. Cell culturing in a laminar flow cabinet. Note the scientist
is using a battery-operated “pipette boy” to draw the media up into the
pipette. (Courtesy of P.N. Vaiude, Honorary Clinical Fellow, Barts and
the London NHS Trust.)
FIGURE 4.2. Caplan-1 (A) and Paca-3 (B) cell strains growing in solution
medium. (Courtesy of P.N. Vaiude, Honorary Clinical Fellow, Barts and
the London NHS Trust.)
Support Cells
Certain cells need “feeder” cells to support their growth, espe-
cially during the initial stages when their cell numbers are low.
Once the cells of interest have reached an adequate number and
are able to support themselves, the “feeder” cells are removed,
and the cells are allowed to grow on their own momentum. An
example of this is the use of “3T3” cells from mice for keratino-
cytes (skin epithelial cells). These cells are gamma irradiated
before use so that they can provide support for the keratinocytes
when they are initially extracted from the biopsies and are low
in number. The 3T3 cells are irradiated so that they can provide
support but then die off within a few days; otherwise they will
overtake the cells being cultured and kill them.
Incubation
Ideal conditions for the growth of cells are normally maintained
for maximal growth. This usually includes incubating the cells
at a suitable temperature, environmental moisture, and an ade-
quate environmental CO2 concentration. Humidified CO2 incuba-
tors with shelves are usually available with these preset conditions
for cell culture. They are regularly cleaned out with a detergent
or 70% alcohol, and fungicide/bactericide is placed in the humid-
ified water to prevent cell infections during incubation.
CELL EXTRACTION
There are many means of extracting cells, and the one that we
will describe is for fibroblasts from tissue biopsies.
Fibroblast Extraction
Primary fibroblasts are fairly resilient cells that proliferate quickly
and do not need support cells for their growth. However, like all
cells, they will need their ideal medium and environmental con-
ditions (Figures 4.3 and 4.4).1
Fibroblast extraction steps:
FIGURE 4.3. Fibroblasts growing out from the explanted tissue sample
on a cell culture flask. The picture was taken with a camera mounted on
a light microscope. Magnified ×20. (Courtesy of P.N. Vaiude, Honorary
Clinical Fellow, Barts and the London NHS Trust.)
CONCLUSION
Cell culturing techniques have developed and significantly
improved over the decades and have enabled major scientific
leaps in important fields such as cancer research, wound healing,
and, not least, stem cell technologies.2
References
1. Jones GE, Witkowski JA. Reduced adhesiveness between skin fibroblasts from patients with Duchenne muscular dystrophy. J Neurol Sci 1979;43(3):465-470.
2. Navsaria HA, Myers S, Leigh IM, McKay IA. Culturing skin in vitro
for wound therapy. (Review). Trends Biotechnol 1995;13:91-100.
Chapter 5
Flow Cytometry
P. Erotocritou, M. Arya, S.N. Shukla, and H.R.H. Patel
FLUORESCENCE
Fluorescence is a key concept in flow cytometry. It is the property
of molecules to absorb light energy at one wavelength, called the
excitation wavelength, and then rapidly re-radiate some of that
energy at a longer wavelength, known as the emission wavelength.
[Table: fluorochromes, the light emission wavelengths detected, and the corresponding photomultiplier (color).]
LIGHT SCATTER
The flow cytometer is able to detect and quantify light scatter
signals as well as immunofluorescence. The light scatter signals
are due to the laser light of the flow cytometer reflecting and
refracting off the cells. Two types of light scatter are quantified:
1) Orthogonal or side light scatter, which is light scatter
measured at a right angle (90 degrees) to the direction of the
laser beam. This is detected in the side channel, the intensity of
the reading correlating with the granularity of the cell.
2) Forward or low angle scatter, which is light scatter
measured along the same axis that the laser light is traveling or
near 180 degrees from the point at which the laser beam intersects
the cell. This measurement correlates with cell size and cell
surface properties, such as cell ruffling. Therefore it can be used
not only to distinguish cell size, but also to distinguish live from
dead cells.
Fluidic System
The fluidic system focuses the cells/particles into a fine stream,
in which they are moved individually to intersect the laser light source.
Furthermore, the fluidic system is vital in cell sorting by flow
cytometry. Initially, the sample of fluorescently labelled suspen-
sion of single cells flows rapidly through a narrow channel in the
instrument, where a small amount of cell suspension joins a
larger amount of cell free buffer (sheath fluid). These two streams
do not mix, and consist of an inner sample stream surrounded
by an outer sheath stream (coaxial flow). The coaxial stream
ensures that the cells are centered as they pass the laser beam,
and that they are spaced out sequentially so that they pass the
beam individually.
To ensure the sample fluid is flowing continuously, positive
air pressure is applied to the sample reservoir and sheath fluid.
A purge line is connected to the sheath inlet to allow a vacuum
to be applied for clearing blockages and air bubbles.
These mirrors and filters also act to separate and direct emis-
sions of varying wavelengths to the corresponding detectors or
photomultiplier tubes (PMT). Within the PMTs the incoming
light is converted into electronic signals. Subsequent electronic
and computational processing of these signals results in graphic
display and statistical analysis of the measurements being
made.
There are cytometers now available that are capable of ana-
lyzing up to 13 parameters for each cell (forward scatter, side
scatter and up to 11 colors of immunofluorescence).12 This allows
cell size, lipid content, protein content, DNA content, and enzyme
activity, to name a few characteristics, to be measured for each
cell, thus allowing a multidimensional representation of a population
to be obtained. With most cytometers, it is almost always
possible for at least 1,000 cells to be analyzed per second, whilst
with appropriate specimens some cytometers are able to analyze
up to 100,000 cells per second, with no marked reduction in data
quality.13
Color Assignment
The signal emitted by a fluorochrome is detected by its corre-
sponding PMT and then converted to a parameter that can be
acquired. The series of optical filters and mirrors used ensures
that only specific regions of the spectrum reach each PMT. The
PMT detectors in the flow cytometer are labeled FL1, FL2, FL3
and onwards depending on how many are present, with each
detecting only light of a specific emission wavelength.
DATA ANALYSIS
[Figure: representative flow cytometry histograms and dot plots; one dot plot is labelled as containing both the primary and secondary antibody. On both the histograms and dot plots the x-axis shows fluorescence intensity (channel number) on 1,024 channels spanning 4 log decades (logarithmic scale). On the histograms the y-axis shows cell counts in each channel, whereas on the dot plots the y-axis shows side scatter intensity (SSC, channel number) on a linear scale. Quadrant gates were set on the dot plots using the background fluorescence of the respective unstained negative control (fluorochrome-conjugated secondary antibody only, without the primary antibody). Events due to dead cells/cell debris and nonspecific binding of the secondary antibody have been gated out of the final results.]
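A minimal sketch of the gating idea described in the figure legend, assuming the fluorescence values have already been exported from the cytometer's listmode data as NumPy arrays; the simulated distributions and the 99th-percentile cut-off are illustrative choices, not part of the original analysis.

    import numpy as np

    rng = np.random.default_rng(0)
    # Simulated fluorescence intensities (arbitrary units)
    negative_control = rng.lognormal(mean=1.0, sigma=0.4, size=5000)   # secondary antibody only
    stained_sample = np.concatenate([
        rng.lognormal(mean=1.0, sigma=0.4, size=3000),                 # unstained subpopulation
        rng.lognormal(mean=3.0, sigma=0.4, size=2000),                 # stained subpopulation
    ])

    # Set the positivity gate from the background fluorescence of the negative control
    gate = np.percentile(negative_control, 99)
    percent_positive = 100 * np.mean(stained_sample > gate)
    print(f"Gate at {gate:.1f}; {percent_positive:.1f}% of cells positive")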
References
1. Cram LS. Flow cytometry, an overview. Methods Cell Sci 2002;
24:1–9.
[Figure: schematic of blotting: (A) gel electrophoresis of the sample alongside a size standard; (B) transfer (blotting) of the separated molecules from the gel onto a nylon membrane held between filter paper, cathode, and anode; followed by hybridization with a labelled probe and detection (light, hν, or β/γ emission).]
Blocking
Because the membranes used readily bind protein and nucleic
acids, nonspecific binding between the probes and the membrane
has to be blocked. These nonspecific interactions are prevented
by soaking the membranes in a solution containing a high
concentration of DNA (e.g., herring or salmon sperm DNA) in
the case of Southern and Northern hybridization. For antibody-based
Western hybridization, protein is used for blocking, typically
5% nonfat dry milk or 3% bovine serum albumin (BSA). The
presence of a detergent in the blocking solution, such as Tween
20 at 0.05%, is also important.
Hybridization
The labeled probe is added to the blocked Southern, Northern,
or Western blot membrane in a buffered saline solution, contain-
ing a small percentage of detergent and nonfat milk or BSA.
Under gentle agitation, the probe molecules are allowed to bind
for a period of hours. The recommended final DNA or RNA probe
concentration is 2 to 10 ng/ml. A primary antibody is generally
incubated with the filter at a concentration of between 0.5 and 5 µg/ml. The
secondary antibody can be added after the unbound primary
antibody is removed by washing the membrane.
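A minimal sketch of the C1V1 = C2V2 dilution arithmetic behind such working concentrations; the stock concentration, target concentration, and volumes below are invented for illustration.

    def stock_volume(final_conc, final_volume, stock_conc):
        """Volume of stock needed so that stock_conc * v = final_conc * final_volume."""
        return final_conc * final_volume / stock_conc

    # e.g. a primary antibody supplied at 1 mg/ml (1,000 ug/ml), to be used at
    # 2 ug/ml in 10 ml of buffer (a 1:500 dilution):
    v_ml = stock_volume(final_conc=2, final_volume=10, stock_conc=1000)
    print(f"Add {v_ml * 1000:.0f} ul of stock and make up to 10 ml")   # 20 ul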
Washing
After hybridization, the membrane is rinsed repeatedly in several
changes of buffer to wash off any unbound, excess antibody, or
nucleic acid probe, in order to avoid unspecific background
signals.
Detection
If the probe is radioactive, the pattern of hybridization is visual-
ized on x-ray film by autoradiography. The membrane is pressed
against the film, which in turn is exposed, for a few minutes up
to weeks, wherever the probe bound. Because nonradioactive
detection methods are safer, quicker, and cheaper, autoradiography
is now rarely used.
[Figure: chemiluminescent detection: horseradish peroxidase (HRP) conjugated to the protein-bound probe catalyses the oxidation of luminol by H2O2, producing light (hν), H2O, and N2.]
Stripping
After detection with one particular probe, the probe can be almost
completely removed by stripping the membrane. The membrane
has to be submerged in a special stripping buffer and, in the case
of Southern and Northern blots, heated to 94°C. Then the mem-
brane is ready for the next hybridization with a different probe.
This process can be repeated up to 20 times.
KEY POINTS
• Southern blots are used to identify DNA, Northern blots for
RNA, and Western blots for protein, respectively
• Molecules of DNA cut with restriction enzymes, RNA dena-
tured with formaldehyde, and protein denatured with SDS are
separated according to their size by agarose gel electrophoresis
(Southern, Northern) or SDS-PAGE (Western)
• The separated molecules are transferred to a solid nylon or
nitrocellulose membrane by capillary action (Southern, North-
ern) or electrophoresis (Western)
• The blotted membranes are blocked with an excess of DNA
(Southern, Northern) or protein (Western) to prevent unspe-
cific binding of probes
• The blocked membranes are hybridized with a target-specific
ssDNA or RNA probe (Southern, Northern), or antibody probe
(Western)
• The probes are labeled with radioactivity, fluorescence dyes,
DIG or reporter enzymes
• A labeled secondary antibody directed against DIG or primary
antibody is applied in a second hybridization step
• After hybridization, unbound probes are extensively washed
off
• Bound probes are detected by autoradiography, fluorescence,
or enzymatic chemiluminescent emission of light
• Southern blotting can be used for DNA fingerprinting and
genome mapping, Northern blotting for gene expression profil-
ing, and Western blotting for protein characterization, identi-
fication as well as expression analysis
References
1. Southern EM. Detection of specific sequences among DNA fragments
separated by gel electrophoresis. J Mol Biol 1975;98(3):503–517.
2. Alwine JC, Kemp DJ, Stark GR. Method for detection of specific RNAs
in agarose gels by transfer to diazobenzyloxymethyl-paper and hybrid-
ization with DNA probes. Proc Natl Acad Sci USA 1977;74(12):
5350–5354.
3. Renart J, Reiser J, Stark GR. Transfer of proteins from gels to diazo-
benzyloxymethyl-paper and detection with antisera: a method for
studying antibody specificity and antigen structure. Proc Natl Acad Sci
USA 1979;76(7):3116–3120.
4. Towbin H, Staehelin T, Gordon J. Electrophoretic transfer of proteins
from polyacrylamide gels to nitrocellulose sheets: procedure and some
applications. Proc Natl Acad Sci USA 1979;76(9):4350–4354.
5. Burnette WN. Western blotting: electrophoretic transfer of proteins
from sodium dodecyl sulfate-polyacrylamide gels to unmodified nitro-
cellulose and radiographic detection with antibody and radioiodinated
protein A. Anal Biochem 1981;112(2):195–203.
Chapter 7
Fluorescent In Situ Hybridization
Fiona Campbell and John M.S. Bartlett
INTRODUCTION
The in situ hybridization (ISH) technique was introduced by Gall and
Pardue1 in 1969. At that time the technique was limited by the use
of radioactively labelled probes that were subsequently visualized
by autoradiography. The development of interphase cytogenetics
in the 1980s and fluorescent labels in 19862 has seen the technology
applied in a number of fields. Although fluorescent in situ hybrid-
ization (FISH) is a valuable research tool, it is now also a technique
employed in the diagnostic laboratory. It is currently a standard
tool in cytogenetics laboratories, where it is used for the diagnosis
of hereditary disorders, chromosomal aberrations, and hemato-
logic cancer markers. More recently, the technique has been
applied to formalin-fixed, paraffin-embedded cells and tissues. The
application of FISH to detect gene amplifications (HER2 in breast
cancer),3 gene rearrangements (BCR-ABL in leukaemias),4 micro-
deletions,5 chromosomal duplication, and viral infections (HPV)
highlights the importance of this methodology, not only in the
clinical setting but in wider research applications.
This chapter serves to highlight the key points in the applica-
tions of FISH technology.
BASIC PRINCIPLES
Fluorescent in situ hybridization is based on the ability of com-
plementary single-stranded nucleic acid molecules to hybridize
to each other, and therefore allow the demonstration of specific
genes or chromosomes in their cellular environment. The tech-
niques are simple and involve the pretreatment of the tissue or
cellular preparation to unmask the target DNA and the hybridiza-
tion of a specific DNA probe, of complementary base sequence,
to this target.
Fluorescent in situ hybridization can be performed on either
fresh or archival tissues and unstained cytologic preparations.
Standard FISH protocols can be divided into 5 basic steps:
Probes
The main requirement for FISH is the development of a DNA
probe with homology to the target region. In most instances,
these probes are commercially available, and these are highly
recommended, as the “in-house” development of such probes
needs to be subject to rigorous quality control measures. The
probe DNA can be labelled by nick translation, polymerase chain
reaction (PCR), random priming, or chemical conjugation. All
probes must have Cot-1 DNA added, as this represents repetitive
sequences of the human genome, and its addition to the probe
will suppress nonspecific hybridization.6 Cot-1 DNA is usually
supplied with commercial probes.
All FISH procedures using commercial probes should be carried
out according to manufacturer’s instructions for optimal results, as
they may differ slightly from the method detailed below.
Sample Preparation
Fluorescent in situ hybridization methodologies are most
commonly applied to isolated cells from whole blood or other
bodily fluids, frozen-tissue sections, or formalin-fixed paraffin-
embedded (FFPE) tissues.1,2 Cytologic preparations and frozen-
tissue sections can be prepared on to silane-coated slides or
charged slides and fixed in an acid: alcohol mix, for example,
1 part acetic acid with 3 parts methanol.
Formalin-fixed paraffin-embedded tissue sections can also be
prepared by sectioning at 3–4 µm thickness and mounting onto
silane-coated or charged slides. Formalin fixation is used to
prevent tissue degradation and to preserve morphology but can
also affect the retention of nucleic acids. Most pathology labora-
tories use neutral buffered formalin to fix tissues, and this pro-
vides an optimal platform for tissue morphology assessment.
Neutral buffered formalin acts as a strong oxidizing agent, pro-
moting divalent protein-protein and protein-nucleic acid cross-
links in fixed tissues. This fixation is followed by dehydration and
embedding in paraffin wax.
Acid Permeabilization
Following de-waxing and rehydration a number of different steps
may be used to increase tissue permeabilization and to allow
probe access. Each of these steps has the potential to cause tissue
damage and result in the loss of tissue morphology, causing the
technique to fail. Thus, tissue permeabilization is a balance
between allowing probe access and retaining tissue architecture.
The most frequently used pretreatment protocols involve one or
more steps, using acid, detergents, and/or reducing agents with
the aim to permeabilize tissue prior to protease digestion steps,
which break down the cellular proteins to enhance probe pene-
tration, and to reduce autofluorescence.
Most commonly, incubation in 0.2 N hydrochloric acid
is used to remove protein from the tissue and improve probe
access. This incubation step is thought to reverse some of the
effects of formalin fixation.
Reducing Agents
Sodium metabisulphite, sodium thiocyanate,3 and MES are used to
break disulphide bonds formed by formalin fixation and aid subse-
quent protease digestion, and therefore, increase probe access.
Using both acids and reducing agents prior to protease diges-
tion ultimately reduces the amount of exposure to proteolytic
enzymes and therefore minimizes the amount of tissue damage
and any loss of morphology.
Protease Digestion
As mentioned above, this step needs to be performed with
minimal tissue damage. This is achieved by varying the time the
tissue is exposed to the protease according to the tissue type and
fixation.
Posthybridization Wash
Posthybridization washes are essential to remove any excess
unbound probe and any nonspecifically bound probe. The strin-
gency of the posthybridization wash can be determined by
salt and formamide concentrations and by temperature.7 Increas-
ing temperature and formamide concentrations, and decreasing
salt concentrations, will decrease hybridization. Each of these
measures disrupts the hydrogen bonds required for probe-target
DNA binding and therefore removes any nonspecifically bound
probe.
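As a hedged illustration (not taken from this chapter), one commonly quoted approximation for the melting temperature of long, perfectly matched DNA-DNA hybrids shows quantitatively why raising the formamide concentration or lowering the salt concentration increases stringency at a given wash temperature; the hybrid length, GC content, and salt values below are assumptions for the example.

    import math

    def approx_tm(na_molar, percent_gc, percent_formamide, length_bp):
        """Approximate Tm (degrees C) of a long DNA-DNA hybrid."""
        return (81.5
                + 16.6 * math.log10(na_molar)
                + 0.41 * percent_gc
                - 0.61 * percent_formamide
                - 500 / length_bp)

    # 2xSSC is roughly 0.3 M Na+; compare a wash with and without 50% formamide
    # for a 300-bp hybrid of 50% GC content:
    print(f"{approx_tm(0.3, 50, 0, 300):.0f} C")    # ~92 C: hybrids remain stable during a 72 C wash
    print(f"{approx_tm(0.3, 50, 50, 300):.0f} C")   # ~61 C: the same 72 C wash is now highly stringent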
QUALITY ASSURANCE
An example of a FISH protocol for formalin-fixed, paraffin-embedded
tissue sections is given below.
This pretreatment method has been adapted from the Path-
Vysion pretreatment protocol (Abbott Diagnostic Labs., UK). If
many slides are to be treated, steps 2–19 can be performed on a
VP2000 automated tissue processing station, substituting the
2xSSC washes with distilled water. The VP2000 can accommo-
date up to 50 slides per run. All glassware should be rinsed with
distilled water before use.
Slide Pretreatment
1. Cut 4-µm tissue sections, pick up on silane-coated slides,
and bake overnight in a 56°C oven. Store at room temperature
until required
2. Set up two water baths, one at 37°C and one at 80°C.
Place one Coplin jar per 5 slides to be treated in each water bath.
Fill those at 80°C with 8% sodium thiocyanate (pre-treatment
solution) pH 6.5–7.0 and those at 37°C with 0.2 N HCl pH 2.0 ±
0.2 for protease digestion, but do not add the protease at this
time
3. Immerse slides into xylene for 5 minutes at room
temperature to remove the paraffin wax
4. Repeat step 3 with fresh xylene
5. Transfer slides to 99% ethanol for 5 minutes at room
temperature to remove xylene from the tissue sections
6. Repeat step 5 with fresh 99% ethanol
Acid Permeabilization
7. Remove slides and allow them to air-dry before immersing
in 0.2 N HCl for 20 minutes at room temperature
8. Remove slides and wash in distilled water for 3 minutes
9. Wash slides in 2xSSC buffer for 3 minutes at room
temperature
Protease Digestion
13. Incubate the slides in the protease at 37°C for the
recommended digestion time. This incubation time will vary
depending on the tissue type, tissue fixation method used, and
the concentration and activity of pepsin used
14. Remove slide and immerse in 2xSSC buffer for 5 minutes
at room temperature
15. Repeat step 14 with fresh 2xSSC buffer
16. Immerse slides in 10% neutral-buffered formalin for 10
minutes at room temperature
17. Place slides in 70% alcohol for 1 minute at room
temperature
18. Place slides in 85% alcohol for 1 minute at room
temperature
19. Place slides in 99% alcohol for 1 minute at room
temperature
20. Air-dry slides. Mount using a mountant containing DAPI
(4,6-diamidino-2-phenylindole-2-hydrochloride) and apply glass
cover slips
21. Assess the tissue digestion with a 100-W fluorescence
microscope that incorporates a filter specific for the excitation
and emission wavelengths of DAPI. If digestion is optimal, proceed
to step 22. If sections are under-digested, repeat steps 13 to 21
and reassess the digestion. If sections are over-digested, discard
the slides and repeat the procedure using new sections, reducing
the incubation time in protease
22. Place slides into 2xSSC buffer until the cover slips wash
off. Place slides into fresh 2xSSC buffer for 5 minutes
23. Completely dry slides in 56°C oven
Denaturation
24. In a fume hood, check the pH of the denaturing solution
(49 mls formamide, 7 mls 20xSSC, 14 mls distilled water) and
apply 100 µl to each slide. Cover with temporary cover slips
made from strips of Parafilm cut to size. Place slides on a flat-
bed-heated stage (e.g., Omnislide) and incubate at 72°C for 5
minutes
25. Remove the slides from the Omnislide and transfer to a
fume hood. Remove the temporary cover slips
26. Within the fume hood, place the slides in 70% alcohol for
1 minute at room temperature
27. Place slides in 85% alcohol for 1 minute at room
temperature
28. Place slides in 99% alcohol for 1 minute at room
temperature
29. Remove the slides and leave to air dry at room
temperature.
Hybridization
30. Work in reduced light; apply 10 µl of the appropriate
probe to a 22 × 26 mm cover slip. Invert the slide gently on to the
cover slip, taking care to avoid air bubbles
31. Seal the edges of the cover slip with rubber cement
32. Incubate the slides on the Omnislide with a light-shielding
lid, overnight at 37°C
Posthybridization Wash
33. Place one Coplin jar containing 50 mls of posthybridization
wash buffer (2xSSC, 0.3% NP40) per 5 slides to be washed into
a water bath set at 72°C. Fill a Coplin jar or staining dish with
posthybridization wash buffer and keep at room temperature
34. Work in reduced light; remove the slides from the
Omnislide
35. Using forceps, remove the rubber cement from around the
edges of the cover slips and place the slides into the dish of
posthybridization wash buffer at room temperature until the
cover slips fall off
36. Check the temperature of the posthybridization wash
buffer is at 72 ± 1°C before proceeding
References
1. Watters AD, Going JJ, Cooke TG, Bartlett JMS. Chromosome 17 aneu-
somy is associated with poor prognostic factors in invasive breast
carcinoma. Breast Cancer Res Treat 2003;77:109–114.
2. Bartlett JMS. Pharmacodiagnostic testing in breast cancer: focus on
HER2 and trastuzumab therapy. Am J Pharmacogenomics 2005;5:
303–315.
3. Watters AD, Bartlett JMS. Fluorescence in situ hybridization in paraf-
fin tissue sections: pretreatment protocol. Mol Biotechnol 2002;21:
217–220.
4. Watters AD, Ballantyne SA, Going JJ, Grigor KM, Bartlett JMS.
Aneusomy of chromosomes 7 and 17 predicts the recurrence of
transitional cell carcinoma of the urinary bladder. BJU International
2000;85:42–47.
5. Bartlett JM, Going JJ, Mallon EA, et al. Evaluating HER2 amplifica-
tion and overexpression in breast cancer. J Pathol 2001;195:
422–428.
6. Edwards J, Krishna NS, Mukherjee R, Watters AD, Underwood MA,
Bartlett JMS. Amplification of the androgen receptor may not explain
development of androgen independent prostate cancer. Br J Urol
2001;88:1–10.
Chapter 8
Quantitative Reverse Transcriptase
Polymerase Chain Reaction
Lyndon M. Gommersall, M. Arya, Prabhabhai S. Patel,
and H.R.H. Patel
INTRODUCTION
Since the first documentation of real-time polymerase chain reac-
tion (PCR),1 it has been used for an increasing and diverse number
of applications, including mRNA expression studies, DNA copy
number measurements in genomic or viral DNAs,2–7 allelic
discrimination assays,8,9 expression analysis of specific splice
variants of genes10–13 and gene expression in paraffin-embedded
tissues,14,15 and laser captured microdissected cells.13,16–19 There-
fore, quantitative reverse transcriptase polymerase chain reaction
(Q-RT-PCR) is now essential in molecular diagnostics to quanti-
tatively assess the level of RNA or DNA in a given specimen. Q-
RT-PCR enables the detection and quantification of very small
amounts of DNA, cDNA, or RNA, even down to a single copy. It
is based on the detection of fluorescence produced by reporter
probes, which varies with reaction cycle number. Only during the
exponential phase of the conventional PCR reaction is it possible
to extrapolate back in order to determine the quantity of initial
template sequence. The “real-time” nature of this technology per-
tains to the constant monitoring of fluorescence from specially
designed reporter probes during each cycle. Due to inhibitors of
the polymerase reaction found with the template, reagent limita-
tion or accumulation of pyrophosphate molecules, the PCR reac-
tion eventually ceases to generate template at an exponential rate
(i.e., the plateau phase), making the end point quantitation of
PCR products unreliable in all but the exponential phase. Exam-
ples of fluorescent reporter molecules include dyes that bind to
the double-stranded DNA (i.e., SYBR® Green) or sequence-
specific probes (i.e., TaqMan® products or Molecular Beacons
Probes). The automation of the reaction as a whole has enabled
Q-RT-PCR assays to be easy to perform, with higher sensitivity
and specificity.
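A minimal sketch of the back-extrapolation idea described above, assuming perfect doubling per cycle during the exponential phase; the copy numbers and efficiency value are illustrative assumptions, not measured figures.

    def starting_copies(copies_at_threshold, ct, efficiency=1.0):
        """Back-extrapolate the initial template amount from the threshold cycle."""
        return copies_at_threshold / (1 + efficiency) ** ct

    # e.g. if roughly 1e10 copies correspond to the detection threshold and Ct = 25,
    # then at 100% efficiency (a doubling every cycle):
    print(f"{starting_copies(1e10, 25):.0f} starting copies")   # ~298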
FIGURE 8.1. Hydrolysis probes (e.g., TaqMan assay). The probe carries a reporter fluorophore and a quencher. After hybridization and during the extension phase, the 5′ exonuclease activity of the Taq DNA polymerase cleaves the probe, which separates the reporter and quencher dyes, and fluorescence is detected. (From Ref. 73)
Amplification plots are generated from the fluorescence emission
data that are collected during the PCR amplification (Figure 8.2).
Figure 8.2 demonstrates a representative amplification plot and
defines the important terms associated with it.
FIGURE 8.2. A representative amplification plot: ∆Rn is plotted against PCR cycle number, showing the baseline, the log/exponential phase, and the plateau for a sample and for a no-template control, together with the threshold and the threshold cycle (Ct).
• ∆Rn: the baseline-corrected normalized reporter fluorescence signal, plotted
versus the cycle number. During the early cycles of PCR amplification,
∆Rn values do not exceed the baseline.
• Threshold: An arbitrary threshold is chosen by the computer,
based on the variability of the baseline. It is calculated as ten
times the standard deviation of the baseline fluorescent signal
between cycles 3 and 15. A fluorescent
signal that is detected above the threshold is considered a real
signal that can be used to define the threshold cycle (Ct) for a
sample. If required, the threshold can be manually changed for
each experiment so that it is in the region of exponential ampli-
fication across all amplification plots.
• Ct: This is defined as the fractional PCR cycle number at which
the reporter fluorescence is greater than the minimal detection
level (i.e., the threshold). The Ct is a basic principle of real-time
PCR and is an essential component in producing accurate and
reproducible data.1
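A minimal sketch of these two definitions applied to a simulated amplification plot; the simulated curve, noise level, and seed are invented, and real instrument software applies additional baseline corrections.

    import numpy as np

    cycles = np.arange(1, 41)
    signal = 1e-6 * 2.0 ** cycles                      # idealised exponential amplification
    delta_rn = signal / (1 + signal / 1.5)             # flattens into a plateau
    delta_rn += np.random.default_rng(1).normal(0, 0.01, cycles.size)   # baseline noise

    # Threshold: ten times the standard deviation of the baseline signal, cycles 3-15
    baseline = delta_rn[(cycles >= 3) & (cycles <= 15)]
    threshold = 10 * baseline.std()

    # Ct: the fractional cycle at which delta Rn first crosses the threshold
    i = int(np.argmax(delta_rn > threshold))           # first cycle index above threshold
    ct = cycles[i - 1] + (threshold - delta_rn[i - 1]) / (delta_rn[i] - delta_rn[i - 1])
    print(f"threshold = {threshold:.3f}, Ct = {ct:.1f}")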
EQUIPMENT REVIEW
There are a variety of instruments available on the market, each
of which has its own individual characteristics. Great care should
be taken when choosing which instrument to buy, and it is
important to match the instrument's capabilities with laboratory
needs. Cost should not be the only factor when making a choice;
the cheaper models cannot compensate for the variance in the
optics and therefore are not capable of detecting smaller differ-
ences. A higher-throughput instrument may be more than is
needed. The ABI Prism® 7700 Sequence Detection System (SDS)
from Applied Biosystems was the first commercially available
thermocycler for real-time PCR, but has now been discontinued.
CONCLUSION
The introduction of real-time PCR technology has revolutionized
the field of molecular diagnostics and has enabled the shift of
molecular diagnostics toward a high-throughput, automated format with
lower turnaround times. It allows the sensitive, specific, and repro-
ducible quantification of mRNA. Real-time PCR assays are charac-
terized by a wide dynamic range of quantification of 7–8 logarithmic
decades, a high technical sensitivity (<5 copies) and a high preci-
sion (<2% standard deviation).32 Also, no post-PCR steps are
required, thus avoiding the possibility of cross contamination due
to PCR products. The disadvantages of real-time quantitative PCR
when compared with conventional PCR include the fact that:
References
1. Higuchi R, Fockler C, Dollinger G, Watson R. Kinetic PCR analysis:
real-time monitoring of DNA amplification reactions. Biotechnology
1993;11:1026–1030.
17. Ehrig T, Abdulkadir SA, Dintzis SM, Milbrandt J, Watson MA. Quan-
titative amplification of genomic DNA from histological tissue sec-
tions after staining with nuclear dyes and laser capture microdissection.
J Mol Diagn 2001;3:22–25.
18. Fink L, Seeger W, Ermert L, et al. Real-time quantitative RT-PCR after
laser-assisted cell picking. Nat Med 1998;4:1329–1333.
19. Shieh DB, Chou WP, Wei YH, Wong TY, Jin YT. Mitochondrial DNA
4,977-bp deletion in paired oral cancer and precancerous lesions
revealed by laser microdissection and real-time quantitative PCR.
Ann NY Acad Sci 2004;1011:154.
20. Holland PM, Abramson RD, Watson R, Gelfand DH. Detection of
specific polymerase chain reaction product by utilizing the 5′–3′ exo-
nuclease activity of Thermus aquaticus DNA polymerase. Proc Natl
Acad Sci USA 1991;88:7276–7280.
21. Lee LG, Connell CR, Bloch W. Allelic discrimination by nick-translation
PCR with fluorogenic probes. Nucleic Acids Res 1993;21:3761–3766.
22. Cardullo RA, Agrawal S, Flores C, Zamecnick PC, Wolf DE. Detection
of nucleic acid hybridization by non-radiative fluorescence resonance
energy transfer. Proc Natl Acad Sci USA 1988;85:8790.
23. Heid CA, Stevens J, Livak KJ, Williams PM. Real time quantitative
PCR. Genome Res 1996;6:986–994.
24. Gibson UE, Heid CA, Williams PM. A novel method for real time
quantitative RT-PCR. Genome Res 1996;6:995–1001.
25. Dumur CI, Dechsukhum C, Wilkinson DS, Garrett CT, Ware JL,
Ferreira-Gonzalez A. Analytical validation of a real-time reverse
transcription-polymerase chain reaction quantitation of different
transcripts of the Wilms’ tumor suppressor gene (WT1). Anal Biochem
2002;309:127–136.
26. Jurado J, Prieto-Alamo MJ, Madrid-Risquez J, Pueyo C. Absolute
gene expression patterns of thioredoxin and glutaredoxin redox
systems in mouse. J Biol Chem 2003;278:45546.
27. Borg I, Rohde G, Loseke S, et al. Evaluation of a quantitative real-
time PCR for the detection of respiratory syncytial virus in pulmo-
nary diseases. Eur Respir J 2003;21:944–951.
28. Lin JC, Wang WY, Chen KY, et al. Quantification of plasma Epstein-
Barr virus DNA in patients with advanced nasopharyngeal carci-
noma. N Engl J Med 2004;350:2461–2470.
29. Castelain S, Descamps V, Thibault V, et al. TaqMan amplification
system with an internal positive control for HCV RNA quantitation.
J Clin Virol 2004;31:227–234.
30. Gilliland G, Perrin S, Bunn HF. Competitive PCR for quantitation
of mRNA. In: PCR Protocols: A Guide to Methods and Applications.
Innis, MA, ed. CA, USA: Academic Press, 1990, 60–69.
31. Suzuki T, Higgins PJ, Crawford DR. Control selection for RNA quan-
titation. BioTechniques 2000;29:332–337.
32. Bustin SA. Absolute quantification of mRNA using real-time reverse
transcription polymerase chain reaction assays. J Mol Endocrinol
2000;25:169–193.
33. Rhoads RP, McManaman C, Ingvartsen KL, Boisclair YR. The house-
keeping genes GAPDH and cyclophilin are regulated by metabolic
state in the liver of dairy cows. J Dairy Sci 2004;87:248.
34. Steele BK, Meyers C, Ozbun MA. Variable expression of some “house-
keeping” genes during human keratinocyte differentiation. Anal
Biochem 2002;307:341–347.
35. Yperman J, De Visscher G, Holvoet P, Flameng W. β-actin cannot be
used as a control for gene expression in ovine interstitial cells derived
from heart valves. J Heart Valve Dis 2004;13:848.
36. Dheda K, Huggett JF, Bustin SA, Johnson MA, Rook G, Zumla A.
Validation of housekeeping genes for normalizing RNA expression
in real-time PCR. BioTechniques 2004;37:112, 116, 118.
37. Bas A, Forsberg G, Hammarstrom S, Hammarstrom ML. Utility of
the housekeeping genes 18S rRNA, β-actin and glyceraldehyde-3-
phosphate-dehydrogenase for normalization in real-time quantita-
tive reverse transcriptase-polymerase chain reaction analysis of
gene expression in human T lymphocytes. Scand J Immunol 2004;59:
566–573.
38. Vandesompele J, De Preter K, Pattyn F, et al. Accurate normalization
of real-time quantitative RT-PCR data by geometric averaging of
multiple internal control genes. Genome Biol 2002;3:research0034.1.
39. Morrison TB, Weis JJ, Wittwer CT. Quantification of low-copy tran-
scripts by continuous SYBR Green I monitoring during amplifica-
tion. BioTechniques 1998;24:954–958, 960, 962.
40. Ririe KM, Rasmussen RP, Wittwer CT. Product differentiation by
analysis of DNA melting curves during the polymerase chain reac-
tion. Anal Biochem 1997;245:154–160.
41. Gibellini D, Vitone F, Schiavone P, Ponti C, La Placa M, Re MC.
Quantitative detection of human immunodeficiency virus type 1
(HIV-1) proviral DNA in peripheral blood mononuclear cells
by SYBR green real-time PCR technique. J Clin Virol 2004;29:
282–289.
42. Blaschke V, Reich K, Blaschke S, Zipprich S, Neumann CJ. Rapid
quantitation of proinflammatory and chemoattractant cytokine
expression in small tissue samples and monocyte-derived dendritic
cells: validation of a new real-time RT-PCR technology. J Immunol Methods 2000;246:79–90.
43. Ramos-Payan R, Aguilar-Medina M, Estrada-Parra S, et al. Quantifi-
cation of cytokine gene expression using an economical real-time
polymerase chain reaction method based on SYBR Green I. Scand J
Immunol 2003;57:439–445.
44. Nakamura T, Scorilas A, Stephan C, et al. Quantitative analysis of
macrophage inhibitory cytokine-1 (MIC-1) gene expression in human
prostatic tissues. Br J Cancer 2003;88:1101–1104.
45. Gut M, Leutenegger CM, Huder JB, Pedersen NC, Lutz H. One-tube
fluorogenic reverse transcription-polymerase chain reaction for
the quantitation of feline coronaviruses. J Virol Methods 1999;77:
37–46.
INTRODUCTION
The problems associated with expressing and purifying human
proteins, especially in Escherichia coli, the primary host or-
ganism for high-throughput (HTP) applications, are well-
documented and have plagued researchers for decades. Low
yields due to toxicity, recombinant protein insolubility, and poor
purification are just some of the problems that result1 in typical
success rates of 2%–20% when expressing eukaryotic proteins in E. coli (Service1). HTP structural genomics (SG) projects, such as the NIH's Protein Structure Initiative (PSI), began in 2000. Ini-
tially, the PSI focused on technology development to provide
highly automated procedures for cloning, expression testing,
protein purification, and protein crystallization, thus addressing
production problems by increasing throughput. The develop-
ment of these techniques has allowed PSI centers and other
similar initiatives around the world to deposit over 2,000 novel
protein structures in the Protein Data Bank (PDB) as of January 2006 (PSI, http://www.nigms.nih.gov/Initiatives/PSI/). Neverthe-
less, despite the expenditure of significant resources,2 the rate of
discovery is much less than hoped for at the beginning of the
initiative due to bottlenecks at every stage of the pipeline1 as the
problems mentioned above persist. Also, the citation rate of
structures from SG centers is significantly lower than that for the
top structural biology laboratories,3 suggesting that the current
HTP approaches are not as successful in determining the struc-
tures of more difficult, and perhaps, more significant proteins.
The phrase “picking the low hanging fruit” is often used as an
analogy to describe this situation. Accordingly, in the second
phase of the PSI that began in mid-2005, four PSI centers are
focusing on high throughput production, while the remaining six
centers are specializing in specific areas, such as higher eukary-
otic proteins (especially human), membrane proteins, and pro-
teins relevant to disease.
CURRENT APPROACHES
HTP Cloning
The combination of open reading frame (ORF) identification by
genome sequencing projects and the availability of HTP cloning
methods such as the recombination-based Gateway cloning
system4 of Invitrogen and ligation-independent cloning or LIC,5
has enabled the creation of large numbers of “ORF clones,”
bypassing technical problems inherent in using pooled cDNA
libraries.6 A schematic of the HTP techniques used by SG centers
to proceed from cloning to protein purification is pictured in
Figure 9.1.
Once sequence-verified, ORF clones can be used to generate
a wide variety of expression clones by simple in vitro recombina-
tion techniques without the need for subsequent sequence valida-
tion. A variety of recombinational cloning systems (reviewed in
Marsischky and LaBaer)6 are used by HTP production facilities
as well as ligase-independent cloning (LIC) and the standard
cloning of PCR fragments by endonuclease/ligase cloning.6 A
major liability of the latter approach is the limitation on the
number of expression constructs that can be reasonably created.7
This is due to the effort required for their construction and the
introduction of sequence errors from faulty primers and the PCR
used in the creation of each clone. A downside to recombination
cloning is the addition of more non-native amino acids from the
translated recombination sites to the desired protein molecule.
Since more than 1,400 papers using Gateway cloning have been
published, any detrimental effects of these amino acids are prob-
ably minor.
Expression clones can be constructed to test many variables
that affect protein expression and purification. Chief among
these are the use of affinity tags, solubility tags, and vector
sequences required for the expression in a given expression
system (bacteria, mammalian, yeast, etc.). Although there are
trends,8–10 there is no way to predict a priori the best expression
construct and system for a given protein. The possible combina-
tions (e.g., a limited set might include 3 affinity tags, 3 solubility
fusions, N- and C-terminal locations, 3 expression hosts) to test
[Figure 9.1 workflow: target identification → plasmid isolation/verification → transformation/transfection/infection of the expression host → growth, temperature shift, and induction → solubility screen (insoluble targets are recloned to vary the solubility tag, tag position, expression strain, expression conditions, expression system, or truncations) → small-scale purification by IMAC, IEX, or SEC.]
FIGURE 9.1. Schematic of the typical workflow for HTP protein produc-
tion. IMAC (immobilized metal-ion affinity chromatography), IEX (ion
exchange chromatography), SEC (size exclusion chromatography).
can become quite numerous and hence the need for automation-
friendly methods at all steps of HTP SG.
Hexa-histidine tags are by far the most commonly used
affinity tag in HTP applications due to the low cost, ease of
subsequent downstream purification, and the relatively modest
addition of non-native amino acids. However, this tag does not
enhance the solubility of the fused target protein, thus solubility
enhancing proteins are often cloned in-frame with the target
protein to improve solubility.11 Commonly used solubility partner
proteins are maltose binding protein (MBP), NusA, and thiore-
doxin.8 Reports of differential success with these and other fusion
proteins are widespread in the literature, however, MBP, NusA,
and thioredoxin are widely reported as the most useful.8,10,11
Expression Systems
Several expression systems have been adopted by the majority
of HTP SG centers: E. coli, baculovirus, and mammalian cell
culture (HEK293 cells). Additional approaches are becoming available, and more will follow, to address particular problems of recombinant
protein expression.12 For example, a Rhodobacter expression
system shows promise for the expression of membrane proteins,13
and both Saccharomyces cerevisiae and Pichia pastoris have been
reported as amenable to HTP production of human proteins for
structural genomics.14
Of the available expression systems, E. coli is preferred for
HTP production, despite the limitations mentioned above, for its
ease of use and reduced cost. Advantages include a robust growth
rate, inexpensive media, a well-defined genetic system, a variety
of induction systems, and the availability of strains and tech-
niques for different applications. For example, isotopically
labeled amino acids important for NMR and x-ray crystallogra-
phy can be incorporated into the protein with the use of labeled
precursors in the media. Although E. coli has limited capability
for post-translational modification of proteins that can be impor-
tant for folding and activity,15 this can be beneficial for structural
studies due to the reduction in sample heterogeneity. Strains
with low protease levels (e.g., BL21) and modifications to express
tRNAs present at higher levels in eukaryotes than bacteria can
be the difference between no expression and high levels of soluble
protein.16 The major drawback to recombinant expression in E.
coli is the lack of the proper protein folding machinery, which
often leads to mis-folding and/or aggregation. However, new
solubility and affinity tags are frequently generated for protein
expression in E. coli,17–20 and the flexibility of the system allows
Expression Conditions
The conditions under which a protein is expressed have some of
the most dramatic effects on the success of expression. Variables
include temperature, levels of inducer, media formulations,
aeration, time of induction, and time of harvest. The most dra-
matic of these variables is the temperature during induction. It
has long been known that lowering the temperature at the point
of induction can realize dramatic improvements in protein solu-
bility in E. coli (Schein and Noteborn).24 For this reason, HTP
protein expression in E. coli is routinely carried out at tempera-
tures ranging from 4°C–30°C.7,8,25 Temperature also can affect
solubility in the baculovirus/insect cell expression system (manu-
script in preparation).
The auto-induction system for E. coli recently described by
Studier26 obviates the need for monitoring culture ODs and the
addition of the expensive inducer IPTG when using the tradi-
tional T7 RNA polymerase transcription system widely favored
Purification
Immobilized metal-ion affinity chromatography (IMAC) is most
often the first step in the purification of His-tagged target pro-
teins. If expression optimization has been performed and the
target is expressed as a fusion to a solubility tag such as MBP, a
His tag is included in the construct for purification. HTP formats
for IMAC enable purification without the need for expensive
chromatography workstations in this first stage of screening.
Indeed, the entire protein production process from gene cloning
through expression and to purification can be performed in 96-
well format using IMAC.
Well-expressed, soluble proteins are often >90% pure after a
single IMAC step. For many uses, proteins expressed as fusions
ADDITIONAL APPROACHES
While HTP approaches have greatly improved the ability to
screen large numbers of samples and advances in cloning and
the introduction of new affinity and solubility tags have led to
the successful purification of proteins that were previously intrac-
table, the inherent problems of protein production in heterolo-
gous expression systems still remain. It is expected that as the
structures of the relatively tractable proteins are determined, what remain are the especially difficult proteins. Notable in this group
are membrane proteins, which are the target of 60%–70% of
pharmaceutical drugs.32
One lesson that has been learned repeatedly in the study of
proteins is that we do not know the rules that govern protein
Acknowledgment
This project has been funded in whole or in part with federal
funds from the National Cancer Institute, National Institutes of
Health, under contract N01-CO-12400. The content of this pub-
lication does not necessarily reflect the views or policies of the
Department of Health and Human Services, nor does mention
of trade names, commercial products, or organizations imply
endorsement by the U.S. government.
References
1. Service RF. Structural genomics. Tapping DNA for structures pro-
duces a trickle. Science 2002;298:948–950.
2. Lattman E. The state of the Protein Structure Initiative. Proteins
2004;54:611–615.
3. Chandonia JM, Brenner SE. The impact of structural genomics:
expectations and outcomes. Science 2006;311:347–351.
4. Hartley JL, Temple GF, Brasch MA. DNA cloning using in vitro site-
specific recombination. Genome Res 2000;10:1788–1795.
5. Doyle SA. High-throughput cloning for proteomics research. Methods
Mol Biol 2005;310:107–113.
6. Marsischky G, LaBaer J. Many paths to many clones: a comparative
look at high-throughput cloning methods. Genome Res 2004;14:
2020–2028.
7. Acton TB, Gunsalus KC, Xiao R, et al. Robotic cloning and protein
production platform of the Northeast Structural Genomics Consor-
tium. Methods Enzymol 2005;394:210–243.
8. Dyson MR, Shadbolt SP, Vincent KJ, Perera RL, McCafferty J. Pro-
duction of soluble mammalian proteins in Escherichia coli: identifica-
tion of protein features that correlate with successful expression.
BMC Biotechnol 2004;4:32.
9. Holz C, Prinz B, Bolotina N, et al. Establishing the yeast Saccharo-
myces cerevisiae as a system for expression of human proteins on a
proteome-scale. J Struct Funct Genomics 2003;4:97–108.
10. Lichty JJ, Malecki JL, Agnew HD, Michelson-Horowitz DJ, Tan S.
Comparison of affinity tags for protein purification. Protein Expr
Purif 2005;41:98–105.
11. Waugh DS. Making the most of affinity tags. Trends Biotechnol 2005;
23:316–320.
12. Giomarelli B, Schumacher KM, Taylor TE, et al. Recombinant pro-
duction of anti-HIV protein, griffithsin, by auto-induction in a fer-
mentor culture. Protein Expr Purif. 2006;47(1):194–202.
13. Laible PD, Scott HN, Henry L, Hanson DK. Towards higher-
throughput membrane protein production for structural genomics
initiatives. J Struct Funct Genomics 2004;5:167–172.
14. Prinz B, Schultchen J, Rydzewski R, Holz C, Boettner M, Stahl U,
Lang C. Establishing a versatile fermentation and purification pro-
cedure for human proteins expressed in the yeasts Saccharomyces
cerevisiae and Pichia pastoris for structural genomics. J Struct Funct
Genomics 2004;5:29–44.
15. Jono S, Peinado C, Giachelli CM. Phosphorylation of osteopontin is
required for inhibition of vascular smooth muscle cell calcification.
J Biol Chem 2000;275:20197–20203.
16. Kane JF. Effects of rare codon clusters on high-level expression of
heterologous proteins in Escherichia coli. Curr Opin Biotechnol
1995;6:494–500.
17. Banki MR, Feng L, Wood DW. Simple bioseparations using self-
cleaving elastin-like polypeptide tags. Nat Methods 2005;2:659–661.
DNA MICROARRAY
Methodology
Although techniques such as RT-PCR and in situ hybridization
(ISH) can give information about gene expression, they are
limited in scope as typically one gene product is evaluated with
each assay. The advent of transcriptional profiling using DNA
microarray has revolutionized the field of molecular medicine as
measurement of thousands of genes simultaneously in a given
sample provides a vast amount of data for new disease classifica-
tions and biomarker discoveries. DNA microarray-based gene
expression profiling relies on nucleic acid polymers, immobilized
on a solid surface, which act as probes for complementary gene
sequences.1 Microarrays typically contain several thousand
single-stranded DNA sequences, which are “arrayed” at specific
locations on a synthetic “chip” through covalent linkage. These
DNA fragments provide a matrix of probes for fluorescently
labeled complementary RNA (cRNA) derived from the sample of
interest. The expression of each gene in the sample is quantified
by the intensity of fluorescence emitted from a specific location
on the array matrix which is proportional to the amount of that
gene product (Figure 10.1).2
Several different microarray systems have been developed,
using either 25-mer or 60-mer oligonucleotides or cDNA as
probes. It is important to note that technical differences between
various types of arrays can influence the subsets of genes de-
tected, especially when analyzing 12,500+ transcripts per slide.
Two main types of microarrays are used for evaluation of
clinical samples: cDNA or spotted arrays, and oligonucleotide
microarrays.3
[Figure 10.1 (schematic): unfixed tumor tissue obtained at surgical removal yields tumor RNA; labeled tumor and labeled control cDNA or cRNA are hybridized to the DNA microarray, and comparative analysis of gene expression yields a molecular signature associated with good or poor prognosis.]
Data Analysis
Relative levels of expression on the microarrays are analyzed
using sophisticated statistical techniques. There are two main
types of multisample analyses: class discovery (creating new
classes based on differences in expression among samples) and
class prediction (using samples from known biologic classes to
identify a list of genes whose expression pattern can be used to
predict the class of a new sample).6
The first step of analysis is normalization of the raw data
which maximizes the likelihood that the measurements of dif-
ferential expression are not artifacts of the hybridization reac-
tion.6 Data are then filtered to select those genes with the largest
magnitude of differences in expression for further analysis.
To discover new subgroups based on the gene expression
patterns of biologically similar samples (class discovery), unsu-
pervised analysis is used. This technique uses clustering algo-
rithms to group specimens according to similarities in their
transcriptional profile. For class prediction, supervised analysis
is used, whereby the gene expression profile of one defined group
is compared to another and a list of genes is generated which
distinguishes the two groups. Once a set of genes is identified in
the “training set,” they are ranked by their power to predict the
group to which each sample belongs by cross-validation. Typi-
cally, one sample at a time is left out, the classifier is trained on
the remaining samples, and the sample is then classified based
on its correlation to a predictor set generated from the remaining
samples. The independent predictive ability of the gene list (“clas-
sifier”) is ideally performed on a separate set of samples. Often
this is achieved by dividing the original set into a training set and
a validation set.
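As an illustration of this leave-one-out procedure, the short sketch below uses the scikit-learn library with a toy expression matrix and a nearest-centroid classifier; the data, the classifier, and the library are illustrative assumptions only and are not the methods used in the studies discussed here.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import NearestCentroid

# Toy data: 20 samples x 50 genes of normalized, filtered expression values
# (random numbers stand in for real microarray intensities).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))
y = np.array([0] * 10 + [1] * 10)  # known biologic class of each training sample

# Leave-one-out cross-validation: each sample in turn is held out, the
# classifier is trained on the remaining samples, and the held-out sample
# is assigned to the class whose expression centroid it most resembles.
accuracy = cross_val_score(NearestCentroid(), X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {accuracy.mean():.2f}")
```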
Clinical Applications
Molecular phenotyping of human tumors using microarray tech-
nology has provided insights into tumor subtypes and their atten-
dant clinical outcomes. For example, using microarray technology,
Sorlie et al. demonstrated that invasive breast cancers could be
divided into four main subtypes by unsupervised analysis.7 These
groups were distinguished by differences in their expression pro-
files: a “luminal cell-like” group expressing ER; a “basal cell-like”
group expressing keratins 5 and 17, integrin β4, and laminin, but
lacking ER; an Erb-B2 positive group; and a “normal” epithelial
group, typified by genes characteristic of basal epithelial cells
and adipose cells. A subsequent study by this group showed that
these subtypes have specific clinical outcomes, which are repro-
ducible in other data sets.
Other investigators have used supervised analysis to identify
a signature profile in breast cancer patients at very low risk for
distant relapse.8 This 70-gene prognostic signature was able to
delineate a group of patients with very low risk of distant recur-
rence in an independent dataset which appeared to perform
better than traditional prognostic tumor characteristics as defined
by St. Gallen or NIH criteria.9
While microarray profiling has provided important insights
into the biology of disease, applying this technology to the study
of response to therapy is likely to benefit patients in a more
TISSUE MICROARRAY
Construction
Tissue microarrays (TMAs) complement the large-scale genomic/pro-
teomic discovery approach of DNA arrays by allowing the simul-
taneous analysis of DNA, RNA, or protein in large numbers of
samples per single experiment (as opposed to DNA arrays which
look at large numbers of gene products simultaneously in a test
sample). By linking these data to relevant outcome information,
e.g., survival, these analyses can give insight into the clinical
significance of a given biomarker.
Although the concept of standardizing and streamlining
immunohistochemistry (IHC) techniques has been previously
reported,12 Kononen et al.13 first described a device for the con-
struction of TMAs that could be feasibly accessible to many labs
(Figure 10.1). The bulk of the time spent in TMA construction is
the collection of the appropriate paraffin-embedded “donor”
Applications
Over 600 studies have been published using TMA technology
since Kononen’s initial description in 1998. Most of these studies
were performed on various malignancies to study different
molecular markers using IHC. When linked to a clinical end-
point, e.g., survival or response to a specific therapy, they have
the ability to assess rapidly the prognostic or predictive value
respectively of the marker of interest. TMAs have also been used
to validate genes discovered by genomic surveys such as DNA
microarrays. A variety of TMAs spanning tumor development
Quantitative Analysis
Whereas analysis or “scoring” of chromogen-linked IHC stains of
TMAs is relatively straightforward with a bright-field microscope,
it is time-consuming and laborious, often making this the rate-
limiting step. More importantly, because judgments of intensity
are subjective and limited, intra- and interobserver reproduc-
ibility is difficult. Many approaches to more automated and quantitative analysis of IHC stains have been developed, and several commercial programs have recently become available.19–21 We have developed an automated quantita-
tive technology that uses modified IHC with immunofluores-
cence-based detection rather than optical density, which allows
increased sensitivity and dynamic range.22 Using molecular tags
to define tumors (e.g., cytokeratin for epithelium) and localize subcellular compartments (e.g., DAPI for the nucleus), protein expres-
sion is assessed on a continuous scale by co-localization algo-
rithms. This technology has been applied to the study of a variety
of biomarkers in numerous different cancers.23,24 In addition,
because a molecular tag is simply defined by a molecule with
specificity for a defined/localized antigen, one can envision study-
ing in situ quantitative co-localization to subcellular compart-
ments such as Golgi or mitochondria and stromal compartments
such as endothelial cells. As the TMA technology becomes
increasingly utilized, the need for more sophisticated biostatisti-
cal strategies to rigorously organize and analyze these data
becomes even more critical. Simple spreadsheet and standard statisti-
cal software packages will most likely be inadequate. Clustering
algorithms analogous to those employed for gene expression
arrays may be further adapted and utilized in the future.
References
1. Southern E, Mir K, Shchepinov M. Molecular interactions on micro-
arrays. Nat Genet 1999;21:5–9.
2. Sauter G, Simon R. Predictive molecular pathology. N Engl J Med
2002;347:1995–1996.
3. Ramaswamy S, Golub TR. DNA microarrays in clinical oncology.
J Clin Oncol 2002;20:1932–1941.
4. Cheung VG, Morley M, Aguilar F, et al. Making and reading micro-
arrays. Nat Genet 1999;21:15–19.
EXTRACELLULAR RECORDINGS
Extracellular recordings involve the acquisition of electrical
signals from a population of neuronal processes (e.g., axons) or
the electrical “field” generated in the space surrounding the cell
bodies or dendrites of a group of neurons. These electrical signals
are typically very small and require substantial amplification.
One method for acquiring and recording such small signals is by
the use of “suction” electrodes. Suction electrodes comprise a glass- or plastic-tipped capillary filled with a conductor (usually saline) into which a nerve or a piece of central nervous system is drawn. With such electrodes, it is possible to record two types
INTRACELLULAR RECORDINGS
One of the most important signals that can be recorded from the
nervous system is the electrical potential across the membrane
of individual neurons. This measurement is accomplished either
by inserting a very fine glass electrode (sharp electrode) inside a
neuron or by sealing a blunter electrode (patch electrode) to the
membrane and then rupturing the membrane under the tip to
gain access to the cell interior (Figure 11.1D). The electrodes are
connected to specialized amplifiers that can compensate for their
high capacitance and resistance which would otherwise filter the
electrical signals. These electrical signals provide information
about the electrical properties of the individual neurons, their
connections with other neurons and their activity during complex
behaviors such as locomotion.
[Figure 11.1 (panels): a filled motoneuron showing its soma and dendrites; optical recordings (ΔF/Fo, 20% scale bar) from two dendritic regions of interest (ROIs 2 and 3) shown alongside an intracellular recording of the motoneuron membrane potential (Vm, 40 mV) and an extracellular recording from the right L6 ventral root (100 µV) during electrical stimulation of the right L5 ventral root (2-s time scale).]
used to inject a dye inside the cell in order to reveal its cytoar-
chitecture. Ideally, the intracellular label should have the follow-
ing characteristics: Firstly, it should be easily ejected from the
intracellular micropipette without clogging; secondly, it should
be water soluble and diffuse quickly so that the cell can be filled
in a reasonable time; thirdly, it should not interfere with the
function of the neuron and finally, it must be easily visualized in
the light or fluorescence microscope.
Horseradish peroxidase (HRP) was one of the first successful
intracellular labels. Since then, low molecular weight markers
such as the biotin-lysine complex (biocytin; MW = 372) and neu-
robiotin (MW = 323) have been widely employed. In addition,
several fluorescent markers (Lucifer Yellow, Alexa hydrazides) have been used to visualize the injected cells using a fluorescent microscope (Figure 11.1A, B). The use of fluorescent markers is
particularly useful when it is necessary to visualize the neuron
while obtaining intracellular recordings. Most of the intracellular
markers carry a positive or negative charge, which determines how the marker is injected into the cell. For example, biocytin is
negatively charged and requires negative current pulses for injec-
tion whereas neurobiotin is positively charged and therefore
requires positive current.
OPTICAL RECORDINGS
Optical recording methods offer another set of powerful tools for
investigating neuronal and network function. They have the
advantage of being noninvasive and are capable of resolving the
activity of many cells simultaneously. Currently, optical tech-
niques fall into three categories. First, fluorescent probes of
intracellular ion concentration (e.g., calcium, sodium and chlo-
ride); second, direct measurement of membrane potential using
voltage-sensitive dyes; third, intrinsic signal imaging which mon-
itors the changes in tissue properties (light scattering, hemoglo-
bin oxygenation) that accompany neuronal activity. In the spinal
cord, we have used all three methods to monitor the activity and
spatio-temporal dynamics of individual neurons and neuronal
populations during several reflex and complex behaviors.4
CALCIUM-SENSITIVE OPTICAL IMAGING
Calcium-sensitive dyes exhibit the largest changes in fluorescence
on binding to their target ion.5 This is, in part, because calcium
ions undergo much larger changes in intracellular concentration
than other ions (often 100-fold), and so provide a very easily
detected indirect signal of neuronal activity. A critical component
of experiments involving ion-sensitive dyes is the loading of the
neurons under investigation. Several successful approaches have
been reported, ranging from direct injection into the tissue of a membrane-permeable (AM) form of the dye,6 to retrograde loading7 and electroporation.8 The particular loading method employed
will be dictated by the requirements of the study.
Ion-indicator dyes change their fluorescence when the dye
binds the free ion in question. Of course, care must be taken
when using such dyes not to “buffer” the ions which can change
their dynamics and possibly the neuronal function being studied.
For this reason, the lowest useable concentration of the dye
should be employed. When using conventional epifluorescence
microscopy, these changes in fluorescence are monitored with sensitive charge-coupled device (CCD) or intensified video
cameras. Such devices usually operate at 30 frames/sec but spe-
cialized cameras can employ much higher frame rates.
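Such changes are conventionally expressed as the fractional change in fluorescence relative to a pre-stimulus baseline (ΔF/F0, as in the optical traces of Figure 11.1). The following is a minimal sketch with an invented trace, assuming the NumPy library:

```python
import numpy as np

# Hypothetical fluorescence trace from one region of interest (arbitrary
# units), e.g. sampled at 30 frames/sec with a CCD camera.
f = np.array([100.0, 101.0, 99.0, 100.0, 118.0, 130.0, 125.0, 112.0, 104.0, 100.0])

f0 = f[:4].mean()                # baseline fluorescence before stimulation
df_f0 = (f - f0) / f0            # fractional change, delta-F over F0
print(np.round(100 * df_f0, 1))  # percent change per frame; the peak here is 30%
```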
Alternatively, the fluorescent signals can be detected using
confocal or multiphoton microscopy (Figure 11.1). Multiphoton
microscopy exploits the fact that fluorophores exposed to very
brief laser pulses can absorb two photons at a time instead of
the usual single photon. Each of the absorbed photons is approxi-
mately double the wavelength of the single photon that is nor-
mally absorbed. This has several major advantages. First, long
wavelengths penetrate biological tissue with less scattering than
shorter wavelengths and can visualize labeled cells up to several
hundred micrometers below the surface. Second, because the
probability of the fluorophore absorbing two photons is highest
at the focal plane, only a thin slice of tissue is fluorescent, thereby
resulting in reduced phototoxicity. Finally, 2-photon microscopy
allows the use of nondescanned detectors which do not use a
pin-hole to achieve confocality and, as a result, collect both the
direct and scattered light emitted from the fluorophore.9
[Figure: schematic of a photodiode-array optical recording setup (photodiode array, multiplexer, amplifiers, objective, extracellular electrodes, DC power supply; panels A–C), with optical signals from a numbered grid of detectors shown alongside right and left ventral root (VR) extracellular recordings on a 25-ms time scale.]
INTRODUCTION
Publication of research is an important though often neglected
aspect of the research pathway. Although often perceived as the
final link in the chain, an ever-increasing emphasis on audit dictates that the research ultimately will be judged on the quality
of publications emanating from it. In turn, this will influence the
availability of future research grants. Therefore, it is sensible to
plan likely outlets for any research findings at the very earliest
stage of an application for funding. If, on consideration, there
are unlikely to be any obvious outlets for the results, the would-
be researcher should give very careful thought to the value of
embarking on the research project in the first place.
In recent years, academia has been driven by the “Holy Grail”
of the Research Assessment Exercise with its dependence on
placing research publications in journals where they not only
attract attention but from which they will be regularly quoted by
other researchers. Essential to this is the concept of impact
factor, an index that is not without its critics.
Mercifully, an increasing number of funding bodies are start-
ing to attach equal importance to other important benefits from
research, including the development of a critical mind that leads
to writing and awarding of higher research degrees such as MD
and PhD, and the possible immediate benefits the research might
have to clinical practice. Quite clearly, the longer the period devoted to research, the greater the chances of higher-quality publication. Ultimately, this is still likely to be audited by the grant-awarding body, and the quality of the research and the way in which it is presented is likely to have an impact on future funding opportunities.
Sources of funding include major government or indepen-
dent bodies such as the Medical Research Council or the Well-
come Trust; at present arguably the most respected sources of
IMPACT FACTOR
This is one of several indexes by which the quality of a journal is compared. Inevitably, it has been subject to some criticism, and there are alternative ways of measuring this. Table 12.1 provides the definitions for impact factor, immediacy index, and cited half-life. Table 12.2 gives impact factors for a variety of journals.
General
New England Journal of Medicine 34.8
Nature 30.9
Science 29.7
Cell 26.6
Lancet 18.3
British Medical Journal 7.2
Rheumatology
Arthritis and Rheumatism (USA) 7.2
Annals of the Rheumatic Diseases 3.8
Rheumatology 3.7
The Journal of Rheumatology 2.9
Rehabilitation
Clinical Rehabilitation 1.0
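For orientation, the impact factor is conventionally calculated as the number of citations a journal receives in a given year to material it published in the two preceding years, divided by the number of citable items it published in those two years; the book's formal definitions appear in Table 12.1, which is not reproduced here. The sketch below uses invented counts purely for illustration:

```python
# Invented counts for a hypothetical journal, for illustration only.
citations_this_year_to_prev_two_years = 1500  # citations this year to items from the two preceding years
citable_items_in_prev_two_years = 300         # articles and reviews published in those two years

impact_factor = citations_this_year_to_prev_two_years / citable_items_in_prev_two_years
print(impact_factor)  # 5.0
```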
ASSESSMENT
Editors and journals vary in the style of assessment. A small
number use a single assessor and, typically, major international
general journals have a professional committee that considers
papers as well. The editors of speciality journals tend to send
papers to three assessors, one of whom may well be a member
of the editorial board. Since the composition of the board is
published in the journal, it is therefore normally possible to
predict who one of the assessors might be! If a paper spans
several disciplines the group of referees is likely to be representa-
tive of each of the various disciplines. Sometimes papers are sent
routinely to a statistician; in other cases, this is only done if one
of the first groups of assessors specifically requests it.
Once a decision is made, this is relayed back to the author,
normally with copies of the different assessors' comments, each assessor usually also receiving copies of the others' comments, which not only educates assessors as well as authors but also provides an element of internal audit.
Authors are sometimes disappointed if the editor’s final deci-
sion does not at first seem to be in line with the feedback pro-
vided for authors. This is because, in addition to providing
authors feedback, each assessor has the opportunity to make
comments to the editor in confidence that the authors will not
see. An editor may also decide to weight the opinion of a senior
and particularly trusted assessor more highly than that of a
trainee assessor to whom the paper has also been sent. Ulti-
mately, an author may also fall foul of the pages available to be
filled (which correlates with the cost of the subscription to
readers) and the particular balance of contents in recent issues
of the journal, the editor needing to keep the readership happy
as well as the authors. If a junior researcher has had work
rejected for one of these reasons, this should normally be
explained in a letter from the editor.
It is unusual for a paper to be accepted first time without any
change. If reconsideration of the work is offered subject to the
changes suggested by assessors, the author is well advised to
make these changes since it then becomes extremely hard for the
editor and the original assessors to reject it.
If the rejection seems unfair and the assessors’ comments
particularly critical, the author can always appeal to the editor
with a set of reasonable arguments supplied to him/her. In this
FUTURE TRENDS
Publication of results is not just to enhance the curriculum vitae.
It is also deemed important by ethics committees, since a failure
to disseminate the results of research might be considered uneth-
ical if patients had been exposed to risk by participation in a
study from which nothing was then learnt. A discussion of future
dissemination therefore forms an integral part of ethics commit-
tee submission and would also be of interest to the research
directorate of a trust since publicity through dissemination of
research can only enhance its reputation.
Funding charities will also be keen to ensure that their money
has been spent in optimal fashion. Here, a broader view may be
taken. Given that the use of impact factors can be justifiably
criticized, though may still represent the best compromise, are
stakeholders best satisfied with a large output of papers that are
only occasionally cited or is payback better with a smaller number
of papers that are cited more frequently? Many now deem successful MD and PhD dissertations almost as important, since
these emphasize the educational importance of research to the
researcher and seed research for future generations. Put another
way, the teaching component of research may be as important
as the research itself. By contrast, the NHS, as an organization
responsible for patient care, may attach most importance of all
to changes in clinical practice that might result from the research,
especially if this demonstrates options for saving the manager's budget by recommending less expensive clinical practice that is as effective as more expensive older methods.
It should also be noted that within two to three years major
journals are going to insist on an international standard of ran-
OVERVIEW
This chapter will consider commonly used methods for describ-
ing and analyzing data. We begin with an introduction to some
important basic statistical concepts and then focus on some of
the most widely used methods of analysis for observational and
intervention studies. The two types of data we will discuss in
detail are continuous and binary data. For further reading, we
recommend consulting a medical statistics textbook.1,2,3,4
TYPES OF DATA
Numerical Data
Numerical data are either continuous or discrete. Continuous
variables can take any value within a given range; examples are
height, weight, and blood pressure. Discrete variables take only
certain values, usually integers. Number of GP visits per year and
other count variables are examples of discrete variables. Discrete
variables may be treated as continuous if they have a wide range
of values.
Categorical Data
A variable is categorical if individuals are assigned to one of a
number of categories. The simplest form is a binary variable with
two categories, for example, alive/dead, pregnant/not-pregnant,
and cancer/cancer-free. With more than two categories, the vari-
able is either nominal, where there is no ordering of the categories,
such as with blood group (A/B/AB/O), or ordinal, where there is
ordering such as with pain score (none/mild/moderate/severe).
Data Structure
Understanding the structure of data is essential, as it determines
the appropriate method of analysis. Most statistical methods
assume that subjects or observations are independent of one another.
DESCRIBING DATA
It is important to perform a descriptive analysis, as it allows us
to gain a basic understanding of the data. This helps to select the
appropriate method of analysis as well as identify problems such
as outliers (observations inconsistent with the rest of the data)
and missing values. We can examine and summarize the data
using graphs, tabulations, and descriptive statistics.
Continuous Data
Descriptive Statistics
We summarize continuous data with measures of central ten-
dency (the location of the middle of the distribution) and spread
(the variability of the distribution). The appropriate measures
depend on the research question and the shape of the distribu-
tion of the data.
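For example (a sketch with invented values, not data from this chapter), the mean and standard deviation suit approximately Normal data, whereas the median and interquartile range are preferable for skewed data:

```python
import numpy as np

values = np.array([4.1, 4.8, 5.0, 5.2, 5.4, 5.9, 6.3, 9.7])  # hypothetical measurements

mean, sd = values.mean(), values.std(ddof=1)  # central tendency and spread for Normal-ish data
median = np.median(values)
q1, q3 = np.percentile(values, [25, 75])      # interquartile range, robust to the outlying 9.7

print(f"Mean (SD): {mean:.2f} ({sd:.2f})")
print(f"Median (IQR): {median:.2f} ({q1:.2f} to {q3:.2f})")
```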
Categorical Data
To summarize categorical data, we calculate the number and
proportion (or percentage) of subjects that fall into each cate-
gory. For a graphical illustration of categorical data, one could
use a bar chart, which is similar to a histogram but displays bars
for each category.
ANALYZING DATA
As mentioned in the chapter on study design, we usually collect
data from a representative sample of individuals in order to make
inferences about the target population of such individuals. For
example, to evaluate the effectiveness of a new treatment for
breast cancer, we might evaluate the treatment on a sample of
breast cancer patients from two UK centres, and then draw con-
clusions about the likely usefulness of this treatment for all
breast cancer patients.
However, statistical uncertainty arises because we have infor-
mation for only one of many potential samples that could have
been taken from the population. The two basic methods of quan-
tifying this uncertainty are estimation and hypothesis testing.
Estimation
We use our study sample to estimate population characteristics.
For example, the mean blood pressure amongst a sample of British
men might be used to estimate the true mean blood pressure for
all British men. We could be interested in descriptive measures
such as proportions or means, or comparative measures, such as
relative risks, or differences in means between two groups.
Sampling Distributions
A sample estimate is unlikely to be exactly equal to the true
population value and different samples will provide different estimates.
Standard Errors
The standard error (SE) is used to quantify how precisely the
population characteristic is estimated by the sample. It is the
estimated standard deviation of the sampling distribution with
smaller values of the standard error indicating higher precision
and less uncertainty. Larger sample sizes give smaller standard
errors and hence more precise estimates.
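For a sample mean, the standard error is the sample standard deviation divided by the square root of the sample size. The sketch below, using an assumed standard deviation rather than a value from the text, shows how quadrupling the sample size halves the standard error and roughly reproduces the SE of 1.7 years used in the example that follows:

```python
import math

sd, n = 9.5, 31                    # assumed sample SD (years) and sample size
se = sd / math.sqrt(n)             # standard error of the mean: about 1.7 years
se_larger = sd / math.sqrt(4 * n)  # four times the sample size halves the SE
print(f"SE with n={n}: {se:.2f};  with n={4 * n}: {se_larger:.2f}")
```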
Confidence Intervals
It is often more useful to construct a range for the likely values
of the population characteristic. The sample estimate and its
standard error can be used to provide such a range in the form
of a confidence interval. Conventionally we calculate 95% confi-
dence intervals, which we can think of as the interval likely to
contain the true population value with a probability of 95%.
Because these intervals are obtained from the sampling distribu-
tion of the parameter, the formal interpretation of confidence
intervals is in terms of taking many samples from the population.
If we took 100 samples from the population and calculated a 95%
confidence interval for each, we would expect 95 of these inter-
vals to include the population value.
The upper and lower confidence limits are often calculated
as
estimate ± multiplier × SE
where the multiplier is derived from the sampling distribution.
For example, an estimate of the average age of patients undergo-
ing elective cardiac bypass graft surgery is 60.5 years (SE = 1.7
years) based on a sample of 31 such patients from a London
hospital. A 95% confidence interval showing the likely range for
the true average age is 57.0 to 64.0 (60.5 ± 2.0 × 1.7).
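This interval can be reproduced in a few lines of code; the sketch below assumes the SciPy library and takes the multiplier from a t distribution with n − 1 = 30 degrees of freedom (approximately 2.04, quoted as 2.0 above):

```python
from scipy import stats

n, estimate, se = 31, 60.5, 1.7  # sample size, mean age (years), standard error

# Multiplier from the t distribution with n - 1 degrees of freedom (about 2.04).
multiplier = stats.t.ppf(0.975, df=n - 1)

lower, upper = estimate - multiplier * se, estimate + multiplier * se
print(f"95% CI: {lower:.1f} to {upper:.1f}")  # 95% CI: 57.0 to 64.0
```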
Hypothesis Testing
The sample data can be used to test a predefined statistical
hypothesis about a population characteristic. Typically this null
hypothesis describes situations of no effect or no difference. For
example, when investigating the possible link between respira-
tory infection and sudden infant death, we would test the null
hypothesis that there was no link. There are many statistical tests
suitable for particular hypotheses and types of data, and all
produce probability statements called P-values.
P-values
To test our hypothesis, we consider how much evidence there is
in the data against the null hypothesis. The amount of evidence
is quantified in the form of a P-value, which lies between 0 and
1. Large P-values suggest we have little evidence that the null
hypothesis is untrue, and so we do not reject the null hypothesis.
Large P-values do not tell us that the null hypothesis is true.
Small P-values tell us we have evidence against the null hypoth-
esis, and we would reject the null hypothesis.
Formally, the P-value is the probability of observing data as unlikely as, or more unlikely than, our own sample when the null hypothesis is true.
Statistical Significance
Often we refer to small P-values as statistically significant and
large P-values as nonsignificant. A cut-off of 0.05 conventionally
defines statistical significance, but this is arbitrary and hence
should be used cautiously.
EXAMPLES
Using examples, we illustrate the basic methods of analysis for
continuous and binary outcomes and look at the interpretation
of statistical results. We focus on the comparison between two
groups of data, both independent and paired. The results shown
can be obtained using any good statistical software.
1 236 1 31 178 1
2 209 1 32 242 1
3 253 0 33 273 0
4 250 1 34 164 1
5 156 1 35 185 1
6 281 1 36 153 0
7 251 0 37 218 1
8 201 1 38 187 1
9 257 1 39 212 0
10 203 1 40 248 0
11 230 0 41 255 0
12 210 0 42 158 1
13 291 1 43 234 0
14 278 0 44 268 0
15 263 0 45 194 1
16 241 0 46 188 1
17 270 0 47 212 0
18 227 1 48 272 1
19 186 1 49 165 1
20 236 1 50 212 1
21 228 1 51 260 0
22 246 1 52 218 1
23 185 0 53 261 1
24 212 1 54 244 1
25 280 0 55 207 0
26 244 1 56 248 0
27 294 1 57 304 0
28 276 1 58 204 0
29 173 0 59 230 0
30 202 1 60 181 1
[Figure: histograms and box plots of total cholesterol (approximately 150–300) for the Type A and Type B groups.]
Violation of Assumptions
If the assumptions of the t-test are violated, we may use a differ-
ent approach. Welch’s test is suitable if the data are approxi-
mately Normal but the variances are unequal.4 When there is
non-normality, transforming data to a scale where assumptions
are met may be useful; for example, analyzing log-transformed
data and reporting results in terms of geometric means. Alterna-
tively, we may use a nonparametric method, which does not
require Normality assumptions. The Mann-Whitney U test (or Wilcoxon Rank Sum test) is the nonparametric counterpart of the two-sample t-test; it compares the medians of the two groups, assuming that their distributions have identical shapes.
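A minimal sketch of both tests, assuming the SciPy library and two placeholder arrays of cholesterol values rather than the study data:

```python
import numpy as np
from scipy import stats

# Placeholder cholesterol values for the two groups; substitute the real data.
group_1 = np.array([230, 245, 210, 260, 225, 250, 240, 235])
group_2 = np.array([205, 220, 195, 215, 230, 200, 210, 225])

# Welch's test: a two-sample t-test without the equal-variance assumption.
t_stat, p_welch = stats.ttest_ind(group_1, group_2, equal_var=False)

# Mann-Whitney U test: nonparametric comparison when Normality is doubtful.
u_stat, p_mw = stats.mannwhitneyu(group_1, group_2, alternative="two-sided")

print(f"Welch's t-test: t = {t_stat:.2f}, P = {p_welch:.3f}")
print(f"Mann-Whitney U: U = {u_stat:.1f}, P = {p_mw:.3f}")
```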
TABLE 13.3. Mean (SD) triglyceride level for each CVD group, and the
group of differences
[Figure: inverse Normal (quantile) plot used to check the Normality assumption.]
Violation of Assumptions
If the differences between paired measurements are severely
non-normal, it may be preferable to transform the original data
or use a nonparametric method such as the Wilcoxon signed rank
test.
Analysis
When comparing groups of binary data, the parameter of interest
is the estimated risk difference. For this trial, the difference is
Off-pump 8 (26.7%) 22 30
On-pump 19 (63.3%) 11 30
Total 27 33 60
Violation of Assumptions
If the sample size is too small, we may use Fisher’s Exact Test
and exact methods to obtain confidence limits.
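A minimal sketch of the chi-squared test and Fisher's Exact Test for the off-pump/on-pump table above, assuming the SciPy library:

```python
from scipy import stats

# 2 x 2 table from above: rows are the surgical groups, columns are
# outcome present / absent (8 of 30 off-pump, 19 of 30 on-pump).
table = [[8, 22],
         [19, 11]]

chi2, p_chi2, dof, expected = stats.chi2_contingency(table)  # Yates' correction by default
odds_ratio, p_fisher = stats.fisher_exact(table)             # exact test for small samples

risk_difference = 19 / 30 - 8 / 30
print(f"Risk difference: {risk_difference:.1%}")             # 36.7%
print(f"Chi-squared P = {p_chi2:.4f}; Fisher's exact P = {p_fisher:.4f}")
```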
Recurrence? (no hormone)
Recurrence? (hormone)    Yes    No    Total
Yes                        9     6       15
No                        18    14       32
Total                     27    20       47
Analysis
The risk difference is 25.5% in favor of the hormone treatment,
and the standard error is 9.7%. Assuming Normality, the 95%
confidence interval is 6.5% to 44.6%. To formally compare the
groups, we use McNemar’s test. This gives a P-value of 0.014,
showing evidence of a beneficial effect of hormone treatment.
This test is valid provided that both discordant pairs in the table
exceed 5. We have 6 and 18 discordant pairs.
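A sketch that reproduces these figures, assuming the statsmodels library for McNemar's test and a standard formula for the paired standard error (neither is specified in the text):

```python
import math
from statsmodels.stats.contingency_tables import mcnemar

# Paired 2 x 2 table from above: rows = recurrence on hormone treatment
# (Yes/No), columns = recurrence without hormone treatment (Yes/No).
table = [[9, 6],
         [18, 14]]
n = 47
b, c = 6, 18  # discordant pairs

result = mcnemar(table, exact=False, correction=False)
print(f"McNemar chi-squared = {result.statistic:.1f}, P = {result.pvalue:.3f}")  # 6.0, 0.014

# Risk difference and its paired standard error, matching the values quoted above.
diff = (c - b) / n                            # 25.5% in favor of hormone treatment
se = math.sqrt(b + c - (b - c) ** 2 / n) / n  # about 9.7%
print(f"Difference = {diff:.1%} (95% CI {diff - 1.96 * se:.1%} to {diff + 1.96 * se:.1%})")
```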
Violation of Assumptions
If the sample size is too small, we can use exact methods for
testing and calculating confidence intervals.
CONCLUSION
We have described the most basic methods for summarizing and
analyzing two groups of continuous or binary outcomes. More
advanced methods exist for different types of data, more than
two groups, and other types of associations. Regression is used
References
1. Altman DG. Practical Statistics for Medical Research. Boca Raton, FL: Chapman and Hall, 1991.
2. Bland JM. An Introduction to Medical Statistics, 3rd edition. New York: Oxford University Press, 2000.
3. Kirkwood BR, Sterne JAC. Essential Medical Statistics, 2nd edition. Oxford, UK: Blackwell Science, 2003.
4. Armitage P, Berry G, Matthews JNS. Statistical Methods in Medical Research, 4th edition. Oxford, UK: Blackwell Science, 2002.
5. Ragland DR, Brand RJ. Coronary heart disease mortality in the
Western Collaborative Group study. Am J Epidemiol 1988;127:
462–475.
6. Bessant R, Duncan R, Ambler G, et al. Prevalence of conventional and
lupus-specific risk factors for cardiovascular disease in patients with
systemic lupus erythematosus: a case control study. Arthritis Rheum
2006;55(6):892–899.
7. Zamvar V, Williams D, Hall J, et al. Assessment of neurocognitive
impairment after off-pump and on-pump techniques for coronary
artery bypass graft surgery: prospective randomised controlled trial.
BMJ 2002;325:1268–1271.
Chapter 14
Future
H.R.H. Patel, I.S. Shergill, and M. Arya
V
Variance, 127
Vinorelbine, 102

W
Welch’s test, 133
Wellcome Trust, 117–118
Western blotting
    applications of, 48, 56
    basic principles and methods of
        agarose electrophoresis, 51–52
        blocking, 54
        hybridization, 54
        sample preparation, 49–50
    definition of, 48, 49
Wilcoxon Rank Sum test, 133
Wilcoxon signed rank test, 135