PHYSICS FOR DIAGNOSTIC RADIOLOGY
THIRD EDITION
P P Dendy, B Heaton
With contributions by
O W E Morrish, S J Yates, F I McKiddie, P H Jarritt, K E Goldstone,
A C Fairhead, T A Whittingham, E A Moore, and G Cusick
A TAYLOR & FRANCIS BOOK
Series in Medical Physics and Biomedical Engineering
Series Editors: John G Webster, Slavik Tabakov, Kwan-Hoong Ng
Contemporary IMRT
S Webb
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to
publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials
or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material repro-
duced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any
form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming,
and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400.
CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been
granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identifica-
tion and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
2. Production of X-Rays............................................................................................................ 23
P P Dendy and B Heaton
The Series in Medical Physics and Biomedical Engineering describes the applications of physi-
cal sciences, engineering, and mathematics in medicine and clinical research.
The series seeks (but is not restricted to) publications in the following topics:
• Artificial organs
• Assistive technology
• Bioinformatics
• Bioinstrumentation
• Biomaterials
• Biomechanics
• Biomedical engineering
• Clinical engineering
• Imaging
• Implants
• Medical computing and mathematics
• Medical/surgical devices
• Patient monitoring
• Physiological measurement
• Prosthetics
• Radiation protection, health physics, and dosimetry
• Regulatory issues
• Rehabilitation engineering
• Sports medicine
• Systems physiology
• Telemedicine
• Tissue engineering
• Treatment
The Series in Medical Physics and Biomedical Engineering is an international series that
meets the need for up-to-date texts in this rapidly developing field. Books in the series
range in level from introductory graduate textbooks and practical handbooks to more
advanced expositions of current research.
The Series in Medical Physics and Biomedical Engineering is the official book series of the
International Organization for Medical Physics.
Objectives
Activities
Official journals of the IOMP are Physics in Medicine and Biology and Medical Physics and
Physiological Measurement. The IOMP publishes a bulletin, Medical Physics World, twice a
year, which is distributed to all members.
A World Congress on Medical Physics and Biomedical Engineering is held every
three years in cooperation with IFMBE through the International Union for Physics and
Engineering Sciences in Medicine (IUPESM). A regionally based international conference
on medical physics is held between world congresses. IOMP also sponsors international
conferences, workshops, and courses. IOMP representatives contribute to various interna-
tional committees and working groups.
The IOMP has several programs to assist medical physicists in developing countries.
The joint IOMP Library Programme supports 69 active libraries in 42 developing coun-
tries, and the Used Equipment Programme coordinates equipment donations. The Travel
Assistance Programme provides a limited number of grants to enable physicists to attend
the world congresses. The IOMP website is being developed to include a scientific database
of international standards in medical physics and a virtual education and resource center.
Information on the activities of the IOMP can be found on its website at www.iomp.org.
Acknowledgements
We are grateful to many persons for constructive comments on and assistance with the
production of this book. In particular, we wish to thank Dr J Freudenberger, G Walker,
Dr G Buchheim, Dr K Bradshaw, Mr M Bartley, Professor G Barnes, Dr A Noel, D Goodman,
Dr A Parkin, Dr I S Hamilton, D A Johnson, I Wright, S Yates, E Hutcheon, M Streedharen
and K Anderson.
Figures 2.9, 2.15 and 2.28 are reproduced by permission of Siemens AG. Dura and
STRATON are registered trademarks of Siemens.
Figures 2.24 and 2.25 are reproduced by permission of Philips Electronics UK Ltd.
Figures 5.18 and 5.20 are reproduced by permission of Medical Physics Publishing.
Figure 5.26 is reproduced by permission of the British Institute of Radiology.
Figure 7.2a is reproduced by permission of Artinis Medical Systems.
Figures 7.3, 7.19, 7.20, 7.23, 7.24, 7.25 and 7.26 are reproduced by permission of the
British Institute of Radiology.
Figure 7.4 is reproduced by permission of The Royal Society, London UK (Campbell
F W, Phil Trans Soc B, 290, 5–9, 1980).
Figure 7.11 is reproduced by permission of Elsevier Publishing and Professor
P N T Wells (Scientific Basis of Medical Imaging edited by P N T Wells, 1982, figure
1.19, p. 18).
Figure 7.21 is reproduced by permission of the International Commission on
Radiation Units and Measurements.
Figure 7.22 is reproduced by permission of the Radiological Society of North
America Inc (MacMahon H, Vyborny C J, Metz C E et al. Radiology, 158, 21–26,
figure 5, 1986).
Figure 9.13 is reproduced by permission of Dr J N P Higgins and H Szutowicz.
Figure 9.16 is reproduced by permission of Professor G T Barnes.
Figure 9.19 is reproduced by permission of Oxford University Press.
Figure 11.2 is reproduced by permission of Elsevier Publishing (Cherry S R,
Sorenson J A & Phelps M E. Physics in Nuclear Medicine, 3rd edition, 2003).
Figure 12.12 is reproduced by permission of the Radiological Society of North
America (Boice J D, Land C E, & Shore R E. Risk of breast cancer following low-
dose radiation exposure, Radiology, 131, 589–597, 1979).
Figures 12.14 and 12.17 and Table 12.2 are reproduced by permission of the British
Institute of Radiology.
Tables 13.1, 13.3, 13.4, 13.5 and 13.16 are reproduced by permission of the Health
Protection Agency, UK.
Tables 13.2, 13.8, 13.9, 13.10, 13.11, 13.12, 13.14 and 13.15 are reproduced by permission
of the British Institute of Radiology.
Introduction to the Third Edition
Learn from yesterday, live for today, hope for tomorrow. The important thing
is not to stop questioning.
Albert Einstein
The first edition of this book, published in 1987, was written in response to a rapid devel-
opment in the range of imaging techniques available to the diagnostic radiologist over
the previous 20 years and a marked increase in the sophistication of imaging equipment.
There was a clear need for a textbook that would explain the underlying physical prin-
ciples of all the relevant imaging techniques at the appropriate level.
Since that time, there have been major developments in imaging techniques and the
physical principles behind them. Some of these were addressed in the second edition, pub-
lished in 1999, notably the much greater importance attached to patient doses, the increas-
ingly widespread use of digital radiography, the importance of both patient dose and
image quality in mammography, the increasing awareness of the need to protect staff and
related legislation. The chapters on ultrasound and magnetic resonance imaging (MRI)
were completely rewritten.
The past decade has seen yet more advances, and parts of the second edition are no lon-
ger ‘state of the art’. In this third edition all the chapters have been revised and brought
up-to-date, with major additions in the following areas:
The second evolutionary change since 1987 has been in the scope of the anticipated read-
ership. Radiologists in training are still a primary target, and there are many reasons to
emphasise the importance of physics education as a critical component of radiology train-
ing. As an imaging technique becomes more sophisticated it is essential for radiologists
to know ‘how it works’, thus providing them with a unique combination of anatomical,
physiological and physical information. This helps to differentiate the expertise of radiolo-
gists from that of other physicians who read images and helps to position radiology as a
And finally—why the quotation from Einstein? Digital imaging, molecular imaging and
functional imaging have great potential in medicine, but as they develop they will inevi-
tably require a better knowledge of physics and become more quantitative. We have tried
to show the way forward to both radiologists and scientists who are prepared to ask the
question, Why?
References
Bresolin L, Bissett III GS, Hendee WR and Kwakwa FA, Methods and resources for physics education
in radiology residency programmes, Radiology, 249, 640–643, 2008.
EFOMP, Guidelines for education and training of medical physicists in radiology—in preparation,
European Federation of Organisations for Medical Physics.
Hendee WR, An opportunity for radiology, Radiology, 238, 389–394, 2006.
Contributors
G Cusick
Medical Physics and Bioengineering
UCL Hospitals NHS Foundation Trust
London, United Kingdom

F I McKiddie
Department of Nuclear Medicine
NHS Grampian
Aberdeen, United Kingdom
P H Jarritt
Clinical Director of Medical Physics and
Clinical Engineering
Department of Medical Physics and
Clinical Engineering
Cambridge University Hospitals NHS
Foundation Trust
Cambridge, United Kingdom
1
Fundamentals of Radiation
Physics and Radioactivity
SUMMARY
• Why some atoms are unstable is explained.
• The processes involved in radioactive decay are presented.
• The concepts of physical and biological half-life and the mathematical expla-
nation of secular equilibrium are addressed.
• The basic physical properties of X and gamma photons and the importance
of the K shell electrons in diagnostic radiology are explained.
• The basic concepts of the quantum nature of electromagnetic (EM) radia-
tion and energy, the inverse square law and the interaction of radiation with
matter are introduced.
CONTENTS
1.1 Structure of the Atom.............................................................................................................2
1.2 Nuclear Stability and Instability.......................................................................................... 4
1.3 Radioactive Concentration and Specific Activity............................................................... 6
1.3.1 Radioactive Concentration........................................................................................ 6
1.3.2 Specific Activity..........................................................................................................7
1.4 Radioactive Decay Processes................................................................................................7
1.4.1 β– Decay........................................................................................................................7
1.4.2 β+ Decay........................................................................................................................7
1.4.3 α Decay.........................................................................................................................8
1.5 Exponential Decay.................................................................................................................. 8
1.6 Half-life.....................................................................................................................................9
1.7 Secular and Transient Equilibrium.................................................................................... 11
1.8 Biological and Effective Half-Life....................................................................................... 13
1.9 Gamma Radiation................................................................................................................. 14
1.10 X-rays and Gamma Rays as Forms of Electromagnetic Radiation................................ 14
1.11 Quantum Properties of Radiation...................................................................................... 16
1.12 Inverse Square Law.............................................................................................................. 17
1.13 Interaction of Radiation with Matter................................................................................. 17
1.14 Linear Energy Transfer........................................................................................................ 19
FIGURE 1.1
Examples of atomic structure. (a) Hydrogen with one K shell electron. (b) Oxygen with two K shell electrons and six L shell electrons.
FIGURE 1.2
Typical electron energy levels. (a) Aluminium (Z = 13): 2 K shell electrons at –1.5 keV, 8 L at –0.08 keV and 3 M at –0.005 keV. (b) Tungsten (Z = 74): 2 K shell electrons at –69.5 keV, 8 L at –11.0 keV, 18 M at –2.8 keV, 32 N at –0.6 keV, 12 O at –0.07 keV and 2 P at –0.02 keV.
of an energy ‘well’ that gets deeper as the electron is trapped in shells closer and closer
to the nucleus.
The unit in which electron energies are measured is the electron volt (eV)—this is the energy
one electron would gain if it were accelerated through 1 volt of potential difference. One
thousand electron volts is a kilo electron volt (keV) and one million electron volts is a mega
electron volt (MeV). Some typical electron shell energies are shown in Figure 1.2. Note that
1. If a free electron is assumed to have zero energy, all electrons within atoms have
negative energy—that is, they are bound to the nucleus and must be given energy
to escape.
2. The energy levels are not equally spaced and the difference between the K shell
and the L shell is much bigger than any of the other differences between shells
further away from the nucleus. Shells are distinguished by being given a letter.
The innermost shell is the K shell and subsequent shells follow in alphabetical
order. When a shell is full (e.g. the M shell can only hold 18 electrons) the next
outer shell starts to fill up.
The K shell energies of many elements are important in several aspects of the physics of radiology; Table 1.1 lists their values and the areas of radiology in which they are used. This table will be useful for reference when reading the subsequent chapters.
The X-ray energies of interest in diagnostic radiology are between 10 and 120 keV. Below
10 keV too many X-rays are absorbed in the body, above 120 keV too few X-rays are stopped
by the image receptor. However, higher energy gamma photons are used when imaging
with radioactive materials where the imaging process is quite different.
Insight
K Shell Energies
The most important energy level in imaging is the K shell energy. The L shell energies are small
(lead 15.2 keV, tungsten 12.1 keV, caesium 5.7 keV, for example) and are mostly outside the energy
range of interest in radiology (we have set the lower limit at 10 keV, some L shell energies are
TABLE 1.1
K Shell Energies for Various Elements and the Aspect of Radiology Where They
Are Important
Element Area of Application Z Number K Shell Energy (keV)
Carbon (a) 6 0.28
Oxygen (a) 8 0.53
Aluminium (b) 13 1.6
Silicon (c) 14 1.8
Phosphorus (a) 15 2.1
Sulphur (a) 16 2.5
Calcium (a) 20 4.0
Copper (f) 29 9.0
Germanium (c) 32 11.1
Selenium (c) 34 12.7
Molybdenum (b and e) 42 20.0
Rhodium (e) 45 23.2
Palladium (e) 46 24.4
Iodine (c and d) 53 33.2
Caesium (c) 55 36.0
Barium (d and f) 56 37.4
Gadolinium (c) 64 50.2
Erbium (b) 68 57.5
Ytterbium (c) 70 61.3
Tungsten (e) 74 69.5
Lead (f) 82 88.0
(a) Body tissue components—but the X-rays associated with these K shells have too low an
energy to have any external effect and are absorbed in the body.
(b) Used to filter the beam emerging from the X-ray tube.
(c) Used as a detector (in a monitor) or an image receptor of X-ray photons.
(d) Used as a contrast agent to highlight a part of the body.
(e) Used to influence the spectral output of an X-ray tube.
(f) Used as shielding from X-ray photons.
slightly above this). Since the (negative) K shell energy is a measure of how tightly the two K shell electrons are held by the positive charge on the nucleus, the binding energy of the K shell increases with atomic number, as can be seen in Table 1.1. As noted in Table 1.1, K shell energies
have important applications in the shape of the X-ray spectra (Section 2.2), filters (Section 3.8),
intensifying screens, scintillation detectors and digital receptors (Chapter 5) and contrast agents
(Section 6.3.4).
The total number of protons and neutrons, collectively referred to as nucleons, within the nucleus is called the mass number, usually given the symbol A. Each particular combination of Z and A defines a nuclide. One notation used to describe a nuclide writes the mass number A as a superscript and the atomic number Z as a subscript in front of the element symbol N.
The number of protons Z defines the element N, so for hydrogen Z = 1, for oxygen Z = 8 and so on, but the number of neutrons is variable. Therefore an alternative and generally simpler notation that carries all necessary information is N-A. The superscript/subscript notation will only be used for equations where it is important to check that the number of protons and the number of nucleons balance.
Nuclides that have the same number of protons but different numbers of neutrons are
known as isotopes. Thus O-16, the most abundant isotope of oxygen, has 8 protons (by def-
inition) and 8 neutrons. O-17 is the isotope of oxygen which has 8 protons and 9 neutrons.
Since isotopes have the same number of protons and hence when neutral the same number
of orbital electrons, they have the same chemical properties.
The number of neutrons required to stabilise a given number of protons lies within fairly
narrow limits and Figure 1.3a shows a plot of these numbers. Note that for many elements
of biological importance the number of neutrons is equal to the number of protons, but the
most abundant form of hydrogen, which has one proton but no neutrons, is an important
exception. At higher atomic numbers the number of neutrons begins to increase faster
than the number of protons—lead, for example, has 126 neutrons but only 82 protons.
An alternative way to display the data is to plot the sum of neutrons and protons against
the number of protons (Figure 1.3b). This is essentially a plot of nuclear mass against
FIGURE 1.3
Graphs showing the relationship between number of neutrons and number of protons for the most abundant stable elements. (a) Number of neutrons plotted against number of protons. The dashed line is at 45°. The cross-hatched area shows the range of values for which the nucleus is likely to be stable. (b) Total number of nucleons (neutrons and protons) plotted against number of protons. On each graph the changes associated with β+, β– and α decay are shown.
nuclear charge (or the total charge on the orbiting electrons). This concept will be useful
when considering the interaction of ionising radiation with matter, and in Section 3.4.3 the
near constancy of mass/charge (A/Z is close to 2) for most of the biological range of ele-
ments will be considered in more detail.
If the ratio of neutrons to protons is outside narrow limits, the nuclide is radioactive or
a radionuclide. For example, H-1 (normal hydrogen) is stable, H-2 (deuterium) is also stable,
but H-3 (tritium) is radioactive. A nuclide may be radioactive because it has too many or
too few neutrons.
A simple way to make radioactive nuclei is to bombard a stable element with a flux of
neutrons in a reactor. For example, radioactive phosphorus may be made by the reaction
shown below:
³¹₁₅P + ¹₀n = ³²₁₅P + ⁰₀γ
(the emission of a gamma ray as part of this reaction will be discussed later). However, this
method of production results in a radionuclide that is mixed with the stable isotope since
the number of protons in the nucleus has not changed and not all the P-31 is converted to
P-32. Radionuclides that are ‘carrier free’ can be produced by bombarding with charged
particles such as protons or deuterons, in a cyclotron; for example, if sulphur is bombarded
with protons,
³⁴₁₆S + ¹₁p = ³⁴₁₇Cl + ¹₀n
The radioactive product is now a different element and thus may be separated by chemical
methods.
The activity of a source is a measure of its rate of decay or the number of disintegrations per
second. In the International System of Units it is measured in becquerels (Bq) where 1 Bq is
equal to one disintegration per second. The becquerel has replaced the older unit of the curie
(Ci), but since the latter is still encountered in textbooks and older published papers and is
still actively used in some countries, it is important to know the conversion factor.
1 Ci = 3.7 × 10¹⁰ Bq
Hence, 1 mCi = 37 MBq.
injection. If one wishes to inject a large activity of technetium-99m (Tc-99m) in a small vol-
ume, perhaps for a dynamic nuclear medicine investigation, it is preferable to elute a ‘new’
molybdenum-technetium generator when the yield might be 8 GBq (200 mCi) in a 10 ml
eluate [0.8 GBq ml–1 (20 mCi ml–1)] rather than an old generator when the yield might be
only about 2 GBq (50 mCi) [0.2 GBq ml–1 (5 mCi ml–1)]. For a fuller discussion of the produc-
tion of Tc-99m and its use in nuclear medicine see Section 1.7 and Chapter 10.
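As an aside, the becquerel–curie conversion and the idea of radioactive concentration are easy to check numerically. The short Python sketch below uses the round figures quoted above (which are themselves approximate, since 200 mCi is strictly 7.4 GBq); it is illustrative only:

GBQ_PER_MCI = 0.037   # 1 Ci = 3.7e10 Bq, so 1 mCi = 37 MBq = 0.037 GBq

def concentration(activity_gbq, volume_ml):
    # Radioactive concentration = activity per unit volume of solution (GBq per ml)
    return activity_gbq / volume_ml

for label, mci in [("new generator", 200), ("old generator", 50)]:
    gbq = mci * GBQ_PER_MCI   # 200 mCi -> 7.4 GBq (~8), 50 mCi -> 1.85 GBq (~2)
    print(label, round(gbq, 1), "GBq in 10 ml =", round(concentration(gbq, 10.0), 2), "GBq/ml")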
Insight
Pure Radionuclides
The number of molecules in one gram-molecular weight is 6.02 × 10²³ (Avogadro's number). Very
few radionuclide solutions or solids are pure radionuclide. Most consist of radioactivity mixed
with some form of non-radioactive carrier.
1.4.1 𝛃– Decay
A negative β particle is an electron. Its emission is actually a very complex process but it
will suffice here to think of a change in the nucleus in which a neutron is converted into a
proton. The particles are emitted with a range of energies. Note that although the process
results in emission of electrons, it is a nuclear process and has nothing to do with the orbiting
electrons.
The mass of the nucleus remains unchanged but its charge increases by one, thus this
change is favoured by nuclides which have too many neutrons.
1.4.2 𝛃+ Decay
A positive β particle, or positron, is the anti-particle to an electron, having the same mass
and an equal but opposite charge. Again, its precise mode of production is complex but
it can be thought of as being released when a proton in the nucleus is converted to a neu-
tron. Note that a positron can only exist while it has kinetic energy. When it comes to rest
it spontaneously combines with an electron.
The mass of the nucleus again remains unchanged but its charge decreases by one, thus
this change is favoured by nuclides which have too many protons.
1.4.3 𝛂 Decay
An α particle is a helium nucleus, thus it comprises two protons and two neutrons. After α
emission, the charge is reduced by two units and the mass by four units.
The effects of β–, β+ and α decay are shown in Figure 1.3. Note that emission of α particles
only occurs for the higher atomic number nuclides.
ΔN ∝ N Δt

or, in differential form,

dN = –kN dt
where the constant of proportionality k is characteristic of the radionuclide, known as its decay
constant or transformation constant, and the negative sign has been introduced to show that,
mathematically, the number of radioactive nuclei actually decreases with elapsed time.
The equation may be integrated (see Insight) to give the well-known exponential
relationship
N = N0 exp(–kt)
Insight
Mathematics of the Exponential Equation
dN = −kN dt

Rearranging,

dN/N = −k dt

Integrating between N = N0 at t = 0 and N = Nt at time t,

ln Nt − ln N0 = −kt

ln(Nt/N0) = −kt

Nt/N0 = e^(−kt) (from the definition of Naperian logarithms)

N = N0 e^(−kt) (the subscript t is usually dropped from Nt)
Since the activity of a source A is equal to the number of disintegrations per second,

A = −dN/dt = kN = kN0 exp(−kt)

When t = 0,

A0 = kN0

so

A = A0 exp(–kt)     (1.1)
1.6 Half-Life
An important concept is the half-life or the time (T½) after which the activity has decayed to
half its original value.
If A is set equal to A0/2 in Equation 1.1, then ½ = exp(−kT½), so kT½ = ln 2 and k = ln 2/T½ = 0.693/T½.
Hence

A = A0 exp(−(ln 2)t/T½) = A0 exp(−0.693t/T½)     (1.2)
Similarly,

N = N0 exp(−0.693t/T½)

Note that:
1. The idea of half-life may be applied from any starting point in time. Whatever the
activity at a given time, after one half-life, the activity will have been halved.
2. The activity never becomes zero, since there are many millions of radioac-
tive nuclei present, so their number can always be halved to give a residue of
radioactivity.
Clearly, if the value of T½ is known, and the rate of decay is known at one time, the rate
of decay may be found at any later time by solving equation 1.2 given above. However, the
activity may also be found, with sufficient accuracy, by a simple graphical method.
Proceed as follows:
1. Use the y-axis to represent activity and the x-axis to represent time.
2. Mark the x-axis in equal units of half-lives.
3. Assume the activity at t = 0 is 1. Hence the first point on the graph is (0,1).
4. Now, apply the half-life rule. After one half-life, the activity is ½, so the next point
on the graph is (1,½).
5. Apply the half-life rule again to obtain the point (2,¼) and successively (3,⅛), (4,1/16) and (5,1/32).
See Figure 1.4a.
Note that, so far, the graph is quite general without consideration of any particular
nuclide, half-life or activity. To answer a specific problem, it is now only necessary to
re-label the axes with the given data, for example, ‘The activity of an oral dose of I-131 is
90 MBq at 12 noon on Tuesday, 4 October. If the half-life of I-131 is 8 days, when will the
activity be 36 MBq?’ Figure 1.4b shows the same axes as Figure 1.4a re-labelled to answer
this specific problem. This quickly yields the answer of 10½ days, that is, at 12 midnight
on 14 October.
This graphical approach may be applied to any problem that can be described in terms
of simple exponential decay.
Insight
Solving this problem using equation 1.2:
36 = 90 exp(−(ln 2)t/8), where t is the required time in days.

ln(90/36) = (ln 2)t/8, from which t = 10.6 days.
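The same answer follows directly by rearranging Equation 1.2; a minimal Python sketch of the calculation (the activities and half-life are those of the worked example above):

import math

def time_to_decay(a0, a, half_life):
    # Time for the activity to fall from a0 to a, from A = A0 exp(-ln2 * t / T_half)
    return half_life * math.log(a0 / a) / math.log(2)

print(time_to_decay(90, 36, 8))   # I-131 example: ~10.6 days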
FIGURE 1.4
Simple graphical method for solving any problem where the behaviour is exponential. (a) A basic curve that may be used to describe any exponential process. (b) The same curve used to solve the specific problem on radioactive decay set in the text.
FIGURE 1.5
(a) Increase in activity of a daughter when the activity of the parent is assumed to be constant. Generation of Tc-99m from Mo-99 has been taken as a specific example, but with the assumption that the supply of Mo-99 is constantly replenished. (b) The decay curve for Mo-99 which has a half-life of 67 h. (c) Increase in activity of a daughter when the activity of the parent is decreasing. Generation of Tc-99m from Mo-99 has been taken as a specific example. Curves (a) and (b) are multiplied to give the resultant activity of Tc-99m.
Insight
Secular Equilibrium
N = Nmax[1 − exp(−(ln 2)t/T½)]

where T½ is now the half-life of the daughter radionuclide. Thus after n half-lives

N = Nmax[1 − (½)ⁿ]

Substituting n = 10, (½)¹⁰ = 1/1024, so N differs from its maximum value by less than 1 part in 1000.
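The growth expression in this insight is easily tabulated. A short Python sketch (the radon-222 half-life of 3.82 days quoted below is used purely as an example of a daughter nuclide) confirms that the daughter activity is within about 0.1% of its maximum after 10 half-lives:

import math

def daughter_fraction(t, t_half_daughter):
    # Fraction of the maximum (equilibrium) daughter activity reached at time t,
    # assuming the parent activity remains effectively constant
    return 1.0 - math.exp(-math.log(2) * t / t_half_daughter)

t_half = 3.82   # days, e.g. radon-222 growing from long-lived radium-226
for n in (1, 2, 5, 10):
    print(n, "half-lives:", round(daughter_fraction(n * t_half, t_half), 4))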
1. The half-life of the parent is much longer than the half-life of the daughter; for
example, radium-226, which has a half-life of 1620 years, decays to radon gas which
has a half-life of 3.82 days. For most practical purposes the activity of the radon gas
reaches a constant value, only changing very slowly as the radium decays. This is
known as secular equilibrium.
2. The half-life of the parent is not much longer than that of the daughter. The
most important example for radiology arises in diagnostic nuclear medicine
and is molybdenum-99 (Mo-99) which has a half-life of 67 h before decaying to
technetium-99m (Tc-99m) which has a half-life of 6 h. Now the growth curve for
Tc-99m when the Mo-99 activity is assumed constant (Figure 1.5a) must be multi-
plied by the decay curve for Mo-99 (Figure 1.5b). The resultant (Figure 1.5c) shows
that an actual maximum of Tc-99m activity is reached after about 18 h. By the time
the 10 half-lives (60 h) required for Tc-99m to come to equilibrium with Mo-99 have
elapsed, the activity of Mo-99 has fallen to half its original value.
This is known as transient equilibrium because although the Tc-99m is in equilibrium with
the Mo-99, the activity of the Tc-99m is not constant. It explains why the amount of activity
that can be eluted from a Mo-Tc generator (see Section 1.3 and Chapter 10) is much higher
when the generator is first delivered than it is a week later.
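The construction just described (the daughter growth curve multiplied by the parent decay curve) can be reproduced numerically. This is the simplified picture of Figure 1.5 rather than the full parent–daughter decay equations, so the exact position of the peak should be treated as approximate:

import math

LN2 = math.log(2)
T_MO, T_TC = 67.0, 6.0   # half-lives in hours, as quoted in the text

def tc99m_relative_activity(t):
    # Growth of Tc-99m towards equilibrium (Figure 1.5a) multiplied by
    # the decay of Mo-99 (Figure 1.5b), as in Figure 1.5c
    growth = 1.0 - math.exp(-LN2 * t / T_TC)
    decay = math.exp(-LN2 * t / T_MO)
    return growth * decay

for t in (6, 12, 18, 24, 48):
    print(t, "h:", round(tc99m_relative_activity(t), 2))
# The product passes through a broad maximum at roughly 18-24 h and then
# falls away as the Mo-99 decays.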
C = C0 exp(−(ln 2)t/T½ biol)

(cf. Equation 1.2), where T½ biol is the biological half-life.
When physical and biological processes are combined, the overall loss is the product of two exponential terms and the activity at any time after injection is given by

A = A0 exp(−(ln 2)t/T½ phys) · exp(−(ln 2)t/T½ biol)
  = A0 exp[−(ln 2)t (1/T½ phys + 1/T½ biol)]

To find the effective half-life T½ eff, set

A = A0 exp(−0.693t/T½ eff)
Hence, by inspection,

1/T½ eff = 1/T½ phys + 1/T½ biol

Note that if T½ phys is much shorter than T½ biol, the latter may be neglected, and vice versa. For example, if T½ phys = 1 h and T½ biol = 20 h,

1/T½ eff = 1/1 + 1/20 = 1.05
and T½ eff = 0.95 h or almost the same as T½ phys.
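The reciprocal relationship is easily wrapped in a small helper; the first call reproduces the worked example (T½ phys = 1 h, T½ biol = 20 h) and the second uses arbitrary illustrative values:

def effective_half_life(t_phys, t_biol):
    # Effective half-life from 1/T_eff = 1/T_phys + 1/T_biol
    return 1.0 / (1.0 / t_phys + 1.0 / t_biol)

print(effective_half_life(1.0, 20.0))   # ~0.95 h, dominated by the physical half-life
print(effective_half_life(6.0, 24.0))   # hypothetical 6 h physical, 24 h biological -> 4.8 h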
Insight
Decay Schemes
It should be noted that in some radionuclides all disintegrations do not produce all the possible
gamma photon energies or β particles with the same maximum energy that the radionuclide can
produce. However, for a large number of disintegrations the ratio of gamma photons at one energy
to those of another energy is always constant. This is illustrated in Table 1.2.
TABLE 1.2
Decay Data for Molybdenum-99
Beta particles:
Type      Maximum Energy of Decay (MeV)      Percentage of Disintegrations (%)
β–        0.45                               14
β–        0.87                               ~1
β–        1.23                               85

Gamma photons:
Energy (MeV)      Percentage of Disintegrations (%)
0.04              1.0
0.14              4.6
0.18              4.5
0.37              1.0
0.74              10.0
0.78              4.0
TABLE 1.3
The Different Parts of the Electromagnetic Spectrum Classified in Terms of Wavelength,
Frequency and Quantum Energy
                     Radio Waves          Infra-red            Visible Light   Ultra Violet   X-rays and Gamma Rays
Wavelength (m)       10³ – 10⁻²           10⁻⁴ – 10⁻⁶          5 × 10⁻⁷        5 × 10⁻⁸       10⁻⁹ – 10⁻¹³
Frequency (Hz)       3 × 10⁵ – 3 × 10¹⁰   3 × 10¹² – 3 × 10¹⁴  6 × 10¹⁴        6 × 10¹⁵       3 × 10¹⁷ – 3 × 10²¹
Quantum energy (eV)  10⁻⁹ – 10⁻⁴          10⁻² – 1             2               20             10³ – 10⁷
Unlike sound, which is a mechanical vibration and therefore requires a medium for propagation (see Chapter 15), EM radiation can travel through a vacuum. However, like sound, EM radiation exhibits many wave-like properties such as reflection, refraction, diffraction and interference and is frequently characterised by its wavelength. EM waves can vary in wavelength from 10⁻¹³ m to 10³ m and different parts of the EM spectrum are recognised by different names (see Table 1.3).
X-rays and gamma rays are both part of the EM spectrum and an 80 keV X-ray is identi-
cal to, and hence indistinguishable from, an 80 keV gamma ray. To appreciate the reason
for the apparent confusion, it is necessary to consider briefly the origin of the discoveries
of X-rays and gamma rays. As already noted, gamma rays were discovered as a type of
radiation emitted by radioactive materials. They were clearly different from α rays and β
rays, so they were given the name gamma rays. X-rays were discovered in quite a different
way as ‘emission from high energy machines of radiations that caused certain materials,
such as barium platino-cyanide to fluoresce’. It was some time before the similar identity
of X-rays produced by machines and gamma rays produced by radioactive materials was
confirmed.
For a number of years, X-rays produced by machines were of lower energy than gamma
rays, but with the development of linear accelerators and other high-energy machines, this
distinction is no longer useful.
No distinction between X-rays and gamma rays is totally self-consistent, but it is reasonable to describe gamma rays as the radiation emitted as a result of nuclear interactions, and X-rays as the radiation emitted as a result of events outside the nucleus. For example, one method by which nuclides with too few neutrons may approach stability is by K-electron capture. This mode of radioactive decay has not yet been discussed. The nucleus ‘steals’ an electron from the K shell to neutralise one of its protons. The K shell vacancy is filled by electrons from outer shells and the energy that has to be lost in this process is emitted as X-rays.
FIGURE 1.6
A simple representation of the meaning of intensity: energy flows through an aperture of area a at an angle θ to the normal to the direction of energy flow.
Thus, the smaller the value of λ, the larger the value of the energy packet. For a typical diagnostic X-ray wavelength of 2 × 10⁻¹¹ m, the value of ε in joules for a single photon is inconveniently small, so the electron volt, a unit of energy that has already been introduced (see Section 1.1), is used, where 1 eV = 1.6 × 10⁻¹⁹ J. A wavelength of 2 × 10⁻¹¹ m corresponds to a photon energy of 62 keV.
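The conversion used here (ε = hf = hc/λ, then joules to electron volts) is a one-line calculation; the following Python sketch reproduces the 62 keV figure using rounded values of the constants:

H = 6.63e-34    # Planck constant, J s
C = 3.0e8       # speed of light, m/s
EV = 1.6e-19    # joules per electron volt

def photon_energy_kev(wavelength_m):
    # Photon energy in keV from epsilon = hc/lambda
    return H * C / wavelength_m / EV / 1000.0

print(photon_energy_kev(2e-11))   # ~62 keV for a typical diagnostic X-ray wavelength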
FIGURE 1.7
A diagram showing the principle of the inverse square law (spheres of radius r and R around the source).
FIGURE 1.8
Simple model of the interaction of a stream of ionising particles with matter. Interaction A causes excitation, interaction B causes ionisation, and interaction C causes multiple ionisations. At each ionisation an electron is released from the nucleus to which it was bound. Recall the comment in Section 1.1 that matter consists mostly of empty space. Hence the chance of a collision is much smaller in practice than this diagram suggests.
TABLE 1.4
Approximate Ranges of Electrons in Soft Tissue
Electron Energy (keV) Approximate Range (mm)
20 0.01
40 0.03
100 0.14
400 1.3
The aforementioned, very simple model may also be used to predict how easily differ-
ent types of radiation will be attenuated by different types of material. Clearly, as far as
the stopping material is concerned, a high density of large nuclei (i.e. high atomic number)
will be most effective for causing many collisions. Thus gases are poor stopping materials,
but lead (Z = 82) is excellent and, if there is a special reason for compact shielding, even
depleted uranium (Z = 92) is sometimes used.
With regard to the bombarding particles, size (or mass) is again important and since
the particle is moving through a highly charged region, interaction is much more
probable if the particle itself is charged and, therefore, likely to come under the influ-
ence of the strong electric fields associated with the electron and nucleus. Since X and
gamma ray quanta are uncharged and have zero rest mass, they are difficult to stop
and higher energy photons require dense material such as lead to cause appreciable
attenuation.
The β– particle is more massive and is charged so it is stopped more easily—a few
mm of low atomic number materials such as perspex will usually suffice. Since it will be
shown in Chapter 3 that the mechanism of energy dissipation by X and gamma rays is
via secondary electron formation, a table of electron ranges in soft tissue will be helpful
(Table 1.4).
Protons and α particles are more massive than β– particles and are charged, so they
are stopped easily. α particles, for example, are so easily stopped, even by a sheet of
paper, that great care must be taken when attempting to detect them to ensure that
the detector has a thin enough window to allow them to enter the counting chamber.
Neutrons are more penetrating because, although of comparable mass to the proton,
they are uncharged.
One final remark should be made regarding the ranges of radiations. Charged particles
eventually become trapped in the high electric fields around nuclei and have a finite range.
Beams of X or gamma rays are stopped by random processes, and as shown in Chapter 3,
are attenuated exponentially. This process has many features in common with radioactive
decay. For example, the rate of attenuation by a particular material is predictable but the
radiation does not have a finite range.
TABLE 1.5
Approximate Values of Linear Energy
Transfer for Different Types of Radiation
Radiation            LET (keV μm⁻¹)
1 MeV γ rays 0.5
100 kVp X-rays 6
20 keV β– particles 10
5 MeV neutrons 20
5 MeV α particles 50
Insight
Different Forms of Energy
Mechanical Energy
This can take two well-known forms.
1. Kinetic energy, ½mv², where m is the mass of the body and v its velocity.
2. Potential energy, mgh, where g is the gravitational acceleration and h is the height of the
body above the ground.
Kinetic energy is more relevant than potential energy in the physics of X-ray production and the
behaviour of X-rays.
Electrical Energy
When an electron, charge e, is accelerated through a potential difference V, it acquires energy eV.
Thus if there are n electrons they acquire total energy neV. Note:
1. Current (i) is rate of flow of charge. Thus i = ne/t where t is the time. Hence, rearranging an
alternative expression for the energy in a beam of electrons is Vit.
2. Just as Vit is the amount of energy gained by electrons as they accelerate through a potential
difference V, it is also the amount of energy lost by electrons (usually as heat) when they
fall through a potential difference of V, for example, when travelling through a wire that has
resistance R.
3. If the resistor is ‘ohmic’, that is to say it obeys Ohm's law, then V = iR and alternative expressions for the heat dissipated are V²/R or i²R. Note, however, that many of the resistors encountered in the technology of X-ray production are non-ohmic.
Heat Energy
When working with X-rays, most forms of energy are eventually degraded to heat and when a
body of mass m and specific heat capacity s receives energy E and converts it into heat, the rise
in temperature ∆T will be given by
E = ms∆T
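Combining the electrical energy expression (Vit) with E = msΔT gives a rough feel for anode heating during an exposure. The exposure factors, anode mass and specific heat capacity below are illustrative assumptions rather than values from the text (about 0.13 J g⁻¹ K⁻¹ is typical of tungsten), and all of the electrical energy is assumed to appear as heat:

def anode_temperature_rise(kv, ma, time_s, mass_g, specific_heat_j_per_g_k):
    # E = V * i * t, all converted to heat in a mass m of specific heat capacity s
    energy_j = kv * 1000.0 * (ma / 1000.0) * time_s
    return energy_j / (mass_g * specific_heat_j_per_g_k)

# Hypothetical exposure: 80 kV, 500 mA, 0.1 s deposited in 100 g of tungsten anode
print(anode_temperature_rise(80, 500, 0.1, 100.0, 0.13))   # ~300 K temperature rise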
Mass Energy
As a result of Einstein’s work on relativity, it has become apparent that mass is just an alternative
form of energy. If a small amount of matter, mass m, is converted into energy, the energy released is E = mc², where c is the speed of EM waves. This change is encountered most frequently in radioac-
tive decay processes. Careful calculation, to about one part in a million, shows that the total mass
of the products is slightly less than the total mass of the starting materials, the residual mass having
been converted to energy according to the above equation. Annihilation of positrons (see Section
3.4.4) is another good example of the equivalence of mass and energy.
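A quick numerical check of mass–energy equivalence for positron annihilation (the rest mass and constants are standard rounded values, not taken from the text):

M_E = 9.11e-31   # electron (or positron) rest mass, kg
C = 3.0e8        # speed of light, m/s
EV = 1.6e-19     # joules per electron volt

rest_energy_kev = M_E * C ** 2 / EV / 1000.0
print(rest_energy_kev)        # ~511 keV per particle (about 512 with these rounded constants)
print(2 * rest_energy_kev)    # ~1.02 MeV shared between the two annihilation photons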
1.16 Conclusion
Ionising and non-ionising radiation may be used for imaging without a detailed mathematical
understanding of the underlying physics. However, to obtain the best images or quantitative
information from them in the safest possible manner, a full understanding of their physical
properties is imperative. The subsequent chapters in this book build on the basic background
information contained in this chapter to allow the maximum benefits to be achieved.
Further Reading
Allisy-Roberts P and Williams J 2008. Farr's Physics for Medical Imaging, 2nd edition. Saunders, Elsevier, pp 1–21, Chapter 1.
Bushberg J T, Seibert J A, Leidholdt E M and Boone J M 2002. The Essential Physics of Medical Imaging, 2nd edition. Lippincott, Williams and Wilkins, Philadelphia, pp 17–29.
Johns H E and Cunningham J R 1983. The Physics of Radiology, 4th edition. Charles C Thomas, Springfield, Chapter 2.
Meredith W J and Massey J B 1977. Fundamental Physics of Radiology, 3rd edition. Wright, Bristol.
Exercises
1. Describe in simple terms the structure of the atom and explain what is meant by
atomic number, atomic weight and radionuclide.
2. What is meant by the binding energy of an atomic nucleus? Define the unit in
which it is normally expressed and indicate the order of magnitude involved.
9. Radionuclide A decays into a nuclide B which has an atomic number one less than
that of A. What types of radiation might be emitted either directly or indirectly in
the disintegration process? Indicate briefly how they are produced.
10. Give typical values for the ranges of α particles and β– particles in soft tissue. Why
is the concept of range not applicable to gamma rays?
11. For an unknown sample of radioactive material explain how it would be possible
to determine by simple experiment
a. The types of radiation emitted
b. The half-life
12. State the inverse square law for a beam of radiation and give the conditions under
which it will apply exactly.
13. A surface is irradiated uniformly with a monochromatic beam of X-rays of wavelength 2 × 10⁻¹¹ m. If 20 quanta fall on each square cm of the surface per second, what is the intensity of the radiation at the surface? (Use data given in Section 1.11).
14. Place the following components in order of power dissipation:
a. A fluorescent light
b. An X-ray tube
c. An electric fire
d. A pocket calculator
e. An electric iron
2
Production of X-Rays
SUMMARY
• The photons emitted by an X-ray tube are not all of the same energy (wave-
length). There are two components: a continuous spectrum and one or more
line spectra—the origins of each are explained.
• The important difference between quantity of X-rays and quality of X-rays
produced is emphasised and the various factors affecting each of them are
discussed.
• Design features of the X-ray tube that are essential for high quality perfor-
mance in radiology are analysed.
• A good radiograph requires the field of view to be uniformly exposed to
X-rays. This is not a trivial requirement and both the limitations and meth-
ods for optimisation are discussed.
• An X-ray tube is subject to both electrical and thermal rating limits.
These will be discussed with particular reference to design features of
the tube.
• The elements of good quality assurance required to ensure that excessive
radiation is not used for each exposure and to keep repeat examinations to a
minimum, will be outlined.
CONTENTS
2.1 Introduction........................................................................................................................... 25
2.2 The X-ray Spectrum.............................................................................................................. 26
2.2.1 The Continuous Spectrum...................................................................................... 27
2.2.2 The Low and High Energy Cut-Off....................................................................... 27
2.2.3 Shape of the Continuous Spectrum....................................................................... 28
2.2.4 Line or Characteristic Spectra................................................................................. 30
2.2.5 Factors Affecting the X-ray Spectrum................................................................... 31
2.2.5.1 Tube Current, IT.......................................................................................... 31
2.2.5.2 Time of Exposure....................................................................................... 31
2.2.5.3 Applied Voltage.......................................................................................... 31
2.2.5.4 Waveform of Applied Voltage.................................................................. 32
2.2.5.5 Filtration...................................................................................................... 33
2.2.5.6 Anode Material........................................................................................... 33
2.1 Introduction
When electrons are accelerated to energies in excess of 5 keV and are then directed onto a
target surface, X-rays may be emitted. The X-rays originate principally from rapid decelera-
tion of the electrons when they interact with the nucleus of the target atoms. These X-rays
are known as ‘Bremsstrahlung’ or braking radiation.
The essential features of a simple, low output, X-ray tube are shown in Figure 2.1 and
comprise:
FIGURE 2.1
Essential features of a simple, stationary anode X-ray tube: an evacuated enclosure containing a heated metal filament cathode and a copper rod anode (which conducts away heat, aided by cooling oil and fins) carrying a tungsten insert as the target, with a high potential difference applied between cathode and anode. Note that for simplicity only X-ray photons passing through the window are shown. In practice they are emitted in all directions.
3. A metal anode (the target) with a high efficiency for conversion of electron energy
into X-ray photons
4. A thinner window in the chamber wall that will be transparent to most of the
X-rays
Insight
Electrical Supply to the Tube
This tube will be energised by an electrical supply generator with the following features:
In this chapter the mechanisms of X-ray production will be considered in detail and the
main components of a modern X-ray tube will be described. X-ray tubes may be used clin-
ically for either radiography—the creation of still images with a short pulse of X-rays, or
for fluoroscopy—the production of images in real time using continuous X-ray exposure.
Physical factors affecting the design and performance of X-ray sets and the implications
for obtaining high quality images will be discussed for both applications.
FIGURE 2.2
Spectrum of radiation incident on a patient from an X-ray tube operating at 100 kVp using a tungsten target and 2.5 mm aluminium filtration.
FIGURE 2.3
Schematic representation of the interaction of electrons with matter. (a) Interaction resulting in the generation of low energy electromagnetic radiations (infra red, visible, ultraviolet and very soft X-rays). All these are rapidly converted into heat. (b) Interaction resulting in the production of an X-ray. (c) Production of an X-ray after previous interactions that resulted only in heat generation.
of the anode, by the window of the X-ray tube and by any added filtration—that the inten-
sity emerging is negligible. X-ray attenuation is discussed in detail in Chapter 3.
The high energy cut-off occurs because all the energy of an electron may, very occasionally,
be used to produce a single X-ray photon. Hence for any given electron energy, determined by
the accelerating voltage across the X-ray tube, there is a well-defined maximum X-ray energy
equal to the energy of a single electron. This corresponds to a minimum X-ray wavelength.
Note that it is not possible, by quantum theory, for the energy of several electrons to be stored
up in the anode to produce a jumbo-sized X-ray quantum of energy.
It is useful to calculate the electron velocity, the maximum X-ray photon energy and the minimum X-ray photon wavelength associated with a given tube kilovoltage. To avoid complications associated with relativistic effects, a tube operating at only 30 kV is considered.
The energy of each electron is given by the product of its charge (e coulombs) and the accelerating voltage (V volts):

ε = eV = 1.6 × 10⁻¹⁹ × 3 × 10⁴ = 4.8 × 10⁻¹⁵ J

Note the distinction between an accelerating voltage, measured in kV, and the electron energy after passing from the cathode to the anode, measured in keV.
The electron velocity can be obtained from the fact that its kinetic energy is ½mₑvₑ², where mₑ is the mass of the electron and vₑ its velocity. Hence

½mₑvₑ² = 4.8 × 10⁻¹⁵ J

Since mₑ = 9 × 10⁻³¹ kg, vₑ is approximately 10⁸ m s⁻¹, which is one-third the speed of light. It can be seen from this example that relativistic effects are important even at quite low tube kilovoltages.
From above, the maximum X-ray photon energy εmax is 4.8 × 10⁻¹⁵ J and the minimum wavelength is obtained by substitution in

ε = hf = hc/λ

(h is the Planck constant, c the speed of light and λ the wavelength of the resulting X-ray.) Hence

λmin = hc/εmax = (6.6 × 10⁻³⁴ × 3 × 10⁸)/(4.8 × 10⁻¹⁵) ≈ 4 × 10⁻¹¹ m
Note that calculations giving the maximum X-ray photon energy and minimum wave-
length are valid even when the electrons travel at relativistic speeds.
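The 30 kV worked example above is compact enough to script; the constants are rounded as in the text and the velocity is the non-relativistic estimate used there:

import math

E_CHARGE = 1.6e-19   # electron charge, C
M_E = 9.0e-31        # electron mass, kg (value used in the text)
H = 6.63e-34         # Planck constant, J s
C = 3.0e8            # speed of light, m/s

kv = 30.0
energy_j = E_CHARGE * kv * 1000.0              # eV product: 4.8e-15 J
velocity = math.sqrt(2.0 * energy_j / M_E)     # from 1/2 m v^2: ~1e8 m/s
lambda_min = H * C / energy_j                  # minimum wavelength: ~4e-11 m

print(energy_j, velocity, lambda_min)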
FIGURE 2.4
A simplified explanation of the shape of the continuous X-ray spectrum. (a) Production of X-rays from a very thin anode. Note that the intensity of the beam in the small range ε to ε + dε will be equal to the number of photons per square metre per second multiplied by the photon energy. Fewer high energy photons are produced but their energy is higher and the product is constant. (b) Production of X-rays from a thicker anode treated as a series of thin anodes. (c) X-ray emission (solid line) compared with X-ray production (dotted line).
A thick anode may now be thought of as composed of a large number of thin layers.
Each will produce a similar distribution to that shown in Figure 2.4a, but the maximum
photon energy will gradually be reduced because the incident electrons lose energy as
they penetrate the anode material. Thus, the composite picture for X-ray production might
be as shown in Figure 2.4b.
However, before the X-rays emerge, the intensity distribution will be modified in two
ways. First, X-rays produced deep in the anode will be attenuated in reaching the surface
of the anode and second, all the X-rays will be attenuated in penetrating the window of the
X-ray tube. Both processes reduce the intensity of the low energy radiation more than that
of the higher energies so the resultant is the solid curve in Figure 2.4c.
Insight
Effective Energy
Two properties of the spectrum that are sometimes mentioned are the photon energy at which the
intensity is maximum (εImax) and the mean energy (εmean). Since the spectrum is not symmetrical,
they are not the same and neither has much practical significance.
A more useful quantity is the effective energy (εeff ). This is defined as the energy of a narrow
beam of monochromatic radiation that would have the same penetrating power (measured in
terms of half value layer (HVL) or linear attenuation coefficient) as the mixed energy spectrum. For
a well-filtered beam (see Section 3.9) εeff will be close to, but not identical with εImax and approxi-
mately one-third εmax.
FIGURE 2.5
Production of a line spectrum. (a) The incoming electron removes an electron from the K shell of a tungsten atom (Z = 74). (b) An electron from the L shell falls into the K shell potential well and an X-ray photon with a well-defined energy εL – εK is emitted. Only the K and L shell electrons are shown.
Insight
Effect of kV on Exposure Factors
Note that the effect of increasing kV on the amount of radiation (mAs) required for an exposure
is generally greater than that implied by E ∝ kV2. At higher kV the radiation penetrates the patient
more easily and detector sensitivity varies with kV. For a film-screen receptor a very approximate
relationship between kV1 and kV2
TABLE 2.1
Efficiency of Conversion of Electron Energy into
X-rays as a Function of Tube Kilovoltage
Tube Kilovoltage (kV) Heat (%) X-rays (%)
60 99.5 0.5
200 99 1.0
4000 60 40
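Table 2.1 can be compared with a commonly quoted rule of thumb for the fraction of the electron energy converted to X-rays, efficiency ≈ 9 × 10⁻¹⁰ ZV (V in volts). This approximation is not taken from the text and is only an order-of-magnitude guide (it underestimates the megavoltage entry), but for a tungsten target it reproduces the diagnostic-energy rows reasonably well:

def xray_efficiency(z, kilovoltage):
    # Approximate fraction of electron energy converted to X-rays (rule of thumb)
    return 9e-10 * z * kilovoltage * 1000.0

for kv in (60, 200, 4000):
    print(kv, "kV:", round(100 * xray_efficiency(74, kv), 1), "% X-rays")   # tungsten, Z = 74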
FIGURE 2.6
Examples of different voltage waveforms. (a) Mains supply; (b) Half wave rectification; (c) Full wave rectification using a bridge rectifier; (d) Three phase supply (with rectification).
With the high-frequency generators used routinely nowadays, an almost constant voltage output can be achieved after rectification and smoothing.
Variation in the voltage supply is known as the voltage ripple and is defined as (Vmax – Vmin)/Vmax. Variations in V will affect the instantaneous output. For general radiography a ripple
of 5% may be acceptable, but when highly uniform X-ray output is essential, for example,
in computed tomography (CT), ripple must be reduced to less than 1%. This subject is
considered in more detail in Section 2.3.4. In future, in accordance with standard prac-
tice, operating voltages will be expressed in kVp to emphasise that the peak voltage with
respect to time is being given.
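The ripple definition above is straightforward to apply; the waveform values in this Python sketch are purely illustrative:

def ripple(v_max, v_min):
    # Voltage ripple (Vmax - Vmin) / Vmax, expressed as a fraction
    return (v_max - v_min) / v_max

print(ripple(80.0, 76.0))   # a dip from 80 kV to 76 kV within each cycle gives 5% ripple
print(ripple(80.0, 79.5))   # a well-smoothed supply dipping only to 79.5 kV gives ~0.6%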
2.2.5.5 Filtration
This also has a marked effect on both the quantity and quality of the X-ray beam, not
only reducing the overall output but also reducing the proportion of low energy photons.
Special filters (K-edge filters) can be used to create a window of transmitted X-ray energies
and thus reduce the number of both high and low energy photons. The effect of beam fil-
tration is considered in detail in Section 3.9.
commonly called the space charge. The number of electrons in the space charge tends to a
self-limiting constant value dependent on the filament temperature.
For reasons related primarily to geometrical unsharpness in the image, a small target for
electron bombardment on the anode is essential. However, unless special steps are taken,
the random thermally induced velocities and mutual repulsion of the electrons leaving
the cathode will cause a broad beam to strike the anode. Therefore the filament is sur-
rounded by an auxiliary electrode, or focussing cup (the Wehnelt electrode), typically made
of nickel. This electrode provides an electric field which exercises a focusing action on the
electrons by changing the equipotential lines and pressing the electrons together to pro-
duce a small focal spot on the anode. Originally, the Wehnelt electrode was maintained at
the same potential as the emitter but smaller spots can be obtained by making its potential
slightly more negative.
If the Wehnelt electrode is made about 2 kV more negative than the filament, or an addi-
tional electrode or ‘grid’ is used to provide this voltage, the electron beam will be stopped
completely. This technique, known as grid control, can be used to improve the output profile
of the X-rays since pulsed control of the current switches the beam on and off with very lit-
tle inertia. A more recent alternative method of output control is primary pulsing. This is one
of the benefits of recent developments in high frequency generation (see Section 2.3.4) and
permits the direct modulation of tube voltage. Output profiles with steep rise and fall times
are important when rapid pulses or very short exposures (a few milliseconds) are required,
for example in fluorography, digital subtraction angiography, CT and paediatrics.
Most diagnostic X-ray tubes have a choice of focal spot size. A smaller focal spot pro-
duces sharper images (see Section 6.9.1) but places greater demands on heat dissipation in
the anode (see Section 2.5.3). Some tubes have a dual filament assembly, each filament hav-
ing its own focusing cup producing two spots of different sizes. Alternatively, the negative
voltage bias on the Wehnelt electrode may be varied to refocus the electron output from a
single filament electrostatically.
Note that spot size does vary somewhat with tube current and tube kilovoltage since the
focusing action cannot be readily adjusted to compensate for variations in the mutual elec-
trostatic repulsion between electrons when either their density or their energy changes.
The effect may not be apparent if tube current is increased from 100 mA to 300 mA at
140 kVp but at 80 kVp the focal spot size may increase by a factor of two or more.
The effective or apparent size of the focal spot on the anode is smaller than the actual
focal spot because of the anode angle. The smaller the anode angle the smaller the appar-
ent focal spot size (see Sections 2.4 and 6.9.1).
The target material should have:
1. A high conversion efficiency for electrons into X-rays. High atomic numbers are
favoured since the X-ray intensity is proportional to Z. At 100 keV, lead (Z = 82)
converts 1% of the energy into X-rays but aluminium (Z = 13) converts only
about 0.1%.
2. A high melting point so that the large amount of heat released causes minimal
damage to the anode.
3. A high conductivity so that the heat is removed rapidly.
4. A low vapour pressure, even at very high temperatures, so that atoms are not
boiled off from the anode.
5. Suitable mechanical properties for anode construction.
In stationary anodes the target area is pure tungsten (W) (Z = 74, melting point 3370°C)
set in a metal of higher conductivity such as copper. Originally, rotating anodes were also
made of pure tungsten. However, at the high temperatures generated in the rotating anode
(see Section 2.3.3), deep cracks developed at the point of impact of the electrons. The del-
eterious effects of damaging the target in this way are discussed in Sections 2.3.5 and 2.4.
The addition of 5%–10% rhenium (Re) (Z = 75, melting point 3170°C) greatly reduced the cracking by increasing the ductility of tungsten at high temperatures. The wear resistant rhenium alloy in the focal spot path ensures minimal ageing, and thus high and constant exposure values over a long life. However, pure W/Re anodes would be extremely expensive
so molybdenum is now chosen as the base metal. Molybdenum (Z = 42, melting point
2620°C) stores twice as much heat, weight for weight, as tungsten, but the anode volume
is now greater because molybdenum has a smaller density than tungsten. As shown in
Figure 2.7a only a thin layer of W/Re is used, to prevent distortion that might arise from
the differences in thermal expansion of the different metals.
[Figure 2.7b labels: mean radius of rotation r ≈ 40 mm; bombarded spot dimensions x ≈ 6 mm and y ≈ 2 mm; stationary spot of width x shown for comparison.]
FIGURE 2.7
(a) Detail of the target area on a modern rotating anode. (b) Principle of the rotating anode showing the area
bombarded in a 0.01 s exposure at 50 Hz.
Anode design affects both heat dissipation and the spatial distribution of X-rays. Design features related primarily to heat dissipation are discussed below; the spatial distribution of X-rays is considered later.
[Figure 2.8 labels: vacuum envelope and shielding, stator, cathode connections, anode connections, bearings, rotor, anode.]
FIGURE 2.8
Design features of a rotating anode X-ray tube.
It may be shown that, if the exposure time is long enough for the anode to rotate at least once, the heat is spread over an area larger than that of an equivalent stationary spot by a factor of approximately 2πr/y.
For the 80 mm diameter anode suitable for general radiographic work shown in Figure 2.7b, this is an improvement of about 2πr/y ≈ (6 × 40)/2 = 120 times and the heat input can be increased considerably (although not by a factor of 120). When high loading is required, that is high heating rates and high heat storage capacity, anodes up to 200 mm in diameter may now be used.
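A small sketch of the area argument reconstructed above may help; the factor 2πr/y and the dimensions r ≈ 40 mm and y ≈ 2 mm are those read from Figure 2.7b, and the calculation is an approximation that ignores the detailed shape of the focal track.

    import math

    def track_to_spot_area_ratio(r_mm, y_mm):
        """Ratio of the focal track area (2*pi*r times the spot width x) to the area of
        an equivalent stationary spot (x times y); x cancels, leaving 2*pi*r / y."""
        return 2 * math.pi * r_mm / y_mm

    # Mean radius of rotation ~40 mm and spot length y ~2 mm (Figure 2.7b).
    print(round(track_to_spot_area_ratio(40, 2)))   # ~126, the 'about 120 times' quoted above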
In addition to anode size, surface area, disc mass and rate of rotation all affect the load-
ing. If a graphite block is brazed onto the back of an anode, its low mass and high melt-
ing point increase the heat storage capacity, and the heat radiating efficiency is increased
because of the bigger anode surface and the better emission coefficient of black graphite.
Rotation rates range from 3000–3600 rpm with 50–60 Hz mains supply, up to 9–10,000
rpm with a 3-phase supply, ensuring that the anode rotates several times during even the
shortest exposure, thus maximising the area over which the heat is distributed. However,
this does create some problems with respect to the type of mounting and cooling mecha-
nism. Adequate electrical contact is maintained via bearings on which the anode rotates,
but the area of contact is quite insufficient for adequate heat conduction. Either ball bear-
ings or sleeve bearings may be used (see Insight).
Insight
Bearing Systems
• They form a connection between a very hot anode plate and a cold environment.
• They must operate in a vacuum.
• They must withstand high turning speeds and weight loads.
Lubrication of ball bearings cannot be by oil or grease because of the vacuum required in the
X-ray tube. However, the lubricants must be soft, deformable materials that are stable at high
temperatures and have low vapour pressure under vacuum. Dry metallic lubricants such as silver
paste are used.
A recent development is the introduction of liquid, sliding bearings which utilise the aquaplan-
ing effect of liquid metals. A good analogy is a locked car wheel on a wet road surface. Water
accumulates between the tyre and road and forms a bow wave. As pressure builds a wedge is cre-
ated between them and eventually a film of water is forced in, separating the tyre and road along
the whole contact area. For sliding bearings a liquid metal (e.g. a eutectic of gallium, indium and tin which melts at –10°C) is used in a 20 µm gap. An important advantage of the new design
is the extra anode cooling (1–2 kW) by fast heat flux from the anode through the liquid metal into
the cooling system.
Since the anode is inside an evacuated tube, there are no heat losses by convection. The initial mode of heat transfer from the anode to the cooling oil must therefore be primarily by radiation, at a rate proportional to (anode temperature)⁴ − (oil temperature)⁴. With a rotating anode, heat loss by conduction along the anode support is deliberately minimised since it might result in overheating of the bearings. Thus the rotating anode is mounted on a thin
rod of low conductivity material such as molybdenum. Care must be taken that the length
and method of support of this rod ensure that the anode remains stable when rotating.
Although radiation remains the primary mechanism of heat loss, other developments
have improved heat dissipation and heat storage.
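Because the dominant loss mechanism is radiation, the cooling rate rises very steeply with anode temperature. The following sketch applies the Stefan–Boltzmann law to the (temperature)⁴ dependence quoted above; the emissivity and radiating area are assumed values chosen only to give a plausible order of magnitude and are not taken from the text.

    STEFAN_BOLTZMANN = 5.67e-8  # W m^-2 K^-4

    def radiated_power(t_anode_k, t_oil_k, area_m2, emissivity):
        """Net power radiated from the anode towards the oil, proportional to
        (anode temperature)^4 - (oil temperature)^4."""
        return emissivity * STEFAN_BOLTZMANN * area_m2 * (t_anode_k**4 - t_oil_k**4)

    # Assumed values: anode face at 2000 K, oil at 350 K, radiating area 30 cm^2, emissivity 0.6.
    print(round(radiated_power(2000.0, 350.0, 30e-4, 0.6)))  # roughly 1.6 kW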
[Figure 2.9 axes: maximum temperature (%) vs. time/s (log scale, 0.1–10000 s); cooling curves for a conventional 7.5 MHU tube and the STRATON tube.]
FIGURE 2.9
Cooling curves for a rotating envelope tube (Siemens STRATON) compared with cooling curves for conventional
tubes. (MHU = mega heat unit (see Section 2.5.5)) (With permission from Oppelt A (ed.) Imaging systems for medi-
cal diagnostics, chapter 12, X-ray components and systems, Siemens, Erlangen, 2005, 264–412. Dura and STRATON
are registered trademarks of Siemens.)
For an ideal transformer Vout/Vin = n2/n1, where n1 and n2 are the number of turns on the primary and secondary coils, respectively.
Note that, for example, an output of 100 kV at 50 mA drawn from a 240 V mains supply requires an input current of

I_in = (100 × 10³ × 50 × 10⁻³)/240 ≈ 20 A

so input currents are very high. (Hence the requirement for battery-powered or capacitor discharge mobile X-ray units (see Section 2.6).)
3. Power loss occurs in all transformers, and the amount depends on working condi-
tions, especially Iin. Hence Vout and Iout also vary and auxiliary electrical circuits
are required to stabilise outputs from an X-ray set.
4. The efficiency of a transformer increases as the frequency of operation rises and
consequently its size decreases.
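The two relationships used in the note above, the turns ratio and the equality of input and output power for a loss-free transformer, are summarised in this short sketch; the numbers reproduce the 100 kV, 50 mA, 240 V example and are otherwise illustrative.

    def secondary_voltage(v_in, n1, n2):
        """Ideal transformer: Vout = Vin * n2 / n1."""
        return v_in * n2 / n1

    def input_current(v_out, i_out, v_in):
        """For a loss-free transformer input and output power are equal, so Iin = Vout * Iout / Vin."""
        return v_out * i_out / v_in

    # Stepping 240 V mains up to ~100 kV needs a turns ratio n2/n1 of roughly 420.
    print(round(secondary_voltage(240, 1, 417)))            # ~100 kV
    # 100 kV at 50 mA drawn from 240 V mains: input current of about 20 A, as in the note above.
    print(round(input_current(100e3, 50e-3, 240), 1))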
FIGURE 2.10
Essential features of a simple transformer.
Figure 2.14 (see later) also shows, on the extreme left, an autotransformer. An autotrans-
former comprises one winding only and works on the principle of self-induction. Since the
primary and secondary circuits are in contact, it cannot transform high voltages or step up
from low to high voltages. However, it does give a variable secondary output on the low
voltage side of the transformer and hence controls kV directly.
[Figure 2.11 labels: AC voltage supply, bridge of four rectifiers, X-ray tube with anode and cathode.]
FIGURE 2.11
Essential features of a full wave rectified supply. (Solid) and (dotted) arrows show that irrespective of whether
A or B is at a positive potential, the current always flows through the X-ray tube in the same direction.
1. High output—The output is very high and comparable with a three phase, 12-peak
generator. At 100 kV (virtually DC), 0.1 A gives 10 kW so for a 0.1 s exposure there
are 1000 J of energy to dissipate.
2. Reduced voltage ripple—At this high frequency, because pulses are very short the
kV never falls very far below its peak value. Thus the average energy of the X-ray
photons is higher than for a three phase supply and the output is very constant.
3. Compact size—The transformer equation, U/(f n A) = constant (where n is the number of turns on the transformer and A its cross-sectional area), shows that if f is increased by a factor of 100, say from 50 Hz to 5 kHz, nA may be reduced by a similar amount. Since the efficiency has been improved the transformer is much smaller, perhaps one-tenth the size of a three phase 12 peak generator.
4. Rapid response—The high voltage is switched on and off, and its level may be regu-
lated even during exposure, under feed-back control of the inverter. The rise time
of the tube voltage can be less than 200 µs.
5. Long-term stability—The tube current is more stable at the higher frequency f₂ and is independent of the voltage.
6. Timer precision—The precision of the exposure timer can be improved.
7. Voltage range—The generator may be used across the full kV range from mammog-
raphy to CT.
[Figure 2.12 labels: Stage 1, Stage 2, Stage 3; rectifier, inverting converter, rectifier; voltages and frequencies u1 f1, u0, u2 f2.]
FIGURE 2.12
Schematic representation of frequency converter.
FIGURE 2.13
Illustration of the use of a capacitor for voltage smoothing. For explanation see text.
TABLE 2.2
Nominal Values of Voltage Ripple for Different Generators
Voltage Wave Form Ripple (%)
Half wave/full wave 100
Full wave + capacitor smoothing 20
Three phase – 6 pulse unsmoothed 15
Three phase – 12 pulse unsmoothed 6
High frequency generatora (typical) 1–4
High frequency generatora (attainable but expensive) 0.1
a Note that a high frequency output has to be smoothed. Otherwise it
would be just a half wave rectified profile at high frequency.
[Figure 2.14 labels: mains in (414 V AC, earthed), exposure switch, autotransformer, kV selector, transformer, mA control, timer, filament supply volts, meters V and AF.]
FIGURE 2.14
Simplified representation of the position of kV and current meters in the electrical circuit.
AF measures the filament supply current (IF) which may be adjusted to give the required
thermionic emission before exposure starts. The actual tube current flowing during expo-
sure (Ic) is measured by ammeter Ac.
Insight
Borosilicate Glass versus Metal/Ceramic Envelope
Glass tubes are more likely to break during manufacture and it is difficult to adjust mechanical
tolerances inside the glass housing. However, they are cheaper and so are still used for standard
applications, that is, radiography with smaller power demands.
Metal/ceramic envelopes allow better mechanical precision, less manufacturing wastage and
can resist larger mechanical stress. The stress tolerance is particularly important in CT and fast-
moving 3D imaging in angiography.
The envelope also provides a vacuum seal to the metallic components that protrude
through it. Great care must be taken at the manufacturing stage to achieve a very high
level of vacuum before the tube is finally sealed. The electrons have a mean free path of
several metres. If residual gas molecules are bombarded by electrons, the electrons may
be scattered and strike the walls of the envelope, thereby causing reactions that result in
release of gas molecules and further reduction of the vacuum.
The presence of atoms or molecules of gas or vapour in the vacuum, whatever their ori-
gin, is likely to have a deleterious effect on the performance of the tube. For example, metal
evaporation from the anode can cause a conducting film across a glass envelope, thereby
distorting the pattern of charge across the tube. This can change the output characteristics because the design assumes that the flow of electrons from cathode to anode is influenced by the repulsive effect of a static layer of charge on the tube envelope. If that charge can leak away through such a film, electrons in the beam are no longer repelled by the tube envelope and some deviate towards it. This diversion
of current may significantly reduce tube output. Metal enclosures repel ion deposits so they
are less susceptible to build-up of tungsten ions from tungsten vapour. They also collect elec-
trons scattered from the anode, thereby reducing extrafocal radiation (see Section 2.4).
Both residual gas and anode evaporation cause a form of tube instability which may
occasionally be detected during screening as a kick on the milliammeter as discharges
take place. In the extreme case, the tube goes ‘soft’ and arcs over during an exposure.
The tube housing serves several purposes. It:
• Shields against stray X-rays because it is lined with lead—leakage must not exceed
1 mGy in 1 h at 1 m (see Section 14.5.2)
• Provides an X-ray window—which filters out some low energy X-rays
• Contains the anode rotation power source
• Provides high voltage terminals
• Insulates the high voltage
• Allows precise mounting of the X-ray tube envelope
• Provides a means for mounting the X-ray tube
• Provides a reference and attaching surface for X-ray beam collimation devices
• Contains the cooling oil
and a large current to flow which operates a relay closing the primary circuit. The switch-
ing is very rapid and is suitable for most radiographic exposures. The circuit used to drive
the device is, however, quite complicated and beyond the scope of this book.
Switching the high voltage side of the transformer can be undertaken in two ways. The
large electrical power that has to be accommodated (up to 100 kW) precludes the use of
solid state devices and triode valves have to be used. Alternatively the exposure can be
regulated with the Wehnelt electrode (grid control)—see Section 2.3.1. This grid is used to
switch the tube on and off very rapidly by changing its voltage from negative to just posi-
tive relative to the cathode. When negative, even though the full kVp is applied across the
tube, electrons cannot move from the cathode to the anode.
[Figure 2.15 axes: instantaneous kV (% of kVp, 0–100) vs. time, with points A–H marked along the time axis.]
FIGURE 2.15
Timing uncertainties due to lag in switching mechanism. (A) Exposure ‘on’ command; (B) X-ray output starts;
(C) 75% maximum kVp; (D) 100% of maximum kVp; (E) Exposure ‘off’ command; (F) Response to ‘off’ com-
mand; (G) 75% of maximum kVp; (H) X-ray output terminates. C to G is the International Electrotechnical
Commission (IEC) definition of irradiation time. There is a corresponding uncertainty in output (patient dose),
shown shaded. The shorter the exposure, the greater the percentage uncertainty. (With permission from Oppelt
A (ed.) Imaging systems for medical diagnostics, chapter 12, X-ray components and systems, Siemens, Erlangen, 2005,
264–412.)
[Figure 2.16 axes: capacitor voltage (falling from V0 towards VS) vs. time (s); discharge curves for resistances R1, R2 and R3 reaching VS at times t1, t2 and t3.]
FIGURE 2.16
Curves showing the rate of discharge of a capacitor through resistors of different resistance. The time taken to reach VS, at which point the switching mechanism would operate, depends on the value of R.
of discharge of the capacitor depends on the values of C and R. A family of curves for fixed
C and variable R is shown in Figure 2.16. Note that a large resistance reduces the rate of
flow of charge so the rate of fall of V is slower.
These curves may be used as the basis for a timer if a switching device is arranged to
operate when the potential across C reaches say Vs.
Q = 1000 × 1/1000, or ± 0.1 mAs
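The exponential discharge sketched in Figure 2.16 can be written as V(t) = V0 exp(−t/RC), so the time for the capacitor voltage to fall to the switching level VS is t = RC ln(V0/VS). The component values below are purely illustrative and are not taken from the text.

    import math

    def discharge_voltage(v0, r_ohms, c_farads, t_seconds):
        """Voltage across a capacitor discharging through a resistance R."""
        return v0 * math.exp(-t_seconds / (r_ohms * c_farads))

    def time_to_reach(v0, vs, r_ohms, c_farads):
        """Exposure time selected by the choice of R: t = RC ln(V0/VS)."""
        return r_ohms * c_farads * math.log(v0 / vs)

    # Assumed values: a 1 uF capacitor charged to 10 V with the switch operating at 3.7 V.
    for r in (50e3, 100e3, 200e3):  # three values of R, as in Figure 2.16
        print(f"R = {r/1e3:.0f} kOhm -> t = {time_to_reach(10.0, 3.7, r, 1e-6)*1e3:.1f} ms")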
2.3.6.5 The Photo Timer
The weakness of any timer that predetermines the exposure is that a change in any fac-
tor which affects the amount of radiation actually reaching the receptor, notably patient
attenuation, will alter the response. For a digital detector it may be possible to adjust for
this by altering the window level (see Section 6.3.3) but for film the blackening will change.
Thus the skill of the radiographer in estimating the thickness of the patient and choos-
ing the correct exposure is of great importance. In the photo timer the exposure is linked
more directly to the amount of radiation reaching the receptor. This is known as automatic
exposure control or AEC.
One design, used especially when film is the receptor, places small ionisation chamber
monitors in the cassette tray system between the patient and the film-screen combination.
The amount of radiation required to produce a given degree of film blackening with a
given film-screen combination under standard development conditions is known, so when
the ion chamber indicates that this amount of radiation has been received, the exposure
is terminated. This type of exposure control does not need to be ‘set’ before each expo-
sure, but some freedom of adjustment is provided to allow for minor variations in film
blackening if required. Adjustment will also be required if screens of different sensitivity
are used.
As an alternative to ion chambers, photomultiplier tubes (see Section 4.8) may be used
after the X-rays have been converted to light by a phosphor. Some have the disadvantage,
however, of being X-ray opaque so they must be placed behind the cassette, where the
X-ray intensity is low, and special radiolucent cassettes must be used. Others use phos-
phor-coated lucite, which can be placed in front of the cassette as it does not attenuate the
X-ray beam. The light produced is internally reflected to the side where the photomulti-
plier can view it. The energy response of the phosphor probably differs from that of the
receptor and some form of compensation must be built into the circuitry by using software
control.
A weakness with some types of phototimer is that the ion chamber or photomultiplier
tube only monitors the radiation reaching a small part of the receptor and this may not be
representative of the radiation reaching the rest of the receptor. This problem can be partially
overcome by using several small ion chambers, usually three, and controlling the exposure
with the one that is closest to the region of greatest interest on the resulting image.
There must also be a backup exposure timer so that should the AEC fail, the patient
exposure, although longer than necessary, will be terminated without intervention by the
operator. It should be noted that this backup time should not be set at the thermal limit of
the tube as this could result in a large radiation dose to the patient.
It is essential that high tension cables are not twisted or distorted in any other manner
that might result in breakdown of the insulation. They must not be load bearing.
1. The radiation intensity reaching the detector is still not quite uniform, being max-
imum near the centre of the field of view. This is due to
(a) An inverse square law effect—radiation reaching the edges of the field has to
travel further
(b) A small obliquity effect—beams travelling through the patient at a slight angle
must traverse a greater thickness of the patient and are thus more attenuated
Neither of these factors is normally of great practical importance.
[Figure 2.17 labels: (a) polar diagram with angles θ = 0, ±45°, ±90° and 180° about an electron beam incident on a thin metal target; (b) anode angle α = 30°, incident electron beam, site of X-ray production deep in the anode, X-rays emitted towards points A and B.]
FIGURE 2.17
(a) Approximate spatial distribution of X-rays generated from a thin metal target bombarded with 40 keV elec-
trons. This figure is known as a polar diagram: the distance of the curve from the origin represents the relative
intensity of X-rays emitted in that direction. The polar diagram that might be obtained with 100 keV electrons is
shown dotted. (b) The effect of self-absorption within the target on X-ray production from a thick anode.
2. The anode angle selected does not remove the asymmetry completely and this is
known as the heel effect. The effect of X-ray absorption in the target, which results
in a bigger exposure at A than at B, is more important than asymmetry in X-ray
production, which would favour a bigger exposure at B.
3. Some compensation for the heel effect can be achieved by tilting the filter. The left
hand edge of the beam will pass through a smaller thickness of filter than the right
hand edge. This modification is being used in some mammography tubes.
4. No such asymmetry exists in a direction normal to that of the incident electron
beam so if careful comparison of the blackening on the two sides of the film is
essential the patient should be positioned accordingly although care must be taken
balancing a tall patient at right angles to the table.
5. The shape of the exposure profile is critically dependent on the quality of the
anode surface. If the latter is pitted owing to overheating by bombarding elec-
trons, much greater differences in exposure may ensue.
6. An angle of about 13°–16° is frequently chosen for general radiography and this has one further benefit. One linear dimension of the effective spot for the production of
[Figure 2.18 labels: incident electrons, shield window, angle of filter, area of heel effect, beam intensity, with percentage exposure rate contours at 50, 75 and 100.]
FIGURE 2.18
Variation of X-ray intensity across the field of view for a typical anode target angle.
X-rays is less than the dimension of the irradiated area by a factor equal to sin α. Sin 13° is about 0.2, so angling the anode in this way allows the focusing requirement on the electron beam to be relaxed whilst ensuring a good focal spot for X-ray production (Figure 2.19). This is known as the line focus principle. If a very small focal spot (∼0.3 mm) is required, a smaller angle, perhaps only 6°, may be used. Note that with
a small anode angle, the heel effect greatly restricts the field size. This may not be a
problem if the field of view is inherently small but, in general, the only compensation
is to increase the focus-receptor distance. For example, if the minimum acceptable
variation in optical density across the field of view is 0.2, for a film size of 43 cm ×
35 cm the minimum focus-film distance increases from about 110 cm for a 16° angle to 150 cm for a 12° angle. There is a consequent loss of intensity at the film due to the
inverse square law. Some X-ray tubes have used anodes with two angles so that the
best angle for the focal spot size chosen can be used, but this is rare now.
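The line focus principle lends itself to a one-line calculation: the effective (apparent) focal spot length viewed along the beam axis is the bombarded length multiplied by sin α. The bombarded length used below is an assumed value for illustration only.

    import math

    def effective_focal_length(bombarded_length_mm, anode_angle_deg):
        """Line focus principle: apparent spot length = bombarded length x sin(anode angle)."""
        return bombarded_length_mm * math.sin(math.radians(anode_angle_deg))

    # An assumed 5 mm long bombarded strip viewed at different anode angles.
    for angle in (16, 13, 6):
        print(f"{angle:>2} degrees -> effective length {effective_focal_length(5.0, angle):.1f} mm")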
Even with a well-designed anode, a certain amount of extrafocal radiation arises from regions of the anode outside the focal spot. These X-rays may be the result of poorly colli-
mated electrons but are more usually the consequence of secondary electrons bouncing off
the target and then being attracted back to the anode remote from the focal spot. Note that
extrafocal radiation is not scattered radiation. Extrafocal X-rays may contribute as much as
10%–15% of the total output exposure of the tube but are of lower average energy. Many
of them will fall outside the area defined by the light beam diaphragm and under extreme
conditions may cast a shadow of the patient (Figure 2.20).
[Figure 2.19 labels: (a) α = 45°; (b) α < 45°; in each case the dimension of the area bombarded by electrons on the anode and the dimension of the effective X-ray source are marked.]
FIGURE 2.19
The effect of the anode angle on the effective focal spot size for X-ray production. (a) If the anode is angled at 45°,
the effective spot for X-ray production is equal to the target bombarded by electrons. (b) If the anode is angled
at less than 45° to the vertical the effective spot for X-ray production is less than the bombarded area. Note that
in all cases these areas are measured normal to the X-ray beam. The actual area of the impact of electrons on the
anode will be greater for reasons explained in the text.
Over the region of interest, the principal effect of extrafocal radiation is that it creates a
uniform low level X-ray intensity. This contributes to the reduction in contrast produced by
scattered radiation (see Section 6.5). Since this reduces image quality and hence, indirectly,
increases the dose for imaging, it remains an important consideration. An additional effect
is an increase in geometric blurring (enlarged effective spot size).
Insight
Secondary Collimation
Note that some of the X-ray photons from off-focus sources can be stopped by secondary
collimation—a second set of collimators placed below the first set (Figure 2.21). This double
collimation acts somewhat like a parallel hole collimator in a gamma camera (see Section 10.2.1).
[Figure 2.20 labels: edge of light beam diaphragm; edge of phantom imaged by extrafocal radiation; outer edge of extrafocal radiation.]
FIGURE 2.20
Radiograph showing the effect of extra focal radiation. The field of view defined by the light beam diaphragm
is shown on the right but the outer edge of the ‘patient’ (a phantom in this instance) is also radiographed on the
left by the extra focal radiation.
[Figure 2.21 labels: incoming electrons, source of extrafocal radiation, collimator 1 with the second collimator below it.]
FIGURE 2.21
Reduction of extra focal radiation by double collimation.
With rotating anode tubes extrafocal radiation is generated on the anode plate—that
is, in a strip perpendicular to the tube axis, so the effect is most clearly visible on edges
of the radiograph that are parallel to the axis of the X-ray tube. In the metal cased tubes
discussed in Section 2.3.5 the case is at earth potential and attracts off-focus electrons. The
amount of off-focus radiation produced is reduced but not eliminated.
Extrafocal radiation is a potentially serious problem in image intensifier fluoroscopy
because if unattenuated radiation reaches the input screen of the image intensifier the very
bright fluorescent areas reduce contrast by light scattering and light conduction effects. The
effect of extrafocal radiation on receptors in digital radiology probably merits investigation.
[Figure 2.22 labels: (a) tube current vs. tube kilovoltage showing a saturation point, curve for 50 kVp; (b) tube current (mA) axis up to 350.]
FIGURE 2.22
(a) The effect of increasing tube kilovoltage on the tube current for a fixed filament current. (b) A family of
curves relating tube current to filament current for different applied voltages.
As the tube kilovoltage is increased, more and more electrons from the space charge around the cathode are attracted to the anode. In theory, the tube current should plateau when the voltage is large enough to
attract all electrons to the anode. In practice there is always a cloud of electrons (the space
charge) around the cathode and as the potential difference is increased, a few more electrons
are attracted to the anode. The result is that, as the tube kV is increased, the maximum tube
current attainable also increases. Hence a typical family of curves relating tube current to
filament current might be as in Figure 2.22b. Modern X-ray tubes contain several compensat-
ing circuits one of which stabilises the tube current against the effect of changes in voltage.
E = (kV)⁴ × I × t
The required value of E will depend on the body parts being radiographed—for example,
low for dental work, high for lateral lumbar spine. However, the required power in the
generator (kV × mA) will depend also on t since short exposures for a given E will require
higher kV and/or mA values.
The nominal operating power is specified as the kW that can be delivered at 100 kV for
0.1 s. Thus a 30 kW tube allows 300 mA, a 100 kW tube allows 1000 mA. Typical maximum
powers for different applications are shown in Table 2.3.
TABLE 2.3
Typical Maximum Generator Powers for Different
Applications
Application Typical Maximum Power (kW)
Dental and mammography 5
Mobiles 30
General purpose fixed units 60–80
Angiography and CT 100
[Figure 2.23 labels: total energy dissipated as heat vs. time of exposure; curves with and without cooling; heat lost by cooling between times t and 2t indicated.]
FIGURE 2.23
Total energy dissipated as heat for different exposure times with and without cooling.
current is more nearly proportional to the perimeter of the spot since the rate at which heat
is conducted away becomes the most important consideration.
The larger focal spot, although allowing short exposure times, increases geometrical
unsharpness (Section 6.9.1).
The loading that a rotating anode can accept depends on:
• Its radius, which will determine the circumference of the circle on which the electrons fall
• Its rate of rotation
• The anode angle
• The focal spot size
The last two are closely related since the critical factor for heating is the area of the electron
bombardment spot. For the same electron bombardment area a large angle anode will give
a large focal spot, a small anode angle will give a small spot.
Rather old rating curves showing the maximum permissible tube current for different
exposure times for anodes of different design are shown in Figure 2.24. A small anode
angle and rapid rotation give the highest rating but note that the differences between the
curves become progressively less as the exposure time is extended.
Note that
1. For the first complete rotation of the anode, electrons are falling on unheated metal. For an anode rotation frequency of 50 Hz (the mains supply) one rotation requires 0.02 s, so for even shorter exposures electrons fall on only part of the target length and the maximum tube current is independent of exposure time.
The maximum permissible tube current for a stationary anode operating under
similar conditions would be much lower.
2. For these old X-ray tubes the permitted tube current was very low (a rating curve
for a high performance modern tube is shown later in Figure 2.28).
FIGURE 2.24
Historical rating curves showing the maximum permissible tube current at different exposure times for anodes of
different design. Each tube is operating at 100 kVp three phase with a 0.3 mm focal spot. (1) Type PX 410 4 inch diam-
eter anode with a 10° target angle and 150 Hz stator. (2) Type PX 410 4 inch diameter anode with a 10° target angle
and 50 Hz stator. (3) Type PX 410 4 inch diameter anode with a 15° target angle and 50 Hz stator. Curves (1) and (2)
show the effect of increasing the speed of rotation of the anode. Curves (2) and (3) show the effect of changing the
target angle. (From Waters G, J Soc X-ray Tech. Winter 1968/69, 5, with permission of Philips Healthcare.)
[Figure 2.25 axes: tube current (mA, 100–600) vs. exposure time; curves for 50, 80 and 150 kVp.]
FIGURE 2.25
Maximum permissible tube current as a function of exposure time for various tube kilovoltages for a rotating
anode. Type PX 306 tube with a 3 inch diameter anode, 15° target angle operating on a single phase with a 60
Hz stator and a 2 mm focal spot—circa 1982. Note: the dotted line indicates that the maximum permissible fila-
ment current would probably be exceeded under these conditions. (Commercial Rating Curves for PX 306 X-Ray
Tube, with permission of Philips Healthcare.)
Insight
Full Wave Rectified and Three Phase Supplies
When full wave rectified and three phase supply rating charts are compared at the same kVp, all
other features of anode design being kept constant, the curves actually cross (Figure 2.26). For very
short exposures higher currents can be used with a three phase than with a single phase supply,
but the converse holds at longer exposures.
To understand why this is so, consider the voltage and current waveforms for two tubes with the
same kVp and mA settings (Figure 2.27). Note:
1. The current does not follow the voltage in the full wave rectified tube. As soon as the poten-
tial difference is sufficient to attract all the thermionically emitted electrons to the anode,
the current remains approximately constant.
2. The three phase current remains essentially constant throughout.
3. The peak value of the current must be higher for the full wave rectified tube than for the
three phase tube, if the average values as shown on the meter are to be equal.
FIGURE 2.26
Maximum permissible current as a function of exposure time for 80 kVp single phase full wave rectified (dotted
line) and 80 kVp three phase supplies (solid line).
[Figure 2.27 axes: voltage vs. time (upper panel) and current vs. time (lower panel), with the time Tm marked.]
FIGURE 2.27
Voltage and current profiles for two tubes with the same kVp and mA settings but with three phase (solid line)
or full wave rectified (dotted line) supplies.
For very short exposures, instantaneous power is important. This is maximum at Tm and since
the voltages are then equal, power is proportional to instantaneous current and is higher for
the full wave rectified system. Inverting the argument, if power dissipation cannot exceed a
predetermined maximum value, the average current limit must be lower for the full wave rec-
tified tube.
For longer exposures, average values of kV and mA are important. Average values of current
have been made equal, so power is proportional to average voltage and this is seen to be higher
for the three phase supply. Hence, again inverting the analysis, the current limit must be lower for
the three phase tube as predicted by the rating curve graphs.
It is left as an exercise to the reader to explain, by similar reasoning, why the rating curve for a
full wave rectified tube will always be above the curve for a half wave rectified tube.
[Figure 2.28 axes: tube current (mA) vs. time (s, 0.1–100, log scale); (a) curves labelled 1995 and 2008; (b) curves for focal spots F1 and F2.]
FIGURE 2.28
More modern rating curves: (a) Curves at 120 kVp for a Siemens 502 MC tube for computed tomography with a 0.8
mm × 1.1 mm focal spot (circa 1995) and a Siemens STRATON tube with a similar focal spot size (2008). (b) Curves
for the Siemens 502 MC tube with a small, 0.6 mm × 0.6 mm, spot (F1) and a larger 0.8 mm × 1.1 mm spot (F2), both at
120 kVp. (Characteristics Rating Curves for a Siemens 502 MC Tube Designed for Computerised Tomography 1995.
Reproduced by permission of Siemens Healthcare. Dura and Straton are registered trademarks of Siemens.)
Overheating can occur at three levels:
1. The surface of the target can be overheated by repeated exposures before the sur-
face heat has time to dissipate into the body of the anode.
2. The entire anode can be overheated by repeating exposures before the heat in the
anode has had time to radiate into the surrounding oil and tube housing.
3. The tube housing can be overheated by making too many exposures before the
tube shield has had time to lose its heat to the surrounding air.
The second and third problems can also arise during continuous fluoroscopy. Although
tube currents are now low, 1–5 mA, compared to 500 mA or more in radiography, exposure
times can be very long. The surface of the target will not overheat but heat dissipation from
the entire anode or tube housing may still require consideration.
The heat capacity of the total system, or of parts of the system, is sometimes expressed in
heat units (HU). By definition, 1.4 HU are generated when 1 J of energy is dissipated.
The basis of this definition can be understood for a full wave rectified supply:

HU = 1.4 × energy = 1.4 × root mean square (rms) kV × average mA × s

But for a full wave rectified supply the rms kV is approximately 0.7 × kVp (= kVp/√2), and 1.4 × 0.7 ≈ 1.

Hence

HU ≈ kVp × mA × s
Thus the HUs generated in an exposure are just the product of (voltage) × (current) × (time)
shown on the X-ray control panel, so the introduction of the HU was very convenient for
single phase generators.
Unfortunately, this simple logic does not hold for modern generators. The mean kV is now much higher, perhaps 0.95 kVp or better, so

HU ≈ 1.4 × 0.95 × kVp × mA × s ≈ 1.35 × kVp × mA × s

Hence for three phase supply the product of kVp and mAs as shown on the meters must be
multiplied by 1.35 to obtain the heat units generated. With the near universal use of three
phase and high frequency generators, joules are becoming the preferred unit.
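The conversions described above are easily collected into a short sketch; the 1.4 and 1.35 factors are those quoted in the text, and the exposure used in the example is illustrative only.

    def heat_units_single_phase(kvp, ma, s):
        """Single phase, full wave rectified: HU is approximately kVp x mA x s."""
        return kvp * ma * s

    def heat_units_constant_potential(kvp, ma, s):
        """Three phase or high frequency: mean kV ~ 0.95 kVp, so multiply kVp x mAs by ~1.35."""
        return 1.35 * kvp * ma * s

    def energy_joules_constant_potential(kvp, ma, s):
        """Energy in joules for a near-constant potential supply, taking the mean kV as 0.95 kVp."""
        return 0.95 * kvp * ma * s

    # Illustrative exposure: 80 kVp, 200 mA, 0.1 s.
    print(round(heat_units_single_phase(80, 200, 0.1)))          # 1600 HU
    print(round(heat_units_constant_potential(80, 200, 0.1)))    # 2160 HU
    print(round(energy_joules_constant_potential(80, 200, 0.1))) # 1520 J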
The rating charts already discussed may be used to check that the surface of the target
will not overheat during repeat exposures. This cannot occur provided that the total heat
units of a series of exposures made in rapid sequence does not exceed the heat units per-
missible, as deduced from the radiographic rating chart, for a single exposure of equiva-
lent total exposure duration.
When the time interval between individual exposures exceeds 20 s there is no danger of
focal track overheating. The number and frequency of exposures is now limited either by
the anode or by the tube heat storage capacity. A typical set of anode thermal characteristic
curves is shown in Figure 2.29. Two types of curve are illustrated:
1. Input curves showing the heat stored in the anode after a specified, long period of
exposure. Also shown, dotted, is the line for 700 watt input power in the absence
of cooling. This line is a tangent to the curve at zero time since the anode is ini-
tially cold and loses no heat. At constant kVp the initial slope is proportional to the
current. As the anode temperature increases, the anode starts to lose heat and the
curve is no longer linear.
[Figure 2.29 axes: heat stored in the anode (kJ, 0–150) vs. time (0–8 min); input curves for 250 watts and 500 watts, with point A marked.]
FIGURE 2.29
Typical anode thermal characteristic curves, showing the heat stored in the anode as a function of time for dif-
ferent input powers.
2. A cooling curve showing the heat stored in the anode after a specified period of
cooling. Note that if the heat stored in the anode after exposure is only 120 kJ, the
same cooling curve may be used but the point A must be taken as t = 0.
Two other characteristics of the anode are important. First, the maximum anode heat stor-
age capacity, which is 200 kJ here, must be known. For low screening currents, the heat
stored in the anode is always well below its heat limit, but for higher input power the maxi-
mum heat capacity is reached and screening must stop.
The second characteristic is the maximum anode cooling rate. This is the rate at which
the anode will dissipate heat when near its maximum temperature (800 watts) and gives a
measure of the maximum current, for given a kVp, at which the tube can operate continu-
ously. Note that under typical modern screening conditions, say 2 mA at 80 kVp, the rate of heat production is only 2 × 10⁻³ × 80 × 10³ = 160 W.
During screening, or a combination of short exposures and screening, the maximum
anode heat storage capacity must not be exceeded. Exercises in the use of this rating chart
are given at the end of this chapter. Note that with the increased use of microprocessors
to control X-ray output, the system will not allow the operator to make an exposure that
might exceed a rating limit.
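During prolonged screening the anode behaves as a heat store being filled at a rate of roughly kV × mA watts. The sketch below uses the figures quoted in the text (2 mA at 80 kVp, a 200 kJ anode, a maximum cooling rate of about 800 W) but treats the cooling rate as constant, which is a simplification since in practice it rises with anode temperature.

    def heat_input_rate_watts(kvp, ma):
        """Approximate rate of heat production during screening: kV x mA gives watts."""
        return kvp * ma

    def minutes_to_fill_anode(kvp, ma, capacity_joules, cooling_rate_watts=0.0):
        """Time for stored heat to reach the anode capacity, assuming a constant cooling rate."""
        net = heat_input_rate_watts(kvp, ma) - cooling_rate_watts
        if net <= 0:
            return float("inf")  # heat is removed at least as fast as it is produced
        return capacity_joules / net / 60.0

    print(heat_input_rate_watts(80, 2))                    # 160 W, as quoted above
    print(round(minutes_to_fill_anode(80, 2, 200e3), 1))   # ~20.8 min if there were no cooling
    print(minutes_to_fill_anode(80, 2, 200e3, 800.0))      # inf: 160 W is well below the cooling rate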
When the total time for a series of exposures exceeds the time covered by the anode
thermal characteristic chart, a tube shield cooling chart must be consulted. This is similar
to the anode chart except that the cooling time will extend (typically) to 100 min and the
maximum tube shield storage capacity in some modern units may be as high as 4 × 10⁶ J.
Note that cooling of the housing can be enhanced with a heat exchanger. For example, by
pumping oil or water through a set of tubes in the housing the time taken for the housing
to lose 90% of its heat might be reduced by 50%.
As a final comment on thermal rating, it is worth noting that a significant amount of
power is required to set the anode rotating and this is also dissipated eventually as heat. In
a busy accident department taking many short exposures in quick succession, three times
as much heat may arise from this source as from the X-ray exposures themselves.
[Figure 2.30 axes: tube current (mA, 0–500) vs. time/s (0.01–1, log scale); fixed-current exposure time and falling-load exposure time indicated.]
FIGURE 2.30
Illustration of the falling load principle. To achieve 200 mAs at a fixed mA requires a one second exposure.
Using the falling load principle, the tube can operate at 500 mA for 0.1 s = 50 mAs, then drop to 350 mA for a
further 0.4 s = 140 mAs, and finally to 200 mA for 0.05 s = 10 mAs, a total of 200 mAs in only 0.55 s.
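The arithmetic in the caption to Figure 2.30 can be checked directly; the segment values below are simply those quoted in the caption.

    def total_mas(segments):
        """Sum the mAs delivered over (tube current in mA, duration in s) segments
        of a falling-load exposure."""
        return sum(ma * t for ma, t in segments)

    segments = [(500, 0.1), (350, 0.4), (200, 0.05)]   # from the Figure 2.30 caption
    print(round(total_mas(segments), 1))                # 200.0 mAs
    print(round(sum(t for _, t in segments), 2))        # 0.55 s, versus 1 s at a fixed 200 mA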
During multiple exposures a photoelectric cell may be used to sample radiant heat from
the anode and thereby determine when the temperature of the anode disc has reached a
maximum safe value. A visual or audible warning is then triggered. In modern systems
the tube loading is under computer control. Anode temperature is continuously calcu-
lated from a knowledge of heat input and cooling characteristics. When the rating limit is
reached, generator output is automatically reduced.
The DC supply from the battery is converted to AC by an inverter (see Section 2.3.4). As with the medium/high frequency generator described
there, the AC voltage produced differs significantly from the mains AC voltage in that
it is at 5 kHz (or higher), some 100 times greater than the normal mains frequency. At
this high frequency the transformer is more efficient and as a consequence can be much
smaller.
Although the transformer output is basically single phase full wave rectified, at 5 kHz
the capacitance inherent in the secondary circuit smoothes the output to a waveform that
is essentially constant. The tube current is also constant for the whole of the exposure
which, as well as simplifying design, allows exposures to be calculated with ease. The bat-
tery is depleted and the voltage falls slightly from one exposure to the next so some form
of compensation must be applied until the unit is recharged. This compensation can be
applied either automatically or manually. Recharging takes place, when necessary, at a low
current from any hospital 13 A mains supply.
Typical values, or range of values, for the specification of a battery powered mobile unit
are shown in Table 2.4. The output is very stable and low power machines are very suitable
for chest radiography or premature baby units.
TABLE 2.4
Typical Specifications for a Range of
Battery Operated Mobile Units
Imax 100–400 mA
kVp 50–100 kV
Shortest exposure 1 ms
Focal spot 0.7 mm
Power 10–30 kW
Tube heat storage 100–200 kJ
[Figure 2.31 axes: instantaneous kV (0–80) vs. exposure (current × time, 0–20 mAs); start kV and end kV marked.]
FIGURE 2.31
Variation of kV with exposure for a capacitor discharge mobile unit.
Evans et al. (1985) were perhaps the first to propose an empirical formula for the equiv-
alent kilovoltage of a capacitor discharge unit, that is the setting on a battery powered
constant potential kilovoltage machine which would produce the equivalent radiographic
effect. The equivalent kV is approximately equal to the starting voltage minus one-third
of the fall in tube voltage which occurs during the exposure. For a 1 µF capacitor the tube voltage drop in kV is numerically equal to the mAs selected, hence

equivalent kV ≈ starting kV − (mAs selected)/3
If, therefore, one has a radiographic exposure setting of 85 kV and 30 mAs, the equivalent
voltage is 75 kV. If this exposure is insufficient and an under-exposed radiograph is pro-
duced, simply increasing the mAs may not increase the blackening on the film sufficiently.
For example, suppose the exposure is increased to 50 mAs at the same 85 kV. The equiva-
lent kV will now be only 68 kV. Since the equivalent kV is less, some of the increase in mAs
will be used to provide soft radiation which does not contribute to the radiograph. The
appropriate action is to change the exposure factors to 92 kV and 50 mAs, thereby main-
taining the equivalent kV at 75 kV but increasing the exposure to 50 mAs.
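The rule of thumb quoted above is easily coded; the sketch below reproduces the worked example (85 kV with 30 mAs and 50 mAs, and the corrected setting of 92 kV) and assumes, as in the text, a 1 µF capacitor so that the kV drop is numerically equal to the mAs.

    def equivalent_kv(start_kv, mas, capacitance_uf=1.0):
        """Equivalent constant-potential kV for a capacitor discharge unit (Evans et al. 1985):
        starting kV minus one third of the kV drop, the drop being mAs / C for C in microfarads."""
        drop_kv = mas / capacitance_uf
        return start_kv - drop_kv / 3.0

    def start_kv_for_target(target_equiv_kv, mas, capacitance_uf=1.0):
        """Starting kV needed to maintain a chosen equivalent kV at a given mAs."""
        return target_equiv_kv + (mas / capacitance_uf) / 3.0

    print(round(equivalent_kv(85, 30)))         # 75 kV
    print(round(equivalent_kv(85, 50)))         # about 68 kV
    print(round(start_kv_for_target(75, 50)))   # about 92 kV, the adjustment described above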
Insight
Under-exposure and Digital Receptors
Note that with a digital receptor the software will compensate for this under-exposure and pro-
duce an image within the desired grey scale range. However, this is bad radiographic practice
since the quantum noise in the image will be increased and may reach an unacceptable level.
If it is desired to reduce the starting kV after the capacitor has been charged or when
radiography is finished, the capacitor must be discharged. When the ‘discharge’ button is
depressed, an exposure takes place at a low mA for several seconds until the required charge
[Figure 2.32 labels: X-ray tube window, lead shutter intercepting the beam, motor for moving the lead shutter, mirror, diaphragm jaws.]
FIGURE 2.32
Arrangement of the lead shutter for preventing exposure during capacitor discharge and the light beam dia-
phragm assembly on a capacitor discharge mobile unit.
has been lost. During this exposure the tube produces unwanted X-rays. These are absorbed
by a lead shutter across the light beam diaphragm. An automatic interlock ensures that the
tube cannot discharge without the lead shutter in place to intercept the beam (Figure 2.32).
It should be noted, however, that this shutter does not absorb all the X-rays produced, espe-
cially when the discharge is taking place from a high kV. Neither patient nor image receptor
should be underneath the light beam diaphragm during the discharge operation.
New special capacitors, sometimes called ultra caps or boost caps, have been developed
with smaller-volume storage units than normal electrolytic capacitors. Capacitor discharge
units require more operator training to ensure optimum performance than other mobile
units, but when used under optimum conditions they have a sufficiently high output to
permit acceptably short exposures for most investigations.
All these factors affect the degree of film blackening and the level of contrast. With a dig-
ital detector the system software will correct for variations and deterioration may become
quite serious before it is detected in the image.
Tube kV may be measured directly by the service engineer. However, for more frequent
checks a non-invasive method is to be preferred. The usual method is based on the fact that
different materials show different attenuation properties at different beam energies (see
Chapter 3). When this approach was first suggested by Ardran and Crooks (1968), a copper
step wedge was used to compensate for the difference in sensitivity between a fast and a
slow film screen. With suitable calibration the wedge thickness that gave an exact match
in film blackening could be converted into tube kilovoltage. This involves exposing the special cassette at each kVp with an appropriate mAs.
An early method for checking exposure times was to interpose a rotating metal disc
with a row of holes or a radial slit in it between the X-ray tube and the film. Inspection
of the pattern of blackening allowed the exposure time to be deduced. Both of the above
methods are described more fully in older text books—see, for example, Dendy and
Heaton (1987).
Later developments simplified both these checks, using a digital timer meter and a digital
kV meter attached to a fast responding storage cathode ray oscilloscope (CRO). Several
balanced photo detectors can be used under filters of different materials and different
thicknesses. By using internally programmed calibration curves, a range of kVps may be
checked with the same filters. Accuracy to better than 5% should be achievable. This is
particularly important for mammography because the absorption coefficient of soft tis-
sue falls rapidly with increasing kV at low energies. Note that the CRO also displays the
voltage profile so a fairly detailed analysis of the performance of the X-ray tube generator
is possible. For example, any delay in reaching maximum kV could affect output at short
exposure times. Note that these devices are only accurate at a set tube filtration and read-
ings must be corrected if the tube filtration is different. In modern equipment semiconduc-
tor technology has further simplified these measurements.
Consistency of output may be checked by placing an ion chamber in the direct beam and
making several measurements. Both the effect of changes in generator settings (kV, mAs)
on the reproducibility of tube output and the repeatability between consecutive exposures
at the same setting can be checked. Measurements should be made over a range of clini-
cal settings to confirm a linear relationship between tube output (mGy in air for a fixed
exposure time) and the preset mA. It is not normal to make an absolute calibration of tube
current.
For AEC systems a film and suitable attenuating material, for example, perspex or a
water equivalent slab, should be used to determine the film density achieved when an
object of uniform density is exposed under automatic control. A check should be made
to ensure that the three ion chambers are matched, so that irrespective of which one is
selected to control the exposure, the optical density is similar and repeatable. For a digital
system a semi-conductor detector may be used to check that the dose at the image receptor
is consistent and within specified limits.
Low energy X-rays in the spectrum would be absorbed in the patient, increasing the
dose but contributing nothing to the image. They must be selectively removed by filtra-
tion, see Section 3.9. Recommended values for the total beam filtration are 1.5 mm Al for
units operating up to 70 kVp and 2.5 mm Al for those operating above 70 kVp and these
should be checked. To do so an ion chamber is placed in the direct beam and readings are
obtained with different known thicknesses of aluminium in the beam. The HVL, that is the thickness of aluminium which reduces the beam intensity to half, may be obtained by trial and error or graphically. Note that the HVL obtained in this way is not the beam filtration
(although when expressed in mm of Al the values are sometimes very similar) and the
filtration must be obtained from a look-up table (Table 2.5).
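Finding the HVL 'by trial and error or graphically' amounts to interpolating the measured readings; since attenuation is roughly exponential, interpolating the logarithm of the readings is a natural choice. The ion chamber readings below are invented purely for illustration.

    import math

    def hvl_mm(thicknesses_mm, readings):
        """Estimate the half value layer by log-linear interpolation between the two
        added-filter thicknesses whose readings bracket half the unattenuated reading."""
        half = readings[0] / 2.0
        points = list(zip(thicknesses_mm, readings))
        for (t1, r1), (t2, r2) in zip(points, points[1:]):
            if r1 >= half >= r2:
                frac = (math.log(r1) - math.log(half)) / (math.log(r1) - math.log(r2))
                return t1 + frac * (t2 - t1)
        raise ValueError("half value not bracketed by the measurements")

    # Illustrative readings (arbitrary units) with increasing added aluminium:
    thicknesses = [0.0, 1.0, 2.0, 3.0, 4.0]
    readings = [100.0, 71.0, 52.0, 39.0, 30.0]
    print(round(hvl_mm(thicknesses, readings), 2))   # ~2.14 mm Al; Table 2.5 then gives the filtration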
It is left as an exercise to the reader, after a careful study of Chapter 3, to explain why the
look-up table will be different at other tube voltages and for other voltage profiles.
Since all operators are urged to use the smallest possible field sizes, it is important to
ensure that the optical beam, as defined by the light beam diaphragm, is in register with
the X-ray beam. This may be done by placing an unexposed X-ray film on the table and
using lead strips or a wire rectangle to define the optical beam. An exposure is made,
at a very low mAs because there is no patient attenuation, and the film developed. The
exposed area should correspond to the radiograph of the lead strips to better than 1 cm at
a focus-film distance of 1 m (Figure 2.33). At the same time a check can be made that the
axis of the X-ray beam is vertical by arranging two small (2 mm) X-ray opaque spheres
vertically one above the other about 20 cm apart in the centre of the field of view. If their
images are not superimposed on the developed radiograph, the X-ray beam axis is incor-
rectly aligned.
Few centres check focal spot size on general radiographic equipment regularly, per-
haps because there is evidence from a range of routine X-ray examinations that quite large
changes in spot size are not detectable in the quality of the final image. However, focal spot
size is one of the factors affecting tube rating and significant errors in its value could affect
TABLE 2.5
Typical Relationship between the Beam Filtration and Half Value
Layer for a Full Wave Rectified X-ray Tube Operating at 70 kVp
Half value layer (mm Al)   1.0   1.5   2.0   2.5   3.0
Total filtration (mm Al)   0.6   1.0   1.5   2.2   3.0
[Figure 2.33 labels: metal frame coincident with the light beam, area exposed to X-rays, small metal object for sizing.]
FIGURE 2.33
Radiograph showing poor alignment of the X-ray beam and the light beam diaphragm as defined by the metal
frame. The small coin is used for orientation and sizing.
[Figure 2.34 labels: rotating anode, focal spot, axis of tube, bombarding electrons, X-rays; focus-to-pinhole distance not less than 10 cm; pinhole sizes 0.03 mm and 0.075 mm; 8° angle; X-ray image plane.]
FIGURE 2.34
Use of a pin-hole technique to check focal spot size.
the performance of the tube generator. The pin-hole principle illustrated in Figure 2.34
may be used to measure the size of the focal spot. The drawing is not to scale but typical
dimensions for a 1 mm spot are shown.
By similar triangles, the size of the pin-hole image divided by the size of the focal spot equals the pin-hole-to-image distance divided by the focus-to-pin-hole distance, and for a set-up of this kind this ratio is usually about 3. Note that the pin-hole must be
small—its size affects the size of the image and hence the apparent focal spot size. The
‘tunnel’ in the pin-hole must be long enough for X-rays passing through the surrounding
metal to be appreciably attenuated. This principle is used in the slit camera.
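The similar-triangles argument gives the magnification directly, and the focal spot size then follows from the measured image size; the simple sketch below ignores the small correction for the finite size of the pin-hole noted above, and the distances used are illustrative.

    def pinhole_magnification(focus_to_pinhole_cm, pinhole_to_image_cm):
        """Similar triangles: magnification of the focal spot image equals the
        pin-hole-to-image distance divided by the focus-to-pin-hole distance."""
        return pinhole_to_image_cm / focus_to_pinhole_cm

    def focal_spot_size_mm(image_size_mm, magnification):
        """Focal spot size deduced from the pin-hole image, neglecting the pin-hole size."""
        return image_size_mm / magnification

    # Assumed geometry: pin-hole 10 cm from the focus, image plane 30 cm beyond the pin-hole.
    m = pinhole_magnification(10.0, 30.0)
    print(m)                                     # 3.0, the ratio quoted above
    print(round(focal_spot_size_mm(3.1, m), 1))  # a 3.1 mm image implies a spot of about 1.0 mm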
Focal spot size can also be measured and information may be obtained on uniformity
of output within the spot, if required, by using a star test pattern. For fuller details see,
for example, Curry et al. 1990. Such information would be important, for example, if one
were attempting to image, say, a 0.4 mm blood vessel at two times magnification because
image quality would then be very dependent on both spot size and shape. Similarly, the
high resolution required in mammography (see Section 9.2) requires regular checks on the
focal spot size.
For further information on quality control, the reader is referred to ‘further reading’
given at the end of the chapter.
2.8 Conclusions
In this chapter the basic principles of X-ray production have been discussed. Since the out-
put from the X-ray set is the starting point for high quality radiographic images, there are
important principles to establish at this early stage in the book.
TABLE 2.6
Milestones in the Development of X-ray Tube
Technology
1950s Rotating X-ray anode
Control of exposure timer
1960s High duty X-ray tubes (200 kHU)
Large anode target disc
High speed rotation
Molybdenum/tungsten target
1970s Rhenium/tungsten target
Three phase 12-peak generators
Short exposure times (1 ms)
1980s High frequency generators
Even higher duty X-ray tubes (500 kHU)
Microprocessor control
1990s Anodes mounted on spiral tube bearings
Variable focal spots
Liquid metal bearings
Metal/ceramic housing
Enhanced filtration (see Section 3.9)
2000s Constant potential generators
Magnetic beam deflection
Rotating tube envelope
References
Ardran G M and Crooks H E. Checking diagnostic X-ray beam quality. Br J Radiol 41, 193–198,
1968.
Curry TS, Dowdey JE and Murray RC Jr. Christensen’s Introduction to the Physics of Diagnostic Radiology,
4th edn. Lea & Febiger, Philadelphia, 1990.
Dendy P and Heaton B. Physics for Radiologists, 1st edn. Blackwell Scientific Publications, Oxford,
1987, p 60.
Evans S A, Harris L, Lawinski C P and Hendra I R F. Mobile X-ray generators: a review. Radiography
51, 89–107, 1985.
Further Reading
AAPM. Quality control in diagnostic radiology. Report no. 74, Task Group 12, Diagnostic Imaging
Committee, Medical Physics Publishing, Madison, July, 2002.
Bushberg J T, Seibert J A, Leidholdt E M Jr, and Boone J M. The Essential Physics of Medical Imaging, 2nd
edn. Lippincott, Williams and Wilkins, Philadelphia, 2002.
Dowsett D J, Kenny P A and Johnson R E. The Physics of Diagnostic Imaging, 2nd edn. Hodder Arnold,
London, 2006, pp 71–112.
IPEM. Measurement of the performance characteristics of diagnostic X-ray systems used in medicine. Part
I X-ray tubes and generators (2nd ed). Report no 32—Institute of Physics and Engineering in
Medicine, York, YO24 1ES, 1996.
IPEM. Recommended standards for the routine performance testing of diagnostic X-ray imaging sys-
tems. Report no 91—Institute of Physics and Engineering in Medicine, York, YO24 1ES,
2005.
Oppelt A (ed.) Imaging systems for medical diagnostics, chapter 12, X-ray Components and Systems,
Siemens, Erlangen, 2005, pp 264–412.
Exercises
1. Explain why the X-ray beam from a diagnostic set consists of photons with a range
of energies rather than a monoenergetic beam.
2. What is meant by ‘characteristic radiation’? Describe very briefly three processes
in which characteristic radiation is produced.
3. Describe, with the aid of a diagram, the two physical processes that give rise to the
production of X-rays from energetic electrons. How would the spectrum change if
the target were made thin?
4. Explain why there is both an upper and a lower limit to the energy of the photons
emitted by an X-ray tube.
5. What is the source of electrons in an X-ray tube and how is the number of elec-
trons controlled?
6. The cathode of an X-ray tube is generally a small coil of tungsten wire.
(a) Why is it a small coil?
(b) Why is the material tungsten?
7. Figure 2.8 shows a rotating anode tube. Explain the functions of the various parts
and the advantages of the materials used.
8. How would the output of an X-ray tube operating at 80 kVp change if the tungsten
anode (Z = 74) were replaced by a tin anode (Z = 50)?
9. What is the effect on the output of an X-ray set of
(a) Tube kilovoltage?
(b) The voltage profile (ripple)?
10. It is required to take a radiograph with a very short exposure. Explain carefully
why it may be advantageous to increase the tube kilovoltage.
11. What advantages does a rotating anode offer over a stationary anode in an X-ray
tube?
12. Discuss the effect of the following on the rating of an X-ray tube:
(a) Length of exposure
(b) Profile of the voltage supply as a function of time
(c) Previous use of the tube
13. Discuss the factors that determine the upper limit of current at which a fixed
anode X-ray tube can be used.
14. What do you understand by the thermal rating of an X-ray tube? Explain how
suitable anode design may be used to increase the maximum permissible average
beam current for
(a) Short exposures
(b) Longer exposures
15. A technique calls for 550 mA, 0.05 s with the kV adjusted in accordance with
patient thickness. If the rating chart of Figure 2.25 applies, what is the maximum
kVp that may be used safely?
16. A technique calls for 400 mAs at 90 kVp. If the possible milliampere values are
500, 400, 300, 200, 100 and 50 and the rating chart in Figure 2.25 applies, what is the
shortest possible exposure time?
17. An exposure of 400 mA, 100 kVp, 0.1 s is to be repeated at the rate of six exposures
per second for a total of 3 s. Is this technique safe if the rating chart of Figure 2.25
applies?
18. A radiographic series consisting of six exposures of 200 mA, 75 kVp and 1.5 s has
to be repeated. What is the minimum cooling time that must elapse before repeat-
ing the series if the rating chart of Figure 2.29 applies?
19. If the series of exposures in exercise 18 is preceded by fluoroscopy at 100 kVp and
3 mA, for how long can fluoroscopy be performed before radiography?
20. Discuss the developments in X-ray tube technology since 1950 with specific refer-
ence to the production of high quality X-ray images of patients.
21. Suggest reasons why radionuclides do not provide suitable sources of X-rays for
medical radiography.
3
Interaction of X-Rays and Gamma Rays with Matter
SUMMARY
• In this chapter the fundamental physics of the interaction of X-rays and
gamma rays with matter is explained.
• The meaning of ‘bound’ and ‘free’ electrons is discussed and a careful dis-
tinction is made between attenuation, absorption and scatter.
• The four interaction processes that are important in the diagnostic energy
range—elastic scattering, the photoelectric effect, the Compton effect and
pair production—are discussed.
• The concept of linear attenuation coefficient is introduced and the distinc-
tion between narrow and broad beam attenuation is explained.
• The implications of beam filtration (the selective attenuation of different
parts of the X-ray spectrum) by different materials are considered.
• Finally, the chapter concludes with a review of the optimum operating kVps
for some standard radiographic procedures.
CONTENTS
3.1 Introduction........................................................................................................................... 76
3.2 Experimental Approach to Beam Attenuation................................................................. 76
3.3 Introduction to the Interaction Processes.......................................................................... 79
3.3.1 Bound and Free Electrons....................................................................................... 79
3.3.2 Attenuation, Scatter and Absorption.....................................................................80
3.4 The Interaction Processes....................................................................................................80
3.4.1 Elastic Scattering....................................................................................................... 81
3.4.2 Photoelectric Effect................................................................................................... 81
3.4.3 The Compton Effect.................................................................................................. 82
3.4.3.1 Direction of Scatter.................................................................................... 85
3.4.3.2 Variation of the Compton Coefficient with Photon Energy and
Atomic Number.......................................................................................... 87
3.4.4 Pair Production......................................................................................................... 88
3.5 Combining Interaction Effects and Their Relative Importance..................................... 89
3.6 Broad Beam and Narrow Beam Attenuation.................................................................... 91
3.7 Consequences of Interaction Processes when Imaging Patients................................... 94
3.8 Absorption Edges................................................................................................................. 94
3.1 Introduction
The radiographic process depends on the fact that when a beam of X-rays passes through
matter its intensity is reduced by an amount that is determined by the physical properties,
notably thickness, density and atomic number, of the material through which the beam
passes. Hence it is variations in these properties from one part of the patient to another
that create detail in the final radiographic image. These variations are often quite small, so
a full understanding of the way in which they affect X-ray transmission under different
circumstances, especially at different photon energies, is essential if image detail is to be
optimised. Note that one cause of inappropriate X-ray requests is that the clinical symptoms
do not suggest there will be any informative change in thickness, tissue density or atomic
number in the affected region (RCR 2007).
In this chapter an experimental approach to the problem of X-ray beam attenuation in
matter will first be presented and then the results will be explained in terms of funda-
mental processes. Finally some implications of particular importance to radiology will be
discussed.
3.2 Experimental Approach to Beam Attenuation
Consider, as in Figure 3.1a, a monoenergetic beam of gamma rays passing through successive equal layers of attenuating medium, each of which transmits the same fraction α of the intensity incident upon it. If the intensity incident on the first layer is I0, the intensity emerging from the first layer is αI0 and that emerging from the second layer is
α × (αI0) = α²I0.
If, as shown in Figure 3.1b, α = ½ it may readily be observed that the variation of intensity
with thickness is closely similar to the variation of radioactivity with time discussed in
Section 1.5. In other words the intensity decreases exponentially with distance, as expressed
by the equation I = I0 e–µx where I0 is the initial intensity and I the intensity after passing
through thickness x. µ is a property of the material and is known as the linear attenuation
coefficient. If x is measured in mm, µ has units mm–1.
A quantity analogous to the half-life of a radioactive material is frequently quoted. This
is the half value layer (HVL) or half value thickness (HVT) H½ and is the thickness of material
that will reduce the beam intensity to a half.
The analogy with radioactive decay is shown in Table 3.1.
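The relationship H½ = ln 2/µ can be checked numerically. The short Python sketch below is purely illustrative; it uses, as an example, the narrow beam value for water at 30 keV quoted later in Table 3.2.

```python
import math

def half_value_layer(mu):
    """Half value layer H1/2 = ln 2 / mu (same length units as 1/mu)."""
    return math.log(2) / mu

def transmitted_fraction(mu, x):
    """Fraction I/I0 transmitted through thickness x for a narrow monoenergetic beam."""
    return math.exp(-mu * x)

mu_water_30keV = 0.036                             # mm^-1, narrow beam value from Table 3.2
print(half_value_layer(mu_water_30keV))            # about 19 mm, matching Table 3.2
print(transmitted_fraction(mu_water_30keV, 19.0))  # about 0.5, i.e. one half value layer
```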
Attenuation behaviour may be described in terms of either µ or H½, since there is a sim-
ple relationship between them. If the value of H½ is known, or is calculated from µ, then
FIGURE 3.1
(a) Transmission of a monoenergetic beam of gamma rays through layers of attenuating medium of different
thickness. (b) Variation of intensity with thickness of attenuator.
TABLE 3.1
Analogy between Beam Attenuation and Radioactive Decay

Radioactive Decay                 Attenuation of a Monoenergetic Gamma Ray Beam
A = A0e–αt                        I = I0e–µx
α = ln 2/T½ = 0.693/T½            µ = ln 2/H½ = 0.693/H½
A = A0e–0.693t/T½                 I = I0e–0.693x/H½
TABLE 3.2
Typical Values of µ and H½

Energy (keV)   Material   Atomic Number   Density (kg m–3)   µ (mm–1) (Narrow Beam)   H½ (mm)
30             Water      7.5a            1.0 × 103          0.036                    19
60                                                           0.02                     35
200                                                          0.014                    50
30             Bone       12.3a           1.65 × 103         0.16                     4.3
60                                                           0.05                     13.9
200                                                          0.02                     35
30             Lead       82              11.4 × 103         33                       2 × 10–2
60                                                           5.5                      0.13
200                                                          1.1                      0.6

a These values are effective atomic numbers. For a discussion of the calculation
  of effective atomic numbers for mixtures and compounds see Section 3.5.
Source: Adapted from Johns H E and Cunningham J R. The Physics of Radiology,
  4th ed. Thomas, Springfield, 1983.
the graphical method described in Section 1.6 may be used to determine the reduction in
beam intensity caused by any thickness of material. Conversely, the method may be used
to find the thickness of material required to provide a given reduction in beam intensity.
This is important when designing adequate shielding. The smaller the value of µ, the lar-
ger the value of H½ and the more penetrating the radiation. Table 3.2 gives some typical
values of µ and H½ for monoenergetic radiations.
The following are the main points to note:
(1) In the diagnostic range µ decreases (H½ increases) with increasing energy, that is
the radiation becomes more penetrating.
(2) µ increases (H½ decreases) with increasing density. The radiation is less penetrat-
ing because there are more molecules per unit volume available for collisions in
the stopping material.
(3) Variation of µ with atomic number is complex although it clearly increases quite
sharply with atomic number at very low energies. In Table 3.2 some trends are
obscured by variations in density.
(4) For water, which for the present purpose has properties very similar to those of
soft tissue, H½ in the diagnostic range is about 30 mm. Thus in passing through a
water phantom the intensity of a narrow X-ray beam will be reduced by a factor of
2 for every 30 mm travelled. If a phantom is 18 cm across this represents six HVTs
so the intensity is reduced by 2⁶, that is 64 times (see the short calculation after this list).
(5) At similar energies, H½ for lead is 0.1 mm or less so quite a thin layer of lead pro-
vides perfectly effective shielding for, say, the door of an X-ray room.
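The arithmetic in points (4) and (5) can be verified with a few lines of Python. This is only an illustrative sketch; the 2 mm lead thickness is an assumed example, not a value taken from the text.

```python
def reduction_factor(thickness_mm, hvt_mm):
    """Factor by which a narrow beam is attenuated: 2 to the power (number of HVTs)."""
    return 2 ** (thickness_mm / hvt_mm)

# Point (4): an 18 cm water phantom with H1/2 of about 30 mm
print(reduction_factor(180, 30))    # 2**6 = 64

# Point (5): a lead sheet, taking H1/2 of about 0.1 mm; a 2 mm sheet (assumed) gives 2**20
print(reduction_factor(2, 0.1))     # roughly a million-fold reduction
```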
It is sometimes convenient to separate the effect of density ρ from other factors. This is
achieved by using a mass attenuation coefficient, µ/ρ, and then the equation for beam inten-
sity is rewritten
I = I0e– (µ/ρ)ρx
FIGURE 3.2
Demonstration that the mass attenuation coefficient of a gas is independent of its density.
When the equation is written in this form, it may be used to show that the stopping power
of a fixed mass of material per unit area is constant, as one would expect since the gamma
rays encounter a fixed number of atoms. Consider, for example, two containers, each filled
with the same gas and each with the same area A, but of length 5l and l (Figure 3.2). Let the
densities of the two gases be ρ1 and ρ 2. Furthermore let the mass of gas be the same in each
container. Then ρ 2 will be equal to 5ρ1, since the gas in container 2 occupies only one-fifth
the volume of the gas in container 1.
If the simple expression I = I0e–µx is used, both µ and x will be different for the two vol-
umes. If I = I0e–(µ/ρ)ρx is used, then since the product ρx is constant, µ/ρ is also the same for
both containers, thus showing that it is determined by the types of molecule and not their
number density. Of course both equations will show that beam attenuation is the same in
both volumes. The dimensions of mass attenuation coefficient are m2kg–1.
It is important to emphasise that in radiological imaging the linear attenuation coeffi-
cient is the more relevant quantity.
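The argument of Figure 3.2 can be made concrete with a short Python sketch; the gas densities and the mass attenuation coefficient below are arbitrary illustrative numbers, not data from the text.

```python
import math

def transmitted_fraction(mass_att_coeff, density, thickness):
    """I/I0 = exp(-(mu/rho) * rho * x), with SI units m^2 kg^-1, kg m^-3 and m."""
    return math.exp(-mass_att_coeff * density * thickness)

mu_over_rho = 0.02   # m^2 kg^-1, arbitrary illustrative value
print(transmitted_fraction(mu_over_rho, density=1.2, thickness=0.5))  # container 1: length 5l
print(transmitted_fraction(mu_over_rho, density=6.0, thickness=0.1))  # container 2: 5x density, length l
# Both values are identical because only the product rho*x (mass per unit area) enters the exponent.
```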
may be much greater than the binding forces, in which case the latter may be discounted
and the electron behaves as if it were ‘free’. Since for the low atomic number elements
found in the body, the energy of one photon of X-rays is much higher than the binding
energy of even K shell electrons, most electrons can behave as if they are free when the
interaction is strong enough. However, the interaction is frequently much weaker, a sort
of glancing blow by the photon which involves only a fraction of its energy, and thus in
many interactions electrons behave as if they were bound. Hence interactions that involve
both bound and free electrons will occur under all circumstances and it is frequently the
relative contribution of each type of interaction that is important.
This simple picture allows two general statements to be made. First, the higher the energy
of the bombarding photons, the greater the probability that the interaction energy will
exceed the binding energy. Thus the proportion of interactions involving free electrons
can be expected to increase as the quantum energy of the radiation increases. Second, the
higher the atomic number of the bombarded atom, the more firmly its electrons are held
by electrostatic forces. Hence interactions involving bound electrons are more likely when
the mean atomic number of the stopping material is high.
very low photon energies and ending with the one that dominates at high photon energies.
Low photon energies are sometimes referred to as ‘soft’ X-rays, higher photon energies as
‘hard’ X-rays.
FIGURE 3.3
Schematic representation of the photoelectric effect. (a) Incident photon ejects electron leaving vacancy in shell.
(b) Higher energy electron fills vacancy and characteristic X-ray photon emitted. (c) Positively charged ion and
free electron (ion pair) produced.
is used to overcome the binding energy of the electron, the remainder is given to the elec-
tron as kinetic energy and is dissipated locally (Figure 3.3a) (see Section 3.2). Although the
photoelectric interaction can happen with electrons in any shell, it is most likely to occur
with the most tightly bound electron the photon is able to dislodge. The following equa-
tion describes the energy changes
hf = W + ½meu²
(where W is the binding energy of the electron and ½meu² the kinetic energy given to it).
However, this now leaves the residual atom in a highly excited state since there is a vacancy
in one of its orbital electron shells, often the K shell. One possibility is that the vacancy will
be filled by an electron of higher energy and characteristic X-ray radiation will be pro-
duced (Figure 3.3b) in exactly the same way as characteristic radiation is produced as part
of the X-ray spectrum from, say, a tungsten anode. Remaining behind is an atom minus
one electron, that is a positively charged ion (it now has one more proton than orbital
electrons). The liberated electron and this positively charged ion are sometimes referred to as an ion pair
(Figure 3.3c). The number of X-rays emitted, expressed as a fraction of the number of pri-
mary vacancies created in the atomic electron shells, is known as the fluorescence yield.
In high atomic number materials the fluorescence yield is high and quite appreciable
re-radiation of characteristic radiation in a manner similar to X-ray production may occur.
This factor is important in the choice of suitable materials for X-ray beam filtration (see
Section 3.9).
In lower atomic number materials any X-rays that are produced are of low energy (cor-
responding to low K shell energy) and are absorbed locally. Also the fluorescence yield
is low. Production of Auger electrons released from the outer shell of the atom is now
more probable. These electrons have energies ranging from a few to several hundred elec-
tron volts, so their ranges in tissue are short and the photoelectric interaction process now
results in total absorption of the energy of the initial photon. Note that the dense shower of
Auger electrons that is emitted deposits its energy in the immediate vicinity of the interaction
site. The resulting high local energy density can equal or even exceed that along the track
of an α particle with corresponding radiobiological damage (see Section 12.4).
Since the process is again concerned with bound electrons, it is favoured in materials of
high mean atomic number and the photoelectric mass attenuation coefficient τ/ρ is propor-
tional to Z³. The process is also favoured by low photon energies with τ/ρ proportional to
1/(hf )³. Notice that, as a result of the Z³ factor, at the same photon energy lead (Z = 82) has a
300 times greater photoelectric coefficient than bone (Z = 12.3). This explains the big diffe-
rence in µ values for these two materials at low photon energies as shown in Table 3.2.
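The factor of about 300 between lead and bone follows directly from the Z³ rule of thumb. A minimal Python sketch using the effective atomic numbers quoted in Table 3.2 (the bone/water comparison is an additional illustration, not a figure from the text):

```python
def photoelectric_ratio(z1, z2):
    """Approximate ratio of photoelectric mass attenuation coefficients at the same
    photon energy, using the tau/rho proportional to Z**3 rule of thumb."""
    return (z1 / z2) ** 3

print(round(photoelectric_ratio(82, 12.3)))      # lead vs bone: ~296, i.e. about 300
print(round(photoelectric_ratio(12.3, 7.5), 1))  # bone vs water/soft tissue: ~4.4
```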
The cross-section for a photoelectric interaction falls steeply with increasing photon energy
although the decrease is not entirely regular because of absorption edges (see Section 3.8).
Thus the photoelectric effect is the major interaction process at the low end of the diag-
nostic X-ray energy range.
FIGURE 3.4
Schematic representation of the Compton effect.
the photon has energy hf and momentum hf/c and makes a billiard ball type collision with
a stationary free electron, with both energy and momentum conserved (Figure 3.4).
The proportions of energy and momentum transferred to the scattered photon and to the
electron are determined by θ and ϕ. The kinetic energy of the electron is rapidly dissipated
by ionisation and excitation and eventually as heat in the medium and a scattered photon
of lower energy than the incident photon emerges from the medium—assuming no further
interaction occurs. Thus the process is one of scatter and partial absorption of energy.
The equation used most frequently to describe the Compton process is
λ′ − λ = (h/mec)(1 − cos ϕ)    (3.1)
where λ′ is the wavelength of the scattered photon and λ is the wavelength of the incident
photon. This equation shows that the change in wavelength ∆λ when the photon is scat-
tered through an angle ϕ is independent of photon energy. However, it may be shown that
the change in energy of the photon ∆E is given by
ΔE = (E²/mec²)(1 − cos ϕ)    (3.2)
Thus the loss of energy by the scattered photon does depend on the incident photon energy.
For example, when the photon is scattered through 60°, the proportion of energy taken by
the electron varies from about 2% at 20 keV to 9% at 100 keV and 50% at 1 MeV.
Remaining behind is a positively charged ion as in the photoelectric effect.
Insight
Energy Sharing in the Compton Process
The energy of a photon is related to its wavelength by E = hc/λ. So
    Δλ = hc/(E − ΔE) − hc/E ≈ (hc/E²)ΔE    (provided ΔE is much smaller than E)
Rearranging,
    ΔE = (E²/hc)Δλ = (E²/mec²)(1 − cos ϕ)    (from Equation 3.1)
Note that
    mec² = 9 × 10–31 × 9 × 1016 = 81 × 10–15 J
and since 1 J = 6.3 × 1018 eV,
    mec² = 81 × 10–15 × 6.3 × 1018 eV = 510 × 103 eV or 510 keV
(the more exact value is 511 keV, a number that will re-appear in Chapter 11 as the photon energy
of importance in positron emission tomographic imaging, PET).
Example 1: A 20 keV photon is scattered through 30°.
    ΔE = (20 × 20/510)(1 − cos 30°) = (400 × 0.14)/510 ≈ 0.11 keV
The photon loses only about one tenth of a keV of energy, which is taken away by the electron.
Example 2: A 100 keV photon is back scattered through 180°.
    ΔE = (100 × 100/510)(1 − cos 180°) = (10 000 × 2)/510 ≈ 39 keV
Thus the photon now loses 39% of its energy to the electron.
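The two worked examples can be reproduced with a few lines of Python. This is only a sketch of the approximate Equation 3.2 as used above; the electron rest energy is taken as 511 keV rather than the rounded 510 keV.

```python
import math

MEC2_KEV = 511.0   # electron rest mass energy in keV

def compton_energy_to_electron(photon_keV, angle_deg):
    """Energy (keV) given to the electron, approximate Equation 3.2:
    delta_E ~ (E**2 / me*c**2) * (1 - cos(phi)), valid when delta_E << E."""
    return photon_keV ** 2 / MEC2_KEV * (1 - math.cos(math.radians(angle_deg)))

print(compton_energy_to_electron(20, 30))     # ~0.1 keV  (Example 1)
print(compton_energy_to_electron(100, 180))   # ~39 keV   (Example 2)
```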
Since the process is one of attenuation with partial absorption, the variation in the amount
of energy absorbed in the medium, averaged over all scattering angles, with initial photon
energy depends on the following:
(1) The probability of an interaction (Figure 3.5a)
(2) The fraction of the energy going to the electron (Figure 3.5b)
(3) The fraction of the energy retained by the photon (Figure 3.5b)
To find out how much energy is absorbed in the medium, the Compton cross-section must
be multiplied by the percentage of energy transferred to the electron (Figure 3.5c). This
shows that there is an optimum X-ray energy for energy absorption by the Compton effect.
However, it is well above the diagnostic range.
FIGURE 3.5
(a) Variation of Compton cross-section with photon energy. (b) Percentage of energy transferred to the electron (dot-
ted line) and percentage retained by the photon (solid line) per Compton interaction as a function of photon energy.
(c) Product of (a) and (b) to give variation in total Compton energy absorption as a function of photon energy.
As shown by Equation 3.2, the amount of energy transferred to the electron depends
on the scattering angle ϕ. At diagnostic energies, the proportion of energy taken by the
electron, that is absorbed, is always quite small. For example, even a head-on collision
(ϕ = 180°) only transfers 8% of the photon energy to the electron at 20 keV. Thus at low ener-
gies Compton interactions cause primarily scattering and this will have implications when
the effects of scattered radiation on image contrast are considered. Although the energy of
scattered X-rays is always lower than the energy of the primary beam, in very low energy
work, for example mammography, the difference can be quite small.
One further consequence of Equation 3.2 is that when E is small, quite large values of
ϕ are required to produce appreciable changes ∆E. This is important in nuclear medi-
cine where pulse height analysis is used to detect changes in E and hence to discriminate
against scattered radiation.
For thicker objects, for example a patient, the situation is further complicated by the
fact that both the primary beam and the scattered radiation will be attenuated. Thus,
although in Figure 3.7 the polar diagrams of Figure 3.6 could be applied to each slice in
turn, because of body attenuation the X-ray intensity on slice Z may be only 1% of that on
slice A. In the example shown in Figure 3.7 the intensity of radiation scattered back at 150°
is 4 times higher than that scattered forward at 30° and is comparable with the intensity
FIGURE 3.6
Polar diagrams showing the spatial distribution of scattered X-rays around a free electron at two different
energies.
[Figure 3.7 near here: measured scatter fractions at 1 m from the phantom are 0.10% (30°), 0.11% (60°), 0.16% (90°), 0.25% (120°) and 0.40% (150°) of the incident surface dose.]
FIGURE 3.7
Distribution of scattered radiation around a patient-sized phantom of tissue equivalent material. The phantom
measured 30 × 30 cm by 22 cm deep and a 400 cm² field was exposed to 100 kVp X-rays. Intensities of scattered
radiation at 1 m are expressed as a percentage of the incident surface dose. (Adapted from McVey, G. The effect
of phantom type, beam quality, field size and field position on X-ray scattering simulated using Monte Carlo
techniques. Br. J. Radiol. 79, 130–141, 2006.)
in the primary transmitted beam. This work by McVey (2006) confirms the slightly earlier
work of Sutton and Williams (2000) on a RANDO phantom which produced essentially
the same factor of 4 difference.
Hence a high proportion of the scattered radiation emerging from the patient travels in a
backwards direction and this has implications for radiation protection, for example when
using an over-couch tube for fluoroscopy.
Insight
Factors Controlling the Effect of Compton-Scattered Photons on Image Quality
In very unfavourable conditions, scattered photons reaching the image receptor could contribute
as much as ten times more radiation to the image than the primary beam. Scatter has a seriously adverse
effect on contrast (see Section 6.5).
It is not possible to eliminate scatter completely but some measures can be taken. Working
through the imaging process, factors affecting scatter are as follows:
• Tube kV—The kVp has only a small effect on scatter production. As can be seen from
Figure 3.8 (see page 90) the production of scattered radiation by the Compton effect is
relatively constant throughout the diagnostic range. However, the scattered radiation reach-
ing the image receptor will rise appreciably as the kVp rises, because more of the scattered
photons move in a forward direction and, having a higher energy, they are less attenuated
(self absorbed) by the patient. Overall, as the kVp rises so does the scatter reaching the receptor,
but the dose to the patient needed to produce the image falls.
• Beam collimation—The effect of field size is an important factor determining scatter but in
practical terms is often more relevant to patient dose (see Chapter 13) than actually affect-
ing the scattered radiation reaching the imaging plane. As the size of the X-ray field on the
body increases from a very small value, the quantity of scattered radiation reaching the
imaging plane rises rapidly at first but then slowly reaches a maximum as the field continues
to increase in size. Increasing the field size beyond this point does not change the amount
of scatter reaching the image receptor. Although the total amount of scattered photons con-
tinues to rise as the beam size is increased, self absorption within the body stops scattered
photons travelling through to the image receptor. In practice this saturation point is at a field
size of about 30 × 30 cm.
• Body thickness—The total number of scattered photons will increase as the thickness of the
body in the beam increases but the amount reaching the image receptor reaches a constant
value for a given kVp as the scatter produced in the upper part of the body is absorbed by
the body. The thickness of the body part being imaged can only be influenced on a very
few occasions by the use of compression (see Section 6.7).
• Grids—This is the most effective form of scatter reduction at the receptor—see
Section 6.8.
• Air gap technique—This may also be useful in some circumstances, especially in high kV
radiography—see Section 9.3.3.
TABLE 3.3
Values of Charge/Mass Ratio for the Atoms of Various Elements in the
Periodic Table
H C N O P Ca Cu I Pb
Z 1 6 7 8 15 20 29 53 82
A 1 12 14 16 31 40 63 127 208
Z/A 1 0.5 0.5 0.5 0.5 0.5 0.5 0.4 0.4
of other atoms, low energy interactions frequently do not give the electron sufficient energy
to break away from these other forces. Thus in practice the Compton mass attenuation
coefficient is approximately constant in the diagnostic range and only begins to decrease
(σ/ρ ∝ 1/hf) for photon energies above about 100 keV.
σ/ρ is almost independent of atomic number. To understand why this should be so, recall
the information from Chapter 1 on atomic structure. The Compton effect is proportional
to the number of electrons in the stopping material. Thus if the Compton coefficient is
normalised by dividing by density, σ/ρ should depend on electron density. Now for any
material the number of electrons is proportional to the atomic number Z and the density
is proportional to the atomic mass A. Hence
σ/ρ ∝ Z/A
Examination of Table 3.3 shows that Z/A is almost constant for a wide range of elements
of biological importance, decreasing slowly for higher atomic number elements.
Hydrogen is an important exception to the ‘rule’ for biological elements. Materials that
are rich in hydrogen exhibit elevated Compton interaction cross-sections and this explains,
for example, the small but measurable difference in mass attenuation coefficients in the
Compton range of energies between water and air, even though their mean atomic num-
bers, approximately 7.5 and 7.8 respectively (see Section 3.5), are almost identical.
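The near-constancy of Z/A, and the hydrogen exception, can be checked directly from the entries in Table 3.3. A trivial illustrative sketch:

```python
# Z and A taken from Table 3.3; hydrogen stands out with Z/A = 1 while the rest are ~0.5 or a little less
elements = {"H": (1, 1), "C": (6, 12), "O": (8, 16), "Ca": (20, 40), "I": (53, 127), "Pb": (82, 208)}

for symbol, (z, a) in elements.items():
    print(f"{symbol}: Z/A = {z / a:.2f}")
```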
The Compton effect is a major interaction process at diagnostic X-ray energies, particu-
larly at the upper end of the energy range.
E = me– c² + me+ c²
Any additional energy possessed by the photon is shared between the two particles as
kinetic energy. Each particle can receive any fraction between all and nothing.
The electron dissipates its energy locally and has a range given by Table 1.2. The posi-
tron dissipates its kinetic energy but when it comes to rest it undergoes the reverse of the
formation reaction, annihilating with an electron to produce two 0.51 MeV gamma rays
which fly away simultaneously in opposite directions.
e+ + e– → 2γ (0.51 MeV)
These gamma rays (or annihilation radiation) are penetrating radiations which escape from
the absorbing material, so 1.02 MeV of energy is re-radiated. Hence the pair production
process is one of attenuation with partial absorption.
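A trivial numerical sketch of the energy bookkeeping just described; the 2 MeV incident photon is an assumed illustrative value, not one taken from the text.

```python
MEC2_MEV = 0.511          # electron (and positron) rest mass energy in MeV

threshold = 2 * MEC2_MEV                             # ~1.02 MeV, minimum energy for pair production
photon_energy = 2.0                                  # MeV, assumed illustrative value above threshold
kinetic_energy_shared = photon_energy - threshold    # absorbed locally by the electron and positron
re_radiated = 2 * MEC2_MEV                           # carried away as two 0.51 MeV annihilation photons

print(threshold, kinetic_energy_shared, re_radiated)  # 1.022 0.978 1.022 (MeV)
```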
Pair production is the only one of the four processes considered that shows a steady
increase in the chance of an interaction with increasing photon energy above 1.02 MeV
(but see Section 3.8). Since a large, heavy nucleus is required to remove some of the photon
momentum, the process is also favoured by high atomic number materials (the pair pro-
duction mass attenuation coefficient π/ρ ∝ Z).
Although pair production has no direct relevance in diagnostic radiology, the subsequent
annihilation process is important when positron emitters are used for in vivo imaging in
positron emission tomography (see Section 11.3.1). The characteristic 0.51 MeV gamma rays
are detected and since two are emitted simultaneously, coincidence circuits may be used
to discriminate against stray background radiation.
3.5 Combining Interaction Effects and Their Relative Importance
If each interaction process acted alone, the attenuation it produced could be written, for example,
I = I0e–(τ/ρ)ρx
I = I0e–(σ/ρ)ρx
and so on.
TABLE 3.4
A Summary of the Four Main Processes by Which X-rays and Gamma Rays Interact with Matter

Process           Normal Symbol   Type of            Variation with Photon         Variation with
                  for Process     Interaction        Energy (hf)                   Atomic Number (Z)
Elastic           ε/ρ             Bound electrons    ∝ 1/hf                        ∝ Z²
Photoelectric     τ/ρ             Bound electrons    ∝ 1/(hf)³                     ∝ Z³
Compton           σ/ρ             Free electrons     Almost constant 10–100 keV;   Almost independent of Z
                                                     ∝ 1/hf above 100 keV
Pair production   π/ρ             Promoted by        Rapid increase above          ∝ Z
                                  heavy nuclei       1.02 MeV
or
I = I0e–(τ/ρ + σ/ρ + …)ρx
that is
µ/ρ = τ/ρ + σ/ρ + …
where additional interaction coefficients can be added if they contribute significantly
to the value of µ/ρ. Hence the total mass attenuation coefficient is equal to the sum of
all the component mass attenuation coefficients obtained by considering each process
independently.
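A minimal Python sketch of this bookkeeping; the coefficient values are invented for illustration only and are not taken from any table in the text.

```python
import math

def total_transmission(component_mass_coeffs, density, thickness):
    """I/I0 when several interaction processes act together: the component
    mass attenuation coefficients (m^2 kg^-1) simply add in the exponent."""
    mu_over_rho = sum(component_mass_coeffs)
    return math.exp(-mu_over_rho * density * thickness)

# Purely illustrative coefficients: photoelectric (tau/rho) and Compton (sigma/rho) contributions
tau_over_rho, sigma_over_rho = 0.01, 0.02
print(total_transmission([tau_over_rho, sigma_over_rho], density=1.0e3, thickness=0.01))
```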
In the range of energies of importance in diagnostic radiology, the photoelectric effect
and the Compton effect are the only two interactions that need be considered (as shown
above). Since the latter process generates unwanted scattered photons of lower energy but
the former does not, and the former process is very dependent on atomic number but the
latter is not, it is clearly important to know the relative contributions of each in a given situa-
tion. Figure 3.8a, b shows photoelectric and Compton cross-sections for nitrogen, which has
FIGURE 3.8
(a) Compton (•) and photoelectric (°) interaction coefficients for nitrogen (Z = 7). (b) Compton (•) and photoelec-
tric (°) interaction coefficients for aluminium (Z = 13).
approximately the same atomic number as soft tissue, and for aluminium, with approxi-
mately the same atomic number as bone, respectively. Because the photoelectric coefficient
is decreasing rapidly with photon energy (note the logarithmic scale) there is a sharp
cross-over point. By 30 keV the Compton effect is already the more important process in soft
tissue.
Understanding the difference in relative importance of these two effects at different
energies is quite fundamental to appreciating the origins of radiological image contrast
that is caused by differences in atomic number in body materials. The photoelectric effect
is very dependent on Ζ, the Compton effect is not. Thus at energies where the photoelec-
tric effect dominates, for example, mammography, small changes in both mean atomic
number and in kV can have a big effect on contrast. At much higher energies, for example,
120 keV where the Compton effect dominates, image contrast will be relatively insensi-
tive to differences in atomic number and to small changes in kV. Note that contrast that
is caused by differences in density between different structures is not susceptible to this
effect.
For the higher atomic number material the photoelectric curve is shifted to the right but
the Compton curve remains almost unchanged and the cross-over point is above 50 keV.
This trend continues throughout the periodic table. For iodine which is the major interac-
tion site in a sodium iodide scintillation detector, the cross-over point is about 300 keV and
for lead it is about 500 keV. Note that even for lead, pair production does not become com-
parable with the Compton effect until 2 MeV and for soft tissues not until about 20 MeV.
Note that although calculating the combined effect of different interaction processes
to estimate attenuation is fairly straightforward, it is not so easy to work out an effec-
tive atomic number. This is a useful concept in both radiological imaging and in radia-
tion dosimetry when dealing with a mixture or compound. One wishes to quote a
single atomic number for a hypothetical material that would interact in exactly the
same way as the mixture or compound. Unfortunately, the effective atomic number of
a mix of elements will change as the photon energy increases from the region where
the photoelectric effect dominates to the region where the Compton effect dominates.
This is because in the photoelectric region the effective atomic number must be heav-
ily weighted in favour of the high Z components. No such weighting is necessary in
the Compton region. For further discussion on this point see Johns and Cunningham
(1983).
Finally, it is now possible to interpret more fully the data in Table 3.2. For any material,
the linear attenuation coefficient will decrease with increasing energy, initially because
the photoelectric coefficient is decreasing and subsequently because the Compton coeffi-
cient is decreasing. At low energy there is a very big difference between the µ values for
water and lead, partly because of the density effect but also because the photoelectric effect
dominates and this depends on Z³.
3.6 Broad Beam and Narrow Beam Attenuation
However, if a group of students were each asked to measure H½ for a beam of radiation
in a given attenuating material, they would probably obtain rather different results. This is
because the answer would be very dependent on the exact geometrical arrangement used
and whether any scattered radiation reached the detector.
Two extremes, narrow beam and broad beam conditions, are illustrated in Figure 3.9a, b.
For narrow beam geometry it is assumed that the primary beam has been collimated so that
the scattered radiation misses the detector. For broad beam geometry, radiation scattered
out of the primary beam that would otherwise have reached the detector, labelled A and A’, is
not recorded. However, radiation such as B and B’ which would normally have missed the
detector is scattered into it and multiple scattering may cause a further increase in detec-
tor reading. Hence a broad beam does not appear to be attenuated as much as a narrow
beam and, as shown in Figure 3.10, the value of H½ will be different. Absorbed radiation is
of course stopped equally by the two geometries. When stating H½ values, the conditions
(especially broad or narrow beam) should always be specified. It should be assumed that
modern data is for a narrow beam.
In radiology broad beam geometry is used when the image receptor is either film or a
two dimensional digital detector. Therefore an important consequence of the inferior broad
beam attenuation is the recommendation to reduce the field size to the smallest value con-
sistent with the required image. If a broader beam than necessary is used extra scattered
radiation reaches the receptor thereby reducing contrast (see Section 6.5). Reducing the
field size of course also reduces the total radiation energy absorbed in the patient and, in
interventional procedures, radiation scattered to staff.
FIGURE 3.9
Geometrical arrangement for study of (a) narrow beam and (b) broad beam attenuation.
FIGURE 3.10
Attenuation curves and values of H½ corresponding to the geometries shown in Figure 3.9.
TABLE 3.5
Effect of Broad and Narrow Beams on Transmission of 100 kVp
X-ray Photons (with 1 mm Cu Filtration) through 8 cm of Perspex
Narrow Beam Broad Beam
HVL cm 2.2 3.3 (5)
Transmitted intensity (%) 11 16
Insight
More on Half Value Layers
As explained in the main text, because of the variable contribution from scattered radiation,
the measured value of HVL, with the same radiation beam and absorber will vary depending
on the geometry of the measuring system. It is important to be aware of this when using HVL
measurements to predict some other parameter such as the filtration in an X-ray beam. Graphical
methods of finding the HVL will normally produce sufficiently accurate values to use for these
predictions if carried out carefully. Narrow beam geometry (as in Figure 3.9a) is the easiest to
reproduce but is not representative of the very broad beams used in radiology with most imaging
systems.
Because of the exponential nature of the attenuation process this difference can have a big
effect on transmission. To illustrate this the transmission of an X-ray beam operating at 100 kVp
with 1 mm Cu filtration and Perspex sheets acting as an attenuator, under broad and narrow beam
conditions, is shown in Table 3.5.
Figures for transmitted intensity show the big contribution of scattered radiation under broad
beam conditions to the beam intensity after attenuation. This can be very important when calcu-
lating the shielding in the walls of a radiation enclosure and the fundamental formula I = I0 e –μx is
changed to I = BI0e –μx where B is a ‘Build up factor’, the magnitude of which is generally obtained
from tables of shielding data for different materials. The value of B is a function of the photon
energy, the thickness of the attenuator, the distance from the attenuator to the measuring device
and the area of the beam. It can never be less than 1. As well as being important in shielding
calculations it is also important in quantitative calculations in digital imaging, for example, CT.
Note: This variation in HVL is quite different from the variation resulting from filtration and
beam hardening (see Section 3.9).
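The modified shielding formula I = BI0e–μx given above can be sketched as follows. The attenuation coefficient, thickness and build-up factor used here are assumed illustrative values only, since in practice B must be taken from shielding data for the photon energy, attenuator thickness, geometry and beam area in question.

```python
import math

def broad_beam_intensity(i0, mu, x, build_up=1.0):
    """I = B * I0 * exp(-mu * x); the build-up factor B (>= 1) accounts for scattered
    radiation reaching the detector and is obtained from shielding tables in practice."""
    return build_up * i0 * math.exp(-mu * x)

mu = 0.5          # mm^-1, assumed illustrative linear attenuation coefficient
thickness = 10.0  # mm
print(broad_beam_intensity(1.0, mu, thickness))                 # narrow beam estimate (B = 1)
print(broad_beam_intensity(1.0, mu, thickness, build_up=1.5))   # with an assumed B of 1.5
```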
TABLE 3.6
Effective Atomic Numbers and Densities for Body Constituents
Constituents Effective Atomic Number Density (kg m–3)
Air 7.8 1.2
Fat 6.6 0.9 × 103
Water 7.5 1.0 × 103
Muscle/soft tissue 7.6 1.0 × 103
Bone 13.8 1.8 × 103
Lung 7.4 2.4 × 102
As shown in Figure 3.11 there will be a substantial difference in the attenuating proper-
ties of a material on either side of the absorption edge. There are two reasons for the sud-
den increase in absorption with photon energy. First the number of electrons available for
release from the atom increases. However, in the case of lead the number available only
increases from 80 to 82 since the K shell only contains two electrons, and the increase in
absorption is proportionately much bigger than this. Thus a more important reason is that
a resonance phenomenon occurs whenever the photon energy just exceeds the binding
energy of a given shell. Since at 88–90 keV the photon energy is almost exactly equal to
that required to remove K shell electrons from lead, a disproportionately large number of
K shell interactions will occur and absorption by this process will be high.
Because of absorption edges, there will be limited ranges of photon energies for which
a material of low atomic number actually has a higher absorption coefficient than a mate-
rial of higher atomic number and this has a number of practical applications. For example
the presence of K absorption edges has an important influence on the selection of suitable
materials for intensifying screens, where a high absorption efficiency by the photoelectric
process is required. Although tungsten in a calcium tungstate screen has a higher atomic
number than the rare earth elements and therefore has an inherently higher mass absorp-
tion coefficient, careful comparison of the appropriate absorption curves (Figure 3.12)
shows that in the important energy range from 40 to 70 keV where for many investigations
there will be a high proportion of photons, absorption by rare earth elements is actually
higher than for tungsten.
FIGURE 3.11
(a) Variation in the mass attenuation coefficient for lead across the K-edge boundary. (b) Corresponding varia-
tion in transmission through 1 mm of lead. Note the use of logarithmic scale; at the absorption edge the trans-
mitted intensity falls by a factor of about 500.
[Figure 3.12 near here: the K absorption edges fall at 39 keV for lanthanum (LaOBr) and 69.5 keV for tungsten (CaWO4).]
FIGURE 3.12
Curves showing the relative absorptions of lanthanum oxybromide (LaOBr) and calcium tungstate (CaWO4) as
a function of X-ray energy in the vicinity of their absorption edges (not to scale).
1. In the use of iodine (Z = 53, K-edge = 33 keV) and barium (Z = 56, K-edge = 37 keV)
as contrast agents. Typical diagnostic X-ray beams contain a high proportion of pho-
tons at or just above these energies, thus ensuring high absorption coefficients.
2. The use of an erbium filter (Z = 68, K-edge = 57.5 keV) for paediatric radiography.
3. The presence of absorption edges also has a significant effect on the variation of
sensitivity of photographic film with photon energy (see Section 14.7.1).
FIGURE 3.13
Curves showing the effect of filters on the quality (spectral distribution) of an X-ray beam. (a) Spectrum of the
emergent beam generated at 100 kVp with inherent filtration equivalent to 0.5 mm Al. (b) Effect of an ideal filter
on this spectrum. (c) Effect of total filtration of 2.5 mm aluminium on the spectrum.
increasing the dose to the patient. The effect of an ideal low energy filter is shown in
Figure 3.13a, b, but, in practice, no filter completely removes the low energy radiation or
leaves the useful component unaffected.
The position of the K absorption edge must also be considered when choosing a filter
material. For example, tin, with a K-edge of 29 keV, will transmit 25–29 keV photons rather
efficiently and these would be undesirable in, for example, a radiographic exposure of the
abdomen. Aluminium (Z = 13, K-edge = 1.6 keV) is the material normally chosen for filters
in the diagnostic range. Aluminium is easy to handle, and ‘sensible’ thicknesses of a few
mm are required. Photons of energy less than 1.6 keV, including the characteristic radia-
tion from aluminium, will either be absorbed in the X-ray tube window or in the air gap
between filter and patient. The effect of 2.5 mm of aluminium on a 100 kV beam from a
tungsten target is shown in Figure 3.13c.
Finally, the thickness of added filtration will depend on the operating kVp and the inher-
ent filtration. The inherent filtration is the filtration caused by the glass envelope, insulat-
ing oil and window of the X-ray tube itself and is usually equivalent to about 0.5–1 mm
of aluminium. Note that the thickness of aluminium that is equivalent to this inherent
filtration will vary with kV. The total filtration should be at least 1.5 mm Al for tubes oper-
ating up to 70 kVp (e.g. dental units) and at least 2.5 mm Al for tubes capable of operating
at higher kVp. Note that if a heavily filtered beam is required at high kV 0.5 mm copper
(Z = 29) may be preferred. However, characteristic X-rays at 9 keV will now be produced as
a result of the photoelectric effect (Section 3.4.2) so aluminium will be required too as these
could contribute to the patient skin dose.
When a beam passes through a filter, it becomes more penetrating or ‘harder’ and its H½
increases. If log (intensity) is plotted against the thickness of absorber, the curve will not
be a straight line as predicted by I = I0e–µx. It will fall rapidly at first as the ‘soft’ radiation
is removed and then more slowly when only the harder, high energy component remains
(Figure 3.14a). Note that the beam never becomes truly monochromatic but after about
four HVLs, that is when the intensity has been reduced to about one-sixteenth of its origi-
nal value, the spread of photon energies in the beam is quite small. The value of H½ then
[Figure 3.14a also shows, for comparison, the linear attenuation of the softest and of the hardest components of the beam, each assumed monoenergetic.]
FIGURE 3.14
(a) Variation of intensity (Ix) with absorber thickness (x) as an heterogeneous beam of X-rays becomes progres-
sively harder on passing through attenuating material (plotted on a log scale). (b) Corresponding change in the
value of H½.
TABLE 3.7
Skin Dose and Exposure Time for Comparable Density Radiographs of a Pelvic
Phantom (18 cm thick) Using a 60 kVp Beam with Different Filtration
Aluminium Filtration (mm) Skin Dose in Air (mGy) Exposure Time (s) at 100 mA
None 20.7 1.4 (1)
0.5 16.1 1.6 (1)
1.0 11.0 1.6 (4)
3.0 4.1 2.1 (4)
Source: Adapted from Trout E D, Kelley J P and Cathey G A. The use of filters to control
radiation exposure to the patient in diagnostic radiology. Am. J. Roentgenol. 67,
946–963, 1952.
becomes constant within the accuracy of measurement and for practical purposes the beam
is monochromatic (Figure 3.14b).
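This hardening behaviour can be mimicked with a toy model. The sketch below assumes a beam made of just two monoenergetic components whose weights and attenuation coefficients are invented for illustration; it shows the effective H½ rising with filtration and then levelling off, as in Figure 3.14b.

```python
import math

# Toy heterogeneous beam: (relative intensity, linear attenuation coefficient in aluminium, mm^-1)
components = [(0.5, 0.5),    # 'soft' component, strongly attenuated
              (0.5, 0.05)]   # 'hard' component, weakly attenuated

def intensity(x_mm):
    return sum(w * math.exp(-mu * x_mm) for w, mu in components)

def local_hvl(x_mm, dx=0.01):
    """Extra thickness of aluminium needed to halve the intensity reaching depth x."""
    target = intensity(x_mm) / 2
    extra = 0.0
    while intensity(x_mm + extra) > target:
        extra += dx
    return extra

for depth in (0, 5, 10, 20, 40):
    print(depth, round(local_hvl(depth), 1))
# The local H1/2 rises with filtration and levels off near ln2/0.05 ~ 13.9 mm once only
# the hard component survives.
```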
The effect of filtration on skin dose is shown in Table 3.7. There is clearly a substantial
benefit to be gained in terms of patient dose from the use of filters. Note, however, the final
column of the table which shows that there is a price to be paid. Although heavy filtration (3
mm Al) reduces the skin dose even more than 1 mm filtration, the X-ray tube output begins
to be affected and this is reflected in the increased exposure time. A prolonged exposure
may not be acceptable; but if an attempt is made to restore the exposure time to its unfil-
tered value by increasing the tube current, there may be problems with the tube rating.
Insight
More on Filtration and Tube Rating
It should be noted that the results in Table 3.7 were obtained using an X-ray tube that would now
be considered of an old design, which would have been a single phase unit with, at best, full
rectification. For a modern unit there would be two important differences:
(1) The technical advances in X-ray tube design discussed in Chapter 2 have resulted in power-
ful X-ray tubes with much greater intensity. The loss of intensity because of filtration then
becomes less of a problem—typically, with a two and a half times increase in tube power
an additional 0.5 mm of Cu filtration may be used without loss of image quality during
screening but with a consequent saving in patient dose.
(2) The effect of filtration on the spectrum for a modern three-phase unit or high frequency unit
would also be less. This is because an older unit, with high voltage ripple, would produce
many of the photons whilst the tube was operating below its kVp. With almost zero ripple
in the kVp these X-ray photons would be absent. Note that the inherent low energy photons
arising from the mechanism of X-ray production itself would still be present.
At the higher kVps currently used for pelvic radiographs, increasing the filtration from 1 mm to
3 mm aluminium would have a very much smaller effect on exposure time.
Thus far the discussion has been concerned with the use of filtration to remove the low
energy part of the spectrum. Occasionally, for example, when scatter is likely to be a prob-
lem, it may be desirable to remove the high energy component of the spectrum. This is dif-
ficult because the general trend for all materials is for the linear attenuation coefficient to
decrease with increasing keV. However, it is sometimes possible to exploit the absorption
edge discussed in Section 3.8. The K shell energy and hence the absorption edge increases
with increasing atomic number and for elements in the middle of the periodic table, for
example, gadolinium (Z = 64, K-edge = 50 keV) and erbium (Z = 68, K-edge = 57.5 keV), this
absorption may remove a substantial proportion of the higher energies in a conventional
spectrum. Figure 3.15 shows the effect of a 0.25 mm thick gadolinium filter on the spec-
trum shown in Figure 3.13c.
As with low energy filtration, the output of the useful beam is reduced so there is an
adverse effect on tube loading. Thus the technique is perhaps best suited to thin body
parts where scatter is a problem and a grid is undesirable because of the increased dose to
the patient. Paediatric radiology is a good example.
A K-edge filter may also sometimes enhance the effect of a contrast agent, for example,
iodine. Refer again to Figure 3.12 and imagine the lanthanum curve to be replaced by the
absorption curve for iodine (Z = 53, K-edge = 33 keV) and the tungsten curve replaced by
FIGURE 3.15
The effect of a 0.25 mm gadolinium filter on a conventionally filtered X-ray spectrum (solid curve). The dotted
curve is taken from Figure 3.13c.
FIGURE 3.16
The effect of a 0.05 mm molybdenum filter on the spectrum from a tube operating at 35 kV constant potential
using a molybdenum target (dotted line = no filter; solid line = with filter).
the curve for erbium. The curves would be very similar with both edges displaced slightly
to the left. Thus in the energy range from 33 to 57.5 keV, the erbium filter would be trans-
mitting X-rays freely, whilst absorbing higher energies, whereas the iodine contrast in the
body would absorb heavily in the 33–57.5 keV range.
Because materials are relatively transparent to their own characteristic radiation, the effect
of filtration can be rather dramatic when the filter is of the same material as the target anode
producing the X-rays. This effect is exploited in mammography and Figure 3.16 shows how
a 0.05 mm molybdenum filter changes the spectrum from a tube operating at 35 kV constant
potential using a molybdenum target. Note that the output is not only near monochromatic,
it contains a high proportion of characteristic radiation. This component of the spectrum
does not drift with kVp, for example, with generator performance, and this helps to keep the
soft tissue contrast constant. For further discussion see Section 9.2.2.
3.10 Conclusions
In this chapter both experimental and theoretical aspects of the interaction of X-rays and
gamma rays with matter have been discussed. In attenuating materials the intensity of
such beams decreases exponentially, provided they are monochromatic or near mono-
chromatic, at a rate determined by the density and mean atomic number of the attenuator
and the photon energy. For a heterogeneous diagnostic X-ray beam the situation is more complex:
the effective linear attenuation coefficient of the material falls as the beam is hardened, that is
as the lower energy (soft) X-ray photons are preferentially absorbed.
The two most important interaction processes are the photoelectric effect and the Compton
effect. The former is primarily responsible for differences in attenuation (contrast) at low
photon energies and its effect is very dependent on atomic number. However, the photoelec-
tric effect decreases rapidly with increasing photon energy and when the Compton effect
dominates, only differences in density cause any appreciable difference in attenuation.
TABLE 3.8
Optimum kVp for Different Procedures
Procedure Optimum kVp (± 5)
Mammography 30
Extremities 55
Dental 65
Thoracic spine 70
Lateral lumbar spine 80
Chest PA 85
For imaging, the photoelectric effect is the more desirable because it discriminates on
the basis of tissue density and atomic number. The Compton effect is less desirable (though
unavoidable) because it discriminates only on the basis of density. It also generates scat-
tered lower energy radiation which is undesirable because it reduces contrast in the radio-
graph and constitutes a radiation hazard to staff.
The ideal kVp is one which allows sufficient intensity of X-rays to penetrate the patient to
form the image with adequate variation between tissues to give the required contrast. Since
tissue penetration increases with tube kV but contrast decreases (and scatter increases)
choice of kVp for a particular imaging procedure is a compromise. A further complication
is introduced when patient dose also has to be taken into account. Filtering the beam to
remove the low energy photons hardens the beam.
Suggested optimum kVps for some standard radiographic procedures are shown in
Table 3.8.
Note
1. For contrast studies the kVp should ideally be chosen so that the peak intensity in
the X-ray spectrum is as close as possible to the energy of the K shell absorption
edge of the contrast material. The requirement to penetrate the patient sometimes
means this cannot be adhered to.
2. The choice of optimum kVp may have to be modified because of the energy
response of the image receptor (see Section 5.4.1).
References
Johns H E and Cunningham J R. 1983 The Physics of Radiology, 4th ed. Thomas, Springfield.
McVey G. 2006 The effect of phantom type, beam quality, field size and field position on X-ray scat-
tering simulated using Monte Carlo techniques. Br. J. Radiol. 79: 130–141.
RCR Working Party. 2007 Making the Best Use of Clinical Radiology Services. The Royal College of
Radiologists, London.
Sutton D G and Williams J R. 2000 Radiation Shielding for Diagnostic X-Rays. British Institute of
Radiology.
Trout E D, Kelley J P and Cathey G A. 1952 The use of filters to control radiation exposure to the
patient in diagnostic radiology. Am. J. Roentgenol. 67: 946–963.
Further Reading
Bushberg J T, Seibert J A, Leidholdt E M Jr and Boone J M. 2002 The Essential Physics of Medical Imaging,
2nd edn. Lippincott, Williams and Wilkins, Philadelphia, pp 31–60, Chapter 3.
Dowsett D J, Kenny P A and Johnson R E. 2006 The Physics of Diagnostic Imaging, 2nd edn. Hodder
Arnold, London, pp 113–142, Chapter 5.
Exercises
1. Explain the terms
(a) Inverse square law
(b) Linear attenuation coefficient
(c) Half value thickness
(d) Mass absorption coefficient
Indicate the relationships between them, if any.
2. Describe the process of Compton scattering, explaining carefully how both atten-
uation and absorption of X-rays occur.
3. Describe the variation of the Compton attenuation coefficient and Compton
absorption coefficient with scattering angle in the energy range 10–200 keV.
4. How does the process of Compton scattering of X-rays depend on the nature of the
scattering material and upon X-ray energy? What is the significance of the process
in radiographic imaging?
5. An X-ray beam loses energy by the processes of absorption and/or scattering.
Discuss the principles involved at diagnostic X-ray energies and explain how the
relative magnitude of the processes is modified by different types of tissue.
6. Explain why radiographic exposures are usually made with an X-ray tube voltage
in the range 50–110 kVp.
7. If the mass attenuation coefficient of aluminium at 60 keV is 0.028 m²kg–1 and its
density is 2.7 × 10³ kg m–³, estimate the fraction of a monoenergetic incident beam
transmitted by 2 cm of aluminium.
8. A parallel beam of monoenergetic X-rays impinges on a sheet of lead. What is the
origin of any lower energy X-rays which emerge from the other side of the sheet
travelling in the same direction as the incident beam?
9. What is meant by characteristic radiation? Describe briefly three situations in
which characteristic radiation is produced.
10. How would a narrow beam of 100 kV X-rays be changed as it passed through a
thin layer of material? What differences would there be if the layer were
(a) 1 mm lead (Z = 82, ρ = 1.1 × 10⁴ kg m⁻³)
(b) 1 mm aluminium (Z = 13, ρ = 2.7 × 10³ kg m⁻³)
11. Before the X-ray beam generated by electrons striking a tungsten target is used for
radiodiagnosis it has to be modified. How is this done and why?
12. What factors determine whether a particular material is suitable as a filter for
diagnostic radiology?
13. A narrow beam of X-rays from a diagnostic set is found experimentally to have a
half value thickness of 2 mm of aluminium. What would happen to
(a) The half value of thickness of the beam
(b) The exposure rate
if an additional filter of 1 mm aluminium were placed close to the X-ray source?
14. Discuss the advantages and disadvantages of using aluminium as the filter mater-
ial in X-ray sets at 20, 80 and 110 kVp generating potentials.
15. Compare the output spectra produced by a tungsten target and a copper target
operating at 60 kVp. What would be the effect on these spectra of using
(a) An aluminium filter
(b) A lead filter
16. The dose rate in air at a point in a narrow beam of X-rays is 0.3 Gy min–1. Estimate,
to the nearest whole number, how many half value thicknesses of lead are required
to reduce the dose rate to 10 –6 Gy min–1. If H½ at this energy is 0.2 mm, what is the
required thickness of lead?
17. What is the ‘lead equivalence’ of a material?
18. Explain what you understand by the homogeneity of an X-ray beam and describe
briefly how you would measure it.
19. What is meant by an inhomogeneous beam of X-rays and why does it not obey the
law of exponential attenuation with increasing filtration?
4
Radiation Measurement
SUMMARY
• Measurement of the intensity of an X-ray beam must be made in terms of
measurable physical, chemical or biological changes the X-rays may cause.
• The importance of ionisation in air as the primary radiation standard is
explained.
• Measuring instruments that depend on the principle of ionisation in air are
described.
• The relationship between exposure in air and absorbed dose is explained.
• Radiation monitors which depend on other physical principles (semi-
conductors and scintillation crystals) are described.
• Measurement of radiation spectra is described and the variation in sensitiv-
ity of some detectors with photon energy is explained.
CONTENTS
4.1 Introduction......................................................................................................................... 106
4.2 Ionisation in Air as the Primary Radiation Standard................................................... 107
4.3 The Ionisation Chamber.................................................................................................... 108
4.4 The Geiger–Müller Counter.............................................................................................. 111
4.4.1 The Geiger–Müller Tube........................................................................................ 112
4.4.2 Comparison of Ionisation Chambers and Geiger–Müller Counters............... 113
4.4.2.1 Type of Radiation..................................................................................... 113
4.4.2.2 Sensitivity.................................................................................................. 113
4.4.2.3 Nature of Reading.................................................................................... 113
4.4.2.4 Size............................................................................................................. 114
4.4.2.5 Robustness and Simplicity..................................................................... 114
4.5 Relationship between Exposure and Absorbed Dose................................................... 114
4.6 Practical Radiation Monitors............................................................................................. 117
4.6.1 Secondary Ionisation Chambers........................................................................... 117
4.6.2 Dose Area Product Meters.................................................................................... 119
4.6.3 Pocket Exposure Meters for Personnel Monitoring........................................... 119
4.7 Semi-conductor Detectors................................................................................................. 120
4.7.1 Band Structure of Solids........................................................................................ 120
4.7.2 Mode of Operation.................................................................................................. 121
4.7.3 Uses of the Silicon Diode....................................................................................... 123
4.1 Introduction
Lord Kelvin (1824–1907), who is probably best remembered for the absolute thermo-
dynamic scale of temperature, is reported to have stated on one occasion: ‘Anything that
cannot be expressed in numbers is valueless’. In view of the potentially harmful effect of
X-rays it is particularly important that methods should be available to ‘express in numbers’
the ‘strength’ or intensity of X-ray beams.
With respect to measurement, three separate features of an X-ray beam must be identified.
The first consideration is the flux of photons travelling through air from the anode towards
the patient. The ionisation produced by this flux is a measure of the radiation exposure. If
expressed per unit area per second it is the intensity. Of more fundamental importance as
far as the biological risk is concerned is the absorbed dose of radiation. This is a measure of the
amount of energy deposited as a result of ionisation processes. Finally, it may be important
to know about the energy of the individual photons. Because of the mechanism of produc-
tion, an X-ray beam will contain photons with a wide range of energies. A complete speci-
fication of the beam would require determination of the full spectral distribution as shown
in Figure 2.2. This represents information about the quality of the X-ray beam.
Clearly the intensity of an X-ray beam must be measured in terms of observable physi-
cal, chemical or biological changes that the beam may cause, so it will be useful to review
briefly relevant properties of X-rays. Two of them are sufficiently fundamental to be clas-
sified as primary properties—that is to say measurements can be made without reference to
a standard beam.
1. Heating effect. X-rays are a form of energy which can be measured by direct con-
version into heat. Unfortunately, the energy associated with X-ray beams used in
diagnostic radiology is so low that the temperature rise can scarcely be measured
(see Section 12.2.1).
2. Ionisation. In the diagnostic energy range X-rays cause ionisation by photoelec-
tric and Compton interactions in any material through which they pass. Pair
production is only important at higher energies. The number of ions produced
in a fixed volume under standard conditions of temperature and pressure will
be fixed.
A number of other properties of X-rays can be, and often are, used for dosimetry. In all these situations, however, it is necessary for the system to be calibrated by first measuring its response to beams of X-rays of known intensity, so these are usually called secondary properties.
3. Physical effects. When X-rays interact with certain materials, visible light is emit-
ted. The light may either be emitted immediately following the interaction
(fluorescence); after a time interval (phosphorescence); or, for some materials, only
upon heating (thermoluminescence).
4. Physico-chemical effects. The action of X-rays on photographic film is well-known
and widely used.
5. Chemical changes. X-rays have oxidising properties, so if a chemical such as ferrous
sulphate is irradiated, the free ions that are produced oxidise some Fe++ to Fe+++.
This change can readily be detected by shining ultraviolet light through the solu-
tion. This light is absorbed by Fe+++ but not by Fe++.
6. Biochemical changes. Enzymes rely for their action on the very precise shape associ-
ated with their secondary and tertiary structure. This is critically dependent on
the exact distribution of electrons, so enzymes are readily inactivated if excess free
electrons are introduced by ionising radiation.
7. Biological changes. X-rays can kill cells and bacteria, so, in theory at least, irradiation
of a suspension of bacteria followed by an assay of survival could provide a form
of biological dosimeter.
Unless specifically stated otherwise, in the remainder of this chapter references to X-rays
apply equally to gamma rays of the same energy.
None of the properties of ionising radiation satisfies all these requirements perfectly but
ionisation in air comes closest and has been internationally accepted as the basis for radi-
ation dosimetry. There are two good reasons for choosing the property of ionisation.
1. It is a very sensitive process. Only about 34 eV is needed to create one ion pair in air, so if, for example, a 100 keV photon is completely absorbed, almost 3000 ion pairs will have been formed when all the secondary ionisation has taken place.
2. As shown in Chapter 12, the extreme sensitivity of biological tissues to radiation is
directly related to the process of ionisation so it has the merit of relevance as well
as sensitivity.
There are also good reasons for choosing to make measurements in air:
1. It is readily available.
2. Its composition is close to being universally constant.
3. More important, for medical applications, the mean atomic number of air (Z = 7.6)
is very close to that of muscle/soft tissue (Z = 7.4). Thus, provided ionisation and
the associated process of energy absorption is expressed per unit mass by using
mass absorption coefficients rather than linear absorption coefficients, results in
air will be closely similar to those in tissue.
The unit of radiation exposure (X) is defined as that amount of radiation which produces in air ions carrying a total charge of either sign equal to 1 C (coulomb) per kilogram. Expressed in simple mathematical terms:
X = ΔQ/Δm
where ∆Q is the sum of all the electrical charges on all the ions of one sign produced
in air when all the electrons liberated in a volume of air whose mass is ∆m are com-
pletely stopped in air. The last few words (‘are completely stopped in air’) are extremely
important. They mean that if the electron generated by say a primary photoelectric
interaction is sufficiently energetic to form further ionisations (normally it will be), all
the associated ionisations must occur within the collection volume and all the electrons
contribute to ∆Q.
The older, obsolescent unit of radiation exposure is the roentgen (R). One roentgen is that
exposure to X-rays which will release one electrostatic unit of charge in one cubic centime-
tre of air at standard temperature and pressure (STP). Hence 1 R = 2.58 × 10 –4 C kg–1.
FIGURE 4.1
The free air ionisation chamber.
FIGURE 4.2
Simplified electrical circuits for measuring (a) current flow (exposure rate), (b) total charge (exposure).
Since 1 ml of air weighs 1.3 × 10⁻⁶ kg at STP, a chamber of capacity 100 ml contains 1.3 × 10⁻⁴ kg of air. A typical exposure rate might be 2.5 µC kg⁻¹ h⁻¹ (a dose rate of approximately 0.1 mGy h⁻¹, see Section 4.5) which corresponds to a current flow of (2.5 × 10⁻⁶/3600) × 1.3 × 10⁻⁴ C s⁻¹ or about 10⁻¹³ A.
If R = 10¹⁰ Ω, since V = IR then V ≈ 1 mV, which is not too difficult to measure. However, the voltmeter must have an internal resistance of at least 10¹³ Ω so that no current flows through it and this is quite difficult to achieve.
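The arithmetic above can be checked with a few lines of Python. This is only an illustrative sketch using the figures quoted in the text (100 ml chamber, 2.5 µC kg⁻¹ h⁻¹, R = 10¹⁰ Ω); none of the variable names come from the book.

```python
# Sketch: ionisation current and measured voltage for a 100 ml free air chamber
air_mass = 1.3e-4        # kg of air in a 100 ml chamber at STP
exposure_rate = 2.5e-6   # exposure rate, C per kg per hour
resistance = 1e10        # ohms

current = (exposure_rate / 3600) * air_mass   # charge collected per second, amps
voltage = current * resistance                # V = IR

print(f"current ~ {current:.1e} A")           # ~9e-14 A, i.e. about 1e-13 A
print(f"voltage ~ {voltage * 1e3:.2f} mV")    # ~0.9 mV, i.e. about 1 mV
```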
Since the free air ionisation chamber is a primary standard for radiation measure-
ment, accuracy better than 1% (i.e. more precision than the figures quoted here) is
required.
Although it is a simple instrument in principle, great care is required to achieve such
precision and a number of corrections have to be applied to the raw data.
Insight
Corrections to the Ionisation Chamber Reading
Corrections must be made if the air in the chamber is not at STP. For air at pressure P and tempera-
ture T, the true reading RT is related to the observed reading by
RT = R0 (P0/P) × (T/T0)
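A minimal sketch of this correction in Python, taking STP as P0 = 101.325 kPa and T0 = 273.15 K (these reference values, and the function name, are assumptions for illustration only):

```python
P0_KPA = 101.325   # standard pressure (assumed reference value)
T0_K = 273.15      # standard temperature (assumed reference value)

def corrected_reading(observed, pressure_kpa, temperature_k):
    """Temperature/pressure correction: R_T = R_0 * (P0/P) * (T/T0)."""
    return observed * (P0_KPA / pressure_kpa) * (temperature_k / T0_K)

# Example: a reading of 1.00 (arbitrary units) taken at 20 degrees C and 100 kPa
print(corrected_reading(1.00, pressure_kpa=100.0, temperature_k=293.15))  # ~1.09
```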
The requirement for precision also creates design difficulties. For example, great care must
be taken, using guard rings and guard wires (see Figure 4.1), to ensure that the electric
field is always precisely normal to the plates. Otherwise, electrons from within the defined
volume may miss the collecting plate or, conversely, may reach the collecting plates after
being produced outside the defined volume.
Major difficulties arise as the X-ray photon energy increases, especially above about
300 keV, because of the ranges of the secondary electrons (see Table 1.4). Recall that all
the secondary ionisation must occur within the air volume. If the collecting volume is
increased, eventually it becomes impossible to maintain field uniformity.
Thus the free air ionisation chamber is very sensitive in the sense that one ion pair is
created for the deposition of a very small amount of energy. However, it is insensitive
when compared to solid detectors that work on the ionisation principle because air is a
poor stopping material for X-rays. It is also bulky and operates over only a limited range
of X-ray energies. However, it is a primary measuring device and all other devices must be
calibrated against it.
Ionisation chambers are used to measure the dose rates and doses produced during
the routine testing of diagnostic imaging systems. To accommodate the wide variation
in field size and intensity, ionisation chambers of several different sizes have to be used.
FIGURE 4.3
Variation in current appearing across capacitor plates with applied potential difference for a fixed X-ray beam
intensity. (AB) Loss of ions by recombination. (BC) Ionisation plateau. (CD) Proportional counting. (EF) Geiger–
Müller region. Beyond F—continuous discharge. The voltage axis shows typical values only.
FIGURE 4.4
Amplification of ionisation by the electric field.
measure radiation exposure. However, very precise voltage stabilisation is required since the amplification factor changes rapidly with small voltage changes. For this reason, very few portable radiation-measuring instruments that are required to measure precisely are designed to work in this region.
Beyond D the amplification increases rapidly until the so-called Geiger–Müller (GM)
plateau is reached at EF. Beyond F there is continuous discharge.
1. In Figure 4.3, the GM plateau was attained by applying a high voltage between
parallel plates. However, it is the electric field E = V/d, where d is the distance
between the plates, that accelerates electrons. High fields can be achieved more
readily using a fine wire anode since near the wire E varies as 1/r, where r is the radial distance from the wire axis. Thus the electric field is very high close to a thin wire anode even for a working voltage of only 300–400 V. When working on the GM plateau EF the count
rate changes only slowly with applied voltage so very precise voltage stabilisation
is not necessary.
2. The primary electrons are accelerated to produce an avalanche as in the propor-
tional counter, but in the avalanche discharge excited atoms as well as ions are
formed. They lose this excitation energy by emitting X-ray and ultraviolet photons
which liberate outer electrons from other gas atoms creating further ion pairs by
a process of photoionisation. As these events may occur at some distance from the
initial avalanche, the discharge is spread over the whole of the wire. Because of the
high electric fields in the GM tube, the positive ions reach the cathode in sufficient
FIGURE 4.5
Essential features of an end-window Geiger–Müller tube suitable for detecting β particles.
numbers and with sufficient energies to eject electrons. These electrons initiate
other pulses which recycle in the counter producing a continuous discharge.
3. The continuous discharge must be stopped before another pulse can be detected.
This is done by adding a little alcohol or bromine to the counting gas which is
either helium or argon at reduced pressure. The alcohol or bromine molecules
‘quench’ the discharge because their ionisation potentials are substantially less
than those of the counting gas. During collisions between the counting gas ions
and the quenching gas molecules, the ionisation is transferred to the latter. When
these reach the cathode, they are neutralised by electrons extracted by field emis-
sion from the cathode. The electron energy is used up in dissociating the molecule
instead of causing further ionisation. The alcohol or bromine also has a small
effect in quenching some of the ultraviolet photons.
4. The discharge is also quenched because a space charge of positive ions develops
round the anode, thereby reducing the force on the electrons.
5. Finally, quenching can be achieved by reducing the external anode voltage using
an external resistor and this is triggered by the early part of the discharge.
Once discharge has been initiated, and during the time it is being quenched, any further
primary ionisation will not be recorded as a separate count. The instrument is effectively
‘dead’ until the externally applied voltage is restored to its full value, typically after about
300 µs. This is known as the dead time.
Thus the true count rate is always higher than the recorded count rate. The difference is minimal at 10 counts per second but at 1000 counts per second the monitor is dead for 1000 × 300 × 10⁻⁶ = 0.3 s in every second and losses become appreciable.
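The size of the loss can be estimated with the standard non-paralysable dead-time correction, n = m/(1 − mτ), where m is the recorded count rate and τ the dead time. This correction formula is not given in the text, so the short sketch below is offered only as an illustration of the figures quoted above.

```python
def true_count_rate(measured_rate, dead_time=300e-6):
    """Non-paralysable dead-time correction: n = m / (1 - m * tau)."""
    return measured_rate / (1.0 - measured_rate * dead_time)

for m in (10, 1000):                      # recorded counts per second
    print(m, "->", round(true_count_rate(m), 1))
# 10 -> 10.0     (losses negligible)
# 1000 -> 1428.6 (the tube is dead for 0.3 s in every second)
```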
4.4.2.2 Sensitivity
Because of internal amplification, the GM tube is much more sensitive than the
ionisation chamber and may be used to detect low levels of contamination (but see
Section 4.8).
4.4.2.4 Size
The ionisation chamber must be big enough to collect all secondary electron ionisations.
Since the GM tube does not have the property of proportionality, there is no point in mak-
ing it large and it can be much more compact.
Insight
Compensated GM Tubes
A ‘Compensated Geiger’ can be used as a radiation monitor. This instrument uses a specially
designed Geiger tube called an energy compensated Geiger tube, in which the pulse rate can be
related to the exposure. Dose rate meters based on an energy compensated Geiger tube are, typi-
cally, only suitable for measuring photons with an energy between 60 keV and 1.25 MeV. Below
60 keV the sensitivity of the Geiger tube falls quite rapidly and the instrument under-records. Therefore it cannot be used for making measurements in diagnostic X-ray departments, even when the primary photons are of high energy, because the radiation field also contains scattered radiation at a much lower energy.
DA (Gy) = 34 E (C kg⁻¹)

DT/DA = (µa/ρ)T / (µa/ρ)A

where (µa/ρ)T and (µa/ρ)A are the mass absorption coefficients of tissue and air, respectively.
Insight
Conversion of Exposure in Air to Dose in Air
A term that is being used increasingly in radiation dosimetry is KERMA. This stands for kinetic
energy released per unit mass and must specify the material concerned. Note that KERMA places
the emphasis on removal of energy from the beam of indirectly ionising particles (X- or gamma
photons) to create secondary electrons. Absorbed dose relates to where those electrons deposit
their energy in the medium.
There are two reasons why KERMA in air (KA) may differ from dose in air (DA). First, some sec-
ondary electron energy may be radiated as bremsstrahlung. Second, the point of energy depos-
ition in the medium is not the same as the point of removal of energy from the beam because of
the range of secondary electrons. However, at diagnostic energies bremsstrahlung is negligible
and the ranges of secondary electrons are so short that KA = DA to a very good approximation.
Now the number of ion pairs generated in each kilogram of air multiplied by the energy required to form one ion pair (W) is equal to the energy removed from the beam. But the first term is the definition of radiation exposure, say E, and the third term is the definition of KERMA in air KA. Thus

KA = W × E

The energy to form one ion pair W is close to 34 J C⁻¹ (34 electron volts per ion pair) for all types of radiation of interest to radiologists and, coincidentally, over a wide range of materials of biological importance. By definition 1 Gy = 1 J kg⁻¹. Thus

DA (Gy) = KA (Gy) = 34 E (C kg⁻¹)
Many older textbooks use the unit roentgen for exposure in air and rad for dose. Since 1 R = 2.58 × 10⁻⁴ C kg⁻¹, and 1 Gy = 100 rad, DA (rad) = 100 × 2.58 × 10⁻⁴ × 34 E (roentgen). Hence DA (rad) = 0.88 E (roentgen).
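These conversions are easily checked numerically. The Python sketch below is illustrative only; the mass absorption coefficient ratio used for soft tissue is an assumed round figure, not a value taken from the text.

```python
W_AIR = 34.0   # J per coulomb (energy needed to create one ion pair in air)

def air_dose_from_exposure(exposure_c_per_kg):
    """Air kerma / dose in Gy from exposure in C kg^-1: D_A = 34 E."""
    return W_AIR * exposure_c_per_kg

def dose_in_medium(air_dose_gy, mu_rho_medium_over_air):
    """Dose in another medium via the ratio of mass absorption coefficients."""
    return air_dose_gy * mu_rho_medium_over_air

exposure = 2.58e-4                              # 1 roentgen expressed in C kg^-1
d_air = air_dose_from_exposure(exposure)
print(f"{d_air * 100:.2f} rad per roentgen")    # ~0.88 rad (1 Gy = 100 rad)
print(f"{dose_in_medium(d_air, 1.07):.4f} Gy in soft tissue")  # assumed ratio ~1.07
```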
In this book, dose in air will be used in preference to exposure. ‘Skin dose’ will be used to
describe the dose in air at the surface of the patient.
To convert a dose in air to dose in any other material, recall that for a given incident flux of pho-
tons, the energy absorbed per unit mass depends only on the mass absorption coefficient of the
medium. Hence
DT/DA = (µa/ρ)T / (µa/ρ)A
Note the use of a subscript ‘a’ to distinguish the mass absorption coefficient from the mass attenu-
ation coefficient.
FIGURE 4.6
The ratio (dose in matter/dose in air) plotted as a function of radiation energy for bone and fat.
It follows that only a knowledge of the relative values of µa/ρ is required to convert a known dose in air to the corresponding dose in any other material.
The ratio DT/DA is plotted as a function of different radiation energies for bone and fat in Figure 4.6. It is left as an exercise to the reader to justify the shapes of these curves from a knowledge of the way in which photoelectric and Compton interactions vary with photon energy and atomic number.
Insight
Other Quantities Related to Dose Measurement
In Section 4.1 the terms flux of photons and intensity were introduced without mathematical
treatment. Whereas many readers will find the explanations of these terms given there sufficient,
readers who wish to study dosimetry more deeply will find it useful to know the mathematical
relationships between these and some other related terms.
Consider a situation where there are N photons passing through an area A m² in a time t seconds.
Photon fluence, Φ = the number of photons passing through unit area (N/A photons m⁻²)
Photon flux, ϕ = the number of photons passing through unit area in unit time (N/(A·t) photons m⁻² s⁻¹)
Energy fluence, Ψ = the energy passing through unit area. For a monoenergetic beam Ψ = Φ·ε = N·ε/A J m⁻², where ε is the energy of each photon. (See Section 1.11 for conversion of eV to joules. Note this is an approximate figure because the charge on an electron has to be measured experimentally.)
Energy flux, or intensity, ψ = the energy passing through unit area in unit time; for a monoenergetic beam ψ = Φ·ε/t = N·ε/(A·t) J m⁻² s⁻¹.
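These definitions translate directly into a short calculation. The numbers below (10¹⁰ photons of 60 keV crossing 1 cm² in 1 s) are made-up illustrative values, not taken from the text.

```python
J_PER_EV = 1.602e-19      # joules per electron volt (approximate)

N = 1.0e10                # number of photons
A = 1.0e-4                # area in m^2 (1 cm^2)
t = 1.0                   # time in seconds
photon_energy_ev = 60e3   # 60 keV photons (illustrative)

fluence = N / A                                          # photons per m^2
flux = N / (A * t)                                       # photons per m^2 per s
energy_fluence = fluence * photon_energy_ev * J_PER_EV   # J per m^2
intensity = energy_fluence / t                           # J per m^2 per s (W per m^2)

print(fluence, flux, energy_fluence, intensity)
```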
FIGURE 4.7
Secondary electron flux in a gas-filled cavity. Consider the solid as a series of layers starting at the edge of the
cavity. Layers 1, 2 and 3 contribute to the ionisation in the cavity but layer 4 does not.
FIGURE 4.8
A schematic demonstration that the flux of secondary electrons at equilibrium is independent of the density of
the stopping material. The upper material has 2.5× the density of the lower. It therefore produces 2.5 times more
electrons per slice. However, the range of these electrons is reduced by the same factor. When equilibrium is
established, shown by a * for each material, 10 secondary electrons are crossing each vertical slice (the electron
density) in each case. Note that equilibrium is established more quickly in the denser material.
FIGURE 4.9
A simple, compact secondary ionisation chamber that makes use of the ‘air equivalent wall’ principle.
4.8), it is possible to pack a large number of xenon gas detectors of very uniform sensitivity
into a small space. The use of Xe/Kr high pressure gas detectors in computed tomography
is discussed in Section 8.4.1.
FIGURE 4.10
The band structure of energy levels found in solids. A, B, C show, schematically, the permitted energy levels in
insulators, conductors and semi-conductors, respectively.
Impurities are often added deliberately and the manufacture of materials with the
requisite properties depends on the production of very pure crystals to which impurities
are added under carefully controlled conditions.
FIGURE 4.11
(a) Voltage applied to p-n diode in forward bias configuration. (b) Voltage applied to p-n diode in reverse bias
configuration.
with the positive polarity to the p-type and the negative to the n-type then the device is
said to be forward biased and a current will flow. If the polarity is the other way round as
in Figure 4.11b then the device is said to be reverse biased and the depletion zone increases
in depth as electrons and holes are drawn away from the junction until the internal poten-
tial across it is equal and opposite to the applied potential.
The device is called a diode because a plot of the current flowing against the voltage across the junction has the form shown in Figure 4.12. When reverse biased virtually no current flows, but when forward biased the current rises very rapidly and exponentially with applied voltage.
If an X- or gamma ray beam interacts with the depletion layer then the electrons produced are attracted to the anode and the resulting current is proportional to the amount of charge released (i.e. to the radiation intensity). Although the depletion layer is very thin (typically 10–30 µm)
sensitivity, that is the size of the current, is much higher than might be expected because
the detecting medium is a solid (compared to a gas) and the yield of electrons for each
interacting X-ray photon (the conversion efficiency) is very high.
The depletion layer provides an excellent radiation detector, behaving very much like
a parallel plate ionisation chamber because if any electrons are generated as a result of
ionising interactions, they can migrate to the anode and be registered as a current. The
thickness of the depletion layer is determined by the magnitude of V.
In practice only very thin silicon-based detectors, typically about 200 µm, can be con-
structed to this design. These detectors also respond very well to visible light and near
infra red (Figure 4.13) so they can be used as silicon photodiodes in conjunction with a
scintillation crystal (see Section 4.8).
For better detection of X- and gamma radiation, thicker crystals of germanium with lith-
ium diffused into them were originally used to obtain X-ray and particularly gamma ray
spectra. They had adequate efficiency but, unlike silicon-based detectors which can operate
at room temperature, they must be cooled to liquid nitrogen temperature (about –196°C) before
they can be used and during the whole of their working life. Very high purity germanium
detectors (only about 10⁹–10¹¹ electrical impurities per cubic centimetre) which only need
FIGURE 4.12
Variation in current flowing through a diode with voltage across the junction.
FIGURE 4.13
Spectral response of a silicon photodiode (vertical axis: response in A W⁻¹).
to be cooled during operation to reduce noise, are now available and have replaced lithium
drifted detectors for most purposes requiring spectral analysis.
Solid-state detectors dispense with the need for relatively bulky photomultiplier tubes
(PMTs) (see Section 4.8) and the requirement for stabilised high voltage supplies. As previ-
ously intimated, they have a high ionisation yield so the energies of photoelectrons gener-
ated by X- or gamma ray absorption can be measured with very high precision.
Insight
More on Sensitivity of Solid-State Conductors
Sensitivity increases with the stopping power of the semi-conductor. Germanium with a Z of 32
is better than silicon with a Z of 14. Volume for volume a silicon-based detector is about 18,000
times more sensitive than an air-filled ionisation chamber. This is partly due to the density as iden-
tified above and partly because only 2.5–3 eV are required to release an electron compared to 34
eV in a gas. This produces many more charge carrying pairs. Silicon-based detectors can operate
at room temperature but germanium detectors require to be cooled down to the temperature of
liquid nitrogen before they can be used.
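The factor of roughly 18,000 can be reproduced with round numbers. The densities below are typical handbook values assumed for illustration; they are not quoted in the text.

```python
# Volume-for-volume sensitivity of silicon relative to an air ionisation chamber:
# (density ratio) x (charge carriers produced per unit absorbed energy ratio)
rho_silicon = 2330.0    # kg m^-3 (assumed)
rho_air = 1.29          # kg m^-3 at STP (assumed)
w_air = 34.0            # eV per ion pair in a gas
w_silicon = 3.0         # eV per electron-hole pair (2.5-3 eV quoted above)

ratio = (rho_silicon / rho_air) * (w_air / w_silicon)
print(round(ratio, -3))   # about 20,000, the same order of magnitude as the ~18,000 quoted
```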
The production of many more charge carrying pairs means that the statistics of counting are
much better. The size of the pulse is proportional to the energy deposited and because the rate
of clearance of the charge produced is very rapid, that is the pulse has a short rise time, these
solid-state detectors can be used to count the number of photons in high flux radiation fields.
This allows high purity germanium detectors to be used for the spectral analysis of X- and gamma
photon fields.
and to apply the ALARA principle (Section 14.2.1). The recorded dose over a period of time
satisfies legal requirements.
Typical performance characteristics include a range from 1 µGy–16 Gy X-rays, a linear
response with dose rate from 5 µGy h –1 to 3 Gy h –1, calibration to better than 5%, rapid response
time (1 µs), and stability over a wide range of temperature (–20°C to +80°C) and humidity.
Since the mean atomic number of silicon is very different from that of air, some variation in
detector sensitivity with photon energy would be expected, especially in the photoelectric
region (see Section 4.10). However, adequate energy compensation has been applied to give
a uniform response (±15%) from 20 keV up to megavoltage energies. They can be affected by
non-ionising electromagnetic fields but these problems have been largely overcome. For fur-
ther information on electronic personal dosimeters see Section 14.7, where traditional meth-
ods of personal dosimetry (film and thermoluminescent dosimetry) are discussed.
Insight
CMOS Detectors
The microelectronics which drives the advanced and robust imaging capabilities of camera phones is now an emerging technology for radiation measurement in radiology and is being trialled and used increasingly in medical X-ray imaging and in single photon emission computed tomography (SPECT), positron emission tomography (PET) and gamma cameras. A detailed description is outwith the scope of this book but an outline is included here as these detectors are likely to become widely used over the next few years.
Complementary metal oxide semiconductor (CMOS) technology allows the integration of the read-out electronics and the radiation sensor onto the same piece of material. CMOS imagers include an array of
photo-sensitive diodes which can be sensitive to light or to radiation. Each pixel has one diode.
These pixels are active pixels in that each has its own individual amplifier (unlike charge coupled
devices, CCDs). In addition, each pixel in a CMOS imager can be read directly on an x-y coor-
dinate system, rather than by the progressive transfer of charge from one detector element to the
next—the ‘bucket-brigade’ process of a CCD (see Section 5.13.4.2). This means that while a CCD
pixel always transfers a charge, a CMOS pixel always detects a photon directly, converts it to a
voltage and transfers the information directly to the output. This fundamental difference in how
information is read out of the imager, coupled with the manufacturing process, gives CMOS imag-
ers several advantages over CCDs.
They are physically more robust with high spatial resolution. They are very fast and are capable
of detecting high intensity radiation fields without saturation. Because each pixel is read individu-
ally there is the potential to carry out pulse size analysis on the photons detected opening up the
opportunity for new X-ray imaging contrasts based on tissue spectral properties. They have an
inherently low noise and are very radiation tolerant which should give a long useful lifetime. The
reading of each pixel individually should mean that the effect of dead or bad pixels is very much
reduced compared to other systems where one faulty pixel can have a big effect on obtaining the
information from many others. There should also be no latent image ‘left over’ after readout. At
the time of writing the size of these devices is restricted, however, and will have to be increased
before they can replace other imaging devices.
1. Since it is a high density solid, its efficiency, especially for stopping higher energy
gamma photons, is greatly increased. A 2.5 cm thick NaI (Tl) crystal is almost
100% efficient in the diagnostic X-ray energy range. Contamination monitors that
must be capable of detecting of the order of 30 counts per second from an area of
1000 mm2 invariably contain scintillation crystals.
2. It has a rapid response time, in contrast to an ionisation chamber which
responds only slowly owing to the need to build up charge on the electrodes (see
Figure 4.1).
3. Different scintillation crystals can be constructed that are particularly sensitive to
low energy X-rays or even to neutrons. Beta particles and alpha particles can be
detected using plastic phosphors.
NaI (Tl) detectors are used extensively in nuclear medicine and the properties that ren-
der them particularly appropriate for in vivo imaging will be discussed in Chapter 10.
Alternative scintillation detectors are caesium iodide doped with thallium and bismuth
germanate. Like NaI (Tl), the latter has a high detection efficiency, and is preferable at
high counting rates (e.g. for CT) because it has little ‘after glow’—persistence of the light
associated with the scintillation process. Bismuth germanate detectors also exhibit a good
dynamic range and long-term stability.
The light signal produced by a scintillation crystal is too small to be used until it
has been amplified and this is achieved by using either a PMT or a photodiode (see
Section 4.7.3).
The main features of the PMT coupled to a scintillation crystal (Figure 4.14) are as follows:
1. An evacuated glass envelope, one end of which has an optically flat surface. Since photon
losses must be minimised, the scintillation crystal must either be placed in contact
with this surface or if it is impracticable, must be optically coupled using a piece of
optically transparent plastic—frequently referred to as a ‘light guide’ or ‘light pipe’.
2. A layer of photoelectric material such as caesium-antimony. The characteristic of such
materials is that their work function, that is, the energy required to release an elec-
tron, is very low. Thus electrons are emitted when visible or ultraviolet photons
fall on the photocathode, although the efficiency is low with only one electron
emitted for every 10 incident photons.
FIGURE 4.14
Main features of a PM tube coupled to a scintillation crystal for radiation detection.
Note that during the complete detection process in the crystal and PMT, the signal twice
takes the form of photons, once as X-ray photons and once as visible light photons, and
twice takes the form of electrons. Since a PMT is an extremely sensitive light detector, great
care must be taken to ensure that no stray light enters the system.
1. About 30 eV of energy must be dissipated in the crystal for the production of each
visible or ultraviolet photon.
2. Even assuming no loss of these photons, only about one photoelectron is produced
for every 10 photons on the PMT photocathode.
Thus to generate one electron at the photocathode requires about 300 eV and a 140 keV
photon will produce only 400 electrons at the photocathode. This number is subject to con-
siderable statistical fluctuation (N ½ = 20 or 5%). The result is that a monoenergetic beam of
gamma rays will produce a range of pulses and will appear to contain a range of energies
(Figure 4.16a). This is a particular problem in the gamma camera and will be considered
further in Section 10.3.4.5.
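A short calculation makes the statistical argument explicit, using the 30 eV per light photon and one photoelectron per 10 light photons quoted above (the text rounds the resulting electron number to about 400); this is an illustrative sketch only.

```python
import math

ev_per_light_photon = 30.0        # energy deposited in the crystal per light photon
photons_per_photoelectron = 10    # photocathode yields ~1 electron per 10 photons
gamma_energy_ev = 140e3           # a 140 keV gamma photon

ev_per_photoelectron = ev_per_light_photon * photons_per_photoelectron   # ~300 eV
n = gamma_energy_ev / ev_per_photoelectron        # photoelectrons at the cathode
sigma = math.sqrt(n)                              # Poisson fluctuation, N^0.5

print(round(n), round(sigma), f"{sigma / n:.1%}")
# A few hundred electrons with a fluctuation of ~20, i.e. roughly 5%,
# which is why a monoenergetic gamma line appears as a broad peak.
```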
As discussed in Section 4.7.2, counting statistics are much better in a semi-conduc-
tor detector, so the pulse from a monoenergetic beam is much more precisely defined
(Figure 4.16b).
FIGURE 4.15
Use of a pulse height analyser to determine the spectrum of photon energies in an X-ray beam. A typical pulse
height spectrum obtained after monochromatic gamma rays, Eγ, have passed through scattering material, and
the use of energy discriminators to select the peak are shown. The tail of pulses is due to Compton-scattered photons, which are produced fairly uniformly at all energies, but selective absorption at low energies creates the maximum. The sharp rise for very low pulses is due to noise.
FIGURE 4.16
Typical spread in the strength of signals from a monoenergetic beam of gamma rays when using as the primary
detector (a) a NaI (Tl) crystal, (b) a solid-state device.
Insight
Cause of Variation in Mass Absorption Coefficient
In the vicinity of 1 MeV, interactions are by Compton processes. The mass absorption coeffi-
cient is therefore independent of atomic number and sensitivity is independent of photon energy.
FIGURE 4.17
Variation in mass absorption coefficient with photon energy for soft tissue and caesium iodide.
However, as the photon energy decreases and approaches 100 keV, photoelectric absorption
becomes important. The effect is much greater when the detector has a high mean atomic number
compared to that in air (Z = 7.6) or tissue (Z = 7.4). Thus the mass absorption coefficient for CsI
increases much more rapidly than that of air and tissue and sensitivity increases.
Note that below about 40 keV the sensitivity does start to decrease. This is because of the
effect of the absorption edges. Near an absorption edge, although the incidence of photoelectric
interactions is high, conditions are very favourable for the generation of characteristic radiation.
Since a material is relatively transparent to its own characteristic radiation, a high proportion of
this energy is reradiated and is therefore not available to cause detector response.
4.11 Conclusions
A wide range of properties of ionising radiation is available for radiation measurement.
Ionisation in air is taken as the reference standard, partly because it can be related directly
to fundamental physical processes and partly because it is very sensitive in terms of the
number of ion pairs created per unit energy deposition.
The choice of instrument in a given situation will depend on a variety of factors, includ-
ing sensitivity, energy, dynamic range (the range of dose over which the monitor will
perform reliably), response time, performance at high count rates, variation in response
with photon energy, uniformity of response between detectors, long-term stability, size,
operating conditions, such as temperature or requirements for stabilised voltage supplies
and cost.
It is important to distinguish carefully between radiation measurements with, for
example, an ionisation chamber, and radiation detection with a GM counter. The latter is
very sensitive and may be very suitable for detecting radiation leakage or contamination
from spilled radioactivity. It should not normally be used as a radiation-measuring device
unless specially adapted to do so.
There have been big advances in recent years in the use of semi-conductor detectors, not
only for radiation measurement but also in the development of a new generation of digital
image receptors (see Chapter 5).
Three aspects of radiation measurement have been identified—exposure, absorbed
dose and the ‘quality’, or spectral distribution, of the radiation. If an absolute measure of
absorbed dose is required, reference back to the ionisation process and cross-calibration
will be required. However, for many applications, for example, in digital radiology and
nuclear medicine, there is no need to convert the numerical data into dose routinely so
uniformity of response and long-term stability are more important.
One area in which quantitative dose measurement is required is personal monitoring. This
subject is deferred until Chapter 14 on ‘Practical Radiation Protection’, because this is where
personal monitoring has most impact. Furthermore, two important mechanisms of personal
monitoring, film blackening and thermoluminescence are not explained until Chapter 5.
Further advances in semi-conductor technology can be expected, both for the measurement of X-rays and for the development of a new generation of image receptors that also provide information about tissue properties.
Further Reading
Greening J R (1985) Fundamentals of Radiation Dosimetry, 2nd ed. (Medical Physics Handbooks 15)
Adam Hilger Ltd, Bristol, and the Hospital Physicists’ Association.
Harrison R M (1997) Ionising radiation safety in diagnostic radiology. Imaging 9, 3–13.
McAlister J M (1979) Radionuclide Techniques in Medicine, Cambridge University Press, Cambridge.
Powsner R A and Powsner E R (2006) Essential Nuclear Medicine Physics, 2nd ed. Blackwell Publishing, Oxford.
Smith F A (2000) A Primer in Applied Radiation Physics, World Scientific Publishing, Singapore, Chapter 5.
Exercises
1. Suggest reasons why ionisation in air should be chosen as the basis for radiation
measurement.
2. Draw a labelled diagram of a (free air) ionisation chamber and explain the prin-
ciple of its operation.
3. Explain how the following problems are overcome in an ionisation chamber:
(a) Recombination of ions
(b) Definition of the precise volume from which ions are collected.
4. Explain how a (free air) ionisation chamber might be used to measure the exposure
at a given point in an X-ray beam. How would you expect the exposure to change
if thin aluminium filters were inserted in the beam about half way between the
source and the chamber?
5. Outline briefly the important features of an experimental arrangement for mea-
suring exposure in air and discuss the factors which limit the maximum energy
of the radiation that can be measured in this way.
6. Explain from first principles why the reading on a free air ionisation chamber will
decrease if the temperature increases.
7. Show that under specified conditions the reading on a dose area product meter
will not vary with the distance from the tube focus. What conditions must be
satisfied?
8. Explain the importance of the concept of electron equilibrium in radiation
dosimetry.
9. Describe a small cavity chamber for measurement of radiation exposure. Discuss
the choice of material for the chamber wall and its thickness.
10. Describe the operation of a Geiger-Müller tube and explain what is meant by ‘dead
time’.
11. Define the Gray and show how absorbed dose is related to exposure.
12. Estimate the number of ion pairs created in a cell 10 µm in diameter by a single
dose of 10 mGy X-rays.
13. What is meant by the depletion layer in a p-n diode? How does this arise and how
can it be used as the basis for a radiation monitor?
14. Explain how a scintillation detector works.
15. Show that the energy of a photon in the visible range is about 3 eV.
16. Describe the sequence of events that leads to a pulse of electrons at the anode of a
photomultiplier tube if the tube is directed at a NaI scintillation crystal placed in
a beam of photons.
17. Explain how a scintillation detector may be used to measure:
(a) The energy
(b) The intensity of a beam of radiation
18. Explain in as much detail as possible the shapes of the curves in Figure 4.17.
19. Discuss the factors which may have to be considered in the choice of a monitor for
a specific purpose.
5
The Image Receptor
SUMMARY
• X-rays cannot be seen by the human eye so all X-ray images have to be cap-
tured on some form of receptor.
• The essential differences between analogue and digital images are
explained.
• The mode of operation of film and film screen combinations as receptors for
analogue images is discussed.
• The concepts of optical density, film gamma, film speed and latitude are
introduced.
• Receptors which produce digital images in radiography are then considered.
They work on quite different principles.
• The use of both analogue and digital receptors in fluoroscopy is considered.
• A brief review of quality control measurements that are necessary to ensure
receptors are performing to a high standard completes the chapter.
CONTENTS
5.1 Introduction......................................................................................................................... 134
5.2 Analogue and Digital Images........................................................................................... 135
5.3 Fluorescence, Phosphorescence, Photostimulation and Thermoluminescence......... 138
5.4 Phosphors and Photoluminescent Screens..................................................................... 139
5.4.1 Properties of Phosphors......................................................................................... 139
5.4.2 Production of Photoluminescent Screens............................................................ 143
5.4.3 Film-Phosphor Combinations in Radiography.................................................. 144
5.5 X-ray Film............................................................................................................................. 144
5.5.1 Film Construction................................................................................................... 144
5.5.2 Characteristic Curve and Optical Density.......................................................... 145
5.5.3 Film Gamma and Film Speed............................................................................... 147
5.5.4 Latitude.................................................................................................................... 149
5.6 Film Used with a Photoluminescent Screen................................................................... 149
5.7 Reciprocity........................................................................................................................... 152
5.8 Film-Screen Unsharpness.................................................................................................. 152
5.9 Introduction to Digital Receptors and Associated Hardware..................................... 153
5.9.1 Analogue to Digital Converters (ADCs).............................................................. 153
5.9.2 Pixellating the Image.............................................................................................. 154
5.1 Introduction
Röntgen discovered X-rays when he noticed that a thin layer of barium platinocyanide on
a cardboard screen would fluoresce even when the discharge tube (a primitive X-ray tube)
was covered by black paper. Simultaneously he had discovered the first X-ray receptor! The
receptor is the third essential component, after the X-ray tube which produces the beam
and the patient, whose body tissues generate the primary radiation contrast (Chapter 3),
required to create high quality radiographic images. In this chapter we shall discuss the
various mechanisms by which different receptors produce images and in Chapter 6 con-
centrate on the important properties that make images ‘fit for purpose’ in the diagnostic
process.
Planar images still comprise the majority of clinical imaging examinations because they
produce the highest resolution at lowest cost so this is a good starting point for a planned
series of investigations. Furthermore, until about 1990, almost all radiographic images,
defined here as static images created with a short pulse of X-rays, would have used spe-
cialised radiographic film as the receptor in conjunction with increasingly sophisticated
screens to increase sensitivity and reduce patient dose.
The photographic process, whether using X-rays or light in conjunction with film, cre-
ates an analogue image but in the last 20 years receptors that are capable of producing
digital images have developed rapidly. The precise difference between analogue and digital
images will be explained in the next section because it is fundamental to understanding
how different receptors work. However, at this point it is worthwhile explaining why film,
having reigned supreme for 80 years, should have been eclipsed so rapidly that in some
larger hospitals radiology departments are now almost entirely filmless apart from histor-
ical archives.
Digital images have a number of potential advantages over film because the images are
collected and stored electronically in such a manner that image acquisition, signal process-
ing, storage and display are virtually four separate stages and can be optimised separately.
In particular, post-processing options, especially contrast enhancement (see Section 6.11),
can improve visualisation. Also, since digital images are stored in a computer, the ability
of the computer to perform routine pre-programmed tasks with a high degree of accuracy
means that computer-aided detection and diagnosis may become a useful aide to the radi-
ologist, especially in screening situations where large numbers of ‘normal’ images have to
be examined.
However, these benefits are marginal compared with the advantage that accrues from
the fact that digital images are generated in electronic format. There are well-documented
problems with film, both at the reporting stage and for archiving images. For example
both the radiologist and the film have to be in the same place at the same time, films may
be required in theatre, films get lost or may deteriorate with time and there is a big prob-
lem with film storage. Thus digital images have a massive advantage in terms of image
storage, rapid retrieval and rapid transmission over short and long distances, for example,
for remote reporting. Simultaneous viewing by multiple users or students is possible and
the productivity and workflow of the radiology department is enhanced. All these factors
are considered in the design of Picture Archiving and Communications Systems (PACS)
which have been the major driver for digital radiology and will be discussed in Chapter 17,
along with issues related to the storage and transmission of the large volumes of data asso-
ciated with digital images.
varying continuously and smoothly across the image. When the film is a radiograph this
format is ideal for visual inspection but it is not easy to extract quantitative data from ana-
logue images.
To extract and manipulate numerical information, the complex distribution pattern
of photon interactions must be collected and stored in a computer. In principle the x
and y co-ordinates of every X-ray interaction with the receptor could be registered and
stored. This is sometimes known as ‘list mode’ data collection and is used for a few
specialised studies in nuclear medicine. With the much larger number of photons in
an X-ray image, this approach is impractical and the data have to be ‘condensed’. This
is done in two ways. First, the image space is sub-divided into a number of compart-
ments, which are normally, but not necessarily, square and of equal size, called pixels.
In a digitised image the X-ray interaction is assigned to the appropriate compartment but
the position of the interaction is located no more precisely than this. Thus the image has
a discontinuous aspect that is absent from an analogue image. The number and size of
compartments is variable. For example in the extreme case of a 2 × 2 matrix illustrated
in Figure 5.1a, all interactions would be assigned to just one of four areas. Figure 5.1b
illustrates an 8 × 8 matrix.
The data are also condensed in a second way. The electrical signals within a pixel are
not allowed to take a continuous set of values but are ‘quantised’. Just as photon energies
are quantised (see Section 1.11), the signal in each pixel can only take one of a discrete set
of values. Each level is known as a pixel value or digital value and the total available num-
ber of levels is known as the bit depth. For example, if there were only two allowed pixel
values, and these were represented by two grey levels in the final image, all pixels would
appear either black or white.
Figure 5.2 illustrates the digitisation process. The image of the wedge on the left shows
continuously changing density. The image on the right is discontinuous, with each step
representing the pixel or grey level that includes a range of densities in the original
image. This example shows the effect of digitising with just eight available grey levels
but in normal practice, many more would be available to more accurately capture subject
contrast.
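The digitisation just described can be mimicked with a couple of lines of Python. The sketch below simply quantises a smoothly varying ramp of values into eight (3-bit) grey levels, producing the step-wedge effect of Figure 5.2b; all names and numbers are illustrative.

```python
def quantise(value, levels=8, vmin=0.0, vmax=1.0):
    """Assign a continuous value to one of a discrete set of grey levels."""
    step = (vmax - vmin) / levels
    return min(int((value - vmin) / step), levels - 1)

# A continuously varying 'wedge' sampled at 16 points becomes a step wedge
ramp = [i / 15 for i in range(16)]
print([quantise(v) for v in ramp])
# [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7]
```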
FIGURE 5.1
Examples of coarse pixellation (a) 2 × 2 matrix, (b) 8 × 8 matrix.
FIGURE 5.2
Images of an aluminium wedge using (a) radiographic film showing a continuous variation of density and
(b) the same image digitised with eight grey levels (3-bit) showing a step wedge effect. The effect is analogous
to imaging a step wedge which has 8 × 2 cm levels.
Insight
More on Pixels
1. Although digital images are a relatively recent innovation in planar radiology, they may already
be familiar to a reader with experience in other modalities since computed tomography (CT)
and magnetic resonance imaging (MRI) images, and some nuclear medicine and ultrasound
images have been digital since each imaging technique was introduced.
2. Choice of pixel matrix size and number of pixel levels will be discussed in detail later. Suffice it to say for the moment that resolution can be no better than the pixel size, and the number of pixel levels (usually chosen on a binary scale: 2, 4, 8, 16 etc., to facilitate computer input) must be sufficient to display small contrast differences. For general purpose radiography
3500 × 4300 or 1750 × 2150 pixels (~100–200 µm pixel size for a 35 cm × 43 cm field) are
typical and each pixel is assigned to a distinct digital level. The number of available levels
typically ranges from 1024 (210) to 65,536 (216).
3. There is nothing to be gained by decreasing pixel size below the resolving capability of the
imaging system. Thus, whereas a 1024 × 1024 matrix (0.2 mm pixel size) might be fully justified
for a high resolution screen used in cine-angiography of the skull, larger pixels are acceptable
in nuclear medicine where the system resolution of a gamma camera is no better than 5 mm.
4. As the pixel size becomes smaller, the size of the signal becomes smaller and the ratio of signal to noise gets smaller. When counting photons, the signal size N is subject to Poisson statistical fluctuations (noise) of N½ (see Section 7.5), so the signal-to-noise ratio is proportional to N½ and decreases as N decreases (a short numerical sketch follows this Insight). In MRI (see Chapter 16), electrical and other forms of noise cannot be reduced below a certain level and thus the size of the signal, from the hydrogen atoms, is a limiting factor in determining the smallest useful pixel size.
5. Finer pixellation places a burden on the computer in terms of data storage and manipula-
tion. A 1024 × 1024 matrix contains over a million pixels and the data from each one must
be stored and examined individually.
6. The properties of the resulting image may not correspond to the optimum imaging capabil-
ity of the detector if any compression or processing of the data has been applied following
acquisition.
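Several of the points above are essentially numerical and can be checked with a short calculation. The sketch below (Python; the photon fluence in the last example is an assumed, purely illustrative figure) estimates the pixel size for a stated matrix and field, the raw storage for one image, and the fall in Poisson signal-to-noise ratio as pixels shrink.

    import math

    # Point 2: pixel size for a 35 cm x 43 cm field imaged on a 3500 x 4300 matrix
    pixel_mm = 350.0 / 3500            # = 0.1 mm, i.e. 100 micrometres

    # Point 5: raw storage for one 1024 x 1024 image at 16 bits (2 bytes) per pixel
    bytes_per_image = 1024 * 1024 * 2  # about 2 MB before any compression

    # Point 4: if a pixel collects N photons, Poisson noise is N**0.5, so SNR = N**0.5;
    # halving the pixel side quarters N and therefore halves the SNR
    def snr(photons_per_mm2, pixel_side_mm):
        n = photons_per_mm2 * pixel_side_mm ** 2
        return math.sqrt(n)

    print(pixel_mm, bytes_per_image)
    print(snr(1.0e5, 0.2), snr(1.0e5, 0.1))   # e.g. ~63 vs ~32: smaller pixel, lower SNR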
In fluorescence, the migration of electrons and holes to the fluorescent centres and the emission of a photon of light happens so quickly that it is essentially instantaneous. Not all transitions of electrons at luminescent centres produce light. The efficiency of the transfer can vary enormously between different materials.

FIGURE 5.3
(a) Schematic representation of electron and hole traps in the ‘forbidden’ energy band in a solid. (b) The change in energy level (E1 = hf1) resulting in light emission in fluorescence. (c) The change in energy level (E2 = hf2) resulting in light emission in phosphorescence.
The phenomenon of phosphorescence also depends on the presence of traps in the forbid-
den band, but differs from fluorescence in the following respects.
If the electron trap is only a little way below the conduction band, the electron eventually acquires the energy needed to return to the conduction band through statistical fluctuations in its own kinetic energy. Thus light is emitted after a time delay (phosphorescence). The time delay that distinguishes phosphorescence from fluorescence is somewhat arbitrary and might range from 10⁻¹⁰ s to 10⁻³ s.
The two processes can be separately identified by heating the material. Light emission
by phosphorescence is facilitated by heating, because the electrons more readily acquire
the energy required to escape. Fluorescence is temperature independent. In radiology
phosphorescence is sometimes called ‘afterglow’ and, unless the light is emitted within a
very short time when it may contribute to the quantum yield, its presence in a fluorescent
screen is detrimental.
If the energy difference between the electron trap and the conduction band is some-
what greater, the chance of the electron acquiring sufficient energy by thermal vibrations
at room temperature is negligible and the state is metastable. However, if the electron is
given extra kinetic energy, it may be released and then light is emitted. When the energy
difference is not too great, the photons in a visible light laser beam may have sufficient
energy to cause photostimulated phosphorescence. If the energy difference is greater,
the extra energy must be supplied as heat and this is essentially the process of thermo-
luminescence. Generally in thermoluminescent material the traps are at several different
energy levels so light emissions can occur at quite different temperatures resulting in a
characteristic “glow curve.”
photons and the imaging process is much more efficient, resulting in better quality images and lower dose to the patient.
A number of factors must be taken into consideration when maximising phosphor sen-
sitivity and performance:
1. Only the radiation that interacts with the detector contributes to the signal so
the quantum efficiency (QE), which describes the probability of a single quan-
tum of X-rays interacting with the detector, must be high. It is increased by
increasing the linear attenuation coefficient, and hence the atomic number and
density of the detector and also by increasing the detector thickness. Note,
however, that increasing detector thickness will increase unsharpness (see
Section 5.8).
2. The packing density of phosphor grains, or the fill factor in some digital detectors
(see Section 5.10) also affects the QE, since it alters the area available to stop X-rays.
3. The overall luminescent radiant efficiency, or light output per unit beam intensity
of the phosphor, depends on its efficiency for converting X-ray photon energy to
light photons and must be high.
4. The spectral output of the phosphor must be matched to the spectral sensitivity of
the next stage in the imaging system.
5. Finally, there are practical considerations. The chemical properties of a phos-
phor can limit how it is used, for example, a hygroscopic phosphor must always
be used either encapsulated or inside a vacuum. Phosphors must also be com-
mercially available in known crystal sizes of uniform sensitivity at a reasonable
price.
Insight
Quantum Efficiency
If the linear attenuation coefficient of the detector is µ(E) (E indicates that it is photon energy dependent) and the detector thickness is d, the transmitted intensity I is

I = I0 exp[−µ(E)d]

and the QE is

QE = (I0 − I0 exp[−µ(E)d])/I0 = 1 − exp[−µ(E)d]
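This expression is easy to evaluate numerically; the attenuation coefficient and thicknesses below are assumed illustrative values, not data for any particular phosphor.

    import math

    def quantum_efficiency(mu_per_mm, thickness_mm):
        # QE = 1 - exp(-mu(E) d): the fraction of incident photons that interact
        return 1.0 - math.exp(-mu_per_mm * thickness_mm)

    print(quantum_efficiency(2.0, 0.3))   # ~0.45
    print(quantum_efficiency(2.0, 0.6))   # ~0.70: doubling d raises QE, with diminishing returns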
TABLE 5.1
Atomic Numbers and Luminescent Radiant Efficiencies for Some Important Phosphors. The Element after the ‘:’ Sign Is the Activator to the Phosphor Salt before the ‘:’ Sign

Phosphor        Z of Heavy Elements    Luminescent Radiant Efficiency (%)
BaFCl:Eu2+      56                     13
BaSO4:Eu2+      56                     6
CaWO4           74                     3.5
CsBr:Tl         35/55                  8
CsI:Na          53/55                  10
CsI:Tl          53/55                  11
Gd2O2S:Tb       64                     15
La2O2S:Tb       57                     12
Y2O2S:Tb        39                     18
(ZnCd)S:Ag      30/48                  18
NaI:Tl          53                     10
Photoluminescent screens may be used as the input for three basic imaging systems:
Until the mid 1960s, the most widely used phosphors were calcium tungstate in radiogra-
phy and zinc-cadmium sulphide in fluoroscopy and image intensifiers. As a direct result
of the American space programme, new phosphors were developed which have since been
adopted for medical use. They are mainly crystals of salts of the rare earth elements or
crystals of barium salts activated by rare earth elements.
The reason why the rare earth phosphors have replaced calcium tungstate for many
purposes can be seen in Table 5.1. The Z value of the new rare earth phosphors is slightly
lower than that of the heavy elements in calcium tungstate phosphor, and thus slightly less
energy may, under some conditions, be absorbed from an X-ray beam. However, this is
more than compensated for by the fact that the luminescent radiant efficiency for the rare
earth phosphors is at least three times higher. The same light output from the phosphor
can thus always be achieved with a much lower X-ray dose. In practice, because of absorp-
tion edges (see Section 3.8), at certain photon energies X-ray absorption would also be
higher (see Figure 5.4). For some elements in the phosphors (e.g. gadolinium with a K-edge
at 50.2 keV) this edge is close to the peak in the intensity spectrum of the X-ray beam after
it has been transmitted through a patient (see for example Figure 5.4).
In Table 5.1 figures are also given for comparison for thallium-doped sodium iodide (NaI:Tl)
the primary detector used almost universally in gamma cameras in nuclear medicine.
Spectral outputs are shown for some phosphors in Figure 5.5 and receptor sensitivities
for some typical receptors in Figure 5.6. It is clear that there can be a major loss in sensitiv-
ity if these spectra are not matched.
[Plot: mass attenuation coefficient (m² kg⁻¹) against photon energy (keV), 10–80 keV, with an inset covering 10–70 keV.]
FIGURE 5.4
Mass attenuation coefficients for the majority element in some important receptors showing the positions of
absorption edges: solid line showing caesium (Z = 55); long-dashed line showing gadolinium (Z = 64). The inset
shows the coefficients for selenium (Z = 34). A typical spectrum for the unscattered component of a 70 kVp beam
after heavy filtration in the patient is also shown by the short-dashed line (continuous spectrum shown only).
Note the positions of the K-edges in relation to the transmitted spectrum.
[Plot: % relative response against wavelength.]
FIGURE 5.5
The spectral output of different phosphors. Solid line CsI:Na. Dashed line (ZnCd)S:Ag. Dot dash line BaFCl:Eu2+.
Dotted line CaWO4.
[Plot: % relative response against wavelength.]
FIGURE 5.6
The spectral response of different light receptors. Solid line—S20 photocathode. Dashed line—X-ray film. Dot
dash line—the eye. Dotted line—S11 photocathode.
Insight
Efficiency of the Phosphor Layer
In phosphor layers deposited as a continuous structure (such as the CsI layers described in Section 5.13.1) the packing density is almost 100%, thus giving a gain of approximately two over crystal deposited screens with a consequently higher X-ray absorption per unit thickness. For these reasons it is
possible to get better resolution and less noise from these phosphors, and this is important in
the construction and performance of, for example, an image intensifier (see Section 5.13.1).
FIGURE 5.7
The basic construction of an X-ray film (labelled layers include the protective layer and the emulsion).
These electrons move through the crystal and are trapped by the sensitivity speck. An
electron at a sensitivity speck then attracts a positively charged silver ion to the speck and
neutralises it to form a silver atom. This occurs many times and the result is an area of the
crystal with a number of neutral silver atoms on the surface. This crystal is then said to
constitute a latent image. For the crystal to be developable, between 10 and 80 atoms of sil-
ver must be produced. During development with a reducing alkaline agent, crystals with a
latent image in them allow the rest of the silver ions present to be reduced and thus form a
dark silver grain speck on the film. The film is fixed and hardened at the same time using
a weakly acidic solution. The crystals which did not contain a latent image are washed off
at the fixation stage leaving a light area on the film.
If the developing agent is too strong it will develop crystals in which no latent image is
present. Even in an unexposed film some crystals will be developed to produce a low level
of blackening called fog. The ‘fog’ level can be increased by using inappropriate developing
conditions, for example, too strong a developer or too high a developing temperature.
The optical density D of a blackened film is defined as D = log10(I0/I), where I0 is the incident intensity of visible light and I is the transmitted intensity.
FIGURE 5.8
A simple interpretation of optical density: (a) for a simple film of density D, (b) for two films of densities D1 and D2 superimposed.
Basing the definition on the log of the ratio of incident and transmitted intensities has
three important advantages:
1. It represents accurately what the eye sees, since the physiological response of the
eye is also logarithmic to visible light.
2. A very wide range of ratios can be accommodated and the resulting number for
the density is small and manageable (see Table 5.2).
3. The total density of two films superimposed is simply the sum of their individual
densities.
When different amounts of light are transmitted through different parts of the film (Figure
5.9), the difference in density between the two parts of the film is called the contrast. Hence,
C = D1 − D2 = log10(I0/I1) − log10(I0/I2) = log10(I2/I1)  (5.1)
The eye can easily discern differences in density over a range from approximately 0.25–2.5,
the minimum discernible difference being about 0.02.
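These definitions are easy to check numerically; the intensities below are illustrative only.

    import math

    def optical_density(i0, i):
        return math.log10(i0 / i)

    i0 = 1000.0
    d1 = optical_density(i0, 10.0)    # darker region, D1 = 2.0
    d2 = optical_density(i0, 100.0)   # lighter region, D2 = 1.0

    contrast = d1 - d2   # Equation 5.1: log10(I2/I1) = log10(100/10) = 1.0
    total = d1 + d2      # two films superimposed: densities simply add, giving 3.0
    print(contrast, total)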
TABLE 5.2
Relationship between Optical Density and Transmitted Intensity

Transmitted Intensity as a
Percentage of I0 (%)        OD = log10(I0/I)
10                          log10 10 = 1.0
1                           log10 100 = 2.0
0.01                        log10 10000 = 4.0

FIGURE 5.9
Representation of contrast between two parts of a blackened film (densities D1 and D2, transmitted intensities I1 and I2) as a difference in transmitted light intensities.

If the density produced on a film is plotted against the log of the radiation exposure producing it, the characteristic curve of the film is generated (Figure 5.10). This is frequently referred to as an ‘H and D curve’ after Hurter and Driffield who developed it for photographic analysis. The log scale again allows a wide range of exposure to be accommodated.

FIGURE 5.10
A typical characteristic curve for an X-ray film (density plotted against log exposure, showing the fog level, toe, approximately linear region, saturation and solarisation). This is often called an H and D curve after Hurter and Driffield who developed it for photographic analysis.
Each type of film has its own characteristic curve although all have the same basic shape.
The finite density at zero exposure is due to a small contribution from ‘fogging’, that is, the
latent images produced during manufacture, by temperature, humidity and other non-radia-
tion means. This can be kept to a minimum but never completely removed. Note that the appar-
ently horizontal initial portion of the curve arises primarily because one logarithmic quantity
(D) has been plotted against another (log E). This has the effect of compressing the lower end
of the curve. If the data were re-plotted on linear axes, there would be a steady increase in film
blackening with exposure from zero dose but when using the characteristic curve, a finite dose
is required to be given to the film before densities above the fog level are recorded.
The initial curved part of the graph is referred to as the ‘toe’ of the characteristic curve
and this leads into the approximately linear portion of the graph covering the range of
densities and doses over which the film is most useful. Eventually, after passing over the
shoulder of the curve, the graph is seen to saturate and further exposure produces no
further blackening. This decrease in additional blackening is due to the black spots from
developed crystals overlapping until eventually the production of more black silver spots
has no further effect on the overall density. At very high exposures (note that the scale is
logarithmic), film blackening begins to decrease again, a process known as solarisation.
Insight
Solarisation
The reader may like to satisfy themselves that if a film is exposed to light intensities in the solarisa-
tion region where the film gamma is –1, an exact copy can be created. However, this technique is
no longer used as copying is simple and straightforward using digital techniques.
The slope of the approximately linear portion of the characteristic curve is known as the film gamma:

γ = (D2 − D1)/(log E2 − log E1)

where D1 and D2 are the densities produced by exposures E1 and E2 on that portion of the curve (see Figure 5.10).
If no part of the curve is approximately linear, the average gradient may be calculated
between defined points on the steepest part of the curve.
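A minimal sketch of the calculation, using two assumed points on the straight-line part of a characteristic curve:

    import math

    def film_gamma(d1, log_e1, d2, log_e2):
        # slope of the characteristic curve between two points (D, log E)
        return (d2 - d1) / (log_e2 - log_e1)

    # assumed example: densities 0.5 and 2.5 produced by relative exposures 2 and 8
    print(film_gamma(0.5, math.log10(2.0), 2.5, math.log10(8.0)))   # ~3.3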
The gamma of a film depends principally on the type of emulsion, in particular the distribution and size of the silver bromide crystals, and secondly on how the film is developed. If
the crystals are all the same size a very ‘contrasty’ film is produced with a large gamma.
A wide range of crystal sizes will produce a much lower gamma (Figure 5.11). A ‘fast’ film
with large crystals generally also has a wide range of crystal sizes. Finally, increased grain
size reduces resolution although unsharpness in the film itself is rarely a limiting factor.
The correct characteristic curve for a film can only be obtained by using the developing
procedure recommended by the film manufacturer, including the concentration of devel-
oper, the temperature of the developer, the period of development and even the amount
of agitation to be applied to the film. An increase in any of these factors will result in the
over-development of the film, a decrease will under-develop the film.
Within realistic limits, over-development increases the fog level, the film gamma and
the saturation density. Under-development has the opposite effect (Figure 5.12).
The amount of radiation required to produce a given density is an indication of film speed.
The speed is usually taken to be the reciprocal of the exposure that causes unit density above
fog so a fast film requires less radiation than a slow film. The speed of the film depends on
the size of the crystals making up the emulsion and on the energy of the X-rays striking the
film. If the crystals are large then fewer X-ray interactions are required to blacken a film and,
because of this, fast films are often called ‘grainy’ films as the crystals when developed give a ‘grainy’ picture.

FIGURE 5.11
Characteristic curves for films of different gammas and different speeds.

FIGURE 5.12
Variation of the characteristic curve for a film, for different development conditions. Note how the gamma and fog level are affected.

This is because the energy deposited by a single X-ray photon is sufficient to produce a latent image in a large crystal as well as a small crystal. Fewer large crystals
need be developed to obtain a given density. The speed of the film varies with the energy of
the X-ray photon, basically because the atomic numbers of the elements in the receptor, and
hence their absorption properties, are energy-dependent (see Section 4.10). In practice, due to
the wide range of photon energies present in a diagnostic X-ray beam this variation in sensi-
tivity can be neglected during subjective assessment of X-ray films.
The relative speed of two films is dependent on their characteristic curves and the den-
sity at which the speed is compared. In the extreme, if the curves cross then there will be
places where one film is faster than the other, one place where they have the same speed
and other places where the relative speeds are reversed.
5.5.4 Latitude
Two distinct, but related aspects of latitude are important, film latitude and exposure lati-
tude. Consider first film latitude. The optimum range of densities for viewing, using a
standard light box, is between 0.25 and 2.5. Between these two limits the eye can see small
changes in contrast quite easily. The latitude of the film refers to the range of exposures
that can be given to the film such that the density produced is within these limits. The
higher the gamma of the film, the smaller the range of exposures it can tolerate and thus
the lower the latitude. For general radiography a film with a reasonably high latitude is
used. There is an upper limit however, because if the gamma of the film is made too small
the contrast produced is too small for reasonable evaluation.
Exposure latitude is related to the object and can be understood by reference to
Figure 5.10. If a radiograph is produced in which all film densities are on the linear por-
tion of the curve, (i.e. the object contains a narrow range of contrasts) the exposure may be
altered, shifting these densities up or down the linear portion, without change in contrast.
(Radiologists often prefer a darker film to a lighter film although there may in fact be no
difference in contrast when the densities are measured and the contrast evaluated.) In
other words there is ‘latitude’ or some freedom of choice over exposure. If, on the other
hand, the range of densities on a film covers the whole of the linear range, (i.e. the object
contains a wide range of contrasts), exposure cannot be altered without either pushing the
dark regions into saturation or the light regions into the fog level. There is no ‘latitude’
on choice of exposure. Exposure latitude can be restored by choosing a film with a lower
gamma, (greater latitude), but there is a loss of contrast.
Note that in other imaging applications the ratio of the largest signal that can be handled
by the detector without saturation to the weakest signal that can be distinguished from the
noise is known as the dynamic range. The dynamic range of film is very small.
The advantages of using film in conjunction with fluorescent screens are two-fold. First
a much greater number of X-ray photons are absorbed by the screen compared to the num-
ber absorbed by film alone. The ratio varies between 20 and 40 depending on the screen
composition.
Second, by first converting the X-ray photon energy into light photons, the full blacken-
ing potential is realised. If an X-ray photon is absorbed directly in the film, it will sensitise
only one or two silver grains. However, each X-ray photon absorbed in an intensifying
screen will release at least 400 photons of light—some screens will release several thou-
sand photons. Thus, although tens of light photons are required to produce a latent image,
the final result is that the density on the film for a given exposure is between 30 and
300 times blacker (depending on the type of screen) when a screen is used than when a
film alone is exposed.
Insight
Intensification Factor
The increase in blackening when using a fluorescent screen is quantified by the intensification
factor, defined as the ratio of the exposure required without screens to the exposure required with screens for the same film blackening. A value for the intensification factor is only strictly valid for one
density and one kVp. This is because, as shown in Figure 5.13, when used in conjunction with a
screen, the characteristic curve of the film is altered. Not only is it moved to the left, as one would
expect, but the gamma is also increased.
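As a minimal numerical sketch of this ratio (the exposure values are invented purely for illustration):

    def intensification_factor(exposure_without_screens, exposure_with_screens):
        # ratio of exposures giving the same film blackening, at one density and one kVp
        return exposure_without_screens / exposure_with_screens

    print(intensification_factor(200.0, 4.0))   # e.g. 200 mAs vs 4 mAs gives a factor of 50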
When using screens, double-sided film has radiological advantages as well as process-
ing advantages since two emulsion layers enable double the contrast to be obtained for a
given exposure. This is because the superposition of two densities (one on each side of the
film base) produces a density equal to their sum. Although, theoretically, the same con-
trast could be achieved by doubling the thickness of a single emulsion, this is not possible
in practice due to the limited range in emulsion of the light photons from the screens.

FIGURE 5.13
Change in the characteristic curve of film when using a screen.
Note that, because the range of X-ray photons is not limited in this way, the contrast
for unscreened films would be almost the same for a one-sided film of double emulsion
thickness. The mechanical advantage of double-sided film would, of course, still remain.
Reference was made in Section 5.5.3 to the speed of the film. In practice it is the speed of
the film/screen combination which is important. This is described by a ‘speed class’. This
system is based on a similar numerical system to the American Standards Association (ASA)
film speed system used in photography. The special feature of ASA film speeds 32, 64, 125
is that the logarithms of these numbers 1.5, 1.8, 2.1 show equal increments. In other words
speed classes are equally spaced on the ‘log exposure’ axis of the characteristic curve. The
higher the speed class the more sensitive the film so speed classes for film-screen combi-
nations used in radiography are generally higher (Table 5.3).
Very high resolution, high contrast combinations are likely to have a speed class in the
region of 100–150, whereas very fast combinations (typically 600 or above) will make sac-
rifices in terms of resolution and possibly contrast. Note that quantum mottle limits the
speed of the system that can usefully be used (see Section 6.10).
A film-screen combination must be contained in a light-tight cassette, otherwise the film is fogged by ambient light. The front of the cassette is made of a low atomic number mate-
rial, generally either aluminium or plastic. The back of the cassette is either made of, or
lined with, a high atomic number material. This high atomic number material is more
likely to absorb totally the X-ray photons passing through both screens and film by the
photoelectric effect than to undergo a Compton scatter reaction, which could backscatter
photons into the screen.
The two screens are kept in close contact with the film by the felt pad exerting a constant
pressure as shown in Figure 5.14. If close contact is not maintained then resolution is lost
due to the greater opportunity for the spread of light leaving the screen before reaching
the film.
The light emitted from a screen does not increase indefinitely if the screen thickness is
increased to absorb more X-ray photons. A point is reached where increasing the thickness
produces no more light because internal absorption of light photons in the screen takes
place. The absorption of X-ray photons produces an intensity gradient through the screen
and significantly fewer leave the screen than enter. The light produced at a given point in
the screen is directly proportional to the X-ray intensity at that point.
TABLE 5.3
The Range of Speed Classes Used in Film Screen Radiography and the Corresponding Linear Increases on a Logarithmic Scale

Speed Class    100    200    400    800
Logarithm      2.0    2.3    2.6    2.9
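The equal spacing of the speed classes on a logarithmic exposure axis can be verified directly (a short Python check):

    import math

    speed_classes = [100, 200, 400, 800]
    logs = [round(math.log10(s), 1) for s in speed_classes]
    print(logs)                                              # [2.0, 2.3, 2.6, 2.9]
    print([round(b - a, 1) for a, b in zip(logs, logs[1:])]) # equal steps of ~0.3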
FIGURE 5.14
Cross-section through a ‘loaded’ cassette containing a ‘sandwich’ of intensifying screens and double-sided film.
In the case of the back screen, the reduction in intensity through it is of no significance.
This screen must be thick enough to ensure that the maximum amount of light for a given
X-ray intensity is produced, but once this is achieved it need be no thicker. The front
screen thickness is a compromise between achieving the maximum light output through
X-ray photon absorption and not reducing the X-ray photon intensity by too much in the
area of effective maximum light production, that is, in the layers of the screen closest to
the film. The compromise thickness for the front screen, giving maximum light produc-
tion, is in fact somewhat less than optimum for the back screen. Since unsharpness is less
from the back screen, attempts to optimise light production can improve image sharp-
ness. For further discussion on the way in which the screen can affect image quality see
Section 5.8.
If the screens are of unequal thickness they must never be reversed, although for some modern cassettes there is no difference between the thickness of the front and back screens.
5.7 Reciprocity
Basically the law of reciprocity states that, if two quantities x and y are multiplied together,
the same result will be obtained by using 10x and 0.1y or vice versa. A good example is
multiplying current and voltage, 10A × 0.1V and 0.1A × 10V both give one watt of power.
The exposure received by a cassette can be considered to depend on two basic param-
eters, the intensity of the radiation beam striking the cassette and the time of exposure.
The intensity at a fixed kVp is proportional to the milliamps (mA) of the exposure, and
the exposure is thus proportional to the milliampseconds (mAs). For unscreened film the
same mAs will always give the same blackening of the film regardless of the period of
exposure, that is the law of reciprocity is obeyed. When using screened film, however, it is
found that for very short exposures and very long exposures, although the same mAs are
given, the blackening of the film is less.
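For an unscreened film the reciprocity law amounts to saying that only the product mA × time matters; a trivial sketch (the settings are arbitrary examples):

    # exposure ~ mA x time (s); for unscreened film the same mAs gives the same blackening
    settings_mA_s = [(100.0, 0.1), (10.0, 1.0), (1.0, 10.0)]
    print([mA * t for mA, t in settings_mA_s])   # all 10.0 mAs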
For long exposures this effect is known as latent image fading and the amount of fading
depends to a large extent on whether the image has been produced by X-ray interactions
or by visible light photons. In general, a single X-ray photon will form a developable grain
because it deposits so much energy. Hence image fading does not occur. Conversely, many
visible light photons are required to produce a latent image and if their rate of arrival is
slow the silver halide lattice may revert to its normal state before sensitisation of the speck
is completed. This effect is called failure of the law of reciprocity.
FIGURE 5.15
Schematic representation of image unsharpness created by the use of an intensifying screen.
The use of phosphor crystals that are needle-shaped and aligned with the X-ray beam, rather than pebble-shaped, has helped to reduce this source of image unsharpness with no loss in sensitivity (see Section 5.13.1).
This unsharpness, or loss of resolution caused by photoluminescent screens, will be dis-
cussed again in Section 5.11 since it is a major problem in both analogue and many forms
of digital image.
The data for a digital image may be collected using several different detector arrangements (Figure 5.16):
1. A small detector scans across the area occupied by the image (Figure 5.16a). This is a very simple geometry but will require a long time to acquire all the data, even in a relatively coarse 256 × 256 matrix.
2. A linear array of 200 × 1.0 mm detectors only has to make linear movements to
cover the image space (Figure 5.16b). Data collection is now speeded up but may
still take several seconds.
3. A static array of detectors may be used (Figure 5.16c). This will be inherently more
rapid since there is no mechanical movement, provided that the electronics is fast
enough to handle the rapid collection of large amounts of data.
Insight
Slot Scanning Systems
A few systems have adopted the scanning principle, using a one-dimensional array with linear
motion. In an early system proposed for DR of the chest (Tesic et al. 1983) a vertical fan beam
of X-rays scanned transversely across the patient in a PA orientation. An entrance slit 0.5 mm wide
defined the beam, an exit slit 1.0 mm wide removed much of the scatter. The beam fell on a
gadolinium oxysulphide screen backed by a vertical linear detector array consisting of 1024 pho-
todiodes with a 0.5 mm spacing (Figure 5.17). The complete system of X-ray tube, collimators and
detectors scanned over the patient in 4.5 s, sampling in 1024 horizontal positions. The estimated
entrance dose was 250 µGy.
The advantage of this type of system is that there is no need for an anti-scatter grid. The disad-
vantage however is that there is a greater load on the X-ray generator as the exposure time is much
greater than in conventional radiography.
[Diagrams: each arrangement uses 1 mm detectors covering scanned dimensions of about 25 cm.]
FIGURE 5.16
Detector arrangements for digital radiology (a) scanning detector, (b) linear array of detectors, (c) static array of
independent detectors: 250 × 250 = 62,500 detectors in toto.
FIGURE 5.17
Diagram of a scanning digital radiography system. The collimated fan beam moves horizontally across the
patient. The exit slit and detectors move with the beam.
An active matrix flat panel imager (FPI) has two basic components:
(a) A large-area, thin layer of X-ray sensitive material which is capable of generating either light (a phosphor) or electrons (a semiconductor)
(b) A two-dimensional array of tiny matrix elements constructed as a thin film transistor (TFT) array. To achieve the necessary resolution (say 150 µm) across a 30 cm × 45 cm image requires an array of 2000 × 3000 individual, electrically isolated, TFTs
FPIs can work in two slightly different ways (see Figure 5.18). In an indirect detector active
matrix FPI the X-ray sensitive material is a phosphor, typically thallium-doped columnar
caesium iodide (see Section 5.13.1) or gadolinium oxysulphide. The light emitted from the
phosphor is converted by a photodiode, typically amorphous silicon (a-Si) into electrical
charges which are stored as an electronic image in the TFTs.
FIGURE 5.18
Illustrating indirect and direct conversion in active matrix flat panel imagers. In indirect conversion (a and
b) the X-ray photons are first converted by a phosphor to visible photons which are subsequently converted
to an electrical signal by a photodiode. In direct conversion (c and d) a photoconductor is used to convert the
X-ray photons directly into an electrical signal. (Reproduced with permission from Zhao W, Andriole K P and
Samei E Digital radiography and fluoroscopy in Advances in medical physics – 2006. Wolbarst A B, Zamenhof R G
and Hendee W R, eds, Medical Physics Publishing, Madison, WI, 1–23, 2006.)
In the direct detector active matrix FPI the X-ray receptor is a layer of amorphous sele-
nium (a-Se) which converts the X-ray pattern into a spatial pattern of electron-hole pairs.
An electric field now drives the electrons (or holes depending on field direction) towards
the TFT array where they are collected on the pixel electrodes.
With both FPIs, at the end of the exposure the X-ray image has been converted to an array of
electrical charges, with the charge at each pixel proportional to the absorbed radiation in the
corresponding region of the image. Each TFT acts like a microscopic valve, so for read out a
scanning control circuit applies a bias voltage which activates the TFTs one row at a time and
the image charges are transferred to charge-sensitive amplifiers. Because all scanning move-
ments are controlled electronically with no mechanical movements, and modern computers
can handle the rapid data flow, scanning is very fast. Each row takes about 30 µs to read out so
a detector with 2000 × 2000 pixels can be read out in real time at about 15 frames s⁻¹.
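The quoted frame rate follows directly from the row read out time; a quick check using the figures in the text:

    rows = 2000
    row_readout_s = 30e-6                 # about 30 microseconds per row
    frame_time_s = rows * row_readout_s   # 0.06 s per frame
    print(1.0 / frame_time_s)             # ~16.7 frames per second, i.e. roughly 15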
Insight (1)
Indirect Detector Active Matrix FPIs
Some further points on indirect detector FPIs affecting overall sensitivity response are as follows:
1. Columnar CsI will allow thicker layers (500 µm) to be used giving better QE without too
much loss of resolution.
2. About 25 eV are required to produce a light photon in CsI (i.e. almost 10 times the energy of the light photon itself), so a 50 keV X-ray photon produces about 2000 light photons (see the short calculation after this list).
3. The light output from CsI, which covers a broad band from 400 to 700 nm, is well matched
to the sensitivity response of a-Si.
4. The fraction of the pixel area sensed by the photodiode (the fill factor) is not 1.0 because
part of each pixel is occupied by electronics associated with the TFT and scanning lines.
As the pixel size decreases the fill factor falls because the obstructed area becomes a big-
ger percentage of the whole. Conversely, the continuing reduction in size of the electronic
components of each pixel will increase the fill factor in the future. Fill factors for currently
available detectors range from 0.5 to 0.9 depending on pixel size.
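The figure in point 2 is simply the X-ray photon energy divided by the energy needed per light photon; a minimal sketch using the values quoted above:

    def light_photons(xray_energy_keV, eV_per_light_photon=25.0):
        return xray_energy_keV * 1000.0 / eV_per_light_photon

    print(light_photons(50.0))   # 2000 light photons per 50 keV X-ray photon absorbed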
Insight (2)
Direct Detector Active Matrix FPIs
1. Because of the relatively low atomic number of selenium, the thickness of this layer is quite
high (about 1 mm) to give good QE.
2. Since X-rays are converted directly to electrons, which can be collimated by electric fields,
the image blurring caused by light spreading in a scintillator is much reduced, even for the
relatively thick a-Se layer.
3. The pixel electrode can be built on top of the TFT and scanning lines so the fill factor is
close to 1.0.
4. Overall conversion gain at 1000 electrons per 50 keV photon is about the same for direct
and indirect FPIs but very small pixels are easier to achieve with the direct system (because
there are no photodiodes) and the fill factor is not compromised.
An array of detectors that uses a somewhat different principle to produce digital images is
the charge coupled device (CCD) discussed in Section 5.13.4.
[Figure 5.19a: the incoming laser beam (He/Ne laser) is swept in the scan direction across the moving phosphor plate; the photostimulated emission is collected by a light guide, passed through a filter to remove scattered laser light, and detected by a photomultiplier tube, amplifier and ADC.]
[Figure 5.19b: 1. the phosphor plate is exposed to X-rays, capturing a latent radiographic image; 2. the plate is scanned by a laser, stimulating trapped electrons; 3. light is emitted by the stimulated electrons and recorded by the photomultiplier; 4. the plate is exposed to high intensity light, removing any residual latent image.]
FIGURE 5.19
(a) The read out system for a digital phosphor plate. (b) The life cycle of the plate. (Reproduced with permission
from Zhao W, Andriole K P and Samei E Digital radiography and fluoroscopy in Advances in medical physics –
2006. Wolbarst A B, Zamenhof R G and Hendee W R, eds, Medical Physics Publishing, Madison, WI, 1–23, 2006.)
bit depth. Ultimately each small area of the phosphor has a light output allocated to it. The
image is stored in the computer memory.
To prepare the screen for reuse it has to be exposed to a high intensity flood light for a
short period of time. This completely removes any remaining energy of excitation associ-
ated with residual trapped electrons from the screen and recreates a uniform response on
the screen. The life cycle of the plate is shown in Figure 5.19b.
5.11.3 Properties
As both CR and standard film/screen systems are essentially based on the same image
forming device, many of the properties are almost the same. The doses required to prod-
uce a usable image are similar with the lower limit on dose reduction essentially being
the noise produced by quantum mottle. Spatial resolution for general CR work is slightly
inferior to that of film-screen and the causes of loss of resolution are different. In a film/
screen system the light spreads out isotropically following the conversion of an X-ray pho-
ton to light photons. In the storage phosphor system the main contribution to unsharpness
comes from the scattering of the stimulating beam as it enters the phosphor thus creating
a spread in the luminescence along its path. The intensity of the stimulating laser light
also affects the resolution. A higher intensity beam results in a lower resolution but this
is compensated for to some extent by the increase in the amount of stored signal released.
The resolution is also limited by the size of the pixels sampled and read out time. If the
laser spot has moved to the next point on the phosphor before all of the emitted light from
the previous stimulation has been collected, this light will contribute to the signal of the
neighbouring pixel. A matrix array of approximately 2000 by 2500 is typical.
The relative intensity of light emitted is proportional to X-ray exposure at the plate over
approximately four decades. This ‘latitude’ allows CR to be used over a wide range of
exposure conditions without adjusting the detectors. Image processing frequently per-
mits images of acceptable diagnostic quality to be extracted from over-exposed or under-
exposed plates—situations which would necessitate retakes with film screen. Also this
wide dynamic range can be exploited in chest radiography using a windowing technique
(Section 6.3.3) or in dual energy subtraction (Section 9.5.3).
Insight
CR and DR Compared and Contrasted
10. In terms of workflow, a DR room can image a greater number of patients in a session than
CR if those patients are presenting for a standard examination, for example, chest radiogra-
phy or mammography.
11. CR is generally available at a much lower initial cost than DR. This is especially true if
replacing an existing film-screen system as the same X-ray equipment can be used. When
a DR system is installed it is normally as part of a complete radiographic room. Quoted
lifetimes of both CR and DR detectors are similar at about 7 years; however, CR plates are
normally replaced at much shorter intervals than this due to mechanical damage associated
with the handling and readout process of most readers. This provides an additional cost for
CR systems.
(a) The beam from either a He/Ne gas laser or a solid state diode laser is focussed
with lenses and scanned under computer control in a raster pattern onto the film
using a beam deflecting device.
(b) The light transmitted through the film at any point depends on the optical density
at that point and falls on a photo detector.
(c) The laser and photo detector scan along the line of film producing a continuously
varying intensity at the photo detector.
(d) This light is sampled at regular intervals, once per pixel.
(e) The analogue electrical signal is digitised and stored in a matrix of pixel addresses
(x, y, co-ordinates and digitised photo detector reading).
Insight
Advantages of Laser Beams for Digitising Films
1. The beam can be highly collimated—for example, producing spots as small as 50 µm.
2. This small spot and low light scattering results in high spatial resolution (4000 × 5000 pixels and 10 lp mm⁻¹).
3. Because of the high light intensity of the laser beam there are a large number of pixel levels (12 bits) so the accuracy is high. The system can easily accommodate the full OD range of film (see the short calculation after this list).
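The accuracy implied by point 3 can be made concrete with a short calculation (the OD range of 0–4 is taken from Table 5.2):

    bits = 12
    levels = 2 ** bits                # 4096 pixel values
    od_range = 4.0                    # approximate full optical density range of film
    print(levels, od_range / levels)  # about 0.001 OD per digital level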
Flat bed CCD scanners, or less frequently digital cameras or analogue cameras with ADCs,
may also be used to digitise films.
FIGURE 5.20
A simple arrangement for film digitisation—for detail of operation see text. (Reproduced with permission from
Zhao W, Andriole K P and Samei E Digital radiography and fluoroscopy in Advances in medical physics – 2006.
Wolbarst A B, Zamenhof R G and Hendee W R, eds, Medical Physics Publishing, Madison, WI, 1–23, 2006.)
Insight
Eye-Phosphor Combination in Fluoroscopy
Before the invention of the image intensifier in the 1950s, fluoroscopic screens were the primary
means by which moving radiographic images were viewed. It is instructive to investigate the rea-
son why direct viewing of the screen results in poor images.
A fluoroscopic screen consists of plastic backing for protection, a zinc-cadmium sulphide
(ZnCdS) scintillation screen, and a thick lead glass protective screen. The resolution of the ZnCdS
screen varies with crystal size in exactly the same way and for the same reasons as the resolution
FIGURE 5.21
Comparison of the emission spectrum of visible light from a fluorescent screen with the sensitivity of the eye.
of a radiographic screen. The wavelength of emitted light is of course independent of the X-ray
photon energy. When this technique was used historically, it was essential that certain basic pre-
paratory steps were taken. For example, the light output was so low that the room had to be dark-
ened and the eye fully dark-adapted. However, this caused further problems. The spectral output
of a cadmium activated zinc sulphide screen closely matches the spectral response of the eye
using cone vision at high light intensities (Figure 5.21) but the eye is poorly equipped for detect-
ing changes in light levels or resolution at the low intensities of light given out by a fluoroscopic
screen. Furthermore, these levels cannot be increased by increasing the X-ray intensity without
delivering an even higher dose to the patient. At the levels of light involved, 10⁻³–10⁻⁴ cd m⁻², the
resolution of the eye is no better than about 3 mm at a normal viewing distance of 25 cm and
only changes in light levels of approximately 20% can be detected. This is because vision at these
levels is by rod vision alone, cone vision having a threshold of brightness perception of about
0.1 cd m⁻². Also the spectral response of rods peaks at a wavelength of 500 nm, where screen
output tends to be poor (Figure 5.6). For further discussion see Section 7.3.2.
All of the figures quoted above are for a well dark-adapted eye, so the eye had to be at low
light levels for 20–40 min before the fluoroscopic screen was viewed. This allowed the visual
purple produced by the eye to build up and sensitise the rods. Alternatively, because the rods are
insensitive to red light, red goggles could be worn in ambient light, producing the same effect.
Exposure to ambient light for even a fraction of a second completely destroyed the build-up of
visual purple.
FIGURE 5.22
Construction of a simple image intensifier.
FIGURE 5.23
Greatly enlarged view of the needle-like crystalline structure of a CsI:Na screen.
Light produced within a needle-like crystal tends to be channelled along its length rather than spreading out to cover a large area. Although not as efficient a collimator as a light pipe, light
spreading is only about half that with an unstructured screen of the same thickness. CsI
can thus be used in thicker layers without any significant loss in resolution although the
packing advantage means that a thickness of only 0.1 mm is required.
Intimately attached to the CsI is the photocathode, a photoemissive material compris-
ing antimony and caesium compounds that converts the light photons into electrons. The
output of the fluorescent screen must be closely matched to the photo response of the pho-
tocathode (see Section 5.4.1). The input phosphor and photocathode are curved to ensure
that the electrons emitted have the same distance to travel to the output phosphor. The
output of the intensifier is via a second fluorescent screen, often of ZnCdS:Ag, shielded
from the internal part of the intensifier by a very thin piece of aluminium. This stops light
from the screen entering the image intensifier. This screen is much smaller than the input
phosphor. A voltage of approximately 25 kV is maintained between the input and output
phosphors.
The mode of action of an image intensifier is as follows. When light from the input fluo-
rescent screen falls on the photocathode it is converted into electrons, and for the present
discussion electrons have two important advantages over photons: they can be given extra energy by acceleration in an electric field, and they can be focused.
Thus under the influence of the potential difference of 25–30 kV, the electrons are acceler-
ated and acquire kinetic energy as they travel towards the viewing phosphor. This increase
in energy is a form of amplification. Second, by careful focusing so as not to introduce
distortion, the resulting image can be minified, thereby further increasing its brightness.
Even without the increase in energy of the electrons, the brightness of the output screen
would be greater by a factor equal to the ratio of the input and output screen areas. The
output phosphor is generally made of small crystals of silver activated zinc-cadmium sul-
phide which are laid down in a thin layer so as not to affect significantly the resolution of
the minified image. The resolution of the image is 3–5 lp mm⁻¹ in a new system (see Section
5.13.4).
The amplification resulting from electron acceleration is usually about 50. In other words,
for every light photon generated at the input fluorescent screen, approximately 50 light
photons are produced at the output screen. The increased brightness resulting from reduc-
tion in image size depends of course on the relative areas of the input and output screens.
If the input screen is approximately 25 cm (10 inches) in diameter and the output screen
is 2.5 cm (1 inch), the increase in brightness is (25/2.5)² = 100 and the overall gain of the
image intensifier about 5000. By changing the field on the focusing electrodes it is possible
to magnify part of the image. A direct consequence of this is that the intensification factor
due to minification is reduced and the exposure of the patient must be increased to com-
pensate. Magnification always leads to a greater patient dose.
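The overall gain is simply the product of the flux (acceleration) gain and the minification gain; a quick check with the figures in the text:

    def intensifier_gain(flux_gain, input_diameter_cm, output_diameter_cm):
        # brightness gain = flux gain x ratio of input/output screen areas
        return flux_gain * (input_diameter_cm / output_diameter_cm) ** 2

    print(intensifier_gain(50, 25.0, 2.5))   # 50 x 100 = 5000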
FIGURE 5.24
Method of coupling an image intensifier with a television camera using a half-silvered mirror to deflect part of
the image to a spot or cine camera.
[Timing diagram: the camera shutter opens at about 33 frames s⁻¹ (30 ms cycle); the film moves while the shutter is closed and a 5 ms X-ray pulse is delivered while it is open.]
FIGURE 5.25
Synchronisation of the X-ray pulse with film frame exposure during cinefluorography.
If the kV is set too high, contrast is lost, whereas adjustment of tube current or pulse width has only a limited
brightness range. These limitations can be largely overcome with a digital receptor.
Spot films (70 mm or 100 mm) are larger than cine film so the lens on the front of this
camera has a longer focal length than that on a cine camera. In comparison with cinefluo-
rography, typical advantages and disadvantages are as follows:
However, as with cinefluorography, a good digital system makes spot film redundant.
FIGURE 5.26
Block diagram representation of an image intensifier and TV system for digital radiography. (Reproduced with
permission from Dendy P P Recent technical developments in medical imaging part 1: Digital radiology and
evaluation of medical imaging. Curr Imag, 2, 226–236, 1990.)
FIGURE 5.27
(a) Basic features of the construction of a video camera, (b) the light sensitive surface shown in detail.
When the camera is directed at visible light, photoelectrons are released from the anti-
mony trisulphide matrix, to be collected by the anode and removed, leaving positive
charges trapped there. The amount of charge trapped at any one point is proportional to
the light intensity that has fallen on it. Thus the image information has now been encoded
in the relative sizes of the positive charges stored at different points in the image plate
matrix. These insulated positively charged areas draw a current onto the conductive plate
until there is an equal negative charge held there.
The electrons emitted by the cathode are formed into a very narrow beam by the con-
trol grid. This beam is attracted towards the fine mesh anode which is at 250 V positive
to the cathode. The signal plate has a potential some 225 V less than the anode, allowing
the photoelectrons to flow from the signal plate to the anode. The electrons from the
cathode pass through the mesh and this reversal in the potential field slows them down
until they are almost stationary. They have an energy of only 25 eV when they strike
the target plate. The field between the anode and the signal plate also straightens out
the path of the electron beam so that it is almost at right angles to the signal plate. The
electron beam is scanned across the signal plate by the scanning electrodes so that it
only interacts with a few of the insulated areas at a time. Electrons flow from the elec-
tron beam to neutralise the positive charge on the target. The reduction in the positive
charge releases an equivalent charge of electrons from the conducting layer and the flow
of electrons from this layer constitutes the video signal from the camera. The natural
line-scanning motion of the beam provides digitisation in one dimension and the data
along the scan line can be digitised by registering the accumulated signal in regular,
brief intervals of time.
The following additional points should be noted about a television system:
1. Irrespective of the resolving capability of the image intensifier system, the tele-
vision system will impose its own resolution limit. Vertically the limit is set
by the number of scan lines the electron beam executes. Whatever the size of
the image, the electron beam only executes a fixed number of scan lines (for a
standard TV camera, 625 in the United Kingdom, equivalent to about 313 line
pairs). The effective number of line pairs, for the purpose of determining verti-
cal resolution, is somewhat less than this—probably about 200. Now for a small
image, say 5 cm in diameter, this represents four line pairs per mm which
is comparable to the resolving capability of an image intensifier (see Section
5.13.1). However, any attempt to view a full 230 mm diameter (9 inch) screen
would provide only about 0.8 line pairs per mm and would severely limit the
resolving capability of the complete system (see the short calculation after this list). Therefore 1000 line and 2000 line
TV cameras have been developed. Horizontally the limit is determined by the
frequency at which the electrons are modulated as the electron beam scans
across the screen.
2. Contrast is modified by the use of a TV system. It is reduced by the camera but
increased by the television display monitor, the net result being an overall improve-
ment in contrast (for further discussion of contrast see Section 6.3).
3. Rapid changes in brightness seriously affect image quality when using a televi-
sion system. Thus a photo cell is incorporated between the image intensifier and
the television camera with a feed-back loop to the X-ray generator. If there is a
sudden change in brightness as a result of moving to image a different part of the
patient, the X-ray output is quickly adjusted to compensate.
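The resolution figures in point 1 follow from dividing the effective number of line pairs by the image diameter, as in this short calculation:

    def vertical_resolution_lp_per_mm(effective_line_pairs, image_diameter_mm):
        # vertical resolution imposed by a fixed number of TV scan lines
        return effective_line_pairs / image_diameter_mm

    print(vertical_resolution_lp_per_mm(200, 50))    # 4 lp/mm for a 5 cm diameter image
    print(vertical_resolution_lp_per_mm(200, 230))   # ~0.87 lp/mm for a 230 mm screen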
Insight
Image Reconstruction at Fast Frame Rates
Two approaches to image reconstruction are possible. With continuous low tube current exposure
the TV frame can be digitised at 25 frames s–1. Because of the low X-ray output, the noise in each
frame is high but this can be reduced by summing several frames, provided there is little patient
movement. For certain investigations, for example digital cardiac imaging, acceptably low noise levels are essential at framing rates that can be anything from 12.5 to 50 frames s⁻¹. This requires
pulsed operation with a much higher output during the pulse.
FIGURE 5.28
Schematic representation of charge shifting in a CCD (labelled components: pixel array, output gate, shift register).
The accumulated charges are transferred along a signal wire from the shift register to an ADC and the digitised signal is then placed in a
pixel array generated in the computer to match the detector element array.
An alternative method of read out allows the charge on each element to be measured by
using the fact that simultaneous connections to a vertical line and a horizontal line uniquely
define one element where they cross, allowing the charge on that element to be read.
Charge coupled devices are not suitable for recording digitised images directly from X-rays,
since the a-Si layer is very thin with poor detection efficiency. In addition the CCD is sus-
ceptible to radiation damage at high fluxes. However, a CCD may be used very successfully
in conjunction with a luminescent screen or a photoelectric cathode surface (e.g. the output
from an image intensifier). The X-ray photons are first converted into visible light photons
and then by the a-Si into electrons which are readily captured in the potential wells.
At present CCDs are small area detectors (up to about 60 mm × 60 mm). Ideally the visi-
ble light output from, say, the image intensifier should be focussed by an optical lens onto
a single large-format CCD. If images obtained from a mosaic of smaller CCDs are stitched
together there may be image distortion or loss of data at the seams.
The CCD has been used very successfully in digital fluoroscopy, with the following ben-
efits over a TV camera:
(a) The resolution is fixed by the size and interspacing of the photosensitive elements.
It does not vary with field size.
(b) There is no scanning electron beam to cause drifting.
(c) The geometry of the CCD camera is precise, uniform, distortion-free and stable
(but see comment above on combining images from several small CCDs).
(d) The sensor is linear over a wide range of illumination.
(e) The CCD has a low read out noise which is helpful when using an optically cou-
pled light detector in which there may be significant loss of light.
Insight
More on CCDs
Since their invention in 1969 CCDs have gradually been introduced into many photographic and
video devices. They were largely introduced into diagnostic radiology in the 1990s when they
replaced video camera tubes such as the Vidicon and Plumbicon to record the output of image
intensifiers.
Since this introduction, CCDs have developed many uses within the diagnostic imaging depart-
ment demonstrating their versatility. An example can be found in small area digital mammography
systems used to localise areas of interest in the breast for biopsy. A further use is in large-area
radiography systems where one or more CCDs are used in conjunction with a fibre optic taper or
optical lens to record the output of a photoluminescent screen.
As already explained, an electric charge accumulates on the TFT array during exposure to X-rays.
Each TFT in the array must be activated and read out in turn by a high speed process-
ing unit, the output of which is then amplified and passed through an ADC. This clearly
requires data to be processed by the acquisition computer at high bit rates. X-ray sensitivity is
also a critical issue for digital detectors used in fluoroscopy, as the diagnostic images are
acquired in a limited period of time.
Flat panel detectors cost more than image intensifiers, but their advantages include the
fact that images appear uniform and un-distorted due to the lack of focussing electron-
ics. Also they are much smaller and lighter which may result in easier movement and
a less confined environment for both patient and operators (for further discussion see
Section 9.6.2).
[Diagram: a graded set of filters with suggested optical densities 0, 0.3, 0.6, … 3.0 placed over the test film.]
FIGURE 5.29
Use of a sensitometer, consisting of a graded set of filters of known optical density, for quality control of film
processing.
Insight
Image Distortion in an Image Intensifier
1. Focussing a curved input screen onto a flat output screen causes pincushion distortion.
2. Spatial distortion is caused by stray magnetic fields, including the Earth’s magnetic field.
Residual magnetisation in the steel of the building can also cause problems.
3. Large field of view modes suffer more distortion than small field of view.
4. The collection efficiency of electron-optical focussing lenses in the image intensifier, and
optical lenses coupling the intensifier to cameras and display units is better in the centre.
Radial fall-off in brightness is called vignetting.
FIGURE 5.30
Variation of minimum perceptible contrast with dose rate for an image intensifier screening system.
must be controlled carefully since, in common with other imaging systems, the finest resolv-
able detail varies with contrast level for the image intensifier (see Section 7.3.3).
(Graph: operating kVp plotted against tube current (mA) for curves 1, 2 and 3, at a nominal operating power of 400 W.)
FIGURE 5.31
Different ways in which kV and/or mAs can be increased to give greater output. Curve 1 (solid line) gives a
compromise between contrast and dose requirements, curve 2 (short dashes) minimises the dose by raising
the kVp as required in paediatric imaging, and curve 3 (long dashes) holds a low kVp value where contrast is
essential, for example, to visualise catheters. A curve similar to curve 3 may be used if the maximum keV is to
be locked on to a K-shell absorption edge as used in iodine contrast studies.
5.15 Conclusions
A primary photon image cannot be viewed directly by the human eye and for the best part
of 100 years photographic film was the only receptor used for simple radiographs in a wide
range of applications, including general purpose radiography, mammography, dental work
and photofluorography. At an early stage it was recognised that much greater sensitivity
could be obtained with a fixed number of X-ray photons if a phosphor was used to convert
each X-ray photon into a large number of visible light photons. Fluorescent screens with
gradually improving performance have become an integral part of many image receptors
resulting in greatly reduced doses to patients. It is important to understand the concepts
of optical density, film gamma, film speed and latitude to appreciate how the images can
be optimised.
The introduction of digital receptors has made rapid and dramatic inroads into the
supremacy of film. Two of the three main digital imaging modalities, CR and DR indirect
detectors with an active panel FPI, still use a fluorescent layer as part of the receptor, but DR
direct detectors convert the pattern of X-ray photons directly into a pattern of electrons
and holes. Digital detectors have been, or are being, developed for all applications of
radiography.
Digital systems have advantages over film in respect of dynamic range and post-
processing, including contrast enhancement (see Section 6.3.3) but their biggest attraction
is that they greatly facilitate the management of image data in the X-ray department. When
DR and CR are compared and contrasted, although the physical methods by which the
images are produced are very different, in general radiography the final images do not
show major differences. However, both technologies are developing rapidly so this equi-
librium may not hold indefinitely.
Receptors used for imaging in real time with continuous X-ray exposure (fluoroscopy)
are undergoing a similar revolutionary change. Many years ago the image intensifier was
a major break-through in terms of image brightness and dose reduction and in the past it
has been linked to a variety of hardware to display images, including TV monitors and
cine cameras. These are now being replaced with technology that is fast enough to pro-
duce digital images at up to 30 frames s–1 using CCD cameras coupled to the image intensi-
fier, or the image intensifier itself is being replaced by DR FPIs.
Receptors that incorporate optical and electronic imaging systems must be carefully
maintained and subjected to quality control. For example with many phosphors it is pos-
sible for a slow degradation of the final image to occur. This degradation may only be
perceptible if images of a test object taken at regular intervals are carefully compared,
preferably using quantitative methods. Digital receptors are inherently more complex
than a film-screen cassette and require correspondingly greater checking.
One final point is that each imaging process has its own limits on resolution and con-
trast and these will be discussed in greater detail in Chapter 6. However, it is important
to emphasise here that the inter-relationship between object size, contrast and patient
dose is a matter of everyday experience and not a peculiarity of quality control measure-
ments or sophisticated digital techniques. Furthermore, the choice of imaging process
cannot always be governed solely by resolution and contrast considerations. The speed at
which the image is produced and is made available for display must often be taken into
account.
References
Dendy PP Recent technical developments in medical imaging part 1: digital radiology and evaluation of medi-
cal imaging. Curr Imag 2, 226–236, 1990.
Hay GA, Clark OF, Coleman NJ and Cowen AR A set of X-ray test objects for quality control in television
fluoroscopy. Br J Radiol 58, 335–344, 1985.
IPEM Report no 91 Recommended standards for routine performance testing of diagnostic X-ray imaging
systems. Institute of Physics and Engineering in Medicine, York, UK, 2005.
Tesic M M, Mattson R A, and Barnes G T et al. Digital radiography of the chest; design features and con-
siderations for a prototype unit. Radiology 148, 259–264, 1983.
Zhao W, Andriole K P and Samei E Digital radiography and fluoroscopy. In: Wolbarst A B, Zamenhof R G
and Hendee W R, eds, Advances in Medical Physics – 2006. Medical Physics Publishing,
Madison, WI, 2006, 1–23.
Further Reading
AAPM Acceptance testing and quality control of photostimulable storage phosphor imaging systems. Report
of AAPM Task group 10, number 93, American Association of Physicists in Medicine, Maryland,
US, 2006. (And earlier AAPM publications on QA of receptors).
Allisy-Roberts PJ and Williams J Farr’s Physics for Medical Imaging 2nd ed. Saunders Elsevier, 2008,
65–102.
Bushberg J T, Seibert J A, Leidholdt E M and Boone J M The Essential Physics of Medical Imaging, 2nd ed.,
chapter 11 “Digital radiography.” Lippincott Williams & Wilkins, Philadelphia, 2002.
Carroll Q B Practical radiographic imaging 8th ed. Charles C Thomas Publisher Ltd., Springfield, IL,
2007.
Carter C E and Vealé B L Digital radiography and PACS Mosby Elsevier, St Louis, Missouri, 2008.
Spahn M Flat detectors and their clinical applications (review). Eur Radiol 15, 1934–1947, 2005.
Exercises
1. What are the differences between digital and analogue images?
2. Explain how the intensification factors of a set of radiography screens might be
compared. Summarise and give reasons for the main precautions that must be
taken in the use of such screens.
3. Draw on the same axes the characteristic curves for
(a) A fast film held between a pair of calcium tungstate plates
(b) The same film with no screen and explain the difference between them.
4. Why is it desirable for the gamma of a radiographic film to be much higher than
that of a film used in conventional photography and how is this achieved?
5. Explain what is meant by the speed of an X-ray film and discuss the factors on
which the speed depends.
6. A radiograph is found to lack contrast. Under what circumstances would increas-
ing the current on the repeat radiograph increase contrast, and why?
7. Make a labelled diagram of the intensifying screen-film system used in radiol-
ogy. Discuss the physical processes that occur from the emergence of X-rays at the
anode to the production of the final radiograph.
8. Discuss the factors which affect the sensitivity and resolution of a screen-film
combination used in radiography and their dependence upon each other.
9. What is meant by the accuracy of an analogue to digital converter? How is its accu-
racy determined?
10. What are the essential differences between digital radiography and computed
radiography?
11. Why is it important to retain the capability to digitise analogue images on films?
Outline briefly how this can be done.
12. How does the difference in diameter of the input and output screens of an X-ray
image intensifier contribute to the performance of the system?
13. Discuss the uses made of the brightness amplification available from a modern
image intensifier, paying particular attention to any limitations.
14. Compare and contrast the use of fluorescent screens in radiography and
fluoroscopy.
15. Explain how an image intensifier may be used in conjunction with a photoconduc-
tive camera to produce an image on a TV screen.
16. What is a charge coupled device and how may it be used in conjunction with an
image intensifier to produce digital images?
17. Automatic brightness control indicates that more output is required from the X-ray
generator for a screening procedure. Discuss the different ways in which kV and
mA can be varied to achieve this and the relative merits of each.
6
The Radiological Image
SUMMARY
• This chapter investigates the various factors that affect the quality of a radio-
logical image.
• Contrast is a difficult quantity to define but since all features of an image
depend on contrast this is a good place to start.
• When imaging a patient, tissue overlying and underlying the region of inter-
est generates scatter, so the factors affecting scatter and its reduction are
important.
• The sharpness of an image or its resolution is another important criterion of
quality.
• Quantum mottle caused by the statistical nature of the interaction of X-ray
photons with matter can seriously affect image quality.
• Image processing, made possible with the advent of digital images, is now an
important way to enhance the image.
• Finally, some artefacts and distortions caused by the geometrical relation-
ship between the receptor, patient and X-ray source will be considered.
CONTENTS
6.1 Introduction—the Meaning of Image Quality............................................................... 182
6.2 The Primary Image............................................................................................................. 183
6.3 Contrast................................................................................................................................ 183
6.3.1 Contrast on a Photoluminescent Screen.............................................................. 185
6.3.2 Contrast on Radiographic Film............................................................................. 185
6.3.3 Contrast on a Digital Image.................................................................................. 186
6.3.4 Origins of Contrast for Real and Artificial Media............................................. 188
6.4 Effects of Overlying and Underlying Tissue.................................................................. 190
6.5 Reduction of Contrast by Scatter...................................................................................... 191
6.6 Variation in Scatter with Photon Energy......................................................................... 193
6.7 Reduction of Scatter............................................................................................................ 193
6.7.1 Careful Choice of Beam Parameters.................................................................... 193
6.7.2 Orientation of the Patient....................................................................................... 194
6.7.3 Compression of the Patient.................................................................................... 194
The imaging of a patient may be undertaken in several ways, each of which has its own particular features. The definition
of quality for the resultant image in practical terms depends on the information required
from it. In some instances it is resolution that is primarily required, in others the ability
to see small increments in contrast. More generally the image is a compromise combina-
tion of the two, with the dominant one often determined by the personal preference of the
radiologist. (This preference can change; the ‘contrasty’ crisp chest radiographs of several
years ago are now rejected in favour of lower contrast radiographs which appear much
flatter but are claimed to allow more to be seen.)
The quality of the image can depend as much on the display system as on the way
it was produced. A good quality image viewed under poor conditions such as inade-
quate non-uniform lighting may be useless. The quality actually required in an image
may also depend on information provided by other diagnostic techniques or previous
radiographs.
This chapter extends the concept of contrast introduced in Chapter 5 to the radiological
image and then discusses the factors that may influence or degrade the quality of the pri-
mary image. Methods available for improving the quality of the information available at
this stage are also considered. Other factors affecting image quality in the broader context
of the whole imaging process are discussed in Chapter 7.
6.3 Contrast
The definition of contrast differs somewhat depending on the way the concept is being
applied. For conventional radiography and fluoroscopy, the normal definition is in terms of
the X-ray intensities X1 and X2 transmitted through two adjacent regions of the body, as shown in Figure 6.2.
FIGURE 6.1
Variation of linear attenuation coefficient with photon energy for (º) muscle and (•) bone in the diagnostic
region.
(Figure 6.2 geometry: equal incident beams X0 fall on two attenuating materials of linear attenuation
coefficient and thickness μ1, x1 and μ2, x2; the transmitted beams are X1 and X2.)
FIGURE 6.2
X-ray transmission through materials that differ in both thickness and linear attenuation coefficient.
C = log10 (X2/X1)

C = 0.43 ln (X2/X1) = 0.43 (ln X2 − ln X1)

X1 = X0 exp(−μ1x1)

and

X2 = X0 exp(−μ2x2)

thus

C = 0.43 (μ1x1 − μ2x2)
If μ1 and μ2 were the same, the difference in contrast would be due to differences in thick-
ness. If x1 = x2 the contrast is due to differences in linear attenuation coefficient. It is con-
ceivable that the product μ1x1 might be exactly equal to μ2x2 but this is unlikely. Note from
Figure 6.1 that the difference in μ values decreases on moving to the right, thus contrast
between two structures always decreases with increasing kVp.
If the luminance L of the screen is proportional to the X-ray intensity X reaching it (L = kX), the contrast on the screen is

C(screen) = log10 (L2/L1) = log10 (kX2/kX1) = log10 (X2/X1)
Hence the contrast on the screen, C = 0.43(μ1x1 – μ2x2), is the same as in the primary image.
As this is how the eye perceives the image, this is known as the radiation contrast and will
be denoted by CR.
Note that simple amplification, in a fluorescent or intensifying screen, does not alter
contrast.
γ = (D2 − D1)/(log E2 − log E1)

Since the exposure E of the film is proportional to the transmitted X-ray intensity X,

γ = (D2 − D1)/(log X2 − log X1)    (6.1)

The difference in optical density is the contrast on the film, D2 − D1 = CF, and hence

CF = γ × 0.43 (μ1x1 − μ2x2)

Thus the contrast on film CF differs from the contrast in the primary image by the factor γ
which is usually in the range 3–4. Gamma is often termed the film contrast, thus

CF = γ CR
Note that contrast is now modified because the characteristic curve relates two logarith-
mic quantities. Film can be said to be a ‘logarithmic amplifier’. A TV camera can also act as
a logarithmic amplifier.
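As a simple numerical illustration of these relationships, the sketch below evaluates the radiation contrast CR = 0.43(μ1x1 − μ2x2) and the corresponding film contrast CF = γCR. All of the attenuation coefficients, thicknesses and the film gamma are hypothetical values chosen only for illustration, not figures quoted in the text.

```python
def radiation_contrast(mu1, x1, mu2, x2):
    """Radiation contrast CR = 0.43 (mu1*x1 - mu2*x2) between two adjacent paths."""
    return 0.43 * (mu1 * x1 - mu2 * x2)

# Hypothetical values, chosen only for illustration
mu1, mu2 = 0.05, 0.04   # linear attenuation coefficients (mm^-1)
x1 = x2 = 30.0          # equal path lengths (mm), so contrast comes from mu alone
gamma = 3.5             # assumed film gamma

CR = radiation_contrast(mu1, x1, mu2, x2)
CF = gamma * CR         # the film gamma multiplies the radiation contrast

print(f"Radiation contrast CR = {CR:.2f}")   # 0.13
print(f"Film contrast      CF = {CF:.2f}")   # 0.45
```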
FIGURE 6.3
Examples of two simple LUTs; (a) Equal weight is given to all pixel values. Each grey level represents 50 values;
(b) Greater emphasis is given to pixel values in the middle of the range. Each grey level now represents only 10 pixel
values, enhancing contrast. All pixel values below 400, or above 600 will show white/black, respectively.
FIGURE 6.4
Histograms showing the frequency of each pixel value; (a) A very simple example—there are 1000 possible pixel
values and 2 × 106 pixels in the image. Each pixel value is found in exactly 2000 pixels. Note that the histogram gives
no information about the spatial distribution of individual pixel values; (b) A more realistic histogram for a chest
X-ray. The majority of the pixel values are in the range 1000–3500 and the LUT can be adjusted so that the grey scale
contrast is changing most rapidly over this range, which contains most of the diagnostic information. Note that the
large peak to the left of the histogram is due to unattenuated X-rays that have not passed through the patient. Moving
to the right, the next three peaks broadly represent the lung, soft tissue and bone areas of the image, respectively.
Application of the LUT to the raw image data is normally done in conjunction with histo-
gram equalisation. This is a technique where the signal level of each pixel from the imager’s
output is represented on a histogram allowing the system to determine the distribution of
pixel values in the image (see Figure 6.4), effectively assigning the average grey level to the
average pixel value in the image. If necessary the upper and lower limits of the greyscale
can be determined, analogous to the latitude of film.
The histogram information may be used to prepare the radiographic data for display
such that it is optimised for the observer. An example can be given for a skull radiograph.
If a dose in air of 10 mGy is incident on the skull, approximately 10⁶ photons mm–2 will
be transmitted. The fluctuation in this signal due to Poisson statistics is ±10³, which is
only 0.1% of the signal. Thus changes in signal of this order produced in the skull should
be detectable. An alternative way of stating that a change of 0.1% in the signal can be
detected is to say that there are 1000 statistically distinguishable pixel levels present in
the image. Unfortunately, the eye can only distinguish about 20–30 grey levels and one
of the major benefits of digital radiographic techniques over older techniques is that the
histogram allows the grey levels to be matched to pixel levels containing clinically useful
information.
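A minimal sketch of the idea behind an LUT is given below: a chosen window of pixel values is spread over the 20 or so grey levels the eye can use, with values outside the window shown as black or white. The window limits, pixel depth and image data are all invented for illustration and do not represent any particular manufacturer's algorithm.

```python
import numpy as np

def apply_lut(pixels, window_low, window_high, n_grey=20):
    """Map pixel values onto n_grey grey levels.

    Values below window_low display as black (level 0), values above
    window_high as white (level n_grey - 1); values in between are spread
    evenly over the available grey levels, as in Figure 6.3b.
    """
    pixels = np.asarray(pixels, dtype=float)
    frac = (pixels - window_low) / (window_high - window_low)
    frac = np.clip(frac, 0.0, 1.0)
    return np.round(frac * (n_grey - 1)).astype(int)

# Hypothetical 10-bit image data (values 0-1023)
rng = np.random.default_rng(0)
image = rng.normal(loc=500, scale=60, size=(256, 256)).clip(0, 1023)

# Emphasise the middle of the range by windowing from 400 to 600
grey = apply_lut(image, window_low=400, window_high=600)
print("Grey levels used:", grey.min(), "to", grey.max())
```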
Insight
Histogram Equalisation
An alternative way to think about histogram equalisation is that it makes maximum use of the
available grey shades. If the number of pixel values is 1000 but the maximum pixel value in the
histogram is only 500, only half the grey shades are being used (see Figure 6.5a). We wish to shift
and stretch this histogram to make better use of the range. This may be done by applying to each
pixel value a gain and bias given by the equation

Pixel value out = gain × Pixel value in + bias

The gain is defined by the user—suppose we choose a value of 1.9—and the bias is obtained from
the expression

bias = new mean − gain × old mean

Now set the new mean at 600. Since the old mean is 300, the bias is 600 − (1.9 × 300) = 600 − 570 = 30, and our operating equation is

Pixel value out = 1.9 × Pixel value in + 30
FIGURE 6.5
Stretching the histogram to use all pixel values; (a) Initial histogram—minimum = 34, mean = 300, maximum =
500; (b) The operating equation: Pixel value out = 1.9 × Pixel value in + 30; (c) Stretched histogram—minimum =
95, mean = 600, maximum = 980. Note that since (a) and (c) are histograms, the outline should be a series of steps
but these are too small to show here.
Barium compounds (Z = 56) with a similar K shell energy are used for studies of the alimentary tract and
colon.
Negative contrast may be created by the introduction of gas, for example CO2 in the bowel
in double contrast studies. This makes use of the big difference in density between gas and
soft tissue. Note that modification of atomic number is very kVp dependent whereas modi-
fication of density is not.
An example of the use of a contrast agent is in the study of urinary tract function. This
can be investigated by injecting into the circulatory system an iodine compound which
will then be filtered out of the blood by the kidneys and passed along the ureters to the
bladder. Alternatively catheters can be inserted into the ureters and an iodine compound
injected in retrograde fashion up the ureters into the renal pelvis.
The number of ‘contrast’ materials available is limited by the requirements for such
materials. They must have a suitable viscosity and persistence and must be miscible or
immiscible with body fluids as the examination requires. Most importantly they must be
non-toxic. One contrast material, used for many years in continental Europe, contained
thorium which is a naturally radioactive substance. It has been shown in epidemiological
studies that patients investigated using this contrast material had an increased chance of
contracting cancer or leukaemia.
Iodine-based compounds carry a risk for some individuals and most developments in
the past 60 years have been directed towards newer agents with lower toxicity (Dawson
and Clauss 1998). These have included achieving the same contrast at reduced osmolar-
ity by increasing the number of iodine atoms per molecule and reducing protein binding
capacity by attaching electrophilic side chains.
A recent possibility has been the production of non-ionic dimers which provide an even
higher number of iodine atoms per molecule. These rather large molecules impart a high
viscosity to the fluid but this can be substantially overcome by warming the fluid to body
temperature before injection.
CR = log10 (X2′/X1′) = 0.43 ln (X2′/X1′)

Now

X1′ = X0′ exp(−μ1x1)

and

X2′ = X0′ exp(−μ2x2)
(Figure 6.6 geometry: the incident beams X0 first pass through a uniformly attenuating layer μ0, x0,
emerging with intensity X0′ before entering the two attenuating materials μ1, x1 and μ2, x2; the
transmitted beams are X1′ and X2′.)
FIGURE 6.6
Diagram showing the effect of an overlying layer of uniformly attenuating material on the X-ray transmitted
beam of Figure 6.2.
Hence

CR = 0.43 ln [X0′ exp(−μ2x2) / X0′ exp(−μ1x1)]

Thus

CR = 0.43 (μ1x1 − μ2x2) as before.
The fact that some attenuation has occurred in overlying tissue and that X0′ = X0 exp(−μ0x0) is
irrelevant because X0′ cancels. A similar argument may be applied to uniformly attenuating
material below the region of interest.
An alternative way to state this result is that under these idealised conditions loga-
rithmic transformation ensures that equal absorber and/or thickness changes will
result in approximately equal contrast changes whether in thick or thin parts of
the body.
If a uniform intensity X0 of scattered radiation is added to both transmitted beams, the contrast becomes

CR′ = log10 [(X2 + X0)/(X1 + X0)]

The value of CR′ will be less than the value of CR for any positive value of X0.
The presence of scatter will almost invariably reduce contrast in the final image for the
reason given above. The only condition under which scatter might increase contrast would
be for photographic film if X1 and X2 were so small that they were close to the fog level of
the characteristic curve. This is a rather artificial situation.
The amount of scattered radiation can be very large relative to the unscattered trans-
mitted beam. This is especially true when there is a large thickness of tissue between the
organ or object being imaged and the film. The ratio of scatter to primary beam in the lat-
ter situation can be as high as eight to one but is more generally in the range between two
and four to one (see Figure 6.7).
FIGURE 6.7
Typical primary, scatter and total spectra when a body-sized object is radiographed at 75 kVp _____ total; _ _ _ _ _
scatter; ------- primary (only the continuous spectrum is shown).
Insight
Effect of Scatter on Contrast
If CR = log10 (X2/X1) and the difference ΔX between X2 and X1 is small, this may be written as

CR = log10 [(X1 + ΔX)/X1] = log10 (1 + ΔX/X1) ≈ 0.43 ΔX/X1

so the contrast is determined by the fractional difference in transmitted intensity. If the primary
beam intensity is P and the small difference between the two transmitted primary beams is ΔP, the
contrast in the absence of scatter is C0 = ΔP/P. When a scatter intensity S is added equally to both
beams the contrast becomes

CS = ΔP/(P + S)

Rearranging

CS (1 + S/P) = ΔP/P = C0

so

CS = C0 (1 + S/P)⁻¹
Hence when S = P, Cs = C0 /2, that is, if the scatter contribution is equal to the primary, the contrast
is halved.
Note that for a well-filtered beam the effect of beam hardening will be small compared with the
effect of scatter.
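The effect of scatter on contrast can be tabulated directly from CS = C0(1 + S/P)⁻¹; the short sketch below does this for scatter-to-primary ratios spanning the range quoted above.

```python
def contrast_with_scatter(c0, s_over_p):
    """Contrast in the presence of scatter: CS = C0 / (1 + S/P)."""
    return c0 / (1.0 + s_over_p)

c0 = 1.0  # contrast without scatter (arbitrary units)
for s_over_p in (0, 1, 2, 4, 8):
    cs = contrast_with_scatter(c0, s_over_p)
    print(f"S/P = {s_over_p}:  CS/C0 = {cs:.2f}")
```

With S/P = 1 the contrast is halved, as stated in the text, and with the higher scatter-to-primary ratios quoted for thick body sections the contrast falls to a fifth or less of its scatter-free value.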
In practice, as the kVp rises from 50 to 100 kVp, the fall in linear attenuation coefficient of
the low energy scattered radiation in tissue is much more rapid than the fall in the scatter-
producing Compton cross-section. Thus factor 2(b) is more important than factor 1(a) and
this is the prime reason for the increase in scatter reaching the receptor.
The increase in scatter is steep between 50 and 100 kVp but there is little further increase
at higher kVp and above 140 kVp the amount of scatter reaching the receptor does start to
fall slowly.
A reduction of kVp will not only increase contrast but will also reduce the scatter reach-
ing the receptor. This reduction is, however, limited by the patient penetration required
and, perhaps more importantly, an increase in patient dose due to the increase in mAs
required to compensate for the reduction in kVp. (For film, a decrease of 10 kVp would
require a doubling of the mAs for the same blackening.)
FIGURE 6.8
Demonstration that compression in the physical sense will not alter the attenuating properties of a fixed mass of
gas (a) gas occupies a large volume at low density; (b) gas occupies a much smaller volume at a higher density.
6.8 Grids
6.8.1 Construction
The simplest grid is an array of long parallel lead strips held an equal distance apart by a
material with a very low Z value (an X-ray translucent material). Most of the scattered pho-
tons, travelling at an angle to the primary beam, will not be able to pass through the grid
but will be intercepted and absorbed as shown in Figure 6.9. Some of the rays travelling at
right angles, or nearly right angles, to the grid are also stopped due to the finite thickness
of the grid strips, again shown in Figure 6.9. Both the primary beam and the scatter are
stopped in this way but the majority of the primary beam passes through the grid along
with some scattered radiation. Scatter travelling at an angle of θ/2 or less to the primary
beam is able to pass through the grid to point P (Figure 6.10).
FIGURE 6.9
Use of a simple parallel grid to intercept scattered radiation.
FIGURE 6.10
Grid geometry. Number of strips mm–1, N = 1/(D + d); typically N is about 4 for a good grid. Grid ratio r = h/D;
typically r is about 10 or 12. Fraction of primary beam removed from the beam is d/(D + d). Since d might be
0.075 mm and (D + d) 0.25 mm, d/(D + d) will be about 0.3. Tan (θ/2) = D/2h.
As grids can remove up to 90% of the scatter there is a large increase in the contrast in
radiographs when a grid is used. This increase is expressed in the ‘contrast improvement
factor’ K where

K = (contrast with the grid)/(contrast without the grid)

K normally varies between 2 and 3 but can be as high as 4. The higher values of K are
normally achieved by increasing the number of grid strips per centimetre. As these are
increased, more of the primary beam is removed due to its being stopped by the grid. The
proportion stopped is given by

d/(D + d)
where d is the thickness of a lead strip and D is the distance between them (Figure 6.10).
Reduction of the primary beam intensity means that the exposure must be increased to
compensate. The use of a grid therefore increases the radiation dose to the patient and thus
there is a limit on the number of strips per centimetre that can be used. In addition, the
inter-space material will also absorb some of the primary beam. Crossed grids (see Section
6.8.2) require a greater increase in patient dose than parallel grids.
6.8.2 Use
As grids are designed to stop photons travelling at angles other than approximately nor-
mal to them, it is essential that they are always correctly positioned with respect to the
central ray of the primary beam. Otherwise, as shown in Figure 6.11, the primary beam
will be stopped. The fact that the primary photon beam is not parallel but originates from
a point source limits the size of film that can be exposed due to interception of the primary
beam by the grid (Figure 6.12). The limiting rays are shown where
tan ψ = C/FRD
FIGURE 6.11
Diagram showing that a grid which is not orthogonal to the central X-ray axis may obstruct the primary beam.
(Figure 6.12 geometry: ψ is the maximum angle at which an unscattered ray can pass through the
grid, FRD is the focus-receptor distance, C is the distance from the central ray to the limiting ray at
the receptor, and h and D are the height and separation of the grid strips.)
FIGURE 6.12
Demonstration that the field of view is limited when using a simple linear grid.
and, from the grid geometry of Figure 6.10,

tan ψ = D/h
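The grid parameters defined in the caption to Figure 6.10, together with the two expressions for tan ψ above, can be evaluated in a few lines. The sketch below uses the representative dimensions quoted in that caption; the focus-receptor distance is an assumed value added only for illustration.

```python
import math

# Representative grid dimensions from the Figure 6.10 caption
d = 0.075      # lead strip thickness (mm)
D = 0.175      # interspace width (mm), so that D + d = 0.25 mm
r = 10.0       # grid ratio h/D
h = r * D      # strip height (mm)

N = 1.0 / (D + d)                       # strips per mm
primary_cut = d / (D + d)               # fraction of primary beam intercepted
theta = 2.0 * math.degrees(math.atan(D / (2.0 * h)))  # scatter acceptance angle
psi = math.degrees(math.atan(D / h))    # limiting angle for unscattered rays

FRD = 1000.0                            # focus-receptor distance (mm), assumed
C = FRD * D / h                         # half-width of field before grid cut-off (mm)

print(f"N = {N:.1f} strips/mm, primary loss = {primary_cut:.0%}")
print(f"Scatter acceptance angle = {theta:.1f} deg, limiting angle psi = {psi:.1f} deg")
print(f"Half-width of usable field at FRD = 1 m: {C:.0f} mm")
```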
Insight
Grids and Computed Radiography
The use of grids can create a moiré pattern artefact when used with computed radiography sys-
tems if careful selection of grid ratios is not considered. Moiré patterns occur when two grids, or
patterns, of a similar spatial frequency are overlaid onto each other with a slight offset, creating
areas of interference.
The lines of the anti-scatter grid and the lines created by the scanning laser in a CR reader are
two such patterns. Consider an image of a grid taken using a storage phosphor and scanned by a
laser with the grid aligned such that the grid lines are parallel to the scan lines. The resultant signal
detected by the scan line will either represent a bar, a gap or some combination of both depend-
ing on where the scan line falls relative to the grid lines. This would normally create a repeating
pattern within the image seen in Figure 6.13a. However, if the frequency of the grid were to be
slightly different to the frequency of the scan lines, it can be seen that the repetition of the pattern
in the image would be disrupted and replaced by the type of interference seen in Figure 6.13b
leading to an under-sampling of the grid lines causing aliasing (see Section 8.4.2).
This effect can be avoided by having a sufficiently high grid frequency and passing the pre-
sampled signal through an anti-aliasing filter to remove frequencies above the Nyquist limit of
the reader. Some systems provide a grid suppression algorithm such that the appearance of grid
lines is reduced. This relies on the grid frequency being within a range that can be detected by
the algorithm. When using a grid with CR systems, the grid lines should run perpendicular to the
direction of travel of the scanning laser.
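The origin of the moiré (aliasing) effect can be illustrated with a one-dimensional sketch: when a grid-line pattern is sampled at a slightly different spatial frequency, the sampled values vary slowly at the difference (beat) frequency. The grid and scan-line frequencies below are arbitrary and chosen only to demonstrate the principle.

```python
import numpy as np

grid_freq = 5.2      # grid lines per mm (arbitrary illustration)
sample_freq = 5.0    # CR reader scan lines per mm (arbitrary illustration)

# Sample a sinusoidal approximation to the grid pattern at the scan-line positions
x = np.arange(0, 40, 1.0 / sample_freq)        # sample positions over 40 mm
signal = np.sin(2 * np.pi * grid_freq * x)     # under-sampled grid pattern

# The sampled values vary slowly at the beat (alias) frequency |f_grid - f_sample|
alias_freq = abs(grid_freq - sample_freq)
print(f"Expected moire (alias) frequency: {alias_freq:.1f} cycles/mm")
print("First few sampled values:", np.round(signal[:8], 2))
```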
The grids shown in previous figures are termed linear grids and should be used with
the long axis of the grid parallel to the cathode anode axis of the tube so that angled radio-
graphs can be taken without the primary beam hitting the lead strips.
The simple linear grid has now been largely replaced by the focused linear grid where
the lead strips are progressively angled on moving away from the central axis (Figure 6.14).
FIGURE 6.13
Illustration of moiré patterns. Image (a) shows the image of a regular series of lines, such as those found with
radiographic grids, sampled at a frequency sufficiently high to provide an accurate image. Image (b) demon-
strates the moiré patterns generated when the grid in image (a) is under-sampled at a lower frequency.
FIGURE 6.14
Construction of a linear focused grid.
This eliminates the problem of cut-off at the periphery of the grid but imposes restrictive
conditions on the focus-receptor distances (FRDs) that can be used, the centering of the
grid under the focal spot and having the correct side of the grid towards the X-ray tube.
If any of these is wrong the primary beam is attenuated. If the grid is upside down then
a narrow exposed area will be seen on a test film with very little blackening on either
side of it. Decentering tends to produce generally lighter films which get lighter as the
amount of decentering is increased. Using the wrong FRD will not affect the central por-
tion of the radiograph but will progressively increase cut-off at the edge of the film as the
distance away from the correct FRD is increased. Note that these faults might be difficult
to recognise in a digital system since the processing software will tend to compensate
for them.
Crossed grids with two sets of strips at right angles to each other are also sometimes
used. This combination is very effective for removing scatter but absorbs a lot more of
the primary beam and requires a much larger increase in the exposure with consequent
increase in patient dose.
6.8.3 Movement
If a stationary grid is used it imposes on the image a radiograph of the grid as a series of
lines (due to the absorption of the primary beam). With modern fine grids this effect is
reduced but not removed.
The effect can be overcome by moving the grid during exposure so that the image of the
grid is blurred out. Movements on modern units are generally oscillatory, often with the
speed of movement in the forward direction different from that on the return. Whatever
the detailed design, the movement should be such that it starts before the exposure and
continues beyond the end of the exposure. Care must also be taken to ensure that, in single
phase machines, the grid movement is not synchronous with the pulses of X-rays from the
tube. If this occurs, although the grid has moved between X-ray pulses, the movement may
be equal to an exact number of lead strips. The lead strips in the grid are thus effectively in
the same position as far as the radiograph is concerned. This is an excellent example of the
stroboscopic effect. Medium and high frequency machines are, of course, not troubled by
this effect. One disadvantage of the focussed over the simple linear grid is that the decen-
tering of the grid during movement results in greater absorption of the primary beam.
(Figure 6.15 geometry: the anode is angled at α and the focal spot has length b along the anode, giving
an effective focal spot size a = b sin α; the penumbral region on the image plane extends from S to U,
with T at its centre.)
FIGURE 6.15
The effect of a finite X-ray focal spot size in forming a penumbral region. FRD = focus-receptor distance;
ORD = object-receptor distance.
The distance SU over which, on moving from S to U, the number of X-ray photons rises to that in the
unobstructed beam is termed the geometric unsharpness. The magnitude of SU is given by

SU = b sin α × d/(FRD − d)    (6.2)
For target angle α = 13°, b = 1.2 mm, FRD = 1 m and d = 10 cm then SU = 0.06 mm. Since the
focal spot size is strictly limited by rating considerations (Section 2.5.3), a certain amount
of geometric unsharpness is unavoidable.
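Equation 6.2 is easily evaluated; the sketch below does so for a purely illustrative set of parameters (not those of the worked example above) to show how the penumbra grows as the object is moved away from the receptor.

```python
import math

def geometric_unsharpness(b_mm, alpha_deg, frd_mm, d_mm):
    """Penumbra SU = b sin(alpha) * d / (FRD - d), Equation 6.2."""
    effective_spot = b_mm * math.sin(math.radians(alpha_deg))
    return effective_spot * d_mm / (frd_mm - d_mm)

# Illustrative values only
b, alpha, frd = 1.0, 15.0, 1000.0   # focal spot length (mm), anode angle (deg), FRD (mm)
for d in (50.0, 100.0, 200.0):      # object-receptor distance (mm)
    su = geometric_unsharpness(b, alpha, frd, d)
    print(f"d = {d:.0f} mm:  SU = {su:.3f} mm")
```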
Note that
(1) The size of the focal spot places a lower limit on the size of object that can be
distinguished. Very small objects, for example, a small calcification on a mammo-
gram or a small vascular embolism may not be visualised if the penumbra from
the focal spot is too large.
(2) Careful examination of Figure 6.15 shows that as the penumbra increases the
umbra decreases. However, the actual size of the image on the radiograph is only
altered significantly when the size of the object to be radiographed approaches or
is less than the size of the focal spot so in normal practice the size of the focal spot
has little effect on the magnification.
A typical resolution test object consists of groups of lead bars separated by gaps of thickness equal
to the bars. When imaged by relatively low energy X-rays the lead attenuates most of the beam
while the gap transmits most of it, resulting in a high contrast ‘pair’ of lines. Each group of
line-pairs is progressively thinner than the last, defined by the number that fit within a unit
length—line-pairs per millimetre or lp mm–1. A typical range for the template would be 0.5 to
10 lp mm–1, corresponding to resolutions of 1 to 0.05 mm.
Figure 6.16a and 6.16b show that for fixed values of FRD and d, resolution increases with
decreasing focal spot size. Similarly, Figure 6.17a and 6.17b show that for fixed focal spot
size, and fixed d, resolution improves with increasing FRD as predicted by Equation 6.2.
FIGURE 6.16
Effect of focal spot size on resolution. Two images of a line-pair test object taken with fixed values of FRD
(100 cm) and d (30 cm); (a) focal spot size = 1.3 mm, limiting resolution 1.7 lp mm–1 ; (b) focal spot size = 0.6 mm,
limiting resolution 2.2 lp mm–1.
FIGURE 6.17
Effect of FRD on resolution. Two images of a line-pair test object taken with fixed values of focal spot size
(1.3 mm) and d (30 cm); (a) FRD = 70 cm, limiting resolution = 1.2 lp mm–1; (b) FRD = 100 cm, limiting resolu-
tion = 1.7 lp mm–1.
Where the edge of the organ of interest lies within the patient, it has a finite thickness, generally with decreasing X-ray attenuation towards the
edges. These features can be considered as part of the geometric unsharpness of the result-
ing image and are in fact often much larger than the geometric unsharpness described
in Section 6.9.1. The effect on the number of photons transmitted is shown in Figure 6.18.
As can be seen there is a gradual change from transmission to absorption, producing an
indistinct edge.
Another source of unsharpness arises from the fact that during a radiograph many
organs within the body can move either through involuntary or voluntary motions. This is
shown simply in Figure 6.19, where the edge of the organ being radiographed moves from
position A to position B during the course of the exposure. Again the result is a gradual
transition of radiographic density resulting in an unsharp image of the edge of the organ.
The main factors that determine the degree of movement unsharpness are the speed of
movement of the region of interest and the time of exposure. Increasing the patient-recep-
tor distance increases the effect of movement unsharpness.
FIGURE 6.18
Contribution to image blurring which results from an irregular edge to the organ of interest.
FIGURE 6.19
Effect of movement on radiographic blurring. If the object moves with velocity v during the time of exposure t
then AB = vt and A’B’ = AB.FRD/(FRD – d).
FIGURE 6.20
Examples of receptor unsharpness obtained with the line-pair test object in contact with the receptor; (a) digital
image with small pixels (0.16 mm), 4.3 lp mm–1; (b) digital image with large pixels (0.54 mm), 1.3 lp mm–1.
The penumbra caused by the focal spot can be minimised by increasing the focus to image dis-
tance. Figure 6.20 shows examples in which the test object was in contact with the recep-
tor. Note that as data are passed down the imaging chain, for example, from a CR storage
plate to the display device (monitor) and hard copy (film) the resolution can only get worse,
never better!
Insight
The Star Test Pattern
The line-pair object method described measures the combined effects of the finite focal spot
size and the relative values of FRD and d. Assessment of the geometric unsharpness caused by
the finite focal spot—and by inference an estimation of the size of focal spot itself—can be made
by use of a star test object. Such a test object again consists of lead bars, but this time they are
arranged in a radial pattern like the spokes of a wheel with the bars and gaps becoming equally
wider with distance from the centre of the test object. This gives a continuous variation of lp mm –1.
The detailed theory of operation of the star test pattern can be found in Spiegler and Breckinridge
(1972), but in use it is imaged to determine the point at which the resolving capacity of the system
fails and causes blurring of the line pairs.
The geometric, movement and receptor unsharpnesses combine to give a total unsharpness

U = √(UG² + UM² + UR²)
FIGURE 6.21
One of the authors (PD) suffering from quantum mottle! (a) The image was taken with a digital camera at nor-
mal light intensity. (b) The light intensity has been reduced by a factor of 25. The camera can adjust the grey
scale but noise is apparent in the image.
Similarly, if sufficient X-ray quanta strike a photoreceptor, they produce enough light
photons to provide a detailed image but when fewer X-ray quanta are used the random
nature of the process produces a mottled effect which reduces image quality (see Figure
6.21). A very fast rare earth screen may give an acceptable overall density before the mot-
tled effect is completely eliminated. For an example of quantum mottle in a radiograph see
Figure 9.16b. Thus the information in the image is related to the number of quanta forming
the image.
When there are several stages to the image formation process, as in the image intensifier/
CCD system, overall image quality will be determined at the point where the number of
quanta is least, the so-called ‘quantum sink’. This will usually be at the point of primary
interaction of the X-ray photons with the photoluminescent screen (see Figure 6.22). If
insufficient X-ray photons are used to form this image, further amplification is analogous
to empty magnification in high power microscopy, being unable to restore to the image
detail that has already been lost. It follows that neither electron acceleration nor minifica-
tion in an image intensifier improves the statistical quality of the image if the number of
X-ray photons interacting with the input screen remains the same.
This subject will be considered again under the heading of ‘quantum noise’ in
Section 7.5.
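Quantum mottle is straightforward to simulate: the sketch below draws Poisson-distributed photon counts for a uniform exposure at two assumed fluences and reports the relative fluctuation, which falls as 1/√N. The fluence values and image size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def relative_noise(mean_photons_per_pixel, shape=(256, 256)):
    """Simulate a uniform exposure and return the relative Poisson fluctuation."""
    counts = rng.poisson(mean_photons_per_pixel, size=shape)
    return counts.std() / counts.mean()

for n in (100, 10_000):   # mean photons per pixel (arbitrary)
    print(f"N = {n:6d} photons/pixel: relative noise = {relative_noise(n):.3f}"
          f"  (1/sqrt(N) = {1/np.sqrt(n):.3f})")
```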
FIGURE 6.22
Quantum accounting diagram for an image intensifier coupled to a CCD camera. The various stages are
A—input flux to patient; B—exit flux from patient; C—X-ray photons absorbed (the quantum sink); D—light
photons emitted; E—photoelectrons released into the image intensifier (II); F—light photons emitted from the
II output; G—light photons entering CCD; H—digital signal from CCD.
Because collection and display of a digital image are separate processes, digital systems permit
a wide variety of image manipulations, or processing, which is not possible with film
where collection and display cannot be separated.
Image processing can be considered either in real space or in terms of spatial frequen-
cies. The latter is more amenable to mathematical analysis and is the approach used by
manufacturers. It is considered in an insight later. However, the techniques are more read-
ily understood in real space so this is the approach adopted here.
(a) Inverting an image—this has the effect of making the pixels which appeared light
appear dark, and vice versa. The equivalent with film would be converting a nega-
tive into a positive. It can be done quite simply by replacing a pixel value by its
‘mirror image’ relative to an arbitrary mean. For example, if the mean pixel value
is 500, a pixel value of 620 (500 + 120) is replaced by 380 (500 – 120). In more general
terms the slope of the LUT has been inverted, thereby inverting the grey scale
about its mean.
(b) Enhancing contrast by subtracting a fixed value (n0) from each pixel (see Figure 6.23).
The new contrast is
log [(n1 − n0)/(n2 − n0)] > log (n1/n2)
(c) Adding two distributions (e.g. two image frames of the heart) to improve the
ratio of signal to noise. If the signal is N counts and the noise is assumed to be
√N (Poisson statistics), then adding two frames doubles the signal to 2N while the noise
rises only to √(2N), so the signal-to-noise ratio improves by a factor of √2.
FIGURE 6.23
Background subtraction as a potentially useful method of data manipulation.
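Manipulations (b) and (c) can both be sketched in a few lines; the counts, background level and frame statistics below are invented purely for illustration. Subtracting the background raises the logarithmic contrast, and adding two frames improves the signal-to-noise ratio by roughly √2.

```python
import numpy as np

rng = np.random.default_rng(2)

# (b) Background subtraction: contrast between counts n1 and n2 improves
n1, n2, n0 = 1200.0, 1000.0, 800.0          # arbitrary counts and background
contrast_before = np.log10(n1 / n2)
contrast_after = np.log10((n1 - n0) / (n2 - n0))
print(f"Contrast before = {contrast_before:.3f}, after subtraction = {contrast_after:.3f}")

# (c) Frame addition: Poisson noise grows only as the square root of the signal
frame1 = rng.poisson(100, size=(128, 128)).astype(float)
frame2 = rng.poisson(100, size=(128, 128)).astype(float)
snr_single = frame1.mean() / frame1.std()
summed = frame1 + frame2
snr_summed = summed.mean() / summed.std()
print(f"SNR single frame = {snr_single:.1f}, two frames added = {snr_summed:.1f}")
```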
(a) Smoothing filters to reduce the impact of noise due to poor counting statistics, or if
pixellation is coarse (e.g. in nuclear medicine) the intrusive effect of pixellation.
With a simple smoothing filter all 9 pixels (i.e. the target pixel and 8 adjacent) are
given equal weight—see Table 6.1a.
A template with equal weights leads to rather heavy smoothing so different
weights are normally assigned. Consider the application of the filter in Table 6.1b
to the array in Table 6.2 where the central pixel is high because of Poisson noise.
The new computed value of the central pixel is

[(1 × 1) + (2 × 2) + (1 × 1) + (1 × 2) + (9 × 4) + (2 × 2) + (2 × 1) + (2 × 3) + (1 × 1)] / 16 = 57/16 ≈ 3.6
TABLE 6.1
Nine Point Arrays of Smoothing Filters; (a) Heavy Smoothing; (b) Lighter Smoothing
(a)              (b)
1  1  1          1  2  1
1  1  1          2  4  2
1  1  1          1  2  1
TABLE 6.2
Hypothetical Set of Raw Pixel Values
1 2 1
1 9 2
2 3 1
TABLE 6.3
Weighting Factors for a Sharpening Filter
–1 –1 –1
–1 4 –1
–1 –1 –1
FIGURE 6.24
Edge enhancement or use of gradients to improve visualisation.
Note that the image shrinks slightly as the template cannot be applied to the
pixels round the edge—but this is not critical for say a 1024 × 1024 pixel matrix.
(b) Sharpening filters. In contrast to smoothing filters, image features become sharper
if negative weight is given to surrounding pixels (see Table 6.3).
(c) Edge enhancement. A plot of the gradient of counts per pixel is much more pro-
nounced than the change in absolute counts across the edges (see Figure 6.24).
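Both the smoothing and sharpening operations are 3 × 3 convolutions. The sketch below applies the weighted smoothing filter of Table 6.1b to the array of Table 6.2, reproducing the calculation above, and also evaluates the response of the sharpening weights of Table 6.3 at the same pixel; the small helper function is an illustration only, not part of any imaging software.

```python
import numpy as np

def convolve_centre(patch, kernel, normalise=True):
    """Weighted sum of a 3x3 patch with a 3x3 kernel (value for the central pixel)."""
    result = float(np.sum(patch * kernel))
    if normalise:
        result /= kernel.sum()
    return result

patch = np.array([[1, 2, 1],
                  [1, 9, 2],
                  [2, 3, 1]], dtype=float)       # Table 6.2: central pixel raised by noise

smooth = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float)      # Table 6.1b: lighter smoothing

sharpen = np.array([[-1, -1, -1],
                    [-1,  4, -1],
                    [-1, -1, -1]], dtype=float)  # Table 6.3: sharpening weights

print(f"Smoothed central pixel = {convolve_centre(patch, smooth):.2f}")          # 57/16
print(f"Sharpening response    = {convolve_centre(patch, sharpen, False):.1f}")
```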
Figure 6.25 shows examples of the effect of different filters on an image. Applications of
these processes include, for example, improving the image of a low-contrast boundary,
such as a tumour within the body or enhancing the visualisation of the character of the
boundary—is it fuzzy or sharp? This is an important distinction when deciding if the
tumour is benign or malignant.
(a) Image Segmentation. In this approach the image is first divided into several regions
of anatomical interest. The mathematics is mostly beyond the scope of this book
but a simple example of regional orientation segmentation and thresholding illus-
trates the principle.
Suppose one wishes to know if a lung nodule is increasing in size with time.
Look at the pixel values in the area defined by the nodule and in the surrounding
FIGURE 6.25
Three pictures of an X-ray image showing the effect of filters; (a) none; (b) smoothing; (c) edge enhancement.
area, establish a threshold and set all pixel values below the threshold to 0 and all
those above to 1. It is then easy to measure the diameter, perimeter and area of the
nodule in sequential images (a short sketch of this thresholding step is given after this list).
(b) Histogram analysis. The idea of making a histogram of pixel values was introduced
in Section 6.3.3. A number of useful operations can be performed using this infor-
mation. One is to find the region of interest of the image. In many radiographs
there are areas within the image where the X-ray beam does not pass through the
body. In these cases unattenuated radiation interacts with the receptor producing
a signal that is much greater than in the rest of the image. Histogram analysis
can identify these areas easily and exclude them from further processing. Systems
that use ‘automatic collimation’ or ‘masking’ can also use these identified areas to
improve the visualisation of the image by automatically assigning a ‘black’ value
to them to reduce glare. A further use of this identification of the region of interest
in the image can be to calculate the average receptor dose in that region which is
useful in monitoring patient radiation doses.
(c) Data compression. This is essential for efficient data storage and will be discussed
in Chapter 17.
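A minimal sketch of the thresholding step described in (a) is shown below. The ‘nodule’, threshold and pixel size are all invented for illustration; in practice the threshold would be chosen from the local histogram.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented example: a bright circular 'nodule' on a noisy background
size, radius, pixel_mm = 128, 10, 0.5
y, x = np.mgrid[:size, :size]
nodule = ((x - size // 2) ** 2 + (y - size // 2) ** 2) <= radius ** 2
image = rng.normal(100, 5, (size, size)) + 50 * nodule

threshold = 120                          # chosen from the local histogram
mask = (image > threshold).astype(int)   # 1 inside the nodule, 0 elsewhere

area_mm2 = mask.sum() * pixel_mm ** 2
diameter_mm = 2 * np.sqrt(area_mm2 / np.pi)   # equivalent circular diameter
print(f"Segmented area = {area_mm2:.0f} mm^2, equivalent diameter = {diameter_mm:.1f} mm")
```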
Insight
Frequency Domain Methods of Image Processing
A full treatment involves a thorough understanding of Fourier transforms, the principle of which
is that any image in real space can be expressed in terms of a large number of sinusoidal waves
of different spatial frequencies, expressed in cycles cm–1. Note that the spatial frequency k when
dealing with waves in space (k space) is the analogue of temporal frequency f(ω/2π) for waves
varying with time.

FIGURE 6.26
Properties of filters represented in frequency space (percentage of signal accepted plotted against
spatial frequency in k space); (a) low pass—all spatial frequencies up to a threshold value are
transmitted equally; (b) high pass—spatial frequencies above a threshold are transmitted equally.
Thus an image in real space f(x,y) is transformed into F(u,v) where u is a measure of spatial fre-
quencies in the x-direction and v is a measure of spatial frequencies in the y-direction.
The point of this manoeuvre is that applying a processing operation, H(u,v), in frequency
space,
is much easier mathematically than carrying out the same process in real space.
Smoothing filters are also known as low pass filters and are very easy to understand in terms
of spatial frequencies because they selectively reduce the high frequency components of the
image which carry information about edges and sharp detail (see Figure 6.26a). Conversely high
pass filters have the opposite effect, reducing the low spatial frequencies associated with the
slowly varying characteristics of an image, such as overall intensity and contrast, and enhanc-
ing the edges and detail (Figure 6.26b). Band pass filters accept a pre-selected range of spatial
frequencies.
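A low pass filter of the kind shown in Figure 6.26a can be sketched with a two-dimensional Fourier transform. The abrupt circular cut-off used below is the simplest possible choice of H(u,v) and the cut-off frequency is arbitrary; it illustrates the principle rather than a practical filter design.

```python
import numpy as np

def low_pass(image, cutoff_fraction=0.1):
    """Remove spatial frequencies above a threshold (simple top-hat low pass filter)."""
    F = np.fft.fftshift(np.fft.fft2(image))           # F(u, v), zero frequency at centre
    ny, nx = image.shape
    v, u = np.mgrid[:ny, :nx]
    radius = np.hypot(u - nx / 2, v - ny / 2)
    F[radius > cutoff_fraction * min(nx, ny)] = 0     # H(u, v): pass only low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

rng = np.random.default_rng(4)
noisy = rng.normal(0, 1, (128, 128)) + 10             # flat image plus noise
smoothed = low_pass(noisy, cutoff_fraction=0.1)
print(f"Noise before: {noisy.std():.2f}, after low pass filtering: {smoothed.std():.2f}")
```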
Consider three objects of lengths ab, xy and gh lying in the same plane, with images AB, XY and GH
(Figure 6.27). Their magnifications are

M1 = AB/ab,   M2 = XY/xy,   M3 = GH/gh

Consider triangles Fxy and FXY. Angles xFy and XFY are common; xy is parallel to XY.
The two triangles are thus similar. Therefore

XY/xy = FRD/(FRD − d)    (6.3)

By the same considerations triangles aFb and AFB are similar and triangles gFh and GFH
are similar, so

AB/ab = FRD/(FRD − d)

and

GH/gh = FRD/(FRD − d)

Therefore

M1 = M2 = M3
That is, for objects in the same plane parallel to the receptor, magnification is constant.
This is magnification without distortion.
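The common magnification M = FRD/(FRD − d) for objects in a plane parallel to the receptor can be evaluated directly; the distances below are illustrative only.

```python
def magnification(frd_mm, ord_mm):
    """Magnification for an object a distance ord_mm above the receptor: FRD/(FRD - d)."""
    return frd_mm / (frd_mm - ord_mm)

FRD = 1000.0                      # focus-receptor distance (mm), illustrative
for d in (0.0, 100.0, 250.0):     # object-receptor distance (mm)
    print(f"d = {d:4.0f} mm:  M = {magnification(FRD, d):.2f}")
```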
(Figure 6.27 geometry: three objects of lengths ab, xy and gh lie in the same plane at object-receptor
distance d; F is the X-ray focus at focus-receptor distance FRD; the corresponding images on the
receptor are AB, XY and GH.)
FIGURE 6.27
Demonstration of a situation in which magnification without distortion will occur.
FIGURE 6.28
Demonstration of (a) distortion of shape of an object; (b) distortion of shape when objects are of finite thickness
(A,B,C) and of relative position when they are at different depths (C,D). Note that the object B has been placed
very wide and the object C has been placed very close to the X-ray focus to exaggerate the geometrical effects.
In particular a patient would not be placed as close to the X-ray focus as this because of the high skin dose (see
Section 9.6.3).
Distortion of shape occurs when an object is off-axis and its image is enlarged more (compare A and B). Note that when the sphere is
in a different plane (e.g. C) the distortion may be considerable. Figure 6.28b also shows that
when objects are in different planes, distortion of position will occur. Although C is nearer
to the central axis than D, its image actually falls further away from the central axis.
A certain amount of distortion of shape and position is unavoidable and the experienced
radiologist learns to take such factors into consideration. The effects will be more marked
in magnification radiography.
6.13.7 Grids
These must be used if scatter is significantly reducing contrast, for example when irradiat-
ing large volumes. Use of grids requires an increased mAs thus increasing patient dose.
6.13.12 Post-processing
For digital images a variety of post-processing techniques is available—see Section 6.11.
These will all alter the appearance of the final image.
It will be clear from this lengthy list that the quality of a simple, plain radiograph is affected
by many factors, some of which are interactive. Each of them must be carefully controlled
if the maximum amount of diagnostic information is to be obtained from the image. For
further information see BIR (2001).
References
British Institute of Radiology (BIR). Assurance of Quality in the Diagnostic X-ray Department. British
Institute of Radiology, London, UK, 2001.
Dawson P. and Clauss W. Advances in X-ray Contrast. Collected Papers. Springer, Heidelberg, Germany,
1998.
Spiegler P. and Breckinridge W.C. Imaging of Focal Spots by Means of the Star Test Pattern, Radiology
102:679–684, 1972.
Further Reading
Carroll Q.B. Practical Radiographic Imaging, 8th edn. Charles C Thomas, Springfield, Illinois, 2007.
Curry T.S., Dowdey J.E. and Murry R.C. Jr, Christensen’s Introduction to the Physics of Diagnostic
Radiology, 4th edn. Lea and Febiger, Philadelphia, 1990.
Fauber T.L. Radiographic Imaging and Exposure (Chapter 4; Radiographic image quality pp. 50–99).
Mosby Inc, St Louis, 2000.
Oakley J. (ed) Digital Imaging. A Primer for Radiographers, Radiologists and Health Care Professionals.
Cambridge University Press, Cambridge, UK, 2003.
Exercises
1. What is meant by contrast?
2. Why is contrast reduced by scattered radiation?
3. The definition of contrast used for radiographic film cannot be used for digital
images. Discuss the reasons for this.
4. What are the advantages and disadvantages of having a radiographic film with a
high gamma?
5. A solid bone 7 mm diameter lies embedded in soft tissue. Ignoring the effects
of scatter, calculate the contrast between the bone (centre) and neighbouring soft
tissue.
Film gamma = 3
Linear attenuation coefficient of bone = 0.5 mm–1
Linear attenuation coefficient of tissue = 0.04 mm–1
6. Discuss the origins of scattered radiation reaching the receptor.
7. Give a sketch showing how the relative scatter (scattered radiation as a fraction
of the unscattered radiation) emerging from a body varies with X-ray tube kV
between 30 kVp and 200 kVp and explain the shape of the curve. What measures
can be taken to minimise loss of contrast due to scattered radiation?
8. How can X-ray magnification be used to enhance the detail of small anatomical
structures? What are its limitations?
9. What are the advantages and disadvantages of an X-ray tube with a very fine
focus?
10. List the factors affecting the sharpness of a radiograph. Draw diagrams illustrat-
ing these effects.
11. A digital radiograph is taken of a patient’s chest. Discuss the principal factors that
influence the resultant image.
12. What factors, affecting the resolution of a radiograph, are out of the control of the
radiologist? (Assuming the radiographer is performing as required.)
13. A radiograph is found to lack contrast. Discuss the steps that might be taken to
improve contrast. Distinguish carefully between film-screen, CR and DR images.
14. Explain the meaning of the terms quantum mottle and quantum sink. What meth-
ods are available for reducing the effects of quantum mottle?
15. Explain the difference between global operations and local operations in image
processing and give examples of each.
7
Assessment of Image Quality and Optimisation
SUMMARY
• This chapter looks at image quality in the wider context to determine if the
image is ‘fit for purpose’, that is accurate diagnosis.
• Factors affecting image quality are reviewed.
• Image perception is important so the operation of the visual system is
explained.
• An alternative definition of contrast is presented and quantum noise is con-
sidered in greater detail.
• The importance of modulation transfer function (MTF) as a quantitative
measure of imager performance is explained.
• Receiver operator characteristic curves are explained as a versatile method
of analysis of both technical and clinical features of images.
• Examples of optimisation of imaging systems and image interpretation are
given.
• The chapter concludes with a brief introduction to clinical trials.
CONTENTS
7.1 Introduction......................................................................................................................... 220
7.2 Factors Affecting Image Quality...................................................................................... 220
7.2.1 Image Parameters................................................................................................... 221
7.2.2 Observation Parameters......................................................................................... 221
7.2.3 Psychological Parameters...................................................................................... 221
7.3 Operation of the Visual System........................................................................................ 221
7.3.1 Response to Different Light Intensities...............................................................222
7.3.2 Rod and Cone Vision..............................................................................................222
7.3.3 Relationship of Object Size, Contrast and Perception.......................................223
7.3.4 Eye Response to Different Spatial Frequencies.................................................. 224
7.3.5 Temporal Resolution and Movement Threshold................................................225
7.4 Objective Definition of Contrast....................................................................................... 226
7.4.1 Limitations of a Subjective Definition of Contrast............................................. 226
7.4.2 Signal-to-Noise Ratio and Contrast-to-Noise Ratio........................................... 227
7.5 Quantum Noise................................................................................................................... 228
7.6 Detective Quantum Efficiency (DQE).............................................................................. 231
7.1 Introduction
In earlier chapters the principles and practice of X-ray production were considered and the
ways in which X-rays are attenuated in different body tissues were discussed. Differences
in attenuation create ‘contrast’ on the image receptor, and differences in ‘contrast’ provide
information about the object. Imaging systems are often described in terms of physical
quantities that characterise various aspects of their performance.
However, when an X-ray image, or indeed any other form of diagnostic image, is assessed
subjectively, it must be appreciated that the use made of the information is dependent on
the observer, in particular the performance of his or her visual response system. Therefore,
it is important to consider those aspects of the visual response system that may influence
the final diagnostic outcome of an investigation. Quantitative methods for assessing noise
and image quality will be considered in greater detail, and the chapter will conclude with
examples of ways in which imaging systems and image interpretation may be optimised
and a brief introduction to the design of clinical trials.
The factors affecting image quality may be grouped into image parameters, observation parameters and psychological parameters (Sections 7.2.1 to 7.2.3). The image parameters include the following:
1. The signal to be detected—the factors to consider here will be the size of the abnormality, the shape of the abnormality and the inherent contrast between the suspected abnormality and non-suspect areas.
2. The number and type of possible signals—for example, the number and angular
frequency of sampling in computed tomography (CT).
3. The nature and performance characteristics of the image system—spatial resolu-
tion (this is important when working with an image intensifier, digital radiography
systems and in nuclear medicine), sensitivity, linearity, noise (both the amplitude
and character of any unwanted signal such as scatter) and speed, especially in
relation to patient motion.
4. The interplay between image quality and dose to the patient.
5. Non-signal structure—interference with the wanted information may arise from
grid lines, overlapping structures or artefacts in CT and ultrasound.
The observation parameters include the following:
1. The display system—features that can affect the image appearance include the brightness scale, gain, offset, non-linearity (if any) and the magnification or minification.
2. Viewing conditions—viewing distance and ambient room brightness.
3. Detection requirements.
4. Number of observations.
Some of these points have been considered in earlier chapters and others will be consid-
ered later in this chapter. For further information on display systems, see Sections 10.3.5
and 17.5. It is important to realise that final interpretation of a diagnostic image, especially
when it is subjective, depends on far more than a simple consideration of the way in which
the radiation interacts with the body.
FIGURE 7.1
Variation of minimum detectable contrast with light intensity, or brightness, for the human visual system.
The main disadvantages of rod vision are the need for dark adaptation, which may take up to 30 minutes, and a loss of
colour sensitivity, although the latter is not a real problem in radiology.
FIGURE 7.2
Relationship between minimum perceptible contrast difference and object size. (a) A suitable phantom for
investigating the problem. Holes of different diameter (vertical axis) are drilled in Plexiglas to different
depths to simulate different amounts of contrast (horizontal axis). As the holes become smaller the contrast
required to visualise them becomes greater. (Photograph kindly supplied by Artinis Medical Systems after
a design by Thijssen M A O, Thijssen H O M, Merx J L et al. A definition of image quality: The image qual-
ity figure. In Optimisation of image quality and patient exposure in diagnostic radiology, B M Moores, B F Wall,
H Eriska and H Schibilla eds., BIR Report 20, 1989, 29–34.) (b) Typical result for an image intensifier TV camera
screening unit.
Insight
Alternative Display for Contrast-Detail Data
An alternative way to display information on the relationship between threshold contrast and
detail size is to plot the threshold detection index, HT, as a function of the square root of the detail
area on a log-log scale.
HT is defined as

HT = 1/(CT √A)
where CT is the threshold contrast, expressed as a percentage difference in response of the recep-
tor to the detail and the background, and A is the area of detail. The shape of the CT versus A curve
means that HT is small when either CT or A is large, taking its maximum value in the middle of the
range. Example threshold contrast-detail detectability (TCDD) curves are shown in Figure 7.3.
This form of plot has some advantages.
FIGURE 7.3
Typical threshold contrast-detail detectability curves for fluoroscopy (solid line) and digital acquisition (dashed
line).
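To make the definition above concrete, the short Python sketch below computes HT for a set of purely hypothetical contrast-detail measurements; the detail diameters and threshold contrasts are illustrative assumptions, not data from Figure 7.3, and A is taken as the area of a circular detail.

```python
import math

# Hypothetical contrast-detail data: detail diameter (mm) and the threshold
# contrast C_T (%) at which each detail was just visible on a given system.
details_mm = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
threshold_contrast_pct = [45.0, 18.0, 8.0, 3.5, 1.8, 1.0]

for d, ct in zip(details_mm, threshold_contrast_pct):
    area = math.pi * (d / 2.0) ** 2          # detail area A in mm^2
    ht = 1.0 / (ct * math.sqrt(area))        # threshold detection index H_T
    print(f"diameter {d:4.2f} mm  sqrt(A) = {math.sqrt(area):5.2f} mm  "
          f"C_T = {ct:5.1f} %  H_T = {ht:6.3f}")
```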
FIGURE 7.4
Contrast sensitivity of the human visual system defined as the reciprocal of threshold contrast measured with a
sinusoidal grating plotted against spatial frequency at a brightness of approximately 500 cd m–2. (After Campbell
F W. The physics of visual perception, Phil. Trans. Roy. Soc. Lond. 290, 5–9, 1980, with permission.)
The spatial frequency that a given pattern presents to the eye depends on the viewing distance, differing, for example, by a factor of two
depending on whether it is viewed from 1 m or 50 cm. Campbell (1980) has shown that contrast
sensitivity of the eye-brain system is very dependent on spatial frequency and demon-
strates a well-defined maximum. This is illustrated in Figure 7.4 where a viewing distance
of 1.0 m has been assumed. Exactly the same curve would apply to spatial frequencies of
twice these values if they were viewed from 50 cm.
Thus it may be seen that the regular pattern of matrix lines is unlikely to be intrusive
when using a 1024 × 1024 matrix in digital radiology, but it may be necessary to choose
both image size and viewing distance carefully to avoid or minimise this effect when look-
ing at 128 × 128 or 64 × 64 images sometimes used in nuclear medicine.
Insight
Temporal Resolution of the Eye
If luminance fluctuates sinusoidally about a mean brightness L, the stimulus will be L + l sin(2πft),
where f is the frequency and the signal varies between L + l and L – l. The value of l that causes a
FIGURE 7.5
(a) Temporal contrast for two values of retinal luminance (0.1 cd m–2 and 500 cd m–2)—based on de Lange 1958 and Kelly 1961. (b) Temporal contrast sensitivity, or flicker sensitivity, L/l(f) at 500 cd m–2 compared with the spatial contrast sensitivity (redrawn from Figure 7.4).
flicker sensation is determined for various values of f, say l(f), and the temporal contrast l(f)/L can
be plotted as a function of f (see Figure 7.5a). At 0.1 cd m –2, typical of the darker part of a low-level
display, the temporal contrast required for a flicker sensation increases with increasing frequency,
as one would expect intuitively. At 500 cd m–2, typical of a cathode ray tube display, l(f)/L actually
decreases with increasing low frequency so the required frequency for flicker-free viewing must
exceed the value of f at (l/L)min.
Figure 7.5b is the 500 cd m–2 curve from Figure 7.5a redrawn to show the temporal contrast
sensitivity of the eye, or flicker sensitivity, the reciprocal L/l(f). There is a strong similarity with the
spatial contrast sensitivity (redrawn from Figure 7.4).
A purely subjective definition of contrast is unsatisfactory for several reasons. First, the perceived contrast will depend on the sharpness of the bound-
ary. Consider, for example, Figure 7.6. Although the intensity change across the boundary
is the same in both cases, the boundary illustrated in Figure 7.6a would appear more con-
trasty on X-ray film because it is sharper. Second, if the boundary is part of the image of a
small object in a rather uniform background, contrast perception will depend on the size
of the object. A digital system which can artificially increase the density at the edge of a
structure will subjectively increase the contrast of the structure.
An alternative definition starts from the task of perceiving a single object in a background
that is uniform apart from the presence of noise and measuring the signal/noise ratio.
Imagine, for example, that the object consisted of a single region of lightly attenuating
material (Figure 7.7).
FIGURE 7.6
Curves showing the difference in transmitted intensity across (a) a sharp boundary; (b) a diffuse boundary.
FIGURE 7.7
Appearance of a perfect image of a simple object consisting of a single strip of lightly attenuating material; the intensity profile shows the levels Imax, Imean and Imin.
The signal may be represented by Imax – Imin and the noise has an average value Imin. Hence
the signal/noise ratio is (Imax – Imin)/Imin or if the signal is small, to a good approximation
(Imax – Imin)/Imean. Contrast is often given as (Imax – Imin)/(Imax + Imin). Since Imean = ½(Imax + Imin)
this will give values that differ by a factor of two from those obtained using Imean.
This definition can be easily extended to digital systems. For example, in a digital radio-
graph the signal will be the number of X-ray photon interactions per pixel and a quantita-
tive expression for the contrast-to-noise ratio will be
CNR = (N1 − N2)/noise
where N1, N2 are the mean numbers of counts in pixels in the regions of differing contrast.
If the only significant source of noise is quantum noise (see next section)
CNR = (N1 − N2)/√(N1 + N2)
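As a minimal numerical illustration of the quantum-limited case, the Python sketch below evaluates the expression above for two assumed mean pixel counts; the numbers are invented for illustration only.

```python
import math

def cnr_quantum_limited(n1, n2):
    """Contrast-to-noise ratio when quantum (Poisson) noise dominates.

    n1, n2 are the mean numbers of X-ray photon interactions per pixel in the
    two regions; the noise on their difference is sqrt(n1 + n2).
    """
    return (n1 - n2) / math.sqrt(n1 + n2)

# Example with assumed counts: 10 000 per pixel behind background tissue,
# 9 500 behind a slightly more attenuating detail.
print(cnr_quantum_limited(10_000, 9_500))   # ~ 3.6
```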
FIGURE 7.8
Demonstration of the statistical variation of detected signal: (a) uniform flux of X-ray photons onto a regular array of detectors; (b) the spread of recorded counts per detector. If the mean count per detector is N, 66% of the readings lie between N – N½ and N + N½; 95% of the readings lie between N – 2N½ and N + 2N½.
TABLE 7.1
Variation in Counts Due to Statistical Fluctuations as a Function of the Number of Counts Collected (or the Number of Counts per Pixel), N

        N          N½      N½/N × 100 (%)
       10           3           30
      100          10           10
     1000          30            3
    10000         100            1
   100000         300            0.3
  1000000        1000            0.1
For example, a nuclear medicine image containing about 10⁵ counts might be digitised into a 64 × 64 matrix (approximately 4000 pixels), giving a mean of 25 counts per
pixel. Now N½/N = 20%! Quantum noise is always a major source of image degradation in
nuclear medicine.
Insight
Relationship between Photon Fluence and Dosimetric Quantities
The entrance surface dose (D) without backscatter is the product of the photon fluence (number
of photons per unit area, N/A), the mass energy absorption coefficient for tissue (µa /ρ), and the
mean photon energy (E).
Thus,

D = (N/A) × (µa/ρ) × E
If D = 3 mGy for a skull view (3 × 10⁻³ J kg⁻¹), µa/ρ = 0.004 m² kg⁻¹, E = 50 × 10³ × 1.6 × 10⁻¹⁹ J (note, care is required to ensure units are consistent), N/A ≈ 10¹⁴ photons m⁻² or 10⁸ photons mm⁻².
Assuming transmission through the skull is about 1% (it varies from 10% for a chest to 0.1% for a
lateral lumbar spine), and that detector efficiency is 100% (in practice it may be about 70%–80%
for a flat panel digital imager), there will be about 106 photons mm –2 in the image. The number of
photons per pixel can be obtained by multiplying by the pixel area and the quantum noise in the
pixel can be calculated.
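The arithmetic of this insight can be reproduced directly. The Python sketch below repeats the worked example (3 mGy entrance dose, µa/ρ = 0.004 m² kg⁻¹, 50 keV mean energy, 1% transmission) and prints the approximate fluences; the 100% detector efficiency is the same idealisation used in the text.

```python
# D = (N/A) * (mu_a/rho) * E, so the photon fluence is N/A = D / ((mu_a/rho) * E).
D = 3e-3                      # entrance surface dose, J/kg (3 mGy)
mu_a_over_rho = 0.004         # mass energy absorption coefficient, m^2/kg
E = 50e3 * 1.6e-19            # mean photon energy, J (50 keV)

fluence_per_m2 = D / (mu_a_over_rho * E)
fluence_per_mm2 = fluence_per_m2 / 1e6

transmission = 0.01           # ~1% transmitted through the skull
detector_efficiency = 1.0     # idealised detector, as in the text

photons_per_mm2_at_image = fluence_per_mm2 * transmission * detector_efficiency
print(f"{fluence_per_m2:.2e} photons/m^2 incident, "
      f"{photons_per_mm2_at_image:.1e} photons/mm^2 in the image")
```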
There are now also a number of situations in radiology where quantum noise must be
considered as a possible cause for loss of image quality or diagnostic information. These
include the following:
1. The use of very fast intensifying screens may reduce the number of photons
detected to the level where image quality is affected (Section 6.10).
2. If an image intensifier is used in conjunction with a television camera or the image
is converted into digital format using a charge coupled device (CCD) camera, the
signal may be amplified substantially by electronic means. However, the amount
of information in the image will be determined at an earlier stage in the system,
normally by the number of X-ray photon interactions with the input phosphor to
the image intensifier. The smallest number of quanta at any stage in the imaging
process determines the quantum noise, and this stage is sometimes termed the
quantum sink.
3. Enlargement of a radiograph decreases the photon density in the image and hence
increases the noise.
4. In digital images the smallest detectable contrast over a small area, say 1 mm2,
may be determined by quantum noise (see Section 7.5).
5. In CT the precision with which a CT number can be calculated will be affected. For
example, the error on 100,000 counts is 0.3% (see Table 7.1), and the CT number (see
Section 8.3) for different pixels in the image of a uniformly attenuating phantom
(e.g. water) must vary accordingly.
Insight
Quantum Sinks
In Figure 6.22 the number of photons at each stage in the imaging system for an image intensifier
coupled to a digital output was shown.
The critical stage (the quantum sink) was the number of quanta absorbed by the input phosphor.
This is one reason why the quantum efficiency of the phosphor was such an important consider-
ation in Section 5.4.1.
The signal-to-noise ratio (SNR) for this sequence will be 3 × 10⁴/(3 × 10⁴)½, or about 170. In this
example subsequent amplification (or in the latter stages reduction) in the number of quanta will
not affect S/N because signal and noise are amplified equally.
Note, however, that if the number of quanta were subsequently to fall below 3 × 104 during image
processing, for example, as a result of digital subtraction imaging, the Poisson noise of the second-
ary quanta would further degrade S/N causing an even lower, or secondary, quantum sink.
Note that quantum mottle confuses the interpretation of low contrast images. In Section
7.7.1, imager performance will be assessed in terms of MTFs, which provide information
on the resolution of small objects with sharp borders and high contrast. Other sources of
noise may be the ultimate limiting factor in these circumstances.
The detective quantum efficiency (DQE) is defined as

DQE = SNR²out / SNR²in
When it is applied to the whole system it compares the output of the device (i.e. SNR in the
image signal) to the SNR in the input (incident radiation beam). Alternatively, to monitor
how noise levels are changing, the DQE can be calculated for each stage in the imaging
process. The DQE for the whole system is then the product of the individual DQEs.
Assuming the only source of noise in the incident beam is quantum noise,
SNR²in = N0²/N0 = N0 (the number of incident quanta)
Thus DQE = SNR²out/N0 and SNR²out is known as the noise equivalent quanta, that is, the
number of quanta that are equivalent to the total noise in the system.
Note that the concept of quantum efficiency of a detector was introduced as an ‘insight’
in Section 5.4.1. As the name implies, this specifies the efficiency of the detector in regis-
tering quanta, or the fraction of quanta incident on the detector that are stopped by it. It
must not be confused with DQE, which is a much more general term that can be applied
to the complete imaging system and quantifies all sources of noise. If the DQE concept is
applied just to the first stage of the imaging process, that is to the detector, then the upper
limit of DQE, in which no extraneous noise is introduced, is the quantum efficiency of
the detector.
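A minimal numerical sketch of the DQE definition is given below; the incident quantum count and the measured output SNR are assumed values chosen only to illustrate the calculation, not data for any particular detector.

```python
def dqe(n_incident, snr_out):
    """Detective quantum efficiency: DQE = SNR_out^2 / SNR_in^2.

    For a quantum-noise-limited input beam SNR_in^2 = N0, the number of
    incident quanta, so SNR_out^2 plays the role of the noise equivalent quanta.
    """
    noise_equivalent_quanta = snr_out ** 2
    return noise_equivalent_quanta / n_incident

# Assumed example: 10 000 quanta incident, measured output SNR of 75
# (an ideal detector would give SNR = sqrt(10 000) = 100).
print(dqe(10_000, 75))    # 0.5625, i.e. DQE of about 56%
```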
A simple measure of resolution is the smallest separation at which two small objects can still be distinguished before their images merge. Unfortunately, diagnostic imagers are complex devices and many factors contrib-
ute to the overall resolution capability. For example, in forming a conventional analogue
X-ray image these will include the focal spot size, interaction with the patient, type of film,
type of intensifying screen and other sources of unsharpness. Some of these interactions
are not readily expressed in terms of resolving power and even if they were there would
be no easy way to combine resolving powers.
A practical approach to this problem is to introduce the concept of MTF. This is based
on the ideas of Fourier analysis, for detailed consideration of which the reader is referred
elsewhere, for example Gonzalez and Woods (2008). Only a very simplified treatment will
be given here.
Starting from the object, at any stage in the imaging process all the available informa-
tion can be expressed in terms of a spectrum of spatial frequencies. The idea of spatial
frequency can be understood by considering two ways of describing a simple object con-
sisting of a set of equally spaced parallel lines. The usual convention would be to say the
lines were equally spaced 0.2 mm apart. Alternatively, one could say that the lines occur
with a frequency in space (spatial frequency) of five per mm.
Fourier analysis provides a mathematical method for relating the description of an
object (or image) in real space to its description in frequency space. Two objects and their
corresponding spectra of spatial frequencies are shown in Figure 7.9. In general, the finer
the detail, that is sharper edges in real space, the greater the intensity of high spatial fre-
quencies in the spatial frequency spectrum (SFS). Thus, fine detail, or high resolution, is
associated with high spatial frequencies.
For exact images of these objects to be reproduced, it would be necessary for the imaging
device to transmit every spatial frequency in each object with 100% efficiency. However,
each component of the imaging device has a MTF which modifies the SFS of the informa-
tion transmitted by the object.
Each component of the imager can be considered in turn: the spatial frequency spectrum recorded in the image is the SFS of the object multiplied, frequency by frequency, by the MTF of each component in the imaging chain. Hence the overall MTF of the system is the product of the MTFs of its individual components.

FIGURE 7.9
Two objects and their corresponding spectra of spatial frequencies. Solid line is a sharp object. Dotted line is a diffuse object.
Insight
More on MTF
As indicated in the main text, all objects in real space can be represented by a set of spatial
frequencies. This mathematical process is known as a Fourier transform (FT) and simplifies the
subsequent mathematics considerably (as previously discussed in an insight in Section 6.11 in con-
nection with image processing in the frequency domain).
It should be noted that rigorous mathematical derivation of MTF assumes a sinusoidally varying
object with spatial frequency expressed in cycles mm –1. In the figures presented in this chapter
it has been assumed that this sinusoidal wave can be approximated to a square wave and spatial
frequencies are given in line pairs mm –1 for ease of interpretation by the reader.
The MTF relates the sharpness in the image to the sharpness in the object, expressed in terms
of spatial frequencies. Since the FT of a point object is a constant value (normalised to 1.0) at all
spatial frequencies, the MTF of an imaging device is the FT of the point spread function when the
device images a point source (see Figure 7.10). Of course an ideal imager would reproduce the
object faithfully and have an MTF of 1.0 at all values of 𝜈.
In its simplest form, MTF only applies strictly to a system in which resolution does not vary
with signal strength or location in the image. To achieve this, sampling of the data must occur at a
frequency that is appreciably higher than the Nyquist frequency (see Section 8.4.2). This is readily
achieved with an analogue detector (e.g. film-screen combination or image intensifier).
However, with digital detectors the inherent information content of the image is higher than the
spatial frequency of the sampling. With a sampling distance d of 100 µm, the Nyquist frequency is
1/2d = 5 cycles mm–1, but the MTF of an aperture the size of a pixel (assumed for simplicity also to
be d) contains spatial frequencies much higher than this so the MTF is under-sampled. Thus strict
analysis of the MTF of a digital detector is quite complex—for further detail see Zhao et al. 2006.
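As a rough numerical illustration of the relationship between a spread function and the MTF, the Python sketch below models a line spread function as a Gaussian of assumed width 0.1 mm and takes the modulus of its Fourier transform. This is the simplified, well-sampled analogue case; it does not address the digital under-sampling issue discussed above.

```python
import numpy as np

# Assumed line spread function (LSF): a Gaussian of standard deviation 0.1 mm,
# sampled every 0.01 mm.  The modulus of its Fourier transform gives the MTF.
dx = 0.01                                  # sampling interval, mm
x = np.arange(-5, 5, dx)                   # position, mm
sigma = 0.1                                # assumed LSF width, mm
lsf = np.exp(-x**2 / (2 * sigma**2))
lsf /= lsf.sum()                           # normalise so that MTF(0) = 1

mtf = np.abs(np.fft.rfft(lsf))
freqs = np.fft.rfftfreq(x.size, d=dx)      # spatial frequency, cycles/mm

for f in (0.5, 1.0, 2.0, 5.0):
    idx = np.argmin(np.abs(freqs - f))
    print(f"MTF at {f} cycles/mm ~ {mtf[idx]:.2f}")
```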
Some examples of the way in which the MTF concept may be used will now be given.
First, it provides a simple pictorial representation of the imaging capability of a device.
Figure 7.11a shows MTFs for five image receptors. It is clear that non-screen film transfers
higher spatial frequencies, and thus is inherently capable of higher resolution, than screen
film. Of course this is because of the unsharpness associated with the use of screens.
A similar argument explains the greater resolution associated with direct digital detectors over indirect detectors that incorporate a photoluminescent layer (Figure 7.11b). Note
that the MTF takes no account of the dose that may have to be given to the patient.
A similar family of curves would be obtained if MTFs were measured for different film-
screen combinations. The spatial frequency at which the MTF fell to 0.1 might vary from
FIGURE 7.10
Diagrams showing the effect of an imaging device on simple objects. To reproduce the profile of the object
(a) exactly, all spatial frequencies would have to be transmitted by the imager equally (b). In practice, most imag-
ers transmit high spatial frequencies less well (c), which results in loss of sharpness in the image (d).
10 line pairs per mm for a slow film-screen combination to 2.5 line pairs per mm for a faster
film-screen combination. This confirms that slow film-screen combinations are capable of
higher resolution than fast film screens.
Since the MTF is a continuous function, an imaging device does not have a ‘resolution
limit’, that is, a spatial frequency above which resolution is not possible, but curves such as
those shown in Figure 7.11 allow an estimate to be made of the spatial frequency at which
a substantial amount of information in the object will be lost.
Second, by examining the MTFs for each component of the system, it is possible to deter-
mine the weak link in the chain, that is, the part of the system where the greatest loss of
high spatial frequencies occurs. Figure 7.12 shows MTFs for some of the factors that will
degrade image quality when using an image intensifier TV system. Since MTFs are multi-
plied, the overall MTF is determined by the poorest component—the television camera in
this example. Note that movement unsharpness will also degrade a high resolution image
substantially.
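Because component MTFs multiply, the weak link can also be found numerically. The sketch below multiplies three assumed component MTFs at a few spatial frequencies; the values are illustrative only and are not read from Figure 7.12.

```python
# Component MTFs at a few spatial frequencies (line pairs/mm); all values are
# illustrative assumptions for a focal spot, an image intensifier and a TV camera.
frequencies = [1, 2, 4, 6]                     # lp/mm
mtf_focal_spot  = [0.98, 0.95, 0.85, 0.75]
mtf_intensifier = [0.90, 0.75, 0.45, 0.25]
mtf_tv_camera   = [0.80, 0.50, 0.15, 0.05]

for i, f in enumerate(frequencies):
    system = mtf_focal_spot[i] * mtf_intensifier[i] * mtf_tv_camera[i]
    print(f"{f} lp/mm: system MTF = {system:.3f}")   # dominated by the poorest component
```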
As a third example, the MTF may be used to analyse the effect of varying the imaging
conditions on image quality. Figure 7.13 illustrates the effect of magnification and focal
spot size in magnification radiography. Curves B and C show that for a fixed focal spot
size, image quality deteriorates with magnification and curves C and D show that for
FIGURE 7.11
Some typical MTFs for different imaging receptors. (a) A = non-screen film; B = film used with high defini-
tion intensifying screen; C = film with medium speed screens; D = a 150 mm Cs:Na image intensifier; E = the
same intensifier with television display. (Adapted from Hay GA. Traditional X-ray imaging. In Scientific Basis of
Medical Imaging, PNT Wells, ed. Churchill Livingstone, Edinburgh, 1–53, 1982, with permission.) (b) F = a direct
digital detector; G = a computed radiography system or indirect digital detector; A is reproduced from Figure
7.11a for reference. Footnote to legend—Note that as a result of technical improvements in resolving capability
since the 1980s, the MTFs D and E have moved to the right. However, the resolving capability of an image inten-
sifier is still inferior to that of a film screen, and the television camera still causes further degradation.
FIGURE 7.12
Typical MTFs for some factors that may degrade image quality in an image intensifier TV system. A = 1 mm
focal spot with 1 m focus film distance and small object film distance; B = image intensifier; C = movement
unsharpness of 0.1 mm; D = conventional vidicon camera with 800 scan lines.
fixed magnification image quality deteriorates with increased focal spot size. Note that
if it were possible to work with M = 1, then the focal spot would not affect image quality
and an MTF of 1 at all spatial frequencies would be possible (curve A). All these changes
could of course have been predicted in a qualitative manner from the discussions of mag-
nification radiography in Sections 6.12.1 and 9.4. The point about MTF is that it provides
FIGURE 7.13
MTF curves for magnification radiography under different imaging conditions. A = magnification of 1; B = mag-
nification of 1.2 with a 0.3 mm focal spot size; C = magnification of 2.0 with a 0.3 mm focal spot size; D = mag-
nification of 2.0 with a 1 mm focal spot size.
a quantitative measure of these effects, and one that can be extended by simple multi-
plication to incorporate other factors such as the effect of magnification on the MTF of
the receptor, and the effect of movement unsharpness (see for example Curry et al. 1990).
Hence it is the starting point for a logical analysis of image quality and the interactive
nature of the factors that control it.
Insight
An Unusual MTF
The majority of systems transmit low spatial frequencies better than high spatial frequencies.
Xeroradiography (see insert to Section 9.2.5) has an unusual MTF with better data transfer at about
one line pair per mm than at lower spatial frequencies. In practice this property means that discon-
tinuities in the image (edges) that contain high spatial frequency information are enhanced. The
MTF falls again at even higher spatial frequencies because of loss of resolution caused by other
problems (e.g. focal spot size and patient motion).
It should be noted in passing that the visual thresholds at which objects can be
(a) detected, (b) recognised and (c) identified are not the same and this is a serious limita-
tion when attempting to extrapolate from studies on simple test objects to complex diag-
nostic images.
FIGURE 7.14
Curves showing how, for a fixed visual stimulus threshold T, the proportion of stimuli detected will increase if
contrast is increased from C1 to C2.
FIGURE 7.15
Visual response (percent positive identification) plotted as a function of contrast on the basis of observations
similar to those described in Figure 7.14.
Experiments of this type may be used to demonstrate, for example, that in relation to
visual perception, object size and contrast are inter-related.
The greater the separation of the distributions, the more readily is the signal detected.
The main problem with both signal detection theory and the method of constant stimu-
lus is that they require the visual threshold level T to remain fixed. In practice this is well
nigh impossible to achieve, even for a single observer. If T is allowed to vary, for example if
there are several observers, a more sophisticated approach is required (see Section 7.8).
7.7.2.3 Ranking
In this method the observer is presented with a set of images in which some factor thought
to influence quality has been varied. The observer is asked to arrange the images in order
of preference. This approach relies on the fact that an experienced viewer is frequently able
to say if a particular image is of good quality but unable to define the criteria on which this
judgement is based.
FIGURE 7.16
Curves illustrating the concept of false negative and false positive based on the spread of visual stimuli for an
object of fixed contrast. Dotted line = probability distribution of true negatives. Solid line = probability distribu-
tion of true positives.
FIGURE 7.17
Curve showing how the ability to detect the undegraded image might be expected to increase steadily as the amount of degradation was increased.
FIGURE 7.18
Curves showing how a logical policy towards the frequency of servicing might be based on measurements of
deterioration of imager performance.
FIGURE 7.19
Use of different visual thresholds with distributions of visual stimuli.
FIGURE 7.20
The ROC curve that would be constructed if the two distributions shown in Figure 7.19 were sampled at five
discrete points. (Reproduced with permission from Dendy PP. Recent technical developments in medical imag-
ing Part 1: Digital radiology and evaluation of medical images. Curr. Imag. 2, 226–236, 1990.)
A useful numerical parameter is the proportion of the ROC space that lies below the
ROC curve, AZ. In the two extreme distributions just discussed AZ = 1.0 and 0.5, respec-
tively. For intermediate situations the larger the value of AZ the greater the separation of
the distributions. Note that a value of AZ less than 0.5 indicates that the observer is per-
forming worse than guessing!
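A simple way to obtain AZ from rating-scale data recorded on a five-point confidence scale is to form cumulative true positive and false positive fractions as the reporting threshold is relaxed and then integrate the resulting piecewise-linear ROC curve. The Python sketch below does this for invented counts of normal and abnormal images.

```python
import numpy as np

# Hypothetical rating-scale data: counts of images given each confidence rating,
# 5 = abnormality definitely present ... 1 = abnormality not present.
abnormal_counts = np.array([40, 25, 15, 12, 8])   # truly abnormal images
normal_counts   = np.array([ 5, 10, 15, 25, 45])  # truly normal images

# Sweep the threshold from strict to lax: at each operating point the TP and FP
# fractions are the cumulative proportions of cases rated at or above the threshold.
tpf = np.concatenate(([0.0], np.cumsum(abnormal_counts) / abnormal_counts.sum()))
fpf = np.concatenate(([0.0], np.cumsum(normal_counts) / normal_counts.sum()))

# Trapezoidal area under the piecewise-linear ROC curve.
az = float(np.sum((fpf[1:] - fpf[:-1]) * (tpf[1:] + tpf[:-1]) / 2.0))
print(f"Az = {az:.3f}")
```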
A two-dimensional ROC curve can be converted into a three-dimensional ROC curve in
which the signal intensity is the third dimension (see Figure 7.21). For example when the
signal intensity is high, the ROC curve will be close to the axes. When the signal intensity
is zero the ROC curve will be the 45 degree guessing line. Thus a constant value for the
imaging parameter produces an ROC curve whereas a profile through the surface at a
constant FP fraction yields the response curve that would be produced by the method of
constant stimulus.
The simple approach to ROC analysis described here can be applied very success-
fully when the whole image is being assessed as normal/abnormal. However, there is an
obvious weakness if the observer’s task is to identify a specific small lesion, for exam-
ple, a lung nodule or a cluster of microcalcifications on a mammogram. The observer
might ‘correctly’ identify the image as abnormal on the basis of an (erroneously) per-
ceived abnormality in a part of the image that is actually normal. When the observer
is required both to identify abnormal images and to specify correctly the position of
the abnormality, localisation ROC curves (LROC) may be constructed. A further com-
plication with clinical images is that the radiologist’s decision is often based on more
than one feature in the image. Clearly, there will be greater confidence in the conclusion
when the different features provide corroborative information but the different image
features may provide conflicting information. For a review of variants on ROC analysis
see Chakraborty (2000).
FIGURE 7.21
A ‘three-dimensional’ graph in which two-dimensional ROC curves are plotted as a function of signal intensity,
thereby generating a surface in three dimensions. (Reproduced with permission from ICRU Report 54, Medical
imaging—The assessment of image quality. International Commission on Radiation Units & Measurements,
Bethesda, USA 1996.)
Insight
Free Response Operating Characteristic (FROC) and Alternative Free
Response Receiver Operating Characteristic (AFROC) Curves
These are two approaches developed to overcome some limitations of simple ROC analysis. In the
FROC method the observer searches each image for suspicious lesions and indicates both their
location and their confidence level, usually from 1 to 4, where 4 is the highest level of confidence.
If an indicated lesion is within a predetermined distance of an actual lesion, this is recorded as a
TP, otherwise it is a FP. A plot of the fraction of lesions localised on the y-axis (range 0–1) against
the mean number of FPs per image (on the x-axis) is the FROC curve. Low values of the mean
number of FPs per image indicate a strict criterion of reporting lesions. There is no limit to the
x-axis.
Since the mean number of FPs/image is unbounded, there is no equivalence between ROC
curves and FROC curves. However, by considering the signal and noise stimuli as overlapping
Gaussian variables, somewhat analogous to Figure 7.14, it is possible to convert the x-axis into a
probability of a FP image, (i.e. one or more FPs). This plot is known as an AFROC curve with limits
of 0 and 1.0 on both axes. The area under the curve Al (to distinguish from Az) ranges from 0 for
random guessing (cf. Az = 0.5) to 1.0 for a perfect observer.
Note that FROC and AFROC methodology require a set of images where the location of every
true lesion is known. This restricts somewhat the use of the methods on clinical images.
Good work can be done on systematic analysis of image quality using analogue images.
However, digitised images are preferable because the data are available in numerical form.
For example, it is possible to investigate the interaction between, say contrast, resolution
and noise for carefully controlled, quantitative changes in the images or to look at the
relative importance of structured or unstructured noise. One reason why this is desirable
is that the eye-brain system is very perceptive, so there is only a narrow working region
in which an observer might be uncertain or different observers disagree. Digital images
provide more opportunity for fine tuning than analogue images.
Finally, under computer control it is possible to superimpose known lesions of different
size, shape and contrast on normal or apparently normal clinical images that have been
digitised. In the subsequent analysis the true abnormals can be unambiguously identified.
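A minimal sketch of this idea is shown below: a circular ‘lesion’ of known position, size and contrast is superimposed on a noisy, uniform background image. The function name, the Poisson background and all numerical values are assumptions for illustration, not a description of any particular study.

```python
import numpy as np

def add_simulated_lesion(image, centre, radius, contrast):
    """Superimpose a circular low-contrast 'lesion' on a digitised image.

    contrast is the fractional change in pixel value inside the lesion; because
    the true position is known exactly, observer responses can be scored
    unambiguously in a subsequent ROC study.
    """
    out = image.astype(float)
    rows, cols = np.indices(image.shape)
    mask = (rows - centre[0])**2 + (cols - centre[1])**2 <= radius**2
    out[mask] *= (1.0 + contrast)
    return out

# Example: a flat background with Poisson noise and a 2% lesion of radius 10 pixels.
rng = np.random.default_rng(0)
background = rng.poisson(1000, size=(256, 256))
with_lesion = add_simulated_lesion(background, centre=(128, 128), radius=10, contrast=0.02)
```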
FIGURE 7.22
Use of ROC curves (true positive percentage versus false positive percentage, for pixel sizes of 0.1 mm, 0.5 mm and 1.0 mm) to show how diagnostic accuracy for fine detail in the pneumothorax varies with pixel size in digitised images. (Adapted with permission from MacMahon H, Vyborny CJ, Metz CE et al. Digital radiography of subtle pulmonary abnormalities—An ROC study of the effect of pixel size on observer performance. Radiology 158, 21–26, 1986.)
Although a pixel size of 0.1 mm × 0.1 mm may be acceptable for chest radiographs,
subsequent work by the same authors (Chan et al. 1987) showed that for mammograms
detection accuracy for microcalcifications was still significantly reduced if images were
only digitised to this level.
TABLE 7.2
Radiographer and Radiologist Performances in Reading Mammographic Films by Comparison of the Area under an ROC Curve

                                           Az′ values
                                     Radiographers   Radiologists
Pre-training                              0.77           0.87
Post training                             0.88           0.89
After 200 screening mammograms read       0.91            –
After 5000 screening mammograms read      0.88            –

Source: Adapted from Pauli R, Hammond S, Cooke J and Ansell J. Radiographers as film readers in screening radiography: An assessment of competence under test and screening conditions. Br. J. Radiol. 69, 10–14, 1996.
TABLE 7.3
Mean Values of AZ for Five Radiologists Who Read Uncompressed and Compressed Chest Radiographs

                             AZ
                      Digitised        Compressed
Interstitial disease  0.95 ± 0.04      0.95 ± 0.03
Lung nodules          0.87 ± 0.06      0.88 ± 0.05
Mediastinal masses    0.83 ± 0.08      0.80 ± 0.10

Source: Adapted from Aberle DR, Gleeson F, Sayre JW et al. The effect of irreversible image compression on diagnostic accuracy in thoracic imaging. Invest. Radiol. 28, 398–403, 1993.
A digitised radiograph may contain several million pixels. Data compression will then have a number of advantages, especially in respect of
less archival demands, faster data transfer and faster screen build up.
Compression algorithms are basically of two types. The first is reversible or loss-less
compression. Provided that the compression/decompression schemes are known, a true
copy of the original image may be recovered. Unfortunately, the maximum compression
ratio that can be achieved in this way is about 4:1, often less with noisy data.
Higher compression ratios require the use of irreversible processing which results in
some alteration in the information content. Note that this is not a problem per se. Most
image data undergoes some alteration in information content before it is viewed (e.g. filter-
ing the raw projections during image reconstruction in CT—section 8.4.2). The important
requirement is to be able to image a lesion optimally without introducing false readings.
Provided that data compression does not influence this process negatively, a certain degree
of loss of information can be tolerated.
A full discussion of the relative merits of different compression algorithms is outside
the scope of this book but some further detail is given in Section 17.8. It is clear that ROC
analysis provides a useful way to select the approach which gives maximum compression
with minimum loss of image quality. For example, Aberle et al. (1993) studied 122 PA chest
radiographs which had been digitised at 2000 × 2000 pixels and 12 bit resolution and then
compressed at an approximate compression ratio of 20:1. Five radiologists read the digi-
tised images and the digitised/compressed images and results were analysed using ROC
curves (see Table 7.3).
Although the compression process was irreversible, there was no evidence of any differ-
ence in detectability for interstitial disease, lung nodules or mediastinal masses.
Improvements in image quality frequently require an increase in the number of photons used and hence radiation dose. Received wisdom is that there is a concomitant extra radiation risk
associated with any increase in dose and optimisation in terms of ensuring that all radia-
tion doses to patients are ‘as low as reasonably achievable’ will be considered in Chapter 13
when doses and risks to patients are reviewed. However, the corollary to this constraint
is that all other aspects of the imaging process must also be optimised as failure to do so
may have an indirect impact on the dose of radiation required to provide the necessary
diagnostic information.
These aspects are many and varied, ranging from purely physical features of the imag-
ing process to evaluation of the process of image interpretation. A comprehensive review
is beyond the scope of the book but it will be useful to consider a few illustrative examples.
The ones chosen have two things in common. First, they are all fairly topical and second
there is a quantitative measure of the optimisation process in each of them. The latter point
seems entirely appropriate for a book with strong emphasis on physics and highly desir-
able in ensuring that the overall process of image optimisation gradually improves.
Insight
More on Optimisation of kVp
The relationship between image quality and tube potential is a complex function of the primary
X-ray spectrum, scatter spectrum and the energy dependence of the absorption efficiency of the
receptor. For digital radiography this suggests that an optimisation strategy concerning itself with
the energy of the primary beam spectrum can do more than examine the peak tube potential (kVp)
alone. Other studies (e.g. Samei et al. 2005) have investigated the influence of the overall shape
of the spectrum after alteration by additional tube filtration. This can further reduce patient dose
while keeping image contrast at acceptable levels by taking advantage of the wide dynamic range
and post-processing options of digital systems—not usually achievable with film-screen systems
due to their inflexible contrast properties. Of course, additional filtration does lead to longer expo-
sure times, increasing both wear on the X-ray tube and any movement unsharpness.
FIGURE 7.23
Contrast-detail curves showing threshold luminance contrast for different viewing box luminance settings.
(Reproduced with permission from Robson K J. An investigation into the effects of sub-optimal viewing condi-
tions in screen-film mammography. Br. J. Radiol. 81, 219, 2008.)
FIGURE 7.24
Curves showing the contrast-detail performance (threshold luminance contrast, %, versus detail diameter, mm) for films with an OD of 2.48 under different levels of uniform reflection (0, 1102 and 3480 cd m–2). (Reproduced with permission from Robson K J. An investigation into the effects of sub-optimal viewing conditions in screen-film mammography. Br. J. Radiol. 81, 219, 2008.)
Perception was found to be relatively insensitive to ambient light and glare but was very sensitive to reflected light, with the threshold
detail diameter increasing with the amount of reflected light, especially when reflection
was non-uniform (Figure 7.24). This work was done within the context of mammography
because viewing conditions are especially critical in the detection of subtle abnormalities
in mammograms. However, the conclusions are likely to apply to the viewing of clinical
images generally and may lead to a reappraisal of the optimum viewing conditions.
Insight
More on Viewing Conditions
The findings of this work are at variance with those of Blackwell, especially regarding the effect
of the level of illumination. A possible explanation is that Blackwell used noiseless signals so the
only noise contribution was from the visual system σvis. If there is a significant external noise con-
tribution σext, for example, from the film or in a digital system from the electronics, the total noise
σtot will be (σ²vis + σ²ext)½. σvis may be affected by light intensity but if σext > σvis this effect may be
obscured. Evidence that the total noise in the recent work was much higher than in Blackwell’s
work comes from the fact that the threshold contrast was higher.
FIGURE 7.25
The percentage of nodules holding visual attention over a 15 s time interval for the four possible decision
outcomes of true positive (TP) and false positive (FP), which cannot be distinguished (solid line), true negative
(TN) (dotted line), and false negative (FN) (dashed line). (Redrawn with permission from Manning D J, Ethell
S C and Donovan T. Detection or decision errors? Missed lung cancer from the anterior chest radiograph. Br. J.
Radiol. 77, 231–235, 2004.)
work is important because, if confirmed, it may add a new dimension to the way image
interpretation is taught and could influence the way in which artificial intelligence is used
to assist the radiologist.
Large numbers of images of the same patient are often generated in 3-D imaging techniques such as helical CT and mag-
netic resonance imaging (MRI) so some of these studies are being investigated. The other
general areas in which CAD has been applied are those where current practice leads to a
high percentage of false negative reports.
The basic technologies involved in CAD schemes are (Doi 2005)
1. Image processing for detection and extraction of abnormalities (see Section 6.11)
2. Quantitation of image features for candidates of abnormalities
3. Data processing for classification of image features between normals and
abnormals
4. Quantitative evaluation and retrieval of images similar to those of unknown
lesions
5. Quantitative studies of observer performance
Most CAD systems work on the basis of ‘prompts’ to the observer identifying suspicious
locations and allocating a ‘probability of abnormality’. However, the value of this informa-
tion is critically dependent on the quality of the algorithm used to generate the prompt. In
theory there are a large number of ways in which the same images might be processed and
many variations in which the ‘prompt level’ might be decided. Exact details of algorithms
are often commercially secret but examples of the image processing that may be used are
well documented.
In the final assessment of a CAD system, ROC analysis (see Section 7.8) is frequently a use-
ful quantitative tool. Consider, for example, the work of Doi and colleagues on detection
and classification of lung nodules on digital chest radiographs. As noted in the previous
section the literature shows that radiologists may miss about 30% of lung nodules on chest
radiographs and even when a nodule is found, classification into benign and malignant is
difficult. Image processing techniques were developed into algorithms for nodule detec-
tion and shown both to assist many radiologists and to reduce variation in detection accu-
racy due to variation in radiologists’ experience.
Further algorithms were developed to provide a second opinion on the likelihood of
malignancy based on image features such as the shape and size of the nodule and the
distribution of pixel values inside and outside the nodule. Linear discrimination analy-
sis is a statistical method that determines the differences between two or more types of
object based on an assessment of their features and uses that information to place the
object into predetermined categories—such as benign or malignant. This analysis was
used to assign a likelihood measure of malignancy. Observer performance studies were
FIGURE 7.26
ROC curves used to evaluate the distinction between malignant and benign nodules on chest radiographs
with and without computer-aided diagnosis. (With permission from Doi K. Review Article: Current status and
future potential of computer-aided diagnosis in medical imaging. In Computer Aided Diagnosis, Gilbert F J and
Lemke H, eds., Br. J. Radiol. 78, Special Issue, S3–19, 2005.)
analysed by ROC analysis and showed that radiologists could better distinguish benign
and malignant nodules when they had the benefit of computer prompts (Figure 7.26). In
fact the highest AZ value was achieved by computer analysis alone so perhaps automated
computer diagnosis is not dead after all!
Rigorous evaluation of a diagnostic imaging technique in a clinical trial must satisfy several constraints:
1. The evaluation must be prospective with images assessed within the reference frame of the normal routine work of the unit—not in some academic ivory tower.
2. A typical cross-section of normal images and abnormal images from different dis-
ease categories must be sampled since prevalence affects the predictive value of a
positive test.
3. A sufficient number of cases must enter the trial to ensure adequate statistics.
4. Equivalent technologies must be compared. It is meaningless to compare the
results obtained using a 1985 ultrasound scanner with those obtained using a 2005
CT whole body scanner or vice versa.
5. Evaluations must be designed so that the skill and experience of the reporting
teams do not influence the final result.
6. Finally, and generally the most difficult to achieve, there must be adequate inde-
pendent evidence on each case as to whether it should be classified as a true nor-
mal or a true abnormal.
If these constraints can be met, methods are readily available for representing the results.
For example, if images are simply classified as normal or abnormal, the four possible out-
comes to an investigation can be expressed as a 2 × 2 decision matrix (Table 7.4).
Sensitivity = Abnormals detected / Total abnormals = a/(a + b)

Specificity = Normals detected / Total normals = d/(c + d)

and

Predictive value of a positive test = Abnormals correctly identified / Total abnormal reports = a/(a + c)
It is well known that prevalence has an important effect on the predictive value of a positive
test and to accommodate possible variations in prevalence of the disease, Bayes Theorem
may be used to calculate the posterior probability of a particular condition, given the test
results and assuming different a priori probabilities.
TABLE 7.4
A 2 × 2 Decision Matrix for Image Classification

                 Abnormal Images    Normal Images
True abnormal          a                  b
True normal            c                  d
Totals               a + c              b + d
TABLE 7.5
Examples of the Confidence Ratings Used to Produce an ROC Curve

Rating   Description
  5      Abnormality definitely present
  4      Abnormality almost certainly present
  3      Abnormality possibly present
  2      Abnormality probably not present
  1      Abnormality not present
If the prior probability of disease, or prevalence, is P(D+) then it may be shown that the posterior probability of disease when the test is positive (T+) is given by

P(D+|T+) = [a/(a + b)] P(D+) / {[a/(a + b)] P(D+) + [c/(c + d)] P(D−)}

Similarly the posterior probability of disease when the test is negative (T−) is given by

P(D+|T−) = [b/(a + b)] P(D+) / {[b/(a + b)] P(D+) + [d/(c + d)] P(D−)}
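The quantities defined above can be collected into a single calculation. The Python sketch below computes sensitivity, specificity, the predictive value of a positive test and the Bayesian posterior probabilities for an assumed prevalence; the counts a, b, c and d are invented for illustration.

```python
def trial_statistics(a, b, c, d, prevalence):
    """2 x 2 decision matrix (cf. Table 7.4): a = true abnormals reported abnormal,
    b = true abnormals reported normal, c = true normals reported abnormal,
    d = true normals reported normal.  Returns the usual trial statistics plus the
    Bayes posterior probabilities for an assumed prevalence P(D+)."""
    sensitivity = a / (a + b)
    specificity = d / (c + d)
    ppv = a / (a + c)
    p_dpos, p_dneg = prevalence, 1.0 - prevalence
    p_d_given_tpos = (sensitivity * p_dpos) / (sensitivity * p_dpos + (c / (c + d)) * p_dneg)
    p_d_given_tneg = ((b / (a + b)) * p_dpos) / ((b / (a + b)) * p_dpos + specificity * p_dneg)
    return sensitivity, specificity, ppv, p_d_given_tpos, p_d_given_tneg

# Hypothetical trial: 80 TP, 20 FN, 30 FP, 170 TN, assumed disease prevalence 5%.
print(trial_statistics(80, 20, 30, 170, prevalence=0.05))
```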
7.11 Conclusions
The emphasis in this chapter has been on the idea that there is far more information in
a radiographic image than it is possible to extract by subjective methods. Furthermore,
many factors contribute to the quality of the final image and for physiological reasons,
the eye can easily be misled over what it thinks it sees. Thus there is a strong case for
introducing numerical or digital methods into diagnostic imaging. Such methods allow
greater manipulation of the data, more objective control of the image quality and greatly
facilitate attempts to evaluate both imager performance and the overall diagnostic value of
the information that has been obtained.
Thijssen MAO, Thijssen HOM, Merx JL et al., A definition of image quality: The image quality fig-
ure, in Optimisation of Image Quality and Patient Exposure in Diagnostic Radiology, B M Moores,
B F Wall, H Eriska and H Schibilla, eds, BIR Report 20, 29–34, 1989.
Zhao W, Andriole K P and Samei E, Digital radiography and fluoroscopy, in Advances in Medical
Physics, A B Wolbarst, R G Zamenhof and W R Hendee, eds, Medical Physics Publishing,
Madison, Wisconsin, 1–23, 2006.
Exercises
1. What factors affect (a) the sharpness (b) the contrast of clinical radionuclide
images?
2. List the factors affecting the sharpness of a radiograph. Draw diagrams to illus-
trate these effects.
3. What do you understand by the ‘quality’ of a radiograph? What factors affect the
quality?
4. Explain the terms subjective and objective definition, latitude and contrast when
used in radiology.
5. Explain what is meant by the term ‘perception’ in the context of diagnostic imag-
ing. Explain how perception studies may be used to assess the quality of diagnos-
tic images.
6. Why does the MTF of an intensifying screen improve if the magnification of the
system is increased?
7. Show that enlargement of an image such that the photons are spread over a larger
area increases quantum noise by (N/m2)½ where N is the original number of pho-
tons mm–2 and m is the magnification. Is quantum noise increased by magnifica-
tion radiography?
8. Explain how quantum mottle may limit the smallest detectable contrast over a
small 1 mm2 area in a digitised radiograph.
9. What is an ROC curve and how is it constructed? Give examples of the use of ROC
curves in the assessment of image quality.
10. Discuss the concept of optimisation in radiological imaging and give examples.
8
Tomographic Imaging with X-Rays
SUMMARY
CONTENTS
8.1 Introduction......................................................................................................................... 258
8.2 Longitudinal Tomography................................................................................................ 260
8.2.1 Digital Tomosynthesis............................................................................................ 262
8.3 Principles of X-ray Computed Tomography................................................................... 262
8.4 Single-Slice CT..................................................................................................................... 265
8.4.1 Data Collection........................................................................................................ 265
8.4.2 Data Reconstruction............................................................................................... 269
8.4.2.1 Filtered Back Projection.......................................................................... 269
8.4.2.2 Iterative Methods..................................................................................... 274
8.5 Spiral CT............................................................................................................................... 276
8.1 Introduction
Three fundamental limitations can be identified that apply to planar imaging with both
X-rays and gamma rays from radionuclides (see Chapter 10).
(i) Superimposition
The final image is a two-dimensional representation of an inhomogeneous three-
dimensional object with many planes superimposed. In X-ray imaging the image relates
to the distribution of attenuation coefficients in the different planes through which the
X-ray beam passes.
The confusion of overlapping planes, and in particular the generation of scatter (see
Sections 6.4 and 6.5) results in a marked loss of contrast making detection of subtle anoma-
lies difficult or impossible. A simplified example of the consequence of superimposition is
shown in Figure 8.1.
Anterior or posterior projections (column sums):   6   5   6

        2   2   2   →  6
        2   1   2   →  5     Lateral projections (row sums)
        2   2   2   →  6
FIGURE 8.1
Illustrating how superposition of signals from different planes reduces contrast. If the numbers in the nine
squares represent units of attenuation on a logarithmic scale in nine compartments, the theoretical contrast is
2:1. If, however, the attenuation is viewed along any of the projection lines shown, the contrast is only 6:5.
(The figure shows the X-ray tube moving from S1 to S2, a distance S, in one plane and the image receptor moving from F1 to F2 in a parallel plane; the swing angle is 2φ; the plane of cut passes through P; a point X lies at a distance t from the plane of cut, and P1, P2 and X1, X2 are the corresponding image positions on the receptor.)
FIGURE 8.2
Diagram illustrating the movement of the X-ray tube and the image receptor in linear longitudinal tomography
and the blurring that occurs for points that are not in the plane of cut.
Because blurring increases only gradually with distance from the plane of cut, the plane of cut is not completely separated from adjacent planes. There is a minimum
amount of blurring that the eye can discern and this effectively determines the ‘thickness’
of the plane that is in focus. The thickness of cut increases if the value assumed for mini-
mum detectable blurring increases, but decreases if the angle of swing increases. It also
decreases if the ratio focus-film distance/focus-plane of cut distance increases.
Insight
It is instructive to calculate the thickness of the plane in terms of the minimum amount of blurring,
B say, and the geometry of the system. Referring again to Figure 8.2, suppose the point X suffers
the minimum amount of blurring that is just detectable. From the previous discussion:
P1P2 = S · (d/D)    (where S = S2 − S1)

and

X1X2 = S · (d + t)/(D − t)

Hence

B = S[(d + t)/(D − t) − d/D] ≈ S · t · (D + d)/D²    (since t ≪ D)

Since S/D ≈ 2φ,

B = 2tφ(D + d)/D

The same argument can be applied to points below P, so the thickness of cut is

2t = B · (1/φ) · D/(D + d)    (8.1)
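Equation 8.1 is easily evaluated numerically. The sketch below does so for assumed values of the minimum discernible blurring B, the half-angle of swing φ and the distances D and d used in Figure 8.2; all the numbers are illustrative only.

```python
import math

def slice_thickness(B, phi_deg, D, d):
    """Thickness of cut 2t = B * (1/phi) * D / (D + d)   (Equation 8.1).

    B        - minimum blurring the eye can discern (mm)
    phi_deg  - half the angle of swing, given in degrees and converted to radians
    D, d     - the geometrical distances of Figure 8.2 (mm)
    """
    phi = math.radians(phi_deg)
    return B * (1.0 / phi) * D / (D + d)

# Illustrative values only: B = 0.7 mm, half-swing 20 degrees, D = 700 mm, d = 300 mm.
print(f"thickness of cut ~ {slice_thickness(0.7, 20, 700, 300):.1f} mm")
```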
Whilst it may be desirable for the slice thickness (t) to be small, the contrast between two
structures will be proportional to (µ 2 – µ 1) · t. Consequently, large angle tomography giv-
ing thin slices may be used when there are large differences in atomic number or density
between structures of interest, but if (µ 2 – µ 1) is small, a somewhat thicker slice may be
required to give sufficient contrast.
Note the following additional points:
1. The plane of cut may be changed by altering the level of the pivot.
2. Since the tube focus and film move in parallel planes, the magnification of any
structure that remains unblurred throughout the movement remains constant.
This is important because the relatively large distance between the plane of cut
and the receptor means that the image is magnified.
3. For two objects that are equidistant from the plane of cut, one above and one below,
the blurring is greater for the one that is further from the receptor. This may influ-
ence patient orientation if it is more important to blur one object than the other.
4. Care is required to ensure that the whole exposure takes place whilst the tube and
image receptor are moving.
5. The tilt of the tube head must change during motion—this minimises reduction
in exposure rate at the ends of the swings due to obliquity factors but there is still
an inverse square law effect.
If a narrow beam of X-rays of incident intensity I0 is transmitted along a line at angle θ through the slice, emerging with intensity Iθ, then

Pθ = ln (I0/Iθ)
where Pθ is the projection of all the attenuation coefficients along the line at angle θ.
FIGURE 8.3
A slice through the patient considered as an array of attenuation coefficients µ(x,y), from µ(1,1) up to µ(512,512). If θ = 0 is chosen arbitrarily to be along the y-axis, the arrows show the P90 projections.
For example, for θ = 90°, one of the values of P90, say P90(3) to indicate that it is the projection through the third pixel in the y direction, is given by

P90(3) = µ(1,3) + µ(2,3) + µ(3,3) + … + µ(512,3)

Note that, strictly, each µ value should be multiplied by the x dimension of the pixel in the direction of X-ray travel. All values of x are the same in this projection. However, pixels contribute different amounts to different projections.
The problem is now to obtain sufficient values of P to be able to solve the equation for
the (512)², about 260,000, values of µ(x,y). The importance of computer technology now
becomes apparent, since correlating and analysing all this information is beyond the capa-
bility of the human brain but is ideal for a computer, especially since it is a highly repeti-
tive numerical exercise.
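To make the idea of a set of projections concrete, the short Python sketch below forward-projects a toy attenuation map at many angles to build a crude set of profiles (a sinogram). It is only an illustration under simple assumptions (it uses scipy.ndimage.rotate and ignores beam width, spectrum and noise); it is not how a scanner actually acquires data.

import numpy as np
from scipy.ndimage import rotate

def projections(mu, angles_deg, pixel_size=1.0):
    """Return one projection profile per angle for a square attenuation map mu."""
    sinogram = []
    for theta in angles_deg:
        # rotate the object rather than the beam; linear interpolation keeps it simple
        rotated = rotate(mu, theta, reshape=False, order=1)
        sinogram.append(rotated.sum(axis=0) * pixel_size)   # sum of mu along each column
    return np.array(sinogram)

# toy 64 x 64 object with a denser square at the centre (arbitrary units)
mu = np.zeros((64, 64))
mu[24:40, 24:40] = 0.02
sino = projections(mu, np.arange(0, 180, 1))
print(sino.shape)   # (180, 64): one profile of 64 samples per degree of rotation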
For each pixel in the reconstructed image, a CT number is calculated which relates the
linear attenuation coefficient for that pixel µ(x,y) to the linear attenuation coefficient for
water µw according to the following equation:
CT number = K · (µ(x,y) − µw)/µw
where K is a constant equal to 1000 on the Hounsfield scale. Note that since the image
space is divided into pixels and the data must take discrete values (CT numbers), CT is a
true digital technique. Some typical values for CT numbers are given in Table 8.1.
Contrast may now be expressed in terms of the difference in CT numbers between adja-
cent pixels. Since a change of 1 in 1000 is 0.1%, contrast changes of about 0.2%–0.3% can be
detected. Compare this with 2% visible contrast for fairly large objects on a good planar
radiograph. Note that at the high tube potentials used in CT, the Compton effect domi-
nates. As discussed in Section 3.4.3, the Compton effect depends on free electrons and is
fairly constant for biological elements when expressed per unit mass. However, because of
density differences, the number of electrons per unit volume varies and it is this variation we
are measuring in CT.
An interactive display is normally used so that full use can be made of the wide range
of CT numbers. Since only about 30 grey levels are distinguishable by the eye on a good
black and white monitor, adjustment of both the mean CT number (the window level), and
the range of CT numbers covered by the grey levels (the window width) must be possible
quickly and easily (see Figure 8.4 and Section 6.3.3). A wide window is used when compar-
ing structures widely differing in µ, but a narrow window must be used when variations
in µ are small. Consider, for example, a 1% change in µ, which represents a range of about
10 CT numbers. With a window from +1000 to –1000 this may show as the same shade
of grey. However, if a narrow window, ranging from –50 to +50, is selected, 10 CT numbers
represents a significant proportion of the range and several shades of grey will be displayed.

TABLE 8.1
Typical CT Numbers for Different Biological Tissues

Tissue      Range (Hounsfield Units)
Air         –1000
Lung        –200 to –500
Fat         –50 to –200
Water       0
Muscle      +25 to +40
Bone        +200 to +1000

FIGURE 8.4
Illustrating the relationship between window width and window level when manipulating CT numbers.
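The CT number definition and the window/level operation described above are easy to sketch in code. In the minimal Python example below the value assumed for µw (0.19 cm–1) and the 256 display grey levels are illustrative assumptions only.

import numpy as np

def ct_number(mu, mu_w=0.19, K=1000):
    # CT number = K * (mu - mu_w) / mu_w; Hounsfield scale when K = 1000
    return K * (mu - mu_w) / mu_w

def apply_window(ct, level, width, grey_levels=256):
    """Map CT numbers to display grey levels for a given window level and width."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(ct, lo, hi)
    return np.round((clipped - lo) / (hi - lo) * (grey_levels - 1))

ct = np.array([-1000, -60, 0, 30, 500])          # air, fat, water, muscle, dense tissue
print(apply_window(ct, level=40, width=400))     # narrow soft-tissue window: values spread out
print(apply_window(ct, level=0, width=2000))     # wide window: soft tissues almost indistinguishable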
8.4 Single-Slice CT
8.4.1 Data Collection
The process of data collection can best be understood in terms of a simplified system.
Referring to Figure 8.5 which shows a single source of X-rays and a single detector, all the
projections P0(y), that is, from P0(1) to P0(512), can be obtained by traversing both the X-ray
source and detector in unison across the section. Both source and detector now rotate
through a small angle δθ and the linear traverse is repeated. The whole process of rotate
and traverse is repeated many times such that the total rotation is at least 180°. If δθ = 1°
and there is a 360° rotation, 360 projections will be formed.
Although this procedure is easy to understand and was the basis of the original or first
generation systems, it is slow to execute, requires many moving parts and requires a scan-
ning time of several minutes. Thus a major technological effort has been to collect data
faster and hence reduce scan time, thereby reducing patient motion artefacts. Historically,
scan times were reduced in second generation scanners by using several pencil beams and
a line of detectors.
Third generation scanners achieved faster scan times using an arc of detectors and a fan-
shaped beam, wide enough to cover the whole body section (Figure 8.6a). One of the initial
problems with this generation of scanners was that detector instabilities led to ring shaped
artefacts within images. Fourth generation scanners used a complete ring of detectors and
only the X-ray source rotated (Figure 8.6b). This design did not result in faster scan times
than third generation scanners. However, it offered advantages in terms of detector cali-
bration, and consequently the ring artefact issue was largely overcome. At this stage, scan
rotation times were typically 1–2 s.
FIGURE 8.5
Illustration of the collection of all the projections using a single X-ray tube and detector and a translate-rotate movement (a first generation scanner). The projection Pθ is shown for θ = 0 and for a rotated tube-detector position.
FIGURE 8.6
Design of different generations of CT scanners. (a) A rotate-rotate (third generation) scanner: a fan beam of X-rays and an arc of detectors rotate together around the patient. (b) A fourth generation scanner, with a complete ring of stationary detectors, in which only the tube rotates.
The fourth generation design has not proved popular, however, and modern multi-slice
systems are all based on third generation designs. Improvements in the stability of solid-
state detectors mean that ring artefacts are no longer a significant issue with this design.
These systems have achieved further increases in overall scan speed using an array of mul-
tiple rows of detectors to acquire more than one image in a single rotation (see Section 8.6).
Modern equipment contains tens of thousands of detectors and rotation times of around
0.3 s can be achieved.
Even faster scan times can be achieved using ‘scanners’ with no moving parts. In one
approach, termed electron beam CT, there is a semicircle of tungsten targets below the
patient and a complementary semicircle of detectors above the patient. Note that recon-
struction is possible from profiles collected over angles of a little more than 180°. Collection
over a full 360° is not necessary. The beam of electrons is deflected electromagnetically so
that the tungsten targets are bombarded in sequence. Very high tube currents can be used
since each part of the anode is only used very briefly and scan times as short as 50 ms have
been achieved.
In spiral CT (see Section 8.5) continuous X-ray output may be required for up to
60 s. This places a number of new demands on the design and construction of X-ray
tubes and generators, especially with respect to heat dissipation. For example, 200 mAs
per rotation might be required to achieve the necessary signal-to-noise ratio in the
image. At an effective energy of 75 keV, 15 kJ of heat is generated per rotation, or for
100 rotations 1.5 MJ of heat is released in the anode. To improve heat storage capacity
and tube rating, larger, faster rotating ceramic-mounted anodes have been developed
(see Section 2.3.3). Anode angles tend to be smaller than for normal X-ray tubes and
to achieve more uniform heat transfer and dissipation the anode is flat with the X-ray
beam angled onto it.
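The heat-loading figures quoted above follow from simple arithmetic, sketched below; as in the text, the effective energy is used as a stand-in for the tube potential, which is only an approximation.

# energy deposited per rotation ~ (charge per rotation) x (effective tube potential)
mAs_per_rotation = 200          # tube current-time product per rotation
effective_kV = 75               # effective energy quoted in the text
heat_per_rotation_kJ = mAs_per_rotation * effective_kV / 1000.0
print(heat_per_rotation_kJ)                    # 15.0 kJ per rotation
print(heat_per_rotation_kJ * 100 / 1000.0)     # 1.5 MJ released in the anode over 100 rotations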
Table 8.2 shows typical specifications for an X-ray tube used for CT. Note that rotation
times are not likely to get significantly shorter, given the size and weight requirements of
the anode for heat storage. A rotation time of 0.4 s imposes a mechanical rotational force of
15 g on the anode (where g is the force experienced by unit mass as the result of gravity).
For a rotation time of 0.2 s this force would rise to 60 g.
Other constraints ensure optimum image quality. Voltage fluctuations must be less than
1% and this requires high frequency power supplies with the voltage controlled by a dedi-
cated microprocessor. Generator output must also be controlled to better than 1% so the
tube operates in fast pulse mode (a few ms) using grid control (see Sections 2.3.1 and 2.3.6).
High resolution requires a small focal spot size, typically of the order of 0.5–1.0 mm and
the tube must be arranged with its long axis perpendicular to the fan beam to avoid heel
effect asymmetry.
TABLE 8.2
Typical Specifications for an X-ray Tube Used for CT
Typical Performance
kV 80–140
Maximum mA 500–800
Generator power (kW) 60–100
Anode heat capacity (MJ) 6
Anode angle (°) 7
Focal spot size (mm) 0.5–1.0
Rotation time (s) 0.3–1.0
Choice of X-ray energy is a compromise between high detection efficiency and good
image contrast on the one hand, low patient dose and high tube output on the other.
Calculation of a unique matrix of linear attenuation coefficients assumes a monoenergetic
beam. Heavy filtration is required to approximate this condition—for example, 0.4 mm Cu
might be used with a 120 kV beam to give an effective energy of about 75 keV. When cor-
rections are made for differences in mass attenuation coefficient and density, 0.4 mm Cu
is approximately equivalent to 6.5 mm Al. Note, however, that the characteristic radiation
that would be emitted from copper (Z = 29, K shell energy = 9.0 keV) as a result of photo-
electric interactions would be sufficiently energetic to reach the patient and increase the
skin dose. Thus the copper filter is backed by aluminium to remove this component.
Modern scanners will also correct for the effects of beam hardening by software in the
reconstruction programme. Failure to correct adequately for beam hardening will cause
image artefacts (see Section 8.9).
In single-slice CT, beam size is restricted by adjustable slit-like collimators near the tube.
Collimators at the detector can also be used to define the slice thickness more precisely and
to control Compton scatter, especially at the higher energies. Normally only a few percent
of scattered radiation is detected. Note that scattered radiation is more of a problem with a
fan beam than with a pencil beam. The reader may find the discussion of broad beam and
narrow beam attenuation in Section 3.6 helpful in understanding the reason. Note also
that scatter rejection is better in third generation designs than fourth. In third generation
CT, the tube and detectors are in fixed orientations with respect to each other. This allows
the use of detector collimation which is focussed on the X-ray source. Conversely, in fourth
generation CT, the tube moves relative to the detectors, and the degree of detector collima-
tion must be reduced.
The requirements of radiation detectors for CT are a high detection efficiency (i.e. high
atomic number and density), wide dynamic range, good linearity, fast response and short
afterglow to allow fast gantry rotation speeds (e.g. for ECG gated cardiac studies), stability,
reliability, small physical size, high packing density, high geometric efficiency and reason-
able cost.
This has been an area of intensive commercial development. Originally thallium-
doped sodium iodide NaI (Tl) crystals and photomultiplier tubes (PMTs) were used.
One problem with NaI (TI) crystals is afterglow—the emission of light after exposure to
X-rays has ceased. This effect is particularly bad when the beam has passed through the
edge of the patient and suffered little attenuation because the flux of X-ray photons inci-
dent on the detector is high. Detectors are now radiation-sensitive solid-state material
(e.g. cadmium sulphate, gadolinium oxysulphide, gadolinium orthosilicate) and PMTs
have been replaced by high purity temperature stabilised silicon photodiodes. Modern
solid-state detectors have an 80%–90% X-ray quantum detection efficiency, depending
on kVp and object, and 80%–90% geometric efficiency. They have low electronic noise
thus ensuring that quantum noise is the limiting noise contribution. They respond rap-
idly and afterglow signals drop by several orders of magnitude in a few ms, so they are
suitable for a CT scanner with very high sampling rates. They have good anti-scatter
collimation.
The detector element spacing can be as low as 0.5 mm and some designs incorporate a
quarter detector shift in the x-y plane to double the spatial sampling density. The current
from the silicon photodiode can be amplified and converted into a digital signal in an
ADC. The dynamic range is better than 16 bits.
In spite of their inherently low sensitivity, gas-filled ionisation chambers were initially
quite widely used as detectors in third generation scanners, filled with high atomic number
inert gases (such as xenon) at high pressure to improve the detection efficiency.
FIGURE 8.7
The projections that will contribute to the calculation of linear attenuation coefficient at some arbitrary point (x,y), a distance l from the (x,y) origin.
In simple back projection, an estimate of the required value of µ(x,y) is obtained by summing the contributions from every profile passing through that point:

µ(x,y) = Σ Pθ(l)    (summed over all projections)

where Pθ(l) is the projection at angle θ through the point (x,y) and l = x cos θ + y sin θ.
An alternative way to look at this approach, which may be easier to understand (and
explains how the name arose), is to assume that the attenuation of each profile is ‘back pro-
jected’ along the profile, with each pixel making an equal contribution to the total attenu-
ation. Note that because the beam has finite width, the pixel (x,y) may contribute fully to
some projections (Figure 8.8a) but only partially to others (Figure 8.8b). Due allowance
must be made for this in the mathematical algorithm. Similar corrections must be applied
if a fan beam geometry is being used.
If this procedure is applied to a uniform object that has a single element of higher linear
attenuation coefficient at the centre of the slice, it can be shown to be inadequate. For such
an object each projection will be a top-hat function, with constant value except over one
pixel width (Figure 8.9a). These functions will back project into a series of strips as shown
in Figure 8.9b. Thus simple back projection creates an image corresponding to the object
but it also creates a spurious image. For the special case of an infinite number of projec-
tions, the spurious image density is inversely proportional to radial distance r from the
point under consideration.
FIGURE 8.8
Illustration of the difference between the matrix size and the size and shape of the scanning beam for most angles. Assuming, for convenience, that the beam width is equal to the pixel width, then for the P90 projection (Figure 8.8a) it can be arranged for the beam to match the pixel exactly. However, this is not possible for other projections. Only a fraction of the pixel will contribute to the projection Pθ (Figure 8.8b) and allowance must be made for this during mathematical reconstruction. Any variation in µ across a pixel, or since the slice also has depth, the volume element (voxel), will be averaged out by this process.
FIGURE 8.9
(a) The projections (profiles at several different angles through an object space divided into regular pixels; a square object has been chosen to simplify the profiles) for a uniform object with a single element of higher linear attenuation coefficient at the centre of the field of view. (b) Back projection shows maximum attenuation at the centre of the field of view but also a star-shaped artefact.
Further rigorous mathematical treatment is beyond the scope of this book and the reader
is referred to one of the numerous texts on the subject (e.g. Gordon et al. 1975). Suffice to
say that the back-projected image in fact represents a blurring of the true image with a
known function which tends to 1/r in the limiting case of an infinite number of profiles.
Furthermore, correction for this effect can best be understood by transforming the data
into spatial frequencies (see Sections 6.11 and 16.8). In terms of the discussion given there,
the blurred image can be thought of as the result of poor transmission of high spatial fre-
quencies and enhancement of low spatial frequencies. Any process that tends to reverse
this effect helps to sharpen the image since good resolution is associated with high spatial
frequencies. It may be shown that in frequency space the correction is achieved by multiplying
the data by a function that increases linearly with spatial frequency.
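The filtering step can be sketched in a few lines of Python: each profile is Fourier transformed, multiplied by a function rising linearly with spatial frequency (with an optional roll-off before the cut-off, anticipating the noise and aliasing issues discussed below), and transformed back. This is a bare-bones illustration under simplified assumptions, not a manufacturer's reconstruction filter.

import numpy as np

def filter_profile(profile, cutoff_fraction=1.0, window="ramp"):
    """Multiply one projection profile by a ramp filter in frequency space."""
    n = len(profile)
    freqs = np.fft.fftfreq(n)                            # cycles per sample
    ramp = np.abs(freqs)                                 # rises linearly with frequency
    ramp[np.abs(freqs) > 0.5 * cutoff_fraction] = 0.0    # hard cut-off at the chosen v_m
    if window == "hann":                                 # gentler roll-off near the cut-off
        ramp = ramp * 0.5 * (1 + np.cos(np.pi * freqs / (0.5 * cutoff_fraction)))
    return np.real(np.fft.ifft(np.fft.fft(profile) * ramp))

profile = np.zeros(128)
profile[60:68] = 1.0                                     # a crude 'top-hat' projection
filtered = filter_profile(profile, cutoff_fraction=0.8, window="hann")
print(filtered[55:73].round(3))   # negative lobes either side of the peak; when back projected
                                  # these cancel the 1/r blurring of the simple method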
An important factor determining the upper limit to the spatial frequency (υm) is the
amount of noise that can be accepted in the image. This upper limit is also related to the
size of the detector aperture, the effective spot size of the X-ray tube and the frequency
of sampling. The value of υm imposes a constraint on the system resolution since for an
object of diameter D reconstructed from N profiles, the cut-off frequency υm should be of
the order of N/(πD) and ensures spatial resolution of 1/2υm. For example, if N = 720 and
D = 40 cm, the limit placed on spatial resolution by finite sampling is about 1 mm. Note
that reference to Figure 8.9 shows that sampling is higher close to the centre of rotation
than at the periphery. Thus if only the centre of the field of view is to be reconstructed,
higher spatial frequencies may contribute and better resolution may be achieved.
Finally, the correction function to be applied to the back-projection data cannot rise lin-
early to υm and then stop suddenly because this will introduce another source of image
artefacts. Figure 8.10 shows how a sharp discontinuity in spatial frequency translates into
a loss of sharpness in real space—the effect is very similar to the diffraction of light at a
straight edge.
FIGURE 8.10
The effect of a sharp discontinuity in a filter function that has a constant amplitude up to υm (left) on the sharpness of the edge of an image (right).
FIGURE 8.11
An example of aliasing. If the sine wave is sampled at A B C D E, it is uniquely determined. If it is only sampled at A C E, the values may be fitted by a lower frequency curve shown dotted.
Insight
Aliasing
Many texts state that the generation of spurious spatial frequencies resulting in multiple low inten-
sity images owing to inadequate sampling of data is known as aliasing. However, relatively few
give a detailed explanation in simple terms.
A good starting point is the approach of Cherry et al. (2003). Scan profiles are not continuous
functions but collections of discrete point by point samples of the scan projection profile (in digi-
tal and CT work, the pixel). The distance between these points is the linear sampling distance. In
addition, profiles are obtained in CT scans only at a finite number of angular sampling intervals
around the object. The choice of linear and angular sampling intervals and the maximum spatial
frequency of the cut-off filter (the cut-off frequency) in conjunction with the detector resolution,
determine the reconstructed image resolution.
The sampling theorem (Oppenheim and Willsky 1983) states that to recover spatial frequencies
in a signal up to a maximum frequency υm, the Nyquist frequency, requires a linear sampling dis-
tance d given by d ≤ 1/2υm. Alternatively, if the value of d is fixed, there is a limit on υm of 1/2d.
If frequencies higher than υm are transmitted (e.g. by the MTF of the detector) or amplified by the
filter function, aliasing will result.
To understand how under-sampling can introduce spurious spatial frequencies, consider Figure
8.11 which shows a simple sinusoid with repeat distance x. If it is sampled at 5 equally spaced
points A B C D E, the curve will be uniquely defined. No other sinusoid will have the same
amplitude values at these 5 points. Note that the data have been sampled five times in the distance
2x so the sampling theorem is satisfied.
If the data are only sampled at A, C and E, an alternative sinusoid (rising as it passes through
A and E) of lower spatial frequency may also be fitted.
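The effect can be demonstrated numerically. In the Python sketch below (all numbers are illustrative) a spatial frequency above the Nyquist limit of the chosen sampling distance produces exactly the same samples as a lower 'alias' frequency.

import numpy as np

d = 1.5                        # linear sampling distance (mm); Nyquist limit 1/(2d) = 0.33 cycles/mm
f_true = 0.40                  # true spatial frequency (cycles/mm), above the Nyquist limit
f_alias = abs(f_true - 1 / d)  # apparent frequency after sampling, about 0.27 cycles/mm

n = np.arange(8)               # sample positions x = n * d
samples_true = np.cos(2 * np.pi * f_true * n * d)
samples_alias = np.cos(2 * np.pi * f_alias * n * d)
print(np.allclose(samples_true, samples_alias))   # True: the two frequencies are indistinguishable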
Figure 8.12 summarises the position with respect to filter functions. On the left, curve
A shows the effect of simple back projection on the amplitudes of different spatial frequen-
cies, curve B shows the (theoretically) ideal correction function. However, this function
will cause amplification of high spatial frequency noise. On the right, filter C will give
good resolution but will cause artefacts because of the sharp edge in frequency space.
Filter D will cause loss of resolution because high spatial frequencies near to but less than
υm are inadequately amplified. Filter E will give good resolution without too much noise
amplification but there will be some aliasing.
Clearly there is no ideal filter function and the final choice must be a compromise.
However, the exact shape of the filter function has a marked effect on image quality in
radiology and manufacturers generally offer a range of filter options. The performance of a
filter function with respect to resolution can be well represented by its modulation transfer
function. MTFs for a standard and a high resolution filter are shown in Figure 8.13. These
would correspond to spatial resolution limits of about 0.7 mm and 0.4 mm, respectively.
FIGURE 8.12
Typical functions associated with image reconstruction by filtered back projection (amplitude plotted against spatial frequency up to υm for the functions A–E). For explanation, see text.
FIGURE 8.13
Modulation transfer functions (MTF versus spatial frequency in cycles cm–1) for two CT filter functions; dashed line—standard, solid line—high resolution.
For a standard body scan one would normally use short scan times because of body
movement, with a wide slice and standard convolution filter. For a high resolution scan,
say of the inner ear, a sharper filter, that is one which falls to zero more steeply, and thin-
ner slices will be selected. Both of these changes will increase the relative importance of
noise. If this reaches an unacceptable level it can be counter-acted by increasing the mAs
and hence the dose.
Insight
More on Iterative Methods
1. Note that the iteration does not always converge, that is, the estimate may become progressively
worse, not better. To check for convergence, calculate the root mean square deviation:

√[Σ(xE − x0)²/(n − 1)]

where the summation is made over all pixels. For the fourth estimate, this is

√[(0 + 0 + 1/9 + 0 + 1/9 + 0 + 4/9 + 0 + 1)/8] = √(15/72) = 0.46
It is left as an exercise for the student to show that the standard errors of previous estimates
are higher.
2. In practice it is better to model the input data (µxy values) and the imaging process (e.g. the
shape of the X-ray beam and spectrum) to various degrees of sophistication to predict the
image. Unfortunately, the stochastic process of quantum noise prevents an exact quantifi-
cation of the model parameters from the measured projections. Therefore it is necessary
to estimate the most likely parameter combination. For example, the maximum likelihood
expectation algorithm establishes a model that is most likely to give the same projection
set as the measured object. The deviations between the modelled and measured projec-
tions may be used to improve the modelled parameters in an iterative process. Clearly this
sequence can be repeated indefinitely but there is a high cost in computing time and a
procedure that requires large numbers of iterations is probably unsatisfactory. Also care is
necessary to ensure that the iterative process converges to a unique solution because the
iterative process tends to model image variations that are due to random (stochastic) changes
resulting in increasingly noisy images. If µ values are changed too much, the revised image
may be less similar to the object than the original image.

FIGURE 8.14
A simple example of the iterative process. (i) The object consists of nine elements with true attenuations as shown and four sets of projections P1 to P4 are measured. (ii)–(iv) Successive estimates are obtained by sharing the discrepancy between the estimated and the measured projections equally among the pixels lying on each projection in turn; pixels whose projections already agree remain unaltered.
No one reconstruction method holds absolute supremacy over all others and it is essen-
tial to assess the efficiency of a particular algorithm for a particular application. A major
disadvantage of iterative methods for fast CT imaging is that all data collection must be
completed before reconstruction can begin.
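The additive correction scheme illustrated in Figure 8.14 is easy to mimic in code. The Python sketch below uses the same nine-element object but, for brevity, only the horizontal and vertical projection sets; with only two directions the solution that matches the projections is not unique, which is one reason the figure also uses diagonal projections.

import numpy as np

true_object = np.array([[5., 1., 6.],
                        [7., 9., 8.],
                        [3., 2., 4.]])
row_proj = true_object.sum(axis=1)    # 'measured' horizontal projections (12, 24, 9)
col_proj = true_object.sum(axis=0)    # 'measured' vertical projections (15, 12, 18)

estimate = np.full_like(true_object, true_object.mean())   # flat first estimate
for _ in range(5):
    # share each row discrepancy equally among the three pixels in that row ...
    estimate += (row_proj - estimate.sum(axis=1))[:, None] / 3.0
    # ... then share each column discrepancy equally among the pixels in that column
    estimate += (col_proj - estimate.sum(axis=0))[None, :] / 3.0

rms = np.sqrt(((estimate - true_object) ** 2).sum() / (estimate.size - 1))
print(estimate)                 # consistent with every row and column sum after one pass
print(round(float(rms), 2))     # ~1.22: not yet the true object, since two directions under-determine it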
8.5 Spiral CT
Until the early 1990s, a major constraint on the whole imaging process was that after each
rotation the direction of rotation had to be reversed. This returned the tube and detectors
to their original positions to untangle the wires. Since that time, the development of slip-
ring technology has permitted continuous rotation of the X-ray tube and detector system
for third generation scanners, or continuous rotation of the X-ray tube for fourth gener-
ation scanners. Rotation may be in a fixed plane to obtain information about the way in
which contrast medium reaches an organ or fills blood vessels over a short period of time.
Alternatively, the patient may be moved through the gantry aperture during such contin-
uous data acquisition. This provides a spiral or helical data set and is known variously as
spiral CT, helical CT or volumetric CT.
Clearly, for this mode of data collection it is necessary for the X-ray tube to emit radiation
for a long period of time. An initial restriction on spiral CT was set by the heat rating of the
tube, limiting the method to low tube currents and poor counting statistics. The develop-
ments in anode design discussed in Section 2.3.3 have been important. A modern scanner
will easily cover the full length of the patient at standard tube currents.
A second early limitation was that image quality was not comparable with axial acquisi-
tion. This was partly due to the noise associated with the low counts, and partly to the fact
that data collected at different angles are not in the same orthogonal plane relative to the
patient (Figure 8.15). Some degradation of the slice sensitivity profile is inevitable and in
single-slice CT this is greater the higher the rate of table feed. One approach to this prob-
lem is to use linear interpolation between two adjacent data points obtained at the same
angular position—for a simple account see Miller (1996). In Figure 8.15 measured values at
A and B for the same arbitrary angle θ may be used to deduce the actual value in the plane
of interest. However, B is rather a long way from this plane so an improved algorithm
effectively shifts the phase of this sinusoid by 180° and then interpolates between the mea-
sured values A and B′ for a more accurate estimate of the required value.
An important term in relation to spiral CT is the pitch. This relates the volume traced out
by the spiral to the nominal beam thickness. It is defined as
Pitch = Table feed per rotation / Nominal beam thickness
The pitch is 1.0 if the distance travelled by the couch during one full rotation is equal
to the nominal beam thickness (e.g. nominal beam thickness = 5 mm, duration of 360°
rotation 1 s, rate of table movement = 5 mm s–1). If the rate of movement is doubled, the table pitch
becomes 2.0. Slices of nominal thickness 5 mm can still be reconstructed but there will be
degradation of the slice sensitivity profile.

FIGURE 8.15
Graph showing that because the patient moves during data acquisition the X-ray focus traces out a spiral relative to the patient. To reconstruct a plane of interest, data not actually collected in that plane must be used. Interpolation between two positions separated by 180° (AB′) produces a better result than interpolation between positions separated by 360° (AB).
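The pitch definition amounts to a one-line calculation; the sketch below simply restates it with the numbers used in the example above.

def pitch(table_feed_per_rotation_mm, nominal_beam_thickness_mm):
    return table_feed_per_rotation_mm / nominal_beam_thickness_mm

print(pitch(5.0, 5.0))    # 1.0: couch travels one nominal beam thickness per rotation
print(pitch(10.0, 5.0))   # 2.0: doubled table feed, with some degradation of the slice sensitivity profile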
There are a number of advantages of spiral CT:
• Slice thickness, slice interval and slice starting point may be chosen retrospec-
tively rather than prospectively. The simple example of a single 5 mm lesion being
detected with 5 mm slices illustrates this advantage. In axial CT, unless the slice
starts exactly at the edge of the lesion, it will contribute to two slices with corre-
sponding loss of contrast.
• Reconstruction is not limited to transverse sections. They can also be coronal or
sagittal, pathology can be viewed from any angle and vessel tracking techniques
may be used to follow tortuous vessels as they pass through the body.
• The faster scan permits a full study during a single breath hold. This is important
to eliminate respiratory motion artefacts, which can be troublesome—for example,
when visualising small lesions in the thorax. High levels of contrast can also be
maintained in vessels for the duration of the study and CT angiography becomes
possible.
8.6 Multi-Slice CT
8.6.1 Data Collection
Rapid acquisition of data, both for improved volume coverage and to minimise patient
movement, is a major goal in CT and, as a means to this end, faster rotation times and spi-
ral CT have already been discussed. The development of scanners in the mid 1990s which
allowed the simultaneous acquisition of more than one slice presented a further significant
advance in CT technology.
FIGURE 8.16
Examples of multi-detector arrangements (detector widths quoted at the isocentre on the z-axis); (a) A fixed array detector with 64 × 0.625 mm detectors, (b) an adaptive array with 16 × 0.75 mm detectors at the centre and 4 × 1.5 mm detectors on each side.
FIGURE 8.17
Principle of double z-sampling; as a result of periodic motion of the focal spot position in the z direction, the projections falling on each detector from focal spot position A are slightly different from those from position B. The beam shift at the isocentre, δz, can be made equal to half the detector width.
However, most scanners offering 16 slices or above offer slice widths of around 0.6 mm or
thinner and practically isotropic resolution is achievable.
A further improvement in z-axis resolution has been implemented by one manufacturer
in a technique known as double-z sampling. In this technique, the focal spot of the X-ray
tube is shifted on a periodic basis between two locations on the anode surface. The prin-
ciple is illustrated in Figure 8.17. Because each detector ‘sees’ the two focal spot positions
along slightly different directions, the two beams are attenuated along slightly different
projections through the patient. If, for example, the detector assembly allows 32 collimated
0.6 mm projections per rotation of the gantry assembly, two consecutive 32 channel pro-
jections, created with the focal spot in different positions, can be interleaved to produce
64 overlapping projections. Each projection remains 0.6 mm thick, but the z-axis is now
sampled every 0.3 mm at the isocentre. Note that the overall resolution is slightly inferior
to this, because there is some loss of resolution in other parts of the field of view.
The latest scanners have now extended the multi-slice concept to detectors giving 16 cm
of coverage in the z direction, achieved through 320 rows of 0.5 mm detectors. These scan-
ners offer the potential for imaging entire organs in a single rotation, and offer particular
advantages in the field of CT angiography.
Despite the development in the number of rows of detectors, there are no major differ-
ences in the basic principle of operation of a multi-slice array of detectors and the single arc
of CT detectors discussed in Section 8.4. Each element consists of a fluorescent solid-state
material that converts absorbed X-rays into visible light photons, and a silicon photodiode.
For scanners acquiring up to four images per rotation, image reconstruction is carried out by assuming that the
data are collected as four parallel fan beams, each one being perpendicular to the z-axis.
However, this is an approximation. Whilst the central images are acquired in a plane which
is almost perpendicular to the z-axis, towards the edge of the bank of detectors there is
increasing angulation away from this ideal. As a result, data are now being collected using
a cone of radiation, in what is termed a cone-beam geometry. Ignoring the cone-beam geom-
etry and treating the data as being acquired from parallel fan beams can lead to artefacts
which result from inconsistencies in the reconstruction process. These arise because the
apparent longitudinal position of an off-axis object varies as the tube and detectors rotate
(Figure 8.18). However, in up to four-slice multi-slice CT, these cone-beam artefacts can
normally be tolerated, and the reconstruction is still essentially 2-D in nature.
For spiral multi-slice CT, data are being acquired in the form of multiple interleaved heli-
ces. If the cone-beam geometry is ignored, data reconstruction can still be based on linear
interpolation between the two nearest available datasets (see Section 8.5). These data may
have come from any of the multiple detector rows, or from the same detector but on a pre-
vious or subsequent rotation. Alternatively, the concept of a filter width in the z direction
has been introduced. Using this technique, for any particular projection, reconstruction is
based on all relevant datasets within a fixed ‘filter width’ in the z direction. In multi-slice
spiral CT, the effect of changes in pitch on image noise, patient dose and z-axis resolution
depends critically on whether image reconstruction is based on a two-point linear inter-
polation or a filter width technique.
For multi-slice CT with more than eight slices, the cone-beam nature of the acquisition
process must be properly accounted for (cone-beam CT). A number of techniques have
been developed. One is to extend the algorithms for filtered back projection (see Section
8.4.2), so that, rather than back projecting into a 2-D plane, data are back projected into a
3-D volume. However, this process requires significant computing power. Alternatively,
‘Advanced Single-Slice Rebinning’ (ASSR) techniques may be used. These techniques are
based around the reconstruction of large numbers of non-parallel overlapping tilted images
in planes which correspond to the plane of irradiation of a row of detectors. Subsequently,
FIGURE 8.18
The effect of cone-beam geometry in multi-slice CT; (a) In single-slice CT, the two objects A and B would be consistently imaged in the same transverse plane, corresponding to the location of detector 11. In multi-slice CT, when the tube is above the patient, objects A and B are imaged by detectors 14 and 12, respectively. (b) When the tube has rotated to beneath the patient, objects A and B are now imaged by detectors 12 and 15, respectively.
parallel axial images, or indeed images in any other plane, are obtained by interpolation
between the non-parallel tilted images. Further detail is beyond the scope of this book but
a good review is given by Flohr et al. (2005).
There is potential for a large amount of data to be created from a long multi-slice acqui-
sition with narrow detector collimation, and due consideration must be given to data stor-
age and retrieval issues. For a single 512 by 512 pixel image (about 260,000 pixels) stored
with a 12 bit accuracy, each image requires about 400 kbytes of storage (1 byte = 8 bits).
Furthermore, with multiple reconstructions in different planes or with different recon-
struction filters, it is common to create in excess of a thousand images from a routine
examination. It may also be necessary to store the original projections if several recon-
struction algorithms are available and the radiologist is unsure which one will give the
best image. Issues relating to the storage and retrieval of images are discussed further in
Chapter 17.
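The storage arithmetic quoted above is easily reproduced; the figures below assume the 12-bit pixel values are packed exactly, whereas in practice images are usually stored as 16-bit words, so real requirements are somewhat larger.

pixels = 512 * 512                          # about 260,000 pixels per image
bytes_per_image = pixels * 12 / 8           # 12 bits per pixel, 8 bits per byte
print(bytes_per_image / 1024)               # ~384 kbytes, i.e. about 400 kbytes per image
print(1500 * bytes_per_image / 1024 ** 2)   # a 1500-image examination: roughly 560 Mbytes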
FIGURE 8.19
Relationship between minimum detectable contrast (%, logarithmic scale) and detail diameter (mm, logarithmic scale) for a CT scanner.
At low contrast, quantum mottle may become a problem. If a uniform water phantom
were imaged, then even assuming perfect scanner performance, not all pixels would show
the same CT number. This is because of the statistical nature of the X-ray detection pro-
cess which gives rise to quantum mottle as discussed in Section 6.10. In fact, CT operates
close to the limit set by quantum mottle. A 0.5% change in linear attenuation coefficient,
corresponding to a change in CT number of about 5, can only be detected if the statistical
fluctuation on the number of counts collected (√n) is less than 0.5%. Hence

√n/n = 1/√n < 0.5/100

which gives a value for n of about 4 × 10⁴ photons. This is close to the collection figure for
a single detector in one projection.
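The photon-count estimate follows directly from the requirement that the relative fluctuation 1/√n be below 0.5%, as the short check below shows.

import math

required_fraction = 0.005           # 0.5% detectable change in linear attenuation coefficient
n = (1 / required_fraction) ** 2
print(n)                            # 40000, i.e. about 4 x 10^4 photons per measurement
print(1 / math.sqrt(n))             # 0.005: the relative statistical fluctuation at that count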
Any attempt to reduce the dose will increase the standard deviation in the attenuation
coefficient due to this statistical noise. When measurements are photon limited, statistical
noise increases if the patient attenuation increases. However, it decreases if the slice thick-
ness increases or the pixel width increases because more photons contribute to a given
attenuation value.
Quantum mottle is still a considerable problem in the very obese patient especially for a
central anatomical site such as the spine.
TABLE 8.3
Approximate Numbers of CT Scans/Year (Millions) in the United Kingdom and United States

Year      UK      US
1985      0.3     8.0
1995      1.0     20
2005      2.7     60

Source: Adapted from Hall E J and Brenner D J. Cancer risks from diagnostic X-rays. Br. J. Radiol. 81, 362–378, 2008.
It is therefore important to make sure that CT is being used optimally in terms of benefit versus risk and in its use of expensive resources.
Furthermore, as discussed in detail in Section 13.7.1, CT is a relatively high dose tech-
nique and in Council Directive 97/43/Euratom of 30 June 1997 on ‘Health protection of indi-
viduals against the dangers of ionising radiation to medical exposure’ (CEC 1997), CT has
been given the status of a ‘Special Practice’ requiring special attention to be given to patient
doses. This is especially important in paediatrics where risks are proportionately higher.
There are a number of technical factors which may increase the dose. With the faster
scanning capabilities now available there is a tendency to scan more slices or a larger
volume than is necessary to answer the diagnostic question. This is thoroughly bad tech-
nique and must be avoided at all costs. A more subtle problem is that with narrower slices,
higher mAs values are required to maintain acceptable noise levels, but at the cost of
increased dose.
Insight
Dose Control
Recall from Section 8.3 that because CT is a digital modality, there is nothing analogous to the
saturation effect of film blackening to alert the operator that doses are rising.
The dose is directly proportional to mAs, other factors remaining constant, but variation with
kV is more complex. Work with iodine-filled phantoms shows that, at constant dose the signal-
to-noise ratio increases with reduced kV and reduced phantom diameter. However, at 80 kVp
tube output may not be high enough for, say, CT angiography with larger patients. In paediatrics,
where the benefits of low kV (say 80 kVp) should be greatest, beam hardening artefacts may be
more of a problem.
For multi-detector arrays, narrow beam widths are less dose efficient. The X-ray inten-
sity (dose) has to be maintained at a fixed level over all the active detectors to maintain a
constant noise level. There is a penumbra to the useful part of the beam, which contributes
proportionately more to a narrow beam than to a wide beam (Figure 8.20). Consequently,
the widest beam collimation consistent with the required slice thickness should be chosen.
Note that when a linear array of, say, 64 × 1 mm detectors is covered by a fan beam, the
useful beam is so wide that the edge effect is negligible.
Since dose is directly proportional to mAs, a ‘one size fits all’ approach to mAs is
bad technique and protocols which incorporate tube current modulation (see next sec-
tion) should be used to fit the mAs to the patient. Attenuation through the patient will
determine the final noise level and hence the input mAs for a given image quality, so the
cross-sectional diameter of the patient is an important measurement. For a well-filtered
120 kVp narrow beam the intensity should fall by about 50% every 4 cm, but in practice,
due to scatter and other factors, attenuation is rather less than this and typically 50% every
10 cm (McCollough et al. 2006).

FIGURE 8.20
Diagrams showing that a narrow beam (a), covering 4 × 1 mm active detectors, is less dose efficient than a broad beam (b), covering 16 × 1 mm active detectors. The edge effects from a narrow beam make a higher proportional contribution to the dose. For further explanation, see text.
Note that children show much larger variations in cross-sectional attenuation
than adults and are more sensitive to X-rays. For small children, protocols optimised
for paediatric work must be used, not the corresponding adult protocols (Paterson et al.
2001).
It should be noted that for a narrow beam, µ will have a fairly precise value, but wider
beams cannot be modulated to the same degree because there will be a spread of attenua-
tion coefficients in the z direction through which the wider beam is passing and an aver-
age value of µ must be used. This is of particular significance for the latest generation of
scanners offering up to 16 cm of coverage in a single rotation.
These automatic exposure control (AEC) systems for CT have great potential but it is important to recognise that
there is a complex relationship between contrast to noise ratio, a specified level of spatial
resolution and minimum dose. In isotropic CT, noise, and therefore dose for a comparable
image quality, vary inversely with the fourth power of the voxel dimension. To make full
and proper use of AEC, the image quality, or noise levels, acceptable for a given diagnostic
task must be understood and defined.
8.9 Artefacts
Computed tomography systems are inherently more prone to artefacts than conventional
radiography and these will be summarised briefly.
8.9.5 Aliasing
This phenomenon, whereby high frequency noise generated at sharp, high contrast bound-
aries appears as low frequency detail in the image, was discussed in Section 8.4.2 in con-
nection with the shape of the filter function. It is caused by under-sampling the highest
spatial frequencies. The effect is less marked when iterative techniques are used but can-
not be eliminated entirely.
8.9.6 Noise
If the photon flux reaching the detectors is severely reduced, for example by heavy atten-
uation in the patient, there will be statistical fluctuations (quantum noise) resulting in
severe streaks.
8.9.7 Scatter
The logarithmic step in the reconstruction process makes the effect of scatter significantly
non-linear and the distribution becomes very object dependent. Shading artefacts result. Both
detector collimation and software algorithms can be used to reduce the effect of scatter.
A wide range of in-depth tests are carried out as part of type testing or commissioning.
However, these will normally be made by specialists and are outside the scope of this
book. For further information see IPEM Report 32 (2003).
There are, however, a number of tests of performance that should be done on a regular
basis. These tests should be rapid, so that they take minimal scanner time; uncomplicated,
so that staff are capable of performing them; and quantitative, so that as many tests as pos-
sible produce objective, numerical answers. A number of basic measurements give a great
deal of information about the performance of the scanner:
(i) Image noise—an image is taken of a water-filled phantom and the standard deviation
in CT number measured in a large region of interest. As explained in Section 8.7, the
CT number will not be constant throughout the image. The actual variation in CT
numbers will depend on the radiation dose and hence the mAs, but should be con-
stant for fixed exposure parameters. For multi-slice scanners this test can be repeated
for multiple slices obtained in a single axial acquisition, to assess inter-slice variation (a short numerical sketch of this measurement is given after the list).
(ii) CT number constancy—this may be checked by imaging a test object containing
inserts to simulate various tissues. Mean CT numbers in a region of interest in
each material should be compared with values obtained at commissioning.
If the exact composition and density of the materials is known, these measure-
ments may also be used to check that the CT number is varying linearly with the
linear attenuation coefficient. Note, however, that for this measurement to be suc-
cessful it is necessary to know the effective energy of the X-ray beam at all points
in the phantom because of beam hardening.
(iii) CT number uniformity—this may be assessed using the same image as that used for
the image noise test. Smaller regions of interest are positioned at the centre and
around the edge of the image of the phantom, to assess the uniformity of the mean
CT number across the phantom.
(iv) High contrast resolution—high contrast bar patterns, with decreasing spacing can
be used to estimate the limiting high contrast spatial resolution (see Section 6.9.1).
Alternatively, a more complex mathematical analysis of the image of a fine bead or
wire can be used to establish the modulation transfer function of the system (see
Section 7.7.1).
(v) Axial dose—assessment of the variation of dose, normally expressed as the
Computed Tomographic Dose Index (see Section 13.7.1), with a range of exposure
settings should be carried out.
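The image-noise measurement in item (i) reduces to taking the standard deviation of CT number in a region of interest. The Python sketch below simulates it on an artificial 'water phantom' image; the noise level of 5 HU is an assumed illustrative value, and on a real scanner the pixel data would come from the reconstructed image.

import numpy as np

image = np.random.normal(loc=0.0, scale=5.0, size=(512, 512))   # water ~ 0 HU with ~5 HU noise
cy, cx, r = 256, 256, 100                                       # large circular ROI at the centre
y, x = np.ogrid[:512, :512]
roi = (y - cy) ** 2 + (x - cx) ** 2 <= r ** 2
print(round(float(image[roi].mean()), 2), round(float(image[roi].std()), 2))
# mean CT number and standard deviation in the ROI, to be compared with the commissioning baseline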
For all these features, careful measurement at commissioning will establish a baseline
and subsequent deterioration during routine testing should be investigated.
For an example of the use of 4-D imaging to remove respiratory artefacts, see Vedam
et al. (2003). The paper explains how an ordinary spiral CT scanner can be used to acquire
artefact-free images of the lungs in the presence of respiratory motion. The method involves
taking an over-sampled CT scan and then assigning each slice to one of 8 bins, correspond-
ing to different phases of the breathing cycle, as determined by an external respiratory
signal. This technique not only gives artefact-free images but also contains respiratory
motion information not available in 3-D CT images. Recent work has used this information
to investigate breathing irregularities.
All major vendors now offer 4-D CT, which is also becoming increasingly widely used
in radiotherapy, for example to make appropriate corrections for respiration in treatment
planning.
The latest generation of scanners, with up to 16 cm of longitudinal coverage in a single
rotation, introduce even greater possibilities for time-based studies. Such systems can pro-
vide complete organ coverage in a single sub-second rotation, opening up new possibili-
ties for functional imaging and CT perfusion studies.
8.11.2 Cardiac CT
The challenge of cardiac CT is to obtain very high resolution images in a very short time,
so that cardiac motion does not significantly degrade the quality of the images obtained.
The development of multi-slice CT, with increasing numbers of slices and decreasing rota-
tion speeds, has resulted in rapid developments in cardiac imaging within the last few
years.
In conventional CT, reconstruction is often based on data collected over 360°. However,
as has already been noted, data collected over 180° is sufficient for the reconstruction process,
and for a third generation scanner, this can be collected in a little over half the tube
rotation time. (Note that to collect 180° worth of data, the tube must rotate through 180°
plus the angle subtended by the detector bank in the axial plane.)
rotation time of 0.4 s, a temporal resolution of around 200 ms can be achieved in cardiac CT.
However, except at very slow heart rates, this is still insufficient for imaging in the 50–200
ms period in mid-late diastole when there is least change in the left ventricular volume.
Improvements in temporal resolution can be made using cardiac-gating and a multi-sector
reconstruction technique. Rather than acquiring data from a 200 ms portion of a single
cardiac cycle, data from two heart beats can be used, each contributing information from
the same 100 ms portion of the cardiac cycle. This process can be extended to three or four
cardiac cycles. However, if the heart rate is not perfectly stable, there is an increasing chance
of mismatch between successive phases in the cardiac cycle. To minimise the number of
sectors required, it is common practice to use β-blockers to slow the heart rate, and hence
reduce the requirement for such a high temporal resolution.
As an alternative to the multi-sector approach, one vendor has introduced a scanner
with two tubes and two detector banks, mounted at 90° to each other. This system allows
data from 180° to be collected in only one-quarter of a rotation. Together with a decrease
in tube rotation time to 0.33 s, this allows a single-sector cardiac image to be obtained in
only 83 ms.
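The temporal-resolution figures quoted in this section follow from simple fractions of the rotation time. The sketch below reproduces them, ignoring the small additional fan-angle contribution, which is why the single-source value comes out at exactly 200 ms rather than 'a little over' it.

def single_source_ms(rotation_time_s, sectors=1):
    # about 180 degrees of data are needed; multi-sector gating splits this over several beats
    return rotation_time_s / 2.0 / sectors * 1000.0

def dual_source_ms(rotation_time_s):
    # two tube/detector pairs at 90 degrees collect 180 degrees of data in a quarter rotation
    return rotation_time_s / 4.0 * 1000.0

print(single_source_ms(0.4))               # ~200 ms single-sector
print(single_source_ms(0.4, sectors=2))    # ~100 ms using two cardiac cycles
print(dual_source_ms(0.33))                # ~83 ms for the dual-source design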
Radiation dose is a significant issue in cardiac CT. High resolution reconstructions
inherently result in higher doses. Furthermore, the acquisition process can be very dose-
inefficient if large portions of the data, acquired during periods of significant cardiac
motion, are never used for image reconstruction. One solution is to use sequential axial
scanning, triggered at an appropriate point in the cardiac cycle. This is very dose efficient,
but in most scanners is too slow for contrast-based studies. Scanners offering complete
cardiac coverage in a single rotation can, however, operate in this mode. An alternative
solution to the dose issue is to perform a spiral scan at low pitch, but to modulate the tube
current, such that the current is reduced to typically 20% of its full value during periods of
significant cardiac motion, when the reconstruction data is unlikely to be used.
8.12 Conclusions
A radiographic image is a two-dimensional display of a three-dimensional structure and
in a conventional image the required detail is always partially obscured by the superimpo-
sition of information from underlying and overlying planes. The overall result is a marked
loss of contrast.
Tomographic imaging provides a method for eliminating, either partially or totally, con-
tributions from adjacent planes. Longitudinal tomography essentially relies on the blur-
ring of structures in planes above and below the region of interest. It is a well-established
technique and the main consideration is choice of thickness of the plane of cut. If the
focus-plane of cut distance and focus-image receptor distance are fixed, the thickness is
determined by the angle of swing, decreasing with increasing angle. Large angle tomog-
raphy may be used when there are large differences in atomic number and/or density
between the structures of interest, but if differences in attenuation are small, somewhat
larger values of slice thickness may be desirable. With the development of digital detector
technology, there are new opportunities for simultaneously collecting a set of images by
longitudinal tomography that are in focus in different planes (tomosynthesis).
In single-slice CT imaging, a large number of views are taken of a transverse slice
through the patient from different angles. Several generations of CT scanner have been
introduced, each designed primarily to give a faster scan time, which is now in the region
of 0.4 s per slice. Even faster scanning probably requires ‘scanners’ with no moving parts
and electron beam CT is a potential area for development.
X-ray output is required for several seconds and this has placed new demands on the
design and construction of X-ray tubes and generators, especially to absorb and dissipate
the large amount of heat released in the anode.
Radiation detectors for CT have also been an area of rapid development to meet the strin-
gent requirements, especially for high detection efficiency, good linearity, wide dynamic
range and fast response. Thallium-doped sodium iodide crystals with PMTs and ioni-
sation chambers filled with high atomic number inert gases at high pressure have both
been used but the industry standard is now a radiation-sensitive solid-state detector, for
example gadolinium oxysulphide, coupled to a high purity, temperature stabilised silicon
photodiode.
Two major developments that have greatly expanded the applications of CT are spiral
CT and multi-slice CT. In spiral CT, slip-ring technology permits continuous rotation of
the X-ray tube with faster scanning and coverage of the whole trunk of the patient. There
are numerous advantages—(a) slice thickness, slice interval and slice starting point may be
chosen retrospectively, (b) reconstruction is not limited to transverse sections, (c) a study
can be completed during a single breath hold, (d) high levels of contrast can be maintained
in vessels for the study duration.
The introduction of arrays of detectors of different sizes has permitted multi-slice CT in
which slice thickness can be varied. This has resulted in even faster scanning, isotropic
resolution and, with increasing coverage in the z direction (e.g. 320 rows of 0.5 mm detec-
tors can cover 16 cm) the potential for imaging entire organs in a single rotation.
Data reconstruction has been a challenging problem for many years. The principal
methods developed for single-slice CT were filtered back projection (FBP) and iteration.
Both have their strengths and weaknesses—for example, FBP is easy to implement but
iteration provides better opportunities for modelling the problem. In FBP, data processing
can start as soon as individual projections are acquired, but all projections have to be collected before iteration can
begin. Additional problems with data reconstruction in multi-slice imaging result from the
requirement to combine data from non-parallel beams (the cone-beam effect).
Image quality is difficult to define in CT but factors that need to be considered include
radiation dose, quantum mottle, noise and resolution. Also a number of artefacts can
appear on the images arising from one or more of the following causes—mechanical mis-
alignment, patient movement, detector non-uniformities, partial volume effects, beam
hardening, aliasing.
Computed tomography is generally a high dose technique and the frequency of CT
examinations has increased rapidly in recent years. Both simple dose reduction strate-
gies, for example protocol optimisation, taking the minimum number of slices, using the
lowest mAs that will give diagnostically useful images, and more sophisticated methods,
for example, tube current modulation, should be considered. A careful quality assurance
programme will also help to ensure that image quality is optimised.
Applications of CT continue to expand and diversify, including four-dimensional CT,
cardiac CT and spectral CT.
References
CEC, European Communities Council Directive 97/43/Euratom of 30 June 1997, Health protection
of individuals against the dangers of ionising radiation in relation to medical exposure and
repealing Directive 84/466/Euratom, Off J Eur Commun L 180, 1997.
Cherry S R, Sorenson J A & Phelps M E, Physics in nuclear medicine, 3rd ed. Saunders/Elsevier, 2003,
486–488.
Flohr T, Schaller S, Stierstorfer K et al., Multi-detector row CT systems and image-reconstruction
techniques, Radiology 235, 756–773, 2005.
Gordon R, Herman G T & Johnson S A, Image reconstruction from projections, Sci Am 233, 56–68,
1975.
Hall E J & Brenner D J, Cancer risks from diagnostic X-rays, Br J Radiol 81, 362–378, 2008.
IPEM, Measurement of the performance characteristics of diagnostic X-ray systems used in medicine. Part III:
computed tomography X-ray scanners (IPEM Report 32) 2nd ed. Institute of Physics and Engineering
in Medicine, York, 2003.
Miller D, Principles of spiral CT—practical and theoretical considerations, Rad Mag Feb 28–29, 1996.
McCollough C H, Bruesewitz M R & Kofler J M Jr, CT dose reduction and dose management tools:
overview of available options, Radiographics, 26, 503–512, 2006.
Oppenheim A V & Wilsky A S, Signals and systems. Prentice Hall Inc. Eaglewood Cliffs N J, ch 8,
1983.
Paterson A, Frush D P & Donnelly L F, Are settings adjusted for paediatric patients? AJR 176, 297–
301, 2001.
Vedam S S, Keall P J, Kini V R et al., Acquiring a four-dimensional computed tomography data set using an external respiratory signal, Phys Med Biol 48, 45–62, 2003.
Further Reading
Cody D D, Stevens D M & Rong J, CT quality control, in Advances in medical physics 2008. Eds Wolbarst
A B, Zamenhof R G and Hendee W R, Medical Physics Publishing, Madison, 2008, 47–60.
Flohr T G, Cody D D & McCollough C H, Computed tomography, in Advances in medical physics 2006.
Eds Wolbarst A B, Zamenhof R G & Hendee W R, Medical Physics Publishing, Madison, 2006,
ch 3.
Hsieh J, Computed tomography: principles, design, artefacts and recent advances. SPIE Press, Washington,
2003.
ICRP, Managing patient dose in multi-detector computed tomography (ICRP Publication 102). Ann
ICRP 37, 1, 2007.
Kalender W, Computed tomography: fundamentals, system technology, image quality, applications, 2nd ed.
Wiley-VCH, Weinheim, 2005.
Pullan B R, The scientific basis of computerised tomography, in Recent advances in radiology and medi-
cal imaging, vol 6, Eds Lodge T & Steiner R E, Churchill Livingstone, Edinburgh, 1979.
Seeram E, Computed tomography: physical principles, clinical applications, and quality control, 3rd ed.
Saunders, Philadelphia, 2008.
Swindell W and Webb S, X-ray transmission computed tomography, in The physics of medical imaging,
Ed Webb S, Adam Hilger Bristol & Philadelphia, 1988, 98–127.
Exercises
1. Explain briefly, with the aid of a diagram, why an X-ray tomographic cut is in
focus.
2. List the factors that determine the thickness of cut of a longitudinal X-ray tomo-
graph and explain how the thickness will change as each factor is varied.
3. Explain the meaning of the terms pixel and CT number, and discuss the factors
that will cause a variation in CT numbers between pixels when a uniform water
phantom is imaged.
4. Explain why the use of a fan beam geometry in CT without collimators in front of
the detector would produce an underestimate of the µ values for each pixel.
5. Discuss the factors that would make a radiation detector ideal for CT imaging and
indicate briefly the extent to which actual detectors match this ideal.
6. Explain why the technique of transverse tomography can eliminate shadows cast
by overlying structures. Suggest reasons why the dose to parts of the patient might
be appreciably higher than in many other radiographic examinations.
7. Figures 7.2b and 8.19 show the relationship between contrast and resolution for an
image intensifier and CT scanner, respectively. Explain the differences.
8. Describe and explain how the CT number for a tissue might be expected to change
with kVp.
9. Explain the design principles of detector arrays for multi-slice CT with particular
reference to the way in which different slice thicknesses can be achieved.
10. Discuss the causes of different types of artefact in CT.
9
Special Radiographic Techniques
SUMMARY
• This chapter highlights some physical principles that are specific to, or espe-
cially important in particular radiographic techniques.
• Mammography is a low voltage technique developed to enhance the low
contrast in breast tissue.
• High voltage radiography may be useful when increased X-ray output or bet-
ter penetration is required. There will be some loss of contrast.
• There is some magnification in all images and the reasons are explained.
Magnification is generally undesirable but is useful in a few situations.
• Subtraction techniques are used to eliminate unwanted information from
an image, thereby making diagnostically important information easier to
visualise. Digital images greatly facilitate the use of subtraction methods.
• Interventional radiology (IR) is a wide-ranging term for situations in which X-ray imaging is an aid to other clinical procedures. Since IR is not itself a diagnostic procedure, high quality images may be less important than other aspects of the intervention.
• The final two sections highlight some important points when imaging chil-
dren (paediatric radiology) and in dental radiology.
CONTENTS
9.1 Introduction......................................................................................................................... 294
9.2 Mammography—Low Voltage Radiography.................................................................. 294
9.2.1 Introduction............................................................................................................. 294
9.2.2 Molybdenum Anode Tubes................................................................................... 295
9.2.3 Rhodium and Tungsten Anode Tubes................................................................. 297
9.2.4 Scatter....................................................................................................................... 299
9.2.5 Image Receptors......................................................................................................300
9.2.6 Quality Control and Patient Doses......................................................................304
9.3 High Voltage Radiography................................................................................................306
9.3.1 Principles..................................................................................................................306
9.3.2 The Image Receptor................................................................................................ 307
9.3.3 Scattered Radiation.................................................................................................308
9.4 Magnification Radiography............................................................................................... 310
9.1 Introduction
In Chapter 2 the basic principles of X-ray production were presented and Chapter 3 dealt
with the origin of radiographic images in terms of the fundamental interaction processes
between X-rays and the body. Chapters 5 and 6 showed how the radiographic image is
converted into a form suitable for visual interpretation.
On the basis of the information already presented, it is possible to understand the phys-
ics of most simple radiological procedures. However, a number of more specialised tech-
niques are also used in radiology and these will be drawn together in this chapter. These
techniques provide an excellent opportunity to illustrate the application to specific prob-
lems of principles already introduced and appropriate references will be made to the rel-
evant sections in earlier chapters.
9.2 Mammography—Low Voltage Radiography
9.2.1 Introduction
There are several difficulties associated with imaging the breast to determine whether
a carcinoma or pre-cancerous condition exists. First, an important pointer is the relative
amounts, distributions and variants in fibroglandular and adipose tissues. However, there
is very little difference between the two tissues in terms of the properties which create radio-
logical contrast. Their densities are both close to 1.0 × 10³ kg m–3, differing by less than 10%.
Adipose tissue has a slightly lower mean atomic number (Z about 5.9) than fibroglandular
tissue (Z about 7.4) owing to its higher fat content. To exploit this difference the photoelectric
effect must be enhanced (recall that it is very Z-dependent) by imaging at low kV.
Second, one of the prime objectives of mammography is to identify areas of microcal-
cification, even as small as 0.1 mm diameter. These are very important diagnostically. To
achieve the necessary geometric resolution a small focal spot size is required and prob-
lems of X-ray tube output and rating must be considered. Also the resolution limit of the
image receptor should, ideally, be higher than in normal radiography.
Third, there is a wide variation in the X-ray intensity leaving the breast, being high near
the skin and much lower in thicker and denser regions. Therefore the ideal receptor will
have a wide latitude (film) or dynamic range (digital receptor).
Finally, breast tissue is very sensitive to the induction of breast cancer by ionising radia-
tion (especially for women between the ages of 14 years and menopause). Therefore in
symptomatic mammography the normal requirement that clinical benefit must outweigh
the risk applies. However, mammography is also used for asymptomatic screening where
there is benefit to only a small percentage of exposed women and limitations on dose are
even more stringent. High regard must therefore be paid to the radiation dose received
during mammographic examinations whilst ensuring that the dose is high enough to
avoid intrusive quantum mottle arising from the inherently low contrast.
There have been numerous historical reviews of the progress made over the years in
the physics and technological aspects of mammographic imaging—for a brief account see
Brateman and Karellas (2006). We shall concentrate on the ‘state of the art’ in technical
developments that have improved image quality and at the same time reduced patient
dose, with an explanation of the underlying physics.
FIGURE 9.1
Spectral output of a molybdenum anode X-ray tube operating at 28 kVp with a 0.05 mm molybdenum filter.
FIGURE 9.2
Effect on the X-ray beam of angling the anode-cathode axis (anode tilt 6°, effective anode angle 22°).
rather more precisely than would be necessary if the characteristic lines (which are inde-
pendent of kVp) totally dominated the spectrum. Also there is some merit in increasing the
kVp for thicker breasts (perhaps from 28 to 30 kVp for breasts greater than 7 cm) to reduce
dose although this results in some loss of radiographic contrast.
Other features which ensure that patient doses are kept to a minimum include a carbon
fibre table and automatic exposure control (AEC). Note that at 30 kVp the attenuation of
the table will be much greater than at, say 100 kVp. Furthermore, the table must have very
uniform attenuation properties. Any irregularity in table attenuation can appear as an
artefact on the image. AEC is discussed in Section 9.2.5.
FIGURE 9.3
Attenuation curves for filter materials superimposed on different spectra—(a) rhodium spectrum + rhodium filter; (b) molybdenum spectrum + rhodium filter; (c) rhodium spectrum + molybdenum filter.
Alternatively, for very thick breasts a tungsten (W) anode may be used. The effect of a
50 µm rhodium filter on the output spectrum of a tungsten anode is shown in Figure 9.4.
Of course with Z = 74 there are no useful characteristic radiations but the greater efficiency
of X-ray output compensates for the extra attenuation and exposure times are short. If a Rh
filter is used some of the benefit to the spectrum of the Rh absorption edge is retained. If
an aluminium filter is used, it behaves as in general radiographic examinations.
FIGURE 9.4
Spectral output of a tungsten anode X-ray tube operating at 30 kVp with a 50 µm rhodium filter.
Insight
L Shell Characteristic Lines
Note the spike just above 10 keV in the spectrum in Figure 9.4. This is due to the production of
L shell characteristic lines by the tungsten anode. This is the only spectrum in the book where
evidence of these lines is shown. They will be produced in any tube used for general radiography
whenever a tungsten anode is used, but their contribution to the spectrum is normally negligible.
Since a tungsten anode gives a good output, a small focal spot (0.2 mm) can be used and
this can be reduced to 0.1 mm by mounting the tube at an angle of 5°. Note that greater care
is required in setting and checking the generator kilovoltage since there are no character-
istic lines in the spectrum as in the case of molybdenum and rhodium.
Summarising these last two sections, modern mammography tubes may therefore have
four selectable focal spots, large and small for both a Mo target and for either a Rh or W
target. At higher kV there is slight loss of contrast but better penetration.
9.2.4 Scatter
Scatter is a serious problem in mammography, increasing with both breast thickness and
breast area. As shown in Figure 9.5, the scatter to primary ratio increases sharply at high
field sizes and can exceed 1.0 for an 8 cm thick breast. Using the analysis in Section 6.5,
this corresponds to a 50% reduction in contrast. Over the relatively narrow kVp range used
in mammography (25–35 kVp), scatter stays fairly constant. Three methods can be used to
reduce the effects of scatter in mammography.
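The 50% figure follows directly from the scatter-to-primary ratio. A minimal sketch of the arithmetic (the relation C_with_scatter = C_without/(1 + S/P) is the standard result referred to above; the code itself is illustrative, not from the text):

# Effect of the scatter-to-primary ratio (S/P) on radiation contrast,
# assuming the standard relation C_scatter = C_primary / (1 + S/P).
def contrast_with_scatter(primary_contrast, scatter_to_primary):
    return primary_contrast / (1.0 + scatter_to_primary)

for sp in (0.2, 0.5, 1.0):   # S/P = 1.0 is typical of a large field on an 8 cm breast
    c = contrast_with_scatter(1.0, sp)
    print(f"S/P = {sp:.1f}: contrast falls to {c:.2f} of its scatter-free value")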
(a) Compression
Compression of the breast is essential if good quality images are to be obtained. As
discussed in Section 6.7, compression in the radiological sense actually forces soft tis-
sue out of the direct path of the X-ray beam. In mammography this spreads out the different anatomical structures with less superposition; it is achieved by means of a compression paddle situated between the tube and the breast.
FIGURE 9.5
Typical trends showing the effect of breast thickness and field size on scatter to primary ratio for a molybdenum spectrum with a molybdenum filter; values for both 4 cm and 8 cm thick breasts are shown.
Because there is less breast tissue in the beam, there are three important effects, (i) less
scatter is generated, improving primary radiation contrast, (ii) beam attenuation is reduced
so exposures are shorter and motion artefacts are reduced, (iii) the integral dose to glan-
dular tissue is reduced and hence there is a lower risk of radiation-induced cancer.
The compression force should be displayed and automatically removed after expo-
sure for patient comfort.
(b) Grids and Air Gaps
Grids were discussed in Section 6.8 and the air gap technique is discussed in Section
9.3.3. These general methods of scatter reduction are also used in mammography when
appropriate. Inevitably, the use of a grid will increase the patient dose because some of
the useful beam that has passed through the patient is stopped, or partially stopped by
the strips of lead foil or by the inter-space material. However, by using a low grid ratio
(typically 4:1), a low line density (typically 3 lines mm–1) and a carbon fibre inter-space for
low attenuation, the dose increase (Bucky factor = exposure with grid/exposure without
grid) is limited to about a factor of two. Image contrast can be improved by about 40% but
the grid movement must be such that no grid lines are visible on the image.
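These figures can be reproduced with rough numbers. In the sketch below the scatter-to-primary ratio and the primary and scatter transmissions of the grid are assumed values chosen for illustration, not data from the text:

# Illustrative grid calculation with assumed values:
#   Tp = fraction of primary transmitted, Ts = fraction of scatter transmitted.
def grid_figures(scatter_to_primary, Tp, Ts):
    P, S = 1.0, scatter_to_primary
    bucky = (P + S) / (P * Tp + S * Ts)                      # exposure with grid / without grid
    contrast_gain = (1 + S / P) / (1 + (S / P) * (Ts / Tp))  # contrast improvement factor
    return bucky, contrast_gain

bucky, gain = grid_figures(scatter_to_primary=0.8, Tp=0.7, Ts=0.2)
print(f"Bucky factor ~ {bucky:.1f}, contrast improvement ~ {100 * (gain - 1):.0f}%")

With these assumed values the dose penalty comes out close to the factor of two and the contrast gain close to the 40% quoted above.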
For small field sizes the air gap technique may be a satisfactory method of scatter
reduction. The smallest available focal spot must be used to maintain resolution. Since
the focus-image receptor distance is fixed in a mammography unit, the focus-skin dis-
tance must be reduced. This will increase the skin dose but the sensitive volume of
breast tissue may be reduced.
FIGURE 9.6
Schematic diagrams illustrating how high energy X-rays produce light fairly uniformly throughout the intensifying screen (upper diagram) but, because of significant attenuation, low energy X-rays produce most light where they first enter the screen (lower diagram).
in Figure 9.6. In the upper diagram, using high energy X-rays, the loss of X-ray inten-
sity in the screen is small, so the amount of light produced in successive layers of
the screen is similar. Both the amount of light reaching the film (which determines
blackening) and the mean distance of the light source from the film (which affects
resolution) will also be similar. For mammography X-rays, in the lower diagram,
the situation is different. Now there is appreciable attenuation of the X-rays so it is
important that as much light as possible is produced close to the film. Clearly this
occurs when the screen is behind the film not in front of it.
The screens are invariably rare earth, usually gadolinium oxysulphide, and screen and
film are pulled into very close contact, for example by using a flexible plastic cassette
that can be vacuum evacuated. The screens are only about one-tenth as fast as ordinary
screens to ensure that very high resolution can be achieved—typically 20–22 lp mm–1,
whilst quantum mottle is also eliminated. Further development of needle-shaped fibre-
optic-like crystals (see Section 5.13.1) of small cross-section may improve performance.
To enhance contrast differences, the film has a high gamma and hence a small film
latitude—see Section 5.5.4. Therefore AEC is used routinely to provide consistent
film density, typically centred on an optical density of about 1.7, over the clinically
useful range of breast thicknesses and X-ray tube potentials. The AEC is usually
under the cassette to keep magnification as small as possible and consists of an
Insight
Slot Scanning—Practical Detail
A commercial model marketed briefly by Fischer Imaging (Senoscan) illustrates the approach.
A tungsten-rhenium anode was used and the total scan time was 5–6 s, although each narrow
slice was only irradiated for 200 ms. The digital detector used indirect conversion technology with
a 10 mm × 220 mm CsI:Tl scintillation crystal coupled to a linear array of four slit-shaped charge-
coupled devices (CCDs). Each CCD was made up of a matrix of 25 µm2 pixels and each pixel was
fed by 25 × 5 µm optical fibres. Spatial resolutions of 54 µm for a large field image (21 cm × 29
cm) and 27 µm for a smaller field of view (11 cm × 15 cm) were attainable.
Automatic exposure control is different in a digital system since the image receptor itself,
or a sub-component of it, can be used to control the exposure. Therefore adequate expo-
sure for all types of breast can be achieved without the need for critical placement of the
AEC sensor. With such an AEC system, a brief pre-exposure may also permit automatic
selection of the best anode target and filter.
As for general radiology, comparison with screen-film mammography explains why digital systems are proving popular.
Insight
Resolution in Mammography
Great emphasis has been placed on resolution in mammography and the reader may have noted
that the pixel sizes given for digital systems do not permit the resolution obtainable with the best
film-screen combinations. Digital systems will be limited to somewhere between 6 and 10 lp mm–1,
compared to 20–22 lp mm–1 for film-screen. Notwithstanding, for a modern digital system at com-
parable dose, image quality seems to be very similar to that of film-screen images.
The reasons are as yet unclear but the discussion on factors affecting image quality in Chapter 7
is relevant. In respect of detector performance, MTF, noise characteristics, dose and DQE, as well
as resolution all affect image quality. Furthermore, the ability to enhance image features, espe-
cially contrast, by post-processing digital images may also offset the benefits of inherently better
resolution with film screen.
For a good review of the performance of digital detectors in mammography see Noel and
Thibault (2004).
Insight
Xeroradiography
Older books on the physics of mammography would have discussed this approach to breast imag-
ing. It is of considerable interest from the theoretical viewpoint because the imaging technique is rather novel.
Briefly, it depends on the way X-rays interact with a uniformly charged photoconductor (a spe-
cial form of semiconductor). Where many interactions occur, corresponding to low attenuation in
the breast, most of the charge leaks away. Pockets of charge remain where attenuation has been
high. When the imaging plate is covered in powder and exposed to a strong electric field, the field lines are distorted at the charge edges, which are highlighted by the powder. Thus xeroradiography produces
good spatial resolution and edge-enhanced images—a feature which causes its MTF to be better
at high spatial frequencies than at low spatial frequencies. However, the relatively poor contrast
sensitivity and relatively high radiation dose, combined with progressive improvement in film-
screens and, more recently, digital detectors, have rendered the technique non-competitive.
For more detail see for example, Dendy and Heaton, 2nd edition (1999).
TABLE 9.1
Breast Cancers Induced per Million Women per mGy
Age    Cancers per 10⁶ Women per mGy
25     18–30
35     15–25
45     7–17
55     3–12
65     1–8
Note: The ranges of values reflect the difficulty of mak-
ing accurate estimates.
Source: Data are from several studies tabulated by Law
J, Faulkner K and Young KC, Risk factors for
induction of breast cancer by X-rays and their
implications for breast screening, Br. J. Radiol.
80, 261–266, 2007.
TABLE 9.2
Results of 1994 and 2008 Surveys of Mammography Screening Units in East Anglia UK Using a Standard Breast Phantom (4 cm)
Year of Survey    Number of Units    Average MGD (mGy)    Range of MGD (mGy)    Range of Optical Density
1994              23                 1.17                 0.72–1.68             1.30–1.69
2008              30                 1.42                 1.20–1.88             1.56–1.90
Source: Courtesy Mr D Goodman and Mr O Morrish.
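As a rough worked example, the survey data in Table 9.2 can be combined with the risk coefficients in Table 9.1. The sketch below assumes a single exposure per breast and uses the age-55 band; both choices are illustrative rather than taken from the text:

# Illustrative combination of Tables 9.1 and 9.2 (assumed: one view per breast,
# age-55 risk band, 2008 average MGD).
mgd_per_view_mGy = 1.42                  # average MGD, 2008 survey (Table 9.2)
risk_per_million_per_mGy = (3, 12)       # age-55 band from Table 9.1

for rate in risk_per_million_per_mGy:
    print(f"~{rate * mgd_per_view_mGy:.0f} cancers induced per million women per view")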
FIGURE 9.7
Typical increase in MGD with breast thickness using a molybdenum anode at 30 kVp and molybdenum filter for a mean optical density of 1.5.
spectrum for radiographing thin breasts. For thicker breasts the dose reduction factor can
rise to as much as 5 and contrast enhancement of digital mammograms gives better qual-
ity images with higher energy beams.
Factors that can affect MGD are kept under critical review. These include the following:
For further comment on the impact of digital receptors on doses in mammography see
Section 13.7.2.
Strict attention to image quality is also required to ensure a high pick-up rate. Optical density must be checked—a low dose mammogram that produces too light a film is diagnostically useless. Test objects are available to check spatial resolution, threshold resolution and granularity, and the focal spot size must be checked carefully. Note that because of the line focus principle this may be different in different directions. Typical tolerance lim-
its on some of the more important variables are shown in Table 9.3.
Many of the QC checks for digital mammography are similar to those for screen-film systems because they are independent of the receptor; examples include generator performance (accuracy of kV, constancy of output), X-ray collimation and alignment, beam quality, focal spot resolution and the compression device.
Clearly established checking procedures for digital receptors, analogous to those for
screen film, are still evolving. However, some aspects where careful checks will be neces-
sary are detector uniformity, linearity of signal with dose, noise versus dose, resolution,
artefacts and image display—see Section 5.14.2.
TABLE 9.3
Typical Tolerance Limits on Some Important Variables in Mammography
Measure Tolerance Limit
Resolution 12 lp mm –1
reduced proportionately more than the kV is increased. A change from 75 kVp to 90 kVp
(20% increase) can halve the mAs. Thus operating conditions of, for example, 140 kVp and
200–500 mA allow short exposures to be used, typically 1–1.5 ms. Short exposures are an
advantage whenever movement may be a problem, for example movement artefact due
to the heart and great vessels in chest radiography, radiography of the gastro-intestinal
tract, digital subtraction angiography and some computed tomography (CT). Because of
the short exposures, the X-ray tube must be designed for high heat input to the anode in
the short term. The anode may complete less than one full rotation during the exposure.
Since less heat is produced overall, the tube has a longer lifetime. Note, however, that
the X-ray tube has to tolerate consistent use at high voltage so the manufacturer should
be informed at the time of installation if the tube is to be used in this way. The kV of the
generator output should be checked regularly since the tube will be operating near to its
electrical rating limit. High tension cables may develop problems more frequently than at
low kV.
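The quoted halving of the mAs is consistent with the common rule of thumb that, for a fixed mAs, the dose reaching the receptor varies roughly as the fourth power of the kVp. The exponent in the sketch below is that rule-of-thumb assumption, not a figure from the text:

# Sketch of the kVp/mAs trade-off, assuming receptor exposure ~ kVp**n * mAs with n ~ 4.
def mAs_for_same_receptor_exposure(mAs_old, kVp_old, kVp_new, n=4):
    return mAs_old * (kVp_old / kVp_new) ** n

print(mAs_for_same_receptor_exposure(20.0, 75.0, 90.0))   # ~9.6 mAs, roughly half of 20 mAs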
FIGURE 9.8
Graphs showing how, for a given object and a given film-screen combination, there may be no exposure latitude at low kV (a), but there is at high kV (b).
attenuation than A (e.g. the diaphragm) or less attenuation than B would show little or no
contrast at low kV. At high kV such tissues are more likely to fall on the linear part of the
characteristic curve.
With digital receptors detector latitude is not really a problem because the dynamic
range is large so by suitable selection of window level and window width, virtually all
signals can be displayed, irrespective of body attenuation. Note, however, that although
the algorithms of a computer-based system can easily adjust an image that has received
too much radiation at high kV, if there is too little signal because of high attenuation at low
kV or because the dose has been reduced too much at high kV, increased noise (quantum
mottle) may degrade the image and computer software cannot compensate for this effect.
Other consequences of moving to higher voltage, namely increased scatter, reduced dose, lower heat rating and higher electrical rating, will apply irrespective of the detector.
Furthermore, the physical principles on which kVp values and concomitant mAs values
are based are largely independent of the receptor and should only be altered when mov-
ing to a digital system if careful observation and measurement show that imaging will be
optimised as a result.
Insight
Does High kV Technique Reduce ESD?
In Section 9.3.1 we stated that an advantage of increasing kV was reduced patient dose. The
standard explanation is quite straightforward. As the kV increases, body attenuation is less, so to achieve the same dose at the receptor a lower entrance dose is required.
However, a practical study of ESD values for patients undergoing chest examination in a number
of radiology departments in East Anglia UK some years ago showed that average ESDs were actu-
ally higher at high kV (Wade et al. 1995). Further follow-up showed that several of the departments
were using grids for the high kV work and this negated the benefits of better penetration. The take-
home message is that for patient dose studies it is important to compare practical situations as well
as investigating the impact of a single variable.
An alternative method to reduce scatter reaching the film is the air gap technique, the
principle of which is illustrated in Figure 9.9. Imagine there is a small scattering centre
near the point where the X-rays leave the patient. At diagnostic energies, Compton scat-
tered X-rays will travel almost equally in all directions (see Section 3.4.3). Referring to the
diagram, as the cassette is progressively moved away from the patient, the number of scat-
tered X-ray photons intercepting it decreases.
It is also clear from the diagram that the first 20–30 mm of gap will be the most impor-
tant. Since there will be scattering centres throughout the body one might think that the
technique would not be very effective because there is a large ‘gap’ between most of the
FIGURE 9.9
Diagram showing schematically how the number of scattered photons intercepting the image receptor will decrease as it is moved away from the position of contact with the patient.
scattering points and the image receptor even when the latter is in contact with the patient.
However, low energy scattered photons produced near the entrance surface are heavily
absorbed within the patient, thus it is the scattered photons that originate close to the exit
surface which cause most of the problem.
Note also
Insight
More on Scattered Radiation
Figure 9.9 showing almost uniform distribution of scattered radiation is very much a first approxima-
tion. Closer study of the problem reveals minor but important variations in the pattern of scatter.
First, reference back to the polar diagrams in Figure 3.6 shows that slightly more radiation is scattered
in the forward direction than in the sideways directions. Furthermore, calculations such as those in
Section 3.4.3 show that photons scattered through small angles retain most energy. Finally, the most
energetic photons are attenuated least by body tissues. The result of combining all these factors is
that as kV increases, scatter leaving the patient is markedly more in the forward direction. One con-
sequence is that when grids are used the grid ratio will be much higher for high kV work (typically
20:1) than for low kV work (typically 4 or 5:1 for mammography). However, there is still sufficient
side scatter in high kV work for the air gap technique to be useful in some circumstances.
A number of other features of the air gap technique—for example, the effect of the finite
focal spot size on image sharpness, the effect of the penumbra and patient movement
on sharpness and the effect on patient dose are identical to those encountered in macro-
radiography and will be considered in the next section.
For a high kV chest technique, in the range say 125–150 kVp, a typical air gap might be
20 cm with a focus-receptor distance of 3 m. This would give comparable contrast to a 10:1
grid ratio. Both techniques result in an increased dose to the patient but the increase due
to the inverse square law as a result of the air gap is generally less than that required to
compensate for the grid.
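The comparison can be put into rough numbers. In the sketch below the 3 m focus-receptor distance and the 20 cm gap are taken from the text; the patient thickness and the grid Bucky factor are assumed values used only for illustration:

# Rough comparison of the inverse-square dose penalty of a 20 cm air gap with a
# typical grid Bucky factor, for a fixed focus-receptor distance (FRD).
FRD = 300.0      # cm (from the text)
gap = 20.0       # cm (from the text)
patient = 25.0   # cm, assumed chest thickness

# With the FRD fixed, the gap moves the patient closer to the focus, so the
# entrance (skin) dose rises by an inverse-square factor.
fsd_without_gap = FRD - patient
fsd_with_gap = FRD - gap - patient
air_gap_penalty = (fsd_without_gap / fsd_with_gap) ** 2

bucky_factor = 2.5   # assumed typical value for a high-ratio grid

print(f"air gap: entrance dose increased ~ x{air_gap_penalty:.2f}")
print(f"grid:    entrance dose increased ~ x{bucky_factor:.1f} (Bucky factor)")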
Although the air gap technique would appear to have a number of advantages, it is not
widely used, perhaps because the position of the image receptor relative to the couch has
to be changed.
Two final comments will be made on high voltage radiography. First, kV has no effect
on resolution, magnification or distortion because it causes no change in beam projection
geometry. Second, this discussion has been presented in terms of low voltage (60–70 kVp)
versus high voltage (125–150 kVp) techniques but intermediate voltages can of course be
used, with the consequent mix of advantages and disadvantages.
FIGURE 9.10
Geometrical arrangements for magnification radiography. (a) Assuming a point source of X-rays, then by similar triangles the magnification M = (d1 + d2)/d1, (b) and (c) demonstrate that for an object of fixed size and fixed magnification, the size of the penumbra increases with the size of the focal spot.
M ≃ 1.1. One way to achieve a magnified image is to magnify, using optical methods, a
standard radiograph. However, this approach is often not satisfactory as it produces a very
grainy image with increased quantum noise (see Sections 6.10 and 7.5). The alternative is to
increase the value of M and this can be done in two ways: (a) keep FRD (= d1 + d2) constant, but
increase d2, reducing d1; (b) keep d1 constant and increase d2. Unless otherwise stated, it will
be assumed that d2 is increased keeping FRD constant—this is the norm in mammography.
Although increasing d2 achieves the desired result, this has a number of other consequences
as far as the radiographic process is concerned and these will now be considered.
(size of image)/(size of object) = M + 2(M – 1)F/xy
where xy is the size of the object. Thus when M is large and F is of the order of xy, the
penumbra contributes significantly to the image.
If we think of xy as the smallest resolvable object distance in the image plane, large
values of F are clearly a problem. This is illustrated with three worked examples.
(a) High kV Chest—note that FRD is increased
Assuming the finest detail to be resolved is about 200 µm (0.2 mm) and that P
should be no greater than half this value gives F ~ 1.6 mm.
(b) Mammography with Minimal Magnification
M = 650/590 = 1.1. For a 0.3 mm spot, P = F(M – 1) = 30 µm which will permit reso-
lution of 100 µm calcifications.
(c) Mammography with Magnification
M = 650/430 = 1.5. If F = 0.3 mm, P = 0.3 × 0.5 = 150 µm which is too big. F must
be reduced to 0.1 mm to give an acceptable penumbra of 50 µm.
When the FRD is short, spots larger than 0.3 mm are of little use and 0.1 mm is preferable. This may impose rating constraints.
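The three cases can be reproduced with the two formulae used above, M = (d1 + d2)/d1 and P = F(M – 1); the short sketch below simply repeats the arithmetic of examples (b) and (c):

# Reproducing worked examples (b) and (c): M = (d1 + d2)/d1 and P = F(M - 1).
def magnification(d1_mm, d2_mm):
    return (d1_mm + d2_mm) / d1_mm

def penumbra_mm(F_mm, M):
    return F_mm * (M - 1.0)

# (b) minimal magnification: d1 = 590 mm, d2 = 60 mm (FRD = 650 mm)
M = round(magnification(590, 60), 1)          # 1.1, rounded as in the text
print(f"(b) M = {M}, P = {1000 * penumbra_mm(0.3, M):.0f} um with a 0.3 mm focal spot")

# (c) magnification mammography: d1 = 430 mm, d2 = 220 mm (FRD = 650 mm)
M = round(magnification(430, 220), 1)         # 1.5
print(f"(c) M = {M}, P = {1000 * penumbra_mm(0.3, M):.0f} um with a 0.3 mm spot,"
      f" {1000 * penumbra_mm(0.1, M):.0f} um with a 0.1 mm spot")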
A focal spot of 0.3 mm or less is not easy to measure accurately and its size may
vary with the tube current by as much as 50% of the expected value. A pin hole may
be used to measure the spot size (Section 2.7) but to estimate the resolution it is best
to use a star test pattern (Section 6.9.3). The performance of a tube used for magnifica-
tion radiography is very dependent on a good focal spot and careful, regular quality
control checks must be carried out. It can be difficult to maintain a uniform X-ray
intensity across the X-ray field using a very small focal spot. The intensity distribu-
tion may be either greater at the edge than in the centre or, conversely, higher in the
middle than at the edge. Such irregularities can cause difficulties in obtaining correct
exposure factors.
(ii) Receptor Unsharpness
Unsharpness caused by the limiting resolution of a film-screen combination is
reduced by magnification. To understand the reason for this, consider image forma-
tion for a test object that consists of eight line pairs per mm. If the object is in contact
with the screen (M = 1) the screen must be able to resolve eight line pairs per mm,
which is beyond the capability of fast screens. Now suppose the object is moved to a
point midway between the focal spot and screen (d2 = FRD/2 and M = 2). The object is
now magnified at the screen to four line pairs per mm, thereby making the imaging
task easier.
A similar argument applies to digital detectors which have a fixed pixel size—the
pixel ‘projects back’ to smaller dimensions in the object plane. For systems which
have a fixed matrix size, the effect would be different if the matrix were expanded to
cover a larger field of view, since pixel size would increase. The enlarged pixel would then project back to the same size as in the unmagnified state, so resolution would be unaltered.
Notwithstanding this finding that magnification generally improves receptor reso-
lution, readers may recall that in Figure 7.13 modulation transfer function fell more
rapidly as the magnification increased (curves C and D) implying inferior resolution
or more unsharpness. This is because focal spot size increases unsharpness and is
normally the dominant effect. For more detail see the ‘insight’.
Insight
Effect of Magnification on Resolution
From the discussion in the main text, if R is the intrinsic receptor unsharpness for an infinitesi-
mally thin object in contact with the receptor (M = 1), for any other value of M, UR (the receptor
unsharpness) = R/M.
But UR is only one contributor to the overall unsharpness. A more general expression for U is
(UG² + UM² + UR²)^1/2, see Section 6.9.4.
Referring to Figure 9.10, the penumbra P = F(d2/d1) = F(M – 1). If we divide by M, effectively
reducing blurring to its value in the object plane, then
UG (geometric unsharpness) = F(1 – 1/M). Movement unsharpness will be ignored for this
argument.
Therefore, expressed in terms of constants other than M, and relative to the object plane,
UR = R/M and UG = F(1 – 1/M). If UR ≫ UG then unsharpness is proportional to 1/M and decreases
with increasing M. If UG ≫ UR then unsharpness is proportional to (1 – 1/M) and increases with
increasing M.
The critical value is R/F, the ratio of the intrinsic receptor resolution to the focal spot size. If R/F is greater than about 0.5, then magnification radiography will reduce unsharpness; if R/F is less than about 0.5, magnification will increase unsharpness (Dance 1988). In practice for general radi-
ography R/F ≪ 0.5 (intrinsic receptor resolution ~0.1 mm, focal spot size ~1 mm) so resolution
decreases (blurring increases) with increasing magnification.
When UR and UG are comparable, unsharpness may decrease for small values of M and subse-
quently increase for larger values. These conditions can be achieved in magnification mammog-
raphy using a very small focal spot (~0.1 mm) and a magnification of 1.5. The small focal spot
limits tube current to about 25 mA, so if exposures greater than 100 mAs are necessary exposure
times will exceed 4 s.
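The competing behaviour of UR and UG can be made concrete with a short calculation. In the sketch below the general radiography values (R ~ 0.1 mm, F ~ 1 mm) are those quoted above, while the receptor unsharpness assumed for the magnification mammography case (R ~ 0.05 mm) is illustrative only:

# Overall unsharpness referred to the object plane: UR = R/M, UG = F(1 - 1/M),
# U = (UR**2 + UG**2)**0.5; movement unsharpness is ignored, as in the text.
import math

def unsharpness_mm(R_mm, F_mm, M):
    UR = R_mm / M
    UG = F_mm * (1.0 - 1.0 / M)
    return math.hypot(UR, UG)

cases = (
    ("general radiography (R = 0.1 mm, F = 1 mm)", 0.1, 1.0),
    ("magnification mammography (R = 0.05 mm assumed, F = 0.1 mm)", 0.05, 0.1),
)
for label, R, F in cases:
    values = ", ".join(f"M = {M}: {unsharpness_mm(R, F, M):.3f} mm" for M in (1.0, 1.2, 1.5, 2.0))
    print(label)
    print("  " + values)

With R/F = 0.1 the unsharpness grows steadily with M; with R/F = 0.5 it falls slightly up to about M = 1.5 before rising again, in line with the discussion above.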
Subtraction methods can be used with analogue images on film. Although no longer
the method of choice, a brief description helps to understand the principles involved and
some of the problems. The basic principle of the technique is quite simple. A radiograph
is a negative of the object data. If a negative of this negative is prepared (a positive of the
object data) and positive and negative are then superimposed, the transmitted light will
be of uniform intensity. This is because regions that were black on the original negative
are white on the positive and vice versa, the two compensating exactly. The positive of the
original image is often called the ‘mask’. When a second radiograph is taken of the patient,
with one or two details slightly different, for example, following the injection of contrast
medium, superimposition of the mask and the second radiograph will result in all the
unchanged areas transmitting uniformly, but the parts where the first and second radio-
graph differ will be visualised.
For the technique to be successful, the two images must superimpose exactly and no
patient movement must take place between exposures. The exposure factors for the radio-
graph and the tube output must remain the same to ensure an exact match of optical
density which should be in the range 0.3–1.7. The copy film making the mask must have a
gamma equal to –1.0. When the films are viewed together the combined optical density is
approximately 2.0 and a special viewing box with a high illumination is required.
In practice, digitised images are now used for all subtraction applications. This is
because it is relatively straightforward to subtract the counts/pixel in the mask image from
the counts/pixel in the modified image on a pixel by pixel basis to obtain the subtracted
image. In the remainder of this section it will be assumed that all images are digitised.
Insight
Logarithmic Subtraction
In Section 6.4 we showed that, in the absence of scatter, radiation contrast was unaffected by a
layer of uniformly attenuating material above (or below) the region of interest. This result arises
because contrast is defined as the logarithm of a ratio of intensities, to be consistent with the
logarithmic response of film.
A similar argument applies when digitised images are subtracted. Referring to Figure 9.11a, before contrast is added assume for simplicity that µt in the vessel is the same as in the tissue, that C1 is the count in a specified pixel and k is a constant. Then C1 = kI0 exp(–µt x). After the contrast medium is added (Figure 9.11b), C2 = kI0 exp(–µt (x – y) – µc y), so
C1 – C2 = kI0 [exp(–µt x) – exp(–µt (x – y) – µc y)]
This expression cannot be simplified any further. Hence C1 – C2 depends on both I0 and on x, the thickness of surrounding tissue.
On the other hand, taking Naperian logarithms (log10 simply introduces a numerical factor which
cancels out)
ln C1 = ln kI0 – µt x
ln C2 = ln kI0 – µt (x – y) – µc y
Subtracting, ln C1 – ln C2 = (µc – µt) y, which depends only on the thickness of the vessel and the difference in attenuation coefficients; it is independent of both I0 and x.
FIGURE 9.11
Simple models for calculating the attenuating effect of a narrow blood vessel, (a) before; (b) after addition of contrast. For explanation see ‘insight’.
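The practical consequence is that digital subtraction is carried out on the logarithms of the pixel values. The numerical sketch below uses the model of Figure 9.11; all values of µ, x, y and I0 are invented purely to show that the log-subtracted signal is independent of the overlying tissue:

# Logarithmic subtraction for the model of Figure 9.11 (all values invented).
import numpy as np

k, I0 = 1.0, 1.0e6            # detector constant and incident intensity
mu_t, mu_c = 0.02, 0.06       # per mm: tissue and contrast-filled vessel
y = 3.0                       # mm, vessel thickness

for x in (100.0, 150.0):      # mm, two different overlying tissue thicknesses
    C1 = k * I0 * np.exp(-mu_t * x)                    # mask frame
    C2 = k * I0 * np.exp(-mu_t * (x - y) - mu_c * y)   # contrast frame
    print(f"x = {x:.0f} mm: C1 - C2 = {C1 - C2:9.0f},  ln C1 - ln C2 = {np.log(C1) - np.log(C2):.3f}")

# ln C1 - ln C2 = (mu_c - mu_t) * y = 0.12 in both cases, independent of x and I0.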
FIGURE 9.12
Use of grids to remove scatter (a) a single grid; (b) grids both in front of the patient and in between patient and detector moved in synchrony. Two grids remove more scattered radiation than a single grid.
dose. Thus to improve S/N it is preferable to increase the concentration of contrast rather
than the dose (always bearing in mind possible side effects of the contrast medium).
9.5.2.2 Roadmapping
This is an excellent illustration of the power of subtraction imaging and is used to assist
the advancement of guide wires and catheters in the cardiovascular tree using as little
contrast as possible. The problem is that both the contrast agent and the wire/catheter are
radio-opaque and thus are difficult to distinguish.
Exact details vary between systems but the principles are as follows (a schematic sketch is given after the list):
1. Acquire a short native scene using fluoroscopy. Use the last image as a mask for
subtraction
2. Inject contrast medium and create subtracted images outlining the vascular tree—
dark against the background
3. Continue until contrast is maximum. This ensures the vascular tree is optimally
visualised
4. Invert the unsubtracted maximum contrast image so that the vascular tree appears
radiotranslucent
5. Now advance the wire catheter under fluoroscopy and use the inverted image as a
new mask for ‘subtraction’ (strictly speaking, addition because the contrast image
has been inverted). The wire/catheter appears black advancing in a white vessel,
thus making visualisation much easier (see Figure 9.13)
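The sketch below runs through the same steps with tiny invented arrays standing in for fluoroscopy frames; it is schematic only and all pixel values are made up:

# Schematic roadmapping with invented 'brightness' values (higher = lighter).
import numpy as np

background = 200.0
native   = np.full((1, 5), background)            # step 1: last native frame as mask
contrast = np.array([[200, 50, 50, 50, 200.0]])   # step 3: maximum-contrast frame (dark vessel)

subtracted = contrast - native                    # step 2: vessel outlined against a uniform background
roadmap    = 255.0 - contrast                     # step 4: inverted image, vessel now light

live = np.array([[200, 200, 30, 200, 200.0]])     # step 5: live frame, radio-opaque wire in centre pixel
display = np.clip(live + roadmap, 0, 255)         # 'subtraction' against the inverted mask

print("subtracted vessel image:", subtracted)
print("roadmap display:        ", display)        # dark wire pixel inside a white vessel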
FIGURE 9.13
Illustration of roadmapping. The lighter structure is the internal carotid artery. In the lower part of the image
the dark guide catheter (A) is fairly straight because it is sufficiently rigid to alter the shape of the flexible artery.
Above the microcatheter delivery system (B), the microcatheter (also darker) follows the contours of the finer
arteries up to the aneurysm (C). (Image courtesy of Dr J Higgins and Ms H Szutowicz.)
Insight
Synchrotron Radiation
A full treatment is beyond the scope of this book but a brief summary may be helpful. Radiation from
a synchrotron with energies in the X-ray range can be generated by accelerating electrons almost to the speed of light in a closed, circular trajectory in a storage ring. They are then caused to oscillate in a
transverse direction and behave like tiny transmitting antennae. The emitted X-rays have many of the
properties of laser beams—highly collimated, very intense (anything from 5 to 10 orders of magnitude
stronger than a conventional beam) and can be very monochromatic. Techniques exist to produce
either a single wavelength or a spectrum. Because the spectrum is so intense, a monochromator can
be used to create several narrow spectral lines (typically 30 eV wide) of useable intensity.
In a hospital environment there are, unfortunately, major logistic problems in both installing a
synchrotron, which is neither simple nor inexpensive, and in using it routinely.
FIGURE 9.14
Illustrating the use of wide energy windows with conventional spectra from a tungsten anode for dual energy subtraction. The spectrum on the left is generated at 60 kVp and a 30 eV window is centred on 20 keV. The spectrum on the right is generated at 140 kVp with a window centred on 70 keV.
FIGURE 9.15
Schematic diagram of a single exposure dual energy system.
FIGURE 9.16
Images obtained with a prototype dual energy chest unit; (a) the low kV image, (b) the high kV image, (c) the subtracted image. (Images courtesy of Professor Gary T Barnes.)
FIGURE 9.17
Creation of an artefact due to movement. The images show, schematically, counts/pixel in (a) the mask frame; (b) the contrast-enhanced frame; (c) the subtracted frame (b–a). A shift to the right in the bone/tissue boundary between frames (a) and (b) appears as an artefact in frame (c).
one from the other. Thus it is not necessary to use the same frame as the mask for each
subtraction.
Imagine that 30 frames s–1 have been collected for several seconds. In time interval
differencing a new series of subtracted images is constructed by using frame 1 as the
mask for, say, frame 11, frame 2 as the mask for frame 12, frame 3 as the mask for frame
13 and so on. Thus each subtracted image is the difference between images separated
by some fixed interval of time—one-third of a second in this example. Such processing
can be effective in situations where an organ is undergoing regular cyclical patterns of
behaviour, for example, in cardiology. Note that the statistics on a single frame may be
poor but the technique is still applicable to a fixed time difference between two groups of
frames. Both the time difference and number of frames can now be adjusted to give the
best images.
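A short sketch of the frame pairing described above follows; the frame data are random placeholders rather than real images:

# Time interval differencing: at 30 frames per second, pair each frame with the
# frame acquired 10 frames (one-third of a second) earlier.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.poisson(lam=100, size=(90, 64, 64)).astype(float)   # placeholder frames

interval = 10      # frames, i.e. one-third of a second at 30 frames/s
tid_series = [frames[i + interval] - frames[i] for i in range(len(frames) - interval)]
print(f"{len(tid_series)} subtracted images, each frame n+{interval} minus frame n")

# Averaging a group of frames before subtracting, while keeping the same time
# difference between the two groups, improves the statistics.
group = 5
averaged_diff = frames[40:40 + group].mean(axis=0) - frames[30:30 + group].mean(axis=0)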
Field Size—Several field sizes are possible, depending on the application, typically from
12 cm for cardiac work to 48 cm for body angiography (diagonal distance across a flat panel
detector [FPD]), and it is important to use the smallest that covers the region of interest.
Too large a field size will increase the volume of patient in the direct beam and increase
scatter both to other parts of the patient and to other staff in the room.
Robotics—This subject is outside the scope of this book but it is important to note the
potential of robotic systems in IR, especially where the robot can permit the intervention-
alist to stay further from the couch.
Image receptor—Digital techniques are having a big impact on IR as in other branches
of radiology. Cinefluorography and television monitors have been largely replaced by
CCD cameras and there is now increasing interest in replacing image intensifiers
with either direct conversion or indirect conversion FPDs (see Section 5.10). The FPDs have a better dynamic range, because there is no television camera, and an inherently better detective quantum efficiency. Both of these properties might reduce patient
dose. FPDs also have better spatial resolution, negligible contrast loss and no inherent
vignetting.
However, Davies et al. (2007) have recently reported on a careful comparison of three
commercial systems, all used for similar coronary angiography procedures. One uses a
state-of-the-art image intensifier (II) in which problems of geometric distortion, glare and
vignetting have been largely eliminated, coupled to a CCD camera with low noise digital
recording. The other two use indirect conversion FPDs. All the systems have high power
X-ray tubes, automatic dose control, programmable filters and similar frame rates.
Phantom dose measurements and clinical assessments of both patient dose and image
quality were made. Minor differences were noted, and these may be indicative of an
important contribution from processing algorithms. However, overall there was no clear
evidence that replacing IIs with FPDs produces any great improvement in either dose
reduction or image quality. As the cost of FPDs comes down, they will replace IIs because
of all the benefits of a totally digitised system (see Section 5.1). It is noteworthy that image
quality and dose may be only relatively minor considerations.
A typical specification for a state-of-the-art system for IR is shown in Table 9.4.
TABLE 9.4
Typical Specification for a State-of-the-Art IR System
1. X-ray tube mounted on a C-arm which can be rotated round 3 axes to obtain any desired orientation
2. Variable source-detector distance (80–120 cm)
3. Tube assembly—microprocessor controlled high frequency X-ray generator, 60–120 kV, 0.5–1000 mA,
switching times 0.5–800 ms, pulse frequencies up to 50 frames s–1, 0.4–1.0 mm focal spot size, maximum
power 100 kW, continuous power 3 kW, heat capacity 1.5 MJ
4. Collimator to adjust field of view
5. 2–8 mm copper filtration
6. Carbon fibre patient table
7. AEC pre-selecting kV, mA, pulse width and pre-filtration
8. Typical dose rates to image receptor—30 nGy per pulse for fluoroscopy (advancement of guide wires and
catheters, placement of stents, coils and closure devices). Higher doses of 300 nGy per pulse for digital cine
mode to produce low noise images of diagnostic quality
9. 40 cm × 30 cm FPD comprising a-Si diode matrix coupled to CsI scintillators, 3 k × 2 k matrix, 150 µm
pixels, 3.25 lp mm–1 resolution; or distortion-free image intensifier coupled to state-of-the-art CCD camera
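The detector entry in Table 9.4 is internally consistent: the limiting resolution of a digital detector is set by its pixel pitch through the Nyquist limit of 1/(2 × pitch), and the quoted 3.25 lp mm–1 sits just below that limit. A one-line check (a sketch, not from the text):

# Nyquist-limited resolution for the 150 um pixel pitch quoted in Table 9.4.
pitch_mm = 0.150
print(f"Nyquist limit = {1.0 / (2.0 * pitch_mm):.2f} lp/mm")   # ~3.33 lp/mm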
Insight
Dose Thresholds in IR Procedures
There are reasons why thresholds in IR might differ from those quoted in the literature, where
thresholds are based mainly on radiotherapy and accidents.
1. Unlike stochastic effects, which are thought to be strictly additive, tissue-specific effects can
be repaired so fractionated or low dose rate effects may be less severe.
2. At low dose rates very little is known about the effects of exposed area on thresholds.
3. There is increasing evidence that some patients are more radiation sensitive than others,
especially to skin injury. Possible explanations are enhanced genetic susceptibility to radi-
ation or a compromised healing process.
For many procedures, DAP is a poor measure of the maximum skin dose and risk
of tissue effects especially in cardiology where there are partially overlapping, or
TABLE 9.5
Good Practice for Investigating a Recorded Tissue Effect after a High Exposure
1. Monitor and record all entrance skin doses and DAP readings
2. Check monitor calibrations and, if possible set tighter tolerances
3. Match the entrance field to the position of the injury—especially for erythema
4. Seek advice from the Radiation Protection Adviser and radiotherapy colleagues
5. Check radiation protection reports for evidence of variation in performance of equipment
6. Review all irradiation programmes with a view to lower dose rates
7. Introduce routine post-procedure patient visits when the estimated skin dose exceeds a specified
limit—say 2 Gy
• Is one personal dose meter sufficient and should it be worn above or below the
apron?
• Are two dose meters required, one above and one below the apron?
• Is extremity monitoring—eyes, hands, legs necessary?
(1) Actions which reduce the dose to the patient will also reduce the dose to staff.
(2) The pattern of scattered radiation around the patient is not uniform. Readers are
advised to refer to Figure 3.7 again. This shows that the highest dose rates are in the
back-scattered direction. In the forward direction the patient is an effective attenu-
ator and in a practical situation further attenuation will be provided by the image
receptor. The implications for the interventionalist are clear. For lateral and oblique
views it is much better to work on the image receptor side of the patient than on
the X-ray tube side. If the tube axis is vertical, work with an under-couch tube if the
clinical procedure allows so that the dose to the upper body and eyes is reduced.
(3) Doses to the hands are very dependent on their position relative to the beam and
the hand movements required for a particular examination. These will vary from
one examination to another.
(4) The monitoring must be both relevant and proportionate to the risk.
Proper use of protection measures, for example, lead apron, thyroid collar, fixed and
moveable lead screens can be very effective. For example, a 0.35 mm lead apron and thyroid
shield will reduce the risk by at least a factor of 30, maybe more for some procedures.
Other personal shielding, for example, lead eye glasses and leaded gloves (giving up to
20% beam attenuation) may be necessary but they are inconvenient to wear and leaded
gloves severely inhibit manual dexterity.
Because of all the uncertainties, for a new procedure the best advice is ‘If in doubt, moni-
tor’. If readings are acceptably low, monitoring may be relaxed. Acceptability should be
judged in terms of work load and dose limits (see Section 14.3.3). For whole body monitor-
ing a single dose meter worn under the apron should suffice for annual effective doses
less than annual background radiation; an additional monitor should be worn over the
apron if doses are higher. Additional extremity monitoring can often be relaxed after ini-
tial measurements but should be continued if estimated annual doses exceed one-tenth of
the relevant dose limit.
There are also logistic problems with personal dosimetry in IR (e.g. staff forget to wear their badges), often because staff are focussed on the clinical procedure, not on their personal dose. Perhaps greater efforts should be made to relieve the operator of the respon-
sibility for their own monitoring. With fully digital systems one way to do this might be by
mathematical modelling using the exposure and projection data from the acquisition runs
stored by the equipment, generic data on scatter patterns around the X-ray tube, couch
and patient, and images of staff positions. This is a formidable project but, if successful, it
would have the added benefit of providing precise data on doses to patients.
exposure, and if the cable between the transformer and the tube has a significant capaci-
tance, there may be a post-exposure surge of radiation.
To achieve a rectangular configuration of kV with minimal ripple and hence a uniform
X-ray output over a very short exposure time (say 1 ms) requires a grid controlled 12-pulse,
multipurpose or constant potential generator. The shorter the exposure, the more stringent
the requirements. This leads to the rather paradoxical conclusion that the smallest patients
require the most powerful machines.
Note that it is generally not acceptable to achieve more ‘convenient’, that is longer, exposure times by reducing the kV. This will increase exposure times, by the converse of the processes discussed in Section 2.5.4, and will also increase contrast, but the penalty in terms of increased dose is likely to be high.
Added filtration—an optimised radiograph of a child cannot be obtained simply by
reducing the exposure time in proportion to body thickness. This can be appreciated from
Figure 9.18. At the depth in an adult corresponding to the entrance surface of the child
there will be not only beam attenuation but also beam hardening. This beam hardening
must be introduced externally for paediatric investigations. Additional filtration of 1 mm
aluminium plus 0.1–0.2 mm copper (equivalent to about 3–6 mm aluminium at standard
diagnostic radiographic voltages) is appropriate. Note that in general-purpose equipment
it may not be easy to change the filtration frequently, nor to avoid the risk of using the
wrong filtration for a given patient.
Attempts have been made to use filter materials with K shell absorption edges (see
Section 3.8 and Table 1.1) at specific wavelengths, for example, rhodium and erbium, with
K-edges at 23.2 and 57.5 keV, respectively. Considerable dose reduction can be achieved
with these filters.
Low attenuation materials—in view of the increased radiation sensitivity of the patient, the
use of carbon fibre or some of the newer plastics in materials for table tops, grids (if used),
front plates of film changers and cassettes is strongly recommended. In the voltage range
used for paediatric patients, dose reduction of up to 40% may be achieved.
Field size and collimation—inappropriate field size is a frequent fault in paediatric radiol-
ogy. Too small a field may result in important anatomical detail being missed. Too large
FIGURE 9.18
Schematic representation of the difference in entrance dose for an adult, a child and a neonate. The initial
attenuation in the adult must be introduced by added filtration for paediatric radiology.
a field will not only impair image contrast and resolution by increasing the amount of
scattered radiation but will also result in unnecessary ionising radiation energy being
deposited in the body outside the region of interest. In the neonatal period the tolerance for
maximum field size should be no more than 1 cm greater than the minimum value. After
the neonatal period this may be relaxed to 2 cm.
Effective immobilisation—incorrect positioning is another frequent cause of inadequate
image quality and no exposure should be allowed unless there is a high probability that the
exact positioning will be maintained. In paediatric work appropriate immobilisation devices
may be required to ensure that (i) the patient does not move, (ii) the beam can be centred cor-
rectly, (iii) the image is obtained in the correct projection, (iv) accurate collimation limits the
beam to the required area, (v) shielding of the remainder of the body is possible.
In many situations physical restraint will be a poor substitute for a properly designed
device.
Use of grids—there is much less scattered radiation from an infant than from an adult
and grids are frequently unnecessary especially for fluoroscopic examinations and spot
films. Therefore equipment in which the grid can easily be removed should be used.
If a grid is necessary, the grid ratio should not exceed 8:1 and the line number should not
exceed 40 per cm. Moving grids may cause problems at very short exposure times.
Image receptor—Table 9.6 shows some typical values for the dose required at the image
receptor for film-screen combinations and the lower limit of visual resolution for different
speed classes.
Note that these figures, which were obtained at 80 kVp with a suitable phantom, should
be used only as a guide. Exact values will vary with kV and filtration, especially that
caused by the object.
Reference to Table 9.6 shows that a high speed film-screen combination gives a big dose
saving. Furthermore it results in a shorter exposure which minimises motion unsharp-
ness, the most important cause of blurring in paediatric imaging. The slight loss in resolu-
tion is rarely a problem although it may be a reason for choosing a lower speed class for
selected examinations.
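Note that the doses in Table 9.6 follow the usual convention that the receptor dose is inversely proportional to the speed class. A minimal Python sketch of that relationship is given below; the scaling constant of 1000 µGy is inferred from the table itself and should be treated as illustrative rather than definitive.

```python
# Dose required at the image receptor for each film-screen speed class,
# assuming dose (µGy) ≈ 1000 / speed class, which reproduces the values in Table 9.6.
for speed_class in [25, 50, 100, 200, 400, 800]:
    dose_uGy = 1000.0 / speed_class
    print(f"speed class {speed_class:3d}: ~{dose_uGy:5.2f} µGy at the receptor")
# Doubling the speed class halves the receptor dose (and hence the exposure time),
# at the cost of a small loss in limiting resolution (see Table 9.6).
```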
Many of the advantages of digital image receptors and digital image handling are
important in paediatric radiology, not least the potential for dose reduction. However, the
underlying physics is not basically different from that discussed in Chapters 5 and 6.
Automatic exposure control—There are a number of potential problems with using AEC in
paediatric work (i) the system may not be able to compensate for the very large variation
in patient size; (ii) the detector may be too big for the critical region of interest; (iii) it may
not be possible to move the detector to the critical region; (iv) the usual ionisation chamber
of an AEC is built behind the grid, which is frequently not necessary; (v) a variety of film-
screen combinations may be required for different examinations and the AEC system will
have to be calibrated for each of them; (vi) the AEC may require a longer exposure time
than the radiographic examination.

TABLE 9.6
Relationship between Speed Class, Radiation Dose and Resolution for Different
Film-Screen Combinations

Speed Class   Dose Requirement at Image Receptor (µGy)   Lower Limit of Resolution (lp mm⁻¹)
25            40                                          4.8
50            20                                          4.0
100           10                                          3.4
200           5                                           2.8
400           2.5                                         2.4
800           1.25                                        2.0
An AEC specifically designed for paediatric work will have a small mobile detector that
can be positioned very precisely, mounted behind a lead-free cassette and must respond to an
absorbed dose that is considerably less than 1 µGy. The technical problems are such that it may
be preferable to use predetermined exposure charts based on the infant’s size and weight.
[Graph: mean surface entrance dose (µGy, 0–500) plotted against the number of criteria fulfilled (1–6).]
FIGURE 9.19
Correlation between fulfilment of X-ray technique criteria and dose for chest radiography in 10-month old
infants. (Adapted from Schneider K, Evaluation of quality assurance in paediatric radiology. Rad. Protec.
Dosimet. 57, 119–123, 1995, with permission.)
TABLE 9.7
Good Radiographic Technique for an AP Projection of the
Chest of a Neonate
Patient position: Supine
Nominal focal spot value: Less than 1.3 mm
Additional filtration: 1 mm Al + 0.2 mm Cu
Grid: No
Screen-film system: Nominal speed class 400
Focus-receptor distance: 100 cm
Radiographic voltage: 65 kVp
AEC: No
Exposure time: <4 ms
Protective shielding: Lead-rubber masking of abdomen
Source: Adapted from CEC European Guidelines on quality criteria
for diagnostic radiographic images in paediatrics. Report
EUR 16261 Luxembourg, Office for Official Publications of the
European Communities, 1996.
TABLE 9.8
Suggested Image Quality Criteria for AP Neonatal Chest Examinations
Reproduction of the vascular pattern in the central half of the lungs
Visually sharp reproduction of the trachea and proximal bronchi
Visually sharp reproduction of the diaphragm and costophrenic angles
Reproduction of the spine and paraspinal structures
Visualisation of retrocardiac lung and mediastinum
Source: Adapted from CEC European Guidelines on quality criteria for diagnos-
tic radiographic images in paediatrics. Report EUR 16261 Luxembourg,
Office for Official Publications of the European Communities, 1996.
More or less as a direct consequence of this work, in 1996 the European Commission
published ‘European Guidelines on Quality Criteria for Diagnostic Radiographic Images
in Paediatrics’. An example of good radiographic technique for an AP projection of the
chest of a neonate is shown in Table 9.7.
The suggested guideline skin entrance dose that should not normally be exceeded was
set at 80 µGy although this figure has subsequently been reduced to 50 µGy and even
30 µGy should suffice.
A list of image criteria is suggested in Table 9.8 and images may be compared using
evaluation methods such as those discussed in Chapter 7 especially receiver operator
characteristic (ROC) curves.
For a recent paper on dose and image quality optimisation in neonatal radiology see
Dougeni et al. (2007).
range of mA settings to make full use of the sensitivity of modern image receptors. Note
that narrower beams and shorter cycle times require an effectively constant output. Any
significant fluctuation, for example caused by the kV dropping with the mains frequency,
would cause banding on the film. If AEC is used there must be a back-up timing circuit to
prevent excessive over-exposure.
Many panoramic radiographs are unacceptable because of poor patient positioning and
alignment. Equipment must be provided with effective positioning aids, for example the
light beam type. There should be sufficient variation in exposure times to take advantage
of the fastest receptors.
Insight
Dose Savings in Dental Radiology
Table 9.9 shows how easily the effective dose can escalate due to poor technique. Fortunately,
many of these improvements have now been implemented but it is salutary to realise that with a
ball-park figure of 10 million intra-oral radiographs/annum a saving of 10 µSv/film is equivalent to
a collective effective dose saving of 100 manSv.
TABLE 9.9
Typical Effective Doses for Dental Examinations under Different Conditions

Examination and Conditions                                                            Effective Dose (mSv)
Two dental bitewings, 70 kVp set, 200 mm FSD, rectangular collimation, E speed film   0.002
Two dental bitewings, 70 kVp set, 200 mm FSD, round collimation, E speed film         0.004
Two dental bitewings, 50–60 kVp set, 100 mm FSD, round collimation, E speed film      0.008
Two dental bitewings, 50–60 kVp set, 100 mm FSD, round collimation, D speed film      0.016

Source: Adapted from NRPB Guidelines on radiology standards for primary dental care. Report by the Royal
College of Radiologists and the National Radiological Protection Board. Documents of the NRPB Vol 5
No 3, NRPB Chilton, 1994.
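As a worked illustration of the collective dose arithmetic in the Insight above, the short Python sketch below combines the ball-park workload of 10 million intra-oral films per annum with the best and worst per-examination doses from Table 9.9; the assumption that each examination comprises two films follows the table entries, and the result is intended only to show the order of magnitude.

```python
# Illustrative estimate of the collective dose saving from improved dental technique.
# Per-examination effective doses (for two bitewings) are taken from Table 9.9.
films_per_year = 10e6        # ball-park number of intra-oral radiographs per annum

dose_old_mSv = 0.016         # two bitewings, 50-60 kVp, round collimation, D speed film
dose_new_mSv = 0.002         # two bitewings, 70 kVp, rectangular collimation, E speed film

# Convert from 'per two films' to 'per film', then from mSv to Sv.
saving_per_film_Sv = (dose_old_mSv - dose_new_mSv) / 2 * 1e-3

collective_saving_manSv = saving_per_film_Sv * films_per_year
print(f"Saving per film: {saving_per_film_Sv * 1e6:.0f} µSv")
print(f"Collective dose saving: {collective_saving_manSv:.0f} manSv per annum")
# About 7 µSv per film, or roughly 70 manSv per annum; the round figure of
# 10 µSv per film quoted in the Insight corresponds to 100 manSv.
```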
References
Balter S. Interventional fluoroscopy—physics, technology and safety. Wiley-Liss, 2001, 170–180.
Brateman L and Karellas A. Mammography and other breast imaging techniques. In Advances in
medical physics. Eds, Wolbarst A B, Zamenhof R G, Hadlee W R, Medical Physics Publishing,
Madison, WI, 2006.
CEC. European Guidelines on quality criteria for diagnostic radiographic images in paediatrics. Report EUR
16261 Luxembourg, Office for Official Publications of the European Communities, 1996.
Dance D R. Diagnostic radiology with X-rays. In The physics of medical imaging. Ed., S Webb, IOP
Publishing Ltd, Bristol, UK, 1988, 38–40.
Davies A G, Cowan A R, Kengyelics S M, et al. Do flat panel detector cardiac X-ray systems con-
vey advantages over image intensifier based systems? Study comparing X-ray dose and image
quality. Eur Radiol, 17, 1787–1794, 2007.
Dendy P P. Commentary—Radiation risks in interventional radiology. Br J Radiol, 81, 1–7, 2008.
Dendy P P and Heaton B. Physics for diagnostic radiology (2nd ed.). IOP Publishing, Bristol, UK, 1999,
238–241.
Dotter C T and Judkins M P. Transluminal treatment of arteriosclerotic obstruction. Description of a
new technic (sic) and a preliminary report of its application. Circulation, 30, 654–70, 1964.
Dougeni E D, Delis H B, Karatzu A A, et al. Dose and image quality optimisation in neonatal radiography.
Br J Radiol, 80, 807–815, 2007.
IPEM Report 89, The commissioning and routine testing of mammographic X-ray systems. Institute of
Physics and Engineering in Medicine, York, UK, 2005.
Jennings R J, Eastgate R J, Siedband M P and Ergun D L. Optimal X-ray spectra for screen film mam-
mography Med Phys, 8, 629–639, 1981.
Law J, Faulkner K and Young KC. Risk factors for induction of breast cancer by X-rays and their
implications for breast screening Br J Radiol, 80, 261–266, 2007.
Noel A and Thibault F. Digital detectors for mammography – the technical challenges. Eur Radiol, 14,
1990–1998, 2004.
NRPB. Guidelines on radiology standards for primary dental care. Report by the Royal College of
Radiologists and the National Radiological Protection Board. Documents of the NRPB, Vol 5,
No 3, NRPB Chilton, 1994.
NRPB. Guidance for dental practitioners on the safe use of X-ray equipment. National Radiological
Protection Board, Chilton, Didcot, 2001.
Schneider K. Evaluation of quality assurance in paediatric radiology. Radiat Protect Dosimet, 57, 119–
123, 1995.
Wade J P, Goldstone K E, Dendy P P. Patient dose measurements and dose reduction in East Anglia
UK. Radiat Protect Dosimet, 57, 445–448, 1995.
Further Reading
Bushberg J T, Seibert J A, Leidholdt E M and Boone J M. The essential physics of medical imaging (2nd
ed.). Lippincott, Williams and Wilkins, Philadelphia. Mammography, 2002, 191–229.
Dowsett D J, Kenny P A and Johnston R E. The physics of diagnostic imaging (2nd ed.). Hodder Arnold,
chapter 9, 2006, 227–251.
Iannucci J M and Howerton L J. Dental radiography – principles and techniques (3rd ed.). Saunders,
Elsevier, St Louis, 2006.
Pisano E D, Yaffe M J and Kuzimiak C M. Digital mammography. Lippincott, Williams and Wilkins,
Philadelphia, 2004, 4–26.
Exercises
1. What are the basic requirements of an X-ray tube that is to be used for mammog-
raphy? How can these be achieved with
(a) a molybdenum anode tube; (b) a tungsten anode tube
2. Compare and contrast the use of film-screen combinations and digital receptors in
mammography.
3. Since a low kVp is required for high contrast, why should the use of a high kVp
sometimes be advantageous?
4. Outline the principles of magnification radiography and indicate its limitations.
5. Discuss the relative advantages and disadvantages of magnification
mammography.
6. Suggest some situations in which a subtraction technique might be used and indi-
cate how the required information would be obtained.
7. What is the effect of digital image subtraction on (a) quantum noise; (b) struc-
ture noise? How can the effect of quantum noise be reduced in digital subtraction
angiography?
8. Outline the ways in which dual energy subtraction imaging may be used to high-
light bony or soft tissue structures.
9. What are the important features of state-of-the-art equipment for interventional
radiology?
10. What are the most likely tissue injuries in interventional radiology? What arrange-
ments should be made to minimise such effects and to investigate them when they
occur?
11. Discuss the need for effective immobilisation in paediatric radiology.
12. Explain the need for good quality assurance procedures in dental radiology and
discuss the potential for reducing the collective dose to the population.
10
Diagnostic Imaging with Radioactive Materials
F I McKiddie
SUMMARY
This chapter covers the following aspects of imaging with radioactive materials:
CONTENTS
10.1 Introduction......................................................................................................................... 338
10.2 Principles of Imaging......................................................................................................... 339
10.2.1 The Gamma Camera..............................................................................................340
10.2.1.1 The Detector System................................................................................ 341
10.2.1.2 The Collimator..........................................................................................342
10.2.1.3 Pulse Processing.......................................................................................345
10.2.1.4 Correction Circuits..................................................................................346
10.2.1.5 Image Display........................................................................................... 347
10.2.2 Additional Features on the Modern Gamma Camera...................................... 347
10.2.2.1 Dual Headed Camera.............................................................................. 347
10.2.2.2 Whole Body Scanning............................................................................. 347
10.2.2.3 Tomographic Camera.............................................................................. 349
10.2.2.4 The Cardiac Camera................................................................................ 349
10.3 Factors Affecting the Quality of Radionuclide Images................................................. 349
10.3.1 Information in the Image and Signal to Noise Ratio......................................... 350
10.3.2 Choice of Radionuclide.......................................................................................... 351
10.3.3 Choice of Radiopharmaceutical............................................................................ 353
10.3.4 Performance of the Imaging Device.....................................................................354
10.3.4.1 Collimator Design....................................................................................354
10.3.4.2 Intrinsic Resolution..................................................................................354
10.1 Introduction
Nuclear medicine is popularly understood to be the use of radioactive materials to pro-
duce diagnostic images of biochemical processes within the body. Although the wider
term includes all applications of radioactivity in diagnosis and treatment, excluding
sealed source radiotherapy, in general usage it is taken to mean diagnostic imaging
in vivo.
However, this does not mean that the in vitro, or non-imaging, techniques are insignifi-
cant. These involve the measurement of samples taken from the patient and account for around
7% of the workload in a typical UK department (Hart and Wall 2005). The samples can
be blood, breath, urine or faeces and are labelled with both gamma and beta emitting
radionuclides. The requirement for accurate mathematical models of the processes under
investigation in many in vitro tests ensures that the results are absolute measures of physi-
ological processes such as glomerular filtration rate. For further details of the range of
in vitro tests see Elliott and Hilditch (2005).
The primary requirement in in vivo diagnostic imaging is the ability to obtain informa-
tion concerning the spatial distribution of activity within the patient. This chapter deals
with the physical principles involved in obtaining diagnostic quality images after a small
quantity of radioactive material has been administered to the patient in a suitable form.
The basic requirements of a good imaging system are as follows:
1. A device that is able to use the radiation emitted from the body to produce high
resolution images, supported by electronics, computing facilities and displays that
will permit the resulting image to be presented to the clinician in the manner most
suitable for interpretation.
2. A radionuclide that can be administered to the patient at sufficiently high activity
to give an acceptable number of counts in the image without delivering an unac-
ceptably high dose of radiation to the patient.
3. A radiopharmaceutical, that is a radionuclide firmly attached to a pharmaceutical,
that shows high specificity for the organ or region of interest in the body.
It is important to recognise that, when detecting in vivo radioactivity, sensitivity and spatial
resolution are in direct conflict and must be traded off against each other (see Figure 10.1). The arrangement on the left (Figure
10.1a) has high sensitivity because a large amount of radioactivity is in the field of view of the
detector, but poor resolution. The arrangement on the right (Figure 10.1b) has better resolu-
tion but correspondingly lower sensitivity. Since gamma rays are emitted in all directions,
the collimator ensures that the image is only made up of those events travelling perpendicu-
lar to the detector. This preserves the relationship between the position within the patient
from which the gamma ray was emitted, and its position of interaction in the detector.
In diagnostic imaging spatial resolution is important and sensitivity must be sacri-
ficed. A modern gamma camera (see Section 10.2.1) records no more than 1 in 10⁴ of the
gamma rays emitted from that part of the patient within the field of view of the camera.
Furthermore, any additional loss of counts in the complete system will result in an image
of inferior quality unless the imaging time is extended to compensate. Therefore this chap-
ter also considers the factors that limit image quality and the precautions that must be
taken to optimise the images obtained using strictly controlled amounts of administered
activity and realistic imaging times.
FIGURE 10.1
Collimator design showing conflicting requirements of sensitivity and resolution. Arrangement (a) where
the detector has a wide acceptance angle will have high sensitivity but poor resolution, whereas arrangement
(b) will have much better resolution but greatly reduced sensitivity.
in computed tomography (CT). This also allows the detectors to be rotated around the
patient in up to a 540° arc to obtain tomographic image data.
The collimation of the detectors allows the spatial relationship between the point of
emission of a gamma ray in the patient and the point at which it strikes the crystal to be
established (see Figure 10.2). Note that unlike a grid in conventional radiology, the col-
limator in radionuclide imaging has no role in discriminating against scatter within the
patient. The function is purely to ensure that all photons incident on the crystal are travel-
ling perpendicular to the crystal (or nearly so) when they interact.
The detectors on modern gamma cameras are generally rectangular with a crystal of
approximately 400 mm × 500 mm. Up to 100 PMTs will be arranged in a close packed
hexagonal array behind the crystal to improve spatial resolution. As shown in Figure 10.3,
the number of photons reaching each PMT, and hence the strength of the signal, will be
determined by the solid angle subtended by the event at that PMT. Hence, by analysing all
the PMT signals, it is possible to determine the position of the gamma ray interaction in the
crystal. Essential features of the gamma camera may be considered under five headings.
FIGURE 10.2
Use of a collimator to encode spatial information. In the absence of the collimator radiation from the source may
strike any point in the crystal.
FIGURE 10.3
Use of an array of PMTs to obtain spatial information about an event in an NaI(Tl) crystal. Light photons spread
out in all directions from an interaction and the signal from each PMT is proportional to the solid angle sub-
tended by the PMT at the event. The signal from PMT A is proportional to ΩA and much greater for the event
shown than the signal from PMT C which is proportional to ΩC.
[Block diagram: lead multiparallel hole collimator, NaI(Tl) scintillation crystal, light guide, PMTs, pulse processing electronics, pulse height analyser, correction circuits and display, all within lead shielding.]
FIGURE 10.4
Basic components of a gamma camera detector system. The fates of photons emitted from the source may be
classified as follows: (1) useful photon, (2) oblique photon removed by collimator, (3) scattered photon removed
by pulse height analyser, (4) absorbed photon contributing to patient dose but giving no information, (5) wasted
photons emitted in the wrong direction.
FIGURE 10.5
Interactions of gamma rays with thin and thick NaI(Tl) crystals. P = photoelectric absorption. C = Compton
scattering. With a thin crystal, many photons may pass through undetected, thereby reducing sensitivity. With
a thick crystal the image is degraded for two reasons. First, the distribution of light photons to the PMTs for an
event at the front of the crystal such as P1 will be different from the distribution for an event at the rear of the
crystal such as P2. Second, scatter in the crystal degrades image quality since the electronics will position ‘the
event’ somewhere between the two points of interaction in the crystal.
As shown in Table 10.1 a 12.5 mm crystal stops most of the 140 keV photons from techne-
tium-99m (Tc-99m), the most widely used radionuclide in nuclear medicine (see Section
10.3.2). However, it can also be seen that these crystals are less well suited to higher ener-
gies. The detector system is protected by lead shielding to stop stray radiation.
TABLE 10.1
Stopping Capability of a 12.5 mm Thick NaI(Tl) Crystal for Photons of Different Energy

Photon Energy (keV)   Interactions (%)
80                    100
140                   89
200                   60
350                   23
500                   15
FIGURE 10.6
The effect of different collimator designs on image appearance. (a) The parallel hole collimator produces the
most faithful reproduction of the object. (b) The diverging collimator produces a minified image but is useful
when the required field of view is bigger than the detector area. (c) The pinhole collimator produces an enlarged
inverted image and is useful for very small fields of view.
constructed from stacks of corrugated foil. The axes of the holes are perpendicular to the
face of the collimator and parallel to each other.
Performance of the collimator will be determined primarily by its resolution and sensitiv-
ity. As shown in Figure 10.7 long narrow holes will produce high resolution but low sensi-
tivity so these two variables work against each other. A typical low energy general purpose
collimator will have a resolution of 6 mm and a sensitivity of around 150 cps per megabec-
querel. However, a typical low energy high resolution collimator will have a resolution of 5
mm and a sensitivity of around 100 cps per megabecquerel. This emphasises the non-linear
relation between resolution and sensitivity in parallel hole collimators. The general pur-
pose and high resolution collimator pairs are the most widely used in routine diagnostic
imaging. The ‘low energy’ in their name refers to the fact that the thickness of the septa and
the size of the holes are optimised for gamma rays in the 120–140 keV range.
As the object is moved away from the face of a parallel hole collimator, resolution deteri-
orates markedly so all imaging should be done with the relevant part of the patient as close
as possible to the collimator face. Sensitivity is relatively independent of distance from the
collimator face, only decreasing if additional attenuating material is interposed.
Figure 10.7 also illustrates another problem. Higher energy gamma rays may be able to
penetrate the septa and this will cause serious image degradation. Thicker septa are now
required and for adequate sensitivity this also means larger holes and correspondingly
poorer resolution.
FIGURE 10.7
Diagram showing that oblique gamma rays will pass through many lead strips, or septa, before reaching the
detector. Typical dimensions for a low energy collimator are l = 25 mm, 2r = 3 mm, s = 0.2 mm. The number of
holes will be approximately 15,000.
FIGURE 10.8
Diagram showing the physical proportions and geometry of a parallel hole collimator with a point source
positioned at P.
Insight
Resolution and Sensitivity of a Collimator
The spatial resolution of a parallel hole collimator depends on the geometry of the holes, cor-
rected for any septal penetration. If the resolution RP of the image of a point source at P (see
Figure 10.8) is measured by its full width at half maximum height (FWHM) then
R_P = 2r(t_e + d + c) / t_e

where r is the hole radius and t_e the effective collimator thickness after septal penetration has been
accounted for,

t_e = t − 2/μ
where μ is the linear attenuation coefficient for gamma rays in the collimator material.
The sensitivity (or geometric efficiency) of the collimator is given by
Sens = [Kr^2 / (t_e(2r + s))]^2
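As a numerical illustration of these two expressions, the sketch below uses the typical low energy collimator dimensions given in the caption to Figure 10.7 (l = 25 mm, 2r = 3 mm, s = 0.2 mm). The attenuation coefficient of lead at 140 keV, the collimator-to-crystal distance c, the source distances d and the hole-shape constant K are illustrative assumptions, not values from the text.

```python
import math

# Typical low energy parallel hole collimator (dimensions from the Figure 10.7 caption).
t = 25.0       # collimator thickness l (mm)
r = 1.5        # hole radius (mm), i.e. 2r = 3 mm
s = 0.2        # septal thickness (mm)

# Assumed values, for illustration only:
mu_pb = 2.7    # approximate linear attenuation coefficient of lead at 140 keV (per mm)
c = 10.0       # collimator-to-crystal interaction plane distance (mm)
K = 0.26       # hole-shape constant, roughly appropriate for hexagonal holes

t_e = t - 2.0 / mu_pb                          # effective thickness after septal penetration
sens = (K * r**2 / (t_e * (2 * r + s)))**2     # geometric efficiency (fraction of emitted photons)
print(f"Effective thickness t_e = {t_e:.1f} mm")
print(f"Geometric efficiency = {sens:.1e}, i.e. roughly {sens * 1e6:.0f} cps per MBq "
      f"for a source emitting one photon per decay")

# Resolution R_P deteriorates as the source moves away from the collimator face,
# whereas the sensitivity is essentially independent of that distance.
for d in [0.0, 50.0, 100.0]:                   # source-to-collimator distance (mm)
    R_p = 2 * r * (t_e + d + c) / t_e
    print(f"d = {d:5.0f} mm : R_P = {R_p:5.1f} mm")
```

The rapid deterioration of R_P with d is the reason, noted above, for keeping the relevant part of the patient as close as possible to the collimator face.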
Other collimator designs are used for special purposes. A converging collimator will
magnify the image of a small organ (Figure 10.6b). A variation sometimes used to image the
brain is a cone-beam collimator. This gives improved sensitivity and resolution. However,
these collimators introduce distortion because the magnification factor depends on the
distance from the object plane to the collimator and is therefore different for activity in
different planes in the object. There are also variations in resolution and sensitivity across
the field of view as the hole geometry varies from being almost parallel at the centre to
highly angled near the edge.
To image small objects a pinhole collimator which functions in a manner analogous to
the pinhole camera may be useful (Figure 10.6c). The pinhole is a few millimetres in diam-
eter and effectively limits the gamma rays to those passing through a point. The ratio of
the size of image to the size of object will depend on the ratio of the distance of the image
plane from the hole to the distance of the object plane from the hole. The latter distance
must be small if reasonable magnification is to be achieved. The thyroid gland is the organ
most frequently imaged in this way. Note that the pinhole collimator suffers from the same
distortions as converging collimators.
Insight
Positional Signal Calculation
A simple method of demonstrating the positional calculation is shown in Figure 10.9. This assumes
that the field of view of each PMT is triangular, dropping to zero at the centre of each adjacent tube
(see Figure 10.9a). If the signals from all the tubes are simply summed, this produces the output
shown in Figure 10.9b. This is the energy signal Z. To obtain useful positional information, the
output must vary linearly with x. Therefore, the weighting factors ωj are used. In the case shown
in Figure 10.9c the weighting factors are ω1 = 2, ω2 = 1, ω3 = 0, ω4 = –1, ω5 = –2.
FIGURE 10.9
The use of weighting factors in a positional signal calculation. This example shows a calculation for the x-axis. A
similar calculation would be carried out for the y-axis. (a) A linear array of 5 PMTs; (b) Simply summing the sig-
nals produces an output which is independent of x, except at the edges of the array; (c) The sum of the weighted
signals produces an output which varies linearly with x.
As the weighting factors are energy dependent, allowance must be made for this by using a ratio
circuit for the final positional calculation. The positional signal for X is then expressed as
X = Σ_j ω_j PMT_j(x, y) / Σ_j PMT_j(x, y)
The energy signal Z is produced by summing all the unweighted PMT signals. This
signal is then subjected to pulse height analysis as described earlier in this section and the
XY signal is only allowed to pass to the processing system if the Z signal falls within the
preselected energy window.
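The weighted-sum calculation described above is easily demonstrated in a few lines of code. The sketch below uses the five-tube example of Figure 10.9, with the weighting factors quoted in the Insight; the individual PMT signal values and the energy window limits are invented purely for illustration.

```python
# Simplified Anger-logic position and energy calculation for the linear
# five-PMT array of Figure 10.9.  PMT signal values are illustrative only.
weights = [2, 1, 0, -1, -2]          # weighting factors from the Insight above
pmt_signals = [5, 40, 120, 35, 4]    # example amplified PMT outputs for one event

# Energy signal Z: simple (unweighted) sum of all PMT signals.
Z = sum(pmt_signals)

# Positional signal X: weighted sum divided by Z so that the result is
# independent of pulse amplitude (the 'ratio circuit' described in the text).
X = sum(w * p for w, p in zip(weights, pmt_signals)) / Z

print(f"Z (energy signal) = {Z}")
print(f"X (position signal) = {X:+.3f}   (0 corresponds to the centre of the array)")

# Only events whose Z signal falls within the preselected energy window
# are passed on to the display or processing system.
window_low, window_high = 180, 230   # illustrative window limits
print("accepted" if window_low <= Z <= window_high else "rejected by pulse height analyser")
```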
window will be the same. If the measured peak is shifted to one side, this will be reflected
in a higher number of counts in the corresponding window. Once again variations in the
measured photopeak from the true photopeak can be stored as a correction matrix for each
part of the crystal.
Finally it is important to monitor and adjust the gains of the PMTs. One way to do this is
to use light emitting diodes to flood the crystal with light.
FIGURE 10.10
Block diagrams showing the development of data processing facilities with a gamma camera. (a) 1970s. Anger
logic used to position events, as described in the text; pulse height analyser (PHA) discriminated against scat-
ter; displays were mainly analogue with the occasional option of analogue to digital conversion (ADC). (b) 1980s
to mid 1990s. PMTs were tuned individually; stand alone computer consoles were introduced with ADC as
standard; two or three PHAs were provided to allow more than one gamma ray energy to be collected; linearity
correction was introduced; images stored digitally. (c) Mid 1990s. ADCs fitted to the output from each PMT/
pre-amplifier; signals processed digitally throughout; multichannel analysers allow photons at many gamma
ray energies to be collected simultaneously; networking becomes a possibility.
bed moves under the detector and the Y position signals have an offset direct current
(DC) voltage signal applied to them which is a function of detector position. All the data
are collected in a single pass of the camera over the patient thereby producing a non-
overlapping image, hence facilitating interpretation, in the shortest possible scan time.
Scan rates are typically 8–12 cm per min, so an average 1.7 m adult can be imaged in
15–20 min.
The large rectangular detectors of modern systems are big enough to cover the lateral
field of view in a single pass, in all but the most extreme cases. Where the field of view is
insufficient, the quality of the scan is likely to be poor in any case due to the degree of scat-
ter and attenuation encountered in very large patients.
Most systems now incorporate automatic contouring to minimise patient-detector sep-
aration. This allows the resolution within the image to be kept approximately constant
throughout by maintaining a constant distance between the patient and the detector. If the
separation between detectors is constant, resolution will be degraded at the points where
the detector is furthest from the patient. The contouring techniques used include infra-red
beam, electrical impedance or ‘learn mode’ systems.
FIGURE 10.11
Images of a phantom at different count densities—(a) 20 kilocounts, (b) 200 kilocounts, (c) 2000 kilocounts.
The cold areas on the left and hot areas on the right become sharper as the number of counts in the image
increases.
A Mo-Tc generator consists of Mo-99 adsorbed onto the upper part of a small chromato-
graphic column filled with high grade alumina (Al2O3). When 0.9% saline solution is
passed down the column, the Mo-99 remains firmly bound to the alumina but the Tc-99m,
which is chemically different, is eluted. Essential features of a generator system are shown
in Figure 10.12. Since the Tc-99m builds up fairly rapidly (see Section 1.7), it is possible to
elute the column daily to obtain a ready supply of Tc-99m (Figure 10.13). The generator can
be replaced weekly, by which time the Mo-99 activity will have decreased significantly.
The Mo-99 required for manufacture of the generator systems is reactor produced at a
small number of sites worldwide. The vulnerability of this supply has become apparent
in recent years as the reactors age and become more susceptible to unexpected failures.
A number of unscheduled interruptions to generator manufacture have occurred with
serious consequences for the nuclear medicine community. Obtaining a secure and reli-
able supply of molybdenum is now one of the main issues facing nuclear medicine in the
coming years.
TABLE 10.2
Properties of Some Radionuclides Used for In Vivo Imaging

Nuclide          Half-Life     Type of Emission                                                     Example of Use
Carbon-11        20 min (a)    β+ giving 511 keV γ rays                                             CO2 for regional cerebral blood flow (c)
Nitrogen-13      10 min (a)    β+ giving 511 keV γ rays                                             Amino acids for myocardial metabolism (c)
Oxygen-15        2 min (a)     β+ giving 511 keV γ rays                                             Gaseous studies with labelled O2, CO2 and CO, labelled water (c)
Fluorine-18      110 min (a)   β+ giving 511 keV γ rays                                             Fluorodeoxyglucose for glucose metabolism
Gallium-67       72 h          92 keV, 182 keV, 300 keV γ rays                                      Soft tissue malignancy and infection
Technetium-99m   6 h (d)       140 keV γ rays                                                       Numerous
Indium-111       2.8 day       173 keV, 247 keV γ rays                                              Labelling blood products
Iodine-123       13 h          160 keV γ rays                                                       Thyroid and brain receptor imaging
Iodine-131       8.0 day       360 keV γ rays, β− particles                                         Metastases from carcinoma of thyroid
Xenon-133        5.3 day (e)   81 keV γ rays, β− particles                                          Lung perfusion studies
Thallium-201     73 h          Orbital electron capture (b); 80 keV X-rays and Auger electrons      Cardiac infarction and ischaemia

(a) Cyclotron produced positron emitter (see Chapter 11).
(b) Tl-201 decays by orbital electron or K shell capture. This is an alternative to positron emission when the
nucleus has too many protons and adjusts the balance by capturing an electron from the K shell. The initial
capture process may not result in any emission of radiation but characteristic X-rays will be emitted as the
vacancy in the K shell is filled. If the atomic number of the element is high enough (e.g. thallium Z = 81), this
characteristic radiation may be of high enough energy to be useful for imaging.
(c) Not widely used at present.
(d) Generator produced. Note that short half-life radionuclides that cannot be produced on site are of limited value for
in vivo imaging.
(e) Since Xe-133 is used in gaseous form, the biological half-life is very short so the β− particle dose is small.
Insight
Potential Dose to the Patient from Tc-99
As mentioned earlier, the ratio of the activities of a mother-daughter radionuclide pair is the inverse
ratio of their half-lives, if both are radioactive. Thus,

A_d / A_p ≈ T_{1/2,p} / T_{1/2,d}

where A_d is the activity of the daughter, A_p is the activity of the parent, T_{1/2,p} is the half-life of the
parent and T_{1/2,d} is the half-life of the daughter.
For the decay of Tc-99m to Tc-99 the half-lives are T_{1/2,p} = 6 hours and T_{1/2,d} = 2 × 10⁵ years.
Therefore, the ratio becomes

A_Tc-99 / A_Tc-99m ≈ 6 / (2 × 10⁵ × 365 × 24) ≈ 6 / (1.8 × 10⁹) ≈ 3.4 × 10⁻⁹
This demonstrates that only a tiny fraction (approximately 3 parts in a billion) of the radiation dose
to the patient is due to the contribution of the Tc-99. The biological half-lives of the radionuclides
must also be considered. In this case, the biological half-lives are approximately equal, so the physi-
cal decay becomes the significant factor. However, if the daughter radionuclide had a significantly
longer biological half-life, then the dose to the patient arising from it may become significant.
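The ratio derived in this Insight can be reproduced directly, as the minimal sketch below shows.

```python
# Ratio of Tc-99 to Tc-99m activity, using A_daughter / A_parent = T½(parent) / T½(daughter).
T_half_parent_h = 6.0                  # Tc-99m half-life (hours)
T_half_daughter_h = 2e5 * 365 * 24     # Tc-99 half-life (2 x 10^5 years) in hours

ratio = T_half_parent_h / T_half_daughter_h
print(f"A(Tc-99) / A(Tc-99m) = {ratio:.1e}")   # about 3.4e-09, i.e. ~3 parts in a billion
```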
FIGURE 10.12
Simplified diagram of a generator system that operates under positive pressure, showing the pressurised elution
vial, the lead-shielded column, the terminal filter and the collection vial.
FIGURE 10.13
Curve showing the Tc-99m activity in a Mo-99/Tc-99m generator as a function of time, assuming the column is
eluted every 24 h.
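The regrowth curve of Figure 10.13 follows from the standard parent-daughter decay relation referred to in Section 1.7. The sketch below illustrates one 24 hour build-up period; the Mo-99 half-life of about 66 h and the assumption that roughly 88% of Mo-99 decays feed the metastable state are generic nuclear data rather than values quoted in the text, and the starting activity is invented.

```python
import math

# Build-up of Tc-99m on a Mo-99/Tc-99m generator column after a complete elution,
# using the standard parent-daughter (Bateman) relation.
lam_mo = math.log(2) / 66.0    # Mo-99 decay constant (per hour), half-life ~66 h (assumed)
lam_tc = math.log(2) / 6.0     # Tc-99m decay constant (per hour), half-life 6 h
branching = 0.88               # approximate fraction of Mo-99 decays feeding Tc-99m (assumed)
A_mo_0 = 10.0                  # Mo-99 activity at the time of elution (GBq), illustrative

for t in range(0, 25, 4):      # hours after elution
    A_mo = A_mo_0 * math.exp(-lam_mo * t)
    A_tc = (branching * A_mo_0 * lam_tc / (lam_tc - lam_mo)
            * (math.exp(-lam_mo * t) - math.exp(-lam_tc * t)))
    print(f"t = {t:2d} h : Mo-99 = {A_mo:5.2f} GBq, Tc-99m = {A_tc:5.2f} GBq")
# By about 24 h the Tc-99m activity has grown back close to its maximum,
# which is why daily elution (Figure 10.13) works well in practice.
```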
readily available for all interested users, have a short effective half-life and be of low toxic-
ity. Very short half-life material may constitute a radiation hazard to the radiopharmacist
if it is necessary to start the preparation with a high activity.
Radiopharmaceuticals concentrate in organs of interest by a variety of mechanisms,
including capillary blockage, phagocytosis, cell sequestration, active transport, compart-
mental localisation, ion exchange and pharmacological localisation. The reader is referred
to a more specialised text (e.g. Frier 1994) for further details. The exact mechanism of
uptake into the organ of interest is often not vitally important, as long as sufficient is
accumulated. However, if functional parameters are to be derived and quantified then a
deeper understanding of the kinetic model underlying organ uptake is required (Peters
1998).
One disadvantage of Tc-99m is that, being a transition element, it is not easily bound to
biologically relevant molecules and its chemistry is complex (Nowotnik 1994). Nevertheless
in spite of the difficult chemistry, a wide range of pharmaceuticals has been labelled with
Tc-99m (Britton 1995) and good target to background ratios are sometimes achieved.
However, poor specificity of radiopharmaceuticals for their target organs remains a weak
point in nuclear medicine imaging, with most commonly employed radiopharmaceuticals
showing very poor selectivity, generally less than 20% in the organ of interest.
Note that the obvious elements to choose for synthesising specific physiological markers,
hydrogen, carbon, nitrogen and oxygen, have no gamma emitting isotopes. Pharmaceuticals
containing radioisotopes of some of these elements can be used for PET as discussed in
Chapter 11.
FIGURE 10.14
Derivation of system resolution from line spread function measurements and the effect of distance and scat-
tering material on system resolution. The traces are typical images of a 1 mm line source of Tc-99m obtained
under different conditions: (a) no scattering material, source on collimator face, FWHM = 5.7 mm, (b) 5 cm tis-
sue equivalent scattering material, FWHM = 7.1 mm, (c) 10 cm tissue equivalent scattering material FWHM =
8.6 mm.
U_D = (ΔC / C_mean) × 100%
where ΔC is the maximum difference in counts between any two adjacent pixel elements.
The standard deviation (SD) of the pixel counts can also be calculated and the coefficient
of variation is 100(SD/Cmean) where Cmean is the mean of all pixels within the field of view.
This is a useful measure for tracking change in non-uniformity over time. However, it
may miss local defects in the image and should always be used in conjunction with visual
inspection and another non-uniformity measure.
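As an illustration of these two measures, the sketch below computes them for a small array of flood-field pixel counts; the counts themselves, and the restriction to horizontal and vertical neighbours when searching for the largest adjacent-pixel difference, are assumptions made purely for the example.

```python
# Differential non-uniformity U_D and coefficient of variation for a small flood image.
counts = [
    [ 980, 1010, 1005,  990],
    [1002, 1025,  995, 1008],
    [ 997, 1040, 1012,  985],
    [ 990, 1000, 1020, 1001],
]

rows, cols = len(counts), len(counts[0])
flat = [c for row in counts for c in row]
c_mean = sum(flat) / len(flat)

# Largest difference between any two horizontally or vertically adjacent pixels.
delta_c = 0
for i in range(rows):
    for j in range(cols):
        if j + 1 < cols:
            delta_c = max(delta_c, abs(counts[i][j] - counts[i][j + 1]))
        if i + 1 < rows:
            delta_c = max(delta_c, abs(counts[i][j] - counts[i + 1][j]))

u_d = delta_c / c_mean * 100.0

# Coefficient of variation = 100 * SD / mean, useful for tracking drift over time.
sd = (sum((c - c_mean) ** 2 for c in flat) / len(flat)) ** 0.5
cov = 100.0 * sd / c_mean

print(f"U_D = {u_d:.1f}%    coefficient of variation = {cov:.1f}%")
```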
From the viewpoint of accurate diagnosis, camera non-uniformities must be minimised or
they may be wrongly interpreted as real variations in the image count density. As for linear dis-
tortion, it is now routine to collect and store a uniformity correction matrix that can be applied
to each image. For a well adjusted modern camera, integral non-uniformity over the centre of
the field of view should be less than 2%. The introduction of the digitised detector heads has
helped greatly in this respect, as they are far more stable than the previous analogue systems.
1. About 30 eV of energy must be dissipated in the crystal for the production of each
visible or ultraviolet photon.
2. Even assuming no loss of these photons, only about one photoelectron is produced
for every 10 photons on the PMT photocathode.
Thus to generate one electron the photocathode requires about 300 eV, so a 140 keV photon
will produce only about 400 electrons at the photocathode. This number is subject to
considerable statistical fluctuation (√N ≈ 20, or 5%). The result, as shown in Figure 10.15,
is that, even in the absence of scatter, monoenergetic gamma rays produce light signals
with a range of apparent energies. This spread, expressed as the
ratio of the FWHM of the photopeak spectrum to the photopeak energy, is a measure of
the energy resolution of the system and is about 10% for a gamma camera at 140 keV. The
spectrum is then further degraded by scatter in the patient (dotted curve).
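A minimal sketch of this statistical argument, using the figures of roughly 30 eV per scintillation photon and one photoelectron per 10 light photons quoted above (the Gaussian FWHM factor of 2.355 is a standard assumption, not a value from the text):

```python
import math

# Statistical limit on gamma camera energy resolution for 140 keV photons.
photon_energy_eV = 140e3
eV_per_light_photon = 30.0        # energy dissipated per visible/UV scintillation photon
photons_per_photoelectron = 10    # only ~1 in 10 light photons releases a photoelectron

n_pe = photon_energy_eV / (eV_per_light_photon * photons_per_photoelectron)
relative_sd = 1.0 / math.sqrt(n_pe)          # Poisson fluctuation, sqrt(N)/N
fwhm_fraction = 2.355 * relative_sd          # FWHM of a Gaussian is 2.355 standard deviations

print(f"Photoelectrons per 140 keV photon ~ {n_pe:.0f}")
print(f"Relative standard deviation ~ {relative_sd * 100:.1f}%")
print(f"Statistical contribution to FWHM energy resolution ~ {fwhm_fraction * 100:.0f}%")
# About 470 photoelectrons, a ~5% standard deviation and a FWHM of roughly 11%,
# consistent with the ~10% energy resolution quoted for a gamma camera at 140 keV.
```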
Unscattered photons contribute information to the image, so to avoid rejecting them a wide energy window
(typically about 20%) must be used. Unfortunately a wide energy window permits some
gamma photons that have been Compton scattered through quite large angles, and may
have lost as much as 20 keV, to be accepted by the pulse height analyser. The problem is
greater for low energy gamma photons, for two reasons.
FIGURE 10.15
Graph demonstrating the energy resolution of the NaI(Tl) crystal in a gamma camera. The FWHM (AB) is about
14 keV or 10% of the peak energy.
FIGURE 10.16
The effect of distance and scatter on image quality. (a) Test object in contact with collimator face. (b) 10 cm air
separation. (c) 10 cm separation and perspex scatter material. In all images 2000 kilocounts were collected (cf
Figure 10.11c).
Note that semiconductor detectors (see Sections 4.7 and 4.9) produce a narrower spread
and much better energy discrimination. However, as was mentioned previously, these
have only been produced commercially for small field of view gamma cameras designed
for cardiac imaging.
Modern gamma cameras allow more than one energy window to be set, thus accepting
several photopeaks. This can be useful when working with a radionuclide which emits
gamma rays at more than one energy, for example, Ga-67 or In-111 or when attempting to
image two radionuclides simultaneously.
As shown in Figure 10.16 scattered radiation causes deterioration in the image of a test
object, especially when the scattering material also increases the distance to the collima-
tor face. The patient is the major source of scattering material and there is an obvious
difference in image quality for say a bone scan of a very thin person when compared with
that of an obese person.
It might be assumed that due to the relatively low resolution of nuclear medicine data
compared to CT and MRI, the performance of the display screen is not of such great
importance. However, this would be erroneous as many nuclear medicine images are now
reported by the radiologist with reference to previous imaging. Therefore, the display
screen must be of sufficient performance to allow accurate assessment of images from
other imaging modalities. A detailed report on the performance of display screens, the
effect of the environment on their utility and their quality assurance has been produced
by the American Association of Physicists in Medicine (Samei et al. 2005).
Where it is necessary to produce hard copy images, most departments now use dry
carbon-based film which is readily utilised in networkable, high capacity printers. These
are generally DICOM compatible which greatly simplifies the networking task and allows
output in standard formats.
Digitised images permit graphical data to be produced and sophisticated forms of image
processing are possible. The techniques are similar to those discussed in Section 6.11. If the
images collected in a dynamic study are to be analysed quantitatively, they must of course
be digitised. This aspect will be discussed in more detail in the next section.
FIGURE 10.17
(a) Schematic drawings of regions of interest around kidneys and bladder for a renogram study. A background
region is also shown. (b) Typical activity—time curves for such regions of interest.
10.4.1.3 Deconvolution
Although radioactivity is injected intravenously as a bolus, after mixing with blood and
passing through the heart and lungs, it arrives at the kidneys over a period of time. Also
some activity may recycle. Thus the measured activity-time curve is a combination (convo-
lution) of a variable amount of activity and the rate of handling by the organ. It is possible
to measure the mean transit time for the organ and deconvolution is a mathematical tech-
nique that offers the possibility for removing arrival time effects and presenting the result
as for a single bolus of activity. However, in nuclear medicine the presence of noise limits
the power of deconvolution methods. It has been shown that the handling of the noise by
smoothing greatly increases the variability in measurements of this type (Houston et al.
2001) and they should, therefore, be treated with some caution.
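The noise sensitivity of deconvolution can be demonstrated with a toy calculation. In the sketch below an invented organ retention function is convolved with an invented, spread-out input function and then recovered by matrix deconvolution; the shapes of the curves and the noise level are arbitrary, and the example is only a sketch of the general principle, not of the methods analysed by Houston et al.

```python
import numpy as np

# Toy demonstration of why deconvolution of nuclear medicine curves is noise sensitive.
n = 30
t = np.arange(n)
inp = np.exp(-t / 3.0)
inp /= inp.sum()                     # spread-out arrival of activity (area normalised)
retention = np.exp(-t / 8.0)         # organ response to a perfect bolus

measured = np.convolve(inp, retention)[:n]   # what the region-of-interest curve records

# Lower-triangular convolution matrix; solving the linear system deconvolves the data.
A = np.array([[inp[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
recovered_clean = np.linalg.solve(A, measured)

noisy = measured + np.random.default_rng(1).normal(0.0, 0.005, n)   # ~0.5% added noise
recovered_noisy = np.linalg.solve(A, noisy)

print("max error, noise-free data :", float(np.max(np.abs(recovered_clean - retention))))
print("max error, 0.5% noise      :", float(np.max(np.abs(recovered_noisy - retention))))
# The error in the deconvolved curve is several times larger than the noise added to
# the measured data, and grows rapidly for more dispersed input functions, which is
# why such results must be treated with caution.
```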
C(t) = a + b sin(2πft + ϕ)
FIGURE 10.18
Time-activity curve over the left ventricle in a normal patient.
FIGURE 10.19
(See colour insert.) Functional image of a normal MUGA study. (a) The phase image represents the phase (or
timing) of the contraction of the heart chambers. (b) The amplitude image represents the amplitude of the con-
traction. In this case it can be seen that the largest contraction occurs apically and along the lateral wall. Both of
these parameters are derived from the Fourier fitted curve (Figure 10.18).
where a is a baseline constant, b is the amplitude of the motion, f is the reciprocal of the
number of frames per cardiac cycle and ϕ is the phase of the motion.
The most important quantity derived from the left ventricle TAC is the ejection fraction
LVEF = ((ED − ES) / (ED − bgnd)) × 100%
where ED = region of interest (roi) counts at end diastole, ES = roi counts at end systole. The
choice of roi boundaries and the background (bgnd) roi are critical.
All the data may be analysed pixel by pixel to produce two functional images, one of
which shows the phase of each part of the heart motion, the other showing the ampli-
tude. Note that in practice the curves will not be truly sinusoidal but methods of Fourier
analysis may be used to introduce further refinements if necessary. Such images are best
displayed using a colour scale rather than monochrome, as the relative magnitude of the
parameter in different parts of the image is more readily appreciated.
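A minimal sketch of these calculations, assuming an invented left ventricle time-activity curve sampled at 16 frames per cardiac cycle and an invented background value:

```python
import math

# Ejection fraction and first-harmonic (sine) fit of a left ventricle time-activity curve.
frames = 16
roi_counts = [9800, 9200, 8300, 7400, 6700, 6300, 6200, 6500,
              7000, 7600, 8200, 8800, 9300, 9600, 9750, 9800]   # illustrative
bgnd = 3000                                                     # background ROI counts

ED, ES = max(roi_counts), min(roi_counts)       # end diastole and end systole counts
lvef = (ED - ES) / (ED - bgnd) * 100.0
print(f"LVEF = {lvef:.0f}%")

# Fit C(t) = a + b*sin(2*pi*f*t + phi) with f = 1/frames: the first Fourier component
# gives the amplitude b and phase phi used to build the functional images.
f = 1.0 / frames
s_coeff = sum(c * math.sin(2 * math.pi * f * k) for k, c in enumerate(roi_counts)) * 2 / frames
c_coeff = sum(c * math.cos(2 * math.pi * f * k) for k, c in enumerate(roi_counts)) * 2 / frames
amplitude = math.hypot(s_coeff, c_coeff)              # = b
phase = math.degrees(math.atan2(c_coeff, s_coeff))    # = phi

print(f"Fourier amplitude = {amplitude:.0f} counts, phase = {phase:.0f} degrees")
# Repeating the fit pixel by pixel gives the phase and amplitude images of Figure 10.19.
```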
It can be seen in Figure 10.19 that for this normal patient, the phase is similar throughout
each chamber and the atria are contracting out of phase with the ventricles. The amplitude
(volume contraction) image demonstrates that the highest contraction is occurring apically
and along the lateral wall.
A major difficulty with functional imaging is the choice of a reasonably simple math-
ematical index that is relevant to the physiological condition being studied.
[Graph: observed count rate against expected count rate, both in units of 10⁴ cps, with the line of proportionality shown for comparison.]
FIGURE 10.20
Curve showing how the various dead times in the camera system result in count rate losses. In this example
10% and 20% losses in counts occur at about 6 × 10⁴ and 13 × 10⁴ cps, respectively. The maximum observed
count rate would be about 30 × 10⁴ cps after which the recorded count rate would actually fall with increasing
activity.
arithmetic circuits have minimum processing times and it may be an advantage to by-pass
the circuits that correct for spatial non-linearity, if any, to reduce processing time.
For all these reasons, if sources of known, increasing activity are placed in front of a
gamma camera under ideal conditions with no scattering material, a graph of observed
against expected count rate for a modern camera might be as shown in Figure 10.20. In prac-
tice, performance would be inferior to this because the camera electronics has to handle a
large number of scattered photons that are subsequently rejected by the pulse height analy-
ser. Thus the exact shape of the curve is very dependent on the thickness of scattering material
and the width of the pulse height analyser window. Loss of counts can occur at count rates
as low as 5 × 10⁴ cps and this may be a problem when making quantitative measurements.
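The general shape of the curve in Figure 10.20 can be mimicked with a simple dead-time model. The sketch below uses a purely paralysable model with an illustrative dead time chosen to place the 10% and 20% loss points near those quoted in the figure caption; a real camera does not follow any single simple model, so this is a demonstration only.

```python
import math

# Paralysable dead-time model: observed rate m = n * exp(-n * tau), where n is the
# expected (true) count rate.  tau is an illustrative effective dead time.
tau = 1.7e-6   # seconds (assumed)

for n in [2e4, 6e4, 1.3e5, 3e5, 5.9e5]:      # expected count rates (cps)
    m = n * math.exp(-n * tau)
    loss = (1 - m / n) * 100
    print(f"expected {n:8.0f} cps -> observed {m:8.0f} cps ({loss:4.1f}% loss)")
# The observed rate peaks at n = 1/tau and then falls with increasing activity,
# which is the behaviour sketched in Figure 10.20.
```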
However, the likelihood of encountering count rate problems in routine clinical prac-
tice has greatly reduced in recent years. The developments in modern digital processing
electronics have reduced the electronic processing time and, thus, increased the incident
count rates at which significant losses occur. There is also a reduced clinical requirement
for studies where high count rates may be encountered, such as first-pass cardiac stud-
ies. These have been replaced by non-radioactive alternatives including trans-oesophageal
echocardiography.
C(x, y) = ∫_V A(x′, y′, z) S(x − x′, y − y′, z) e^(−μt) dx′ dy′ dz

where the integral is taken over all of the volume V occupied by activity, S(x − x′, y − y′, z) represents
the response of the detector (at x, y, z) to a point source of activity (at x′, y′, z), μ is the linear
attenuation coefficient of the medium and t is the thickness of the attenuating medium traversed by the
gamma rays. The recovery of the function A(x′, y′, z) for the whole slice from the available data C(x, y)
represents a complete solution to the problem.
Use of a single value of μ is of course an approximation. Ideally, it should be replaced by a
matrix of values for the linear attenuation coefficient in different parts of the slice. To assist
in obtaining these values, gamma camera manufacturers are now producing SPECT/CT
systems where the tomographic gamma camera has a small CT system attached. Many
of these are low-dose, non-diagnostic CT systems, but some are fully diagnostic systems.
These have additional utility for imaging, similar to the PET/CT systems (see Chapter 11),
but create a number of new problems for nuclear medicine departments in terms of radia-
tion protection, room shielding and staff training.
Three fundamental limitations on emission tomography can be mentioned. The first is
collection efficiency. Gamma rays are emitted in all directions but only those which enter
the detector are used. Thus detection efficiency is severely limited unless the patient can
be surrounded by detectors. As has been mentioned earlier, in Section 10.2.2, the develop-
ment of dual headed camera systems has led to some improvement in this regard, although
efficiency remains a fundamental limitation.
The second limitation of SPECT is attenuation of gamma rays within the patient (see
Insight). The third limitation is common to all nuclear medicine studies, namely that the
collection time is only a small fraction of the time for which gamma rays are emitted.
Hence the images are seriously photon limited.
Insight
More on Attenuation Correction
Allowances for attenuation within the patient can be made and corrections simplified by adding
counts registered in opposite detectors. As shown in Figure 10.21, for a uniformly distributed
source a correction factor μL/(1 − e^(−μL)), where L is the patient thickness, can be applied. However,
experimental work indicates that the value of μ is neither that for narrow beam attenuation, nor
that for broad beam attenuation, but somewhere between the two. Note—this analysis does not
apply for a very non-uniform distribution. The reader can convince themselves of this by consider-
ing a point source that is not mid-way between A and B.
Although SPECT/CT systems have an advantage in determining the value of μ, there are still
issues of accuracy regarding the narrow versus broad beam conundrum and the conversion from
the values obtained at the kVp of the X-ray system to the monoenergetic photopeak energy of the
radionuclide being used.
Note that accurate attenuation correction is only essential when SPECT is used quantitatively.
Uncorrected images are generally acceptable for qualitative interpretation.
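A brief numerical illustration of the correction factor discussed in this Insight is given below; the effective attenuation coefficient and the patient thicknesses are illustrative values only.

```python
import math

# Correction factor mu*L / (1 - exp(-mu*L)) for a uniformly distributed source,
# applied to the summed counts from opposed detectors (see Figure 10.21).
mu = 0.012   # assumed effective linear attenuation coefficient (per mm) for 140 keV photons in tissue

for L in [150.0, 250.0, 350.0]:             # patient thickness (mm)
    factor = mu * L / (1 - math.exp(-mu * L))
    print(f"L = {L:3.0f} mm : correction factor = {factor:.2f}")
# The factor grows rapidly with patient thickness, one reason why quantitative
# accuracy and image quality are poorer in larger patients.
```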
The projection data required are normally collected by rotating the gamma camera
around the patient. Data are collected at fixed angular increments, typically 3° or 6°, for
15–20 min or during continuous rotation for the same time. The large field of view of mod-
ern detectors allows a large volume of the patient to be imaged in a single acquisition. The
flexible positioning of the detector heads also allows acquisitions to be carried out with the
heads at 90° to one another. This is typically used for cardiac SPECT where data are only
acquired over a 180° arc, and allows the acquisition time to be reduced by a factor of two.

FIGURE 10.21
Correction factor to be applied for gamma ray attenuation in the patient when the source of radioactivity is
uniformly distributed along the line between opposed detectors A (at x = 0) and B (at x = L). If the total activity
is I, then the activity per unit length is I/L and the activity in the strip dx is I dx/L. The signal recorded at A is
(I/L) ∫ e^(−μx) dx, integrated from x = 0 to x = L, where μ is the linear attenuation coefficient of the medium. The
signal recorded at B is the corresponding integral of e^(−μ(L−x)). Both expressions work out to (I/L)(1 − e^(−μL))/μ
and since, in the absence of attenuation, the signal recorded at A and B should be I, the total activity in the strip,
the required correction factor is μL/(1 − e^(−μL)).
Tomography places more stringent demands on the design and performance of gamma
cameras than conventional imaging. For example, multiple views must be obtained at pre-
cisely known angles and the centre of rotation of the camera must not move, for example
under its own weight, during data collection. The face(s) of the camera must remain accu-
rately parallel to the long axis of the patient and the mechanical and electronic axes of
the camera must be accurately aligned. Camera non-uniformities are more serious than
in conventional imaging since they frequently reconstruct as ‘ring’ artefacts. If views are
corrected with a non-uniformity correction matrix collected at a fixed angle, care must be
taken to ensure that the pattern of non-uniformity does not change with camera angle.
Such changes could occur, for example as a result of changes in PM tube gain due to stray
magnetic fields.
For all these reasons, especially very poor counting statistics, resolution is inferior in
SPECT to that in conventional gamma camera imaging and much inferior to CT. Resolution
becomes poorer as the radius of rotation of the camera increases. This is because the circumference is 2πr, so for N profiles the sampling frequency at the edge is N/2πr. For body sections the resolution is about 8–10 mm so 3° sampling (N = 60) is quite adequate. For example, with objects 20 cm in diameter N/2πr ≈ 0.1 mm⁻¹. If this is the minimum sampling frequency ν_m, then by the Nyquist theorem (see Section 8.4.2) the resolution limit set by sampling is ≈ 1/(2ν_m) ≈ 5 mm, which is less than the limit set by other factors.
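The arithmetic in the preceding paragraph can be checked with a short Python sketch; the values N = 60 and a 20 cm diameter object are those quoted above, and nothing else is assumed.

import math

N = 60              # number of angular profiles
diameter_mm = 200   # 20 cm diameter object
r = diameter_mm / 2

nu_m = N / (2 * math.pi * r)    # sampling frequency at the edge of the object (mm^-1)
limit = 1 / (2 * nu_m)          # Nyquist resolution limit (mm)
print(round(nu_m, 3), round(limit, 1))   # ~0.095 mm^-1 and ~5 mm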
The clinical demand for SPECT has increased markedly in recent years as the devel-
opment of slip-ring, dual headed gamma cameras has made data acquisition faster and
simpler. Studies such as myocardial perfusion and regional cerebral blood flow imag-
ing, which have to be carried out tomographically, have made up much of this demand.
However, SPECT is often now carried out on studies such as bone scans, for which it pre-
viously would not have been considered feasible, due to the improvements in acquisition
and processing systems. Table 10.3 contains a list of the ten most frequently performed
examinations in a typical Nuclear Medicine Department of a University Teaching Hospital
during 2008. This shows that a significant number of these examinations are performed
tomographically.
The gamma camera manufacturers are now providing dedicated processing and analy-
sis packages for many SPECT applications. Recent developments have included the pro-
duction of resolution recovery software. This allows the production of reconstructed data
of similar quality to the current standard using a reduced acquisition time. Times can typi-
cally be reduced by at least a half whilst maintaining image quality, allowing a significant
increase in patient throughput.
TABLE 10.3
Ten Most Frequently Performed Examinations in the Nuclear Medicine Department of a Teaching
Hospital during 2008
Study | Radiopharmaceutical | Study Type | Number of Investigations
Bone scan | 99mTc-MDP (a) | Whole body | 1769
Bone scan | 99mTc-MDP (a) | SPECT | 28
Myocardial perfusion scan | 99mTc-Tetrofosmin (b) | SPECT | 1706
Ventilation and perfusion lung scan | 99mTc-DTPA (c) aerosol and 99mTc-MAA (d) | Static | 1163
together in this brief overview of those aspects of the provision of a high quality nuclear
medicine service where physical principles are important.
The principal aim is to ensure that the requisite diagnostic information is obtained with
the minimum dose to the patient. Prime areas of concern are the accurate measurement of
the administered activity and the performance of the imaging device. However, consid-
eration must also be given to the safety of staff, other patients and the public, especially
since the patient themselves becomes a source of radiation, and to the release of radioac-
tive waste to the environment.
(d) Determination of attenuation of different materials, for example, glass and plastic
containers
(e) Calibration of measuring equipment
A protocol for establishing and maintaining the calibration of medical radionuclide cali-
brators and their quality control has been prepared by the National Physical Laboratory
(Gadd et al. 2006). Drawing on experience of traceability to national standards in radio-
therapy, the report gives recommended methods for calibrating reference instruments
at a large regional centre and for checking field instruments. It also gives guidelines on
the frequency of quality control tests and acceptable calibration tolerances for both types
of instrument. A 5% limit on overall accuracy is a reasonable practical figure for a field
instrument.
A number of variables can cause significant errors in radionuclide calibrators. Two are
particularly important.
(a) The container size and shape, and volume of fluid can be a problem with beta and
low energy gamma emitters because of self-absorption. Even the thickness of the
vial can affect calibration. These variables are less of a problem for energies above
about 140 keV but it is important that the gamma ray energy being used for imag-
ing is also the one being used for calibration by the calibrator. For example, the
160 keV gamma ray from I-123 is used for imaging but unless special precautions
are taken to filter out low energy radiation, the calibrator will respond to the low
energy characteristic X-rays at 35 keV.
(b) Contamination by other radionuclides can seriously affect calibrator accuracy if
the calibrator is much more sensitive to the contaminant than to the principal
product. Two examples are given in Table 10.4.
For Tl-201 1.5% contamination will overestimate the activity by 13%. For Sr-85, as little as
0.2% Sr-89 will overestimate the activity by 38%.
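The strontium figure follows directly from Table 10.4. A minimal Python sketch (using only the tabulated sensitivities and fractional activities) estimates the fraction of the calibrator reading contributed by the Sr-89 contaminant; the Tl-201 figure arises in the same way from the combined contributions of both thallium contaminants.

# Values from Table 10.4: fractional activity present and calibrator sensitivity (pA/MBq)
frac_sr85, sens_sr85 = 0.998, 0.028
frac_sr89, sens_sr89 = 0.002, 5.26

reading_sr85 = frac_sr85 * sens_sr85   # current due to the wanted radionuclide
reading_sr89 = frac_sr89 * sens_sr89   # current due to the contaminant

overestimate = reading_sr89 / reading_sr85
print(f"{overestimate:.0%}")           # approximately 38%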
Great precision in respect of the isotope calibrator is of little value if there is uncertainty
or variation in the amount of activity actually administered to the patient. Some possible
causes for the wrong activity being given would be
TABLE 10.4
Effect of Contaminants on Accuracy of Calibrator Measurement of
Radionuclide Activity
Radionuclide | Typical Activity Present (%) | Calibrator Sensitivity (pA MBq⁻¹)
Tl-200 | 0.5 | 2.56
Tl-201 | 98.5 | 0.86
Tl-202 | 1.0 | 5.03
Sr-89 | 0.2 | 5.26
Sr-85 | 99.8 | 0.028
(c) Retention of the radiopharmaceutical in the vial ('stickiness'), for example methoxyisobutylisonitrile (MIBI), Tl-201
It is important to check that all operators can draw up and inject a specified volume, say
1 ml, accurately. With a syringe shield in place it is possible for an inexperienced operator
to draw up almost no fluid at all.
TABLE 10.5
Performance Measurements for a Gamma Camera
Test | Frequency | Comment
Physical inspection | Daily |
Photopeak position | Daily |
Visual uniformity | Daily | Failure of a number of functions will show as non-uniformities in a Tc-99m flood image
Quantitative uniformity | Daily | Both integral and differential uniformity should be checked over the whole field of view and over the centre of the field of view
Extrinsic uniformity | Monthly | Measured with the collimator in position
Centre of rotation | Monthly | Measured using an off-centre point source
Tomographic quality | Monthly | Assessed qualitatively using a suitable tomographic phantom
Spatial distortion | 3-monthly | May be obtained from the same data set
Intrinsic resolution | 3-monthly |
System spatial resolution | 3-monthly | Measured with the collimator in position
Energy resolution | 3-monthly |
TABLE 10.6
Typical Performance Figures for a Gamma Camera
Parameter | Value | Conditions
Intrinsic spatial resolution | 3.7 mm | FWHM over the useful field of view
System spatial resolution | 6.5 mm | FWHM with high resolution collimator, without scatter, at 10 cm
Intrinsic energy resolution | 10% | FWHM at 140 keV
 | 14% | FWHM at 140 keV with collimator and scatter
Integral uniformity | 2.0% | Centre of field of view
Differential uniformity | 1.5% | Centre of field of view
Count rate performance | 130,000 cps | 20% loss of counts without scatter
 | 75,000 cps | Deterioration of intrinsic spatial resolution to 4.2 mm
System sensitivity | 160 cps MBq⁻¹ | Tc-99m and a general purpose collimator
Table 10.5 lists the more important tests and suggests an approximate frequency. There
is some variation between centres. For further detail on these procedures see Bolster (2003)
and Elliott (2005). Table 10.6 gives typical performance figures.
Note that quality assurance (QA) checks must not be unduly disruptive to the work
of the department. Daily checks should take no more than a few minutes and monthly
checks no more than 1–2 h.
There are many possible reasons for computer software failure, and complete software evaluation is extremely difficult. Since, however, the basic premise of software evaluation is that application of a programme to a known data set will produce known results, new software should be tested against data sets for which the correct results are already known. Any significant variation from the expected results can then be investigated.
10.7 Conclusions
The primary detector of gamma rays in nuclear medicine is invariably a sodium iodide
crystal, doped with about 0.1% thallium. The advantages of this detector are as follows:
1. A high density and high atomic number ensure a good gamma ray stopping effi-
ciency for a given crystal thickness.
2. The high atomic number favours a photoelectric interaction, thus a pulse is gener-
ated which represents the full energy of the gamma ray.
3. Thallium gives a high conversion efficiency of the order of 10%.
4. A short ‘dead time’ in the crystal generally permits acceptable counting rates
except for very rapid dynamic studies, when dead times both in the crystal and
elsewhere in the system can be important.
The instrument of choice for a wide range of static and dynamic examinations is the
gamma camera. Its mode of operation may be considered in two parts.
Over 90% of all nuclear medicine examinations are carried out with Tc-99m. The advan-
tages of Tc-99m for radionuclide imaging are as follows:
1. The number of counts that can be collected for given limits on radiation dose to
the patient, required resolution, and time of examination
2. The ability of the radiopharmaceutical to concentrate in the region of interest
3. The presence of, and ability to discriminate against, scattered radiation
4. Overall performance of the imaging device, including spatial and temporal linear-
ity, uniformity and system resolution
Modern cameras collect all data digitally and all counts within one pixel are summed.
For visual display the digital data are then converted into an 'analogue type' image by interpolation and
smoothing. Digitised images can also be used to extract, under computer control, func-
tional data for specified regions of interest. The gamma camera is now used extensively
for dynamic studies where important information is obtained by numerical analysis of
digitised images on a frame-by-frame basis.
Tomographic imaging is becoming an ever more significant part of the workload in
nuclear medicine. The large field of view digital detectors and improved computing power
have removed many of the previous obstacles to SPECT, although the count limited nature
of the process remains a fundamental obstacle.
References
BFCR, Reports www.rcr.ac.uk/publications.aspx?PageID=310, 2008.
Bolster, A., Quality Assurance in Gamma Camera Systems. Report No. 86, IPEM, UK, 2003.
Britton, K.E., Nuclear medicine, state of the art and science, Radiography, 1, 13, 1995.
Elliott, A.T., Quality assurance, in Practical Nuclear Medicine, 3rd ed., Sharp, P.F., Gemmell, H.G., and
Murray, A.D., Eds., Springer, London, 2005, chap. 5.
Elliott, A.T. and Hilditch, T.E., Non-imaging radionuclide investigations, in Practical Nuclear Medicine,
3rd ed., Sharp, P.F., Gemmell, H.G., and Murray, A.D., Eds., Springer, London, 2005, chap. 4.
Frier, M., Mechanisms of localisation of pharmaceuticals, in Text Book of Radiopharmacy Theory and
Practice, 2nd ed., Sampson, C.B., Ed., Gordon and Breach, London, 1994, 201.
Gadd, R. et al., Protocol for establishing and maintaining the calibration of medical radionuclide calibrators
and their quality control. Measurement Good Practice Guide No. 93, NPL, Teddington, 2006.
Hart, D. and Wall, B.F., UK Nuclear Medicine Survey 2003–4, Nucl. Med. Commun., 26, 937, 2005.
Houston, A.S. et al., UK audit and analysis of quantitative parameters obtained from gamma camera
renography, Nucl. Med. Commun., 22, 559, 2001.
Nowotnik, D.P., Physico-chemical concepts in the preparation of technetium radiopharmaceuti-
cals, in Text Book of Radiopharmacy Theory and Practice, 2nd ed., Sampson, C.B., Ed., Gordon and
Breach, London, 1994, 29.
Peters, A.M., Fundamentals of tracer kinetics for radiologists, Brit. J. Radiol., 71, 1116, 1998.
Samei, E. et al., Assessment of display performance for medical imaging systems, Report of the American
Association of Physicists in Medicine (AAPM) Task Group 18, Medical Physics Publishing, Madison,
2005.
Sharp, P.F., Dendy, P.P. and Keyes, W.I., Radionuclide Imaging Techniques, Academic Press, New
York, 1985.
Exercises
1. What is a radionuclide generator?
2. A dose of Tc-99m macroaggregated human serum albumin for a lung scan had
an activity of 180 MBq in a volume of 3.5 ml when it was prepared at 11.30 h. If
you wished to inject 23 MBq from this dose into a patient at 16.30 h, what volume
would you administer? (Half-life of Tc-99m = 6 h).
3. A radiopharmaceutical has a physical half-life of 6 h and a biological half-life of
13 h. How long will it take for the activity in the patient to drop to 15% of that
injected?
4. List the main characteristics of an ideal radiopharmaceutical.
5. What are the possible disadvantages of preparing a radiopharmaceutical a long
time before it is administered to the patient?
6. Why is the ideal energy for gamma rays used in clinical radionuclide imaging in
the range 100–200 keV?
7. In nuclear medicine, why are interactions of Tc-99m gamma rays in the patient pri-
marily by the Compton effect whilst those in the sodium iodide crystal are mainly
photoelectric processes?
8. The sodium iodide crystal in a certain gamma camera is 9 mm thick. Calculate the
fraction of the gamma rays it will absorb at (a) 140 keV and (b) 500 keV. Assume the
gamma rays are incident normally on the crystal and that the linear absorption
coefficient of sodium iodide is 0.4 mm⁻¹ at 140 keV and 0.016 mm⁻¹ at 500 keV.
9. Why is it necessary to use a collimator for imaging gamma rays but not for X-rays
produced by a diagnostic set?
10. What are the differences between a collimator used to image low energy radionu-
clides and one used to image high energy radionuclides?
11. Why is pulse height analysis used to discriminate against scatter in nuclear medi-
cine but not in radiology?
12. Compare and contrast the methods used to reduce the effect of scattered radiation
on image quality in radiology with those used in clinical radionuclide imaging.
13. How is the spatial resolution of a gamma camera measured and what is the clini-
cal relevance of the measurement?
14. What factors affect
i. The sharpness
ii. The contrast of a clinical radionuclide image?
15. Explain, with a block diagram of the equipment, how a dynamic study is per-
formed with a gamma camera.
16. A radiopharmaceutical labelled with Tc-99m and a gamma camera system were
used for a renogram. Curves were plotted of the counts over each kidney as a
function of time and although the shapes of both curves were normal, the maxi-
mum count recorded over the right kidney was higher than over the left. Suggest
reasons.
17. What are functional images? Illustrate your answer by considering one applica-
tion of functional images in nuclear medicine.
18. Explain how SPECT data is acquired and how the data are manipulated to pro-
duce a clinically useful image.
19. Describe the processes that must be undertaken to ensure that a tomographic
gamma camera system is functioning correctly for the acquisition of SPECT data.
20. Describe the steps that must be taken to fully test a new gamma camera system
after installation. Why may some of the steps previously required be unnecessary
on modern systems?
11
Positron Emission Tomographic Imaging (PET)
P H Jarritt
SUMMARY
• Positron emission tomography is based upon positron emitting radiotracers
which have a short half-life.
• The positron decay process inherently limits the resolution that can be
achieved by an imaging system.
• The detector properties and configuration are important in determining the
performance of an imaging system.
• Multiple data corrections are required to correct for limitations in the detec-
tion process and to provide quantitative images.
• PET system development has incorporated computed tomography (CT) scan-
ners for the attenuation correction process.
• ‘Fused’ PET and CT imaging has enhanced the diagnostic accuracy and
application of PET imaging, especially in radiotherapy planning.
• Radiation protection is an important issue in PET imaging for both staff and
patients.
CONTENTS
11.1 Introduction....................................................................................................................... 376
11.2 PET Radionuclide Production and Properties.............................................................. 377
11.3 Principles of PET Imaging and Detector Technology................................................. 378
11.3.1 Positron Decay...................................................................................................... 378
11.3.2 Coincidence Detection......................................................................................... 381
11.4 Detector Geometry........................................................................................................... 382
11.5 Detector Construction...................................................................................................... 382
11.6 Detector Resolution.......................................................................................................... 383
11.7 Detection Events...............................................................................................................384
11.8 Image Formation............................................................................................................... 385
11.9 Image Reconstruction...................................................................................................... 387
11.10 Multimodality Imaging................................................................................................... 388
11.11 Quality Control................................................................................................................. 389
11.12 Clinical Implementation—Radiation Safety Considerations for PET Imaging....... 390
11.12.1 Radiation Risks to Staff...................................................................................... 390
11.12.2 Radiation Dose to the Patient............................................................................ 393
11.1 Introduction
The development of PET as an imaging modality sprang from the recognition that emis-
sions from radioactive decay processes could be used to measure metabolic processes in
vivo. The radioactive tracer method was first applied by George de Hevesy in 1923 for
which he was awarded the Nobel Prize in Chemistry in 1943. The process of positron decay
was discovered in 1933 by Thibaud (1933) and Joliot (1933). Within 12 years positron emitting radionuclides were being used to undertake metabolic studies in animals using O-15 (Tobias et al. 1945), and within 20 years the first images from studies in man using coincidence detection were published (Anger and Gottschalk 1963). In these early years the potential and role of positron emitting radionuclides were established. Radionuclides of carbon (C-11), nitrogen (N-13), oxygen (O-15) and fluorine (F-18) were produced and incorporated into biologically active molecules to trace biochemical pathways and reactions.
for quantification was paramount and the application of coincidence detection of positron
emissions provides the basis for the realisation of this goal today. Over the ensuing period
a broad range of technological advances have occurred and this has led to the transition of
PET from the research laboratory to a routine clinical imaging tool.
Progress in PET has been dependent on several factors:
These developments alone were not sufficient to see the widespread adoption of PET
imaging in a routine clinical setting. This came with the development in 2000 of two com-
mercially available integrated PET and X-ray CT scanners to form the PET/CT scanner
(Beyer et al. 2000). This added the advantages of a high resolution anatomical imaging
modality (CT) to the highly sensitive functional imaging modality of PET, providing
inherently registered and ‘fused’ images (Figure 11.1). This combination of function and
structure has seen the widespread adoption of PET/CT in oncology as well as in neurologi-
cal and cardiac applications.
Many of the challenges of the use of positron emitters remain. The application of the
methodology is inextricably linked to the production and availability of well-characterised
radiopharmaceuticals targeted to specific biochemical pathways and processes. Such
tracers are important for diagnosis and disease staging but more importantly there is a
FIGURE 11.1
(See colour insert.) Typical PET/CT review screen showing CT, PET and fused image data sets: CT scout view, with transaxial and coronal CT, PET and fused images. The bottom right hand image is a rotating maximum intensity reprojection image.
protons, deuterons or α particles which have been accelerated to high energies to initiate
a nuclear transformation. The cyclotron consists of an ion source situated in the centre
of two semi circular copper electrodes (dees) contained within a vacuum and mounted
between the poles of a large electromagnet. The application of an alternating potential dif-
ference between the dees causes the charged ions to be accelerated within the dees to ener-
gies up to 20 MeV in a typical biomedical cyclotron. The high energy ions are extracted
from the dees to produce a beam of positively charged ions.
The most common ion source produces negative hydrogen ions (H⁻, a proton plus two electrons). The proton beam is generated by stripping the two electrons from the H⁻ ions on extraction from the cyclotron.
The short half-life of the resultant radionuclides typically requires direct transfer of the
irradiated target material to radiopharmaceutical synthesis modules to produce tracers
suitable for human use. An alternative source for some positron emitting radionuclides is
a radionuclide generator where the required, short lived, radionuclide is continually being
produced from a longer lived parent contained within the generator in a parent-daughter
relationship (cf Mo-99/Tc-99m, see Section 10.3.2). The advantage of this approach is that
the short half-life daughter product can be used remotely from the production site for the
parent and used over a period of weeks. Examples of cyclotron and generator produced
PET radionuclides are given in Table 11.1.
TABLE 11.1
Properties of Common PET Radionuclides Used for Clinical Imaging

Radionuclide | Production Reaction | Decay Mode | Half Life | Positron Energy (MeV) | Approximate Effective Range in Water (mm) (a)
C-11 | 10B (d,n) 11C | β+, OEC | 20.4 min | 0.96 | 0.4
N-13 | 16O (p,α) 13N | β+ | 9.96 min | 1.19 | 0.5
O-15 | 15N (p,n) 15O | β+ | 2 min | 1.72 | 1.0
F-18 | 18O (p,n) 18F | β+, OEC | 110 min | 0.635 | 0.25
Ga-68 | 68Ge–68Ga generator | β+, OEC | 68.3 min | 1.9 | 1.2
Rb-82 | 82Sr–82Rb generator | β+, OEC | 78 s | 3.35 | 2.6

(a) See Derenzo 1986.
OEC = orbital electron capture, see Table 10.2.
Source: Some data from Derenzo S E. Mathematical removal of positron range blurring in high-resolution tomography. IEEE Trans. Nucl. Sci. 33, 565–569, 1986.
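The half-lives in Table 11.1 determine how far a tracer can usefully travel from the cyclotron. A minimal Python sketch (the 2 h transport time is an assumption, chosen only for illustration) applies the usual decay law A = A₀ × 2^(–t/T½).

def remaining_fraction(t_min, half_life_min):
    # Fraction of the initial activity remaining after time t
    return 0.5 ** (t_min / half_life_min)

transport_min = 120  # assumed delivery time of 2 hours from a remote cyclotron
for nuclide, t_half in [("F-18", 110.0), ("O-15", 2.0)]:
    print(nuclide, f"{remaining_fraction(transport_min, t_half):.2e}")
# F-18 retains roughly 47% of its activity; O-15 is effectively all decayed and must be used on site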
p⁺ → n + e⁺ + ν + energy

¹⁸F → ¹⁸O + β⁺ + ν + 1.655 MeV
The energy released in this transformation is shared between the neutrino, the kinetic energy of the positron (positive electron) and the annihilation photons. Whilst the energy
released in the transition is characteristic of the transition, the kinetic energy of the posi-
tive electron ranges from 0 to Emax where Emax is the transition energy minus the energy of
the annihilation photons (Figure 11.2).
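For F-18 this relationship can be checked numerically with a minimal Python sketch, using only the 1.655 MeV transition energy quoted above and the 0.511 MeV annihilation photon energy.

Q = 1.655          # transition energy for F-18 positron decay (MeV)
E_photon = 0.511   # energy of each annihilation photon (MeV)

E_max = Q - 2 * E_photon   # maximum positron kinetic energy (MeV)
print(f"E_max = {E_max:.3f} MeV")   # 0.633 MeV, consistent with ~0.635 MeV in Table 11.1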
The positron, after ejection from the nucleus, loses its kinetic energy through scattering
interactions with the surrounding matter until it is effectively stationary. The range of
travel of the positron will be dependent on its initial energy. Once at rest it will combine
with a negative electron from the surrounding matter in an annihilation process in which
the rest masses of the two electrons are converted to energy. The energy is released in the
form of two annihilation photons each of energy 0.511 MeV. This characteristic of the posi-
tron decay process directly impacts upon the resolution that can ultimately be achieved
in PET. The point of the annihilation event is remote from the nuclear decay event, and
by inference the location of the radiotracer, by a distance determined by the energy of the
positron. This separation is often described by an effective positron range which takes
into account the distribution of positron energies and the perpendicular distance from the
point of decay to the line of coincidence (Table 11.1).
The principle of conservation of momentum determines that, in an ideal situation, if the
annihilation reaction occurs at rest the emitted photons will be emitted in exactly oppo-
site directions in a single plane (co-linear) (Figure 11.3). In practice there is some residual
momentum so the photons are emitted with a small angular distribution. The distribu-
tion of the angular deviation from co-linearity, where the photons are 180° opposed, has a width of approximately 0.5° (FWHM).
FIGURE 11.2
Energy level diagram for the decay of ¹⁸F to ¹⁸O (Q = 1.655 MeV) via positron emission (β⁺) and orbital electron capture (OEC) routes (see Section 1.4 and Table 10.2). (Reproduced with permission from Cherry S R, Sorenson J A and Phelps M E. Physics in Nuclear Medicine 3rd ed. Saunders, 2003.)
FIGURE 11.3
Illustration of the positron decay process showing positional errors in coincidence imaging caused by the 'effective positron range' (the positron, e⁺, travels some distance from the nucleus before annihilating with an electron, e⁻, to produce two 511 keV photons) and the non co-linearity of the annihilation photons.
TABLE 11.2
Example of System Resolution for F-18 with a 90 cm Diameter Detector and a 3 mm Crystal Element

Radionuclide | R_epr (mm) | Detector Diameter (cm) | R_co-lin (mm) | R_det (mm) | R_sys (mm)
F-18 | ≈0.25 | 90 | ≈2.1 | 3 | 3.7
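The entries in Table 11.2 are consistent with the common assumption that the independent blurring contributions add in quadrature; the quadrature rule itself is not stated in the text and is assumed in the short Python sketch below, as is the use of the full 3 mm crystal size for the detector term.

import math

R_epr = 0.25    # effective positron range for F-18 (mm)
R_colin = 2.1   # non-co-linearity blurring for a 90 cm diameter ring (mm)
R_det = 3.0     # detector (crystal element) contribution (mm)

R_sys = math.sqrt(R_epr**2 + R_colin**2 + R_det**2)
print(f"R_sys = {R_sys:.1f} mm")   # ~3.7 mm, as quoted in Table 11.2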
In time-of-flight (TOF) systems the difference in the arrival times of the two annihilation photons is used to estimate the position of the annihilation along the line of response, with a positional uncertainty

Δd = (Δt × c)/2

where c is the velocity of light and Δt is the timing resolution. For a timing resolution of 100 ps (1 ps = 10⁻¹² s), Δd = 1.5 cm. Typical values for current TOF systems are 400–600 ps
leading to positional uncertainties of 6–9 cm. It should be noted that this uncertainty is
significantly less than the diameter of a PET tomograph and the implementation of TOF
has the potential to improve quantitative accuracy for the detection process by reducing
the uncertainty in the measurement.
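The localisation uncertainty follows directly from Δd = Δt × c/2, as the following minimal Python sketch shows for the timing resolutions mentioned above.

C = 3.0e8  # speed of light (m/s)

def tof_uncertainty_cm(delta_t_ps):
    # Positional uncertainty along the line of response, in cm
    return C * (delta_t_ps * 1e-12) / 2 * 100

for dt_ps in (100, 400, 600):
    print(f"{dt_ps} ps -> {tof_uncertainty_cm(dt_ps):.1f} cm")
# 100 ps -> 1.5 cm; 400 ps -> 6.0 cm; 600 ps -> 9.0 cm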
A coincident event is registered when the following two conditions are satisfied:
(1) Two events are detected within a pre-defined electronic timing window known as
the coincidence window.
(2) The energy deposited within the individual detectors is within a selected energy
range.
FIGURE 11.4
Illustration of the definition of the line of response (LOR) for a coincident event, formed by the two annihilation photons (γ), in a positron ring detector with discrete crystals. Each interaction within a crystal is termed a 'single' event.
FIGURE 11.5
Illustrations of detector geometries; (a) Cross-section through the axial plane of a multiple ring PET detector showing limited coincident planes (crystal ring plane plus one adjacent ring only) in 2-D mode. (b) Cross-section through the axial plane of a multiple ring PET detector showing all potential coincident planes (subset shown) in 3-D mode. (c) Cross-section through the transverse plane of a PET ring detector showing angular limits for in-plane coincident events defining the field of view of the detector.
Insight
System Efficiency
It is important to note that the system efficiency is based upon the square of the individual detector
efficiency as two events must be processed to provide a coincident event. The highest intrinsic effi-
ciency is provided by BGO although the processing time for a scintillation event is relatively long. For
high count rate applications LSO or LYSO (lutetium yttrium oxyorthosilicate) is used. It has a slightly
lower efficiency but has a much faster processing time leading to improved energy and randoms dis-
crimination. The same applies to GSO although it is the least efficient of the available detectors.
For TOF applications high speed scintillators are required to provide the timing resolution
required to obtain the additional positional information. At present the scintillator still remains
the rate limiting step. LSO, LYSO and GSO are all used in commercial TOF machines but faster
crystals such as BaF2 (barium fluoride) have been used in research systems with limited success
due to their poor intrinsic efficiency.
It should be noted that for human imaging systems the non-co-linearity effect is the most
significant and that its impact is significantly reduced for small diameter animal imaging
systems (Table 11.2).
TABLE 11.3
Properties of Scintillators Used for PET Imaging

Scintillator | Relative Light Output (NaI ≡ 100%) | Decay Time (nsec) | Thickness for 90% Efficiency at 511 keV (cm)
NaI | 100 | 230 | 6.75
BGO | 15 | 300 | 2.42
LSO/LYSO | 70 | 40 | 2.66
GSO | 25 | 60 | 3.29
CsF | 5 | 2.5 | 5.40
BaF2 | 2 | 0.6 | 5.07
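The last column of Table 11.3 implies an effective linear attenuation coefficient μ = ln 10/d₉₀ for each scintillator at 511 keV. A minimal Python sketch (the 20 mm crystal thickness is an assumption made only for illustration) uses this to compare single-crystal and coincidence detection efficiencies, the latter being the square of the former as noted in the Insight above.

import math

thickness_for_90pc = {"NaI": 6.75, "BGO": 2.42, "LSO/LYSO": 2.66, "GSO": 3.29}  # cm, from Table 11.3
crystal_cm = 2.0  # assumed crystal thickness

for name, d90 in thickness_for_90pc.items():
    mu = math.log(10) / d90                    # effective attenuation coefficient at 511 keV (cm^-1)
    single = 1 - math.exp(-mu * crystal_cm)    # detection efficiency of one crystal
    print(f"{name:9s} single {single:.0%}  coincidence {single**2:.0%}")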
The events registered by the detection system may be classified as follows:
(1) A single event. This is the detection of a single event in a single detector. It is an
important measure because these events will be directly proportional to the activ-
ity within the field of view of the detector.
(2) A true coincidence. This is the result of the detection of both photons from a single
annihilation event with neither photon interacting with the surrounding material
and which are detected within the coincidence timing and energy discrimination
windows—the ‘wanted’ image forming events.
(3) A random coincidence. Such an event occurs when two or more photons from differ-
ent decay events are recorded within the coincidence timing and energy discrimi-
nation windows with the remaining photons being lost to the detection system.
The detection logic will assign these as valid events but they will not be related
to the valid decay event. The number of random events will be a function of the
activity contributing to events within the field of view of the detector.
The random event rate RE = 2τN₁N₂, where N₁ and N₂ are the single event rates for the two detectors and 2τ is the coincidence window width.
It is essential that a correction is applied to correct for random events which
contribute to a loss of quantitative accuracy. A number of methods exist to correct
for random events including estimates from the singles rate and an independent
measure based on delayed coincidence windows for one or a pair of detectors.
Random events which comprise more than two simultaneous single events are
discarded as these cannot be assigned to a pair of detectors unambiguously. Such
events will also be related to the singles count rate.
(4) Scattered events. These events arise when one or both photons from a single anni-
hilation event are detected within the coincidence timing and energy discrimina-
tion windows but have undergone a Compton scattering interaction (Figure 11.6).
Whilst Compton scattering results in a loss of energy from the photon the poor
energy resolution and the resulting wide energy discrimination window mean that such events may not be rejected on the basis of their energy.
FIGURE 11.6
Illustration showing the positioning of a coincident event (line of response) due to the scattering of an annihilation photon; (a) in the transverse plane; (b) in the axial plane; (c) in the axial plane due to scattering of out-of-field activity.
The event is thus assigned to a line of response (LOR) joining the two detectors in which
the photons were detected which will not be correlated with the annihilation event. Scatter
events reduce contrast within the image and result in a loss of quantitative accuracy in the
reconstructed image. This definition is true for a radiotracer located within the field of view
of the detector. However, a coincident event can also be generated by photons scattered from
annihilation events outside the immediate field of view and by scattering from the gantry
and bed environment. Unlike random events, the proportion of scattered events is not a function of count rate but is constant for a particular object and radioactive distribution. The proportion of accepted
coincidences that result from scattered events is known as the scatter fraction. The size of
the scatter fraction is dependent on many factors including the geometry of the PET scanner,
the energy discrimination window and the size and density of the scattering media and the
out-of-field of view activity.
One method of reducing the scatter fraction is to operate the scanner in 2-D mode where
each ring of detectors is separated from the adjacent rings by tungsten septa. Coincidences
can then only be recorded between a restricted set of detectors resulting in a scatter fraction
of approximately 15%–20%. All scanners can also be operated in 3-D mode where the septa
are removed, yielding no physical restrictions on the possible lines of response. (It should
be noted that the next generation of PET/CT systems will only operate in 3-D mode.) In this
configuration the scatter fraction typically exceeds 35%.
A correction for scattered events is required to provide quantitative data. A variety of
methods have been used to undertake scatter corrections. The methods can be divided
into two distinct classes, first those based on estimates from multiple energy windows and
second those based on simulations of the scatter component from an initial reconstruction
of the data. This latter technique can be applied iteratively to refine the estimates. The reader
is referred to Meikle et al. (2003) for a more detailed explanation of correction methods.
Accurate scatter correction remains perhaps the most complex component of the PET imag-
ing process.
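The growth of random coincidences with activity, noted in the list of detection events above, can be illustrated with a minimal Python sketch; the coincidence window of 2τ = 9 ns and the singles rates are assumed values, chosen only to show the scaling.

tau = 4.5e-9   # half of an assumed 9 ns coincidence window (s)

def randoms_rate(n1, n2):
    # Random coincidence rate R = 2*tau*N1*N2 for a detector pair (rates in cps)
    return 2 * tau * n1 * n2

for singles in (1e5, 5e5, 1e6):
    print(f"singles {singles:.0e} cps -> randoms {randoms_rate(singles, singles):.0f} cps")
# Doubling the singles rate (i.e. the activity in the field of view) quadruples the randoms rate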
FIGURE 11.7
Illustration of the formation of projection profiles from parallel lines of response (parallel coincidences) within a single plane of detectors (2-D imaging), for projection angles from 0° to 180° (e.g. 135°). For image storage and reconstruction purposes, this data is reorganised into sinograms where each projection profile forms a horizontal line in the image.
Each LOR has a particular probability of occurrence due to geometric and electronic
considerations and a normalisation process must be implemented before reconstruction to
correct for this effect (see Section 11.11). Once formed, the parallel projections are typically
reformatted to yield sinograms (a single image formed as a stack of all projection profiles
for a single plane). Image reconstruction can then proceed as for any projection image set
(see Section 8.4.2).
Image reconstruction methods based on filtered back projection (FBP) and on iterative algorithms have both been applied in PET imaging.
A sequence of corrections is usually applied to the data before or during reconstruction
to yield quantitative results. These include detector normalisation and corrections for random events, scattered events, attenuation and dead time.
The reader is referred to other texts (e.g. Defrise et al. 2003; Meikle et al. 2003) for a detailed
explanation of methods used to implement these corrections.
The application of attenuation corrections using CT data is of particular interest. For a
coincident event to be detected, both photons must have traversed the body unattenuated.
The probability of attenuation for a particular LOR can be measured by comparing the
count rates from a source when attenuated by the patient and when not attenuated by the
patient (a blank scan calibration). Typically, rotating positron emitting rod sources were
placed inside the gantry to provide these measurements. The introduction of CT provided
an alternative method for attenuation determination with a better signal to noise ratio for
the attenuation map and higher spatial resolution in a much shorter imaging time. The
disadvantages of this method relate to the need to interpolate attenuation coefficients from
CT energies (120–140 kVp) to 511 keV. This is not a linear transformation across the tissue
attenuation range. Studies are often performed in the presence of intravenous or oral con-
trast and algorithms have been developed to ensure that these do not modify the attenua-
tion coefficients used for PET data reconstruction.
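A commonly used approach to this conversion is a bilinear scaling of the CT number, with one slope for soft-tissue values and a shallower slope for bone-like values; the specific coefficients in the Python sketch below are illustrative assumptions, not the values used by any particular scanner.

MU_WATER_511 = 0.096  # linear attenuation coefficient of water at 511 keV (cm^-1)

def mu_511_from_hu(hu, bone_slope=5.1e-5):
    # Simplified bilinear conversion of a CT number (HU) to a 511 keV attenuation coefficient.
    # Bone attenuates relatively less at 511 keV than at CT energies, hence the shallower slope.
    if hu <= 0:
        return MU_WATER_511 * (1.0 + hu / 1000.0)   # air (-1000 HU) -> 0, water (0 HU) -> 0.096
    return MU_WATER_511 + bone_slope * hu

for hu in (-1000, 0, 60, 1000):
    print(hu, round(mu_511_from_hu(hu), 4))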
The short acquisition times for CT data provide a further complication in that physiological
motion due to respiration and cardiac function is frozen in space at the time of acquisition.
The PET data is collected as a signal average typically over a period of 2–5 minutes. This will
inherently lead to misalignments in the two datasets and is most noticeable for structures
around the diaphragm. As system resolutions are improved this becomes an ever increasing
problem. The development of motion correction methodologies is the most significant chal-
lenge in improving both the diagnostic and quantitative accuracy of the technique. Typically,
PET and CT studies will be acquired as dynamic studies to permit the elimination of the
motion component, thus ensuring optimal alignment of PET and CT datasets.
The response of a PET detector is not spatially invariant. This is typically manifest as vari-
ations in sensitivity and resolution within the field of view of the detector. The correction of
these variations is particularly challenging especially in relation to variations in resolution.
Insight
Related Technologies
Whilst this chapter has focused on PET and PET/CT technology it is necessary to place this in the
context of other developments in imaging with radionuclides. Anger gamma cameras designed for
tomographic (SPECT) imaging have been enhanced with the introduction of SPECT/CT systems in
a similar manner to PET/CT. The broader range of tracers and radiopharmaceuticals would appear
to offer many of the advantages of PET/CT systems. Whilst attenuation and scatter correction for
single photon emitters is more complex than for positron emitters, the availability of an emission
and transmission map will permit quantitative imaging using iterative reconstruction and system
modelling. The limitations of a collimated system are that resolution and sensitivity are related
by an inverse square law such that improvement in resolution by a factor of two will result in a
four-fold decrease in sensitivity. In practical terms little progress is being made to overcome this
physical limit. The sensitivity of such collimated systems relative to PET remains low.
Most SPECT systems comprise two opposed gamma camera detectors and a considerable
investment was made to use these opposed detectors in coincidence mode to provide a dual
SPECT/PET capable system.
The PET acquisition is achieved by the removal of the collimators, greatly enhancing the system
sensitivity. The two gamma cameras are then operated in the equivalent of a 3-D mode in PET
although a rotation of the detector through 180° is required to obtain full 360° sampling. The
construction of such dual detection systems required the use of much thicker NaI crystals with
typically 25 mm being used compared to 9 mm for standard SPECT systems. This was still insuf-
ficient to attenuate 511 keV photons effectively and compromised the resolution characteristics of
the gamma camera for single photon use.
As with a 3-D PET system the data rates that are encountered are extremely high and the single
crystal design of the gamma camera severely affected the data rates that could be processed com-
pared to the discrete detector configurations in PET. This led to the need to reduce injected doses
leading to increased study times.
Whilst the intrinsic resolution of the gamma camera is higher than for the PET detector the much
poorer signal to noise ratios which could be achieved in clinical studies led to poorer image qual-
ity and effective resolution. It is no longer considered appropriate to undertake PET imaging using
gamma camera based systems and the introduction of 511 keV photons into a routine nuclear
medicine environment is not recommended due to the limited equipment and environmental
shielding normally encountered.
It must, however, be noted that improvements in sensitivity for PET detector designs will only
come with significant increases in detector area. This could be achieved by the introduction of
additional detector rings or the introduction of PET-optimised panel detectors using scintillation or
semi-conductor photomultiplier technologies.
As has been demonstrated, the use of 3-D imaging leads to higher proportions of random
and scatter events. Increasing detector area increases this proportion further to a point where
these events dominate the coincidences recorded, leading to a poorer detector performance as
characterised by the noise equivalent count rate (NECR) that can be achieved at specific activity
levels.
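The noise equivalent count rate referred to here is commonly defined as NECR = T²/(T + S + R), where T, S and R are the true, scattered and random coincidence rates; variants exist (for example a factor of 2 on R), so the short Python sketch below is indicative only, with assumed illustrative rates.

def necr(trues, scatter, randoms):
    # Noise equivalent count rate (common definition), all rates in cps
    return trues**2 / (trues + scatter + randoms)

print(f"{necr(100_000, 20_000, 10_000):.0f}")   # ~77000 cps
print(f"{necr(100_000, 60_000, 80_000):.0f}")   # ~42000 cps: same trues, more scatter and randoms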
Such systems have been developed in a research and development context but have not
yet been introduced as clinical systems. Whilst such systems would support rapid whole
body imaging their development does not provide additional capabilities in relation to discrete
organ imaging. Their development may be more useful in the development of simultaneous
PET/MR systems where anatomical and functional data may be acquired in a truly simultane-
ous manner.
TABLE 11.4
Multimodality Imaging Developments
Combination | Method | Status
SPECT/CT | 1 & 2 | Routine clinical, animal work
PET/CT | 1 & 2 | Routine clinical, animal work
Optical/CT/SPECT/PET | 1 | Animal work
SPECT/PET/MR | 1 | Clinical research
PET/MR | 2 | Animal work / technical development
tracers and so on. Each will provide a unique view of the anatomy (physical characteristic)
or physiological function contributing to the production of the image.
The combination of PET and CT illustrate the principles involved. Integrated systems
offer a number of advantages over the combination of images using software tools. First,
and most importantly, the images are inherently aligned due to the aligned geometries of
the acquisition systems. With adequate immobilisation no subsequent alignment of the
images should be required. Second, the images are closely related in time, removing uncer-
tainties due to changes in functional and anatomical configurations with time. Third, the
use of alignment software to ‘fuse’ images can lead to artefacts and uncertainties due to
the non-rigid transformations that are required.
For the clinician the superposition of functional and anatomic detail almost eliminates
uncertainties in the location of structures with abnormal function. The interpretation of a
functional image without correlated anatomical data can be difficult and lead to unneces-
sary further investigations.
For PET/CT and SPECT/CT the images are acquired sequentially without repositioning
the patient. However, errors in alignment may still occur through patient motion and more
importantly physiological motion due to respiratory and cardiac function. As has been
discussed, the CT components in these combined modalities are used to provide attenu-
ation correction maps. The misalignment of the emission and transmission maps due to
physiological motion is currently an area of significant research interest with the introduc-
tion of 4-D imaging protocols where studies are dynamically acquired and reconstructed,
including motion data.
and derive principally from the tests used to characterise a PET/CT scanner. These include for
PET systems, spatial resolution, count rate performance, sensitivity and image quality with
particular reference to the accuracy of attenuation and scatter corrections. For the CT system
checks should normally include alignment of positioning lights, alignment of table and gan-
try, slice localisation, table increment accuracy, slice thickness, spatial resolution, image quality (including noise and uniformity), and dosimetry verification (see Section 8.10).
For quality assurance purposes a series of routine assessments and calibrations must be
performed. For the PET detector these include the following:
(i) Detector function and electronics, checking coincidences, singles, dead time, tim-
ing and energy variables
(ii) Detector calibration to reset energy tuning, positional accuracy, coincidence tim-
ing and detector normalisation
(iii) ‘Well counter’ calibration for 2-D/3-D modes to relate coincidences to kBq/ml
For the combined PET/CT system the checks include the following:
(i) The alignment of the PET and CT imaging planes and fields of view
(ii) Attenuation and correction values calculated from CT numbers and linearity,
especially for applications in radiotherapy planning
(iii) Reconstructed image verification
Specific phantoms are used in each phase and can include an electron density phantom for
CT number calibration, manufacturer alignment phantom and a solid Ge-68 cylindrical
phantom for anatomical image quality assessment.
Some systems are equipped with an integral Ge-68 rod source which can be inserted
within the PET field of view and rotated through 360° to permit the routine checking and
re-calibration of detector performance. These tests are fully automated in routine opera-
tion and only require intervention when performance lies outside of pre-defined limits.
The ‘Well counter’ calibration typically requires a cylindrical phantom to be manually
filled with a known concentration of F-18 and imaged using a preset protocol in each imag-
ing mode. The concentration is usually calibrated by reference to an aliquot of the phan-
tom solution measured in a fully calibrated, single sample well counter.
The frequency of testing will be dependent upon the stability of the system; however, in
principle the calibration checks will be performed on a daily/weekly basis and in all cases
ahead of quantitative studies.
TABLE 11.5
A Comparison of the Radiation Properties of F-18 and Tc-99m
Property | F-18 | Tc-99m
Principal photon energy | 511 keV | 140 keV
Dose rate at 1 m from an unshielded 200 MBq source in a vial | 32 μSv h⁻¹ | 4.4 μSv h⁻¹
Half value thickness for lead | ≈6 mm | ≈0.3 mm
Half-life | 110 min | 6 h
The dose rate from an unshielded F-18 source is thus approximately 7–10 times greater than that for Tc-99m. It is essential that distance
from sources is maximised, the time spent in the vicinity of radioactive sources is mini-
mised and that shielding is sufficient to minimise the radiation exposure to staff. An effec-
tive design of a PET facility is key to the reduction of staff doses especially as the shielding
required to attenuate 511 keV photons is very significantly larger than for Tc-99m. Relative
half value thicknesses for lead are given in Table 11.5.
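The half value thicknesses in Table 11.5 convert directly into shielding transmission via 0.5^(x/HVL); a minimal Python sketch (the thicknesses chosen are illustrative, the 30 mm figure being taken from the dispensing example below) makes the contrast between 511 keV and 140 keV explicit.

def transmission(thickness_mm, hvl_mm):
    # Fraction of the radiation transmitted through a shield of the given thickness
    return 0.5 ** (thickness_mm / hvl_mm)

HVL_F18 = 6.0     # mm of lead at 511 keV (Table 11.5)
HVL_TC99M = 0.3   # mm of lead at 140 keV (Table 11.5)

print(f"30 mm Pb at 511 keV: {transmission(30, HVL_F18):.3f}")     # ~3% transmitted
print(f"30 mm Pb at 140 keV: {transmission(30, HVL_TC99M):.1e}")   # negligible
print(f"2 mm Pb at 140 keV:  {transmission(2, HVL_TC99M):.3f}")    # ~1%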
The conduct of a routine PET scan has a number of key components as follows:
(1) Receipt and preparation of a patient dose from a multi-dose stock vial
(2) Injection of the patient
(3) Uptake phase of the radiotracer in the patient
(4) Escorting and positioning the patient on the scanner
(5) Imaging the patient
Each of these phases of a PET study requires a different emphasis in radiation protection.
The receipt and preparation of the patient dose requires high levels of shielding between
the source and the operator and minimal operator interaction time. Typical shielding lev-
els are 30–50 mm of lead around the stock vial or between the operator and the source.
Whilst this process can be conducted manually with limited exposure of staff, semi- and
fully automated dispensing systems are available to further reduce staff doses. The injec-
tion of the patient effectively returns the radiation source to an unshielded state. The mini-
misation of staff doses thus requires the minimum time to be spent undertaking the task
at the maximum distance from the patient.
Commercial systems are now available which permit the automatic injection of the
dose whilst the operator remains at 1–2 metres from the patient behind suitable shielding.
A combination of dose preparation and injection in a single system is also available.
Once injected and during the uptake phase, the patient is usually accommodated within
a well-shielded room with video surveillance and emergency call facilities. This precludes
the need for regular staff access and also reduces the radiation levels within the environ-
ment. Typical dose rates outside the uptake area are designed to be 1–2 μSv h⁻¹. Time and
distance become the key factors to dose reduction in operation.
The acquisition of a study is usually undertaken from a shielded control room where
doses are designed to be at safe public levels. Shielded viewing panels, intercom and video
systems ensure patient contact is suitably maintained.
On completion of the study the patient is discharged from the unit as quickly as possible.
The short half-life of the radionuclide means that dose rates have significantly decreased
in this phase and except in rare circumstances no further precautions are required to pro-
tect the public, although staff should continue to minimise close contact with the patient
to reduce their overall dose burden.
TABLE 11.6
Typical PET Operator Doses per Imaging Task (375 MBq F-18 FDG)
Technologist Task | Measured Whole Body Dose per Task (μSv)
Drawing up and measuring dose | 0.29
Injecting patient | 1.85
Observing patient from control room | 0.12
Escorting patient to toilet | 0.72
Positioning patient pre- and post-scan | 1.8
Entering room during PET scan | 0.29
Post-scan interaction | 0.31
Total | 5.38
Typical whole body effective doses to staff for studies using 375 MBq F-18 FDG (fluoro-
deoxyglucose) are given in Table 11.6. Manual dispensing and injection techniques were
used. For further information on effective doses see Section 13.5.
The annual dose limit for unclassified workers in the United Kingdom is set at 6 mSv
with typical investigation limits set at 4 mSv. From the data presented in Table 11.6 tech-
nologists would reach these limits after approximately 750 patient studies. This is far fewer than the annual workload of an efficient scanning unit and will lead to the need for staff rotation or the formal classification of staff as radiation workers.
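The figure of approximately 750 studies follows directly from the numbers above, as a short check shows (values taken from Table 11.6 and the limits quoted in the text).

dose_per_study_uSv = 5.38        # total per-study dose from Table 11.6
investigation_level_uSv = 4000   # 4 mSv typical investigation level
annual_limit_uSv = 6000          # 6 mSv limit for unclassified workers

print(round(investigation_level_uSv / dose_per_study_uSv))  # ~743 studies to reach 4 mSv
print(round(annual_limit_uSv / dose_per_study_uSv))         # ~1115 studies to reach 6 mSv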
Vigilance in radiation protection for staff is vital to maintain and further reduce staff
exposure. The need to manipulate test sources should be minimised with adequate shield-
ing available for storage. The short half-life of the radionuclide precludes any major dif-
ficulties in managing the environment as any contaminated materials or equipment will
rapidly decay to safe levels.
The process of escorting the patient from the uptake area to the scanner room and posi-
tioning the patient again exposes the technologist to potentially high dose rates. This task
remains one of the most intractable to effective solutions. Mobile shielding can be used
but is ineffective where patients need significant assistance. The skill of the technologist
in undertaking this task quickly is key to minimising radiation dose. The development of
the use of PET/CT scanning for radiation therapy planning provides particular challenges
in this phase of the study as radiation therapy planning studies may require the use of
patient positioning aids and marking of the patient in relation to alignment lasers and the
scanner plane. In practice, these processes take significantly longer than routine diagnos-
tic studies and often require two technologists for verification purposes, increasing the
overall radiation burden.
Insight
Risks for Research Studies
The conduct of some research studies may prove particularly challenging. The performance of
dynamic studies in the scanner from the time of injection and with blood sample collection will
impose a significantly higher radiation burden. Such studies, when combined with the use of short
lived nuclides such as O-15, C-11 and N-13 may further increase staff doses. The routine monitor-
ing of staff doses and practice is vital in such circumstances to modify practice and reduce staff
doses effectively.
a much wider range of tracers. The vast majority of studies are still conducted using 18F-FDG.
Such studies have a well-defined role in the diagnosis and staging of lung, head and neck,
oesophageal and colo-rectal cancers as well as lymphoma and melanoma. Whilst the pres-
ence of disease may be apparent from a range of diagnostic procedures, the location and
extent of the disease is vital to determining the relevant treatment course and the prognosis
for the patient. This is where the combination of PET and CT has proved especially effective.
Identification of the correct treatment choice can impact on the use of health-care resources
by preventing inappropriate surgery or radiotherapy treatments.
Of growing importance is the use of functional markers to monitor treatment. Lymphoma
can in many cases be effectively treated with chemotherapy leading to remission. However,
in some cases additional interventions are required. The selection of these patients can
effectively be made using an 18F-FDG study towards the end of the chemotherapy treat-
ment. A failure to detect any uptake of the FDG is indicative of a successful treatment and
no further interventions will be required immediately. For those with residual uptake of
the FDG further treatments can be targeted appropriately.
Insight
PET and Cancer Cell Markers
The application of PET to therapy response is leading to a resurgence of interest in the use of PET to
support rapid and effective drug discovery through the development of well-characterised biomark-
ers. Cancers and cancer cells will be characterised by specific changes in the cell environment such
as increased blood supply, poor oxygen availability or in the metabolic and reproductive processes
of the cell. The introduction of new therapeutic agents requires the demonstration of their effective-
ness in targeting these processes and PET biomarkers continue to provide the potential to rapidly
and effectively demonstrate drug interactions in vivo. A well-designed biomarker will provide signal
amplification in abnormal areas as well as provide quantitative measures of response to different
concentrations of therapeutic agent. They will provide a screening tool to eliminate ineffective com-
pounds as well as support outcome measures in compounds showing a physiological action.
In the wider context a clinical PET/CT facility must form part of a much bigger infra-
structure including basic cell biology research, extensive pre-clinical assessment of drug
efficacy using small animal imaging as well as in vivo methodology and the develop-
ment and characterisation of PET radiotracers to support specific drug interactions. The
outcome of these developments will require early phase investigation in man to validate
and characterise the drugs before registration and wider testing in patients.
Another area of application which has developed rapidly with the introduction of inte-
grated PET/CT systems is the use of PET/CT as the basis for radiotherapy planning. Most
modern radiotherapy is planned from CT scans with the treatment volumes being deter-
mined on the basis of an interpretation of the anatomical information. As has already
been described, functional changes and hence disease locations may precede anatomical
changes and the introduction of a functional marker such as 18F-FDG has the potential to
modify the delineation of treatment volumes to modify outcomes. The characteristics of
the functional marker are critically important to the success of this development. A func-
tional marker with a high false positive rate for disease will result in larger treatment vol-
umes with the irradiation of more normal tissue leading to higher normal tissue damage
and potentially higher morbidity. Cure rates may, however, be higher. A high false nega-
tive rate will potentially lead to the non-treatment of tumours leading to poorer outcomes.
Randomised controlled trials have not been extensively performed to verify this change in
protocol, but two effects have been validated. The introduction of PET results in changes
to the delineated gross tumour volumes (GTV) with both increases and decreases when
compared to CT planning only and intra-operator variability is significantly reduced. The
potential to modify the final treatment volume after allowance for positioning errors and
patient/organ motion is less clear and further work is still required to optimise the inte-
gration of PET/CT into radiotherapy planning (Benatar et al. 2000).
Insight
Intensity Modulated Radiotherapy (IMRT)
The introduction of IMRT provides the technology to modify the dose distribution within the
treatment volume. The use of imaging with functional markers provides the potential to delineate
areas within the treatment volume with different characteristics and which may benefit from dose
enhancement such as hypoxic regions. This has been termed ‘dose painting’ and the definition of
areas based upon functional markers as ‘biological target volumes’ (BTV) (Ling et al. 2000).
11.14 Conclusion
Significant clinical validation work remains to be undertaken to provide a firm evidence
base for this technology. The introduction of motion detection and correction methods
for imaging, planning and delivery systems for radiotherapy should further enhance the
effectiveness of this application.
The applications for PET/CT will continue to grow, although other technologies will begin to provide alternative methodologies for diagnosis, staging and therapy response
monitoring. As has been the case with any radionuclide-based technique, the discovery
and availability of specific and well-characterised radiopharmaceuticals and biomarkers
are essential to the continued development of the methodology.
References
Anger H.O., Gottschalk A. Localisation of brain tumours with the positron scintillation camera.
J. Nucl. Med. 4, 326, 1963.
Benatar N.A., Cronin B.F., O’Doherty M.J. Radiation dose rates from patients undergoing PET: impli-
cations for technologists and waiting areas. Eur. J. Nucl. Med. 27(5), 583–89, 2000.
Ben-Haim S., Ell P. 18F-FDG PET and PET/CT in the evaluation of cancer treatment response. J. Nucl.
Med. 50(1), 88–99, 2009.
Beyer T., Townsend D.W., Brun T., Kinahan P.E., Charron M., Roddy R. et al. A combined PET/CT
scanner for clinical oncology. J. Nucl. Med. 41, 1369–79, 2000.
Defrise M., Kinahan P.E., Michel C. Image reconstruction algorithms in PET. In Basic science and clini-
cal practice in positron emission tomography. Eds. Valk PE, Bailey DL, Townsend DW, Maisey MN.
ISBN 978-1-85233-485-7, Springer, 2003.
Derenzo S.E. Mathematical removal of positron range blurring in high-resolution tomography. IEEE
Trans. Nucl. Sci. 33, 565–69, 1986.
ICRP. Radiation dose to patients from radiopharmaceuticals, International Commission on Radiological
Protection, ICRP Publication 106, Elsevier, 2009.
Joliot F. Preuve expérimentale de l’annihilation des électrons positifs. C.R. Acad. Sci. 197, 1622–5,
1933.
Ling C.C., Humm J., Larson S. et al. Towards multi-dimensional radiotherapy (MD-CRT): biological
imaging and biological conformity. Int. J. Radiat. Oncol. Biol. Phys. 47(3), 551–60, 2000.
Mawlawi O., Townsend D.W. Multi-modality imaging—an update on PET/CT technology. Eur. J.
Nucl. Med. Mol. Img. Suppl. 1, S15–29, Review 2009.
Meikle S.R., Badawi R.D., Valk P.E., Bailey D.L., Townsend D.W., Maisey M.N. Quantitative tech-
niques in positron emission tomography. In Positron emission tomography. Eds. Bailey DL,
Townsend DW, Valk PE, Maisey MN, Springer, 2003.
Pan T., Mawlawi O. PET/CT in Radiation Oncology. Med. Phys. 35(11), 4955–66, 2008.
Thibaud J. L’annihilation des positrons au contact de la matière et la radiation qu’en resulte. C.R.
Acad. Sci. 197, 1629–32, 1933.
Tobias C., Lawrence J., Roughton F. et al. The elimination of carbon monoxide from the body with
reference to the possible conversion of CO to CO2. Amer. J. Physiol. 149, 253–63, 1945.
Further Reading
Bailey D.L., Townsend D.W., Valk P.E., Maisey M.N. (Eds). Positron emission tomography. Springer-
Verlag, London, ch. 1–3, pp. 1–62, 2005.
Cherry S.R., Sorenson J.A., Phelps M.E. Physics in nuclear medicine. 3rd ed., Saunders, 2003.
Hamilton D. Diagnostic nuclear medicine. Springer-Verlag, Berlin, Heidelberg, pp. 163–204, 2004.
Kim E.E., Lee C.M., Inoue T., Wong W.H. Clinical PET—principles and applications. Springer, New
York, chs 1–3, pp. 3–61, 2004.
Powsner R.A., Powsner E.R. Essential nuclear medicine physics, Blackwell Science, chs 8 & 9,
pp. 114–135, 1998.
Exercises
1. What property of a nucleus makes it likely to emit positrons? Discuss the fate of a
positron once it is produced in the body.
2. Explain how the time of flight (TOF) of a pair of annihilation photons can provide positional
information. Show that the accuracy of positioning increases as the timing resolution
of the detectors decreases.
3. What factors determine the system resolution (uncertainty of position of a posi-
tron decay) in a PET image? How should the uncertainties due to individual fac-
tors be combined, and why?
4. Explain the meaning of the terms single events, scattered events and random coinci-
dences in PET.
5. If it is desired to obtain quantitative data from a PET scan, what corrections have
to be applied to the raw data collected?
6. A clinical investigation requires images from the same patient to be collected using
two different modalities. What are the two major methods? Discuss the strengths
and weaknesses of each.
7. Explain how PET/CT can be used in radiotherapy (a) to improve delineation of the
treatment volume, (b) to assess response to therapy.
8. What are the principal causes of radiation exposure to staff working in a PET
facility?
12
Radiobiology and Generic Radiation Risks
SUMMARY
• The reasons why ionising radiation is the most harmful form of radiation
are presented.
• Radiation chemistry is explained briefly.
• Evidence that DNA is the primary target for radiation damage is presented.
• Ideas of radiobiological effectiveness (RBE) and radiation weighting factors
based on survival curve theory are discussed.
• Evidence that radiation is a carcinogen and mutagen in humans and animals
is presented and generic risk factors are established.
• The chapter concludes with a critical review of the evidence for and against
risk at the very low doses characteristic of diagnostic radiology.
CONTENTS
12.1 Introduction......................................................................................................................... 398
12.2 Radiation Sensitivity of Biological Materials.................................................................. 398
12.2.1 Molecular Basis of High Radiosensitivity........................................................... 398
12.2.2 Cells Particularly at Risk....................................................................................... 399
12.2.3 Time Course of Radiation-Induced Death..........................................................400
12.2.4 Other Mechanisms of Radiation-Induced Death............................................... 402
12.2.5 Transformation of Cells and Cancer Induction.................................................. 402
12.3 Evidence on Radiobiological Damage from Cell Survival Curve Work..................... 402
12.3.1 Cellular Repair and Dose Rate Effects................................................................. 405
12.3.2 Radiobiological Effectiveness................................................................................ 406
12.4 Radiation Weighting Factors, Equivalent Dose and the Sievert.................................. 407
12.5 Radiation Effects on Humans...........................................................................................408
12.5.1 Tissue Reactions and Stochastic Effects..............................................................408
12.5.2 Carcinogenesis........................................................................................................ 411
12.5.3 Mutagenesis............................................................................................................. 412
12.6 Generic Risk Factors and Collective Doses..................................................................... 414
12.7 Very Low Dose Radiation Risk......................................................................................... 418
12.7.1 Molecular Mechanisms.......................................................................................... 418
12.7.2 Confounding Factors based on Radiobiological Data....................................... 419
12.7.2.1 Bystander Effects...................................................................................... 419
12.7.2.2 Adaptive Responses................................................................................. 420
12.7.3 Epidemiological Studies........................................................................................422
12.8 Conclusions..........................................................................................................................423
References......................................................................................................................................423
Exercises........................................................................................................................................ 424
12.1 Introduction
This chapter deals with a problem that is central to the theme of the book—namely that
ionising radiation, even at very low doses, is potentially capable of causing serious and
lasting biological damage. If this were not so, steps that are taken to reduce patient doses,
for example, the use of rare earth screens, would be unnecessary and generally undesir-
able. Furthermore, the amount of physics a radiologist would need to know would be
greatly reduced and this book might not be necessary!
Medical exposures are not the source of the highest radiation dose to the population.
Some 200 million gamma rays pass through the average individual each hour from soil
and building materials and about 15 million potassium-40 atoms disintegrate within us
each hour emitting high energy beta particles and some gamma rays.
However, as shown in Table 12.1, medical exposures contribute a far greater proportion
of the average annual dose to the UK population than any other form of man-made radia-
tion and the majority of this can be attributed to diagnostic radiology. Since this radiation
can cause deleterious effects, it is essential for the radiologist to know what these effects
are and to be aware of the risks when a radiological examination is undertaken.
TABLE 12.1
Annual Contribution to the Dose to the UK
Population from Different Sources of Radiation
Source Percentage
Natural
Cosmic rays 12
Gamma rays from ground and buildings 13
Internal from body, food and drink 9.5
Radon and thoron 50
Man made
Medical 15
Other (nuclear discharges, occupational, fall out) 0.5
Source: Watson S J, Jones A L, Oatway W B et al., Ionising
radiation exposure of the U.K. population: 2005
review. Document HPA – RPD – 001, Health
Protection Agency, Chilton, Didcot, U.K. 2005.
Most of each cell, and most of the macromolecules within it, receive no energy at all, but at an ionisation
site the energy deposited is much higher than that associated with cellular biochemical
events. For example, the energy required to break a hydrogen bond in DNA is about 0.5 eV
or 70 times less than the energy in an ion pair. The critical difference between ionising and
non-ionising radiation is the size of the individual packets of energy not the total energy
involved. The consequence is that even a few microgray of diagnostic X-rays in the energy
range 20–100 keV will produce a large number of random sub-molecular events, any one
of which may damage a sensitive macromolecule.
Because of this, in terms of absorbed energy ionising radiation is by far the most potent
agent known to man. A lethal dose of 4 Gy corresponds to absorbed energy of 280 J in a
70 kg man and is about the same as the amount of energy in a small quantity of warm
tea. The rise in body temperature may be calculated as energy absorbed/(mass × specific
heat capacity). Using the specific heat capacity of water (4.2 × 10³ J kg⁻¹ K⁻¹), the temperature rise is
280/(70 × 4.2 × 10³) ≈ 10⁻³ K. This would be virtually undetectable and is much less than
diurnal variations in body temperature.
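As a check on this arithmetic, the short Python sketch below recomputes the absorbed energy and the resulting temperature rise using the values quoted above (4 Gy, 70 kg, and the specific heat capacity of water).

```python
# Check of the whole-body temperature rise produced by a lethal dose.
# Values are those quoted in the text; body tissue is approximated by water.

dose_gy = 4.0           # lethal whole-body dose (Gy = J/kg)
mass_kg = 70.0          # body mass (kg)
specific_heat = 4.2e3   # specific heat capacity of water (J kg^-1 K^-1)

energy_j = dose_gy * mass_kg                         # absorbed energy (J)
temperature_rise_k = energy_j / (mass_kg * specific_heat)

print(f"Absorbed energy: {energy_j:.0f} J")              # 280 J
print(f"Temperature rise: {temperature_rise_k:.1e} K")   # ~1e-3 K
```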
It is against this unique sensitivity of cells and tissues to ionising radiation that the use
of X-rays in diagnosis must be assessed.
This time scale reflects the changes taking place at the molecular and cellular level.
After exposure to ionising radiation, physical processes of absorption of photons of
energy hf, ionisation and excitation will be complete within about 10⁻¹⁵ s. With any form
of ionising radiation there is a possibility that this interaction will be directly with criti-
cal targets in the cell. Experimental irradiation with microbeams has shown that the cell
nucleus, containing DNA and the chromosomes, is most sensitive to radiation injury so
this is where the targets are likely to be located.
For diagnostic X-rays, however, it is more likely that the action will be indirect, the X-rays
first interacting with other atoms or molecules in the cell to produce free radicals that are
able to diffuse far enough to reach and damage the critical targets.
Since 80% of a cell is composed of water, indirect effects of radiation are most likely to
involve water molecules. As a result of interaction with a photon of X-rays, the water mol-
ecule may become ionised
H2O → H2O+ + e–
H2O+ is an ion radical with an extremely short lifetime, about 10⁻¹⁰ s. It decays to a free
radical which is uncharged but still has an unpaired electron and hence is highly reactive.
Further reactions may now occur, for example, if this free radical reacts with another water
molecule, the highly reactive hydroxyl radical OH⋅ may be formed
H2O+ + H2O → H3O+ + OH⋅
TABLE 12.2
Effects of Radiation Exposure on Nuclear DNA
Base modification/deletion—causing genetic defects and increased mutation rate in reproductive cells; or if not
repaired or eliminated, increased risk of malignant cell transformation in somatic cells
Bond breakage—between complementary strands of DNA in the double helix, facilitating the loss of a base and
changes in molecular shape and structure
Cross linkage—the additional covalent binding of two adjacent strands of DNA; this potentially inhibits
semi-conservative replication of DNA
Single strand breaks—occur at random in either strand along the DNA double helix
Double strand breaks—may be formed either by a single event, e.g. when the track of a densely ionising particle
passes through or close to the DNA helix, or, more likely for X- or gamma rays, by random coincidence of
two single events occurring at the same time on complementary DNA strands. This process becomes more
probable the higher the X-ray dose and dose rate
Source: Reproduced with permission from Martin C J, Dendy P P and Corbett R H (eds.) Medical Imaging and
Radiation Protection for Medical Students and Clinical Staff, British Institute of Radiology, London, 2003.
Hydrated electrons are also formed and are very reactive. Reactive radicals recombine rapidly, normally to re-form water, but they can diffuse over distances of a few nanometres, thus reaching and damaging DNA. Alternatively, they may react together to form toxic products such as hydrogen peroxide
OH⋅ + OH⋅ → H2O2
Hence the body is flooded with toxins and this is why the general feeling of malaise
results.
The steps leading from these initial physico-chemical changes to the observation of cell
death (see Section 12.3) are still poorly understood. However, it is known that DNA is a
major target for damage and a number of effects have been identified (see Table 12.2).
Further evidence for the direct involvement of DNA in radiation damage comes from the
fact that many genes, for example, the tumour suppressor gene p53, have been shown to be
activated by even low doses of radiation. A summary of these effects is given in the ‘insight’.
Insight
Genes Showing Early or Late Response to Radiation
Some effects of X-ray activation of early and late response genes are
• ‘Housekeeping genes’ repair point mutations and single strand DNA breaks using ‘cut and
patch’ enzymes.
• ‘Checkpoint genes’ arrest cells in the cell cycle thus giving more time for repair before a
single strand break can be converted into a more permanent double strand break during
DNA synthesis.
• ‘Checkpoint genes’ may also prevent a cell proceeding to mitosis if double strand breaks
cannot be repaired or may induce a cell to differentiate or be removed by apoptosis (see
Section 12.7.2).
• Some patients, for example, those with Xeroderma pigmentosum or Ataxia telangiectasia
have enhanced radiosensitivity which is probably linked to their reduced capacity for DNA
repair.
In the longer term, according to the law of Bergonié and Tribondeau, the differentiated
cells will have resisted the radiation well and will continue to fulfil their specialised func-
tions, so a period of relative well-being should ensue. However, as these cells die, their
replacement from the stem cell pool will have been severely depleted or will have stopped
completely. The patient becomes manifestly ill when there is a marked loss of a wide range
of differentiated, mature cells particularly in the circulating blood. Ultimately, the cause of
death will usually be failure to control infection or failure to prevent haemorrhage.
The time scale of events for sub-lethal or just lethal radiation damage, together with
detail of the techniques available to study the various stages, is shown in Figure 12.1.
For further comment on cancer induction at low doses and a model for oncogenesis based
on these assumptions see Section 12.7.
[Figure: flow chart from exposure through ionisation of atoms (physics, about 10⁻¹⁶ s), biomolecular changes (radiation chemistry, about 10⁻⁵ s), reversible physiological effects and fixed biochemical lesions (biochemistry, seconds to hours), genetic lesions and cell death (molecular biology, minutes to days), death of the organism (cell kinetics and radiopathology, days to years) and population effects (epidemiology, years to generations).]
FIGURE 12.1
Time scale of events for radiation-induced damage.
A known number of single cells is placed in a Petri dish with growth medium and incubated for 10–14
days. The cells settle on the base of the dish and, if they are capable of cell division (repro-
ductive integrity) they develop into sub-macroscopic colonies which may be counted. Not
all cells are capable of growing into colonies, even if unirradiated. Suppose that 100 cells are
seeded and 90 colonies grow. If now a second sample of the same cells is irradiated before
[Figure: survival time (log scale) plotted against dose of radiation (1–1000 Gy, log scale); the region associated with damage to the central nervous system and lung function is indicated.]
FIGURE 12.2
Approximate representation of survival time plotted as a function of dose for acute exposure to whole body
radiation.
[Figure: surviving fraction (log scale, from 1 down to 10⁻³) plotted against dose (Gy, 0–10).]
FIGURE 12.3
A typical clonogenic survival curve for mammalian cells irradiated in vitro with X-rays.
incubation, from 1000 cells only about 180 colonies might develop. The expected colonies
from 1000 cells would be 900 and therefore 180/900 or 20% of the irradiated cells have sur-
vived. By repeating this experiment at different doses, a survival curve may be obtained
and for mammalian cells exposed to X-rays it would resemble Figure 12.3.
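The surviving fraction calculation just described can be written as a short sketch; the colony counts and the 90% plating efficiency are the illustrative figures used in the text.

```python
# Surviving fraction from a clonogenic assay, corrected for plating efficiency.
# Figures are the illustrative ones used in the text.

def surviving_fraction(cells_seeded, colonies_counted, plating_efficiency):
    """Fraction of irradiated cells retaining reproductive integrity."""
    expected_colonies = cells_seeded * plating_efficiency
    return colonies_counted / expected_colonies

plating_efficiency = 90 / 100      # unirradiated control: 90 colonies per 100 cells
sf = surviving_fraction(cells_seeded=1000, colonies_counted=180,
                        plating_efficiency=plating_efficiency)
print(f"Surviving fraction = {sf:.2f}")   # 0.20, i.e. 20%
```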
For further details of the extensive literature on survival curves, including the evi-
dence that qualitatively similar curves are obtained in vivo, the reader is referred to more
specialised texts (e.g. Hall and Giaccia 2006; Mettler and Upton 2008). However, two
aspects that are of great relevance to this chapter will be discussed.
[Figure: surviving fraction (log scale, from 1 down to 10⁻³) plotted against dose (Gy, 0–10) for single and split exposures.]
FIGURE 12.4
Typical results for a ‘split dose’ experiment to demonstrate recovery from radiation and cellular repair. If an
initial dose of 4 Gy is delivered but there is a time delay before further irradiation, the survival curve follows
the solid line rather than the dotted line. Note that the shoulder to the curve re-appears and the total dose to
produce a given surviving fraction is now higher.
[Figure: surviving fraction (log scale) plotted against dose (Gy, 0–8) for a series of curves of increasing dose rate.]
FIGURE 12.5
Graphs demonstrating that because of recovery the killing effect of X-rays may be dose rate dependent.
[Figure: dose rate (Gy/day, log scale from about 0.1 to 100) plotted against LD50 (Gy); regions labelled acute exposure, cellular response and homeostatic repair; LD50 values of about 4 Gy (1 h exposure), 10 Gy (30 d) and 30 Gy (300 d) are indicated.]
FIGURE 12.6
Curve to show that the LD50 in vivo is very dependent on the dose rate because of cellular and homeostatic
mechanisms. The LD50 was measured in animals under conditions of continuous irradiation. The duration of
irradiation is shown in brackets.
The LD50 increases as the dose rate falls, initially because of cellular repair and subsequently because of homeostatic repair. The results sug-
gest that animals could tolerate 0.1 Gy per day for a long time.
[Figure: surviving fraction (log scale) plotted against dose (Gy) for X-rays and for neutrons.]
FIGURE 12.7
Comparative survival curves for X-rays and neutrons. If 220 kVp X-rays have been used as the reference radia-
tion, RBE = DX/DN for a given biological end point (10% survival in this example).
If 220 kVp X-rays have been used, then as shown on the curve, the RBE is defined as the ratio DX/DN of the doses required to produce the same biological effect (here 10% survival).
The RBE of neutrons when determined in this way is frequently between 2 and 3.
Except at very high LET values (see Section 1.14), the RBE of a radiation increases steadily
with LET. However, it is an incomplete answer simply to state that ‘neutrons cause more
damage than X-rays because they are a higher LET radiation and therefore produce a
higher density of ionisation’. For equal absorbed doses measured in grays, the number of
ion pairs created by each type of radiation in a macroscopic volume is the same. Therefore,
for reasons not yet fully understood, and beyond the scope of this book, it is differences in
the spatial distribution of ion pairs in the nucleus at the sub-microscopic level that cause
the difference in biological effect. This is illustrated in Figure 12.8.
[Figure: ionisation tracks crossing a cell nucleus (scale bar 1 µm) for high LET and low LET radiation.]
FIGURE 12.8
Illustration of the difference in spatial distribution of ion pairs across a cell nucleus for low LET and high LET
radiations. In each case five ion pairs are formed (same absorbed dose) but whereas these are likely to result
from the same high LET particle, and thus be quite close together, they are more likely to result from five differ-
ent low LET photons and hence be much more widely separated.
Furthermore, since the shapes of the survival curves are different, inspection of Figure 12.7
shows that, at the very low doses that are important in radiological protection, the RBE
may be somewhat higher than the value of 2 or 3 quoted for higher doses.
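The way the RBE grows at low doses can be illustrated with a simple calculation based on linear-quadratic survival curves. The α and β values in the sketch below are assumed purely for illustration (they are not taken from this book); the point is that a shouldered X-ray curve and a near-exponential neutron curve give a larger dose ratio at high survival (low dose) than at the 10% survival level of Figure 12.7.

```python
# Illustrative sketch: RBE = D_X / D_N for equal survival, using linear-quadratic
# survival curves S = exp(-(alpha*D + beta*D^2)). The alpha/beta values below are
# assumed for illustration only.
import math

def dose_for_survival(survival, alpha, beta):
    """Dose giving the requested surviving fraction."""
    effect = -math.log(survival)
    if beta == 0.0:
        return effect / alpha
    return (-alpha + math.sqrt(alpha**2 + 4.0 * beta * effect)) / (2.0 * beta)

xray = dict(alpha=0.15, beta=0.03)    # shouldered low-LET curve (assumed values)
neutron = dict(alpha=0.90, beta=0.0)  # near-exponential high-LET curve (assumed)

for survival in (0.10, 0.90):         # 10% survival (as in Figure 12.7) and a low-dose point
    d_x = dose_for_survival(survival, **xray)
    d_n = dose_for_survival(survival, **neutron)
    print(f"Survival {survival:.0%}: D_X = {d_x:.2f} Gy, D_N = {d_n:.2f} Gy, "
          f"RBE = {d_x / d_n:.1f}")
```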
Largely for these reasons, the International Commission on Radiation Units and Measurements introduced a
new term, 'Quality Factor' (see e.g. ICRU 1980). This is a dimensionless quantity that is invariant for a
given type of radiation, being determined solely by the type of radiation. However, this quality factor
was applied to absorbed dose at a point and in radiological protection it is the absorbed dose
averaged over a tissue or organ and weighted for the radiation quality that is of interest.
Thus in 1990 the International Commission on Radiological Protection (ICRP 1991) intro-
duced a new concept, the radiation weighting factor wR. This should now be used to calcu-
late the equivalent dose to a tissue T, HT according to the equation
HT = wR . DT
where DT is the dose averaged over tissue T from the radiation type R.
This is now considered to be an equivalent dose because for equal values of HT the dam-
age to tissue T will be the same for different types of radiation.
wR is dimensionless so HT still has the units J kg–1 but is now given the special name
sievert (Sv).
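The calculation of equivalent dose can be sketched as follows, using the radiation weighting factors of Table 12.3 and summing the contributions from each radiation type present; the mixed-field absorbed doses in the example are invented for illustration only.

```python
# Equivalent dose H_T (Sv) from absorbed doses delivered by different radiation
# types, using the radiation weighting factors of Table 12.3.
# The mixed-field doses below are invented purely for illustration.

W_R = {"photons": 1, "electrons": 1, "protons": 2, "alpha": 20}

def equivalent_dose_sv(absorbed_doses_gy):
    """Sum of w_R * D_T,R over the radiation types present."""
    return sum(W_R[r] * d for r, d in absorbed_doses_gy.items())

# Example: a tissue receiving 10 mGy of photons and 1 mGy of alpha particles.
h_t = equivalent_dose_sv({"photons": 0.010, "alpha": 0.001})
print(f"Equivalent dose H_T = {h_t * 1000:.0f} mSv")   # 10 + 20 = 30 mSv
```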
Values of wR are representative of the range of RBE values for that radiation in inducing
stochastic effects (see next section) at low doses. A simplified version of the latest ICRP
recommendations (ICRP 2007) is given in Table 12.3.
The concepts of radiation weighting factor and equivalent dose should only be applied
in the context of radiological protection.
TABLE 12.3
Radiation Weighting Factors
Radiation Type and Energy Range Radiation Weighting Factor wR
Photons (all energies) 1a
Electrons 1b
Protons 2
Neutrons See footnotes
Alpha particles, heavy ions 20
Notes: For neutrons wR is a continuous function, with a maximum wR = 20 at 1
MeV but falling to about 2.5 at lower and higher energies.
a There have been proposals, based on RBE data, that the wR value for very
low energy X-rays (e.g. ~10–25 keV used in mammography) should be
higher. In the absence of unequivocal evidence for a higher value ICRP has
opted for simplicity.
b An important exception to this table may be the radiation weighting factor
for Auger electrons (Section 3.4.2), especially if they are emitted from a
radionuclide that is closely bound to DNA. There is evidence that these
electrons may be as harmful on an equi-absorbed dose basis as α-particles
because the average distance between ionising events is about the same as
the distance between DNA strands.
Source: ICRP, Recommendations of the International Commission on
Radiological Protection, ICRP Publication 103, Ann. ICRP. 37, 2–4, 2007.
(a) 100 mGy or more are required to observe an effect.
(b) The dose threshold varies from one tissue to another.
(c) It is a somatic effect affecting the exposed individual.
(d) Repair and recovery can occur.
(e) The severity of effect depends on dose/dose rate/number of exposures.
(f) The effect usually occurs early, that is in days or weeks, and may be repaired quickly afterwards.
(g) Mechanisms are relatively well understood, for example in radiotherapy.
Threshold doses for the most sensitive tissue reactions are summarised in Table 12.4.
These values are all well above the doses received by patients in conventional radiology.
However, there have been isolated reports of radiation-induced skin injuries to patients
resulting from prolonged fluoroscopically guided invasive procedures. Absorbed dose
rates to the skin from the direct beam of a fluoroscopy X-ray system are typically in the
range 0.01–0.05 Gy/min but may reach 0.2 Gy/min or even higher. A few minutes screen-
ing at this higher dose rate could cause early transient erythema and any additional fluo-
rographic dose required for film or digital image recording must not be overlooked. For
discussion of possible hazards from interventional procedures see Section 9.6.3.
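As an illustration of the arithmetic, the sketch below estimates how long continuous screening at the quoted skin dose rates would take to reach the approximately 2 Gy threshold for early transient erythema given in Table 12.4. It is a rough indication only; it ignores any additional fluorographic exposures and changes in beam position.

```python
# Time of continuous screening needed to reach the ~2 Gy threshold for early
# transient erythema (Table 12.4), at the skin dose rates quoted in the text.

erythema_threshold_gy = 2.0

for dose_rate_gy_per_min in (0.01, 0.05, 0.2):
    minutes = erythema_threshold_gy / dose_rate_gy_per_min
    print(f"{dose_rate_gy_per_min:.2f} Gy/min -> {minutes:.0f} min of screening")
```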
[Figure: (a) severity or probability of effect plotted against dose, showing a well-defined threshold dose and a plateau at 100% (the plateau is unlikely to be achieved because of other effects, e.g. cell killing); (b) probability of effect plotted against dose, showing a linear increase with dose above the natural incidence.]
FIGURE 12.9
Idealised dose-response curves for (a) tissue reactions; (b) stochastic effects.
TABLE 12.4
Typical Threshold Doses for the Most Sensitive Tissue
Reactions
Tissue and Effect                      Absorbed Dose for a Brief Exposure (Gy)
Testes
  Temporary sterility                  0.15
  Permanent sterility                  >3.5
Ovaries
  Sterility                            >2.5
Lens
  Detectable opacities                 >0.5
  Visual impairment (cataract)         >2.0
Bone marrow
  Depression of haematopoiesis         >0.5
Skin
  Early transient erythema             2
  Temporary epilation                  3
Stochastic effects, literally those governed by the laws of chance, are thought to occur pri-
marily at the cellular level. Since a single ionising event may be capable of causing radia-
tion injury, for example, to the DNA, and even the lowest dose diagnostic examinations
cause millions of ionisations in each gram of irradiated tissue, it is normal to assume that
there is no threshold dose for stochastic effects of ionising radiation. Thus the curve relat-
ing probability of effect to dose has the form shown in Figure 12.9b.
The two most important long-term effects of radiation, namely carcinogenesis and muta-
genesis are thought to be stochastic in nature. The former is presumably the result of dam-
age to a somatic cell, which either initiates or promotes a malignant change, the latter the
result of damage to a germ cell. A further important feature of stochastic effects is that
whereas their frequency will increase with increasing dose, their severity does not. Thus,
for example, the degree of malignancy of a radiation-induced cancer is not related to the
dose. A further reason for believing that radiation-induced carcinogenesis is a stochastic
effect is that there is no evidence for a threshold dose for radiation-induced cancer in the
Japanese survivors. There was little or no evidence for recovery effects in the multiple fluo-
roscopy work where patients received many small exposures over a period of time. Finally,
a truly stochastic mechanism would exclude the possibility of recovery.
Note that carcinogenesis and mutagenesis may be contrasted in that the former is somatic,
that is the effect is observed in the exposed individual, whereas the latter is hereditary,
with the effect being detected in the descendants.
12.5.2 Carcinogenesis
There is ample evidence from a wide range of sources that ionising radiation can cause malig-
nant disease in humans. For example, occupational exposure results in a greatly increased
incidence of lung cancer among uranium miners, and in the period 1929–1949 American
radiologists exhibited nine times as many leukaemias as other medical specialists. A fre-
quently quoted example of industrial radiation-induced carcinogenesis is the ‘radium dial
painters’. They were mainly young women employed during and after the First World War
to paint dials on clocks and watches with luminous paint. It was their custom to draw the
brush into a fine point by licking or ‘tipping’ it. In so doing, the workers ingested appreciable
quantities of radium-226 which passed via the blood stream to the skeleton. Years later a
number of tumours, especially relatively rare osteogenic sarcomas were reported.
There is also evidence from approved medical procedures. For example, between 1939
and 1954, radiotherapy treatment was given to the whole of the spine for more than 14,000
patients suffering from ankylosing spondylitis. Statistically significant excess deaths due
to malignant disease were subsequently observed, especially for leukaemia and carcinoma
of the colon. Other data comes from the use of thorotrast, which contains the alpha emitter
thorium-232 as a contrast agent in radiology; X-ray pelvimetry; multiple fluoroscopies and
radiation treatment for enlargement of the thymus gland. Unfortunately, radiation expo-
sure in medical procedures is associated with a particular clinical condition. Therefore it is
difficult to establish suitable controls for the purpose of quantifying the effect.
Finally, there is information gathered from the survivors of the Japanese atomic bombs.
This has been fully reported in a series of articles published over many years in Radiation
Research (Preston et al 2007) and by UNSCEAR (2008).
Five key points, which all have a bearing on radiation risk, can be made from this
follow-up:
1. The risk of cancer is not the same for all parts of the body; many parts are affected,
but not all to the same degree.
2. There is a long latent period before the disease develops—extra cases of leukaemia
peaked between about 5 and 14 years after exposure since when the relative risk
(i.e. expressed relative to the natural incidence) has decreased. For solid tumours
the relative risk has continued to rise (see Figure 12.10).
3. There is no evidence of a threshold dose.
4. In terms of relative risk, the lifetime risk of cancer is highest in those who were
less than 10 years of age at the time of exposure and then falls steadily with age.
5. Evidence is in reasonably good agreement with that from other sources and per-
mits a risk estimate.
Wall et al. (2006) listed 22 epidemiological studies of the effects of exposure to external low
LET radiation. Eighteen of them were medical and most involved the relatively high doses
[Figure: two panels showing relative risk at 1 Gy plotted against observational period (1950–55 through 1991–97); panel (a) on a scale up to about 10, panel (b) on a scale of about 1–1.5.]
FIGURE 12.10
(a) Follow-up of relative risk of radiation-induced cancer (leukaemia) in the Japanese survivors. (Adapted from
Shimizu Y, Kato H and Schall W J, Studies of the mortality of A bomb survivors. 9 Mortality 1950–1985, Part
2, Cancer mortality based on the recently revised doses DS 86, Radiat. Res. 121, 120–141, 1990.) (b) Follow-up of
relative risk of radiation-induced cancer (all cancers except leukaemia) in the Japanese survivors Note that the
straight line is drawn to indicate the trend—the incidence may be starting to plateau but is not falling. Some error
bars are given to show the uncertainty of the data. (Adapted from Shimizu Y, Kato H and Schall W J, Studies of
the mortality of A bomb survivors. 9 Mortality 1950–1985, Part 2, Cancer mortality based on the recently revised
doses DS 86, Radiat. Res. 121, 120–141, 1990. With additional data derived from Preston et al., 2003.)
used in radiotherapy of malignant or benign conditions. The main sources of evidence that
radiation causes cancer in humans are summarised in Table 12.5.
12.5.3 Mutagenesis
The circumstantial evidence that ionising radiation causes mutations in humans is over-
whelming. For example, mutations have been observed in a wide variety of other species,
TABLE 12.5
Sources of Evidence That Ionising Radiation Causes Cancer in
Humans
Occupational: uranium miners; radium ingestion (dial painters); American radiologists
Atomic bombs: Japanese survivors; Marshall islanders
Medical diagnosis: prenatal irradiation; Thorotrast injections; multiple fluoroscopies (breast)
Medical therapy: cervix radiotherapy; breast radiotherapy; radium treatment; ankylosing spondylitis and others
[Figure: two different pre-replication chromosomes are broken by radiation; incorrect joining followed by replication by DNA synthesis produces a dicentric chromosome and an acentric fragment.]
FIGURE 12.11
Steps in the formation of a dicentric chromosome and acentric fragment as a result of radiation breaks and
faulty rejoining.
including plants, bacteria, fruit flies and mice, and radiation is known to impair the learn-
ing ability of mice and rats.
Furthermore, radiation is known to cause extensive and long-lasting chromosomal aber-
rations. These may occur either pre-replication (chromosome aberrations) or post-DNA
replication (chromatid aberrations). One mechanism by which these aberrations can arise
from breaks and faulty rejoining is shown in Figure 12.11. Dicentric chromosomes, which
occur when two different chromosomes break and then rejoin incorrectly (shown here)
and ring chromosomes, when two breaks occur in the same chromosome, are particu-
larly damaging since they are likely to prevent separation of the chromatids when the cell
attempts mitosis. An important method of retrospective dosimetry is to score such aber-
rations in cells circulating in the peripheral blood (Edwards 1997). For a dose of 50 mSv
whole body radiation, one or two dicentric chromosomes would be scored for every 1000
mitotic cells examined. The normal incidence is negligible.
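A rough sketch of how such a count might be converted into a retrospective dose estimate is given below. It assumes a simple proportional calibration anchored to the figure quoted above (about 1 to 2 dicentrics per 1000 cells at 50 mSv); real biological dosimetry uses a measured linear-quadratic calibration curve, so this is an order-of-magnitude illustration only.

```python
# Rough retrospective dose estimate from dicentric chromosome counts, assuming a
# simple proportional calibration anchored to the figure in the text (about 1.5
# dicentrics per 1000 cells at 50 mSv). Order-of-magnitude sketch only.

DICENTRICS_PER_1000_CELLS_AT_50_MSV = 1.5

def estimated_dose_msv(dicentrics, cells_scored):
    yield_per_1000 = 1000.0 * dicentrics / cells_scored
    return 50.0 * yield_per_1000 / DICENTRICS_PER_1000_CELLS_AT_50_MSV

# Example: 6 dicentrics found in 2000 scored metaphases.
print(f"Estimated whole body dose ~ {estimated_dose_msv(6, 2000):.0f} mSv")
```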
Notwithstanding, researchers have so far failed to demonstrate convincing evidence of
hereditary or genetic changes in humans as a result of radiation, even in the offspring of
Japanese survivors. For the latter group four measures of genetic effects, ranging from
untoward pregnancy outcomes to a mutation resulting in an electrophoretic variant in
blood proteins, have been studied. The difference between proximally and distally exposed
survivors is in the direction expected if a genetic effect had resulted from the radiation, but
none of the findings is statistically significant.
This failure to record a significant effect is presumably caused by the statistical difficulty
of showing a significant increase in the presence of a high and variable natural incidence
of both physical and mental genetically related anomalies. For severe disability the natural
incidence is between 4% and 6%.
There are additional problems in assessing genetic risk arising from the diagnostic use
of radiation. For example, only radiation exposure to the gonads is important and even this
component can be discounted after child-bearing age. Second, many radiation-induced
mutations will be recessive, so their chance of ‘appearing’ may depend on the overall level
of radiation exposure in the population. Finally, the risk to subsequent generations will
depend on the stability of the mutation once formed.
On the basis of the measured mutation rate per locus in the mouse, adjusted for the
estimated comparable number of loci in the human, the effective dose required to double
the mutation rate in humans is estimated to be at least 1 Sv. Hence 10 mSv per generation
parental exposure might increase the spontaneous mutation incidence by about 1%. An
alternative way to express the mutation risk will be discussed in the next section.
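The arithmetic behind the 'about 1%' figure is simply the ratio of the parental exposure to the doubling dose, as the short sketch below shows.

```python
# Arithmetic behind the 'about 1%' figure: with a doubling dose of at least 1 Sv,
# a parental exposure of 10 mSv per generation adds 10/1000 of the spontaneous
# mutation rate.

doubling_dose_msv = 1000.0      # at least 1 Sv (text)
parental_exposure_msv = 10.0    # per generation

fractional_increase = parental_exposure_msv / doubling_dose_msv
print(f"Increase in spontaneous mutation incidence ~ {fractional_increase:.0%}")  # ~1%
```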
[Figure: panels showing incidence of breast cancer per 100,000 woman-years; three panels plotted against breast dose (0–6 Gy) and one against number of fluoroscopies (0–400).]
FIGURE 12.12
Incidence of cancer of the female breast as a function of dose in atomic bomb survivors, women treated with
X-rays for acute post partum mastitis and women subjected to multiple fluoroscopic examinations of the chest
during treatment of pulmonary tuberculosis with artificial pneumothorax. (Redrawn from Boice J R Jr, Land C
E, Shore R E et al. Risk of breast cancer following low dose exposure, Radiology 131, 589–597, 1979.)
The major sets of ICRP recommendations have been issued in 1977, 1990 and 2007 (ICRP 1977, 1991, 2007). The 1990 recommendations made
some important changes to both risk factors and radiation protection terminology. In par-
ticular the risk estimate for radiation-induced cancer was increased by about a factor of
four. Some of the reasons for this increase were as follows:
• Most of the information gathered between 1977 and 1990 had led to a higher esti-
mate of risk. For example, dosimetry at Hiroshima and Nagasaki had been recal-
culated. This had shown that the previous estimates of neutron doses were too
high, especially at Hiroshima. Because neutrons have a high radiation weighting
factor, they make a relatively high contribution to the equivalent dose. Reducing
the calculated neutron contribution increases the cancer risk per Sv.
• A second point was that prolonged follow-up had continued to show an excess of
solid tumours in Japanese survivors exposed to radiation. Follow-up to 1985 gave
better information on the increased risk to those who were under 10 years of age
at the time of irradiation. In 1990, different risk factors were quoted for the work
force (18–65 years) and the whole population.
• Third, the model for cancer induction was reviewed. Before 1990 it had been assumed
to be additive. Now there is evidence that, for some cancers, for example breast, a
relative risk model is more appropriate. This assumes there will be a proportional
increase in cancer at all ages and as the population gets older and the natural inci-
dence of cancer increases, the extra cancers will also increase on this model.
• Finally, in 1990 it was decided to make some allowance for the detrimental effect
on quality of life of non-fatal cancers.
The 2007 recommendations contain relatively little data impacting significantly on risk
factors and are, for the most part, a consolidation and clarification of earlier recommenda-
tions. The only substantial change in risk figures has been a reduction in the estimated
heritable effects and most of this change is a result of a new method of presentation rather
than new data—see Insight.
Insight
ICRP (2007) Revision of Heritable Effects
• Extended follow-up of offspring of Japanese survivors has provided little new data to alter
conclusions of previous analyses (no effects have been demonstrated).
• Radiation-induced mutations are mostly large multi-gene deletions unlikely to result in live
births.
• ICRP 60 (1991) assumed all genetic lesions should be treated as lethal. The lethality fraction
has now been reduced to 80%.
• ICRP 103 (2007) has reverted to looking at the effect over two generations (as in ICRP 26
1977) because of severe scientific limitations on estimating an equilibrium value. This factor
alone reduces the risk coefficient by about a factor of 2–3.
ICRP Recommendations are essential reading for radiation protection experts but can be
quite heavy going. The reader may find a review of the 2007 recommendations by Wrixon
(2008), which also contains a comprehensive list of references, is adequate. Nominal risk
coefficients for 2007, 1990 and 1977 are compared in Table 12.6. Note that risks to the whole
population are averaged over both sexes and a wide range of ages. Since cancer risk varies
with age and sex, risks to individuals, for example, in diagnostic medical exposures, may
TABLE 12.6
Nominal Risk Coefficients in ICRP Recommendations
                       2007 (%Sv⁻¹)                      1990 (%Sv⁻¹)                1977 (%Sv⁻¹)
                   Work Force  Whole Population    Work Force  Whole Population
All cancers           4.1            5.5               4.8           6.0               1.25a
Heritable effects     0.1            0.2               0.8b          1.3b              0.4
Total (rounded)       4.2            5.7               5.6           7.3               2.0
Notes: In extrapolating from high doses ICRP considers that risks may be reduced by a factor of about 2 owing to
dose and dose rate effects. This correction is known as a dose and dose rate effectiveness factor (DDREF) and
has been made to the above figures which apply to low doses of radiation (see model A in Figure 12.13).
a Only fatal cancer was considered in 1977.
b The 1990 recommendations attempted to estimate an equilibrium value. Figures for 1977 and 2007 are for the effect over two generations only.
be somewhat different (but not necessarily outside the bounds of the considerable uncer-
tainties attached to these figures).
There is a body of opinion that considers the potentially harmful effects of low doses of
radiation are overstated. For example, the point is made that there is no direct evidence
that the very low doses used in, say, dental radiology have caused any cancers at all. This
is of course true. Observed excess cases have all been with much higher doses, for example
the dose to breast tissue in early fluoroscopy work was estimated at 0.04–0.2 Gy and the
number of examinations on one individual frequently exceeded 100.
Thus any extrapolation from observed data to lower doses is bound to be a model. Three
possible models, which have all been widely discussed in the literature, are illustrated in
Figure 12.13. However, we have no direct means of checking which of the three curves, or
indeed any other shape of curve is correct. For further detail see the figure legend.
Resolution of this problem would require a much better understanding of the mecha-
nism of carcinogenesis. However, if cancer arises as the final result of a series of successive
changes, it is at least plausible that radiation-induced damage could be one key step in the
chain of events. Since even a few micrograys of radiation will cause in excess of 10⁸ ion
pairs per gram of tissue and a single ion pair is capable of breaking a chromosome and
hence causing a translocation, we must also accept that there is, effectively, no lower limit
to the dose of radiation that will cause cancer. Clearly the vast majority of changes at the
molecular level do not manifest themselves in disease!
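The 'in excess of 10⁸ ion pairs per gram' figure can be checked with the sketch below, assuming roughly 34 eV deposited per ion pair (an assumed W value, not quoted in this passage).

```python
# Order-of-magnitude check of the 'in excess of 10^8 ion pairs per gram' figure.
# Assumes roughly 34 eV deposited per ion pair (an assumed W value).

EV_TO_J = 1.602e-19
energy_per_ion_pair_j = 34.0 * EV_TO_J

dose_gy = 3e-6                         # 'a few micrograys'
energy_per_gram_j = dose_gy / 1000.0   # Gy = J/kg, so energy per gram is dose/1000

ion_pairs_per_gram = energy_per_gram_j / energy_per_ion_pair_j
print(f"~{ion_pairs_per_gram:.1e} ion pairs per gram")   # a few times 10^8
```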
This principle is extremely important in the diagnostic use of X-rays. If a linear response
with no threshold is assumed (the LNT model, curve A in Figure 12.13), then doubling the
dose of radiation will double the number of fatal cancers, however low the dose. Also it
is reasoned that 10 mSv given to 10,000 persons or 100 µSv given to a million persons are
going to cause the same number of fatal cancers. A physical explanation for this reasoning
would be that the same number of ion pairs has been created in each case. Therefore in
[Figure: long-term risk (extra cases per 10⁴ persons) plotted against dose of radiation (mSv), with a broken dose axis separating the low dose region (about 20–60 mSv) from the high dose region (600–1200 mSv) where actual data exist; curves for models A, B and C; typical doses from background radiation and simple radiological examinations are shaded.]
FIGURE 12.13
Three curves showing the possible variation in long-term risk with radiation dose. The solid line (model A)
represents current thinking—a linear increase at low doses with no threshold (the LNT model). At higher doses
(well above protection and diagnostic levels) the curve turns upwards. The dotted curves represent a non-
linear increase, possibly with a threshold (model B) and a supralinear model in which low dose effects would
be higher than predicted (model C). Typical doses from annual background radiation and simple radiological
examinations are shown shaded.
situations where large numbers of persons are exposed to low doses, for example in diag-
nostic radiology it is the collective dose (100 personSv in this example) that will determine
the detriment. Hence with a risk of fatal cancer of approximately 5% Sv–1, a collective dose
of 100 personSv to the whole population will cause five extra fatal cancers.
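Under the LNT assumption the expected detriment follows directly from the collective dose, as the sketch below illustrates for the two exposure patterns mentioned above.

```python
# Collective dose arithmetic: under the LNT model the detriment depends only on
# the collective dose (person-sievert), however the dose is shared.

RISK_FATAL_CANCER_PER_SV = 0.05     # approximately 5% per Sv (whole population)

def expected_excess_fatal_cancers(dose_sv_per_person, persons):
    collective_dose_person_sv = dose_sv_per_person * persons
    return RISK_FATAL_CANCER_PER_SV * collective_dose_person_sv

print(f"10 mSv to 10,000 people:     "
      f"{expected_excess_fatal_cancers(0.010, 10_000):.1f} excess fatal cancers")
print(f"100 µSv to 1,000,000 people: "
      f"{expected_excess_fatal_cancers(100e-6, 1_000_000):.1f} excess fatal cancers")
```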
N = k′(αD + βD²), where N is the number of DNA double strand breaks produced by a dose D.
Furthermore the model predicts that α/β will increase at low dose rates and for high LET
radiation. Both effects have been observed in many systems. At very low doses the equa-
tion reduces to N = k′αD, that is, the number of double strand breaks increases linearly
with dose from zero dose upwards.
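A short sketch of this linear-quadratic yield is given below; the constants k′, α and β are arbitrary illustrative values, chosen only to show that the quadratic term is negligible at the doses relevant to diagnostic radiology, so that the yield rises linearly from zero dose.

```python
# Double strand break yield N = k'(alpha*D + beta*D^2). The constants below are
# assumed purely for illustration; the point is that the quadratic term is
# negligible at low doses, so N rises linearly from zero dose.

K_PRIME, ALPHA, BETA = 1.0, 30.0, 5.0   # arbitrary illustrative units (per Gy, per Gy^2)

def dsb_yield(dose_gy):
    return K_PRIME * (ALPHA * dose_gy + BETA * dose_gy**2)

for dose in (0.001, 0.01, 0.1, 1.0, 5.0):
    linear_fraction = ALPHA * dose / (ALPHA * dose + BETA * dose**2)
    print(f"D = {dose:>5} Gy: N = {dsb_yield(dose):8.3f}, "
          f"linear term = {linear_fraction:.1%} of total")
```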
Double strand breaks also provide a starting point for a two-stage clonal expansion model,
first proposed by Moolgavkar and Knudson (1981) and now commonly used for modelling
radiation-induced and radiation-driven carcinogenesis. The model is illustrated in Figure
12.14. The organ of interest is assumed to contain N normal cells that have the potential to
become malignant (M) in two rate-limiting stochastic steps (µ1 and µ2). In the intermedi-
ate stage (I) the growth advantage of cells on the pathway to malignancy is determined
by the relative values of α (birth rate) and β (death rate) where α > β. The growth time of
[Figure: N → I (rate µ1), I → M (rate µ2), M → T (after lag time tlag); intermediate cells I undergo clonal expansion with birth rate α and death rate β.]
FIGURE 12.14
Schematic representation of the two-mutation model for cancer induction—for explanation see text. (With per-
mission from Dendy P P and Brugmans M P J, Commentary: Low dose radiation risks, Br. J. Radiol. 76, 674–677,
2003.)
a malignant cell into a detectable tumour (T) depends on a deterministic lag time (tlag).
Spontaneous tumours result from spontaneous mutations. A single acute radiation expo-
sure is assumed to increase the mutation rate in one of the steps but cannot influence both.
At low dose, radiation is not thought to influence the clonal expansion stage. Variations
in the relative values of α and β, and in tlag can explain the well-documented variations
between time of exposure and time of tumour appearance in the Japanese survivors.
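The mechanics of the model can be illustrated with a minimal stochastic simulation. Every parameter value in the sketch below (cell numbers, mutation rates, birth and death rates, lag time, the size of the exposure pulse and the time units) is invented purely for illustration and is not taken from Moolgavkar and Knudson or from this book; the sketch simply shows how an acute increase in the first mutation rate tends to bring tumour appearance forward.

```python
# Minimal stochastic sketch of the two-stage clonal expansion model of Figure
# 12.14. All parameter values and time units are illustrative assumptions only.
# A single acute exposure is represented as a burst of extra first mutations.
import numpy as np

def time_to_tumour(rng, n_normal=1e7, mu1=2e-8, mu2=1e-4,
                   alpha=0.12, beta=0.10, t_lag=5.0,
                   exposure_time=None, exposure_pulse=200.0,
                   dt=0.1, t_max=400.0):
    """Time at which a detectable tumour appears (np.inf if none by t_max)."""
    intermediate = 0.0
    t, exposed = 0.0, False
    while t < t_max:
        if exposure_time is not None and not exposed and t >= exposure_time:
            # acute exposure: a burst of first mutations equivalent to
            # 'exposure_pulse' time units of the spontaneous N -> I rate
            intermediate += rng.poisson(n_normal * mu1 * exposure_pulse)
            exposed = True
        intermediate += rng.poisson(n_normal * mu1 * dt)        # spontaneous N -> I
        intermediate += rng.poisson(intermediate * alpha * dt)  # clonal births
        intermediate -= rng.poisson(intermediate * beta * dt)   # clonal deaths
        intermediate = max(intermediate, 0.0)
        if rng.poisson(intermediate * mu2 * dt) > 0:            # second mutation I -> M
            return t + t_lag                                    # detectable after lag
        t += dt
    return np.inf

rng = np.random.default_rng(1)
spontaneous = [time_to_tumour(rng) for _ in range(20)]
irradiated = [time_to_tumour(rng, exposure_time=10.0) for _ in range(20)]
print("median spontaneous tumour appearance time:", round(float(np.median(spontaneous)), 1))
print("median appearance time after exposure at t = 10:", round(float(np.median(irradiated)), 1))
```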
In conclusion, experimental and theoretical work on double strand breaks in DNA
strongly supports the view that cancer induction can be the ultimate outcome of
a double strand break in DNA. The dose-effect curve is basically linear-quadratic but
becomes linear at low doses, extending back to zero dose.
• Use vital stains (i.e. ones which do not kill cells) to stain two groups of cells
selectively—one group orange for the cytoplasm, the other blue for the nuclei
• Mix the cells and attach to a thin surface that can be penetrated by the α-particles
• Programme the microbeam so that only blue nuclei are hit—the microbeam is
about 5 µm in diameter and the range of the α-particles so short that ionisation
tracks will not extend outside the nucleus
• Fix 48 h later when micronuclei and chromosome bridges, implying significant
chromosomal rearrangement, are observed in orange-stained cells which have not
been directly irradiated
Cell-to-cell communication via gap junctions has been identified as an important contrib-
uting factor to the bystander effect. The effect is not observed when cells are sparse or gap
junctions in close-packed cells are blocked.
There are no reports of similar effects in vivo, but if bystander effects occur in vivo the
implications for extrapolation from high to low dose could be important. A dose of 1 mGy
corresponds, on average, to about one ionisation track per cell so at low doses many nuclei
are not hit. Two conflicting outcomes are then possible (see Figure 12.15): the bystander effect may extend damage to unirradiated cells, increasing the risk (model D), or it may enhance adaptive, protective processes, reducing it (model E).
The LNT hypothesis assumes that risk is only influenced by dose. However, Figure 12.16
shows that cancer is only one of four alternative outcomes. If the relative importance of
[Figure: long-term risk plotted against dose of radiation (1–3 mSv); curves for models A, D and E.]
FIGURE 12.15
Expanded version of the very low dose region of Figure 12.13 showing possible consequences of the bystander
effect. Model A is taken from Figure 12.13. In model D the bystander effect extends damage to unirradiated
cells. In model E the bystander effect enhances adaptive processes. All curves are dotted to show there is no
direct experimental evidence for any of them.
[Figure: flow diagram in which radiation acting on a normal cell produces DNA damage, with numbered alternative pathways leading to outcomes including cell death/apoptosis and cancer.]
FIGURE 12.16
Possible outcomes of radiation damage at the cellular level—for explanation see text.
these alternative pathways varies with dose and/or time, the LNT hypothesis might not
be valid.
Recent radiobiological research, both in vitro and in animals, has shown a large number
of 'adaptive responses' to low doses, suggesting that alternative pathways can be altered.
Further details are given in the ‘insight’ but the basic approach is to demonstrate that if a
low priming dose of a few mSv is given, the system subsequently responds differently to
both low and high doses. Overall the results show that
1. The adaptive effect of low doses reduces risk rather than increasing it.
2. Adaptive effects are not observed above 100 mGy.
3. Adaptive responses are not unique to mammals and may be part of an evolution-
ary response to background radiation.
Insight
Examples of Adaptive Responses
In Vitro
• The frequency of micronuclei in human skin cells exposed to a high dose of low LET radia-
tion (4 Gy) is less if the cells have been exposed to a range of lower doses (1–100 mGy) 3 h
previously.
• In non-dividing lymphocytes more cells die by apoptosis following a high dose (2 Gy) if the
cells were exposed to 100 mGy 6 h earlier.
• In rodent cells a priming dose of 100 mGy delivered at low dose rate 24 h before a high
dose (4 Gy) caused less malignant transformations than the high dose alone.
• Of particular interest to diagnostic radiology (because it involved no high doses) was the
finding that in another strain of rodent cells single low doses of 1, 10 and 100 mGy all
reduced the spontaneous malignant transformation frequency. Interestingly, the reduced
frequency was the same at all doses, even though at 1 mGy not all cells would be traversed
by an ionisation track. This may be evidence for the bystander effect.
In Vivo
• A priming dose of 100 mGy at low dose rate the day before a carcinogenic dose of 1 Gy to
genetically normal mice delayed the appearance of myeloid leukaemia.
• The appearance of osteogenic sarcomas in genetically altered, cancer prone mice was
delayed, relative to unirradiated controls, by doses of 10 mGy, but accelerated by 100 mGy.
1. The study involves a large population of about 90,000 of all ages and both
genders.
2. About one-third of the survivors were exposed to doses in the range 5–100 mSv.
100 mSv is about the highest dose of relevance in diagnostic radiology, correspond-
ing to several CT scans.
3. Cancer incidence and mortality data are available.
4. Mortality follow-up is almost complete for adults, and about 50% complete for
children.
5. It is to be hoped that no comparable study will ever be possible in the future.
[Figure: excess relative risk (ERR) for cancer mortality (0.00–0.06) plotted against mean dose (0–100 mSv); each point represents a cumulative dose band (e.g. 5–50, 5–100, 5–125, 5–200, 5–500 mSv).]
FIGURE 12.17
Excess relative risk (ERR) of cancer mortality for Japanese survivors at low doses. ERR is significantly different
from zero (■) for average doses of about 35 mSv (range 5–125 mSv) and above. For lower doses it is positive but
not significant (○). (With permission from Hall E J and Brenner D J, Cancer risks from diagnostic radiology,
Br. J. Radiol. 81, 362–378, 2008.)
Even for such a large population, cases had to be grouped into quite large dose bands.
Thus the 5–50 mSv dose group was first compared with the controls (less than 5 mSv), then
the 5–100 mSv group was compared with controls and so on. As shown in Figure 12.17 the
excess relative risk became significantly different from zero for the 5–125 mSv group (mean
dose 35 mSv). This is important because 35 mSv is comparable with some organ doses (35
mGy assuming wR = 1 for X-rays) received in some high dose radiological procedures—for
example, two CT scans or a long interventional procedure.
Some confirmatory evidence comes from a 15 nation study of 400,000 workers in the
nuclear industry who received a mean dose of 20 mSv. Again the excess relative risk was
above zero and almost identical to the figure for A-bomb survivors extrapolated to the
same mean dose.
The best medical evidence comes from numerous studies of childhood cancer risk after
foetal exposure to diagnostic X-rays. These have concluded that there is a significant
increase in childhood cancer for foetal doses of 10–20 mGy.
12.8 Conclusions
In terms of absorbed energy ionising radiation is clearly the most harmful of all types of
radiation. This is because of the unique way in which the ionisation process deposits packets
of energy that are large compared to biochemical processes. Following the very early physico-
chemical changes resulting from the radiation, double strand breaks in DNA are probably the
most important early biological effect and manifest themselves in many different ways.
At high doses there is clear evidence that ionising radiation can cause cancer in humans
and mutations in animals. Data from different sources are broadly consistent and permit
risk estimates to be made. The absence of any major new ICRP recommendations between
1990 and 2007 gives confidence that radiation protection has matured into a sound, scien-
tifically based discipline underpinning the systems of control of radiation exposure that
have been introduced throughout the world.
The position at the very low doses associated with most diagnostic examinations is less
clear cut. Epidemiological studies are no help below the upper end of the diagnostic range
of doses and an increasing number of radiobiological experiments report adaptive pro-
cesses at low doses. The ICRP (ICRP 2007) remains of the view that extrapolation of the
LNT model (model A in Figure 12.13) below about 50 mSv is the most plausible scientifi-
cally. This is certainly the most robust approach for radiation protection planning purposes
but care is required when applying it to estimate the risk from radiological exposures to
patients. This subject is taken up again in Chapter 13.
References
Bergonié J and Tribondeau L, De quelques résultats de la radiothérapie et essai de fixation d'une
technique rationnelle, C R Acad Sci 143, 983, 1906 (Engl. Transl. Fletcher G H, Interpretation
of some results of radiotherapy and an attempt at determining a logical treatment technique,
Radiat Res 11, 587, 1959).
Boice J R Jr, Land C E, Shore R E et al., Risk of breast cancer following low dose exposure, Radiology
131, 589–97, 1979.
Brooks A L, Coleman M A, Douple E B et al., Biological effects of low doses of ionising radiation. In
Advances in medical physics, eds. Wolbarst A B, Zamenhof R G, Hendee W R Medical Physics
Publishing, Madison, 2006, pp. 256–286.
Dendy P P and Brugmans M P J, Commentary: Low dose radiation risks, Br J Radiol 76, 674–77, 2003.
Edwards A A, The use of chromosomal aberrations in human lymphocytes for biological dosimetry,
Radiat Res 148 (Suppl.), 39–44, 1997.
Hall E J and Brenner D J, Cancer risks from diagnostic radiology, Br J Radiol 81, 362–378, 2008.
Hall E J and Giaccia A J, Radiobiology for the radiologist 6th ed., Lippincott, Williams and Wilkins,
Philadelphia, 2006.
ICRP, Recommendations of the International Commission on Radiological Protection, ICRP
Publication 26, Ann ICRP 1, 3, 1977.
ICRP, Recommendations of the International Commission on Radiological Protection, ICRP
Publication 60, Ann ICRP 21, 1–3, 1991.
ICRP, Recommendations of the International Commission on Radiological Protection, ICRP
Publication 103, Ann ICRP 37, 2–4, 2007.
ICRU, Radiation quantities and units, International Commission on Radiation Units and
Measurements, ICRU Publication 33, 1980.
Martin C J, Dendy P P and Corbett R H (eds.), Medical imaging and radiation protection for medical stu-
dents and clinical staff, British Institute of Radiology, London, 2003.
Mettler F A Jr and Upton A C, Medical effects of ionising radiation 3rd ed., Saunders Elsevier, Philadelphia,
2008.
Moolgavkar S H and Knudson A G, Mutation and cancer: A model for human carcinogenesis, J Natl
Cancer Inst 66, 1037–52, 1981.
Mothersill C and Seymour C, Radiation-induced bystander effects: Past history and future direc-
tions, Radiat Res 155, 759–767, 2001.
Preston D L, Shimizu Y, Pierce D A et al., Studies of mortality of atomic bomb survivors. Report 13:
Solid cancer and non-cancer disease mortality 1950–1997, Radiat Res 160, 381–407, 2003.
Preston D L, Ron E, Tokuoka S et al., Solid cancer incidence in atomic bomb survivors 1958–1998,
Radiat Res 168, 1–64, 2007.
Shimizu Y, Kato H and Schull W J, Studies of the mortality of A bomb survivors. 9 Mortality 1950–1985,
Part 2, Cancer mortality based on the recently revised doses DS 86, Radiat Res 121, 120–41, 1990.
UNSCEAR (United Nations Scientific Committee on the Effects of Atomic Radiation), Sources and
effects of ionising radiation—2006 Report to the General Assembly with scientific annexes, United
Nations, New York, 2008.
Upton A C, Cancer induction and non-stochastic effects. In Biological basis of radiological protection and
its application to risk management, Br J Radiol 60, 1–6, 1987.
Wall B F, Kendall G M, Edwards A A et al., Review article: What are the risks from medical X-rays and
other low doses of radiation? Br J Radiol 79, 285–294, 2006.
Watson S J, Jones A L, Oatway W B et al., Ionising radiation exposure of the U.K. population: 2005
review. Document HPA – RPD – 001, Health Protection Agency, Chilton, Didcot, U.K. 2005.
Wrixon A D, New recommendations from the International Commission on Radiological Protection—a
review, Phys Med Biol 53, R41–R60, 2008.
Exercises
1. State the ‘law’ of Bergonié and Tribondeau on cellular radiosensitivity. Discuss the
extent to which cell types follow the ‘law’.
2. Explain the term RBE. List the factors that can affect the RBE value, giving a brief
explanation of each.
3. What are the differences between tissue reactions (deterministic effects) and sto-
chastic radiation effects? What is the evidence for believing that radiation-induced
carcinogenesis is a stochastic effect?
4. Discuss the precautions that should be taken to minimise the risk of tissue reac-
tions from radiation during interventional procedures and the actions that should
be taken to ensure that they are identified if they occur.
5. Sketch the most common forms of chromosome defect detectable after whole body
irradiation. Discuss the feasibility of chromosome aberration analysis as a method
of retrospective whole body monitoring.
6. Review the evidence that ionising radiation can cause harmful genetic effects.
7. What are the four main sources of evidence that radiation can cause cancer in
humans? Give two examples of each and discuss the strengths and weaknesses of
each source.
8. With specific reference to diagnostic radiology, discuss the case for and against the
linear-no-threshold (LNT) model of radiation risk.
9. What is the bystander effect? How may it be demonstrated and how might it affect
low dose radiation risk?
10. Outline the evidence that adaptive processes are triggered in cells at very low
doses.
13
Radiation Doses and Risks to Patients
SUMMARY
• The reasons for assessing radiation doses to patients are discussed.
• Practical methods of measuring doses to patients are reviewed.
• Current data on patient doses from a wide range of radiological investiga-
tions is presented.
• The concept of diagnostic reference level (DRL) and its role in dose reduction
is introduced together with practical methods of dose reduction.
• The concept of effective dose, its calculation and limitations in assessing risk
is discussed.
• The special high-risk situation of irradiation of the foetus is reviewed.
CONTENTS
13.1 Introduction—Why Are Doses Measured?..................................................................... 428
13.2 Principles of Patient Dose Measurement......................................................................... 428
13.2.1 Where Is the Dose Measured?.............................................................................. 428
13.2.2 How Is the Dose Measured?.................................................................................. 429
13.3 Review of Patient Doses.....................................................................................................430
13.3.1 Entrance Doses in Radiography...........................................................................430
13.3.2 Entrance Doses in Fluoroscopy............................................................................ 431
13.3.3 Doses in Interventional Procedures.....................................................................433
13.4 Effect of Digital Receptors on Patient Dose....................................................................434
13.5 Effective Dose and Risks from Radiological Examinations......................................... 435
13.5.1 Tissue Weighting Factors....................................................................................... 435
13.5.2 Organ Doses............................................................................................................ 436
13.5.3 Effective Dose.......................................................................................................... 438
13.6 Optimisation and Patient Dose Reduction..................................................................... 438
13.6.1 Technical Factors..................................................................................................... 439
13.6.2 Non-Technical Factors............................................................................................ 441
13.7 Procedures Requiring Special Dose Assessment/Measurement................................ 441
13.7.1 Computed Tomography (CT)................................................................................ 441
13.7.2 Mammography Doses............................................................................................444
13.7.3 Nuclear Medicine....................................................................................................445
13.8 The Perception of Risk from Medical Radiation Exposures.........................................446
FIGURE 13.1
Attenuation of an X-ray passing through the body for three common X-ray projections (solid lines). The dotted line shows the effect on a critical organ dose, for example, the ovary if a PA projection were used instead for a lumbar spine view.
[Figure not reproduced: dose (mGy, logarithmic scale) plotted against distance through the patient from the anterior (A) to the posterior (P) surface, with curves labelled PA entrance dose, AP lumbar spine and PA exit dose, and the position of the critical organ marked.]
FIGURE 13.2
Direct and indirect approaches to assessment of entrance skin dose (FSD = focus-skin distance; BSF = backscatter factor).
[Figure not reproduced: flow diagram linking TLD calibration, DAP calibration and tube output in air (mGy mAs⁻¹ at 75 cm) to the entrance skin dose.]
A dose estimate may be required when no TLD or DAP measurement has been made, so indirect
methods of dose estimation should not be overlooked. In this approach the tube output is measured
under specified conditions using an ionisation chamber in air, generally as part of routine
quality assurance (QA) checks. A typical figure would be 100 µGy mAs⁻¹ at 75 cm FSD for a modern
tube operating at 80 kVp. The ESD can then be calculated from knowledge of the exposure factors, applying any necessary correction for
source-skin distance and a factor to allow for backscatter from the patient. Indirect methods
also allow a large number of dose estimates to be made from a small number of measure-
ments and may be useful at very low doses close to the detection limit of the TLDs or DAP
meter. Indirect methods rely on knowing all the exposure factors and are difficult to apply
to automatic control systems where the delivered mAs may not be recorded. The direct and
indirect approaches for assessing entrance skin dose are summarised in Figure 13.2.
Note, (1) if the measurement is made with a DAP meter the result must be divided by the
area of the beam (on the patient) to obtain the ESD and multiplied by the backscatter factor;
and (2) the backscatter factor can be quite high and will typically be in the range 1.2–1.4.
It is left as an exercise to the reader to explain why the backscatter factor will increase with
increasing field size and increasing peak applied kVp/half value layer.
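
The indirect approach lends itself to a simple calculation. The short Python sketch below (the book contains no code, so the function names and the numerical values in the example are purely illustrative) follows the steps just described: tube output multiplied by mAs, an inverse square correction from the calibration distance to the focus-skin distance, and a backscatter factor; a companion function applies the DAP route of note (1).

# Indirect entrance skin dose (ESD) estimate, following the steps described above.
# All numerical values are illustrative; output and backscatter factors must come
# from local QA measurements (function names here are hypothetical).

def esd_from_output(output_uGy_per_mAs, cal_distance_cm, mAs, fsd_cm, bsf):
    """ESD in mGy from tube output, exposure factors, the inverse square law and BSF."""
    air_kerma_uGy = output_uGy_per_mAs * mAs              # at the calibration distance
    air_kerma_uGy *= (cal_distance_cm / fsd_cm) ** 2      # inverse square correction to the skin
    return air_kerma_uGy * bsf / 1000.0                   # include backscatter, convert to mGy

def esd_from_dap(dap_Gy_cm2, beam_area_on_patient_cm2, bsf):
    """ESD in mGy from a DAP reading, as in note (1) above."""
    return dap_Gy_cm2 / beam_area_on_patient_cm2 * bsf * 1000.0

# Example: 100 uGy/mAs at 75 cm, 40 mAs at 80 kVp, 70 cm FSD, BSF 1.3
print(round(esd_from_output(100.0, 75.0, 40.0, 70.0, 1.3), 1), "mGy")
# Example: DAP of 1.5 Gy cm2 over a 20 cm x 20 cm field on the patient, BSF 1.3
print(round(esd_from_dap(1.5, 400.0, 1.3), 1), "mGy")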
FIGURE 13.3
Comparison of room mean DAP distributions for lateral lumbar spine examinations from a survey carried out in East Anglia in 2007.
[Figure not reproduced: histogram of the number of X-ray rooms against room mean DAP (Gy cm²), in 1 Gy cm² bins from 0–1 to 6–7.]
a very narrow range (less than 5 mGy), some a very wide range (up to 80 mGy). In dose
surveys made in the 1980s, some of the dose variation was due to film-screen systems of
different speeds being in use; as departments changed to more sensitive rare earth intensifying
screens, less radiation dose was required to produce a diagnostic image. This pattern has been
repeated, to a greater or lesser degree, in subsequent surveys, the overall trend being a reduction
in both the dose and the range of doses. Results from a survey of lumbar spine DAP measurements
undertaken in East Anglia, U.K., in 2007 are shown in Figure 13.3.
In an attempt to put downward pressure on patient doses a joint working party of the
Royal College of Radiologists and National Radiological Protection Board (NRPB 1990)
introduced the concept of reference values for entrance doses for standard radiological
examinations. These levels are known as DRLs and are defined as ‘dose levels for typi-
cal examinations for groups of standard sized patients or standard phantoms for broadly
defined types of equipment’ (IRMER 2000). UK National reference levels were set at the
third quartile point from an NRPB survey conducted in the mid 1980s and were adopted
by the European Union. In the UK employers are required to establish local DRLs (LDRLs)
and to undertake local reviews and initiate appropriate action when they are consistently
exceeded (see Section 13.6).
The NRPB (now the Health Protection Agency [HPA]—Radiation Protection Division)
has continued to carry out 5-yearly reviews of doses to patients from X-ray imaging procedures
in the UK, the most recent being published in 2007 and covering the period January
2001–February 2006 (Hart et al. 2007). The national reference doses recommended in the 2007
review are shown in Table 13.1. In the majority of cases doses have reduced by a factor
of 2 or more since the 1980s.
TABLE 13.1
Recommended National Reference Doses for Individual Radiographs on Adult Patients

Radiograph             Entrance Surface Dose (mGy)   DAP per Radiograph (Gy cm²)
Lumbar spine AP               5                            1.6
Lumbar spine LAT             11                            2.5
Lumbar spine LSJ             26                            2.6
Chest PA                      0.15                         0.11
Chest LAT                     0.6                          0.3
Abdomen AP                    4                            2.6
Pelvis AP                     4                            2.1
Skull AP/PA                   2                            0.8
Skull LAT                     1.3                          0.5
Thoracic spine AP             4                            0.9
Thoracic spine LAT            7                            1.4

Source: Hart D, Hillier MC and Wall BF, Doses to Patients from Radiographic and Fluoroscopic X-ray Imaging Procedures in the UK – 2005 Review. HPA-RPD-029, Health Protection Agency, Radiation Protection Division, Chilton, UK, 2007.
TABLE 13.2
Range of Input Air KERMA Rates for Modern Image Intensifier Systems under Automatic Brightness Control

                                             Dose Rate Settings (µGy s⁻¹)
System                   Field Size (cm)    Low     Normal    High
GE Advantx TC                  30           0.2      0.35     0.78
Philips TeleDiagnost           38           0.15     0.3      –
GE Legacy                      32           0.23     0.5      1.02

Source: Adapted from Evans DS, Mackenzie A, Lawinski CP and Smith D, Threshold contrast detail detectability curves for fluoroscopy and digital acquisition using modern image intensifier systems. Br. J. Radiol. 77, 751–758, 2004.
One limitation of the definition of absorbed dose is that it changes very little with
change in field size (there is a second order effect due to a change in the amount of scatter).
However, the risk to the patient will clearly be greater if the dose is delivered over a bigger
area, so the risk is more closely related to the total energy deposited as ionisation than to
the absorbed dose. Therefore one of the variables that needs to be known, if surface dose
is measured with a TLD, is the field size. In fluoroscopy, field sizes are non-standard and
may vary during a study, so a dose measurement based on surface TLDs is not appropriate
and a DAP measurement is used. In the most modern fluoroscopy equipment a measure
of ESD is also displayed; this is a value calculated from the DAP assuming a specific focus
skin distance. It is particularly useful in interventional procedures where the skin dose
may be significant (see Section 13.3.3).
Recommended national reference doses for a number of more common fluoroscopic and
complete examinations are shown in Table 13.3.
Note that the figure for coronary angiography considers all patients having this proce-
dure, and includes a few patients who have already had a coronary bypass graft. For these
patients doses may be of the order of twice the reference dose.
TABLE 13.3
Recommended National Reference Dose Area Product for a Selection of Fluoroscopic and Complete Examinations

Examination                           DAP per Examination (Gy cm²)   Fluoroscopy Time per Examination (min)
Barium (or water soluble) enema                 24                              2.8
Barium follow through                           12                              2.2
Barium meal                                     14                              2.7
Barium meal and swallow                         11                              2.2
Barium (or water soluble) swallow                9                              2.3
Coronary angiography                            29                              4.5
Femoral angiography                             36                              5.5
Fistulography                                   13                              3.8
Hysterosalpingography                            3                              1
IVU                                             14                              –
MCU                                             12                              1.9
Nephrostography                                 12                              4.8
Sialography                                      2                              1.7
Sinography                                       9                              2.1
Small bowel enema                               40                              9.2
T-tube cholangiography                           8                              1.9
Venography                                       7                              2.2

Source: Hart D, Hillier MC and Wall BF, Doses to Patients from Radiographic and Fluoroscopic X-ray Imaging Procedures in the UK – 2005 Review. HPA-RPD-029, Health Protection Agency, Radiation Protection Division, Chilton, UK, 2007.
TABLE 13.4
Recommended National Reference Doses for Interventional Procedures on Adult Patients

Interventional Procedure           DAP per Examination (Gy cm²)   Fluoroscopy Time per Examination (min)
Biliary drainage/intervention                50                              15
Facet joint injection                         5                              1.8
Hickman line                                  3                              1.4
Nephrostomy                                  14                              5.1
Oesophageal dilation                         11                              2.8
Oesophageal stent                            25                              5.9
Pacemaker                                    11                              8.2
PTCA (single stent)                          50                              13

Source: Hart D, Hillier MC and Wall BF, Doses to Patients from Radiographic and Fluoroscopic X-ray Imaging Procedures in the UK – 2005 Review. HPA-RPD-029, Health Protection Agency, Radiation Protection Division, Chilton, UK, 2007.
Frequently the radiation field in such procedures is static for a considerable period of time,
so not only is the total energy absorbed important but also the ESD, since it may be
high enough to cause skin damage (ICRP 2000; see Sections 9.6.3 and 12.5.1). In some cases,
for example coronary angioplasty, the radiation fields may have different orientations but
there will still be some areas of overlap. These can be illustrated, and the skin dose estimated,
using suitably calibrated film placed on the X-ray table under the patient (Morrell and Rogers 2006).
Recently 'Gafchromic' film, which requires no processing, has been used for this purpose.
Morrell and Rogers and others have reported ESDs in excess of 1 Gy for some percutaneous
transluminal coronary angioplasty (PTCA) procedures.
TABLE 13.5
Comparison of Room Mean DAP per Radiograph for CR and Film

                           DAP (Gy cm²)
Radiograph           Computed Radiography    Film
AP abdomen                  2.43             2.21
PA chest                    0.13             0.1
AP lumbar spine             1.41             1.55
Lat lumbar spine            2.48             1.95
AP pelvis                   2.08             1.95

Source: Hart D, Hillier MC and Wall BF, Doses to Patients from Radiographic and Fluoroscopic X-ray Imaging Procedures in the UK – 2005 Review. HPA-RPD-029, Health Protection Agency, Radiation Protection Division, Chilton, UK, 2007.
FIGURE 13.4
Comparison of room mean DAP for AP pelvis examinations carried out using DR, film or CR as the image receptor, from a survey carried out in East Anglia in 2007.
[Figure not reproduced: bar chart of mean DAP (0–3.5 Gy cm²) for individual X-ray rooms, grouped by image receptor type (DR receptors, film and CR).]
East Anglia between 2007 and 2008 for AP pelvis examinations suggest that a significant
dose reduction can be achieved (Figure 13.4).
DR receptors have been used for fluoroscopy in place of conventional image intensifiers
since about 2000. Although insufficient data were available in the HPA Review published
in 2007 to recommend a DRL for such systems, some limited data were available from car-
diac procedures. These tended to show an increase in DAP for digital detectors which may
indicate inadequate image optimisation. The next 5-year review should provide a more
solid indication of the dose saving or otherwise.
TABLE 13.6
Tissue Weighting Factors

Tissue or Organ         wT (ICRP 2007a)    wT (ICRP 1991)
Bone marrow (red)            0.12               0.12
Colon                        0.12               0.12
Lung                         0.12               0.12
Stomach                      0.12               0.12
Breast                       0.12               0.05
Remainder tissues            0.12               0.05
Gonads                       0.08               0.2
Bladder                      0.04               0.05
Oesophagus                   0.04               0.05
Liver                        0.04               0.05
Thyroid                      0.04               0.05
Bone surface                 0.01               0.01
Brain                        0.01               Part of remainder
Salivary glands              0.01               Part of remainder
Skin                         0.01               0.01

ICRP 2007a remainder = adrenals, extrathoracic region, gall bladder, heart, kidneys, lymphatic nodes, muscle, oral mucosa, pancreas, prostate, small intestine, spleen, thymus, uterus/cervix.
ICRP 1991 remainder = adrenals, brain, small intestine, kidney, muscle, pancreas, spleen, thymus, uterus.
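
The weighting factors in Table 13.6 are applied as a weighted sum of organ equivalent doses, E = Σ wT HT. The short Python sketch below illustrates the arithmetic using the ICRP 103 values from the table; the organ equivalent doses in the example are invented purely for illustration.

# Effective dose as the weighted sum of organ equivalent doses, E = sum(wT * HT),
# using the ICRP 103 tissue weighting factors of Table 13.6.
# The organ equivalent doses in the example are invented, for illustration only.

WT_ICRP_103 = {
    "bone marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
    "breast": 0.12, "remainder": 0.12, "gonads": 0.08,
    "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
    "bone surface": 0.01, "brain": 0.01, "salivary glands": 0.01, "skin": 0.01,
}  # these weighting factors sum to 1.0

def effective_dose_mSv(organ_equivalent_dose_mSv):
    """Weighted sum; organs not listed in the input contribute nothing."""
    return sum(WT_ICRP_103[t] * h for t, h in organ_equivalent_dose_mSv.items())

illustrative_doses = {"stomach": 1.2, "colon": 1.0, "liver": 0.8,
                      "gonads": 0.6, "bladder": 0.9}   # mSv, invented values
print(round(effective_dose_mSv(illustrative_doses), 2), "mSv")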
Insight
Worked Example on the Dose to the Uterus
All practising radiologists are likely to be asked at some stage in their career to advise on a patient
who has discovered she is pregnant after X-ray procedures involving the pelvis or lower abdomen
have been performed. The worked example given below illustrates how the foetal dose may be
estimated. It also serves as an important indicator of all the information (italicised) that must be on
record in relation to the examination if the assessment is to be made reasonably accurately. The
availability or otherwise of these data would be one measure of good radiographic practice.
• Basic Data on Good Radiographic Set-Up
  Slim to average build lady, estimated 7 weeks pregnant
  2 × AP lumbar spine views at 70 kVp, 80 mAs, 100 cm FRD (70 cm FSD estimated)
  1 × lat lumbar spine view (L1–4) at 80 kVp, 140 mAs, 100 cm FRD (50 cm FSD estimated)
  1 × lat lumbar spine view (L5–S1) at 90 kVp, 150 mAs, 100 cm FRD (50 cm FSD estimated)
These data are entered in columns 1–3 and 5 of Table 13.7. X-ray tube output data are entered in column 4:

  kV                                      70     80     90
  air KERMA rate (µGy mAs⁻¹) at 75 cm    28.7   39.1   50.5
• Calculation
1. Multiply the output factor (column 4) by the mAs and correct for the inverse square law
to find the air KERMA at the skin (column 6)
2. Use NRPB-R 186 (Jones and Wall 1985) to look up the backscatter factor (BSF) for
this particular examination. This corrects for the fact that the output measurements in
column 4 have been made in air whereas a significant amount of radiation will be scat-
tered back to the skin from the patient
3. The skin dose (column 8) is now equal to air KERMA at the skin × BSF × 1.06 (the ratio
of mass absorption coefficients for soft tissue and air)
4. Look up the uterus dose per unit skin dose for the appropriate examination in NRPB-R
186 (column 9)
5. Calculate the dose to the uterus (column 10)
It is useful to check at this stage that the answer is consistent with information in HPA (2009) for
the appropriate examination.
Notes
(1) It would be unusual for a patient to have four views for a lumbar spine examination but they
have been included here to illustrate different aspects of the calculation.
(2) A period of screening would be dealt with similarly. A record of the screening time is essen-
tial. The kV and mA may have to be estimated if they are automatically adjusted by the
equipment to maintain the correct image brightness. Ideally a DAP meter should be fitted.
(3) Software has been produced, for example, Tapiovaara and Siiskonen 2008, to expedite
these calculations but there is some merit in working through them from first principles at
least once!
TABLE 13.7
Foetal Dose Assessment from Recorded Exposure Factors

View     kVp   mAs   Output (µGy per mAs)   FSD (cm)   Air KERMA (mGy)   BSF*   Skin Dose (mGy)   Uterus Dose/Unit Skin Dose*   Uterus Dose (mGy)
AP × 2    70    80         28.7                70           2.64         1.36        3.8                  0.179                   0.684 × 2
lat       80   140         39.1                50          12.3          1.28       16.7                  0.017                   0.28
LSJ       90   150         50.5                50          17.0          1.27       22.9                  0.014                   0.32
Total                                                                                                                             1.97

* From NRPB-R186.
Source: Jones, D.G. and Wall, B.F., Organ Doses from Medical X-ray Examinations Calculated Using Monte Carlo Techniques, NRPB-R186, National Radiological Protection Board, Chilton, Didcot, Oxfordshire OX11 0RQ, UK, 1985.
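
The arithmetic of Table 13.7 can be checked with a few lines of code. The sketch below is illustrative only; the conversion factors are simply those quoted in the table (originally from NRPB-R186), and it reproduces the total uterus dose of about 2 mGy.

# Re-computation of the worked example in Table 13.7. The output factors, BSFs and
# uterus dose per unit skin dose are those quoted in the table (from NRPB-R186);
# this is a check of the arithmetic only, not a general foetal dose calculator.

def uterus_dose_mGy(output_uGy_per_mAs, mAs, fsd_cm, bsf, uterus_per_unit_skin,
                    cal_distance_cm=75.0):
    air_kerma_mGy = output_uGy_per_mAs * mAs * (cal_distance_cm / fsd_cm) ** 2 / 1000.0
    skin_dose_mGy = air_kerma_mGy * bsf * 1.06   # 1.06 = tissue/air mass absorption ratio
    return skin_dose_mGy * uterus_per_unit_skin

views = [  # (label, output uGy/mAs, mAs, FSD cm, BSF, uterus dose per unit skin dose, number of views)
    ("AP",  28.7,  80, 70, 1.36, 0.179, 2),
    ("lat", 39.1, 140, 50, 1.28, 0.017, 1),
    ("LSJ", 50.5, 150, 50, 1.27, 0.014, 1),
]

total = sum(n * uterus_dose_mGy(o, m, f, b, u) for _, o, m, f, b, u, n in views)
print(round(total, 2), "mGy")   # about 1.97 mGy, as in Table 13.7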
TABLE 13.8
Typical Effective Doses from Some Common Diagnostic Examinations for Adults in the UK

Examination                          Mean Effective Dose (mSv)
PA chest                                    0.02
PA skull                                    0.03
AP abdomen                                  0.7
AP pelvis                                   0.7
Lumbar spine (AP and lateral)               1.0
Barium enema                                7

Source: Martin CJ, Dendy PP and Corbett RH, Medical Imaging and Radiation Protection for Medical Students and Clinical Staff, British Institute of Radiology, 36 Portland Place, London, 2003.
Insight
Local DRLs
Legislation in the UK requires hospitals to set their own LDRLs in order to audit local practice and ensure
patient doses are being kept as low as reasonably practicable. This is not an entirely straightforward
process and guidance has been published (IPEM 2004) on how it may be done.
The hospital, or hospitals if there is more than one in an organisation, first selects examina-
tions on which it is appropriate for data to be collected. This may or may not be the same as the
examinations for which there are national DRLs, although it should certainly include some of
these examinations. The selection will depend on local activities, for example, a specialist chest
hospital or a private hospital dealing largely with orthopaedic cases would select different proce-
dures from a general hospital. Data are then collected from actual patient examinations; the minimum
recommended sample size is 10 patients per procedure per room, with patient weights lying between
50 kg and 90 kg so that the mean weight of the sample is 70 ± 5 kg. The mean of the doses for each
room is calculated, and then an overall mean is calculated. If this exceeds the national DRL an
investigation should be undertaken to determine the cause and, if possible, corrective action taken.
Some of the technical ways in which patient doses may be reduced are covered in this section, but
case mix and measurement methodology could be contributory factors as well as equipment and
technique. For further detail the reader is advised to read the guidance.
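
The review process described in this insight reduces to a simple comparison of room means with the national DRL. The Python sketch below is a minimal illustration of that logic; the function name and the sample data are invented, and the IPEM 2004 guidance should be consulted for the full procedure.

# Minimal illustration of a local DRL review: room mean doses are averaged and the
# overall mean compared with the national DRL. Function name and sample data are
# invented; IPEM (2004) gives the full procedure, including patient weight checks.

def review_ldrl(room_doses, national_drl):
    """room_doses maps room id -> list of per-patient DAP (or ESD) values."""
    room_means = {}
    for room, doses in room_doses.items():
        if len(doses) < 10:
            print(room, ": fewer than 10 patients, collect more data")
        room_means[room] = sum(doses) / len(doses)
    overall_mean = sum(room_means.values()) / len(room_means)
    if overall_mean > national_drl:
        print("Overall mean", round(overall_mean, 2),
              "exceeds the national DRL of", national_drl, "- investigate")
    return room_means, overall_mean

# Invented lateral lumbar spine DAP data (Gy cm2) for two rooms
data = {"Room A": [2.1, 2.4, 1.9, 2.6, 2.2, 2.0, 2.3, 2.5, 2.1, 2.2],
        "Room B": [3.0, 2.8, 3.2, 2.9, 3.1, 2.7, 3.3, 2.8, 3.0, 2.9]}
print(review_ldrl(data, national_drl=2.5))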
What can be done when DRLs are consistently exceeded? Since this book is concerned
with the physics of radiology, the emphasis will be on technical factors that affect the dose
but other considerations must not be overlooked and will be mentioned briefly.
• The choice of operating kilovoltage must be optimised and the appropriate amount
of filtration must be used. The use of more filtration and higher kV will lower the
ESD because of better patient penetration—but note that use of a grid may negate
the dose reduction. The effective dose will not be reduced by as great a proportion
as the ESD because the higher energy scattered radiation will travel more easily to
more distant organs.
• For both analogue and digital systems the most sensitive receptor that will give
the requisite image quality should be used. Note that the sensitivity of the detector
varies with the kVp. Therefore, to achieve maximum dose reduction the kV should
correspond to the maximum receptor sensitivity.
• It is important to measure the dose or dose rate entering the imaging system (i.e.
the exit dose from the patient) since clearly the sensitivity of the receptor will have
a major effect on the dose to the patient. Baseline and subsequent routine quality
assurance measurements are essential to detect any deterioration in the sensitivity
of the image receptor.
• In early dose surveys a significant cause of retakes with film, leading to unnecessary
patient exposure, was poor image processing. This has been largely eliminated in
digital systems, but care needs to be taken to ensure post-processing on images is
optimal and images are not deleted before being transferred to the PACS system.
• Low attenuating components, for example carbon fibre, should be used where
possible (e.g. for the couch, antiscatter grids, cassette fronts).
• Automatic exposure control devices should be used where practicable and must
be checked regularly for reliability.
• If grids are essential, the lowest grid factor commensurate with adequate image
quality should be used. The grid should be completely removed whenever
possible—consider using an air gap as an alternative.
• The focus receptor distance should be optimal—see Insight.
• Automatic beam collimation should be used whenever possible.
Periodic measurements of patient entrance skin dose should be made and the overall qual-
ity assurance programme for the department must be such that, when technical problems
have been identified and corrected, patient doses are re-audited to demonstrate that the
necessary dose reduction has been achieved.
Insight
Minimum Focus-Skin Distances
The reasons why minimum focus-skin distances are specified for certain examinations (e.g. 180 cm
for chest films, 60 cm for mammography and 20 cm for dental films) are, first to reduce geometric
unsharpness (see Section 6.9.1) and second to reduce patient dose. The reduction in patient dose
is not always appreciated and is an important application of the inverse square law. Consider
the situations shown in Figure 13.5 (in the right hand illustration the ‘patient’ has been placed
extremely close to the X-ray focal spot for the purpose of illustration).
The dose at the image receptor, D say, must be the same in both arrangements to achieve the
same receptor response. Attenuation through the patient will be the same in both cases and can
be ignored. However, the effect of the inverse square law is to increase the dose at the front
surface of the patient by a factor of (120/100)², that is to 1.44 D, on the left but by (40/20)²,
that is to 4 D, on the right.
There is unfortunately one adverse effect of the inverse square law. When longer focus-skin
distances are used, more X-rays (a greater mAs) are required on the left to produce the dose D
than on the right, because the receptor is further from the focal spot.
FIGURE 13.5
Illustrating how the inverse square law affects the entrance dose to the patient—for explanation see text.
[Figure not reproduced: two arrangements of a 20 cm thick patient and an image receptor, one with a 1 m focus-skin distance and one with a 20 cm focus-skin distance.]
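
The numbers in this insight follow directly from the inverse square law; the short Python sketch below (illustrative only, ignoring attenuation as in the text) reproduces them for the geometry of Figure 13.5.

# Entrance dose relative to the receptor dose D, ignoring attenuation, for the
# geometry of Figure 13.5 (values as in the insight above; illustrative only).

def entrance_dose_factor(focus_receptor_cm, patient_thickness_cm):
    fsd = focus_receptor_cm - patient_thickness_cm
    return (focus_receptor_cm / fsd) ** 2

print(round(entrance_dose_factor(120, 20), 2))   # 1.44 x D for the long FSD (left-hand case)
print(round(entrance_dose_factor(40, 20), 2))    # 4.0  x D for the short FSD (right-hand case)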
• Probably the most frequent, and certainly the most avoidable, unnecessary exposure to
patients is the result of inappropriate radiological examinations. The guidelines in
'Making the best use of clinical radiology services' (RCR 2007) contain excellent
advice on when X-rays are inappropriate or at least should be deferred for a few
weeks.
• In most centres arrangements could be improved considerably to ensure investiga-
tions are not needlessly repeated. The advent of the digitised department enables
searches to be made easily for previous images.
• Consideration must be given to using alternative imaging modalities such as
ultrasound and magnetic resonance imaging (MRI) thereby avoiding the use of
X-rays.
• Employment-related screening programmes should be used sparingly and clini-
cally justified.
• All staff should be given appropriate training, covering both an awareness of the
radiological examinations that carry the highest doses and of the procedures that
are available to minimise those doses, for example, the use of gonad shielding
where appropriate.
FIGURE 13.6
A CT dose profile (nominal slice width 2 mm) measured with a stack of TLD chips each 0.8 mm thick.
[Figure not reproduced: plot of dose (mGy) against TLD number, showing a peaked profile.]
near the periphery of the phantom than at the centre. To account for this variation CTDIw is
defined as one-third the CTDI100 at the centre plus two-thirds the CTDI100 at the periphery.
For a multislice scanner the CTDI is greater for narrow beam widths because of the effect
of the beam penumbra (see Section 8.8).
The CT dose index is a useful quantity when comparing two scanners. However, CT procedures
rarely involve a single rotation; they usually consist of a series of rotations which may
have gaps or overlaps, and these have to be allowed for. CTDIvol is therefore defined as

    CTDIvol = CTDIw / pitch
TABLE 13.9
Normalised Values of Effective Dose per Dose Length Product for Adults Over Various Regions

Region of Body          Effective Dose per DLP (mSv (mGy cm)⁻¹)
Head and neck                        0.0031
Head                                 0.0021
Neck                                 0.0059
Chest                                0.014
Abdomen and pelvis                   0.015
Trunk                                0.015

Source: Shrimpton PC, Hillier MC, Lewis MA and Dunn M, National survey of doses from CT in the UK: 2003. Br. J. Radiol. 79, 968–980, 2006.
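
Putting these quantities together, a CT effective dose can be broadly estimated as E ≈ k × DLP, where DLP is the product of CTDIvol and the irradiated scan length and k is the region-specific coefficient of Table 13.9. The Python sketch below illustrates the chain of calculation; the CTDIw, pitch and scan length values are invented for illustration and the result is only a rough estimate for a reference adult.

# CT dose estimate: CTDIvol = CTDIw / pitch, DLP = CTDIvol x scan length,
# E ~ k x DLP with k from Table 13.9. All input values below are invented for
# illustration; the result is only a broad estimate for a reference adult.

K_mSv_PER_mGy_cm = {"head": 0.0021, "neck": 0.0059, "chest": 0.014,
                    "abdomen and pelvis": 0.015, "trunk": 0.015}

def ct_effective_dose_mSv(ctdi_w_mGy, pitch, scan_length_cm, region):
    ctdi_vol = ctdi_w_mGy / pitch                 # mGy
    dlp = ctdi_vol * scan_length_cm               # mGy cm
    return dlp * K_mSv_PER_mGy_cm[region]         # mSv

# Invented abdomen and pelvis scan: CTDIw 12 mGy, pitch 1.0, 45 cm scan length
print(round(ct_effective_dose_mSv(12.0, 1.0, 45.0, "abdomen and pelvis"), 1), "mSv")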
TABLE 13.10
National Reference Levels for CT Examinations on Adult Patients

                                                             DLP (mGy cm)
Examination                                               SSCT*      MSCT**
Routine head (acute stroke)                                760        930
Abdomen (liver metastases)                                 460        470
Abdomen and pelvis (abscess)                               510        560
Chest, abdomen and pelvis (lymphoma staging or follow-up)  760        940
Chest (lung cancer: known, suspected or metastases)        430        580
Chest; high resolution (diffuse lung disease)               80        170

* SSCT = single slice CT.
** MSCT = multislice CT.
TABLE 13.11
Typical Effective Doses from CT Examinations for Adult Patients

Examination                                                Typical Effective Dose (mSv)
Routine head (acute stroke)                                          1.5
Abdomen (liver metastases)                                           5.3
Abdomen and pelvis (abscess)                                         7.1
Chest, abdomen and pelvis (lymphoma staging or follow-up)            9.9
Chest (lung cancer: known, suspected or metastases)                  5.8
Chest; high resolution (diffuse lung disease)                        1.2

Source: Shrimpton PC, Hillier MC, Lewis MA and Dunn M, National survey of doses from CT in the UK: 2003. Br. J. Radiol. 79, 968–980, 2006.
As CT scanners developed, effective doses reduced. In part this was due to technical
improvements, including high efficiency detectors, new beam shaping absorbers, rapid
X-ray rise and fall times and the use of automatic exposure control systems with X-ray tube
mA varied under computer control in response to changes in patient cross-section to give
a pre-selected image quality. Also a greater awareness developed amongst radiologists
and radiographers that CT is a high dose technique and by selecting suitable scan pro-
tocols operators can greatly assist in keeping doses as low as reasonably practicable. The
introduction of multislice scanners has initially tended to increase dose (Yates et al. 2004)
but with greater user awareness doses can potentially be reduced (ICRP 2007b).
Some protocols giving doses approaching or exceeding the reference levels generally
involve a large number of slices. Thus CT examinations should be made with the mini-
mum settings of the tube current and scan time to give adequate image quality and with
the minimum irradiated volume, that is region of interest, required to give the necessary
information. Particular care is required with high resolution modes because these are also
high dose modes since, to maintain image quality for thin slices, a high dose is needed to
maintain signal to noise ratio.
Note that scan projection radiography ('scout view') is a relatively low dose technique
and is commonly used to determine the level at which CT slices are required. Modern
scanners can traverse the length of the body in less than 10 s with a sub-millimetre
aperture and generate good quality images.
In contrast, dynamic CT investigations (also called CT fluoroscopy) are usually high dose
studies. Dynamic CT, which allows measurements of temporal changes in contrast den-
sity, for example, in blood vessels and soft tissues, after administration of contrast mate-
rial, has much in common with dynamic studies in nuclear medicine (see Section 10.4). For
example, a sequence of temporal images will be collected and analysed to map changes
in CT number (analogous to changes in activity) as a function of time. Displayed images
may be functional images, for example a map of time to maximum contrast but selection
of suitable regions of interest to act as controls can be difficult.
The requirements for dynamic studies are a short scan time, short scan intervals, a high
scan frequency and a large number of total scans. Furthermore resolution must be high
and noise levels must be low so that small changes in density can be registered. These
two conditions are only met if high mAs values are selected and, combined with the need
for multiple images of the same slice, the net result may be a high effective dose. The
effective dose can be reduced considerably by careful selection of the correct scan plane
initially.
TABLE 13.12
Doses per Exposure Received in the NHSBSP Mammography Screening Programme in 2001 and 2002

Projection        Mean MGD (mGy)    Average Breast Thickness (mm)
Oblique            2.23 ± 0.01             56.8 ± 0.2
Craniocaudal       1.96 ± 0.01             54.1 ± 0.2

Source: Young KC, Burch A and Oduko JM, Radiation doses received in the UK Breast Screening programme in 2001 and 2002. Br. J. Radiol. 78, 207–218, 2005.
the MGD to the standard breast (40 mm block) using a 28 kVp beam and a Mo target and
filter was 1.42 ± 0.04 mGy per exposure for the 285 systems included in their survey.
To more adequately represent compressed breast thickness for the screened population,
IPEM Report 89 (IPEM 2005) changed the standard measurement conditions to a block of
45 mm perspex and recommended the MGD should be less than 2.5 mGy.
Mean glandular doses to samples of women attending for mammography are regularly
assessed and analysed by the NHSBSP (see Table 13.12). The actual doses received depend on
the characteristics of the equipment, for example target and filter, film and screen, but also
on individual factors such as breast size and composition and the degree of compression.
Recently digital imaging has been introduced into mammography. Results suggest that
a reduction in MGD can be achieved using digital imaging, although if the reduction is
too great there will be a significant effect on image quality, especially in the detection of
microcalcifications (Samei et al. 2007). For further comment on doses in mammography
see Section 9.2.6.
(a) The total activity in the body will decrease as a result of a combination of physical
and biological half-lives (see Section 1.8).
(b) The activity will redistribute throughout the different organs and tissues during
its residence in the body.
TABLE 13.13
Effective Doses for Some Common Nuclear Medicine Examinations

Radiopharmaceutical         Study                                       Max Usual Activity (MBq)   Effective Dose (mSv)
51-Cr EDTA                  GFR                                                   3                      0.006
99m-Tc MAG3                 Renography                                          100                      0.7
99m-Tc microspheres         Lung perfusion                                      100                      1
99m-Tc microspheres         Lung perfusion                                      200 (SPECT*)             2
99m-Tc DTPA                 Renography                                          300                      2
99m-Tc phosphates           Bone                                                600                      3
99m-Tc phosphates           Bone                                                800 (SPECT*)             5
99m-Tc Sestamibi            Myocardial imaging (MIBI)                           300                      3
99m-Tc Sestamibi            Myocardial imaging (MIBI)                           400 (SPECT*)             4
99m-Tc iminodiacetates      Functional biliary system imaging (HIDA)            150                      2
111-In white blood cells    Abscess                                              20                      9
201-Tl ion                  Myocardium                                           80                     18.4
18-FDG                      Tumour imaging                                      400                      8

* SPECT = single photon emission computed tomography; for an explanation of other acronyms see Table 10.3.
(c) Activity in a particular organ will irradiate not only that organ, but also, if it is a
gamma emitter, a number of other organs.
(d) Doses to other organs will depend on activity in the source organ, the inverse
square law, attenuation in intervening tissues and the mass absorption coefficient
of the target organ.
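
For a given radiopharmaceutical and standard biokinetics, the effective dose scales with the administered activity, so a per-MBq dose coefficient is all that is needed for a first estimate. In the Python sketch below the coefficients are simply back-calculated from the entries in Table 13.13 for illustration; in practice the published per-MBq coefficients (e.g. from ARSAC guidance or ICRP 106) should be used.

# First estimate of effective dose from a radiopharmaceutical administration:
# E ~ administered activity (MBq) x dose coefficient (mSv/MBq). The coefficients
# below are back-calculated from Table 13.13 for illustration only; published
# per-MBq coefficients should be used in practice.

DOSE_COEFF_mSv_PER_MBq = {
    "99m-Tc phosphates (bone)": 3.0 / 600,      # ~0.005
    "99m-Tc MAG3 (renography)": 0.7 / 100,      # ~0.007
    "18-FDG (tumour imaging)":  8.0 / 400,      # ~0.02
}

def nm_effective_dose_mSv(agent, activity_MBq):
    return DOSE_COEFF_mSv_PER_MBq[agent] * activity_MBq

print(round(nm_effective_dose_mSv("99m-Tc phosphates (bone)", 600), 1), "mSv")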
Both the Royal Society and the Health and Safety Executive in the UK have made studies
of the public perception of risk. The Royal Society approach was to gauge public opinion
on the risk associated with certain activities and then to work out the actual risk involved.
Risks as low as 1 in 10⁶ (1 in 1,000,000) a year are commonly regarded as trivial, but an
imposed risk at a level of 1 in 10⁴ (1 in 10,000) is likely to be challenged. Typical lifetime
risks from various causes not involving X-rays are shown in Table 13.14.
Taking the risk figure from the previous chapter of 5% per Sv for a reference person, an
effective dose of 0.02 mSv is equivalent to a risk of 1 in 10⁶ and a dose of 2 mSv is equivalent to
a risk of approximately 1 in 10⁴. On the basis of rounded effective dose figures, risks from
some common diagnostic procedures are shown in Table 13.15.
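
The conversion from effective dose to a nominal risk quoted above is simple enough to script. The Python sketch below applies the 5% per Sv figure for a reference person and expresses the result in the '1 in N' form used in Table 13.15; as discussed later in this section, such figures should not be applied uncritically to individual patients.

# Nominal lifetime risk from an effective dose, using the 5% per Sv figure quoted
# above for a reference person; not to be applied uncritically to individuals.

def nominal_risk(effective_dose_mSv, risk_per_Sv=0.05):
    risk = risk_per_Sv * effective_dose_mSv / 1000.0      # mSv -> Sv
    return "about 1 in {:,}".format(round(1.0 / risk)) if risk > 0 else "negligible"

print(nominal_risk(0.02))   # PA chest, roughly 1 in 1,000,000
print(nominal_risk(2.0))    # CT head,  roughly 1 in 10,000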
TABLE 13.14
Typical Lifetime Risk of Death from Various Causes for an Individual Aged 40 Years Assuming They Live to 75 Years

Activity                                           Risk Estimate
Killed by lightning                                1 in 300,000
Anaesthesia (risk from single administration)      1 in 50,000
Work in service industry                           1 in 6,000
Accident on road                                   1 in 500
Pneumonia and influenza                            1 in 30
Smoking 10 cigarettes a day                        1 in 5

Source: Martin CJ, Dendy PP and Corbett RH, Medical Imaging and Radiation Protection for Medical Students and Clinical Staff, British Institute of Radiology, 36 Portland Place, London, 2003.
TABLE 13.15
Lifetime Risks of Cancer from Some Common Diagnostic Procedures

Procedure                       Effective Dose (mSv)   Risk of Fatal Cancer   Risk Classification
Limb and joint radiographs           <0.01              <1 in 2,000,000        Negligible
PA chest                              0.02               1 in 1,000,000        Minimal
PA skull                              0.03               1 in 700,000          Minimal
Hip                                   0.3                1 in 60,000           Very low
Tc-99m lung ventilation               0.4                1 in 50,000           Very low
Abdomen or pelvis                     0.7                1 in 30,000           Very low
Tc-99m (MAA) lung perfusion           1.0                1 in 20,000           Very low
CT head                               2                  1 in 10,000           Low
Tc-99m bone scan                      3                  1 in 7,000            Low
CT pelvis                            10                  1 in 2,000            Low
F-18 tumour (PET)                    10                  1 in 2,000            Low
Tl-201 myocardial perfusion          18                  1 in 1,000            Moderate

Source: Martin CJ, Dendy PP and Corbett RH, Medical Imaging and Radiation Protection for Medical Students and Clinical Staff, British Institute of Radiology, 36 Portland Place, London, 2003.
However, these risk figures should be used with great care and some would dispute that
they should be used at all when discussing risks of medical exposures since they rely on
calculating the effective dose (Brenner 2008). Why should this be?
Effective dose was originally introduced as a means of describing the detrimental effect
to a reference person of low doses of radiation. The detriment comprises cancer induction
(both fatal and non-fatal) and the hereditary effect, averaged over all ages and both sexes.
These factors have been incorporated into the tissue weighting factors (wT) set out by the
ICRP, which are based largely on data from the Japanese population irradiated as a result
of the dropping of the atomic bombs in 1945. It is well recognised that radiation risks are
age dependent, with lifetime risks for children being significantly greater than for those
exposed as adults. They are also gender dependent, although not to such an extent as age,
with females overall being more radiosensitive than males.
Therefore, calculating effective dose for say an abdominal exposure for an individual
patient involves significant errors. There are uncertainties involved in making the initial
measurement, for example, DAP, on which dose is based. There are uncertainties in know-
ing exactly where a particular organ in the individual is situated so the computer cal-
culation of organ dose may have significant errors and then there are the uncertainties
introduced because the organ and tissue weighting factors are not based on a particular
individual, say a 70-year old female weighing 50 kg, but on a generic, ‘reference person’.
So, although effective dose can be a useful concept in estimating risk, its shortcomings
should be recognised and if it is considered necessary to assign a numerical value of risk
to a particular individual patient, the individual organ doses combined with the relevant
age-related and sex-related risk coefficients should be used. For this reason when advising
patients about risk it is wise to use general terms as indicated in Table 13.15.
Note that where the radiation dose is delivered largely to a single organ, for example,
the breast in mammography, the use of effective dose is inappropriate and equivalent dose
should be used together with the organ-specific risk coefficient.
If ‘a few’ were taken as 3, the lifetime risk of fatal cancer would be 15% Sv⁻¹, or 15 × 10⁻⁴
for 10 mSv.
The guidance has also given some typical foetal doses from various examinations and
put them into risk bands reflecting that risk factors cannot be calculated precisely. Some
examples are shown in Table 13.16.
TABLE 13.16
Typical Foetal Doses and Childhood Cancer Risks from Some Common Procedures

Examination                                Typical Foetal Dose Range (mGy)   Risk of Childhood Cancer
Chest X-ray                                        0.001–0.01                <1 in 1,000,000
81mKr lung ventilation scan                        0.001–0.01                <1 in 1,000,000
99mTc lung ventilation scan (technegas)            0.01–0.1                  1 in 1,000,000 to 1 in 100,000
Abdomen X-ray                                      0.1–1                     1 in 100,000 to 1 in 10,000
Chest CT                                           0.1–1                     1 in 100,000 to 1 in 10,000
Abdomen CT                                         1–10                      1 in 10,000 to 1 in 1,000
99mTc bone scan                                    1–10                      1 in 10,000 to 1 in 1,000
Pelvis CT                                          10–50                     1 in 1,000 to 1 in 200

Source: HPA, Protection of Pregnant Patients during Diagnostic Medical Exposures to Ionising Radiation, Advice from the Health Protection Agency, The Royal College of Radiologists and the College of Radiographers, Documents of the Health Protection Agency RCE-9, 2009.
Insight
Balancing Risk for Foetus and Mother
Sometimes difficult decisions have to be made balancing the radiation dose and hence risk to the
foetus against that of the mother. One example is in the diagnosis of pulmonary embolus which
is a condition not uncommon in pregnancy. One option is a nuclear medicine V/Q scan,
which gives a maternal effective dose of 1.7 mSv, although this can be reduced by following
a protocol that uses a reduced activity for the perfusion scan and proceeds to the ventilation
study only if defects are identified on the perfusion scan. The breast dose from a full V/Q scan
is about 0.7 mSv and the foetal dose from a full V/Q scan is estimated to be of the order of 0.7 mSv.
The alternative method of diagnosis is to use CT scanning with a suitably designed protocol.
A survey of doses used in East Anglia gave an average maternal effective dose of 5 mSv. However,
the equivalent dose to the breast from such an examination may be between 10 and 20 mSv. The
foetal dose from such a scan is estimated to be up to 20 µSv in the first trimester, 30 µSv in the
second trimester and 130 µSv in the third trimester (Winer-Muram et al. 2002).
So the choice is between a CT procedure that gives a much lower foetal dose than the nuclear
medicine study but a higher effective dose and breast dose to the mother, and a nuclear medicine
study where the foetal dose is higher but the maternal dose is lower than for CT. There may of
course be other factors involved such as, in an emergency, which can be done sooner?
13.10 Conclusion
The potentially harmful effects of ionising radiation must be recognised and understood.
Furthermore, it is important to appreciate that increasingly sophisticated experiments
have failed to provide evidence of a safe level of radiation for the two most important
long-term effects, namely carcinogenesis and mutagenesis. Therefore it is important that
radiologists should have a good appreciation of the risks associated with the examinations
they carry out. Indeed, the public is increasingly expecting to be kept informed of the risks
involved.
Some situations carry a higher risk and the importance of minimising exposures to chil-
dren and during pregnancy cannot be over-emphasised.
However, it is necessary to keep a sense of perspective and calculations show that when
diagnostic X-rays are used correctly and doses are carefully controlled, risks are acceptable
within the broader context of both the clinical value of diagnostic information gained and
the risks associated with daily living.
References
Administration of Radioactive Substances Advisory Committee. Notes for Guidance on the Clinical
Administration of Radiopharmaceuticals and Use of Sealed Radioactive Sources. ARSAC,
Health Protection Agency, Chilton, UK, 2006 http://www.arsac.org.uk/ [accessed 18th January
2011].
Brenner, D.J., Effective dose: a flawed concept that could and should be replaced. British Journal of
Radiology 81: 521–523, 2008.
Evans, D. S., Mackenzie, A., Lawinski, C. P. and Smith, D., Threshold Contrast detail detectability
curves for fluoroscopy and digital acquisition using modern image intensifier systems. British
Journal of Radiology 77: 751–758, 2004.
Hall, E.J. and Brenner D.J., Cancer risks from diagnostic radiology. British Journal of Radiology 81:
362–378, 2008.
Hart, D., Hillier, M.C. and Wall, B.F., Doses to Patients from Radiographic and Fluoroscopic X-ray
Imaging Procedures in the UK- 2005 Review. HPA-RPD-029, Health Protection Agency,
Radiation Protection Division, Chilton, UK, 2007.
Hart, D., Jones, D.G. and Wall, B.F., Estimation of Effective Dose in diagnostic radiology from Entrance
Surface Dose and Dose-area product measurements, NRPB R262, National Radiological
Protection Board, Chilton, Didcot, Oxfordshire OX11 0RQ, UK, 1994.
HPA, Protection of Pregnant Patients during Diagnostic Medical Exposures to Ionising Radiation,
Advice from the Health Protection Agency, the Royal College of Radiologists and the College
of Radiographers, Documents of the Health Protection agency RCE-9, 2009.
ICRP, 1990 Recommendations of the International Commission on Radiological Protection, ICRP
Publication 60, Annals of the ICRP 21: 1–3, 1991.
ICRP, Avoidance of radiation injuries from Medical Interventional Procedures, International
Commission on Radiological Protection, ICRP Publication 85, Annals of the ICRP 30: 2, 2000.
ICRP, Biological effects after Prenatal irradiation (Embryo and Fetus) International Commission on
Radiological Protection (2003) ICRP Publication 90, Annals of the ICRP 33: 1–2, 2003.
ICRP, 2007 Recommendations of the International Commission on Radiological Protection, ICRP
Publication 103. Annals of the ICRP 37: 2–4, 2007a.
ICRP, Managing Patient Dose in Multi-Detector Computed Tomography (MDCT), International
Commission on Radiological Protection, ICRP Publication 102, Annals of the ICRP 37: 1, 2007b.
ICRP, Radiation dose to Patients from Radiopharmaceuticals, International Commission on
Radiological Protection, ICRP Publication 106, Annals of the ICRP 38: 1–2, 2008.
ImPACT, CT Patient Dosimetry Calculator (version 0.99v). ImPACT, St George’s Hospital, London.
www.impactscan.org/ctdosimetry.htm, 2004 [accessed 18th January 2011].
IPEM, Guidance on the establishment and Use of Diagnostic Reference levels for Medical X-ray
Examinations, Report 88, Institute of Physics and Engineering in Medicine, Fairmount House,
York, 2004.
IPEM, The Commissioning and Routine Testing of Mammographic X-Ray Systems, report 89,
Institute of Physics and Engineering in Medicine, Fairmount House, York, 2005.
IPSM, Dosimetry Working Party of the Institute of Physical Sciences in Medicine, National Protocol
for Patient Dose Measurements in Diagnostic Radiology National Radiological Protection
Board, Chilton, Didcot, Oxfordshire OX11 0RQ, UK, 1992.
IRMER, Ionising Radiation (Medical Exposure) Regulations 2000 (SI 2000 No 1059) London, HMSO,
2000.
Jones, D.G. and Wall B.F, Organ Doses from Medical X-ray Examinations Calculated Using Monte
Carlo techniques, NRPB-R186, National Radiological Protection Board, Chilton, Didcot,
Oxfordshire OX11 0RQ, UK, 1985.
Martin, C.J., Dendy, P.P. and Corbett, R.H., Medical Imaging and Radiation Protection for Medical
Students and Clinical Staff, British Institute of Radiology, 36 Portland Place, London, 2003.
Morrell, R.E. and Rogers A.T., Kodak EDR2 film for patient skin dose assessment in cardiac catheter-
ization procedures. British Journal of Radiology 79: 603–607, 2006.
NRPB, Patient dose reduction in diagnostic radiology. Report by the Royal College of Radiologists
and the National Radiological Protection Board. Documents of the NRPB Vol 1 No 3. 1990.
RCR Making the best use of clinical radiology services. Sixth edition. The Royal College of Radiologists,
London, 2007.
Samei, E., Saunders, R.S., Baker, J.A. and Delong, D.M., Digital mammography: effects of reduced
radiation dose on diagnostic performance, Radiology 243: 396–404, 2007.
Shrimpton, P.C., Hillier, M.C., Lewis, M.A. and Dunn, M. National survey of doses from CT in the
UK: 2003. British Journal of Radiology 79: 968–980, 2006.
Tapiovaara, M. and Siiskonen, T., PCXMC: A Monte Carlo program for calculating patient doses in
medical X-ray examinations (2nd edition), STUK – Radiation and Nuclear Safety Authority,
Helsinki, Finland, 2008. http://www.stuk.fi/sateilyn_kaytto/ohjelmat/PCXMC/en_GB/summary/
[accessed 18th January 2011].
Wall, B.F., Kendall, G.M., Edwards, A.A. et al., What are the risks from medical X-rays and other low
dose radiation? British Journal of Radiology 79: 285–294, 2006.
Winer-Muram, H.T., Boone, J.M., Brown, H.L., et al., Pulmonary embolism in pregnant patients: foe-
tal radiation dose with helical CT. Radiology 224(2), 487–492, 2002.
Yates, S.J., Pike, L.C. and Goldstone, K.E., Effect of multislice scanners on patient dose from routine
CT examinations in East Anglia. British Journal of Radiology 77: 472–478, 2004.
Young, K.C., Burch, A. and Oduko, J.M, Radiation doses received in the UK Breast Screening pro-
gramme in 2001 and 2002, British Journal of Radiology 78: 207–218, 2005.
Exercises
1. Explain carefully why AP and PA chest examinations will result in different effec-
tive doses (refer to Figure 13.1).
2. Describe the main ways in which ESD can be measured or derived and discuss
their relative merits.
3. What do you understand by a diagnostic reference level (DRL)? How are DRLs
established and what is their purpose?
4. List the organs and tissues of the body identified in ICRP publication 103 (2007) as
the most sensitive with regard to causing long-term detriment by ionising radia-
tion. Give an approximate risk of fatal radiation-induced cancer for a whole body
dose of 5 mSv.
5. Explain how the effective dose is calculated for a radiological examination. What
are the disadvantages of using effective dose to estimate the risk to an individual
patient?
6. What are the problems with estimating doses to patients in CT and how are they
overcome?
7. Discuss the factors that influence the dose to the breast in mammography (refer
also to Section 9.2.6).
8. What factors determine the effective dose to a patient when a radiopharmaceuti-
cal is administered? Suggest one way in which the effective dose can be reduced
(without loss of image quality).
9. Review the potential sources of radiation injury to the developing foetus.
10. ‘Risks from radiological examinations must be placed in the context of other envi-
ronmental, sociological and occupational risks’. Discuss.
14
Practical Radiation Protection and Legislation
SUMMARY
• The principles of radiation protection—justification, optimisation and limi-
tation—are introduced.
• Application of these principles to patients, staff and the public is discussed.
• Relevant legislation, especially as it applies to medical exposures, is
reviewed.
• The UK Ionising Radiations Regulations (IRR) (1999) are discussed in some
detail.
• X-ray room design is discussed briefly.
• Special protection problems associated with unsealed radioactive materials
(e.g. in nuclear medicine) are considered.
• Principles of personal dosimetry are explained and typical levels of staff
doses are reviewed.
• Some variations in approaches to legislation in Europe and the US are
noted.
CONTENTS
14.1 Introduction......................................................................................................................... 457
14.2 Role of Radiation Protection in Diagnostic Radiology.................................................. 457
14.2.1 Principles of Protection.......................................................................................... 457
14.2.1.1 Justification............................................................................................... 457
14.2.1.2 Optimisation............................................................................................. 458
14.2.1.3 Application of Dose Limits (Limitation)............................................... 458
14.2.2 Patient Protection.................................................................................................... 459
14.2.3 Staff Protection....................................................................................................... 461
14.2.3.1 Reduction of Dose Rate........................................................................... 461
14.2.4 Public Protection..................................................................................................... 462
14.3 European Legislation......................................................................................................... 462
14.3.1 Introduction—the ICRP......................................................................................... 462
14.3.2 European Legislation.............................................................................................463
14.3.3 Basic Safety Standards Directive 96/29/Euratom (1996)...................................463
14.3.4 Medical Exposures Directive 97/43/Euratom (1997)......................................... 465
14.3.5 Outside Workers Directive 90/641/Euratom (1990)........................................... 466
14.3.6 New Euratom Basic Safety Standards................................................................. 466
14.1 Introduction
Although the emphasis in this chapter is on radiation protection and safety it should be
remembered that radiation safety is only a part of total safety and should not be over-
emphasised to the exclusion of other aspects of safety.
Radiation exposures can be split into three broad categories: occupational, medical and
public exposures. In radiology departments all three categories can be encountered. The cur-
rent terminology used to describe any activity which increases overall exposure is a ‘practice’
and an activity which decreases an existing exposure is an ‘intervention’. However, the latest
set of International Commission on Radiological Protection (ICRP) recommendations (ICRP
2007) have moved away from practices and interventions and use a ‘situation’ based approach
which characterises exposure as planned, emergency and existing exposure situations. ICRP
does, however, continue to use the term practice to describe an activity which increases the
risk of exposure or actual exposure to radiation. In future it recommends that the term inter-
vention should only be used to describe protective actions that reduce exposure and uses the
terms ‘emergency’ and ‘existing exposure’ to describe situations where such protective actions
are required to reduce exposures. As in their previous recommendations (ICRP 1991) the aim
is to keep the exposure of the human population as small as possible but these recommenda-
tions also start to address the consideration of doses to the natural environment. National
legislation will start to incorporate these latest recommendations over the next 10 years.
The terms radiation or radiological safety and radiation or radiological protection are
often used interchangeably. Strictly speaking radiation protection is concerned with the
limitation of radiation dose whereas radiation safety is concerned with reducing the
potential for accidents. Since the distinction is largely academic, both are described in this
chapter without any attempt to distinguish between them.
14.2.1.1 Justification
Since there is no safe threshold dose for stochastic effects (see Section 12.5.1) ionising radia-
tion should not be used in any ‘practice’ unless the net benefit from the exposure for an
individual or society is greater than the radiation detriment; that is, the radiation should not
do more harm than good. Although justification of a practice should be generic and broad
in nature and should not necessarily be carried out each time a practice is undertaken, some
aspects of it can require consideration each time the practice is undertaken. A patient must
receive more benefit than detriment from an exposure. For the radiologist and radiographer, the only benefit from any dose they receive in the course of such an investigation is their employment; the detriment to them must therefore be balanced against the benefits the patient and society gain from the exposure. Likewise, the small adventitious doses members of the public receive through being in the proximity of radiographic exposures or nuclear medicine patients are justified by the benefit to society. The much larger doses which can be received by family or friends who are caring for or comforting patients emitting ionising radiation must likewise be justified by the benefit this support brings to the patient and to society.
Justification does not apply only to the detriment from the radiation dose; it also extends beyond radiation protection considerations.
14.2.1.2 Optimisation
This is applied to situations which have been identified as being justified. The magnitude
of individual doses, the probability that exposures will occur and the number of persons
that will be exposed should all be kept as low as reasonably achievable, taking into account economic and social factors; this is the ALARA principle (sometimes called the ALARP principle, as low as reasonably practicable). The process of optimisation is aided by set-
ting ‘dose or risk constraints’. Constraint doses are planning doses which should not be
exceeded by a practice. They are not legal limits but are set by local or national bodies
as levels of exposure which good working practice should achieve. They do not set lev-
els which are considered to be unsafe if exceeded. If measurements show they are being
exceeded an assessment should be made to ascertain why and remedial action, if deemed
necessary, should be taken.
It is also recommended that under some circumstances a risk constraint should be set
in the case of potential exposures to limit the risk to any one individual particularly in the
event of an accident.
Justification is applied to the practice as a whole, whereas optimisation can be applied to individual components of the practice.
Insight
Constraint Doses
Constraint doses are referred to several times in this chapter. In all situations they are planning doses which should not need to be exceeded when a planned operation or action involving ionising radiation is carried out. Although they may be based on national recommendations, they are essentially local limits and, if exceeded, need only be subject to a local investigation, which need not be recorded.
The legal limits are set to restrict stochastic effects (Section 12.5.1) to acceptable levels and
to be well below thresholds for deterministic effects.
The new ICRP recommendations (ICRP 2007) continue with the same dose limits as
their previous recommendations but draw attention to the possibility that the eye may be
more radiosensitive than previously was thought. They retain the option of recommend-
ing lower equivalent dose limits and although at the time of writing a formal statement
has not been issued, it is suggested that a limit of between 20 and 50 mSv/y will be adopted
in the near future.
i. Constraints or ‘investigation levels’. Levels of dose have been set in terms of skin dose or dose area product (see Section 13.2.2) which should not be exceeded for
common radiographic and screening examinations. These constraint doses are all
some way above those required by adopting best practices and the circumstances
under which they are not achieved should be quite rare. If circumstances are iden-
tified where they are being exceeded the techniques and equipment used should
be investigated and remedial action taken as necessary.
ii. Use of high kVp techniques. As pointed out in Section 13.6.1 the use of higher kVp
should reduce patient dose.
iii. Collimation. Reducing the volume of the patient irradiated not only improves con-
trast because of the reduction in scatter but can significantly reduce the effective
dose. Ensuring that even a small border exists round the edge of a radiograph can significantly reduce the area exposed (coning the field to 1 cm inside an 18 cm × 24 cm cassette reduces the area irradiated by 18.5%; a worked check of this figure follows this list). Whilst modern units which
automatically adjust the field size to the cassette size ensure no unrecorded area of
the patient is exposed, they do not necessarily produce the optimum field size for
the anatomy being radiographed. The optimum size could sometimes be smaller.
iv. Optimisation of imaging system. All imaging systems can produce images using
a greater radiation dose than is necessary by incorrect adjustment or use of the
imaging parameters. In many of the digital systems this is often not apparent
because of the inherent ability of the system to compensate. It is essential that
care is taken when setting up these units to ensure that excessive doses cannot be
used.
v. Reduction of screening dose. This can be achieved by keeping screening time to a
minimum. For very old units this is essentially at the discretion of the radiologist
but modern units allow a variety of methods to be used. The use of last frame hold can significantly reduce dose, particularly during procedures involving the insertion of catheters and in orthopaedics or cardiology. Some units now allow the collimators to
be repositioned using a last frame hold which further reduces doses. Pulsing the
output of the tube during screening also reduces the dose. Obviously, the more
pulses per second that are required to visualise the process involved the larger
the patient dose per unit time. To see movements of the heart requires many more
pulses per second than to see movement in the large bowel.
The method of automatic control of kVp and mA to maintain the dose rate of an image
intensifier input should also be optimised (see Section 5.14.3). Initially during most inves-
tigations the kVp should be raised to the maximum value that still provides acceptable contrast, with any further increase in output, if necessary, obtained by raising the mA. This will ensure
that the dose to the patient is minimised whilst diagnostic information is maintained.
vi. Filtration. Total filtration equivalent to at least 2.5 mm of aluminium is mandatory for X-ray units used for normal diagnostic work. As described in Section 3.9 this removes the
lower photon energies from the X-ray beam. Special K-edge filters can be used in
paediatric radiography (Section 9.7) and in mammography (Section 9.2) to remove
unwanted high and low energy photons. Some sophisticated screening units now
automatically insert between 0.1 mm and 0.2 mm of additional copper filtration to
further harden the beam. These units effectively calculate the attenuation of the
patient and choose the thickness, if any, of copper to insert. Depending on kVp, the
first 0.1 mm of copper reduces the patient skin dose by 30%–40% and the next 0.1
mm by approximately 10%. (The effective dose reduction will vary but will be no
more than approximately half of these figures.)
vii. Output wave form. The more closely the output of the generator approaches a steady
(DC) voltage supply the higher the quality of the beam (Section 2.3.4) and the fewer
low energy photons are present. Even quite simple X-ray units are now using medium
or high frequency generators, which produce essentially a DC supply. It should be
noted that high frequency generators do not require a three phase electrical supply.
viii. Use of low attenuation materials. After the X-ray beam leaves the patient it may travel
through several structures before reaching the imaging medium. Each of these can
absorb some of the beam. It can be estimated that by replacing the covers on cas-
settes, table tops and the spacing material in grids by carbon fibre based materials
a dose reduction of up to nearly 60% could be achieved, depending on tube kVp.
ix. Choice of grid. As described in Section 6.8.1 the higher the grid ratio the greater the
dose required to produce the required film density.
x. CT slit width. The dose from CT scans is high, so the number of slices, or the beam
width when using a multi-detector array, should be kept to a minimum and
repeating the scan with contrast medium should only be undertaken if definitely
required. Note that using several narrow beams is generally less dose efficient
than using a wide beam (see Figure 8.20).
xi. Gonad shields. These must be used whenever possible.
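The figure quoted in item iii can be checked with a minimal sketch in Python; the 18 cm × 24 cm cassette and the 1 cm border are the values given above, and the function name is simply illustrative:

def irradiated_area_reduction(width_cm, height_cm, border_cm):
    # Fractional reduction in irradiated area when the field is coned in by
    # border_cm on every edge of a width_cm x height_cm cassette.
    full_area = width_cm * height_cm
    coned_area = (width_cm - 2 * border_cm) * (height_cm - 2 * border_cm)
    return (full_area - coned_area) / full_area

print(irradiated_area_reduction(18.0, 24.0, 1.0))  # about 0.185, i.e. 18.5%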
Protective clothing is not designed to shield against the main beam. It will only protect
against scattered radiation or the beam after it has passed through a patient. Protective cloth-
ing must be properly stored so that it does not become cracked and must be periodically
checked to ensure that it is in good condition. Suffice to say it must also be worn when pro-
vided. If neither protective clothing nor a permanent screen is available, the person should
leave the room. ‘Hiding’ behind someone wearing a lead/rubber apron is not acceptable.
An expert committee structure uses the knowledge of independent advisers, from any country in the world, to give guidance on all aspects of radiation protection. This
guidance is issued in the form of recommendations and can range from broad con-
cepts to detailed review of scientific research. Because of this structure ICRP recom-
mendations are not mandatory. However, they are very influential internationally and
few countries adopt regulations dealing with radiation protection which differ to any
extent from them. On a worldwide basis the legislation controlling the use of ionising
radiation is generally very similar. The major differences are found in the methods of
enforcement.
The legislation in all countries currently uses the recommendations laid down in ICRP
60 (1991) which reviewed the epidemiological data available at the time, introduced new
maximum permissible doses and developed the conceptual framework of justification,
optimisation and limitation. This review process has now been repeated and the new
publication, ICRP 103 (2007), takes account of the latest biological and physical informa-
tion available and sets the framework for legislation for the next decade. The new recom-
mendations are not fundamentally different from those they replace; they consolidate and develop rather than change. The changes to any national legislation to reflect them are therefore not likely to be large. A useful review of the new recommendations has been
presented by Wrixon (2008).
An ICRP report (2010), currently in draft form, which will be of direct relevance to read-
ers of this text and which will influence future legislation is entitled Radiation Protection
Education and Training for Healthcare Staff and Students. (It should be noted that at the
time of writing a reference is not available for this.)
(a) All practices above the minimum levels below which the Directive does not apply
must be reported and some must be specifically authorised.
(b) Occupational and public exposures are distinguished from medical exposures
and those of carers and comforters (other than those carrying out the tasks as
part of their work). Directive 96/29/Euratom states that if the dose is willingly and
knowingly (i.e. the risks have been explained) received then the ‘member of the
public’ dose limits do not apply. Constraint doses for carers and comforters should
be set, however.
(c) The dose limits for exposed workers and students or apprentices above the age of
18 are shown in Table 14.1.
TABLE 14.1
Maximum Permissible Occupational Dose Limits as Specified in
96/29/Euratom
Effective dose in any consecutive 5-year period 100 mSv
Effective dose in any year (the option for a yearly effective dose of 20 mSv is maintained) 50 mSv
Equivalent dose to the lens of the eye in any year 150 mSv
Equivalent dose to any 1 cm2 of the skin in any year 500 mSv
Equivalent dose to extremities in any year 500 mSv
Foetal dose once a pregnancy has been declared 1 mSv
(d) For students and apprentices below the age of 18 years who are obliged to use
ionising radiation, the effective dose limit is 6 mSv/y and the equivalent dose
limits are 50 mSv/y for the lens of the eye and 150 mSv/y for any 1 cm2 of the
skin.
(e) The dose limits for members of the public are an effective dose of 1 mSv/y (in spe-
cial circumstances this can be exceeded but the dose must not exceed an average
of 1 mSv/y over any 5-year period), an equivalent dose to the lens of the eye of 15
mSv/y and an equivalent dose to any 1cm2 of skin of 50 mSv/y.
(f) Work places where there is a possibility that the exposure to ionising radiation
could exceed the limits set for members of the public must be classified as con-
trolled or supervised. Controlled areas must be suitably demarcated and delineated
with controlled access. Appropriate work instructions (local rules) must be issued
for operations in these areas. Radiological monitoring is required.
For a supervised area there is a requirement to monitor the work force, but the
other requirements of a controlled area are discretionary.
(g) Exposed workers are divided into two groups (a worked illustration of this categorisation follows this list).
Category A—those who are liable to receive an effective dose greater than 6 mSv/y or an equivalent dose greater than 3/10 of the limit for the lens of the eye, skin or extremities.
Category B—exposed workers who are not Category A workers.
(h) Category A workers must be monitored. Sufficient personal or area monitoring
must be carried out to demonstrate that category B workers do not need to be cat-
egory A workers.
(i) All exposed workers, apprentices and students who, in the course of their work
are dealing with sources of ionising radiation, must be given adequate training.
Women must be informed of the need to declare if pregnant or breast feeding so
as to avoid any hazard to the foetus or neonate.
(j) A suitably qualified expert must be appointed to advise undertakings on radia-
tion risks and assess radiation protection arrangements (this person is frequently
called the radiation protection adviser, RPA).
(k) All work places using ionising radiation must have suitable monitoring equip-
ment available.
(l) Records of monitoring must be kept and made available to individuals. (For
Category A workers this is for at least 30 years following termination of work
involving exposure or until the age of 70 years).
(m) Category A workers must receive suitable medical surveillance. This must include
a medical before starting work and a periodic review of health. Medical records
must also be kept until the person is 75 years of age or for 30 years after work
involving exposure to ionising radiation ceased.
(n) A system of inspection to enforce the requirements of the Directive must be
established.
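The categorisation in item (g) can be expressed as a simple rule using the limits in Table 14.1. The following minimal sketch in Python is purely illustrative: in practice categorisation is prospective, based on the doses a worker is liable to receive, and the function name and example dose records are assumptions of this example.

def worker_category(effective_mSv, eye_mSv, skin_mSv, extremity_mSv):
    # Category A if the effective dose could exceed 6 mSv/y, or any equivalent
    # dose could exceed 3/10 of the relevant annual limit (eye 150 mSv,
    # skin 500 mSv, extremities 500 mSv); otherwise Category B.
    exceeds_three_tenths = (eye_mSv > 0.3 * 150.0 or
                            skin_mSv > 0.3 * 500.0 or
                            extremity_mSv > 0.3 * 500.0)
    return 'A' if effective_mSv > 6.0 or exceeds_three_tenths else 'B'

print(worker_category(1.5, 10.0, 30.0, 40.0))  # 'B': all values below the thresholds
print(worker_category(8.0, 10.0, 30.0, 40.0))  # 'A': effective dose above 6 mSv/y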
The Directive requires all doses to be optimised. To this end it recommends that diagnos-
tic reference levels for radiodiagnostic examinations should be established and used for
members of the public. Whilst establishing that carers and comforters are not subject to
dose limits (see Section 14.3.3) it requires that national constraint doses be established for
these persons. It also requires that nuclear medicine patients or their guardians are issued
with written instructions with regard to the steps to be taken to minimise the exposure of
anyone coming in contact with that patient.
The responsibilities of both the person prescribing and clinically responsible for the
exposure and the person carrying it out are identified. Both groups must have suitable
training. This applies to qualification under the regulations and to on-going education after
qualification. Some emphasis is placed on having written protocols for standard radiologi-
cal procedures and recommendations for referral criteria. Medical physics experts must be
available to give advice on all aspects of exposure from ionising radiation in medicine.
The equipment used for medical exposures must be included in an inventory kept by the
competent authority, must be subject to acceptance testing before it is used for medical expo-
sures, must be kept under radiation protection surveillance and must be subject to regular qual-
ity control. The Directive prohibits the use of direct fluoroscopy without an image intensifier
and recommends that all fluoroscopic dose rates be controlled. It also recommends that, where
practicable, all new radiodiagnostic equipment shall have in-built dose measuring devices.
The Directive draws particular attention to the requirements to keep doses to a mini-
mum for paediatric exposures, health surveillance programmes, potential high dose tech-
niques such as CT scanning and interventional radiography, pregnant women and, with
regard to nuclear medicine investigations, breast feeding females.
14.4 UK Legislation
UK legislation complies with the Euratom Directives in various ways.
(1) Before an undertaking can hold radioactive sources, the premises must be registered under section 7 of the Act (the Radioactive Substances Act 1993) with the Environmental Protection Agency acting in the country concerned. This registration certificate is very specific. It specifies
the radionuclides, the activity of the nuclides and, for sealed sources, the num-
ber of sources that can be held. Occasionally it may allow a general beta/gamma
emitting radionuclide holding for a small amount of activity to accommodate new
projects. Records must be kept to show that the registration is not exceeded and
inspectors visit the premises periodically.
Occasionally, a sealed source is used in more than one location. An example
would be an americium-241 source used to check the lead equivalence of the walls
of X-ray rooms. In these situations a mobile registration under section 10 of the Act
must be obtained. One normal condition of a section 10 registration is that a clear
record must be kept of the location of the source at any time.
(2) Radioactive waste can only be accumulated if the user is authorised under section
15 of the Act and only disposed of if the user is authorised under section 13 of the
Act. Both authorisations are generally placed in the same certificate. The authori-
sation is very specific. It will state where the waste can be accumulated, the route
of disposal, the radionuclides that can be disposed of and the activity that can be
disposed. Again records must be kept to show compliance.
Currently there are various exemptions to the Act. The two exemptions probably most
relevant to the Health Service are as follows:
(a) The Radioactive Substances (Testing Instruments) Exemption Order (1985): This
order allows small sealed sources used in measuring instruments or for testing
instruments to be exempt from registration. Examples are the external standard
source in a liquid scintillation counter and the 3 MBq Co-57 sources used as mark-
ers in nuclear medicine.
(b) The Radioactive Substances (Hospitals) Exemption Order (1990): This order allows
the occasional patient from a peripheral hospital to be investigated in a central
main nuclear medicine department without having to transfer patients to the
main hospital and without having to obtain a registration and authorisation for
the peripheral hospital. It lays down conditions about the storage of any sources,
procedures for emergencies and the maximum amounts that can be held on the
premises. The maximum amount of most relevance is the 1 GBq limit for Tc-99m.
There are similar conditions with regard to any waste produced; the limit of most relevance is 1 GBq of Tc-99m discharged to the sewage system of the premises in a month. If the amounts exceed these limits then a full registra-
tion or authorisation must be obtained.
The Act was amended in 2005 by the High Activity Sealed Sources and Orphan Sources
Regulations which put additional requirements on the conditions of registration for high
activity sources (and other sources of a similar level of potential hazard). These essentially
require a greater level of security which has to be agreed by the police before a registration
is granted and the provision of a financial guarantee that funds will be available to dispose
of the source when necessary.
At the time of writing this Act is under review and will be totally replaced in
October 2011.
holder. Certificates are normally only issued to senior staff of consultant status and are
time limited.
Notes for Guidance (2006) on these regulations and application forms are available from
the ARSAC. These guidance notes are particularly useful for routine tests as they give
descriptions of classes of radioactive materials that are considered acceptable for given pur-
poses. As long as the maximum recommended activity is not exceeded, no justification need be given for the request to use the radioactive material for the purpose described in
the Notes for Guidance. Application for an ARSAC certificate must include a description of
the applicant’s experience and training, a description of the equipment available to carry
out the tests, the agreement and description of the scientist(s) associated with the work and
the agreement of the RPA. If a research proposal is being considered or a routine test not
listed in the Guidance Notes then, in addition to the above, the ARSAC also require a full
justification and description of the use, including effective dose calculations.
These certificates are specific to the appointment held at the time of application and the
hospital in which the investigation or treatment will be done. If either of these changes per-
manently the ARSAC should be informed, as the certificate will require amendment. If, in a
specific case, for clinical reasons, a change in hospital premises must be made to administer
the radiopharmaceutical this is permitted with the agreement of the local RPA.
A certificate from the ARSAC to carry out a research project that involves the adminis-
tration of a radioactive substance does not remove the requirement for Ethical Committee
approval. Most Ethical Committees would require an ARSAC certificate to accompany the
application or assurance that an application has been made to the ARSAC.
The employer (normally the NHS Trust). This is essentially the body responsible for the
patient as opposed to the normal definition under employment law. (The Trust retains
the responsibility under the Regulations even when contract staff are involved.)
The medical physics expert. This is a suitably trained person who can advise on such
matters as patient dosimetry, unintended exposure evaluations and the development of
techniques. This person will often be the RPA, but the post can be held by someone who is not an RPA.
The operator. This is anyone who carries out a practical aspect of an exposure. Any expo-
sure will probably have several operators involved because it includes not just the per-
son who controls the actual exposure but other persons such as physicists, technicians
and radio-pharmacists who may influence the patient dose. Their roles and responsi-
bilities must be clearly identified in the procedures.
The referrer. The employer must identify in the procedures the persons who can act as
the referrer requesting a particular medical exposure. Some (such as doctors) may be
The requirement above regarding the clinical information required is emphasised in the
regulations. Currently much emphasis is placed by the IRMER inspectors on the produc-
tion of written operating procedures. These establish the framework under which all the
health-care professionals involved in the imaging process can practice.
(a) Referral criteria and exposure protocols must be established. A common level
of detail is required across all departments no matter how large or small the
department.
(b) The employer must establish diagnostic dose reference levels using local mea-
surements where possible but taking into account published national figures (see
Section 13.3.1).
(c) Reference level doses apply equally to research investigations.
(d) Training and training records are an important part of the regulations and the
employer has the responsibility for ensuring that all their employees and any out-
side contractors working with their patients are adequately trained.
(e) Any incidents where a dose ‘greater than intended’ is given must be investigated
and reported. The definition of what is greater than intended is given in the Health
and Safety Executive (HSE) Guidance Note PM 77 (2006). It should be used for all
incidents such as mistaken identity, wrong view and lost images. There is a clear
direction that when a dose is ‘greater than intended’ the patient must be told.
(f) All research programmes involving the use of ionising radiations must be submit-
ted to a Local Research Ethics Committee. A dose constraint must be set as part of
the programme to limit the dose that can be received by any patient participating.
This dose must not be exceeded. The risks of participating in such a programme
must be explained to all persons asked to participate in a manner that allows them
to understand them. This can make research involving children or mentally inca-
pable adults very difficult to carry out.
(g) All medical diagnostic exposures must have a clinical evaluation carried out on
them. If it is known that this will not happen then the exposure is not justified.
Even if the employer is also the practitioner/operator (as is the case for, say, a dentist), a record must still be created of the outcome. This record should also include sufficient
exposure details, for example, kVp, mA or DAP reading to allow an estimate of the
effective dose to the patient to be made.
(h) An inventory of all the X-ray units in a department must be held along with the
staff training records referred to above for all staff.
14.4.5.1 Regulation 1
This defines the persons covered by the regulations. The regulations apply to workers who
receive a radiation dose in the course of their work and to members of the public who are
exposed due to a work activity. Limited sections apply to patients, comforters and carers.
14.4.5.2 Regulation 6
The HSE must be notified at least 28 days before work with ionising radiations starts. This applies to any new X-ray unit which is situated in a building complex that, to the knowledge of the HSE, has not previously contained an X-ray unit.
Insight
Regulation 8 with Schedule 4 Facility Planning
These identify dose limits and the need for employers to keep doses as low as reasonably practi-
cable by engineering controls and suitable systems of work. The ACOP identifies the importance
of planning new facilities properly in conjunction with the RPA. The ACOP also forbids the use
of direct vision fluoroscopy under any circumstances and requires the use of warning devices for
X-ray units. In Regulation 8 the requirement not to exceed the dose limits specified in Schedule
4 of the Act is laid down. This regulation places emphasis on the use of proper engineering con-
trols to restrict radiation doses. It identifies the need for warning devices such as lights which are,
whenever possible, automatic. These engineered safety features should, as far as possible, be
supported by systems of work to be followed by persons working or present in the area. These
systems of work are incorporated into the Local Rules (see Regulation 17). Under this regulation
personal protective equipment (PPE) must be provided. The HSE expect PPE to be used unless
wearing it renders the wearer susceptible to another, greater risk.
The expansion of this regulation in the ACOP covers the exposure of patients in a hospital to radiation from another patient, as may happen from a nuclear medicine scan. The
ACOP recommends that a maximum of 5 mSv can be received under these circumstances from a
course of treatment. This is five times the normal member of the public limit.
Once a female employee declares that she is pregnant the foetus must not receive a dose that is
greater than 1 mSv during the rest of the pregnancy. For workers in a typical X-ray department this is
broadly equivalent to a dose of 2 mSv to the surface of the abdomen. An employer must also take into
account the potential risks to the newly born child of a female working with unsealed radionuclides.
If any employee receives a radiation dose in excess of 15 mSv in a year the employer must carry
out an investigation.
exceeded. The ACOP lays down the exemption from dose limits of comforters and carers
(as defined in Regulation 1).
(a) A clear description of the area to which they apply, identifying which areas are
controlled and which are supervised.
(b) The name of the RPS of the area (see below), the name of the RPA for the area and
the method of contacting them.
(c) The responsibilities of the RPS.
(d) The working instructions for the area, including a clear system of work if non-
classified persons are entering controlled areas.
(e) The procedures to deal with any emergencies which may be anticipated.
In addition, it may be useful to add to the rules the local methods of ensuring compliance
with other requirements of the regulations. For possible inclusion would be a description
of the management and supervision structure for the area, the method and the frequency
of testing safety controls and warning devices, the method of measuring dose rate and,
particularly, contamination, the testing of instruments and the arrangements for personal
dosimetry.
The RPS has local responsibility on behalf of the employer for ensuring compliance
with the working procedures identified in the Local Rules. The RPS must be appointed
in writing and must have sufficient experience, knowledge and status in the management
structure to be able to ensure the implementation of the Local Rules and to action the
emergency procedures should an emergency arise.
but also ancillary equipment such as image receptors, calibrators and image processing
units.
It is incumbent on persons buying such equipment to take into account potential patient
dose when deciding which equipment to purchase. It is a requirement that all X-ray equip-
ment has installed on it a device for recording the dose received by a patient. For many, but
not all, X-ray sets this is by having a DAP meter installed.
A quality assurance program must also be in place to ensure that the equipment is work-
ing as originally intended. Action levels must be established so that any unacceptable
equipment can be adjusted or replaced.
If a dose that is greater than intended is given (see Guidance Note PM 77) due to equip-
ment failure then an investigation must be carried out and a report submitted to the HSE.
(i) The shielding can be achieved by a suitable thickness of concrete blocks, barium
plaster on bricks or plaster lath or lead sheet. Applied lead sheet is not common
because of cost and construction difficulties. It is, however, being increasingly
used pre-bonded to plaster or plywood board.
(ii) Doors must have an equivalent shielding to the walls. All cracks in the door frame
must be covered by lead sheet. Sliding doors are attractive for saving space but can
be difficult and tiring to move. This can result in them not being closed properly
in a busy room.
(iii) The radiographers’ cubicle in the room should have sufficient shielding to ensure
that the dose rate does not exceed 1 μSv/h. It should be positioned and have suf-
ficient shielded windows to ensure that the radiographer can see the patient at
all times. It should also be large enough to accommodate every person who will
require to stand behind a shield unless supplementary mobile lead shields are
also present in the room.
(iv) A warning light should be positioned at the entrance to each room. Ideally this
would be a two stage device with a ‘Controlled Area’ warning illuminating when
the electricity is switched on at the unit and a ‘Do Not Enter’ warning illuminat-
ing immediately before exposure on ‘prepping’ the X-ray tube. Rooms relying on
a single warning light must also have a notice indicating the action to be taken
when the light is illuminated.
(v) If there are two X-ray units in the room, warning lights must be placed on the head
of each unit to indicate which one has been selected.
(vi) Hanging facilities for lead/rubber aprons must be provided and storage for lead/
rubber gloves and gonad shields should be provided.
(vii) Emergency ‘cut off’ and aid buttons must be positioned in suitable locations.
(viii) On completion of building or modification of a room, an RPA should carry out
a critical examination to ensure that all safety devices are working properly and
that the shielding is as designed. The leakage dose rate from the unit in the room
must not exceed 1 mGy/h at 1 m from the X-ray head.
The methodology of room design is covered in several publications but that of Sutton and
Williams (2000) is recommended for further reading.
iodine isotopes, can actually diffuse through unbroken skin. Once in the body the radio-
nuclide is accumulated in the organ or tissue which would normally accumulate the sta-
ble element or, for some radionuclides, a close chemical analogue. Once incorporated, it
is virtually impossible to remove the radionuclide medically. It will only leave by natural
biological processes. As the radionuclide decays, all of the energy from any β particles
emitted will be absorbed in the body together with a proportion of the gamma pho-
ton energy. Small ingested amounts of a radionuclide can result in significant radiation
doses.
Insight
ALI and Derived Limits of Airborne Contamination
For the purposes of this example, suppose that a worker takes in an activity I (Bq) of a radionuclide in a year, that H50,T is the committed equivalent dose to tissue T per unit intake and that wT is the corresponding tissue weighting factor. Then the total committed effective dose is I · wT H50,T summed over all tissues. Hence the ALI is that value of I for which this committed effective dose equals the annual effective dose limit.
To find the value of I, H50,T must be calculated. To do this, the factors to consider are as follows:
1. The total energy radiated per unit time by the radionuclide, perhaps separated into that from
charged particles and that from gamma rays
2. Absorbed doses from gamma rays, originating both in tissue T and elsewhere, and taking
into consideration geometrical factors, the inverse square law and the mass absorption
coefficient
3. The distribution of radionuclide within the body
4. The time scale of exposure. If the radionuclide localises quickly, this may be expressed in
terms of the effective half-life, but note that after leaving an organ the activity may localise
elsewhere—for example, the kidney excretes via the bladder
When the ALI is known, the permissible derived air concentration (DAC) can be calculated if
simplifying assumptions are made. Assuming a working year of 2000 h (50 weeks at 40 h), a rate
of breathing of 0.02 m³ per min and that inhalation is the only route of intake
DAC = ALI / (2000 × 60 × 0.02) = ALI / (2.4 × 10³) Bq m⁻³
For Tc-99m the ALI is 1 × 10⁹ Bq, hence the maximum permissible DAC, rounded to one significant figure, is 4 × 10⁵ Bq m⁻³. For I-131 the ALI is 2 × 10⁶ Bq and the DAC 7 × 10² Bq m⁻³
(ICRP 1994).
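The relationship above can be checked with a minimal sketch in Python, assuming the 2000 h working year and 0.02 m³ per minute breathing rate quoted in the text (the function name is simply illustrative):

def dac_from_ali(ali_bq, hours_per_year=2000.0, breathing_m3_per_min=0.02):
    # Derived air concentration in Bq per cubic metre, assuming inhalation is
    # the only route of intake.
    air_breathed_m3 = hours_per_year * 60.0 * breathing_m3_per_min  # 2.4e3 m3
    return ali_bq / air_breathed_m3

print(dac_from_ali(1e9))  # Tc-99m: about 4e5 Bq per m3
print(dac_from_ali(2e6))  # I-131: about 8e2 Bq per m3 with the rounded ALI used here;
                          # the 7e2 Bq per m3 quoted above follows from the unrounded ALI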
Note that the concept should be used with care since it only applies to the ICRP reference man
working under conditions of light activity and makes a number of other assumptions about the
metabolic breathing pattern.
The derived working limit for surface contamination that will ensure the maximum permissible
DAC is not exceeded varies from one radionuclide to another. For radionuclides used in nuclear
medicine, it may be as low as 370 Bq spread over an area of 10⁻² m². This causes two problems:
1. The activity ‘seen’ by a small detector may be no more than 20–30 Bq (see the sketch after this list). A Geiger-Müller counter is insufficiently sensitive to detect this level of activity and a scintillation crystal monitor must be used.
2. It may be impossible to decontaminate to maximum permissible levels by washing and
cleaning after even quite a small spill (say 1 MBq). If the radionuclide has a long half-life,
contaminated equipment will then have to be removed from service. Fortunately for Tc-99m
(6 h half-life) the contamination may decay to an acceptable level overnight.
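The 20–30 Bq figure in point 1 can be reproduced with a minimal sketch in Python; the 370 Bq spread over 10⁻² m² (100 cm²) is the derived working limit quoted above, while the detector window areas are assumed, illustrative values:

def activity_seen_bq(detector_window_cm2, surface_limit_bq=370.0,
                     contaminated_area_cm2=100.0):
    # Activity within the field of view of a monitor placed over a surface
    # uniformly contaminated at the derived working limit.
    return surface_limit_bq * detector_window_cm2 / contaminated_area_cm2

print(activity_seen_bq(6.0))  # about 22 Bq for an assumed 6 cm2 window
print(activity_seen_bq(8.0))  # about 30 Bq for an assumed 8 cm2 window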
2. Never handle bottles containing radionuclides directly but always use tongs. (This
also helps avoid contaminating gloves because of a contaminated stock bottle.)
3. If possible stay at least 0.5 m away from a patient who has received radioactivity,
and check that nurses do not remain unduly close to the patient for unnecessarily
long periods. The dose rate 25 cm away from a patient who has received 500 MBq
of Tc-99m for, say, a bone scan is about 33 μSv/h. At a metre the dose rate is only
9 μSv/h. (Note that the inverse square law is not obeyed for an extended source; a short illustration follows this list.)
4. Use shielded syringes to give injections.
5. If a shield cannot be used, extra care should be taken when handling the syringe
not to hold the end containing the radionuclide.
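The remark in point 3 about the inverse square law can be illustrated with a minimal sketch in Python; the 9 μSv/h value at 1 m is taken from the text, and treating the patient as a point source is the assumption being tested:

def point_source_dose_rate(dose_rate_at_1m, distance_m):
    # Dose rate predicted by the inverse square law for an ideal point source.
    return dose_rate_at_1m / distance_m ** 2

measured_at_1m = 9.0  # microSv/h for a patient given 500 MBq of Tc-99m (from the text)
print(point_source_dose_rate(measured_at_1m, 0.25))  # 144 microSv/h if a point source
# The value quoted in the text at 25 cm is only about 33 microSv/h, because the
# activity is distributed through the patient and partly self-absorbed.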
The internal hazard can be avoided by simple good house-keeping practices. Protective
clothing, especially gloves, should always be worn. Syringes do backfire and contaminated
skin is difficult to clean. All manipulations of the radionuclide from bottle to syringe must
be performed over a tray so that any drops can be contained. Syringes must always be
vented into a swab, never squirted generally over the room. All contaminated and poten-
tially contaminated materials must be disposed of in a container that has been clearly
labelled as suitable for the purpose. Hands must always be washed before leaving a radio-
active area.
their performance that are relevant to personal dosimetry are referred to here. This will
be done by considering the requirements of an ideal personal dosimeter and the extent to
which each method satisfies this ideal after briefly reviewing their characteristics. The two
techniques, described in 14.7.2, should also be considered relative to these requirements.
For film, because of the shape of the characteristic curve, calibration is necessary at a large
number of doses so that the exact shape of the curve can be established. Furthermore, cali-
bration is necessary each time a batch of film is developed because of potential variations
in film blackening with development conditions.
FIGURE 14.1
(a) Details of a film badge holder (approximately 5 cm across, carrying a stamped identification number), showing the open window and the thin plastic, thick plastic, aluminium, tin–lead and cadmium–lead filters; the plastic filters are used for beta dose assessment, the tin–lead filter extends the range of uniform sensitivity and the cadmium–lead filter provides a neutron response. (b) Curves showing the variation of relative sensitivity (film blackening per unit dose) with effective energy, both without and with filters. Note that in the diagnostic range, correction for variation in sensitivity has to be made.
14.7.1.10 Compactness
As shown in Figure 14.1 a film badge holder is quite small. TLDs can be extremely small
(Figure 14.2) and are especially useful for monitoring exposure to the hands and fingers.
This also allows a dosimeter to be placed onto a patient to record their skin dose without
significantly affecting the radiographic image.
FIGURE 14.2
Examples of TLD monitors. On the left is a badge holder for whole body monitoring. On the upper right is the
LiF ceramic disc that would be useful for measuring extremity doses. A very small LiF chip is shown on the
lower right.
TABLE 14.2
Relative Merits of Film Badges and Thermoluminescent Dosimeters as Alternative Methods of Personal Monitoring
                                                 Film Badge      TLD
1  Range of usefulness                           0.2 mGy–6 Gy    0.1 mGy–10⁴ Gy
2  Linearity of response                         No              Yes
3  Calibration against radiation standards?      Yes             Yes
4  Response independent of radiation energy?     No              Yes (except at low kV)
5  Sensitive to temperature and humidity         Yes             No
6  Uniformity of response within batches         Yes             Yes (with care)
7  Maximum time of use                           2 months        12 months
8  Compactness                                   Small           Very small
9  Permanent visual record?                      Yes             No
10 Indication of type of radiation?              Yes             No
11 Indication of pattern of radiation exposure?  Sometimes       No
The optically stimulated luminescence device uses a cold luminescent technique and is based on aluminium oxide. When it is read out using selected frequencies of laser light, not all of the information on the dosimeter is removed, as it is in a TLD. This allows several read-out operations to be carried out on the same dosimeter, all with the same degree of accuracy. The dosimeter also does
not require annealing before use. It is used with a variety of filters (open window, copper
and plastic) to allow for the measurement of beta and photon doses. It has a very high dose
range (from 10 µGy to 100 Gy). The pattern of the filters allows the exposure conditions of
the dosimeter to be assessed under some circumstances. The tolerance of these dosimeters
to harsh environmental conditions is also very high.
TABLE 14.3
Occupational Exposure in Several Diagnostic Radiology Departments in the UK during 2001
Number of Workers in Dose Range (mSv)
Occupational group 0–1 1–5 5–10 10–15 15–20 >20
Radiographers 4581 30 1 0 0 0
Radiologists (General) 456 11 0 0 0 0
Radiologists (Interventional) 63 4 0 0 0 0
Cardiologists 544 29 0 0 0 0
Other clinicians 1178 19 0 0 0 0
Department nurses 2120 21 0 0 0 0
Science/technical staff 590 2 0 0 0 0
Others 804 5 0 0 0 0
Source: Watson S J, Jones A L, Oatway W B and Hughes J S. Ionising Radiation Exposure of the UK Population:
2005 Review. HPA-RPD-001 Health Protection Agency Didcot, 2005.
the number exceeding 1 mSv/y is about 2.5% but none exceed 5 mSv/y. Interventional radi-
ologists are a little higher with just over 6% exceeding 1 mSv but again none exceeding 5
mSv. Compared to previous years the doses to cardiologists have now fallen and only 5%
exceed 1 mSv/y and none exceed 5 mSv/y. Nurses now have the highest percentage exceed-
ing 1 mSv/y at 10%. This may reflect the difficulty in reducing their doses because of the role
they carry out whereas the doses to the other staff involved have been reduced over recent
years.
As can be seen from Table 14.3, it should be relatively easy for radiographers who fol-
low good working practices to continue working when pregnant and to stay well within
a dose of 2 mSv to the abdomen once the pregnancy has been declared. Radiologists
and nurses may have to be a little more careful and avoid interventional investiga-
tions especially in a screening room which uses an overcouch X-ray tube because of
the risk of being exposed to scattered radiation (see Figure 3.7). For all females working
in radiology departments it is prudent to tell the RPS or Superintendent as soon as a
pregnancy is confirmed so that any changes to work activity can be made as soon as
possible. However, recall from Section 13.9 that the risk to the foetus of a dose of 1 mSv
is extremely small compared to the natural risk of a physical malformation or mental
defect occurring.
Appendix
A Perspective on Current Regulatory Arrangements Concerning Medical Uses of Ionising Radiation in the United States*
* Kindly contributed by Ian S Hamilton PhD and Douglas A Johnson MS, Foxfire Scientific Inc, www.foxfirescientific.com
Philosophically, radiation practices in medicine use a ‘radiation management’ approach rather than the ‘radiation safety’ approach required for US industry; it is understood that dose optimisation rather than minimisation is necessary for proper clinical efficacy. This
attitude towards dealing with the unique health and safety issues concerning ionising
radiation has long been dominant for therapeutic applications. As implementation and
use of more radiation-based diagnostic modalities has flourished, radiation management
has become a more important controller for this aspect of the use of radiation in med-
icine. Much of the current US regulatory framework for medical radiation reflects this
approach.
The practice of medicine, including areas that utilise radiation, is fairly homogeneous
across the United States. This consistency is primarily due to
Concerning the first two items above, written regulations tend to lag behind profes-
sional best practices and voluntary controls, such as those implemented by the larger com-
munity of medical radiation users. Thus, improved management of radiation practice is
often first achieved through dissemination by professional organisations and national and
international guidance bodies, which constantly strive to add relevant recommendations
for the use of ever-newer radiation devices and procedures.
The American constitution recognises the 50 states as having final regulatory authority
over all but a few activities. As such, individual state governments improve, update and
revise rules and guidelines based largely on the experience and value demonstrated by
the aforementioned professional organisations. States also share membership with one
another in organisations to form working groups that generate ‘state draft rules’, that is,
templates used for individual legislatures to adopt and impose.
One area where requirements are commonly specified in detail is new quality assur-
ance standards for devices. Another is increased training for radiation users. Increasingly,
states are requiring additional certifications, particularly for non-physician medical radia-
tion practitioners. Once certification is in place, regulatory requirements are indirectly
increased each time certifying organisations increase the rigor and requirements to earn
the certification.
Alongside the evolution of regulations derived from voluntary professional standards, regulations continually arise from more purely political sources. These are primarily from
legislative actions that result when members of the public or public interest groups or pro-
fessional lobbyists act through one or a group of politicians to enact new laws. They differ
from the evolved professional good practice regulations in that they occur relatively more
quickly, and are more likely to have “unintended consequences” than the former. The
genesis of a community-generated regulation is typically that one, or a few, high profile
incidents occur that have some mix of public outcry, attention from the media and politi-
cians willing to get involved. The science behind this type of regulation may not stand the
test of time. Conversely, regulations that occur as a result of professional lobbying tend to
protect or advance an industry or organisation but are also more likely to be better crafted,
and have greater longevity and applicability.
No matter the source of new rules, adoption remains slow because of the complex and
bureaucratic nature of the regulatory process. Hearings, draft regulations and public
comment periods all dampen the system and effectively prevent many, if not most, of the
worst rules from attaining ultimate passage. This same inertia also slows the removal of
unwanted or obsolete rules from regulatory codes. Frequently, very old rules or allow-
ances remain valid for operators who may be in violation of current rules, but because
the licensee or registrant was/is compliant with a previous code they are conditionally
absolved from compliance. This process is known as ‘grandfathering’.
Constant advances in radiation-related technologies pose special challenges. As new
modalities become available, existing regulations may be inadequate to address new con-
cerns or may impose restrictions that are no longer relevant. This is a difficult problem
because some improvements are made that simplify and automate functions on equip-
ment, while regulations may still require actions based on operations or technologies that
no longer exist. In general, operators and regulators in the US adapt to these changes;
however, until new regulations are in place, inspectors may be placed in a compromising
position wherein they will have to “pass” a facility (or device) that does not conform to
current (non-updated) rules simply because the regulations do not apply to the newer type
of equipment and no regulatory exception exists.
One solution to broad and inclusive regulations is to create separate regulations based
on the specific use of the medical radiation source. Independent sections of licenses apply
only to the use of certain devices like bone densitometers, nuclear medicine imaging
devices, computed tomography (CT) and mammography.
Mammography is an interesting exception to the way most regulations evolve in the
US, in that the federal government legislated new rules for states to adhere to sev-
eral years ago and these have had significant impact across the country. New regu-
lations demanding much higher standards were added faster than normal and have
resulted in
The final unifier noted in the second paragraph, hospital accreditation, is driven by govern-
ment requirements that certain inspections are a prerequisite to be eligible for reimburse-
ment with public funds. Similarly, insurance companies may also require compliance with
‘volunteer’ accreditation programmes as a de facto standard of quality before permitting
third party reimbursements.
In general, the pace of regulatory change in the US appears to be accelerating. While this
has largely mirrored technological changes, any future socio-political climate will have
an impact in ways that are yet to be known. What can be predicted is that, as in Europe,
US regulations concerned with the safe use of ionising radiation in medical diagnosis and
therapy will continue to be guided by recommendations from organisations such as the
International Commission on Radiological Protection (ICRP), while focusing on patient
dose-justification and optimisation.
References
Administration of Radioactive Substances Advisory Committee (2006) Notes for Guidance on
Radioactive Substances to Persons for Purposes of Diagnosis, Treatment or Research (Department of
Health UK).
Carriage of Dangerous Goods and Use of Transportable Pressure Equipment Regulations (2009)
Statutory Instrument 2009 No. 1348. HMSO, London.
Directive 90/641 Euratom (1990) Outside Workers. Official Journal of the European Communities
No. L349.
Directive 96/29 Euratom (1996) Basic Safety Standards Directive. Official Journal of the European
Communities No. L159.
Directive 97/43 Euratom (1997) Medical Exposures Directive. Official Journal of the European
Communities No. L180/22.
Guidance Note PM77 (2006) Fitness of Equipment Used for Medical Exposure to Ionising Radiation, 3rd
edition. HSE, London.
Heaton B, McCallum S and Michael W (2011) Cyclotron and PET Facilities. IPEM Report 63 Ch 11
Institute of Physics and Engineering in Medicine York.
IAEA Safety Series No. TS-R-1 - 1996 edition (revised) Regulations for the Safe Transport of Radioactive
Materials. IAEA, Vienna.
International Commission on Radiological Protection (1991) 1990 Recommendations of the International Commission on Radiological Protection, Publication ICRP 60. Annals of the ICRP 21 (2/3) Pergamon
Press, Oxford.
International Commission on Radiological Protection (1992) Radiological Protection in Biomedical
Research ICRP 62. Annals of the ICRP 23 (2) Pergamon Press, Oxford.
International Commission on Radiological Protection (1994) Dose Coefficients for Intakes of Radionuclides
by Workers. Annals of the ICRP 24 (4) Pergamon Press, Oxford.
International Commission on Radiological Protection (1996) Age-dependent Doses to Members of the
Public from Intake of Radionuclides: Part 5 Compilation of Ingestion and Inhalation Dose Coefficients.
Annals of the ICRP 26 (1) Pergamon Press, Oxford.
International Commission on Radiological Protection (2007) The 2007 Recommendations of the
International Commission on Radiological Protection Publication ICRP 103. Annals of the ICRP 37
(2–4) Elsevier, Amsterdam.
Ionising Radiations Regulations (1999) Statutory Instrument 1999 No. 3232. HMSO, London.
IR(ME)R Ionising Radiation (Medical Exposure) Regulations (2000) Statutory Instrument 2000 No.
1059. HMSO, London.
Medical and Dental Guidance Notes (2002) IPEM, York.
Radioactive Substances Act (1993) HMSO, London.
RCR (2007) Making the Best Use of Clinical Radiology Services, 6th edition. The Royal College of
Radiologists, London.
The European Agreement concerning the International Carriage of Dangerous Goods by Road (ADR)
2007 United Nations, Economic Commission for Europe.
The Medicines (Administration of Radioactive Substances) Regulations (1978) Statutory Instrument
No. 1006. HMSO, London.
The Radioactive Substances (Testing Instruments) Exemption Order (1985) Statutory Instruments No.
1049. HMSO, London.
The Radioactive Substances (Hospitals) Exemption Order (1990) Statutory Instruments No. 2512.
HMSO, London.
Watson S J, Jones A L, Oatway W B and Hughes J S (2005) Ionising Radiation Exposure of the UK
Population: 2005 Review. HPA-RPD-001 Health Protection Agency Didcot.
Wrixon A D (2008) New ICRP recommendations. Journal of Radiological Protection 28, 161–168.
Further Reading
Farr R F and Allisy-Roberts P J (1997) Physics for Medical Imaging. W B Saunders & Co.
Institute of Physics and Engineering in Medicine (1997) Recommended Standards for the Routine
Performance Testing of Diagnostic X-ray Imaging Systems. IPEM Report No. 77. Institute of Physics
and Engineering in Medicine York.
Sutton D G and Williams J R (2000) Radiation Shielding for Diagnostic X-rays. British Institute of
Radiology, UK, The Charlesworth Group.
Exercises
1. What do you understand by the ALARA principle? Give examples of its applica-
tion in an X-ray department.
2. What is the purpose of local rules? What would you expect to find in the local
rules for an X-ray department?
3. Discuss the responsibilities placed on the employee by the Ionising Radiations
Regulations 1999.
4. Summarise the radiological techniques that can reduce the dose to the patient.
5. Comment on the validity of the statement ‘Any dose can be justified in diagnostic
radiology’.
6. What precautions should be taken regarding radiation protection in a paediatric
X-ray clinic?
7. As the consultant in charge of a department planning a new suite of rooms for
neuroradiological investigations, what considerations would you have with regard
to radiation protection when deciding on the structure and lay out, and on the
equipment installed?
8. List the precautions that must be taken when using a mobile X-ray image intensi-
fier in an orthopaedic theatre.
9. What are the requirements of an ideal personal dosimeter?
10. What are the advantages and disadvantages of a film badge personal dosimetry
system when compared with a thermoluminescent system?
15
Diagnostic Ultrasound
SUMMARY
• The physics of ultrasound propagation in the body is presented.
• The properties of tissues which cause image formation with ultrasound are
discussed.
• The mechanisms of action of ultrasound probes are discussed.
• Technical aspects of B-mode ultrasound are presented and B-mode artefacts
are explained.
• More advanced techniques—tissue harmonic imaging, compound imaging,
coded imaging—are explained briefly.
• Ways in which information can be obtained from the Doppler effect are
considered.
• The chapter concludes with considerations of the safety of ultrasound.
CONTENTS
15.1 Introduction......................................................................................................................... 491
15.2 The Ultrasound Wave and the Principles of Echo Mapping........................................ 492
15.3 Quantities That Describe an Ultrasound Wave............................................................. 495
15.3.1 Describing the Vibration of the Medium............................................................ 495
15.3.2 Excess Pressure....................................................................................................... 496
15.4 The Scale of the Diagnostic Ultrasound Pulse in Time and Space, and Why
This Is Important................................................................................................................ 498
15.5 Production of Echoes..........................................................................................................500
15.5.1 Characteristic Acoustic Impedance of a Medium..............................................500
15.5.2 Reflection.................................................................................................................. 501
15.5.3 Scattering................................................................................................................. 502
15.6 Other Aspects of Propagation........................................................................................... 503
15.6.1 Refraction at a Boundary....................................................................................... 503
15.6.2 Attenuation of Ultrasound....................................................................................504
15.6.3 Calculating the Effect of Attenuation..................................................................504
15.6.4 Non-Linear Propagation........................................................................................ 507
15.6.5 Dispersion................................................................................................................ 507
15.7 Ultrasound Probes, and How They Work....................................................................... 507
15.7.1 The Transducer Element........................................................................................ 507
15.1 Introduction
Energy can exist in many forms, and in some forms has the ability to move from one place
to another. When energy moves as a wave through a medium, this is known as radiation.
Since sound is a mechanical vibration within an elastic medium, and since such vibration
of any part of that medium will be passed on to neighbouring parts, it is valid to refer to
sound (and ultrasound) as radiation, though not of course ionising radiation.
Many of the differences between X-ray and ultrasound imaging stem from the nature of
the energy being used, and how it propagates. Whereas X-rays can be considered either as
discrete photons or as waves, ultrasound is solely a wave phenomenon. It behaves like light
waves in some ways, although it carries a different form of energy. For example, it travels in
straight lines unless some material causes it to change direction; it undergoes par-
tial reflection at the boundary between two media, and is subject to refraction, diffraction,
scattering and absorption, all of which are relevant to diagnostic imaging. However, sound
waves are fundamentally different from light, X-rays and all other electromagnetic waves,
in that they cannot travel through a vacuum. They exist only as vibrations of the particles of
a medium and their propagation depends on the density and stiffness of the medium.
Narrow beams of sound tend to be used in medicine, although, unlike the laser light we might
imagine, they do not necessarily remain narrow over long distances. The partial reflection and
scattering, which are crucial to imaging, occur whenever the travelling ultrasonic wave
meets a change in the density or stiffness of tissue. The result in many instances is that
an echo is sent back to the transducer, and diagnostic ultrasound relies on these echoes to
form its image. The pulsed ultrasound waves emitted from the transducer travel at a fairly
constant speed in their given direction, so that the range and direction of the features that
produce the echoes can be measured and plotted. The image is therefore an ‘echo-map’.
The reflected ultrasound waves offer another diagnostic possibility: if scattered from
moving blood cells within the body, they will undergo Doppler shift (a change in frequency).
Measurement of the shift allows the blood flow in specific parts of the body to be exam-
ined, thus providing functional information.
One of the principal attractions of ultrasonic imaging is its freedom from the harm-
ful effects associated with ionising radiations. Its growth to its present dominant role
(in terms of number of examinations performed) is also due to it being relatively cheap and
mobile, to its ‘real-time’ nature, and to an ever-increasing appreciation that it can provide
unique diagnostic information about the structure, properties and mechanical behaviour
of tissue and blood flow. The major limitations to ultrasonic investigations are the barriers
presented by gas and bone, and the dependence on a relatively high degree of operator
skill and experience. In addition, mapping distortions and artefacts occur, so some under-
standing of the basic physics of sound propagation and of the techniques used in scan-
ning equipment is necessary if high quality scans are to be produced and their limitations
appreciated. This chapter will provide an initial understanding of the principles and the
technology, so that the reader is better prepared for further study of more advanced texts,
for example, Hoskins et al. (2003).
FIGURE 15.1
Schematic representation of a propagating longitudinal sound wave produced by a vibrating transducer.
(a) Before the vibration begins, the medium is undisturbed. (b) The transducer face has moved forwards,
pushing forwards ‘particles’ of the medium and compressing them. (c) The face has moved backwards, allow-
ing the particles to relax backwards and stretch. Meanwhile, the compression C has continued to propagate.
(d) A further compression is produced by the next forward push of the transducer face, and so on until the
transducer stops vibrating. The movement, compressions C, rarefaction(s) R, and therefore energy, are passed
from one layer of particles to the next at the speed of sound c. Note that the layer next to the transducer (and
each subsequent layer) is displaced sinusoidally with time about its rest position (dotted curve). (The movement
and compression/stretching of particles is exaggerated.)
movement of the tissue) and potential energy (in the local compression or expansion of
the tissue).
As noted above, ultrasound imaging depends on the emission of a burst, or ‘pulse’, of
sound into the region of interest, and the return of echoes of that sound from tissue bound-
aries and inhomogeneities. It is a pulse-echo technique. Because the speed of sound in the
medium is known (or assumed) to be a particular value (1540 m s–1—see Insight), the dis-
tance (depth) of an echoing feature can be calculated from the time that the echo takes to
return. This time will be 1.3 µs for each mm of distance between the feature and the trans-
ducer. Such times are easily measurable with electronic circuitry.
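As a quick numerical illustration of this timing, the following minimal Python sketch (the function names are ours, and c = 1540 m s–1 is assumed) converts between echo return time and depth.

```python
# Minimal sketch: pulse-echo range calculation, assuming c = 1540 m/s in soft tissue.
C_TISSUE = 1540.0  # speed of sound in soft tissue (m/s)

def depth_from_echo_time(t_seconds, c=C_TISSUE):
    """Depth of an echoing feature; the sound travels there and back, hence the factor 2."""
    return c * t_seconds / 2.0

def echo_time_from_depth(d_metres, c=C_TISSUE):
    """Two-way travel time for a feature at depth d."""
    return 2.0 * d_metres / c

# A feature 1 mm from the transducer returns its echo after about 1.3 microseconds.
print(echo_time_from_depth(1e-3) * 1e6)   # ~1.3 (microseconds)
print(depth_from_echo_time(65e-6) * 100)  # an echo arriving after 65 us comes from ~5 cm deep
```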
Insight
The Speed of Sound, and What Determines It
In soft tissue, the speed of sound is approximately 1540 m s–1. In any medium, it is determined pri-
marily by two properties of the medium—the density and the stiffness (compressibility); and to a lesser degree
by the conditions of the medium such as ambient temperature and pressure. Density ρ describes
the mass of the medium per unit volume, and stiffness K (also known as the bulk modulus) is the
pressure required to produce a given fractional change in volume. K is high for relatively incom-
pressible media such as solids, but low for compressible media such as gases. The speed of sound
c may be expressed in terms of K and ρ thus c = √(K/ρ).
Note that the speed should be slower in a dense medium but we often observe sound travel-
ling faster in dense tissues. This is because they also tend to be stiff. In fact, tissues differ more
in stiffness than in density, so although bone, for example, is denser than muscle, it has a higher
speed of sound because it is much stiffer than muscle. The speeds in individual tissues (Table 15.1)
vary from about 5% above the mean value of 1540 m s –1 (for some samples of muscle) to about
10% below it (for some samples of fat). The speed of sound in water is also in this range, being
1482 m s–1 at 20°C, but increasing with temperature at the rate of about 3 m s–1 per °C.
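A minimal sketch of this relationship follows; the bulk modulus used for water (about 2.2 GPa) is a standard figure rather than one quoted in this chapter.

```python
# Minimal sketch of c = sqrt(K / rho); the bulk modulus of water (~2.2 GPa) is an assumed
# standard value used only to check the result against the 1480 m/s quoted for water.
from math import sqrt

def speed_of_sound(K_pascal, rho_kg_per_m3):
    """Speed of sound from stiffness (bulk modulus) K and density rho."""
    return sqrt(K_pascal / rho_kg_per_m3)

print(speed_of_sound(2.2e9, 1000.0))  # ~1483 m/s, close to the value in Table 15.1
```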
To be able to separate the echoes from different boundaries that are close together, the echo
should be a short sound, so the emission should be a short pulse of only a very few vibrations.
A related benefit of the short echo is that its time of arrival can be measured more precisely.
However, an ultrasound image cannot be produced by measuring only the depth of
echoing features—it has to show the direction in which each feature is located. It is there-
fore an equally important principle that the sound pulses must be sent along precisely
known but different paths in turn, so that, after a particular emission, the only features
sending back echoes will be those along a known path. The physics of diffraction tells us
that only high frequency waves can be confined to narrow paths. The ultrasound used for
medical imaging normally has a frequency in the range 2–15 MHz. Frequencies outside
these limits might be used for special imaging purposes.
This basic ultrasound pulse-echo method is used in various ways to obtain diagnostic
information. The production of a real-time, two-dimensional (2D) map of the boundar-
ies and other features of tissue within a section of the body is known as 2D or B-mode
imaging. The sections up to 15.8 describe the nature of the diagnostic ultrasound pulse,
TABLE 15.1
Ultrasound Properties (Approximate) of Some Human Tissues at 37°C, and Other Media

Medium              Speed of Sound   Characteristic Acoustic Impedance   Attenuation Coefficient   Notes
                    c (m s–1)        Z (kg m–2 s–1, or rayls)            D (dB cm–1 MHz–1)
Air                 331              400                                 1.6 (see note)            At 20°C, 10% humidity; D varies as f²
Water               1480             1.5 × 10⁶                           0.002 (see note)          At 20°C, 1 atmos; D varies as f²
Blood (whole)       1580             1.7 × 10⁶                           0.15
Liver               1580             1.6 × 10⁶                           0.4
Muscle (skeletal)   1580             1.6 × 10⁶                           0.6
Fat                 1460             1.3 × 10⁶                           1.0                       Values are variable
Bone                3300             6 × 10⁶                             23                        Values are variable

Source: Adapted from Duck F A, Physical Properties of Tissue, Academic Press Ltd., London, 1990.
its propagation in tissue, its reflection from boundaries and other features, and the probes
that produce it and scan it to cover the region of interest. Sections from 15.8 onwards describe
the different diagnostic modes, such as B-mode and Doppler.
A further principle of B-mode echo mapping is that the brightness of any echo displayed
on the map indicates the strength of that echo. This is how B-mode gets its name.
FIGURE 15.2
(a) Particle displacement versus time waveform of a typical ultrasonic pulse. (b) Showing how, due to energy
propagation in the medium, adjacent particles progressively exhibit the same displacement-time waveform. At
any instant, as shown here, the graph of their displacement versus axial distance is the reverse of (a).
out in space. Another way of picturing the wave is therefore to draw a graph of the different
displacements of particles versus axial distance into the medium at a given instant (Figure
15.2b). This appears as the reverse of the time-graph. The pattern of displacement shown by
the spatial graph moves to the right with speed c, representing the progress of the wave.
Alternatively, ultrasound can be described in terms of the velocity u with which the
particles move as they vibrate. Velocity is the derivative with respect to time of displace-
ment, so u also varies cyclically with time, from positive (forwards) values to negative
(backwards). This alternating velocity u of the particles during their vibration must not be
confused with the steady velocity c with which the vibration travels through the medium.
The velocity that the particles reach (the velocity amplitude) is related to their displace-
ment amplitude and to the frequency of vibration. It is typically a few m s–1.
A graph of the velocity of a particle with time would be similar to the graph of displace-
ment versus time (though the time of maximum velocity does not coincide with the time
of maximum displacement). A plot of the velocity, at some instant, of all particles along the
axial direction of the wave would appear as the reverse of the temporal velocity graph.
Acceleration of the particle, and other higher derivatives of movement, also vary cycli-
cally with time. The variation of any one of the above quantities (along with knowledge
of the propagation speed c) adequately describes the wave. However, the quantity most
commonly used to describe the wave is its varying excess pressure.
Insight
Energy, Power and Intensity
The transducer does work, giving energy to the first layer of particles of the medium as it moves
and deforms them. The energy is passed from particle to particle (though some of it is absorbed
by each particle, so that the wave becomes weaker as it propagates—see Section 15.6.2). A sin-
gle pulse from a diagnostic ultrasound probe might typically carry with it a few microjoules (μJ)
of energy. As the pulse is repeated, the probe transmits a certain amount of energy within any
specified time period. The amount of energy being transferred per second by the transducer or
[Figure 15.3 plots: (a) excess pressure (MPa, amplitude about 1 MPa) versus time (µs), with pulse duration τ marked; (b) excess pressure versus axial distance from the transducer, with wavelength λ and a 1 mm scale marked; (c) relative energy of each frequency component versus frequency (MHz), with centre frequency fc and bandwidth B (width at half maximum) marked.]
FIGURE 15.3
Graphs of excess pressure of an ultrasound pulse versus (a) time, and (b) axial distance from the transducer.
Note that the peaks of pressure do not coincide in time with those of particle displacement for the same wave,
shown in Figure 15.2. The wave has a nominal frequency of 5 MHz, and a wavelength of about 0.3 mm. (c) Energy
spectrum of a typical pulse. The centre frequency fc and bandwidth B are indicated.
wave is the power (P). The unit of power is the watt (W), where 1 W = 1 J s –1. A medical scanner
might typically transmit a few thousand pulses per second, so its output power may be up to a
few hundred mW.
Because ultrasound is directional, a quantity that is often of more interest than power is its inten-
sity, defined as for X-rays (see Section 1.10). Strictly the SI unit of intensity is the watt per square
metre (W m–2), but ultrasound intensities are usually quoted in W cm–2 or mW cm–2. Although defined
in terms of an area, intensity describes the situation at a point: it equals the total power that would be
measured within a square centimetre if the intensity at that point extended uniformly throughout that area.
The intensity varies in different parts of the medium and also with time, because the emission
is pulsed. It can even be regarded as varying with the cyclical wave motion, though this is not
useful in practice. It is better to average out these rapid variations over a chosen timescale such
as the period of the wave, which still allows the variation during the pulse to be described. This
‘smoothed’ intensity can be shown to be related to the pressure-amplitude pmax:
I(t) = pmax²/(2ρc) = pmax²/(2Z)
where pmax itself is a function of time. The product ρc is given a single symbol Z, and is known as
the characteristic acoustic impedance of the medium (see Section 15.5.1).
From a graph or measurement of I(t) the temporal-peak intensity (ITP) as the disturbance passes
a particular point can be identified, or with further averaging, the temporal-average intensity (ITA)
that occurs over a period of time at that point. ITA is important for determining the temperature rise
that might be produced in a sound absorbing medium. The largest value of ITA to be found at any
point in the scanned area is known as the spatial peak temporal-average intensity (ISPTA). The value
of ISPTA that an ultrasound scanner might produce depends on the settings of the machine controls
and the choice of scanning mode. If a machine is set such that its ISPTA is maximum, this is typically
30 mW cm –2 for B-mode, 100 mW cm –2 for M-mode, and 1000 mW cm –2 for pulsed Doppler.
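As a rough worked example of the intensity–pressure relationship above (a minimal sketch; the 1 MPa pressure amplitude of Figure 15.3 and a soft-tissue impedance of 1.6 × 10⁶ rayl are used as illustrative inputs):

```python
# Minimal sketch of the smoothed intensity I = pmax^2 / (2*rho*c) = pmax^2 / (2*Z).
# The 1 MPa pressure amplitude and Z = 1.6e6 rayl are illustrative values.
def intensity_from_pressure(p_max_pa, Z_rayl):
    """Intensity (W/m^2) corresponding to pressure amplitude p_max in a medium of impedance Z."""
    return p_max_pa ** 2 / (2.0 * Z_rayl)

I_w_per_m2 = intensity_from_pressure(1.0e6, 1.6e6)
print(I_w_per_m2)        # ~3.1e5 W/m^2
print(I_w_per_m2 / 1e4)  # ~31 W/cm^2 at the instant pmax is reached
```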
The minimum separation da min at which two features can be resolved in the axial direction is related to the pulse duration τ and the speed of sound c by

da min ≈ τc/2
The axial resolution is half the pulse length because if two features are separated in the
axial direction by τc/2 m, the distance travelled by sound to each of them and back in the
pulse-echo process differs by τc m (see Figure 15.4a). This means that the echoes returning
to the transducer from each of them are separate in time, not overlapping, and the scanner
will plot them as separate features on the image. The axial resolution is thus approximately
0.75 mm for each microsecond of pulse duration. By increasing the ultrasound frequency,
the pulse can be made even shorter in time and axial length, giving better axial resolution.
The short emissions necessary for diagnostic ultrasound contain a wide spectrum of
frequencies (see Insight). The corollary is that a transducer cannot provide good axial res-
olution unless it can emit a broad band of frequencies. From the Insight below, axial resolution
can also be expressed as
da min ≈ c/(2B)
FIGURE 15.4
(a) Axial resolution depends primarily on pulse length τc (see text). In the illustration, da > τc/2, and so the two
interfaces are resolvable. (b) Lateral resolution depends primarily on the lateral extent of the travelling pulse. In
the illustration, the two features both send back echoes at the same time, and are not resolvable.
where B is the pulse bandwidth. This is a more fundamental relationship than the one
given above (see Section 15.13).
Insight
Pulse Waves, Energy Spectra and Bandwidth
Section 15.4 explains why the pulsed waves used in medical ultrasound generally have a length
of only about two cycles. This pulse must contain a spread of frequencies, because it is a direct
consequence of Fourier analysis (see Section 6.11) that only a continuous wave (CW) has a unique
frequency. Figure 15.3c shows the energy spectrum of a pulse. Two useful characteristics of it
are the centre frequency fc, at which the spectrum has its maximum height, and which is the fre-
quency we readily observe from the waveform; and the bandwidth which is defined as the width
of the energy spectrum at half its maximum height. An important rule is that, as the pulse gets
shorter in time, its bandwidth increases. In fact
Pulse bandwidth ≈ 1 / Pulse duration
For a typical two cycle imaging pulse, the bandwidth is about 50% of fc. Thus, a ‘5 MHz’ imaging
pulse really means a pulse with a centre frequency of 5 MHz, but containing substantial energy at
frequencies between about 3.8 MHz and 6.2 MHz.
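A minimal worked example tying these relationships together for the two-cycle 5 MHz pulse discussed above (function names are illustrative, c = 1540 m s–1 assumed):

```python
# Minimal sketch linking pulse duration, bandwidth and axial resolution for a
# two-cycle 5 MHz pulse in soft tissue (c = 1540 m/s).
C = 1540.0  # m/s

def pulse_duration(n_cycles, f_hz):
    return n_cycles / f_hz

def bandwidth(duration_s):
    # Pulse bandwidth ~ 1 / pulse duration
    return 1.0 / duration_s

def axial_resolution(duration_s, c=C):
    # d_a,min ~ tau*c/2, equivalently c/(2B)
    return duration_s * c / 2.0

tau = pulse_duration(2, 5e6)          # 0.4 microseconds
B = bandwidth(tau)                    # 2.5 MHz, i.e. 50% of the 5 MHz centre frequency
print(tau * 1e6, B / 1e6)
print(axial_resolution(tau) * 1e3)    # ~0.31 mm
print(C / (2 * B) * 1e3)              # same answer via c/(2B)
```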
However, good axial resolution on its own cannot ensure a detailed B-mode image: good
resolution in the lateral direction is also required. If the emitted disturbance intercepts
two features that lie side-by-side in the lateral direction (Figure 15.4b), they will both send
back echoes to the transducer that will arrive simultaneously, and will not be resolved.
Therefore, the lateral extent of the disturbance in the medium must be narrower than the distance
between the features we want to resolve. Section 15.7.2 will describe how the ultrasound probe
is designed to propagate the sound along a narrow path or ‘beam’. The lateral beamwidth
is related to, among other things, the wavelength of the ultrasound, suggesting that to
improve lateral resolution, a higher ultrasound frequency should be used.
15.5 Production of Echoes
15.5.1 Characteristic Acoustic Impedance of a Medium
As an ultrasound wave travels through a given medium, the excess pressure p and the particle velocity u are proportional to each other:

p/u = constant (= Z) at every instant
The constant of proportionality Z depends on the particular tissue, and is known as its
characteristic acoustic impedance.
Z depends on two properties of the medium: density ρ and stiffness K, that is,
Z = √(ρK)
Z can also be shown to be equal to density multiplied by the speed of sound in the medium
concerned, that is,
Z = ρc
where it can be seen that soft tissues and water have similar Z, because of their similar
stiffness, density, and speed of sound. (The rather unwieldy formal SI unit for Z, that is,
kg m–2 s–1, is often referred to as a rayl, in honour of Lord Rayleigh who did much early
work on the theory of sound.)
However, the absolute value of Z is less important than the fact that the strength of the
echo produced at a reflecting interface is determined by the difference in the values of Z in
the two media on each side of the interface (see next section).
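A minimal sketch of Z = ρc follows; the densities used (about 1000 kg m–3 for water and 1.2 kg m–3 for air at 20°C) are standard values rather than figures from Table 15.1, and simply reproduce the impedances quoted there.

```python
# Minimal sketch of Z = rho * c, reproducing the order of magnitude of Table 15.1.
def acoustic_impedance(rho, c):
    """Characteristic acoustic impedance in kg m^-2 s^-1 (rayl)."""
    return rho * c

print(acoustic_impedance(1000.0, 1480.0))  # ~1.5e6 rayl for water
print(acoustic_impedance(1.2, 331.0))      # ~400 rayl for air
```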
15.5.2 Reflection
An interface between two tissues in the body which is smooth (i.e. has flat areas con-
siderably larger than the wavelength of the sound involved), is called a specular reflector,
because it reflects sound waves in the same way that a mirror (or partial mirror) reflects
light waves.
The reflected wave has a certain fraction of the intensity of the incident wave. If Ir and Ii
are the intensities of the reflected and incident waves, an intensity reflection coefficient R can
be defined as the ratio Ir/Ii. If the incident wave is perpendicular to the specular reflector
(Figure 15.5a), then
R = Ir/Ii = [(Z2 − Z1)/(Z2 + Z1)]²
The numerator shows that the reflection coefficient depends on the difference between Z1
and Z2, the characteristic acoustic impedances of the media on either side of the interface.
Because the numerator is squared, it doesn’t matter whether Z2 or Z1 is the larger. (The
sign of the difference does affect the phase of the reflected pressure wave, however, which
undergoes a 180° shift if Z2 < Z1, but not if Z2 > Z1.)
FIGURE 15.5
Partial reflection occurs when an ultrasound beam meets the boundary between two media of different char-
acteristic impedances. (a) Perpendicular incidence is assumed in the definition of reflection coefficient. (b) With
non-perpendicular incidence, the echo will not return to the receiving transducer. If the speed of sound is dif-
ferent in the two media, as assumed here, the transmitted beam is refracted (θ 2 ≠ θ 1—see Section 15.6.1).
Since the energy that is not reflected must be transmitted through the interface, the
transmitted intensity It is equal to Ii – Ir. An intensity transmission coefficient (T) can therefore
also be defined:
T = It/Ii = 1 − R
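As a worked example of these coefficients (a minimal sketch using impedance values from Table 15.1):

```python
# Minimal sketch of the intensity reflection and transmission coefficients for
# perpendicular incidence, using characteristic impedances from Table 15.1.
def reflection_coefficient(Z1, Z2):
    return ((Z2 - Z1) / (Z2 + Z1)) ** 2

def transmission_coefficient(Z1, Z2):
    return 1.0 - reflection_coefficient(Z1, Z2)

# Fat (1.3e6 rayl) to liver (1.6e6 rayl): only ~1% of the intensity is reflected.
print(reflection_coefficient(1.3e6, 1.6e6))    # ~0.011
# Soft tissue (1.6e6 rayl) to air (400 rayl): ~99.9% reflected,
# which is why gas-filled spaces form a barrier to imaging.
print(reflection_coefficient(1.6e6, 400.0))    # ~0.999
print(transmission_coefficient(1.6e6, 400.0))  # ~0.001
```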
15.5.3 Scattering
For many tissue boundaries, incident ultrasound is scattered over a wide range of angles
by small surface irregularities. Such boundaries are described as diffuse reflectors by anal-
ogy with the way that a matt surface or ground glass plate produces diffuse reflection of
light. Fortunately for pulse-echo imaging, some of the scattering will return to the trans-
ducer, even if the boundary is not perpendicular to the incident wave direction. These
‘backscattered’ echoes are weaker than those from a specular reflector, but allow the scat-
tering boundaries to be imaged even when non-perpendicular to the beam (Figure 15.6a).
Ultrasound is also scattered by the innumerable tiny microstructures within tissue
(Figure 15.6b), that is, wherever a wave meets a small feature, about a wavelength or less
in size, with a different Z. An analogy is the scattering of ripples on the surface of a pond
by a reed, where the scattered waves take the form of circular ripples spreading out from
the reed.
Early scanners were not sensitive enough to detect most backscattered echoes—they
tended to image only organ boundaries. B-mode scanners are now sensitive to a much
wider dynamic range of echoes (see Sections 5.5.4 and 15.9.1), and translate that range to
a ‘grey-scale image’, giving the vital possibility of differentiating between different tissue
types and pathologies.
FIGURE 15.6
(a) A rough boundary scatters sound over a range of angles, so that backscattered echoes may return to the
transducer even if the sound is not incident perpendicularly to the boundary. (b) Scattering is also produced
by small-scale inhomogeneities within the tissue parenchyma. Backscattered echoes are received that depend on
the structure of the tissue.
When the scattering structure is much smaller than a wavelength, the process is called
Rayleigh scattering in which, in theory, the power of the scattered wave is proportional to
f⁴ and to a⁶, where a is the scatterer size. However, this relationship may not be observed
in tissue which consists of many different sizes of scatterer. Nevertheless, different tis-
sues and pathologies do lead to different strengths and textures in the complex speckle
patterns produced by scattering from within them, and this offers a way of differenti-
ating between them. An unwelcome aspect of scattering is that these speckle patterns
are largely artefactual (Section 15.10.1), and do not give a true image of tissue structure,
so quantitative tissue characterisation using ultrasound is difficult to achieve. (Note that
although the terms ‘specular’ and ‘speckle’ sound similar, they have nothing in common,
and must not be confused.)
15.6 Other Aspects of Propagation
15.6.1 Refraction at a Boundary
When a beam crosses a boundary obliquely between two media in which the speed of sound differs, the transmitted beam changes direction (is refracted) according to Snell's law:

sin θ1 / sin θ2 = c1 / c2

where c1 and c2 are the speeds of sound in the first and second media respectively, and
angles are measured from a line perpendicular to the boundary. The law shows that the
transmitted beam is deflected further away from the normal when c2 > c1, or towards the
normal (see Figure 15.5b) when c2 < c1. If c2 = c1, or if the beam strikes the boundary at right
angles (regardless of the values of c2 and c1), then no refraction takes place.
Refraction produces effects in ultrasound imaging that are not necessarily obvious to
the user. There might be an acoustic lens within the probe, playing an essential part in
the production of a narrow beam for the sound pulse to travel along (see Section 15.7.2).
Refraction can also cause various artefacts (see Section 15.10.10).
I = I0 exp(−µS l)    (15.1)
[Figure 15.7 plot: intensity I versus distance travelled (cm), with curves for 1, 3 and 5 MHz starting from I0.]
FIGURE 15.7
Attenuation of an ultrasound wave in tissue causes intensity to decay exponentially with the distance travelled.
The solid curve illustrates the situation for a 1 MHz wave propagating through tissue that attenuates 10% of its
intensity per cm. Higher frequency waves are attenuated more.
that has already been discussed in Section 1.5 (radioactive decay) and Section 3.2 (attenua-
tion of X-rays). All the mathematics presented in Section 1.5 can be applied equally well to
the intensity of ultrasound, including the simple graphical method for solving problems
involving exponential changes. For ultrasound, the y-axis will be the ultrasound inten-
sity, and the x‑axis is axial distance into the medium (see Figure 15.7). Table 3.1 could
be extended to attenuation of ultrasound using the symbols µ S and HS½ to represent the
linear attenuation coefficient and half-value thickness, respectively. However, half-value
thickness is not generally used in ultrasound—attenuation in a particular tissue is usually
described as the proportional loss of intensity per cm of that tissue.
As explained in the previous section, for soft tissues attenuation is generally propor-
tional to frequency and Equation 15.1 can then be written:
I = I0 exp(−afl)    (15.2)
Insight
Decibels
The decibel (more useful in practice than the bel, from which it is derived) allows the calcula-
tion and expression of the result of a series of proportional processes more simply than the serial
calculation or expression of each individual process. In Section 15.6.3, the attenuation of intensity
of an f-MHz wave travelling through l cm of tissue is shown to take the form
I/I0 = exp(−afl)
where I0 is the initial intensity, I is the resulting intensity, and a is the attenuation coefficient of
the tissue at 1 MHz.
Converting the proportion I/I0 to decibels involves taking the value of log10(I/I0) and multiplying
by 10, thus:
10 log10(I/I0) = 10 log10(exp(−a)) × f × l.
Since exp(−a) is by definition the ratio I/I0 of the intensities of a 1 MHz wave after and before travel-
ling 1 cm, we can write
10 log10(I/I0) [after l cm at f MHz] = 10 log10(I/I0) [after 1 cm at 1 MHz] × f × l
                                     = −D × f × l,
where D is the decibel (dB) value of the tissue’s attenuation coefficient. Its units are dB cm –1
MHz–1. It is a positive number because it describes an attenuator (the tissue). It can be shown to
be equal to 4.34 × a. Its effect on ultrasound intensity would of course be –D dB cm –1 MHz–1. The
benefit of this expression is that, to calculate the decibel proportion of the intensity remaining after
a wave of any frequency f MHz travels any distance l cm, we simply take the negative value of the
three factors on the right-hand side, multiplied together.
There are two further benefits of using decibels for calculations. First, the fact that they are
logarithmic means that very large ratios can be expressed compactly. In scanning situations where
one echo might have an intensity that is 10¹⁰ times that of another, it may simply be said that its
intensity is 100 dB greater (since 10 log10(10¹⁰) = 100). The second benefit derives from the fact
that, where numbers are multiplied or divided, their logarithms are simply added or subtracted.
Thus, suppose the energy of a pulse is reduced by a factor of 1/2 due to one process (which might
be travelling through the half-power thickness of a tissue) and then by a further factor of 1/10 by
a second process. Overall, the energy has been reduced by a factor of 1/2 × 1/10 = 1/20. The
alternative approach is to express the two factors in dB and simply add them together. Thus the
first factor (1/2) is a reduction of –3 dB, and the second factor (1/10) is a reduction of –10 dB, giv-
ing a total reduction of –13 dB. This result could be converted back into the form of a ratio (using
antilog10 1.3 = 20), but would often be left in dB form.
Note that, since intensity varies with the square of the amplitude, doubling the amplitude causes
a 6 dB increase in intensity.
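A short worked example of the −D × f × l calculation and its conversion back to an intensity ratio, using the liver value of D from Table 15.1 (a minimal sketch):

```python
# Minimal sketch of the decibel attenuation calculation D * f * l and its
# conversion back to an intensity ratio. D = 0.4 dB/cm/MHz is the liver value in Table 15.1.
def attenuation_db(D_db_per_cm_per_mhz, f_mhz, path_cm):
    return D_db_per_cm_per_mhz * f_mhz * path_cm

def db_to_ratio(db):
    return 10.0 ** (-db / 10.0)

# A 5 MHz pulse reaching a target 10 cm deep in liver and returning (20 cm in total):
loss_db = attenuation_db(0.4, 5.0, 20.0)
print(loss_db)               # 40 dB
print(db_to_ratio(loss_db))  # 1e-4 of the original intensity survives the round trip
```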
Therefore, although higher ultrasonic frequencies provide better image resolution (see
Section 15.4), their higher attenuation means that the choice of ultrasound frequency in
practice is always a compromise. It must be low enough for the sound to penetrate to (and
return from) the deepest tissue of interest, but high enough to provide the best possible
resolution.
Finally, note from Table 15.1 that not all materials exhibit an attenuation coefficient that is
proportional to frequency. For bone, not only is the value of D enormous (perhaps 70 dB cm–1
at 3 MHz), it increases more rapidly with frequency than for soft tissues. The attenuation
coefficient of water, although very low at ultrasonic frequencies, increases according to f².
15.6.5 Dispersion
Wave dispersion occurs if the wave contains different frequency components, and the fre-
quencies travel at different speeds through the medium. An ultrasound pulse in a disper-
sive medium would lose its narrow confinement in space and time—the dispersion of its
frequency components in time would mean that they no longer synthesized a high ampli-
tude, short-duration pulse. Thus the ultrasound system would lose sensitivity and axial
resolution. Fortunately, the media involved in medical ultrasound are non-dispersive, that
is, c is independent of frequency.
15.7 Ultrasound Probes, and How They Work
15.7.1 The Transducer Element
The active element of a diagnostic probe is usually made from a piezoelectric ceramic, commonly lead zirconate titanate (PZT), which can be
machined into discs, rectangular slabs, bowls or any other desired shape. A single disc-
shaped transducer was originally used, but now most diagnostic probes contain arrays of
thin, rectangular PZT slabs, known as elements (Figure 15.8).
The front and back faces of each transducer element are coated with a conducting layer
of silver to form electrodes, to which electrical leads are attached. Across these leads an
alternating voltage is applied at a particular frequency. The piezoelectric slab deforms in
sympathy, expanding and contracting in thickness (by only a few μm). Both faces of the
slab therefore act as piston sources of ultrasound waves that have the same frequency as
the applied voltage. In reception, the pressure variations produced by a returning echo
cause the element to expand and contract accordingly, and thus generate proportional
(‘analogue’) voltage variations between the two electrodes. These form the electronic echo
signal that is further amplified and processed by the receiver.
A disadvantage of PZT for diagnostic transducers is its high characteristic acoustic
impedance compared with the materials that might be adjacent to it, such as air or tissue.
The large reflection coefficient (Section 15.5.2) this produces at its faces means that a wave
generated inside the transducer is significantly internally reflected at each face. The pro-
cess is known as reverberation. If the transit time of the wave across the transducer and
back were equal to the period of the applied electrical excitation, then subsequent vibra-
tions would be reinforced and the reverberation would build up to a significant resonance.
This particular resonance of a thin slab is called half-wave resonance, because the transducer
thickness must be half a wavelength. Thus a PZT slab with a thickness of 0.67 mm would
exhibit half-wave resonance at 3 MHz (taking c in PZT as 4000 m s–1).
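A minimal sketch of the half-wave resonance condition, f = c/(2 × thickness), with c in PZT taken as 4000 m s–1 as above (the 7.5 MHz case is an added illustrative example):

```python
# Minimal sketch of half-wave resonance: resonant frequency f = c / (2 * thickness),
# taking c in PZT as 4000 m/s as in the text.
def half_wave_resonance(thickness_m, c_pzt=4000.0):
    return c_pzt / (2.0 * thickness_m)

def thickness_for_resonance(f_hz, c_pzt=4000.0):
    return c_pzt / (2.0 * f_hz)

print(half_wave_resonance(0.67e-3) / 1e6)    # ~3 MHz for a 0.67 mm slab
print(thickness_for_resonance(7.5e6) * 1e3)  # ~0.27 mm slab for a 7.5 MHz probe
```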
In some applications of ultrasound, the transducer is designed to be resonant at the
required frequency, because this results in highly efficient energy conversion. However,
resonance must be deliberately suppressed in a pulse-echo transducer, because short
acoustic pulses are necessary for good axial resolution (Section 15.4). Even if a short pulse
of alternating voltage were applied, a resonant transducer would continue to vibrate (‘ring’)
at its resonant frequency for some time.
The ringing time is reduced by placing an absorbing backing layer immediately next to
the back face of the transducer. The backing material is chosen to have an impedance close
to that of the transducer, so that the wave energy trapped in the transducer passes easily
into it, and is absorbed. Thus the vibration dies away quickly, and the pulse emitted from
[Figure 15.8 labels: electrical connection; piezoelectric slab; absorbent backing; matching layer (Z between that of transducer and tissue).]
FIGURE 15.8
Construction detail of a typical ultrasound probe. Rectangular slabs of piezoelectric material are grouped
together to form an emitting and receiving aperture. The electrical connection to the front of each element, the
matching layer, and the absorbent backing would all be continuous across the elements.
the front face is short. By deliberately introducing such energy loss, the transducer’s effi-
ciency at resonance is lost and it is hardly any more responsive at its resonant frequency
than it is for a wide band of frequencies on either side. However, this means that it is well
suited to producing and receiving short pulses which necessarily must contain a wide
band of frequencies (see Insight in Section 15.4).
If the front face of the PZT transducer were coupled directly to tissue, its large reflec-
tion coefficient would further impair the transducer’s efficiency as a transmitter and
receiver. The problem is reduced by having a thin impedance-matching layer (also
known as an anti-reflection coating) immediately next to the front face of the PZT. The
simplest ‘quarter-wave matching layer’ is made from a slab of low attenuation mate-
rial a quarter wavelength thick. The wave reverberates within this layer, undergoing
a two-way delay that amounts to half its period. At the back face, the reverberating
wave comes up against the high acoustic impedance of the transducer, and is internally
reflected with a phase inversion (180° phase shift—Section 15.5.2). The combined result
of delay and phase inversion is that the reverberating wave reinforces waves entering
the matching layer from the transducer, and so improves the transmission efficiency
of the transducer. In reception, the passage of echoes back into the PZT is similarly
enhanced by the matching layer.
It can be shown that if the material of the matching layer is given a characteristic imped-
ance of √(ZPZT × Ztissue), that is, the geometric mean of the impedances of PZT and tissue,
then all the energy entering it from the transducer is eventually transmitted into the tis-
sue. Thus the layer entirely eliminates reflection losses between the transducer and tissue.
However, this is only true at the single frequency for which it is designed. For short pulses,
with their broad frequency spectrum, it is necessary to use multiple matching layers, con-
sisting of several layers with various thicknesses and impedances.
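A minimal sketch of this design rule follows; the PZT impedance (about 30 × 10⁶ rayl) and the speed of sound in the layer material (about 2500 m s–1) are typical illustrative figures rather than values quoted in this chapter.

```python
# Minimal sketch: quarter-wave matching layer design. The PZT impedance (~30e6 rayl)
# and the speed of sound in the layer (~2500 m/s) are assumed illustrative values.
from math import sqrt

def matching_layer_impedance(Z_pzt, Z_tissue):
    """Geometric mean impedance that (at one frequency) eliminates reflection losses."""
    return sqrt(Z_pzt * Z_tissue)

def quarter_wave_thickness(f_hz, c_layer):
    """Layer thickness of a quarter wavelength at the design frequency."""
    return c_layer / (4.0 * f_hz)

print(matching_layer_impedance(30e6, 1.6e6))      # ~6.9e6 rayl
print(quarter_wave_thickness(5e6, 2500.0) * 1e3)  # ~0.125 mm at 5 MHz
```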
Thus each transducer element is designed to produce short pulses of ultrasound.
However, it cannot by itself direct that pulse along a defined narrow path, as required for
pulse-echo methods.
Diffraction was originally explained by Huygens for coherent light. Applied to ultra-
sound, Huygens’ principle is that all points on the face of the transducer (or group of trans-
ducers) should be considered to be radiating separate waves, and that the instantaneous
pressure at a given point in the field is equal to the sum of the pressure contributions at
that point from every such wave. The radiating face is often called the aperture. Although
all points on the aperture may be vibrating in phase with each other (i.e. at the same part of
the pressure cycle at the same instant), geometry shows that the separate waves they pro-
duce travel different distances to arrive at any particular point in the field. They therefore
arrive at that point with different phases, because the aperture is generally larger than the
wavelength of the ultrasound being emitted. The field at that point will be strong due to
constructive interference if most of the contributing waves arrive in phase, but weak due
to destructive interference if many waves arrive producing positive pressures at the same
time as many producing negative pressures.
The shape of the field is therefore determined by the size and shape of the aperture, by
the wavelength, and by the fact that in practice, accidentally or deliberately, the amplitude
and phase of the waves across the aperture are not constant (i.e. the transducer face is
not a simple piston). A pulsed field which contains waves of many frequencies is further
complicated by being the sum of the fields of the individual frequencies. This summa-
tion fortunately has an averaging effect, producing a simplified field pattern which can be
summarised as follows.
Consider the plane beyond the transducer containing the tissue of interest, the ‘scan
plane’. (The same theory applies to any plane containing the axis normal to the transducer
aperture, but the scan plane is the most relevant.) Let the distance across the aperture in
that plane be 2a. If pulsed ultrasound of nominal wavelength λ is emitted from the trans-
ducer, its path in that plane will be outlined by a slightly converging beam for a distance
a²/λ (Figure 15.9a), and then by a diverging beam outlined by the angles ±θ to the axis,
where sin θ = 0.6 λ/a. Unless a is significantly larger than λ, the converging path will be
short, and the divergence will be severe. The aperture must be several wavelengths wide
to prevent divergence until the pulse is near the back of the section being imaged, because
any beam divergence will produce a loss of location accuracy in a pulse-echo system.
A given size of aperture is larger in terms of wavelength if it is emitting a higher frequency
FIGURE 15.9
(a) Beam shape produced by an aperture of half width a. (b) Sidelobes are also produced, and if the aperture is
formed by a group of transducers, then grating lobes as well.
(smaller λ). High frequency beams are therefore better collimated than low-frequency ones:
the converging part of the beam is longer and the subsequent divergence is less severe.
The distance a²/λ at which divergence begins is called the Rayleigh distance, and marks
the boundary between the near field or Fresnel zone and the far field or Fraunhofer zone. Here,
the width of the beam achieves its narrowest value of about a, as measured between points
where the intensity is half that on the axis (the ‘-3 dB width’). Since the total power in any
beam cross-section should be the same at all ranges (assuming no absorption or scatter),
this is therefore the range at which the on-axis intensity reaches its maximum. This region
is sometimes called the last axial maximum, although the term is more relevant in the field
of a continuous wave.
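As a rough worked example of these beam-shape formulae (a minimal sketch; the 10 mm aperture and 3.5 MHz frequency are illustrative choices):

```python
# Minimal sketch of unfocussed beam geometry: near-field (Rayleigh) distance a^2/lambda
# and far-field divergence sin(theta) = 0.6*lambda/a, with c = 1540 m/s assumed.
from math import asin, degrees

C = 1540.0  # m/s, soft tissue

def wavelength(f_hz, c=C):
    return c / f_hz

def rayleigh_distance(a_half_width_m, f_hz):
    return a_half_width_m ** 2 / wavelength(f_hz)

def divergence_angle_deg(a_half_width_m, f_hz):
    return degrees(asin(0.6 * wavelength(f_hz) / a_half_width_m))

a = 5e-3  # half width of a 10 mm aperture
print(rayleigh_distance(a, 3.5e6) * 100)  # ~5.7 cm of converging beam
print(divergence_angle_deg(a, 3.5e6))     # ~3 degrees of divergence beyond that
print(divergence_angle_deg(a, 7.0e6))     # roughly halved at twice the frequency
```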
As noted in Section 15.7.1, the aperture is formed by a single disc-shaped transducer, or
more usually by a group of transducers. Both arrangements are described in more detail
in Section 15.7.3. There may be between 20 and 128 transducers in a group, and each has
its own pulse generator which delivers the oscillating voltage to excite the element and
produce the ultrasonic pulse, and its own pre-amplifier for giving initial amplification to
the echoes.
the aperture being formed from separate slabs rather than a single continuous piece—it
is effectively a diffraction grating. The angle of these lobes relative to the main lobe is
inversely proportional to the spacing of the element centres. If the elements can be made
narrow enough that their spacing can be <0.5 λ, the grating lobes are at such a wide angle
that they do not intrude significantly into the field of view. They can also be suppressed by
varying slightly the spacing of the elements, or restricting the elements’ response at wide
angles.
FIGURE 15.10
(a) Strong and (b) weak focussing of a single-element transducer. Pulsed waves are shown leaving each part of
the transducer, as in Huygens’ principle, and will reinforce each other at the focus. The focal length F is mea-
sured from the transducer to the focus, at the narrowest part of the beam.
be ‘strongly focussed’, with a narrow but relatively short focal zone. At the focus, the width
(wF) of a strongly focussed beam from a transducer of radius a and focal length F is
wF = Fλ/a
Thus, as for unfocussed beams, high frequencies (small λ) produce narrower beams. This
equation also shows that a wider aperture would be required to achieve the same focal
beam width for a deeper focus.
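A short worked example of wF = Fλ/a (a minimal sketch; the aperture, frequency and focal length chosen here are illustrative):

```python
# Minimal sketch of the focal beam width w_F = F*lambda/a for a strongly focussed aperture.
C = 1540.0  # m/s, soft tissue

def focal_beam_width(focal_length_m, f_hz, a_half_width_m, c=C):
    lam = c / f_hz
    return focal_length_m * lam / a_half_width_m

a = 10e-3  # half width of a 20 mm aperture
F = 0.06   # 6 cm focal length
print(focal_beam_width(F, 5e6, a) * 1e3)      # ~1.8 mm beam width at the focus
print(focal_beam_width(F, 5e6, 2 * a) * 1e3)  # doubling the aperture halves it (~0.9 mm)
```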
If the focal length is more than 0.5 a²/λ (Figure 15.10b), then the beam is said to be ‘weakly
focussed’, with a wider but longer focal zone. Where the focal length cannot be varied,
weak focussing is preferred to strong focussing. This is because the region of greatest clin-
ical interest might be anywhere from just beneath the probe to near the limit of penetra-
tion, and hence a long, moderately narrow focal zone is of more value than a very narrow
but short one.
An aperture formed from a group of transducers allows the focal depth to be varied.
In transmission, the operator can position the focus at the same depth as any region of
interest, thereby improving the lateral resolution and sensitivity there. The trigger signal
undergoes different time delays on the way to the pulse generator for each element, the
delays being chosen to compensate for differences in path length (and hence travel time)
between that element and the selected focal point (Figure 15.11a). This is equivalent to the
mechanical methods described above that equalise the path lengths from a single-element
transducer to its focus, but has the advantage that it can be adjusted. The delays are pre-
programmed for each possible setting of the ‘transmission focus’ control.
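The following minimal sketch illustrates how such focussing delays might be computed for a group of elements; the element pitch, group size and focal depth are illustrative values, and the function name is ours rather than anything used by a real beamformer.

```python
# Minimal sketch of transmit-focussing delays for a group of array elements.
# Each element fires late by the amount its path to the focus is shorter than the
# longest (outermost) path, so all wavelets arrive at the focus together.
from math import hypot

C = 1540.0  # m/s

def transmit_delays(n_elements, pitch_m, focal_depth_m, c=C):
    half = (n_elements - 1) / 2.0
    xs = [(i - half) * pitch_m for i in range(n_elements)]  # element positions across the aperture
    paths = [hypot(x, focal_depth_m) for x in xs]           # distance from each element to the focus
    longest = max(paths)
    return [(longest - p) / c for p in paths]               # delays in seconds

delays = transmit_delays(n_elements=16, pitch_m=0.3e-3, focal_depth_m=0.04)
print([round(d * 1e9) for d in delays])  # ns: largest at the centre, zero at the outermost elements
```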
FIGURE 15.11
(a) Focussing of a group aperture in transmission. The transmission of each element is delayed appropriately
so that the individual ultrasound pulses form a curved wavefront. Note how the delays form a pattern which
is similar in shape to the required wavefront. (b) The reception beamformer uses the same pattern of delays to
re-synchronise the echo signal which is shown arriving from distance F, reaching the transducer elements at
different times.
In reception, a variable receive focus is achieved in a similar way (Figure 15.11b) by intro-
ducing time delays into the electrical signal paths of the returning echoes. These delays are
pre-calculated to compensate for the small differences in travel time between the required
receive focus and each element. When the delayed echo signals are summed together they
produce a strong response from any reflector at or near the focus. However, in contrast to
the transmission situation, the operator is not required to select a receive focus. Instead,
the beamformer automatically changes the pattern of delays every few μs over the period
during which echoes return, advancing the range of the receive focus at an average rate of
1 mm every 1.3 μs. This ensures (see Section 15.2) that the receive focus always coincides
with the depth of origin of the echoes. This dynamic focussing technique produces an effec-
tive receive beam that is focussed along its length, maintaining good lateral resolution and
sensitivity from near the surface to the maximum depth of penetration.
In dynamic focussing, the number of elements in a group is also increased as the focus
advances in depth. Recalling the equation wF = Fλ/a, this allows the beamwidth wF at each focal
distance F to be kept constant, by increasing the aperture (2a) in proportion to F. The maximum
number of elements in the group is limited by the machine’s number of processing channels,
which is typically between 20 and 128. A large number of channels implies a complex and
expensive beamformer, but produces better lateral resolution and sensitivity for deep targets.
In some modes of ultrasound it might be useful to have many focal positions in the
transmit beam, effectively forming a beam which is narrow along its length. However,
this cannot be achieved in the same dynamic way as it can for the receive beam. Instead,
if the operator requests multiple-zone focussing in transmission, several transmit-receive
sequences must be carried out along each scan line. In each sequence, the transmitted
pulse is focussed at a different range, and only those echoes arriving at times correspond-
ing to ranges close to that focus are used to form the image. Clearly, the more focal zones
that are chosen, the greater the range of depths that will have improved lateral resolution.
However, the ‘real-time’ aspect of the mode will suffer, because each additional transmis-
sion focus increases the time spent interrogating each beam direction.
FIGURE 15.12
(a) Group aperture and its unfocussed elevation beam. Note that the beam is focussed more effectively in the
lateral (scan) plane than in the elevation plane. In the latter, its total width is no narrower than b, the half width
of the aperture in that plane. (b) Use of a lens for elevation beam focussing. (c) A 1½-D linear array has several
rows of transducer elements, allowing dynamic beam forming in the elevation plane, and hence producing an
effective slice thickness that is narrow over a large depth range.
Some modern probes provide adjustable elevation focussing and apodisation by having
the aperture divided into elements in the elevation direction as well as in the lateral direc-
tion (see Figure 15.12c). However, since every sub-division of the aperture in the elevation
direction represents an additional row of transducer elements with its associated connec-
tions, trigger generators, delays and pre-amplifiers, it is not technologically feasible to pro-
vide more than about 10 such rows of elements. The resulting matrix of elements, typically
10 × 100, is therefore called a 1½-dimensional array.
FIGURE 15.13
Simplified representation of a mechanically scanned probe. The single-element transducer has electrical connec-
tions, a matching layer, and absorbent backing just as the elements in a group aperture do (see Figure 15.8). The
assembly is mounted on a pivot in an oil-filled container. The beam scans over a sector-shaped field of view.
complete sweep of the transmit-receive beam across the sector provides a new frame of the
real-time image, showing all echo-producing structures in their current positions.
To couple the sound between the moving transducer and patient, the transducer must
be contained in a chamber of oil or water with an acoustically transparent shell which is
held against the patient. The difficulties of eliminating air bubbles from the chamber, and
the gradual wear of the mechanism for moving the transducer, are the reasons that this
type of probe has been largely superseded by the various array probes that scan the beam
electronically with no moving parts.
FIGURE 15.14
A linear array probe, showing an active group of elements which is about the same size as the single element in
Figure 15.13. The beam, scanned along the array, covers a rectangular field of view.
FIGURE 15.15
(a) A curvilinear array, and (b) a linear array with peripheral beam steering, offer a wide field of view superfi-
cially, becoming even wider at depth.
linear array probes constructed with some degree of curvature (Figure 15.15a), provide not
only a usefully wide field of view close to the probe but an even wider one at depth. They are
popular for obstetric and some abdominal applications, but they cannot be used where the
need to push the convex front face into full contact with the patient would cause unacceptable
distortion of superficial structures. This problem has been overcome in the trapezoidal scan
format now being offered by some linear array probes (Figure 15.15b). This involves a combi-
nation of linear array beam stepping with the phased array technique described below.
An important advantage of electronically scanned arrays over mechanical probes is
their ability to combine different scanning modes. (These modes are described later.)
There is no mechanical inertia associated with the scanning, so the beam can be made to
jump virtually instantaneously from scan line to scan line in any sequence. The indepen-
dent sequences of beam directions needed for each mode are interleaved, so that the two
(or more) modes are seen to proceed together in real time.
FIGURE 15.16
Basic configuration of an annular array of transducers. Wavefronts, delayed by different amounts in an elec-
tronic beamforming network similar to that in Figure 15.11a, are shown leaving the array, and will come to a focus
in both the lateral and elevation planes. The annular array probe can only scan its beam mechanically in an oil-
filled probe similar to that in Figure 15.13.
FIGURE 15.17
(a) A phased array probe scans its beam across a sector-shaped field of view. (b) The principle of beam steering
is similar to electronic focussing (see Section 15.7.2), but the delays are chosen to produce wavefronts that focus
on an angled beam axis. Dynamic focussing in reception involves generating a sequence of receive foci, closely
spaced along the same scan line. (The 7-element aperture shown here for clarity could in practice consist of at
least 50 elements.)
in the elevation plane. Alternatively, some modern phased arrays are ‘1½-dimensional’ (see
Section 15.7.2), having the transducers subdivided across the array as well as along it.
A disadvantage of the phased array is that the transmit and receive beamwidth increase,
and hence the lateral resolution becomes poorer, towards the lateral edges of the sector scan.
This is because the effective width of the aperture is smaller when ‘seen’ from the direction
of a scan line at an angle to the probe axis, and the focussed beamwidth is inversely pro-
portional to this width (see Section 15.7.2). Suppression of grating lobes is also crucial, oth-
erwise as the main lobe is scanned to one side, a grating lobe might enter the field of view.
focal length. These probes may be thought of as an extreme form of curvilinear array, in
which the curvature extends through an entire 360°.
The particular scanning method chosen, the frequency, and the overall shape and size
of the endoprobe depend on the anatomical constraints of the particular application. Thus
a forward-looking curvilinear array mounted at the end of a trans-vaginal probe is suited
to imaging the ovaries or an early pregnancy, but other orientations of the aperture are
required for imaging the prostate gland trans-rectally. Sometimes, as in trans-oesophageal
imaging of the heart, two compact phased arrays are mounted at right angles to each other,
providing orthogonal sections of the target area. Endoscopic probes may have either radial
or longitudinal fields of view, the latter being useful for guiding biopsy.
15.8.2 A-Mode
In an A-mode scan, the amplitudes of echoes along a single scan line are represented as
positive vertical deflections of a real-time graph on a display screen (Figure 15.18). A-mode
scans provide the most accurate way of measuring the amplitudes of echoes and the dis-
tance between two targets. Dedicated A-mode equipment is rarely manufactured today,
but A-mode is sometimes provided as an additional feature on some B-scan equipment, for
example, in eye scanners, for measuring various axial dimensions of the eye.
15.8.3 B-Mode
In a B scan, a probe held in the operator’s hand produces a sequence of beams which
automatically sweep across a chosen scan plane in the patient’s body, defining a sectional
(tomographic) field of view. The varying angle and position of the scan lines are elec-
tronically controlled so that, as echoes are received, the targets’ directions and ranges can
be established. Echoes obtained from all scan lines are displayed in their correct respec-
tive positions, to form a 2D image of the scan plane (Figure 15.19). The amplitude of each
FIGURE 15.18
A-mode display from ultrasound beamed into the eye. Echo amplitude is plotted against axial distance of the
echoing feature, calculated from the elapsed time since transmission.
FIGURE 15.19
A B-mode ultrasound image is a tomographic section through the tissue beneath the hand-held probe. In this
case it shows the head and chest of a foetus in utero. Note the safety indices in the top right-hand corner of the
display (see Section 15.18).
echo determines the brightness (grey level) of the corresponding point on the display.
Hence the name B-mode. This is the most widely used type of ultrasound imaging, and is
described more fully in Section 15.9. It is often used in conjunction with other modes, such
as M-mode and Doppler.
15.8.4 M-Mode
M stands for motion, and M-mode shows how the positions of reflecting surfaces along a
single scan line vary with ‘physiological time’, that is, on a timescale of seconds rather than
FIGURE 15.20
M-mode display, with associated B-mode image above it. On the B-mode image is the left atrium (LA) of the
heart, and to the left is its outlet to the left ventricle, across which the mitral valve opens and closes. The dotted
line shows the direction of the M-mode beam, chosen to intercept the mitral valve, so that the M-mode dis-
play allows dynamic study of its functioning. The vertical coordinate of the M-mode display represents range
(depth) while the horizontal coordinate represents time (the most recent 4-s period). Brightness (or grey level)
indicates echo amplitude. The leaflets of the valve can be seen opening and closing repetitively with time. An
ECG waveform at the bottom of the trace allows reference to the cardiac cycle.
the microseconds associated with ultrasound travel time (Figure 15.20). It is used primarily
in cardiac work, in conjunction with B-mode, via which the direction of the M-line can be
monitored and adjusted to run through the moving interfaces of interest (e.g. valve leaflets,
chamber walls). On activating M-mode, the B-mode scanning action stops and all further
transmitted pulses are sent down the selected scan line. The echoes resulting from each
transmission pulse are presented as brightness modulations along a corresponding verti-
cal line of the M-mode display, with the brightness (grey level) indicating echo amplitude.
Each transmission-reception sequence results in a new M-mode line, displayed alongside
the previous one, the data being continuously scrolled across the display. Most machines
save the most recent 10 s or so of the data that has been scrolled off the display, allow-
ing it to be replayed if the acquisition of new information is frozen. There is often provision
to show other physiological waveforms, such as an electrocardiogram (ECG) or intra-cardiac
pressure waveform, on the same timescale as the M-mode display.
FIGURE 15.21
Lateral resolution depends on the width of the ultrasound beam. The beam is scanning from right to left. There
are several beam positions between those shown in (a) and (b), all of which intercept the point target ‘1’ which
will therefore be imaged on the axis of each of them. The resultant image is a horizontal ‘streak’ with a length
equal to the beam width, as shown in (b). The point target ‘2’ is at the narrowest (focal) point of the beam, and
will be imaged with a narrower streak.
Temporal resolution describes the ability to follow changes with time in the imaged tissue.
It depends on the frame rate (which itself depends on the depth and width of the field of
view and on the scan line density) and on the degree of frame averaging (see Section 15.9.9)
and spatial compounding (see Section 15.12.1) that is selected.
The sections that follow describe the various parts of the scanner, and explain how the
above factors can be controlled individually. It is important to realise, however, that image
quality results from the combination of these separate factors. It is difficult and of ques-
tionable value to consider the separate effects that they have on image quality. Because
they interact in a complicated way with each other and with the operator’s perception, it is
even more difficult to determine what value each factor should have for a particular clini-
cal application. Imagine trying to decide whether contrast resolution is more important
than spatial resolution (or vice versa) for a particular application, and what the optimum
combination of these should be.
As explained in Sections 15.7.2 and 15.7.3, the selection of the transducers forming a
transmitting or receiving group, and the relative amplitudes and delays of their signals,
is pre-programmed for each possible beam direction and focal point. This beamforming
was traditionally implemented with analogue electronic circuitry (so-called because the
oscillating voltage it generates for a transmitting transducer or processes from a receiving
transducer is, in effect, an analogue of the ultrasonic pressure variation at the transducer).
However, beamforming’s requirement for rapid but accurate changes is much better met
by digital electronic technology, and this has become possible as digital microcircuits have
been developed by the computer and defence industries. In modern scanners, beamform-
ing is more likely to be digital than analogue, or at least a combination of the two technolo-
gies. A necessary part of this digital revolution has been the development of circuits that
can accurately, and at very high rates, convert the analogue electronic signals produced
by echoes reaching the transducers, to digital format. However, these ‘analogue-to-digital
convertors’ first require the very weak and noisy signals from each transducer to be con-
ditioned in various ways, as described next.
FIGURE 15.22
TGC attempts to compensate for the effect of attenuation by progressively increasing the amplification (gain)
received by echoes as they return from greater ranges.
above, a 6 MHz frequency component from the deeper target would be 108 dB (6.3 × 10¹⁰)
weaker than that from the nearer target. A consequence, therefore, is that the spectrum
of a deep echo will have relatively less high frequency energy than that of an echo from
a closer target. The centre frequency and bandwidth of echoes arriving at the transducer
therefore become progressively lower as they return from increasingly greater depths.
This means the beamwidth and pulse length become greater, and hence the lateral and
axial resolutions become poorer (Section 15.4), for deeper targets.
This variation of echo spectrum with target depth is taken into account in high perfor-
mance ultrasound scanners, where the centre frequency and bandwidth of the RF amplifier
is continuously changed with time to match the expected changes in the echo spectra. The
progressive reduction in amplifier bandwidth reduces electronic noise, leading to worth-
while improvements in sensitivity, contrast resolution and penetration for deeper targets.
An overall gain control is also provided to set an underlying amplification for all echoes,
irrespective of depth, before the TGC is applied. It controls all receiver channels, and par-
tially determines the sensitivity of the scanner.
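As a purely illustrative sketch of the scale of compensation involved, the following Python fragment computes the round-trip attenuation suffered by echoes from different depths, and hence the TGC gain needed to equalise them. The attenuation coefficient, frequency and depths are assumed values typical of soft tissue, not figures taken from any particular scanner.

# Illustrative TGC calculation (assumed values, not from any particular scanner).
# Round-trip loss is approximately 2 * alpha * f * depth for an attenuation
# coefficient alpha expressed in dB per cm per MHz.
alpha = 0.7      # assumed soft-tissue attenuation coefficient, dB cm^-1 MHz^-1
f = 5.0          # nominal emission frequency, MHz

for depth_cm in (1, 5, 10, 15):
    round_trip_loss_db = 2 * alpha * f * depth_cm   # dB lost by an echo from this depth
    print(f"depth {depth_cm:2d} cm: loss {round_trip_loss_db:5.1f} dB, "
          f"TGC gain needed {round_trip_loss_db:5.1f} dB")

Even at modest depths the required gain runs to many tens of decibels, which is why the TGC, and the dynamic-range compression that follows it, is so central to B-mode image formation.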
15.9.4 Digitisation
The RF signals, which are the electrical analogues of the ultrasound echoes received by
each transducer, must now be converted into digital form for beamforming and further
processing. This process is similar to that for X-rays described in Section 5.2. Two main fac-
tors describe the performance of an analogue-to-digital convertor (ADC): the rate at which
it can repeatedly sample the analogue signal and produce a corresponding digital number;
and the accuracy or resolution of that number for representing different voltages.
Nyquist’s theorem (see Section 8.4.2) states that an analogue signal is not adequately
represented by the digital sequence unless its maximum frequency component is sampled
at least twice per cycle. As the RF signals in an ultrasound receiver might well contain fre-
quencies higher than 10 MHz, the ADC must perform its conversions extremely quickly.
As in digital radiography (see Section 5.9), a binary scale is normally used. Recalling that
the dynamic range of a scanner is the ratio of the largest to the smallest of the signals that
can be processed, a 12-bit ADC will have a dynamic range of 4095:1, but an 8-bit ADC only
255:1. For ultrasound, the dynamic range limit imposed by the ADC is often expressed in
decibels. Every additional bit is equivalent to an increase of almost exactly 6 dB (see Insight
in Section 15.6.3), because it allows a doubling of the largest number representing echo
amplitude. Thus the dynamic range following an 8-bit ADC is 48 dB, while that following
a 12-bit ADC is 72 dB.
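The 6 dB-per-bit relationship quoted above is easy to check numerically. A minimal Python sketch, purely for illustration:

import math

def adc_dynamic_range(bits):
    """Return (ratio, dB) for an ideal ADC of the given bit depth."""
    ratio = 2 ** bits - 1                  # largest : smallest non-zero signal
    db = 20 * math.log10(ratio)            # amplitude ratio expressed in decibels
    return ratio, db

for bits in (8, 12):
    ratio, db = adc_dynamic_range(bits)
    print(f"{bits}-bit ADC: {ratio}:1, i.e. about {db:.0f} dB")

This reproduces the 48 dB and 72 dB figures given in the text.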
Collecting ultrasound echoes from the body is demanding of electronic hardware and the
amount of data it has to process. Each receiving channel, of which there might be up to 128,
connecting the group of receiving transducers via the probe cable to the beamformer, requires
RF amplification and a high-speed ADC. Each channel is processing a similar amount of
information to that processed by a television receiver. Ultrasound operators wonder whether
they will ever have the pleasure of using a wireless probe, that is, one that does not need the
cumbersome cable connecting it to the scanner. Unfortunately, it seems inconceivable that
technology will ever be developed to achieve this. Not only would it require the probe to emit
radio waves equivalent to 128 TV stations, it would also require the transducers to be excited
and controlled wirelessly, and the probe to be supplied with electrical power to allow all this.
The bulk of the electronic receiving hardware must therefore be within the scanner itself.
FIGURE 15.23
Amplitude demodulation produces a simplified echo waveform with an outline corresponding to the peak
amplitude of the RF signal.
FIGURE 15.24
Simplified examples of pre-processing transfer curves, determined by the ‘dynamic range’ (DR) control. These
allow the large dynamic range of echoes (horizontal axis) to be reduced to a range compatible with a viewed
image (vertical axis). Curve A shows a large DR setting (~90 dB), allowing most tissue echoes to be displayed
on the image, although not necessarily with high contrast. Curve B represents a low DR setting (~40 dB) which
might be used for cardiac imaging, where significant boundaries are of more interest than tissue backscattering.
Curve C shows a similar dB range centred on a lower value. This process is analogous to choosing a window
width and window level in CT (Figure 8.4). (Note that because the axes are in logarithmic decibel units, the
sloping lines represent non-linear curves.)
FIGURE 15.25
The frame store is a block of computer memory in which the image is assembled, ready for display. The shaded
area (the shape of which depends on the probe being used) gets filled with echo information arriving via the
receiver, one scan line at a time. Text and graphics information about the machine settings and the patient are
added around the scan area. The scanner’s display reads its information from the frame store row by row. In
practice, the store has many more locations than shown here—perhaps 1024 × 1024.
frame store is therefore a preliminary version of the eventual image which will be pre-
sented as a matrix of small picture elements (pixels), each having a grey level determined
by the amplitudes of the echoes from the corresponding part of the scan plane.
The process of storing echo amplitude information in the correct memory locations is
known as writing into the store. The sequence in which the locations are written is defined by the
beam-scanning format. At the same time, the process of reading takes place, in which the
memory locations are interrogated row by row, for compatibility with the monitor screen’s
line-by-line display of the image. Because the read sequence is different from the write
sequence, the process achieves ‘scan conversion’.
memory will appear as brightness or grey level on the display. A selection of ‘grey maps’
is usually available. A ‘reject’ control is also available to suppress (display as black) echoes
below a certain threshold which might be judged not to contribute useful information to
the image, and might even be electronic noise.
As with most scanner adjustments, the grey map and reject are usually preset, but
can be adjusted by the operator if required. Post-processing, unlike pre-processing, can
be changed even when the image is frozen. However, it further complicates the overall
transfer characteristic of the scanner which already depends on many other factors such
as gain, dynamic range setting and the transfer characteristic of the display itself (see
next section). Maps may even be available that provide tints of colours other than grey,
although coloured B-mode imaging would not normally be selected unless preferred by
the operator.
FIGURE 15.26
Example of a B-mode image showing the following artefacts: speckle pattern S, post-cystic enhancement E, and
edge-effect shadows (arrows).
from each other, but those within the same cell will not. Unless very-high-frequency ultra-
sound can be used, the resolution cell will be larger than individual parts of tissue struc-
ture, and thus tissue structure will not be resolved on the image.
The individual pressure waves from scatterers within the same resolution cell will inter-
fere at the transducer to form a resultant signal that is either weak or strong depending on
the precise spatial distribution of the scatterers that caused them. The resultant signal will
therefore vary randomly between high amplitude (a bright display) and small amplitude
(a dark display) along each scan line. Bright intervals on several adjacent scan lines make
up a bright speckle.
The average axial and lateral speckle dimensions are related to the corresponding dimen-
sions of the resolution cell. Thus the speckle pattern is predominantly a characteristic of
the scanner and probe, not the tissue being imaged. The structure of the tissue has only an
indirect effect on speckle size, by governing the average brightness of the speckle accord-
ing to the strength of backscattering it produces.
Speckle adversely affects both the detection of small fluid-filled cysts, and the contrast
resolution of different tissues.
FIGURE 15.27
Reverberations between strongly reflecting interfaces produce strings of uniformly spaced false echoes. In this
case, the emitted sound is reverberating between the anterior wall a of the bladder B and the probe itself, pro-
ducing reverberations r1 and r2 within what should be a dark fluid region. Another reverberation artefact D,
called dead zone, can be seen at the top of the image.
The regular spacing of the reverberation echoes helps to identify them as such. It is equal to
the spacing of the pair of reflectors responsible for the reverberation. A common reverbera-
tion involves the probe face itself acting as the nearer reflector, and the anterior wall of a
liquid-filled vessel acting as the further reflector. As in Figure 15.27, reverberation may pro-
duce acoustic noise within what should be an echo-free image of a liquid-filled structure.
Reverberation of a different appearance is the comet-tail artefact, due to reverberation
between the proximal and distal edges of a small structure such as a small bone, a metal
implant, or a foreign body, with different characteristic acoustic impedance from its sur-
roundings. It may be helpful in pin-pointing an implant or foreign body.
Yet another form of reverberation occurs within the layered structure of the transducer
itself (Section 15.7.1). The transmission produces an immediate sequence of closely spaced
echoes at the start of each scan line, thereby obscuring extremely close echoes right across
the image. This dead zone is visible at the top of Figure 15.27, but is much reduced (see
Figure 15.26) in modern transducers which use composite materials (periodic structures
of piezoelectric ceramics and non-piezoelectric polymers) with impedance closer to that of
tissue, and multiple matching layers.
FIGURE 15.28
Slice thickness artefact. The blood vessel lies in the scan plane, and should appear dark on the image, in contrast
to the surrounding tissue. However, echoes from tissue × within the scanned slice, but not in the scan plane
itself, appear on the image as speckle and degrade contrast resolution.
FIGURE 15.29
Shadow and enhancement artefacts. Strongly reflecting and/or absorbing features such as small stones
(arrowed) in the gallbladder (GB) produce a shadow directly behind them, where the tissue cannot be imaged
by conventional pulse-echo methods. Post-cystic enhancement is produced behind the kidney (K), because the
depth-gain control over-compensates for reduced attenuation in the gall bladder and kidney.
confined packet of energy, to travel narrow and straight into the body as far as required.
The precisely calculated beamforming process (Section 15.7.2) relies on several component
waves travelling from different parts of the transducer with the uniform speed of sound
expected in soft tissue; whereas the fatty inclusions have a significantly lower speed of
propagation (Table 15.1). The result, worrying for the operator, is a de-focussed image. It is
one of the reasons why some patients make poor scanning subjects. Sophisticated machines
provide a degree of aberration correction, by allowing the operator to specify the nature of
the superficial tissue (e.g. fatty or dense), and adjusting the beamformer accordingly.
FIGURE 15.30
Spectrum of an emitted pulse after having undergone non-linear propagation. Frequency bands have appeared
which are harmonics of the fundamental band. The desire for wide-bandwidth pulses means that the second-
harmonic band may overlap with the fundamental, requiring signal-processing methods to isolate the har-
monic echoes.
The second harmonic band of frequencies (at double the fundamental frequencies) is of
greatest interest: higher harmonics are generally too weak to be useful. Echoes returning
to the transducer, depending on where they originated, may or may not contain harmon-
ics. Many of the artefactual echoes do not, as will be explained below. If the receiver is
therefore tuned to the second-harmonic frequencies rather than the fundamental frequen-
cies, it will not detect these artefactual echoes. In many cases, THI provides a ‘cleaner’
image than does conventional B-mode.
The potential of each tissue to distort a sinusoidal waveform is described by its non-
linearity coefficient β. However, as soon as the second harmonic begins to propagate in the
tissue, it suffers from attenuation. The actual build-up of its amplitude is therefore deter-
mined more by the tissue’s attenuation at the second-harmonic frequency than by its non-
linearity coefficient (Duck 2002).
Another vital factor is the amplitude of the fundamental wave. As its name suggests,
non-linear propagation occurs if the particles of the medium do not move with a veloc-
ity exactly proportional to the varying excess pressure they experience and do not exert
an exactly proportional version of that pressure waveform on their neighbour. This does
not happen with small amplitude waveforms, but is an increasingly strong effect as the
amplitude increases. Essentially, the distortion occurs at the amplitude peaks of the wave-
form. Consider that the stiffness of the tissue increases when it is under high pressure,
but decreases during rarefaction. Since propagation velocity is proportional to the square
root of stiffness (see Insight in Section 15.2), the high-pressure parts of the waveform travel
faster than the low-pressure parts, and so the shape of the wave becomes distorted (com-
pare Figure 15.31 with Figure 15.3).
These three factors—the amplitude of the wave, and the non-linearity and harmonic
attenuation coefficients of the tissue—interact in a complex but fortuitous way to determine
FIGURE 15.31
(a) and (b) Distorted versions of Figure 15.3a and 15.3b, due to non-linear propagation. Note the characteristic
sharp peaks and blunt troughs.
which echoes returning to the transducer will contain harmonics, and which will not.
Genuine echoes from reflecting and backscattering features may contain detectable har-
monics. However, reverberation echoes may have travelled far enough for their harmon-
ics to have been attenuated. The same may be true for mirror-image echoes. Sidelobes of
the beam will not be strong enough to provoke harmonics. A similar argument applies
to the edges of the main lobe, but THI does not necessarily improve lateral resolution or
slice width because it might use a lower emission frequency than conventional imaging to
reduce the overall attenuation.
It is not obvious whether THI reduces the problems of aberration in superficial fatty tis-
sues (Section 15.10.10). Aberration tends to divert energy away from the main beam into
sidelobes. Although THI would be expected to reduce artefactual echoes from the side-
lobes (Tranquart et al. 1999), damage to the main beam would still occur.
Theoretical aspects of harmonic ultrasound fields (Duck 2002) are beyond the scope
of this chapter. The practical outcome, however, is that a second-harmonic field can pro-
vide a vehicle for pulse-echo imaging, and THI is now available on all but the cheapest
scanners.
The separation of the harmonic component of echoes from the fundamental compo-
nent, so that the latter can be rejected, can be achieved in either of two ways. The earliest
method, and still the only one available on cheaper machines, uses frequency-selective filters.
The bandwidth of the emitted pulse is restricted so that the fundamental and second-
harmonic bands do not overlap in the frequency domain (see Figure 15.30). It is then rela-
tively simple with an analogue or digital electronic filter to admit the harmonic band into
the echo receiver, but not the fundamental band. The disadvantage of this method is that,
by definition, bandwidth-restriction requires an emitted pulse of longer duration, and
thus sacrifices some axial resolution. Paradoxically, though, because the transducer must
be capable of both emitting the fundamental band and receiving the second-harmonic
band, the requirement for a wide overall frequency response pushes transducer design
techniques to the limit.
Pulse-inversion or phase inversion is a more modern method of producing a response to
the second-harmonic component of the echo while rejecting the fundamental component.
It involves the emission of two pulses in succession along each scan line, the second of
which is an inverted (anti-phase) version of the first. Upon reception, the echoes of each
pulse are simply added, and cancel by destructive interference. However, any second-
harmonic components of the two echoes will be in phase, because they have two cycles
for every one of the fundamental (Figure 15.32). Their addition therefore results in con-
structive interference of the second-harmonic component which can then be processed
further to produce the image. This method allows more of the transducer’s bandwidth to
contribute to axial resolution, because it doesn’t matter if the fundamental and harmonic
bands overlap.
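The cancellation of the fundamental and reinforcement of the second harmonic can be demonstrated with a short NumPy sketch. The toy 'echoes' below are simply a fundamental plus a weak in-phase second harmonic; the non-linear propagation that would generate the harmonic in practice is assumed rather than modelled, and all numerical values are illustrative.

import numpy as np

fs = 100e6                        # sampling frequency, Hz (assumed)
t = np.arange(0, 2e-6, 1/fs)      # 2 microseconds of signal
f0 = 3e6                          # fundamental frequency, Hz (assumed)

def toy_echo(inverted=False):
    # Inverting the emission inverts the fundamental component of the echo,
    # but the second-harmonic component (assumed here) keeps the same sign.
    sign = -1.0 if inverted else 1.0
    fundamental = sign * np.sin(2 * np.pi * f0 * t)
    second_harmonic = 0.2 * np.sin(2 * np.pi * 2 * f0 * t)
    return fundamental + second_harmonic

summed = toy_echo() + toy_echo(inverted=True)     # pulse-inversion summation
print("peak of summed signal:", round(summed.max(), 3))            # ~0.4
print("only the harmonic remains:",
      np.allclose(summed, 0.4 * np.sin(2 * np.pi * 2 * f0 * t)))   # True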
In further developments of pulse-inversion THI, each emitted pulse consists of two fre-
quency bands, one centred at a nominal frequency f0, and one at 2f0. Non-linear propaga-
tion not only produces harmonics of these, but also produces other frequency bands due
to intermodulation of the two emitted bands (Figure 15.33). These intermodulation bands
are centred at the sum of the centres of the two emitted bands (i.e. 3f0) and at the differ-
ence between the two centres (i.e. f0). Intermodulation is an inevitable consequence of the
non-linear response of the tissue, but only occurs when there are at least two frequency
components in the wave. In this mode the receiver is tuned to the difference-frequency
band, centred at f0. Any relics in the echoes of the undistorted emitted wave, centred at f0
and 2f0, are eliminated by the pulse-inversion process described above, and thus artefacts
FIGURE 15.32
Pulse-inversion harmonic processing. Two anti-phase echo pulses return from each feature. They are shown
broken down (by Fourier analysis) into fundamental (grey) and second-harmonic (black) components. The fun-
damental components are in anti-phase, so when added together they cancel; but the second-harmonic compo-
nents are in phase, and will reinforce each other when added.
FIGURE 15.33
Advanced pulse-inversion harmonic methods exploit intermodulation products. Two emissions both consist
of a fundamental (grey) and second-harmonic (black) component. The second emission is in anti-phase to the
first. The two echoes returning from each feature contain harmonics of both components (not shown) which,
being in anti-phase, are cancelled by the summation process. However, they also contain an intermodulation
product, from non-linear interaction between the two emission components. Its instantaneous phase is equal to
the instantaneous phase difference between the two emission components. It has the same phase in both echoes,
so addition provides a reinforced echo signal for image formation.
are suppressed. However, the difference-frequency components in the pair of echoes from
each range turn out to be in phase (see Figure 15.33), and therefore progress reinforced
through the receiver. The advantage of imaging echoes via their difference-frequency
component rather than their second-harmonic component is that it is attenuated less in
tissue, and therefore allows better penetration.
FIGURE 15.34
B-mode imaging with spatial compounding. This example uses three sets of beams, shown in black, grey and
white, but more might be used in practice. Notice how the shadow behind a strongly attenuating or reflecting
feature A will be reduced by the beams that reach behind it, and more of a surface such as B is insonated by
beams at normal incidence.
FIGURE 15.35
Frequency compounding in a B-mode echo receiver. Part of the receiver is split into two channels that process
different sub-bands of the echo spectrum. The result is a smoothing of speckle and a reduction of noise.
Statistical considerations show that the method also reduces the brightness variation of
the speckle artefact from tissue, producing a smoother image. Each set of angled beams
will result in a different spatial distribution of bright and dark regions which, when aver-
aged together, produce a pattern with less variance in brightness, that is, a smoother image
of tissue parenchyma. The effect of electronic noise is also reduced. Like speckle, it will
have a different pattern in the component images, and therefore reduced variance in the
compound image.
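The reduction in brightness variance can be illustrated with a toy simulation. Fully developed speckle amplitude is commonly modelled as Rayleigh-distributed; the sketch below (a statistical toy, not a beam simulation) averages several independent speckle realisations and shows the relative brightness variation falling roughly as one over the square root of the number of compounded frames.

import numpy as np

rng = np.random.default_rng(0)
shape = (256, 256)                              # a toy speckle 'image'

def speckle_frame():
    # Fully developed speckle amplitude: approximately Rayleigh distributed.
    return rng.rayleigh(scale=1.0, size=shape)

for n in (1, 3, 9):
    compound = np.mean([speckle_frame() for _ in range(n)], axis=0)
    cov = compound.std() / compound.mean()      # relative brightness variation
    print(f"{n} compounded frames: coefficient of variation = {cov:.3f}")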
According to Section 15.10.1, a smoother speckle pattern should provide better detection
of small cysts, and better contrast resolution of different tissues. On the other hand, spatial
compounding has poorer temporal resolution, since each frame requires several beam-
sweeps from the probe. If too many angled sets of beams are used, operators describe the
resulting image as ‘swimmy’.
Greater penetration of a given frequency in a given tissue can only be achieved by putting
more energy into the emitted pulse. This can be achieved either by making the pulse lon-
ger, or by increasing its amplitude. The latter is the only sensible method for a simple pulse
emission, because to increase its duration would degrade axial resolution. However, the
theory of matched filtering or pulse compression shows that a longer pulse need not degrade
axial resolution, as long as its waveform has a more complicated pattern than a simple
burst of a single frequency, and as long as that pattern (the ‘code’) is precisely known. The
waveform within the pulse must be modulated in some way. This is a technique borrowed
from the field of communications and radar. Three common forms are amplitude, phase,
and frequency modulation, and all of these have been applied to diagnostic ultrasound
emissions. For example, a frequency-modulated emission might have a duration of 10 µs
or so, and a gradually increasing frequency throughout, starting at f1 MHz say, and finish-
ing at f2 MHz (Figure 15.36a). Such a waveform is known as a chirp, from the way it would
sound if it were audible.
A simple, single-frequency emission of duration 10 µs would provide axial resolution
of 7 mm at best—inadequate for B-mode imaging. However, its fundamental limitation
is not its long duration, but its narrow bandwidth of 0.1 MHz (see Insight in Section 15.4).
A coded waveform can be given a bandwidth B that is unconnected with its duration T.
The chirp’s bandwidth, for example, is approximately the frequency-sweep f2 – f1. If this is
2 MHz, for example, a chirp of long duration is capable of providing the same axial resolu-
tion as a 0.5 µs simple pulse.
If echoes of the long pulse are produced by features that are closer together than Tc/2 in
the axial direction, they are not separate when they reach the transducer. A ‘matched filter’
within the receiver takes the frequency components of the echo pulses and adjusts their
phase relationship so that they form a short pulse with duration τ = 1/B (Figure 15.37). The
result is equivalent to a conventional pulse-echo system having emitted a short pulse into
the body. The axial resolution is τc/2. However, the coded excitation with duration T has
T/τ times the amount of energy, and this energy is conserved during compression by the
FIGURE 15.36
Examples of high-energy coded-excitation waveforms (a) is a chirp and (b) is encoded via phase-modulation,
whereby the phase is reversed periodically in a pre-determined sequence. Although both have a duration T of
5 µs, their bandwidth is not 1/T = 200 kHz, but 2 MHz. They can therefore provide the same axial resolution as
a simple pulse of duration 0.5 µs, but have ten times the amount of energy.
FIGURE 15.37
Coded-excitation processing system. Echoes of the chirp emission in Figure 15.36a are fed into a matched filter
which compresses their duration down to approximately 0.5 µs, so that even if they overlap in time on entry to
the matched filter, they can be separated at the output to an axial resolution equivalent to 0.5 µs. Note also the
increase in amplitude which increases their detectability.
matched filter, so the short pulse is boosted in power by the same factor T/τ. This means
that even echoes which return to the transducer with amplitude too weak to be detected
by a conventional receiver may be detected after pulse compression, thus penetration is
improved.
The ‘power-boosting’ factor T/τ of the coded-excitation or pulse compression system is
an important figure of merit. Since τ = 1/B, this factor can be expressed as TB, the time-
bandwidth product of the coded emission waveform. In the example given above, the time-
bandwidth product is 20 (13 dB), meaning that, in theory, the system would have that much
extra sensitivity to weak echoes. Commercial coded-excitation systems might use pulses
with 5 MHz centre frequency for abdominal imaging, where conventional systems would
be restricted to 3.5 MHz.
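The compression process can be demonstrated numerically. The sketch below (with assumed, illustrative parameters rather than those of any commercial system) builds a 10 µs linear chirp sweeping over a 2 MHz band, delays it to represent an echo, and correlates the echo with the known emission, which is in essence what a matched filter does. The output pulse is far shorter than the emission, consistent with the 1/B estimate above.

import numpy as np

fs = 50e6                               # sampling frequency, Hz (assumed)
T = 10e-6                               # chirp duration, s
f1, f2 = 2e6, 4e6                       # swept band, Hz (bandwidth B = 2 MHz)
t = np.arange(0, T, 1/fs)

# Linear frequency-modulated emission ('chirp')
chirp = np.sin(2 * np.pi * (f1 * t + 0.5 * (f2 - f1) / T * t**2))

# Toy received echo: the same chirp delayed by 20 microseconds
echo = np.concatenate([np.zeros(int(20e-6 * fs)), chirp, np.zeros(200)])

# Matched filtering, implemented here as correlation with the known emission
compressed = np.abs(np.correlate(echo, chirp, mode="same"))

above_half = np.where(compressed > compressed.max() / 2)[0]
width_us = (above_half[-1] - above_half[0]) / fs * 1e6
print(f"compressed pulse width ≈ {width_us:.2f} µs (emission lasted {T*1e6:.0f} µs)")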
Coded waveforms are also suitable for other modes of diagnostic ultrasound, and are
compatible with the harmonic and compounding techniques described above. Both chirps
(Behar 2004) and phase-modulated waveforms (Nowicki 2003—see Figure 15.36b) are
used. There are challenges posed by coded excitation, however. One is the complicated
interplay between the coded waveform and beamforming, both of which rely on precise
phase relationships within and between signals, and their different requirements might
not be completely reconcilable in the near field of the transducer. Another issue is the
potential of the coded waveform to produce physical effects in tissue. Although its ampli-
tude in the body is no higher than that of a conventional pulse, its duration is longer, and
therefore the energy deposition is greater. This must be controlled, as for all diagnostic
ultrasound (see Section 15.18).
ultrasound imaging, it can be applied to blood in the body, or certain other fluids that
do not otherwise send back appreciable echoes. In many applications of B-mode imag-
ing, this lack of echogenicity is far from a disadvantage, because the relative darkness
of vessels containing these fluids contrasts inherently with the surrounding echogenic
tissue. However, that intrinsic contrast does not provide visibility of fluid regions that
are similar to or smaller in cross-section than the speckle pattern or resolution cell
(Section 15.10.1). There are, of course, many such small channels in the body. The small
blood vessels that perfuse organs, and the ducts in the female reproductive system are
particular examples where visibility would provide useful diagnostic evidence of pat-
ency. This would be realised if the fluids within them could be made hyper- rather than
hypo-echoic.
The established way of making fluids backscatter strongly is to infuse them with micro-
scopic bubbles containing gas or air. The gas has such a different characteristic acous-
tic impedance from the fluid that the bubbles cause strong backscattered echoes to be
returned to the transducer. The backscattering is even stronger if the bubbles are designed
to resonate in the undulating pressure field of the ultrasound pulse, which is achieved by
making their diameter about 3.3/f microns, where f is the nominal ultrasound frequency in
MHz. Microbubbles of this size are safe in the bloodstream, and can enter small blood ves-
sels. They are injected into a peripheral vein, usually in the hand, wrist or arm. They must
be stabilised with a shell, usually of lipid or albumin, otherwise they would be absorbed
into the blood, particularly on passing through the lungs. Contrast agents (Quaia 2005;
Claudon 2008) have found particular use in studies of the myocardium, and in helping to
visualise focal lesions in organs such as the liver and kidney. It is the increased vascularity
of such lesions that leads to their contrast from surrounding tissue when perfused by the
microbubbles.
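The resonance condition quoted above implies very small bubbles at diagnostic frequencies. A minimal sketch, using only the 3.3/f rule of thumb given in the text:

def resonant_bubble_diameter_um(f_mhz):
    # Approximate resonant diameter in micrometres for a nominal frequency in MHz,
    # using the 3.3/f relation quoted in the text.
    return 3.3 / f_mhz

for f in (2.0, 3.5, 5.0):
    print(f"{f} MHz: resonant diameter ≈ {resonant_bubble_diameter_um(f):.1f} µm")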
A particular behaviour of the resonant bubbles in the ultrasound field allows a further
improvement in the contrast of tissues containing them. The radius of the bubbles changes
non-linearly with excess pressure, so the backscattered wave is a distorted version of the
incident waveform. The echoes returning from the contrast medium may therefore con-
tain harmonics, even for a low-amplitude emission, and contrast imaging is often done in
harmonic mode.
The pulses for contrast imaging are generally emitted at low amplitude for another rea-
son. Bubbles become unstable under high amplitude pressure variations, and might col-
lapse, which of course would end their role as a contrast agent. Occasionally, in fact, this
collapse is deliberately provoked by a brief sequence of high amplitude emissions known
as ‘bubble-burst mode’. The collapsing bubbles emit shockwaves, the ultimate form of non-
linear wave, whose broad bandwidth and high amplitude cause the locations of the col-
lapsing bubbles to appear brightly on the B-mode image. As more contrast bubbles are
perfused into the region with time, further ‘bubble-burst’ sequences will provide further
snapshots of the perfused region. Stringing these images together gives the equivalent
of time-lapse photography, showing more quickly a process (in this case perfusion) that
evolves over a period of time.
The ability of ultrasound to burst bubbles when they have reached a particular location
in the body offers the possibility of targeted therapy (Lanza and Wickline 2001). The con-
trast-agent bubbles are manufactured to contain cancer-controlling drugs or other thera-
peutic agents with a tendency to bind chemically to particular cells in the body. When they
have reached their target, as evidenced by the enhanced image, they are burst to release
the drug.
FIGURE 15.38
(See colour insert.) A 3D ultrasound rendering of a foetal face.
rendering algorithms are implemented, to give 3D impressions of the anatomy within the
volume with varying degrees of transparency, or to display surfaces of structures such as
the foetal face (Figure 15.38).
FIGURE 15.39
The Doppler effect. An emitted ultrasound pulse, with wavelength λ e separating compressions P and Q, encoun-
ters a red blood cell RBC moving with velocity v. In (a), compression P is incident on the cell and will be scat-
tered as well as transmitted. (b) Q is incident on the RBC after a time that is shorter than the period of the wave,
because the RBC has moved forwards to meet it. (The movement is greatly exaggerated here.) (c) The backscat-
tered wave therefore has a shorter period and shorter wavelength λr than the emitted wave. (d) Illustrates that it
is the component of the cell’s velocity in the direction of the ultrasound emission, that is, v cos θ, that influences
the change in period and wavelength.
Further consideration shows that the change in period, wavelength and frequency of the
wave is caused by the component of the RBC’s motion in the direction of the ultrasound
propagation, not by its true velocity v. This component is expressed mathematically as
v cos θ, where θ is the angle between the RBC and ultrasound direction vectors. If the RBC
had a net motion away from the probe (i.e. v cos θ negative), then the period and wave-
length of the backscattered pulse would be increased, and its frequency would be less than
fe. If the RBC moves at right angles to the pulse’s propagation, there is no change in period,
wavelength or frequency. As far as the pulse-echo system is aware, it is stationary.
The quantity of interest is the difference fr – fe which is known as the Doppler shift and
given the symbol f D:
$$ f_D \;(= f_r - f_e) \;=\; \frac{2\,v\cos\theta \times f_e}{c} $$
Note that the magnitude of the Doppler shift, if expressed as a fraction f D/fe of the emitted
frequency, is proportional to the RBC’s component of motion v cos θ, expressed as a fraction
of the speed of sound c. Note also that it is usual for Doppler applications to consider v
positive if the scatterer is moving towards the sound source so that positive Doppler shifts
are associated with positive velocities.
Substitution of typical values for v, c and fe shows that f D is typically a few kHz or
less. For example, a blood cell moving at 0.5 m s⁻¹ directly towards a probe emitting a
frequency of 3 MHz would produce a Doppler shift f D of 2 kHz. Since the magnitude of
cos θ ≤ 1 (i.e. ignoring its sign), this factor will reduce the magnitude of the Doppler shift
unless θ = 0° or 180°.
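The 2 kHz figure quoted above is easily reproduced. A minimal sketch of the Doppler equation, assuming c = 1540 m s⁻¹ for soft tissue:

import math

def doppler_shift(v, f_e, theta_deg=0.0, c=1540.0):
    # f_D = 2 v cos(theta) f_e / c, with v in m/s, f_e in Hz, c in m/s.
    return 2 * v * math.cos(math.radians(theta_deg)) * f_e / c

f_d = doppler_shift(v=0.5, f_e=3e6)              # 0.5 m/s directly towards a 3 MHz beam
print(f"Doppler shift ≈ {f_d:.0f} Hz")           # ≈ 1950 Hz, i.e. about 2 kHz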
In practice, the Doppler equation is rearranged to have v or v cos θ on the left-hand side,
because the purpose of most Doppler ultrasound systems is to present information about
blood movement from their estimate of f D in the incoming echoes.
The cos θ factor in the general Doppler equation represents a significant limitation of
Doppler ultrasound. If the angle θ is not known, the true velocity of the blood flow cannot
be estimated, only the component v cos θ. Furthermore, if 60° < θ < 120°, that is, the magni-
tude of cos θ < 0.5, the system’s estimate of f D will not be sufficiently accurate for a reliable
calculation of v or v cos θ. The small component of motion producing Doppler shift will be
too error-prone. Indeed, if θ is close to 90°, then there will be no Doppler shift and no indi-
cation that any blood is flowing.
It is the function of Doppler systems to impart information about the Doppler spectrum,
its variation with time, and its point(s) of origin within the body. There is a variety of sys-
tems that convey different aspects of this information.
FIGURE 15.40
Main components of a continuous-wave (CW) Doppler system. The angle between the transmission (T) and
reception (R) transducers is chosen to produce a short superficial cross-over region or one that extends deeper,
as required. Moving blood within the cross-over region will result in a Doppler signal, as described in the
text.
FIGURE 15.41
A typical Doppler spectrogram due to pulsatile blood velocity in an artery. Distance above or below the base
line represents positive or negative Doppler frequency, respectively. A grey scale or colour scale is used to indi-
cate the relative power of the Doppler signal at each Doppler frequency.
frequencies, but that spectrum is repeated along the frequency axis at intervals equal to fs.
In effect, however, the spectrum analyser ignores these extra frequency components and
assumes that the frequency content of the signal is confined to the principal interval.
The frequency resolution Δf achieved by spectrum analysis is equal to the inverse of
the duration T of the signal taken for analysis, thus Δ f = 1/T. Doppler systems divide the
Doppler signal into segments of typical duration 10 ms. A longer segment cannot usually
be used because the blood flow in an artery will be changing over this sort of timescale,
and it is not meaningful to analyse the spectrum of a segment while that spectrum is
changing. The resulting 100 Hz resolution is not good enough for a detailed spectral histo-
gram, but Fourier theory allows the segment to be extended artificially by adding periods
of ‘silence’ to either end of it, and thus a more appropriate frequency resolution is achieved.
Spectral analysis must be continuously repeated on new 10 ms segments of the Doppler
signal to provide the real-time spectrogram.
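The relationship between segment length and frequency resolution, and the effect of 'adding silence' (zero-padding) before Fourier analysis, can be sketched as follows. The sampling rate and segment length are assumed, illustrative values; note that zero-padding gives finer sampling of the spectrum rather than genuinely new information.

import numpy as np

fs = 20_000                       # Doppler-signal sampling rate, Hz (assumed)
T = 0.010                         # analysis segment length, s
n = int(fs * T)                   # 200 samples per 10 ms segment

print(f"intrinsic resolution: Δf = 1/T = {1/T:.0f} Hz")

# Zero-padding the segment to four times its length before the FFT
segment = np.random.default_rng(1).standard_normal(n)     # stand-in Doppler segment
spectrum = np.fft.rfft(segment, n=4 * n)
print(f"zero-padded FFT bin spacing = {fs / (4 * n):.0f} Hz")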
15.17.5 Pulsed-Wave Spectral Doppler, Duplex Scanners and the Aliasing Artefact
In many applications it is desirable to be able to observe Doppler signals from a specific
location in the body, and pulsed-wave Doppler allows this. It is usually combined with
B-mode scanning in Duplex mode, so that the location of interest can first be identified on
the B-mode image. The operator chooses a suitable direction on the image for a Doppler
beam to travel from the array probe through the region of interest, and then, along that
direction at the appropriate depth, marks a sample volume covering an axial range of typi-
cally 1–5 mm (Figure 15.42a). The chosen Doppler beam axis does not have to be normal
to the face of the array probe. Many systems allow it to be steered away from the normal
using phased array principles, if that provides a more suitable angle of intersection with
the blood flow that is being investigated.
The operator then selects pulsed-wave Doppler mode and the B-mode scanning emis-
sions are interleaved with Doppler emissions in the chosen direction. Usually, the same
group of transducer elements that emits the Doppler pulses is used to receive their echoes,
in conjunction with the digital beamformer (Section 15.9.2), but only those echoes from the
chosen range interval are admitted to the receiver. A demodulator (Section 15.17.2) then
converts the received RF signal to a baseband signal containing the Doppler frequencies.
As in a CW system, this can be fed to a loudspeaker and, after spectrum analysis, to a
spectral display (Figure 15.42b).
Although demodulation and spectral analysis of pulsed-wave Doppler appears similar
in overview and implementation to that of CW Doppler, the theory is more complicated.
First, the emitted pulse is of short duration, to allow the echoes from the chosen range
interval to be discriminated. It may be longer than a B-mode pulse, but it would not be use-
ful if longer than typically 10 µs. However, demodulated echoes even of this duration are
much too short to allow useful spectral resolution (Section 15.17.4). It is therefore necessary
to emit a sequence of pulses in the chosen direction, obtain a sequence of echoes from the
sample volume, and process that sequence as a single long-duration entity. Although the
sequence of echoes gives an interrupted rather than continuous demodulated Doppler sig-
nal, it can be made long enough to allow adequate spectral resolution. Typically, a sequence
of 100 pulses might be used, with a total duration of the order of 10 ms which is the same
as that used in CW spectral analysis.
The fact that the demodulated Doppler signal is interrupted means that it is effectively
sampled, and therefore its spectrum repeats at frequency intervals equal to the sampling
frequency fs. In this case, fs = PRF, the pulse repetition frequency of the emitted
FIGURE 15.42
A duplex ultrasound display consists of (a) a B-mode image, allowing the operator to select the sample volume,
and (b) the spectrogram obtained from the sample volume. In this case, the B-mode image is of a foetus in utero,
and the sample volume is on the umbilical artery. The operator has aligned a cursor with the blood flow, allow-
ing the machine to measure the angle θ in the Doppler equation, and hence indicate blood speed on the spectro-
gram rather than just f D. The scanner has automatically drawn the envelope of the spectrogram, and identified
the systolic (S), diastolic (D) and mean (M) Doppler shifts, which are involved in calculating pulsatility index
(PI) and resistance index (RI—see text). The results are shown at the left, and some of the spectral Doppler set-
tings at the right.
pulse-sequence. The PRF is limited by the need to wait for echoes to return from the deepest
part of the B-mode image, in case the Doppler sample volume was placed there. It
might be much less than the typical 20 kHz required to satisfy Nyquist’s sampling theorem
for Doppler signals. The consequence is that the principal frequency interval of Doppler
shifts that can be evaluated by pulsed-wave Doppler is much more restricted than that by
CW Doppler. Thus aliasing (see Sections 8.4.2 and 17.3.1) is more likely, and will result in
an artefact. Any Doppler frequencies f D that are outside the principal frequency interval
defined by the Nyquist limits ±½ PRF are inadequately sampled, and will be misinter-
preted (aliased) by the spectrum analyser, that is, they will be reconstructed with errone-
ous frequencies f D. For example, as soon as an increasing Doppler frequency exceeds
+½ PRF, the displayed frequency abruptly jumps to just above –½ PRF (see Figure 15.43),
that is, the displayed frequency is reduced by an amount equal to the PRF.
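The wrap-around described above can be written as a single rule: the displayed frequency is the true Doppler shift brought back into the interval from −½ PRF to +½ PRF by adding or subtracting whole multiples of the PRF. A minimal Python sketch with an assumed PRF:

def displayed_doppler(f_d, prf):
    # Alias a true Doppler shift f_d into the principal interval [-PRF/2, +PRF/2).
    return ((f_d + prf / 2) % prf) - prf / 2

prf = 4000.0                                     # Hz (assumed)
for f_d in (1000.0, 1900.0, 2100.0, 3000.0):
    print(f"true {f_d:6.0f} Hz -> displayed {displayed_doppler(f_d, prf):7.0f} Hz")

As soon as the true shift exceeds +2000 Hz (half the assumed PRF), the displayed value jumps to just above −2000 Hz, exactly as in Figure 15.43.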
Aliasing may not be a problem for moderate blood speeds in superficial blood vessels
since a high PRF can be used, but it may be harder to avoid at larger depths (e.g. deep ves-
sels, the heart and the foetus). Operators attempt to eliminate aliasing in various ways,
such as increasing the PRF of the Doppler emissions. Specialised machines for vascular
and cardiac applications have CW Doppler or high-PRF modes available to overcome
aliasing.
The final complication of pulsed-wave Doppler processing is that the emitted pulse is
not a single frequency as it was for CW. It contains a broad band of frequencies fe, and all
FIGURE 15.43
The aliasing artefact. In this example the PRF of Doppler pulses is 2 kHz, allowing only the range of Doppler
shifts between ±1 kHz to be accurately displayed. The positive Doppler shifts at peak systole are outside this
range, and are aliased into the wrong vertical position on the spectrogram.
of these are subject to Doppler shift according to the Doppler equation. Interestingly, both
demodulation and Doppler spectral analysis are possible with these broadband Doppler-
shifted received echoes. The sequence of broadband pulses at the output of the demodula-
tor, forming the sampled signal containing the spectrum of Doppler shifts is similar, apart
from its lower sampling rate, to the sequence of samples of the digitised demodulated
signal in a CW system. In either case the spectrum of this sampled signal consists of the
spectrum of Doppler shifts, repeated at frequency intervals of fs, as far as the frequency-
limit (bandwidth) of the individual samples; and the spectrum analyser displays only that
part within the principal interval.
A simpler view of the functions of the demodulator and spectrum analyser is that they
are correlating the received broadband echoes with the broadband emissions. Due to the
Doppler effect, the former are slightly shifted in frequency by all the frequency shifts of
the Doppler spectrum. Using the unshifted emission pulses as a reference, the correlation
process is able to identify all these Doppler shifts, and also their relative strengths, that is,
it can evaluate the Doppler spectrum.
An advantage of Duplex scanning is that it allows the machine to display blood speed
rather than Doppler frequency. To do this the operator aligns an ‘angle cursor’ with the
vessel or lumen axis, thereby allowing the machine to measure the angle θ needed to find
v from f D using the Doppler equation.
beam to the blood flow, since it is constant with time, and the required ratio of blood
speeds is the same as that of the corresponding Doppler shifts on the spectrogram. Thus
the pulsatility index (PI) is defined as the difference between the maximum (‘systolic’ S) and
the minimum (‘diastolic’ D) Doppler shifts in a cardiac cycle divided by the mean Doppler
shift (M) over a cardiac cycle (see Figure 15.42b). It is used in the lower limb, for example, to
assess proximal disease on the basis of the waveform damping it produces. The resistance
index (RI) is a similar ratio, except that the denominator is the maximum rather than the
mean Doppler shift. It is used to assess the resistance of the distal vasculature to blood
flow.
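Both indices are simple ratios taken from the spectrogram envelope. A minimal sketch, using illustrative values of the same order as those shown in Figure 15.42b:

def pulsatility_index(s, d, m):
    # PI = (systolic - diastolic) / mean Doppler shift (or velocity)
    return (s - d) / m

def resistance_index(s, d):
    # RI = (systolic - diastolic) / systolic Doppler shift (or velocity)
    return (s - d) / s

s, d, m = 48.0, 19.5, 29.0         # illustrative peak, minimum and mean speeds, cm/s
print(f"PI = {pulsatility_index(s, d, m):.2f}")   # about 1.0
print(f"RI = {resistance_index(s, d):.2f}")       # about 0.6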
FIGURE 15.44
(See colour insert.) Doppler ultrasound images showing vasculature in the kidney. There are two main ways
of mapping Doppler shifts detected within the ‘colour box’ on the B-mode image: (a) A colour flow map (CFM)
shows mean Doppler shifts at each point; (b) A power Doppler map shows the strength of Doppler-shifted
echoes at each point. Notice the reference colour bars to the left of each image.
The power is related to the amount of blood within each sample volume, that is, the per-
fusion at that point. The time needed to acquire the Doppler information means that only
a restricted area of the B-mode field of view can be interrogated for Doppler information.
The position and size of this ‘colour box’ is controlled by the operator.
A sequence of adjacent pulsed Doppler beams is emitted into the tissue defined by the
colour box, each beam having contiguous sample volumes along it. Thus each sample vol-
ume corresponds to a ‘Doppler pixel’ in the colour box. The mean frequency or power
estimates are stored in a Doppler image memory, where each stored number determines
the colour of the corresponding pixel.
If the mean Doppler frequency for each Doppler pixel is represented, the method is called
colour flow mapping (CFM) or colour flow imaging (CFI) (see Figure 15.44a). Although this is a
pulsed Doppler technique, the processing after the demodulator and wall filter is differ-
ent from that in spectral Doppler. Instead of a spectrum analyser, a simpler autocorrelator
assesses the phase shift between pairs of consecutive echoes from a sample volume to esti-
mate Doppler shift. By averaging its estimate from, say, nine such pairs in a sequence of 10
echoes from each sample volume, the mean Doppler shift is obtained. This process is an
order of magnitude faster than spectral analysis which requires around 100 pulse echoes
from each sample volume. Furthermore, a single sequence of 10 pulse emissions along a
particular Doppler line within the colour box provides a sequence of echoes from all sam-
ple volumes along that line. As a result of this efficiency, real-time CFM can be achieved.
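The phase-shift (autocorrelation) estimate described above can be sketched in a few lines of NumPy. The demodulated echoes from one sample volume are represented here as complex (in-phase/quadrature) samples, one per pulse; the mean Doppler shift follows from the average phase change between consecutive samples. All numbers are assumed for illustration.

import numpy as np

prf = 4000.0                       # pulse repetition frequency, Hz (assumed)
f_d_true = 600.0                   # true mean Doppler shift, Hz (assumed)
n_pulses = 10

rng = np.random.default_rng(2)
k = np.arange(n_pulses)
# One complex demodulated sample per pulse from the sample volume, plus a little noise
samples = (np.exp(2j * np.pi * f_d_true * k / prf)
           + 0.05 * (rng.standard_normal(n_pulses) + 1j * rng.standard_normal(n_pulses)))

# Lag-one autocorrelation: phase shift averaged over the nine consecutive pairs
r1 = np.mean(samples[1:] * np.conj(samples[:-1]))
f_d_est = np.angle(r1) * prf / (2 * np.pi)       # phase change per pulse -> Doppler shift
print(f"estimated mean Doppler shift ≈ {f_d_est:.0f} Hz")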
Scanning of the colour box alternates with B-mode sweeps of the whole image sector.
A colour scale at the side of the image shows how colour represents Doppler shift. There
is no universal convention, but a typical colour ‘map’ might use red through to yellow to
indicate increasing positive Doppler frequencies (flow towards the probe) and dark blue
through to light blue to indicate increasingly negative Doppler shifts (flow away from the
probe). Zero Doppler shift is usually represented by black.
Although useful clinically, CFM has limitations. Using only 10 echoes from each sam-
ple volume rather than 100 means that the accuracy and resolution (Section 15.17.4) of its
Doppler-frequency estimate is not as good as that achieved by spectral Doppler. Remember
also that the colours represent v cos θ, and there is no practicable way of correcting for the
many different directions in which blood might be flowing at different points in the col-
our box. There may also be blood flow that produces no colour because it is at right angles
to the scan line. Because of its poor frequency resolution, CFM may not display slow flow,
being unable to distinguish it from zero Doppler shift.
Since CFM is a pulsed technique, there is also the possibility of aliasing (Section 15.17.5).
If the Doppler frequency exceeds the Nyquist limit of half the PRF, the displayed colour
will abruptly jump from that at one end of the colour scale to that at the opposite end.
For all these reasons, CFM does not provide a complete image of blood flow within the
colour box, and should be regarded as a qualitative rather than a quantitative technique. It
is usually used to image flow in discrete blood vessels or in the heart, where blood regur-
gitating through leaking valves or spurting through septal defects shows clearly as flow
(colour) in an unusual direction.
Some CFM systems offer the option of showing the variability of the Doppler shift from
each sample volume, by mixing an appropriate strength of another colour such as green.
A large variability indicates turbulence in the blood flow, perhaps the sign of stenosis in
the vessel.
Tissue Doppler imaging is the use of CFM to visualize soft tissue movement rather
than blood flow, for example, to study heart muscle movements. In this case the filters are
changed to preserve the soft tissue Doppler signals and reject those from blood.
If the power of the Doppler signal is mapped throughout the colour box, this is called
power Doppler mode (see Figure 15.44b). Again, a colour scale is provided, but it is not
related to Doppler shift—it simply indicates the power of the Doppler-shifted echoes at
each point within the colour box. Power Doppler therefore shows where there is flow-
ing blood, and how much, but gives no indication of direction or speed. It gives a clearer
indication of perfusion than does CFM, since it does not produce a confusing mixture of
colours. It also gives a more complete image of perfusion, being more sensitive to weak
and slow flow. Even flow at right angles to the ultrasound beam can usually be seen, since
the convergence or divergence of the beam means at least part of the beam will be non-per-
pendicular to the flow and will produce a Doppler signal with measurable power. Power
Doppler is not subject to aliasing.
the second or third trimester, when bone is present. For adult trans-cranial imaging, with
bone close to the probe, the cranial thermal index TIC is used.
Cavitation refers to the energetic behaviour of small gas bubbles in a liquid medium due
to the pressure variations of a sound wave. Stable cavitation (sometimes called non-inertial
cavitation) refers to the vigorous radial oscillations of bubbles when the frequency of the
sound is close to the resonant frequency of the bubble. Stable cavitation is considered to be
a useful mode of action of ultrasound in physiotherapy, but is not considered relevant to
diagnostic ultrasound. Diagnostic pulses are too short to establish it, and CW Doppler has
too small a pressure amplitude.
Inertial or transient cavitation is a more violent and damaging form of cavitation that
can be produced even by short pulses, provided the acoustic pressure amplitude is large
enough. Here the bubble increases to many times its original size within one or two high
negative-pressure excursions (rarefactions) and then violently implodes a short time later
(usually around 1 ms). During the rapid collapse the bubble temperature may rise to sev-
eral thousand degrees. Visible and ultraviolet light is emitted, shock waves are generated,
and water is dissociated into free radicals. If the bubble is near a surface, for example, a
blood vessel wall, the rapid inrush of liquid occurring during the bubble collapse can
cause impact damage to the surface. Collapse cavitation can cause tissue damage and cell
death, a fact that is used to advantage in ultrasonic sterilising tanks, and some forms of
therapeutic ultrasound.
Cavitation is more likely to occur at low frequencies than high. Theory suggests that the
probabilistic threshold for inertial cavitation is described by the mechanical index (MI):
$$ \mathrm{MI} \;=\; \frac{p_-}{\sqrt{f}} $$
For this index alone, the peak negative pressure p– is expressed in MPa and the frequency
f in MHz. (p– is also derated, that is, calculated allowing for a nominal tissue attenuation
of 0.3 dB cm⁻¹ MHz⁻¹.) MI is displayed on machines as a continuously updated on-screen
indicator of cavitation hazard, along with the TI mentioned above.
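The on-screen value is a straightforward calculation once the derated peak negative pressure is known. A minimal sketch with assumed numbers, applying the derating simply as the nominal 0.3 dB cm⁻¹ MHz⁻¹ attenuation described above:

import math

def derated_pressure(p_mpa, depth_cm, f_mhz, alpha_db=0.3):
    # Reduce a measured peak pressure by the nominal tissue attenuation.
    loss_db = alpha_db * depth_cm * f_mhz
    return p_mpa * 10 ** (-loss_db / 20)

def mechanical_index(p_neg_mpa, f_mhz):
    # MI = derated peak negative pressure (MPa) divided by the square root of f (MHz)
    return p_neg_mpa / math.sqrt(f_mhz)

p_derated = derated_pressure(p_mpa=2.0, depth_cm=5.0, f_mhz=3.0)   # assumed values
print(f"derated p- ≈ {p_derated:.2f} MPa, MI ≈ {mechanical_index(p_derated, 3.0):.2f}")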
Because the likelihood of cavitation depends on many factors, it is not easy to determine
exact values of MI at which damage may occur. Diagnostic ultrasound is not believed to
be capable of producing cavitation in vivo unless gas-bubble nuclei are already present. For
example, cavitation-like damage from diagnostic levels of ultrasound has been demon-
strated in animal tissues containing gas cavities, such as the lung and bowel. It is not felt that
bubble growth will occur for MI < 0.7, and this is often regarded as an unconditionally safe
value. However, the collapse of contrast-agent bubbles (see Section 15.14) can be provoked at
MI = 0.3, and so this value is used as a precautionary level if a contrast agent is in use.
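The on-screen MI is a simple calculation. The following Python sketch evaluates the index and compares it with the 0.7 and 0.3 levels discussed above; the 0.8 MPa derated peak rarefaction pressure and 3.5 MHz frequency are illustrative assumptions, not values taken from the text.

```python
import math

def mechanical_index(p_neg_mpa, f_mhz):
    """MI: derated peak negative pressure (MPa) divided by the square root of frequency (MHz)."""
    return p_neg_mpa / math.sqrt(f_mhz)

# Illustrative values: 0.8 MPa derated peak rarefaction pressure at 3.5 MHz
mi = mechanical_index(0.8, 3.5)
print(f"MI = {mi:.2f}")
print("below 0.7 (bubble-growth threshold)?  ", mi < 0.7)
print("below 0.3 (contrast-agent precaution)?", mi < 0.3)
```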
Radiation force is a direct mechanical force produced by ultrasound on any object that
has a smaller intensity on one side than on the other. It usually acts in the direction in
which the ultrasound is propagating. It can produce bulk motion (streaming) of an absorb-
ing liquid, due to the radiation force acting on each small region of the liquid. Streaming
can sometimes be observed in real-time ultrasound images of cysts or other liquid masses
containing particulate matter, particularly if spectral or colour flow Doppler is being used.
Streaming speed depends on the local intensity of ultrasound and the energy-absorption
of the fluid which is itself a function of the ultrasound frequency. The speed is typically
only a few centimetres per second, but cells caught up in it and those at the walls of the
containing structure will be subject to mechanical stresses.
At the moment, radiation force is not thought to cause harm, but it does provide a way
of measuring the total power of an ultrasound emission. A target of sufficient size and
composition that it totally reflects or absorbs the emission experiences a radiation force
proportional to the total power of the ultrasound.
Other biological effects that are of increasing interest seem to be due to the vibration of
the medium by ultrasound. The effects are not properly understood, but are believed to
involve the cells of tissue. They are believed to occur at low intensities such as might be
associated with diagnostic ultrasound, and may be beneficial (Martin 2009). However, the
research has mainly been carried out with physiotherapy ultrasound equipment which
emits longer-duration pulses than diagnostic systems. Whilst any biological effect of a
diagnostic modality would be undesirable, at least this one, if it is produced by the short
pulses of diagnostic ultrasound, is not considered harmful.
Spectral Doppler should not be used in early pregnancy. If it is used over the foetal or neonatal
skull or spine, the TIB should be controlled to an appropriate level, and the TIS should be controlled when scanning the adult eye. In
scanned modes, selecting a deep write zoom box or a deep transmit focus can often mean
an increase in output. The instruction manual for the scanner should contain advice on
how to set the controls to reduce output.
Conclusion
Diagnostic ultrasound is now a widely used tool in medicine, with the total number of
examinations carried out per year probably exceeding that of general radiography. It pro-
vides various types of imaging and measurement of the body, for example of anatomy
in two and three dimensions, of abnormally stiff tissue (“elastography”), and of internal
blood flow using the Doppler effect. All are based on pulse-echo techniques and are pro-
vided in real time. Contrast agents are available to enhance the imaging of vascular or
other fluid-filled structures.
The techniques are free from the harmful effects associated with even low doses of ion-
ising radiation (although the production of free radicals is theoretically possible via a pro-
cess called inertial cavitation). The equipment is also relatively cheap, and mobile or even
portable. Thus it can be readily available both within and outside radiology departments.
From the start of its serious clinical application around 1970, the techniques and tech-
nology have developed significantly, and so therefore has the quality of the information
provided, and the range of application. This development shows no sign of slowing.
Potential disadvantages are that skill is required in positioning and manipulating the
probes and in interpreting the images, which are prone to artefacts. The operator is also
responsible for minimising the risks due to warming or mechanical disruption of tissue.
Nevertheless, in the hands of properly-trained operators, diagnostic ultrasound is consid-
ered to be safe and of great benefit to society.
References
Behar V. and Adam D., Parameter optimization of pulse compression in ultrasound imaging systems
with coded excitation, Ultrasonics, 42(10), 1101–1109, 2004.
Claudon P., EFSUMB Study Group, Guidelines and good clinical practice recommendations for con-
trast enhanced ultrasound (CEUS) – Update 2008. See http://www.efsumb.org/mediafiles01/
ceus-guidelines2008.pdf or Eur J Ultrasound (Ultraschall in Med), 29(1), 28–44, 2008.
Duck F.A., Physical Properties of Tissue, Academic Press Ltd., London, 1990.
Duck F.A., Nonlinear acoustics in diagnostic ultrasound, Ultrasound in Med & Biol, 28(1), 1–18, 2002.
Garra B.S., Imaging and estimation of tissue elasticity by ultrasound, Ultrasound Quart, 23(4),
255–268, 2007.
Hoskins P.R., Thrush A., Martin K., and Whittingham T.A. (eds.), Diagnostic Ultrasound: Physics and
Equipment, Greenwich Medical Media, London, 2003.
Lanza G.M. and Wickline S.A., Targeted ultrasonic contrast agents for molecular imaging and ther-
apy, Prog Cardiovasc Dis, 44(1), 13–31, 2001.
Martin E., The cellular bioeffects of low intensity ultrasound, Ultrasound, 17(4), 214–219, 2009.
Nowicki A., Litniewski J., Secomski W., Lewin P. A., and Trots I., Estimation of ultrasonic attenuation
in a bone using coded excitation, Ultrasonics, 41(8), 615–621, 2003.
Quaia E. (ed.), Contrast Media in Ultrasonography: Basic Principles and Clinical Applications, Springer
Verlag, 2005.
ter Haar G. and Duck F.A. (eds.), The Safe Use of Ultrasound in Medical Diagnosis, British Institute of
Radiology, London, 2000.
Tranquart F., Grenier N., Eder V., and Pourcelot L., Clinical use of ultrasound tissue harmonic imag-
ing, Ultrasound in Med. & Biol. 25(6), 889–894, 1999.
Acknowledgements
Andrew Fairhead, who revised this chapter, wishes to thank Tony Whittingham, the original
author, for laying such good foundations, not only in his writing, but also in the education of
countless students of ultrasound. Grateful thanks also to Eleanor Hutcheon for help with the
illustrations, and Manjula Sreedharan and Kathleen Anderson for providing images.
Exercises
1. Explain what actually travels when a sound wave propagates. Why is a short burst
of vibration the essential waveform for diagnostic applications?
2. Why is the low megahertz range used for ultrasonic imaging? Give, with reasons,
the frequencies and transmission focal lengths you would choose to make ultra-
sound examinations of (a) an eye, and (b) a liver.
3. Describe two ways in which values of the characteristic acoustic impedance of
body components are significant to diagnostic ultrasound. Why is it necessary to
use gel between the probe and the patient?
4. A source of pulsed ultrasound and a target are separated by normal soft tissue.
Discuss the effect of each of the following on the amplitude of the ultrasonic pulse
reflected back to the source:
(a) The size and shape of the target
(b) The characteristic acoustic impedances of the target substance and interven-
ing tissue
(c) The distance between source and target
(d) The range to which the transmission focus has been set
(e) The frequency of the ultrasound
(f) The acoustic power of the source
5. What is the purpose of the backing layer, the impedance-matching layer and the
lens in a typical ultrasound transducer? What would be the consequences for
image quality of leaving out each of these in turn?
6. Explain briefly how a focussed beam of ultrasound can be obtained from the excitation
of a number of transducer elements in a linear array probe. Why is the active group of
a linear array progressively enlarged during dynamic focussing in reception?
7. Show that, in the scan plane, the beam of a single 1 mm wide element of a 3 MHz
linear array has a near field length of only about 0.5 mm and then diverges between
the angles +30° and –30° to the axis. How is the beam modified if it is produced by
an active group of 20 adjacent elements?
8. Describe the different types of B-mode real-time scanning probe, and list their
advantages and disadvantages. What might be the advantage of having probes
that are applied internally rather than to the outside of the body?
9. Describe the function of the TGC facility in an ultrasonic scanner. In what way
should this be regarded as artefact-correction? Which two artefacts can be caused
by inappropriate operation of the TGC?
10. Out of the following B-mode controls—transmission focus, depth of field of view,
overall gain, output power, TGC, write zoom, dynamic range, frame averaging—
which affect the lateral resolution, which affect contrast resolution, which affect
sensitivity and which affect temporal resolution?
11. Describe and explain a speckle pattern, as seen in a B-mode image. Which fea-
tures of such a pattern are determined by the tissue and which by the machine?
12. Describe and explain two artefacts that cause the image echoes of genuine targets
to be shown at incorrect positions.
13. Explain the cause and consequences of beam aberration in superficial tissue. Why
is it difficult to correct for this aberration?
14. List the artefacts that tissue-harmonic imaging reduces, and in each case explain
how the reduction works.
15. Explain the Doppler effect in medical ultrasound. What is the importance of the fol-
lowing parts of a Doppler receiver—(a) the demodulator; (b) the high-pass filter?
16. Distinguish between the terms: Doppler shift, Doppler signal, Doppler frequency,
Doppler power, Doppler spectrum, Doppler waveform.
17. Describe how a pulsed Doppler device isolates the Doppler signals from a partic-
ular depth interval. What limitation is associated with Doppler frequency shift
measurement by pulsed Doppler?
18. Why does CFM mode involve a lower frame rate than B-mode? Why is it possible
for the same blood speed to be represented by different colours on the same CFM
image? What effect does aliasing have on a CFM image? Can aliasing occur in
power Doppler mode?
19. Explain what is meant by the TI and the MI, as displayed on the screens of scan-
ners. How are these numbers meant to help the operator assess the possible hazards
associated with a scan?
20. Which ultrasound mode has the greatest potential for thermal hazard, and
why? Describe the measures that should be taken to ensure the prudent use of
ultrasound.
16
Magnetic Resonance Imaging
Elizabeth A Moore
SUMMARY
Magnetic resonance imaging provides the best soft tissue contrast available in radiol-
ogy and is an essential part of many diagnostic workups. This chapter will explain
the following:
CONTENTS
16.1 Introduction.........................................................................................................................564
16.2 Basic Principles of Nuclear Magnetism........................................................................... 565
16.3 Effect of an External Magnetic Field................................................................................ 566
16.3.1 The Larmor Equation............................................................................................. 567
16.3.2 Net Magnetisation M0............................................................................................ 568
16.3.3 From Quantum to Classical.................................................................................. 568
16.4 Excitation and Signal Reception....................................................................................... 569
16.4.1 RF Excitation............................................................................................................ 569
16.4.2 Signal Reception..................................................................................................... 570
16.5 Relaxation Processes.......................................................................................................... 570
16.5.1 Spin-Lattice Relaxation.......................................................................................... 571
16.5.2 Spin-Spin Relaxation.............................................................................................. 571
16.5.3 Inhomogeneity Effects........................................................................................... 572
16.6 Production of Spin Echoes................................................................................................. 573
16.7 Magnetic Field Gradients................................................................................................... 575
16.7.1 Frequency Encoding Gradient and Fourier Transforms................................... 576
16.7.2 Phase Encoding Gradient...................................................................................... 577
16.7.3 Selective Excitation................................................................................................. 578
16.7.4 Review of Image Formation.................................................................................. 580
16.8 k-space or Fourier Space..................................................................................................... 580
16.1 Introduction
The phenomenon of nuclear magnetic resonance (NMR) was discovered independently
by two groups of workers in 1946 headed by Bloch at Stanford and Purcell at Harvard.
The techniques developed were primarily used to study the structure and diffusion prop-
erties of molecules and subsequently Bloch and Purcell shared the Nobel Prize for phys-
ics in 1952. In the early 1970s the idea of spatial encoding using gradients was proposed,
again by two independent researchers, Lauterbur and Mansfield, who shared the Nobel
Prize in Physiology or Medicine in 2003. The ‘nuclear’ part of the name was dropped,
leaving ‘magnetic resonance’ as the name used most often in medical imaging. Several
variants of the name are used: MRI (imaging), MRS (spectroscopy), MRA (angiography),
and so on.
This chapter concerns itself purely with the concepts of using MR for diagnostic imag-
ing. It will be necessary to use some difficult ideas from quantum mechanics to explain
the processes. However, we will keep this to a minimum, and concentrate on providing
helpful analogies from classical physics.
FIGURE 16.1
A magnetic field is generated when electric charges move along a conductor.
proton or neutron or both) will have a residual magnetic moment. In atoms with large
atomic number there are a lot of electrons around the nucleus, which tends to shield the
effect of the unpaired spins. However, the hydrogen nucleus is a single proton, and its
single orbiting electron does not provide much shielding. 1H therefore has the largest mag-
netic moment of all the elements, which is the other important factor making MRI such a
sensitive technique. From now on, we will only consider the 1H nucleus; this may also be
called the proton or a spin.
FIGURE 16.2
(Left) A spinning top precesses in the Earth’s gravitational field. Similarly the 1H nucleus precesses in an
external magnetic field (right), with precession frequency ω.
FIGURE 16.3
Spin orientations in a magnetic field B0. The parallel spin has energy ε = −½ γ (h/2π) B0 and the anti-parallel spin
has energy ε = +½ γ (h/2π) B0, so the energy difference is Δε = γ (h/2π) B0. The energy difference is proportional
to the magnetic field strength.
of B0 gives an increase in the population excess, which proportionately increases the sig-
nal to noise ratio (SNR) of the MR image.
Δε = γ (h/2π) B0    (16.1)
where γ is the gyromagnetic ratio. The gyromagnetic ratio is a constant for each nucleus,
and for protons it has a value of 2.7 × 10⁸ rad s⁻¹ T⁻¹.
It is possible to cause a transition between the two states by the emission or absorption
of a packet of electromagnetic energy (a photon). The photon’s energy ε depends on its
frequency ω, given by the equation
ε = (h/2π) |ω|    (16.2)

Equating this photon energy to the energy difference Δε between the two states gives

(h/2π) ω = γ (h/2π) B0

∴ |ω0| = γ |B0|    (16.3)
Via classical physics, the same result can be reached by considering the relationship
between the rate of angular precession of the protons and the couple (torque) exerted on
µ by B0. This important relationship is known as the Larmor equation (after Sir Joseph
Larmor, Irish physicist 1857–1942), and ω0 is called the Larmor frequency. It is more
convenient to know that γ/2π is 42.57 MHz T⁻¹, so that the Larmor frequency of protons in a 1 T
scanner is approximately 42 MHz, and at 3 T it is approximately 128 MHz. These frequen-
cies lie in the radiofrequency (RF) portion of the electromagnetic spectrum.
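As a quick numerical illustration of the Larmor equation, the sketch below evaluates f0 = (γ/2π)·B0 for a few field strengths; the particular field values are simply examples.

```python
# Larmor frequency f0 = (gamma/2pi) * B0, with gamma/2pi = 42.57 MHz/T for protons
GAMMA_BAR_MHZ_PER_T = 42.57

for b0_tesla in (0.5, 1.0, 1.5, 3.0):
    f0_mhz = GAMMA_BAR_MHZ_PER_T * b0_tesla
    print(f"B0 = {b0_tesla:3.1f} T  ->  f0 = {f0_mhz:6.1f} MHz")
```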
FIGURE 16.4
The two spin populations are randomly distributed around the z-axis (the direction of B0). Each dipole moment
has components along z and in the xy plane. The net magnetisation (M0) is along z.
physics and concentrate instead on classical models to continue the explanation of the
basics of MRI. Each spin vector in the diagrams will now represent a large collection of
protons which all happen to share the exact same Larmor frequency and the same phase.
These explanations will also make frequent use of a ‘rotating frame of reference’. Since
the protons are continuously precessing about B0, it is easier to show the changes that
happen during excitation and relaxation if we can ignore the precession. As an analogy,
the motion of a bouncing ball is much easier to describe from the earth (because we share its
rotating frame of reference) than from a distant planet, from which the description would also have to include
the rotation of the earth. Consider a frame x’-y’-z’ rotating at ω0, the Larmor frequency, about
its z’-axis, and with the z’-axis aligned with the z-axis in the laboratory frame (i.e. defined
by the direction of B0). The magnetisation at equilibrium thus appears as a static vector of
magnitude |M0| along the z’-axis.
During excitation, and especially during relaxation, it is necessary to consider compo-
nents of the magnetisation along the longitudinal direction, that is, z (z’), and in the trans-
verse plane, that is, x’ y’. These components are labelled Mz and Mxy, respectively.
16.4.1 RF Excitation
The RF magnetic field is circularly polarised, so that it creates an additional magnetic field
B1 aligned perpendicular to B0 and rotating at ω 0. In the rotating frame, this appears as
a static magnetic field aligned, for example along the x’-axis. Just as an individual proton
precesses about B0 in the laboratory frame, so M0 begins to precess about B1 in the rotat-
ing frame (Figure 16.5). The magnitude of B1 is much smaller than |B0|, typically a few µT,
so the precession frequency is slower. After a certain time tp, M0 will be aligned along the
y’-axis, that is it has turned through exactly 90°. This amount of RF energy, measured by
|B1| and its duration tp, is called a 90° pulse, or an excitation pulse. Leaving the RF on for
double the time, or doubling the strength of the |B1| field, would turn the magnetisation
through 180°; the pulse would then be called a 180° pulse, also called a refocusing or inver-
sion pulse. Intermediate sized pulses are easily produced, and are described by the angle
through which the magnetisation is turned—the flip angle α.
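Since M0 precesses about B1 at a rate γ|B1|, the flip angle for a simple rectangular B1 envelope is α = γ·B1·tp: doubling either the amplitude or the duration doubles the angle, as described above. The sketch below illustrates this; the 5.8 µT and 1 ms figures are invented for illustration, and real pulses are shaped, so in practice scanners calibrate the flip angle rather than compute it this way.

```python
import math

GAMMA = 2.7e8  # gyromagnetic ratio of the proton, rad s^-1 T^-1 (value quoted earlier)

def flip_angle_deg(b1_tesla, tp_seconds):
    """Flip angle for a rectangular B1 envelope: alpha = gamma * B1 * tp."""
    return math.degrees(GAMMA * b1_tesla * tp_seconds)

# Illustrative: a 5.8 uT B1 field applied for 1 ms gives approximately a 90 degree pulse
print(f"flip angle = {flip_angle_deg(5.8e-6, 1e-3):.0f} degrees")
```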
For those interested in quantum physics, the RF field has exactly the right energy to
stimulate transitions between energy states. Protons in the parallel state can absorb energy
to jump to the anti-parallel state; those in the (higher energy) anti-parallel state can be
stimulated to give up some energy and drop to the parallel state. Both transitions happen
with equal probability, but since there are more spin-up protons at equilibrium, there will
be a net absorption of energy.
FIGURE 16.5
In the rotating frame M0 moves around the magnetic portion (B1) of the applied RF pulse.
When the energy states have equal populations, there will be no net magnetisation in
the z’ direction. However, using quantum physics it is difficult to explain how M0 rotates,
and in particular how the phase of the protons is affected by the RF energy. In addition to
‘flipping’ M0 through the desired flip angle, the RF wave has the effect of bringing all the
protons into phase with each other. Once again we leave the quantum physics explanation
and return to the more useful classical concepts.
FIGURE 16.6
The free induction decay (FID): signal versus time. The signal oscillates at ω0 and decays exponentially to zero
(dotted line).
relaxation, respectively. Relaxation times are characteristic of particular tissues, and have a
wide range in vivo, spanning 2 or 3 orders of magnitude. These properties can be exploited
to control the contrast in the image, by manipulating a series of RF pulses.
Mz = M0 [1 − exp(−t/T1)]    (16.4)
where t is time and T1 is known as the spin-lattice relaxation time (Figure 16.7a). This is a
variant of the exponential decay equation discussed in Section 1.5, except that the variable, Mz,
increases with time. In tissues T1 is typically of the order of a few hundred milliseconds
(Table 16.1). It becomes very long in solids (such as cortical bone) or in free water, gets lon-
ger with increasing temperature, and also increases with |B0|.
Mxy = M0 exp(−t/T2)    (16.5)
FIGURE 16.7
(Left) spin lattice relaxation time (T1) is the time taken for 63% of |M0| to recover along the z-axis; the curve is
Mz/M0 = 1 − exp(−t/T1). (Right) spin-spin relaxation time (T2) describes the exponential decay of the transverse
magnetisation, Mxy/Mxy,max = exp(−t/T2).
TABLE 16.1
T1 and T2 Values at Commonly Used Field Strengths for In Vivo Human Tissues

Tissue          T1 (ms), 1.5 T   T1 (ms), 3.0 T   T2 (ms), 1.5 T   T2 (ms), 3.0 T
White matter    560              832              82               110
Grey matter     1100             1331             92               80
CSF             2060             3700             –                –
Muscle          1075             898              33               29
Fat             200              382              –                68
Liver           570              809              –                34
Spleen          1025             1328             –                61
The value of T2, like T1, depends on the structure of the environment, and particularly on
the mobility of protons. In vivo T2 has a similar range to T1, ranging from tens to hundreds
of milliseconds. It is very short in solids (e.g. cortical bone), and much longer in free fluids.
It also increases with temperature, and increases slightly with |B0| up to 1.5 T, then tends
to decrease at high field strengths (Table 16.1).
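Equations 16.4 and 16.5 are easy to evaluate numerically. The sketch below uses the 1.5 T white matter values from Table 16.1 and shows how quickly Mxy disappears compared with the slow recovery of Mz, the point made again in the Insight box below.

```python
import math

def mz(t_ms, t1_ms):
    """Longitudinal recovery after a 90 degree pulse (Equation 16.4), as a fraction of M0."""
    return 1.0 - math.exp(-t_ms / t1_ms)

def mxy(t_ms, t2_ms):
    """Transverse decay (Equation 16.5), as a fraction of M0."""
    return math.exp(-t_ms / t2_ms)

# White matter at 1.5 T (Table 16.1): T1 = 560 ms, T2 = 82 ms
for t in (10, 50, 100, 500, 2000):
    print(f"t = {t:4d} ms   Mz/M0 = {mz(t, 560):.2f}   Mxy/M0 = {mxy(t, 82):.2f}")
```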
relaxation time, and is calculated by combining the magnetic field inhomogeneity ΔB0
with T2:
1/T2* = 1/T2 + γΔB0/2    (16.6)
In modern MR systems, ΔB0 is often very small; however, the human body introduces its
own inhomogeneity when the patient is placed in the system. This is caused by the vary-
ing diamagnetic and paramagnetic properties of different tissues, and ΔB0 will be partic-
ularly bad wherever there is a pocket of air (bony sinuses, lungs, bowels) or dense bone
(posterior fossa, vertebral column, etc.).
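Equation 16.6 can be used to estimate how strongly an inhomogeneity shortens the observed decay. The sketch below combines the 1.5 T grey matter T2 from Table 16.1 with an assumed inhomogeneity of 0.05 ppm of 1.5 T; the ΔB0 figure is purely illustrative.

```python
GAMMA = 2.7e8  # rad s^-1 T^-1 for protons

def t2_star_s(t2_s, delta_b0_tesla):
    """Effective transverse relaxation time from Equation 16.6: 1/T2* = 1/T2 + gamma*dB0/2."""
    return 1.0 / (1.0 / t2_s + GAMMA * delta_b0_tesla / 2.0)

# Grey matter at 1.5 T: T2 = 92 ms (Table 16.1); assumed inhomogeneity of 0.05 ppm of 1.5 T
delta_b0 = 0.05e-6 * 1.5
print(f"T2* = {1e3 * t2_star_s(0.092, delta_b0):.0f} ms")
```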
Insight
Understanding T1 and T2 Together
In many textbooks, T1 and T2 graphs are only shown separately. This can lead to the impression
that the relaxation processes occur on similar time scales, whereas in fact T1 is typically 10 times
longer than T2. If T1 and T2 are shown together (Figure 16.8), it can be seen that Mxy (the signal
which we can measure) rapidly decays. However, it takes much longer for Mz to recover back to
full equilibrium. The net magnetisation therefore has a magnitude less than |M0| for a long time;
if another excitation pulse is applied during this time, only the current Mz is flipped into the xy
plane to create the next signal.
FIGURE 16.8
T1 recovery of Mz and T2 decay of Mxy shown on the same graph (|M| versus time). Notice that the total
magnetisation is not equal to |M0| at all times.
FIGURE 16.9
(a) The initial 90° pulse flips the magnetisation into the transverse plane, where (b) it begins to dephase. (c) At
time TE/2 the 180° pulse (B1) is applied which flips all the magnetisation over. (d) The protons now begin to
rephase, producing a spin echo (e) at time TE.
now. Immediately after the 90° pulse Mxy = |M0|, assuming the system was in equilibrium
before excitation (Figure 16.9a). For a short time TE/2 Mxy decays as usual with T2* relax-
ation time producing a ‘fan’ of vectors in the transverse plane (Figure 16.9b). At this point
the 180° pulse is applied along the y’-axis, flipping the fan over (Figure 16.9c). Protons
which were ahead of |ω 0| are suddenly behind |ω 0|, but they are in the same physical
location and are experiencing the same ΔB0. They continue to precess faster than ω 0, but
now they appear to be catching up instead of pulling ahead. Similarly, protons which were
falling behind find themselves ahead of the y-axis. Still with the same |ω| thanks to their
local |B0|, they begin to lose their advantage and fall back towards the y’-axis. The protons
are rephasing instead of dephasing (Figure 16.9d). After a further time TE/2, they will
come back exactly into phase along the y’-axis, forming the spin echo (Figure 16.9e). The
amplitude of the echo is governed by T2, since the intrinsic dephasing cannot be reversed,
only the macroscopic extrinsic dephasing. The echo lasts only for a fraction of a second
before the protons continue to dephase as before. The shape of the signal in the receiver
coil gives rise to the name ‘echo’.
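The refocusing argument can be checked numerically. In the sketch below each isochromat is given a random off-resonance frequency (representing ΔB0 only; genuine, irreversible T2 decay is deliberately left out), the phases spread out for TE/2, the 180° pulse is modelled by negating the accumulated phase, and the vector sum returns to its full value at TE. The 200 rad/s spread and 40 ms TE are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
te = 40e-3                           # echo time in seconds
d_omega = rng.normal(0.0, 200.0, n)  # off-resonance of each isochromat (rad/s) due to field inhomogeneity

def net_signal(phases):
    """Magnitude of the vector sum of unit isochromats, normalised to 1."""
    return abs(np.exp(1j * phases).sum()) / len(phases)

# Dephasing during the first TE/2
phase_half = d_omega * te / 2
print(f"signal just before the 180 degree pulse: {net_signal(phase_half):.3f}")

# The 180 degree pulse negates the accumulated phase; precession then continues unchanged
phase_echo = -phase_half + d_omega * te / 2
print(f"signal at TE (the spin echo):            {net_signal(phase_echo):.3f}")
```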
Insight
Runners-on-a-Track Analogy
To aid understanding of spin echo formation, there is a common analogy with runners on a track.
Suppose the runners race in a clockwise direction until they reach the finishing line: they will
arrive at different times according to each individual’s ‘local strength’ and other factors. Those
other factors might include something about the local environment (i.e. the lanes) in which they
run. If there are differences in the quality of the running track in the various lanes we would expect
to see this reflected in the arrival times. However, we cannot unmask these factors from the local
strengths of the athletes.
Imagine now a different race is started in which the athletes first run clockwise but after a cer-
tain time they have to turn round and run back to the starting line. Ignoring local factors (such
as the individual lanes) the runners would all arrive back at the starting line at exactly the same
time. If, however, there were variations in the lanes around the track that affected each runner
differently, they would arrive close together but not at exactly the same time. The lane variations
are analogous to T2 dephasing, causing the echo to have a lower intensity than the original FID
height.
Br = B0 + r ⋅ Gr    (16.7)
Protons at this position will resonate with frequency |ωr| = γ ∙|Br| instead of |ω 0|. The
effect of the gradient is to encode the spatial position of the protons, as the MR signal
frequency.
FIGURE 16.10
Magnetic field gradients are superimposed on the main magnetic field. The strength of B varies with distance
(r, the direction of Gr) but not its direction.
Gradients along the three principal axes are switched on and off in a repeated pattern,
called a pulse sequence. The protons respond to changes in the magnetic field strength
immediately, changing their resonant frequencies within a few picoseconds (10⁻¹² s).
Throughout this section we shall use upper case (X, Y and Z) when referring to the
axes of the image volume. Z is conventionally (in cylindrical magnets) the direction of
the magnetic field (superior-inferior in the body), Y is the vertical axis (anterior-posterior)
and X is the remaining horizontal axis (right-left). Since all three directions cannot be
encoded simultaneously, there are three slightly different ways of using gradients for spa-
tial encoding.
FIGURE 16.11
Signals acquired while a gradient is applied. (a) Each container has a different frequency due to its location
along X; the signal amplitude depends on the total number of protons in each container. The combined signal
has a complicated shape. (b) After a Fourier transform, the signals are shown as amplitude versus frequency,
which is equivalent to X position.
Δφ = ω ⋅ Δt    (16.8)
FIGURE 16.12
(a) The RF pulse brings all the protons into phase with each other. (b) The phase encode gradient along Y causes
variations in precessional frequency and when it is switched off the columns have phase angles proportional to
their position. (c) During the readout gradient the rows now have different frequencies as shown. The combina-
tion of phase and frequency information describes the position of each proton.
which resets the phase to zero. To get a range of phase angles, it is possible to vary tPE, but
that would cause differences in echo times (TEs), so in practice only GPE, the strength of
the gradient, is varied.
Now consider the set of phase angles measured for just one of the protons shown in
Figure 16.12. Since GPE has been varied slowly, the phase angles will correspond to a slowly
varying signal in the PE direction (Figure 16.13). This looks just like the measurements for the
frequency-encoded signal, except that the time scale is very different. Whereas the FE signal
was measured in real time, the PE signal was measured in ‘pseudo-time’. Mathematically
the FE and PE directions are identical, except for this time scale. The Fourier transform can
be used in the PE direction to separate the ‘pseudo-frequencies’ and their amplitudes.
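The role of the Fourier transform can be demonstrated with a one-dimensional toy example. In the sketch below three ‘containers’ of protons are represented by three cosine components: the frequencies stand for positions along the readout direction and the amplitudes for the number of protons in each container (all values invented). The FFT recovers both.

```python
import numpy as np

n = 256
t = np.arange(n) / n   # one acquisition window

# Frequencies 5, 12 and 20 cycles per window stand for three positions; amplitudes for proton numbers
signal = (1.0 * np.cos(2 * np.pi * 5 * t)
          + 0.5 * np.cos(2 * np.pi * 12 * t)
          + 2.0 * np.cos(2 * np.pi * 20 * t))

spectrum = np.abs(np.fft.rfft(signal)) / (n / 2)   # Fourier transform: frequency is equivalent to position
for k in (5, 12, 20):
    print(f"frequency bin {k:2d} -> amplitude {spectrum[k]:.2f}")
```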
FIGURE 16.13
The series of phase encode angles, acquired in ‘pseudo-time’ (TR1, TR2, TR3, …), can be considered as a slowly
varying (low-frequency) signal. One phase angle is measured in each TR period.
FIGURE 16.14
Selective excitation of a slice of tissue, achieved by applying a narrow bandwidth RF pulse whilst a gradient is
on (in the Z direction in this example).
Only protons whose resonant frequencies match those in the RF pulse will absorb energy, that
is, only those within the slice of tissue shown (Figure 16.14). The gradient is switched off after
the RF pulse finishes: protons within the slice will then emit an MR signal at |ω 0|, which can
then be encoded for the two dimensions within the slice (frequency and phase encoding).
By changing the centre frequency of the RF pulse, a different slice location can be excited
independently of the first. In fact signals from many slices can be excited and acquired in
an inter-leaved fashion (multi-slice imaging).
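Because resonant frequency is proportional to field strength, the excited slice width follows from the RF bandwidth and the slice-select gradient: Δz = BW / ((γ/2π)·Gz). The sketch below evaluates this for an assumed 1 kHz bandwidth and a 5 mT/m gradient, both illustrative figures.

```python
GAMMA_BAR = 42.57e6  # Hz per tesla for protons

def slice_thickness_mm(rf_bandwidth_hz, gradient_mt_per_m):
    """Slice width excited by an RF pulse of the given bandwidth with the given slice-select gradient."""
    return 1e3 * rf_bandwidth_hz / (GAMMA_BAR * gradient_mt_per_m * 1e-3)

# Assumed values: 1 kHz RF bandwidth, 5 mT/m slice-select gradient
print(f"slice thickness = {slice_thickness_mm(1000, 5):.1f} mm")
```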
Ideally, all protons within the slice should receive a 90° pulse, while all those outside
should remain unexcited. In practice, however, this is impossible, and protons at the edges
of the excited slice receive a smaller flip angle. Thus there is a slice profile which depends
FIGURE 16.15
The ideal slice profile (excitation flip angle versus position) is rectangular (dotted lines). In practice the slice is
shaped according to the characteristics of the RF pulse. Slice width is defined by the full width at half height
(FWHH).
on the characteristics of the RF pulse and the slice select gradient. The slice width is defined
as the full width of the slice at half the height (FWHH) of the profile (Figure 16.15).
FIGURE 16.16
Only protons within the selected slice are at the right resonance frequency for the RF pulse, and see the 90°
pulse. (a) Arrows indicate the direction of each packet of magnetisation. (b) The phase encode gradient produces
a range of phase angles. Arrows now indicate the phase angle within each voxel. (c) With the frequency encode
gradient on, the signals are acquired. The simplified pulse sequence (slice select, phase encode and frequency
encode gradients versus time) is shown at the top of the diagram.
formulation; however, it can be understood without any maths, simply by considering the
effect of gradients on proton phase. What makes k-space useful is that it separates signal
from resolution. Information about SNR and contrast is encoded in the middle of k-space,
while resolution (information about edges) is located in the edges. To acquire an image,
it is essential to sample k-space uniformly, extending the sampling as far as necessary to
achieve the desired spatial resolution. How we cover k-space to sample data—by rectilinear
scanning, or by radial sampling, or by a continuous sweep—defines the main groups of
pulse sequences in MRI.
There are three basic ‘rules’ for using k-space.
• An excitation pulse (90° or a smaller flip angle) ‘resets’ all the protons’ phases to
zero, and brings us to the centre of k-space.
• A refocusing pulse (180°) rotates the current k-space location through 180° about
the centre of k-space.
• A gradient moves the current k-space location in the direction of the gradient; the
speed of movement depends on the strength of the gradient.
Insight
Using k-Space
Use the k-space rules to see how the spin echo sequence collects an image. Refer to Figure 16.17
for the pulse sequence diagram. For this exercise we will ignore the slice select (Z) gradients, and
just start with the 90° pulse, which locates the k-vector in the middle of k-space (Figure 16.17b).
The first gradient is the phase encode on Y, which is applied simultaneously with a dephase gradi-
ent on the X-axis. The combined effect is to move diagonally to the upper right corner of k-space
(Figure 16.17c). Next there is the 180° refocusing pulse, which rotates the position to the lower
left corner of k-space (Figure 16.17d). Then the frequency encode (readout) gradient is applied
on X, simultaneously with data sampling. While travelling horizontally across k-space, the signals
are sampled and stored (Figure 16.17e). After a waiting period equal to the repetition time (TR), which allows some
relaxation or multi-slice acquisition, another sequence is applied. This time the phase encode
gradient is slightly smaller, so the diagonal travel ends up at a lower ky position. After the 180°
pulse, the second readout line is a little above the first one, so a new row of k-space is sampled
(Figure 16.17f). This process is repeated, each time changing the phase encode gradient slightly,
until we have sampled the whole of k-space with a set of horizontal lines. These data points can
now be Fourier transformed in both directions, to produce the image.
It is possible to play some games with the k-space data, and see how the final image is
affected. First, most of the edge information is discarded (Figure 16.18a,b). The recon-
structed image has all the main features of signal and contrast, but is very blurred—low
resolution. Next data at the edges of k-space are retained but the central region is dis-
carded (Figure 16.18c,d). Some edges are visible, but the SNR is very low. Note that these
manipulations are very similar to using low pass and high pass filters in the ‘insight’ on
image processing (Section 6.11).
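These k-space ‘games’ are easy to reproduce with a two-dimensional FFT. The sketch below builds a toy image, keeps either the central block or the outer part of k-space, and reconstructs each version; the image size and block size are arbitrary choices.

```python
import numpy as np

# Toy 'image': a bright square on a dark background
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

k = np.fft.fftshift(np.fft.fft2(img))   # k-space with the low spatial frequencies at the centre

def keep_centre(kspace, half_width):
    """Zero everything except a central block of k-space."""
    out = np.zeros_like(kspace)
    c0, c1 = kspace.shape[0] // 2, kspace.shape[1] // 2
    out[c0 - half_width:c0 + half_width, c1 - half_width:c1 + half_width] = \
        kspace[c0 - half_width:c0 + half_width, c1 - half_width:c1 + half_width]
    return out

centre_only = np.abs(np.fft.ifft2(np.fft.ifftshift(keep_centre(k, 4))))     # blurred but correct contrast
edges_only = np.abs(np.fft.ifft2(np.fft.ifftshift(k - keep_centre(k, 4))))  # outlines only, little signal
print("centre of k-space only: image max %.2f" % centre_only.max())
print("edges of k-space only : image max %.2f" % edges_only.max())
```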
that is, when the time integrals of the gradient pulses are equal and opposite. On the pulse
sequence diagram, this is when the areas of the gradient pulses are equal. Because it is the
area which is important, it is even possible to split a gradient pulse and move part of it to
a different time in the sequence.
For the frequency encoding gradient, the maximum signal (the spin echo) should occur
in the middle of the data capture window; it is important that the net phase effect is zero
at this time. Thus, a ‘frequency dephase’ gradient pulse is applied before the frequency
FIGURE 16.17
(a) The pulse sequence diagram (slice select, phase encode and frequency encode gradients, with the 90° and
180° pulses and signal acquisition). (b) The 90° pulse puts us in the middle of k-space. (c) The phase encode on Y,
simultaneously with dephase on X, takes us diagonally to the upper right corner. (d) The 180° pulse rotates us
to the lower left corner. (e) The frequency encode gradient on X, simultaneously with data sampling, takes us
horizontally picking up information along the row. (f) The next phase encode gradient is slightly smaller, so we
end up at a lower ky position and sample a new row of k-space.
FIGURE 16.18
(a) Only the central section of k-space and (b) the resulting image. (c) The outer edges of k-space and (d) the
resulting image.
encoding pulse, with area equal to half that of the frequency encoding pulse. The dephase
lobe comes before the 180° pulse which effectively inverts the phases: therefore in a spin
echo sequence the frequency dephase gradient is positive. In the case of the phase encod-
ing gradient, rephasing is unnecessary (the phase angles give positional information).
The situation for the slice selection gradient is slightly more complex, because the RF
pulse has a changing shape. By convention, however, the shaped RF pulse is considered to
act as a sharp spike at the middle of the pulse; that is, during the first half of the pulse there
is no excitation, then all protons in the slice receive a 90° pulse instantaneously. Whilst
there are no excited protons, the gradient can have no dephasing effect, so the period up
to the middle of the RF pulse is ignored. The remaining area of the slice selection gradient
will cause dephasing of the newly excited protons, which may be remedied by applying
a negative gradient with approximately equal area (the ‘slice rephase’ gradient). It is
necessary to adjust the size of the rephase gradient slightly, since the RF pulse does not act
instantaneously.
as ‘gradient-echo T2-weighted’. Magnet design has greatly improved since the early days of
MRI so GRE images have very high image quality, comparable to spin echo in many cases,
except where there are large inhomogeneities introduced by the human body.
Gradient echo sequences can be very fast, and they have an additional flexibility with
the excitation RF pulse. It is often less than 90° and has an impact on both SNR and the
image contrast, as we will see in the next section.
FIGURE 16.19
A gradient echo pulse sequence. The initial excitation pulse (α°) is usually less than 90°; there is no 180° refocusing
pulse. The initial frequency dephase gradient is negative instead of positive.
Repetition time is defined as the time to repeat the sequence. Repetition is necessary
for a number of reasons: to acquire signals with different phase encoding gradients, or to
increase SNR by averaging signals together. In a GRE sequence TR is the time between
two α° pulses; in a spin echo sequence it is the time between two 90° pulses.
Echo time is defined as the time from the excitation pulse to the centre of the echo
(whether gradient or spin echo). For spin echo sequences, the 180° pulse must be exactly
halfway between the 90° excitation and the desired TE.
• Long TR, short TE: contrast is dominated by the proton density (water content) of
the tissues
• Long TR, long TE: contrast depends on T2 (spin-spin relaxation time)
• Short TR, short TE: contrast is dictated by T1 (spin-lattice relaxation time)
The fourth combination, short TR with long TE, has very low SNR and a contrast that depends
on both T1 and T2, and so is of little value.
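A simplified signal model, S = PD · (1 − exp(−TR/T1)) · exp(−TE/T2), shows how these TR and TE choices weight the image. The sketch below uses the 1.5 T white matter values from Table 16.1 together with an invented ‘fluid’ tissue (its PD and T2 are assumptions, since Table 16.1 does not list a CSF T2); the relative signals of the two tissues swap between the T1- and T2-weighted settings, as expected.

```python
import math

def se_signal(pd, t1_ms, t2_ms, tr_ms, te_ms):
    """Simplified spin echo signal: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1 - math.exp(-tr_ms / t1_ms)) * math.exp(-te_ms / t2_ms)

# White matter at 1.5 T from Table 16.1; the 'fluid' PD and T2 are illustrative assumptions
tissues = {"white matter": (0.7, 560, 82), "fluid": (1.0, 2060, 500)}
settings = {"T1w (short TR, short TE)": (500, 15),
            "T2w (long TR, long TE)": (3000, 90),
            "PDw (long TR, short TE)": (3000, 15)}

for label, (tr, te) in settings.items():
    row = {name: round(se_signal(pd, t1, t2, tr, te), 2) for name, (pd, t1, t2) in tissues.items()}
    print(label, row)
```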
Since both proton density (PD) and T2-weighted images have long TRs, it is possible to
use repeated 180° pulses to produce more than one spin echo. With two spin echoes, one
at a short TE and the other with long TE, both PD and T2 images are produced within the
same TR (and therefore the same scan time). Such ‘dual echo’ sequences are less common
today than a decade ago; with modern speed-up techniques such as ‘parallel imaging’ and
‘turbo spin echo’, scan times are much reduced, and it is possible to create separate T2 and
PD scans in much less time than the old dual echo SE scans. (Explanation of these faster
sequences is beyond the scope of this chapter.)
FIGURE 16.20
Different combinations of TR (long or short) and TE give proton density weighting (a), T2 weighting (b), or T1
weighting (c). The combination of short TR and long TE is not useful.
So a small flip angle can be used to avoid saturation, even when TR is very short. We there-
fore can ignore TR as a contrast control mechanism. Flip angle becomes the parameter
which controls Mz, while TE still controls Mxy.
Once again there are four possible combinations of flip angle and TE, but only three are
useful (Figure 16.22).
• Small α°, short TE: contrast is dominated by the proton density (water content) of
the tissues
• Small α°, long TE: contrast depends on T2 (spin-spin relaxation time, or to be more
correct T2* the effective spin-spin relaxation time)
• Large α°, short TE: contrast is dictated by T1 (spin-lattice relaxation time)
The fourth combination, large α° with long TE, has very low SNR and a contrast that depends
on both T1 and T2, and so is of no value, just like SE images with short TR and long
TE. Since GRE images can be acquired in very short times, dual echo GRE (to produce PD
and T2* images within the same scan) is never used. However, a multi-echo GRE is often
FIGURE 16.21
Gradient echo sequences use flip angles less than 90°: the α° pulse tips the available longitudinal magnetisation
Mz–, leaving a component Mz–cos α along z and putting Mz–sin α into the transverse plane. This leaves a significant
Mz, which makes full T1 relaxation much quicker. The penalty is a smaller Mxy, giving less SNR.
used with three or five echoes, with all the images summed together at the end of the scan,
to provide an image with high SNR and good T2* contrast.
FIGURE 16.22
Different combinations of α° (small or large) and TE give proton density weighting (a), T2 (T2*) weighting (b),
or T1 weighting (c). The combination of long TE and large α° is not useful. TR is always short.
T1 relaxation. If the inversion delay is such that one of the tissues is at the null point (i.e.
Mz = 0), its signal after the 90° pulse will be zero—the tissue will be suppressed in the
final image. The appropriate TI is approximately 0.7 · T1 of the tissue to be nulled, bearing
in mind that T1 depends on field strength (Figure 16.23).
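The null point follows from the inversion recovery curve Mz(t) = M0(1 − 2 exp(−t/T1)), which passes through zero at TI = T1 · ln 2, approximately 0.7 T1. The sketch below applies this to T1 values from Table 16.1; it assumes full recovery between repetitions, which is one reason the clinically quoted TIs in the following paragraphs differ from these simple estimates.

```python
import math

def null_ti_ms(t1_ms):
    """TI at which Mz = 0 after a 180 degree inversion, assuming full recovery each TR:
    Mz(t) = M0 * (1 - 2 * exp(-t/T1)) is zero at t = T1 * ln 2 (about 0.69 * T1)."""
    return t1_ms * math.log(2)

for tissue, t1 in (("fat at 1.5 T", 200), ("fat at 3.0 T", 382), ("CSF at 1.5 T", 2060)):
    print(f"{tissue}: null TI ~ {null_ti_ms(t1):.0f} ms")
```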
With STIR, the aim is to create a fat-suppressed T2w image. Fat has a short T1 and so a short
TI is needed (120 ms at 1.5 T, 150 ms at 3.0 T). At these short TIs, most other tissues still have
a negative Mz which means they create negative spin echoes. When the image is formed
however, we ignore the sign of the echo and only use its magnitude. Fluids, with the longest
T1s, have the most negative signal and therefore have the highest signal on the final images,
giving the required T2w contrast. A short TE is needed, to maximise the signal from fluids.
FLAIR is used exclusively in brain and spine imaging, to null CSF signals in a T2w image.
This is helpful to distinguish periventricular lesions from the high signals in the ventricles.
Since CSF has a very long T1, a long TI is needed (1700–2500 ms). There is a wide range of
TIs because the signal is changing only slowly; a few tens of ms difference can still give
adequate CSF suppression. At such long TIs, all other tissues already have positive Mz and
in fact may be close to |M0|. A long TE is used to create T2w contrast.
FIGURE 16.23
T1 recovery of Mz after an initial 180° inversion pulse. If the excitation pulse is applied at the null point for a
certain tissue, there will be no signal from that tissue in the resulting image. If other tissues have negative Mz at
that null point, the imaging process takes the magnitude of all signals, so the longest T1 tissues will appear
brightest in the image.
FIGURE 16.24
Examples of common MRI artefacts. (a) Motion during the scan produces ‘ghosts’ (cardiac in this case).
(b) Inflowing arterial blood generates high signal in gradient echo images. (c) Phase wrap occurs when the field
of view is too small in the phase encode direction; here the arms have wrapped around. (d) Metal implants
cause signal dropout on most images, shown here in the common example of dental fillings.
structure, displaced in the phase encode direction (even if the actual motion is through the
slice, e.g. blood flowing in a vessel).
Since they have (relatively) regular cycles, it is possible to compensate for respiratory
and cardiac motion. In all cases, it is necessary to have some method of measuring the
motion, and then to either adjust the phase encode gradient sequence, or temporarily sus-
pend image acquisition.
usual way, and the waveform is electronically filtered and processed to detect the R wave.
As each R peak is detected the imaging sequence is started, so that the effective TR is the
RR interval which of course depends on the heart rate of the patient. Each line of data for a
particular image is acquired at the same time relative to the cardiac cycle and thus motion
artefacts do not appear.
Electrocardiogram (ECG) gating is essential when imaging the thorax for cardiac or pulmonary
disease. Gated imaging can also be useful in situations where a major artery is in the field
of view, or to remove artefacts due to CSF pulsation in spinal imaging. In the latter cases
it is more convenient to use a photoplethysmographic (PPU) detector on the finger, which
detects the arrival of arterial blood in the digit and thus provides an R-wave trigger. Since
the ‘effective TR’ is determined by the RR interval the contrast and SNR in an image will
to some extent be dependent on pulse rate, and images are usually T1 weighted.
16.12.3 MR Angiography
Moving blood within vessels causes distinctive artefacts, always in the phase encoding
direction. On spin echo images, the lumen of through-plane blood vessels appears black,
because the blood receives either the 90° or the 180° pulse but not both, and therefore can-
not produce an echo (Figure 16.25a). However, in-plane or slow-flowing blood will have
an intermediate signal, and may then cause ghost artefacts. GRE sequences produce the
opposite appearance (‘bright blood’) because the echo is rephased by the gradients not
an RF pulse (Figure 16.25b). These bright signals often produce many ghosts, especially
where the flow is very pulsatile. Any turbulence or reduction in flow rate changes the
appearance of moving fluid in the image, usually by reducing intensity.
FIGURE 16.25
Appearance of flowing blood. v = blood velocity, z = slice thickness. (a) In spin echo images, when the velocity
is higher than 2z/TE, the flowing blood does not experience both RF pulses and does not generate an echo.
Vessels are black. (b) In gradient echo images, when the velocity is higher than z/TR, inflowing vessels are
bright because they contain ‘fresh’ magnetisation.
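The velocity conditions in the caption are easy to put numbers to. The sketch below assumes a 5 mm slice with TE = 20 ms (spin echo) and TR = 10 ms (gradient echo), all illustrative values, to estimate the flow speeds above which spin echo vessels appear black and gradient echo vessels appear bright.

```python
def spin_echo_washout_velocity_m_s(slice_mm, te_ms):
    """Velocity above which through-plane blood misses one of the two RF pulses: v >= 2z/TE."""
    return 2 * slice_mm * 1e-3 / (te_ms * 1e-3)

def gre_inflow_velocity_m_s(slice_mm, tr_ms):
    """Velocity above which the GRE slice is completely refreshed with new blood each TR: v >= z/TR."""
    return slice_mm * 1e-3 / (tr_ms * 1e-3)

# Assumed values: 5 mm slice, TE = 20 ms (spin echo), TR = 10 ms (gradient echo)
print(f"spin echo vessels appear black above {100 * spin_echo_washout_velocity_m_s(5, 20):.0f} cm/s")
print(f"GRE vessels appear bright above      {100 * gre_inflow_velocity_m_s(5, 10):.0f} cm/s")
```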
To remove the flow artefacts, it is necessary to use balanced gradients. Known as ‘gra-
dient moment nulling’ or simply ‘flow comp’, additional gradient lobes are added to the
pulse sequence, to ensure that the phase of moving protons as well as static tissue is prop-
erly rephased at the TE. These extra gradients tend to increase the minimum TE and TR
of the sequence.
Insight
Turning Flow Artefacts into Useful Images: MR Angiography
These flow artefacts can be exploited to produce MR angiograms, which show only the blood
vessels. Time-of-flight (TOF) angiography uses a flow-compensated GRE sequence with a very
short TR. The signal from static tissues within the slice is almost zero, due to T1 saturation. Blood
flowing perpendicular to the slice, however, has ‘fresh’ magnetisation for each excitation and pro-
duces a high signal. The flow compensation prevents ghosting from the blood vessels. The high
contrast between blood vessels and static tissue is usually viewed using a ‘maximum intensity pro-
jection’ (MIP). This post-processing technique generates a 2-D projection view from a volume of
data. The volume may be acquired as a true 3-D acquisition, or as a stack of thin 2-D slices. Since
it is basically a T1w image, short-T1 tissues can leave residual signals in the MIPs which obscure
the vessels, so it is common practice to ‘cut away’ the unwanted regions.
TABLE 16.2
Comparison of Various Magnet Types

Magnet Type       Properties                       Advantages                               Disadvantages
Permanent         Made of iron alloy blocks        Cheap; zero running costs;               Low field (<0.6 T);
                  (permanently magnetised)         open design                              weight > 40 tonnes
Electromagnetic   Water-cooled copper coils        Field can be turned off in emergency;    High power requirements;
                                                   open design; mid cost                    poor stability
Superconducting   Liquid helium cooled wires       High field; good homogeneity             High capital cost; cost of helium;
                                                                                            relatively enclosed bore
at 4 K (–269°C) to cool the current-carrying wires so that they have almost zero resistance.
Once the current is set to provide the correct magnetic field, no electrical power is neces-
sary and the major running cost is that of topping up the liquid helium. Very good homo-
geneity is achievable, often to less than 0.2 ppm over the volume of the head coil.
The second major component is a gradient set containing three sets of coils, one for
each orthogonal direction, which produce linear magnetic field gradients when current
is passed through them. Gradient sets are characterised by a maximum gradient strength,
expressed in mT m–1, and a slew rate, T m–1 s–1. The maximum gradient has implications for
resolution, while both strength and speed are important for fast imaging. Modern gradi-
ent systems are often cooled with water or air, and the amplifiers which provide the high
voltage and currents needed may also be water-cooled.
Finally there is the RF sub-system. All modern MR systems have a built-in ‘body coil’
which is used to transmit the RF pulses with homogeneous B1 over a large volume. It may
also be used to receive the signal (it is a ‘transmit/receive coil’), but it is more common to
use a variety of RF coils, each designed to optimise SNR from a particular part of the body.
These are known as ‘receive-only’ or ‘surface’ coils. Since the surface coil is close to the
area of interest, it is much more sensitive to the signals of interest.
Radiofrequency coils are also described as quadrature, linear or phased array. The sim-
plest ‘linear’ coils are single loops of copper, rarely circular but shaped appropriately for a
particular part of the body. Quadrature coils have two loops, usually overlapping, and are
able to improve the SNR by reducing the amount of noise detected. Phased array coils are
the most common surface coils, using several coil elements to pick up signal. Each element
forms an image, and during reconstruction they are combined together to form a single
image. Such coils are able to create images over a large FOV with high SNR. Phased array
coils are particularly important due to a relatively new technique called parallel imaging,
which uses information about the spatial sensitivity of coil elements to recreate missing
phase encoding information. This allows scan times to be reduced, by skipping PE lines.
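Because one phase-encode line is normally collected in each TR period, the scan time of a conventional 2-D acquisition is roughly TR × number of PE lines × number of averages, and parallel imaging divides this by the acceleration factor (the skipped PE lines). The sketch below illustrates the saving for an assumed 256-line acquisition with TR = 500 ms.

```python
def scan_time_s(tr_ms, n_phase_encodes, n_averages=1, acceleration=1):
    """Conventional 2-D scan time: one phase-encode line per TR, reduced by any
    parallel-imaging acceleration factor (skipped PE lines)."""
    return tr_ms * 1e-3 * n_phase_encodes * n_averages / acceleration

# Assumed T1-weighted spin echo: TR = 500 ms, 256 phase-encode lines, 1 average
print(f"no acceleration : {scan_time_s(500, 256):.0f} s")
print(f"2x acceleration : {scan_time_s(500, 256, acceleration=2):.0f} s")
```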
The whole system is controlled by a central computer, with dedicated processors to do
complex tasks such as reconstruction or pulse sequence generation. The software through
which the user controls the system is nowadays very user friendly, using graphical inter-
faces and automated processing wherever possible to reduce examination times and
avoid artefacts. The final item to mention in an MRI installation is the RF-screened room
(or Faraday cage). Sheets of copper or aluminium built into the walls, floor and ceiling of
the main magnet room form a totally closed metal box around the system. This prevents
the tiny MRI signal from being swamped by the ambient RF noise.
patient’s body may also be ferromagnetic. The degree of hazard depends both on the type
of implant and its location. Of particular concern are intra-cranial aneurysm clips which
can cause fatal haemorrhage if they move.
Other items may not become missiles but will be damaged by the magnetic field and
should be removed. Examples include analogue watches and cards with magnetic strips.
Various magnetically activated devices also fall into this category such as cochlear implants
and cardiac pacemakers. Persons with these implanted devices may not enter the 0.5 mT
fringe field and therefore cannot be scanned.
16.14.4 RF Fields
Radiofrequency waves contain both electric and magnetic fields oscillating at MHz fre-
quencies. At these rates, the induction of circulating current in the body is minimal, as
there are high resistive losses. Most of the RF power is therefore converted to heat in the
body, and bioeffects and safety limits are considered accordingly. In healthy tissues, a
local temperature rise caused by RF power deposition will trigger the thermoregulatory
mechanism of increased perfusion to dissipate the heat around the rest of the body. If the
rate of power deposition is very high, or the thermoregulation system is impaired in some
way, heat will accumulate locally, eventually causing tissue damage. Some areas of the
body are particularly heat sensitive, for example the eyes, the testes and the foetus, and
extra care should be taken when scanning such patients. The safety limits are designed to
limit the temperature rise of the body to 1°C. For the whole body in a 30-minute examina-
tion, the specific energy absorption rate (SAR) limit is 1 W kg–1, while for a head scan of
similar duration it is 2 W kg–1.
Particular care must be taken when metal objects are in the imaging field, for example,
ECG leads or non-removable metallic implants. Such objects absorb RF energy very effi-
ciently and may become hot enough to burn the skin if in direct contact. It is worth noting
that RF burns to patients form the majority of MRI-related accidents reported to either the
FDA or MDA.
established and evidence that exposure to RF radiation may cause heat-induced damage
is available. Currently the evidence concerning the safety of MRI during pregnancy is not
conclusive; indeed there are many centres which provide foetal scanning for abnormali-
ties, usually following a suspect ultrasound scan.
Since the safety of MRI cannot be proved, the guidelines recommend that scans are
not performed in the first trimester. However, if other non-ionising imaging methods
are inadequate or the alternative diagnostic method involves ionising radiation, and if
the information is regarded as clinically important, a scan may take place at any stage of
pregnancy.
References
FDA: Food and Drug Administration (2003) Criteria for Significant Risk Investigations. http://www.
fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm072686.
htm [accessed 5 July 2009].
MHRA: Medicines and Healthcare Devices Regulatory Agency (2007) Safety Guidelines for
Magnetic Resonance Imaging Equipment in Clinical Use. http://www.mhra.gov.uk/home/
idcplg?IdcService=GET_FILE&dDocName=CON2033065&RevisionSelectionMethod=LatestR
eleased [accessed 5 July 2009].
ICNIRP: International Commission on Non-Ionizing Radiation Protection (2004) Medical Magnetic
Resonance (MR) Procedures: Protection of Patients. Health Physics 87:197–216. http://www.
icnirp.de/documents/MR2004.pdf [accessed 5 July 2009].
NRPB: National Radiological Protection Board (1991) Principles for the Protection of Patients and
Volunteers during Clinical Magnetic Resonance Procedures. Documents of the NRPB Vol 2 No
1 (ISBN 0859513394). http://www.hpa.org.uk/webw/HPAweb&Page&HPAwebAutoListNa
meDesc/Page/1219908766891?p=1219908766891 [accessed 5 July 2009].
IEC: International Electrotechnical Commission (1995) Medical Electrical Equipment – Part 2:
Particular Requirements for the Safety of Magnetic Resonance Equipment for Medical Diagnosis
Edition 2.2. IEC 60601-2-33. http://www.iec.ch/webstore/ [accessed 5 July 2009].
Further Reading
Elster AD & Burdette JH, Questions and Answers in Magnetic Resonance Imaging, 2nd ed, Mosby, 2001.
Hashemi RH, Bradley WG & Lisanti CJ, MRI the Basics, 2nd ed, Lippincott Williams & Wilkins,
2003.
McRobbie DM, Moore EA, Graves MJ & Prince MR, MRI from Picture to Proton, 2nd ed, Cambridge
University Press, 2007.
Exercises
1. What is meant by the term ‘resonance’ in relation to MRI?
2. Describe what is meant by the Larmor frequency and how it is related to the
applied magnetic field?
3. What are the Larmor frequencies for water nuclei in magnetic fields of 1.0 T, 1.5 T
and 3.0 T?
4. The Tesla (T) is a unit used to measure the magnetic field strength in MRI. What
is the strength of the earth’s magnetic field?
5. What is meant by ‘net magnetisation’?
6. Classically, the spin population states are divided into ‘parallel’ and ‘anti-parallel’:
what is the approximate population difference between these two states at 1.5 T
and body temperature?
7. What determines the frequency of the rotating frame of reference?
8. What is meant by flip angle (α)?
9. Explain what is meant by FID of the MR signal?
10. Explain what is meant by spin-lattice relaxation and how is this characterised
mathematically?
11. Explain what is meant by spin-spin relaxation and how is this characterised
mathematically?
12. Explain the spin echo sequence ignoring the imaging process. How might the tim-
ing parameters be adjusted to reflect T1, T2 and proton density in the image?
13. What is the difference between a spin echo and a gradient echo sequence and how
are these differences useful?
14. Explain how magnetic field gradients are used in the imaging process.
15. What is meant by frequency and phase encoding?
16. Describe how you would recognise motion and chemical shift artefacts and how
are they related to the magnetic field gradients?
17. Explain the inversion recovery sequence and how might a modification of this
sequence be used to remove the fat signal from the image?
18. What is the purpose of RF screening and where might it be found?
19. What is the fringe field?
20. What is the maximum fringe field in which it is generally regarded as safe for a
person with a cardiac pacemaker to stand?
21. List the main contraindications for MRI.
22. What precautions should be taken for staff or patients who may be pregnant in
relation to magnetic fields?
23. What is the biological effect of applying radio frequencies to tissues?
17
Digital Image Storage and Handling
G Cusick
SUMMARY
The aim of this chapter is to explain the technology underlying Picture Archiving and
Communications Systems (PACS) at a level that will help clinical users of such sys-
tems to understand the key factors that determine the performance of the systems.
• The stages in the imaging chain from the patient to reporting are identified.
• The requirements for acquiring faithful digital representations of images
are discussed, with some reference to the mathematics that underpins these
processes.
• Computer network architectures and some of the technicalities of networks
are discussed.
• The different architectures of PACS are discussed.
• The essential features of modern display systems are described and the need
for a customised display device to be matched to its purpose (diagnostic
reporting, high-quality images in a non-radiological environment, generic
displays) is explained.
• The importance of standards that ensure both compatibility of hardware for
communication of images and patient safety from the potential impact of
unsuitable software is emphasised.
• Methods of data compression to facilitate the ever-increasing amount of data
collected in some imaging protocols are discussed.
CONTENTS
17.1 Introduction......................................................................................................................... 603
17.2 The Imaging Chain............................................................................................................. 603
17.3 Image Acquisition—Digital Representation of Images.................................................604
17.3.1 Sampling.................................................................................................................. 605
17.3.2 Encoding and Storage............................................................................................ 607
17.3.2.1 Files............................................................................................................608
17.4 PACS System Architectures............................................................................................... 610
17.4.1 Networks.................................................................................................................. 610
17.4.1.1 Functions of the Network....................................................................... 611
17.4.1.2 Open Systems Interconnection.............................................................. 611
17.4.2 Ethernet........................................................................................................................612
17.4.2.1 Network Topology—How Devices Are Connected Together..............612
17.4.2.2 Bridging and Switching.............................................................................613
17.4.2.3 Data Packaging on the Network...............................................................614
17.4.2.4 Layers 2 and 3..............................................................................................615
17.4.3 Internet Protocol.........................................................................................................615
17.4.3.1 IP Addressing..............................................................................................616
17.4.3.2 Bandwidth and Latency.............................................................................617
17.4.3.3 Shared Networks.........................................................................................617
17.4.3.4 Subnets and Virtual Networks..................................................................617
17.4.3.5 Quality of Service........................................................................................618
17.4.4 Servers..........................................................................................................................618
17.4.4.1 Image Acquisition.......................................................................................619
17.4.4.2 Image Database........................................................................................... 620
17.4.4.3 Integration with Other Systems............................................................... 620
17.4.5 Virtualisation............................................................................................................. 620
17.4.5.1 Storage.......................................................................................................... 620
17.4.5.2 Sizing Storage.............................................................................................. 621
17.4.5.3 Strategies...................................................................................................... 621
17.4.5.4 Backup and Security.................................................................................. 621
17.5 Display Devices...................................................................................................................... 622
17.5.1 Classes of Workstation.............................................................................................. 622
17.5.1.1 Diagnostic Reporting................................................................................. 622
17.5.1.2 High-Quality Clinical Workstations....................................................... 622
17.5.1.3 Generic......................................................................................................... 622
17.5.2 Properties of Electronic Displays............................................................................ 623
17.5.2.1 Dynamic Range.......................................................................................... 624
17.5.2.2 Stability and Reliability............................................................................. 624
17.6 Standards................................................................................................................................ 624
17.6.1 DICOM........................................................................................................................ 624
17.6.1.1 Overview..................................................................................................... 624
17.6.1.2 What Is DICOM?........................................................................................ 625
17.6.2 The Medical Devices Directive................................................................................ 625
17.6.2.1 Overview..................................................................................................... 625
17.6.2.2 Consequences of Classification................................................................ 626
17.7 Availability and Reliability.................................................................................................. 626
17.7.1 Availability................................................................................................................. 626
17.7.1.1 Importance of PACS................................................................................... 626
17.7.1.2 Management Consequences..................................................................... 627
17.7.2 Reliability.................................................................................................................... 627
17.8 Data Compression................................................................................................................. 627
17.8.1 Background—Reasons for Needing Compression............................................... 627
17.8.2 Lossy Compression................................................................................................... 628
17.8.3 Lossless Compression............................................................................................... 628
17.8.3.1 Attributes..................................................................................................... 628
17.8.3.2 JPEG 2000..................................................................................................... 629
17.9 Conclusion.............................................................................................................................. 631
References ....................................................................................................................................... 631
Further Reading.............................................................................................................................. 631
Exercises........................................................................................................................................... 632
17.1 Introduction
The storage and transmission of images in digital form has had a profound effect on most
aspects of medical imaging. The emergence of Picture Archiving and Communications
Systems (PACS) over the last 20 years or so has transformed the way that images are cap-
tured, interpreted and reported upon, and used clinically. PACS is widely accepted to be
one of, if not the, most important and influential applications of information technology
(IT) in medicine.
The aim of this chapter is to explain the technology underlying PACS at a level that will
help clinical users of such systems to understand the key factors that determine the perfor-
mance of the systems. This will help users of the systems to communicate effectively with
the range of technical and clinical specialists who develop, deploy and maintain them.
Conventionally, this sort of chapter would open with an explanation of bits, bytes, binary
coding and computer fundamentals. IT is now so well established, and so ubiquitous that
this is no longer necessary, or helpful. References to some basic computer science texts
are included in the ‘Further Reading’ section for those readers who wish to pursue this
further. There are important IT concepts that still bear explanation, though, and these are
covered in the next sections. The purpose of this, at the minimum, is to establish a com-
mon vocabulary.
FIGURE 17.1
The imaging chain, from patient to report.
[Figure 17.2 diagram: patient → modality → digitise → process and store → select and present → workstation, where the radiologist can adjust the display, interpret the image and report.]
FIGURE 17.2
The digital imaging chain. This separates the acquisition phase, in which the image is converted to digital form,
the processing phase, in which the image is stored and possibly re-formatted and compressed, and the presen-
tation phase where selected images are properly formatted for sending to display or printing devices.
Figure 17.1 shows the conventional imaging chain, in which the image is recorded on film
and viewed on a light box or similar device. The options at that point for interacting with
the image are limited; the presentation of the image is essentially fixed.
Figure 17.2 shows a simplified version of the equivalent digital system. Whilst the overall
flow is similar, from patient to report, there are intermediate steps where the technology
can affect the nature and quality of the image seen by the radiologist at the workstation.
There is also the facility for the radiologist to interact with the image during the interpreta-
tion process.
The remainder of this chapter is devoted to examining the features of the digital imag-
ing chain that have an impact on the quality of the image available for review, and with the
system components that influence the performance and usability of the overall imaging
system.
17.3.1 Sampling
To represent an image digitally, it is necessary to assign discrete values both to the coor-
dinates in the image (x, y, and possibly in the third dimension, z), and to the value of the
parameter being imaged. The process of assigning these values is referred to as digitisa-
tion. The digital values represent samples of the continuous, analogue image. The sampled
digital version of the image can then be transmitted and stored just as any other set of
digital data.
To view the image, it must be reconstructed from the sampled version. The reconstruc-
tion process entails reading back the image samples, placing them in the correct sequence
and position, and removing artefacts introduced by the sampling process.
Accurate reconstruction requires that there is sufficient information in the sampled sig-
nal to specify the original signal completely. This requirement is codified in the Nyquist-
Shannon sampling theorem (Nyquist 1928; Shannon 1948): a band-limited signal can be
reconstructed exactly from its samples provided that the sampling rate is more than twice
the highest frequency present in the signal.
Insight
Fourier Analysis
Any signal which is a function of a variable against time or space can be represented as a sum
of sinusoidal signals. A plot of the amplitudes of the component sinusoids against frequency is
known as the amplitude spectrum of the signal: this is a frequency-domain representation of the
signal. The spectrum is derived from the time- (or space-) domain signal, f(x), using the Fourier
Transform, in which the frequency-domain representation f(ν) is calculated thus:
$f(\nu) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \nu}\, dx$
This is the Continuous Fourier Transform; more commonly, the Discrete Fourier Transform is com-
puted. The form of this is similar, substituting summation of discrete frequency components for the
integration of the continuous transform:
$X_k = \sum_{n=0}^{N-1} x_n e^{(-2\pi i/N)kn}$
The values Xk are complex numbers, representing the amplitude and phase of the sinusoidal com-
ponents of the original signal xn. The amplitude spectrum, illustrated in Figure 17.3, is calculated
by taking the amplitudes of the complex values. This application of the Fourier transform is one-
dimensional; the signal is a function only of time in this example. It is straightforward, though, to
generalise the transform to more than one dimension, for example, in mapping the spatial frequen-
cies present in an image. Further related transforms, notably the discrete cosine transform (DCT)
and the discrete wavelet transform, are the basis for common compression methods, discussed
later in the chapter.
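The following short sketch, which is not part of the text, shows how an amplitude spectrum of the kind described above can be computed numerically; the example signal, its component frequencies and the use of NumPy's FFT routine are illustrative choices only.

```python
import numpy as np

# Sample a signal containing two sinusoids (arbitrary example frequencies).
fs = 100.0                          # sampling frequency in Hz
t = np.arange(0, 1.0, 1.0 / fs)     # one second of samples
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

# Discrete Fourier transform: X_k = sum_n x_n exp(-2*pi*i*k*n/N)
X = np.fft.fft(x)
freqs = np.fft.fftfreq(len(x), d=1.0 / fs)

# Amplitude spectrum: magnitude of the complex coefficients.
amplitude = np.abs(X) / len(x)
# Peaks appear at +/-5 Hz and +/-20 Hz, the component frequencies.
```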
Figure 17.3 shows the steps in sampling and reconstruction for an arbitrary signal. The
negative frequencies shown in the figure arise from the mathematical analysis; there are
several conventional ways to try to rationalise negative frequency as a physical concept,
none of them particularly satisfactory. Figure 17.3a shows the spectrum of the original sig-
nal. Figure 17.3b shows the spectrum after sampling at a sample rate fs; in Figure 17.3b, the
sample rate is more than twice the highest frequency present in the original signal. The
effect of the sampling process is to produce images of the original spectrum centred on the
sample frequency. The original signal can be reconstructed accurately by passing the sam-
pled signal through a low-pass filter which removes all signal components with frequency
greater than fs/2. This is shown in Figure 17.3c.
Figure 17.4 shows a similar sampling process, but in this case, the sample rate is below
the Nyquist rate. In the sampled signal, then, the image spectra overlap the original signal
spectrum (the arrowed areas in Figure 17.4a). Reconstructing the signal using a low-pass fil-
ter produces the result shown in Figure 17.4b. Note that the parts of the signal at frequencies
above fs/2 are folded back into the signal spectrum. This is known as aliasing; its importance is
that the aliased signals are not distinguishable from genuine signals in the reconstruction.
The discussion above has assumed idealised conditions: the signals being sampled are
entirely contained within a limited range of frequencies (i.e. the signal is band limited), and
the reconstruction filter is ideal in that it passes all signals below the cut-off frequency and
none above. Real signals will generally contain some components (e.g. noise) which cover
FIGURE 17.3
Sampling of signals, where the sampling frequency exceeds the Nyquist frequency. (a) The spectrum of the
original signal. (b) Sampling creates ‘image’ spectra, shifted by the sampling frequency fs. (c) Reconstruction of
the signal uses a filter, indicated by the broken line, to remove the image spectra.
FIGURE 17.4
Sampling of signals, where the sampling frequency is less than the Nyquist frequency. (a) The image spectra
now overlap the original signal in the arrowed regions. (b) When reconstructed using the appropriate filter,
parts of the image spectra remain (arrowed) as aliases.
a range of frequencies that is, in principle, infinite, and ideal filters are hard to implement.
Consequently, the Nyquist criterion must be taken as a theoretical limit: in practice, sam-
pling rates 2.5 or 3 times the maximum expected signal frequency are required for reliable
reconstruction.
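A numerical illustration of aliasing, using values chosen only for demonstration: a 30 Hz sinusoid sampled at 40 Hz (below the Nyquist rate of 60 Hz) yields exactly the same samples as a phase-reversed 10 Hz sinusoid, so the two cannot be told apart after sampling.

```python
import numpy as np

f_signal = 30.0      # Hz, frequency of the original sinusoid
fs = 40.0            # Hz, sampling rate BELOW the Nyquist rate (2 x 30 = 60 Hz)

n = np.arange(32)                                   # sample indices
samples = np.sin(2 * np.pi * f_signal * n / fs)

# The folded-back (alias) frequency is fs - f_signal = 10 Hz.
alias = abs(fs - f_signal)
samples_alias = np.sin(2 * np.pi * alias * n / fs)

# Identical apart from a phase reversal: the alias is indistinguishable
# from a genuine 10 Hz component in the reconstructed signal.
print(np.allclose(samples, -samples_alias))   # True
```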
It is conventional to express the quality of an image in terms of the spatial resolution,
expressed in line-pairs per mm. This translates more or less directly to spatial frequency;
thus, to achieve a resolution of 5 line-pairs per mm, a sampling rate in excess of 10 samples
per mm would be required in theory.
Grey-scale images can be represented by a single numerical value for each sampled point
in the image, representing the intensity at that point. Within the limits of the detector, digi-
tal systems exhibit a linear relationship between exposure and grey scale. This is different
from the s-shaped characteristic of film.
The number of bits available for each sampled point sets a limit on the fineness of the
grey-scale representation possible. Figure 17.5 shows four versions of a grey-scale image
FIGURE 17.5
The effect of bit depth on image appearance. The four images are rendered at accuracies of 8, 4, 2 and 1 bits, from
top left to bottom right. Note that 8 bits corresponds to 256 grey shades (2^8= 256), 4 bits to 16 shades, 2 bits to
4 shades, and 1 bit to 2 shades only.
of flowers, at steadily reducing grey-scale resolution at 256, 16, 4 and 2 levels from top left
to bottom right.
A consequence of reducing the grey-scale resolution is the production of artefactual con-
tours in the image. For example, there are visible artefacts in the 16-level image in Figure
17.5. Compare the appearance of the two image sections in Figure 17.6. The lower (4 grey
shade) image shows some clear contours where there are none in the higher-resolution
image. In medical images, where interpretation often depends on detecting subtle changes
of density, this is particularly important.
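The effect can be demonstrated with a few lines of code; the smooth ramp used here is an arbitrary test pattern, not one of the images in the figures.

```python
import numpy as np

def quantise(image, bits):
    """Reduce an 8-bit grey-scale image to the given number of bits."""
    levels = 2 ** bits
    step = 256 // levels
    # Map each pixel to the centre of its quantisation bin.
    return (image // step) * step + step // 2

# A smooth ramp: quantising to 2 bits (4 grey levels) introduces the
# artefactual contours described above.
ramp = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
coarse = quantise(ramp, 2)
print(np.unique(coarse))   # only four distinct grey values remain
```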
Whilst floating-point number representations offer a greater range of values and accu-
racy, this is obtained at a cost in the processing requirement. It is not generally prac-
ticable to manipulate large image arrays of floating-point values sufficiently fast to be
usable.
17.3.2.1 Files
Computer systems use files to store data. This concept is familiar to anyone who has used a
desktop computer for word processing. In general, a file is no more than a structured block
of digital data stored, for example, on a computer disk. The file structure is determined by
the application.
The image representation discussed above, in which the image is represented as a rect-
angular array of intensity values, is often known as a bitmap. The term derives from the
display of strictly black and white images, in which each pixel was either ‘on’ or ‘off’, and
FIGURE 17.6
Creation of artefactual boundaries due to insufficient accuracy. The top image is digitised at 8 bits. Note the
boundaries that appear in the lower image, digitised at 2 bits accuracy (only 4 grey shades).
could thus be represented by a single bit. Bitmap files have some of the simplest structures,
but even then, have data additional to the raw image data.
For example, the outline of the Microsoft bitmap (bmp) file format is shown in
Table 17.1.
The information in the first three elements in the file is used by the system that dis-
plays the file to determine how to interpret the bitmap itself. The reason for doing this
is that it increases flexibility by allowing the same file type, and the same basic readback
mechanism, to handle a range of different detailed files. By reading the information in
this header, the system can determine how to render the information in the bitmap that
follows it.
The DICOM standard (see Section 17.6.1 below) includes detailed definitions for the stor-
age of Pixel Overlay and Waveform data. The definitions are complex, and much more
extensive than the simple example above. But the purpose is exactly the same: to provide
detailed rules by which a device or system creating an image file must operate, in order
that another device or system can operate on the file, to render it for display or printing,
for example.
TABLE 17.1
Fields in the Header of a Bitmap File
BMP file header: Stores general information about the BMP file
Bitmap information: Stores detailed information about the bitmap image, such as:
• The sizes of the elements in the file
• The height and width of the image in pixels
• Colour depth, which determines the number of bits per pixel
• The compression method used
Colour palette: Stores the definition of the colours being used for indexed colour bitmaps
Bitmap data: Stores the actual image, pixel by pixel
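As an illustration of how header information is read back before the bitmap itself, the sketch below unpacks the most commonly used fields of a BMP file. It assumes the usual 40-byte bitmap information header layout and performs no error checking; the file name is hypothetical.

```python
import struct

def read_bmp_header(path):
    """Read the key fields from the header of a Windows bitmap (BMP) file.

    A minimal sketch assuming the common 40-byte BITMAPINFOHEADER layout.
    """
    with open(path, "rb") as f:
        file_header = f.read(14)                    # BMP file header
        signature, file_size, _, _, data_offset = struct.unpack("<2sIHHI", file_header)

        info = f.read(40)                           # bitmap information header
        (header_size, width, height, planes, bits_per_pixel,
         compression) = struct.unpack("<IiiHHI", info[:20])

    return {"signature": signature, "file_size": file_size,
            "width": width, "height": height,
            "bits_per_pixel": bits_per_pixel, "compression": compression,
            "pixel_data_offset": data_offset}

# Example (hypothetical file name):
# print(read_bmp_header("chest_image.bmp"))
```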
17.4.1 Networks
The data network is fundamental to the operation of a digital image storage system, since it
provides the channel by which the elements of the system communicate. The performance
of the network is thus a key factor determining the performance of the system. This sec-
tion explains the structure of modern data networks, referring particularly to the features
which either enhance or limit performance in one way or another. Network design is a
large and complex field. This section necessarily simplifies things: the goal is to give an
appreciation of some of the issues, and to remove some of the mysteries.
The design of an ‘enterprise’ network intended to meet all the IT requirements of, for
example, the whole of a modern hospital is complex. The single physical network installed
in modern buildings is expected to serve a wide range of applications. Different applica-
tions place different demands on the network: digital telephony, often known as voice over
IP, for example, requires relatively low network bandwidth, but requires that the delay in
transmitting the information (latency) is minimised. PACS can consume large bandwidth,
but because the large data volumes are transferred between intelligent devices, latency
is less of an issue. Good practice mandates the appointment of a network design authority,
with responsibility for designing and implementing the network, and all changes to it, to
deliver the required performance.
The model provides a common basis for the coordination of standards development
for the purpose of systems interconnection, while allowing existing standards to be
placed into perspective within the overall Reference Model. The model identifies areas
for developing or improving standards. It does not intend to serve as an implementation
specification.
FIGURE 17.7
A point-to-point network connection.
TABLE 17.2
The Layers of the ISO Open Systems Interconnect Model (layer, data unit and function)
Host layers:
7. Application (Data): Network process to application
6. Presentation (Data): Data representation and encryption
5. Session (Data): Interhost communication
4. Transport (Segment): End-to-end connections and reliability
Media layers:
3. Network (Packet): Path determination and logical addressing
2. Data Link (Frame): Physical addressing
1. Physical (Bit): Media, signal and binary transmission
The model divides the networking system into layers. Functionality within a layer is imple-
mented by one or more entities, each of which can interact directly only with the layer directly
below it and can provide facilities for use by entities in the layer immediately above it. The
benefits of this are, first, that it divides overall network functionality into smaller and sim-
pler components, facilitating their design, development and troubleshooting; and second,
that the effects of changes in functionality in one layer are restricted to that layer alone.
Table 17.2 shows the layers in the ISO Open System Interconnect (OSI) model. The seven
layers cover all aspects of communication between systems, from the physical connection
to the network (layer 1) to the communications aspects of the application seen by the user.
The two layers with the most importance in the design and implementation of networks
are layer 2, the Data Link Layer, and layer 3, the Network Layer.
17.4.2 Ethernet
The predominant networking technology is now based upon developments of the Ethernet
technology, developed by the Xerox Corporation in the early 1970s. The original Ethernet
used a common coaxial cable to which all attached machines were connected. Any connected
machine could transmit on the cable, and access was therefore governed by a scheme known
as carrier sense multiple access with collision detection (CSMA/CD). When an attached
machine wanted to send a package of data, it first waited for the cable to be free for a mini-
mum time. During transmission of the data, it checked continuously for corruption of the
data due to collision with data transmitted by another machine. If a collision was detected,
the transmission was terminated, and restarted after a short, randomly chosen delay.
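The 'short, randomly chosen delay' is generated by truncated binary exponential backoff. The sketch below illustrates the idea; the 51.2 µs slot time is that of classic 10 Mbit/s Ethernet and is included only to give concrete numbers.

```python
import random

SLOT_TIME_US = 51.2   # slot time for classic 10 Mbit/s Ethernet, in microseconds

def backoff_delay(collision_count):
    """Truncated binary exponential backoff used by CSMA/CD.

    After the n-th successive collision, a station waits a random number of
    slot times chosen from 0 .. 2**min(n, 10) - 1 before retrying.
    """
    max_slots = 2 ** min(collision_count, 10)
    return random.randrange(max_slots) * SLOT_TIME_US

# Example: delays chosen after the first three successive collisions.
for n in (1, 2, 3):
    print(f"collision {n}: wait {backoff_delay(n):.1f} microseconds")
```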
This scheme was much simpler than the competing technologies at the time, and was
rapidly adopted as the basis of the Institute of Electrical and Electronics Engineers (IEEE)
standard 802, for the standardisation of local area networks (LAN). The standard became
IEEE802.3, and was later adopted by the ISO as ISO/IEEE 802/3.
FIGURE 17.8
A network arranged in a bus topology. All nodes on the network connect identically to the network cable.
FIGURE 17.9
A star topology. All the outer nodes connect to the central node of the star.
The original Ethernet topology was a bus, as shown in Figure 17.8. The term ‘bus’ derives
from electrical engineering, where a single conductor connecting together a number of cir-
cuits is known as a busbar. All nodes are connected onto a common backbone connection.
The simplicity of this arrangement is more than offset by the disadvantages. Damage to the
cable anywhere on the segment can result in signal reflections from the resulting imped-
ance discontinuity, which mimic collisions and can therefore prevent valid transmissions.
Failure of any node on the segment can cause the whole segment to cease to function.
The star topology, shown in Figure 17.9, avoids some of these problems. The hub, at the
centre of the star, essentially isolates the connections to the nodes from one another. Only
a fault in the hub itself can disrupt the whole network. The initial applications of
star topologies in Ethernet networks still used coaxial cables, though often ‘thin’ (RG/58)
cable.
A key development, though, was the move to using unshielded twisted-pair cables. In
these simple networks, the hub is a simple device that just re-transmits every packet to
every connected node. The total throughput of the hub is limited to that of a single link,
and all must operate at the same speed. Obviously, as network sizes rise, the capacity of the
hub becomes a significant restriction.
These limitations can be alleviated by bridging, where the network is divided into multiple segments, and only well-
formed Ethernet packets are passed between segments. Bridges learn where devices are
by building a map of the unique hardware identifiers (MAC addresses) of the devices, and
only forward packets to a segment when the destination is known to be on that segment,
or is unknown.
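The learning-and-forwarding behaviour of a bridge can be summarised in a few lines of code; the port numbers and MAC addresses below are invented for the example.

```python
class LearningBridge:
    """Minimal sketch of the address-learning and forwarding logic of a bridge."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}          # MAC address -> port on which it was last seen

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: remember which port the source address arrived on.
        self.mac_table[src_mac] = in_port

        # Forward: if the destination is known, send only to that port;
        # otherwise flood to every port except the one the frame came in on.
        out_port = self.mac_table.get(dst_mac)
        if out_port == in_port:
            return []                # destination is on the same segment: drop
        if out_port is not None:
            return [out_port]
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.handle_frame("aa:aa", "bb:bb", in_port=1))   # unknown: flood to [2, 3]
print(bridge.handle_frame("bb:bb", "aa:aa", in_port=2))   # learned: forward to [1]
```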
Developments of the bridge devices which inspected the entire packet before forward-
ing it, have become known as switches, though this name does not appear in the standards.
Modern Ethernet networks are generally constructed around a hierarchy of switches,
allowing segmentation of the network to increase performance, and meshing, where there
are multiple potential paths between parts of the network to increase reliability.
Switches also allow different parts of the network to operate at different speeds, allowing
high-bandwidth (and hence high cost) links to be used where they are required. Typically,
the network ‘backbone’, connecting the switches at the highest level of the hierarchy, will
be connected using optical fibres which offer the highest bandwidth, but are relatively
costly to install. In a network supporting PACS, high-bandwidth (1 Gb/s) copper links
will be concentrated in the imaging department, connecting the acquisition systems and
diagnostic workstations.
[Figure 17.10 diagram: the fields of an Ethernet frame, in order: preamble, start-of-frame delimiter, destination MAC address, source MAC address, Ethertype/length, payload (data and padding) and CRC32; the whole frame occupies 72–1526 octets.]
FIGURE 17.10
The Ethernet frame.
The Payload is the data actually carried in the packet. The maximum length, about 1500
bytes, was chosen as a compromise between maximising throughput (the larger the better),
and re-transmission overhead (the smaller the better). Current developments will probably
lead to the adoption of longer frames in due course.
The CRC32 field contains a 32-bit Cyclic Redundancy Code (CRC), calculated from all the
data fields in the frame. The CRC is calculated by the transmitting machine, and appended
to the frame. The receiving machine carries out the same calculation and compares the
result with the CRC in the packet. Virtually all transmission errors will result in a differ-
ence between the received and calculated values, and the receiver can request re-transmis-
sion of the packet.
A gap of at least 12 octets must be left idle after each packet.
Each packet can carry a maximum of approximately 1500 bytes of ‘useful’ data; the
remaining 26 bytes (plus a minimum 12-byte interpacket gap) are an overhead of the basic
transmission mechanism.
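Two small calculations follow from these figures: the CRC32 appended to each frame (the standard zlib library implements the same 32-bit CRC polynomial), and the best-case efficiency of maximum-length frames, roughly 97.5%. The payload contents here are arbitrary.

```python
import zlib

payload = b"X" * 1500                      # maximum 'useful' data in one frame
crc = zlib.crc32(payload)                  # 32-bit CRC appended by the sender
print(f"CRC32: {crc:#010x}")

# Protocol efficiency for maximum-length frames:
frame_bytes = 1526                         # preamble through CRC (see Figure 17.10)
interpacket_gap = 12
efficiency = 1500 / (frame_bytes + interpacket_gap)
print(f"Best-case efficiency: {efficiency:.1%}")   # about 97.5%
```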
17.4.3.1 IP Addressing
In an IP network, every attached device is assigned a unique address (its IP address); this
is used in the protocol to direct packets of data to that device. In the most widely used ver-
sion of the protocol (IPv4), addresses consist of 32 bits, usually expressed for readability in
a dot-decimal notation as shown in Figure 17.11.
A revision of the IP (IPv6) has been agreed which increases the size of addresses from
32 to 128 bits, in response to the huge growth in the number of devices and items of equip-
ment that now require an IP address. Whilst modern operating systems generally support
IPv6 addressing, it is not widely implemented in other devices. The remainder of this sec-
tion will therefore concentrate on IPv4.
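The dot-decimal notation is simply a readable rendering of the underlying 32-bit number, as the short example below shows; the address used is an arbitrary private-range address.

```python
import ipaddress

# A 32-bit address expressed in dot-decimal notation (private-range example).
addr = ipaddress.IPv4Address("192.168.10.25")
print(int(addr))                   # the underlying 32-bit integer
print(addr.is_private)             # True: taken from the private pool

# The same conversion done by hand: four 8-bit fields.
value = int(addr)
octets = [(value >> shift) & 0xFF for shift in (24, 16, 8, 0)]
print(".".join(str(o) for o in octets))   # "192.168.10.25"
```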
The allocation of IP addresses is managed globally by an organisation called The Internet
Assigned Numbers Authority (IANA). The allocation of addresses is a key factor in keep-
ing the internet working properly. Of the 2^32 (4,294,967,296) available addresses, about 18
million are reserved for private networks and a further 270 million for multicast addresses,
such as that used by the network time protocol, NTP. Networks implemented within an
organisation will normally use addresses from the private network pool. This should not
cause conflict because these addresses will not propagate outside the organisation. Where
such a corporate network connects to the internet, a device, generally referred to as a proxy,
is required that translates the internal, private addresses into public addresses.
It is simplest to consider the IP address for a particular device to be fixed, assigned
when the device was configured. In networks with few devices, or where there is minimal
change, this is practicable, and has the advantage that the IP address can be used as a prime
identifier for the device. In larger networks, however, or where devices may join or leave
the network, managing a fixed addressing scheme becomes difficult. An alternative is that
the ‘network’ assigns an IP address from an available pool at the time the device connects
to the network. This is governed by a specific protocol, the Dynamic Host Configuration
Protocol, DHCP, and is widely used. To minimise ‘churn’ of addresses when devices join
and leave the network repeatedly, DHCP addresses are assigned a ‘lease’; having been
assigned to a particular device, the assignment is retained for the duration of the lease. If
the device reconnects during that time, the same address will be assigned.
The IP address cannot then be used as the primary identifier, as it may change from
time to time. Instead, a unique Host Name is assigned to each device. But because IP relies
on using the IP address for routing data packets, the host name must be converted to an
IP address before the device can communicate on the network. This process, host name
FIGURE 17.11
IP addressing. The 32-bit address is broken into four 8-bit fields, each of which can be represented as a decimal
number in the range 0–255.
resolution, is required at the start of any communication with the device, and adds an
overhead to that process.
It is common practice for some devices on a network to have fixed IP addresses.
FIGURE 17.12
Subnet masking. The subnet mask 255.255.255.0 selects a group of 256 network addresses.
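The effect of the mask in Figure 17.12 can be reproduced with the standard ipaddress module; the network and host addresses are illustrative only.

```python
import ipaddress

# The mask 255.255.255.0 (a /24 prefix) groups 256 consecutive addresses.
network = ipaddress.IPv4Network("192.168.10.0/255.255.255.0")
print(network.prefixlen)           # 24
print(network.num_addresses)       # 256
print(network.broadcast_address)   # 192.168.10.255

host = ipaddress.IPv4Address("192.168.10.25")
print(host in network)             # True: same subnet, reachable without routing
```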
TABLE 17.3
Quality of Service-Related Terms
Bit rate: The total number of physically transferred bits per second over a communication link, including useful data as well as protocol overhead
Delay or latency: The time from the start of packet transmission to the start of packet reception
Packet dropping probability: The probability that a packet will be dropped (i.e. fail to be received), expressed as a percentage. A value of 0 means that a packet will never be dropped, and a value of 100 means that all packets will be dropped
Bit error rate: The proportion of bits that have errors relative to the total number of bits received in a transmission, usually expressed as ten to a negative power
A virtual LAN (VLAN) is not a physically separate network, but it allows for a logical grouping of network devices that is independent of their
physical location. VLANs are OSI Layer 2 constructs, whereas subnets are implemented
at Layer 3, as part of the IP. In an environment using VLANs, there is often a one-to-one
relationship between VLANs and subnets, though there can be multiple VLANs on one
subnet and vice versa.
VLANs offer the possibility of separating logical position on the network from physical
location, and also mean that network reconfiguration can often be done without physically
moving connections. In complex networks, these are important benefits.
17.4.4 Servers
Figure 17.13 shows the principal server components of a typical PACS concerned with the
acquisition, storage and processing of images. In a normal implementation, there will be
[Figure 17.13 diagram: imaging modalities feed an acquisition server with local storage; image servers use redundant disks and a tape library or jukebox for archive storage; workstations with local storage connect to the image servers.]
FIGURE 17.13
A typical PACS server architecture. The servers shown are ‘logical’ devices; the distribution amongst physical
machines may be different.
additional servers, particularly those required to provide interfaces to other hospital sys-
tems such as the patient master index and radiology information system. There may also
be a separate server supporting web presentation of images, though some systems incor-
porate this function into the main image servers.
Server functions are discussed in the next sections.
17.4.4.1 Image Acquisition
The acquisition server:
• Acquires image data from the radiological device, managing communication with
the devices and carrying out any error recovery tasks required
• Where required, it converts the image data to a standardised, PACS-compliant
format, meeting the requirements of the relevant DICOM standards
• Forwards the image to the image servers and display workstations
Images will generally be buffered by the acquisition server, until the image servers con-
firm that the image is safely and correctly stored. This provides insurance against the
failure of either the image server or the communications channel.
17.4.5 Virtualisation
In computing, virtualisation refers to a range of techniques intended to facilitate the sharing
of computing resources—processing, network or storage, for example—between separate
and independent applications. The techniques all work by abstraction, breaking the con-
nection between a physical resource, for example, a specific hard disk, and its application,
a unit of storage used by an application.
Virtualisation is becoming widespread throughout commercial computing. In the case
of medical systems, a caveat is required. Since many such systems, PACS included, may
be classified as medical devices, the decision to virtualise components of a PACS must be
supported by the system manufacturer.
17.4.5.1 Storage
In a hospital IT context, PACS is likely to place by far the largest demand on storage. Annual
data volumes for a filmless hospital can easily exceed 1 TB. Experience in centres which
have established PACS is that the annual volume continues to grow. New modalities, for
instance spiral CT, generate large numbers of high resolution, high dynamic range images.
Storage and its management constitute a large part of PACS costs, and are central to good
performance.
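A back-of-envelope estimate of annual storage demand can be made by multiplying study numbers by typical study sizes. Every figure in the sketch below is an assumed value used only for illustration; real workloads and study sizes vary widely and must be obtained locally.

```python
# Illustrative annual storage estimate; every figure here is an assumed,
# hospital-specific value and should be replaced by local data.
studies_per_year = {
    "CR/DR radiographs": (80_000, 30),    # (studies per year, MB per study)
    "CT": (20_000, 300),
    "MRI": (8_000, 150),
    "Ultrasound": (25_000, 20),
}

total_mb = sum(n * size for n, size in studies_per_year.values())
print(f"Estimated annual volume: {total_mb / 1_000_000:.1f} TB (uncompressed)")
```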
Virtualisation technologies affect the implementation of storage through systems such
as the storage array network (SAN). A SAN is an architecture to attach remote computer
storage devices (such as disk arrays, tape libraries and optical jukeboxes) to servers in such
a way that the devices appear as locally attached to the operating system. Sharing storage
in this way usually simplifies storage administration and adds flexibility since cables and
storage devices do not have to be physically moved to shift storage from one server to
another. SANs also tend to enable more effective disaster recovery processes. A SAN could
span a primary data centre and a distant location containing a secondary storage array.
This enables storage replication, implemented either by disk array controllers, by server
software, or by specialised SAN devices.
17.4.5.3 Strategies
The quantity and type of storage used will have marked effects on system performance, and
informed choices have to be made. The factors to consider in this include the following:
• The number of studies that are required for immediate, rapid access. This will
be affected by the patient population served, the types of study in use, and the
patient throughput
• Availability and cost of storage
There can be no hard-and-fast rules determining how much storage, or of what sort, is
ideal. Technical developments in this area continue at a surprising rate; stor-
age decisions need to be considered carefully both at the time of PACS installation, and
throughout the life of the system.
17.4.5.4 Backup and Security
Image data must be protected both against loss and against unauthorised access.
The first requirement translates into a need to implement a robust and comprehensive
backup strategy, so that up-to-date copies of image data are regularly and frequently
created. It is equally essential that these backup copies are capable of being restored
correctly.
The second requirement is addressed through the authentication and sign-on processes
for PACS, and through encryption of data outside the secure cordon. Such security mea-
sures have to be implemented within an appropriate management framework to ensure
that all users comply with the requirements.
Protecting against malicious software requires a range of protective measures such as
firewalls that prevent unauthorised access to the network system, and anti-virus software
that detects and disables malicious software. Again, management measures are necessary
to minimise the risk of introducing malicious software from portable storage, such as the
ubiquitous USB stick.
17.5.1.3 Generic
Images are widely used outside the imaging department, generally in association with the
radiologist’s report. In these circumstances, the requirements for image quality are less
stringent, but the cost of the high-grade workstations and displays is prohibitive. It is com-
mon, therefore, to provide a view of images via a web browser on more or less standard
desktop computers.
Web transmission requires that images are compressed, with the attendant artefacts and
loss of quality. Web displays will permit some limited manipulation of the image.
Figure 17.14 shows the construction of a liquid crystal display (LCD), which comprises the following layers, from the viewing side to the back:
(a) A linear polariser
(b) A glass window with a pattern of electrodes deposited on it
(c) A liquid crystal layer, typically 10 µm thick
(d) A second glass window with electrodes
(e) A rear linear polariser, with its axis of polarisation perpendicular to the other polariser
(f) A backlight or reflector
The display generates the image as a physical pattern in the liquid crystal layer which is
viewed by the light from the backlight transmitted through the layers in front of it. The
FIGURE 17.14
Liquid crystal displays operate by modulating the light transmitted through the liquid crystal layer and a pair
of crossed linear polarisers.
display’s operation relies on the fact that, in one state, the liquid crystal layer (c) rotates the
polarisation of light by 90 degrees. The liquid crystal is chosen so that, in its relaxed state
(i.e. with no electric field applied), the molecules are twisted and rotate the polarisation of
light passing through it. Thus, polarised light passing through the rear polariser is rotated
so that it will also pass through the front polariser. Applying an electric field, by placing a
voltage between electrodes on the front and rear windows, aligns the molecules and stops
the rotation of polarisation. As a result, the pattern of the electrodes shows as dark.
The matrix displays used in computer applications have ‘row’ and ‘column’ electrodes.
In early displays, these were driven directly by external electronics (so-called passive-ma-
trix displays). All modern displays are active-matrix displays, where the electrode drive cir-
cuitry is integrated on the display itself. Active-matrix addressed displays look ‘brighter’
and ‘sharper’ than passive-matrix addressed displays of the same size, and generally have
quicker response times, producing much better images.
A colour display can be created by overlaying a set of colour filters over groups of pixels.
The key characteristic of LCDs is that they are transmissive (or transflective) displays,
rather than emissive displays like CRTs.
17.6 Standards
17.6.1 DICOM
17.6.1.1 Overview
The DICOM (Digital Imaging and Communications in Medicine) standard originated in
the 1980s from a joint initiative between the American College of Radiology (ACR) and
the National Electrical Manufacturers Association (NEMA).
. . . any instrument, apparatus, appliance, software, material or other article, whether
used alone or in combination, together with any accessories, including the software . . .
and which does not achieve its principal intended action in or on the human body by
pharmacological, immunological or metabolic means, but which may be assisted in its
function by such means.
The 2007 revision of the Directive contains an important clarification on the classification
of software systems:
It is necessary to clarify that software in its own right, when specifically intended by the
manufacturer to be used for one or more of the medical purposes set out in the defini-
tion of a medical device, is a medical device. Software for general purposes when used
in a healthcare setting is not a medical device.
The importance of this is that it makes a clear separation between those systems actively
used in diagnosis or treatment, and those that would normally be thought of as ‘adminis-
trative’. There is then little doubt that PACS, used as they are for acquiring, viewing and
processing medical images, are medical devices under the Directive.
PACS has to be considered, then, in the context of the whole hospital’s operation, and mea-
sures taken to minimise the risk of PACS failure.
17.7.2 Reliability
PACS must be viewed as a high-reliability system. This implies some key design
requirements:
• Elimination of single points of failure. A careful analysis of the system and the infra-
structure that it uses may reveal particular resources (items of equipment, pieces
of software, or even people), failure of which would result in failure of PACS.
• Resilience. An analysis of external factors will reveal those non-PACS factors with
the potential to bring PACS down.
Single points of failure can generally be eliminated by redundant design, where critical
items are duplicated. Resilience factors include the use of backup and uninterruptible
power supplies. It is important, though, to ensure that both redundant and resilient design
are based on rigorous analysis.
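The benefit of duplicating a critical item can be quantified crudely as follows, assuming that failures of the two copies are independent (an assumption that itself requires the rigorous analysis referred to above); the 99% availability figure is purely illustrative.

```python
# Effect of duplicating a critical component on availability, assuming
# independent failures of the two copies.
single_availability = 0.99          # assumed availability of one server (99%)

duplicated = 1 - (1 - single_availability) ** 2
print(f"Single server:  {single_availability:.4%}")
print(f"Redundant pair: {duplicated:.4%}")

hours_per_year = 365 * 24
print(f"Expected downtime: {(1 - duplicated) * hours_per_year:.1f} hours/year")
```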
1. The representation of the colours in the image is converted from RGB to YCbCr,
consisting of one luma component (Y), representing brightness, and two chroma
components, (Cb and Cr), representing colour. This step is sometimes skipped.
2. The resolution of the chroma data is reduced, usually by a factor of 2. This reflects the
fact that the eye is less sensitive to fine colour details than to fine brightness details.
3. The image is split into blocks of 8 × 8 pixels, and for each block, each of the Y, Cb, and
Cr data undergoes a discrete cosine transform (DCT). A DCT is similar to a Fourier
transform in the sense that it produces a kind of spatial frequency spectrum.
4. The amplitudes of the frequency components are quantised. Human vision is
much more sensitive to small variations in colour or brightness over large areas
than to the strength of high-frequency brightness variations. Therefore, the magni-
tudes of the high-frequency components are stored with a lower accuracy than the
low-frequency components. The quality setting of the encoder affects the extent to
which the resolution of each frequency component is reduced. If an excessively low
quality setting is used, the high-frequency components are discarded altogether.
5. The resulting data for all 8 × 8 blocks is further compressed with a lossless
algorithm.
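Steps 3 and 4 above can be illustrated with a single 8 × 8 block. The sketch below uses SciPy's DCT routines and, for simplicity, a single uniform quantisation step rather than the per-coefficient quantisation table used by a real JPEG encoder; the block values and the step size are arbitrary.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """Two-dimensional discrete cosine transform of a block."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    """Inverse two-dimensional discrete cosine transform."""
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

# An arbitrary 8 x 8 block of luminance values (0-255).
rng = np.random.default_rng(0)
block = rng.integers(60, 200, size=(8, 8)).astype(float)

coeffs = dct2(block - 128)            # step 3: level-shift and transform
step = 20                             # a single, assumed quantisation step
quantised = np.round(coeffs / step)   # step 4: coarser storage of amplitudes
restored = idct2(quantised * step) + 128

# Reconstruction error introduced by the quantisation step.
print(np.abs(block - restored).max())
```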
The JPEG compression algorithm works best on images of realistic scenes with smooth
variations of tone and colour. For web usage, where the bandwidth used by an image
is important, JPEG is very popular. JPEG is less well suited to line drawings and other
images with sharp contrasts between adjacent pixels, where noticeable artefacts are
created. On suitable images, JPEG typically achieves compression ratios of 10:1 without
perceptible loss of quality; as the compression ratio is increased further, artefacts become
progressively more visible. It is important to remember, though, that the compressed
image is not identical to the uncompressed.
Pixel   1  2  3  4  5  6  7  8  9  10 11 12 13 14
Values  15 15 15 15 15 15 15 50 50 15 15 15 15 15
RLE     7, 15   2, 50   5, 15
FIGURE 17.15
Run-length encoding is a simple lossless compression technique which works well with line images.
In the example of Figure 17.15 there are 14 values in all, requiring 14 bytes to store them. But if, instead of storing the value of
every pixel we store pixel count/pixel value pairs, we obtain the result in the RLE row. In
this trivial example, the storage requirement has been reduced from 14 to 6 bytes. For line
images, this process is very effective, but where there are continuous variations of grey
shade, as in many medical images, the compression achieved is not great.
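A minimal run-length encoder reproducing the example of Figure 17.15 might look like this; the function name and the representation as (count, value) pairs are choices made for the sketch, not a prescribed format.

```python
def rle_encode(values):
    """Run-length encode a sequence as (count, value) pairs."""
    if not values:
        return []
    runs = []
    current, count = values[0], 1
    for v in values[1:]:
        if v == current:
            count += 1
        else:
            runs.append((count, current))
            current, count = v, 1
    runs.append((count, current))
    return runs

pixels = [15] * 7 + [50] * 2 + [15] * 5       # the row of Figure 17.15
print(rle_encode(pixels))                     # [(7, 15), (2, 50), (5, 15)]
```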
Ideally, a lossless compression technique should not require prior classification of the
image to select the optimum method to use.
[Figure 17.16 diagram: repeated stages of low-pass filtering (LPF) and high-pass filtering (HPF), each followed by downsampling (DS), produce successive decomposition levels (up to level 3).]
FIGURE 17.16
The wavelet transform process.
[Figure 17.17 diagram: the sub-band layout after two levels of decomposition of the original image (LL0); the first level gives the detail bands HL1, LH1 and HH1, and the second level decomposes the remaining low-pass band into LL2, HL2, LH2 and HH2.]
FIGURE 17.17
The core of the JPEG2000 compression process.
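The recursive sub-band decomposition can be illustrated with the simplest wavelet, the Haar wavelet; JPEG 2000 itself uses more sophisticated 5/3 and 9/7 wavelets, and the averaging normalisation used below is chosen only to keep the sketch short.

```python
import numpy as np

def haar_level(image):
    """One level of a simplified 2-D Haar wavelet decomposition.

    Returns the four sub-bands (LL, HL, LH, HH); the LL band can be
    decomposed again to give the next level, as in Figure 17.17.
    """
    a = image[0::2, 0::2].astype(float)   # even rows, even columns
    b = image[0::2, 1::2].astype(float)
    c = image[1::2, 0::2].astype(float)
    d = image[1::2, 1::2].astype(float)

    ll = (a + b + c + d) / 4              # low-pass in both directions
    hl = (a - b + c - d) / 4              # detail in one direction
    lh = (a + b - c - d) / 4              # detail in the other direction
    hh = (a - b - c + d) / 4              # diagonal detail
    return ll, hl, lh, hh

image = np.arange(64, dtype=float).reshape(8, 8)
ll, hl, lh, hh = haar_level(image)
ll2, hl2, lh2, hh2 = haar_level(ll)       # second level, applied to LL only
```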
17.9 Conclusion
As in many other areas of life, digital technology has brought huge changes in the way that
we acquire, use and exchange information. In medical imaging, PACS affects all aspects
of the radiologist’s working environment: the entire clinical team relies on the availability,
facilities and performance of the system. Whilst it is possible to operate in this environ-
ment without an understanding of the underlying principles, it is surely better to work
with an appreciation of the technical and scientific principles which allow PACS to work
at all, but, more importantly, limit what can be expected of it. Laying the basis for this
appreciation has been the goal of this chapter.
References
DICOM (2009) DICOM Standards. Online. Available from http://medical.nema.org/ [accessed 24th January 2011].
European Parliament (2007) Directive 2007/47/EC of the European Parliament and of the Council of 5 September 2007. Online. Available from http://ec.europa.eu/enterprise/medical_devices/revision_mdd_en.htm [accessed 24th January 2011].
ISO (1994a) ISO/IEC 7498-1:1994: Information technology—Open systems interconnection—Basic reference model: The basic model. Online. Available from http://www.iso.org/iso/catalogue_detail.htm?csnumber=20269 [accessed 24th January 2011].
ISO (1994b) ISO/IEC 10918-1:1994: Information technology—Digital compression and coding of continuous-tone still images: Requirements and guidelines. Online. Available from http://www.iso.org/iso/catalogue_detail.htm?csnumber=18902 [accessed 24th January 2011].
ISO (2004) ISO/IEC 15444-1:2004: Information technology—JPEG 2000 image coding system: Core coding system. Online. Available from http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=37674 [accessed 24th January 2011].
Nyquist H (1928) Certain topics in telegraph transmission theory. Trans. AIEE, vol. 47, pp. 617–644, April, 1928.
Shannon CE (1948) A mathematical theory of communication. Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, July, October, 1948.
Further Reading
Brookshear J G (2008) Computer Science: An Overview. Pearson Education (A good introduction to
computer science concepts, for those interested in exploring the basics of computing and
computers).
Oakley J D (ed.) (2003) Digital Imaging. Cambridge University Press (This is a good, introductory
level text covering many of the practicalities of selecting and using PACS).
Huang H K (2004) PACS and Imaging Informatics—Basic Principles and Applications (2nd edition). Wiley
Blackwell (An in-depth, authoritative book covering the technical and operational aspects of
PACS).
Exercises
1. What are the consequences of the Nyquist-Shannon theorem for the digital acqui-
sition and storage of images? What are the differences in requirements between a
CT scan image and a gamma camera image?
2. Why is it common practice in modern data networks to use VLANs? How does a
VLAN differ from a subnet in an IP network?
3. Why could a PACS be classified as a ‘Medical Device’, and what are the implica-
tions of this?
4. How are the servers used in a typical PACS? How may the use of virtual serv-
ers affect this, and what are the advantages and disadvantages of server
virtualisation?
5. What is the DICOM standard and what should the standard define? How does the
form of the DICOM standard differ from that of the ISO standards?
6. Explain how a liquid crystal display works. What are the advantages and disad-
vantages for displaying diagnostic images when compared to a cathode ray tube?
7. Why is data compression important for radiological images and how can this
be achieved? What are the essential differences between lossy and lossless
compression?
18
Multiple Choice Questions
CONTENTS
18.1 MCQs.................................................................................................................................... 633
18.2 MCQ Answers.....................................................................................................................664
18.3 Notes .................................................................................................................................... 667
The following multiple choice questions (MCQs) have been constructed so as to test your
knowledge of the subject. Most answers are either given in the book or may be obtained by
deductive reasoning. A few may require some additional reading, particularly for a fuller
explanation.
The questions have been grouped in the same order as the chapter headings. There is no
constraint on the number of parts of any question that may be right or wrong.
Unless otherwise stated, assume in all practical radiographic situations that any change
in conditions is accompanied by an adjustment of mAs to give a similar signal at the recep-
tor, for example, optical density on the film. Note that without this constraint many of the
questions would be ambiguous.
18.1 MCQs
1.1 The nucleus of an atom may contain one or more of the following:
a) Photons
b) Protons
c) Neutrons
d) Positrons
e) Electron traps
1.2 The atomic number of a nuclide:
a) Is the number of neutrons in the nucleus of an atom
b) Determines the chemical identity of the atom
c) Will affect the attenuation properties of the material at diagnostic X-ray
energies
d) Is the same for 11-C as for 14-C
e) Is increased when a radioactive atom decays by negative beta emission
5.6 The following statements refer to the phosphor layer in an image receptor:
a) Matching the K shell edge to the X-ray spectrum increases absorption
efficiency
b) Addition of dye to the phosphor increases absorption efficiency
c) Increasing the kVp increases the absorption efficiency
d) A thicker phosphor increases efficiency of conversion of X-ray energy into
light
e) Rare earth phosphors increase the efficiency of conversion of X-ray energy
into light
5.7 Receptors for digital radiography (DR) have the following properties:
a) Detector arrays can cover a large area (30 cm × 40 cm)
b) In an indirect detector the initial X-ray detector is a phosphor
c) In a direct detector the initial X-ray detector is amorphous silicon
d) Image blurring by visible light spreading is only a problem with indirect
detectors
e) Electronic noise and quantum noise may be a problem with all receptors
5.8 The following statements refer to receptors for computed radiography (CR)
a) The phosphor screen is a re-usable plate
b) The wavelengths of the laser stimulating light and the output light are
similar
c) For general radiology the spatial resolution of CR is better than that of film
screen
d) The CR receptor latitude is better than film-screen latitude
e) CR receptors and DR receptors have similar matrix sizes and resolutions
5.9 Charge coupled devices (CCDs) have the following advantages over television
cameras:
a) A single device can cover a larger field of view
b) The resolution does not vary with field size
c) A scanning beam of electrons is not necessary
d) The output is a digital image
e) CCDs can record images directly from X-rays
6.1 The following statements are true regarding the radiographic image:
a) Increasing the field size increases markedly the geometric unsharpness
b) The heel effect is most pronounced along the anode side of the X-ray beam
c) The visible contrast on a fluorescent screen between two substances of equal
thickness depends on the differences between their mass attenuation coef-
ficients and densities
d) For an X-ray generator operating above 150 kV bone/soft tissue contrast is
less than soft tissue/air contrast
e) The limiting spatial resolution between different parts of the image depends
on the contrast
7.3 The amount of quantum noise in the image produced by an image intensifier
and television system for fluoroscopy can be reduced by the following:
a) Increasing the screening time
b) Increasing the exposure rate
c) Use of thicker intensifier input screens
d) Increasing the brightness gain on the image intensifier
e) Reducing electronic noise in the television chain
7.4 In the analysis of the spatial frequency response of an imaging system which
uses an image intensifier as the receptor:
a) The modulation transfer function (MTF) is normalised to one at zero spatial
frequency
b) The resultant MTF is the sum of the MTFs of the components
c) MTF and line spread function (LSF) contain the same information
d) The MTF always decreases with increasing spatial frequency
e) The resultant MTF is independent of the design of the X-ray tube
7.5 The following changes will improve the contrast to noise ratio of a digital image:
a) Increasing the focus to receptor distance
b) Decreasing the matrix size
c) High pass filtering
d) Frame averaging
e) ROC analysis
7.6 In the assessment of a diagnostic imaging procedure:
a) The line spread function is an effective measure of resolution
b) The modulation transfer function measures the performance of the com-
plete imaging system
c) The sum of the true positive and false positive images will be constant
d) A strict criterion for a positive abnormal image is a better way to discrimi-
nate between two imaging techniques than a lax criterion
e) Bayes Theorem provides a way to allow for the prevalence of disease
7.7 Receiver operator characteristic (ROC) curves:
a) Provide a means to compare the effectiveness of different imaging
procedures
b) Are obtained when contrast is varied in a controlled manner
c) Normally plot the false positives on the X axis against the false negatives on
the Y axis
d) Require the observer to adopt at least five different visual thresholds
e) Reduce to a straight line at 45 degrees when the observer guesses
7.8 Computer aided diagnosis:
a) Can only be used with digitised images
b) Has been used both as a ‘stand alone’ system and to provide prompts to an
expert observer
8.10 In computed tomography the causes of certain artefacts in the image are as
follows:
a) Streak artefacts—patient movement
b) Ring artefacts—mechanical misalignment
c) Rib artefacts—beam hardening
d) Low-frequency artefacts—under-sampling high spatial frequencies
e) Partial volume effects—beam width not including the whole pixel
9.1 In asymptomatic mammographic screening:
a) X-rays generated below 20 kV must be used
b) The X-ray tube may have a rhodium target
c) A focus film distance of at least 1 m is necessary
d) Dose to the normal-sized breast must be less than 2 mGy per view
e) A DR system (flat panel detector) can be fitted retrospectively to a film-
screen mammography unit
9.2 The spectrum of X-rays emitted from an X-ray tube being used for
mammography:
a) Will contain characteristic X-rays if a molybdenum anode is being used
b) Is likely to contain X-rays with a maximum energy of 50 keV
c) At fixed kilovoltage, has a total intensity that depends on the atomic num-
ber of the anode
d) Is independent of the tube window material
e) Will pass through additional filtration before the X-rays fall on the patient
9.3 The following statements relate to high kV imaging:
a) Scattered radiation is a major problem
b) Exposures can be shorter than at low kV
c) Problems associated with the limited dynamic range of flat panel digital
detectors can be eliminated
d) Photoelectric interactions contribute very little to subject contrast
e) Increasing the kV has no effect on magnification or distortion
9.4 If the focus film distance for a particular examination is increased from 80 cm
to 120 cm with the image receptor kept as close to the patient as possible:
a) The image is magnified more
b) Geometric unsharpness is reduced
c) A larger film should be used
d) The entrance skin dose to the patient is reduced
e) The exit skin dose to the patient is unchanged
9.5 Requirements for good magnification radiography include the following:
a) A tube-patient distance of at least 1 m
b) A fine focus X-ray tube
c) A higher mAs than conventional radiography of the same body part at the
same kV
d) Stationary grids
e) Image receptors with higher intrinsic resolution
9.6 In digital subtraction angiography the logarithm of pixel values is taken before
subtraction to remove the effect of variation in the following:
a) Overlying tissue
b) Underlying tissue
c) Scatter
d) X-ray tube output
e) Noise
9.7 In digital subtraction angiography the input dose rate to the patient:
a) Is reduced by using 0.2 mm copper
b) Is increased as the preferred method to improve image quality
c) Can be monitored using a dose area product meter
d) Must be increased with decreasing field size to maintain image quality
e) Is reduced when faster frame rates are used
9.8 The following would be features of a state-of-the-art system for interventional
work:
a) 2 mm or 4 mm selectable focal spot size
b) Adjustable pulse frequencies
c) Variable source-detector distance
d) 0.2–3 mm variable aluminium filtration
e) Higher dose rates for digital cine mode than fluoroscopy
9.9 For a PA projection of the chest of a small child or neonate:
a) A grid should normally be used to reduce scatter
b) Exposure times may be as short as a few milliseconds
c) More added filtration is used than for the corresponding adult examination
d) Criteria of image quality for an adult must be modified
e) An acceptable entrance surface dose for a neonate would be 200 µGy
10.1 Technetium-99m is a suitable radionuclide for imaging because:
a) Its half-life is such that it can be kept in stock in the department for long
periods
b) Its gamma ray energy is about 140 keV
c) It emits beta rays that can contribute to the image
d) It can be firmly bound to several different pharmaceuticals
e) A high proportion of disintegrations produce gamma rays
10.2 A molybdenum-technetium generator contains 3.7 GBq of molybdenum-99 at
0900 hours on a Monday. The activity of technetium-99m in equilibrium with
the molybdenum-99:
a) At any time depends on the half-life of technetium-99m
b) Is 3.7 GBq at 0900 hours on the Monday
11.4 In quantitative PET, corrections to the data may be required for the following:
a) Detector dead time
b) Scatter effects
c) Radioactive decay
d) Partial volume effects
e) Quantum effects
11.5 In PET/CT:
a) The patient is moved from one imager to the other as quickly as possible.
b) The CT scan is used to provide attenuation data
c) The CT scanner is adjusted to operate at the same effective keV as the radia-
tion from the positron emitter
d) Patient movement in the two scans will be similar since the imaging times
are comparable
e) For radiotherapy treatment planning the two imaging methods give good
agreement on gross tumour volume
11.6 The following statements relate to the whole body dose to the operator from a
PET scan using 375 MBq of F-18 FDG:
a) Injecting and positioning the patient are major sources of dose
b) The operator would probably have to be classified if engaged in more than
3 scans/week
c) Ingestion of radioactivity will contribute negligibly to the dose
d) Doses in the control room are at safe public levels
e) Doses from samples taken from the patient can be disregarded because of
the short half-life of the radionuclide
12.1 Equivalent dose is:
a) Affected by radiation weighting factors
b) The absorbed dose averaged over the whole body
c) For X-rays numerically equal to the absorbed whole body dose in grays
d) Used to specify some dose limits
e) Used to specify doses to patients from X-ray examinations
12.2 The following statements refer to the biological effects of radiations on cells and
tissues:
a) The risk of fatal cancer is higher than the risk of severe hereditary disease
b) A tissue reaction increases steadily in severity from zero dose
c) Stochastic effects may be caused by background radiation
d) Chromosomal aberrations are strong evidence that ionising radiation causes
mutations in humans
e) Equal equivalent doses of different radiations cause equal risk to different
tissues
12.3 With respect to the responses of cells to ionising radiation:
a) Stem cells are generally more sensitive than differentiated cells
c) Patient compression
d) Using a small focal spot
e) An air gap technique
13.7 Tissue weighting factors:
a) Allow for the fact that all tissues are not equally sensitive to radiation
b) Must sum to 1.0 for all tissues
c) Have SI units of sievert
d) Allow effective doses to be calculated very accurately
e) Are different for males and females
13.8 The effective dose to a patient from a single plane film radiograph is as follows:
a) Decreased at lower mA
b) Decreased by the use of grids
c) Decreased by using a faster film
d) Decreased by decreasing the field size
e) Always less than 5 mSv
13.9 The following dosimetric quantities are approximately correct:
a) The effective dose from a barium enema is 7 mSv
b) The annual whole body equivalent dose limit for members of the public is
15 mSv
c) A chest X-ray entrance skin dose is 150 µGy
d) The effective dose to a patient from a lung perfusion scan with 100 MBq
Tc-99m is 1 mSv
e) The average annual effective dose to the UK population from medical expo-
sures is 380 µSv
13.10 The risk from injected radioactivity:
a) Is independent of the biological half-life in the body
b) Depends on both the radionuclide and on the pharmaceutical form
c) Decreases steadily with time after injection
d) May be primarily due to the dose to a single organ
e) Is a factor limiting the quality of radionuclide images
13.11 The following statements relate to risks from radiological examinations:
a) The risk of fatal cancer from a lumbar spine examination is about 5 in 10⁵
b) An imposed risk of 1 in 10⁴ is likely to be challenged by the general public
c) A PA examination of the chest involves a smaller risk than a single anaesthetic
d) The risk from a CT examination varies from about 1 in 10⁴ to 1 in 10⁵
e) Diagnostic doses do not cause tissue reactions
13.12 The following statements relate to the use of diagnostic X-rays during pregnancy:
a) The dose to the uterus is usually calculated to obtain an estimate of the dose
to the foetus
b) Severe mental retardation is the most serious risk during the first few weeks
of pregnancy
d) Lead curtains reduce staff doses more for over-couch than for under-couch
tubes
e) For some interventional procedures two or more personal monitors may be
necessary
14.10 The following precautions are taken to minimise staff doses in nuclear
medicine:
a) Keeping bottles containing radionuclides in lead pots
b) Using tongs to handle bottles containing radionuclides
c) Working over a tray when transferring material between containers
d) Wearing leaded gloves when giving injections
e) Staying as far away from a patient who has received radioactivity as good
patient care permits
14.11 The following are desirable design features for a diagnostic X-ray room:
a) All walls should be lined with lead sheet of a suitable thickness
b) No special care is required over the design of the door into the room because
the kV is low
c) Extra shielding may be required wherever the primary beam can strike the
wall
d) The dose rate should not exceed 1 µSv/h in the radiographer’s cubicle
e) Suitable hanging rails for lead-rubber aprons should be provided
14.12 Responsibilities of an employee working in the radiology department under the
UK Ionising Radiations Regulations (1999) include the following:
a) Only to use X-ray equipment after adequate training
b) Wearing personal protective equipment provided
c) Wearing personal monitors as instructed
d) Notifying the employer immediately of any suspected over-exposure to
radiation
e) Notifying the employer immediately of any malfunction of equipment
14.13 UK Ionising Radiations (Medical Exposure) Regulations (2000):
a) Do not apply to departments with only one X-ray room
b) Require training records to be kept for radiologists and radiographers
c) Require that the Dose Reference Level is never exceeded
d) Require a Local Ethics Committee to be set up
e) Do not allow research on children involving ionising radiations
14.14 Under the UK Ionising Radiations (Medical Exposure) Regulations (2000):
a) The same person cannot act as referrer, operator and practitioner
b) The responsibility for implementation lies with the head of the radiology
department
c) The practitioner must not justify a request if insufficient information is given
d) The medical physics expert is responsible for carrying out the investigation
if a dose ‘greater than intended’ is given to a patient
d) In pulse wave Doppler, the pulse repetition frequency is in the kHz range
e) In a pulsed Doppler system, aliasing occurs if a negative Doppler shift
exceeds half the pulse repetition frequency
15.11 Concerning new ultrasound techniques:
a) In tissue harmonic imaging, the second harmonic is the most useful
b) Pulse-inversion harmonic methods are superior to frequency-based har-
monic filtering methods
c) Compound imaging is a combination of B-mode imaging and spectral
Doppler
d) Infusing liquids with bubbles is an important technique for creating hyper-
echoic contrast
e) Ultrasound elastography allows the non-invasive assessment of tissue
hardness
15.12 Concerning ultrasound bio-effects and safety:
a) Manipulation of the receiver controls makes no difference to the safety of a
scan
b) The thermal index indicates approximately the temperature rise in centi-
grade degrees being produced in the tissue
c) Bone in the field of view decreases the thermal effect of ultrasound by atten-
uating its energy
d) Ultrasound can produce ionising effects via inertial cavitation
e) An ultrasound beam exerts a radiation force which can cause mechanical
stresses in blood cells
16.1 Imaging by magnetic resonance:
a) Requires at least a 1 T static magnetic field
b) Depends on excitation of nuclei by a time varying RF magnetic field
c) Can demonstrate blood flow without injection of contrast medium
d) Produces a digital image
e) Can use diamagnetic materials to enhance contrast
16.2 In magnetic resonance imaging:
a) The strength of the signal increases as the strength of the static magnetic
field increases
b) Short T1 values are associated with highly structured tissues
c) Field gradients must be applied to obtain spatial information
d) A gradient echo sequence with long repetition time (TR) and long echo time
(TE) will produce T1 weighted contrast
e) The major hazard to health limiting the static magnetic field is the associ-
ated temperature rise in the tissues
16.3 In magnetic resonance imaging of protons:
a) The SI unit of magnetic induction (magnetic flux density) is tesla per metre
b) A magnetic field varying with the Larmor frequency is used to define the
slice to be imaged
Question a) b) c) d) e)
4.5 T T T T T
4.6 T F F F T
4.7 T T F T F
4.8 T T F F T
4.9 T T F T T
4.10 T F T T F
5.1 F F T T T
5.2 F T F T F
5.3 T T F F F
5.4 F T T F F
5.5 F T F T F
5.6 T F F F T
5.7 T T F T T
5.8 T F F T T
5.9 F T T F F
6.1 F T T T T
6.2 T F F T F
6.3 F F T T T
6.4 T T F T F
6.5 F F F T T
6.6 T F F T T
6.7 T F T T F
6.8 T F T T F
6.9 F T T F F
6.10 T T T F F
7.1 F T F T T
7.2 T T T F F
7.3 F T T F F
7.4 T F T T F
7.5 F F F T F
7.6 T T F F T
7.7 T F F F T
7.8 T T T T F
7.9 F T T F T
8.1 T T T T T
8.2 T F T F F
8.3 F T T F F
8.4 T F T T F
8.5 F F T F T
8.6 T F F T T
8.7 T F F F T
8.8 T F T F T
8.9 T F F T F
8.10 T F T T F
9.1 F T F T F
9.2 T F T F T
9.3 T T F T T
9.4 F T F T T
9.5 F T T F F
9.6 T T F T F
9.7 T F T T F
9.8 F T T F T
9.9 F T T T F
10.1 F T F T T
10.2 F T F F F
10.3 F F T F F
10.4 T F T F F
10.5 T T T F T
10.6 F F T F T
10.7 F F F T F
10.8 T T T T T
10.9 F T F T T
10.10 F T T F T
11.1 F T T T F
11.2 F F T T F
11.3 F T T F T
11.4 T T T F F
11.5 F T F F F
11.6 T F T T F
12.1 T F T T F
12.2 T F T T F
12.3 T T T F F
12.4 T F T F T
12.5 F T F T F
12.6 T T F F F
12.7 T F T F T
12.8 T F T T F
13.1 F T F F F
13.2 T F T F F
13.3 T F T F F
13.4 F T T F F
13.5 F F F T T
13.6 T F T F F
13.7 T T F F F
13.8 F F T T T
13.9 T F T T T
13.10 F T F T T
13.11 T T T F T
13.12 T F T F T
13.13 T F F F F
14.1 F T F T T
14.2 T T T F T
14.3 T F F F T
14.4 F T F T T
14.5 F T T F T
14.6 T T F F F
14.7 T T T F F
14.8 T F F F F
14.9 F T T T T
14.10 T T F F T
14.11 F F T T T
14.12 T T T T F
14.13 F T F T F
14.14 F F T F F
15.1 T F F F T
15.2 F F F T T
15.3 T F F T F
15.4 F T T T T
15.5 F T T F F
15.6 T T T F F
15.7 F T T T F
15.8 T T T T T
15.9 F T F F F
15.10 F F F T T
15.11 T T F T T
15.12 F T F T T
16.1 F T T T F
16.2 T T T F F
16.3 F F T F T
16.4 T T T T T
16.5 T T F F T
16.6 T F T T T
16.7 F T T T T
16.8 F T F T T
16.9 F T T F T
17.1 F T T F F
17.2 F T F F T
17.3 F T F F T
17.4 F F T T F
17.5 F T F T T
17.6 T F T T T
18.3 Notes
Some of these notes illustrate one of the weaknesses of MCQs in this subject area. It is
often very difficult to set non-trivial questions that are not potentially ambiguous with
deeper knowledge. Thus MCQs are a good teaching aid for testing an understanding
of the subject and sometimes promote further discussion. They are less satisfactory as a
method of examination.
1.3d) As Auger electrons (see Persson L, The Auger effect in radiation dosimetry,
Health Phys 67, 471–6, 1994).
1.7e) Ionising radiations are oxidising agents, for example, ferrous ions to
ferric ions.
2.2b) The high density prevents too much electron penetration into the anode.
2.3d) kV and mAs will affect the effective spot size because they will influence the
performance of the cathode focussing cup in forming a small target on the
anode surface.
2.6d) There is substantial self-absorption of X-rays in the anode and this will be
affected by anode angle.
2.8e) Patient dose may be unacceptably high.
2.10c) The extra heat is lost mainly by conduction.
2.10e) Black bodies are good emitters of radiation.
2.12e) Actual kV, and hence keV, is above the K shell energy for longer.
3.1b) Because of absorption edges.
3.1d) Any such radiation will be of such low energy that it is absorbed within the
body, not scattered.
3.2b) The second half value thickness will be greater because of beam hardening.
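As a brief illustration (not part of the original note): for a monoenergetic beam the transmitted intensity falls as

    I(x) = I_0 \, 2^{-x/\mathrm{HVT}} = I_0 \, e^{-\mu x},

so every half value thickness is the same. A diagnostic beam is polyenergetic; the first half value thickness preferentially removes the lower-energy photons, the transmitted beam is harder (smaller effective \mu), and the second half value thickness is therefore larger than the first.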
3.3d) This statement would normally be true for the whole body because of the
effects of attenuation, but is not true for the Compton process itself.
3.4c) Healthy lung tissue is less dense than water.
3.5b) There will be a small amount of elastic scattering.
3.5d) Characteristic radiation will be produced if a K shell (or higher shell) vacancy
is created. If this occurs in the body (with low atomic number elements) the
radiation may be of too low energy to escape.
3.7b) Scatter may be reduced but is never eliminated.
3.8a) Some X-rays will be Compton scattered. Note also that many photons will
not interact at all, but these are excluded by the stem of the question.
3.9b) See Section 9.2.3.
3.9e) There are several reasons but the most important is that the filter, in practice,
would be too thin.
4.2b) Some energy will be deposited as excitation.
4.3e) The voltage does not need to be very constant on the ionisation plateau.
4.7d) Although the electric field will be much higher than for an ionisation cham-
ber, the operating voltage may be similar.
5.4e) Although the intensifying screen stops a higher fraction of photons than
film, the number of incident photons will be greatly reduced.
5.9d) Both outputs are digital images so this is not an advantage.
6.3e) If the penumbra are excluded, the larger focal spot will give a smaller image;
if the penumbra are included, it will give a larger image.
6.5c) This assumes the field size is set at the cassette; if it were set on the patient
surface there might be less scatter because of less beam divergence.
6.6a and d) Both reduce the scatter reaching the film.
6.10c) A major contribution to the variations in grey scale will be quantum mottle.
7.1d) Quantum noise = √N, so it increases with dose. The signal-to-noise ratio
(usually the key consideration in imaging) also equals √N and improves with
increasing dose.
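Spelling the statistics out (a standard Poisson-counting step, not reproduced in the note): for a mean of N photons detected per image element,

    \sigma = \sqrt{N}, \qquad \mathrm{SNR} = \frac{N}{\sigma} = \sqrt{N},

so both the absolute noise and the signal-to-noise ratio grow as the square root of the dose; a four-fold increase in dose halves the relative noise.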
7.9e) The dominant factor will be the way in which the quantum efficiency of the
receptor varies with keV. This will depend on the K shell absorption edges of
the constituent material.
8.7c) The two are unrelated: quantum noise is caused by too few photons, whereas
afterglow is a property of the receptor.
9.7b) Increasing the concentration of iodine contrast is more effective.
9.7e) Each frame needs the same photon density so photon flux must be
increased.
9.9e) 50 µGy should not normally be exceeded; 30 µGy should suffice.
10.2d) 4 days is less than two half-lives for Mo-99.
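As a rough check, taking the Mo-99 half-life as approximately 66 hours (a standard value, not quoted in the note):

    \frac{A(96\ \mathrm{h})}{A(0)} = \left(\tfrac{1}{2}\right)^{96/66} \approx 0.36,

so more than a third of the original molybdenum-99 activity remains after 4 days.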
10.2e) Note the question says ‘equilibrium’; by the time the generator has returned
to equilibrium, it will not matter whether it has been eluted or not.
10.10d) This thickness of crystal will cause loss of resolution with minimal increase
in sensitivity for Tc-99m.
11.2b) Compton scattering, not elastic scattering.
11.3a) The contributions are added in quadrature.
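That is (a standard combination rule for independent blurring contributions, stated here for completeness rather than taken from the text): if the individual contributions to the overall resolution are R_1, R_2, \ldots, the combined value is

    R = \sqrt{R_1^2 + R_2^2 + \cdots},

so the largest single contribution tends to dominate.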
11.3d) It is better for the PET imager.
11.5d) PET imaging time is much longer.
12.5e) X-rays of all energies are low LET radiations, so the quadratic term in the
model for double strand breaks (see Section 12.7.1) is important for both.
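For reference, the model referred to is usually written in linear-quadratic form (a standard expression, not reproduced in the note):

    E(D) = \alpha D + \beta D^2,

where the \alpha D term represents damage produced by single tracks and the \beta D^2 term damage requiring two independent tracks; the quadratic term is therefore significant for low-LET radiations such as diagnostic X-rays.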
12.8b) At Hiroshima there was a substantial neutron component.
13.1e) Filtration cannot increase any component of the spectrum, so the exit dose
cannot increase with the specified conditions.
13.2b) The variation is in the input dose; the exit dose is governed by the receptor sensitivity.
13.2e) For this to be true the half value thickness would have to be 10 cm—it is
much less.
13.6c) See general instructions at the beginning of the questions. Compression will
actually reduce the amount of tissue in the beam.
13.7 Many values of w_T are only given to one significant figure.
13.8d) See general instructions.
13.9c) A reasonable mean from quite a wide range of quoted values.
13.11e) Interventional procedures may cause skin reactions but they are an adjunct
to therapy, not diagnostic.
13.13e) Although hereditary effects cannot be positively excluded, the risk will be
very low and no higher than for adults.
14.1e) The mean atomic number is similar to that of air and soft tissue.
14.7e) Although this is recommended good practice, it may not be feasible—for
example, mobile radiography.
14.9e) See Section 9.6.4.
14.10d) It is preferable to retain dexterity and work quickly.
14.12e) Only if a patient received a dose ‘much greater than intended’.
15.6e) Full 2D is not necessary. Either a lens or a few rows of detectors covering the
slice-width direction (to allow focussing) will suffice.
15.9b) It is the difference in frequency between the outgoing and returning signals
that is in the audible range.
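As a worked illustration with assumed typical values (these numbers are not taken from the text): the Doppler shift for a pulse-echo system is

    f_D = \frac{2 f_0 v \cos\theta}{c},

so with f_0 = 5 MHz, v = 0.5 m/s, \cos\theta = 1 and c = 1540 m/s, f_D ≈ 3.2 kHz, comfortably within the audible range even though the transmitted frequency is far above it.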
15.9e) The word ‘power’ in the name of this mode refers to the measurement of
echo power and is not a reference to the acoustic power used, which is less
than in most other Doppler modes.
15.12a) Improving the quality of the received image reduces scan time.
16.7a) The amount of C-13 present is too small to give an adequate signal; C-12 gives
no signal.
FIGURE 10.19
Functional image of a normal MUGA study. (a) The phase image represents the phase (or timing) of the contraction of the heart chambers. (b) The amplitude image represents the amplitude of the contraction; in this case it can be seen that the largest contraction occurs apically and along the lateral wall. Both of these parameters are derived from the Fourier fitted curve (Figure 10.18).
FIGURE 11.1
Typical PET/CT review screen showing CT, PET and fused image data sets (CT scout view plus transaxial and coronal CT, PET and fused images). The bottom right hand image shows a rotating maximum intensity reprojection image.
FIGURE 15.38
A 3D ultrasound rendering of a foetal face.
FIGURE 15.44
Doppler ultrasound images showing vasculature in the kidney. There are two main ways of mapping Doppler shifts detected within the ‘colour box’ on the B-mode image: (a) a colour flow map (CFM) shows mean Doppler shifts at each point; (b) a power Doppler map shows the strength of Doppler-shifted echoes at each point. Notice the reference colour bars to the left of each image.
Physics
With every chapter revised and updated, Physics for Diagnostic Radiology, Third
Edition continues to emphasise the importance of physics education as a critical
component of radiology training. This bestselling text helps readers understand
how various imaging techniques work, from planar analogue and digital radiology
to computed tomography (CT), nuclear medicine and positron emission tomography
(PET) to ultrasound imaging and magnetic resonance imaging (MRI).
ISBN: 978-1-4200-8315-6