Principles of Computerized Tomographic Imaging
Malcolm Slaney
Schlumberger Palo Alto Research

Electronic copy (c) 1999 by A. C. Kak and Malcolm Slaney. Copies can be made for personal use only.

IEEE PRESS
The Institute of Electrical and Electronics Engineers, Inc., New York
This book is set in Times Roman.

Copyright (c) 1988 by THE INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS, INC.
345 East 47th Street, New York, NY 10017-2394
All rights reserved.

PRINTED IN THE UNITED STATES OF AMERICA

IEEE Order Number: PC02071

Library of Congress Cataloging-in-Publication Data
Kak, Avinash C.
Principles of computerized tomographic imaging.
Published under the sponsorship of the IEEE Engineering in Medicine and Biology Society.
Includes bibliographies and index.
1. Tomography. I. Slaney, Malcolm. II. IEEE Engineering in Medicine and Biology Society. III. Title.
RC78.7.T6K35 1987    616.07'572    87-22645
ISBN 0-87942-198-3
Contents

Preface ix

1 Introduction 1
   References 3

2 Signal Processing Fundamentals
   2.1 One-Dimensional Signal Processing
   2.2 Image Processing 28
      Point Sources and Delta Functions · Linear Shift Invariant Operations · Fourier Analysis · Properties of Fourier Transforms · The Two-Dimensional Finite Fourier Transform · Numerical Implementation of the Two-Dimensional FFT
   2.3 Bibliographic Notes 47
   References 49

3 Algorithms for Reconstruction with Nondiffracting Sources
   3.1 Line Integrals and Projections
   3.2 The Fourier Slice Theorem
   3.3 Reconstruction Algorithms for Parallel Projections 60
      Theory
   3.4 Reconstruction from Fan Projections 75
      A Re-sorting Algorithm
   3.5 Fan Beam Reconstruction from a Limited Number of Views 93
   3.6 Three-Dimensional Reconstructions
      Three-Dimensional Filtered Backprojection
   3.7 Bibliographic Notes 107
   3.8 References

4 Measurement of Projection Data: The Nondiffracting Case 113
   4.1 X-Ray Tomography
      Monochromatic X-Ray Projections · Measurement of Projection Data with Polychromatic Sources · Polychromaticity Artifacts in X-Ray CT · Scatter · Different Methods for Scanning · Applications
   4.2 Emission Computed Tomography 134
      Single Photon Emission Tomography · Attenuation Compensation for Single Photon Emission CT · Positron Emission Tomography · Attenuation Compensation for Positron Tomography
   4.3 Ultrasonic Computed Tomography 147
      Fundamental Considerations · Ultrasonic Refractive Index Tomography · Ultrasonic Attenuation Tomography · Applications
   4.4 Magnetic Resonance Imaging 158
   4.5 Bibliographic Notes 168
   4.6 References 169

5 Aliasing Artifacts and Noise in Computed Tomography Images 177
   What Does Aliasing Look Like? · Noise in Reconstructed Images
   Bibliographic Notes 200
   References 200

6 Tomographic Imaging with Diffracting Sources 203
   The Fourier Diffraction Theorem
      Decomposing the Green's Function · Fourier Transform Approach · Short Wavelength Limit of the Fourier Diffraction Theorem · The Data Collection Process
   6.4 Interpolation and a Filtered Backpropagation Algorithm for Diffracting Sources 234
      Frequency Domain Interpolation · Backpropagation Algorithms
   6.5 Limitations 247
      Mathematical Limitations · Evaluation of the Born Approximation · Evaluation of the Rytov Approximation · Comparison of the Born and Rytov Approximations
   6.6 Evaluation of the Reconstruction Algorithms 252
      Evanescent Waves · Sampling the Received Wave · The Effects of a Finite Receiver Length · Evaluation of the Experimental Effects · Optimization · Limited Views
   6.8 Bibliographic Notes 268
   6.9 References

7 Algebraic Reconstruction Algorithms 275
   7.5 Bibliographic Notes 292
   7.6 References

8 Reflection Tomography 297
   8.1 Introduction 297
   8.2 B-Scan Imaging 298
   8.3 Reflection Tomography 303
      Plane Wave Reflection Transducers · Reflection Tomography vs. Diffraction Tomography · Reflection Tomography Limits
   Experimental Results 313
   Bibliographic Notes 321
   References 323

Index 329
Preface
The purpose of this book is to provide a tutorial overview on the subject of computerized tomographic imaging. We expect the book to be useful for practicing engineers and scientists for gaining an understanding of what can and cannot be done with tomographic imaging. Toward this end, we have tried to strike a balance among purely algorithmic issues, topics dealing with how to generate data for reconstruction in different domains, and artifacts inherent to different data collection strategies. Our hope is that the style of presentation used will also make the book useful for a beginning graduate course on the subject. The desired prerequisites for taking such a course will depend upon the aims of the instructor. If the instructor wishes to teach the course primarily at a theoretical level, with not much emphasis on computer implementations of the reconstruction algorithms, the book is mostly self-contained for graduate students in engineering, the sciences, and mathematics. On the other hand, if the instructor wishes to impart proficiency in the implementations, it would be desirable for the students to have had some prior experience with writing computer programs for digital signal or image processing. The introductory material we have included in Chapter 2 should help the reader review the relevant practical details in digital signal and image processing. There are no homework problems in the book, the reason being that in our own lecturing on the subject, we have tended to emphasize the implementation aspects and, therefore, the homework has consisted of writing computer programs for reconstruction algorithms. The lists of references by no means constitute a complete bibliography on the subject. Basically, we have included those references that we have found useful in our own research over the years. Whenever possible, we have referenced books and review articles to provide the reader with entry points for more exhaustive literature citations. 
Except in isolated cases, we have not made any attempts to establish historical priorities. No value judgments should be implied by our including or excluding a particular work. Many of our friends and colleagues deserve much credit for helping bring this book to fruition. This book draws heavily from research done at Purdue by our past and present colleagues and collaborators: Carl Crawford, Mani Azimi, David Nahamoo, Anders Andersen, S. X. Pan, Kris Dines, and Barry Roberts. A number of people, Carl Crawford, Rich Kulawiec, Gary S. Peterson, and the anonymous reviewers, helped us proofread the manuscript;
we are grateful for the errors they caught and we acknowledge that any errors that remain are our own fault. We are also grateful to Carl Crawford and Kevin King at GE Medical Systems Division, Greg Kirk at Resonex, Dennis Parker at the University of Utah, and Kris Dines of XDATA, for sharing their knowledge with us about many newly emerging aspects of medical imaging. Our editor, Randi Scholnick, at the IEEE PRESS was most patient with us; her critical eye did much to improve the quality of this work. Sharon Katz, technical illustrator for the School of Electrical Engineering at Purdue University, was absolutely wonderful. She produced most of the illustrations in this book and always did it with the utmost professionalism and a smile. Also, Pat Kerkhoff (Purdue), and Tammy Duarte, Amy Atkinson, and Robin Wallace (SPAR) provided excellent secretarial support, even in the face of deadlines and garbled instructions. Finally, one of the authors (M.S.) would like to acknowledge the support of his friend Kris Meade during the long time it took to finish this project.
Introduction
Tomography refers to the cross-sectional imaging of an object from either transmission or reflection data collected by illuminating the object from many different directions. The impact of this technique in diagnostic medicine has been revolutionary, since it has enabled doctors to view internal organs with unprecedented precision and safety to the patient. The first medical application utilized x-rays for forming images of tissues based on their x-ray attenuation coefficient. More recently, however, medical imaging has also been successfully accomplished with radioisotopes, ultrasound, and magnetic resonance; the imaged parameter being different in each case. There are numerous nonmedical imaging applications which lend themselves to the methods of computerized tomography. Researchers have already applied this methodology to the mapping of underground resources via cross-borehole imaging, some specialized cases of cross-sectional imaging for nondestructive testing, the determination of the brightness distribution over a celestial sphere, and three-dimensional imaging with electron microscopy. Fundamentally, tomographic imaging deals with reconstructing an image from its projections. In the strict sense of the word, a projection at a given angle is the integral of the image in the direction specified by that angle, as illustrated in Fig. 1.1. However, in a loose sense, projection means the information derived from the transmitted energies, when an object is illuminated from a particular angle; the phrase diffracted projection may be used when energy sources are diffracting, as is the case with ultrasound and microwaves. Although, from a purely mathematical standpoint, the solution to the problem of how to reconstruct a function from its projections dates back to the paper by Radon in 1917, the current excitement in tomographic imaging originated with Hounsfield's invention of the x-ray computed tomographic scanner, for which he received a Nobel prize in 1979.
He shared the prize with Allan Cormack who independently discovered some of the algorithms. His invention showed that it is possible to compute high-quality cross-sectional images with an accuracy now reaching one part in a thousand in spite of the fact that the projection data do not strictly satisfy the theoretical models underlying the efficiently implementable reconstruction algorithms. His invention also showed that it is possible to process a very large number of measurements (now approaching a million for the case of x-ray tomography) with fairly complex mathematical operations, and still get an image that is incredibly accurate.
Fig. 1.1: Two projections are shown of an object consisting of a pair of cylinders.
It is perhaps fair to say that the breakneck pace at which x-ray computed tomography images improved after Hounsfield's invention was in large measure owing to the developments that were made in reconstruction algorithms. Hounsfield used algebraic techniques, described in Chapter 7, and was able to reconstruct noisy-looking 80 x 80 images with an accuracy of one part in a hundred. This was followed by the application of convolution-backprojection algorithms, first developed by Ramachandran and Lakshminarayanan [Ram71] and later popularized by Shepp and Logan [She74], to this type of imaging. These later algorithms considerably reduced the processing time for reconstruction, and the image produced was numerically more accurate. As a result, commercial manufacturers of x-ray tomographic scanners started building systems capable of reconstructing 256 x 256 and 512 x 512 images that were almost photographically perfect (in the sense that the morphological detail produced was unambiguous and in perfect agreement with the anatomical features). The convolution-backprojection algorithms are discussed in Chapter 3. Given the enormous success of x-ray computed tomography, it is not surprising that in recent years much attention has been focused on extending this image formation technique to nuclear medicine and magnetic resonance on the one hand; and ultrasound and microwaves on the other. In nuclear medicine, our interest is in reconstructing a cross-sectional image of radioactive isotope distributions within the human body; and in imaging with magnetic resonance we wish to reconstruct the magnetic properties of the object. In both these areas, the problem can be set up as reconstructing an image from its projections of the type shown in Fig. 1.1. This is not the case when ultrasound and microwaves are used as energy sources; although the
aim is the same as with x-rays, viz., to reconstruct the cross-sectional image of, say, the attenuation coefficient. X-rays are nondiffracting, i.e., they travel in straight lines, whereas microwaves and ultrasound are diffracting. When an object is illuminated with a diffracting source, the wave field is scattered in practically all directions, although under certain conditions one might be able to get away with the assumption of straight line propagation; these conditions being satisfied when the inhomogeneities are much larger than the wavelength and when the imaging parameter is the refractive index. For situations when one must take diffraction effects (inhomogeneity caused scattering of the wave field) into account, tomographic imaging can in principle be accomplished with the algorithms described in Chapter 6. This book covers three aspects of tomography: Chapters 2 and 3 describe the mathematical principles and the theory. Chapters 4 and 5 describe how to apply the theory to actual problems in medical imaging and other fields. Finally, Chapters 6, 7, and 8 introduce several variations of tomography that are currently being researched. During the last decade, there has been an avalanche of publications on different aspects of computed tomography. No attempt will be made to present a comprehensive bibliography on the subject, since that was recently accomplished in a book by Dean [Dea83]. We will only give selected references at the end of each chapter, their purpose only being to cite material that provides further details on the main ideas discussed in the chapter. The principal textbooks that have appeared on the subject of tomographic imaging are [Her80], [Dea83], [Mac83], [Bar81]. The reader is also referred to the review articles in the field [Gor74], [Bro76], [Kak79] and the two special issues of IEEE journals [Kak81], [Her83]. Reviews of the more popular algorithms also appeared in [Ros82], [Kak84], [Kak85], [Kak86].
References
[Bar81] H. H. Barrett and W. Swindell, Radiological Imaging: The Theory of Image Formation, Detection and Processing. New York, NY: Academic Press, 1981.
[Bro76] R. A. Brooks and G. DiChiro, "Principles of computer assisted tomography (CAT) in radiographic and radioisotopic imaging," Phys. Med. Biol., vol. 21, pp. 689-732, 1976.
[Dea83] S. R. Dean, The Radon Transform and Some of Its Applications. New York, NY: John Wiley and Sons, 1983.
[Gor74] R. Gordon and G. T. Herman, "Three-dimensional reconstructions from projections: A review of algorithms," in International Review of Cytology, G. H. Bourne and J. F. Danielli, Eds. New York, NY: Academic Press, 1974, pp. 111-151.
[Her80] G. T. Herman, Image Reconstruction from Projections. New York, NY: Academic Press, 1980.
[Her83] -, Guest Editor, Special Issue on Computerized Tomography, Proceedings of the IEEE, vol. 71, Mar. 1983.
[Kak79] A. C. Kak, "Computerized tomography with x-ray emission and ultrasound sources," Proc. IEEE, vol. 67, pp. 1245-1272, 1979.
[Kak81] -, Guest Editor, Special Issue on Computerized Medical Imaging, IEEE Transactions on Biomedical Engineering, vol. BME-28, Feb. 1981.
[Kak84] -, "Image reconstructions from projections," in Digital Image Processing Techniques, M. P. Ekstrom, Ed. New York, NY: Academic Press, 1984.
[Kak85] -, "Tomographic imaging with diffracting and non-diffracting sources," in Array Signal Processing, S. Haykin, Ed. Englewood Cliffs, NJ: Prentice-Hall, 1985.
[Kak86] A. C. Kak and B. Roberts, "Image reconstruction from projections," in Handbook of Pattern Recognition and Image Processing, T. Y. Young and K. S. Fu, Eds. New York, NY: Academic Press, 1986.
[Mac83] A. Macovski, Medical Imaging Systems. Englewood Cliffs, NJ: Prentice-Hall, 1983.
[Ram71] G. N. Ramachandran and A. V. Lakshminarayanan, "Three dimensional reconstructions from radiographs and electron micrographs: Application of convolution instead of Fourier transforms," Proc. Nat. Acad. Sci., vol. 68, pp. 2236-2240, 1971.
[Ros82] A. Rosenfeld and A. C. Kak, Digital Picture Processing, 2nd ed. New York, NY: Academic Press, 1982.
[She74] L. A. Shepp and B. F. Logan, "The Fourier reconstruction of a head section," IEEE Trans. Nucl. Sci., vol. NS-21, pp. 21-43, 1974.
Signal Processing Fundamentals

We cannot hope to cover all the important details of one- and two-dimensional signal processing in one chapter. For those who have already seen this material, we hope this chapter will serve as a refresher. For those readers who haven't had prior exposure to signal and image processing, we hope that this chapter will provide enough of an introduction so that the rest of the book will make sense. All readers are referred to a number of excellent textbooks that cover one- and two-dimensional signal processing in more detail. For information on 1-D processing the reader is referred to [McG74], [Sch75], [Opp75], [Rab75]. The theory and practice of image processing have been described in [Ros82], [Gon77], [Pra78]. The more general case of multidimensional signal processing has been described in [Dud84].
One-dimensional continuous functions, such as in Fig. 2.1(a), will be represented in this book by the notation

x(t)  (1)

where x(t) denotes the value of the function at t. This function may be given a discrete representation by sampling its value over a set of points as illustrated in Fig. 2.1(b). Thus the discrete representation can be expressed as the list

…, x(−τ), x(0), x(τ), x(2τ), …, x(nτ), ….  (2)
As an example of this, the discrete representation of the data in Fig. 2.1(c) is 1, 3, 4, 5, 4, 3, 1. (3)
It is also possible to represent the samples as a single vector in a multidimensional space. For example, the set of seven samples could also be represented as a vector in a 7-dimensional space, with the first element of the vector equal to 1, the second equal to 3, and so on. There is a special function that is often useful for explaining operations on functions. It is called the Dirac delta or impulse function. It cannot be defined
Fig. 2.1: A one-dimensional signal is shown in (a) with its sampled version in (b). The discrete version of the signal is illustrated in (c).
directly; instead it must be expressed as the limit of a sequence of functions. First we define a new function called rect (short for rectangle) as follows:

rect(t) = 1 for |t| ≤ 1/2, and 0 elsewhere.  (4)
This is illustrated in Fig. 2.2(a). Consider a sequence of functions of ever decreasing support on the t-axis as described by

δ_n(t) = n rect(nt)  (5)

and illustrated in Fig. 2.2(b). Each function in this sequence has the same area but is of ever increasing height, which tends to infinity as n → ∞. The limit of this sequence of functions is of infinite height but zero width, in such a manner that the area is still unity. This limit is often pictorially represented as shown in Fig. 2.2(c) and denoted by δ(t). Our explanation leads to the definition of the Dirac delta function that follows:
∫_{−∞}^{∞} δ(t) dt = 1.  (6)
∫_{−∞}^{∞} x(t) δ(t − t′) dt = x(t′)  (7)
Fig. 2.2: A rectangle function as shown in (a) is scaled in both width and height (b). In the limit the result is the delta function illustrated in (c).
where δ(t − t′) is an impulse shifted to the location t = t′. When an impulse enters into a product with an arbitrary x(t), all the values of x(t) outside the location t = t′ are disregarded. Then by the integral property of the delta function we obtain (7); so we can say that δ(t − t′) samples the function x(t) at t′.

2.1.2 Linear Operations

Functions may be operated on for purposes such as filtering, smoothing, etc. The application of an operator O to a function x(t) will be denoted by
O[x(t)].  (8)

The operator is linear provided

O[αx(t) + βy(t)] = αO[x(t)] + βO[y(t)]  (9)

for any pair of constants α and β and for any pair of functions x(t) and y(t). An interesting class of linear operations is defined by the following integral form:

z(t) = ∫_{−∞}^{∞} x(t′) h(t, t′) dt′  (10)
where h is called the impulse response. It is easily shown that h is the system response of the operator applied to a delta function. Assume that the input
function is an impulse at t = t₀ or

x(t) = δ(t − t₀).  (11)

Substituting into (10), we obtain

z(t) = ∫_{−∞}^{∞} δ(t′ − t₀) h(t, t′) dt′  (12)
     = h(t, t₀).  (13)
Therefore h(t, t₀) can be called the impulse response for the impulse applied at t₀. A linear operation is called shift invariant when
y(t) = O[x(t)]  (14)

implies

y(t − τ) = O[x(t − τ)]  (15)

or equivalently

h(t, t′) = h(t − t′).  (16)
This implies that when the impulse is shifted by t′, so is the response, as is further illustrated in Fig. 2.3. In other words, the response produced by the linear operation does not vary with the location of the impulse; it is merely shifted by the same amount. For shift invariant operations, the integral form in (10) becomes
z(t) = ∫_{−∞}^{∞} x(t′) h(t − t′) dt′.  (17)

Fig. 2.3: The impulse response of a shift invariant filter is shown convolved with three impulses.

This form of the integral is called a convolution and is often abbreviated as

z(t) = x(t)*h(t).  (18)
The process of convolution can be viewed as flipping one of the two functions, shifting one with respect to the other, multiplying the two and integrating the product for every shift as illustrated by Fig. 2.4.
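The flip-shift-multiply-integrate view of (17) translates directly into code. The following Python sketch (our own illustration, not from the text; the signal choices and function name are hypothetical) convolves a square pulse with a decaying impulse response by brute force and checks the result against numpy.convolve.

```python
import numpy as np

dt = 0.01
t = np.arange(0, 4, dt)               # time axis with 400 samples
x = np.where(t < 1.0, 1.0, 0.0)       # a square pulse (assumed test signal)
h = np.exp(-t)                        # an assumed decaying impulse response

def convolve_direct(x, h, dt):
    """Evaluate z(t) = integral of x(t') h(t - t') dt' (Eq. 17) by
    flipping h, shifting it, multiplying, and summing at every shift."""
    N = len(x)
    z = np.zeros(N)
    for i in range(N):                # each output sample corresponds to one shift
        acc = 0.0
        for j in range(N):
            if 0 <= i - j < N:        # h is only known on the sampled grid
                acc += x[j] * h[i - j]
        z[i] = acc * dt               # Riemann-sum approximation to the integral
    return z

z = convolve_direct(x, h, dt)
```

For this particular pair the continuous-time answer is z(t) = 1 − e^{−t} for t < 1, and the brute-force loop agrees with numpy.convolve scaled by dt, which computes the same sum far more efficiently.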
Fig. 2.4: The results of convolving an impulse response with an impulse (top) and a square pulse (bottom) are shown.
Convolution can also be defined for discrete sequences. If

x_i = x(iτ)  (19)

and

y_i = y(iτ)  (20)

then the convolution of x_i with y_i can be written as

z_n = τ Σ_{i=−∞}^{∞} x_i y_{n−i}.  (21)

This is a discrete approximation to the integral of (17).

2.1.3 Fourier Representation

For many purposes it is useful to represent functions in the frequency domain. Certainly the most common reason is because it gives a new perspective to an otherwise difficult problem. This is certainly true with the
convolution integral; in the time domain convolution is an integral while in the frequency domain it is expressed as a simple multiplication. In the sections to follow we will describe four different varieties of the Fourier transform. The continuous Fourier transform is mostly used in theoretical analysis. Given that with real world signals it is necessary to periodically sample the data, we are led to three other Fourier transforms that approximate either the time or frequency data as samples of the continuous functions. The four types of Fourier transforms are summarized in Table 2.1.

Assume that we have a continuous function x(t) defined for T₁ ≤ t ≤ T₂. This function can be expressed in the following form:

x(t) = Σ_{k=−∞}^{∞} z_k e^{jkω₀t}  (22)

where j = √(−1), ω₀ = 2πf₀ = 2π/T with T = T₂ − T₁, and the z_k are complex coefficients to be discussed shortly. What is being said here is that x(t) is the sum of a number of functions of the form

e^{jkω₀t}.  (23)

This function represents

e^{jkω₀t} = cos kω₀t + j sin kω₀t.  (24)
The two functions on the right-hand side, commonly referred to as sinusoids, are oscillatory with kf₀ cycles per unit of t, as illustrated by Fig. 2.5. kf₀ is
Table 2.1: Four different Fourier transforms can be defined by sampling the time and frequency domains.*

Continuous Time, Continuous Frequency: Fourier Transform
   Forward: X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt
   Inverse: x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω
   Periodicity: none

Continuous Time, Discrete Frequency: Fourier Series
   Forward: X_n = (1/T) ∫_0^T x(t) e^{−jn(2π/T)t} dt
   Inverse: x(t) = Σ_{n=−∞}^{∞} X_n e^{jn(2π/T)t}
   Periodicity: x(t) = x(t + iT)

Discrete Time, Continuous Frequency: Discrete Fourier Transform
   Forward: X(ω) = Σ_{n=−∞}^{∞} x(nτ) e^{−jωnτ}
   Inverse: x(nτ) = (τ/2π) ∫_{−π/τ}^{π/τ} X(ω) e^{jωnτ} dω
   Periodicity: X(ω) = X(ω + i(2π/τ))

Discrete Time, Discrete Frequency: Finite Fourier Transform
   Forward: X_k = (1/N) Σ_{n=0}^{N−1} x_n e^{−j(2π/N)kn}
   Inverse: x_n = Σ_{k=0}^{N−1} X_k e^{j(2π/N)kn}
   Periodicity: x_n = x_{n+iN} and X_k = X_{k+iN}

* In the above table time domain functions are indicated by x and frequency domain functions by X. The time domain sampling interval is indicated by τ.
Fig. 2.5: The first three components of a Fourier series are shown. The cosine waves represent the real part of the signal while the sine waves represent the imaginary part.
called the frequency of the sinusoids. Note that the sinusoids in (24) are at multiples of the frequency f₀, which is called the fundamental frequency. The coefficients z_k in (22) are called the complex amplitude of the kth component, and can be obtained by using the following formula:

z_k = (1/T) ∫_{T₁}^{T₂} x(t) e^{−jkω₀t} dt.  (25)
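The relationship between (22) and (25) can be exercised numerically. In the Python sketch below (our own illustration; the pulse signal, interval, and truncation to 101 harmonics are all assumptions, not from the text), the coefficients z_k are computed from (25) by a Riemann sum and the series (22) is then resummed both inside the interval of definition and one period later. The resummed value one period away is identical rather than zero, anticipating the periodic replication discussed shortly.

```python
import numpy as np

T1, T2 = 0.0, 1.0                  # assumed interval of definition
T = T2 - T1
w0 = 2 * np.pi / T                 # fundamental frequency, as in Eq. (22)
dt = 1e-3
t = np.arange(T1, T2, dt)
x = np.where(t < 0.25, 1.0, 0.0)   # a pulse occupying the first quarter of [T1, T2]

# z_k from Eq. (25), each integral evaluated as a Riemann sum
ks = np.arange(-50, 51)
z = np.array([np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T for k in ks])

def resum(tq):
    # Eq. (22): sum of z_k exp(j k w0 t), truncated to |k| <= 50
    return np.real(np.sum(z * np.exp(1j * ks * w0 * tq)))

inside = resum(0.1)       # inside [T1, T2]: close to x(0.1) = 1
replica = resum(0.1 + T)  # one period later: identical value, not zero
```

The truncation to 101 terms leaves some ripple near the pulse edges, but away from the discontinuities the series reproduces x(t) closely.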
The representation in (22) is called the Fourier Series. To illustrate pictorially the representation in (22), we have shown in Fig. 2.6 a triangular function and some of the components from the expansion.

A continuous signal x(t) defined for t between −∞ and ∞ also possesses another Fourier representation, called the continuous Fourier transform and defined by

X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt.  (26)

This relationship may be inverted to yield

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω.  (27)

Comparing (22) and (27), we see that in both representations x(t) has been expressed as a sum of sinusoids, e^{jωt}; the difference being that in the former, the frequencies of the sinusoids are at multiples of ω₀, whereas in the latter we have all frequencies between −∞ and ∞. The two representations are not independent of each other. In fact, the series representation is contained in the continuous transform representation since the z_k's in (25) are similar to X(ω) in (26) for ω = kω₀ = k(2π/T), especially if we assume that x(t) is zero outside [T₁, T₂], in which case the range of integration in (26) can be cut
Fig. 2.6: This illustrates the Fourier series for a simple waveform. A triangle wave is shown in (a) with the magnitude (b) and phase (c) of the first few terms of the Fourier series.
down to [T₁, T₂]. For the case when x(t) is zero outside [T₁, T₂], the reader might ask that since one can recover x(t) from the z_k's using (22), why use (27), since we require X(ω) at frequencies in addition to the kω₀'s. The information in X(ω) for ω ≠ kω₀ is necessary to constrain the values of x(t) outside the interval [T₁, T₂]. If we compute the z_k's using (25), and then reconstruct x(t) from the z_k's using (22), we will of course obtain the correct values of x(t) within [T₁, T₂]; however, if we insist on carrying out this reconstruction outside [T₁, T₂], we will obtain periodic replications of the original x(t) (see Fig. 2.7). On the other hand, if X(ω) is used for reconstructing the signal, we will obtain x(t) within [T₁, T₂] and zero everywhere outside.

The continuous Fourier transform defined in (26) may not exist unless x(t) satisfies certain conditions, of which the following are typical [Goo68]:

1) ∫_{−∞}^{∞} |x(t)| dt < ∞.
2) x(t) must have only a finite number of discontinuities and a finite number of maxima and minima in any finite interval.
3) x(t) must have no infinite discontinuities.

Some useful mathematical functions, like the Dirac δ function, do not obey the preceding conditions. But if it is possible to represent these functions as limits of a sequence of well-behaved functions that do obey these conditions then the Fourier transforms of the members of this sequence will also form a
Fig. 2.7: The signal represented by a Fourier series is actually a periodic version of the original signal defined between T₁ and T₂. Here the original function is shown in (a) and the replications caused by the Fourier series representation are shown in (b).
sequence. Now if this sequence of Fourier transforms possesses a limit, then this limit is called the generalized Fourier transform of the original function. Generalized transforms can be manipulated in the same manner as the conventional transforms, and the distinction between the two is generally ignored; it being understood that when a function fails to satisfy the existence conditions and yet is said to have a transform, then the generalized transform is actually meant [Goo68], [Lig60].

The various transforms described in this section obey many useful properties; these will be shown for the two-dimensional case in Section 2.2.4. Given a relationship for a function of two variables, it is rather easy to suppress one and visualize the one-dimensional case; the opposite is usually not the case.

2.1.4 Discrete Fourier Transform (DFT)
As in the continuous case, a discrete function may also be given a frequency domain representation:

X(ω) = Σ_{n=−∞}^{∞} x(nτ) e^{−jωnτ}  (28)

where x(nτ) are the samples of some continuous function x(t), and X(ω) is the frequency domain representation for the sampled data. (In this book we will
generally use lowercase letters to represent functions of time or space and uppercase letters for functions in the frequency domain.)

Note that our strategy for introducing the frequency domain representation is the opposite of that in the preceding subsection. In describing Fourier series we defined the inverse transform (22), and then described how to compute its coefficients. Now for the DFT we have first described the transform from time into the frequency domain. Later in this section we will describe the inverse transform.
As will be evident shortly, X(ω) represents the complex amplitude of the sinusoidal component e^{jωnτ} of the discrete signal. Therefore, with one important difference, X(ω) plays the same role here as z_k in the preceding subsection; the difference being that in the preceding subsection the frequency domain representation was discrete (since it only existed at multiples of the fundamental frequency), while the representation here is continuous as X(ω) is defined for all ω. For example, assume that

x(nτ) = 1 for n = 0, −1 for n = 1, and 0 elsewhere.  (29)

For this signal

X(ω) = 1 − e^{−jωτ}.  (30)

Note that X(ω) obeys the following periodicity:

X(ω) = X(ω + 2π/τ)  (31)

which follows from (28) by simple substitution. In Fig. 2.8 we have shown several periods of this X(ω). X(ω) is called the discrete Fourier transform of the function x(nτ). From the DFT, the function x(nτ) can be recovered by using
Fig. 2.8: The discrete Fourier transform (DFT) of a two-element sequence is shown here.
x(nτ) = (τ/2π) ∫_{−π/τ}^{π/τ} X(ω) e^{jωnτ} dω  (32)
which points to the discrete function x(nτ) being a sum (an integral sum, to be more specific) of sinusoidal components like e^{jωnτ}.

An important property of the DFT is that it provides an alternate method for calculating the convolution in (21). Given a pair of sequences x_i = x(iτ) and h_i = h(iτ), their convolution as defined by

y_i = Σ_{j=−∞}^{∞} x_j h_{i−j}  (33)

can be calculated from the DFTs of the two sequences as

Y(ω) = X(ω)H(ω).  (34)
This can be derived by noting that the DFT of the convolution is written as

Y(ω) = Σ_{i=−∞}^{∞} y_i e^{−jωiτ} (35)

= Σ_{i=−∞}^{∞} [ Σ_{j=−∞}^{∞} x_j h_{i−j} ] e^{−jωiτ}. (36)

Substituting m = i − j, the exponential can be factored into two parts to give

Y(ω) = Σ_{j=−∞}^{∞} x_j e^{−jωjτ} Σ_{m=−∞}^{∞} h_m e^{−jωmτ}. (37)

Note that the limits of the summation remain from −∞ to ∞. At this point it is easy to see that

Y(ω) = X(ω)H(ω). (38)
A dual to the above relationship can be stated as follows. Let's multiply two discrete functions, x_n and y_n, each obtained by sampling the corresponding continuous function with a sampling interval of τ, and call the resulting sequence z_n:

z_n = x_n y_n. (39)

Then the DFT of the new sequence is given by the following convolution in the frequency domain

Z(ω) = (τ/2π) ∫_{−π/τ}^{π/τ} X(α) Y(ω − α) dα. (40)
2.1.5 The Finite Fourier Transform

Consider a data sequence

x_n = x(nτ) (41)

that is N elements long. Let's represent this sequence with the following subscripted notation

x_0, x_1, x_2, ⋯, x_{N−1}. (42)
Although the DFT defined in Section 2.1.4 is useful for many theoretical discussions, for practical purposes it is the following transformation, called the finite Fourier transform (FFT),¹ that is actually calculated with a computer:

X_u = (1/N) Σ_{n=0}^{N−1} x_n e^{−j(2π/N)un} (43)

for u = 0, 1, 2, ⋯, N − 1. (44)

Comparing (43) and (28), we see that the X_u are (to within the factor 1/N) the samples of the continuous function X(ω) at

ω = (2π/Nτ)u with u = 0, 1, 2, ⋯, N − 1. (45)
Therefore, we see that if (43) is used to compute the frequency domain representation of a discrete function, a sampling interval of τ in the t-domain implies a sampling interval of 1/(Nτ) in the frequency domain. The inverse of the relationship shown in (43) is

x_n = Σ_{u=0}^{N−1} X_u e^{j(2π/N)un}. (46)
Both (43) and (46) define sequences that are periodically replicated. First consider (43). If the u = Nm + i term is calculated, then by noting that e^{−j(2π/N)(Nm)n} = 1 for all integer values of m and n, it is easy to see that

X_{Nm+i} = X_i. (47)
¹ The acronym FFT also stands for fast Fourier transform, which is an efficient algorithm for the implementation of the finite Fourier transform.
A similar substitution in (46) shows that the time domain sequence is also periodically replicated:

x_{Nm+i} = x_i. (48)
When the finite Fourier transforms of two sequences are multiplied the result is still a convolution, as it was for the discrete Fourier transform defined in Section 2.1.4, but now the convolution is with respect to replicated sequences. This is often known as circular convolution because of the effect discussed below. To see this effect, first write the product of two finite Fourier transforms

Z_u = X_u Y_u (49)

and then take the inverse finite Fourier transform to find

z_n = Σ_{u=0}^{N−1} e^{j(2π/N)un} X_u Y_u. (50)
Substituting the definition of X_u and Y_u as given by (43), the product can now be written

z_n = (1/N²) Σ_{u=0}^{N−1} e^{j(2π/N)un} Σ_{i=0}^{N−1} x_i e^{−j(2π/N)iu} Σ_{k=0}^{N−1} y_k e^{−j(2π/N)ku}. (51)

The order of summation can be rearranged and the exponential terms combined to find

z_n = (1/N²) Σ_{i=0}^{N−1} Σ_{k=0}^{N−1} x_i y_k Σ_{u=0}^{N−1} e^{j(2π/N)u(n−i−k)}. (52)
There are two cases to consider. When n − i − k ≠ 0, then as a function of u the samples of the exponential e^{j(2π/N)u(n−i−k)} represent an integral number of cycles of a complex sinusoid and their sum is equal to zero. On the other hand, when i = n − k, then each sample of the exponential is equal to one and thus the summation is equal to N. The summation in (52) over i and k represents a sum of all the possible combinations of x_i and y_k. When i = n − k the combination is multiplied by a factor of N, while when i ≠ n − k the term is ignored. This means that the original product of two finite Fourier transforms can be simplified to

z_n = (1/N) Σ_{k=0}^{N−1} x_{n−k} y_k. (53)
This expression is very similar to (21) except for the definition of x_{n−k} and y_k for negative indices. Consider the case when n = 0. The first term of the
Fig. 2.9: The effect of circular convolution is shown in (a). (b) shows how the data can be zero-padded so that when an FFT convolution is performed the result represents samples of an aperiodic convolution.
summation is equal to x_0 y_0 but the second term is equal to x_{−1} y_1. Although in the original formulation of the finite Fourier transform the x sequence was only specified for indices from 0 through N − 1, the periodicity property in (48) implies that x_{−1} be equal to x_{N−1}. This leads to the name circular convolution, since the undefined portions of the original sequence are replaced by a circular replication of the original data. The effect of circular convolution is shown in Fig. 2.9(a). Here we have shown an exponential sequence convolved with an impulse. The result represents a circular convolution and not samples of the continuous convolution. A circular convolution can be turned into an aperiodic convolution by zero-padding the data. As shown in Fig. 2.9(b), if the original sequences are doubled in length by adding zeros, then the original N samples of the product sequence will represent an aperiodic convolution of the two sequences. Efficient procedures for computing the finite Fourier transform are known as fast Fourier transform (FFT) algorithms. To calculate all N points of the summation shown in (43) requires on the order of N² operations. In a fast Fourier transform algorithm the summation is rearranged to take advantage of common subexpressions and the computational expense is reduced to N log N. For a 1024 point signal this represents an improvement by a factor of approximately 100. The fast Fourier transform algorithm has revolutionized digital signal processing and is described in more detail in [Bri74].
2.1.6 Just How Much Data Is Needed?

In Section 2.1.1 we used a sequence of numbers x_i to approximate a continuous function x(t). An important question is, how finely must the data be sampled for x_i to accurately represent the original signal? This question was answered by Nyquist, who observed that a signal must be sampled at least twice during each cycle of the highest frequency of the signal. More rigorously, if a signal x(t) has a Fourier transform such that

X(ω) = 0 for |ω| ≥ ω_N/2 (54)

then samples of x must be measured at a rate greater than ω_N. In other words, if T is the interval between consecutive samples, we want 2π/T ≥ ω_N. The frequency ω_N is known as the Nyquist rate and represents the minimum frequency at which the data can be sampled without introducing errors. Since most real world signals aren't limited to a small range of frequencies, it is important to know the consequences of sampling at below the Nyquist rate. We can consider the process of sampling to be equivalent to multiplication of the original continuous signal x(t) by a sampling function given by
h(t) = Σ_{i=−∞}^{∞} δ(t − iT). (55)

The Fourier transform of this sampling function is given by

H(ω) = (2π/T) Σ_{i=−∞}^{∞} δ(ω − i(2π/T)). (56)
By (40) we can convert the multiplication to a convolution in the frequency domain. Thus the result of the sampling can be written

Y(ω) = (1/T) Σ_{i=−∞}^{∞} X(ω − i(2π/T)). (57)
This result is diagrammed in Fig. 2.10. It is important to realize that when sampling the original data (Fig. 2.10(a)) at a rate faster than that defined by the Nyquist rate, the Fourier transform of the sampled data contains exact replicas of the original signal's spectrum. This is shown in Fig. 2.10(b). If the sampled signal is filtered such that all frequencies above the Nyquist rate are removed, then the original signal will be recovered. On the other hand, as the sampling interval is increased the replicas of the signal in Fig. 2.10(c) move closer together. With a sampling interval greater
than that predicted by the Nyquist rate some of the information in the original data has been smeared by replications of the signal at other frequencies and the original signal is unrecoverable. (See Fig. 2.10(d).) The error caused by the sampling process is given by the inverse Fourier transform of the frequency information in the overlap as shown in Fig. 2.10(d). These errors are also known as aliasing.
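Aliasing is easy to exhibit numerically. A minimal NumPy sketch (with an assumed 3 Hz cosine sampled at only 4 Hz, well below its Nyquist rate of 6 Hz) shows the undersampled tone reappearing at 1 Hz:

```python
import numpy as np

fs = 4.0                      # sampling rate in Hz (assumed, deliberately too low)
f = 3.0                       # tone frequency in Hz (assumed)
N = 64
t = np.arange(N) / fs
x = np.cos(2 * np.pi * f * t)

spectrum = np.abs(np.fft.fft(x))[:N // 2]
freqs = np.arange(N // 2) * fs / N
peak = freqs[np.argmax(spectrum)]

# The 3 Hz tone masquerades as a |3 - fs| = 1 Hz tone
assert np.isclose(peak, 1.0)
```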
Fig. 2.10: Sampling a waveform generates replications of the original Fourier transform of the object at periodic intervals. If the signal is sampled with an interval T, then the Fourier transform of the object will be replicated at intervals of 2π/T. (a) shows the Fourier transform of the original signal, (b) shows the Fourier transform when x(t) is sampled at a rate faster than the Nyquist rate, (c) when sampled at the Nyquist rate, and finally (d) when the data are sampled at a rate less than the Nyquist rate.
2.1.7 Interpretation of the FFT Output

Correct interpretation of the X_u's in (43) is obviously important. Toward that goal, it is immediately apparent that X_0 stands for the average (or, what is more frequently called the dc) component of the discrete function, since from (43)

X_0 = (1/N) Σ_{n=0}^{N−1} x_n. (58)

Interpretation of X_1 requires, perhaps, a bit more effort; it stands for 1 cycle per sequence length. This can be made obvious by setting X_1 = 1, while all
other X_u's are set equal to 0 in (46); then

x_n = e^{j(2π/N)n} (59)

= cos ((2π/N)n) + j sin ((2π/N)n) (60)
for n = 0, 1, 2, ⋯, N − 1. A plot of either the cosine or the sine part of this expression will show just one cycle of the discrete function x_n, which is why we consider X_1 as representing one cycle per sequence length. One may similarly show that X_2 represents two cycles per sequence length. Unfortunately, this straightforward approach for interpreting X_u breaks down for u > N/2. For these high values of the index u, we make use of the following periodicity property

X_{−u} = X_{N−u} (61)
which is easily proved by substitution in (43). For further explanation, consider now a particular value for N, say 8. We already know that

X_0 represents dc
X_1 represents 1 cycle per sequence length
X_2 represents 2 cycles per sequence length
X_3 represents 3 cycles per sequence length
X_4 represents 4 cycles per sequence length.

From the periodicity property we can now add the following

X_5 represents −3 cycles per sequence length
X_6 represents −2 cycles per sequence length
X_7 represents −1 cycle per sequence length.

Note that we could also have added

X_4 represents −4 cycles per sequence length.

The fact is that for any N element sequence, X_{N/2} will always be equal to X_{−N/2}, since from (43)
X_{N/2} = X_{−N/2} = (1/N) Σ_{n=0}^{N−1} x_n (−1)^n. (62)
The discussion is diagrammatically represented by Fig. 2.11, which shows that when an N element data sequence is fed into an FFT program, the output sequence, also N elements long, consists of the dc frequency term, followed by positive frequencies and then by negative frequencies. This type of output, where the negative axis information follows the positive axis information, is somewhat unnatural to look at. To display the FFT output with a more natural progression of frequencies, we can, of course, rearrange the output sequence, although if the aim is
merely to filter the data, it may not be necessary to do so. In that case the filter transfer function can be rearranged to correspond to the frequency assignments of the elements of the FFT output. It is also possible to produce normal-looking FFT outputs (with dc at the center between negative and positive frequencies) by modulating the data prior to taking the FFT. Suppose we multiply the data with (−1)^n to produce a new sequence x_n':

x_n' = x_n (−1)^n. (63)
Let X_u' designate the FFT of this new sequence. Substituting (63) in (43), and noting that (−1)^n = e^{j(2π/N)(N/2)n}, we obtain

X_u' = X_{u−N/2}. (64)

Therefore,

X_0' = X_{−N/2} = X_{N/2} (65)

X_1' = X_{−N/2+1} = X_{N/2+1} (66)

⋮ (67)

X_{N/2−1}' = X_{−1} = X_{N−1} (68)

X_{N/2}' = X_0 (69)

⋮ (70)

X_{N−1}' = X_{N/2−1}. (71)

Thus the dc component now appears at the center of the output array, with negative frequencies to its left and positive frequencies to its right.
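This modulation trick is easy to verify numerically. The NumPy sketch below (noting that NumPy's FFT omits the 1/N factor of (43), which does not affect the shift) checks that multiplying by (−1)^n is equivalent to the standard fftshift reordering for even N:

```python
import numpy as np

N = 8
n = np.arange(N)
x = np.arange(1.0, N + 1)          # any real test sequence (assumed values)

# Modulating by (-1)^n before the FFT shifts the output by N/2 bins,
# placing dc at the center -- the same reordering fftshift performs.
X_mod = np.fft.fft(x * (-1.0) ** n)
assert np.allclose(X_mod, np.fft.fftshift(np.fft.fft(x)))
```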
2.1.8 How to Increase the Display Resolution in the Frequency Domain

The right column of Fig. 2.12 shows the magnitude of the FFT output (the dc is centered) of the sequence that represents a rectangular function as shown in the left column. As was mentioned before, the Fourier transform of a discrete sequence contains all frequencies, although it is periodic, and the FFT output represents the samples of one period. For many situations, the
Fig. 2.12: As shown here, padding a sequence of data with zeros increases the resolution in the frequency domain. The sequence in (a) has only 16 points, (b) has 32 points, while (c) has 64 points.
frequency domain samples supplied by the FFT, although containing practically all the information for the reconstruction of the continuous Fourier transform, are hard to interpret visually. This is evidenced by Fig. 2.12(a), where for part of the display we have only one sample associated with an oscillation in the frequency domain. It is possible to produce smoother-looking outputs by what is called zero-padding the data before taking the FFT. For example, if the sequence of Fig. 2.12(a) is extended with zeros to
twice its length, the FFT of the resulting 32 element sequence will be as shown in Fig. 2.12(b), which is visually smoother looking than the pattern in Fig. 2.12(a). If we zero-pad the data to four times its original length, the output is as shown in Fig. 2.12(c). That zero-padding a data sequence yields frequency domain points that are more closely spaced can be shown by the following derivation. Again let x_0, x_1, ⋯, x_{N−1} represent the original data. By zero-padding the data we will define a new x' sequence:

x_n' = x_n for n = 0, 1, 2, ⋯, N − 1 (72)

= 0 for n = N, N + 1, ⋯, 2N − 1. (73)
Let X_u' denote the finite Fourier transform of this 2N element sequence:

X_u' = (1/2N) Σ_{n=0}^{2N−1} x_n' e^{−j[2π/(2N)]un} (74)

= (1/2N) Σ_{n=0}^{N−1} x_n e^{−j[2π/(2N)]un}. (75)

If we evaluate this expression at the even values of the index,

u = 2m, (76)

we get

X_{2m}' = (1/2N) Σ_{n=0}^{N−1} x_n e^{−j(2π/N)mn} (77)

= (1/2) X_m. (78)
In Fig. 2.13 is illustrated the correspondence (to within the constant factor) between the even-numbered elements of the new transform and the original transform. That X_1', X_3', ⋯, etc. are the interpolated values between X_0 and X_1; between X_1 and X_2; etc. can be seen from the summations in (43) and (74) written in the following form

X_u' = (1/2N) Σ_{n=0}^{N−1} x_n e^{−j[2π/(2Nτ)]unτ} (79)

X_u = (1/N) Σ_{n=0}^{N−1} x_n e^{−j[2π/(Nτ)]unτ}. (80)

Comparing the two summations, we see that the upper one simply represents the sampled DFT with half the sampling interval.
24
COMPUTERIZED
TOMOGRAPHIC
IMAGING
Fig. 2.13: When a data sequence is padded with zeros the effect is to increase the resolution in the frequency domain. The points in (a) are also in the longer sequence shown in (b), but there are additional points, as indicated by circles, that provide interpolated values of the FFT.
So we have the following conclusion: to increase the display resolution in the frequency domain, we must zero-extend the time domain signal. This also means that if we are comparing the transforms of sequences of different lengths, they must all be zero-extended to the same number, so that they are all plotted with the same display resolution. This is because the upper summation, (79), has a sampling interval in the frequency domain of 2π/(2Nτ) while the lower summation, (80), has a sampling interval that is twice as long, or 2π/(Nτ).

2.1.9 How to Deal with Data Defined for Negative Time

Since the forward and the inverse FFT relationships, (43) and (46), are symmetrical, the periodicity property described in (61) also applies in the time domain. What is being said here is that if a time domain sequence and its transform obey (43) and (46), then an N element data sequence in the time domain must satisfy the following property

x_{−n} = x_{N−n}. (81)
To explain the implications of this property, consider the case of N = 8, for which the data sequence may be written down as

x_0, x_1, x_2, x_3, x_4, x_5, x_6, x_7. (82)

By the property (81), this sequence may equivalently be interpreted as

x_0, x_1, x_2, x_3, x_4, x_{−3}, x_{−2}, x_{−1}. (83)
Then if our data are defined for negative indices (times), and, say, are of the following form

x_{−3}, x_{−2}, x_{−1}, x_0, x_1, x_2, x_3, x_4 (84)

then the sequence that must be presented to an FFT program is

x_0, x_1, x_2, x_3, x_4, x_{−3}, x_{−2}, x_{−1}. (85)
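The reordering in (85) can be sketched with NumPy's roll (the data values here are hypothetical; index 3 of the recorded array holds x_0):

```python
import numpy as np

# Hypothetical 8-element record running from x(-3) through x(4):
seq = np.array([0.1, 0.2, 0.3, 0.9, 0.8, 0.7, 0.6, 0.5])

# An FFT program expects x_0 first, with the negative-time samples wrapped
# around to the end, as in (85).  x_0 sits at position 3 here:
fed = np.roll(seq, -3)
assert np.allclose(fed, [0.9, 0.8, 0.7, 0.6, 0.5, 0.1, 0.2, 0.3])
```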
To further drive home the implications of the periodicity property in (81), consider the following example, which consists of taking an 8 element FFT of the data

0.9 0.89 0.88 0.87 0.86 0.85 0.84 0.83. (86)
We insist, for the sake of explaining a point, that only an 8 element FFT be taken. If the given data have no association with time, then the data should be fed into the program as they are presented. However, if it is definitely known that the data are ZERO before the first element, then the sequence presented to the FFT program should look like

0.9 0.89 0.88 0.87 (0.86 + 0)/2 0 0 0 (87)

where the first five elements correspond to positive time and the last three to negative time.
This sequence represents the given fact that at t = −1, −2, and −3 the data are supposed to be zero. Also, since the fifth element represents both x_4 and x_{−4} (these two elements are supposed to be equal for ideal data), and since in the given data the element x_{−4} is zero, we simply replace the fifth element by the average of the two. Note that in the data fed into the FFT program, the sharp discontinuity at the origin, as represented by the transition from 0 to 0.9, has been retained. This discontinuity will contribute primarily to the high frequency content of the transform of the signal.

2.1.10 How to Increase Frequency Domain Display Resolution of Signals Defined for Negative Time

Let's say that we have an eight element sequence of data defined for both positive and negative times as follows:
x_{−3} x_{−2} x_{−1} x_0 x_1 x_2 x_3 x_4. (90)

Following the discussion in Section 2.1.9, the sequence fed to an FFT program should be

x_0 x_1 x_2 x_3 x_4 x_{−3} x_{−2} x_{−1}. (91)
If x_{−4} was also defined in the original sequence, we have three options: we can either ignore x_{−4}, or ignore x_4 and retain x_{−4} for the fifth from left position in the above sequence, or, better yet, use (x_{−4} + x_4)/2 for the fifth
position. Note we are making use of the property that, due to the data periodicity properties assumed by the FFT algorithm, the fifth element corresponds to both x_4 and x_{−4}, and in the ideal case they are supposed to be equal to each other. Now suppose we wish to double the display resolution in the frequency domain; we must then zero-extend the data as follows

x_0 x_1 x_2 x_3 x_4 0 0 0 0 0 0 0 x_{−4} x_{−3} x_{−2} x_{−1}. (92)
Note that we have now given separate identities to x_4 and x_{−4}, since they don't have to be equal to each other anymore. So if they are separately available, they can be used as such.

2.1.11 Data Truncation Effects
To see the data truncation effects, consider a signal x_n defined for all indices n. If X(ω) is the true DFT of this signal, we have

X(ω) = Σ_{n=−∞}^{∞} x_n e^{−jωnτ}. (93)
Suppose we decide to take only a 16 element transform, meaning that of all the x_n's we will retain only 16. Assuming that the most significant transitions of the signal occur in the base interval defined by n going from −7 to 8, we may write approximately

X'(ω) ≈ Σ_{n=−7}^{8} x_n e^{−jωnτ}. (94)

More precisely, if X'(ω) denotes the transform of the truncated data, we may write

X'(ω) = Σ_{n=−∞}^{∞} x_n I_{16}(n) e^{−jωnτ} (95)

where

I_{16}(n) = 1 for −7 ≤ n ≤ 8, and 0 outside. (96)

By the convolution theorem

X'(ω) = (τ/2π) ∫_{−π/τ}^{π/τ} X(α) A(ω − α) dα (97)
27
where

A(ω) = Σ_{n=−7}^{8} e^{−jωnτ} (98)

= e^{−jωτ/2} [sin (Nωτ/2) / sin (ωτ/2)] (99)
with N = 16. This function is displayed in Fig. 2.14, and illustrates the nature of the distortion introduced by data truncation.
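The closed form (99) can be checked against the direct sum (98); a small NumPy sketch with τ = 1 (ω = 0 must be avoided, since there the ratio is evaluated as a limit):

```python
import numpy as np

tau = 1.0                          # sampling interval (assumed value)
N = 16
n = np.arange(-7, 9)               # the base interval n = -7, ..., 8

def A_direct(w):
    # The truncation window's transform as the direct sum (98)
    return np.sum(np.exp(-1j * w * n * tau))

def A_closed(w):
    # Closed form (99): e^{-j w tau/2} sin(N w tau/2) / sin(w tau/2)
    return (np.exp(-1j * w * tau / 2)
            * np.sin(N * w * tau / 2) / np.sin(w * tau / 2))

for w in [0.3, 1.1, 2.5]:
    assert np.isclose(A_direct(w), A_closed(w))
```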
rect (x, y) = 1 for |x| ≤ 1/2 and |y| ≤ 1/2, and 0 elsewhere. (101)

Consider the sequence of functions

δ_n(x, y) = n² rect (nx, ny), n = 1, 2, ⋯. (102)
Thus δ_n is zero outside the 1/n × 1/n square described by |x| ≤ 1/2n, |y| ≤ 1/2n and has constant value n² inside that square. It follows that

∫_{−∞}^{∞} ∫_{−∞}^{∞} δ_n(x, y) dx dy = 1 (103)

for any n. As n → ∞, the sequence δ_n does not have a limit in the usual sense, but it is convenient to treat it as though its limit existed. This limit, denoted by δ, is

Fig. 2.15: As in the one-dimensional case, the delta function (δ) is defined as the limit of the rectangle function shown here.
called a Dirac delta function. Evidently, we have δ(x, y) = 0 for all (x, y) other than (0, 0), where it is infinite. It follows that δ(−x, −y) = δ(x, y). A number of the properties of the one-dimensional delta function described earlier extend easily to the two-dimensional case. For example, in light of (103), we can write

∫_{−∞}^{∞} ∫_{−∞}^{∞} δ(x, y) dx dy = 1. (104)
More generally, consider the integral ∫∫ g(x, y) δ_n(x, y) dx dy. This is just the average of g(x, y) over a 1/n × 1/n square centered at the origin. Thus in the limit we retain just the value of g at the origin itself, so that we can write

∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) δ(x, y) dx dy = g(0, 0). (105)
If we shift δ by the amount (α, β), i.e., we use δ(x − α, y − β) instead of δ(x, y), we similarly obtain the value of g at the point (α, β), i.e.,

∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) δ(x − α, y − β) dx dy = g(α, β). (106)
The same is true for any region of integration containing (α, β). Equation (106) is called the sifting property of the δ function. As a final useful property of δ, we have

∫_{−∞}^{∞} ∫_{−∞}^{∞} exp [−j2π(ux + vy)] du dv = δ(x, y). (107)

For a discussion of this property, see Papoulis [Pap62].

2.2.2 Linear Shift Invariant Operations
Again let us consider a linear operation on images. The point spread function, which is the output image for an input point source at the origin of the xy-plane, is denoted by h(x, y). A linear operation is said to be shift invariant (or space invariant, or position invariant) if the response to δ(x − α, y − β), which is a point source located at (α, β) in the xy-plane, is given by h(x − α, y − β). In other words, the output is merely shifted by α and β in the x and y directions, respectively.
Now let us consider an arbitrary input picture f(x, y). By (106) this picture can be considered to be a linear sum of point sources. We can write f(x, y) as

f(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(α, β) δ(x − α, y − β) dα dβ. (108)

If O[·] denotes the linear operation, then

O[f(x, y)] = O[∫_{−∞}^{∞} ∫_{−∞}^{∞} f(α, β) δ(x − α, y − β) dα dβ] (109)

= ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(α, β) O[δ(x − α, y − β)] dα dβ (110)

by the linearity of the operation, which means that the response to a sum of excitations is equal to the sum of responses to each excitation. As stated earlier, the response to δ(x − α, y − β), which is a point source located at (α, β), is given by h(x − α, y − β), and if O[f] is denoted by g, we obtain

g(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(α, β) h(x − α, y − β) dα dβ. (111)
The right-hand side is called the convolution of f and h, and is often denoted by f * h. The integrand is a product of two functions f(α, β) and h(α, β), with the latter rotated about the origin by 180° and shifted by x and y along the x and y directions, respectively. A simple change of variables shows that (111) can also be written as

g(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x − α, y − β) h(α, β) dα dβ (112)
so that f * h = h * f. Fig. 2.16 shows the effect of a simple blurring operation on two different images. In this case the point response, h, is given by

h(x, y) = 1 for x² + y² ≤ (0.25)², and 0 elsewhere. (113)
As can be seen in Fig. 2.16, one effect of this convolution is to smooth out the edges of each image.
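A blurring convolution like the one in Fig. 2.16 can be sketched with NumPy's FFT (the image and the disk kernel here are hypothetical stand-ins in the spirit of (113); the FFT route computes a circular convolution, which is adequate when the kernel is small relative to the image):

```python
import numpy as np

M = 64
y, x = np.mgrid[0:M, 0:M]

# Test image: a bright square on a dark background
img = ((np.abs(x - 32) < 8) & (np.abs(y - 32) < 8)).astype(float)

# Disk point spread function of radius 4, normalized to unit sum,
# stored with its center at the (0, 0) corner as the FFT expects
r2 = np.minimum(x, M - x) ** 2 + np.minimum(y, M - y) ** 2
h = (r2 <= 4 ** 2).astype(float)
h /= h.sum()

blurred = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(h)).real

# Convolving with a unit-sum nonnegative kernel preserves total
# brightness and can only lower the peak value (edges are smoothed)
assert np.isclose(blurred.sum(), img.sum())
assert blurred.max() <= img.max() + 1e-9
```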
2.2.3 Fourier Analysis

Representing two-dimensional images in the Fourier domain is as useful as it is in the one-dimensional case. Let f(x, y) be a function of two independent variables x and y; then its Fourier transform F(u, v) is defined by

F(u, v) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) e^{−j2π(ux+vy)} dx dy. (114)
In the definition of the one- and two-dimensional Fourier transforms we have used slightly different notations. Equation (26) represents the frequency in terms of radians per unit length, while the above equation represents frequency in terms of cycles per unit length. The two forms are identical except for a scaling and either form can be converted to the other using the relation

ω = 2πu. (115)
By splitting the exponential into two halves it is easy to see that the two-dimensional Fourier transform can be considered as two one-dimensional transforms; first with respect to x and then y:

F(u, v) = ∫_{−∞}^{∞} e^{−j2πvy} dy ∫_{−∞}^{∞} f(x, y) e^{−j2πux} dx. (116)
In general, F is a complex-valued function of u and v. As an example, let f(x, y) = rect (x, y). Carrying out the integration indicated in (114) we find

F(u, v) = ∫_{−1/2}^{1/2} ∫_{−1/2}^{1/2} e^{−j2π(ux+vy)} dx dy (117)

= ∫_{−1/2}^{1/2} e^{−j2πux} dx ∫_{−1/2}^{1/2} e^{−j2πvy} dy (118)

= [sin (πu) / πu] [sin (πv) / πv]. (119)

This last function is usually denoted by sinc (u, v) and is illustrated in Fig. 2.17. More generally, using the change of variables x' = nx and y' = ny, it is easy to show that the Fourier transform of rect (nx, ny) is

(1/n²) sinc (u/n, v/n). (120)
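The rect-to-sinc pair can be verified by direct numerical integration of (114); the sketch below uses a simple midpoint rule and NumPy's convention np.sinc(t) = sin(πt)/(πt):

```python
import numpy as np

# Midpoint-rule integration of the 2-D Fourier integral of rect(x, y)
Npts = 4000
x = (np.arange(Npts) + 0.5) / Npts - 0.5      # midpoints spanning [-1/2, 1/2]
dx = 1.0 / Npts

def transform(u, v):
    # Separable integrand: F(u, v) = (int e^{-j2pi ux} dx)(int e^{-j2pi vy} dy)
    Ix = np.sum(np.exp(-2j * np.pi * u * x)) * dx
    Iy = np.sum(np.exp(-2j * np.pi * v * x)) * dx
    return Ix * Iy

for u, v in [(0.0, 0.0), (1.5, 0.7), (2.0, 3.0)]:
    assert np.isclose(transform(u, v), np.sinc(u) * np.sinc(v), atol=1e-6)
```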
Given the definition of the Dirac delta function as the limit of the sequence of functions n² rect (nx, ny), by the arguments in Section 2.1.3, the Fourier transform of the Dirac delta function is the limit of the sequence of Fourier
Fig. 2.17: The two-dimensional Fourier transform of the rectangle function is shown here.
transforms:

F{δ(x, y)} = lim_{n→∞} sinc (u/n, v/n) = 1. (121)

Thus if

f(x, y) = δ(x, y)

then

F(u, v) = 1. (122)
The inverse Fourier transform of F(u, v) is found by multiplying both sides of (114) by e^{j2π(uα+vβ)} and integrating with respect to u and v to find

∫_{−∞}^{∞} ∫_{−∞}^{∞} F(u, v) e^{j2π(uα+vβ)} du dv (123)

= ∫∫∫∫ f(x, y) e^{j2π(uα+vβ)} e^{−j2π(ux+vy)} du dv dx dy (124)

= ∫∫∫∫ f(x, y) e^{−j2π[u(x−α)+v(y−β)]} du dv dx dy. (125)

By (107) the integration over u and v yields δ(x − α, y − β), so the right-hand side reduces to

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) δ(x − α, y − β) dx dy = f(α, β). (126)

Therefore

f(x, y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} F(u, v) e^{j2π(ux+vy)} du dv. (127)
This integral is called the inverse Fourier transform of F(u, v). By (114) and (127), f(x, y) and F(u, v) form a Fourier transform pair. If x and y represent spatial coordinates, (127) can be used to give a physical interpretation to the Fourier transform F(u, v) and to the coordinates u and v. Let us first examine the function
e^{j2π(ux+vy)}. (128)
The real and imaginary parts of this function are cos 2π(ux + vy) and sin 2π(ux + vy), respectively. In Fig. 2.18(a), we have shown cos 2π(ux + vy). It is clear that if one took a section of this two-dimensional pattern parallel to the x-axis, it goes through u cycles per unit distance, while a section parallel to the y-axis goes through v cycles per unit distance. This is the reason why u and v are called the spatial frequencies along the x- and y-axes, respectively. Also, from the figure it can be seen that the spatial period of the pattern is (u² + v²)^{−1/2}. The plot for sin 2π(ux + vy) looks similar to the one in Fig. 2.18(a) except that it is displaced by a quarter period in the direction of maximum rate of change. From the preceding discussion it is clear that e^{j2π(ux+vy)} is a two-dimensional pattern, the sections of which, parallel to the x- and y-axes, are spatially periodic with frequencies u and v, respectively. The pattern itself has a spatial period of (u² + v²)^{−1/2} along a direction that subtends an angle tan^{−1}(v/u) with the x-axis. By changing u and v, one can generate patterns with spatial periods ranging from 0 to ∞ in any direction in the xy-plane. Equation (127) can, therefore, be interpreted to mean that f(x, y) is a linear combination of elementary periodic patterns of the form e^{j2π(ux+vy)}. Evidently, the function F(u, v) is simply a weighting factor that is a measure of the relative contribution of the elementary pattern to the total sum. Since u and v are the spatial frequencies of the pattern in the x and y directions, F(u, v) is called the frequency spectrum of f(x, y).

2.2.4 Properties of Fourier Transforms

Several properties of the two-dimensional Fourier transform follow easily from the defining integrals. Let F{f} denote the Fourier transform of a function f(x, y), so that F{f(x, y)} = F(u, v). We will now present without proof some of the more common properties of Fourier transforms.
The proofs are, for the most part, left for the reader (see the books by Goodman [Goo68] and Papoulis [Pap62]).
1) Linearity:

F{a f_1(x, y) + b f_2(x, y)} = aF{f_1(x, y)} + bF{f_2(x, y)} (129)

= aF_1(u, v) + bF_2(u, v). (130)
Fig. 2.18: The Fourier transform represents an image in terms of exponentials of the form e^{j2π(ux+vy)}. Here we have shown the real (cosine) and the imaginary (sine) parts of one such exponential.
2) Scaling:

F{f(αx, βy)} = (1/|αβ|) F(u/α, v/β). (131)

To see this, introduce the change of variables x' = αx, y' = βy. This property is illustrated in Fig. 2.19.
Fig. 2.19: Scaling the size of an image leads to compression and amplification in the Fourier domain.
3) Shift Property:

F{f(x − α, y − β)} = e^{−j2π(uα+vβ)} F(u, v). (132)

This too follows immediately if we make the change of variables x' = x − α, y' = y − β. The corresponding property for a shift in the frequency domain is

F{exp [j2π(u_0 x + v_0 y)] f(x, y)} = F(u − u_0, v − v_0). (133)
4) Rotation by an Angle α: If we express f and F in polar coordinates, so that

F{f(r, θ)} = F(ρ, φ) (134)

then

F{f(r, θ + α)} = F(ρ, φ + α). (135)
This property is illustrated in Fig. 2.20.

5) Rotational Symmetry: If f(x, y) is a circularly symmetric function, i.e., f(r, θ) is only a function of r, then its frequency spectrum is also circularly symmetric, and the two are related by the transform pair

F(ρ) = 2π ∫_0^∞ r f(r) J_0(2πrρ) dr (136)

f(r) = 2π ∫_0^∞ ρ F(ρ) J_0(2πrρ) dρ (137)

where

r = (x² + y²)^{1/2}, θ = tan^{−1}(y/x), ρ = (u² + v²)^{1/2}, φ = tan^{−1}(v/u) (138)

and

J_0(x) = (1/2π) ∫_0^{2π} exp [−jx cos (θ − φ)] dθ (139)

Fig. 2.20: Rotation of an object by 30° leads to a similar rotation in the Fourier transform of the image.
is the zero-order Bessel function of the first kind. The transformation in (136) is also called the Hankel transform of zero order.
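Since the Gaussian e^{−πr²} is circularly symmetric and is its own two-dimensional Fourier transform, (136) can be checked numerically. The sketch below builds J_0 directly from the integral definition (139) and evaluates the Hankel transform by a midpoint rule (the truncation radius R = 5 and the grid sizes are assumptions chosen so the neglected Gaussian tail is negligible):

```python
import numpy as np

def J0(z):
    # Zero-order Bessel function from its integral definition (139):
    # J0(z) = (1/2pi) int_0^{2pi} cos(z cos(theta)) dtheta
    theta = (np.arange(400) + 0.5) * (2 * np.pi / 400)
    z = np.atleast_1d(z)
    return np.cos(np.outer(z, np.cos(theta))).mean(axis=1)

def hankel0(f, rho, R=5.0, Nr=2500):
    # F(rho) = 2*pi * int_0^R r f(r) J0(2*pi*r*rho) dr  (midpoint rule)
    r = (np.arange(Nr) + 0.5) * (R / Nr)
    return 2 * np.pi * np.sum(r * f(r) * J0(2 * np.pi * r * rho)) * (R / Nr)

# exp(-pi r^2) is its own 2-D Fourier transform, so its zero-order
# Hankel transform should come out as exp(-pi rho^2)
for rho in [0.0, 0.5, 1.0]:
    assert np.isclose(hankel0(lambda r: np.exp(-np.pi * r ** 2), rho),
                      np.exp(-np.pi * rho ** 2), atol=1e-4)
```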
6) 180° Rotation:

F{F{f(x, y)}} = f(−x, −y). (140)

7) Convolution:

F{∫_{−∞}^{∞} ∫_{−∞}^{∞} f_1(α, β) f_2(x − α, y − β) dα dβ} (141)

= F{f_1(x, y)} F{f_2(x, y)} = F_1(u, v) F_2(u, v). (142)
Note that the convolution of two functions in the space domain is equivalent to the very simple operation of multiplication in the spatial frequency domain. The corresponding property for convolution in the spatial frequency domain is given by

F{f_1(x, y) f_2(x, y)} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} F_1(u − s, v − t) F_2(s, t) ds dt. (143)
A useful example of this property is shown in Figs. 2.21 and 2.22. By the Fourier convolution theorem we have chosen a frequency domain function, H, such that all frequencies above Ω cycles per picture are zero. In the space domain the convolution of f and h is a simple linear filter, while in the frequency domain it is easy to see that all frequency components above Ω cycles/picture have been eliminated.
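An ideal low pass filter of this kind can be sketched in a few lines of NumPy (the cutoff of 0.2 cycles per sample is arbitrary, and the test image is random noise rather than the pictures of Figs. 2.21 and 2.22):

```python
import numpy as np

M = 64
f = np.random.default_rng(0).random((M, M))

# Circular frequency-domain window keeping frequencies below the cutoff
u = np.fft.fftfreq(M)              # cycles per sample, in FFT ordering
U, V = np.meshgrid(u, u, indexing='ij')
H = (U ** 2 + V ** 2 <= 0.2 ** 2).astype(float)

F = np.fft.fft2(f)
g = np.fft.ifft2(F * H).real       # ideal low pass filtered image

# Every frequency component outside the window is now (numerically) zero
G = np.fft.fft2(g)
assert np.allclose(np.abs(G[H == 0]), 0.0, atol=1e-6)
```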
8) Parseval's Theorem:

∫_{−∞}^{∞} ∫_{−∞}^{∞} f_1(x, y) f_2*(x, y) dx dy = ∫_{−∞}^{∞} ∫_{−∞}^{∞} F_1(u, v) F_2*(u, v) du dv (144)

where the asterisk denotes the complex conjugate. When f_1(x, y) = f_2(x, y) = f(x, y), we have

∫_{−∞}^{∞} ∫_{−∞}^{∞} |f(x, y)|² dx dy = ∫_{−∞}^{∞} ∫_{−∞}^{∞} |F(u, v)|² du dv. (145)
Fig. 2.21: An ideal low pass filter is implemented by multiplying the Fourier transform of an object by a circular window.
Fig. 2.22: An ideal low pass filter is implemented by multiplying the Fourier transform of an object by a circular window.
2.2.5 The Two-Dimensional Finite Fourier Transform

For an M × N image f(m, n), the two-dimensional finite Fourier transform is defined as

F(u, v) = (1/MN) Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f(m, n) exp [−j2π(um/M + vn/N)] (146)

for u = 0, 1, ⋯, M − 1 and v = 0, 1, ⋯, N − 1, and the inverse transform (IFFT) is

f(m, n) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v) exp [j2π(um/M + vn/N)] (147)
for m = 0, 1, ⋯, M − 1; n = 0, 1, ⋯, N − 1. It is easy to verify that the summations represented by the FFT and IFFT are inverses by noting that

Σ_{m=0}^{M−1} exp [j(2π/M)km] exp [−j(2π/M)mn] = M for k = n, and 0 otherwise. (148)
This is the discrete version of (107). That the inverse FFT undoes the effect of the FFT is seen by substituting (146) into (147) for the inverse DFT to find
f(m, n) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} [ (1/MN) Σ_{k=0}^{M−1} Σ_{l=0}^{N−1} f(k, l) exp [−j2π(uk/M + vl/N)] ] exp [j2π(um/M + vn/N)]. (149)
The desired result is made apparent by rearranging the order of summation and using (148). In (146) the discrete Fourier transform F(u, v) is defined for u between 0 and M − 1 and for v between 0 and N − 1. If, however, we use the same equation to evaluate F(±u, ±v), we discover that the periodicity properties
* To be consistent with the notation in the one-dimensional case, we should express the space and frequency domain arrays as f_{m,n} and F_{u,v}. However, we feel that for the two-dimensional case, the math looks a bit neater with the style chosen here, especially when one starts dealing with negative indices and other extensions. Also, note that the variables u and v are indices here, which is contrary to their usage in Section 2.2.3 where they represent continuously varying spatial frequencies.
of the exponential factors imply

F(−u, v) = F(M − u, v) (150)

F(u, −v) = F(u, N − v) (151)

F(−u, −v) = F(M − u, N − v). (152)
Similarly, using (147) we can show that

f(−m, n) = f(M − m, n) (153)

f(m, −n) = f(m, N − n) (154)

f(−m, −n) = f(M − m, N − n). (155)
Another related consequence of the periodicity properties of the exponential factors in (146) and (147) is that

F(aM + u, bN + v) = F(u, v) and f(aM + m, bN + n) = f(m, n) (156)
for a = 0, ±1, ±2, ⋯, b = 0, ±1, ±2, ⋯. Therefore, we have the following conclusion: if a finite array of numbers f(m, n) and its Fourier transform F(u, v) are related by (146) and (147), then if it is desired to extend the definition of f(m, n) and F(u, v) beyond the original domain as given by [0 ≤ (m and u) ≤ M − 1] and [0 ≤ (n and v) ≤ N − 1], this extension must be governed by (151), (154) and (156). In other words, the extensions are periodic repetitions of the arrays. It will now be shown that this periodicity has important consequences when we compute the convolution of two M × N arrays, f(m, n) and d(m, n), by multiplying their finite Fourier transforms, F(u, v) and D(u, v). The convolution of two arrays f(m, n) and d(m, n) is given by
g(α, β) = (1/MN) Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f(m, n) d(α − m, β − n) (157)

= (1/MN) Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f(α − m, β − n) d(m, n) (158)
for α = 0, 1, ⋯, M − 1, β = 0, 1, ⋯, N − 1, where we insist that when the values of f(m, n) and d(m, n) are required for indices outside the ranges 0 ≤ m ≤ M − 1 and 0 ≤ n ≤ N − 1, for which f(m, n) and d(m, n) are defined, then they should be obtained by the rules given in (151), (154) and (156). With this condition, the convolution previously defined becomes a circular or cyclic convolution. As in the one-dimensional case, the FFT of (157) can be written as the
product of the two finite Fourier transforms. The transform of g(α, β) is

G(u, v) = (1/MN) Σ_{α=0}^{M−1} Σ_{β=0}^{N−1} exp [−j2π(uα/M + vβ/N)] (1/MN) Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f(m, n) d(α − m, β − n). (159)

Substituting for d its inverse transform representation (147),

d(α − m, β − n) = Σ_{w=0}^{M−1} Σ_{z=0}^{N−1} D(w, z) exp [j2π(w(α − m)/M + z(β − n)/N)] (160)

and then rearranging the summations,

G(u, v) = (1/(MN)²) Σ_{m,n} f(m, n) Σ_{w,z} D(w, z) exp [−j2π(wm/M + zn/N)] Σ_{α,β} exp [j2π((w − u)α/M + (z − v)β/N)]. (161)

By (148) the sum over α and β equals MN when w = u and z = v, and zero otherwise, so that

G(u, v) = F(u, v) D(u, v). (162)
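The result in (162) can be verified by comparing a brute-force cyclic convolution against the product of 2-D FFTs (NumPy's fft2 omits the 1/MN normalization of (146), so the 1/MN factor of (157) is likewise omitted in this sketch):

```python
import numpy as np

M, N = 4, 4
rng = np.random.default_rng(1)
f = rng.random((M, N))
d = rng.random((M, N))

# Brute-force cyclic convolution, with indices wrapped modulo M and N
g = np.zeros((M, N))
for a in range(M):
    for b in range(N):
        for m in range(M):
            for n in range(N):
                g[a, b] += f[m, n] * d[(a - m) % M, (b - n) % N]

# The same result via products of 2-D finite Fourier transforms
g_fft = np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(d)).real
assert np.allclose(g, g_fft)
```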
Thus we see that the convolution of the two-dimensional arrays f and d can be expressed as a simple multiplication in the frequency domain. The discrete version of Parseval's theorem is an often used property of the finite Fourier transform. In the continuous case this theorem is given by (144), while for the discrete case
Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f(m, n) g*(m, n) = MN Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v) G*(u, v) (163)

and, with f = g,

Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} |f(m, n)|² = MN Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} |F(u, v)|². (164)
As in the one-dimensional and the continuous two-dimensional cases, Parseval's theorem states that the energy in the space domain and that in the frequency domain are equal.
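The discrete relation (164) is easy to check numerically. A small sketch, using the 1/(MN) forward-transform convention of (28):

```python
import numpy as np

# Numerical check of the discrete Parseval relation (164): the energy of
# f(m, n) in the space domain equals MN times the energy of its finite
# Fourier transform F(u, v), where F carries the 1/(MN) factor as in (28).

rng = np.random.default_rng(0)
M, N = 8, 4
f = rng.standard_normal((M, N))
F = np.fft.fft2(f) / (M * N)          # forward transform with 1/(MN) factor

space_energy = np.sum(np.abs(f) ** 2)
freq_energy = M * N * np.sum(np.abs(F) ** 2)
assert np.isclose(space_energy, freq_energy)
```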
44
COMPUTERIZED
TOMOGRAPHIC
IMAGING
As in the one-dimensional case, a two-dimensional image must be sampled at a rate greater than the Nyquist frequency to prevent errors due to aliasing. For a moment going back to the interpretation of u and v as continuous frequencies (see Section 2.2.3), if the Fourier transform of the image is zero for all frequencies greater than B, meaning that F(u, v) = 0 for all u and v such that |u| ≥ B and |v| ≥ B, then there will be no aliasing if samples of the image are taken on a rectangular grid with intervals of less than 1/2B. A pictorial representation of the effect of aliasing on two-dimensional images is shown in Fig. 2.23. Further discussion on aliasing in two-dimensional sampling can be found in [Ros82].

2.2.6 Numerical Implementation of the Two-Dimensional FFT
Before we end this chapter, we would like to say a few words about the numerical implementation of the two-dimensional finite Fourier transform. Equation (28) may be written as

F(u, v) = (1/M) Σ_{m=0}^{M-1} [ (1/N) Σ_{n=0}^{N-1} f(m, n) exp (-j2π(nv/N)) ] exp (-j2π(mu/M)),

u = 0, ..., M - 1,    v = 0, ..., N - 1.    (165)

The expression within the square brackets is the one-dimensional FFT of the mth row of the image, which may be implemented by using a standard FFT (fast Fourier transform) computer program (in most instances N is a power of 2). Therefore, to compute F(u, v), we replace each row in the image by its one-dimensional FFT, and then perform the one-dimensional FFT of each column.
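In NumPy terms this row-column decomposition is a one-liner per axis; a sketch (normalization follows NumPy's unnormalized convention rather than the 1/MN factors of (28)):

```python
import numpy as np

# Row-column decomposition of the 2-D finite Fourier transform: take the
# 1-D FFT of every row, then the 1-D FFT of every column of the result,
# and compare against a direct 2-D FFT.

f = np.random.default_rng(1).standard_normal((8, 8))

step1 = np.fft.fft(f, axis=1)      # 1-D FFT of each row
F_rc = np.fft.fft(step1, axis=0)   # 1-D FFT of each column

assert np.allclose(F_rc, np.fft.fft2(f))
```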
Ordinarily, when a 2-D FFT is computed in the manner described above, the frequency domain origin will not be at the center of the array, which if displayed as such can lead to difficulty in interpretation. Note, for example, that in a 16 x 16 image the indices u = 15 and v = 0 correspond to a negative frequency of one cycle per image width. This can be seen by substituting u = 1 and v = 0 in the second equation in (151). To display the frequency domain origin at approximately the center of the array (a precise center does not exist when either M or N is an even number), the image data f(m, n) are first multiplied by (-1)^{m+n} and then the finite Fourier transformation is performed. To prove this, let us define a new array f'(m, n) as follows:
f'(m, n) = f(m, n)(-1)^{m+n}.    (166)
SIGNAL
PROCESSING
FUNDAMENTALS
45
Fig. 2.23: The effect of aliasing in two-dimensional images is shown here. (This is often known as the Moiré effect.) In (a) a high-frequency sinusoid is shown. In (b) this sinusoid is sampled at a rate much lower than the Nyquist rate and the sampled values are shown as black and white dots (gray is used to represent the area between the samples). Finally, in (c) the sampled data shown in (b) are low pass filtered at the Nyquist rate. Note that both the direction and frequency of the sinusoid have changed due to aliasing.
The finite Fourier transform of f'(m, n) is then

F'(u, v) = (1/MN) Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} f(m, n)(-1)^{m+n} exp [-j2π((mu/M) + (nv/N))]    (167)

and since (-1)^{m+n} = exp [j2π(((M/2)m/M) + ((N/2)n/N))], this is

F'(u, v) = (1/MN) Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} f(m, n) exp [-j2π((m(u - M/2)/M) + (n(v - N/2)/N))]    (168)

= F(u - M/2, v - N/2),    (169)

u = 0, 1, ..., M - 1;    v = 0, 1, ..., N - 1.    (170)
Therefore, when the array F'(u, v) is displayed, the location at u = M/2 and v = N/2 will contain F(0, 0). We have by no means discussed all the important properties of continuous, discrete and finite Fourier transforms; the reader is referred to the cited literature for further details.
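For even M and N this checkerboard modulation is numerically equivalent to applying np.fft.fftshift after the transform; a quick check:

```python
import numpy as np

# Multiplying the image by (-1)^(m+n) before transforming moves the
# frequency-domain origin to (M/2, N/2), equivalently np.fft.fftshift
# applied to the transform of the unmodified image (M, N even).

M, N = 8, 8
f = np.random.default_rng(2).standard_normal((M, N))

m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
F_centered = np.fft.fft2(f * (-1.0) ** (m + n))

assert np.allclose(F_centered, np.fft.fftshift(np.fft.fft2(f)))
```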
2.3 References
[Bri74] E. O. Brigham, The Fast Fourier Transform. Englewood Cliffs, NJ: Prentice-Hall, 1974.
[Dud84] D. E. Dudgeon and R. M. Mersereau, Multidimensional Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1984.
[Gon77] R. C. Gonzalez, Digital Image Processing. Reading, MA: Addison-Wesley, 1977.
[Goo68] J. W. Goodman, Introduction to Fourier Optics. San Francisco, CA: McGraw-Hill Book Company, 1968.
[Lig60] M. J. Lighthill, Introduction to Fourier Analysis and Generalized Functions. London and New York: Cambridge Univ. Press, 1960.
[McG74] C. D. McGillem and G. R. Cooper, Continuous and Discrete Signal and System Analysis. New York, NY: Holt, Rinehart and Winston, 1974.
[Opp75] A. V. Oppenheim and R. W. Schafer, Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1975.
[Pap62] A. Papoulis, The Fourier Integral and Its Applications. New York, NY: McGraw-Hill, 1962.
[Pra78] W. K. Pratt, Digital Image Processing. New York, NY: J. Wiley, 1978.
[Rab75] L. R. Rabiner and B. Gold, Theory and Application of Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1975.
[Ros82] A. Rosenfeld and A. C. Kak, Digital Picture Processing, 2nd ed. New York, NY: Academic Press, 1982.
[Sch75] M. Schwartz and L. Shaw, Signal Processing: Discrete Spectral Analysis, Detection, and Estimation. New York, NY: McGraw-Hill, 1975.
In this chapter we will deal with the mathematical basis of tomography with nondiffracting sources. We will show how one can go about recovering the image of the cross section of an object from the projection data. In ideal situations, projections are a set of measurements of the integrated values of some parameter of the object; the integrations are along straight lines through the object and are referred to as line integrals. We will show that the key to tomographic imaging is the Fourier Slice Theorem, which relates the measured projection data to the two-dimensional Fourier transform of the object cross section. This chapter will start with the definition of line integrals and how they are combined to form projections of an object. By finding the Fourier transform of a projection taken along parallel lines, we will then derive the Fourier Slice Theorem. The reconstruction algorithm used depends on the type of projection data measured; we will discuss algorithms based on parallel beam projection data and two types of fan beam data.
ALGORITHMS
FOR RECONSTRUCTION
WITH NONDIFFRACTING
SOURCES
49
Fig. 3.1: An object, f(x, y), and its projection, P_θ(t₁), are shown for an angle of θ. (From [Kak79].)

The line integral of f(x, y) along the line (θ, t) in Fig. 3.1 can be written as

P_θ(t) = ∫_{(θ,t) line} f(x, y) ds.    (2)

Using a delta function, this can be rewritten as

P_θ(t) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(x, y)δ(x cos θ + y sin θ - t) dx dy.    (3)

The function P_θ(t) is known as the Radon transform of the function f(x, y).
Fig. 3.2: Parallel projections are taken by measuring a set of parallel rays for a number of different angles. (From [Ros82].)
A projection is formed by combining a set of line integrals. The simplest projection is a collection of parallel ray integrals as is given by P_θ(t) for a constant θ. This is known as a parallel projection and is shown in Fig. 3.2. It could be measured, for example, by moving an x-ray source and detector along parallel lines on opposite sides of an object. Another type of projection is possible if a single source is placed in a fixed position relative to a line of detectors. This is shown in Fig. 3.3 and is known as a fan beam projection because the line integrals are measured along fans. Most of the computer simulation results in this chapter will be shown for the image in Fig. 3.4. This is the well-known Shepp and Logan [She74]
Fig. 3.3: A fan beam projection is collected if all the rays meet in one location. (From [Ros82].)
head phantom, so called because of its use in testing the accuracy of reconstruction algorithms for their ability to reconstruct cross sections of the human head with x-ray tomography. (The human head is believed to place the greatest demands on the numerical accuracy and the freedom from artifacts of a reconstruction method.) The image in Fig. 3.4(a) is composed of 10 ellipses, as illustrated in Fig. 3.4(b). The parameters of these ellipses are given in Table 3.1. A major advantage of using an image like that in Fig. 3.4(a) for computer simulation is that now one can write analytical expressions for the projections. Note that the projection of an image composed of a number of ellipses is simply the sum of the projections for each of the ellipses; this follows from the linearity of the Radon transform. We will now present
Fig. 3.4: The Shepp and Logan phantom is shown in (a). Most of the computer simulated results in this chapter were generated using this phantom. The phantom is a superposition of 10 ellipses, each with a size and magnitude as shown in (b). (From [Ros82].)
expressions for the projections of a single ellipse. Let f(x, y) be as shown in Fig. 3.5(a), i.e.,

f(x, y) = ρ    for (x²/A²) + (y²/B²) ≤ 1 (inside the ellipse)
        = 0    otherwise (outside the ellipse).    (4)
Fig. 3.5: (a) An analytic expression is shown for the projection of an ellipse; inside the ellipse f(x, y) = ρ, and a²(θ) = A² cos² θ + B² sin² θ. For computer simulations a projection can be generated by simply summing the projection of each individual ellipse. (b) Shown here is an ellipse with its center located at (x₁, y₁) and its major axis rotated by α. (From [Ros82].)
Table 3.1: Parameters of the ellipses making up the phantom of Fig. 3.4.

Center Coordinate    Minor Axis    Rotation Angle    Refractive Index
(0, 0)               0.69          90                2.0
(0, -0.0184)         0.6624        90                -0.98
(0.22, 0)            0.11          72                -0.02
(-0.22, 0)           0.16          108               -0.02
(0, 0.35)            0.21          90                0.01
(0, 0.1)             0.046         0                 0.01
(0, -0.1)            0.046         0                 0.01
(-0.08, -0.605)      0.023         0                 0.01
(0, -0.605)          0.023         0                 0.01
(0.06, -0.605)       0.023         90                0.01
It is easy to show that the projections of such a function are given by

P_θ(t) = (2ρAB/a²(θ)) √(a²(θ) - t²)    for |t| ≤ a(θ)
       = 0                             for |t| > a(θ)    (5)

where a²(θ) = A² cos² θ + B² sin² θ. Note that a(θ) is equal to the projection half-width as shown in Fig. 3.5(a). Now consider the ellipse described above centered at (x₁, y₁) and rotated by an angle α as shown in Fig. 3.5(b). Let P'_θ(t) be the resulting projections. They are related to P_θ(t) in (5) by

P'_θ(t) = P_{θ-α}(t - s cos (γ - θ))    (6)

where

s = √(x₁² + y₁²)  and  γ = tan⁻¹ (y₁/x₁).    (7)
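The analytic projection (5) of a single centered ellipse can be sketched directly; the function name and default half-axes below are ours (the defaults roughly follow the outer ellipse of Table 3.1):

```python
import numpy as np

# Sketch of eq. (5): the analytic parallel projection of a uniform
# ellipse with half-axes A, B and density rho centered at the origin.
# a(theta) is the projection half-width.

def ellipse_projection(t, theta, A=0.69, B=0.92, rho=1.0):
    a2 = (A * np.cos(theta)) ** 2 + (B * np.sin(theta)) ** 2
    t = np.asarray(t, dtype=float)
    p = np.zeros_like(t)
    inside = t ** 2 <= a2
    p[inside] = 2.0 * rho * A * B * np.sqrt(a2 - t[inside] ** 2) / a2
    return p

# At theta = 0 the half-width is A and the peak value is 2*rho*B:
p0 = ellipse_projection(np.array([0.0, 0.68, 0.70]), 0.0)
assert np.isclose(p0[0], 2 * 0.92)   # peak 2*rho*B
assert p0[1] > 0 and p0[2] == 0      # support is |t| <= a(0) = A = 0.69
```

By the linearity of the Radon transform, a projection of the full phantom is simply the sum of such terms, each shifted according to (6).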
3.2 The Fourier Slice Theorem

The Fourier transform of a projection at angle θ is written

S_θ(w) = ∫_{-∞}^{∞} P_θ(t)e^{-j2πwt} dt.    (8)

The simplest example of the Fourier Slice Theorem is given for a projection at θ = 0. First, consider the Fourier transform of the object along the line in the frequency domain given by v = 0. The Fourier transform integral now simplifies to

F(u, 0) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(x, y)e^{-j2πux} dx dy    (9)

but because the phase factor is no longer dependent on y we can split the integral into two parts,

F(u, 0) = ∫_{-∞}^{∞} [ ∫_{-∞}^{∞} f(x, y) dy ] e^{-j2πux} dx.    (10)
From the definition of a parallel projection, the reader will recognize the term in brackets as the equation for a projection along lines of constant x, or

P_{θ=0}(x) = ∫_{-∞}^{∞} f(x, y) dy.    (11)

Substituting (11) into (10) we find

F(u, 0) = ∫_{-∞}^{∞} P_{θ=0}(x)e^{-j2πux} dx.    (12)
The right-hand side of this equation represents the one-dimensional Fourier transform of the projection P_{θ=0}; thus we have the following relationship between the vertical projection and the 2-D transform of the object function:

F(u, 0) = S_{θ=0}(u).    (13)

This is the simplest form of the Fourier Slice Theorem. Clearly this result is independent of the orientation between the object and the coordinate system. If, for example, as shown in Fig. 3.6 the (t, s) coordinate system is rotated by an angle θ, the Fourier transform of the projection defined in (11) is equal to the two-dimensional Fourier transform of the object along a line rotated by θ. This leads to the Fourier Slice Theorem, which is stated as [Kak85]:
Fig. 3.6: The Fourier Slice Theorem relates the Fourier transform of a projection to the Fourier transform of the object along a radial line. (From [Pan83].)

The Fourier transform of a parallel projection of an image f(x, y) taken at angle θ gives a slice of the two-dimensional transform, F(u, v), subtending an angle θ with the u-axis. In other words, the Fourier transform of P_θ(t) gives the values of F(u, v) along line BB in Fig. 3.6.
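The θ = 0 case of the theorem is easy to verify numerically, with discrete sums standing in for the integrals (a sketch, using NumPy's unnormalized DFT):

```python
import numpy as np

# Numerical check of the Fourier Slice Theorem for theta = 0, eq. (13):
# the 1-D Fourier transform of the projection along y equals the v = 0
# slice of the 2-D Fourier transform of the object, F(u, 0).

f = np.random.default_rng(3).standard_normal((16, 16))  # toy object f(x, y)

projection = f.sum(axis=1)              # P_{theta=0}(x): sum over y
slice_from_2d = np.fft.fft2(f)[:, 0]    # F(u, 0)

assert np.allclose(np.fft.fft(projection), slice_from_2d)
```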
The derivation of the Fourier Slice Theorem can be placed on a more solid foundation by considering the (t, s) coordinate system to be a rotated version of the original (x, y) system as expressed by

t = x cos θ + y sin θ
s = -x sin θ + y cos θ.    (14)

In the (t, s) coordinate system a projection along lines of constant t is written

P_θ(t) = ∫_{-∞}^{∞} f(t, s) ds    (15)

and from (8) its Fourier transform is given by

S_θ(w) = ∫_{-∞}^{∞} P_θ(t)e^{-j2πwt} dt = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(t, s)e^{-j2πwt} ds dt.    (16)

This result can be transformed into the (x, y) coordinate system by using the relationships in (14), the result being

S_θ(w) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(x, y)e^{-j2πw(x cos θ + y sin θ)} dx dy.    (17)

The right-hand side of this equation now represents the two-dimensional Fourier transform at a spatial frequency of (u = w cos θ, v = w sin θ) or

S_θ(w) = F(u, v) = F(w cos θ, w sin θ).    (18)
This equation is the essence of straight ray tomography and proves the Fourier Slice Theorem. The above result indicates that by taking the projections of an object function at angles θ₁, θ₂, ..., θ_k and Fourier transforming each of these, we can determine the values of F(u, v) on radial lines as shown in Fig. 3.6. If an infinite number of projections are taken, then F(u, v) would be known at all points in the uv-plane. Knowing F(u, v), the object function f(x, y) can be recovered by using the inverse Fourier transform:

f(x, y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} F(u, v)e^{j2π(ux+vy)} du dv.    (19)

If the function f(x, y) is bounded by -A/2 < x < A/2 and -A/2 < y < A/2, for the purpose of computation (19) can be written as

f(x, y) = (1/A²) Σ_m Σ_n F(m/A, n/A) e^{j2π((m/A)x + (n/A)y)}    (20)
for -A/2 < x < A/2 and -A/2 < y < A/2.    (21)
Since in practice only a finite number of Fourier components will be known, we can write

f(x, y) ≈ (1/A²) Σ_{m=-N/2}^{N/2} Σ_{n=-N/2}^{N/2} F(m/A, n/A) e^{j2π((m/A)x + (n/A)y)}    (22)

for

-A/2 < x < A/2  and  -A/2 < y < A/2    (23)
Fig. 3.7: Collecting projections of the object at a number of angles gives estimates of the Fourier transform of the object along radial lines. Since an FFT algorithm is used for transforming the data, the dots represent the actual location of estimates of the object's Fourier transform. (From [Pan83].)
where we arbitrarily assume N to be an even integer. It is clear that the spatial resolution in the reconstructed picture is determined by N. Equation (22) can be rapidly implemented by using the fast Fourier transform (FFT) algorithm provided the N² Fourier coefficients F(m/A, n/A) are known. In practice only a finite number of projections of an object can be taken. In that case it is clear that the function F(u, v) is only known along a finite number of radial lines such as in Fig. 3.7. In order to be able to use (22) one must then interpolate from these radial points to the points on a square grid. Theoretically, one can exactly determine the N² coefficients required in (22) provided as many values of the function F(u, v) are known on some radial lines [Cro70]. This calculation involves solving a large set of simultaneous equations, often leading to unstable solutions. It is more common to determine the values on the square grid by some kind of nearest neighbor or linear interpolation from the radial points. Since the density of the radial points becomes sparser as one gets farther away from the center, the interpolation error also becomes larger. This implies that there is greater error in the
calculation of the high frequency components in an image than in the low frequency ones, which results in some image degradation.
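A minimal sketch of such a gridding step, depositing radial samples of F(u, v) on the nearest square-grid cell; all names here are ours, and real implementations use better interpolation than this:

```python
import numpy as np

# Nearest-neighbor gridding for direct Fourier reconstruction: estimates
# of F(u, v) are known on radial lines (angles thetas, radii), and each
# sample is deposited on the nearest point of a square frequency grid.
# Cells hit by several samples are averaged; unhit cells stay zero.

def grid_radial_samples(values, thetas, radii, n):
    grid = np.zeros((n, n), dtype=complex)
    counts = np.zeros((n, n))
    c = n // 2                                  # grid center = dc term
    for k, th in enumerate(thetas):
        for i, w in enumerate(radii):
            u = int(round(c + w * np.cos(th)))
            v = int(round(c + w * np.sin(th)))
            if 0 <= u < n and 0 <= v < n:
                grid[u, v] += values[k, i]
                counts[u, v] += 1
    grid[counts > 0] /= counts[counts > 0]      # average coincident samples
    return grid

thetas = np.linspace(0, np.pi, 8, endpoint=False)
radii = np.arange(-8, 8)
vals = np.ones((8, 16), dtype=complex)
G = grid_radial_samples(vals, thetas, radii, 17)
assert np.isclose(G[8, 8].real, 1.0)   # the origin is hit by every radial line
```

Note how the radial samples become sparse far from the center, which is exactly where the interpolation error discussed above accumulates.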
The filtered backprojection algorithm is perhaps one of the most illustrative examples of how we can obtain a radically different computer implementation by simply rewriting the fundamental expressions for the underlying theory.
In this chapter, derivations and implementation details will be presented for the backprojection algorithms for three types of scanning geometries: parallel beam, equiangular fan beam, and equispaced fan beam. The computer implementation of these algorithms requires the projection data to be sampled and then filtered. Using FFT algorithms we will show how the filtering can be implemented rapidly on a computer. Before launching into the mathematical derivations of the algorithms, we will first provide a bit of intuitive rationale behind the filtered backprojection type of approach. If the reader finds this presentation excessively wordy, he or she may go directly to Section 3.3.2.
Fig. 3.8: This figure shows the frequency domain data available from one projection. (a) is the ideal situation. A reconstruction could be formed by simply summing the reconstruction from each angle until the entire frequency domain is filled. What is actually measured is shown in (b). As predicted by the Fourier Slice Theorem, a projection gives information about the Fourier transform of the object along a single line. The filtered backprojection algorithm takes the data in (b) and applies a weighting in the frequency domain so that the data in (c) are an approximation to those in (a).
this projection gives the values of the object's two-dimensional Fourier transform along a single line. If the values of the Fourier transform of this projection are inserted into their proper place in the object's two-dimensional Fourier domain then a simple (albeit very distorted) reconstruction can be formed by assuming the other projections to be zero and finding the two-dimensional inverse Fourier transform. The point of this exercise is to show that the reconstruction so formed is equivalent to the original object's Fourier transform multiplied by the simple filter shown in Fig. 3.8(b). What we really want from a simple reconstruction procedure is the sum of projections of the object filtered by pie-shaped wedges as shown in Fig. 3.8(a). It is important to remember that this summation can be done in either the Fourier domain or in the space domain because of the linearity of the Fourier transform. As will be seen later, when the summation is carried out in the space domain, this constitutes the backprojection process. As the name implies, there are two steps to the filtered backprojection algorithm: the filtering part, which can be visualized as a simple weighting of each projection in the frequency domain, and the backprojection part, which is equivalent to finding the elemental reconstructions corresponding to each wedge filter mentioned above. The first step mentioned above accomplishes the following: A simple weighting in the frequency domain is used to take each projection and estimate a pie-shaped wedge of the object's Fourier transform. Perhaps the simplest way to do this is to take the value of the Fourier transform of the projection, S_θ(w), and multiply it by the width of the wedge at that frequency. Thus if there are K projections over 180° then at a given frequency w, each wedge has a width of 2π|w|/K.

Later when we derive the theory more rigorously, we will see that this factor of |w| represents the Jacobian for a change of variable between polar coordinates and the rectangular coordinates needed for the inverse Fourier transform. The effect of this weighting by 2π|w|/K is shown in Fig. 3.8(c). Comparing this to that shown in (a) we see that at each spatial frequency, w, the weighted projection, (2π|w|/K)S_θ(w), has the same mass as the pie-shaped wedge. Thus the weighted projections represent an approximation to the pie-shaped wedge, but the error can be made as small as desired by using enough projections. The final reconstruction is found by adding together the two-dimensional inverse Fourier transform of each weighted projection. Because each
projection only gives the values of the Fourier transform along a single line, this inversion can be performed very fast. This step is commonly called a backprojection since, as we will show in the next section, it can be perceived as the smearing of each filtered projection over the image plane. The complete filtered backprojection algorithm can therefore be written as:

For each of the K angles, θ, between 0 and 180°:
    Measure the projection, P_θ(t)
    Fourier transform it to find S_θ(w)
    Multiply it by the weighting function 2π|w|/K
    Sum over the image plane the inverse Fourier transforms of the filtered projections (the backprojection process).

There are two advantages to the filtered backprojection algorithm over a frequency domain interpolation scheme. Most importantly, the reconstruction procedure can be started as soon as the first projection has been measured. This can speed up the reconstruction procedure and reduce the amount of data that must be stored at any one time. To appreciate the second advantage, the reader must note (this will become clearer in the next subsection) that in the filtered backprojection algorithm, when we compute the contribution of each filtered projection to an image point, interpolation is often necessary; it turns out that it is usually more accurate to carry out interpolation in the space domain, as part of the backprojection or smearing process, than in the frequency domain. Simple linear interpolation is often adequate for the backprojection algorithm while more complicated approaches are needed for direct Fourier domain interpolation [Sta81].

Fig. 3.9: A projection of an ellipse is shown in (a). (b) shows the projection after it has been filtered in preparation for backprojection.

In Fig. 3.9(a) we show the projection of an ellipse as calculated by (5). To perform a reconstruction it is necessary to filter the projection and then backproject the result as shown in Fig. 3.9(b). The result due to backprojecting one projection is shown in Fig. 3.10. It takes many projections to accurately reconstruct an object; Fig. 3.10 shows the result of reconstructing an object with up to 512 projections.
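Under simplifying assumptions (nearest-neighbor interpolation, NumPy's discrete frequency grid for the |w| weighting), the steps listed above can be sketched as follows; the function name and scaling details are ours, not the text's:

```python
import numpy as np

# Simplified filtered backprojection sketch for parallel projections.
# sinogram[k] holds the projection P_theta(t) measured at angle thetas[k];
# each is |w|-filtered in the frequency domain and then smeared back
# along lines of constant t = x cos(theta) + y sin(theta).

def filtered_backprojection(sinogram, thetas):
    K, N = sinogram.shape
    w = np.fft.fftfreq(N)                       # discrete frequency samples
    recon = np.zeros((N, N))
    x = np.arange(N) - N / 2
    X, Y = np.meshgrid(x, x, indexing="ij")
    for k, th in enumerate(thetas):
        # filtering part: multiply the projection's FFT by |w|
        Q = np.real(np.fft.ifft(np.fft.fft(sinogram[k]) * np.abs(w)))
        # backprojection part: each pixel reads Q at its own t value
        t = X * np.cos(th) + Y * np.sin(th) + N / 2
        idx = np.clip(np.round(t).astype(int), 0, N - 1)  # nearest neighbor
        recon += Q[idx]
    return recon * np.pi / K

thetas = np.linspace(0, np.pi, 64, endpoint=False)
sino = np.ones((64, 65))        # stand-in projection data
rec = filtered_backprojection(sino, thetas)
assert rec.shape == (65, 65)
```

A production implementation would use the bandlimited filter and zero-padding developed later in this section, plus linear rather than nearest-neighbor interpolation.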
3.3.2 Theory
We will first present the backprojection algorithm for parallel beam projections. Recalling the formula for the inverse Fourier transform, the object function, f(x, y), can be expressed as

f(x, y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} F(u, v)e^{j2π(ux+vy)} du dv.    (24)

Exchanging the rectangular coordinate system in the frequency domain, (u, v), for a polar coordinate system, (w, θ), by making the substitutions

u = w cos θ    (25)
v = w sin θ    (26)

and changing the differentials by using

du dv = w dw dθ    (27)

we can write the inverse Fourier transform of a polar function as

f(x, y) = ∫_0^{2π} ∫_0^{∞} F(w, θ)e^{j2πw(x cos θ + y sin θ)}w dw dθ.    (28)

Fig. 3.10: The result of backprojecting the projection in Fig. 3.9 is shown here. (a) shows the result of backprojecting for a single angle, (b) shows the effect of backprojecting over 4 angles, (c) shows 64 angles, and (d) shows 512 angles.
This integral can be split into two by considering θ from 0° to 180° and then from 180° to 360°,

f(x, y) = ∫_0^{π} ∫_0^{∞} F(w, θ)e^{j2πw(x cos θ + y sin θ)}w dw dθ
        + ∫_0^{π} ∫_0^{∞} F(w, θ + 180°)e^{j2πw[x cos (θ+180°) + y sin (θ+180°)]}w dw dθ.    (29)

Then using the property

F(w, θ + 180°) = F(-w, θ)    (30)

the above expression for f(x, y) may be written as

f(x, y) = ∫_0^{π} [ ∫_{-∞}^{∞} F(w, θ)|w|e^{j2πwt} dw ] dθ.    (31)

Here we have simplified the expression by setting

t = x cos θ + y sin θ.    (32)
If we substitute S_θ(w), the Fourier transform of the projection at angle θ, for the two-dimensional Fourier transform F(w, θ), we get

f(x, y) = ∫_0^{π} [ ∫_{-∞}^{∞} S_θ(w)|w|e^{j2πwt} dw ] dθ.    (33)

This integral in (33) may be expressed as

f(x, y) = ∫_0^{π} Q_θ(x cos θ + y sin θ) dθ    (34)

where

Q_θ(t) = ∫_{-∞}^{∞} S_θ(w)|w|e^{j2πwt} dw.    (35)
This estimate of f(x, y), given the projection data transform S_θ(w), has a simple form. Equation (35) represents a filtering operation, where the frequency response of the filter is given by |w|; therefore Q_θ(t) is called a "filtered projection." The resulting projections for different angles θ are then added to form the estimate of f(x, y). Equation (34) calls for each filtered projection, Q_θ, to be "backprojected." This can be explained as follows. To every point (x, y) in the image
plane there corresponds a value of t = x cos θ + y sin θ for a given value of θ, and the filtered projection Q_θ contributes to the reconstruction its value at t (= x cos θ + y sin θ). This is further illustrated in Fig. 3.11. It is easily shown that for the indicated angle θ, the value of t is the same for all (x, y) on the line LM. Therefore, the filtered projection, Q_θ, will make the same contribution to the reconstruction at all of these points. Therefore, one could say that in the reconstruction process each filtered projection, Q_θ, is smeared back, or backprojected, over the image plane.

Fig. 3.11: Reconstructions are often done using a procedure known as backprojection. Here a filtered projection is smeared back over the reconstruction plane along lines of constant t. The filtered projection at a point t makes the same contribution to all pixels along the line LM in the x-y plane. (From [Ros82].)

The parameter w has the dimension of spatial frequency. The integration in (35) must, in principle, be carried out over all spatial frequencies. In practice the energy contained in the Fourier transform components above a certain frequency is negligible, so for all practical purposes the projections may be considered to be bandlimited. If W is a frequency higher than the highest frequency component in each projection, then by the sampling theorem the projections can be sampled at intervals of

T = 1/2W    (36)

without introducing any error. If we also assume that the projection data are equal to zero for large values of |t| then a projection can be represented as

P_θ(mT),    m = -N/2, ..., 0, ..., N/2 - 1    (37)

for some (large) value of N.
If the projection data are sampled as in (37), their Fourier transform can be approximated by

S_θ(m(2W/N)) ≈ (1/2W) Σ_{k=-N/2}^{N/2-1} P_θ(k/2W) e^{-j2π(mk/N)}.    (38)
Given the samples of a projection, (38) gives the samples of its Fourier transform. The next step is to evaluate the "modified projection" Q_θ(t) digitally. Since the Fourier transforms S_θ(w) have been assumed to be bandlimited, (35) can be approximated by

Q_θ(t) ≈ ∫_{-W}^{W} S_θ(w)|w|e^{j2πwt} dw    (39)

≈ (2W/N) Σ_{m=-N/2}^{N/2} S_θ(m(2W/N)) |m(2W/N)| e^{j2πm(2W/N)t}    (40)

provided N is large enough. Again, if we want to determine the projections Q_θ(t) for only those t at which the projections P_θ(t) are sampled, we get

Q_θ(k/2W) ≈ (2W/N) Σ_{m=-N/2}^{N/2} S_θ(m(2W/N)) |m(2W/N)| e^{j2π(mk/N)},    (41)

k = -N/2, ..., N/2.    (42)
By the above equation, the function Q_θ(t) at the sampling points of the projection functions is given (approximately) by the inverse DFT of the product of S_θ(m(2W/N)) and |m(2W/N)|. From the standpoint of noise in the reconstructed image, superior results are usually obtained if one multiplies the filtered projection, S_θ(m(2W/N))|m(2W/N)|, by a function such as a Hamming window [Ham77]:

Q_θ(k/2W) ≈ (2W/N) Σ_{m=-N/2}^{N/2} S_θ(m(2W/N)) |m(2W/N)| H(m(2W/N)) e^{j2π(mk/N)}    (43)

where H(m(2W/N)) represents the window function used. The purpose of the window function is to deemphasize high frequencies, which in many cases represent mostly observation noise. By the familiar convolution theorem for the case of discrete transforms, (43) can be written as

Q_θ(k/2W) ≈ P_θ(k/2W) * φ(k/2W)    (44)

where * denotes circular (periodic) convolution and where φ(k/2W) is the inverse DFT of the discrete function |m(2W/N)|H(m(2W/N)), m = -N/2, ..., -1, 0, 1, ..., N/2.
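A small sketch of this windowed filtering with NumPy; the frequency grid and the window constants here are illustrative stand-ins, not the text's exact H:

```python
import numpy as np

# Sketch of the windowed filtering in (43): the |w| response is tapered
# by a Hamming-style window before inverse transforming, de-emphasizing
# the high frequencies where observation noise dominates.

N = 128
w = np.fft.fftfreq(N)                           # frequencies in cycles/sample
window = 0.54 + 0.46 * np.cos(2 * np.pi * w)    # 1 at w = 0, 0.08 at band edge
filt = np.abs(w) * window

projection = np.exp(-np.linspace(-3, 3, N) ** 2)    # smooth test projection
Q = np.real(np.fft.ifft(np.fft.fft(projection) * filt))

assert filt[0] == 0.0        # the dc term is zeroed by the |w| factor
assert Q.shape == (N,)
```

The zeroing of the dc cell visible here is exactly the source of the "dc shift" artifact discussed below.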
Clearly, at the sampling points of a projection, the function Q_θ(t) may be obtained either in the Fourier domain by using (40), or in the space domain by using (44). The reconstructed picture f(x, y) may then be obtained by the discrete approximation to the integral in (34), i.e.,

f(x, y) ≈ (π/K) Σ_{i=1}^{K} Q_{θᵢ}(x cos θᵢ + y sin θᵢ)    (45)

where the K angles θᵢ are those for which the projections P_θ(t) are known. Note that the value of x cos θᵢ + y sin θᵢ in (45) may not correspond to one of the values of t for which Q_{θᵢ} is determined in (43) or in (44). However, Q_{θᵢ} for such t may be approximated by suitable interpolation; often, linear interpolation is adequate. Before concluding this subsection we would like to make two comments about the filtering operation in (35). First, note that (35) may be expressed in the t-domain as
Q_θ(t) = ∫_{-∞}^{∞} P_θ(α)p(t - α) dα    (46)

where p(t) is nominally the inverse Fourier transform of the |w| function in the frequency domain. Since |w| is not a square integrable function, its inverse transform doesn't exist in an ordinary sense. However, one may examine the inverse Fourier transform of

|w|e^{-ε|w|}    (47)

as ε → 0; it is

p_ε(t) = (ε² - 4π²t²)/(ε² + 4π²t²)².    (48)

This function is sketched in Fig. 3.12. Note that for large t we get p_ε(t) ≈ -1/(2πt)². Now our second comment about the filtered projection in (35): This equation may also be written as

Q_θ(t) = ∫_{-∞}^{∞} j2πwS_θ(w) [-(j/2π) sgn (w)] e^{j2πwt} dw    (49)

where

sgn (w) = 1 for w > 0,  -1 for w < 0.    (50)

By the convolution theorem, this gives

Q_θ(t) = {IFT of j2πwS_θ(w)} * {IFT of -(j/2π) sgn (w)}    (51)
Fig. 3.12: An approximation to the impulse response of the ideal backprojection filter is shown here. (From [Ros82].)
where the symbol * denotes convolution and the abbreviation IFT stands for inverse Fourier transform. The IFT of j2πwS_θ(w) is ∂P_θ(t)/∂t, while the IFT of -(j/2π) sgn (w) is 1/(2π²t). Therefore, the above result may be written as

Q_θ(t) = (1/2π²) ∫_{-∞}^{∞} (∂P_θ(α)/∂α) (1/(t - α)) dα    (52)

= Hilbert Transform of (1/2π) ∂P_θ(t)/∂t    (53)
where, expressed as a filtering operation, the Hilbert Transform is usually defined as the following frequency response:

H(w) = -j for w > 0,  j for w < 0.    (54)

3.3.3 Computer Implementation of the Algorithm

Let us assume that the projection data are sampled with a sampling interval of τ cm. If there is no aliasing, this implies that in the transform domain the projections don't contain any energy outside the frequency interval (-W, W), where

W = 1/2τ cycles/cm.    (55)
Let the sampled projections be represented by P_θ(kτ) where k takes integer values. The theory presented in the preceding subsection says that for each sampled projection P_θ(kτ) we must generate a filtered Q_θ(kτ) by using the periodic (circular) convolution given by (40). Equation (40) is very attractive since it directly conforms to the definition of the DFT and, if N is decomposable, possesses a fast FFT implementation. However, note that (40) is only valid when the projections are of finite bandwidth and finite order. Since these two assumptions (taken together) are never strictly satisfied, computer processing based on (40) usually leads to interperiod interference artifacts created when an aperiodic convolution (required by (35)) is implemented as a periodic convolution. This is illustrated in Fig. 3.13. Fig. 3.13(a) shows a reconstruction of the Shepp and Logan head phantom from 110 projections and 127 rays in each projection using (40) and (45). Equation (40) was implemented with a base 2 FFT algorithm using 128 points. Fig. 3.13(b) shows the reconstructed values on the horizontal line for y = -0.605. For comparison we have also shown the values on this line in the original object function. The comparison illustrated in Fig. 3.13(b) shows that reconstruction based on (42) and (45) introduces a slight "dishing" and a "dc shift" in the image. These artifacts are partly caused by the periodic convolution implied by (40) and partly by the fact that the implementations in (40) zero out all the information in the continuous frequency domain in the cell represented by m = 0, whereas the theory (eq. (35)) calls for such zeroing out to occur at only one frequency, viz. w = 0. The contribution to these artifacts by the interperiod interference can be eliminated by adequately zero-padding the projection data before using the implementations in (42) or (43).
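The interperiod interference itself is easy to demonstrate numerically; a sketch:

```python
import numpy as np

# Interperiod interference: an aperiodic convolution of an N-point signal
# with an N-point kernel needs 2N-1 output samples, so computing it as an
# N-point circular convolution wraps the tail back onto the head.
# Zero-padding both sequences to 2N-1 points removes the overlap.

N = 8
p = np.ones(N)
h = np.ones(N)

aperiodic = np.convolve(p, h)                       # length 2N-1, exact
circ_N = np.real(np.fft.ifft(np.fft.fft(p) * np.fft.fft(h)))
padded = np.real(np.fft.ifft(np.fft.fft(p, 2 * N - 1) * np.fft.fft(h, 2 * N - 1)))

assert not np.allclose(circ_N, aperiodic[:N])       # wraparound corrupts result
assert np.allclose(padded, aperiodic)               # padding recovers it
```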
Zero-padding of the projections also reduces, but never completely eliminates, the contribution to the artifacts by the zeroing out of the information in the m = 0 cell in (40). This is because zero-padding in the space domain causes the cell size to get smaller in the frequency domain. (If N_FFT points are used for performing the discrete Fourier transform, the size of each sampling cell in the frequency domain is equal to 1/(N_FFT τ).) To illustrate the effect of zero-padding, the 127 rays in each projection in the preceding example were padded with 129 zeros to make the data string 256 elements long. These data were transformed by an FFT algorithm and filtered with a |w| function as before. The y = -0.605 line through the
Fig. 3.13: (a) This reconstruction of the Shepp and Logan phantom shows the artifacts caused when the projection data are not adequately zero-padded and FFTs are used to perform the filtering operation in the filtered backprojection algorithm. The dark regions at the top and the bottom of the reconstruction are the most visible artifacts here. This 128 x 128 reconstruction was made from 110 projections with 127 rays in each projection. (b) A numerical comparison of the true and the reconstructed values on the y = -0.605 line. (For the location of this line see Fig. 3.4.) The "dishing" and the "dc shift" artifacts are quite evident in this comparison. (c) Shown here are the reconstructed values obtained on the y = -0.605 line if the 127 rays in each projection are zero-padded to 256 points before using the FFTs. The "dishing" caused by interperiod interference has disappeared; however, the "dc shift" still remains. (From [Ros82].)
reconstruction is shown in Fig. 3.13(c), demonstrating that the dishing distortion is now less severe. We will now show that the artifacts mentioned above can be eliminated by the following alternative implementation of (35) which doesn require the t approximation used in the discrete representation of (40). When the highest frequency in the projections is finite (as given by (55)), (35) may be expressed as
Q_θ(t) = ∫_{−∞}^{∞} S_θ(w) H(w) e^{j2πwt} dw    (56)

where

S_θ(w) = ∫_{−∞}^{∞} P_θ(t) e^{−j2πwt} dt    (57)

and

H(w) = |w| b_W(w),    where b_W(w) = 1 for |w| < W and 0 otherwise.    (58)
H(w), shown in Fig. 3.14, represents the transfer function of a filter with which the projections must be processed. The impulse response, h(t), of this filter is given by the inverse Fourier transform of H(w) and is
h(t) = ∫_{−∞}^{∞} H(w) e^{j2πwt} dw    (59)

     = (1/(2τ²)) [sin(2πt/2τ)/(2πt/2τ)] − (1/(4τ²)) [sin(πt/2τ)/(πt/2τ)]²    (60)
Fig. 3.14: The ideal filter response for the filtered backprojection algorithm is shown here. It has been bandlimited to 1/2τ. (From [Ros82].)
where we have used (55). Since the projection data are measured with a sampling interval of τ, for digital processing the impulse response need only be known with the same sampling interval. The samples, h(nτ), of h(t) are given by

h(nτ) = 1/(4τ²)         n = 0
      = 0               n even
      = −1/(n²π²τ²)     n odd.    (61)
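As a concrete sketch of (61), the filter samples can be tabulated directly (a minimal NumPy sketch; the function name is ours):

```python
import numpy as np

def ramp_filter_impulse_response(n, tau):
    """Samples h(n*tau) of the bandlimited ramp filter, eq. (61):
    1/(4 tau^2) at n = 0, zero for even n, -1/(n^2 pi^2 tau^2) for odd n."""
    n = np.asarray(n)
    h = np.zeros(n.shape, dtype=float)
    h[n == 0] = 1.0 / (4.0 * tau**2)
    odd = (n % 2) != 0
    h[odd] = -1.0 / (np.pi**2 * n[odd]**2 * tau**2)
    return h
```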
This function is shown in Fig. 3.15. Since both PO(t) and h(t) are now bandlimited functions, they may be expressed as sin 2n W(t - kr)
MO= i pew) 2TW(t-k7)
(62)
k=-m
(63)
By the convolution theorem the filtered projection (56) can be written as
Q_θ(t) = ∫_{−∞}^{∞} P_θ(t′) h(t − t′) dt′.    (64)
72 COMPUTERIZED TOMOGRAPHIC IMAGING
Fig. 3.15: The impulse response h(nτ) of the filter shown in Fig. 3.14 is shown here. (From [Ros82].)
Substituting (62) and (63) in (64) we get the following result for the values of the filtered projection at the sampling points:
Q_θ(nτ) = τ Σ_{k=−∞}^{∞} h(nτ − kτ) P_θ(kτ).    (65)
In practice each projection is of only finite extent. Suppose that each P_θ(kτ) is zero outside the index range k = 0, ···, N − 1. We may now write the following two equivalent forms of (65):
Q_θ(nτ) = τ Σ_{k=0}^{N−1} h(nτ − kτ) P_θ(kτ),    n = 0, 1, ···, N − 1    (66)

or

Q_θ(nτ) = τ Σ_{k=−(N−1)}^{N−1} h(kτ) P_θ(nτ − kτ),    n = 0, 1, ···, N − 1.    (67)
Fig. 3.16: The DFT of the bandlimited filter (broken line) and that of the ideal filter (solid line) are shown here. Notice the primary difference is in the dc component. (From [Ros82].)
These equations imply that in order to determine Q_θ(nτ) the sequence h(nτ) should be used for n = −(N − 1) to n = (N − 1). It is important to realize that the results obtained by using (66) or (67) aren't identical to those obtained by using (42). This is because the discrete Fourier transform of the sequence h(nτ) with n taking values in a finite range [such as when n ranges from −(N − 1) to (N − 1)] is not the sequence {|k|(2W/N)}. While the latter sequence is zero at k = 0, the DFT of h(nτ) with n ranging from −(N − 1) to (N − 1) is nonzero at this point. This is illustrated in Fig. 3.16. The discrete convolution in (66) or (67) may be implemented directly on a general purpose computer. However, it is much faster to implement it in the frequency domain using FFT algorithms. [By using specially designed hardware, direct implementation of (66) can be made as fast or faster than the frequency domain implementation.] For the frequency domain implementation one has to keep in mind the fact that one can now only perform periodic (or circular) convolutions, while the convolution required in (66) is aperiodic. To eliminate the interperiod interference artifacts inherent to periodic convolution, we pad the projection data with a sufficient number of zeros. It can easily be shown [Jak76] that if we pad P_θ(kτ) with zeros so that it is (2N − 1) elements long, we avoid interperiod interference over the N samples of Q_θ(kτ). Of course, if one wants to use the base 2 FFT algorithm, which is most often the case, the sequences P_θ(kτ) and h(kτ) have to be zero-padded so that each is (2N − 1)₂ elements long, where (2N − 1)₂ is the smallest integer that is a power of 2 and that is greater than 2N − 1. Therefore, the frequency domain implementation may be expressed as
Q_θ(nτ) ≈ τ × IFFT {FFT [P_θ(nτ) with ZP] × FFT [h(nτ) with ZP]}    (68)
where FFT and IFFT denote, respectively, fast Fourier transform and inverse fast Fourier transform; ZP stands for zero-padding. One usually obtains superior reconstructions when some smoothing is also incorporated in (68). Smoothing may be implemented by multiplying the product of the two FFTs by a Hamming window. When such a window is incorporated, (68) may be rewritten as

Q_θ(nτ) ≈ τ × IFFT {FFT [P_θ(nτ) with ZP] × FFT [h(nτ) with ZP] × smoothing window}.    (69)
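Equations (68) and (69) can be sketched as a single routine (our own minimal NumPy implementation; `hamming=True` applies the smoothing window of (69)):

```python
import numpy as np

def filter_projection(proj, tau, hamming=False):
    """Filter one projection per eqs. (68)/(69): zero-pad to a power of two
    at least 2N - 1, multiply the FFTs of the projection and of h(n*tau),
    inverse transform, and keep the first N samples."""
    N = len(proj)
    M = 1 << int(np.ceil(np.log2(2 * N - 1)))      # (2N - 1)_2 in the text
    n = np.concatenate((np.arange(0, M // 2), np.arange(-M // 2, 0)))
    h = np.zeros(M)                                # h(n*tau) in wrap-around order
    h[n == 0] = 1.0 / (4.0 * tau**2)
    odd = (n % 2) != 0
    h[odd] = -1.0 / (np.pi * n[odd] * tau) ** 2
    H = np.fft.fft(h)
    if hamming:
        H = H * np.fft.fftshift(np.hamming(M))     # smoothing window of (69)
    Q = np.fft.ifft(np.fft.fft(proj, M) * H)
    return tau * np.real(Q[:N])
```

Because both sequences are padded to the same power-of-two length, the circular convolution agrees with the aperiodic one over the N retained samples.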
After the filtered projections Q_θ(nτ) are calculated with the alternative method presented here, the rest of the implementation for reconstructing the image is the same as in the preceding subsection. That is, we use (45) for backprojections and their summation. Again for a given (x, y) and θ_i the argument x cos θ_i + y sin θ_i may not correspond to one of the kτ at which Q_{θ_i} is known. This will call for interpolation and often linear interpolation is adequate. Sometimes, in order to eliminate the computations required for interpolation, preinterpolation of the functions Q_θ(t) is also used. In this technique, which can be combined with the computation in (69), prior to backprojection, the function Q_θ(t) is preinterpolated onto 10 to 1000 times the number of points in the projection data. From this dense set of points one simply retains the nearest neighbor to obtain the value of Q_{θ_i} at x cos θ_i + y sin θ_i. A variety of techniques are available for preinterpolation [Sch73]. One method of preinterpolation, which combines it with the operations in (69), consists of the following: In (69), prior to performing the IFFT, the frequency domain function is padded with a large number of zeros. The inverse transform of this sequence yields the preinterpolated Q_θ. It was recently shown [Kea78] that if the data sequence contains fractional frequencies this approach may lead to large errors, especially near the beginning and the end of the data sequence. Note that with preinterpolation and with appropriate programming, the backprojection for parallel projection data can be accomplished with virtually no multiplications. Using the implementation in (68), Fig. 3.17(b) shows the reconstructed values on the line y = −0.605 for the Shepp and Logan head phantom. Comparing with Fig. 3.13(b), we see that the dc shift and the dishing have been eliminated. Fig. 3.17(a) shows the complete reconstruction. The number of rays used in each projection was 127 and the number of projections 100.
To make convolutions aperiodic, the projection data were padded with zeros to make each projection 256 elements long.
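The backprojection and summation step described above (with linear interpolation at x cos θ_i + y sin θ_i) might be sketched as follows; the names and grid conventions are our own:

```python
import numpy as np

def backproject(filtered, thetas, tau, grid_size):
    """Sum backprojections of filtered projections over all angles.
    filtered[i] is the filtered projection at angle thetas[i], sampled at
    spacing tau; the image grid uses the same spacing, centered on the origin."""
    K = filtered.shape[1]
    xs = (np.arange(grid_size) - (grid_size - 1) / 2.0) * tau
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros_like(X)
    t0 = -(K - 1) / 2.0 * tau                     # t-coordinate of the first ray
    for Q, th in zip(filtered, thetas):
        t = X * np.cos(th) + Y * np.sin(th)
        idx = (t - t0) / tau
        k = np.clip(np.floor(idx).astype(int), 0, K - 2)
        frac = idx - k
        img += (1.0 - frac) * Q[k] + frac * Q[k + 1]   # linear interpolation
    return img * np.pi / len(thetas)
```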
Fig. 3.17: (a) Reconstruction obtained by using the filter shown in Fig. 3.16. The 127 rays in the projection were zero-padded so that each projection was 256 elements long. The unit sample response h(nτ) was used with n ranging from −128 to 127, yielding 256 points for this function. The number of projections was 100 and the display matrix size is 128 × 128. (b) A numerical comparison of the y = −0.605 line of the reconstruction in (a) with the true values. Note that the dishing and dc shift artifacts visible in Fig. 3.13 have disappeared. (From [Ros82].)
length of a projection, then rotate through a certain angular interval, then scan linearly over the length of the next projection, and so on. This usually results in times that are as long as a few minutes for collecting all the data. A much faster way to generate the line integrals is by using fan beams such as those shown in Fig. 3.3. One now uses a point source of radiation that emanates a fan-shaped beam. On the other side of the object a bank of detectors is used to make all the measurements in one fan simultaneously. The source and the entire bank of detectors are rotated to generate the desired number of fan projections. As might be expected, one has to pay a price for this simpler and
faster method of data collection; as we will see later the simple backprojection of parallel beam tomography now becomes a weighted backprojection. There are two types of fan projections depending upon whether a projection is sampled at equiangular or equispaced intervals. This difference is illustrated in Fig. 3.18. In (a) we have shown an equiangular set of rays. If the detectors for the measurement of line integrals are arranged on the straight line D1D2, this implies unequal spacing between them. If, however, the detectors are arranged on the arc of a circle whose center is at S, they may now be positioned with equal spacing along this arc (Fig. 3.18(b)). The second type of fan projection is generated when the rays are arranged such that the detector spacing on a straight line is now equal (Fig. 3.18(c)). The algorithms that reconstruct images from these two different types of fan projections are different and will be separately derived in the following subsection.
t = D sin γ,    θ = β + γ    (70)

where D is the distance of the source S from the origin O. The relationships in (70) are derived by noting that all the rays in the parallel projection at angle θ are perpendicular to the line PQ and that along such a line the distance OB is equal to the value of t. Now we know that from parallel projections P_θ(t) we may reconstruct f(x, y) by

f(x, y) = ∫_0^{π} ∫_{−t_m}^{t_m} P_θ(t) h(x cos θ + y sin θ − t) dt dθ    (71)
where t_m is the value of t for which P_θ(t) = 0 when |t| > t_m in all projections. This equation only requires the parallel projections to be collected over 180°. However, if one would like to use the projections generated over 360°, this equation may be rewritten as
f(x, y) = (1/2) ∫_0^{2π} ∫_{−t_m}^{t_m} P_θ(t) h(x cos θ + y sin θ − t) dt dθ.    (72)
Derivation of the algorithm becomes easier when the point (x, y) (marked C in Fig. 3.20) is expressed in polar coordinates (r, φ), that is,
x = r cos φ,    y = r sin φ.    (73)
Fig. 3.18: Two different types of fan beams are shown here. In (a) the angle between rays is constant but the detector spacing is uneven. If the detectors are placed along a circle the spacing will then be equal as shown in (b). As shown in (c) the detectors can be arranged with constant spacing along a line but then the angle between rays is not constant. (From [Ros82].)
f(r, φ) = (1/2) ∫_0^{2π} ∫_{−t_m}^{t_m} P_θ(t) h(r cos (θ − φ) − t) dt dθ.    (74)

Using the relationships in (70), the double integration may be expressed in terms of γ and β:

f(r, φ) = (1/2) ∫_{−γ}^{2π−γ} ∫_{−sin⁻¹(t_m/D)}^{sin⁻¹(t_m/D)} P_{β+γ}(D sin γ) h(r cos (β + γ − φ) − D sin γ) D cos γ dγ dβ    (75)

where we have used dt dθ = D cos γ dγ dβ. A few observations about this expression are in order. The limits −γ to 2π − γ for β cover the entire range of 360°. Since all the functions of β are periodic (with period 2π) these limits
Fig. 3.19: An equiangular fan is shown here. Each ray is identified by its angle γ from the central ray. (From [Ros82].)
may be replaced by 0 and 2π, respectively. sin⁻¹ (t_m/D) is equal to the value of γ for the extreme ray SE in Fig. 3.19. Therefore, the upper and lower limits for γ may be written as γ_m and −γ_m, respectively. The expression P_{β+γ}(D sin γ) corresponds to the ray integral along SA in the parallel projection data P_θ(t). The identity of this ray integral in the fan projection data is simply R_β(γ). Introducing these changes in (75) we get
f(r, φ) = (1/2) ∫_0^{2π} ∫_{−γ_m}^{γ_m} R_β(γ) h(r cos (β + γ − φ) − D sin γ) D cos γ dγ dβ.    (76)
In order to express the reconstruction formula given by (76) in a form that can be easily implemented on a computer we will first examine the argument
Fig. 3.20: This figure illustrates that L is the distance of the pixel at location (x, y) from the source S, and γ′ is the angle that the source-to-pixel line subtends with the central ray. (From [Ros82].)
of the function h. The argument may be rewritten as

r cos (β + γ − φ) − D sin γ = r cos (β − φ) cos γ − [r sin (β − φ) + D] sin γ.    (77)
Let L be the distance from the source S to a point (x, y) [or (r, φ) in polar coordinates] such as C in Fig. 3.20. Clearly, L is a function of three variables, r, φ, and β. Also, let γ′ be the angle of the ray that passes through this point (r, φ). One can now easily show that

L cos γ′ = D + r sin (β − φ)
L sin γ′ = r cos (β − φ).    (78)

Note that the pixel location (r, φ) and the projection angle β completely determine both L and γ′:

L(r, φ, β) = √{[D + r sin (β − φ)]² + [r cos (β − φ)]²}    (79)

and

γ′ = tan⁻¹ {r cos (β − φ)/[D + r sin (β − φ)]}.    (80)
Using (78) in (77) we get for the argument of h

r cos (β + γ − φ) − D sin γ = L sin (γ′ − γ)    (81)

and substituting this in (76) we get

f(r, φ) = (1/2) ∫_0^{2π} ∫_{−γ_m}^{γ_m} R_β(γ) h(L sin (γ′ − γ)) D cos γ dγ dβ.    (82)
We will now express the function h(L sin (γ′ − γ)) in terms of h(t). Note that h(t) is the inverse Fourier transform of |w| in the frequency domain:
h(t) = ∫_{−∞}^{∞} |w| e^{j2πwt} dw.    (83)

Therefore,

h(L sin γ) = ∫_{−∞}^{∞} |w| e^{j2πwL sin γ} dw.    (84)

Using the transformation

w′ = w (L sin γ)/γ    (85)

we can write

h(L sin γ) = (γ/(L sin γ))² ∫_{−∞}^{∞} |w′| e^{j2πw′γ} dw′ = (γ/(L sin γ))² h(γ).    (86)

Therefore, (82) may be written as
f(r, φ) = (1/2) ∫_0^{2π} (1/L²) ∫_{−γ_m}^{γ_m} R_β(γ) ((γ′ − γ)/sin (γ′ − γ))² h(γ′ − γ) D cos γ dγ dβ    (87)

or

f(r, φ) = ∫_0^{2π} (1/L²) ∫_{−γ_m}^{γ_m} R_β(γ) g(γ′ − γ) D cos γ dγ dβ    (88)

where

g(γ) = (1/2) (γ/sin γ)² h(γ).    (89)
For the purpose of computer implementation, (88) may be interpreted as a weighted filtered backprojection algorithm. To show this we rewrite (88) as follows:

f(r, φ) = ∫_0^{2π} (1/L²) Q_β(γ′) dβ    (90)
where

Q_β(γ) = R′_β(γ) * g(γ)    (91)

and

R′_β(γ) = R_β(γ) · D cos γ.    (92)

This calls for the following three steps:
Step 1: Assume that each projection R_β(γ) is sampled with sampling interval α. The known data then are R_{β_i}(nα), where n takes integer values and the β_i are the angles at which projections are taken. The first step is to generate for each fan projection R_{β_i}(nα) the corresponding R′_{β_i}(nα) by
R′_{β_i}(nα) = R_{β_i}(nα) · D cos nα.    (93)
Note that n = 0 corresponds to the ray passing through the center of the projection. Step 2: Convolve each modified projection R′_{β_i}(nα) with g(nα) to generate the corresponding filtered projection:
Q_{β_i}(nα) = R′_{β_i}(nα) * g(nα)    (94)
where

g(nα) = (1/2) (nα/sin nα)² h(nα).    (95)

If we substitute in this the values of h(nα) from (61), we get for the discrete impulse response

g(nα) = 1/(8α²)              n = 0
      = 0                    n even
      = −1/(2π² sin² nα)     n odd.    (96)
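Tabulating (96) directly gives a small sketch (the function name is ours):

```python
import numpy as np

def fan_kernel_equiangular(n, alpha):
    """Discrete convolving kernel g(n*alpha) of eq. (96) for equiangular rays:
    1/(8 alpha^2) at n = 0, zero for even n, -1/(2 pi^2 sin^2(n alpha)) for odd n."""
    n = np.asarray(n)
    g = np.zeros(n.shape, dtype=float)
    g[n == 0] = 1.0 / (8.0 * alpha**2)
    odd = (n % 2) != 0
    g[odd] = -0.5 / (np.pi * np.sin(n[odd] * alpha)) ** 2
    return g
```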
Although, theoretically, no further filtering of the projection data than that called for by (94) is required, in practice superior reconstructions are obtained if a certain amount of smoothing is combined with the required filtering:
Q_{β_i}(nα) = R′_{β_i}(nα) * g(nα) * k(nα)    (97)
Fig. 3.21: While the filtered projections are backprojected along parallel lines for the parallel beam case (a), for the fan beam case the backprojection is performed along converging lines (b). (c) This figure illustrates the implementation step that in order to determine the backprojected value at pixel (x, y), one must first compute γ′ for that pixel. (From [Ros82].)
where k(nα) is the impulse response of the smoothing filter. In the frequency domain implementation this smoothing filter may be a simple cosine function or a Hamming window. Step 3: Perform a weighted backprojection of each filtered projection along the fan. Since the backprojection here is very different from that for the parallel case, we will explain it in some detail. For the parallel case the filtered projection is backprojected along a set of parallel lines as shown in Fig. 3.21(a). For the fan beam case the backprojection is done along the fan (Fig. 3.21(b)). This is dictated by the structure of (90):
f(x, y) ≈ Δβ Σ_{i=1}^{M} (1/L²(x, y, β_i)) Q_{β_i}(γ′)    (98)
where γ′ is the angle of the fan beam ray that passes through the point (x, y) and Δβ = 2π/M. For the β_i chosen in Fig. 3.21(c), in order to find the contribution of Q_{β_i} to the point (x, y) shown there, one must first find the angle, γ′, of the ray SA that passes through that point (x, y). Q_{β_i}(γ′) will then be contributed from the filtered projection at β_i to the point (x, y) under consideration. Of course, the computed value of γ′ may not correspond to one of the nα for which Q_{β_i}(nα) is known. One must
then use interpolation. The contribution Q_{β_i}(γ′) at the point (x, y) must then be divided by L², where L is the distance from the source S to the point (x, y). This concludes our presentation of the algorithm for reconstructing projection data measured with detectors spaced at equiangular increments.
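All three steps for equiangular projections can be strung together in one sketch (ours; nearest-neighbour interpolation in the backprojection, the central ray assumed at the middle detector index, and an odd number of rays):

```python
import numpy as np

def fan_recon_equiangular(R, betas, alpha, D, xs, ys):
    """Equiangular fan beam reconstruction per eqs. (93), (94), (96), (98):
    weight by D cos(gamma), convolve with g(n alpha), then backproject
    with the 1/L^2 weight."""
    nrays = R.shape[1]
    n = np.arange(nrays) - nrays // 2            # ray index; n = 0 is the central ray
    Rp = R * (D * np.cos(n * alpha))             # step 1, eq. (93)
    m = np.arange(-(nrays - 1), nrays)           # kernel support
    g = np.zeros(m.shape)
    g[m == 0] = 1.0 / (8.0 * alpha**2)
    odd = (m % 2) != 0
    g[odd] = -0.5 / (np.pi * np.sin(m[odd] * alpha)) ** 2
    Q = alpha * np.array([np.convolve(p, g)[nrays - 1:2 * nrays - 1] for p in Rp])  # step 2
    X, Y = np.meshgrid(xs, ys)
    img = np.zeros_like(X)
    for Qb, beta in zip(Q, betas):               # step 3, eq. (98)
        num = X * np.cos(beta) + Y * np.sin(beta)        # r cos(beta - phi)
        den = D + X * np.sin(beta) - Y * np.cos(beta)    # D + r sin(beta - phi)
        L2 = num**2 + den**2                             # eq. (79)
        gp = np.arctan2(num, den)                        # gamma', eq. (80)
        k = np.clip(np.rint(gp / alpha).astype(int) + nrays // 2, 0, nrays - 1)
        img += Qb[k] / L2
    return img * (2.0 * np.pi / len(betas))
```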
Fig. 3.22: For the case of equispaced detectors on a straight line, each projection is denoted by the function R_β(s). (From [Ros82].)
for theoretical purposes it is more efficient to assume the existence of an imaginary detector line D₁′D₂′ passing through the origin. We now associate the ray integral along SB with point A on D₁′D₂′, as opposed to point B on D₁D₂. Thus in Fig. 3.23 we will associate a fan projection R_β(s) with the imaginary detector line D₁′D₂′. Now consider the ray SA in the figure; the value of s for this ray is the length of OA. If parallel projection data were generated for the object under consideration, the ray SA would belong to a parallel projection P_θ(t) with θ and t as shown in the figure. The relationships between the fan beam and parallel parameters are given by

t = s cos γ = sD/√(D² + s²)
θ = β + γ = β + tan⁻¹ (s/D)    (99)
Fig. 3.23: This figure illustrates several of the parameters used in the derivation of the reconstruction algorithm for equispaced detectors. (From [Ros82].)
where use has been made of the fact that angle AOC is equal to angle OSA, and where D is the distance of the source point S from the origin O. In terms of the parallel projection data the reconstructed image is given by (74), which is repeated here for convenience:
f(r, φ) = (1/2) ∫_0^{2π} ∫_{−t_m}^{t_m} P_θ(t) h(r cos (θ − φ) − t) dt dθ    (74)
where f(r, φ) is the reconstructed image in polar coordinates. Using the relationships in (99) the double integration may be expressed as

f(r, φ) = (1/2) ∫_{−tan⁻¹(s_m/D)}^{2π−tan⁻¹(s_m/D)} ∫_{−s_m}^{s_m} P_{β+tan⁻¹(s/D)}(sD/√(D² + s²)) h(r cos (β + tan⁻¹ (s/D) − φ) − sD/√(D² + s²)) D³/(D² + s²)^{3/2} ds dβ    (100)

where we have used

dt dθ = D³/(D² + s²)^{3/2} ds dβ.    (101)
In (100) s_m is the largest value of s in each projection and corresponds to t_m for parallel projection data. The limits −tan⁻¹ (s_m/D) and 2π − tan⁻¹ (s_m/D) cover the angular interval of 360°. Since all functions of β in (100) are periodic with period 2π, these limits may be replaced by 0 and 2π, respectively. Also, the expression

P_{β+tan⁻¹(s/D)}(sD/√(D² + s²))    (102)
corresponds to the ray integral along SA in the parallel projection data P_θ(t). The identity of this ray integral in the fan projection data is simply R_β(s). Introducing these changes in (100) we get
f(r, φ) = (1/2) ∫_0^{2π} ∫_{−s_m}^{s_m} R_β(s) h(r cos (β + tan⁻¹ (s/D) − φ) − sD/√(D² + s²)) D³/(D² + s²)^{3/2} ds dβ.    (103)
In order to express this formula in a filtered backprojection form we will first examine the argument of h. The argument may be written as

r cos (β + tan⁻¹ (s/D) − φ) − sD/√(D² + s²) = [r D cos (β − φ) − s(D + r sin (β − φ))]/√(D² + s²).    (104)
We will now introduce two new variables that are easily calculated in a computer implementation. The first of these, denoted by U, is for each pixel (x, y) the ratio of SP (Fig. 3.24) to the source-to-origin distance. Note that
Fig. 3.24: For a pixel at the polar coordinates (r, φ) the variable U is the ratio of the distance SP, which is the projection of the source-to-pixel line on the central ray, to the source-to-center distance. (Adapted from [Ros82].)
SP is the projection of the source-to-pixel line on the central ray. Thus

U(r, φ, β) = (SO + OP)/D    (105)
           = [D + r sin (β − φ)]/D.    (106)
The other parameter we want to define is the value of s for the ray that passes through the pixel (r, φ) under consideration. Let s′ denote this value of s. Since s′ is measured along the imaginary detector line D₁′D₂′, it is given by the distance OF. Since
s′/D = r cos (β − φ)/[D + r sin (β − φ)]    (107)

we have

s′ = D r cos (β − φ)/[D + r sin (β − φ)].    (108)
Equations (106) and (108) can be utilized to express (104) in terms of U and s′:

r cos (β + tan⁻¹ (s/D) − φ) − sD/√(D² + s²) = (s′ − s) UD/√(D² + s²).    (109)

Substituting (109) in (103), we get

f(r, φ) = (1/2) ∫_0^{2π} ∫_{−s_m}^{s_m} R_β(s) h((s′ − s) UD/√(D² + s²)) D³/(D² + s²)^{3/2} ds dβ.    (110)
We will now express the convolving kernel h in this equation in a form closer to that given by (61). Note that, nominally, h(t) is the inverse Fourier transform of |w| in the frequency domain:
h(t) = ∫_{−∞}^{∞} |w| e^{j2πwt} dw.    (111)

Therefore,

h((s′ − s) UD/√(D² + s²)) = ∫_{−∞}^{∞} |w| e^{j2πw(s′−s)UD/√(D²+s²)} dw.    (112)
Using the transformation

w′ = w UD/√(D² + s²)    (113)

we can rewrite (112) as follows:

h((s′ − s) UD/√(D² + s²)) = [(D² + s²)/(U²D²)] ∫_{−∞}^{∞} |w′| e^{j2πw′(s′−s)} dw′    (114)
                          = [(D² + s²)/(U²D²)] h(s′ − s).    (115)

Substituting (115) in (110) we get
f(r, φ) = ∫_0^{2π} (1/U²) ∫_{−s_m}^{s_m} R_β(s) g(s′ − s) · D/√(D² + s²) ds dβ    (116)

where

g(s) = (1/2) h(s).    (117)
For the purpose of computer implementation, (116) may be interpreted as a weighted filtered backprojection algorithm. To show this we rewrite (116) as
f(r, φ) = ∫_0^{2π} (1/U²) Q_β(s′) dβ    (118)

where

Q_β(s) = R′_β(s) * g(s)    (119)

and

R′_β(s) = R_β(s) · D/√(D² + s²).    (120)
Step 1: Assume that each projection R_β(s) is sampled with a sampling interval of a. The known data then are R_{β_i}(na), where n takes integer values with n = 0 corresponding to the central ray passing through the origin; the β_i are the angles for which fan projections are known. The first step is to generate for each fan projection R_{β_i}(na) the corresponding modified projection R′_{β_i}(na) given by
R′_{β_i}(na) = R_{β_i}(na) · D/√(D² + n²a²).    (121)
Step 2: Convolve each modified projection R′_{β_i}(na) with g(na) to generate the corresponding filtered projection:
Q_{β_i}(na) = R′_{β_i}(na) * g(na)    (122)

where

g(na) = (1/2) h(na).    (123)
Substituting in this the values of h(na) given in (61) we get for the impulse response of the convolving filter:
g(na) = 1/(8a²)            n = 0
      = 0                  n even
      = −1/(2n²π²a²)       n odd.    (124)
When the convolution of (122) is implemented in the frequency domain using an FFT algorithm the projection data must be padded with a sufficient number of zeros to avoid distortion due to interperiod interference. In practice superior reconstructions are obtained if a certain amount of smoothing is included with the convolution in (122). If k(na) is the impulse response of the smoothing filter, we can write
Q_{β_i}(na) = R′_{β_i}(na) * g(na) * k(na).    (125)
In a frequency domain implementation this smoothing may be achieved by a simple multiplicative window such as a Hamming window. Step 3: Perform a weighted backprojection of each filtered projection along the corresponding fan. The sum of all the backprojections is the reconstructed image
f(x, y) ≈ Δβ Σ_{i=1}^{M} (1/U²(x, y, β_i)) Q_{β_i}(s′)    (126)
where U is computed using (106) and s′ identifies the ray that passes through (x, y) in the fan for the source located at angle β_i. Of course, this value of s′ may not correspond to one of the values of na at which Q_{β_i} is known. In that case interpolation is necessary.
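Step 3 above might be sketched as follows (our own names; the per-pixel U of (106) and s′ of (108) are evaluated in Cartesian form, with linear interpolation along the detector):

```python
import numpy as np

def backproject_equispaced(Q, betas, a, D, xs, ys):
    """Weighted backprojection of eq. (126): for each pixel compute
    U = [D + r sin(beta - phi)]/D and s' = D r cos(beta - phi)/[D + r sin(beta - phi)],
    interpolate the filtered projection at s', and weight by 1/U^2."""
    nrays = Q.shape[1]
    X, Y = np.meshgrid(xs, ys)
    img = np.zeros_like(X)
    for Qb, beta in zip(Q, betas):
        U = (D + X * np.sin(beta) - Y * np.cos(beta)) / D    # eq. (106)
        sp = (X * np.cos(beta) + Y * np.sin(beta)) / U       # eq. (108)
        idx = sp / a + nrays // 2            # central ray at the middle index
        k = np.clip(np.floor(idx).astype(int), 0, nrays - 2)
        f = idx - k
        img += ((1.0 - f) * Qb[k] + f * Qb[k + 1]) / U**2    # linear interpolation
    return img * (2.0 * np.pi / len(betas))
```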
t = D sin γ    and    θ = β + γ.    (127)
If, as before, R_β(γ) denotes a fan beam projection taken at angle β, and P_θ(t) a parallel projection taken at angle θ, using (127) we can write

R_β(γ) = P_{β+γ}(D sin γ).    (128)
Let Δβ denote the angular increment between successive fan beam projections, and let Δγ denote the angular interval used for sampling the fan
beam projections. We will assume that the following condition is satisfied:

Δβ = Δγ = α.    (129)
Clearly then β and γ in (128) are equal to mα and nα, respectively, for some integer values of the indices m and n. We may therefore write (128) as

R_{mα}(nα) = P_{(m+n)α}(D sin nα).    (130)
This equation serves as the basis of a fast re-sorting algorithm. It expresses the fact that the nth ray in the mth radial projection is the nth ray in the (m + n)th parallel projection. Of course, because of the sin nα factor on the right-hand side of (130), the parallel projections obtained are not uniformly sampled. This can usually be rectified by interpolation.
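The re-sorting rule in (130) amounts to an index shuffle (a minimal sketch with our own conventions; the output rays remain at the non-uniform positions t = D sin nα and still need interpolation onto a uniform t grid):

```python
import numpy as np

def resort_fan_to_parallel(R):
    """Re-sort fan projections R[m, n] (beta = m*alpha, gamma = (n - N//2)*alpha)
    into parallel projections via eq. (130): the nth ray of the mth fan
    projection becomes the nth ray of parallel projection m + n (mod M)."""
    M, N = R.shape
    half = N // 2
    P = np.empty((M, N))
    for m in range(M):
        for j in range(N):
            n = j - half                     # signed ray index, n = 0 central
            P[(m + n) % M, j] = R[m, j]
    return P
```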
Fig. 3.25: As shown in this figure, each line integral can be thought of as a single point in the Radon transform of the object. Each line integral is identified by its distance from the origin and its angle.

A complete set of projection data covers

θ₁ ≤ θ < θ₁ + 180°    (132)

and

−t_max ≤ t ≤ t_max    (133)

where t_max is large enough so that each projection is at least as wide as the object at its widest. If each ray integral is represented as a point in a polar coordinate system (t, θ) as shown in Fig. 3.25 then a complete set of ray
Fig. 3.26: An object and its Radon transform are shown here. The object in (a) is used to illustrate the short scan algorithm developed by Parker [Par82a]. (b) shows the Radon transform in rectangular coordinates, while (c) represents the Radon transform in polar coordinates. (Reprinted with permission from [Par82a], [Par82b].)
integrals will completely fill a disk of radius t_max. This is commonly known as the Radon transform or a sinogram and is shown for the Shepp and Logan phantom both in polar and rectangular coordinates in Fig. 3.26. These ideas can also be extended to the fan beam case. From Fig. 3.27 we see that two ray integrals represented by the fan beam angles (β₁, γ₁) and (β₂, γ₂) are identical provided
β₁ + γ₁ = β₂ + γ₂ + 180°    (134)

and

γ₁ = −γ₂.    (135)
t = D sin γ,    θ = β + γ    (136)
maps the (β, γ) description of a ray in a fan into its Radon transform equivalent. This transformation can then be used to construct Fig. 3.28, which shows the data available in the Radon domain as the projection angle β varies between 0 and 180° with a fan angle of 40° (γ_max = 20°). Recall that points in Radon space that are periodic with respect to the intervals shown in (132) and (133) represent the same ray integral. Thus the data in Fig. 3.28 for angles θ > 180° and t > 0 are equal to the Radon data for θ < 180° and t < 0. These two regions are labeled A in Fig. 3.28. On the other hand, the regions marked B in Fig. 3.28 are areas in the Radon space where there are no measurements of the object. To cover these areas it is necessary to measure projections over an additional 2γ_m degrees as shown in
Fig. 3.27: Rays in two fan beams will represent the same line integral if they satisfy the relationship β₁ + γ₁ = β₂ + γ₂ + 180° and γ₁ = −γ₂.
Fig. 3.28: Collecting projections over 180° gives estimates of the Radon transform between the curved lines as shown on the left. The curved lines represent the most extreme projections for a fan angle of γ_max. On the right is shown the available data in the β–γ coordinate system used in describing fan beams. In both cases the region marked A represents the part of the Radon transform where two estimates are available. On the other hand, for 180° of projections there are no estimates of the Radon transform in the regions marked B.
Fig. 3.29: If projections are gathered over an angle of 180° + 2γ_m then the data illustrated are available. Again on the left is shown the Radon transform while the right shows the available data in the β–γ coordinate system. The line integrals in the shaded regions represent duplicate data and these points must be gradually weighted to obtain good reconstructions.
Fig. 3.29. Thus it should be possible to reconstruct an object using fan beam projections collected over 180° + 2γ_m degrees. Fig. 3.30 shows a perfect reconstruction of a phantom used by Parker [Par82a], [Par82b] to illustrate his algorithm for short scan or "180 degree plus" reconstructions. Projection data measured over a full 360° of β were used to generate the reconstruction. It is more natural to discuss the projection data overlap in the (β, γ) coordinate system. We derive the overlap region in this space by using the relations in (134) and (135) and the limits

0 ≤ β₁ ≤ 180° + 2γ_m,    0 ≤ β₂ ≤ 180° + 2γ_m.    (137)

Substituting (134) and (135) into the first inequality above we find

0 ≤ β₂ + 2γ₂ + 180° ≤ 180° + 2γ_m    (138)

and then by rearranging

−180° − 2γ₂ ≤ β₂ ≤ 2γ_m − 2γ₂.    (139)

Substituting the same two equations into the second inequality in (137) we find

0 ≤ β₁ + 2γ₁ − 180° ≤ 180° + 2γ_m    (140)
Fig. 3.30: This figure shows a reconstruction using 360° of fan beam projections and a standard filtered backprojection algorithm. (Reprinted with permission from [Par82a], [Par82b].)
and then by rearranging

180° − 2γ₁ ≤ β₁ ≤ 360° + 2γ_m − 2γ₁.    (141)

Since the fan angle, γ, is always less than 90°, the overlapping regions are given by

0 ≤ β ≤ 2γ_m − 2γ    (142)
Fig. 3.31: This reconstruction was generated with a standard filtered backprojection algorithm using 220° of projections. The large artifacts are due to the lack of data in some regions of the Radon transform and duplicate data in others. (Reprinted with permission from [Par82a], [Par82b].)
and

180° − 2γ ≤ β ≤ 180° + 2γ_m    (143)

as is shown in Fig. 3.29. If projections are gathered over an angle of 180° + 2γ_m and a reconstruction is generated using the standard fan beam reconstruction algorithms described in Section 3.4, then the image in Fig. 3.31 is obtained.
In this case a fan angle of 40° (γ_max = 20°) was used. As described above, the severe artifacts in this reconstruction are caused by the double coverage of the points in region B of Fig. 3.28. One might think that the reconstruction can be improved by setting the data to zero in one of the regions of overlap. This can be implemented by multiplying a projection at angle β, R_β(γ), by a one-zero window, w_β(γ), given by

w_β(γ) = 0    for 0 ≤ β ≤ 2γ_m − 2γ
       = 1    elsewhere.    (144)
As shown by Naparstek [Nap80], using this type of window gives only a small improvement since streaks obscure the resulting image. While the above filter function properly handles the extra data, better reconstructions can be obtained using a window described in [Par82a]. The sharp cutoff of the one-zero window adds a large high frequency component to each projection which is then enhanced by the |w| filter that is used to filter each projection. More accurate reconstructions are obtained if a smoother window is used to weight the data. Mathematically, a smooth window is both continuous and has a continuous derivative. Formally, the window, w_β(γ), must satisfy the following condition:
w_{β₁}(γ₁) + w_{β₂}(γ₂) = 1    (145)

for (β₁, γ₁) and (β₂, γ₂) satisfying the relations in (134) and (135), and

w_0(γ) = 0    (146)

and

w_{180°+2γ_m}(γ) = 0.    (147)
To keep the filter function continuous and smooth at the boundary between the single and double overlap regions the following constraints are imposed on the derivative of w_β(γ):
∂w_β(γ)/∂β = 0    at β = 2γ_m − 2γ    (148)

and

∂w_β(γ)/∂β = 0    at β = 180° − 2γ.    (149)
Fig. 3.32: Using a weighting function that minimizes the discontinuities in the projection data, this reconstruction is obtained using 220° of projection data. (Reprinted with permission from [Par82a], [Par82b].)
A window that satisfies these conditions is [Par82a]

w_β(γ) = sin² [45° β/(γ_m − γ)]                       0 ≤ β ≤ 2γ_m − 2γ
       = 1                                           2γ_m − 2γ ≤ β ≤ 180° − 2γ
       = sin² [45° (180° + 2γ_m − β)/(γ_m + γ)]       180° − 2γ ≤ β ≤ 180° + 2γ_m.    (150)
A reconstruction using this weighting function is shown in Fig. 3.32. From this image we see that it is possible to eliminate the overlap without introducing errors by using a smooth window.
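A radian-based evaluation of (150) can be sketched as follows (the function name is ours; beta and gamma in radians, scan range 0 ≤ β ≤ π + 2γ_m):

```python
import numpy as np

def parker_weight(beta, gamma, gamma_m):
    """Smooth short-scan window of eq. (150): ramps up over the first
    overlap region, is unity in between, and ramps down over the second,
    so that the weights of duplicate rays sum to one."""
    if beta < 2.0 * gamma_m - 2.0 * gamma:
        return np.sin(np.pi / 4.0 * beta / (gamma_m - gamma)) ** 2
    if beta <= np.pi - 2.0 * gamma:
        return 1.0
    return np.sin(np.pi / 4.0 * (np.pi + 2.0 * gamma_m - beta) / (gamma_m + gamma)) ** 2
```

For a duplicate pair related by (134) and (135) the two weights sum to one, and the window vanishes at both ends of the scan as required by (146) and (147).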
Fig. 3.33: A three-dimensional reconstruction can be done by repetitively using two-dimensional reconstruction algorithms at different heights along the z-axis. (From [Kak86].)
rays form a cone as illustrated in Fig. 3.34. Cone beam algorithms have been studied for use with the Mayo Clinic's Digital Spatial Reconstructor (DSR) [Rob83] and Imatron's Fifth Generation Scanner [Boy83]. The main advantage of cone beam algorithms is the reduction in data collection time. With a single source, ray integrals are measured through every point in the object in the time it takes to measure a single slice in a conventional two-dimensional scanner. The projection data, R_θ(t, r), are now a function of the source angle, θ, and horizontal and vertical positions on the detector plane, t and r.
(151) (152)
A new coordinate system (t, s, r) is obtained by two rotations of the (x, y, z)-axes as shown in Fig. 3.35. The first rotation, as in the two-dimensional case, is by θ degrees around the z-axis to give the (t, s, z)-axes. Then a second rotation is done out of the (t, s)-plane around the t-axis by an angle of γ. In matrix form the required rotations are given by
Fig. 3.34: In cone beam projections the detector measures the x-ray flux over a plane. By rotating the source and detector plane completely around the object all the data necessary for a three-dimensional reconstruction can be gathered in the time a conventional fan beam system collects the data for its two-dimensional reconstruction. (From [Kak86].)
Note that four variables are being used to specify the desired ray: (t, θ) specify the distance and angle in the x-y plane and (r, γ) in the s-z plane. In a cone beam system the source is rotated by β and ray integrals are measured on the detector plane as described by R_β(p, ζ). To find the equivalent parallel projection ray first define

p̃ = p D_SO/(D_SO + D_DE),    ζ̃ = ζ D_SO/(D_SO + D_DE)    (155)

as was done in Section 3.4.2. Here we have used D_SO to indicate the distance from the center of rotation to the source and D_DE to indicate the distance from the center of rotation to the detector. For a given cone beam ray, R_β(p, ζ), the equivalent parallel projection ray is given by
t = p̃ D_SO/√(D_SO² + p̃²)    (156)

and

θ = β + tan⁻¹ (p̃/D_SO)    (157)
Fig. 3.35: To simplify the discussion of the cone beam reconstruction the coordinate system is rotated by the angle of the source to give the (t, s)-axes. The r-axis is not shown but is perpendicular to the t- and s-axes. (From [Kak86].)
while

r = ζ̃ D_SO/√(D_SO² + ζ̃²)    (158)

and

γ = tan⁻¹ (ζ̃/D_SO)    (159)

specify the location of the tilted fan itself. The reconstructions shown in this section will use a three-dimensional version of the Shepp and Logan head phantom. The two-dimensional ellipses of Table 3.1 have been made ellipsoids and repositioned within an imaginary skull. Table 3.2 shows the position and size of each ellipsoid and Fig. 3.36 illustrates their positions. Because of the linearity of the Radon transform, a projection of an object consisting of ellipsoids is just the sum of the projection of each individual
Table 3.2: Parameters of the ellipsoids making up the three-dimensional Shepp and Logan head phantom.

Ellipsoid | Center (x, y, z) | Axis lengths (A, B, C) | Rotation angle (deg) | Gray level ρ
a | (0, 0, 0) | (0.69, 0.92, 0.9) | 0 | 2.0
b | (0, 0, 0) | (0.6624, 0.874, 0.88) | 0 | -0.98
c | (-0.22, 0, -0.25) | (0.41, 0.16, 0.21) | 108 | -0.02
d | (0.22, 0, -0.25) | (0.31, 0.11, 0.22) | 72 | -0.02
e | (0, 0.35, -0.25) | (0.21, 0.25, 0.5) | 0 | 0.02
f | (0, 0.1, -0.25) | (0.046, 0.046, 0.046) | 0 | 0.02
g | (-0.08, -0.605, -0.25) | (0.046, 0.046, 0.046) | 0 | 0.01
h | (0.06, -0.605, -0.25) | (0.046, 0.023, 0.02) | 90 | 0.01
i | (0.06, -0.105, 0.625) | (0.056, 0.04, 0.1) | 90 | 0.02
j | (0, 0.1, 0.625) | (0.056, 0.056, 0.1) | 0 | -0.02
Fig. 3.36: A three-dimensional version of the Shepp and Logan head phantom is used to test the cone beam reconstruction algorithms in this section. (a) A vertical slice through the object illustrating the position of the two reconstructed planes. (b) An image at plane B (z = -0.25) and (c) an illustration of the gray level of each of the ellipses. (d) An image at plane A (z = 0.625) and (e) an illustration of the gray levels within the slice. (From [Kak86].)
ellipsoid. An ellipsoid of constant gray level ρ centered at the origin is described by

f(x, y, z) = \begin{cases} \rho & \text{if } \dfrac{x^2}{A^2} + \dfrac{y^2}{B^2} + \dfrac{z^2}{C^2} \le 1 \\ 0 & \text{otherwise.} \end{cases}    (160)

The parallel projection of this ellipsoid along the ray specified by (t, θ, r, γ) is then

P_{\theta,\gamma}(t, r) = \frac{2\rho ABC}{\alpha^2(\theta, \gamma)} \Big[ \alpha^2(\theta, \gamma) - t^2\big(C^2\cos^2\gamma + \sin^2\gamma\,(A^2\cos^2\theta + B^2\sin^2\theta)\big) - r^2(A^2\cos^2\theta + B^2\sin^2\theta) - 2tr \sin\gamma \cos\theta \sin\theta\,(B^2 - A^2) \Big]^{1/2}    (161)

for values of (t, r) that make the bracketed quantity positive; elsewhere the projection is zero. Here

\alpha^2(\theta, \gamma) = C^2\cos^2\gamma\,(A^2\cos^2\theta + B^2\sin^2\theta) + A^2 B^2 \sin^2\gamma.    (162)
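Since the phantom projection is a sum of terms like (161), each term can also be checked numerically: the ray integral through a uniform ellipsoid is just its density times the chord length obtained from the quadratic for the ray-ellipsoid intersection. The helper below is our own illustrative sketch (not code from the text); the rotation is about the z-axis only, as in Table 3.2.

```python
import numpy as np

def ellipsoid_ray_integral(center, axes, theta_deg, rho, origin, direction):
    """Line integral of a uniform ellipsoid of density rho along the ray
    x(l) = origin + l*direction.  The ellipsoid has semi-axes (A, B, C)
    and is rotated by theta_deg about the z-axis."""
    th = np.radians(theta_deg)
    R = np.array([[np.cos(th), np.sin(th), 0.0],       # world coordinates
                  [-np.sin(th), np.cos(th), 0.0],      # rotated into the
                  [0.0, 0.0, 1.0]])                    # ellipsoid frame
    x0 = R @ (np.asarray(origin, float) - np.asarray(center, float))
    u = R @ np.asarray(direction, float)
    u = u / np.linalg.norm(u)
    Q = np.diag(1.0 / np.asarray(axes, float) ** 2)
    # substituting the ray into x^T Q x = 1 gives a*l^2 + 2*b*l + c = 0
    a = u @ Q @ u
    b = u @ Q @ x0
    c = x0 @ Q @ x0 - 1.0
    disc = b * b - a * c
    if disc <= 0.0:                    # the ray misses the ellipsoid
        return 0.0
    return rho * 2.0 * np.sqrt(disc) / a   # density times chord length
```

For a unit sphere, a ray through the center gives the full diameter, and an offset ray gives the shorter chord predicted by geometry.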
To derive the cone beam reconstruction algorithm, first recall the equispaced fan beam reconstruction formula for the point (r, φ):

g(r, \phi) = \frac{1}{2} \int_0^{2\pi} \frac{1}{U^2(r, \phi, \beta)} \int_{-\infty}^{\infty} R_\beta(p)\, h(p' - p)\, \frac{D_{SO}}{\sqrt{D_{SO}^2 + p^2}}\, dp\, d\beta    (163)

where

p' = \frac{D_{SO}\, r \cos(\beta - \phi)}{D_{SO} + r \sin(\beta - \phi)}    (164)

and

U(r, \phi, \beta) = \frac{D_{SO} + r \sin(\beta - \phi)}{D_{SO}}.    (165)
Equation (163) is the same as (116), except that we have now used different names for some of the variables. To further simplify this expression we will replace the (r, φ) coordinate system by the rotated coordinates (t, s). Recall that (t, s) is the location of a point rotated by the angular displacement of the source-detector array. The expressions

t = x \cos\beta + y \sin\beta    (166)

s = -x \sin\beta + y \cos\beta    (167)

relate the two coordinate systems. Substituting x = r \cos\phi and y = r \sin\phi, the detector position of the ray through the point and the weighting factor become

p' = \frac{D_{SO}\, t}{D_{SO} - s}, \qquad U(x, y, \beta) = \frac{D_{SO} - s}{D_{SO}}    (168)

so that the fan beam reconstruction formula can be written

g(t, s) = \frac{1}{2} \int_0^{2\pi} \frac{D_{SO}^2}{(D_{SO} - s)^2} \int_{-\infty}^{\infty} R_\beta(p)\, h\!\left(\frac{D_{SO}\, t}{D_{SO} - s} - p\right) \frac{D_{SO}}{\sqrt{D_{SO}^2 + p^2}}\, dp\, d\beta.    (169)
Fig. 3.37: The (t̃, s̃) coordinate system represents a point in the object with respect to a tilted fan beam. (From [Kak86].)
In a cone beam reconstruction it is necessary to tilt the fan out of the plane of rotation; thus the size of the fan and the coordinate system of the reconstructed point change. As shown in Fig. 3.37 a new coordinate system (t̃, s̃) is defined that represents the location of the reconstructed point with respect to the tilted fan. Because of the changing fan size both the source distance, D_{SO}, and the angular differential, dβ, change. The new source distance is given by

D_{SO}'^2 = D_{SO}^2 + \zeta^2    (170)
where ζ is the height of the fan above the center of the plane of rotation. In addition, the increment of angular rotation dβ becomes

d\beta' = \frac{D_{SO}}{D_{SO}'}\, d\beta.    (171)
Substituting these new variables, D'_{SO} for D_{SO} and dβ' for dβ, and writing the projection data as R_{\beta'}(p, \zeta), (169) becomes

g(\tilde{t}, \tilde{s}) = \frac{1}{2} \int_0^{2\pi} \frac{D_{SO}'^2}{(D_{SO}' - \tilde{s})^2} \int_{-\infty}^{\infty} R_{\beta'}(p, \zeta)\, h\!\left(\frac{D_{SO}'\, \tilde{t}}{D_{SO}' - \tilde{s}} - p\right) \frac{D_{SO}'}{\sqrt{D_{SO}'^2 + p^2}}\, dp\, d\beta'.    (172)

The tilted fan that passes through the point (t, s, z) is the one whose elevation satisfies

\frac{\zeta}{D_{SO}} = \frac{z}{D_{SO} - s}.    (173)

Using this relation together with (170) and (171) to rewrite (172) in the original coordinate system, we find

g(t, s) = \frac{1}{2} \int_0^{2\pi} \frac{D_{SO}^2}{(D_{SO} - s)^2} \int_{-\infty}^{\infty} R_\beta(p, \zeta)\, h\!\left(\frac{D_{SO}\, t}{D_{SO} - s} - p\right) \frac{D_{SO}}{\sqrt{D_{SO}^2 + \zeta^2 + p^2}}\, dp\, d\beta.    (174)
The cone beam reconstruction algorithm can be broken into the following three steps:

Step 1: Multiply the projection data, R_\beta(p, \zeta), by the function D_{SO}/\sqrt{D_{SO}^2 + \zeta^2 + p^2} to find R'_\beta(p, \zeta):

R_\beta'(p, \zeta) = \frac{D_{SO}}{\sqrt{D_{SO}^2 + \zeta^2 + p^2}}\, R_\beta(p, \zeta).    (175)

Step 2: Convolve the weighted projection R'_\beta(p, \zeta) with h(p)/2 by multiplying their Fourier transforms with respect to p. Note this convolution is done independently for each elevation, ζ. The result, Q_\beta(p, \zeta), is written

Q_\beta(p, \zeta) = R_\beta'(p, \zeta) * \tfrac{1}{2} h(p).    (176)
Step 3: Finally, backproject each weighted projection over the three-dimensional reconstruction grid:

g(t, s, z) = \int_0^{2\pi} \frac{D_{SO}^2}{(D_{SO} - s)^2}\, Q_\beta\!\left(\frac{D_{SO}\, t}{D_{SO} - s},\; \frac{D_{SO}\, z}{D_{SO} - s}\right) d\beta.    (177)
The two arguments of the weighted projection, Q_β, represent the transformation of a point in the object into the coordinate system of the tilted fan shown in Fig. 3.37. Only those points of the object that are illuminated from all directions can be properly reconstructed. In a cone beam system this region is a sphere of radius D_{SO} sin Γ_m, where Γ_m is half the beamwidth angle of the cone. Outside this region a point will not be included in some of the projections and thus will not be correctly reconstructed. Figs. 3.38 and 3.39 show reconstructions at two different levels of the object described in Fig. 3.36. In each case 100 projections of 127 × 127 elements were simulated, and both a gray scale image of the entire plane and a line plot are shown. The reconstructed planes were at z = 0.625 and z = -0.25 and are marked as Plane A and Plane B in Fig. 3.36. In agreement with [Smi85], the quality of the reconstruction varies with the elevation of the plane. On the plane of rotation (z = 0) the cone beam algorithm is identical to an equispaced fan beam algorithm, and thus the results shown in Fig. 3.38 are quite good. Farther from the central plane each point in the reconstruction is still irradiated from all directions, but now at an oblique angle. As shown in Fig. 3.39 there is a noticeable degradation in the reconstruction.
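The three steps above can be sketched compactly in numpy. This is a minimal illustration under simplifying assumptions (nearest-neighbour backprojection, a sampled ramp kernel, and detector coordinates assumed already rescaled to the plane containing the rotation axis); the function name and argument layout are our own, not from the text.

```python
import numpy as np

def fdk_reconstruct(proj, betas, D_so, dp, dz, grid):
    """Sketch of the three-step cone beam algorithm of (175)-(177).
    proj[k]: projection R_beta(p, zeta) at source angle betas[k], with
    detector axes p (axis 0) and zeta (axis 1).  grid: (N, 3) array of
    (x, y, z) reconstruction points."""
    n_p, n_z = proj[0].shape
    p = (np.arange(n_p) - n_p / 2) * dp
    zeta = (np.arange(n_z) - n_z / 2) * dz
    # sampled ramp filter kernel h(p)/2
    n = np.arange(-n_p + 1, n_p)
    h = np.zeros(n.shape)
    h[n == 0] = 1.0 / (8 * dp ** 2)
    odd = (n % 2) != 0
    h[odd] = -0.5 / (np.pi * n[odd] * dp) ** 2
    recon = np.zeros(len(grid))
    for beta, R in zip(betas, proj):
        # Step 1, (175): cone beam weighting of every detector sample
        W = D_so / np.sqrt(D_so ** 2 + zeta[None, :] ** 2 + p[:, None] ** 2)
        Rw = R * W
        # Step 2, (176): convolve each elevation zeta with h(p)/2
        Q = np.stack([np.convolve(Rw[:, j], h)[n_p - 1:2 * n_p - 1] * dp
                      for j in range(n_z)], axis=1)
        # Step 3, (177): weighted backprojection over the 3-D grid
        t = grid[:, 0] * np.cos(beta) + grid[:, 1] * np.sin(beta)
        s = -grid[:, 0] * np.sin(beta) + grid[:, 1] * np.cos(beta)
        U = (D_so - s) / D_so                          # as in (168)
        ip = np.clip(np.rint(t / U / dp + n_p / 2).astype(int), 0, n_p - 1)
        iz = np.clip(np.rint(grid[:, 2] / U / dz + n_z / 2).astype(int),
                     0, n_z - 1)
        recon += Q[ip, iz] / U ** 2 * (2 * np.pi / len(betas))
    return recon
```

A real implementation would use FFT-based filtering and linear interpolation in the backprojection, but the structure of the three steps is the same.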
3.7 Bibliographic Notes
The current excitement in tomographic imaging originated with Hounsfield's invention [Hou72] of the computed tomography (CT) scanner in 1972, which was indeed a major breakthrough. His invention showed that it is possible to get high-quality cross-sectional images with an accuracy now reaching one part in a thousand in spite of the fact that the projection data do not strictly satisfy the theoretical models underlying the efficiently implementable reconstruction algorithms. (In x-ray tomography, the mismatch with the assumed theoretical models is caused primarily by the polychromaticity of the radiation used. This will be discussed in Chapter 4.) His invention also showed that it is possible to process a very large number of measurements (now approaching a million) with fairly complex mathematical operations, and still get an image that is incredibly accurate. The success of x-ray CT has naturally led to research aimed at extending this mode of image formation to ultrasound and microwave sources. The idea of filtered backprojection was first advanced by Bracewell and Riddle [Bra67] and later independently by Ramachandran and Lakshminarayanan [Ram71]. The superiority of the filtered backprojection algorithm over
Fig. 3.38: (a) Cone beam algorithm reconstruction of plane B in Fig. 3.36. (b) Plot of the y = -0.605 line in the reconstruction compared to the original. (From [Kak86].)
Fig. 3.39: (a) Cone beam algorithm reconstruction of plane A in Fig. 3.36. (b) Plot of the y = -0.105 line in the reconstruction compared to the original. (From [Kak86].)
the algebraic techniques was first demonstrated by Shepp and Logan [She74]. Its development for fan beam data was first made by Lakshminarayanan [Lak75] for the equispaced collinear detectors case and later extended by Herman and Naparstek [Her77] for the case of equiangular rays. The fan beam algorithm derivation presented here was first developed by Scudder [Scu78]. Many authors [Bab77], [Ken79], [Kwo77], [Lew79], [Tan75] have proposed variations on the filter functions of the filtered backprojection algorithms discussed in this chapter. The reader is referred particularly to [Ken79], [Lew79] for ways to speed up the filtering of the projection data by using binary approximations and/or inserting zeros in the unit sample response of the filter function. Images may also be reconstructed from fan beam data by first sorting them into parallel projection data. Fast algorithms for ray sorting of fan beam data have been developed by Wang [Wan77], Dreike and Boyd [Dre77], Peters and Lewitt [Pet77], and Dines and Kak [Din76]. The reader is referred to [Nah81] for a filtered backprojection algorithm for reconstructions from data generated by using very narrow angle fan beams that rotate and traverse continuously around the object. The reader is also referred to [Hor78], [Hor79] for algorithms for nonuniformly sampled projection data, and to [Bra67], [Lew78], [Opp75], [Sat80], [Tam81] for reconstructions from incomplete and limited projections. Full three-dimensional reconstructions have been discussed in [Chi79], [Chi80], [Smi85]. We have also not discussed the circular harmonic transform method of image reconstruction as proposed by Hansen [Han81a], [Han81b]. Tomographic imaging may also be accomplished, although less accurately, by direct Fourier inversion, instead of the filtered backprojection method presented in this chapter.
This was first shown by Bracewell [Bra56] for radioastronomy, and later independently by DeRosier and Klug [DeR68] in electron microscopy and Rowley [Row69] in optical holography. Several workers who applied this method to radiography include Tretiak et al. [Tre69], Bates and Peters [Bat71], and Mersereau and Oppenheim [Mer74]. In order to utilize two-dimensional FFT algorithms for image formation, the direct Fourier approach calls for frequency domain interpolation from a polar grid to a rectangular grid. For some recent methods to minimize the resulting interpolation error, the reader is referred to [Sta81]. Recently Wernecke and D'Addario [Wer77] have proposed a maximum-entropy approach to direct Fourier inversion. Their procedure is especially applicable if for some reason the projection data are insufficient.
3.8 References
[Bab77] N. Baba and K. Murata, Filtering for image reconstruction from projections, J. Opt. Soc. Amer., vol. 67, pp. 662-668, 1977.
[Bat71] R. H. T. Bates and T. M. Peters, Towards improvements in tomography, New Zealand J. Sci., vol. 14, pp. 883-896, 1971.
[Boy83] D. P. Boyd and M. J. Lipton, Cardiac computed tomography, Proc. IEEE, vol. 71, pp. 298-307, Mar. 1983.
[Bra56] R. N. Bracewell, Strip integration in radio astronomy, Aust. J. Phys., vol. 9, pp. 198-217, 1956.
[Bra67] R. N. Bracewell and A. C. Riddle, Inversion of fan-beam scans in radio astronomy, Astrophys. J., vol. 150, pp. 427-434, Nov. 1967.
[Chi79] M. Y. Chiu, H. H. Barrett, R. G. Simpson, C. Chou, J. W. Arendt, and G. R. Gindi, Three dimensional radiographic imaging with a restricted view angle, J. Opt. Soc. Amer., vol. 69, pp. 1323-1330, Oct. 1979.
[Chi80] M. Y. Chiu, H. H. Barrett, and R. G. Simpson, Three dimensional reconstruction from planar projections, J. Opt. Soc. Amer., vol. 70, pp. 755-762, July 1980.
[Cro70] R. A. Crowther, D. J. DeRosier, and A. Klug, The reconstruction of a three-dimensional structure from projections and its applications to electron microscopy, Proc. Roy. Soc. London, vol. A317, pp. 319-340, 1970.
[DeR68] D. J. DeRosier and A. Klug, Reconstruction of three dimensional structures from electron micrographs, Nature, vol. 217, pp. 130-134, Jan. 1968.
[Din76] K. A. Dines and A. C. Kak, Measurement and reconstruction of ultrasonic parameters for diagnostic imaging, Research Rep. TR-EE 77-4, School of Electrical Engineering, Purdue Univ., Lafayette, IN, Dec. 1976.
[Dre77] P. Dreike and D. P. Boyd, Convolution reconstruction of fan-beam projections, Comp. Graph. Image Proc., vol. 5, pp. 459-469, 1977.
[Fel84] L. A. Feldkamp, L. C. Davis, and J. W. Kress, Practical cone-beam algorithm, J. Opt. Soc. Amer., vol. 1, pp. 612-619, June 1984.
[Ham77] R. W. Hamming, Digital Filters. Englewood Cliffs, NJ: Prentice-Hall, 1977.
[Han81a] E. W. Hansen, Theory of circular image reconstruction, J. Opt. Soc. Amer., vol. 71, pp. 304-308, Mar. 1981.
[Han81b] ——, Circular harmonic image reconstruction: Experiments, Appl. Opt., vol. 20, pp. 2266-2274, July 1981.
[Her77] G. T. Herman and A. Naparstek, Fast image reconstruction based on a Radon inversion formula appropriate for rapidly collected data, SIAM J. Appl. Math., vol. 33, pp. 511-533, Nov. 1977.
[Hor78] B. K. P. Horn, Density reconstruction using arbitrary ray sampling schemes, Proc. IEEE, vol. 66, pp. 551-562, May 1978.
[Hor79] ——, Fan-beam reconstruction methods, Proc. IEEE, vol. 67, pp. 1616-1623, 1979.
[Hou72] G. N. Hounsfield, A method of and apparatus for examination of a body by radiation such as x-ray or gamma radiation, Patent Specification 1283915, The Patent Office, 1972.
[Jak76] C. V. Jakowatz, Jr. and A. C. Kak, Computerized tomography using x-rays and ultrasound, Research Rep. TR-EE 76-26, School of Electrical Engineering, Purdue Univ., Lafayette, IN, 1976.
[Kak79] A. C. Kak, Computerized tomography with x-ray emission and ultrasound sources, Proc. IEEE, vol. 67, pp. 1245-1272, 1979.
[Kak85] ——, Tomographic imaging with diffracting and non-diffracting sources, in Array Signal Processing, S. Haykin, Ed. Englewood Cliffs, NJ: Prentice-Hall, 1985.
[Kak86] A. C. Kak and B. Roberts, Image reconstruction from projections, in Handbook of Pattern Recognition and Image Processing, T. Y. Young and K. S. Fu, Eds. New York, NY: Academic Press, 1986.
[Kea78] P. N. Keating, More accurate interpolation using discrete Fourier transforms, IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-26, pp. 368-369, 1978.
[Ken79] S. K. Kenue and J. F. Greenleaf, Efficient convolution kernels for computerized tomography, Ultrason. Imaging, vol. 1, pp. 232-244, 1979.
[Kwo77] Y. S. Kwoh, I. S. Reed, and T. K. Truong, A generalized |w|-filter for 3-D reconstruction, IEEE Trans. Nucl. Sci., vol. NS-24, pp. 1990-1998, 1977.
[Lak75] A. V. Lakshminarayanan, Reconstruction from divergent ray data, Tech. Rep. 92, Dept. of Computer Science, State Univ. of New York at Buffalo, 1975.
[Lew78] R. M. Lewitt and R. H. T. Bates, Image reconstruction from projections, Optik, vol. 50, pp. 19-33 (Part I), pp. 85-109 (Part II), pp. 189-204 (Part III), pp. 269-278 (Part IV), 1978.
[Lew79] R. M. Lewitt, Ultra-fast convolution approximation for computerized tomography, IEEE Trans. Nucl. Sci., vol. NS-26, pp. 2678-2681, 1979.
[Mer74] R. M. Mersereau and A. V. Oppenheim, Digital reconstruction of multidimensional signals from their projections, Proc. IEEE, vol. 62, pp. 1319-1338, 1974.
[Nah81] D. Nahamoo, C. R. Crawford, and A. C. Kak, Design constraints and reconstruction algorithms for transverse-continuous-rotate CT scanners, IEEE Trans. Biomed. Eng., vol. BME-28, pp. 79-97, 1981.
[Nap80] A. Naparstek, Short-scan fan-beam algorithms for CT, IEEE Trans. Nucl. Sci., vol. NS-27, 1980.
[Opp75] B. E. Oppenheim, Reconstruction tomography from incomplete projections, in Reconstruction Tomography in Diagnostic Radiology and Nuclear Medicine, M. M. Ter-Pogossian et al., Eds. Baltimore, MD: University Park Press, 1975.
[Pan83] S. X. Pan and A. C. Kak, A computational study of reconstruction algorithms for diffraction tomography: Interpolation vs. filtered-backpropagation, IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-31, pp. 1262-1275, Oct. 1983.
[Par82a] D. L. Parker, Optimal short-scan convolution reconstruction for fanbeam CT, Med. Phys., vol. 9, pp. 254-257, Mar./Apr. 1982.
[Par82b] ——, Optimization of short scan convolution reconstruction for fan-beam CT, in Proc. International Workshop on Physics and Engineering in Medical Imaging, Mar. 1982, pp. 199-202.
[Pet77] T. M. Peters and R. M. Lewitt, Computed tomography with fan-beam geometry, J. Comput. Assist. Tomog., vol. 1, pp. 429-436, 1977.
[Ram71] G. N. Ramachandran and A. V. Lakshminarayanan, Three dimensional reconstructions from radiographs and electron micrographs: Application of convolution instead of Fourier transforms, Proc. Nat. Acad. Sci., vol. 68, pp. 2236-2240, 1971.
[Rob83] R. A. Robb, E. A. Hoffman, L. J. Sinak, L. D. Harris, and E. L. Ritman, High-speed three-dimensional x-ray computed tomography: The dynamic spatial reconstructor, Proc. IEEE, vol. 71, pp. 308-319, Mar. 1983.
[Ros82] A. Rosenfeld and A. C. Kak, Digital Picture Processing, 2nd ed. New York, NY: Academic Press, 1982.
[Row69] P. D. Rowley, Quantitative interpretation of three dimensional weakly refractive phase objects using holographic interferometry, J. Opt. Soc. Amer., vol. 59, pp. 1496-1498, Nov. 1969.
[Sat80] T. Sato, S. J. Norton, M. Linzer, O. Ikeda, and M. Hirama, Tomographic image reconstruction from limited projections using iterative revisions in image and transform spaces, Appl. Opt., vol. 20, pp. 395-399, Feb. 1980.
[Sch73] R. W. Schafer and L. R. Rabiner, A digital signal processing approach to interpolation, Proc. IEEE, vol. 61, pp. 692-702, 1973.
[Scu78] H. J. Scudder, Introduction to computer aided tomography, Proc. IEEE, vol. 66, pp. 628-637, June 1978.
[She74] L. A. Shepp and B. F. Logan, The Fourier reconstruction of a head section, IEEE Trans. Nucl. Sci., vol. NS-21, pp. 21-43, 1974.
[Smi85] B. D. Smith, Image reconstruction from cone-beam projections: Necessary and sufficient conditions and reconstruction methods, IEEE Trans. Med. Imaging, vol. MI-4, pp. 14-25, Mar. 1985.
[Sta81] H. Stark, J. W. Woods, I. Paul, and R. Hingorani, Direct Fourier reconstruction in computer tomography, IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-29, pp. 237-244, 1981.
[Tam81] K. C. Tam and V. Perez-Mendez, Tomographical imaging with limited angle input, J. Opt. Soc. Amer., vol. 71, pp. 582-592, May 1981.
[Tan75] E. Tanaka and T. A. Iinuma, Correction functions for optimizing the reconstructed image in transverse section scan, Phys. Med. Biol., vol. 20, pp. 789-798, 1975.
[Tre69] O. Tretiak, M. Eden, and M. Simon, Internal structures for three dimensional images, in Proc. 8th Int. Conf. on Med. Biol. Eng., Chicago, IL, 1969.
[Wan77] L. Wang, Cross-section reconstruction with a fan-beam scanning geometry, IEEE Trans. Comput., vol. C-26, pp. 264-268, Mar. 1977.
[Wer77] S. J. Wernecke and L. R. D'Addario, Maximum entropy image reconstruction, IEEE Trans. Comput., vol. C-26, pp. 351-364, 1977.
4 Measurement of Projection Data
The mathematical algorithms for tomographic reconstructions described in Chapter 3 are based on projection data. These projections can represent, for example, the attenuation of x-rays through an object as in conventional x-ray tomography, the decay of radionuclides in the body as in emission tomography, or the refractive index variations as in ultrasonic tomography. This chapter will discuss the measurement of projection data with energy that travels in straight lines through objects. This is always the case when a human body is illuminated with x-rays and is a close approximation to what happens when ultrasonic tomography is used for the imaging of soft biological tissues (e.g., the female breast). Projection data, by their very nature, are a result of interaction between the radiation used for imaging and the substance of which the object is composed. To a first approximation, such interactions can be modeled as measuring integrals of some characteristic of the object. A simple example of this is the attenuation a beam of x-rays undergoes as it travels through an object. A line integral of x-ray attenuation, as we will show in this chapter, is the log of the ratio of monochromatic x-ray photons that enter the object to those that leave. A second example of projection data being equal to line integrals is the propagation of a sound wave as it travels through an object. For a narrow beam of sound, the total time it takes to travel through an object is a line integral because it is the summation of the time it takes to travel through each small part of the object. In both the x-ray and the ultrasound cases, the measured data correspond only approximately to a line integral. The attenuation of an x-ray beam is dependent on the energy of each photon, and since the x-rays used for imaging normally contain a range of energies, the total attenuation is a more complicated sum of the attenuation at each point along the line.
In the ultrasound case, the errors are caused by the fact that sound waves almost never travel through an object in a straight line and thus the measured time corresponds to some unknown curved path through the object. Fortunately, for many important practical applications, approximation of these curved paths by straight lines is acceptable. In this chapter we will discuss a number of different types of tomography, each with a different approach to the measurement of projection data. An
MEASUREMENT
OF PROJECTION
DATA
113
excellent review of these and many other applications of CT imaging is provided in [Bat83]. The physical limitations of each type of tomography to be discussed here are also presented in [Mac83].
Fig. 4.1: An x-ray tube is shown here illuminating a homogeneous material with a beam of x-rays. The beam is measured on the far side of the object to determine the attenuation of the object.
where τ and σ represent the photon loss rates (on a per unit distance basis) due to the photoelectric and the Compton effects, respectively. For our purposes we will at this time lump these two together and represent the above equation as

\frac{1}{N} \frac{\Delta N}{\Delta x} = -\mu.    (2)
where N_0 is the number of photons that enter the object. The number of photons as a function of the position within the slab is then given by

\ln N - \ln N_0 = -\mu x    (5)

or

N(x) = N_0 e^{-\mu x}.    (6)
The constant μ is called the attenuation coefficient of the material. Here we have assumed that μ is constant over the interval of integration. Now consider the experiment illustrated in Fig. 4.2, where we have shown
Fig. 4.2: A parallel beam of x-rays is shown propagating through a cross section of the human body. (From [Kak79].)
a cross section of the human body being illuminated by a single beam of x-rays. If we confine our attention to the cross-sectional plane drawn in the figure, we may now consider μ to be a function of two space coordinates, x and y, and therefore denote the attenuation coefficient by μ(x, y). Let N_in be the total number of photons that enter the object (within the time interval of experimental measurement) through the beam from side A. And let N_d be the total number of photons exiting (within the same time interval) through the beam on side B. When the width, τ, of the beam is sufficiently small, reasoning similar to that used for the one-dimensional case now leads to the following relationship between the numbers N_d and N_in [Hal74], [Ter67]:

N_d = N_{in} \exp\left[ -\int_{ray} \mu(x, y)\, ds \right]

or, equivalently,

\int_{ray} \mu(x, y)\, ds = \ln\left(\frac{N_{in}}{N_d}\right)    (7)

where ds is an element of length and where the integration is carried out along line AB shown in the figure. The left-hand side precisely constitutes a ray integral for a projection. Therefore, measurements like ln (N_in/N_d) taken for
different rays at different angles may be used to generate projection data for the function ~(x, y). We would like to reiterate that this is strictly true
only under the assumption that the x-ray beam consists of monoenergetic photons. This assumption is necessary because the linear attenuation
coefficient is, in general, a function of photon energy. Other assumptions needed for this result include: detectors that are insensitive to scatter (see Section 4.1.4), a very narrow beam so there are no partial volume effects, and a very small aperture (see Chapter 5).
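Under the monoenergetic assumption, the measurement model of (7) is easy to verify numerically. The sketch below is illustrative only (the attenuation value is a rough soft-tissue number of about 0.2 cm⁻¹ and the photon count is arbitrary); it discretizes the ray into small steps and recovers the line integral from the photon counts:

```python
import numpy as np

def projection_value(mu, ds, N_in):
    """Ideal monochromatic measurement of one ray integral, as in (7).
    mu: attenuation coefficients (1/cm) sampled along the ray,
    ds: sample spacing (cm), N_in: photons entering the object."""
    line_integral = np.sum(mu) * ds          # approximates integral of mu ds
    N_d = N_in * np.exp(-line_integral)      # photons reaching the detector
    return np.log(N_in / N_d)                # recovers the line integral

# 2 cm of water-like tissue: the recovered value equals 0.2 * 2 = 0.4
val = projection_value(np.full(200, 0.2), 0.01, 1e6)
```

Repeating this for many rays at many angles yields the projection data for μ(x, y).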
Fig. 4.3: A measured x-ray spectrum from [Epp66] is shown here. The anode voltage was 105 kVp. (From [Kak79].)

For a polychromatic source the number of photons exiting the object is given by

N_d = \int S_{in}(E) \exp\left[ -\int_{ray} \mu(x, y; E)\, ds \right] dE    (9)
where S_in(E) represents the incident photon number density (also called the energy spectral density of the incident photons). S_in(E) dE is the total number of incident photons in the energy range between E and E + dE. This equation incorporates the fact that the linear attenuation coefficient, μ, at a point (x, y) is also a function of energy. The reader may note that if we were to measure the energy spectrum of the exiting photons (on side B in Fig. 4.2) it would be given by

S_{exit}(E) = S_{in}(E) \exp\left[ -\int_{ray} \mu(x, y; E)\, ds \right].    (10)
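Equation (10) can be used to demonstrate how the exit spectrum changes with absorber thickness: because attenuation falls with energy, the mean energy of the transmitted beam rises. The two-line "spectrum" and attenuation values in this sketch are invented for illustration and are not from the text:

```python
import numpy as np

# Equal numbers of 40 keV and 80 keV photons, with a water-like
# attenuation coefficient that decreases with energy.
E = np.array([40.0, 80.0])            # keV
S_in = np.array([5e5, 5e5])           # incident photons per energy bin
mu = np.array([0.27, 0.18])           # 1/cm, lower at higher energy

for x in (0.0, 10.0, 20.0):           # absorber thickness in cm
    S_exit = S_in * np.exp(-mu * x)   # per-energy attenuation, as in (10)
    mean_E = np.sum(E * S_exit) / np.sum(S_exit)
    print(x, mean_E)                  # mean energy rises with thickness
```

The low energy photons are preferentially absorbed, so the transmitted spectrum is progressively "hardened."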
In discussing polychromatic x-ray photons one has to bear in mind that there are basically three different types of detectors [McC75]. The output of a detector may be proportional to the total number of photons incident on it, or it may be proportional to total photon energy, or it may respond to energy deposition per unit mass. Most counting-type detectors are of the first type, most scintillation-type detectors are of the second type, and most ionization detectors are of the third type. In determining the output of a detector one must also take into account the dependence of detector sensitivity on photon energy. In this work we will assume for the sake of simplicity that the detector sensitivity is constant over the energy range of interest. In the energy ranges used for diagnostic examinations the linear attenuation coefficient for many tissues decreases with energy. For a propagating polychromatic x-ray beam this causes the low energy photons to be preferentially absorbed, so that the remaining beam becomes proportionately richer in high energy photons. In other words, the mean energy associated with the exit spectrum, S_exit(E), is higher than that associated with the incident spectrum, S_in(E). This phenomenon is called beam hardening. Given the fact that x-ray sources in CT scanning are polychromatic and that the attenuation coefficient is energy dependent, the following question arises: What parameter does an x-ray CT scanner reconstruct? To answer this question McCullough [McC74], [McC75] has introduced the notion of effective energy of a CT scanner. It is defined as that monochromatic energy at which a given material will exhibit the same attenuation coefficient as is measured by the scanner. McCullough et al. [McC74] showed empirically that for the original EMI head scanner the effective energy was 72 keV when the x-ray tube was operated at 120 kV. (See [Mil78] for a practical procedure for determining the effective energy of a CT scanner.) The concept of effective energy is valid only under the condition that the exit spectra are the same for all the rays used in the measurement of projection data. (When the exit spectra are not the same, the result is the appearance of the beam hardening artifacts discussed in the next subsection.) It follows from the work of McCullough [McC75] that it is a good assumption that the measured attenuation coefficient μ_meas at a point in a cross section is related to the actual attenuation coefficient μ(E) at that point by
\mu_{meas} = \frac{\int S(E)\, \mu(E)\, dE}{\int S(E)\, dE}.    (11)
This expression applies only when the output of the detectors is proportional to the total number of photons incident on them. McCullough has given similar expressions for when detectors measure total photon energy and when they respond to total energy deposition per unit mass. The effective energy of a scanner depends not only on the x-ray tube spectrum but also on the nature of photon detection. Although it is customary to say that a CT scanner calculates the linear attenuation coefficient of tissue (at some effective energy), the numbers actually put out by the computer attached to the scanner are integers that usually range in values from -1000 to 3000. These integers have been given the name Hounsfield units and are denoted by HU. The relationship between the linear attenuation coefficient and the corresponding Hounsfield unit is

H = \frac{\mu - \mu_{water}}{\mu_{water}} \times 1000    (12)

where μ_water is the attenuation coefficient of water and the values of both μ and μ_water are taken at the effective energy of the scanner. The value H = 0 corresponds to water, and the value H = -1000 corresponds to μ = 0, which is assumed to be the attenuation coefficient of air. Clearly, if a scanner were perfectly calibrated it would give a value of zero for water and -1000 for air. Under actual operating conditions this is rarely the case. However, if the assumption of linearity between the measured Hounsfield units and the actual value of the attenuation coefficient (at the effective energy of the scanner) is valid, one may use the following relationship to convert the measured number H_m into the ideal number H:
H = \frac{H_m - H_{m,water}}{H_{m,water} - H_{m,air}} \times 1000    (13)

where
H_{m,water} and H_{m,air} are, respectively, the measured Hounsfield units for water and air. [This relationship may easily be derived by assuming that \mu = a H_m + b, calculating a and b in terms of H_{m,water}, H_{m,air}, and \mu_{water}, and then using (12).] Brooks [Bro77a] has used (11) to show that the Hounsfield unit H at a point in a CT image may be expressed as

H = \frac{H_c + H_p Q}{1 + Q}    (14)
where H_c and H_p are the Compton and photoelectric coefficients of the material being measured, expressed in Hounsfield units. The parameter Q,
called the spectral factor, depends only upon the x-ray spectrum used and may be obtained by performing a scan on a calibrating material. A noteworthy feature of H_c and H_p is that they are both energy independent. Equation (14) leads to the important result that if two different CT images are reconstructed using two different incident spectra (resulting in two different values of Q), then from the resulting two measured Hounsfield units for a given point in the cross section one may obtain some degree of chemical identification of the material at that point from H_c and H_p. Instead of performing two different scans, one may also perform only one scan with split detectors for this purpose [Bro78a].
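Equations (12) and (13) are simple enough to state directly in code. The helper names below are our own, chosen for illustration: one converts an attenuation coefficient to Hounsfield units, the other recalibrates a measured value using the scanner's water and air readings.

```python
def hounsfield(mu, mu_water):
    """Convert a linear attenuation coefficient (at the scanner's
    effective energy) to Hounsfield units, as in (12)."""
    return (mu - mu_water) / mu_water * 1000.0

def recalibrate(H_m, H_m_water, H_m_air):
    """Map a measured Hounsfield value onto the ideal scale, as in (13),
    assuming the scanner responds linearly to the attenuation
    coefficient."""
    return (H_m - H_m_water) / (H_m_water - H_m_air) * 1000.0
```

By construction, water maps to 0 HU and air to -1000 HU on both scales.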
Fig. 4.4: This reconstruction shows the effect of polychromaticity artifacts in a simulated skull. (a) shows the reconstructed image using the spectrum in Fig. 4.3, while (b) is the center line of the reconstruction for both the polychromatic and monochromatic cases. (From [Kak79].)
Various schemes have been suggested for making these artifacts less apparent. These fall into three categories: 1) preprocessing of projection data, 2) postprocessing of the reconstructed image, and 3) dual-energy imaging. Preprocessing techniques are based on the following rationale: If the assumption of the photons being monoenergetic were indeed valid, a ray integral would then be given by (8). For a homogeneous absorber of attenuation coefficient μ, this implies

\mu \ell = \ln\left(\frac{N_{in}}{N_d}\right)    (15)
Fig. 4.5: Hard objects such as bones can also cause streaks in the reconstructed image. (a) Reconstruction from polychromatic projection data of a phantom that consists of a skull with five circular bones inside. The rest of the skull is water. The wide dark streaks are caused by the polychromaticity of x-rays. The polychromatic projections were simulated using the spectrum in Fig. 4.3. (b) Reconstruction of the same phantom as in (a) using projections generated with monochromatic x-rays. The variations in the gray levels outside the bone areas within the skull are less than 0.1% of the mean value. The image was displayed with a narrow window to bring out these variations. Note the absence of the streaks shown in (a). (From [Kak79].)
where ℓ is the thickness of the absorber. This equation says that under ideal conditions the experimental measurement ln (N_in/N_d) should be linearly proportional to the absorber thickness. This is depicted in Fig. 4.6. However, under actual conditions a result like the solid curve in the figure is obtained. Most preprocessing corrections simply specify the selection of an appropriate absorber material and then experimentally obtain the solid curve in Fig. 4.6.
Fig. 4.6: The solid curve shows that the experimental measurement of a ray integral depends nonlinearly on the thickness of a homogeneous absorber. (Adapted from [Kak79].)
Thus, should a ray integral be measured at A, it is simply increased to A' for tomographic reconstruction. This procedure has the advantage of very rapid implementation and works well for soft-tissue cross sections because differences in the composition of various soft tissues are minimal (they are all approximately water-like from the standpoint of x-ray attenuation). For preprocessing corrections see [Bro76], [McD75], [McD77], and for a technique that combines preprocessing with image deconvolution see [Cha78]. Preprocessing techniques usually fail when bone is present in a cross section. In such cases it is possible to postprocess the CT image to improve the reconstruction. In the iterative scheme one first does a reconstruction (usually incorporating the linearization correction mentioned above) from the projection data. This reconstruction is then thresholded to get an image that shows only the bone areas. This thresholded image is then forward-projected to determine the contribution made by bone to each ray integral in each projection. On the basis of this contribution a correction is applied to each ray integral. The resulting projection data are then backprojected again to form another estimate of the object. Joseph and Spital [Jos78] and Kijewski and Bjärngard [Kij78] have obtained very impressive results with this technique. A fast reprojection technique is described in [Cra86]. The dual-energy technique proposed by Alvarez and Macovski [Alv76a], [Due78] is theoretically the most elegant approach to eliminating the beam hardening artifacts. Their approach is based on modeling the energy dependence of the linear attenuation coefficient by
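The preprocessing correction of Fig. 4.6 can be sketched as a table lookup built from a calibration scan of a homogeneous absorber: invert the measured (solid) curve to get an equivalent thickness, then apply the ideal linear relation. The function below is an illustration under assumed names and a simulated calibration curve, not the correction used by any particular scanner.

```python
import numpy as np

def make_linearizing_map(thickness, measured, mu_eff):
    """Build the correction of Fig. 4.6 from a calibration scan of a
    homogeneous (water-like) absorber.  thickness[i] and
    measured[i] = ln(N_in/N_d) sample the solid curve; mu_eff is the
    slope of the ideal straight line.  Returns a function mapping a
    measured ray integral A to its linearized value A'."""
    def correct(A):
        ell = np.interp(A, measured, thickness)   # invert the solid curve
        return mu_eff * ell                       # ideal linear response
    return correct

# calibration: a simulated hardening curve that sags below the ideal line
t = np.linspace(0.0, 5.0, 51)                     # absorber thickness, cm
m = 0.2 * t - 0.01 * t ** 2                       # measured ln(N_in/N_d)
correct = make_linearizing_map(t, m, mu_eff=0.2)
```

Applying `correct` to every measured ray integral before reconstruction restores the linear relationship assumed by (15).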
μ(x, y, E) = a1(x, y) g(E) + a2(x, y) f_KN(E).    (16)

The part a1(x, y)g(E) describes the contribution made by photoelectric absorption to the attenuation at point (x, y); a1(x, y) incorporates the material
MEASUREMENT OF PROJECTION DATA 123
parameters at (x, y) and g(E) expresses the (material independent) energy dependence of this contribution. The function g(E) is given by

g(E) = 1/E^3.    (17)
(See also Brooks and Di Chiro [Bro77b]. They have concluded that g(E) = E^-2.8.) The second part of (16), given by a2(x, y)f_KN(E), gives the Compton scatter contribution to the attenuation. Again a2(x, y) depends upon the material properties, whereas f_KN(E), the Klein-Nishina function, describes the (material independent) energy dependence of this contribution. The function f_KN(E) is given by

f_KN(E) = [(1 + α)/α^2] [2(1 + α)/(1 + 2α) - (1/α) ln (1 + 2α)] + (1/2α) ln (1 + 2α) - (1 + 3α)/(1 + 2α)^2    (18)
with α = E/510.975. The energy E is in kilo-electron volts. The importance of (16) lies in the fact that all the energy dependence has been incorporated in the known and material independent functions g(E) and f_KN(E). Substituting this equation in (9) we get

Nd = ∫ S0(E) exp [-(A1 g(E) + A2 f_KN(E))] dE    (19)

where

A1 = ∫_ray a1(x, y) ds    (20)

A2 = ∫_ray a2(x, y) ds.    (21)
A1 and A2 are, clearly, ray integrals of the functions a1(x, y) and a2(x, y). Now if we could somehow determine A1 and A2 for each ray, the functions a1(x, y) and a2(x, y) could be separately reconstructed from this information. And, once we know a1(x, y) and a2(x, y), using (16) an attenuation coefficient tomogram could be presented at any energy, free from beam hardening artifacts.

A few words about the determination of A1 and A2: Note that it is the intensity Nd that is measured by the detector. Now suppose instead of making one measurement we make two measurements for each ray path, with two different source spectra. Let us call these measurements I1 and I2; then
I1(A1, A2) = ∫ S1(E) exp [-(A1 g(E) + A2 f_KN(E))] dE    (22)

I2(A1, A2) = ∫ S2(E) exp [-(A1 g(E) + A2 f_KN(E))] dE    (23)
which gives us two (integral) equations for the two unknowns A1 and A2. The two source spectra, S1(E) and S2(E), may for example be obtained by simply changing the tube voltage on the x-ray source or adding filtration to the incident beam. This, however, requires that two scans be made for each tomogram. In principle, one can obtain equivalent results from a single scan with split detectors [Bro78a] or by changing the tube voltage so that alternating projections are at different voltages. Alvarez and Macovski [Alv76b] have shown that statistical fluctuations in a1(x, y) and a2(x, y) caused by the measurement errors in I1 and I2 are small compared to the differences of these quantities for body tissues.
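To make the pair of integral equations concrete, the sketch below discretizes two hypothetical source spectra and inverts the two-measurement system for A1 and A2 with a Newton iteration. The spectra, the normalization of g(E), and all numerical values are illustrative assumptions, not data from any real scanner:

```python
import math

def g(E):
    """Photoelectric energy dependence; normalized at 60 keV here only so
    that A1 and A2 have comparable sizes (a numerical convenience)."""
    return (60.0 / E) ** 3

def f_kn(E):
    """Klein-Nishina function, with alpha = E/510.975 and E in keV."""
    a = E / 510.975
    t = math.log(1.0 + 2.0 * a)
    return ((1 + a) / a ** 2) * (2 * (1 + a) / (1 + 2 * a) - t / a) \
        + t / (2 * a) - (1 + 3 * a) / (1 + 2 * a) ** 2

# Hypothetical discretized source spectra: (energy in keV, relative weight).
SPEC1 = [(40, 0.2), (60, 0.5), (80, 0.3)]    # "low-kVp" setting (assumed)
SPEC2 = [(60, 0.2), (90, 0.5), (120, 0.3)]   # "high-kVp" setting (assumed)

def intensity(spec, A1, A2):
    """Discrete form of the two measurement equations."""
    return sum(w * math.exp(-(A1 * g(E) + A2 * f_kn(E))) for E, w in spec)

def solve_A(I1, I2, iters=50):
    """Newton iteration for the two unknowns A1 and A2."""
    A1 = A2 = 0.0
    h = 1e-7
    for _ in range(iters):
        b1 = intensity(SPEC1, A1, A2)
        b2 = intensity(SPEC2, A1, A2)
        j11 = (intensity(SPEC1, A1 + h, A2) - b1) / h   # finite-difference
        j12 = (intensity(SPEC1, A1, A2 + h) - b1) / h   # Jacobian entries
        j21 = (intensity(SPEC2, A1 + h, A2) - b2) / h
        j22 = (intensity(SPEC2, A1, A2 + h) - b2) / h
        r1, r2 = b1 - I1, b2 - I2
        det = j11 * j22 - j12 * j21
        A1 -= (r1 * j22 - r2 * j12) / det               # 2x2 Newton step
        A2 -= (j11 * r2 - j21 * r1) / det
    return A1, A2
```

Running solve_A on intensities simulated with known A1 and A2 recovers the pair; with A1 and A2 in hand for every ray, a1(x, y) and a2(x, y) can each be reconstructed by any algorithm of Chapter 3.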
4.1.4 Scatter
X-ray scatter leads to another type of error in the measurement of a projection. Recall that an x-ray beam traveling through an object can be attenuated by photoelectric absorption or by scattering. Photoelectric absorption is energy dependent and leads to beam hardening as was discussed in the previous section. On the other hand, attenuation by scattering occurs because some of the original energy in the beam is deflected onto a new path. The scatter angle is random, but generally more x-rays are scattered in the forward direction.

The only way to prevent scatter from leading to projection errors is to build detectors that are perfectly collimated, so that any x-rays that aren't traveling in a straight line between the source and the detector are rejected. A perfectly collimated detector is especially difficult to build in a fourth-generation, fixed-detector scanner (to be discussed in Section 4.1.5). In this type of machine the detectors must be able to measure x-rays from a very large angle as the source rotates around the object.

X-ray scatter leads to artifacts in reconstruction because the effect changes with each projection. While the intensity of scattered x-rays is approximately constant for different rotations of the object, the intensity of the primary beam (at the detector) is not. Once the x-rays have passed through the collimator the detector simply sums the two intensities. For rays through the object where the primary intensity is very small, the effect of scatter will be large, while for other rays where the primary beam is large, scattered x-rays will not lead to much error. This is shown in Fig. 4.7 [Glo82], [Jos82].

For the reasons mentioned above, the scattered energy causes larger errors in some projections than in others. Thus instead of spreading the error energy over the entire image, there is a directional dependence that leads to streaks in the reconstruction. This is shown in the reconstructions of Fig. 4.8.
Correcting for scatter is relatively easy compared to beam hardening. While it is possible to estimate the scatter intensity by mounting detectors
Fig. 4.7: The effect of scatter on two different projections is shown here. For the projections where the intensity of the primary beam is high the scatter makes little difference. When the intensity of the scattered beam is high compared to the primary beam, large (relative) errors are seen.
slightly out of the imaging plane, good results have been obtained by assuming a constant scatter intensity over the entire projection [Glo82].
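A minimal version of that constant-scatter correction, under the assumption that a single scatter estimate applies to every ray in the projection, might look like:

```python
import math

def correct_for_scatter(Nin, Nd_measured, scatter_estimate):
    """Subtract an assumed-constant scatter level from each detector
    reading before taking the log, recovering the ray integrals of the
    primary beam. (A sketch; a real system must also guard against
    readings at or below the scatter estimate.)"""
    return [math.log(Nin / (nd - scatter_estimate)) for nd in Nd_measured]
```

If the primary beam for a ray is Nin exp(-p) and a constant scatter level is added at the detector, this subtraction returns p exactly; without it, the error is largest where the primary beam is smallest, exactly as Fig. 4.7 illustrates.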
Fig. 4.8: Reconstructions are shown from an x-ray phantom consisting of a 15-cm-diameter water cylinder and two 4-cm Teflon rods. (A) 120-kVp scan without correction; (B) the same with polynomial beam hardening correction; and (C) 120-kVp/80-kVp dual-energy reconstruction. Note that the artifacts remain after the polychromaticity correction. (Reprinted with permission from [Glo82].)
Fig. 4.9: In a third-generation fan beam x-ray tomography machine a point source of x-rays and a detector array are rotated continuously around the patient. (From [Kak79].)
Fig. 4.10: A xenon gas detector is often used to measure the number of x-ray photons that pass through the object. (From [Kak79].) [Figure labels: entrance width τ of one detector; length P of one detector; incoming x-ray photons; aluminum entrance window at ground potential; central collecting electrode; high voltage electrodes (negative).]
detector consists of a central collecting electrode with a high voltage strip on each side. X-ray photons that enter a detector chamber cause ionizations with high probability (which depends upon the length, P, of the detector and the pressure of the gas). The resulting current through the electrodes is a measure of the incident x-ray intensity. In one commercial scanner, the collector plates are made of copper and the high voltage strips of tantalum. In the same scanner, the length P (shown in Fig. 4.10) is 8 cm, the voltage applied between the electrodes 170 V, and the pressure of the gas 10 atm. The overall efficiency of this particular detector is around 60%. The primary advantages of xenon gas detectors are that they can be packed closely and that they are inexpensive. The entrance width, τ, in Fig. 4.10 may be as small as 1 mm. Yaffee et al. [Yaf77] have discussed in detail the energy absorption efficiency, the linearity of response, and the sensitivity to scattered and off-focus radiation for xenon gas detectors. Williams [Wil78] has discussed their use in commercial CT systems.

In a fixed-detector and rotating-source scanner (fourth generation) a large number of detectors are mounted on a fixed ring as shown in Fig. 4.11. Inside this ring is an x-ray tube that continually rotates around the patient. During this rotation the output of the detector integrators facing the tube is sampled every few milliseconds. All such samples for any one detector constitute what is known as a detector-vertex fan. (The fan beam data thus collected from a fourth-generation machine are similar to third-generation fan beam data.) Since the detectors are placed at fixed equiangular intervals around a ring, the data collected by sampling a detector are approximately equiangular, but not exactly so because the source and the detector rings must have different radii.
Generally, interpolation is used to convert these data into a more precise equiangular fan for reconstruction using the algorithms in Chapter 3. Note that the detectors do not have to be packed closely (more on this at the
Fig. 4.11: In a fourth-generation scanner an x-ray source rotates continuously around the patient. A stationary ring of detectors completely surrounds the patient. (From [Kak79].)
end of this section). This observation, together with the fact that the detectors are spread all around on a ring, allows the use of scintillation detectors as opposed to ionization gas chambers. Most scintillation detectors currently in use are made of sodium iodide, bismuth germanate, or cesium iodide crystals coupled to photo-diodes. (See [Der77a] for a comparison of sodium iodide and bismuth germanate.)

The crystal used for fabricating a scintillation detector serves two purposes. First, it traps most of the x-ray photons which strike the crystal, with a degree of efficiency which depends upon the photon energy and the size of the crystal. The x-ray photons then undergo photoelectric absorption (or Compton scatter with subsequent photoelectric absorption), resulting in the production of secondary electrons. The second function of the crystal is that of a phosphor, a solid which can transform the kinetic energy of the secondary electrons into flashes of light. The geometrical design and the encapsulation of the crystal are such that most of these flashes of light leave the crystal through a side where they can be detected by a photomultiplier tube or a solid state photo-diode.

A commercial scanner of the fourth-generation type uses 1088 cesium iodide detectors, and in each detector fan 1356 samples are taken. This particular system differs from the one depicted in Fig. 4.11 in one respect: the x-ray source rotates around the patient outside the detector ring. This makes
it necessary to nutate the detector ring so that measurements like those shown in the figure may be made [Haq78].

An important difference exists between the third- and the fourth-generation configurations. The data in a third-generation scanner are limited essentially in the number of rays in each projection, although there is no limit on the number of projections themselves; one can have only as many rays in each projection as the number of detectors in the detector array. On the other hand, the data collected in a fourth-generation scanner are limited in the number of projections that may be generated, while there is no limit on the number of rays in each projection. (It is now known that for good-quality reconstructions the number of projections should be comparable to the number of rays in each projection. See Chapter 5.)

In a fan beam rotating detector (third-generation) scanner, if one detector is defective the same ray in every projection gets recorded incorrectly. Such correlated errors in all the projections form ring artifacts [She77]. On the other hand, when one detector fails in a fixed detector ring type (fourth-generation) scanner, it implies a loss or partial recording of one complete projection; when a large number of projections are measured, the loss of one projection usually does not noticeably degrade the quality of a reconstruction [Shu77]. The reverse is true for changes in the x-ray source. In a third-generation machine, the entire projection is scaled and the reconstruction is not greatly affected, while in fourth-generation scanners source instabilities lead to ring artifacts. Reconstructions comparing the effects of one bad ray in all projections to one bad projection are shown in Fig. 4.12.

The very nature of the construction of a gas ionization detector in a third-generation scanner lends it a certain degree of collimation, which is a protection against receiving scatter radiation.
On the other hand, the detectors in a fourth-generation scanner cannot be collimated, since they must be capable of receiving photons from a large number of directions as the x-ray tube rotates around the patient. This makes fixed ring detectors more vulnerable to scattered radiation.

When conventional CT scanners are used to image the heart, the reconstruction is blurred because of the heart's motion during the data collection time. The scanners in production today take at least a full second to collect the data needed for a reconstruction, but a number of modifications have been proposed to the standard fan beam machines so that satisfactory images can be made [Lip83], [Mar82].

Certainly the simplest approach is to measure projection data for several complete rotations of the source and then use only those projections that occur during the same instant of the cardiac cycle. This is called gated CT and is usually accomplished by recording the patient's EKG as each projection is
¹ Although one may generate a very large number of rays by taking a large number of samples in each projection, useful information would be limited by the width of the focal spot on the x-ray tube and by the size of the detector aperture.
Fig. 4.12: Three reconstructions are shown here to demonstrate the ring artifact due to a bad detector in a third-generation (rotating detector) scanner. (a) shows a standard reconstruction with 128 projections and 128 rays. (b) shows a ring artifact due to scaling detector 80 in all projections by 0.995. (c) shows the effect of scaling all rays in projection 80 by 0.995.
measured. A full set of projection data for any desired portion of the EKG cycle is estimated by selecting all those projections that occur at or near the right time and then using interpolation to estimate those projections for which no data are available. More details of this procedure can be found in [McK81].

Notwithstanding interpolation, missing projections are a shortcoming of the gated CT approach. In addition, for angiographic imaging, where it is necessary to measure the flow of a contrast medium through the body, the movement is not periodic and the techniques of gated CT do not apply. Two new hardware solutions have been proposed to overcome these problems; in both schemes the aim is to generate all the necessary projections in a time interval that is sufficiently short that within it the object may be assumed to be in a constant state. In the Dynamic Spatial Reconstructor (DSR) described by Robb et al. in [Rob83], 14 x-ray sources and 14 large
circular fluorescent screens are used to measure a full set (112 views) of projections in a time interval of 0.127 second. In addition, since the x-ray intensity is measured on a fluorescent screen in two dimensions (and then recorded using video cameras), the reconstructions can be done in three dimensions. A second approach described by Boyd and Lipton [Boy83], [Pes85], and implemented by Imatron, uses an electron beam that is scanned around a circular anode. The circular anode surrounds the patient and the beam striking this target ring generates an x-ray beam that is then measured on the far side of the patient using a fixed array of detectors. Since the location of the x-ray source is determined completely by the deflection of the electron beam and the deflection is controlled electronically, an entire scan can be made in 0.05 second.
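Returning to gated CT: the select-then-interpolate step described earlier can be sketched as below. The data layout, the phase tolerance, and the edge handling are assumptions made for illustration, and wrap-around interpolation across the first and last views is omitted:

```python
def circ_dist(a, b):
    """Distance between two cardiac phases in [0, 1), taken on a circle."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def gated_projections(measured, n_views, target_phase, tol=0.05):
    """measured: list of (view_index, phase, ray_sums). Keep views whose
    phase is within tol of target_phase; fill the rest by linear
    interpolation between the nearest kept views."""
    kept = {v: d for v, p, d in measured
            if circ_dist(p, target_phase) <= tol}
    views = sorted(kept)
    full = []
    for v in range(n_views):
        if v in kept:
            full.append(kept[v])
            continue
        lo = max((x for x in views if x < v), default=None)
        hi = min((x for x in views if x > v), default=None)
        if lo is None or hi is None:            # edge view: copy nearest
            nearest = views[0] if hi is not None else views[-1]
            full.append(kept[nearest])
        else:                                   # elementwise interpolation
            w = (v - lo) / (hi - lo)
            full.append([(1 - w) * a + w * b
                         for a, b in zip(kept[lo], kept[hi])])
    return full
```

In a real gated study the view index would be a gantry angle and many rotations would contribute candidates for each view; the structure of the estimate, selection followed by interpolation, is the same.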
4.1.6 Applications
Certainly, x-ray tomography has found its biggest use in the medical industry. Fig. 4.13 shows an example of the fine detail that has made this type of imaging so popular. This image of a human head corresponds to an axial plane, and the subject's eyes, nose, and ear lobes are clearly visible. The
Fig. 4.13: This figure shows a typical x-ray tomographic image produced with a third-generation machine. (Courtesy of Carl Crawford of the General Electric Medical Systems Division in Milwaukee, WI.)
Journal of Computerized Tomography, for additional medical applications. Computerized tomography has also been applied to nondestructive testing (NDT) of materials and industrial objects. The rocket motor in Fig. 4.14 was examined by the Air Force-Aerojet Advanced Computed Tomography System I (AF/ACTS-I)* and its reconstruction is shown in Fig. 4.15. In the reconstruction, the outer ring is a PVC pipe used to support the motor, a grounding wire shows in the upper left as a small circular object, and the large mass with the star-shaped void represents solid fuel propellant. Several anomalies in the propellant are indicated with square boxes.
Fig. 4.14: A conventional photograph is shown here of a solid fuel rocket motor studied by the Aerojet Corporation. (Courtesy of Jim Berry and Gary Cawood of Aerojet Strategic Propulsion Company.)
This project was sponsored by Air Force Wright Aeronautical Laboratories, Air Force Materials Laboratory, Air Force Systems Command, United States Air Force, Wright-Patterson AFB, OH.
Fig. 4.15: A cross section of the motor in Fig. 4.14 is shown here. The white squares indicate flaws in the rocket propellant. (Courtesy of Aerojet Strategic Propulsion Company.)
An Optical Society of America meeting on Industrial Applications of Computerized Tomography described a number of unique applications of CT [OSA85]. These include imaging of core samples from oil wells [Wan85], quality assurance [All85], [Hef85], [Per85], and noninvasive measurement of fluid flow [Sny85] and flame temperature [Uck85].
cross section changes with time due to radioactive decay, flow, and biochemical kinetics within the body. This implies that all the data for one cross-sectional image must be collected in a time interval that is short compared to the time constant associated with the changing concentration. But this aspect also gives emission CT its greatest potential and utility in diagnostic medicine, because by analyzing images taken at different times for the same cross section we can determine the functional state of various organs in a patient's body.

Emission CT is of two types: single photon emission CT and positron emission CT. The word "single" in the former refers to the product of the radioactive decay, a single photon, while in positron emission CT the decay produces a single positron. After traveling a short distance the positron comes to rest and combines with an electron. The annihilation of the emitted positron results in two oppositely traveling gamma-ray photons. We will first discuss CT imaging of (single) gamma-ray photon emitters.
Fig. 4.16: In single photon emission tomography a distributed source of gamma-rays is imaged using a collimated detector. (From [Kak79].)
Fig. 4.17: Axial SPECT images showing the concentration of iodine-123 at four cross-sectional planes are shown here. The 64 x 64 reconstructions were made by measuring 128 projections, each with 64 rays. (The images were produced on a General Electric 4000T/Star and are courtesy of Grant Gullberg of General Electric in Milwaukee, WI.)
an infinitely long time to make a statistically meaningful observation.) Then clearly the total number of photons recorded by the detector in a meaningful time interval is proportional to the total concentration of the emitter along the line defined by R1R2. In other words, it is a ray integral as defined in Chapter 3. By moving the detector-collimator assembly to an adjacent position laterally, one may determine this integral for another ray parallel to R1R2. After one such scan is completed, generating one projection, one may either rotate the patient or the detector-collimator assembly and generate other projections. Under ideal conditions it should be possible to generate the projection data required for the usual reconstruction algorithms.

Figs. 4.17 and 4.18 show, respectively, axial and sagittal SPECT images of a head. The axial images are normal CT reconstructions at different cross-sectional locations, while the images of Fig. 4.18 were found by reformatting the original reconstructed images into four sagittal views. The reconstructions are 64 x 64 images representing the concentration of an amphetamine tagged with iodine-123. The measured data for these reconstructions consisted of 128 projections (over 360°), each with 64 rays.

As the reader might have noticed already, the images in Figs. 4.17 and 4.18 look blurry compared to the x-ray CT images as exemplified by the reconstructions in Fig. 4.13. To get better resolution in emission CT, one might consider using more detectors to provide finer sampling of each projection; unfortunately, that would mean fewer events per detector and thus a diminished signal-to-noise ratio at each detector. One could consider increasing the dosage of the radioactive isotope to enhance the signal-to-noise ratio, but that is limited by what the body can safely absorb. The length of
Fig. 4.18: The reconstructed data in Fig. 4.17 were reformatted to produce the four sagittal images shown here. (The images were produced on a General Electric 4000T/Star and are courtesy of Grant Gullberg of General Electric in Milwaukee, WI.)
time over which the events are integrated could also be prolonged for an increased signal-to-noise ratio, but usually that is constrained by body motion [Bro81].

A serious difficulty with tomographic imaging of a gamma-ray emitting source is caused by the attenuation that photons suffer during their travel from the emitting nuclei to the detector.³ The extent of this attenuation depends upon both the photon energy and the nature of the tissue. Consider two elemental sources of equal strength at points A and B in Fig. 4.16: because of attenuation, the detector will find the source at A stronger than the one at B. The effect of attenuation is illustrated in Fig. 4.19, which shows reconstructions of a disk phantom for three different values of the attenuation, μ = 0.05, 0.11, and 0.15 cm⁻¹, obtained by using three different media in the phantom. The original disk phantom is also shown for comparison. (These reconstructions were done using 360° of projection data.) A number of different approaches for attenuation compensation have been developed; these will be briefly discussed in the following section.
Fig. 4.19: Reconstructions of a gamma-ray emitting disk phantom are shown in (a) for different values of attenuation. (b) shows a quantitative comparison of the reconstructed values on the center line. (Courtesy of T. Budinger.)
coefficient. Let ρ(x, y) denote the source distribution in a desired cross section. In the absence of any attenuation the projection data Pθ(t) are given from Chapter 3 by

Pθ(t) = ∫∫ ρ(x, y) δ(x cos θ + y sin θ - t) dx dy.    (24)

However, in the presence of attenuation this relationship must be modified to include an exponential attenuation term, exp [-μ(d - s)], where, as shown in Fig. 4.20, s = -x sin θ + y cos θ and d = d(t, θ) is the distance from the line CC′ to the edge of the object. Thus the ray integral actually measured is given by

Pθ(t) = ∫∫ ρ(x, y) exp [-μ(d - s)] δ(x cos θ + y sin θ - t) dx dy.    (25)
For convex objects the distance d, which is a function of t and θ, can be determined from the external shape of the object. We can now write

Sθ(t) = Pθ(t) exp [μd] = ∫∫ ρ(x, y) exp [-μ(x sin θ - y cos θ)] δ(x cos θ + y sin θ - t) dx dy.    (26)
The function Sθ(t) has been given the name exponential Radon transform. In [Tre80], Tretiak and Metz have shown that

b(r, φ) = ∫_0^{2π} [ ∫ Sθ(t′) h(r cos (θ - φ) - t′) dt′ ] exp [μ r sin (θ - φ)] dθ    (27)

is an attenuation compensated reconstruction of ρ(x, y), provided the convolving function h(t) is chosen such that the point spread function of the system, given by

p(r, φ) = ∫_0^{2π} h(r cos (θ - φ)) exp [μ r sin (θ - φ)] dθ,    (28)

fits some desired point spread function (ideally a delta function, but in practice a low pass filtered version of a delta function). Note that because the integration in (28) is over one period of the integrand (considered as a function of θ), the point spread function is independent of φ, which makes it radially symmetric. Good numerical approximations to h(t) are presented in [Tre80], where Tretiak and Metz have also provided analytical solutions for h(t). Note that (27) possesses a filtered backprojection implementation very similar to that described in Chapter 3. Each modified projection Sθ(t) is first convolved with the function h(t); the resulting filtered projections are then backprojected as discussed before. For each θ the backprojected contribution at a given pixel is multiplied by the exponential weight exp [μ r sin (θ - φ)]. Budinger and his associates have done considerable work on incorporating
Fig. 4.20: Several parameters for attenuation correction are shown here. (From [Kak79].)
attenuation compensation in their iterative least squares reconstruction techniques [Bud76]. In these procedures one approximates the image to be reconstructed by a grid as shown in Fig. 4.21, and an assumption is made that the concentration of the nuclide is constant within each grid block, the concentration in block m being denoted by ρ(m). In the absence of attenuation, the projection measured at a sampling point tk with projection angle θj is given by

Pθj(tk) = Σm ρ(m) fθj,k(m)    (29)
Fig. 4.21: This figure shows the grid representation for a source distribution. The concentration of the source is assumed to be constant in each grid square. (From [Kak79].)
where fθj,k(m) is a geometrical factor equal to that fraction of the mth block that is intercepted by the kth ray in the view at angle θj. (The above equation may be solved by a variety of iterative techniques [Ben70], [Goi72], [Her71].) Once the problem of image reconstruction is set up as in (29), one may introduce attenuation compensation by simply modifying the geometrical factors as shown here:

Pθj(tk) = Σm ρ(m) fθj,k(m) exp (-μ Pm)    (30)

where Pm is the distance from the center of the mth cell to the edge of the reconstruction domain in the view θj. The above equations could be solved, as any set of simultaneous equations, for the unknowns ρ(m).
Unfortunately, this rationale is flawed: in actual practice the attenuating path length for the mth cell does not extend all the way to the detector or, for that matter, even to the end of the reconstruction domain. For each cell and for a given ray passing through that cell, it extends only to the edge of the object along that ray. To incorporate this knowledge in attenuation compensation, Budinger and Gullberg [Bud76] have used an iterative least squares approach. They first reconstruct the emitter concentration ignoring the attenuation. This reconstruction is used to determine the boundaries of the object by means of an edge detection algorithm. With this information the attenuation factors exp (-μPm) can now be calculated, where Pm is now the distance from the mth pixel to the edge of the object along a line at θ + 90°. The source concentration is then calculated using the least squares approach. This method, therefore, requires two reconstructions. Also required is a large storage file for the coefficients fθj,k(m). For other approaches to attenuation compensation the reader is referred to [Bel79], [Cha79a], [Cha79b], [Hsi76].
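The attenuation-compensated projection equations form a set of linear equations in the unknown concentrations, and one classical way to solve such a system is the Kaczmarz (ART) iteration. The sketch below uses a tiny hypothetical three-cell system whose matrix entries stand for the products of a geometric fraction and an attenuation factor exp(-μPm); all numerical values are illustrative only:

```python
def kaczmarz(rows, b, n_unknowns, sweeps=500):
    """Kaczmarz iteration: project the current estimate onto each
    hyperplane <a, x> = b_i in turn; converges for consistent systems."""
    x = [0.0] * n_unknowns
    for _ in range(sweeps):
        for a, bi in zip(rows, b):
            norm = sum(v * v for v in a)
            if norm == 0.0:
                continue
            lam = (bi - sum(v * xi for v, xi in zip(a, x))) / norm
            x = [xi + lam * v for xi, v in zip(x, a)]
    return x

# Hypothetical 3-cell example: each entry is a geometric fraction already
# multiplied by its attenuation factor, as in the compensated equation.
rows = [[0.9, 0.4, 0.0],
        [0.0, 0.8, 0.3],
        [0.5, 0.0, 0.7]]
rho_true = [1.0, 2.0, 3.0]
b = [sum(v * r for v, r in zip(a, rho_true)) for a in rows]
```

Here kaczmarz(rows, b, 3) converges to rho_true; in a real scanner each row corresponds to one ray in one view, and the update is applied ray by ray over all projections.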
Fig. 4.22: In positron emission tomography the decay of a positron/electron pair is detected by a pair of photons. Since the photons are released in opposite directions, it is possible to determine which ray the decay came from and measure a projection. (From [Kak79].) [Figure labels: the annihilation produces two photons, each of energy 511 keV, traveling in opposite directions.]
Fig. 4.23: Two detectors and a coincidence testing circuit are used to determine the location of a positron emission. Arrival of coincident photons at the detectors D1 and D2 implies that there was a positron emission somewhere on the line AA′. This is known as electronic collimation. (From [Kak79].)
MeV positrons traverse 4 mm and 2.5 cm, respectively, in water before annihilation. Therefore, for accurate localization it is important that the emitted positrons have as little kinetic energy as possible. In practice, this desirable property of a positron emitting compound has to be balanced against the competing requirement that, if the positron emission process is to dominate over other competing processes such as electron capture decay, the decay energy must be sufficiently large, which leads to large positron kinetic energy.

The fact that the annihilation of a positron leads to two gamma-ray photons traveling in opposite directions forms the basis of a unique way of detecting positrons. Coincident detection by two physically separated detectors of two gamma-ray photons locates a positron emitting nucleus on a line joining the two detectors. Clearly, a few words about coincident detection are in order. Recall that in emission work each photon is detected separately and therefore treated as a distinct entity (hence the name "event" for the arrival of a photon). Now suppose the detectors D1 and D2 in Fig. 4.23(a) record two photons simultaneously (i.e., in coincidence); that would indicate a positron annihilation on the line joining AA′. We have used the phrase "simultaneous detection" here in spite of the fact that the distances SA and SA′ may not be equal. The coincidence resolving time of the circuits that check whether the two photons have arrived simultaneously is usually on the order of 10 to 25 ns, a sufficiently long interval of time to make path difference considerations unimportant. This means that if the two annihilation photons arrive at the two detectors within this time interval, they are considered to be in coincidence.

Positron devices have one great advantage over the single photon devices discussed in the preceding subsection, that is, electronic collimation. This is
illustrated by Fig. 4.23(b). Let us say we have a small volume of a positron emitting source at location S1 in the figure. All the annihilation photons emitted into the conical volume A1S1A2 have counterparts emitted into the volume A3S1A4, which miss the detector D2 completely. Clearly then, with coincident detection, the source S1 will not be detected at all with this detector pair. On the other hand, the source located at S2 will be detected. Note that, by the same token, if the same small source is located at S3 it will be detected with a slightly reduced intensity (and, therefore, sensitivity) because of its off-center location. (This effect contributes to the spatial variance of the point spread function of positron devices.) To appreciate this electronic collimation, the reader should bear in mind that if we had used the detectors D1 and D2 as ordinary (meaning noncoincident) gamma-ray detectors with no collimation, we wouldn't have been able to differentiate between the sources at locations S1 and S2 in the figure. The property of electronic collimation discussed here was first pointed out in 1951 by Wrenn et al. [Wre51], who also pointed out how it might be somewhat influenced by background scatter.

It is easy to see how the projection data for positron emission CT might be generated. In Fig. 4.23, if we ignore variations in the useful solid angle subtended at the detectors by various point sources within A1A2A5A6 (and, also, if for a moment we ignore attenuation), then it is clear that the total number of coincident counts by detectors D1 and D2 is proportional to the integral of the concentration of the positron emitting compound over the volume A1A2A5A6. This by definition is a ray integral in a projection, provided the width τ shown in the figure is sufficiently small. This principle has been incorporated in many positron scanners.
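The coincidence test described above, two arrivals within the resolving time, can be sketched in a few lines. The window value and the event lists are assumptions for illustration:

```python
RESOLVING_NS = 15.0   # assumed coincidence resolving time (10-25 ns typical)

def coincidences(arrivals_d1, arrivals_d2, window=RESOLVING_NS):
    """Return pairs of arrival times (in ns) at detectors D1 and D2 that
    fall within the resolving window; each pair is taken to mark a
    positron annihilation on the chord joining the two detectors."""
    return [(t1, t2)
            for t1 in sorted(arrivals_d1)
            for t2 in sorted(arrivals_d2)
            if abs(t1 - t2) <= window]
```

Note that the path-length difference between SA and SA′ amounts to at most a few nanoseconds of arrival-time difference and is therefore absorbed by the window.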
As an example, the detector arrangement in the positron system (PETT) developed originally at Washington University by Ter-Pogossian and his associates [Hof76] is shown in Fig. 4.24(a). The system uses six detector banks, each containing eight scintillation detectors. Each detector is operated in coincidence with all the detectors in the opposite bank. For finer sampling of the projection data, and also to generate more views, the entire detector gantry is rotated around the patient in 3° increments over an arc of 60°, and for each angular position the gantry is also translated over a distance of 5 cm in 1-cm increments. A multislice version of this scanner is described in [Ter78a] and [Mul78]. These scanners have formed the basis for the development of the Ortec ECAT [Phe78]. Many other scanners [Boh78], [Cho76], [Cho77], [Der77b], [Ter78b], [Yam77] use a ring detector system, a schematic of which is shown in Fig. 4.24(b). Derenzo [Der77a] has given a detailed comparison of sodium iodide and bismuth germanate crystals for such ring detector systems. The reader will notice that the detector configuration in a positron ring system is identical to that used in the fixed-detector x-ray CT scanners described in Section 4.1. Therefore, by placing a rotating x-ray source inside the ring in Fig. 4.24(b) one can have a dual-purpose scanner, as proposed by Cho [Cho78]. The reader is also referred to [Car78a] for a characterization of the
144 COMPUTERIZED TOMOGRAPHIC IMAGING
Fig. 4.24: (a) Detector arrangement in the PETT III scanner. (b) A ring detector system for positron cameras. Each detector in the ring works in coincidence with a number of the other detectors. (From [Kak79].)
performance of positron imaging systems and to [Bud77] for a comparison of positron tomography with single photon gamma-ray tomography. While our discussion here has focused on reconstructing two-dimensional distributions of positron concentration (from the one-dimensional projection data), by using planar arrays for recording coincidences there have also been attempts at direct reconstruction of the three-dimensional distribution of positrons [Chu77], [Tam78].
⁴ On the other hand, one of the disadvantages of positron emission CT in relation to single gamma-ray emission CT is that the dose of radiation delivered to a patient from the administration of a positron emitting compound (radionuclide) includes, in addition to the contribution from the annihilation radiation, that contributed by the kinetic energy of the positrons.
MEASUREMENT OF PROJECTION DATA 145
Fig. 4.25: A photon emitted at S and traveling toward the D1 detector is attenuated over a distance of L1 − L, while a photon traveling toward the D2 detector undergoes an attenuation proportional to L − L2. (From [Kak79].)
where μ(x) is the attenuation coefficient at 511 keV as a function of distance along the line joining the two detectors. Similarly, the probability of the photon γ2 reaching the detector D2 is given by

exp [ −∫_{L2}^{L} μ(x) dx ].
Then the probability that this particular annihilation will be recorded by the detectors is given by the product of the above two probabilities

exp [ −∫_{L}^{L1} μ(x) dx ] · exp [ −∫_{L2}^{L} μ(x) dx ]   (33)

which is equal to

exp [ −∫_{L2}^{L1} μ(x) dx ].   (34)
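A quick numerical check of (33) and (34), using a made-up attenuation profile μ(x) (all values in this sketch are hypothetical):

```python
import numpy as np

# Tissue occupies positions x in [l2, l1] along the line joining D2 and D1.
l2, l1 = 0.0, 20.0                           # cm, hypothetical geometry
xs = np.linspace(l2, l1, 2001)
mu = 0.095 * (1 + 0.3 * np.sin(0.7 * xs))    # made-up mu(x), 1/cm

def survival(a, b):
    """exp(-integral of mu from a to b), by the trapezoidal rule."""
    m = (xs >= a) & (xs <= b)
    x, y = xs[m], mu[m]
    return np.exp(-np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# The product of the two photons' survival probabilities (eq. (33)) equals
# the full-path factor (eq. (34)), wherever the annihilation occurs.
full_path = survival(l2, l1)
for L in [2.0, 10.0, 17.5]:                  # annihilation positions on grid
    assert abs(survival(L, l1) * survival(l2, L) - full_path) < 1e-9
```

The factor is therefore a property of the ray alone, not of the annihilation position along it.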
This is a most remarkable result because, first, this attenuation factor is the same no matter where the positron annihilation occurs on the line joining D1 and D2, and, second, the factor above is exactly the attenuation that a beam of monoenergetic photons at 511 keV would undergo in propagating from L1 at one side to L2 at the other. Therefore, one can readily compensate for attenuation by first doing a transmission study (one does not have to do a
reconstruction in this study) to record the total transmission loss for each ray in each projection. Then, in the positron emission study, the data for each ray can simply be attenuation compensated by dividing by this transmission loss factor. This method of attenuation compensation has been used in the PETT and other [Bro78] positron emission scanners. There are other approaches to attenuation compensation in positron CT [Cho77]. For example, at 511-keV photon energy, a human head may be modeled as possessing constant attenuation (which is approximately equal to that of water). If in a head study the head is surrounded by a water bath, the attenuation factor given by (34) may now be easily calculated from the shape of the water bath [Eri76].
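A minimal sketch of this compensation, with entirely hypothetical numbers: a transmission study supplies one loss factor per ray, and the emission data are divided by it:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rays = 8

# Hypothetical emission line integrals and per-ray transmission factors
# (the latter would come from a separate transmission study).
true_ray_integrals = rng.uniform(10.0, 50.0, n_rays)
transmission = rng.uniform(0.05, 0.4, n_rays)   # exp(-integral of mu) per ray

# Coincidence data are attenuated by the same factor as a transmitted
# 511-keV beam, independent of where on the ray the annihilation occurred.
measured = true_ray_integrals * transmission

# Compensation: divide each ray's emission measurement by its loss factor.
compensated = measured / transmission
assert np.allclose(compensated, true_ray_integrals)
```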
propagates through tissue, it undergoes a deflection at every interface between tissues of different refractive indices. Carson et al. [Car77] have discussed some of the distortions introduced in a CT image by hard tissues such as bone. (For a computer simulation study of these distortions, see [Far78].) It has been suggested [Joh75] that perhaps we could correct for refraction by using the following iterative scheme: we could first reconstruct a refractive index tomogram ignoring refraction; rays could then be digitally traced through this tomogram, indicating the propagation paths; these curved paths could then be used for a subsequent reconstruction, and the process repeated. Another possible approach is to use inverse scattering solutions to the problem [Iwa75], [Mue80]. Both of these approaches will be discussed in later chapters. The problem of tomographic imaging of hard tissues with ultrasound remains unsolved. In this section we will assume that we are only dealing with soft-tissue structures. (The refraction effects are much smaller now and can generally be ignored.) An important application of this case is ultrasonic tumor detection in the female breast [Car78b], [Gre78], [Gre81], [Sch84]. Our review here will only deal with transmission ultrasound. Recently it has been shown theoretically that it is also possible to achieve (computed) tomographic imaging with reflected ultrasound [Nor79a], [Nor79b]. Clinical verification of this new technique has yet to be carried out. (See Chapter 8 for more information.)
Fig. 4.26: As an ultrasonic beam travels between two transducers, (a) it undergoes a phase shift in water over a distance of ℓw1 + ℓw2, and both a phase shift and an attenuation due to the object. In (b) the beam undergoes a phase shift as it goes through the water alone, and in (c) the beam travels through both the water and a multilayered object. (From [Kak79].)
3) the transmittance of the front surface of the tissue, or the percentage of energy in the water that is coupled into the tissue, τ1; 4) the attenuation, e^{−α(f)ℓ}, and phase change, e^{−jβ(f)ℓ}, caused by the layer of tissue; 5) the transmittance of the rear surface of the tissue, or the percentage of energy in the tissue that is coupled into the water, τ2; 6) the attenuation, e^{−αw(f)ℓw2}, and phase change, e^{−jβw(f)ℓw2}, caused by the water on the far side of the tissue; 7) the receiver transfer function relating a pressure wave to the resulting electrical signal, H2(f).

We will assume the center frequency of the transducers is high enough so that beam divergence may be neglected. (If the center frequency is too low, the transmitted wavefront will diverge excessively as it propagates toward the receiver; the resulting loss of signal would then have to be compensated for by another factor.) With these assumptions the Fourier transform Y(f) of the received signal y(t) is related to X(f), the Fourier transform of the signal x(t), as follows [Din76], [Kak78]:

Y(f) = X(f)H1(f)H2(f)A0 exp [ −(α(f) + jβ(f))ℓ ] exp [ −(αw(f) + jβw(f))ℓw ]   (35)

where

ℓw = ℓw1 + ℓw2   (36)

ℓw1 and ℓw2 being water path lengths on the two sides of the tissue layer; α(f)
and β(f) are the attenuation and phase coefficients, respectively, of the tissue layer; αw(f) and βw(f) are the corresponding coefficients for the water medium; H1(f) and H2(f) are, respectively, the transfer functions of the transducers T1 and T2. In the above equation A0 is given by
A0 = τ1 · τ2   (37)
where τ1 and τ2 are, respectively, the transmittances of the front and the back faces of the layer. In order not to concern ourselves with the transducer properties, as depicted by the functions H1(f) and H2(f), we will always normalize the received signal y(t) by the direct water path signal yw(t); see Fig. 4.26(b). Clearly,

Yw(f) = X(f)H1(f)H2(f) exp [ −(αw(f) + jβw(f))(ℓ + ℓw) ]   (38)

where Yw(f) is the Fourier transform of yw(t). Dividing (35) by (38) gives

Y(f) = Yw(f)A0 exp [ −(α(f) − αw(f))ℓ ] exp [ −j(β(f) − βw(f))ℓ ].   (39)
In most cases, the attenuation coefficient of water is much smaller than that of tissue [Din79b] and may simply be neglected. Therefore,

Y(f) = Yw(f)A0 exp [ −α(f)ℓ ] exp [ −j(β(f) − βw(f))ℓ ].   (40)
Extending this rationale to multilayered objects such as the one shown in Fig. 4.26(c), we get for the Fourier transform Y(f) of the received signal

Y(f) = X(f)H1(f)H2(f)AN exp [ −∫ (α(x, f) + jβ(x, f)) dx ] exp [ −(αw(f) + jβw(f))ℓw ]   (41)
where AN = τ1τ2τ3 ··· τN (τi being the transmittance at the ith interface) and where α(f) and β(f) have been replaced by α(x, f) and β(x, f) since, now, they are functions of position along the path of propagation. This equation corresponds to (35) for the single layer case. Combining it with (38) and again ignoring the attenuation of water, we get

Y(f) = Yw(f)AN exp [ −∫ α(x, f) dx ] · exp [ −j2πf ∫ (1/V(x) − 1/Vw) dx ]   (42)
where we have ignored dispersion in each layer (it is very small for soft tissues [Wel77]) and expressed β(x, f) and βw(f) as 2πf/V(x) and 2πf/Vw, respectively. V(x) and Vw are propagation velocities in the layer at x and in the water, respectively.
We may consider y′w(t) to be an attenuated water path signal. This is the hypothetical signal that would be received if it underwent the same loss as the actual signal going through tissue. By the shift property, the relationship depicted in (42) may be expressed as

y(t) = y′w(t − Td)   (44)

where

Td = ∫ [1/V(x) − 1/Vw] dx = (1/Vw) ∫ [n(x) − 1] dx   (45)

and

n(x) = Vw/V(x).   (46)
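The equivalence of the two forms in (45) can be checked numerically; the velocity profile below is invented for illustration:

```python
import numpy as np

Vw = 1500.0                              # m/s, speed of sound in water
xs = np.linspace(0.0, 0.05, 5001)        # a 5-cm path through tissue
dx = xs[1] - xs[0]
V = 1560.0 + 20.0 * np.sin(200.0 * xs)   # made-up tissue velocity profile

n = Vw / V                               # refractive index, eq. (46)

Td_direct = np.sum(1.0 / V - 1.0 / Vw) * dx      # first form of eq. (45)
Td_index = np.sum(n - 1.0) * dx / Vw             # second form of eq. (45)
assert abs(Td_direct - Td_index) < 1e-15

# Soft tissues are mostly faster than water (n < 1), so Td comes out negative.
assert Td_direct < 0
```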
The relationship among the signals x(t), yw(t), y′w(t), and y(t) is also depicted in Fig. 4.27. As implied by our discussion on refraction, in the actual tomographic imaging of soft biological tissues the assumptions made above regarding the propagation of a sound beam are only approximately satisfied. In propagating through a complex tissue structure, the interfaces encountered are usually not perpendicular to the beam. However, since the refractive index variations in soft tissues are usually less than 5%, the beam bending effects are usually not that serious, especially so at the resolution with which the projection data are currently measured. But minor geometrical distortions are still introduced. For example, when the projection data are taken with a simple scan-rotate configuration, a round disk-like soft-tissue phantom with a refractive index less than one would appear larger by about 3 to 5% as a result of such distortion.
∫_A^B [1 − n(x, y)] ds = −VwTd.   (47)
Fig. 4.27: The phase shift and the attenuation of an ultrasonic signal, x(t), as it travels through water, yw(t), and is attenuated, y′w(t), and then phase shifted by the object, y(t), are shown here. (From [Kak79].)
(1 − n(x, y)), and hence, from such measurements we may reconstruct 1 − n(x, y) (or n(x, y)). Note that one usually makes the image for 1 − n(x, y) rather than n(x, y) itself. This is to ensure that in the reconstructed image the numerical values reconstructed for the background are zero, since the refractive index of water is 1. In (47) Td is positive if the transit time through the tissue structure is longer than the transit time through the direct water path. Usually the opposite is the case, since most tissues are faster than water. Therefore, most often Td is negative, making the right-hand side of the above equation positive. Measuring the time of flight (TOF) of an ultrasonic pulse is generally done by thresholding the received signal and measuring the time between the source excitation and the first time the received signal is larger than the threshold. Since acoustic energy travels at approximately 1500 m/s in water, the TOF measured is on the order of 100 μs and is easily measured with fairly straightforward digital hardware. More details of this process and preprocessing algorithms that can be used to clean up the projection data are described in [Cra82]. A refractive index reconstruction made for a Formalin-fixed dog heart is shown in Fig. 4.29.⁵ After this and other experiments reported in this section, the heart was cut at the level chosen; the cut section is shown in Fig. 4.30. The reconstruction shown here was made with only 18 measured projections (which were then extrapolated to 72; see [Din76]) and 56 rays in each projection.

Fig. 4.28: In ultrasound refractive index tomography the time it takes for an ultrasound pulse to travel between points A and B is measured. (From [Kak79].)
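The threshold-based TOF measurement can be sketched as follows; the pulse shape, sampling rate, and threshold are all hypothetical:

```python
import numpy as np

fs = 50e6                                # 50-MHz sampling rate
t = np.arange(0, 120e-6, 1 / fs)         # a 120-us record
true_tof = 100e-6                        # ~100-us water-path transit time

# A short 2-MHz pulse with a Gaussian envelope arriving at true_tof.
f0 = 2e6
envelope = np.exp(-((t - true_tof - 1e-6) / 1e-6) ** 2)
signal = envelope * np.sin(2 * np.pi * f0 * (t - true_tof))
signal[t < true_tof] = 0.0               # nothing arrives before the pulse

# TOF estimate: first sample whose magnitude exceeds the threshold.
threshold = 0.2
measured_tof = t[np.argmax(np.abs(signal) > threshold)]

# The estimate is close to (and slightly later than) the true arrival time.
assert 0 <= measured_tof - true_tof < 2e-6
```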
is a good approximation in the low MHz range. Clearly now, instead of reconstructing the attenuation coefficient α(x, y, f) one can reconstruct the parameter α0(x, y). To the extent the above approximation applies, α0(x, y) completely characterizes the attenuation properties of the soft tissue at location (x, y). In order to obtain a tomogram for α0(x, y), we need projection data with each ray being given by

∫_ray α0(x, y) ds.   (49)
The path of integration could, for example, be the ray AB in Fig. 4.28. We will call the above integral the integrated attenuation coefficient, although it must be multiplied by a frequency in order to get ∫ α(x, y, f) ds at that frequency. A number of different techniques for measuring the integrated attenuation coefficient using broadband pulsed ultrasound are presented in [Kak78]. In
⁵ The reconstructions of a dog heart presented here are not meant to imply the suitability of computerized ultrasonic tomography for in vivo cardiovascular imaging. Air in the lungs and refraction due to the surrounding rib cage would preclude that as a practical possibility. Ultrasonic tomography of the female breast for tumor detection would be an ideal candidate for such techniques. The reconstructions presented were done on dog hearts because of their easy availability.

⁶ CW is an abbreviation for continuous wave. Pulsed CW means that the signal is a few cycles of a continuous sinusoid.
Fig. 4.29: A refractive index reconstruction of the dog heart. (From [Kak79].)
what follows we will list some of these techniques with brief descriptions and show reconstructions obtained by using them.

i) Energy-Ratio Method: It has been shown in [Kak78] that

∫_ray α0(x, y) ds = [1/(2(f2 − f1))] ln (E1/E2)   (50)
where E1 and E2 are, respectively, weighted energies in frequency bands (f1 − Ω, f1 + Ω) and (f2 − Ω, f2 + Ω) of the transfer function of the tissue structure along the desired ray. The transfer function, H(f), is defined by
Fig. 4.30: After data collection the dog heart was cut at the level for which reconstructions were made. (From [Kak79].)
H(f) = YN(f)/XN(f)   (51)
where YN(f) and XN(f) are Fourier transforms of the signals yN(t) and xN(t), respectively (Fig. 4.26(c)). One can show that in terms of the experimentally measured signals y(t) and yw(t) [Din79b]:

(52)

In terms of the function H(f) (Fig. 4.31):

E1 = ∫_{f1−Ω}^{f1+Ω} |X(f − f1)|² |H(f)|² df   (53)

and

E2 = ∫_{f2−Ω}^{f2+Ω} |X(f − f2)|² |H(f)|² df   (54)
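Assuming the magnitude of the tissue transfer function falls off exponentially with frequency, |H(f)| = AN e^{−af}, with a the integrated attenuation coefficient, the energy-ratio computation can be sketched as follows (the band centers, band width, and raised-cosine weighting are illustrative choices):

```python
import numpy as np

a_true = 0.8e-6                  # integrated attenuation coefficient (1/Hz)
A_N = 0.35                       # product of interface transmittances

f1, f2 = 2.0e6, 4.0e6            # band centers (Hz)
omega = 0.5e6                    # half-width of each band (Hz)

def band_energy(fc):
    """Weighted band energy of |H(f)|^2, as in eqs. (53)-(54)."""
    f = np.linspace(fc - omega, fc + omega, 2001)
    weight = (1 + np.cos(np.pi * (f - fc) / omega)) / 2   # |X(f - fc)|^2
    H2 = (A_N * np.exp(-a_true * f)) ** 2                 # |H(f)|^2
    return np.sum(weight * H2) * (f[1] - f[0])

E1, E2 = band_energy(f1), band_energy(f2)
a_est = np.log(E1 / E2) / (2 * (f2 - f1))

assert abs(a_est - a_true) / a_true < 1e-6
```

Because the transmittance factor AN multiplies both bands identically, it cancels in the ratio E1/E2.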
Fig. 4.31: H(f) is the transfer function of the tissue structure. The weighted integrals of |H(f)|² over the two intervals shown give E1 and E2. (From [Kak79].)
where X(f) is any arbitrary weighting function. The weighting function can be used to emphasize those frequencies at which there is more confidence in the calculation of H(f). A major advantage of the energy-ratio method is that the calculation of the integrated attenuation coefficient doesn't depend upon knowledge of the transmittances (as incorporated in the factor AN). To the extent this calculation doesn't depend on the magnitude of the received signal (but only on its spectral composition) this method should also be somewhat insensitive to the partial loss of signal caused by beam refraction. The extent of this insensitivity is not yet known. A reconstruction using this method is shown in Fig. 4.32.

ii) Division of Transforms Followed by Averaging Method: Let

HA(f) = [1/(f2 − f1)] ln |Yw(f)/Y(f)|   (55)
and define

F(f1, f2, Ω1, Ω2) = [1/(2Ω2)] ∫_{f2−Ω2}^{f2+Ω2} HA(f) df − [1/(2Ω1)] ∫_{f1−Ω1}^{f1+Ω1} HA(f) df.   (56)

It can then be shown that

∫_ray α0(x, y) ds = F.   (57)
Again, the method is independent of the value of the transmittances at tissue-tissue and tissue-medium interfaces. The method may also possess some immunity to noise because of the integration in (56). In Fig. 4.33 a reconstruction for the dog heart is shown using this method. The level chosen was the same as that for the refractive index tomogram.

iii) Frequency-Shift Method: From the standpoint of data processing the above two methods suffer from a disadvantage. In order to use them one must determine the transfer function H(f) from the recorded waveforms y(t) and yw(t) for each ray. This requires that for each ray the entire time signal y(t) be digitized and recorded, and this may take anywhere from 100 to 300 samples depending upon the maximum frequency (above the noise level) in the acoustic pulse produced by the transmitting transducer. This is in marked contrast to the case of x-ray tomography where for each ray one records only one number, i.e., the total number of photons arriving at the detector during the measurement time interval.
Fig. 4.33: An attenuation reconstruction of the dog heart obtained from the averages of the function HA(f). (From [Kak79].)
In the frequency-shift method the integrated attenuation coefficient is measured by measuring the center frequencies of the direct water path signal yw(t) and the signal received after transmission through tissue, y(t). The relationship is [Din79b]
∫_ray α0(x, y) ds = (f0 − fr)/σ²   (58)
where f0 is the frequency at which Yw(f) is a maximum and fr is that at which Y(f) is a maximum; σ² is a measure of the width of the power spectrum of Yw(f). For a precise implementation this method also requires that the entire waveform y(t) be recorded for each ray. However, we are speculating that it might be possible to construct some simple circuit that could be attached to the receiving transducer, the output of which would directly be fr [Nap81]. (Such a circuit could estimate, perhaps suboptimally, the frequency fr from the zeros and locations of maxima and minima of the waveforms.) The center frequency f0 needs to be determined only once for an experiment, so it shouldn't pose any logistical problems. In Fig. 4.34 we have shown a reconstruction using this method. The reconstruction was made from the same data that were recorded for the preceding two experiments.
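A sketch of the frequency-shift estimate: for a Gaussian water-path spectrum, attenuation growing linearly with frequency shifts the spectral peak down by aσ². The particular width convention for σ² used below (the variance parameter in the exponent of the magnitude spectrum) is an assumption of this sketch, and all numbers are hypothetical:

```python
import numpy as np

f = np.linspace(0.1e6, 8e6, 80001)           # frequency grid (Hz)
f0 = 3.5e6                                   # water-path center frequency
sigma2 = 0.6e12                              # spectral width parameter (Hz^2)
a_true = 1.2e-6                              # integrated attenuation (1/Hz)

Yw = np.exp(-(f - f0) ** 2 / (2 * sigma2))   # water-path magnitude spectrum
Y = Yw * np.exp(-a_true * f)                 # after passing through tissue

fr = f[np.argmax(Y)]                         # received center frequency
a_est = (f0 - fr) / sigma2                   # the frequency-shift estimate

assert abs(a_est - a_true) / a_true < 0.01
```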
4.3.4 Applications
A clinical study discussing the use of ultrasound tomography for the diagnosis of breast abnormalities was described by Schreiman et al. in [Sch84]. In this study the information from refractive index images was combined with that from attenuation images and compared against mammograms. In addition, the design of a program to automatically diagnose breast tomograms based on the attenuation constant and the index of refraction near the lesion was described. The mammograms and ultrasound tomographic images in Figs. 4.35 and 4.36, respectively, show a small spiculated cancer in the upper outer quadrant of a right breast. The tomographic reconstructions shown in Fig. 4.36 were based on the measurement of 60 parallel projections, each with 200 rays. For each ray the time of arrival and the signal level of the ultrasound signal were measured and stored on tape for off-line processing. The total data collection time was 5 minutes. In this study the attenuation and refractive index images were based on a full wave rectified and low pass filtered version of the measured ultrasonic pressure wave. The time delay caused by the object was measured by timing the instant when the filtered signal first crossed a threshold. This gives a direct estimate of the time delay, Td, as described in Section 4.3.2. On the other hand, the attenuation of the signal was measured by integrating the first two microseconds of the filtered signal. While this method doesn't take into account the frequency dependence of the attenuation coefficient, it does have the overriding advantage that its hardware implementation is very simple and fast.

Fig. 4.34: An attenuation reconstruction obtained by using the frequency-shift method. (From [Kak79].)
Fig. 4.35: The x-ray mammograms of these female breasts show a small spiculated cancer in the upper outer quadrant of the right breast. (Courtesy of Jim Greenleaf of the Mayo Clinic in Rochester, MN.)
1950s, only since 1972 has it been used for imaging. In the sense that the images produced represent a cross section of the object, MRI is a tomographic technique. Two head images obtained using MRI are shown in Fig. 4.37. The fundamentals of chemistry and physics required to derive MRI are beyond the scope of this book. A rigorous derivation requires the use of quantum mechanics, but since acceptable models of the process can be built using classical mechanics, this will be the approach used here. For more information the reader is referred to excellent accounts of the theory in [Man82], [Mac83], [Cho82], [Hin83], [Pyk82]. Magnetic resonance imaging is based on the measurement of radio frequency electromagnetic waves as a spinning nucleus returns to its equilibrium state. Any nucleus with an odd number of particles (protons and neutrons) has a magnetic moment, and, when the atom is placed in a strong magnetic field, the moment of the nucleus tends to line up with the field. If the atom is then excited by another magnetic field it emits a radio frequency signal as the nucleus returns to its equilibrium position. Since the frequency of the signal is dependent on not only the type of atom but also the magnetic
Fig. 4.36: The time of flight (TOF) images on top and the combined TOF and attenuation (ATN) images on the bottom show the small cancer. (Reprinted with permission from [Sch84].)
fields present, the position and type of each nucleus can be detected by appropriate signal processing. Two of the more interesting atoms for MRI are hydrogen and phosphorus. The hydrogen atom is found most often bound into a water molecule while phosphorus is an important link in the transfer of energy in biological
Fig. 4.37: These two images demonstrate the contrast and resolution obtainable using MRI. They were obtained using a 1.5-Tesla Signa system at General Electric's MR Development Center. (Courtesy of General Electric Medical Systems Group.)
systems. Both of these atoms have an odd number of nucleons and thus act like a spinning magnetic dipole when placed into a strong field. When a spinning magnetic moment is placed in a strong magnetic field and perturbed it precesses much like a spinning top or gyroscope. The frequency of precession is determined by the magnitude of the external field and the type and chemical binding of the atom. The precession frequency is known as the
Larmor frequency and is given by

ω = γH   (59)
where H is the magnitude of the local magnetic field and γ is known as the gyromagnetic constant. The gyromagnetic constant, although primarily a function of the type of nucleus, also changes slightly due to the chemical elements surrounding the nucleus. These small changes in the gyromagnetic constant are known as chemical shifts and are used in NMR spectroscopy to identify the compounds in a sample. In MRI, on the other hand, a spatially varying field is used to code each position with a unique resonating frequency. Image reconstruction is done using this information. Recalling that a magnetic field has both a magnitude and direction at a point in three space, (x, y, z), the field is described by the vector quantity H(x, y, z). When necessary we will use the orthogonal unit vectors x̂, ŷ, and ẑ to represent the three axes. Conventionally, the z-axis is aligned along the axis of the static magnetic field used to align the magnetic moments. The static magnetic field is then described by H0 = H0ẑ. A radio frequency magnetic wave in the (x, y)-plane and at the Larmor frequency, ω0 = γH0, is used to perturb the magnetic moments from their equilibrium position. The degree of tipping or precession that occurs is dependent on the strength of the field and the length of the pulse. Using the classical mechanics model a sinusoidal field of magnitude H1 that lasts tp seconds will cause the magnetic moment to precess through an angle given by
θ = γH1tp.   (60)

The actual transmitted field, H1(x, y, z), is given by

(61)
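As a numerical illustration of the Larmor relation (59) and the flip-angle relation (60): γ/2π for protons is about 42.58 MHz/T (a standard physical constant); the RF amplitude H1 below is hypothetical:

```python
import numpy as np

gamma_over_2pi = 42.58e6             # Hz per tesla, protons
gamma = 2 * np.pi * gamma_over_2pi   # rad/s per tesla

H0 = 1.5                             # tesla, main static field
larmor_hz = gamma_over_2pi * H0      # precession frequency, eq. (59)
assert abs(larmor_hz - 63.87e6) < 0.1e6

# Pulse length for a 90-degree flip at RF amplitude H1 (theta = gamma*H1*tp):
H1 = 10e-6                           # tesla, hypothetical RF field strength
tp_90 = (np.pi / 2) / (gamma * H1)
assert 0.5e-3 < tp_90 < 1.5e-3       # under a millisecond here
```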
Generally, H1 and tp are varied so that the moment will be flipped either 90° or 180°. By flipping the moments 90° the maximum signal is obtained as the system returns to equilibrium, while 180° flips are often used to change the sign of the phase (with respect to the H1-axis) of the moment. It is important to note that only those nuclei where the magnitude of the local field is H0 will flip according to (60). Those nuclei with a local magnetic field near H0 will flip to a small degree, while those nuclei with a local field far from H0 will not be flipped at all. This property of spinning nuclei in a magnetic field is used in MRI to restrict the active nuclei to selected sections of the body [Man82]. Typical slice thicknesses in 1986 machines are from 3 to 10 mm. After the radio frequency (RF) pulse is applied there are two effects that can be measured as the magnetic moment returns to its equilibrium position. They are known as the longitudinal and transverse relaxation times. The longitudinal or spin-lattice relaxation time, T1, is the simpler of the two and represents the time it takes for the energy to dissipate and the moment to
return to its equilibrium position along the z-axis. In addition, after the RF pulse is applied, the spinning magnetic moments gradually become out of phase due to the effects of nearby nuclei. The time for this to occur is known as the transverse or spin-spin relaxation time, T2. In practice, there is a third parameter called T2* that also takes into account the local inhomogeneities of the magnetic field. Because of physical constraints the following relationship always holds:

T2* ≤ T2 ≤ T1.   (62)
As an excited magnetic moment relaxes toward its equilibrium position it emits a free induction decay (FID) signal, which can be thought of as the transverse component of the precessing moment. In addition, as the moment returns to its equilibrium state the longitudinal component of the magnetic field returns to the value of M0.
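The decay of the FID envelope can be illustrated with a small simulation; the oscillation frequency is scaled far below any real Larmor frequency so the example stays compact, and all values are hypothetical:

```python
import numpy as np

rho = 1.0                    # spin density (arbitrary units)
f0 = 1e3                     # oscillation frequency (Hz), scaled down
T2_star = 50e-3              # effective transverse relaxation time (s)

t = np.arange(0, 0.25, 1e-5)
fid = rho * np.cos(2 * np.pi * f0 * t) * np.exp(-t / T2_star)

# After one T2* the envelope has fallen to about 1/e of its initial value.
envelope = np.abs(fid)
idx = np.argmin(np.abs(t - T2_star))
assert np.max(envelope[idx:]) < rho * np.exp(-1) * 1.05
```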
Fig. 4.38:
Note that T2* includes the effect of T2. The process of tipping (or even flipping) a moment and its eventual return to the equilibrium state are diagrammed in Fig. 4.38. Conventionally the magnetic moments are shown in a coordinate system that rotates at the Larmor frequency. The direction of the magnetic moment before and immediately after a 45° pulse is shown in Figs. 4.38(a) and (b). Fig. 4.38(c) diagrams the moments as they start to return to the equilibrium position and some of the moments become out of phase. The time T2 is shorter than T1, so the moments are totally out of phase before they return to the equilibrium position. This is shown in Fig. 4.38(d). Finally, after several T1 intervals the moments return to their equilibrium position as shown in Fig. 4.38(e). As the spinning moments return to their equilibrium position they generate an electromagnetic wave at the Larmor frequency. This wave is known as the free induction decay (FID) signal and can be detected using coils around the object. When the magnetic moments are in phase, as they are immediately following an RF excitation, the FID signal is proportional to both the density and the transverse component of the magnetic moments. Near time t = 0,
immediately following the end of the RF pulse, the received signal is given by

S(t) = ρ sin (θ) cos (ω0t)   (63)
where again θ is the flip angle and ρ is the density of the magnetic moments. From this signal it is easy to verify that the largest FID signal is generated by a 90° pulse. Both the spin-spin and the spin-lattice relaxation processes contribute to the decay of the FID signal. The FID signal after a 90° pulse can be written as

S(t) = ρ cos (ω0t) exp [ −t/T2* ] exp [ −t/T1 ]   (64)
where the exponentials with respect to T1 and T2* represent the attenuation of the FID signal due to the return to equilibrium (T1) and the dephasing (T2*). In tissue the typical times for T1 and T2 are 0.5 s and 50 ms, respectively. Thus the decay of the FID signal is dominated by the spin-spin relaxation times (T2 and T2*) and the effects of the spin-lattice time (e^{−t/T1} in the equation above) are hidden. A typical FID signal is shown in Fig. 4.38(f). A clinician is interested in three parameters of the object: spin density, T1, and T2. The spin density is easiest to measure; it can be estimated from the magnitude of the FID immediately following the RF pulse. On the other hand, the T1 and the T2 parameters are more difficult. To give our readers just a flavor of the algorithms used in MRI we will only discuss imaging of the spin density. More complicated pulse sequences, such as those described in [Cho82], are used to weight the image by the object's T1 or T2 parameters. In addition, much work is being done to discover combinations of the above parameters that make tissue characterization easier. There are many ways to spatially encode the FID signal so that tomographic images can be formed. We will only discuss two of them here. The first measures line integrals of the object and then uses the Fourier Slice Theorem to reconstruct the object. The second approach measures the two-dimensional Fourier transform of the object directly so that a simple inverse Fourier transform can be used to estimate the object. To restrict the imaging to a single plane a magnetic gradient
ΔHz = Gzz   (65)
is superimposed on the background field H0 as is shown in Fig. 4.39. If a narrow band excitation at the Larmor frequency ω0 = γH0 is then applied to the object, only those nuclei near the plane z = 0 will be excited. For maximum response the excitation should be long enough to cause each nucleus to precess through 90°. A projection of the object in the plane z = 0 is measured by applying a readout gradient of the form
ΔHr = Gxx + Gyy   (66)
Fig. 4.39: To measure projections of a three-dimensional object a field of strength ΔHz = Gzz is used to restrict the initial flip to a single plane. Then a readout gradient ΔHr = Gxx + Gyy is applied while measuring the FID and is used to measure projections of the object. In the case shown here the integrals are along lines perpendicular to the page.
as the nuclei return to the equilibrium state. This second gradient serves to split each line integral into a separate frequency. Consider the line

Gxx + Gyy = ΔHr = constant.   (67)

Along this line the FID signal will be at a unique frequency given by

ω = γ(H0 + ΔHr).   (68)
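The way a readout gradient separates positions into distinct frequencies can be sketched with an idealized one-dimensional simulation in which each position is mapped to an exact DFT frequency (this bin alignment, and the removal of the common ω0 term by demodulation, are idealizations of this sketch):

```python
import numpy as np

n = 64
density = np.zeros(n)
density[20:28] = 1.0         # a hypothetical 8-pixel block of spins
density[40:44] = 2.0         # a denser 4-pixel block

# Under the readout gradient, the spins at position k precess at k cycles
# per record (frequency proportional to the local field offset).
t = np.arange(n)
signal = sum(density[k] * np.exp(2j * np.pi * k * t / n) for k in range(n))

# A Fourier transform of the summed FID separates the line integrals.
recovered = np.abs(np.fft.fft(signal)) / n
assert np.allclose(recovered, density, atol=1e-9)
```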
To measure a projection in the plane it is necessary to apply the readout gradient and then find the Fourier transform of the received signal. Each temporal frequency component of the FID signal will then correspond to a single line integral of the object. This is illustrated in Fig. 4.39. A two-dimensional reconstruction of an object can be easily found by rotating the readout gradient and then using the reconstruction algorithms discussed in Chapter 3. A full three-dimensional reconstruction is easily formed by stacking the two-dimensional images. A more common approach to magnetic resonance imaging is to use a phase encoding gradient. The gradient, applied between the excitation pulse and the readout of the FID, spatially encodes each position in the object with a phase. This leads to a very natural reconstruction scheme because data can be collected over a rectangular grid in the Fourier domain. Thus reconstructions using this method can be performed using a two-dimensional FFT instead of the filtered backprojection usually found in computerized tomography. One possible sequence of events is presented next. Like the projection approach described above, a magnetic gradient is applied to the object as the nuclei are excited. This restricts the imaging to a single plane where the local magnetic field and the frequency of the excitation satisfy the Larmor equation. This is shown in Fig. 4.40.

Fig. 4.40: Three different gradients are used to measure the Fourier transform of an object using MRI. First a gradient in the z direction is used to restrict the flip to a single plane of the object. Then a second gradient, this time in y, is used to encode each line of constant y with a different phase. Finally, a third gradient, in x, is used while the FID signal is read to split each line of constant x into a different line integral.

Two perpendicular gradients are used to encode each point in the plane. First a gradient, for example in the y direction, ΔHy = Gyy, is applied for T seconds. Because the frequency of precession is related to the local magnetic field, nuclei at different points in the object start spinning at different rates. After T seconds, when the phase encoding gradient is turned off, each line of constant y will have accumulated a phase given by
φ = ωT = γ(H0 + ΔHy)T   (69)
  = ω0T + γGyyT.   (70)
Like the projection case the FID is measured while applying a readout gradient, this time along the x-axis or
AH, = G,x.
(71)
As before, the number of spinning nuclei along each line of constant x is now encoded by the frequency of the received signal. Unlike the previous case each position along the line is also encoded with a unique phase (see (69)). The following phase encoded line integral is measured:
(72)
where q,, = GyrT and qx = G,yt. Note that except for the ej% term this equation is similar to the inverse Fourier transform of the data p(x, y). To recover the phase encoded line integrals it is necessary to find the inverse Fourier transform of the data with respect to time or
(73)
166   COMPUTERIZED TOMOGRAPHIC IMAGING
Each phase encoded line integral is then recovered by shifting the frequency of p(ω, q_y) by the Larmor frequency, ω₀, or

p(x, q_y) = p(ω₀ + γG_x x, q_y).  (74)
A complete reconstruction is formed by stepping the phase encoding gradient, G_y, through N steps between G_MAX and −G_MAX and measuring the phase encoded line integrals p_{q_y}(t). To prevent aliasing it is important that

γ G_MAX T A ≤ π  (75)

where the minimum feature size in the object is described by A. Note that in general the FID signal, p_{q_y}(t), will be sampled in both q_y and t and thus the integral equations presented here will be approximated with discrete summations. Since each line integral containing the point (x, y) is encoded with a different phase, the spin density at any point can be recovered by inverting the integral equations. This is easily done by finding the Fourier transform of the collection of line integrals, or

ρ(x, y) = (1/2π) ∫ p(x, q_y) e^{−jq_y y} dq_y.  (76)
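The direct Fourier inversion described by (72)-(76) can be sketched numerically: the phase encoded FID samples fill a rectangular grid in the Fourier domain, and a two-dimensional transform recovers the spin density. The sketch below is an illustrative assumption, not code from the text: the 32 × 32 phantom, the choice of q values on the DFT grid, and the demodulated (carrier-free) signal model are all hypothetical.

```python
import numpy as np

# Hypothetical 32 x 32 spin-density phantom (a small rectangle).
N = 32
rho = np.zeros((N, N))
rho[12:20, 10:22] = 1.0

# Simulated data collection on a rectangular Fourier grid:
#   s(q_y, q_x) = sum_{y,x} rho(y, x) exp(j q_y y) exp(j q_x x)
# where q_x = gamma*G_x*t steps with the readout samples and
# q_y = gamma*G_y*T steps with the phase encoding gradient.
# The q values are chosen to land on the DFT grid 2*pi*k/N.
k = np.arange(N)
phase = np.exp(2j * np.pi * np.outer(k, k) / N)   # exp(j q[k] * position)
s = phase @ rho @ phase.T                          # one sample per (q_y, q_x)

# Direct Fourier inversion: a 2-D DFT (scaled by 1/N^2) undoes the
# exp(+j...) encoding, so no backprojection is needed.
rho_rec = np.real(np.fft.fft2(s)) / N**2

print(np.max(np.abs(rho_rec - rho)))   # reconstruction error
```

Because the data already lie on a rectangular grid, the inversion is a single FFT call; this is the computational advantage the text attributes to phase encoding over the projection approach.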
While a reconstruction can be done with either approach, most images today are produced by direct Fourier inversion as opposed to the convolution backprojection algorithms described in Chapter 3. Two errors found in MRI machines are nonlinear gradients and a nonuniform static magnetic field. These errors affect the final reconstruction in different ways depending on the reconstruction technique.

First consider nonlinear gradients. In the direct Fourier approach only the magnitude of the gradients changes and not their direction. Thus any nonlinearities show up as a warping of the image space. As long as the gradient is monotonic the image will look sharp, although a bit distorted. On the other hand, in the projection approach the direction of the gradients is constantly changing so that each projection is warped differently. This leads to a blurring of the final reconstruction [ODo85].

The effect is similar with a nonhomogeneous static field, H₀. Since the gradient fields are simply added to the static field to determine the Larmor frequency, a nonhomogeneous field can be thought of as a warping of the projection data. Since the Fourier approach doesn't change the angle of the projections, using phase changes to distinguish the different parts of the line integral, the direct Fourier approach yields sharper images.

In the simple analysis above we have ignored two important limitations on MRI. The first is the frequency spreading due to the T₂ relaxation time. In the analysis above we assumed a short enough measurement interval so that the relaxation could be considered negligible. Since the resolution in the
frequency domain is linearly dependent on the measurement time, the maximum possible measurement time should be used. Unfortunately the exponential attenuation of the FID signal broadens the frequency spectrum, thereby determining the ultimate resolution of the magnetic resonance image.

A much more difficult problem is the data collection time. In the procedure described above each measurement is made assuming all the magnetic moments are at rest. Since the spin-lattice relaxation time is on the order of a second, this implies that only a single FID can be measured per second. Since a three-dimensional image requires at least a million data points this is a severe restriction. In practice, pulse sequences have been designed that allow more than one FID to be measured during the T₁ relaxation time. This can be done using a combination of gradients and selective excitation pulses to excite only a single plane within the object and also using selective spin-echo pulses to measure more than one projection (or Fourier transform) within a single plane.
complex biomolecules from transmission micrographs, the reader should look to [Cro70], [Gor71]. The applications of this technique in optical interferometry, where the aim is to determine the refractive index field of an optically transparent medium, are discussed in [Ber70], [Row69], [Swe73]. The applications of tomography in earth resources imaging are presented in [Din79a], [Lyt80]. For information about a large number of industrial applications the reader is referred to [OSA85].
4.6 References
[All85] C. J. Allan, N. A. Keller, L. R. Lupton, T. Taylor, and P. D. Tonner, "Tomography: An overview of the AECL program," Appl. Opt., vol. 24, pp. 4067-4075, Dec. 1, 1985.
[Alv76a] R. E. Alvarez and A. Macovski, "Energy-selective reconstructions in x-ray computerized tomography," Phys. Med. Biol., vol. 21, pp. 733-744, 1976.
[Alv76b] ——, "Noise and dose in energy dependent computerized tomography," Proc. S.P.I.E., vol. 96, pp. 131-137, 1976.
[Axe83] L. Axel, P. H. Arger, and R. A. Zimmerman, "Applications of computerized tomography to diagnostic radiology," Proc. IEEE, vol. 71, pp. 293-297, Mar. 1983.
[Bat71] R. H. T. Bates and T. M. Peters, "Towards improvements in tomography," New Zealand J. Sci., vol. 14, pp. 883-896, 1971.
[Bat83] R. H. T. Bates, K. L. Garden, and T. M. Peters, "Overview of computerized tomography with emphasis on future developments," Proc. IEEE, vol. 71, pp. 356-372, Mar. 1983.
[Bel79] S. Bellini, M. Pianentini, and P. L. de Vinci, "Compensation of tissue absorption in emission tomography," IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-27, pp. 213-218, June 1979.
[Ben70] R. Bender, S. H. Bellman, and R. Gordon, "ART and the ribosome: A preliminary report on the three dimensional structure of individual ribosomes determined by an algebraic reconstruction technique," J. Theor. Biol., vol. 29, pp. 483-487, 1970.
[Ber70] M. V. Berry and D. F. Gibbs, "The interpretation of optical projections," Proc. Roy. Soc. London, vol. A314, pp. 143-152, 1970.
[Boh78] C. Bohm, L. Eriksson, M. Bergstrom, J. Litton, R. Sundman, and M. Singh, "A computer assisted ring detector positron camera system for reconstruction tomography of the brain," IEEE Trans. Nucl. Sci., vol. NS-25, pp. 624-637, 1978.
[Boy83] D. P. Boyd and M. J. Lipton, "Cardiac computed tomography," Proc. IEEE, vol. 71, pp. 298-307, Mar. 1983.
[Bra56] R. N. Bracewell, "Strip integration in radio astronomy," Aust. J. Phys., vol. 9, pp. 198-217, 1956.
[Bra67] R. N. Bracewell and A. C. Riddle, "Inversion of fan-beam scans in radio astronomy," Astrophys. J., vol. 150, pp. 427-434, Nov. 1967.
[Bro76] R. A. Brooks and G. DiChiro, "Principles of computer assisted tomography (CAT) in radiographic and radioisotopic imaging," Phys. Med. Biol., vol. 21, pp. 689-732, 1976.
[Bro77a] R. A. Brooks, "A quantitative theory of the Hounsfield unit and its application to dual energy scanning," J. Comput. Assist. Tomog., vol. 1, pp. 487-493, 1977.
[Bro77b] R. A. Brooks and G. DiChiro, "Slice geometry in computer assisted tomography," J. Comput. Assist. Tomog., vol. 1, pp. 191-199, 1977.
[Bro78a] ——, "Split-detector computed tomography: A preliminary report," Radiology, vol. 126, pp. 255-257, Jan. 1978.
[Bro78b] R. A. Brooks, G. H. Weiss, and A. J. Talbert, "A new approach to interpolation in computed tomography," J. Comput. Assist. Tomog., vol. 2, pp. 577-585, Nov. 1978.
[Bro78c] G. L. Brownell and S. Cochavi, "Transverse section imaging with carbon-11 labeled carbon monoxide," J. Comput. Assist. Tomog., vol. 2, pp. 533-538, Nov. 1978.
[Bro81] R. A. Brooks, V. J. Sank, W. S. Friauf, S. B. Leighton, H. E. Cascio, and G. DiChiro, "Design considerations for positron emission tomography," IEEE Trans. Biomed. Eng., vol. BME-28, pp. 158-177, Feb. 1981.
[Bud74] T. F. Budinger and G. T. Gullberg, "Three dimensional reconstruction of isotope distributions," Phys. Med. Biol., vol. 19, pp. 387-389, June 1974.
[Bud76] ——, "Transverse section reconstruction of gamma-ray emitting radionuclides in patients," in Reconstruction Tomography in Diagnostic Radiology and Nuclear Medicine, M. M. TerPogossian et al., Eds. Baltimore, MD: University Park Press, 1976.
[Bud77] T. F. Budinger, S. E. Derenzo, G. T. Gullberg, W. L. Greenberg, and R. H. Huesman, "Emission computer assisted tomography with single photon and positron annihilation photon emitters," J. Comput. Assist. Tomog., vol. 1, pp. 131-145, 1977.
[Car76] P. L. Carson, T. V. Oughton, and W. R. Hendee, "Ultrasound transaxial tomography by reconstruction," in Ultrasound in Medicine II, D. N. White and R. W. Barnes, Eds. New York, NY: Plenum Press, 1976, pp. 391-400.
[Car77] P. L. Carson, T. V. Oughton, W. R. Hendee, and A. S. Ahuja, "Imaging soft tissue through bone with ultrasound transmission tomography by reconstruction," Med. Phys., vol. 4, pp. 302-309, July/Aug. 1977.
[Car78a] L. R. Carroll, "Design and performance characteristics of a production model positron imaging system," IEEE Trans. Nucl. Sci., vol. NS-25, pp. 606-614, Feb. 1978.
[Car78b] P. L. Carson, D. E. Dick, G. A. Thieme, M. L. Dick, E. J. Bayly, T. V. Oughton, G. L. Dubuque, and H. P. Bay, "Initial investigation of computed tomography for breast imaging with focussed ultrasound beams," in Ultrasound in Medicine, D. White and E. A. Lyons, Eds. New York, NY: Plenum Press, 1978, pp. 319-322.
[Cha78a] R. C. Chase and J. A. Stein, "An improved image algorithm for CT scanners," Med. Phys., vol. 5, pp. 497-499, Dec. 1978.
[Cha78b] L. T. Chang, "A method for attenuation correction in radionuclide computed tomography," IEEE Trans. Nucl. Sci., vol. NS-25, pp. 638-643, Feb. 1978.
[Cha79] ——, "Attenuation correction and incomplete projection in single photon emission computed tomography," IEEE Trans. Nucl. Sci., vol. 26, no. 2, pp. 2780-2789, Apr. 1979.
[Cho76] Z. H. Cho, L. Eriksson, and J. Chan, "A circular ring transverse axial positron camera," in Reconstruction Tomography in Diagnostic Radiology and Nuclear Medicine, M. M. TerPogossian et al., Eds. Baltimore, MD: University Park Press, 1976, pp. 393-421.
[Cho77] Z. H. Cho, M. B. Cohen, M. Singh, L. Eriksson, J. Chan, N. MacDonald, and L. Spolter, "Performance and evaluation of the circular ring transverse axial positron camera," IEEE Trans. Nucl. Sci., vol. NS-24, pp. 532-543, 1977.
[Cho78] Z. H. Cho, O. Nalcioglu, and M. R. Farukhi, "Analysis of a cylindrical hybrid positron camera with bismuth germanate (BGO) scintillation crystals," IEEE Trans. Nucl. Sci., vol. NS-25, pp. 952-963, Apr. 1978.
[Cho82] Z. H. Cho, H. S. Kim, H. B. Song, and J. Cumming, "Fourier transform nuclear magnetic resonance tomographic imaging," Proc. IEEE, vol. 70, pp. 1152-1173, Oct. 1982.
[Chu77] G. Chu and K. C. Tam, "Three dimensional imaging in the positron camera using Fourier techniques," Phys. Med. Biol., vol. 22, pp. 245-265, 1977.
[Cor63] A. M. Cormack, "Representation of a function by its line integrals with some radiological applications," J. Appl. Phys., vol. 34, pp. 2722-2727, 1963.
[Cor64] ——, "Representation of a function by its line integrals with some radiological applications, II," J. Appl. Phys., vol. 35, pp. 2908-2913, Oct. 1964.
[Cra78] C. R. Crawford and A. C. Kak, "Aliasing artifacts in CT images," Research Rep. TR-EE 78-55, School of Electrical Engineering, Purdue Univ., Lafayette, IN, Dec. 1978.
[Cra82] ——, "Multipath artifacts in ultrasonic transmission tomography," Ultrason. Imaging, vol. 4, no. 3, pp. 234-266, July 1982.
[Cra86] C. R. Crawford, "Reprojection using a parallel backprojector," Med. Phys., vol. 13, pp. 480-483, July/Aug. 1986.
[Cro70] R. A. Crowther, D. J. DeRosier, and A. Klug, "The reconstruction of a three-dimensional structure from projections and its applications to electron microscopy," Proc. Roy. Soc. London, vol. A317, pp. 319-340, 1970.
[DeR68] D. J. DeRosier and A. Klug, "Reconstruction of three dimensional structures from electron micrographs," Nature, vol. 217, pp. 130-134, Jan. 1968.
[Der77a] S. E. Derenzo, "Positron ring cameras for emission computed tomography," IEEE Trans. Nucl. Sci., vol. NS-24, pp. 881-885, Apr. 1977.
[Der77b] S. E. Derenzo, T. F. Budinger, J. L. Cahoon, R. H. Huesman, and H. G. Jackson, "High resolution computed tomography of positron emitters," IEEE Trans. Nucl. Sci., vol. NS-24, pp. 544-558, Feb. 1977.
[DiC78] G. DiChiro, R. A. Brooks, L. Dubal, and E. Chew, "The apical artifact: Elevated attenuation values toward the apex of the skull," J. Comput. Assist. Tomog., vol. 2, pp. 65-79, Jan. 1978.
[Din76] K. A. Dines and A. C. Kak, "Measurement and reconstruction of ultrasonic parameters for diagnostic imaging," Research Rep. TR-EE 77-4, School of Electrical Engineering, Purdue Univ., Lafayette, IN, Dec. 1976.
[Din79a] K. A. Dines and R. J. Lytle, "Computerized geophysical tomography," Proc. IEEE, vol. 67, pp. 1065-1073, 1979.
[Din79b] K. A. Dines and A. C. Kak, "Ultrasonic attenuation tomography of soft biological tissues," Ultrason. Imaging, vol. 1, pp. 16-33, 1979.
[Due78] A. J. Duerinckx and A. Macovski, "Polychromatic streak artifacts in computed tomography images," J. Comput. Assist. Tomog., vol. 2, pp. 481-487, Sept. 1978.
[Epp66] E. R. Epp and H. Weiss, "Experimental study of the photon energy spectrum of primary diagnostic x-rays," Phys. Med. Biol., vol. 11, pp. 225-238, 1966.
[Eri76] L. Eriksson and Z. H. Cho, "A simple absorption correction in positron (annihilation gamma coincidence detection) transverse axial tomography," Phys. Med. Biol., vol. 21, pp. 429-433, 1976.
[Far71] T. C. Farrar and E. D. Becker, Pulse and Fourier Transform NMR: Introduction to Theory and Methods. New York, NY: Academic Press, 1971.
[Far78] E. J. Farrell, "Processing limitations of ultrasonic image reconstruction," in Proc. 1978 Conf. on Pattern Recognition and Image Processing, May 1978.
[Fla83] S. W. Flax, N. J. Pelc, G. H. Glover, F. D. Gutmann, and M. McLachlan, "Spectral characterization and attenuation measurements in ultrasound," Ultrason. Imaging, vol. 5, pp. 95-116, 1983.
[Gad75] M. Gado and M. Phelps, "The peripheral zone of increased density in cranial computed tomography," Radiology, vol. 117, pp. 71-74, 1975.
[Glo77] G. H. Glover and J. L. Sharp, "Reconstruction of ultrasound propagation speed distribution in soft tissue: Time-of-flight tomography," IEEE Trans. Sonics Ultrason., vol. SU-24, pp. 229-234, July 1977.
[Glo82] G. H. Glover, "Compton scatter effects in CT reconstructions," Med. Phys., vol. 9, pp. 860-867, Nov./Dec. 1982.
[Goi72] M. Goitein, "Three dimensional density reconstruction from a series of two dimensional projections," Nucl. Instrum. Methods, vol. 101, pp. 509-518, 1972.
[Gor71] R. Gordon and G. T. Herman, "Reconstruction of pictures from their projections," Commun. Assoc. Comput. Mach., vol. 14, pp. 759-768, 1971.
[Gre74] J. F. Greenleaf, S. A. Johnson, S. L. Lee, G. T. Herman, and E. H. Wood, "Algebraic reconstruction of spatial distributions of acoustic absorption within tissue from their two dimensional acoustic projections," in Acoustical Holography, vol. 5, P. S. Greene, Ed. New York, NY: Plenum Press, 1974, pp. 591-603.
[Gre75] J. F. Greenleaf, S. A. Johnson, W. F. Samayoa, and F. A. Duck, "Algebraic reconstruction of spatial distributions of acoustic velocities in tissue from their time-of-flight profiles," in Acoustical Holography, H. Booth, Ed. New York, NY: Plenum Press, 1975, pp. 71-90.
[Gre78] J. F. Greenleaf, S. K. Kenue, B. Rajagopalan, R. C. Bahn, and S. A. Johnson, "Breast imaging by ultrasonic computer-assisted tomography," in Acoustical Imaging, A. Metherell, Ed. New York, NY: Plenum Press, 1978.
[Gre81] J. F. Greenleaf and R. C. Bahn, "Clinical imaging with transmissive ultrasonic computerized tomography," IEEE Trans. Biomed. Eng., vol. BME-28, pp. 177-185, 1981.
[Gus78] D. E. Gustafson, M. J. Berggren, M. Singh, and M. K. Dewanjee, "Computed transaxial imaging using single gamma emitters," Radiology, vol. 129, pp. 187-194, Oct. 1978.
[Hal74] J. Hale, The Fundamentals of Radiological Science. Springfield, IL: Charles C. Thomas, 1974.
[Haq78] P. Haque, D. Pisano, W. Cullen, and L. Meyer, "Initial performance evaluation of the CT 7000 scanner," presented at the 20th Meeting of A.A.P.M., Aug. 1978.
[Hef85] P. B. Heffernan and R. A. Robb, "Difference image reconstruction from a few projections for nondestructive materials inspection," Appl. Opt., vol. 24, pp. 4105-4110, Dec. 1, 1985.
[Her71] G. T. Herman and S. Rowland, "Resolution in ART: An experimental investigation of the resolving power of an algebraic picture reconstruction," J. Theor. Biol., vol. 33, pp. 213-223, 1971.
[Her80] G. T. Herman, Image Reconstructions from Projections. New York, NY: Academic Press, 1980.
[Hin83] W. S. Hinshaw and A. H. Lent, "An introduction to NMR imaging: From the Bloch equation to the imaging equation," Proc. IEEE, vol. 71, pp. 338-350, Mar. 1983.
[Hof76] E. J. Hoffman, M. E. Phelps, N. A. Mullani, C. S. Higgins, and M. M. TerPogossian, "Design and performance characteristics of a whole body transaxial tomography," J. Nucl. Med., vol. 17, pp. 493-502, 1976.
[Hsi76] R. C. Hsieh and W. G. Wee, "On methods of three-dimensional reconstruction from a set of radioisotope scintigrams," IEEE Trans. Syst. Man Cybern., vol. SMC-6, pp. 854-862, Dec. 1976.
[ICR64] International Commission on Radiological Units and Measurements, "Physical aspects of irradiation," Rep. 10b. Bethesda, MD: ICRU Publications, 1964.
[Iwa75] K. Iwata and R. Nagata, "Calculation of refractive index distribution from interferograms using the Born and Rytov approximations," Japan. J. Appl. Phys., vol. 14, pp. 1921-1927, 1975.
[Jak76] C. V. Jakowatz, Jr. and A. C. Kak, "Computerized tomography using x-rays and ultrasound," Research Rep. TR-EE 76-26, School of Electrical Engineering, Purdue Univ., Lafayette, IN, 1976.
[Joh75] S. A. Johnson, J. F. Greenleaf, W. F. Samayoa, F. A. Duck, and J. D. Sjostrand, "Reconstruction of three-dimensional velocity fields and other parameters by acoustic ray tracing," in Proc. 1975 Ultrasonic Symposium, 1975, pp. 46-51.
[Jos78] P. M. Joseph and R. D. Spital, "A method for correcting bone induced artifacts in computed tomography scanners," J. Comput. Assist. Tomog., vol. 2, pp. 100-108, Jan. 1978.
[Jos82] P. M. Joseph, "The effects of scatter in x-ray computed tomography," Med. Phys., vol. 9, pp. 464-472, July/Aug. 1982.
[Kak78] A. C. Kak and K. A. Dines, "Signal processing of broadband pulsed ultrasound: Measurement of attenuation of soft biological tissues," IEEE Trans. Biomed. Eng., vol. BME-25, pp. 321-344, July 1978.
[Kak79] A. C. Kak, "Computerized tomography with x-ray, emission, and ultrasound sources," Proc. IEEE, vol. 67, pp. 1245-1272, 1979.
[Kak81] ——, Guest Editor, Special Issue on Computerized Medical Imaging, IEEE Trans. Biomed. Eng., vol. BME-28, Feb. 1981.
[Kij78] D. K. Kijewski and B. E. Bjarngard, "Correction for beam hardening in computed tomography," Med. Phys., vol. 5, pp. 209-214, 1978.
[Kno83] G. F. Knoll, "Single-photon emission computed tomography," Proc. IEEE, vol. 71, pp. 320-329, Mar. 1983.
[Kuc84] R. Kuc, "Estimating acoustic attenuation from reflected ultrasound signals: Comparison of spectral-shift and spectral-difference approaches," IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-32, pp. 1-7, Feb. 1984.
[Kuh63] D. E. Kuhl and R. Q. Edwards, "Image separation radio-isotope scanning," Radiology, vol. 80, pp. 653-661, 1963.
[Lip83] M. J. Lipton and C. B. Higgins, "Computed tomography: The technique and its use for the evaluation of cardiocirculatory anatomy and function," Cardiology Clinics, vol. 1, pp. 457-471, Aug. 1983.
[Lyt80] R. J. Lytle and K. A. Dines, "Iterative ray tracing between boreholes for underground image reconstruction," IEEE Trans. Geoscience and Remote Sensing, vol. GE-18, pp. 234-240, 1980.
[Mac83] A. Macovski, Medical Imaging Systems. Englewood Cliffs, NJ: Prentice-Hall, 1983.
[Man82] P. Mansfield and P. G. Morris, NMR Imaging in Biomedicine. New York, NY: Academic Press, 1982.
[Mar82] P. M. Margosian, "A redundant ray projection completion method for an inverse fan beam computed tomography system," J. Comput. Assist. Tomog., vol. 6, pp. 608-613, June 1982.
[McC74] E. C. McCullough, Jr., H. L. Baker, O. W. Houser, and D. F. Reese, "An evaluation of the quantitative and radiation features of a scanning x-ray transverse axial tomograph: The EMI scanner," Radiat. Phys., vol. 3, pp. 709-715, June 1974.
[McC75] E. C. McCullough, "Photon attenuation in computed tomography," Med. Phys., vol. 2, pp. 307-320, 1975.
[McD75] W. D. McDavid, R. G. Waggener, W. H. Payne, and M. J. Dennis, "Spectral effects on three-dimensional reconstruction from x-rays," Med. Phys., vol. 2, pp. 321-324, 1975.
[McD77] ——, "Correction for spectral artifacts in cross-sectional reconstruction from x-rays," Med. Phys., vol. 4, pp. 54-57, 1977.
[McK81] G. C. McKinnon and R. H. T. Bates, "Towards imaging the beating heart usefully with a conventional CT scanner," IEEE Trans. Biomed. Eng., vol. BME-28, pp. 123-127, Feb. 1981.
[Mil77] J. G. Miller, M. O'Donnell, J. W. Mimbs, and B. E. Sobel, "Ultrasonic attenuation in normal and ischemic myocardium," in Proc. Second Int. Symp. on Ultrasonic …
[Mue80] [Mul78] [Nal81] [Nor79a] [Nor79b] [ODo85] [Old61] [OSA85]
[Per85] … materials, Appl. Opt., vol. 24, pp. 4095-4104, Dec. 1, 1985.
[Pes85] K. R. Peschmann, S. Napel, J. L. Couch, R. E. Rand, R. Alei, S. M. Ackelsberg, R. Gould, and D. P. Boyd, "High-speed computed tomography: Systems and performance," Appl. Opt., vol. 24, pp. 4052-4060, Dec. 1, 1985.
[Phe75] M. E. Phelps, E. J. Hoffman, and M. M. TerPogossian, "Attenuation coefficients of various body tissues, fluids, and lesions at photon energies of 18 to 136 keV," Radiology, vol. 117, pp. 573-583, 1975.
[Phe78] M. E. Phelps, E. J. Hoffman, S. C. Huang, and D. E. Kuhl, "ECAT: A new computerized tomographic imaging system for positron-emitting radiopharmaceuticals," J. Nucl. Med., vol. 19, pp. 635-647, 1978.
[Pyk82] I. L. Pykett, "NMR imaging in medicine," Sci. Amer., vol. 246, pp. 78-88, May 1982.
[Rad17] J. Radon, "Uber die Bestimmung von Funktionen durch ihre Integralwerte langs gewisser Mannigfaltigkeiten" (On the determination of functions from their integrals along certain manifolds), Berichte Saechsische Akademie der Wissenschaften, vol. 69, pp. 262-277, 1917. [See also: F. John, Plane Waves and Spherical Means Applied to Partial Differential Equations. New York, NY: Wiley-Interscience, 1955.]
[Ram71] G. N. Ramachandran and A. V. Lakshminarayanan, "Three dimensional reconstructions from radiographs and electron micrographs: Application of convolution instead of Fourier transforms," Proc. Nat. Acad. Sci., vol. 68, pp. 2236-2240, 1971.
[Rob83] R. A. Robb, E. A. Hoffman, L. J. Sinak, L. D. Harris, and E. L. Ritman, "High-speed three-dimensional x-ray computed tomography: The dynamic spatial reconstructor," Proc. IEEE, vol. 71, pp. 308-319, Mar. 1983.
[Row69] P. D. Rowley, "Quantitative interpretation of three dimensional weakly refractive phase objects using holographic interferometry," J. Opt. Soc. Amer., vol. 59, pp. 1496-1498, Nov. 1969.
[Sch84] J. S. Schreiman, J. J. Gisvold, J. F. Greenleaf, and R. C. Bahn, "Ultrasound computed tomography of the breast," Radiology, vol. 150, pp. 523-530, Feb. 1984.
[Sha76] D. Shaw, Fourier Transform N.M.R. Spectroscopy. Amsterdam, the Netherlands: Elsevier Scientific Publishing, 1976.
[She77] L. A. Shepp and J. A. Stein, "Simulated reconstruction artifacts in computerized x-ray tomography," in Reconstruction Tomography in Diagnostic Radiology and Nuclear Medicine, M. M. TerPogossian et al., Eds. Baltimore, MD: University Park Press, 1977.
[Shu77] R. A. Schulz, E. C. Olson, and K. S. Han, "A comparison of the number of rays vs. the number of views in reconstruction tomography," Proc. S.P.I.E., vol. 127, pp. 313-320, 1977.
[Sny85] R. Snyder and L. Hesselink, "High-speed optical tomography for flow visualization," Appl. Opt., vol. 24, pp. 4046-4051, Dec. 1, 1985.
[Swe73] D. W. Sweeney and C. M. Vest, "Reconstruction of three-dimensional refractive index fields from multi-directional interferometric data," Appl. Opt., vol. 12, pp. 1649-1664, 1973.
[Tam78] K. C. Tam, G. Chu, V. Perez-Mendez, and C. B. Lim, "Three dimensional reconstruction in planar positron cameras using Fourier deconvolution of generalized tomograms," IEEE Trans. Nucl. Sci., vol. NS-25, pp. 152-159, Feb. 1978.
[Ter67] M. TerPogossian, The Physical Aspects of Diagnostic Radiology. New York, NY: Harper and Row, 1967.
[Ter78a] M. M. TerPogossian, N. A. Mullani, J. Hood, C. S. Higgins, and C. M. Curie, "A multislice positron emission computed tomography (PETT-IV) yielding transverse and longitudinal images," Radiology, vol. 128, pp. 477-484, Aug. 1978.
[Ter78b] M. M. TerPogossian, N. A. Mullani, J. J. Hood, C. S. Higgins, and D. C. Ficke, "Design consideration for a positron emission transverse tomography (PETT-V) for the imaging of the brain," J. Comput. Assist. Tomog., vol. 2, pp. 439-444, Nov. 1978.
[Tre69] O. Tretiak, M. Eden, and M. Simon, "Internal structures for three dimensional images," in Proc. 8th Int. Conf. on Med. Biol. Eng., Chicago, IL, 1969.
[Tre80] O. J. Tretiak and C. Metz, "The exponential Radon transform," SIAM J. Appl. Math., vol. 39, pp. 341-354, 1980.
[Uck85] H. Uckiyama, M. Nakajima, and S. Yuta, "Measurement of flame temperature distribution by IR emission computed tomography," Appl. Opt., vol. 24, pp. 4111-4116, Dec. 1, 1985.
[Wan85] S. Y. Wang, Y. B. Huang, V. Pereira, and C. C. Gryte, "Applications of computed tomography to oil recovery from porous media," Appl. Opt., vol. 24, pp. 4021-4027, Dec. 1, 1985.
[Wel77] P. N. T. Wells, "Ultrasonics in medicine and biology," Phys. Med. Biol., vol. 22, pp. 629-669, 1977.
[Wil78] G. H. Williams, "The design of a rotational x-ray CT scanner," Media (Proc. of MEDEX 78), vol. 6, no. 7, pp. 47-53, June 1978.
[Wre51] F. R. Wrenn, Jr., M. L. Good, and P. Handler, "The use of positron-emitting radioisotope for the localization of brain tumors," Science, vol. 113, pp. 525-527, 1951.
[Yaf77] M. Yaffe, A. Fenster, and H. E. Johns, "Xenon ionization detectors for fan-beam computed tomography scanners," J. Comput. Assist. Tomog., vol. 1, pp. 419-428, 1977.
[Yam77] Y. Yamamoto, C. J. Thompson, E. Meyer, J. S. Robertson, and W. Feindel, "Dynamic positron emission tomography for study of cerebral hemodynamics in a cross-section of the head using positron-emitting 68Ga-EDTA and 77Kr," J. Comput. Assist. Tomog., vol. 1, pp. 43-56, Jan. 1977.
The errors discussed in the last chapter are fundamental to the projection process and depend upon the interaction of object inhomogeneities with the form of energy used. The effects of these errors can't be lessened by simply increasing the number of measurements in each projection or the total number of projections. This chapter will focus on reconstruction errors of a different type: those caused either by insufficiency of data or by the presence of random noise in the measurements. An insufficiency of data may occur either through undersampling of projection data or because not enough projections are recorded. The distortions that arise on account of insufficiency of data are usually called aliasing distortions. Aliasing distortions may also be caused by using an undersampled grid for displaying the reconstructed image.
ALIASING ARTIFACTS AND NOISE IN CT IMAGES   177
Fig. 5.1: Sixteen reconstructions of an ellipse are shown for different values of K, the number of projections, and N, the number of rays in each projection. In each case the reconstructions were windowed to emphasize the distortions. (Courtesy of Carl Crawford of the General Electric Medical Systems Division in Milwaukee, WI.)
In Figs. 5.1 and 5.2 the following artifacts are evident: Gibbs phenomenon, streaks, and Moiré patterns. We will now show that the streaks evident in Fig. 5.1 for the cases when N is small and K is large are caused by aliasing errors in the projection data. Note that a fundamental problem with tomographic images in general is that the objects (in this case an ellipse), and therefore their projections, are not bandlimited. In other words, the bandwidth of the projection data exceeds the highest frequency that can be recorded at a given sampling rate. To illustrate how aliasing errors enter the projection data, assume that the Fourier transform S_θ(f) of a projection P_θ(t) looks as shown in Fig. 5.3(a). The bandwidth of this function is B, as also shown there. Let's choose a sampling interval τ for sampling the projection. By the discussion in Chapter 2, with this sampling interval we can associate a measurement bandwidth W which is equal to 1/2τ. We will assume that W < B. It follows that the Fourier transform of the samples of the projection data is given by Fig. 5.3(b). We see that the information within the measurement band is contaminated by the tails (shaded areas) of the higher and lower replications of the original Fourier transform. This contaminating information constitutes the aliasing
Fig. 5.2: The center lines of the reconstructions shown in Fig. 5.1 for (a) N = 64, K = 512 and (b) N = 512, K = 512 are shown here. (From [Cra79].)
errors in the sampled projection data. These contaminating frequencies constitute the aliased spectrum. Backprojection is a linear process so the final image can be thought of as made up of two functions. One is the image made from the bandlimited projections degraded primarily by the finite number of projections. The second is the image made from the aliased portion of the spectrum in each projection. The aliased portion of the reconstruction can be seen by itself by subtracting the transforms of the sampled projections from the corresponding theoretical transforms of the original projections. Then if this result is filtered as before, the final reconstructed image will be that of the aliased spectrum. We performed a computer simulation study along these lines for an elliptical object. In order to present the results of this study we first show in Fig. 5.4(a) the reconstruction of the ellipse for N = 64. (The number of projections was 512, which is large enough to preclude any artifacts due to an insufficient number of views, and will remain the same for the discussion here.) We have subtracted the transform of each projection for the N = 64 case from the corresponding transform for the N = 1024 case. The latter was assumed to be the true transform because the projections are oversampled (at least in comparison to the N = 64 case). The reconstruction obtained from the difference data is shown in Fig. 5.4(b). Fig. 5.4(c) is the bandlimited image obtained by subtracting the aliased-spectrum image of Fig. 5.4(b) from the complete image shown in Fig. 5.4(a). Fig. 5.4(c) is the reconstruction that would be obtained provided the projection data for the N = 64 case were truly bandlimited (i.e., did not suffer from aliasing errors after sampling). The aliased-spectrum reconstruction in Fig. 5.4(b) and the absence of streaks in Fig. 5.4(c) prove our point that when the number of projections is large, the streaking artifacts are caused by aliasing errors in the projection data.
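The subtraction experiment described above is easy to reproduce in miniature. The sketch below is an illustrative stand-in, not the simulation reported in the text: it uses the projection of a uniform disk (rather than an ellipse) and arbitrary sampling rates, samples the non-bandlimited projection at two rates over the same interval, subtracts the oversampled spectrum from the coarse one inside the common band, and confirms that the aliased residual shrinks as sampling becomes finer.

```python
import numpy as np

def projection(t):
    # Projection of a uniform disk of radius 1: p(t) = 2*sqrt(1 - t^2).
    # The corner at |t| = 1 makes this function non-bandlimited.
    return 2 * np.sqrt(np.clip(1 - t**2, 0, None))

def spectrum(n, L=2.4):
    # n samples over the fixed interval [-L/2, L/2); the DFT bins fall
    # at multiples of 1/L for every n, so bins line up across rates.
    t = -L / 2 + L * np.arange(n) / n
    return np.fft.fft(projection(t)) / n

def aliased_residual(n_coarse, n_fine=1024):
    Pc = spectrum(n_coarse)
    Pf = spectrum(n_fine)           # treated as the "true" transform
    k = np.arange(1, n_coarse // 2) # common band, skipping DC and Nyquist
    band = np.concatenate([k, -k])  # positive and negative frequency bins
    return np.linalg.norm(Pc[band] - Pf[band])

r64, r256 = aliased_residual(64), aliased_residual(256)
print(r64, r256)   # the aliased component shrinks with finer sampling
```

As in the text, the difference spectrum isolates the aliased component; reconstructing from such difference data is what produces the streaks-only image of Fig. 5.4(b).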
We will now present a plausible argument, first advanced by Brooks et al.
ALIASING
ARTIFACTS
AND
NOISE
IN CT IMAGES
179
Fig. 5.3: If a projection (a) is sampled at below the Nyquist rate (2B in this case), then aliasing will occur. As shown in (b) the result is aliasing or spectrum foldover. (Adapted from [Cra79].)
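The foldover of Fig. 5.3 can be shown directly with a one-tone experiment. The numbers below are illustrative assumptions: a 7 Hz component sampled at 10 Hz, so the measurement bandwidth is W = 5 Hz and the component lies above it.

```python
import numpy as np

fs = 10.0                         # sampling rate, so W = fs/2 = 5 Hz
n = 200                           # record length; bins fall every 0.05 Hz
t = np.arange(n) / fs
p = np.cos(2 * np.pi * 7.0 * t)   # a projection component above W

spec = np.abs(np.fft.rfft(p))
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = freqs[np.argmax(spec)]
print(peak)   # the 7 Hz tone folds down to fs - 7 = 3 Hz
```

After sampling, the 7 Hz component is indistinguishable from one at 3 Hz; this is exactly the tail of a spectral replication landing inside the measurement band.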
[Bro79], for when a streak may be dark and when it may be light. Note that when an object is illuminated by a source, a projection of the object is formed at the detector array as shown in Fig. 5.5. If the object has a discontinuity at its edges, then the projection will also. W e will now show how the position of this discontinuity with respect to the detector array has a bearing on the sign of the aliasing error. When the filtered projection is backprojected over the image array the sign of the error will determine the shade of the streak. Consider sampling a projection described by
f(x) = \begin{cases} 1, & x > 0 \\ 0, & \text{elsewhere.} \end{cases}   (1)

The Fourier transform of this function is given by

F(w) = \frac{\delta(w)}{2} - \frac{j}{2\pi w}.   (2)
For the purpose of sampling, we can imagine that the function f is multiplied by the function

h(x) = \sum_{k=-\infty}^{\infty} \delta(x - kT)   (3)
180 COMPUTERIZED TOMOGRAPHIC IMAGING
Fig. 5.4: (a) Reconstruction of an ellipse with N = 64 and K = 512. (b) Reconstruction from only the aliased spectrum. Note that the streaks exactly match those in (a). (c) Image obtained by subtracting (b) from (a). This is the reconstruction that would be obtained provided the data for the N = 64 case were truly bandlimited. (From [Cra79].)
where T represents the sampling interval of the projection. The Fourier transform of the sampling function is then given by
H(w) = \sum_{k=-\infty}^{\infty} \delta(w - k w_N)   (4)

where w_N = 2\pi/T. Clearly, the Fourier transform of the sampled function is a convolution of the expressions in (2) and (4):
F_{\text{sampled}}(w) = \sum_{k=-\infty}^{\infty} F(w - k w_N).   (5)

This function is shown in Fig. 5.6(a). Before these projection data can be backprojected they must be filtered by multiplying the Fourier transform of each sampled projection by the filter function |w|.
Fig. 5.5: The projection of an object with sharp discontinuities will have significant high frequency energy.
To study the errors due to aliasing, we will only consider the terms for k = 1 and k = -1, and assume that the higher order terms are negligible. Note that the zeroth order term is the edge information and is part of the desired reconstruction; the higher order terms are part of the error but will be small compared to the k = ±1 terms at low frequencies. The inverse Fourier transform of these two aliased terms is written as
e(x) = \int_{-w_N/2}^{w_N/2} \left[ F(w - w_N) + F(w + w_N) \right] e^{j2\pi w x}\, dw   (7)
Fig. 5.6: The aliasing due to undersampled projections is illustrated here. (a) shows the Fourier transform of an edge discontinuity. The aliased portions of the spectrum are shaded. (b) shows an approximation to the error when the sampling grid is aligned with the discontinuity and (c) shows the error when the discontinuity is shifted by 1/4 of the sampling interval. Note the magnitude of the error changes by more than a factor of 3 when the sampling grid shifts.
and is shown in Fig. 5.6(b). Now if the sampling grid is shifted by 1/4 of the sampling interval, each term of its Fourier transform is multiplied by e^{jk w_N (T/4)}, or
F_{\text{shifted}}(w) = \sum_{k=-\infty}^{\infty} e^{jk w_N (T/4)}\, F(w - k w_N).   (8)
This can be evaluated for the k = 1 and k = -1 terms to find the error

e_{\text{shifted}}(x) = \int_{-w_N/2}^{w_N/2} \left[ e^{jw_N(T/4)} F(w - w_N) + e^{-jw_N(T/4)} F(w + w_N) \right] e^{j2\pi w x}\, dw   (9)
Fig. 5.6: Continued.
and is shown in Fig. 5.6(c). If the grid is shifted in the opposite direction, then the error will be similar but with the opposite sign. As was done earlier in this section, consider the sampled projection to consist of two components: the true projection and the error term. The true projection data from each view will combine to form the desired image; the error in each projection will combine to form an image like that in Fig. 5.4(b). A positive error in a projection causes a light streak when the data are backprojected. Likewise, negative errors lead to dark streaks. As the view angle changes, the size of the ellipse's shadow changes and the discontinuity moves with respect to the detector array. In addition, where the curvature of the object is large, the edge of the discontinuity will move rapidly, which results in a large number of streaks. The thin streaks that are evident in Fig. 5.1 for the cases of large N and small K (e.g., when N = 512 and K = 64) are caused by an insufficient number of projections. It is easily shown that when only a small number of filtered projections of a small object are backprojected, the result is a star-shaped pattern. This is illustrated in Fig. 5.7: in (a) are shown four projections of a point object, in (b) the filtered projections, and in (c) their backprojections. The number of projections should be roughly equal to the number of rays in each projection. This can be shown analytically for the case of parallel projections by the following argument: By the Fourier Slice Theorem, the Fourier transform of each projection is a slice of the two-dimensional Fourier transform of the object. In the frequency domain shown in Fig. 5.8, each radial line, such as A_1A_2, is generated by one projection. If there are M_proj
Fig. 5.7: The backprojection operation introduces a star-shaped pattern to the reconstruction. (From [Ros82].)
projections uniformly distributed over 180°, the angular interval δ between successive radial lines is given by
\delta = \frac{\pi}{M_{\text{proj}}}.   (10)
If τ is the sampling interval used for each projection, the highest spatial frequency W measured for each projection will be
W = \frac{1}{2\tau}.   (11)
This is the radius of the disk shown in Fig. 5.8. The distance between consecutive sampling points on the periphery of this disk is equal to A_2B_2 and
Fig. 5.8: Frequency domain parameters pertinent to parallel projection data. (From [Kak84].)
is given by
A_2B_2 = W\delta = \frac{1}{2\tau}\,\frac{\pi}{M_{\text{proj}}}.   (12)
If there are N_ray sampling points in each projection, the total number of independent frequency domain sampling points on a line such as A_1A_2 will also be the same. Therefore, the distance ε between any two consecutive sampling points on each radial line in Fig. 5.8 will be
\epsilon = \frac{2W}{N_{\text{ray}}} = \frac{1}{N_{\text{ray}}\,\tau}.   (13)

Equating (12) and (13) yields M_proj = (π/2) N_ray,
which implies that the number of projections should be roughly the same as the number of rays per projection. The reader may have noticed that the thin streaks caused by an insufficient number of projections (see, e.g., the image for N = 512 and K = 64 in Fig. 5.1) appear broken. This is caused by two-dimensional aliasing due to the display grid being only 128 × 128. When, say, N = 512, the highest frequency in each projection can be 256 cycles per projection length, whereas the highest frequency that can be displayed on the image grid is 64 cycles per image width (or height). The effect of this two-dimensional aliasing is very pronounced in the left three images for the N = 512 row and the left two images for the N = 256 row in Fig. 5.1. As mentioned in Chapter 2, the artifacts generated by this two-dimensional aliasing are called Moiré patterns. These artifacts can be diminished by tailoring the bandwidth of the reconstruction kernel (filter) to match the display resolution. From the computer simulation and analytical results presented in this section, one can conclude that for a well-balanced N × N reconstructed image, the number of rays in each projection should be roughly N and the total number of projections should also be roughly N.
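The frequency-domain argument of (10)-(13) can be sketched numerically. The function below is illustrative (its name is ours, not from the text); it returns the number of projections that makes the azimuthal sample spacing on the periphery of the frequency disk equal to the radial spacing.

```python
import numpy as np

# Sketch of eqs. (10)-(13): for N_ray samples per projection at interval
# tau, find the M_proj that equates the azimuthal spacing A2B2 with the
# radial spacing eps on each slice of the 2-D Fourier plane.
def balanced_projections(n_ray):
    tau = 1.0                      # sampling interval (arbitrary units)
    W = 1.0 / (2.0 * tau)          # eq. (11): highest measured frequency
    eps = 2.0 * W / n_ray          # eq. (13): radial sample spacing
    # eq. (12): A2B2 = W * (pi / M_proj); setting A2B2 = eps gives
    # M_proj = (pi / 2) * n_ray
    m_proj = W * np.pi / eps
    return m_proj

print(balanced_projections(128))   # (pi/2) * 128, roughly 201 views
```

The result is of the same order as the number of rays, which is the conclusion drawn above.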
The finite width T_d of each detector can be modeled by an aperture function (we will use a(x) in our discussion):
a(x) = \begin{cases} 1, & |x| \le \frac{T_d}{2} \\ 0, & \text{elsewhere.} \end{cases}   (16)
In the frequency domain, the Fourier transform of the ideal projection is multiplied by A(w), the transform of this function, implying that we are in effect passing the projection through a low pass filter (LPF). Since the first zero of A(w) is located at 2π/T_d, it is not unreasonable to say that the effect of A(w) is to filter out all frequencies higher than
w_{\text{LPF}} = \frac{2\pi}{T_d}.   (18)
In other words, we are approximating the aperture function in the frequency domain by

A(w) \approx \begin{cases} T_d, & |w| \le \frac{2\pi}{T_d} \\ 0, & \text{elsewhere.} \end{cases}   (19)
Let's say that we are using an array of detectors to measure a projection and that the array is characterized by T_s, the center-to-center spacing between the detectors. Measurement of the projection data is equivalent to multiplication of the low pass filtered projection with a train d(x) of impulses, where d(x) is given by
d(x) = \sum_{n=-\infty}^{\infty} \delta(x - nT_s)   (20)

with Fourier transform

D(w) = \frac{2\pi}{T_s} \sum_{n=-\infty}^{\infty} \delta\!\left(w - n\,\frac{2\pi}{T_s}\right).   (21)
In the frequency domain the effect of the detector aperture and sampling distance is shown in Fig. 5.9. We can now write the following expression for the recorded samples p_n of an ideal projection p(x):
p_n = \left[ p(x) * a(x) \right]\,\delta(x - nT_s)   (22)

or, equivalently,

p_n = \text{IFT}\left[ P(w)\,A(w) \right]\Big|_{x = nT_s}   (23)
Fig. 5.9: The Fourier transform of the detector array response is shown for three different detector spacings. For values of T_s such that T_s > T_d/2 there will be aliasing. If T_s ≤ T_d/2, then aliasing is minimized.
where P(w) is the Fourier transform of the projection data and IFT is the inverse Fourier transform. Clearly, there will be aliasing in the sampled projections unless
T_s < \frac{T_d}{2}.   (24)
This relationship implies that we should have at least two samples per detector width [Jos80a]. There are several ways to measure multiple samples per detector width. With first-generation (parallel beam) scanners, it is simply a matter of sampling the detectors more often as the source-detector combination moves past the object. Increasing the sampling density can also be done in fourth-generation (fixed-detector) scanners by considering each detector as the apex of a fan. Now as the source rotates, each detector measures ray integrals and the ray density can be made arbitrarily dense by increasing the sampling rate for each detector. For third-generation scanners a technique known as quarter detector offset is used. Recall that for a fan beam scanner only data for 180° plus the width of the fan need be collected; if a full 360° of data is collected then the rest of the data is effectively redundant. But if the detector array is offset by 1/4 of the detector spacing (ordinarily, the detector bank is symmetric with respect to the line joining the x-ray source and the center of rotation; by offset is meant translating the detector bank to the left or right, thereby causing rays in opposite views to be unique) and a full 360° of data is collected, it is possible to use the extra views to obtain unique information about the object. This
effectively doubles the projection sampling frequency. Fig. 5.10 compares the effect of quarter detector offset on a first-generation and a third-generation scanner. We will now discuss the second factor that causes projections to become blurred, namely, the size of the x-ray beam. As we will show, we can't account for the extent of blurring caused by this effect in as elegant a manner as we did for the detector aperture. The primary source of difficulty is that objects undergo different amounts of blurring depending upon how far away they are from the source of x-rays. Fig. 5.11 shows the effect of a source of nonzero width. As is evident from the figure, the effect on a projection is dependent upon where the object is located between the source and the detectors. Simple geometrical arguments show that for a given point in the object, the size of its image at the detector array is given by
B = w_s\,\frac{D_d}{D_s}   (25)
where w_s is the width of the source and D_d and D_s are, respectively, the distances from the point in the object to the detectors and the source. This then would roughly be a measure of the blurring introduced by a nonzero-width source in a parallel beam machine. In a fan beam system, the above-mentioned blurring is exacerbated by the natural divergence of the fan. To illustrate our point, consider two detector lines for a fan beam system, as shown in Fig. 5.12. The projection data measured along the two lines would be identical except for a stretching of the projection function along the detector arc as we go to the array farther away from the center. This stretch factor is given by (see Fig. 5.13)

\frac{D_s + D_d}{D_s}   (26)

where the distances D_s and D_d are for object points at the center of the scan. If we combine the preceding two equations, we obtain for a fan beam system the blurring caused by a nonzero-width source
B = w_s\,\frac{D_d}{D_s}\cdot\frac{D_s}{D_s + D_d} = w_s\,\frac{D_d}{D_s + D_d}   (27)
with the understanding that, rigorously speaking, this equation is only valid for object points close to the center of rotation. Since the size of the image depends on the position along the ray integral, this leads to a spatially varying blurring of the projection data. Near the detector the blurring will be small, while near the source a point in the object could be magnified by a large amount. Since the system is linear, each point in the object will be convolved with a scaled image of the source point and then projected onto the detector line.
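The geometry of (25)-(27) can be sketched as follows; the function names are ours, but the formulas are the ones given above.

```python
# Minimal sketch of eqs. (25)-(27): blur at the detector caused by a
# source of nonzero width w_s, for a point at distance d_s from the
# source and d_d from the detector line.
def parallel_blur(w_s, d_s, d_d):
    # eq. (25): image of the source projected through the object point
    return w_s * d_d / d_s

def fan_beam_blur(w_s, d_s, d_d):
    # eq. (27): eq. (25) divided by the fan stretch factor (Ds+Dd)/Ds
    return w_s * d_d / (d_s + d_d)

print(fan_beam_blur(1.0, 3.0, 1.0))   # 0.25
```

As the text notes, the blur vanishes for points at the detector (d_d = 0) and grows for points near the source.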
Fig. 5.10: The ray paths for normal and quarter offset detectors are compared here. Each ray path is represented by plotting an asterisk at the point on the ray closest to the origin. In each case 6 projections of 10 rays each were gathered by rotating a full 360° around the object. (Note: normally only 180° of projection data is used for parallel projection reconstruction.) (a) shows parallel projections without quarter offset (note that the extra 180° of data is redundant). (b) is identical to (a) but the detector array has been shifted by a quarter of the sampling interval. (c) shows equiangular projections without quarter offset and (d) is identical to (c) but the detector array has been shifted by a quarter of the sampling interval.
P_{m,\theta}(t) = P_\theta(t) + \nu_\theta(t).   (28)
We will assume that the noise is a stationary zero-mean random process and that its values are uncorrelated for any two rays in the system. Therefore,
E[\nu_{\theta_1}(t_1)\,\nu_{\theta_2}(t_2)] = S_0\,\delta(\theta_1 - \theta_2)\,\delta(t_1 - t_2).   (29)
Fig. 5.10: Continued.
The reconstruction from the measured projections is obtained by first filtering each projection:

Q_\theta(t) = \int_{-\infty}^{\infty} S_\theta(w)\,|w|\,G(w)\,e^{j2\pi w t}\, dw   (30)
where S_θ(w) is the Fourier transform of P_θ(t) and G(w) is the smoothing filter used; and then backprojecting the filtered projections:

\hat{f}(x, y) = \int_0^\pi Q_\theta(x\cos\theta + y\sin\theta)\, d\theta   (31)
Fig. 5.11: A finite source of width w_s will be imaged by each point in the object onto the detector line. The size of the image will depend on the ratio of D_s to D_d. The images of two points in the object are shown here.
where \hat{f}(x, y) is the reconstructed approximation to the original image f(x, y). For the purpose of noise calculations, we substitute (28) and (30) in (31) and write

\hat{f}(x, y) = \int_0^\pi \int_{-\infty}^{\infty} \left[ S_\theta(w) + N_\theta(w) \right] |w|\,G(w)\,e^{j2\pi w(x\cos\theta + y\sin\theta)}\, dw\, d\theta   (32)
Fig. 5.12: The magnification of a projection due to a fan beam system is shown here. To find the effect of the source or detector aperture on image resolution it is necessary to map the blurring of the projection into an equivalent object size.
where, as before, S_θ(w) is the Fourier transform of the ideal projection P_θ(t), and N_θ(w) is the Fourier transform of the additive noise ν_θ(t). (Here we assume N_θ(w) exists in some sense. Note that in spite of our notation we are only dealing with projections with finite support.) Clearly,
N_\theta(w) = \int_{-\infty}^{\infty} \nu_\theta(t)\,e^{-j2\pi w t}\, dt   (33)

from which we can write

E[N_{\theta_1}(w_1)\,N^*_{\theta_2}(w_2)] = S_0\,\delta(w_1 - w_2)\,\delta(\theta_1 - \theta_2)   (35)
where we have used (29). Since N_θ(w) is random, the reconstructed image given by (32) is also random. The mean value of \hat{f}(x, y) is given by
E[\hat{f}(x, y)] = \int_0^\pi \int_{-\infty}^{\infty} \left[ S_\theta(w) + E\{N_\theta(w)\} \right] |w|\,G(w)\,e^{j2\pi w(x\cos\theta + y\sin\theta)}\, dw\, d\theta.   (36)
Since the noise is zero mean, E[N_θ(w)] = 0, and we get

E[\hat{f}(x, y)] = \int_0^\pi \int_{-\infty}^{\infty} S_\theta(w)\,|w|\,G(w)\,e^{j2\pi w(x\cos\theta + y\sin\theta)}\, dw\, d\theta.   (37)
Now the variance of noise at a point (x, y) in the reconstructed image is given by

\sigma^2_{\text{recon}}(x, y) = E\left[\left(\hat{f}(x, y) - E[\hat{f}(x, y)]\right)^2\right]   (38)

= E\left[\left| \int_0^\pi \int_{-\infty}^{\infty} N_\theta(w)\,|w|\,G(w)\,e^{j2\pi w(x\cos\theta + y\sin\theta)}\, dw\, d\theta \right|^2\right].   (39)

Expanding the square as a double integral (40) and using the correlation property (35) to collapse one pair of integrals (41), we obtain

\sigma^2_{\text{recon}} = \pi S_0 \int_{-\infty}^{\infty} |w|^2\,|G(w)|^2\, dw   (42)
where we have dropped the (x, y) dependence of \sigma^2_{\text{recon}} since it has turned out to be independent of position in the picture plane. Equation (42) says that in order to reduce the variance of noise in a reconstructed image, the filter function G(w) must be chosen such that the area under the square of |w|G(w) is as small as possible. But note that if there is to be no image distortion, |w|G(w) must be as close to |w| as possible. Therefore, the choice of G(w) depends upon the desired trade-off between image distortion and noise variance. We will conclude this subsection by presenting a brief description of the spectral density of noise in a reconstructed image. To keep our presentation simple we will assume that the projections consist only of zero-mean white noise, \nu_\theta(t). The reconstructed image from the noise projections is given by
Q_\theta(t) = \int_{-\infty}^{\infty} N_\theta(w)\,|w|\,G(w)\,e^{j2\pi w t}\, dw   (43)

\hat{f}(x, y) = \int_0^\pi \int_{-\infty}^{\infty} N_\theta(w)\,|w|\,G(w)\,e^{j2\pi w(x\cos\theta + y\sin\theta)}\, dw\, d\theta   (44)
where, as before, N_θ(w) is the Fourier transform of ν_θ(t). Now let R(α, β) be the autocorrelation function of the reconstructed image:
R(\alpha, \beta) = E[\hat{f}(x + \alpha,\, y + \beta)\,\hat{f}(x, y)]   (45)

= S_0 \int_0^\pi d\theta \int_{-\infty}^{\infty} dw\; w^2\,|G(w)|^2\,e^{j2\pi w(\alpha\cos\theta + \beta\sin\theta)}.   (46)
From this one can show that the spectral density of the reconstructed noise is dependent only on the distance from the origin in the frequency domain and is given by

S_{\text{recon}}(w) = \frac{S_0}{2}\, w\, |G(w)|^2   (47)
where, of course, w is always positive. This may be shown by first expressing the result for the autocorrelation function in polar coordinates

R(r, \phi) = S_0 \int_0^\pi d\theta \int_0^\infty dw\; w^2\,|G(w)|^2\,e^{j2\pi w r \cos(\theta - \phi)}   (48)

= S_0\,\pi \int_0^\infty w^2\,|G(w)|^2\,J_0(2\pi w r)\, dw   (49)
and recognizing the Hankel transform relationship between the autocorrelation function and the spectral density given above.
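The trade-off between noise variance and distortion noted after (42) can be illustrated numerically. The Hamming-style taper below is an illustrative choice of G(w), not a filter prescribed in the text.

```python
import numpy as np

# Numeric sketch of eq. (42): reconstruction noise variance is
# proportional to the area under |w|^2 |G(w)|^2. A smoothing window
# reduces that area relative to the plain ramp (G = 1), at the cost of
# departing from |w| and hence introducing some image distortion.
w = np.linspace(-0.5, 0.5, 2001)
dw = w[1] - w[0]
ramp_sq = w**2                                          # |w|^2, G(w) = 1
ham_sq = (w * (0.54 + 0.46 * np.cos(2 * np.pi * w)))**2  # tapered G(w)

var_ramp = np.sum(ramp_sq) * dw      # Riemann sum, ~ 1/12
var_ham = np.sum(ham_sq) * dw
print(var_ham / var_ramp)            # smaller than 1
```

The windowed filter always yields the smaller area, consistent with the discussion above.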
\int_{AB} \mu(x, y)\, ds = \ln N_{\text{in}} - \ln N_\theta(k\tau)   (50)

where N_θ(kτ) denotes the number of detected photons for the ray at location (θ, kτ) as shown in the figure. Randomness in the measurement of P_θ(t) is introduced by statistical fluctuations in N_θ(kτ). Note that in practice only N_θ(kτ) is
Fig. 5.14: An x-ray beam with a width of τ is shown traveling through a cross section of the human body. (From [Kak79].)
measured directly. The value of N_in for all rays is inferred by monitoring the x-ray source with a reference detector and from the knowledge of the spatial distribution of emitted x-rays. It is usually safe to assume that the reference x-ray flux is large enough so that N_in may be considered to be known with negligible error. In the rest of the discussion here we will assume that for each ray integral measurement N_in is a known deterministic constant, while on the other hand the directly measured quantity N_θ(kτ) is a random variable. The randomness of N_θ(kτ) is statistically described by the Poisson probability function [Ter67], [Pap65]:
p\{N_\theta(k\tau)\} = \frac{e^{-\bar{N}_\theta(k\tau)}\left[\bar{N}_\theta(k\tau)\right]^{N_\theta(k\tau)}}{N_\theta(k\tau)!}   (51)
where p{·} denotes the probability and \bar{N}_\theta(k\tau) the expected value of the measurement:

\bar{N}_\theta(k\tau) = E\{N_\theta(k\tau)\}   (52)
where E{·} denotes statistical expectation. Note that the variance of each measurement is given by

\text{variance}\{N_\theta(k\tau)\} = \bar{N}_\theta(k\tau).   (53)
Because of the randomness in N_θ(kτ) the true value of P_θ(kτ) will differ from its measured value, which will be denoted by P_m(kτ). To bring out this distinction we reexpress (50) as follows:

P_m(k\tau) = \ln N_{\text{in}} - \ln N_\theta(k\tau)   (54)

and

P_\theta(k\tau) = \int_{AB} \mu(x, y)\, ds.   (55)
By interpreting e^{-P_\theta(k\tau)} as the probability that (along a ray such as the one shown in Chapter 4) a photon entering the object from side A will emerge (without scattering or absorption) at side B, one can show that

\bar{N}_\theta(k\tau) = N_{\text{in}}\,e^{-P_\theta(k\tau)}.   (56)
We will now assume that all fluctuations (departures from the mean) in N_θ(kτ) that have a significant probability of occurrence are much less than the mean. With this assumption and using (50) and (51) it is easily shown that
E\{P_m(k\tau)\} = P_\theta(k\tau)   (57)

\text{variance}\{P_m(k\tau)\} = \frac{1}{\bar{N}_\theta(k\tau)}.   (58)
From the statistical properties of the measured projections, P_m(kτ), we will now derive those of the reconstructed image. Using the discrete filtered backprojection algorithms of Chapter 3, the relationship between the reconstruction at a point (x, y) and the measured projections is given by
\hat{f}(x, y) = \frac{\pi}{M_{\text{proj}}} \sum_{i=1}^{M_{\text{proj}}} \sum_k P_{m,\theta_i}(k\tau)\, h(x\cos\theta_i + y\sin\theta_i - k\tau).   (59)
Taking expectations and using (57), we find

E\{\hat{f}(x, y)\} = \frac{\pi}{M_{\text{proj}}} \sum_{i=1}^{M_{\text{proj}}} \sum_k P_{\theta_i}(k\tau)\, h(x\cos\theta_i + y\sin\theta_i - k\tau)   (60)
and

\text{variance}\{\hat{f}(x, y)\} = \left(\frac{\pi}{M_{\text{proj}}}\right)^2 \sum_{i=1}^{M_{\text{proj}}} \sum_k \frac{h^2(x\cos\theta_i + y\sin\theta_i - k\tau)}{\bar{N}_{\theta_i}(k\tau)}   (61)
where we have used the assumption that fluctuations in P_m(kτ) are uncorrelated for different rays. Equation (60) shows that the expected value of the reconstructed image is equal to that made from the ideal projection data. Before we interpret (61) we will rewrite it as follows. In terms of the ideal projections, P_θ(kτ), we define new projections as

V_\theta(k\tau) = e^{P_\theta(k\tau)}   (62)

and a new filter function, h_v(t), as

h_v(t) = h^2(t).   (63)

Substituting (56), (62), and (63) in (61), we get

\text{variance}\{\hat{f}(x, y)\} = \left(\frac{\pi}{M_{\text{proj}}}\right)^2 \frac{1}{N_{\text{in}}} \sum_{i=1}^{M_{\text{proj}}} \sum_k V_{\theta_i}(k\tau)\, h_v(x\cos\theta_i + y\sin\theta_i - k\tau).   (64)
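The substitution used to obtain (64) can be checked numerically; the values below are illustrative.

```python
import numpy as np

# Quick check of the algebra behind eq. (64): with
# N_bar = N_in * exp(-P) (eq. (56)), V = exp(P) (eq. (62)), and
# h_v = h^2 (eq. (63)), the summand h^2 / N_bar of eq. (61) equals
# V * h_v / N_in.
rng = np.random.default_rng(1)
n_in = 1.0e5
p = rng.uniform(0.5, 3.0, 100)       # ideal ray sums P(k tau)
h = rng.normal(size=100)             # samples of the filter h

n_bar = n_in * np.exp(-p)            # eq. (56)
v = np.exp(p)                        # eq. (62)
h_v = h**2                           # eq. (63)
```

The two forms agree term by term, which is all that the rewrite from (61) to (64) asserts.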
We will now define a relative-uncertainty image as follows¹:

\text{relative-uncertainty at } (x, y) = \frac{N_{\text{in}}\,\text{variance}\{\hat{f}(x, y)\}}{[\hat{f}(x, y)]^2}.   (65)

In computer simulation studies with this definition the relative-uncertainty image becomes independent of the number of incident photons used for measurements, and is completely determined by the choice of the phantom. Fig. 5.15(c) shows the relative-uncertainty image for the Shepp and Logan phantom (Fig. 5.15(b)) for M_proj = 120 and τ = 2/101 and for the h(t) originally described in Chapter 3. Fig. 5.15(d) shows graphically the middle horizontal line through Fig. 5.15(c). The relative-uncertainty at (x, y) gives us a measure of how much confidence an observer might place in the reconstructed value at the point (x, y) vis-a-vis those elsewhere. We will now derive some special cases of (64). Suppose we want to determine the variance of noise at the origin. From (64) we can write

\text{variance}\{\hat{f}(0, 0)\} = \left(\frac{\pi}{M_{\text{proj}}}\right)^2 \sum_{i=1}^{M_{\text{proj}}} \sum_k \frac{h^2(k\tau)}{\bar{N}_{\theta_i}(k\tau)}   (66)

where we have used the fact that h(t) is an even function. Chesler et al. [Che77] have argued that since h(kτ) drops rapidly with k (see Chapter 3), it is safe to make the following approximation for objects that are approximately homogeneous near the center:

¹ This result only applies when compensators aren't used to reduce the dynamic range of the detector output signal. In noise analyses their effect can be approximately modeled by using different N_in's for different rays.
\text{variance}\{\hat{f}(0, 0)\} \approx \left(\frac{\pi}{M_{\text{proj}}}\right)^2 \left[\sum_k h^2(k\tau)\right] \sum_{i=1}^{M_{\text{proj}}} \frac{1}{\bar{N}_{\theta_i}(0)}   (67)

which, when τ is small enough, may also be written as

\text{variance}\{\hat{f}(0, 0)\} = \left(\frac{\pi}{M_{\text{proj}}}\right)^2 \frac{1}{\tau} \left[\int_{-\infty}^{\infty} h^2(t)\, dt\right] \sum_{i=1}^{M_{\text{proj}}} \frac{1}{\bar{N}_{\theta_i}(0)}.   (68)
Note again that the \bar{N}_{\theta_i}(0) are the mean number of exiting photons measured for the center ray in each projection. Using (68) Chesler et al. [Che77] have arrived at the very interesting result that (for the same uncertainty in measurement) the total number of photons per resolution element required for x-ray CT (using the filtered backprojection algorithm) is the same as that required for the measurement of attenuation of an isolated (excised) piece of the object with dimensions equal to those of the resolution element. Now consider the case where the cross section for which the CT image is being reconstructed is circularly symmetric. The \bar{N}_{\theta_i}(0) for all i will then be equal; call their common value \bar{N}_0. That is, let

\bar{N}_0 = \bar{N}_{\theta_1}(0) = \bar{N}_{\theta_2}(0) = \cdots.   (69)

The expression (68) for the variance may now be written as

\text{variance}\{\hat{f}(0, 0)\} = \frac{\pi^2}{M_{\text{proj}}\,\bar{N}_0\,\tau} \int_{-\infty}^{\infty} h^2(t)\, dt.   (70)
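The time-domain integral appearing in (70) can be evaluated numerically for the bandlimited ramp filter of Chapter 3, whose impulse response has h(0) = 1/(4τ²), zeros at even multiples of τ, and −1/(k²π²τ²) at odd multiples; the sum matches the area under |H(w)|² = w² over the band |w| < 1/(2τ).

```python
import numpy as np

# Check that tau * sum h^2(k tau) for the bandlimited ramp filter of
# Chapter 3 equals the integral of w^2 over |w| < 1/(2 tau), i.e. the
# Parseval step used next in the text.
tau = 1.0
k = np.arange(-4000, 4001)
h = np.zeros_like(k, dtype=float)
h[k == 0] = 1.0 / (4.0 * tau**2)               # center tap
odd = (k % 2) != 0
h[odd] = -1.0 / (k[odd]**2 * np.pi**2 * tau**2)  # odd taps; even taps are 0

time_area = tau * np.sum(h**2)
freq_area = 1.0 / (12.0 * tau**3)   # integral of w^2 over |w| < 1/(2 tau)
print(time_area, freq_area)         # nearly equal
```

The agreement confirms that the two forms of the variance at the origin are the same quantity.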
By Parseval's theorem this result may be expressed in the frequency domain as

\text{variance}\{\hat{f}(0, 0)\} = \frac{\pi^2}{M_{\text{proj}}\,\bar{N}_0\,\tau} \int_{-1/2\tau}^{1/2\tau} |H(w)|^2\, dw   (71)
where τ is the sampling interval for the projection data. This result says that the variance of noise at the origin is proportional to the area under the square of the filter function used for reconstruction. This doesn't imply that this area could be made arbitrarily small, since any major departure from the |w| function will introduce spatial distortion in the image even though it may be less noisy. None of the equations above should be construed to imply that
5.4 References
[Alv79] R. E. Alvarez and J. P. Stonestrom, "Optimal processing of computed tomography images using experimentally measured noise properties," J. Comput. Tomog., vol. 3, no. 1, pp. 77-84, 1979.
[Bro76] R. A. Brooks and G. DiChiro, "Statistical limitations in x-ray reconstructive tomography," Med. Phys., vol. 3, pp. 237-240, 1976.
[Bro78] R. A. Brooks, G. H. Weiss, and A. J. Talbert, "A new approach to interpolation in computed tomography," J. Comput. Assist. Tomog., vol. 2, pp. 577-585, Nov. 1978.
[Bro79] R. A. Brooks, G. H. Glover, A. J. Talbert, R. L. Eisner, and F. A. DiBianca, "Aliasing: A source of streaks in computed tomograms," J. Comput. Assist. Tomog., vol. 3, no. 4, pp. 511-518, Aug. 1979.
[Che77] D. A. Chesler, S. J. Riederer, and N. J. Pelc, "Noise due to photon counting statistics in computed x-ray tomography," J. Comput. Assist. Tomog., vol. 1, pp. 64-74, Jan. 1977.
[Cra79] C. R. Crawford and A. C. Kak, "Aliasing artifacts in computerized tomography," Appl. Opt., vol. 18, pp. 3704-3711, 1979.
[Gor78] J. C. Gore and P. S. Tofts, "Statistical limitations in computed tomography," Phys. Med. Biol., vol. 23, pp. 1176-1182, 1978.
[Jos80a] P. M. Joseph, "The influence of gantry geometry on aliasing and other geometry dependent errors," IEEE Trans. Nucl. Sci., vol. 27, pp. 1104-1111, 1980.
[Jos80b] P. M. Joseph, R. D. Spital, and C. D. Stockham, "The effects of sampling on CT images," Comput. Tomog., vol. 4, pp. 189-206, 1980.
[Jos80] P. M. Joseph and R. A. Schulz, "View sampling requirements in fan beam computed tomography," Med. Phys., vol. 7, no. 6, pp. 692-702, Nov./Dec. 1980.
[Kak79] A. C. Kak, "Computerized tomography with x-ray emission and ultrasound sources," Proc. IEEE, vol. 67, pp. 1245-1272, 1979.
[Kak84] -, "Image reconstruction from projections," in Digital Image Processing Techniques, M. P. Ekstrom, Ed. New York, NY: Academic Press, 1984.
[Kow77] G. Kowalski, "Reconstruction of objects from their projections. The influence of measurement errors on the reconstruction," IEEE Trans. Nucl. Sci., vol. NS-24, pp. 850-864, Feb. 1977.
[Pap65] A. Papoulis, Probability, Random Variables, and Stochastic Processes. New York, NY: McGraw-Hill, 1965 (2nd ed., 1984).
[Rie78] S. J. Riederer, N. J. Pelc, and D. A. Chesler, "The noise power spectrum in computer x-ray tomography," Phys. Med. Biol., vol. 23, pp. 446-454, 1978.
[Ros82] A. Rosenfeld and A. C. Kak, Digital Picture Processing, 2nd ed. New York, NY: Academic Press, 1982.
[Sch77] R. A. Schulz, E. C. Olson, and K. S. Han, "A comparison of the number of rays vs the number of views in reconstruction tomography," SPIE Conf. on Optical Instrumentation in Medicine VI, vol. 127, pp. 313-320, 1977.
[She74] L. A. Shepp and B. F. Logan, "The Fourier reconstruction of a head section," IEEE Trans. Nucl. Sci., vol. NS-21, pp. 21-43, 1974.
[Ter67] M. TerPogossian, The Physical Aspects of Diagnostic Radiology. New York, NY: Harper and Row, 1967.
[Tre78] O. J. Tretiak, "Noise limitations in x-ray computed tomography," J. Comput. Assist. Tomog., vol. 2, pp. 477-480, Sept. 1978.
Diffraction tomography is an important alternative to straight ray tomography. For some applications, the harm caused by the use of x-rays, an ionizing radiation, could outweigh any benefits that might be gained from the tomogram. This is one reason for the interest in imaging with acoustic or electromagnetic radiation, which are considered safe at low levels. In addition, these modalities measure the acoustic and electromagnetic refractive index and thus make available information that isn't obtainable from x-ray tomography. As mentioned in Chapter 4, the accuracy of tomography using acoustic or electromagnetic energy and straight ray assumptions suffers from the effects of refraction and/or diffraction. These cause each projection to represent integrals not along straight lines but, in some cases where geometrical laws of propagation apply, along paths determined by the refractive index of the object. When the geometrical laws of propagation don't apply, one can't even use the concept of line integrals, as will be clear from the discussions in this chapter. There are two approaches to correcting these errors. One approach is to use an initial estimate of the refractive index to estimate the path each ray follows. This approach is known as algebraic reconstruction and, for weakly refracting objects, will converge to the correct refractive index distribution after a few iterations. We will discuss algebraic techniques in Chapter 7. When the sizes of the inhomogeneities in the object become comparable to or smaller than a wavelength, it is not possible to use ray theory (geometric propagation) based concepts; instead one must resort directly to wave propagation and diffraction based phenomena. In this chapter, we will show that if the interaction of an object and a field is modeled with the wave equation, then a tomographic reconstruction approach based on the Fourier Diffraction Theorem is possible for weakly diffracting objects.
The Fourier Diffraction Theorem is very similar to the Fourier Slice Theorem of conventional tomography: In conventional (or straight ray) tomography, the Fourier Slice Theorem says that the Fourier transform of a projection gives the values of the Fourier transform of the object along a straight line. When diffraction effects are included, the Fourier Diffraction Theorem says that a projection yields the Fourier transform of the object over a semicircular arc. This result is fundamental to diffraction tomography. In this chapter the basics of diffraction tomography are presented for application with acoustic, microwave, and optical energy. For each case we
TOMOGRAPHIC IMAGING WITH DIFFRACTING SOURCES 203
will start with the wave equation and use either the Born or the Rytov approximation to derive a simple expression that relates the scattered field to the object. This relationship will then be inverted for several measurement geometries to give an estimate of the object as a function of the scattered field. Finally, we will show simulations and experimental results that show the limitations of the method.

6.1 Diffracted Projections

Tomography with diffracting energy requires an entirely different approach to the manner in which projections are mathematically modeled. Acoustic and electromagnetic waves don't travel along straight rays and the projections aren't line integrals, so we will describe the flow of energy with a wave equation. We will first consider the propagation of waves in homogeneous media, although our ultimate interest lies in imaging the inhomogeneities within an object. The propagation of waves in a homogeneous object is described by a wave equation, which is a second-order linear differential equation. Given such an equation and the source fields in an aperture, we can determine the fields everywhere else in the homogeneous medium. There are no direct methods for solving the problem of wave propagation in an inhomogeneous medium; in practice, approximate formalisms are used that allow the theory of homogeneous medium wave propagation to be used for generating solutions in the presence of weak inhomogeneities. The better known among these approximate methods go under the names of Born and Rytov approximations. Although in most cases we are interested in reconstructing three-dimensional objects, the diffraction tomography theory presented in this chapter will deal mostly with the two-dimensional case. Note that when a three-dimensional object can be assumed to vary only slowly along one of the dimensions, a two-dimensional theory can be readily applied to such an object.
This assumption, for example, is often made in conventional computerized tomography where images are made of single slices of the object. In any case, we have two reasons for limiting our presentation to the two-dimensional case: First and most importantly, the ideas behind the theory are often easier to visualize (and certainly to draw) in two dimensions. Second, the technology has not yet made it practical to implement the large three-dimensional transforms that are required for direct three-dimensional reconstructions of objects; furthermore, direct display of three-dimensional entities isn't easy.

6.1.1 Homogeneous Wave Equation

An acoustic pressure field or an electromagnetic field must satisfy the following differential equation [Goo68]:
\nabla^2 u(\vec{r}, t) - \frac{1}{c^2(\vec{r})}\,\frac{\partial^2 u(\vec{r}, t)}{\partial t^2} = 0   (1)
where u represents the magnitude of the field as a function of position \vec{r} and time t, and c is the velocity of the field as a function of position. This form of the wave equation is more complicated than needed; most derivations of diffraction tomography are done by considering only one temporal frequency at a time. This decomposition can be accomplished by finding the Fourier transform of the field with respect to time at each position \vec{r}. Note that the above differential equation is linear, so the solutions for different frequencies can be added to find additional solutions. A field u(\vec{r}, t) with a temporal frequency of \omega radians per second (rps) satisfies the equation

[\nabla^2 + k^2(\vec{r})]\,u(\vec{r}) = 0   (2)

where k(\vec{r}) is the wavenumber of the field and is equal to

k(\vec{r}) = \frac{\omega}{c(\vec{r})} = \frac{2\pi}{\lambda}   (3)

where \lambda is the field's wavelength. At this point the field is at a single frequency and we will write it as

\text{Real Part}\left\{ u(\vec{r})\,e^{-j\omega t} \right\}.   (4)
In this form it is easy to see that the time dependence of the field can be suppressed and the wave equation rewritten as

(\nabla^2 + k^2(\vec r))u(\vec r) = 0.   (5)
For acoustic (or ultrasonic) tomography, u(\vec r) can be the pressure field at position \vec r. For the electromagnetic case, assuming the applicability of a scalar propagation equation, u(\vec r) may be set equal to the complex amplitude of the electric field along its polarization. In both cases, u(\vec r) represents the complex amplitude of the field. For homogeneous media the wavenumber is constant and we can further simplify the wave equation. Setting the wavenumber equal to

k(\vec r) = k_0   (6)

the wave equation becomes

(\nabla^2 + k_0^2)u(\vec r) = 0.   (7)
The vector gradient operator, \nabla, can be expanded into its two-dimensional representation and the wave equation becomes

\frac{\partial^2 u(\vec r)}{\partial x^2} + \frac{\partial^2 u(\vec r)}{\partial y^2} + k_0^2 u(\vec r) = 0.   (8)

TOMOGRAPHIC IMAGING WITH DIFFRACTING SOURCES 205

One valid solution to this equation is the two-dimensional plane wave

u(\vec r) = e^{j\vec k \cdot \vec r} = e^{j(k_x x + k_y y)}   (9)
where the vector \vec k = (k_x, k_y) is the two-dimensional propagation vector and u(\vec r) represents a two-dimensional plane wave of spatial frequency |\vec k|. This form of u(\vec r) represents the basis function for the two-dimensional Fourier transform; using it, we can represent any two-dimensional function as a weighted sum of plane waves. Calculating the derivatives as indicated in (8), we find that only plane waves that satisfy the condition

k_x^2 + k_y^2 = k_0^2   (10)
satisfy the wave equation. This condition is consistent with our intuitive picture of a wave and our earlier description of the wave equation, since for a wave of any given frequency only a single wavelength can exist no matter in which direction the wave propagates. The homogeneous wave equation is a linear differential equation, so we can write the general solution as a weighted sum of each possible plane wave solution. In two dimensions, at a temporal frequency of \omega, the field u(\vec r) is given by
u(\vec r) = \int_{-\infty}^{\infty} \alpha(k_y) e^{j(k_x x + k_y y)}\, dk_y + \int_{-\infty}^{\infty} \beta(k_y) e^{j(-k_x x + k_y y)}\, dk_y   (11)

where, by (10),

k_x = \sqrt{k_0^2 - k_y^2}.   (12)
The form of this equation might be surprising to the reader for two reasons. First, we have split the integral into two parts: we have chosen to represent the coefficients of waves traveling to the right by \alpha(k_y) and those of waves traveling to the left by \beta(k_y). In addition, we have set the limits of the integrals to go from -\infty to \infty. For k_y^2 greater than k_0^2 the radical in (12) becomes imaginary and the plane wave becomes an evanescent wave. These are valid solutions to the wave equation, but because k_x is imaginary, the exponential has a real or attenuating component. This real component causes the amplitude of the wave to either grow or decay exponentially. In practice, these evanescent waves only occur to satisfy boundary conditions, always decaying rapidly far from the boundary, and can often be ignored at a distance greater than 10\lambda from an inhomogeneity.

We will now show by using the plane wave representation that it is possible to express the field anywhere in terms of the fields along a line. The three-dimensional version of this idea gives us the field in three-space if we know the field at all points on a plane. Consider a source of plane waves to the left of a vertical line as shown in Fig. 6.1. If we take the one-dimensional Fourier transform of the field along
Fig. 6.1: A plane wave propagating between two planes undergoes a phase shift dependent on the distance between the planes and the direction of the plane wave.
the vertical line, we can decompose the field into a number of one-dimensional components. Each of these one-dimensional components can then be attributed to one of the valid plane wave solutions to the homogeneous wave equation, because for any one spatial frequency component, k_y, there can exist only two plane waves that satisfy the wave equation. Since we have already constrained the incident field to propagate to the right (all sources are to the left of the measurement line), a one-dimensional Fourier component at a frequency of k_y can be attributed to a two-dimensional wave with a propagation vector of (\sqrt{k_0^2 - k_y^2}, k_y).

We can put this on a more mathematical basis if we compare the one-dimensional Fourier transform of the field to the general form of the wave equation. If we ignore waves that are traveling to the left, then the general solution to the wave equation becomes

u(\vec r) = \int_{-\infty}^{\infty} \alpha(k_y) e^{j(k_x x + k_y y)}\, dk_y.   (13)

If we also move the coordinate system so that the measurement line is at x = 0, the expression for the field becomes

u(x = 0, y) = \int_{-\infty}^{\infty} \alpha(k_y) e^{jk_y y}\, dk_y,   (14)

equal to the one-dimensional Fourier transform of the amplitude distribution function \alpha(k_y).
The amplitude distribution function is therefore given by

\alpha(k_y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} u(x = 0, y)e^{-jk_y y}\, dy.   (15)

This amplitude distribution function can then be substituted into the equation for u(\vec r) to obtain the fields everywhere right of the line x = 0.
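The decompose-and-propagate idea just described can be sketched numerically with FFTs. The following Python fragment is a minimal illustration, not part of the original text; the grid size, wavelength, and propagation distance are arbitrary assumptions, and evanescent components are attenuated automatically by the complex square root:

```python
import numpy as np

def propagate(u0, dy, k0, dist):
    """Propagate a field sampled on a vertical line a distance `dist`
    along x via its plane wave (angular spectrum) decomposition."""
    ky = 2 * np.pi * np.fft.fftfreq(u0.size, d=dy)      # spatial frequencies k_y
    alpha = np.fft.fft(u0)                              # decompose into plane waves
    kx = np.sqrt((k0**2 - ky**2).astype(complex))       # k_x = sqrt(k0^2 - k_y^2)
    return np.fft.ifft(alpha * np.exp(1j * kx * dist))  # shift phases, recompose

# Check with a single plane wave chosen to lie exactly on an FFT bin:
# it should simply pick up the phase factor exp(j k_x (l1 - l0)).
n, dy, d = 256, 0.1, 3.0
k0 = 2 * np.pi                       # wavelength = 1
y = np.arange(n) * dy
ky = 2 * np.pi * 4 / (n * dy)        # an on-grid k_y with |k_y| < k0
u0 = np.exp(1j * ky * y)
u1 = propagate(u0, dy, k0, d)
expected = u0 * np.exp(1j * np.sqrt(k0**2 - ky**2) * d)
print(np.allclose(u1, expected))     # True
```

Run in reverse (a negative `dist`), the same phase factor implements the backward propagation mentioned later in the text.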
We will now show how it is possible to relate the fields on two parallel lines. Again consider the situation diagrammed in Fig. 6.1. If we know a priori that all the sources for the field are positioned, for example, to the left of the line at x = l_0, then we can decompose the field u(x = l_0, y) into its plane wave components. Given a plane wave u_{\text{plane wave}}(x = l_0, y) = \alpha e^{j(k_x l_0 + k_y y)}, the plane wave field undergoes a phase shift as it propagates to the line x = l_1, and we can write
u_{\text{plane wave}}(x = l_1, y) = \alpha e^{j(k_x l_0 + k_y y)} e^{jk_x(l_1 - l_0)} = u_{\text{plane wave}}(x = l_0, y)\, e^{jk_x(l_1 - l_0)}.   (16)
Thus the complex amplitude of the plane wave at x = l_1 is related to its complex amplitude at x = l_0 by a factor of e^{jk_x(l_1 - l_0)}. The complete process of finding the field at a line x = l_1 follows in three steps:
1) Take the Fourier transform of u(x = l_0, y) to find the Fourier decomposition of u as a function of k_y.
2) Propagate each plane wave to the line x = l_1 by multiplying its complex amplitude by the phase factor e^{jk_x(l_1 - l_0)} where, as before, k_x = \sqrt{k_0^2 - k_y^2}.
3) Find the inverse Fourier transform of the plane wave decomposition to find the field at u(x = l_1, y).
These steps can be reversed if, for some reason, one wished to implement on a computer the notion of backward propagation; more on that subject later.

6.1.2 Inhomogeneous Wave Equation

For imaging purposes, our main interest lies in inhomogeneous media. We, therefore, write a more general form of the wave equation as

[\nabla^2 + k(\vec r)^2]u(\vec r) = 0.   (17)
For the electromagnetic case, if we ignore the effects of polarization we can consider k(\vec r) to be a scalar function representing the refractive index of the medium. We now write

k(\vec r) = k_0 n(\vec r) = k_0[1 + n_\delta(\vec r)]   (18)
where k_0 represents the average wavenumber of the medium and n_\delta(\vec r) represents the refractive index deviations. In general, we will assume that the object has a finite size and therefore n_\delta(\vec r) is zero outside the object. Rewriting the wave equation we find

(\nabla^2 + k_0^2)u(\vec r) = -k_0^2[n^2(\vec r) - 1]u(\vec r)   (19)
where the refractive index is given by

n(\vec r) = \sqrt{\frac{\mu(\vec r)\epsilon(\vec r)}{\mu_0\epsilon_0}}.   (20)
Here we have used \mu and \epsilon to represent the magnetic permeability and dielectric constant and the subscript zero to indicate their average values. This new term, on the right-hand side of (19), is known as a forcing function for the differential equation (\nabla^2 + k_0^2)u(\vec r).

Note that (19) is a scalar wave propagation equation. Its use implies that there is no depolarization as the electromagnetic wave propagates through the medium. It is known [Ish78] that the depolarization effects can be ignored only if the wavelength is much smaller than the correlation size of the inhomogeneities in the object. If this condition isn't satisfied, then strictly speaking we must use the following vector wave propagation equation:

\nabla^2 \vec E(\vec r) + k_0^2 n^2 \vec E(\vec r) - 2\nabla\left[\frac{\nabla n}{n} \cdot \vec E\right] = 0   (21)

where \vec E is the electric field vector. A vector theory for diffraction tomography based on this equation has yet to be developed.

For the acoustic case, first-order approximations give us the following wave equation [Kak85], [Mor68]:

(\nabla^2 + k_0^2)u(\vec r) = -k_0^2[n^2(\vec r) - 1]u(\vec r)   (22)

where

n(\vec r) = \frac{c_0}{c(\vec r)}   (23)

and c_0 is the propagation velocity in the medium in which the object is immersed and c(\vec r) is the propagation velocity at location \vec r in the object. For the acoustic case where only compressional waves in a viscous compressible fluid are involved, we have

c(\vec r) = \frac{1}{\sqrt{\rho(\vec r)\kappa(\vec r)}}   (24)
where \rho and \kappa are the local density and the complex compressibility at location \vec r. The forcing function in (22) is only valid provided we can ignore the first and higher order derivatives of the medium parameters. If these higher order derivatives cannot be ignored, the exact form for the wave equation must be used:

(\nabla^2 + k_0^2)u(\vec r) = k_0^2\gamma_\kappa u - \nabla \cdot (\gamma_\rho \nabla u)   (25)
where

\gamma_\kappa = \frac{\kappa - \kappa_0}{\kappa_0}   (26)

\gamma_\rho = \frac{\rho - \rho_0}{\rho}.   (27)
\kappa_0 and \rho_0 are either the compressibility and the density of the medium in which the object is immersed, or the average compressibility and the density of the object, depending upon how the process of imaging is modeled. On the other hand, if the object is a solid and can be modeled as a linear isotropic viscoelastic medium, the forcing function possesses another more complicated form. Since this form involves tensor notation, it will not be presented here and the interested reader is referred to [Iwa75]. Due to the similarities of the electromagnetic and acoustic wave equations, a general form of the wave equation for the small perturbation case can be written as
(\nabla^2 + k_0^2)u(\vec r) = -o(\vec r)u(\vec r)   (28)

where

o(\vec r) = k_0^2[n^2(\vec r) - 1].   (29)
This allows us to describe the mathematics involved in diffraction tomography independent of the form of energy used to illuminate the object. We will consider the field, u(\vec r), to be the sum of two components, u_0(\vec r) and u_s(\vec r). The component u_0(\vec r), known as the incident field, is the field present without any inhomogeneities, or, equivalently, a solution to the equation

(\nabla^2 + k_0^2)u_0(\vec r) = 0.   (30)
The component u_s(\vec r), known as the scattered field, will be that part of the total field that can be attributed solely to the inhomogeneities. What we are saying is that with u_0(\vec r) as the solution to the above equation, we want the field u(\vec r) to be given by u(\vec r) = u_0(\vec r) + u_s(\vec r). Substituting the wave equation for u_0 and the sum representation for u into (28), we get the following wave equation for just the scattered component:

(\nabla^2 + k_0^2)u_s(\vec r) = -u(\vec r)o(\vec r).   (31)
The scalar Helmholtz equation (31) cannot be solved for u_s(\vec r) directly, but a solution can be written in terms of the Green's function [Mor53]. The Green's function, which is a solution of the differential equation

(\nabla^2 + k_0^2)g(\vec r|\vec r') = -\delta(\vec r - \vec r'),   (32)
is written in three-space as

g(\vec r|\vec r') = \frac{e^{jk_0 R}}{4\pi R}   (33)

with

R = |\vec r - \vec r'|.   (34)

In two dimensions the solution of (32) is written in terms of a zero-order Hankel function of the first kind and can be expressed as

g(\vec r|\vec r') = \frac{j}{4} H_0^{(1)}(k_0|\vec r - \vec r'|).   (35)
In both cases, the Green's function, g(\vec r|\vec r'), is only a function of the difference \vec r - \vec r', so we will often represent the function as simply g(\vec r - \vec r'). Because the object function in (32) represents a point inhomogeneity, the Green's function can be considered to represent the field resulting from a single point scatterer. It is possible to represent the forcing function of the wave equation as an array of impulses, or

o(\vec r)u(\vec r) = \int o(\vec r')\delta(\vec r - \vec r')u(\vec r')\, d\vec r'.   (36)
In this equation we have represented the forcing function of the inhomogeneous wave equation as a summation of impulses weighted by o(\vec r)u(\vec r) and shifted by \vec r'. The Green's function represents the solution of the wave equation for a single delta function; because the left-hand side of the wave equation is linear, we can write a solution by summing up the scattered field due to each individual point scatterer. Using this idea, the total field due to the impulse o(\vec r')\delta(\vec r - \vec r')u(\vec r') is written as a summation of scaled and shifted versions of the impulse response, g(\vec r). This is a simple convolution and the total radiation from all sources on the right-hand side of (31) must be given by the following superposition:

u_s(\vec r) = \int g(\vec r - \vec r')o(\vec r')u(\vec r')\, d\vec r'.   (37)

At first glance it might appear that this is the solution we need for the scattered field, but it is not that simple. We have written an integral equation for the scattered field, u_s, in terms of the total field, u = u_0 + u_s. We still need to solve this equation for the scattered field and we will now discuss two approximations that allow this to be done.

6.2 Approximations to the Wave Equation
In the last section we derived an inhomogeneous integral equation to represent the scattered field, u_s(\vec r), as a function of the object, o(\vec r). This
equation cannot be solved directly, but a solution can be written using either of the two approximations to be described here. These approximations, the Born and the Rytov, are valid under different conditions, but the form of the resulting solutions is quite similar. These approximations are the basis of the Fourier Diffraction Theorem. Mathematically speaking, (37) is a Fredholm equation of the second kind. A number of mathematicians have presented works describing the solution of scattering integrals [Hoc73], [Col83], which should be consulted for the theory behind the approximations we will present.

6.2.1 The First Born Approximation

The first Born approximation is the simpler of the two approaches. Recall that the total field, u(\vec r), is expressed as the sum of the incident field, u_0(\vec r), and a small perturbation, u_s(\vec r), or

u(\vec r) = u_0(\vec r) + u_s(\vec r).   (38)

The integral of (37) is now written as
u_s(\vec r) = \int g(\vec r - \vec r')u_0(\vec r')o(\vec r')\, d\vec r' + \int g(\vec r - \vec r')u_s(\vec r')o(\vec r')\, d\vec r'   (39)
but if the scattered field, u_s(\vec r), is small compared to u_0(\vec r) the effects of the second integral can be ignored to arrive at the approximation

u_s(\vec r) \approx u_B(\vec r) = \int g(\vec r - \vec r')o(\vec r')u_0(\vec r')\, d\vec r'.   (40)
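As a purely illustrative sketch (our own, not from the text), the first Born integral can be discretized as a sum over object cells, here using a unit-amplitude incident plane wave and the three-dimensional free-space Green's function of (33); the object size, contrast, and receiver positions are arbitrary assumptions:

```python
import numpy as np

def born_field(obj_pts, obj_vals, cell_vol, k0, recv_pts):
    """First Born scattered field: u_B(r) ~ sum_j g(r - r_j) o(r_j) u0(r_j) dV,
    with u0 a unit plane wave along +y and g = exp(j k0 R)/(4 pi R), eq. (33)."""
    u0 = np.exp(1j * k0 * obj_pts[:, 1])            # incident field at the object cells
    uB = np.zeros(len(recv_pts), dtype=complex)
    for r, val, u in zip(obj_pts, obj_vals, u0):
        R = np.linalg.norm(recv_pts - r, axis=1)    # distances |r - r'|
        g = np.exp(1j * k0 * R) / (4 * np.pi * R)   # Green's function, eq. (33)
        uB += g * val * u * cell_vol                # weighted, shifted impulse responses
    return uB

k0 = 2 * np.pi                                      # wavelength = 1
# a tiny 3x3x3 block of weak scatterers centered at the origin
g1 = np.linspace(-0.1, 0.1, 3)
obj_pts = np.array([[x, y, z] for x in g1 for y in g1 for z in g1])
obj_vals = np.full(len(obj_pts), 0.01)              # weak object function o(r')
recv_pts = np.array([[x, 10.0, 0.0] for x in np.linspace(-5, 5, 11)])
uB = born_field(obj_pts, obj_vals, 0.1**3, k0, recv_pts)
print(uB.shape)   # (11,)
```

The superposition structure of the loop mirrors (37): each cell contributes a scaled, shifted copy of the point-scatterer response.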
An even better estimate can be found by substituting u_0(\vec r) + u_B(\vec r) for u(\vec r) in (40) to find
u_B'(\vec r) = \int g(\vec r - \vec r')[u_0(\vec r') + u_B(\vec r')]o(\vec r')\, d\vec r'.   (41)
(42)

An alternate representation is possible if we write

u(\vec r) = u_0(\vec r) + u_1(\vec r) + u_2(\vec r) + \cdots   (43)

where

u_{i+1}(\vec r) = \int u_i(\vec r')o(\vec r')g(\vec r - \vec r')\, d\vec r'.   (44)
(46)
This representation (46) has a more intuitive interpretation. The Green's function gives the scattered field due to a point scatterer, and thus the integral of (42) can be interpreted as calculating the first-order scattered field due to the field u_i. For this reason the first-order Born approximation represents the first-order scattered field and u_i represents the ith-order scattered field. The result can also be interpreted in terms of the Huygens principle; each point in the object produces a scattered field proportional to the scattering potential at the site of the scatterer. Each of these partial scattered fields interacts with the other scattering centers in the object and, if the Born series converges, the total field is the sum of the partial scattered fields. While the higher order Born series does provide a good model of the scattering process, reconstruction algorithms based on this series have yet to be developed. These algorithms are currently being researched; in the meantime, we will study reconstruction algorithms based on first-order approximations [Bar78], [Sla85]. The first Born approximation is valid only when the scattered field,

u_s(\vec r) = u(\vec r) - u_0(\vec r),   (47)
is smaller than the incident field, u_0. If the object is a homogeneous cylinder it is possible to express this condition as a function of the size of the object and the refractive index. Let the incident wave, u_0(\vec r), be an electromagnetic plane wave propagating in the direction of the unit vector, \vec s. For a large object, the field inside the object will not be well approximated by the incident field

u(\vec r) = Ae^{jk_0 \vec s \cdot \vec r}   (48)

but instead will be a function of the change in refractive index, n_\delta. Along a line through the center of the cylinder and parallel to the direction of propagation of the incident plane wave, the field inside the object becomes a slow (or fast) version of the incident wave, that is,

u(\vec r) = Ae^{jk_0(1 + n_\delta)\vec s \cdot \vec r}.   (49)
Since the wave is propagating through the object, the phase difference between the incident field and the field inside the object is approximately equal to the integral through the object of the change in refractive index. For a
homogeneous cylinder of radius a, the total phase shift through the object becomes

\text{Phase Change} = \frac{4\pi n_\delta a}{\lambda}   (50)

where \lambda is the wavelength of the incident wave. For the Born approximation to be valid, a necessary condition is that the change in phase between the incident field and the wave propagating through the object be less than \pi. This condition can be expressed mathematically as

a\, n_\delta < \frac{\lambda}{4}.   (51)

6.2.2 The First Rytov Approximation
Another approximation to the scattered field is the Rytov approximation, which is valid under slightly different restrictions. It is derived by considering the total field to be represented as a complex phase, or [Ish78]

u(\vec r) = e^{\phi(\vec r)}   (52)

and rewriting the wave equation (17)

(\nabla^2 + k^2)u = 0   (53)

as

\nabla^2 e^{\phi} + k^2 e^{\phi} = 0   (54)

\nabla^2\phi\, e^{\phi} + (\nabla\phi)^2 e^{\phi} + k^2 e^{\phi} = 0   (55)

and finally

(\nabla\phi)^2 + \nabla^2\phi + k_0^2 = -o(\vec r).   (56)
(Although all the fields, \phi, are a function of \vec r, to simplify the notation the argument of these functions will be dropped.) Expressing the total complex phase, \phi, as the sum of the incident phase function \phi_0 and the scattered complex phase \phi_s, or

\phi(\vec r) = \phi_0(\vec r) + \phi_s(\vec r)   (57)

where the incident field is given by

u_0(\vec r) = e^{\phi_0(\vec r)},   (58)

we can substitute (57) into (56) to find

(\nabla\phi_0)^2 + 2\nabla\phi_0 \cdot \nabla\phi_s + (\nabla\phi_s)^2 + \nabla^2\phi_0 + \nabla^2\phi_s + k_0^2 = -o(\vec r).   (59)
As in the Born approximation, it is possible to set the zero perturbation equation equal to zero. Doing this, we find that

k_0^2 + (\nabla\phi_0)^2 + \nabla^2\phi_0 = 0.   (60)

Substituting this into (59) we get

2\nabla\phi_0 \cdot \nabla\phi_s + \nabla^2\phi_s = -(\nabla\phi_s)^2 - o(\vec r).   (61)
This equation is still inhomogeneous but can be linearized by considering the relation

\nabla^2(u_0\phi_s) = \nabla \cdot (\nabla u_0\, \phi_s + u_0\nabla\phi_s)   (62)

or by expanding the first derivative on the right-hand side of this equation

\nabla^2(u_0\phi_s) = \nabla^2 u_0\, \phi_s + 2\nabla u_0 \cdot \nabla\phi_s + u_0\nabla^2\phi_s.   (63)

Using a plane wave for the incident field,

u_0 = Ae^{j\vec k_0 \cdot \vec r},   (64)

we find

\nabla^2 u_0 = -k_0^2 u_0   (65)

so that (63) may be rewritten as

2\nabla u_0 \cdot \nabla\phi_s + u_0\nabla^2\phi_s = (\nabla^2 + k_0^2)(u_0\phi_s).   (66)

This result can be substituted into (61) to find

(\nabla^2 + k_0^2)(u_0\phi_s) = -u_0[(\nabla\phi_s)^2 + o(\vec r)].   (67)
The solution to this differential equation can again be expressed as an integral equation. This becomes

u_0\phi_s = \int g(\vec r - \vec r')u_0(\vec r')[(\nabla\phi_s)^2 + o(\vec r')]\, d\vec r'.   (68)

Using the Rytov approximation we assume that the term in brackets in the above equation can be approximated by

(\nabla\phi_s)^2 + o(\vec r) \approx o(\vec r).   (69)
When this is done, the first-order Rytov approximation to the function u_0\phi_s becomes

u_0\phi_s = \int g(\vec r - \vec r')u_0(\vec r')o(\vec r')\, d\vec r'.   (70)
Thus \phi_s, the complex phase of the scattered field, is given by

\phi_s(\vec r) = \frac{1}{u_0(\vec r)} \int g(\vec r - \vec r')u_0(\vec r')o(\vec r')\, d\vec r'   (71)

or, recalling the definition of u_B in (40),

\phi_s(\vec r) = \frac{u_B(\vec r)}{u_0(\vec r)}.   (72)
The Rytov approximation is valid under a less restrictive set of conditions than the Born approximation [Che60], [Kel69]. In deriving the Rytov approximation we made the assumption that

(\nabla\phi_s)^2 + o(\vec r) \approx o(\vec r).   (73)

Clearly this is true only when

o(\vec r) \gg (\nabla\phi_s)^2.   (74)
If o(\vec r) is written in terms of the change in refractive index

o(\vec r) = k_0^2[n^2(\vec r) - 1] = k_0^2[(1 + n_\delta(\vec r))^2 - 1]   (75)

and the square of the refractive index is expanded to find

o(\vec r) = k_0^2[(1 + 2n_\delta(\vec r) + n_\delta^2(\vec r)) - 1].   (76)

To a first approximation, the object function is linearly related to the refractive index, or

o(\vec r) \approx 2k_0^2 n_\delta(\vec r).   (77)
The condition needed for the Rytov approximation (see (74)) can be rewritten as

n_\delta \gg \frac{(\nabla\phi_s)^2}{2k_0^2}.   (78)

This can be justified by observing that to a first approximation the scattered phase, \phi_s, is linearly dependent on the refractive index change, n_\delta, and therefore the first term in (73) can be safely ignored for small n_\delta. Unlike the Born approximation, the size of the object is not a factor in the Rytov approximation. The term \nabla\phi_s is the change in the complex scattered phase per unit distance, and dividing by the wavenumber

k_0 = \frac{2\pi}{\lambda}   (79)

normalizes this change to a per-wavelength basis.
Unlike the Born approximation, it is the change in scattered phase, \phi_s, over one wavelength that is important and not the total phase. Thus, because of the \nabla operator, the Rytov approximation is valid when the phase change over a single wavelength is small.

Since the imaging process is carried out in terms of the field, u_B, defined in the previous subsection, we need to show a Rytov approximation expression for u_B. Estimating u_B(\vec r) for the Rytov case is slightly more difficult. In an experiment the total field, u(\vec r), is measured. An expression for u_B(\vec r) is found by recalling the expression for the Rytov solution to the total wave

u(\vec r) = u_0 + u_s(\vec r) = e^{\phi_0 + \phi_s}   (80)

and then rearranging the exponentials to find

u_s = e^{\phi_0 + \phi_s} - e^{\phi_0}   (81)

= e^{\phi_0}(e^{\phi_s} - 1)   (82)

= u_0(e^{\phi_s} - 1)   (83)

\frac{u_s}{u_0} = e^{\phi_s} - 1.   (84)

Inverting this to find an estimate for the scattered phase, \phi_s, we obtain

\phi_s(\vec r) = \ln\left[\frac{u_s}{u_0} + 1\right].   (85)
Expanding \phi_s in terms of (72) we obtain the following Rytov estimate of u_B(\vec r):

u_B(\vec r) = u_0(\vec r) \ln\left[\frac{u_s}{u_0} + 1\right].   (86)
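A minimal numerical sketch of evaluating (86) from sampled total and incident fields follows; it is our own illustration, with `np.unwrap` standing in for a phase unwrapping algorithm and all field choices arbitrary:

```python
import numpy as np

def rytov_uB(u_total, u0):
    """Rytov estimate (86): u_B = u0 * ln(u/u0), taking the magnitude and the
    unwrapped phase of the ratio so a continuous branch of the log is chosen."""
    ratio = u_total / u0                       # equals u_s/u0 + 1
    phase = np.unwrap(np.angle(ratio))         # continuous branch of the log's phase
    return u0 * (np.log(np.abs(ratio)) + 1j * phase)

# Synthetic check: build u = u0 * exp(phi_s) from a known smooth complex
# scattered phase and recover u_B = u0 * phi_s.
n = 200
x = np.linspace(-1, 1, n)
u0 = np.exp(1j * 2 * np.pi * 5 * x)            # incident plane wave samples
phi_s = 0.3 * np.exp(-x**2 / 0.1) * (1 + 0.5j) # smooth complex scattered phase
u = u0 * np.exp(phi_s)
uB = rytov_uB(u, u0)
print(np.allclose(uB, u0 * phi_s))             # True
```

For weak, smooth phases like this one the unwrapping step is a no-op; it matters precisely in the wrapped case the text describes, where u_s is on the order of u_0 or larger.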
Since the natural logarithm is a multiple-valued function, one must be careful at each position to choose the correct value. For continuous functions this isn't difficult because only one value will satisfy the continuity requirement. On the other hand, for discrete (or sampled) signals the choice isn't nearly as simple and one must resort to a phase unwrapping algorithm to choose the proper phase. (Phase unwrapping has been described in a number of works [Tri77], [OCo78], [Kav84], [McG82].) Due to the +1 factor inside the logarithmic term, this is only a problem if u_s is on the order of or larger than u_0. Thus both the Born and the Rytov techniques can be used to estimate u_B(\vec r). While the Rytov approximation is valid over a larger class of objects, it is possible to show that the Born and the Rytov approximations produce the
same result for objects that are small and deviate only slightly from the average refractive index of the medium. Consider first the Rytov approximation to the scattered wave. This is given by

u(\vec r) = e^{\phi_0 + \phi_s}.   (87)
Substituting an expression for the scattered phase, (72), and the incident field, (64), we find
u(\vec r) = e^{jk_0 \vec s \cdot \vec r + \exp(-jk_0 \vec s \cdot \vec r)u_B(\vec r)}   (88)

or

u(\vec r) = u_0(\vec r)\, e^{\exp(-jk_0 \vec s \cdot \vec r)u_B(\vec r)}.   (89)
For small u_B, the first exponential can be expanded in terms of its power series. Throwing out all but the first two terms we find that

u(\vec r) \approx u_0(\vec r)[1 + e^{-jk_0 \vec s \cdot \vec r}u_B(\vec r)]   (90)

= u_0(\vec r) + u_B(\vec r).   (91)
Thus for very small objects and perturbations the Rytov solution is approximately equal to the Born solution given in (40). The similarity between the expressions for the first-order Born and Rytov solutions will form the basis of our reconstructions. In the Born approximation we measure the complex amplitude of the scattered field and use this as an estimate of the function u_B, while in the Rytov case we estimate u_B from the phase of the scattered field. Since the Rytov approximation is considered more accurate than the Born approximation, it should provide a better estimate of u_B. In Section 6.5, after we have derived reconstruction algorithms based on the Fourier Diffraction Theorem, we will discuss simulations comparing the Born and the Rytov approximations.

6.3 The Fourier Diffraction Theorem
Fundamental to diffraction tomography is the Fourier Diffraction Theorem, which relates the Fourier transform of the measured forward scattered data with the Fourier transform of the object. The theorem is valid when the inhomogeneities in the object are only weakly scattering. The statement of the theorem is as follows:
When an object, o(x, y), is illuminated with a plane wave as shown in Fig. 6.2, the Fourier transform of the forward scattered field measured on line TT' gives the values of the 2-D transform, O(\omega_1, \omega_2), of the object along a semicircular arc in the frequency domain, as shown in the right half of the figure.
Fig. 6.2: The Fourier Diffraction Theorem relates the Fourier transform of a diffracted projection to the Fourier transform of the object along a semicircular arc. (From [Sla83].)
The importance of the theorem is made obvious by noting that if an object is illuminated by plane waves from many directions over 360°, the resulting circular arcs in the (\omega_1, \omega_2)-plane will fill up the frequency domain. The function o(x, y) may then be recovered by Fourier inversion.

Before giving a short proof of the theorem, we would like to say a few words about the dimensionality of the object vis-a-vis that of the wave fields. Although the theorem talks about a two-dimensional object, what is actually meant is an object that doesn't vary in the z direction. In other words, the theorem is about any cylindrical object whose cross-sectional distribution is given by the function o(x, y). The forward scattered fields are measured on a line of detectors along TT' in Fig. 6.2. If a truly three-dimensional object were illuminated by the plane wave, the forward scattered fields would have to be measured by a planar array of detectors. The Fourier transform of the fields measured by such an array would give the values of the 3-D transform of the object over a spherical surface. This was first shown by Wolf [Wol69]. More recent expositions are given in [Nah82] and [Dev84], where the authors have also presented a new synthetic aperture procedure for a full three-dimensional reconstruction using only two rotational positions of the object. In this chapter, however, we will continue to work with two-dimensional objects in the sense described here. A recent work describing some of the errors in this approach is [LuZ84].
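The frequency-domain locus named by the theorem can be checked with a few lines of arithmetic. This small numerical sketch is our own illustration (the value of k_0 is an arbitrary choice): as the transform variable \alpha sweeps (-k_0, k_0), the samples land on a circle of radius k_0 centered at (0, -k_0), and the arc's endpoints sit \sqrt{2}\,k_0 from the origin:

```python
import numpy as np

k0 = 2 * np.pi                                  # wavenumber, wavelength = 1
alpha = np.linspace(-k0, k0, 201)               # measurable frequencies |alpha| <= k0
u = alpha                                       # frequency-domain coordinates
v = np.sqrt(k0**2 - alpha**2) - k0              # arc: (alpha, sqrt(k0^2 - alpha^2) - k0)

# every sample lies on a circle of radius k0 centered at (0, -k0) ...
radii = np.sqrt(u**2 + (v + k0)**2)
print(np.allclose(radii, k0))                   # True
# ... and the endpoints of the arc are sqrt(2)*k0 from the origin
ends = np.sqrt(u**2 + v**2)[[0, -1]]
print(np.allclose(ends, np.sqrt(2) * k0))       # True
```

Rotating this arc through 360° of illumination directions sweeps out the low-pass disk of the object's transform, which is what makes Fourier inversion possible.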
Earlier in this chapter, we expressed the scattered field due to a weakly scattering object as the convolution

u_B(\vec r) = \int o(\vec r')g(\vec r - \vec r')u_0(\vec r')\, d\vec r'   (92)

where u_B(\vec r) represents the complex amplitude of the field as in the Born approximation, or the incident field, u_0(\vec r), times the complex scattered phase, \phi_s(\vec r), as in the Rytov approximation. Starting from this integral there are two approaches to the derivation of the Fourier Diffraction Theorem. Many researchers [Mue79], [Gre78], [Dev82] have expanded the Green's function into its plane wave decomposition and then noted the similarity of the resulting expression and the Fourier transform of the object. The alternative approach consists of taking the Fourier transform of both sides of (92). In this work we will present both approaches to the derivation of the Fourier Diffraction Theorem; the first because the math is more straightforward, the second because it provides a greater insight into the difference between transmission and reflection tomography.

6.3.1 Decomposing the Green's Function

We will first consider the decomposition of the Green's function into its plane wave components. The integral equation for the scattered field (92) can be considered as a convolution of the Green's function, g(\vec r - \vec r'), and the product of the object function, o(\vec r), and the incident field, u_0(\vec r). Consider the effect of a single plane wave illuminating an object. The forward scattered field will be measured at the receiver line as is shown in Fig. 6.3. A single plane wave in two dimensions can be represented as

u_0(\vec r) = e^{j\vec K \cdot \vec r}   (93)

where \vec K = (k_x, k_y) satisfies the relationship

k_0^2 = k_x^2 + k_y^2.   (94)

The two-dimensional Green's function was given in (35) as

g(\vec r - \vec r') = \frac{j}{4} H_0^{(1)}(k_0|\vec r - \vec r'|)   (95)
and H_0^{(1)} is the zero-order Hankel function of the first kind. The function H_0^{(1)} has the plane wave decomposition [Mor53]

H_0^{(1)}(k_0|\vec r - \vec r'|) = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{1}{\beta} e^{j[\alpha(x - x') + \beta|y - y'|]}\, d\alpha   (96)

where

\beta = \sqrt{k_0^2 - \alpha^2}.   (97)
Fig. 6.3: A typical diffraction tomography experiment is shown. Here a single plane wave is used to illuminate the object and the scattered field is measured on the far side of the object. This is transmission tomography. (From [Pan83].)
Basically, (96) expresses a cylindrical wave, H_0, as a superposition of plane waves. At all points, the wave centered at \vec r' is traveling outward; for points such that y > y' the plane waves propagate upward while for y < y' the plane waves propagate downward. In addition, for |\alpha| \le k_0, the plane waves are of the ordinary type, propagating along the direction given by \tan^{-1}(\beta/\alpha). However, for |\alpha| > k_0, \beta becomes imaginary, the waves decay exponentially and they are called evanescent waves. Evanescent waves are usually of no significance beyond about 10 wavelengths from the source. Substituting this expression, (96), into the expression for the scattered field, (92), the scattered field can now be written
u_B(\vec r) = \frac{j}{4\pi} \int \int \frac{1}{\beta} e^{j[\alpha(x - x') + \beta|y - y'|]} o(\vec r')u_0(\vec r')\, d\vec r'\, d\alpha.   (98)
In order to show the first steps in the proof of this theorem, we will now assume for notational convenience that the direction of the incident plane
wave is along the positive y-axis. Thus the incident field will be given by

u_0(\vec r) = e^{j\vec K_0 \cdot \vec r}   (99)
where \vec K_0 = (0, k_0). Since in transmission imaging the scattered fields are measured by a linear array located at y = l_0, where l_0 is greater than any y-coordinate within the object (see Fig. 6.3), the term |y - y'| in the above expression may simply be replaced by l_0 - y' and the resulting form may be rewritten
u_B(x, y = l_0) = \frac{j}{4\pi} \int_{-\infty}^{\infty} d\alpha \int \frac{1}{\beta} e^{j[\alpha(x - x') + \beta(l_0 - y')]} e^{jk_0 y'} o(\vec r')\, d\vec r'.   (100)
Recognizing part of the inner integral as the two-dimensional Fourier transform of the object function evaluated at a frequency of (\alpha, \beta - k_0), we find

u_B(x, y = l_0) = \frac{j}{4\pi} \int_{-\infty}^{\infty} \frac{1}{\beta} e^{j(\alpha x + \beta l_0)} O(\alpha, \beta - k_0)\, d\alpha   (101)
where O has been used to designate the two-dimensional Fourier transform of the object function. Let U_B(\omega, l_0) denote the Fourier transform of the one-dimensional scattered field, u_B(x, l_0), with respect to x, that is,

U_B(\omega, l_0) = \int_{-\infty}^{\infty} u_B(x, l_0)e^{-j\omega x}\, dx.   (102)
As mentioned before, the physics of wave propagation dictate that the highest angular spatial frequency in the measured scattered field on the line y = l_0 is unlikely to exceed k_0. Therefore, in almost all practical situations, U_B(\omega, l_0) = 0 for |\omega| > k_0. This is consistent with neglecting the evanescent modes as described earlier. If we take the Fourier transform of the scattered field by substituting (101) into (102) and using the following property of Fourier integrals
\int_{-\infty}^{\infty} e^{j(\alpha - \omega)x}\, dx = 2\pi\delta(\alpha - \omega),   (103)
we find

U_B(\alpha, l_0) = \frac{j}{2\sqrt{k_0^2 - \alpha^2}}\, e^{j\sqrt{k_0^2 - \alpha^2}\, l_0}\, O(\alpha, \sqrt{k_0^2 - \alpha^2} - k_0),   (104)
the one-dimensional Fourier transform of the field at the receiver line. The factor

\frac{j}{2\sqrt{k_0^2 - \alpha^2}}\, e^{j\sqrt{k_0^2 - \alpha^2}\, l_0}   (105)
is a simple constant for a fixed receiver line. As \alpha varies from -k_0 to k_0, the coordinates (\alpha, \sqrt{k_0^2 - \alpha^2} - k_0) in the Fourier transform of the object function trace out a semicircular arc in the (u, v)-plane as shown in Fig. 6.2. This proves the theorem.

To summarize, if we take the Fourier transform of the forward scattered data when the incident illumination is propagating along the positive y-axis, the resulting transform will be zero for angular spatial frequencies |\alpha| > k_0. For |\alpha| < k_0, the transform of the data gives values of the Fourier transform of the object on the semicircular arc shown in Fig. 6.2 in the (u, v)-plane. The endpoints of the semicircular arc are at a distance of \sqrt{2}\,k_0 from the origin in the frequency domain.

6.3.2 Fourier Transform Approach

Another approach to the derivation of the Fourier Diffraction Theorem is possible if the scattered field

u_B(\vec r) = \int o(\vec r')u_0(\vec r')g(\vec r - \vec r')\, d\vec r'
(106)
is considered entirely in the Fourier domain. The plots of Fig. 6.4 will be used to illustrate the various transformations that take place. Again, consider the effect of a single plane wave illuminating an object. The forward scattered field will be measured at the receiver line as is shown in Fig. 6.3. The integral equation for the scattered field, (106), can be considered as a convolution of the Green's function, g(\vec r - \vec r'), and the product of the object function, o(\vec r), and the incident field, u_0(\vec r). First define the following Fourier transform pairs:

o(\vec r) \leftrightarrow O(\vec\Lambda) \qquad g(\vec r - \vec r') \leftrightarrow G(\vec\Lambda)   (107)
u(\vec r) \leftrightarrow U(\vec\Lambda). The integral solution to the wave equation, (40), can now be written in terms of these Fourier transforms, that is,

U_B(\vec\Lambda) = G(\vec\Lambda)\{O(\vec\Lambda) * U_0(\vec\Lambda)\}   (108)
where * has been used to represent convolution and \vec\Lambda = (\alpha, \gamma). In (93) an expression for u_0 was presented. Its Fourier transform is given by

U_0(\vec\Lambda) = 2\pi\delta(\vec\Lambda - \vec K)   (109)
Fig. 6.4: Two-dimensional Fourier representation of the Helmholtz equation. (a) is the Fourier transform of the object, in this case a cylinder, (b) is the Fourier transform of the incident field, (c) is the Fourier transform of the Green's function in (95), (d) shows the frequency domain convolution of (a) and (b), and finally (e) is the product in the frequency domain of (c) and (d). (From [Sla83].)
and thus the convolution of (108) becomes a shift in the frequency domain, or

O(\vec\Lambda) * U_0(\vec\Lambda) = 2\pi O(\vec\Lambda - \vec K).   (110)
This convolution is illustrated in Figs. 6.4(a)-(c) for a plane wave propagating with direction vector \vec K = (0, k_0). Fig. 6.4(a) shows the Fourier transform of a single cylinder of radius 1\lambda and Fig. 6.4(b) shows the Fourier transform of the incident field. The resulting multiplication in the space domain, or convolution in the frequency domain, is shown in Fig. 6.4(c). To find the Fourier transform of the Green's function, the Fourier transform of (32) is calculated to find
(−Λ̄² + k₀²) G(Λ̄ | r̄′) = −e^{−jΛ̄·r̄′}   (111)

or

G(Λ̄ | r̄′) = e^{−jΛ̄·r̄′} / (Λ̄² − k₀²).   (112)
An approximation to G(Λ̄) is shown in Fig. 6.4(d). The Fourier transform representation in (112) can be misleading because it represents a point scatterer as both a sink and a source of waves. A single plane wave propagating from left to right can be considered in two different ways depending on your point of view. From the left side of the scatterer, the point scatterer represents a sink to the wave, while to the right of the scatterer the wave is spreading from a source point. Clearly, it is not possible for a scatterer to be both a point source and a sink. Later, when our expression for the scattered field is inverted, it will be necessary to choose a solution that leads to outgoing waves only.

The effect of the convolution shown in (106) is a multiplication in the frequency domain of the shifted object function, (110), and the Green's function, (112), evaluated at r̄′ = 0. The scattered field is written as

U_B(Λ̄) = 2π O(Λ̄ − K̄) / (Λ̄² − k₀²).   (114)
This result is shown in Fig. 6.4(e) for a plane wave propagating along the y-axis. Since the largest frequency domain components of the Green's function satisfy (113), the Fourier transform of the scattered field is dominated by a shifted and sampled version of the object's Fourier transform.

We will now derive an expression for the field at the receiver line. For simplicity we will continue to assume that the incident field is propagating along the positive y-axis, or K̄ = (0, k₀). The scattered field along the receiver line (x, y = l₀) is simply the inverse Fourier transform of the field in (114). This is written as
u_B(x, y) = (1/(2π)²) ∫∫ U_B(Λ̄) e^{jΛ̄·r̄} dΛ̄   (115)
which, using (114), can be expressed as
u_B(x, y) = (1/2π) ∫∫ O(α, γ − k₀) / (α² + γ² − k₀²) e^{j(αx + γy)} dα dγ.   (116)
We will first find the integral with respect to γ. For a given α, the integrand has singularities at

γ₁,₂ = ±√(k₀² − α²).   (117)
Using contour integration we can evaluate the integral with respect to γ along the path shown in Fig. 6.5. By adding 1/(2π) of the residue at each pole we find

u_B(x, y) = (1/2π) ∫ Γ(α; y) e^{jαx} dα   (118)

where

Γ₁(α; y) = [j O(α, √(k₀² − α²) − k₀) / (2√(k₀² − α²))] e^{j√(k₀²−α²) y}   (119)

Γ₂(α; y) = [−j O(α, −√(k₀² − α²) − k₀) / (2√(k₀² − α²))] e^{−j√(k₀²−α²) y}.   (120)
Examining the above pair of equations we see that Γ₁ represents the solution in terms of plane waves traveling along the positive y-axis, while Γ₂ represents plane waves traveling in the −y direction. As was discussed earlier, the Fourier transform of the Green's function (112) represents the field due to both a point source and a point sink, but the two solutions are distinct for receiver lines that are outside the extent of the object. First consider the scattered field along the line y = l₀, where l₀ is greater than the y-coordinate of all points in the object. Since all scattered fields originate in the object, plane waves propagating along the positive y-axis represent outgoing waves while waves propagating along the negative y-axis represent waves due to a point sink. Thus for y > object (i.e., the receiver line is above the object) the outgoing scattered waves are represented by Γ₁, or
u_B(x, y) = (1/2π) ∫ Γ₁(α; y) e^{jαx} dα,   y > object.   (121)
Fig. 6.5: Integration path in the complex plane for inverting the two-dimensional Fourier transform of the scattered field. The correct pole must be chosen to lead to outgoing fields. (From [Sla84].)
Conversely, for a receiver along a line y = l₀ where l₀ is less than the y-coordinate of any point in the object, the scattered field is represented by Γ₂, or
u_B(x, y) = (1/2π) ∫ Γ₂(α; y) e^{jαx} dα,   y < object.   (122)
In general, the scattered field will be written as

u_B(x, y) = (1/2π) ∫ Γ(α; y) e^{jαx} dα   (123)

and it will be understood that values that lead only to outgoing waves should be chosen for the square root in the expression for Γ. Taking the Fourier transform of both sides of (123) we find that

∫ u_B(x, y = l₀) e^{−jαx} dx = Γ(α, l₀).   (124)
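The frequency-domain locus described by (117) through (124) is easy to check numerically: each propagating receiver-line frequency maps to one point of the object's two-dimensional transform. A minimal sketch (the wavenumber and sampling grid are illustrative assumptions):

```python
import numpy as np

def arc_locus(alpha, k0):
    """Frequency-domain locus sampled by one transmission view, per the
    Fourier Diffraction Theorem: each receiver-line spatial frequency
    alpha maps to the point (alpha, sqrt(k0^2 - alpha^2) - k0) of the
    object's 2-D transform. Evanescent components (|alpha| >= k0) carry
    no far-field information and are discarded."""
    alpha = np.asarray(alpha, dtype=float)
    keep = np.abs(alpha) < k0            # propagating components only
    a = alpha[keep]
    gamma = np.sqrt(k0 ** 2 - a ** 2)
    return a, gamma - k0                 # (u, v) points on the semicircular arc

k0 = 100.0
u, v = arc_locus(np.linspace(-150.0, 150.0, 301), k0)
# every point lies on the circle of radius k0 centered at (0, -k0),
# and no point is farther than sqrt(2)*k0 from the origin
```

The two comments at the end restate the geometry of Fig. 6.2: the arc is a piece of a circle of radius k₀ centered at (0, −k₀), whose endpoints sit √2 k₀ from the origin.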
But since, by (119) and (120), Γ(α, l₀) is equal to a phase-shifted version of the object function, the Fourier transform of the scattered field along the line y = l₀ is related to the Fourier transform of the object along a circular arc. The use of the contour integration is further justified by noting that only those waves that satisfy the relationship

α² + γ² = k₀²
(125)
will be propagated, and thus it is safe to ignore all waves not on the k₀-circle. This result is diagrammed in Fig. 6.6. The circular arc represents the locus of all points (α, γ) such that γ = √(k₀² − α²). The solid line shows the outgoing waves for a receiver line at y = l₀ above the object; this can be considered transmission tomography. Conversely, the broken line indicates the locus of solutions for the reflection tomography case, where y = l₀ is below the object.

6.3.3 Short Wavelength Limit of the Fourier Diffraction Theorem
Fig. 6.6: Estimates of the two-dimensional Fourier transform of the object are available along the solid arc for transmission tomography and the broken arc for reflection tomography. (Adapted from [Sla84].)
While at first the derivations of the Fourier Slice Theorem and the Fourier Diffraction Theorem seem quite different, it is interesting to note that in the limit of very high energy waves or, equivalently, very short wavelengths, the Fourier Diffraction Theorem approaches the Fourier Slice Theorem. Recall that the Fourier transform of a diffracted projection corresponds to samples of the two-dimensional Fourier transform of an object along a semicircular arc.
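This convergence can be quantified: the sag of the arc v = √(k₀² − u²) − k₀ below the straight slice v = 0 shrinks roughly as u²/(2k₀) as the wavenumber grows. A small sketch, with assumed, purely illustrative wavenumbers:

```python
import numpy as np

def arc_sag(k0, u_max):
    """Worst-case departure of the diffraction arc v = sqrt(k0^2 - u^2) - k0
    from the straight slice v = 0, over object frequencies |u| <= u_max
    (requires u_max < k0)."""
    u = np.linspace(-u_max, u_max, 1001)
    return np.max(np.abs(np.sqrt(k0 ** 2 - u ** 2) - k0))

u_max = 1000.0                     # object frequencies of interest, rad/m
sag_low = arc_sag(2.1e4, u_max)    # assumed ultrasound-scale wavenumber
sag_high = arc_sag(5e8, u_max)     # assumed x-ray-scale wavenumber
# sag_high is tiny: at short wavelengths the arc is effectively the
# straight line assumed by the Fourier Slice Theorem
```

The small-angle estimate u_max²/(2k₀) matches the computed sag to within a fraction of a percent, which is why at x-ray wavelengths the slice model is an excellent approximation.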
The radius of the arc is equal to the wavenumber k₀, which is given by

k₀ = 2π/λ   (126)
and λ is the wavelength of the energy. As the wavelength is decreased, the wavenumber, k₀, and the radius of the arc in the object's Fourier domain grow. This process is illustrated in Fig. 6.7, where we have shown the semicircular arcs resulting from diffraction experiments at seven different frequencies.

An example might make this idea clearer. An ultrasonic tomography experiment might be carried out at a frequency of 5 MHz, which corresponds to a wavelength in water of 0.3 mm. This corresponds to a k₀ of 333 radians/meter. On the other hand, a hypothetical coherent x-ray source with a 100-keV beam has a wavelength of 0.012 µm. The result is that a diffraction experiment with x-rays can give samples along an arc of radius 5 × 10⁸ radians/meter. Certainly for all physiological features (i.e., resolutions of < 1000 radians/meter) the arc could be considered to be a straight line and the Fourier Slice Theorem an excellent model for relating the transforms of the projections with the transform of the object.

6.3.4 The Data Collection Process

The best that can be hoped for in any tomographic experiment is to estimate the Fourier transform of the object for all frequencies within a disk centered at the origin. For objects whose spectra have no frequency content outside the disk, the reconstruction procedure is perfect.

There are several different procedures that can be used to estimate the object function from the scattered field. A single plane wave provides exact information (up to a frequency of √2 k₀) about the Fourier transform of the object along a semicircular arc. Two of the simplest procedures involve
Fig. 6.7: As the frequency of the experiment goes up (wavelength goes down) the radius of the arc increases until the scattered field is closely approximated by the Fourier Slice Theorem discussed in Chapter 3.
Fig. 6.8: With plane wave illumination, estimates of the object's two-dimensional Fourier transform are available along the circular arcs.
changing the orientation and frequency of the incident plane waves to move the frequency domain arcs to a new position. By appropriately choosing an orientation and a frequency it is possible to estimate the Fourier transform of the object at any given frequency. In addition, it is possible to change the radius of the semicircular arc by varying the frequency of the incident field and thus to generate an estimate of the entire Fourier transform of the object.

The most straightforward data collection procedure was discussed by Mueller et al. [Mue80] and consists of rotating the object and measuring the scattered field for different orientations. Each orientation will produce an estimate of the object's Fourier transform along a circular arc, and these arcs will rotate as the object is rotated. When the object has rotated through a full 360° an estimate of the object will be available for the entire Fourier disk. The coverage for this method is shown in Fig. 6.8 for a simple experiment with eight projections of nine samples each. Notice that there are two arcs that pass through each point of Fourier space. Generally, it will be necessary to choose one estimate as better.

On the other hand, if the reflected data are collected by measuring the field on the same side of the object as the source, then estimates of the object are available for frequencies greater than √2 k₀. This follows from Fig. 6.6.

Nahamoo and Kak [Nah82], [Nah84] and Devaney [Dev84] have proposed a method that requires only two rotational views of an object. Consider an arbitrary source of waves in the transmitter plane as shown in Fig. 6.9. The
Fig. 6.9: A typical synthetic aperture tomography experiment is shown. A transmitter is scanned past the object. For each transmitter position the scattered field is measured. Later, appropriate phases are added to the projections to synthesize any incident plane wave. (From [Sla83].)
transmitted field, u_t, can be represented as a weighted set of plane waves by taking the Fourier transform of the transmitter aperture function [Goo68]. Doing this we find

u_t(x) = (1/2π) ∫ A_t(k_x) e^{jk_x x} dk_x.   (127)
Moving the source to a new position, η, the plane wave decomposition of the transmitted field becomes

u_t(x; η) = (1/2π) ∫ A_t(k_x) e^{jk_x η} e^{jk_x x} dk_x.   (128)
Given the plane wave decomposition, the incident field in the plane follows simply as

u_i(η; x, y) = ∫ [(1/2π) A_t(k_x) e^{jk_x η}] e^{j(k_x x + k_y y)} dk_x.   (129)
In (124) we presented an equation for the scattered field due to a single plane wave. Because of the linearity of the Fourier transform, the effect of each plane wave, e^{j(k_x x + k_y y)}, can be weighted by the expression in brackets above and superimposed to find the Fourier transform of the total scattered field due to the incident field u_t(x; η) as [Nah82]

U_B(α; η) = (1/2π) ∫ A_t(k_x) e^{jk_x η} [j / (2√(k₀² − α²))] O(α − k_x, √(k₀² − α²) − √(k₀² − k_x²)) e^{j√(k₀²−α²) l₀} dk_x.   (130)
Taking the Fourier transform of both sides with respect to the transmitter position, η, we find that

U_B(k_x; α) = A_t(k_x) [j / (2√(k₀² − α²))] O(α − k_x, √(k₀² − α²) − √(k₀² − k_x²)) e^{j√(k₀²−α²) l₀}.   (131)
Fig. 6.10: Estimates of the Fourier transform of an object in the synthetic aperture experiment are available in the shaded region.
By collecting the scattered field along the receiver line as a function of transmitter position, η, we have an expression for the scattered field. Like the simpler case with plane wave incidence, the scattered field is related to the Fourier transform of the object along an arc. Unlike the previous case, though, the coverage due to a single view of the object is a pair of circular disks as shown in Fig. 6.10. Here a single view consists of transmitting from all positions in a line and measuring the scattered field at all positions along the receiver line. By rotating the object by 90° it is possible to generate the complementary disk and to fill the Fourier domain.

The coverage shown in Fig. 6.10 is constructed by calculating (Λ̄ − K̄) for all vectors Λ̄ and K̄ that satisfy the experimental constraints. Not only must each vector satisfy the wave equation but it is also necessary that only forward traveling plane waves be used. The broken line in Fig. 6.10 shows the valid propagation vectors (−K̄) for the transmitted waves. To each possible vector (−K̄) a semicircular set of vectors representing each possible received wave can be added. The locus of received plane waves is shown as a solid semicircle centered at each of the transmitted waves indicated by an x.
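The coverage construction just described, namely all differences (Λ̄ − K̄) of received and transmitted propagation vectors on the k₀-circle, can be sampled numerically. A sketch; the angular ranges chosen for the "forward traveling" transmit and receive semicircles are illustrative assumptions:

```python
import numpy as np

def synthetic_aperture_coverage(k0, n=64):
    """Sample the frequency-domain coverage of one synthetic aperture view:
    all differences (lambda - K), where K is a forward-traveling transmit
    wave vector and lambda a received wave vector, both of length k0.
    The angular ranges used here are illustrative."""
    tx = np.linspace(0.0, np.pi, n)      # transmit directions on the k0-circle
    rx = np.linspace(0.0, np.pi, n)      # receive directions on the k0-circle
    kx_t, ky_t = k0 * np.cos(tx), k0 * np.sin(tx)
    kx_r, ky_r = k0 * np.cos(rx), k0 * np.sin(rx)
    u = kx_r[:, None] - kx_t[None, :]    # lambda_x - K_x
    v = ky_r[:, None] - ky_t[None, :]    # lambda_y - K_y
    return u.ravel(), v.ravel()

u, v = synthetic_aperture_coverage(100.0)
radius = np.hypot(u, v)  # coverage is confined to a disk of radius 2*k0
```

Since both vectors lie on the k₀-circle, no coverage point can be farther than 2k₀ from the origin, consistent with the disks shaded in Fig. 6.10.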
The entire coverage for the synthetic aperture approach is shown as the shaded areas.

In geophysical imaging it is not possible to generate or receive waves from all positions around the object. If it is possible to drill a borehole, then it is possible to perform vertical seismic profiling (VSP) [Dev83] and obtain information about most of the object. A typical experiment is shown in Fig. 6.11. So as not to damage the borehole, acoustic waves are generated at the surface using acoustic detonators or other methods, and the scattered field is measured in the borehole.

The coverage in the frequency domain is similar to that of the synthetic aperture approach in [Nah84]. Plane waves at an arbitrary downward direction are synthesized by appropriately phasing the transmitting transducers. The receivers will receive any waves traveling to the right. The resulting coverage for this method is shown in Fig. 6.12(a). If we further assume that the object function is real valued, we can use the symmetry of the Fourier transform for real-valued functions to obtain the coverage in Fig. 6.12(b).

It is also possible to perform such experiments with broadband illumination [Ken82]. So far we have only considered narrow band illumination wherein the field at each point can be completely described by its complex amplitude. Now consider a transducer that illuminates an object with a plane wave of the form A_t(t). It can still be called a plane wave because the amplitude of the
Fig. 6.11: A typical VSP experiment: acoustic waves generated at the surface are scattered by the object and measured in the borehole.
Fig. 6.12: Available estimate of the Fourier transform of an object for a VSP experiment (a). If the object function is real valued, then the symmetry of the Fourier transform can be used to estimate the object in the region shown in (b).
field along planes perpendicular to the direction of travel is constant. Taking the Fourier transform in the time domain we can decompose this field into a number of experiments, each at a different temporal frequency, ω. We let

A_t(x, y, ω) = ∫_{−∞}^{∞} A_t(x, y, t) e^{+jωt} dt   (132)
where the sign on the exponential is positive because of the convention defined in Section 6.1.1.

Given the amplitude of the field at each temporal frequency, it is straightforward to decompose the field into plane wave components by finding its Fourier transform along the transmitter plane. Each plane wave component is then described as a function of spatial frequency, k_x, and temporal frequency, ω. The temporal frequency ω is related to k₀ by
k₀ = ω/c   (133)
where c is the speed of propagation in the media and the wave vector (k_x, k_y) satisfies the wave equation

k_x² + k_y² = k₀².   (134)
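Equations (133) and (134) together determine, for each temporal frequency, which plane wave components propagate and with what transverse wavenumber. A small sketch; the propagation speed is an assumed value for water:

```python
import numpy as np

c = 1500.0  # assumed speed of sound in water, m/s

def ky_component(kx, omega):
    """For temporal frequency omega, k0 = omega / c by (133); a plane wave
    component with spatial frequency kx along the transmitter plane then
    has ky = sqrt(k0^2 - kx^2) by (134). Evanescent components
    (|kx| > k0) are returned as NaN."""
    k0 = np.asarray(omega, dtype=float) / c
    ky2 = k0 ** 2 - kx ** 2
    return np.where(ky2 >= 0.0, np.sqrt(np.maximum(ky2, 0.0)), np.nan)

omega = 2 * np.pi * np.array([1e6, 5e6])  # 1 MHz and 5 MHz components
ky = ky_component(2000.0, omega)          # same kx, two temporal frequencies
```

Higher temporal frequencies give a larger k₀-circle, so the same transverse frequency k_x propagates at a steeper k_y, which is precisely why broadband illumination sweeps out a family of arcs of different radii.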
If a unit amplitude plane wave illumination at spatial frequency k_x and temporal frequency ω leads to a scattered plane wave with amplitude u_s(k_x, ω), then the total scattered field is given by a weighted superposition of the scattered fields, or

u_s(x, y; t) = (1/(2π)²) ∫_{−∞}^{∞} dω ∫_{−k₀}^{k₀} dk_x A(k_x, ω) e^{−jωt} u_s(k_x, ω; y) e^{j(k_x x + k_y y)}.   (135)

For plane wave incidence the coverage for this method is shown in Fig. 6.13(a). Fig. 6.13(b) shows that by doing four experiments at 0°, 90°, 180°, and 270° it is possible to gather information about the entire object.

6.4 Interpolation and a Filtered Backpropagation Algorithm for Diffracting Sources
In our proof of the Fourier Diffraction Theorem, we showed that when an object is illuminated with a plane wave traveling in the positive y direction, the Fourier transform of the forward scattered fields gives values of the object's transform along the arc shown in Fig. 6.2. Therefore, if an object is illuminated from many different directions, we can, in principle, fill up a disk of radius √2 k₀ in the frequency domain with samples of O(ω₁, ω₂), which is the Fourier transform of the object, and then reconstruct the object by direct Fourier inversion. Therefore, we can say that diffraction tomography determines the object up to a maximum angular spatial frequency of √2 k₀. To this extent, the reconstructed object is a low pass version of the original. In practice, the loss of resolution caused by this bandlimiting is negligible, being more influenced by considerations such as the aperture sizes of the transmitting and receiving elements, etc.

The fact that the frequency domain samples are available over circular arcs, whereas for convenient display it is desirable to have samples over a rectangular lattice, is a source of computational difficulty in reconstruction algorithms for diffraction tomography. To help the reader visualize the distribution of the available frequency domain information, we have shown in Fig. 6.8 the sampling points on a circular arc grid, each arc in this grid corresponding to the transform of one projection. It should also be clear from this figure that by illuminating the object over 360° a double coverage of the frequency domain is generated; note, however, that this double coverage is uniform. We may get a complete coverage of the frequency domain with illumination restricted to a portion of 360°; however, in that case there would
Fig. 6.13: Estimates of the Fourier transform of an object for broadband illumination (a). With four views the coverage shown in (b) is possible.
be patches in the (ω₁, ω₂)-plane where we would have a double coverage. In reconstructing from circular arc grids to rectangular grids, it is often easier to contend with a uniform double coverage, as opposed to a coverage that is single in most areas and double in patches.

However, for some applications that do not lend themselves to data collection from all possible directions, it is useful to bear in mind that it is not necessary to go completely around an object to get complete coverage of the frequency domain. In principle, it should be possible to get an equal quality reconstruction when illumination angles are restricted to 180° plus an interval, the angles in excess of 180° being required to complete the coverage of the frequency domain.

There are two computational strategies for reconstructing the object from the measurements of the scattered field. As pointed out in [Sou84a], the two
algorithms can be considered as interpolation in the frequency domain and interpolation in the space domain, and are analogous to the direct Fourier inversion and backprojection algorithms of conventional tomography. Unlike conventional tomography, where backprojection is the preferred approach, the computational expense of space domain interpolation of diffracted projections makes frequency domain interpolation the preferred approach for diffraction tomography reconstructions. The remainder of this section will consist of derivations of the frequency domain and space domain interpolation algorithms. In both cases we will assume plane wave illumination; the reader is referred to [Dev82], [Pan83] for reconstruction algorithms for the synthetic aperture approach and to [Sou84b] for the general case.

6.4.1 Frequency Domain Interpolation

There are two schemes for frequency domain interpolation. The more conventional approach is polynomial based and assumes that the data near each grid point can be approximated by polynomials. This is the classical numerical analysis approach to the problem. A second approach is known as the unified frequency domain reconstruction (UFR) and interpolates data in the frequency domain by assuming that the space domain reconstruction should be spatially limited. We will first describe polynomial interpolation.

In order to discuss the frequency domain interpolation between a circular arc grid on which the data are generated by diffraction tomography and a rectangular grid suitable for image reconstruction, we must first select parameters for representing each grid and then write down the relationship between the two sets of parameters. In (104), U_B(ω, l₀) was used to denote the Fourier transform of the transmitted data when an object is illuminated with a plane wave traveling along the positive y direction. We now use U_{B,φ}(ω) to denote this Fourier transform, where the subscript φ indicates the angle of illumination. This angle is measured as shown in Fig. 6.14. Similarly, Q(ω, φ) will be used to indicate the values of O(ω₁, ω₂) along a semicircular arc oriented at an angle φ as shown in Fig. 6.15, or

Q(ω, φ) = O(ω, √(k₀² − ω²) − k₀),   |ω| < k₀.   (136)
Therefore, when an illuminating plane wave is incident at angle φ, the equality in (104) can be rewritten as

U_{B,φ}(ω) = [j / (2√(k₀² − ω²))] e^{j√(k₀²−ω²) l₀} Q(ω, φ)   for |ω| < k₀.   (137)
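Given the Born relation between a projection's spectrum and the object's transform, with the j/(2γ) amplitude and e^{jγl₀} phase factors of (119) and (124), recovering arc samples Q from a measured projection is a pointwise division. A sketch, treating that factor as the assumed forward model:

```python
import numpy as np

def q_from_projection(U_B, omega, k0, l0):
    """Recover arc samples Q(omega, phi) of the object's transform from the
    Fourier transform U_B of one diffracted projection measured on the
    line y = l0, assuming the Born forward model of (119)/(124):
    U_B = (j / (2*gamma)) * exp(j*gamma*l0) * Q,
    gamma = sqrt(k0^2 - omega^2). Only |omega| < k0 is usable."""
    omega = np.asarray(omega, dtype=float)
    keep = np.abs(omega) < k0
    gamma = np.sqrt(k0 ** 2 - omega[keep] ** 2)
    Q = U_B[keep] * (2.0 * gamma / 1j) * np.exp(-1j * gamma * l0)
    return omega[keep], Q
```

Round-tripping a synthetic projection through the forward factor recovers Q exactly; in practice the division amplifies noise near |ω| = k₀, where γ → 0.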
In most cases the transmitted data will be uniformly sampled in space, and a discrete Fourier transform of these data will generate uniformly spaced
Fig. 6.14: The angle φ is used to identify each diffraction projection. (From [Pan83].)
Fig. 6.15: Each projection is measured using the φ–ω coordinate system shown here. (From [Kak85].)
Fig. 6.16: Uniformly sampling the projection in the space domain leads to uneven spacing of the samples of the Fourier transform of the object along the semicircular arc. (Adapted from [Pan83].)
samples of U_{B,φ}(ω) in the ω domain. Since Q(ω, φ) is the Fourier transform of the object along the circular arc AOB in Fig. 6.15, and since ω is the projection of a point on the circular arc onto the tangent line CD, the uniform samples of Q in ω translate into nonuniform samples along the arc AOB, as shown in Fig. 6.16. We will therefore designate each point on the arc AOB by its (ω, φ) parameters. [Note that (ω, φ) are not the polar coordinates of a point on arc AOB in Fig. 6.15. Therefore, ω is not the radial distance in the (ω₁, ω₂)-plane. For the point E shown, the parameter ω is obtained by projecting E onto the line CD.] We continue to denote the rectangular coordinates in the frequency domain by (ω₁, ω₂).

Before we present relationships between (ω, φ) and (ω₁, ω₂), it must be mentioned that we must consider separately the points generated by the AO and OB portions of the arc AOB as φ is varied from 0 to 2π. We do this because, as mentioned before, the arc AOB generates a double coverage of the frequency domain as φ is varied from 0 to 2π, which is undesirable for discussing a one-to-one transformation between the (ω, φ) parameters and the (ω₁, ω₂) coordinates. We now reserve the (ω, φ) parameters to denote the arc grid generated by the portion OB as shown in Fig. 6.15. It is important to note that for this arc grid, ω varies from 0 to k₀ and φ from 0 to 2π.

We now present the transformation equations between (ω, φ) and (ω₁, ω₂). We accomplish this in a slightly roundabout manner by first defining polar
coordinates (Ω, θ) in the (ω₁, ω₂)-plane as shown in Fig. 6.17. In order to go from (ω₁, ω₂) to (ω, φ), we will first transform from the former coordinates to (Ω, θ) and then from (Ω, θ) to (ω, φ). The rectangular coordinates (ω₁, ω₂) are related to the polar coordinates (Ω, θ) (Fig. 6.17) by
Ω = √(ω₁² + ω₂²)   (138)

θ = tan⁻¹ (ω₂/ω₁).   (139)
In order to relate (Ω, θ) to (ω, φ), we now introduce a new angle β, which is the angular position of a point (ω₁, ω₂) on arc OB in Fig. 6.17. Note from the figure that the point characterized by angle β is also characterized by parameter ω. The relationship between ω and β is given by

ω = k₀ sin β.   (140)
The following relationships exist between the polar coordinates (Ω, θ) on the one hand and the parameters β and φ on the other:
Fig. 6.17: A second change of variables is used to relate the projection data to the object's Fourier transform. (From [Kak85] as modified from [Pan83].)
β = 2 sin⁻¹ (Ω / (2k₀))   (141)

φ = θ + β/2 + π/2.   (142)
By substituting (141) in (140) and then using (138), we can express ω in terms of ω₁ and ω₂. The result is shown below:
ω = k₀ sin [2 sin⁻¹ (√(ω₁² + ω₂²) / (2k₀))]   (143)

φ = tan⁻¹ (ω₂/ω₁) + sin⁻¹ (√(ω₁² + ω₂²) / (2k₀)) + π/2.   (144)
These are our transformation equations for interpolating from the (ω, φ) parameters used for data representation to the (ω₁, ω₂) parameters needed for inverse transformation. To convert a particular rectangular point into the (ω, φ) domain, we substitute its ω₁ and ω₂ values in (143) and (144). The resulting values for ω and φ may not correspond to any for which Q(ω, φ) is known. By virtue of (137), Q(ω, φ) will only be known over a uniformly sampled set of values for ω and φ. In order to determine Q at the calculated ω and φ, we use the following procedure. Given N_ω × N_φ uniformly located samples, Q(ω_i, φ_j), we calculate a bilinearly interpolated value of this function at the desired ω and φ by using

Q(ω, φ) = Σ_{i=1}^{N_ω} Σ_{j=1}^{N_φ} Q(ω_i, φ_j) h₁(ω − ω_i) h₂(φ − φ_j)   (145)
where

h₁(ω) = 1 − |ω|/Δω for |ω| ≤ Δω, and 0 otherwise   (146)

h₂(φ) = 1 − |φ|/Δφ for |φ| ≤ Δφ, and 0 otherwise.   (147)
Δφ and Δω are the sampling intervals for φ and ω, respectively. When expressed in the manner shown above, bilinear interpolation may be interpreted as the output of a filter whose impulse response is h₁h₂.

The results obtained with bilinear interpolation can be considerably improved if we first increase the sampling density in the (ω, φ)-plane by using the computationally efficient method of zero-extending the two-dimensional inverse fast Fourier transform (FFT) of the Q(ω_i, φ_j) matrix. The technique consists of first taking a two-dimensional inverse FFT of the N_ω × N_φ matrix consisting of the Q(ω_i, φ_j) values, zero-extending the resulting N_ω × N_φ
array of numbers to, let's say, mN_ω × nN_φ, and then taking the FFT of this new array. The result is an mn-fold increase in the density of samples in the (ω, φ)-plane. After computing Q(ω, φ) at each point of a rectangular grid by the procedure outlined above, the object f(x, y) is obtained by a simple 2-D inverse FFT.

A different approach to frequency domain interpolation, called the unified frequency domain (UFR) interpolation, was proposed by Kaveh et al. [Kav84]. In this approach an interpolating function is derived by taking into account the object's spatial support. Consider an object's Fourier transform as might be measured in a diffraction tomography experiment. If the Fourier domain data are denoted by F(u, v), then a reconstruction can be written

f̂(x, y) = i(x, y) IFT {F(u, v)}   (148)

where the indicator function is given by

i(x, y) = 1 where the object is known to have support, and 0 elsewhere.   (149)

If the Fourier transform of i(x, y) is I(u, v), then the spatially limited reconstruction can be rewritten as

f̂(x, y) = IFT {I(u, v) * F(u, v)}   (150)
by noting that multiplication in the space domain is equivalent to convolution in the frequency domain.

To perform the inverse Fourier transform fast, it is necessary to have the Fourier domain data on a rectangular grid. First consider the frequency domain convolution; once the data are available on a rectangular grid the inverse Fourier transform can easily be calculated, as it is for polynomial interpolation. The frequency domain data for the UFR reconstruction can be written as

F̂(u, v) = ∫∫ I(u − u′, v − v′) F(u′, v′) du′ dv′.   (151)
Now recall that the experimental data, F(u′, v′), are only available on the circular arcs in the φ–ω space shown in Fig. 6.15. By using the change of variables
u′ = T₁(φ, ω),   v′ = T₂(φ, ω)   (152), (153)

the convolution integral of (151) becomes

F̂(u, v) = ∫∫ J(φ, ω) I(u − T₁(φ, ω), v − T₂(φ, ω)) F(T₁(φ, ω), T₂(φ, ω)) dφ dω   (154)

where J(φ, ω) is the Jacobian of the transformation.
This convolution integral gives us a means to get the frequency domain data on a rectangular grid and forms the heart of the UFR interpolation algorithm. This integral can be easily discretized by replacing each integral with a summation over the projection angle, φ, and the spatial frequency of the received field, ω. The frequency domain data can now be written as

F̂(u, v) = Δφ Δω Σ_φ Σ_ω J(φ, ω) I(u − T₁(φ, ω), v − T₂(φ, ω)) F(T₁(φ, ω), T₂(φ, ω))   (155)
where Δφ and Δω represent the sampling intervals in the φ–ω space. If the indicator function, i(x, y), is taken to be 1 only within a circle of radius R, then its Fourier transform is written

I(u, v) = J₁(R√(u² + v²)) / (R√(u² + v²)).   (156)
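For a circular support the kernel of (156) is a jinc-like function whose main lobe ends at the first zero of J₁, near R√(u² + v²) ≈ 3.83. A sketch that evaluates it numerically; the quadrature-based J₁ here is a self-contained stand-in for a library Bessel routine:

```python
import numpy as np

def bessel_j1(x, n=4000):
    """First-order Bessel function via its integral representation,
    J1(x) = (1/pi) * integral_0^pi cos(theta - x*sin(theta)) dtheta,
    evaluated by simple quadrature (a stand-in for a library routine)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    theta = np.linspace(0.0, np.pi, n)
    return np.cos(theta[None, :] - x[:, None] * np.sin(theta[None, :])).mean(axis=1)

def ufr_kernel(u, v, R):
    """Interpolation kernel I(u, v) = J1(R*rho)/(R*rho), rho = sqrt(u^2+v^2),
    for a circular support of radius R; the limit at rho = 0 is 1/2.
    Truncating it to the main lobe (rho < ~3.83/R) gives the cheap
    approximation described in the text."""
    x = np.atleast_1d(R * np.hypot(np.asarray(u, dtype=float),
                                   np.asarray(v, dtype=float)))
    safe = np.where(x < 1e-8, 1.0, x)      # avoid 0/0 at the origin
    return np.where(x < 1e-8, 0.5, bessel_j1(x) / safe)

k = ufr_kernel(np.array([0.0, 3.8317]), np.array([0.0, 0.0]), 1.0)
# k[0] is the peak value 1/2; k[1] sits at the first zero of J1,
# i.e. at the edge of the main lobe
```

Because the kernel is negligible outside its main lobe, each rectangular grid point in (155) only needs contributions from nearby arc samples, which is what makes the truncated UFR scheme nearly as cheap as bilinear interpolation.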
A further simplification of this algorithm can be realized by noting that only the main lobe of the Bessel function will contribute much to the summation in (155). Thus a practical implementation can ignore all but the main lobe. This drastically reduces the computational complexity of the algorithm and leads to a reconstruction scheme that is only slightly more complicated than bilinear interpolation.

6.4.2 Backpropagation Algorithms

It has recently been shown by Devaney [Dev82] and Kaveh et al. [Kav82] that there is an alternative method for reconstructing images from the diffracted projection data. This procedure, called the filtered backpropagation method, is similar in spirit to the filtered backprojection technique of x-ray tomography. Unfortunately, whereas the filtered backprojection algorithms possess efficient implementations, the same can't be said for the filtered backpropagation algorithms. The latter class of algorithms is computationally intensive, much more so than the interpolation procedures discussed above. With regard to accuracy, they don't seem to possess any particular advantage, especially if the interpolation is carried out after increasing the sampling density by the use of appropriate zero-padding as discussed above.

We will follow the derivation of the backpropagation algorithm as first
presented by Devaney [Dev82]. First consider the inverse Fourier transform of the object function,

o(r̄) = (1/(2π)²) ∫∫ O(Λ̄) e^{jΛ̄·r̄} dΛ̄.   (157)
This integral most commonly represents the object function in terms of its Fourier transform in a rectangular coordinate system representing the frequency domain. As we have already discussed, a diffraction tomography experiment measures the Fourier transform of the object along circular arcs; thus it will be easier to perform the integration if we modify it slightly to use the projection data more naturally. We will use two coordinate transformations to do this: the first one will exchange the rectangular grid for a set of semicircular arcs and the second will map the arcs into their plane wave decomposition.

We first exchange the rectangular grid for semicircular arcs. To do this we represent Λ̄ = (k_x, k_y) in (157) by the vector sum

Λ̄ = k₀(s̄ − s̄₀)   (158)
Fig. 6.18: The k₀s̄₀ and k₀s̄ used in the backpropagation algorithm are shown here. (From [Pan83].)
where s̄₀ = (cos φ₀, sin φ₀) and s̄ = (cos χ, sin χ) are unit vectors representing the directions of the wave vectors for the transmitted and the received plane waves, respectively. This coordinate transformation is illustrated in Fig. 6.18.
To find the Jacobian of this transformation write

k_x = k₀ (cos χ − cos φ₀)   (159)
k_y = k₀ (sin χ − sin φ₀)   (160)

and

dk_x dk_y = k₀² |sin (χ − φ₀)| dχ dφ₀   (161)
= k₀² √(1 − cos² (χ − φ₀)) dχ dφ₀   (162)
= k₀² √(1 − (s̄ · s̄₀)²) dχ dφ₀,   (163)

and then (157) becomes

o(r̄) = (1/(2π)²) (1/2) ∫₀^{2π} ∫₀^{2π} O[k₀(s̄ − s̄₀)] e^{jk₀(s̄−s̄₀)·r̄} k₀² √(1 − (s̄ · s̄₀)²) dχ dφ₀.   (164)
The factor of 1/2 is necessary because, as discussed in Section 6.4.1, the (χ, φ₀) coordinate system gives a double coverage of the (k_x, k_y) space.

This integral gives an expression for the scattered field as a function of the (χ, φ₀) coordinate system. The data that are collected will actually be a function of φ₀, the projection angle, and κ, the one-dimensional frequency of the scattered field along the receiver line. To make the final coordinate transformation we take the angle χ to be relative to the (κ, γ) coordinate system. This is a more natural representation since the data available in a diffraction tomography experiment lie on a semicircle and therefore the data are available only for 0 ≤ χ ≤ π. We can rewrite the χ integral in (164) by noting
cos χ = κ/k₀   (165)
sin χ = γ/k₀   (166)

and therefore

dχ = dκ/γ.   (167)

The χ integral of (164) then becomes an integral over κ:

o(r̄) = (1/(2π)²) (1/2) ∫₀^{2π} ∫_{−k₀}^{k₀} O[k₀(s̄ − s̄₀)] e^{jk₀(s̄−s̄₀)·r̄} (k₀²/γ) √(1 − (s̄ · s̄₀)²) dκ dφ₀.   (168)
Using the Fourier Diffraction Theorem as represented by (104) we can approximate the Fourier transform of the object function, O, by a simple function of the first-order Born field, u_B, at the receiver line. Thus the object function in (168) can be written

O[k₀(s̄ − s̄₀)] = O(κ, γ − k₀) = (2γ/j) U_B(κ, l₀) e^{−jγ l₀}.   (169)
In addition, if a rotated coordinate system is used for r̄ = (ξ, η), where

ξ = x sin φ₀ − y cos φ₀   (170)

and

η = x cos φ₀ + y sin φ₀,   (171)

then the dot product k₀(s̄ − s̄₀) · r̄ can be written as

κξ + (γ − k₀)η.   (172)
Fig. 6.19: In backpropagation the projection is backprojected with a depth-dependent filter function. At each depth, η, the filter corresponds to propagating the field a distance of Δη. (From [Sla83].)
The coordinates (ξ, η) are illustrated in Fig. 6.19. Using the results above we can now write the χ integral of (164) as

(2k₀/j) ∫_{−k₀}^{k₀} |κ| U_B(κ, l₀) e^{−jγ l₀} e^{j[κξ + (γ − k₀)η]} dκ.   (173)
This integral can be expressed as a filtered projection

Π̃_{φ_0}(ξ, η) = (1/2π) ∫_{−∞}^{∞} Γ(ω) Π_{φ_0}(ω) G_η(ω) e^{jωξ} dω   (175)

where, comparing with (173), Π_{φ_0}(ω) = j k_0 U_B(ω, γ − k_0) e^{−jγ l_0} is (up to a constant) the Fourier transform of the measured projection, and the two filter functions are

Γ(ω) = |ω| for |ω| ≤ k_0,  Γ(ω) = 0 for |ω| > k_0   (177)

G_η(ω) = e^{j(√(k_0² − ω²) − k_0) η} for |ω| ≤ k_0,  G_η(ω) = 0 for |ω| > k_0.   (179)
Without the extra filter function G_η(ω), the rest of (175) would correspond to the filtering operation of the projection data in x-ray tomography. The filtering called for by the transfer function G_η(ω) is depth dependent due to the parameter η, which is equal to x cos φ_0 + y sin φ_0. In terms of the filtered projections Π̃_{φ_0}(ξ, η) in (175), the reconstruction integral of (174) may be expressed as

o(x, y) = (1/2π) ∫₀^{2π} Π̃_{φ_0}(x sin φ_0 − y cos φ_0, x cos φ_0 + y sin φ_0) dφ_0.   (181)
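The filtering of (175) and the backprojection of (181) can be sketched numerically. Everything below (the projections array layout, the choice of an M × M grid, the nearest-depth assignment, and the dropped constant factors and e^{−jγl_0} phase) is illustrative rather than the book's implementation:

```python
import numpy as np

def filtered_backpropagation(projections, angles, k0, dx):
    """Sketch of filtered backpropagation, eqs. (175) and (181).

    projections: (n_angles, M) complex array of diffracted projections,
    assumed sampled at the centered positions xs below; constant factors
    are omitted for clarity.
    """
    n_ang, M = projections.shape
    w = 2 * np.pi * np.fft.fftfreq(M, d=dx)           # frequencies along receiver line
    gamma = np.sqrt(np.maximum(k0 ** 2 - w ** 2, 0.0))
    ramp = np.where(np.abs(w) <= k0, np.abs(w), 0.0)  # Gamma(w): ramp, zero outside k0

    xs = (np.arange(M) - M // 2) * dx                 # reconstruction grid (M x M)
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((M, M), dtype=complex)

    for proj, phi in zip(projections, angles):
        P = np.fft.fft(proj) * ramp                   # ramp-filtered projection
        xi = X * np.sin(phi) - Y * np.cos(phi)        # coordinate along the receiver line
        eta = X * np.cos(phi) + Y * np.sin(phi)       # depth coordinate
        for eta_d in xs:                              # one filter G_eta per depth sample
            G = np.where(np.abs(w) <= k0,
                         np.exp(1j * (gamma - k0) * eta_d), 0)
            line = np.fft.ifft(P * G)                 # filtered projection at this depth
            mask = np.abs(eta - eta_d) < dx / 2       # pixels nearest this depth line
            if mask.any():                            # linear interpolation along xi
                image[mask] += np.interp(xi[mask], xs, line)
    return image / n_ang                              # sum over all projections
```

Note the cost structure: each projection needs one forward FFT plus one inverse FFT per depth value, which is exactly what makes full backpropagation expensive.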
The computational procedure for reconstructing an image on the basis of (175) and (181) may be presented in the form of the following steps:

Step 1: In accordance with (175), filter each projection with a separate filter for each depth in the image frame. For example, if we chose only nine depths as shown in Fig. 6.19, we would need to apply nine different filters to the diffracted projection shown there. (In most cases for a 128 × 128 reconstruction grid, the number of discrete depths chosen for filtering the projection will also be around 128. If there are many fewer than 128, spatial resolution will suffer.)

Step 2: To each pixel (x, y) in the image frame, in accordance with (181), allocate a value of the filtered projection that corresponds to the
nearest depth line. Since it is unlikely that a discrete implementation of (175) will lead to data at the precise location of each pixel, some form of polynomial interpolation (e.g., bilinear) will lead to better reconstructions.

Step 3: Repeat the preceding two steps for all projections. As a new projection is taken up, add its contribution to the current sum at pixel (x, y).

The depth-dependent filtering in Step 1 makes this algorithm computationally very demanding. For example, if we choose N_η depth values, the processing of each projection will take (N_η + 1) fast Fourier transforms (FFTs). If the total number of projections is N_φ, this translates into (N_η + 1)N_φ FFTs. For most N × N reconstructions, both N_η and N_φ will be approximately equal to N. Therefore, Devaney's filtered backpropagation algorithm will require approximately N² FFTs, compared to 4N FFTs for frequency domain interpolation. (For precise comparisons, we must mention that the FFTs for the case of frequency domain interpolation are longer due to zero-padding.)

Devaney [Dev82] has also proposed a modified filtered backpropagation algorithm, in which G_η(ω) is simply replaced by a single G_{η_0}(ω), where η_0 = x_0 cos φ_0 + y_0 sin φ_0, (x_0, y_0) being the coordinates of the point where local accuracy in reconstruction is desired. (Elimination of depth-dependent filtering reduces the number of FFTs to 2N_φ.)

6.5 Limitations

There are several factors that limit the accuracy of diffraction tomography reconstructions. These limitations are caused both by the approximations that must be made in the derivation of the reconstruction process and by experimental factors. The mathematical and experimental effects limit the reconstruction in different ways. The most severe mathematical limitations are imposed by the Born and the Rytov approximations. These approximations are fundamental to the reconstruction process and limit the range of objects that can be examined.
On the other hand, it is only possible to collect a finite amount of data, and this gives rise to errors in the reconstruction which can be attributed to experimental limitations. Up to the limit in resolution caused by evanescent waves, and given a perfect reconstruction algorithm, it is possible to improve a reconstruction by collecting more data. It is important to understand the experimental limitations so that the experimental data can be used efficiently.

6.5.1 Mathematical Limitations

Computer simulations were performed to study several questions posed by diffraction tomography. In diffraction tomography there are different
approximations involved in the forward and inverse directions. In the forward process it is necessary to assume that the object is weakly scattering so that either the Born or the Rytov approximation can be used. Once an expression for the scattered field is derived, it is necessary not only to measure the scattered fields but then to numerically implement the inversion process. By carefully designing the simulations it is possible to separate the effects of the approximations. To study the effects of the Born and the Rytov approximations it is necessary to calculate (or even measure) the exact fields and then use the best possible (most exact) reconstruction formulas available. The difference between the reconstruction and the actual object is a measure of the quality of the approximations.

6.5.2 Evaluation of the Born Approximation

The exact expression for the scattered field from a cylinder, as shown by Weeks [Wee64] and by Morse and Ingard [Mor68], was calculated for cylinders of various sizes and refractive indexes. In the simulations that follow, a single plane wave of unit wavelength was incident on the cylinder and the scattered field was measured along a line at a distance of 100 wavelengths from the origin. In addition, all refractive index changes were modeled as monopole scatterers. By doing this, the directional dependence of dipole scatterers didn't have to be taken into account. At the receiver line the received wave was measured at 512 points spaced at 1/2 wavelength intervals. In all cases the rotational symmetry of a single cylinder at the origin was used to reduce the computation time of the simulations.

The results shown in Fig. 6.20 are for cylinders of four different refractive indexes. In addition, Fig. 6.21 shows plots of the reconstructions along a line through the center of each cylinder. Notice that the y-coordinate of the center line is plotted in terms of change from unity.
The simulations were performed for refractive indexes that ranged from a 0.1% change (refractive index of 1.001) to a 20% change (refractive index of 1.2). For each refractive index, cylinders of size 1, 2, 4, and 10 wavelengths were reconstructed. This gives a range of phase changes across the cylinder (see (50)) from 0.004π to 16π. Clearly, all the cylinders of refractive index 1.001 in Fig. 6.20 were perfectly reconstructed. As (50) predicts, the results get worse as the product of refractive index and radius gets larger. The largest refractive index that was successfully reconstructed was for the cylinder in Fig. 6.20 of radius 1 wavelength and a refractive index that differed by 20% from the surrounding medium.

While it is hard to evaluate quantitatively the two-dimensional reconstructions, it is certainly reasonable to conclude that only cylinders where the phase change across the object was less than or equal to 0.8π were adequately reconstructed. In general, the reconstruction for each cylinder where the
phase change across the cylinder was greater than π shows severe artifacts near the center. This limitation in the phase change across the cylinder is consistent with the condition expressed in (51). Finally, it is important to note that the reconstructions in Fig. 6.20 don't show the most severe limitation of the Born approximation, which is that the real and imaginary parts of a reconstruction can get mixed up. For objects that don't satisfy the 0.8π phase change limitation, the Born approximation causes some of the real energy in the reconstruction to be rotated into the imaginary plane. This further limits the use of the Born approximation when it is necessary to separately image the real and imaginary components of the refractive index.

6.5.3 Evaluation of the Rytov Approximation

Fig. 6.22 shows the simulated results for 16 reconstructions using the Rytov approximation. To emphasize the insensitivity of the Rytov approximation to large objects, the largest object simulated had a diameter of 100λ. Note that these reconstructions are an improvement over those published in [Sla84] due to decreased errors in the phase unwrapping algorithm used. This was accomplished by using an adaptive phase unwrapping algorithm as described in [Tri77] and by reducing the sampling interval on the receiver line to 0.125λ.

It should be pointed out that the rounded edges of the 1λ reconstructions aren't due to any limitation of the Rytov approximation but instead are the result of a two-dimensional low pass filtering of the reconstructions. Recall that for a transmission experiment an estimate of the object's Fourier transform is only available up to frequencies less than √2·k_0. Thus the reconstructions shown in Fig. 6.22 show the limitations of both the Rytov approximation and the Fourier Diffraction Theorem.

6.5.4 Comparison of the Born and Rytov Approximations

Reconstructions using exact scattered data show the similarity of the Born and the Rytov approximations.
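The phase-change criterion used in the Born evaluation above can be tabulated directly. A small sketch, assuming (50) has the form 4πa(n − 1)/λ for a cylinder of radius a (this form reproduces the 0.004π and 0.8π values quoted above; the helper names are ours, not the book's):

```python
import math

def born_phase_change(radius, n_delta):
    """Phase shift of the incident field across a cylinder of the given
    radius (in wavelengths) and refractive-index change n - 1, assuming
    the form 4*pi*a*(n - 1)/lambda for eq. (50)."""
    return 4 * math.pi * radius * n_delta

def born_valid(radius, n_delta, limit=0.8 * math.pi):
    """Empirical criterion from the simulations: phase change <= 0.8*pi."""
    return born_phase_change(radius, n_delta) <= limit
```

For example, a radius-1λ cylinder with a 20% index change sits exactly at the 0.8π boundary, matching the largest successfully reconstructed cylinder above.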
Within the limits of the Fourier Diffraction Theorem, the reconstructions in Figs. 6.20 and 6.22 of a 1λ object with a small refractive index are similar. In both cases the reconstructed change in refractive index is close to that of the simulated object. The two approximations differ for objects that have a large refractive index change or have a large radius. The Born reconstructions are good at a large refractive index as long as the phase shift of the incident field, as predicted by (50), is less than π. On the other hand, the Rytov approximation is very sensitive to the refractive index but produces excellent reconstructions for objects as large as
Many thanks to M. Kaveh of the University of Minnesota for pointing this out to the authors.
Fig. 6.20: Reconstructions of 16 different cylinders are shown indicating the effect of cylinder radius and refractive index on the Born approximation. (From [Sla84].)
100λ. Unfortunately, for objects with a refractive index larger than a few percent the Rytov approximation quickly deteriorates.

In addition to the qualitative studies, a quantitative study of the error in the Born and Rytov reconstructions was also performed. As a measure of error we used the relative mean squared error in the reconstruction of the object function integrated over the entire plane. If the actual object function is o(r) and the reconstructed object function is o′(r), then the relative mean squared error (MSE) is

MSE = ∫ [o(r) − o′(r)]² dr / ∫ [o(r)]² dr.   (182)
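With the integrals of (182) replaced by sums over the sampled image plane, the error measure is a one-liner (the function name is ours):

```python
import numpy as np

def relative_mse(o_true, o_rec):
    """Relative mean squared error of (182), with the integrals replaced
    by sums over the sampled image plane."""
    num = np.sum(np.abs(o_true - o_rec) ** 2)
    return num / np.sum(np.abs(o_true) ** 2)
```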
For this study 120 reconstructions were done of cylinders using the exact scattered data. In each case a 512-point receiver line was at a distance of 10λ from the center of the cylinder. Both the receiver line and the object reconstruction were sampled at 1/4λ intervals. The plots of Fig. 6.23 present a summary of the mean squared error for cylinders of 1, 2, and 3λ in radius and for 20 refractive indexes between 1.01 and 1.20. In each case the error for the Born approximation is shown as a solid line while the Rytov reconstruction is shown as a broken line.

Many researchers [Kav82], [Kel69], [Sou83] have postulated that the Rytov approximation is superior to the Born, but as the actual reconstructions in Fig. 6.23(a) show for a 1λ cylinder this is not necessarily true. While for
Fig. 6.21: Cross sections of the cylinders shown in Fig. 6.20 are shown here.
the cylinder of radius 2λ there is a region where the Rytov approximation shows less error than the Born reconstruction, this doesn't occur until the relative error is above 20%. What is clear is that both the Born and the Rytov approximations are only valid for small objects and that they both produce similar errors.
6.6 Evaluation of Reconstruction Algorithms

To study the approximations involved in the reconstruction process it is necessary to calculate scattered data assuming the forward approximations
are valid. This can be done in one of two different ways. We have already discussed that the Born and Rytov approximations are valid for small objects and small changes in refractive index. Thus, if we calculate the exact scattered field for a small and weakly scattering object, we can assume that either the Born or the Rytov approximation is exact. A better approach is to recall the Fourier Diffraction Theorem, which says that the Fourier transform of the scattered field is proportional to the Fourier transform of the object along a semicircular arc. Since this theorem is the
basis for our inversion algorithm, if we assume it is correct we can study the approximations involved in the reconstruction process. If we assume that the Fourier Diffraction Theorem holds, the scattered field can be calculated exactly for objects that can be modeled as ellipses. The analytic expression for the Fourier transform of the object along an arc is proportional to the scattered fields. This procedure is fast and allows us to calculate scattered fields for testing reconstruction algorithms and experimental parameters. To illustrate the accuracy of the interpolation-based algorithms, we will
use the image in Fig. 6.24 as a test object for showing some computer simulation results. Fig. 6.24 is a modification of the Shepp and Logan phantom described in Chapter 3 to the case of diffraction imaging. The gray levels shown in Fig. 6.24 represent the refractive index values. This test image is a superposition of ellipses, with each ellipse being assigned a refractive index value as shown in Table 6.1.

A major advantage of using an image like that in Fig. 6.24 for computer simulation is that one can write analytical expressions for the transforms of the diffracted projections. The Fourier transform of an ellipse of semi-major axis A and semi-minor axis B centered at the origin is

F(u, v) = 2πA J_1{B[(uA/B)² + v²]^{1/2}} / [(uA/B)² + v²]^{1/2}   (183)
Fig. 6.22: Reconstructions of 16 different cylinders are shown indicating the effect of cylinder radius and refractive index on the Rytov approximation. These reconstructions were calculated by sampling the scattered fields at 16,384 points along a line 100λ from the edge of the object. A sampling interval of 6(R + 100)/16,384, where R is the radius of the cylinder, was used to make it easier to unwrap the phase of the scattered fields. (Adapted from [Sla84].)
Fig. 6.23: The relative mean squared errors for reconstructions with the Born (solid) and the Rytov (broken) approximations are shown here. Each plot is a function of the refractive index of the cylinder. The mean squared error is plotted for cylinders of radius 1λ, 2λ, and 3λ. (From [Sla84].)
where u and v are spatial angular frequencies in the x and y directions, respectively, and J_1 is a Bessel function of the first kind and order 1. When the center of this ellipse is shifted to the point (x_1, y_1), and the angle of the major axis is tilted by α, as shown in Fig. 6.25(b), its Fourier transform
Fig. 6.24: For diffraction tomographic simulations a slightly modified version of the Shepp and Logan head phantom is used. (From [Pan83].)
becomes

F(u, v) = e^{−j(ux_1 + vy_1)} · 2πA J_1{B[((u cos α + v sin α)A/B)² + (−u sin α + v cos α)²]^{1/2}} / [((u cos α + v sin α)A/B)² + (−u sin α + v cos α)²]^{1/2}.   (184)

Now consider the situation in which the ellipse is illuminated by a plane wave. By the Fourier Diffraction Theorem discussed previously, the Fourier transform of the transmitted wave fields, measured on a line like TT′ shown in Fig. 6.2(left), will be given by the values of the above function on a semicircular arc as shown in Fig. 6.2(right). If we assume weak scattering and therefore no interactions among the ellipses, the Fourier transform of the
Table 6.1: Summary of parameters for diffraction tomography simulations.

Center Coordinate   Major Axis   Minor Axis   Rotation Angle   Refractive Index
(0, 0)              0.92         0.69         90               1.0
(0, -0.0184)        0.874        0.6624       90               -0.5
(0.22, 0)           0.31         0.11         72               -0.2
(-0.22, 0)          0.41         0.16         108              -0.2
(0, 0.35)           0.25         0.21         90               0.1
(0, 0.1)            0.046        0.046        0                0.15
(0, -0.1)           0.046        0.046        0                0.15
(-0.08, -0.605)     0.046        0.023        0                0.15
(0, -0.605)         0.023        0.023        0                0.15
(0.06, -0.605)      0.046        0.023        90               0.15
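The ellipse transform of (184), which together with Table 6.1 generates the analytic projection data, can be sketched as follows. The function names, the numerical J_1 (used to avoid external dependencies), and the handling of the r → 0 limit are ours; Fourier-sign conventions are assumed:

```python
import numpy as np

def bessel_j1(x):
    """J_1 via its integral representation (numerical; trapezoidal rule)."""
    th = np.linspace(0.0, np.pi, 2001)
    xs = np.atleast_1d(np.asarray(x, dtype=float))
    f = np.cos(th[None, :] - xs[:, None] * np.sin(th)[None, :])
    return (f.sum(axis=1) - 0.5 * (f[:, 0] + f[:, -1])) * (th[1] - th[0]) / np.pi

def ellipse_transform(u, v, A, B, x1=0.0, y1=0.0, alpha=0.0):
    """Fourier transform of a unit-amplitude ellipse, per (183)-(184)."""
    up = (u * np.cos(alpha) + v * np.sin(alpha)) * A / B   # rotated frequencies
    vp = -u * np.sin(alpha) + v * np.cos(alpha)
    r = np.hypot(up, vp)
    mag = np.where(r > 1e-9,
                   2 * np.pi * A * bessel_j1(B * r) / np.maximum(r, 1e-9),
                   np.pi * A * B)          # r -> 0 limit is the ellipse area
    return mag * np.exp(-1j * (u * x1 + v * y1))           # center-shift phase
```

Sampling this function along the semicircular arc of the Fourier Diffraction Theorem, and summing one such term per ellipse of Table 6.1, gives the diffracted projection data used in the simulations below.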
Fig. 6.25: Assuming the Fourier Diffraction Theorem, the field scattered by an ellipse can be easily calculated. (From [Kak85].)
total forward scattered field measured on the line TT′ will be a sum of the values of functions like (184) over the semicircular arc. This procedure was used to generate the diffracted projection data for the test image.

We must mention that by generating the diffracted projection data for computer simulation by this procedure, we are only testing the accuracy of the reconstruction algorithm, without checking whether or not the test object satisfies the underlying assumption of weak scattering. In order to test this crucial assumption, we must generate the forward scattered data of the object exactly on a computer. For multicomponent objects, such as the one shown in Fig. 6.24, it is very difficult to do so due to the interactions between the components.

Pan and Kak [Pan83] presented the simulations shown in Fig. 6.26. Using a combination of increasing the sampling density by zero-padding the signal and bilinear interpolation, results were obtained in 2 minutes of CPU time on a VAX 11/780 minicomputer with a floating point accelerator (FPA). The reconstruction was done over a 128 × 128 grid using 64 views and 128 receiver positions. The number of operations required to carry out the interpolation and invert the object function is on the order of N² log N. The resulting reconstruction is shown in Fig. 6.26(a).

Fig. 6.26(b) represents the result of backpropagating the data to 128 depths for each view, while Fig. 6.26(c) is the result of backpropagation to only a single depth centered near the three small ellipses at the bottom of the picture. The results were simulated on a VAX 11/780 minicomputer and the resulting reconstructions were done over a 128 × 128 grid. Like the previous image, the input data consisted of 64 projections of 128 points each. There was a significant difference in not only the reconstruction time but also the resulting quality. While the modified backpropagation took only 1.25 minutes, the resulting reconstruction is much poorer than that from the full backpropagation, which took 30 minutes of CPU time. A comparison of the
various algorithms is shown in Table 6.2. Note that the table doesn't explicitly show the extra CPU time required if zero-padding is used in the frequency domain to make space domain interpolation easier. To a very rough approximation, the space domain interpolation and modified backpropagation algorithms take N² log N steps while the full backpropagation algorithm takes N³ log N steps.

6.7 Experimental Limitations
In addition to the limits on the reconstructions imposed by the Born and the Rytov approximations, there are also the following experimental limitations to consider:

• Limitations caused by ignoring evanescent waves
• Sampling the data along the receiver line
• Finite receiver length
• Limited views of the object.
Each of the first three factors can be modeled as a simple constant low pass filtering of the scattered field. Because the reconstruction process is linear, the net effect can be modeled by a single low pass filter with a cutoff at the lowest of the three cutoff frequencies. The experiment can be optimized by adjusting the parameters so that each low pass filter cuts off at the same frequency. The effect of a limited number of views also can be modeled as a low pass filter. In this case, though, the cutoff frequency varies with the radial direction.

6.7.1 Evanescent Waves

Since evanescent waves have a complex wavenumber, they are severely attenuated over a distance of only a few wavelengths. This limits the highest received wavenumber to

k_max = 2π/λ.   (185)
This is a fundamental limit of the propagation process and can only be improved by moving the experiment to a higher frequency (or shorter wavelength).

6.7.2 Sampling the Received Wave

After the wave has been scattered by the object and propagated to the receiver line, it must be measured. This is usually done with a point receiver.
Fig. 6.26: The images show the results of using the (a) interpolation, (b) backpropagation, and (c) modified backpropagation algorithms on reconstruction quality. The solid lines of the graphs represent the reconstructed value along a line through the three ellipses at the bottom of the phantom. (From [Pan83].)
Unfortunately, it is not possible to sample at every point, so a nonzero sampling interval must be chosen. This introduces a measurement error into the process. By the Nyquist theorem this can be modeled as a low pass filtering operation, where the highest measured frequency is given by

k_meas = π/T   (186)

where T is the sampling interval.
6.7.3 The Effects of a Finite Receiver Length

Not only are there physical limitations on the finest sampling interval, but usually there is a limitation on the amount of data that can be collected. This generally means that samples of the received waveform will be collected at only a finite number of points along the receiver line. This is usually justified by taking data along a line long enough so that the unmeasured data can be safely ignored. Because of this the wave propagation process also introduces a low pass filtering of the received data.

Consider for a moment a single scatterer at some distance, l_0, from the receiver line. The wave propagating from this single scatterer is a cylindrical wave in two dimensions or a spherical wave in three dimensions. This effect is diagrammed in Fig. 6.27. It is easy to see that the spatial frequencies vary with the position along the receiver line. This effect can be analyzed using two different approaches. It is easier to analyze the effect by considering the expanding wave to be

Table 6.2: Comparison of algorithms.

Algorithm                        Complexity         CPU Time (minutes)
Frequency Domain Interpolation   N² log N           2
Backpropagation                  N_η N_φ N log N    30
Modified Backpropagation         N_φ N log N        1.25
Fig. 6.27: An object scatters a field which is measured with a finite receiver line. (From [Sla83].)
locally planar at any point distant from the scatterer. At the point on the receiver line closest to the scatterer there is no spatial variation [Goo68]. This corresponds to receiving a plane wave, or a received spatial frequency of zero. Higher spatial frequencies are received at points along the receiver line that are farther from the origin. The received frequency is a function of the sine of the angle between the direction of propagation and a perpendicular to the receiver line. This function is given by

k(y) = k_max sin θ   (187)
where θ is the angle and k_max is the wavenumber of the incident wave. Thus at the origin, the angle, θ, is zero and the received frequency is zero. Only at infinity does the angle become equal to 90° and the received spatial frequency approach the theoretical maximum. This reasoning can be justified on a more theoretical basis by considering the phase function of the propagating wave. The received wave at a point (x = l_0, y) due to a scatterer at the origin is given (up to amplitude factors) by

u(l_0, y) ∝ e^{j k_0 √(l_0² + y²)}.   (188)
The phase of this wave is

phase = k_0 √(l_0² + y²)   (189)

and the spatial frequency of the received wave can be found by taking the partial derivative of the phase with respect to y:

k_recv = k_0 y/√(l_0² + y²)   (190)
where k_recv is the spatial frequency received at the point (x = l_0, y). From Fig. 6.27 it is easy to see that

sin θ = y/√(l_0² + y²)   (191)
and therefore (187) and (190) are equivalent. This relation, (190), can be inverted to give the length of the receiver line for a given maximum received frequency, k_recv. This becomes

y = l_0 k_recv/√(k_0² − k_recv²).   (192)

Since the highest received frequency is a monotonically increasing function of the length of the receiver line, it is easy to see that by limiting the sampling of the received wave to a finite portion of the entire line, a low passed version of the entire scattered wave will be measured. The highest measured frequency is a simple function of the distance of the receiver line from the scatterer and the length of measured data. This limitation can be better understood if the maximum received frequency is written as a function of the angle of view of the receiver line. Thus substituting

tan θ = y/x   (193)

we find
k_recv = k_0 (y/x)/√((y/x)² + 1).   (194)
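Expressions (190) and (194) are easy to check numerically; a small sketch (the function name is ours):

```python
import math

def received_frequency(y, x, k0):
    """Highest spatial frequency received at offset y along a line a
    distance x from the scatterer, per (190) and (194)."""
    return k0 * (y / x) / math.sqrt((y / x) ** 2 + 1)
```

At the origin (y = 0) the received frequency is zero, and as y/x grows it approaches the theoretical maximum k_0, in agreement with the text.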
Thus k_recv is a monotonically increasing function of the angle of view, θ. It is easy to see that the maximum received spatial frequency can be increased
either by moving the receiver line closer to the object or by increasing the length of the receiver line.

6.7.4 Evaluation of the Experimental Effects

The effect of a finite receiver length was simulated and results are shown in Fig. 6.28. The spatial frequency content of a wave, found by taking the FFT of the sampled points along the receiver line, was compared to the theoretical result as predicted by the Fourier transform of the object. The theory predicts that more of the high frequency components will be present as the length of the receiver line increases, and this is confirmed by simulation. While the above derivation only considered a single scatterer, it is also approximately true for many scatterers collected at the origin. This is so because the inverse reconstruction process is linear and each point in the object scatters an independent cylindrical wave.

6.7.5 Optimization

Since each of the above three factors is independent of the other two, their effect in the frequency domain can be found by simply multiplying their frequency responses together. As has been described above, each of these effects can be modeled as a simple low pass filter, so the combined effect is also a low pass filter but at the lowest cutoff frequency of the three effects. First consider the effect of ignoring the evanescent waves. Since the maximum frequency of the received wave is limited by the propagation filter to
Fig. 6.28: These four reconstructions show the effect of a finite receiver line. Reconstructions of an object using 64 detectors spaced at (a) 0.5λ, (b) 1.0λ, (c) 1.5λ, and (d) 2.0λ are shown here. (From [Sla83].)
it is easy to combine this expression with the expression for the Nyquist sampling frequency into a single expression for the smallest interval. This is given by

k_max = k_meas   (197)

or

2π/λ = π/T.   (198)

Therefore,

T = λ/2.   (199)
If the received waveform is sampled with a sampling interval of more than 1/2 wavelength, the measured data might not be a good estimate of the received waveform because of aliasing. On the other hand, it is not necessary to sample the received waveform any finer than 1/2 wavelength since this provides no additional information. Therefore, we conclude that the sampling interval should be close to 1/2 wavelength.

In general, the experiment will also be constrained by the number of data points (M) that can be measured along the receiver line. The distance from the object to the receiver line will be considered a constant in the derivation that follows. If the received waveform is sampled uniformly, the range of the receiver line is given uniquely by

y_max = ±MT/2.   (200)

This is also shown in Fig. 6.27. For a receiver line at a fixed distance from the object and a fixed number of receiver points, the choice of T is determined by the following two competing considerations: As the sampling interval is increased, the length of the receiver line increases and more of the received wave's high frequencies are measured. On the other hand, increasing the sampling interval lowers the maximum frequency that can be measured before aliasing occurs.

The optimum value of T can be found by setting the Nyquist cutoff frequency equal to the highest received frequency due to the finite receiver length and then solving for the sampling interval. If this constraint isn't met, then some of the information that is passed by one process will be attenuated by the others. This results in

π/T = k_recv   (201)

evaluated at

y = MT/2.   (202)

Substituting (190), with x the distance to the receiver line, gives

π/T = k_0(MT/2)/√(x² + (MT/2)²).   (203)

Solving for T², we find that the optimum value for T is given by

T² = λ² [√(64(x/λ)² + M²) + M]/(8M).   (204)
For x ≫ Mλ this is approximately

T² ≈ λx/M   (205)

or

T ≈ √(λx/M).   (206)
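The optimum interval of (204), together with the λ/2 floor discussed above, can be sketched as a small helper (the function name and the clamping choice are ours):

```python
import math

def optimal_sampling_interval(x_over_lambda, M):
    """Optimum receiver sampling interval T (in wavelengths) from (204)
    for a scatterer a distance x from a receiver line of M samples;
    never finer than the lambda/2 Nyquist limit of (199)."""
    T2 = (M + math.sqrt(M * M + 64.0 * x_over_lambda ** 2)) / (8.0 * M)
    return max(math.sqrt(T2), 0.5)
```

For the example simulated below (x = 100λ, M = 64 receivers) this gives approximately 1.3 wavelengths, the value quoted in the text.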
This formula is to be used with the constraint that the smallest positive value for the sampling interval is 1/2 wavelength. The optimum sampling interval is confirmed by simulations. Again using the method described above for calculating the exact scattered fields, four simulations were made of an object of radius 10 wavelengths using a receiver line that was 100 wavelengths from the object. In each case the number of receiver positions was fixed at 64. The resulting reconstructions for sampling intervals of 0.5, 1, 1.5, and 2 wavelengths are shown in Fig. 6.28. Equation (206) predicts an optimum sampling interval of 1.3 wavelengths, and this is confirmed by the simulations. The best reconstruction occurs with a sampling interval between 1 and 1.5 wavelengths.

6.7.6 Limited Views

In many applications it is not possible to generate or receive plane waves from all directions. The effect of this is to leave holes where there is no estimate of the Fourier transform of the object. Since the ideal reconstruction algorithm produces an estimate of the Fourier transform of the object for all frequencies within a disk, a limited number of views introduces a selective filter for areas where there are no data. As shown by Devaney [Dev84] for the VSP case, a limited number of views degrades the reconstruction by low pass filtering the image in certain directions. Devaney's results are reproduced in Figs. 6.29 and 6.30.

6.8 Bibliographic Notes
The paper by Mueller et al. [Mue79] was responsible for focusing the interest of many researchers on the area of diffraction tomography, although from a purely scientific standpoint the technique can be traced back to the now classic paper by Wolf [Wol69] and a subsequent article by Iwata and Nagata [Iwa75]. The small perturbation approximations that are used for developing the diffraction tomography algorithms have been discussed by Ishimaru [Ish78] and Morse and Ingard [Mor68]. A discussion of the theory of the Born and the Rytov approximations was presented by Chernov in [Che60]. A
Fig. 6.29: These figures show the coverage in the frequency domain for six different angular receiver limitations. (From [Dev84].)
comparison of Born and Rytov approximations is presented in [Kel69], [Sla84], [Sou83]. The effect of multiple scattering on first-order diffraction tomography is described in [Azi83], [Azi85]. Another review of diffraction tomography is presented in [Kav86].

Diffraction tomography falls under the general subject of inverse scattering. The issues relating to the uniqueness and stability of inverse scattering solutions are addressed in [Bal78], [Dev78], [Nas81], [Sar81]. The mathematics of solving integral equations for inverse scattering problems is described in [Col83].

The filtered backpropagation algorithm for diffraction tomography was first advanced by Devaney [Dev82]. More recently, Pan and Kak [Pan83] showed that by using frequency domain interpolation followed by direct Fourier inversion, reconstructions of quality comparable to that produced by the filtered backpropagation algorithm can be obtained. Interpolation-based algorithms were first studied by Carter [Car70] and Mueller et al. [Mue80], [Sou84b]. An interpolation technique based on the known support of the object in the space domain is known as the unified frequency domain reconstruction (UFR) and is described in [Kav84]. Since the problems are related, the reader is referred to an excellent paper by Stark et al. [Sta81] that describes optimum interpolation techniques as applied to direct Fourier inversion of straight ray projections. The reader is also referred to [Fer79] to learn how in some cases it may be possible to avoid the interpolation and still be able to reconstruct an object with direct 2-D Fourier inversion.

A diffraction tomography approach that requires only two rotational positions of the object has been advanced by Nahamoo et al. [Nah84] and
TOMOGRAPHIC IMAGING WITH DIFFRACTING SOURCES 269
Fig. 6.30: Images due to the limited fields of view shown in Fig. 6.29. (From [Dev84].)
Devaney [Dev83], and its computer implementation has been studied by Pan and Kak [Pan83]. Diffraction tomography based on the reflected data has been studied in great detail by Norton and Linzer [Nor81]. The first experimental diffraction tomography work was done by Carter and Ho using optical energy and is described in [Car70], [Car74], [HoP76]. More recently, Kaveh and Soumekh have reported experimental results in [Kav80], [Kav81], [Kav82], [Sou83]. Finally, more accurate techniques for imaging objects that don't fall within the domain of the Born and Rytov approximations have been reported in [Joh83], [Tra83], [Sla85], [Bey84], [Bey85a], [Bey85b].
6.9 References
[Azi83] M. Azimi and A. C. Kak, "Distortion in diffraction imaging caused by multiple scattering," IEEE Trans. Med. Imaging, vol. MI-2, pp. 176-195, Dec. 1983.
[Azi85] M. Azimi and A. C. Kak, "Multiple scattering and attenuation phenomena in diffraction imaging," TR-EE 85-4, School of Electrical Engineering, Purdue Univ., Lafayette, IN, 1985.
[Bal78] H. P. Baltes (Ed.), Inverse Source Problems in Optics. Berlin: Springer-Verlag, 1978.
[Bar78] V. Barthes and G. Vasseur, "An inverse problem for electromagnetic prospection," in Applied Inverse Problems, P. C. Sabatier, Ed. Berlin: Springer-Verlag, 1978.
[Bey84] G. Beylkin, "The inversion problem and applications of the generalized Radon transform," Commun. Pure Appl. Math., vol. 37, pp. 579-599, 1984.
[Bey85a] G. Beylkin, "Imaging of discontinuities in the inverse scattering problem by inversion of a causal generalized Radon transform," J. Math. Phys., vol. 26, pp. 99-108, Jan. 1985.
[Bey85b] G. Beylkin and M. L. Oristaglio, "Distorted-wave Born and distorted-wave Rytov approximations," Opt. Commun., vol. 53, pp. 213-216, Mar. 15, 1985.
[Car70] W. H. Carter, "Computational reconstruction of scattering objects from holograms," J. Opt. Soc. Amer., vol. 60, pp. 306-314, Mar. 1970.
[Car74] W. H. Carter and P. C. Ho, "Reconstruction of inhomogeneous scattering objects from holograms," Appl. Opt., vol. 13, pp. 162-172, Jan. 1974.
[Che60] L. A. Chernov, Wave Propagation in a Random Medium. New York, NY: McGraw-Hill, 1960.
[Col83] D. Colton and R. Kress, Integral Equation Methods in Scattering Theory. New York, NY: John Wiley and Sons, 1983.
[Dev78] A. J. Devaney, "Nonuniqueness in the inverse scattering problem," J. Math. Phys., vol. 19, pp. 1525-1531, 1978.
[Dev82] A. J. Devaney, "A filtered backpropagation algorithm for diffraction tomography," Ultrason. Imaging, vol. 4, pp. 336-350, 1982.
[Dev83] A. J. Devaney, "A computer simulation study of diffraction tomography," IEEE Trans. Biomed. Eng., vol. BME-30, pp. 377-386, July 1983.
[Dev84] A. J. Devaney, "Geophysical diffraction tomography," IEEE Trans. Geosci. Remote Sensing, Special Issue on Remote Sensing, vol. GE-22, pp. 3-13, Jan. 1984.
[Fer79] A. F. Fercher, H. Bartelt, H. Becker, and E. Wiltschko, "Image formation by inversion of scattered data: Experiments and computational simulation," Appl. Opt., vol. 18, pp. 2427-2439, 1979.
[Gag78] R. Gagliardi, Introduction to Communications Engineering. New York, NY: John Wiley and Sons, 1978.
[Goo68] J. W. Goodman, Introduction to Fourier Optics. San Francisco, CA: McGraw-Hill, 1968.
[Gre78] J. F. Greenleaf, S. K. Kenue, B. Rajagopalan, R. C. Bahn, and S. A. Johnson, "Breast imaging by ultrasonic computer-assisted tomography," in Acoustical Imaging, A. Metherell, Ed. New York, NY: Plenum Press, 1978.
[Hoc73] H. Hochstadt, Integral Equations. New York, NY: John Wiley and Sons, 1973.
[HoP76] P. C. Ho and W. H. Carter, "Structural measurement by inverse scattering in the first Born approximation," Appl. Opt., vol. 15, pp. 313-314, Feb. 1976.
[Ish78] A. Ishimaru, Wave Propagation and Scattering in Random Media. New York, NY: Academic Press, 1978.
[Iwa75] K. Iwata and R. Nagata, "Calculation of refractive index distribution from interferograms using the Born and Rytov's approximations," Japan. J. Appl. Phys., vol. 14, pp. 1921-1927, 1975.
[Joh83] S. A. Johnson and M. L. Tracy, "Inverse scattering solutions by a sinc basis, multiple source, moment method - Part I: Theory," Ultrason. Imaging, vol. 5, pp. 361-375, 1983.
[Kak85] A. C. Kak, "Tomographic imaging with diffracting and non-diffracting sources," in Array Signal Processing, S. Haykin, Ed. Englewood Cliffs, NJ: Prentice-Hall, 1985.
[Kav80] M. Kaveh, M. Soumekh, and R. K. Mueller, "Experimental results in ultrasonic diffraction tomography," in Acoustical Imaging, vol. 9, K. Wang, Ed. New York, NY: Plenum Press, 1980, pp. 433-450.
[Kav81] M. Kaveh, M. Soumekh, and R. K. Mueller, "A comparison of Born and Rytov approximations in acoustic tomography," in Acoustical Imaging, vol. 11, J. P. Powers, Ed. New York, NY: Plenum Press, 1981, pp. 325-335.
[Kav82] M. Kaveh, M. Soumekh, and R. K. Mueller, "Tomographic imaging via wave equation inversion," in Proc. Int. Conf. on Acoustics, Speech and Signal Processing, May 1982, pp. 1553-1556.
[Kav84] M. Kaveh, M. Soumekh, and J. F. Greenleaf, "Signal processing for diffraction tomography," IEEE Trans. Sonics Ultrason., vol. SU-31, pp. 230-239, July 1984.
[Kav86] M. Kaveh and M. Soumekh, "Computer-assisted diffraction tomography," in Image Recovery, Theory and Applications, H. Stark, Ed. New York, NY: Academic Press, 1986.
[Kel69] J. B. Keller, "Accuracy and validity of the Born and Rytov approximations," J. Opt. Soc. Amer., vol. 59, pp. 1003-1004, 1969.
[Ken82] S. K. Kenue and J. F. Greenleaf, "Limited angle multifrequency diffraction tomography," IEEE Trans. Sonics Ultrason., vol. SU-29, pp. 213-217, July 1982.
[LuZ84] Z. Q. Lu, M. Kaveh, and R. K. Mueller, "Diffraction tomography using beam waves: Z-average reconstruction," Ultrason. Imaging, vol. 6, pp. 95-102, Jan. 1984.
[McG82] R. McGowan and R. Kuc, "A direct relation between a signal time series and its unwrapped phase," IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-30, pp. 719-726, Oct. 1982.
[Mor53] P. M. Morse and H. Feshbach, Methods of Theoretical Physics. New York, NY: McGraw-Hill, 1953.
[Mor68] P. M. Morse and K. U. Ingard, Theoretical Acoustics. New York, NY: McGraw-Hill, 1968.
[Mue79] R. K. Mueller, M. Kaveh, and G. Wade, "Reconstructive tomography and applications to ultrasonics," Proc. IEEE, vol. 67, pp. 567-587, 1979.
[Mue80] R. K. Mueller, M. Kaveh, and R. D. Iverson, "A new approach to acoustic tomography using diffraction techniques," in Acoustical Imaging, A. Metherell, Ed. New York, NY: Plenum Press, 1980, pp. 615-628.
[Nah81] D. Nahamoo, C. R. Crawford, and A. C. Kak, "Design constraints and reconstruction algorithms for transverse-continuous-rotate CT scanners," IEEE Trans. Biomed. Eng., vol. BME-28, pp. 79-97, 1981.
[Nah82] D. Nahamoo and A. C. Kak, "Ultrasonic diffraction imaging," TR-EE 82-20, School of Electrical Engineering, Purdue Univ., Lafayette, IN, 1982.
[Nah84] D. Nahamoo, S. X. Pan, and A. C. Kak, "Synthetic aperture diffraction tomography and its interpolation-free computer implementation," IEEE Trans. Sonics Ultrason., vol. SU-31, pp. 218-229, July 1984.
[Nas81] M. Z. Nashed, "Operator-theoretic and computational approaches to ill-posed problems with application to antenna theory," IEEE Trans. Antennas Propagat., vol. AP-29, pp. 220-231, 1981.
[Nor81] S. J. Norton and M. Linzer, "Ultrasonic reflectivity imaging in three dimensions: Exact inverse scattering solutions for plane, cylindrical and spherical apertures," IEEE Trans. Biomed. Eng., vol. BME-28, pp. 202-220, 1981.
[OCo78] B. T. O'Connor and T. S. Huang, "Techniques for determining the stability of two-dimensional recursive filters and their application to image restoration," TR-EE 78-18, School of Electrical Engineering, Purdue Univ., Lafayette, IN, pp. 6-24, 1978.
[Pan83] S. X. Pan and A. C. Kak, "A computational study of reconstruction algorithms for diffraction tomography: Interpolation vs. filtered-backpropagation," IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-31, pp. 1262-1275, Oct. 1983.
[Sar81] T. K. Sarkar, D. D. Weiner, and V. K. Jain, "Some mathematical considerations in dealing with the inverse problem," IEEE Trans. Antennas Propagat., vol. AP-29, pp. 373-379, 1981.
[Sla83] M. Slaney and A. C. Kak, "Diffraction tomography," Proc. S.P.I.E., vol. 413, pp. 2-19, Apr. 1983.
[Sla84] M. Slaney, A. C. Kak, and L. E. Larsen, "Limitations of imaging with first-order diffraction tomography," IEEE Trans. Microwave Theory Tech., vol. MTT-32, pp. 860-873, Aug. 1984.
[Sla85] M. Slaney and A. C. Kak, "Imaging with diffraction tomography," TR-EE 85-5, School of Electrical Engineering, Purdue Univ., Lafayette, IN, 1985.
[Sou83] M. Soumekh, M. Kaveh, and R. K. Mueller, "Algorithms and experimental results in acoustic tomography using Rytov's approximation," in Proc. Int. Conf. on Acoustics, Speech and Signal Processing, Apr. 1983, pp. 135-138.
[Sou84a] M. Soumekh, M. Kaveh, and R. K. Mueller, "Fourier domain reconstruction methods with application to diffraction tomography," in Acoustical Imaging, vol. 13, M. Kaveh, R. K. Mueller, and J. F. Greenleaf, Eds. New York, NY: Plenum Press, 1984, pp. 17-30.
[Sou84b] M. Soumekh and M. Kaveh, "Image reconstruction from frequency domain data on arbitrary contours," in Proc. Int. Conf. on Acoustics, Speech and Signal Processing, 1984, pp. 12A.2.1-12A.2.4.
[Sta81] H. Stark, J. W. Woods, I. Paul, and R. Hingorani, "Direct Fourier reconstruction in computer tomography," IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-29, pp. 237-244, 1981.
[Tra83] M. L. Tracy and S. A. Johnson, "Inverse scattering solutions by a sinc basis, multiple source, moment method - Part II: Numerical evaluations," Ultrason. Imaging, vol. 5, pp. 376-392, 1983.
[Tri77] J. M. Tribolet, "A new phase unwrapping algorithm," IEEE Trans. Acoust. Speech Signal Processing, vol. ASSP-25, pp. 170-177, Apr. 1977.
[Wee64] W. L. Weeks, Electromagnetic Theory for Engineering Applications. New York, NY: John Wiley and Sons, 1964.
[Wol69] E. Wolf, "Three-dimensional structure determination of semitransparent objects from holographic data," Opt. Commun., vol. 1, pp. 153-156, 1969.
An entirely different approach to tomographic imaging consists of assuming that the cross section consists of an array of unknowns, and then setting up algebraic equations for the unknowns in terms of the measured projection data. Although conceptually this approach is much simpler than the transform-based methods discussed in previous sections, for medical applications it lacks the accuracy and the speed of implementation. However, there are situations where it is not possible to measure a large number of projections, or where the projections are not uniformly distributed over 180° or 360°, both of these conditions being necessary requirements for the transform-based techniques to produce results with the accuracy desired in medical imaging. An example of such a situation is earth resources imaging using cross-borehole measurements discussed in Chapter 4. Problems of this type are sometimes more amenable to solution by algebraic techniques. Algebraic techniques are also useful when the energy propagation paths between the source and receiver positions are subject to ray bending on account of refraction, or when the energy propagation undergoes attenuation along ray paths as in emission CT. [Unfortunately, many imaging problems where refraction is encountered also suffer from diffraction effects (see Chap. 4).] As will be obvious from the discussion to follow, in algebraic methods it is essential to know the ray paths that connect the corresponding transmitter and receiver positions. When refraction and diffraction effects are substantial (medium inhomogeneities exceeding 10% of the average background value and correlation lengths of these inhomogeneities comparable to a wavelength), it becomes impossible to predict these ray paths. If algebraic techniques are applied under these conditions, we often obtain meaningless results.
If the refraction and diffraction effects are small (medium inhomogeneities are less than 2 to 3% of the average background value and the correlation width of these inhomogeneities is much greater than a wavelength), in some cases it is possible to combine algebraic techniques with digital ray tracing techniques [And82], [And84a], [And84b] and devise iterative procedures in which we first construct an image ignoring refraction, then trace rays connecting the corresponding transmitter and receiver locations through this distribution, and finally use these rays to construct a more accurate set of
ALGEBRAIC RECONSTRUCTION ALGORITHMS 275
algebraic equations. Experimental verification of this iterative procedure for weakly refracting objects has been obtained [And84b]. Space limitations prevent us from discussing here the combined ray tracing and algebraic reconstruction algorithms. Our aim in this section is to merely introduce the reader to the algebraic approach for image reconstruction. First we will show how we may construct a set of linear equations whose unknowns are elements of the object cross section. The Kaczmarz method for solving these equations will then be presented. This will be followed by the various approximations that are used in this method to speed up its computer implementation.
Fig. 7.1: In algebraic methods a square grid is superimposed over the unknown image. Image values are assumed to be constant within each cell of the grid. (From [Ros82].)
tion. Let $p_i$ be the ray-sum measured with the $i$th ray as shown in Fig. 7.1. The relationship between the $f_j$'s and the $p_i$'s may be expressed as

$$\sum_{j=1}^{N} w_{ij} f_j = p_i, \qquad i = 1, 2, \cdots, M \tag{1}$$
where $M$ is the total number of rays (in all the projections) and $w_{ij}$ is the weighting factor that represents the contribution of the $j$th cell to the $i$th ray integral. The factor $w_{ij}$ is equal to the fractional area of the $j$th image cell intercepted by the $i$th ray, as shown for one of the cells in Fig. 7.1. Note that most of the $w_{ij}$'s are zero since only a small number of cells contribute to any given ray-sum. If $M$ and $N$ were small, we could use conventional matrix theory methods to invert the system of equations in (1). However, in practice $N$ may be as large as 65,000 (for 256 × 256 images), and, in most cases for images of this size, $M$ will be of the same magnitude. For these values of $M$ and $N$ the size of the matrix $[w_{ij}]$ in (1) is 65,000 × 65,000, which precludes any possibility of direct matrix inversion. Of course, when noise is present in the measurement data and when $M < N$, even for small $N$ it is not possible to use direct matrix inversion, and some least squares method may have to be used. When both $M$ and $N$ are large, such methods are also computationally impractical. For large values of $M$ and $N$ there exist very attractive iterative methods for solving (1). These are based on the method of projections as first proposed by Kaczmarz [Kac37], and later elucidated further by Tanabe [Tan71]. To explain the computational steps involved in these methods, we first write (1) in an expanded form:
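When $M$ and $N$ are indeed small, the system of (1) can be handled directly. The following sketch (a hypothetical 2 × 2 image with five rays; all values are illustrative, not from the text) sets up the weight matrix and recovers the image with a conventional least squares routine:

```python
import numpy as np

# Hypothetical 2 x 2 image; cells ordered (f1, f2, f3, f4) row by row.
f_true = np.array([1.0, 2.0, 3.0, 4.0])

# Weight matrix [w_ij] of eq. (1) for five rays: two horizontal, two
# vertical, and one diagonal (each w_ij is the length of the ray's
# intersection with cell j, for cells of unit side).
s = np.sqrt(2.0)
W = np.array([[1.0, 1.0, 0.0, 0.0],   # ray through the top row
              [0.0, 0.0, 1.0, 1.0],   # ray through the bottom row
              [1.0, 0.0, 1.0, 0.0],   # ray through the left column
              [0.0, 1.0, 0.0, 1.0],   # ray through the right column
              [s,   0.0, 0.0, s  ]])  # diagonal through cells 1 and 4

p = W @ f_true                        # the measured ray-sums of eq. (1)

# With M = 5 and N = 4 this small, a conventional least squares solve
# recovers the image.
f_hat, *_ = np.linalg.lstsq(W, p, rcond=None)
```

For realistic image sizes the same direct solve becomes impossible, exactly as the text explains, which is what motivates the iterative methods below.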
$$\begin{aligned}
w_{11} f_1 + w_{12} f_2 + w_{13} f_3 + \cdots + w_{1N} f_N &= p_1 \\
w_{21} f_1 + w_{22} f_2 + w_{23} f_3 + \cdots + w_{2N} f_N &= p_2 \\
&\;\;\vdots \\
w_{M1} f_1 + w_{M2} f_2 + \cdots + w_{MN} f_N &= p_M.
\end{aligned} \tag{2}$$
A grid representation with $N$ cells gives an image $N$ degrees of freedom. Therefore, an image, represented by $(f_1, f_2, \cdots, f_N)$, may be considered to be a single point in an $N$-dimensional space. In this space each of the above equations represents a hyperplane. When a unique solution to these equations exists, the intersection of all these hyperplanes is a single point giving that solution. This concept is further illustrated in Fig. 7.2 where, for the purpose of display, we have considered the case of only two variables $f_1$ and $f_2$ satisfying the following equations:
$$\begin{aligned}
w_{11} f_1 + w_{12} f_2 &= p_1 \\
w_{21} f_1 + w_{22} f_2 &= p_2.
\end{aligned} \tag{3}$$
Fig. 7.2: The Kaczmarz method of solving algebraic equations is illustrated for the case of two unknowns. One starts with some arbitrary initial guess and then projects onto the line corresponding to the first equation. The resulting point is now projected onto the line representing the second equation. If there are only two equations, this process is continued back and forth, as illustrated by the dots in the figure, until convergence is achieved. (From [Ros82].)
The computational procedure for locating the solution in Fig. 7.2 consists of first starting with an initial guess, projecting this initial guess on the first line, reprojecting the resulting point on the second line, and then projecting back onto the first line, and so forth. If a unique solution exists, the iterations will always converge to that point. For the computer implementation of this method, we first make an initial guess at the solution. This guess, denoted by $f_1^{(0)}, f_2^{(0)}, \cdots, f_N^{(0)}$, is represented vectorially by $\vec{f}^{(0)}$ in the $N$-dimensional space. In most cases, we simply assign a value of zero to all the $f_i$'s. This initial guess is projected on the hyperplane represented by the first equation in (2), giving $\vec{f}^{(1)}$, as illustrated in Fig. 7.2 for the two-dimensional case. $\vec{f}^{(1)}$ is projected on the hyperplane represented by the second equation in (2) to yield $\vec{f}^{(2)}$, and so on. When $\vec{f}^{(i-1)}$ is projected on the hyperplane represented by the $i$th equation to yield $\vec{f}^{(i)}$, the process can be mathematically described by
$$\vec{f}^{(i)} = \vec{f}^{(i-1)} - \frac{\vec{f}^{(i-1)} \cdot \vec{w}_i - p_i}{\vec{w}_i \cdot \vec{w}_i}\, \vec{w}_i \tag{4}$$
where $\vec{w}_i = (w_{i1}, w_{i2}, \cdots, w_{iN})$ and $\vec{w}_i \cdot \vec{w}_i$ is the dot product of $\vec{w}_i$ with itself. To see how (4) comes about, we first write the first equation of (2) as
Fig. 7.3: The hyperplane $\vec{w}_1 \cdot \vec{f} = p_1$ (represented by a line in this two-dimensional figure) is perpendicular to the vector $\vec{w}_1$. (From [Ros82].)
follows:
$$\vec{w}_1 \cdot \vec{f} = p_1. \tag{5}$$
The hyperplane represented by this equation is perpendicular to the vector $\vec{w}_1$. This is illustrated in Fig. 7.3, where the vector $\overrightarrow{OD}$ represents $\vec{w}_1$. This equation simply says that the projection of a vector $\overrightarrow{OC}$ (for any point $C$ on the hyperplane) on the vector $\vec{w}_1$ is of constant length. The unit vector $\vec{u}$ along $\vec{w}_1$ is given by

$$\vec{u} = \frac{\vec{w}_1}{\sqrt{\vec{w}_1 \cdot \vec{w}_1}} \tag{6}$$

and the perpendicular distance of the hyperplane from the origin, which is equal to the length of $\overrightarrow{OA}$ in Fig. 7.3, is given by $\overrightarrow{OC} \cdot \vec{u}$:

$$|\overrightarrow{OA}| = \frac{p_1}{\sqrt{\vec{w}_1 \cdot \vec{w}_1}}. \tag{7}$$

To obtain $\vec{f}^{(1)}$ we subtract from $\vec{f}^{(0)}$ the vector $\overrightarrow{HG}$:

$$\vec{f}^{(1)} = \vec{f}^{(0)} - \overrightarrow{HG} \tag{8}$$

where the length of the vector $\overrightarrow{HG}$ is given by

$$|\overrightarrow{HG}| = |\overrightarrow{OF}| - |\overrightarrow{OA}| = \vec{f}^{(0)} \cdot \vec{u} - |\overrightarrow{OA}|. \tag{9}$$

Substituting (6) and (7) in this equation, we get

$$|\overrightarrow{HG}| = \frac{\vec{f}^{(0)} \cdot \vec{w}_1 - p_1}{\sqrt{\vec{w}_1 \cdot \vec{w}_1}}. \tag{10}$$

Since the direction of $\overrightarrow{HG}$ is the same as that of the unit vector $\vec{u}$, we can write

$$\overrightarrow{HG} = |\overrightarrow{HG}|\, \vec{u} = \frac{\vec{f}^{(0)} \cdot \vec{w}_1 - p_1}{\vec{w}_1 \cdot \vec{w}_1}\, \vec{w}_1. \tag{11}$$
Substituting (11) in (8), we get (4). As mentioned before, the computational procedure for algebraic reconstruction consists of starting with an initial guess for the solution and taking successive projections on the hyperplanes represented by the equations in (2), eventually yielding $\vec{f}^{(M)}$. In the next iteration, $\vec{f}^{(M)}$ is projected on the hyperplane represented by the first equation in (2), and then successively onto the rest of the hyperplanes in (2), to yield $\vec{f}^{(2M)}$, and so on. Tanabe [Tan71] has shown that if there exists a unique solution $\vec{f}_s$ to the system of equations in (2), then

$$\lim_{k \to \infty} \vec{f}^{(kM)} = \vec{f}_s. \tag{12}$$
A few comments about the convergence of the algorithm are in order here. If in Fig. 7.2 the two hyperplanes are perpendicular to each other, the reader may easily show that, given any initial guess in the $(f_1, f_2)$-plane, it is possible to arrive at the correct solution in only two steps of (4). On the other hand, if the two hyperplanes have only a very small angle between them, $k$ in (12) may acquire a large value (depending upon the initial guess) before the correct solution is reached. Clearly the angles between the
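Both convergence regimes are easy to reproduce with a small sketch of the iteration in (4); the two systems below are hypothetical examples, not from the text:

```python
import numpy as np

def kaczmarz(W, p, n_sweeps, f0=None):
    """Project the estimate successively onto each hyperplane
    w_i . f = p_i, following eq. (4); one sweep uses all M equations."""
    f = np.zeros(W.shape[1]) if f0 is None else np.asarray(f0, float).copy()
    for _ in range(n_sweeps):
        for w, pi in zip(W, p):
            f = f - ((f @ w - pi) / (w @ w)) * w
    return f

# Orthogonal hyperplanes: the exact answer after a single sweep.
f_orth = kaczmarz(np.eye(2), np.array([3.0, 4.0]), n_sweeps=1)

# Oblique hyperplanes (45 degrees apart): many sweeps are needed
# before the iterates settle on the solution (1, 2).
W = np.array([[3.0, 1.0], [1.0, 2.0]])
f_gen = kaczmarz(W, W @ np.array([1.0, 2.0]), n_sweeps=100)
```

The closer to parallel the hyperplanes are, the smaller the progress made by each pair of projections, which is precisely the rate-of-convergence point made above.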
hyperplanes considerably influence the rate of convergence to the solution. If the $M$ hyperplanes in (2) could be made orthogonal with respect to one another, the correct solution would be arrived at with only one pass through the $M$ equations (assuming a unique solution does exist). Although theoretically such orthogonalization is possible using, for example, the Gram-Schmidt procedure, in practice it is computationally not feasible. Full orthogonalization will also tend to enhance the effects of the ever present measurement noise in the final solution. Ramakrishnan et al. [Ram79] have suggested a pairwise orthogonalization scheme which is computationally easier to implement and at the same time considerably increases the speed of convergence. A simpler technique, first proposed in [Hou72] and studied in [Sla85], is to carefully choose the order in which the hyperplanes are considered. Since each hyperplane represents a distinct ray integral, it is quite likely that adjacent ray integrals (and thus hyperplanes) will be nearly parallel. By choosing hyperplanes representing widely separated ray integrals, it is possible to improve the rate of convergence of the Kaczmarz approach. A not uncommon situation in image reconstruction is that of an overdetermined system in the presence of measurement noise. That is, we may have $M > N$ in (2) and $p_1, p_2, \cdots, p_M$ corrupted by noise. No unique solution exists in this case. In Fig. 7.4 we have shown a two-variable system represented by three noisy hyperplanes. The broken line represents the course of the solution as we successively implement (4). The solution now doesn't converge to a unique point but oscillates in the neighborhood of the intersections of the hyperplanes. When $M < N$ a unique solution of the set of linear equations in (2) doesn't exist; in fact, an infinite number of solutions are possible.
For example, suppose we have only the first of the two equations in (3) to use for calculating the two unknowns $f_1$ and $f_2$; then the solution can be anywhere on the line corresponding to this equation. Given the initial guess $\vec{f}^{(0)}$ (see Fig. 7.3), the best one could probably do under the circumstances would be to draw a projection from $\vec{f}^{(0)}$ on this line and call the resulting $\vec{f}^{(1)}$ a solution. Note that the solution obtained in this manner corresponds to that point on the line which is closest to the initial guess. This result has been rigorously proved by Tanabe [Tan71], who has shown that when $M < N$, the iterative approach described above converges to a solution, call it $\vec{f}_s'$, such that $|\vec{f}^{(0)} - \vec{f}_s'|$ is minimized. Besides its computational efficiency, another attractive feature of the iterative approach presented here is that it is possible to incorporate into the solution some types of a priori information about the image one is reconstructing. For example, if it is known a priori that the image $f(x, y)$ is nonnegative, then in each of the solutions $\vec{f}^{(k)}$, successively obtained by using (4), one may set the negative components equal to zero. One may similarly incorporate the information that $f(x, y)$ is zero outside a certain area, if this is known.
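A short sketch, using a hypothetical one-equation system (not from the text), illustrates both points: the minimum-distance property when $M < N$, and the clipping used to enforce nonnegativity:

```python
import numpy as np

def kaczmarz_sweep(f, W, p, nonneg=False):
    """One pass of the update of eq. (4) over all hyperplanes; optionally
    enforce the a priori constraint f >= 0 by zeroing negative components."""
    for w, pi in zip(W, p):
        f = f - ((f @ w - pi) / (w @ w)) * w
        if nonneg:
            f = np.maximum(f, 0.0)
    return f

# One equation in two unknowns (M < N): f1 + f2 = 2.
W, p = np.array([[1.0, 1.0]]), np.array([2.0])

# From the initial guess (3, 0) a single projection lands on the point
# of the line closest to that guess, namely (2.5, -0.5).
f_min = kaczmarz_sweep(np.array([3.0, 0.0]), W, p)

# With the nonnegativity clip the iterates instead approach (2, 0),
# the closest point of the line that also satisfies f >= 0.
f_pos = np.array([3.0, 0.0])
for _ in range(60):
    f_pos = kaczmarz_sweep(f_pos, W, p, nonneg=True)
```

The clipping variant is a sketch of the constraint idea only; convergence behavior with constraints depends on the system and is not covered by the Tanabe result quoted above.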
Fig. 7.4: Illustrated here is the case when the number of equations is greater than the number of unknowns. The lines don't intersect at a single unique point, because the observations $p_1$, $p_2$, $p_3$ have been assumed to be corrupted by noise. No unique solution exists in this case, and the final solution will oscillate in the neighborhood of the intersections of the three lines. (From [Ros82].)
In applications requiring a large number of views and where large-sized reconstructions are made, the difficulty with using (4) can be in the calculation, storage, and fast retrieval of the weight coefficients $w_{ij}$. Consider the case where we wish to reconstruct an image on a 100 × 100 grid from 100 projections with 150 rays in each projection. The total number of weights $w_{ij}$ needed in this case is $10^8$, which is an enormous number and can pose problems in fast storage and retrieval in applications where reconstruction speed is important. This problem is somewhat eased by making approximations, such as considering $w_{ij}$ to be only a function of the perpendicular distance between the center of the $i$th ray and the center of the $j$th cell. This perpendicular distance can then be computed at run time. To get around the implementation difficulties caused by the weight coefficients, a myriad of other algebraic approaches have also been suggested, many of which are approximations to (4). To discuss some of the more implementable approximations, we first recast (4) in a slightly different
form:
$$f_j^{(i)} = f_j^{(i-1)} + \frac{p_i - q_i}{\sum_{k=1}^{N} w_{ik}^2}\, w_{ij}. \tag{13}$$
These equations say that when we project the $(i-1)$th solution onto the $i$th hyperplane [the $i$th equation in (2)], the gray level of the $j$th element, whose current value is $f_j^{(i-1)}$, is obtained by correcting this current value by $\Delta f_j^{(i)}$, where

$$\Delta f_j^{(i)} = f_j^{(i)} - f_j^{(i-1)} = \frac{p_i - q_i}{\sum_{k=1}^{N} w_{ik}^2}\, w_{ij} \tag{14}$$

and

$$q_i = \vec{f}^{(i-1)} \cdot \vec{w}_i = \sum_{k=1}^{N} f_k^{(i-1)} w_{ik}. \tag{15}$$

Note that while $p_i$ is the measured ray-sum along the $i$th ray, $q_i$ may be considered to be the computed ray-sum for the same ray based on the $(i-1)$th solution for the image gray levels. The correction $\Delta f_j$ to the $j$th cell is obtained by first calculating the difference between the measured ray-sum and the computed ray-sum, normalizing this difference by $\sum_{k=1}^{N} w_{ik}^2$, and then assigning this value to all the image cells in the $i$th ray, each assignment being weighted by the corresponding $w_{ij}$. With the preliminaries presented above, we will now discuss three different computer implementations of algebraic algorithms. These are represented by the acronyms ART, SIRT, and SART.
With a binary approximation for the weights (each $w_{ik}$ replaced by 1 or 0 according to whether or not the center of the $k$th cell is within the $i$th ray), the denominator $\sum_{k=1}^{N} w_{ik}^2$ becomes simply $N_i$, the number of cells in the $i$th ray, and the correction term in (13) reduces to

$$\Delta f_j^{(i)} = \frac{p_i - q_i}{N_i} \tag{17}$$
for all the cells whose centers are within the $i$th ray. We are essentially smearing back the difference $(p_i - q_i)/N_i$ over these image cells. In (17), the $q_i$'s are calculated using the expression in (15), except that one now uses the binary approximation for the $w_{ik}$'s. The approximation in (17), although easy to implement, often leads to artifacts in the reconstructed images, especially if $N_i$ isn't a good approximation to the denominator. Superior reconstructions may be obtained if (17) is replaced by
$$\Delta f_j^{(i)} = \frac{p_i}{L_i} - \frac{q_i}{N_i} \tag{18}$$
where $L_i$ is the length (normalized by $\delta$, see Fig. 7.1) of the $i$th ray through the reconstruction region. ART reconstructions usually suffer from salt and pepper noise, which is caused by the inconsistencies introduced in the set of equations by the approximations commonly used for the $w_{ik}$'s. The result is that the computed ray-sums in (15) are usually poor approximations to the corresponding measured ray-sums. The effect of such inconsistencies is exacerbated by the fact that as each equation corresponding to a ray in a projection is taken up, it changes some of the pixels just altered by the preceding equation in the same projection. The SIRT algorithm described briefly below also suffers from these inconsistencies in the forward process [appearing in the computation of the $q_i$'s in (15)], but by eliminating the continual and competing pixel updates as each new equation is taken up, it results in smoother reconstructions. It is possible to reduce the effects of this noise in ART reconstructions by relaxation, in which we update a pixel by $\alpha \cdot \Delta f_j^{(i)}$, where $\alpha$ is less than 1. In some cases, the relaxation parameter $\alpha$ is made a function of the iteration number; that is, it becomes progressively smaller with increase in the number of iterations. The resulting improvements in the quality of reconstruction are usually at the expense of convergence.
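One relaxed ART sweep with the binary weight approximation of (17) might be sketched as follows; the ray/cell index lists and the value of the relaxation parameter are illustrative:

```python
import numpy as np

def relaxed_art_sweep(f, rays, p, alpha=0.5):
    """One ART sweep with the binary weight approximation of eq. (17):
    for each ray, the difference between the measured and computed
    ray-sums is smeared back over the N_i cells whose centers lie in
    the ray, scaled by the relaxation parameter alpha < 1."""
    for cells, pi in zip(rays, p):
        q = f[cells].sum()                    # computed ray-sum (binary w)
        f[cells] += alpha * (pi - q) / len(cells)
    return f

# Hypothetical 2 x 2 image flattened to (f1, f2, f3, f4); the four rays
# are the two rows and the two columns of the grid.
rays = [np.array(r) for r in ([0, 1], [2, 3], [0, 2], [1, 3])]
p = [3.0, 7.0, 4.0, 6.0]                      # ray-sums of (1, 2, 3, 4)

f = np.zeros(4)
for _ in range(200):
    f = relaxed_art_sweep(f, rays, p, alpha=0.5)
```

On consistent data, as here, relaxation only slows convergence; its benefit shows up with the inconsistent ray-sums described above, where the damped updates keep the pixel values from being whipped back and forth by competing equations.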
Fig. 7.5: (a) The Shepp and Logan head phantom with a subdural hematoma. (b) The gray level distribution of the Shepp and Logan phantom. (From [Kak84].)
7.4.1 Modeling the Forward Projection Process

In (1), projection data were modeled by

$$p_i = \sum_{j=1}^{N} w_{ij} f_j, \qquad i = 1, 2, \cdots, M. \tag{19}$$
This is a good model for the projection process if for the $w_{ij}$'s we use the theoretically dictated values, which, as mentioned before, is hard to do for various reasons. To seek alternative methods for modeling the projection process, the relationship between a continuous image and the discrete projection data can be expressed in the following general form:
$$p_i = \mathcal{R}_i f = \iint f(x, y)\, \delta(r_i(x, y))\, dx\, dy \tag{20}$$

where

$$r_i(x, y) = 0 \tag{21}$$
is the equation of the $i$th ray and $\mathcal{R}_i$ is the projection operator along the ray. The integral on the right-hand side serves as the definition of the projection operator. Now suppose that in an expansion for the image $f(x, y)$ we use basis functions $b_j(x, y)$ and that a good approximation to $f(x, y)$ is obtained by using $N$ of them. This assumption can be represented mathematically by

$$f(x, y) \approx \hat{f}(x, y) = \sum_{j=1}^{N} g_j b_j(x, y) \tag{22}$$
where the $g_j$'s are the coefficients of the expansion; they form a finite set of numbers which describe the image $f(x, y)$ relative to the chosen basis set $b_j(x, y)$. Substituting (22) in (20), we can write for the forward process

$$p_i = \sum_{j=1}^{N} g_j\, \mathcal{R}_i b_j(x, y) = \sum_{j=1}^{N} g_j a_{ij} \tag{23}$$
where $a_{ij}$ represents the line integral of $b_j(x, y)$ along the $i$th ray. This equation has the same basic form as (1), yet it is more general in the sense that the $g_j$'s aren't constrained to be image gray level values over an array of points. Of course, the form here reduces to (1) if for the $b_j$'s we use the following pixel basis, obtained by dividing the image frame into $N$ identical subsquares; these are referred to as pixels and identified by the index $j$ for $1 \le j \le N$:
$$b_j(x, y) = \begin{cases} 1 & \text{inside the } j\text{th pixel} \\ 0 & \text{elsewhere.} \end{cases} \tag{24}$$
Fig. 7.6: The ray-sum of (25) is evaluated for a set of equidistant points along a straight line cut by the circular reconstruction region. (From [Kak84].)
In keeping with the nature of the $f_j$'s in (1), the $g_j$'s with these basis functions represent the average of $f(x, y)$ over the $j$th pixel, and $\mathcal{R}_i b_j(x, y)$ represents the length of the intersection of the $i$th ray with the $j$th pixel. Although (20) implies rays of zero width, if we now associate a finite width with each ray, the elements of the projection matrix will represent the areas of intersection of these ray strips with the pixels. In SART, superior reconstructions are obtained by using a model of the forward projection process that is more accurate than what can be obtained by the choice of pixel basis functions. This is done by using bilinear elements, which are the simplest higher order basis functions. The basis functions obtained from bilinear elements are pyramid shaped, each with a support extending over a square region the size of four pixels. It can be shown that the $g_j$'s appearing in (22) for the case of bilinear elements are the sample values of the image function $f(x, y)$ on a square lattice. It can further be shown that whereas the pixel basis leads to a discontinuous image representation, the bilinear elements allow a continuous form of $\hat{f}(x, y)$ to be regenerated for computation. However, finding the exact ray integrals across such bilinear elements [as called for by $\mathcal{R}_i b_j(x, y)$ in (23)] for a large number of rays is a time-consuming task, and we will use an approximation. Rather than try to find separately the individual coefficients $a_{ij}$ for a particular ray, we approximate the overall ray integral $\mathcal{R}_i \hat{f}(x, y)$ by a finite sum involving a set of $M_i$ equidistant points $\{\hat{f}(s_{im})\}$, for $1 \le m \le M_i$ [Lyt80] (see Fig. 7.6):
$$p_i \approx \sum_{m=1}^{M_i} \hat{f}(s_{im})\, \Delta s. \tag{25}$$
The value $\hat{f}(s_{im})$ is determined from the values $g_j$ of $f(x, y)$ on the four neighboring points of the sampling lattice, i.e., by bilinear interpolation. We write

$$\hat{f}(s_{im}) = \sum_{j=1}^{N} d_{ijm} g_j. \tag{26}$$
The coefficient $d_{ijm}$ is therefore the contribution that is made by the $j$th image sample to the $m$th point on the $i$th ray. Combining (25) and (26), we obtain an approximation to the ray integral $p_i$ as a linear function of the image samples $g_j$:

$$p_i \approx \sum_{m=1}^{M_i} \sum_{j=1}^{N} d_{ijm} g_j\, \Delta s \tag{27}$$

$$= \sum_{j=1}^{N} \left( \sum_{m=1}^{M_i} d_{ijm}\, \Delta s \right) g_j \tag{28}$$

$$= \sum_{j=1}^{N} a_{ij} g_j, \qquad \text{for } 1 \le i \le I \tag{29}$$
where the coefficients $a_{ij}$ represent the net effect of the linear transformations. They are determined as the sum of the contributions from different points along the ray:

$$a_{ij} = \sum_{m=1}^{M_i} d_{ijm}\, \Delta s. \tag{30}$$

Therefore, $a_{ij}$ is proportional to the sum of the contributions made by the $j$th image sample to all the points on the $i$th ray. It is important to the overall accuracy of the model that for $m = 1$ and for $m = M_i$, i.e., for the first and last points of the ray within the reconstruction circle, the weights are adjusted so that $\sum_{j=1}^{N} a_{ij}$ equals the actual physical length $L_i$. One certainly has latitude in selecting the step size $\Delta s$; setting it equal to half the spacing of the sampling lattice provides a good trade-off between the accuracy of representation and computational cost.

7.4.2 Implementation of the Reconstruction Algorithm
As mentioned before, the results of the SART implementation will be shown on 128 × 128 matrices using 100 projections, each with 127 rays. In the model of (29), this corresponds to $N = 16{,}384$ picture elements and an overall number of rays $I = 12{,}700$. Note that the system of equations is underdetermined by about 25%, but then the reconstruction circle covers only about 75% of the area of the square sampling lattice. With the $a_{ij}$'s determined by the method just described, the reader will now be taken through a series of steps that are part of the SART implementation.
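As a first ingredient of such an implementation, the forward projection of (25) with bilinear interpolation between lattice samples might be sketched as follows; the image, ray geometry, and step size are illustrative assumptions:

```python
import numpy as np

def ray_sum(g, start, direction, length, ds):
    """Approximate the ray integral of eq. (25): midpoint samples at
    spacing ds along the ray, each evaluated from the four neighboring
    lattice values g[i, j] by bilinear interpolation, as in eq. (26)."""
    direction = np.asarray(direction) / np.hypot(*direction)  # unit vector
    total = 0.0
    for m in range(int(round(length / ds))):
        x, y = np.asarray(start) + (m + 0.5) * ds * direction
        j0, i0 = int(np.floor(x)), int(np.floor(y))
        dx, dy = x - j0, y - i0
        # the four bilinear weights are the d_ijm coefficients of eq. (26)
        total += ((1 - dy) * ((1 - dx) * g[i0, j0] + dx * g[i0, j0 + 1])
                  + dy * ((1 - dx) * g[i0 + 1, j0] + dx * g[i0 + 1, j0 + 1])) * ds
    return total

# On the lattice g[i, j] = j the interpolated image is f(x, y) = x, so a
# horizontal ray from x = 0.5 to x = 2.5 should integrate x over that
# interval, giving 3.0.
g = np.tile(np.arange(4.0), (4, 1))
p_i = ray_sum(g, start=(0.5, 1.2), direction=(1.0, 0.0), length=2.0, ds=0.5)
```

For this linear test image the midpoint sampling reproduces the integral exactly; for general images the accuracy is governed by the step size $\Delta s$, which is why the half-lattice spacing recommended above is a sensible default.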
COMPUTERIZED TOMOGRAPHIC IMAGING
First, it will be shown that even with the superior forward projection modeling by the use of bilinear elements, one doesn't want to carry out a sequential implementation of the reconstruction algorithm. A sequential implementation can be carried out by using the update formula of (4), reexpressed here in terms of SART symbols:

$$\vec{g}^{(k+1)} = \vec{g}^{(k)} + \frac{p_i - \vec{a}_i \cdot \vec{g}^{(k)}}{\vec{a}_i \cdot \vec{a}_i}\,\vec{a}_i \qquad (31)$$

where $\vec{a}_i$ denotes the $i$th row vector of the array $a_{ij}$. As described before, the estimate $\vec{g}^{(k)}$ of the image vector is updated after each ray has been considered. We set the initial estimate $\vec{g}^{(0)}$ to zero, and we say that one iteration of the algebraic reconstruction technique is completed when all $I$ rays, i.e., all $I$ ray-sum equations, have been used exactly once. Owing to reasons discussed in Section 7.1, for sequential processing the projection data are ordered in such a manner that the angle between successively considered projections is kept large; for the reconstructions shown here that were obtained with sequential updating, this angle was 73.8°. Fig. 7.7(a) illustrates the reconstruction of the test image for one iteration of the sequential implementation. In order to avoid streak artifacts in the final image, the correction terms for the first few projections are de-emphasized relative to those for projections considered later on. The image has been thresholded to the gray level range 0.95-1.05 to illustrate the finer detail. Note that even the larger structures are buried in the salt and pepper noise present when no form of relaxation or smoothing is used. Fig. 7.7(b) shows a line plot through the three small tumors of the phantom (the profile shown is along the line y = -0.605). We observe that the amplitude variations of the noise largely exceed the density differences characterizing these structures.
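In code, one sequential sweep of the update (31) might look like this; a minimal sketch assuming the weights are held as a dense matrix A (row i holding $a_{ij}$), which is only practical for small problems, and with an optional relaxation parameter that is not part of (31):

```python
import numpy as np

def art_sweep(A, p, g, relax=1.0):
    """One sequential ART sweep per (31): for each ray i in turn, correct
    the current estimate g by the scaled residual of ray-sum equation i.
    The relaxation factor is a hypothetical extra, not part of (31)."""
    A = np.asarray(A, dtype=float)
    for i in range(A.shape[0]):
        a_i = A[i]
        denom = a_i @ a_i               # a_i . a_i
        if denom > 0.0:
            g = g + relax * (p[i] - a_i @ g) / denom * a_i
    return g
```

On a trivially consistent system a single sweep already satisfies each equation it visits.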
Fig. 7.7: Reconstruction from one iteration of sequential ART. (a) Image. (b) Line plot through the three small tumors (for y = -0.605). (From [And84a].)
It will now be shown that superior results are obtained if instead of sequentially updating pixels on a ray-by-ray basis we simultaneously apply to a pixel the average of the corrections generated by all the rays in a projection. Stated in a bit more detail, this is what we want to do: For the first ray in a projection we compute as before the corrections to be made at every pixel. Instead of actually applying these corrections, we store them in a separate array to be called the correction array (the size of which is the same as that of the image array). Then we take up the next ray and add the pixel updates generated by this ray to the correction array. And then the next ray, and so on. After we are through all the rays in a projection, we add the correction array (or some fraction thereof) to the image array. This entire process is repeated with every projection. Fig. 7.8(a) illustrates the reconstruction obtained with this method. The precise formula that was used in the reconstruction in Fig. 7.8 for updating the pixel values can be stated as follows:
$$g_j^{(k+1)} = g_j^{(k)} + \frac{\displaystyle\sum_i \left[ a_{ij}\, \frac{p_i - \sum_{j'} a_{ij'}\, g_{j'}^{(k)}}{\sum_{j'} a_{ij'}} \right]}{\displaystyle\sum_i a_{ij}} \qquad (32)$$

Fig. 7.8: Reconstruction from one iteration of SART. (a) Image. (b) Line plot through the three small tumors (for y = -0.605). (From [And84a].)
where the summation with respect to i is over the rays intersecting the jth image element for a given scan direction. Compared to the reconstruction of Fig. 7.7 for the sequential scheme, the simultaneous method offers a reduction in the amplitude of the noise. In addition, the noise in the reconstructed image has become more slowly
Fig. 7.9: The longitudinal Hamming window for a set of straight rays. (From [And84a].)
undulating compared to the previous salt and pepper appearance. This technique maintains the rapid convergence of ART-type algorithms while at the same time it has the noise suppressing features of SIRT. As with SIRT, the simultaneous implementation does require the storage of an additional array for the correction terms.

The last step, heuristic in nature, in SART consists of modifying the back-distribution of correction terms by a longitudinal Hamming window. The idea of the window is illustrated in Fig. 7.9. The uniform back-distribution according to the coefficients $a_{ij}$ is replaced by a weighted version. This corresponds to replacing the correction term

$$\Delta g_j^{(i)} = a_{ij}\, \frac{p_i - \sum_{j'} a_{ij'}\, g_{j'}^{(k)}}{\sum_{j'} a_{ij'}} \qquad (33)$$

by its weighted counterpart

$$\Delta g_j^{(i)} = t_{ij}\, \frac{p_i - \sum_{j'} a_{ij'}\, g_{j'}^{(k)}}{\sum_{j'} t_{ij'}} \qquad (34)$$

where the weighting coefficients $t_{ij}$ are given by [compare with (30)]

$$t_{ij} = \sum_{m=1}^{M_i} h_{im}\, d_{ijm}\, \Delta s. \qquad (35)$$
The sequence $h_{im}$, for $1 \le m \le M_i$, is a Hamming window of length $M_i$. Note that the length of the window varies according to the number of points $M_i$ describing the part of the ray inside the reconstruction circle. The weighted back-distribution of corrections emphasizes the central portions of rays in relation to portions closer to the periphery. Fig. 7.10 illustrates a reconstruction of the test image after one iteration with the longitudinal window in conjunction with the simultaneous scheme previously described. We see an improvement over the reconstructions of Figs. 7.7 and 7.8: the noise is practically gone and all the structures can be fairly well distinguished. If we hadn't applied the corrections in a simultaneous scheme but had incorporated the longitudinal Hamming window only in the sequential implementation, we would have arrived at the noisy reconstruction illustrated in Fig. 7.11. An important question that remains to be answered is: What happens when we go through iterations with, say, the simultaneous implementation? That is, after we have made a reconstruction by going through all the projections once, we go through them all once again using the reconstruction of Fig. 7.10 as our initial solution, and then continue iterating in like fashion.
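The simultaneous update of (32), with an optional weighted back-distribution in the spirit of (34), can be sketched as follows; the dense-matrix representation and the guarding of zero row/column sums are implementation choices made here for brevity, not part of the original description:

```python
import numpy as np

def sart_update(A, p, g, T=None, relax=1.0):
    """One SART projection update per (32): the corrections from all rays
    of a projection (the rows of A) are first accumulated in a correction
    array and only then applied to the image.  If T holds windowed
    coefficients such as the t_ij of (35), the back-distribution is
    weighted in the spirit of (34); otherwise T = A and (32) is used."""
    A = np.asarray(A, dtype=float)
    T = A if T is None else np.asarray(T, dtype=float)
    row_sums = A.sum(axis=1)                       # sum_j a_ij per ray
    residuals = (p - A @ g) / np.where(row_sums > 0, row_sums, 1.0)
    correction = T.T @ residuals                   # the correction array
    col_sums = T.sum(axis=0)                       # sum_i t_ij per pixel
    return g + relax * correction / np.where(col_sums > 0, col_sums, 1.0)
```

Iterating this update on a small consistent system drives the estimate to the true solution.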
Fig. 7.10: Reconstruction from one iteration of SART with a longitudinal Hamming window. (a) Image. (b) Line plot through the three small tumors (for y = -0.605). (From [And84a].)
In Figs. 7.12 and 7.13, we have shown the reconstructions obtained with two and three iterations, respectively. As is evident from the reconstructions, we do gain more contrast, although at the cost of increased salt and pepper noise. All reconstructions shown represent the raw output from the algorithms with no postprocessing applied to suppress noise. For the purpose of comparison, we have included in Fig. 7.14 the reconstruction obtained by using a convolution-backprojection algorithm. Comparing this with Fig. 7.10, we see that the SART reconstruction with one iteration is quite similar, although with further iterations, as displayed in Figs. 7.12 and 7.13, we see an increased amplitude of the salt and pepper noise, which is probably an indication of remaining inconsistencies in the model used for the forward projection process.
Fig. 7.11: Reconstruction from one iteration of sequential ART with a longitudinal Hamming window. (a) Image. (b) Line plot through the three small tumors (for y = -0.605). (From [And84a].)

Fig. 7.12: Reconstruction from two iterations of SART with a longitudinal Hamming window. (a) Image. (b) Line plot through the three small tumors (for y = -0.605). (From [And84a].)
intersected by several rays in each projection. This results in the averaging of possible errors committed in the correction procedure such as the one given by (4). Common practice is to have a system with about four times as many equations as unknown pixel values [Her80], [Her78], [She74]. The computational cost, however, increases directly with the number of rays processed. An additional method has been to use a relaxation factor $\lambda < 1$ [Gor74], [Her80], [Her76], [Her78], [Hou72], [Swe73] which, although reducing the salt and pepper noise, increases the number of iterations required for convergence. The SART algorithm was first reported in [And84a]. In contrast with the bilinear elements used for SART, the pixel basis is common to much
Fig. 7.13: Reconstruction from three iterations of SART with a longitudinal Hamming window. (a) Image. (b) Line plot through the three small tumors (for y = -0.605). (From [And84a].)

Fig. 7.14: Convolution-backprojection reconstruction of the test image. (a) Image. (b) Line plot through the three small tumors (for y = -0.605). (From [And84a].)
literature published on algebraic techniques [Din79], [Gil72], [Gor74], [Gor70], [Her80], [Her76], [Her78], [Her73], [Hou72], [Opp75], [She74]. The error-correcting procedure of the basic ART algorithm as given by (4) is discussed in [Gor74], [Gor70], [Her80], [Her76], [Her78], [Her73], [Hou72]. As first shown by Hounsfield [Hou72], in order to improve the convergence of a sequential algebraic algorithm one should order the projections in such a manner that successive projections are well separated. This he justified on the basis of high correlation between the information in neighboring projections. Later the scheme was demonstrated to have a deeper
mathematical foundation as a tool for speeding up the convergence of ART-type algorithms. (The proof relies on a continuous formulation of ART, as shown by Hamaker and Solmon [Ham78].) Ramakrishnan et al. [Ram79] have shown how by orthogonalization of the algebraic equations we can increase the speed of convergence of a reconstruction algorithm. The SIRT algorithm was first proposed by Gilbert [Gil72]. A simplified form of the simultaneous technique was used by Oppenheim in [Opp75]. However, the scope of the implementation as described by (32) is much wider. The method can be used advantageously in the general image reconstruction problem for curved rays with overlapping and nonoverlapping ray strips as well as in conjunction with any image representation, provided the forward process can be expressed in the form of (23). A combination of algebraic reconstruction and digital ray tracing appears ideal for imaging lightly refracting objects [Cha79], [Cha81]. A survey of digital ray tracing and ray linking for this purpose is presented in [And82]. If a refracting object has special symmetries, then as shown by Vest [Ves75] it may be possible to reconstruct the object without ray tracing. The reader is referred to [And84b] for experimental demonstrations of how algebraic reconstruction can be combined with digital ray tracing for the cross-sectional imaging of lightly refracting objects.
7.6 References
[And82] A. H. Andersen and A. C. Kak, "Digital ray tracing in two-dimensional refractive fields," J. Acoust. Soc. Amer., vol. 72, pp. 1593-1606, Nov. 1982.
[And84a] A. H. Andersen and A. C. Kak, "Simultaneous algebraic reconstruction technique (SART): A superior implementation of the ART algorithm," Ultrason. Imaging, vol. 6, pp. 81-94, Jan. 1984.
[And84b] A. H. Andersen and A. C. Kak, "The application of ray tracing towards a correction for refracting effects in computed tomography with diffracting sources," TR-EE 84-14, School of Electrical Engineering, Purdue Univ., Lafayette, IN, 1984.
[Bud74] T. F. Budinger and G. T. Gullberg, "Three-dimensional reconstruction in nuclear medicine emission imaging," IEEE Trans. Nucl. Sci., vol. NS-21, pp. 2-21, 1974.
[Cha79] S. Cha and C. M. Vest, "Interferometry and reconstruction of strongly refracting asymmetric-refractive-index fields," Opt. Lett., vol. 4, pp. 311-313, 1979.
[Cha81] S. Cha and C. M. Vest, "Tomographic reconstruction of strongly refracting fields and its application to interferometric measurements of boundary layers," Appl. Opt., vol. 20, pp. 2787-2794, 1981.
[Din79] K. A. Dines and R. J. Lytle, "Computerized geophysical tomography," Proc. IEEE, vol. 67, pp. 1065-1073, 1979.
[Gil72] P. Gilbert, "Iterative methods for the reconstruction of three dimensional objects from their projections," J. Theor. Biol., vol. 36, pp. 105-117, 1972.
[Gor70] R. Gordon, R. Bender, and G. T. Herman, "Algebraic reconstruction techniques (ART) for three dimensional electron microscopy and X-ray photography," J. Theor. Biol., vol. 29, pp. 471-481, 1970.
[Gor71] R. Gordon and G. T. Herman, "Reconstruction of pictures from their projections," Commun. Assoc. Comput. Mach., vol. 14, pp. 759-768, 1971.
[Gor74] R. Gordon, "A tutorial on ART (algebraic reconstruction techniques)," IEEE Trans. Nucl. Sci., vol. NS-21, pp. 78-93, 1974.
[Ham78] C. Hamaker and D. C. Solmon, "The angles between the null spaces of X rays," J. Math. Anal. Appl., vol. 62, pp. 1-23, 1978.
[Her71] G. T. Herman and S. Rowland, "Resolution in ART: An experimental investigation of the resolving power of an algebraic picture reconstruction," J. Theor. Biol., vol. 33, pp. 213-233, 1971.
[Her73] G. T. Herman, A. Lent, and S. Rowland, "ART: Mathematics and applications: A report on the mathematical foundations and on applicability to real data of the algebraic reconstruction techniques," J. Theor. Biol., vol. 43, pp. 1-32, 1973.
[Her76] G. T. Herman and A. Lent, "Iterative reconstruction algorithms," Comput. Biol. Med., vol. 6, pp. 273-294, 1976.
[Her77] G. T. Herman and A. Naparstek, "Fast image reconstruction based on a Radon inversion formula appropriate for rapidly collected data," SIAM J. Appl. Math., vol. 33, pp. 511-533, Nov. 1977.
[Her78] G. T. Herman, A. Lent, and P. H. Lutz, "Relaxation methods for image reconstruction," Commun. A.C.M., vol. 21, pp. 152-158, 1978.
[Her80] G. T. Herman, Image Reconstructions from Projections. New York, NY: Academic Press, 1980.
[Hou72] G. N. Hounsfield, "A method of and apparatus for examination of a body by radiation such as x-ray or gamma radiation," Patent Specification 1283915, The Patent Office, 1972.
[Kac37] S. Kaczmarz, "Angenaherte auflosung von systemen linearer gleichungen," Bull. Acad. Pol. Sci. Lett. A, vol. 6-8A, pp. 355-357, 1937.
[Kak84] A. C. Kak, "Image reconstructions from projections," in Digital Image Processing Techniques, M. P. Ekstrom, Ed. New York, NY: Academic Press, 1984.
[Lyt80] R. J. Lytle and K. A. Dines, "Iterative ray tracing between boreholes for underground image reconstruction," IEEE Trans. Geosciences and Remote Sensing, vol. GE-18, pp. 234-240, 1980.
[Opp75] B. E. Oppenheim, "Reconstruction tomography from incomplete projections," in Reconstruction Tomography in Diagnostic Radiology and Nuclear Medicine, M. M. Ter Pogossian et al., Eds. Baltimore, MD: University Park Press, 1975.
[Ram79] R. S. Ramakrishnan, S. K. Mullick, R. K. S. Rathore, and R. Subramanian, "Orthogonalization, Bernstein polynomials, and image restoration," Appl. Opt., vol. 18, pp. 464-468, 1979.
[Ros82] A. Rosenfeld and A. C. Kak, Digital Picture Processing, 2nd ed. New York, NY: Academic Press, 1982.
[She74] L. A. Shepp and B. F. Logan, "The Fourier reconstruction of a head section," IEEE Trans. Nucl. Sci., vol. NS-21, pp. 21-43, 1974.
[Sla85] M. Slaney and A. C. Kak, "Imaging with diffraction tomography," TR-EE 85-5, School of Electrical Engineering, Purdue Univ., Lafayette, IN, 1985.
[Smi77] K. T. Smith, D. C. Solmon, and S. L. Wagner, "Practical and mathematical aspects of the problem of reconstructing objects from radiographs," Bull. Amer. Math. Soc., vol. 83, pp. 1227-1270, 1977.
[Swe73] D. W. Sweeney and C. M. Vest, "Reconstruction of three-dimensional refractive index fields from multi-directional interferometric data," Appl. Opt., vol. 12, pp. 1649-1664, 1973.
[Tan71] K. Tanabe, "Projection method for solving a singular system," Numer. Math., vol. 17, pp. 203-214, 1971.
[Ves75] C. M. Vest, "Interferometry of strongly refracting axisymmetric phase objects," Appl. Opt., vol. 14, pp. 1601-1606, 1975.

ALGEBRAIC RECONSTRUCTION ALGORITHMS
296
COMPUTERIZED
TOMOGRAPHIC
IMAGING
8 Reflection Tomography
8.1 Introduction
The tomographic images up to this point have generally been formed by illuminating an object with some form of energy (x-rays, microwaves, or ultrasound) and measuring the energy that passes through the object to the other side. In the case of straight ray propagation, the measurement can be of either the amplitude or the time of arrival of the received signal; an estimate is then formed of a line integral of the object's attenuation coefficient or refractive index. Even when the energy doesn't travel in a straight line it is often possible to use either algebraic techniques or diffraction tomography to form an image. Transmission tomography is sometimes not possible because of physical constraints. For example, when ultrasound is used for cardiovascular imaging, the transmitted signal is almost immeasurable because of large impedance discontinuities at tissue-bone and air-tissue interfaces and other attenuation losses. For this reason most medical ultrasonic imaging is done using reflected signals. In the most straightforward approach to reflection imaging with ultrasound, the echoes are recorded as in radar; in medical areas this approach goes by the name of B-scan imaging. The basic aim of reflection tomography is to construct a quantitative cross-sectional image from reflection data. One nice aspect of this form of imaging, especially in comparison with transmission tomography, is that it is not necessary to encircle the object with transmitters and receivers for gathering the projection data; transmission and reception are now done from the same side. The same is of course true of B-scan imaging, where a small beam of ultrasonic energy illuminates the object and an image is formed by displaying the reflected signal as a function of time and direction of the beam. While in transmission tomography it is possible to use both narrow band and broadband signals, in reflection tomography only the latter type is acceptable.
As will become evident by the discussion in this chapter, with short pulses (broadband signals) it is possible to form line integrals of some object parameter over lines of constant propagation delays. Since researchers in reflection tomography are frequently asked to compare B-scan imaging with reflection tomography, in this chapter we will first give a very brief introduction to B-scan imaging, taking great liberties
with conceptual detail; for a rigorous treatment of the subject, the reader is referred to [Fat80]. We will then illustrate how reflection tomography can be carried out with plane wave transducers and some of the fundamental limitations of this type of imaging. Our discussion of reflection tomography with plane wave transducers will include a demonstration of the relationship that exists between reflection tomography and the diffraction tomography formalism presented in Chapter 6. Finally, we will describe how reflection tomographic imaging can be carried out with point transducers producing spherical waves.
$$\psi_i(x, y, t) = \begin{cases} p_t\!\left(t - \dfrac{x}{c}\right) & \text{for } y = 0 \\[4pt] 0 & \text{elsewhere} \end{cases}$$
where c is the propagation speed of the wave. This function models a pulse, $p_t(t)$, propagating down the x-axis, assumed perpendicular to the face of the transducer, with speed c. This is pictorially illustrated in Fig. 8.1(b). At a point (x, y) in the object a portion of the incident field, $\psi_i(x, y)$, will be scattered back toward the transducer. Therefore the amplitude of the scattered
Fig. 8.1: In B-scan imaging an object is illuminated by a narrow beam of energy. A short (temporal) pulse is transmitted and will propagate through the object. (a) shows a portion of the object illuminated by a pencil beam of energy, (b) shows the pulse at different times within the object, and (c) shows the spherically expanding wave caused by a single scatterer within the object.
field at the scatterer is given approximately by

$$\psi_s(x, y=0) = f(x, y=0)\, p_t\!\left(t - \frac{x}{c}\right).$$
In traveling back to the receiver, the reflected pulse will be delayed by x/c due to the propagation distance involved and attenuated because the reflected field is diverging as depicted in Fig. 8.1(c). To maintain conservation of energy (in two dimensions here) the amplitude attenuation due to spreading is proportional to $1/\sqrt{x}$. That means the energy density will decay as 1/x and, when integrated over the boundary of a circle enclosing the scattering site, the total energy outflow will always be the same regardless of the radius of the circle. Thus the field received due to reflection at x is given by
$$\psi_s(t)\Big|_{\text{scatterer at } x} = p_t\!\left(t - \frac{2x}{c}\right) \frac{f(x, y=0)}{\sqrt{x}}. \qquad (4)$$
Integrating this with respect to all the reflecting sites along the transmitter
$$\psi_s(t) = \int p_t\!\left(t - \frac{2x}{c}\right) \frac{f(x, y=0)}{\sqrt{x}}\, dx. \qquad (5)$$
With the above expression for the scattered field due to a narrow incident beam it is relatively straightforward to find a reconstruction process for the object's reflectivity. Certainly the simplest approach is to illuminate the object with a pulse, $p_t(t)$, that looks like an impulse. The scattered field can then be approximated by

$$\psi_s(t) = \int \delta\!\left(t - \frac{2x}{c}\right) \frac{f(x, y=0)}{\sqrt{x}}\, dx \;\propto\; \frac{f(x = tc/2,\; y=0)}{\sqrt{tc/2}}. \qquad (6)$$
This expression shows that there is a direct relation between the scattered field at t and the object reflectivity at x = tc/2. This is shown in Fig. 8.2. With this expression it is easy to see that a reconstruction can be formed using
$$\hat{f}(x, y=0) = \frac{4x}{c^2}\,\psi_s\!\left(\frac{2x}{c}\right) \qquad (7)$$

where $\hat{f}$ is the estimate of the reflectivity function f. The term $4x/c^2$ that multiplies the scattered field is known as time gain compensation and it compensates for the spreading of the fields after they are scattered by the object. In B-scan imaging, a cross-sectional image of the object's reflectivity variation is mapped out by a combination of scanning the incident beam and measuring the reflected field over a period of time. Recall that in B-scan imaging the object is illuminated by a very narrow beam of energy. Equation (7) then gives an estimate of the object's reflectivity along the line of the object illuminated by the field. To reconstruct the entire object it is then necessary to move the transducer in such a manner that all parts of the object are scanned. There are many ways this can be accomplished, the simplest
Fig. 8.2: When an object is illuminated by a pulse there is a direct relationship between the backscattered field and the object's reflectivity along a line.
being to spin the transducer and let each position of the transducer illuminate one line of a fan. This is the type of scan shown in Fig. 8.3. Clearly, the resolution in a B-scan image is a function of two parameters: the duration of the incident pulse and the width of the beam. Resolution as determined by the duration of the pulse is often called the range resolution, and the resolution controlled by the width of the beam is referred to as the lateral resolution. The range resolution can be found by considering the system response for a single point scatterer. From (5) the field measured at the point (0, 0) due to a single scatterer of unit strength at $x = x_0$ will be equal to

$$\psi_s(t) = \frac{1}{\sqrt{x_0}}\, p_t\!\left(t - \frac{2x_0}{c}\right). \qquad (8)$$

Substituting this in (7), our expression for estimating the reflectivity, we obtain the following form for the image of the object's reflectivity:

$$\hat{f}(x, y=0) = \frac{4x}{c^2}\, \frac{1}{\sqrt{x_0}}\; p_t\!\left(\frac{2x}{c} - \frac{2x_0}{c}\right). \qquad (9)$$
From this it is easy to see that an incident pulse of width $t_p$ seconds will lead to an estimate that is $t_p c$ units wide. It is interesting to examine in the frequency domain the process by which the object reflectivity function may be recovered from the measured data. In the simple model described here, the frequency domain techniques can be used by merging the $1/\sqrt{x}$ factor with the reflectivity function; this can be done by defining a modified reflectivity function
$$f'(x, y) = \frac{f(x, y)}{\sqrt{x}}. \qquad (10)$$
Now the scattered field at the point (0, 0) can be written as the convolution

$$\psi_s(t) = \int p_t\!\left(t - \frac{2x}{c}\right) f'(x, y=0)\, dx \qquad (11)$$

and can be expressed in the Fourier domain as

$$\Psi_s(\omega) = P_t(\omega)\, F'\!\left(-\frac{2\omega}{c},\; y=0\right). \qquad (12)$$
Given the scattered field in this form it is easy to derive a procedure to estimate the reflectivity of the object. Ideally it is only necessary to divide the
Fig. 8.3: Often, in commercial B-scan imaging a focused beam of energy is moved past the object. An image is formed by plotting the received field as a function of time and transducer position. (a) shows this process schematically. (b) is a transverse 7.5-MHz sonogram of a carcinoma in the upper half of the right breast. (This image is courtesy of Valerie P. Jackson, M.D., Associate Professor of Radiology, Indiana University School of Medicine.) (c) is a drawing of the tissue shown in (b). The mass near the center of the sonogram is lobulated, has some irregular borders and low-level internal echoes, and there is an area of posterior shadowing at the medial aspect of the tumor. These findings are compatible with malignancy.
COMPUTERIZED
TOMOGRAPHIC
IMAGING
$$F'\!\left(-\frac{2\omega}{c},\; y=0\right) = \frac{\Psi_s(\omega)}{P_t(\omega)}. \qquad (13)$$
Unfortunately, in most cases this simple implementation doesn't work because there can be frequencies where $P_t(\omega)$ is equal to zero, which can cause instabilities in the division, especially if there is noise present at those frequencies in the measured data. A more noise insensitive implementation can be obtained via Wiener filtering [Fat80].
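A common way to implement such a noise insensitive division is a Wiener-type filter, sketched below; the constant noise power and the circular-convolution setup are illustrative assumptions, not part of the text:

```python
import numpy as np

def wiener_deconvolve(received, pulse, noise_power=1e-3):
    """Estimate the reflectivity samples by replacing the bare division
    of (13) with a Wiener-type filter, Psi * conj(P) / (|P|^2 + N),
    which stays stable at frequencies where P_t(omega) is near zero."""
    n = len(received)
    Psi = np.fft.fft(received, n)
    P = np.fft.fft(pulse, n)                  # zero-padded pulse spectrum
    F = Psi * np.conj(P) / (np.abs(P) ** 2 + noise_power)
    return np.real(np.fft.ifft(F))
```

With a very small noise-power constant and a well-conditioned pulse spectrum, the filter reduces to the ideal inverse of (13).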
where we have assumed that the transducer is flush with the plane x = 0. Note that the field is only defined in the positive x half space and is a function of one spatial variable. In the receive mode the situation is slightly more complicated. If $\psi_s(x, y, t)$
Fig. 8.4: If a transducer with a wide beam illuminates the object, then it will measure line integrals over circular arcs of the object's reflectivity.
is the scattered field, the signal generated at the electrical terminals of the transducer is proportional to the integral of this field. We will ignore the constant of proportionality and write the electrical received signal, $p_r(t)$, as

$$p_r(t) = \int \psi_s(0, y, t)\, dy. \qquad (15)$$
In order to derive an expression for the received waveform given the field at points distant from the transducer it is necessary to consider how the waves propagate back to the transducer. First assume that there is a line of reflectors at $x = x_0$ that reflect a portion, $f(x=x_0, y)$, of the field. As described above we can write the scattered field at the line $x = x_0$ as the product of the incident field and the reflectivity parameter, or

$$\psi_s(x=x_0, y, t) = \psi_i(x=x_0, y, t)\, f(x=x_0, y) = p_t\!\left(t - \frac{x_0}{c}\right) f(x=x_0, y). \qquad (16)$$
To find the field at the transducer face it is necessary to find the Fourier transform of the field and then propagate each plane wave to the transducer face. This is done by first finding the spatial and temporal Fourier transform of the field at the line of reflectors:

$$\tilde{\psi}_s(k_y, \omega) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \psi_s(x=x_0, y, t)\, e^{-j k_y y}\, e^{j\omega t}\, dy\, dt. \qquad (17)$$
The function $\tilde{\psi}_s(k_y, \omega)$ therefore represents the amplitude of the plane wave propagating with direction vector $\left(-\sqrt{(\omega/c)^2 - k_y^2},\; k_y\right)$. It is important to realize that the above equation represents the field along the line as a function of two variables. For any temporal frequency, $\omega$, there is an entire spectrum of plane waves, each with a unique propagation direction. Recall that we are using the convention for the Fourier transform defined in Chapter 6. Thus the forward transform has a phase factor of $e^{-j k_y y}$ in the spatial domain, as is conventional, while the temporal Fourier transform uses
$e^{+j\omega t}$ for the forward transform. The signs are reversed for the inverse transform. With this plane wave expansion for the field it is now easy to propagate each plane wave to the transducer face. Consider an arbitrary plane wave

$$\psi(x, y) = e^{j(k_x x + k_y y)} \qquad (18)$$
where $k_x$ will be negative, indicating a wave traveling back toward the transducer. Using (15), the electrical signal produced is quickly seen to be equal to zero for all plane waves when $k_y \ne 0$. This is due to the fact that

$$\int_{-\infty}^{\infty} e^{j k_y y}\, dy = \delta(k_y). \qquad (19)$$
Those plane waves traveling perpendicular to the face of the transducer ($k_y = 0$) will experience a delay due to the propagation distance $x_0$. In the frequency domain this represents a factor of $e^{j\omega(x_0/c)}$. The electrical response due to a unit amplitude plane wave is then seen to be

$$P_r(\omega, k_y) = \delta(k_y)\, e^{j\omega(x_0/c)}. \qquad (20)$$

By summing each of the plane waves at frequency $\omega$ in (17), the total electrical response due to the scattered fields from the plane $x = x_0$ is given by

$$P_r(\omega) = \tilde{\psi}_s(k_y = 0,\; \omega)\, e^{j\omega(x_0/c)} \qquad (21)$$
$$p_r(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} P_r(\omega)\, e^{-j\omega t}\, d\omega. \qquad (22)$$

Now substituting (14), (17), and (16) into this expression, the received signal can be written

$$p_r(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-j\omega t}\, e^{j\omega(x_0/c)} \left[ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \psi_s(x=x_0, y, t')\, e^{-j k_y y}\, e^{j\omega t'}\, dy\, dt' \right]_{k_y = 0} d\omega \qquad (23)$$

$$= \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-j\omega t}\, d\omega \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p_t\!\left(t' - \frac{x_0}{c}\right) f(x=x_0, y)\, e^{j\omega(x_0/c)}\, e^{j\omega t'}\, dy\, dt' \qquad (24)$$
which reduces to

$$p_r(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-j\omega t}\, P_t(\omega)\, e^{j\omega(2x_0/c)}\, d\omega \int_{-\infty}^{\infty} f(x=x_0, y)\, dy \qquad (25)$$

$$= p_t\!\left(t - \frac{2x_0}{c}\right) \int_{-\infty}^{\infty} f(x=x_0, y)\, dy. \qquad (26)$$
The above equation represents the measured signal due to a single line of scatterers at $x = x_0$. Let the total (integrated) reflectivity of the object along the line $x = x_0$ be denoted by $f_l(x_0)$. The received signal for all parts of the object can be written as the sum of each individual line (since we are assuming that the backscattered fields satisfy the Born approximation and thus the system is linear) and the total measured signal can be written

$$p_r(t) = \int_{0}^{\infty} p_t\!\left(t - \frac{2x}{c}\right) f_l(x)\, dx. \qquad (27)$$
This signal is similar to that of B-scan imaging. As in B-scan imaging, the transmitted pulse is convolved with the reflectivity of the object, but in each case the reflectivity is summed over the portion of the object illuminated by the incident field. In B-scan imaging the object is illuminated by a narrow beam, so each portion of the received signal represents a small area of the object. With reflection tomography the beam is very wide and thus each measurement corresponds to a line integral through the object. As in B-scan imaging, the reflectivity of the object can be found by first deconvolving the effects of the incident pulse. If the incident pulse can be approximated by an impulse, then the object's reflectivity over line integrals is equal to

$$f_l(x) = p_r\!\left(\frac{2x}{c}\right); \qquad (28)$$

otherwise a deconvolution must be done and the line integrals recovered using

$$F_l(\omega) = \frac{P_r(\omega)}{P_t(\omega)} \qquad (29)$$

where $F_l(\omega)$, $P_r(\omega)$, and $P_t(\omega)$ represent the Fourier transforms of the corresponding time or space domain signals. (In practice, of course, one may have to resort to techniques such as Wiener filtering for implementing the frequency domain inversion.)
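The convolutional measurement model of (27) is easy to exercise on a toy one-dimensional example; the depth grid, pulse shape, and layer positions below are all made-up values:

```python
import numpy as np

# Two reflecting layers along the depth axis; the received signal is the
# transmitted pulse convolved with the line-integral reflectivity f_l(x),
# with the time axis understood as t = 2x/c (cf. (27) and (28)).
x = np.linspace(0.0, 0.1, 400)          # depth samples (m), assumed grid
f_l = np.zeros_like(x)
f_l[100] = 1.0                           # hypothetical layer positions
f_l[250] = 1.0
pulse = np.array([0.25, 1.0, 0.25])      # short transmitted pulse p_t
p_r = np.convolve(f_l, pulse, mode='same')
```

With an impulse-like pulse the received signal reproduces f_l directly, which is the content of (28); for longer pulses the deconvolution of (29) is needed.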
The line integral data in the equation above are precisely the information needed to perform a reconstruction using the Fourier Slice Theorem. As described in Chapter 3, the object's reflectivity can be found using the relationship

$$f(x, y) = \int_{0}^{\pi} \int_{-\infty}^{\infty} S_\theta(\omega)\, |\omega|\, e^{j\omega t}\, d\omega\, d\theta \qquad (30)$$
where $S_\theta$ represents the Fourier transform of the projection data measured with the transducer face at an angle of $\theta$ to the horizontal and

$$t = x \cos\theta + y \sin\theta. \qquad (31)$$
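Given line-integral data over a full set of angles, the reconstruction of (30)-(31) is ordinary filtered backprojection; below is a compact sketch (discrete |ω| filter, linear interpolation, detector axis normalized to [-1, 1]; these discretization choices are assumptions made for the sketch):

```python
import numpy as np

def filtered_backprojection(sino, thetas, n):
    """Reconstruct an n x n image from a sinogram per (30): filter each
    projection with |omega| in the Fourier domain, then backproject it
    along t = x cos(theta) + y sin(theta) as in (31)."""
    num_det = sino.shape[1]
    filt = np.abs(np.fft.fftfreq(num_det))         # discrete |omega| ramp
    ts = np.linspace(-1.0, 1.0, num_det)           # detector coordinates
    xs = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs, indexing='ij')
    img = np.zeros((n, n))
    for proj, th in zip(sino, thetas):
        q = np.real(np.fft.ifft(np.fft.fft(proj) * filt))
        t = X * np.cos(th) + Y * np.sin(th)
        img += np.interp(t, ts, q)                 # backprojection
    return img * np.pi / len(thetas)
```

Feeding it the sinogram of a single centered point scatterer yields an image peaked at the center, as expected.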
Fig. 8.5: By using a common signal source and combining all the electrical signals, an array of transducers can be used to generate a plane wave for reflection tomography. However, by recording the information separately for each transducer, they can also be used for the more general form of reflection tomography.
(33)
then the scattered fields can be normalized by dividing the received spectrum by the transmitted spectrum to find

$$S'(\omega, y) = \frac{S(\omega, y)}{P_t(\omega)}. \qquad (34)$$

Again, as described before, this represents an idealized approach and in practice a more robust filter must be used. Because of the normalization, at the array element at location y the data $S'(\omega, y)$ represent a single plane wave component of the scattered field that is at a temporal frequency of $\omega$. If we take a Fourier transform of $S'(\omega, y)$ with respect to the variable y, by using the techniques of Chapter 6 we can derive the following relationship:

$$S'(\omega, k_y) = \int_{-\infty}^{\infty} S'(\omega, y)\, e^{-j k_y y}\, dy = F\!\left(-\sqrt{\left(\frac{\omega}{c}\right)^2 - k_y^2} - \frac{\omega}{c},\; k_y\right) \qquad (35)$$
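The arc geometry of (35) is easy to check numerically; the sketch below simply evaluates the arc coordinates for one temporal frequency (the propagation speed and frequency are arbitrary illustrative values):

```python
import numpy as np

def arc_coords(omega, c, ky):
    """Object-spectrum coordinates covered by the backscattered field at
    temporal frequency omega per (35): an arc of the circle of radius
    omega/c centered at (-omega/c, 0), parameterized by ky."""
    kx = -np.sqrt((omega / c) ** 2 - ky ** 2) - omega / c
    return kx, ky

c = 1500.0                                # assumed propagation speed
omega = 2.0 * np.pi * 1.0e6               # one temporal frequency
ky = np.linspace(-0.99 * omega / c, 0.99 * omega / c, 101)
kx, _ = arc_coords(omega, c, ky)
```

At $k_y = 0$ the arc passes through $(-2\omega/c,\, 0)$, which is the single point a plane wave transducer measures, in agreement with (38).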
Fig. 8.6: A plane wave transducer gives samples of the two-dimensional Fourier transform of the object along the line indicated by the cross marks. For each frequency, the backscattered field gives information along an arc. A plane wave transducer only measures the dc component; thus the measured signal contains information about only one point of each arc. By rotating the transducer around the object a complete reconstruction can be formed.
which shows that the Fourier transform* S(ω, k_y) provides us with an estimate of the Fourier transform of the object reflectivity function along a circular arc, as illustrated in Fig. 8.6 for a number of different frequencies. This means that a cross-sectional image of the object could be reconstructed by rotating the object in front of the array, since via such a rotation we should be able to fill out a disk-shaped region with a hole in the center in the frequency domain. The reconstruction can be carried out by taking an inverse Fourier transform of this region. Clearly, since the center part of the disk would be missing, the reconstructed image would be a high pass version of the actual reflectivity distribution.

Reflection tomography using plane wave transducers, as described in the preceding subsection, is a special case of the more general form presented here. This can be shown as follows: If the signals s(t, y) received by the transducers are simply summed over y, the resulting signal as a function of time represents not only the output from an idealized plane wave receiver but also the Fourier transform of the received field at a spatial frequency of k_y = 0. We can, for example, show that the Fourier transform of the summed signal
* Note that the expression defined in (32) represents the received signal, S, as a function of temporal frequency, ω, and spatial position, y, while (35) represents the normalized signal as a function of both spatial (k_y) and temporal (ω) frequency.
\mathrm{FT}\left\{\int_{-\infty}^{\infty} s(t, y)\, dy\right\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} s(t, y)\, e^{-j\omega t}\, dt\, dy = S(\omega, k_y = 0) \qquad (37)

= P_t(\omega)\, F\!\left(-\frac{2\omega}{c},\, 0\right) \qquad (38)
which shows that the Fourier transform of the summed signal gives the Fourier transform of the object along the straight line k_y = 0, for 0 < ω < ∞. These data points are shown as crosses in Fig. 8.6.
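This equivalence between the summed (plane wave) signal and the k_y = 0 spatial-frequency component is easy to verify numerically: for any sampled field s(t, y), the k_y = 0 bin of a DFT along the array dimension is exactly the sum over the array, since e^{-j·0·y} = 1 for every y. The grid sizes below are arbitrary choices for the demonstration.

```python
import numpy as np

# s(t, y): a synthetic received field sampled on a 128 x 64 (t, y) grid.
rng = np.random.default_rng(1)
s = rng.standard_normal((128, 64))

# Plane-wave (summed) signal: integrate over the array aperture y.
summed = s.sum(axis=1)

# DFT along y; the k_y = 0 bin of S(t, k_y) is identical to the sum.
S_ky = np.fft.fft(s, axis=1)
assert np.allclose(S_ky[:, 0].real, summed)
```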
approaches 1.2λ₀, there are some small high frequency ripples. (λ₀ refers to the wavelength at the center frequency of the transducer bandwidth.) Using a more practical frequency range, the reconstructions shown in Fig. 8.8 are obtained. Here the data simulate what might be measured with a transducer with a center frequency of 1 MHz and a bandwidth of 1.2 MHz. As would be expected, the reconstructions aren't as good as those shown in Fig. 8.7 because some of the low and high frequency information about the object is missing. Thus there is very little information in the reconstructions other than the location of the edges of the cylinders. The average refractive index of each cylinder isn't reconstructed because that information is contained in the low frequencies.

A big problem with reflection tomography is that it doesn't provide information about the object at low frequencies. To a certain extent this problem can be rectified by extrapolating the measured object spectrum into the low frequency band where the information is missing. A popular algorithm for such an extrapolation is the Gerchberg-Papoulis algorithm [Ger74], [Pap75]. The Gerchberg-Papoulis algorithm is an iterative procedure that combines information about the Fourier transform of a function (as might be produced by a reflection tomography experiment) with independent space domain constraints. Typically, the spatial constraint might be the known support of the object or the fact that it is always positive.

Assume that a reflection tomography experiment has yielded F₀(u, v) as an estimate of the Fourier transform of an object cross section; its inverse Fourier transform f₀(x, y) is then the image that would be the result of the experiment. From the preceding arguments F₀(u, v) is known in a doughnut-shaped region of the (u, v) space; we will denote this region by D_f. In general, the experiment itself wouldn't reveal anything about the object outside the doughnut-shaped region.
If f(x, y) denotes the true cross section and F(u, v) the corresponding transform, we can write

F(u, v) = \begin{cases} F_0(u, v), & (u, v) \text{ in } D_f \\ \text{unknown}, & \text{elsewhere.} \end{cases} \qquad (41)

We will invoke the constraint that the object is known to be spatially limited:

f(x, y) = \begin{cases} f(x, y), & (x, y) \text{ in } D_s \\ 0, & \text{elsewhere} \end{cases} \qquad (42)
where we have used D_s to denote the maximum a priori known object size. Typically, the inverse Fourier transform of the known data F₀(u, v) will lead to a reconstruction that is not spatially limited. The goal of the Gerchberg-Papoulis algorithm is to find a reconstruction f*(x, y) that satisfies the space constraint and whose Fourier transform F*(u, v) is equal to that measured by reflection tomography in the region D_f. We will now describe how this algorithm can be implemented.
Given an initial estimate F₀(u, v), a better estimate of the object is found by taking the inverse Fourier transform of F₀(u, v) and setting the first iteration to be

f_1(x, y) = \begin{cases} \mathrm{IFT}\{F_0(u, v)\}, & (x, y) \text{ in } D_s \\ 0, & \text{elsewhere.} \end{cases} \qquad (43)
The next iteration is obtained by Fourier transforming f₁(x, y) and then constructing a composite function in the frequency domain as follows:

F_1(u, v) = \begin{cases} F_0(u, v), & (u, v) \text{ in } D_f \\ \mathrm{FT}\{f_1(x, y)\}, & \text{elsewhere} \end{cases} \qquad (44)
(FT = Fourier transform). We now construct the next iterate f₂(x, y), which is an improvement over f₁(x, y), by first inverse Fourier transforming F₁(u, v) and setting to zero any values that are outside the region D_s. This iterative process may be continued to yield f₃, f₄, and so on, until the difference between two successive approximations is below a prespecified bound. This is shown schematically in Fig. 8.9. The result of applying 150 iterations of the Gerchberg-Papoulis algorithm to the reconstructions of Fig. 8.7 is shown in Fig. 8.10. The reader is referred to [Rob85] for further details on the application of this algorithm to reflection tomography.
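The iteration defined by (43) and (44) can be sketched in a few lines of NumPy. Everything below — the array shapes, the mask construction, and the toy object — is an illustrative assumption of this sketch, not code from the text:

```python
import numpy as np

def gerchberg_papoulis(F0, freq_mask, support, n_iter=150):
    """Alternate between the two constraints of (43)-(44): keep the
    measured Fourier samples F0 inside the doughnut region D_f
    (freq_mask) and enforce the spatial support D_s (support)."""
    F = np.where(freq_mask, F0, 0.0)            # initial spectrum estimate
    for _ in range(n_iter):
        f = np.fft.ifft2(F).real                # back to the space domain
        f = np.where(support, f, 0.0)           # enforce spatial support D_s
        Fk = np.fft.fft2(f)
        F = np.where(freq_mask, F0, Fk)         # re-impose measured data on D_f
    return np.fft.ifft2(F).real

# Toy experiment: a small disk with known support, whose spectrum is
# "measured" only outside a low-frequency hole.
n = 32
y, x = np.mgrid[:n, :n]
obj = ((x - 16) ** 2 + (y - 16) ** 2 < 16).astype(float)
support = (x > 8) & (x < 24) & (y > 8) & (y < 24)
k = np.fft.fftfreq(n) * n
ky, kx = np.meshgrid(k, k, indexing="ij")
freq_mask = np.sqrt(kx ** 2 + ky ** 2) > 3      # doughnut: low frequencies missing
F0 = np.fft.fft2(obj)
recon = gerchberg_papoulis(F0, freq_mask, support, n_iter=100)
```

Because each step projects onto a set that contains the true object (support consistency and measured-data consistency), the iterates never move farther from the truth, and the extrapolated low frequencies partially fill the hole in D_f.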
(For simplicity we will continue to assume that both the illuminating field and
the object are two dimensional.) Since we are operating in the reflection mode, we use the same point transducer to record whatever scattered fields arrive at that site. Since the illuminating field is omnidirectional, the scattered field measured at the point transducer will be given by the following integration over the half space in front of the transducer:
p_s(t) = \int_{\text{half space}} f(\vec r)\; p_t\!\left(t - \frac{2|\vec r|}{c}\right) |\vec r|^{-1/2}\, d\vec r \qquad (47)
The reason for the factor |r|^{-1/2} is the same as that for the factor 1/√t in our discussion on B-scan imaging, and the term 2|r|/c represents the propagation delay from the point scatterer back to the transducer. Again, as was done for the B-scan case, the effect of the transmitted pulse can now be deconvolved, at least in principle, and the following estimate for the line integral of the reflection data, g(r), can be made:
g(r) = \left.\mathrm{IFT}\left\{\frac{\mathrm{FT}\{p_s(t)\}}{P_t(\omega)}\right\}\right|_{t = 2r/c} \qquad (48)

where FT{ } indicates a Fourier transform with respect to t and IFT{ } represents the corresponding inverse Fourier transform. The function g(r) is therefore a measure of line integrals through the object, where the variable r indicates the distance from the transducer to the measurement arc. The variable r is related to t by r = ct/2, where c is the velocity of propagation in the medium.

This type of reflection imaging makes a number of assumptions. Most importantly, for (47) to be valid it is necessary for the Born approximation to hold. This means that not only must the scattered fields be small compared to the incident fields, but the absorption and velocity change of the field must also be small. Second, the scatterers in the object must be isotropic so that the field scattered by any point is identical no matter from what direction the incident field arrives.

These line integrals of reflectivity can be measured from different directions by surrounding the object with a ring of point transducers. The line integrals measured by different transducers can be labeled g_φ(r), φ indicating the direction (and location) of the point transducer in the ring, as shown in Fig. 8.11. By analogy with the straight ray case it seems appropriate to form an image of the object by first filtering each line integral and then backprojecting the data over the same lines on which they were measured. Because the backprojection operation is linear we can ignore the filter function for now and derive a point spread function for the backprojection operator over circular arcs. With this information an optimum filter function h(r) will then be derived that looks surprisingly like that used in straight ray tomography.

For now assume that the line integral data, g_φ(r), are filtered by the
Fig. 8.11: In reflection tomography with a point source the transducer rotates around the object at a radius of R and its position is indicated by (R, φ). The measured signal, g_φ(r), represents line integrals over circular arcs centered at the transducer.
function h(r) to find g'_φ(r) = g_φ(r) * h(r). The backprojection operation over circular arcs can now be written

\hat f(r, \theta) = \frac{1}{2\pi} \int_0^{2\pi} g'_\phi\big[\rho(\phi;\, r, \theta)\big]\, d\phi \qquad (49)

= \frac{1}{2\pi} \int_0^{2\pi} \int_{-\infty}^{\infty} g_\phi(r')\, h\big(\rho(\phi;\, r, \theta) - r'\big)\, dr'\, d\phi \qquad (50)
where the distance from the transducer at (R, φ) to the reconstruction point at (r, θ) is given by

\rho(\phi;\, r, \theta) = \sqrt{R^2 + r^2 - 2Rr\cos(\theta - \phi)}. \qquad (51)
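Backprojection over circular arcs, in the spirit of (49)-(51), can be sketched directly: for each transducer angle φ, look up the filtered arc integral at radius ρ(φ; r, θ) and average over φ. Linear interpolation stands in for exact resampling, and all names, grid sizes, and the toy pulse below are assumptions of this sketch, not the book's code.

```python
import numpy as np

def rho(R, phi, r, theta):
    """Distance (51) from a transducer at (R, phi) to the point (r, theta)."""
    return np.sqrt(R**2 + r**2 - 2.0 * R * r * np.cos(theta - phi))

def backproject_arcs(g, phis, r_samples, R, grid_r, grid_theta):
    """Average, over all transducer angles, the filtered arc integral
    evaluated at the arc radius passing through each image point."""
    f = np.zeros_like(grid_r, dtype=float)
    for g_phi, phi in zip(g, phis):
        p = rho(R, phi, grid_r, grid_theta)
        f += np.interp(p, r_samples, g_phi, left=0.0, right=0.0)
    return f / len(phis)

# Toy usage: a single scatterer at the origin makes every arc integral a
# pulse at radius R, so the backprojection should peak at r = 0.
R = 1.5
phis = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
r_samples = np.linspace(0.0, 2.0, 400)
pulse = np.exp(-((r_samples - R) ** 2) / (2.0 * 0.02**2))
g = [pulse for _ in phis]
r_grid, th_grid = np.meshgrid(np.linspace(0.0, 0.5, 26),
                              np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False),
                              indexing="ij")
img = backproject_arcs(g, phis, r_samples, R, r_grid, th_grid)
```

The image is strongly peaked at the origin and falls off with radius; the residual halo away from the peak is the unfiltered point spread function that the derivation which follows is designed to sharpen.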
In order to determine h(r) we will now use (50) to reconstruct the image of a single scatterer; this image is called the point spread function of the backprojection process. For a single scatterer at (r₀, θ₀) the filtered projection is

p_{r,\phi}\big(r - \rho(\phi;\, r_0, \theta_0)\big) = p_t\big(r - \rho(\phi;\, r_0, \theta_0)\big) * h(r) \qquad (52)
since p_{r,φ}, p_t, and h are all functions of distance. The function p_{r,φ} represents a filtered version of the transmitted pulse; in an actual system the filter could be applied before the pulse is transmitted so that simple backprojection would produce an ideal reconstruction. The reconstructed image is then given by

\hat f(r, \theta) = \frac{1}{2\pi} \int_0^{2\pi} p_{r,\phi}\big(\rho(\phi;\, r, \theta) - \rho(\phi;\, r_0, \theta_0)\big)\, d\phi. \qquad (53)
We want h(r) to be such that \hat f is as close to a Dirac delta function as possible. In order to find an optimum h(r) in this manner, a number of approximations are necessary. First we expand the argument of g'_φ(r) in the equation above:

\rho(\phi;\, r, \theta) - \rho(\phi;\, r_0, \theta_0) = [R^2 + r^2 - 2Rr\cos(\theta - \phi)]^{1/2} - [R^2 + r_0^2 - 2Rr_0\cos(\theta_0 - \phi)]^{1/2}. \qquad (54)

We will now assume that the measurement circle is large enough so that (r/R)² and (r₀/R)² are both sufficiently small; as a consequence, the terms that contain powers of r/R and r₀/R greater than 2 can be dropped. Therefore the difference in distances between the two points can be written as

\rho(\phi;\, r, \theta) - \rho(\phi;\, r_0, \theta_0) \approx -r\cos(\theta - \phi) + r_0\cos(\theta_0 - \phi) + \frac{r^2 - r_0^2}{4R} \qquad (55)

\qquad\qquad - \frac{r^2}{4R}\cos 2(\theta - \phi) + \frac{r_0^2}{4R}\cos 2(\theta_0 - \phi). \qquad (56)

This can be further simplified to

\rho(\phi;\, r, \theta) - \rho(\phi;\, r_0, \theta_0) = X\cos(\phi - \gamma) + y_1 + y_2\cos 2(\phi - \alpha) \qquad (57)

where

X = \sqrt{r_0^2 + r^2 - 2r_0 r\cos(\theta - \theta_0)} \qquad (58)

\tan\gamma = \frac{r_0\sin\theta_0 - r\sin\theta}{r_0\cos\theta_0 - r\cos\theta} \qquad (59)

y_1 = \frac{r^2 - r_0^2}{4R} \qquad (60)

y_2 = \frac{1}{4R}\,[r_0^4 + r^4 - 2r^2 r_0^2\cos 2(\theta - \theta_0)]^{1/2} \qquad (61)

\tan 2\alpha = \frac{r_0^2\sin 2\theta_0 - r^2\sin 2\theta}{r_0^2\cos 2\theta_0 - r^2\cos 2\theta}. \qquad (62)

Substituting (57) into (53), the reconstruction of the single scatterer becomes

\hat f(r, \theta) = \frac{1}{2\pi}\int_0^{2\pi} p_{r,\phi}\big[X\cos(\phi - \gamma) + y_1 + y_2\cos 2(\phi - \alpha)\big]\, d\phi. \qquad (63)
Let P_{r,φ}(ω) denote the Fourier transform of the line integral p_{r,φ}(r), that is,

p_{r,\phi}(r) = \frac{1}{2\pi}\int_{-\infty}^{\infty} P_{r,\phi}(\omega)\, e^{j\omega r}\, d\omega. \qquad (64)

In terms of the Fourier transform of the filtered line integral data, \hat f can be written as

\hat f(r, \theta) = \frac{1}{2\pi}\int_0^{2\pi} d\phi\; \frac{1}{2\pi}\int_{-\infty}^{\infty} d\omega\; P_{r,\phi}(\omega)\, e^{j\omega[y_1 + y_2\cos 2(\phi - \alpha)]}\, e^{j\omega X\cos(\phi - \gamma)}. \qquad (65)

This result can be further simplified if the measurement radius, R, is large compared to both the radii r and r₀ and the distance between the point scatterer and the point of interest in the reconstruction. With this assumption it can be shown that both y₁ and y₂ are small and the point spread function can be written [Nor79a]

\hat f(r, \theta) \approx \frac{1}{2\pi}\int_0^{2\pi} d\phi\; \frac{1}{2\pi}\int_{-\infty}^{\infty} d\omega\; P_{r,\phi}(\omega)\, e^{j\omega X\cos(\phi - \gamma)}. \qquad (66)
When the scattering center is located at the origin, the point spread function is obtained by using

\rho(\phi;\, r, \theta) - \rho(\phi;\, 0, 0) = r\cos(\phi - \theta) \qquad (67)

and is given by

\hat f(r, \theta) = \frac{1}{2\pi}\int_{-\infty}^{\infty} P_r(\omega)\, J_0(r\omega)\, d\omega \qquad (70)
where we have assumed that P_{r,φ} is independent of φ for a scatterer located at the origin, and we write it simply as P_r(ω). With an expression for the point spread function it is possible to set it equal to a delta function and solve for the optimum filter function. The optimum
impulse response δ(x, y) can be written in polar form as

(71)

when the scattering center is located at the origin. The optimum filter function is then found by noting the identity

\int_0^{\infty} J_0(r\omega)\, \omega\, d\omega = \frac{1}{r}\,\delta(r). \qquad (72)
Rewriting the point spread function to put it into this form and using the fact that J_0(·) is an even function, it is easy to show that the optimum form for the filtered line integral data is

P_{r,\phi}(\omega) = \frac{|\omega|}{2\pi}. \qquad (73)

Since P_{r,\phi}(\omega) is equal to

P_{r,\phi}(\omega) = H(\omega)\, P_t(\omega) \qquad (74)

the optimum point spread response will occur when the product of the Fourier transform of the transmitted pulse and the reconstruction filter is equal to

H(\omega)\, P_t(\omega) = \frac{|\omega|}{2\pi}; \qquad (75)

if the transmitted pulse is shaped so that this condition already holds, then backprojection, without any additional filtering, will produce the optimum reconstruction. This filter function is not practical since it emphasizes the high frequencies. Generally, a more realistic filter will be a low pass filtered version of the optimum filter, or

H(\omega) = \frac{|\omega|}{2\pi} \quad \text{for } |\omega| < \omega_c

H(\omega) = 0 \quad \text{elsewhere.} \qquad (78)
Using this filter function the point spread function for the reconstruction procedure becomes

\hat f(r, \theta) = \frac{\omega_c}{2\pi^2\, r}\, J_1(\omega_c r) \qquad (79)
Fig. 8.12: A broadband reflection tomogram of five needles is shown here. In this experiment a pixel size of 0.1 mm, an image size of 300 x 300 pixels, 120 projections, and 256 samples per projection were used. This figure shows (a) the needle array, (b) a diagram of a needle array cross section showing sizes and spacing, (c) a reflection tomogram of an array cross section, and (d) a magnified (zoomed) view of (c). (These images are courtesy of Kris Dines, XDATA Corp., Indianapolis, IN, based on work sponsored by National Institutes of Health Grant #1 R43 CA36673-01.)
(80)
where λ_c is the wavelength corresponding to the cutoff frequency ω_c. The reconstruction procedure can be summarized as follows. First use (48) to transform the measured data into measures of line integrals over circular arcs. The data should then be filtered with (49) and then backprojected using (50).
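The filtering step of this summary can be sketched for a single measured profile using the band-limited ramp filter of (78). The `cutoff_frac` parameter is a made-up knob (not from the text) that places ω_c as a fraction of the largest representable frequency; note that because H(0) = 0, a constant profile filters to zero, which reflects exactly the missing low-frequency information discussed earlier.

```python
import numpy as np

def bandlimited_ramp_filter(g_phi, d_r, cutoff_frac=0.8):
    """Apply the low pass limited ramp filter of (78),
    H(w) = |w|/(2*pi) for |w| < w_c and 0 elsewhere,
    to one sampled arc-integral profile g_phi(r) with spacing d_r."""
    n = len(g_phi)
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=d_r)   # angular frequency samples
    w_c = cutoff_frac * np.abs(w).max()
    H = np.where(np.abs(w) < w_c, np.abs(w) / (2.0 * np.pi), 0.0)
    return np.real(np.fft.ifft(np.fft.fft(g_phi) * H))
```

A sinusoid inside the passband comes back scaled by |ω|/(2π) evaluated at its frequency, while the dc component is removed entirely.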
The distance between the point transducer and the object was large enough so that the line integrals over circular arcs could be approximated as straight lines; the transducer was 200 mm from the center of a 10-mm object. By assuming the integration path can be approximated by a straight line, the maximum error in the integration path is 0.25 mm. The reconstruction of Fig. 8.12(c) shows the resolution that is possible with this method. The five needles suspended in water represent nearly the ideal case since there is no phase shift caused by the object. More experimental work is needed to show the viability of this method in human patients.
8.5 Bibliographic Notes
There is a large body of work that describes the theory of B-scan imaging; for a sampler the reader is referred to [Fat80], [Fla81], [Fla83]. This technique is in wide use by the medical community and the reader's attention is drawn to the well-known book by Wells [Wel77] for an exhaustive treatment of the subject. One of the first approaches to reflection tomography was by Johnson et al. [Joh78], who employed a ray tracing approach to synthetic aperture imaging. This approach attempts to correct for refraction and attenuation but ignores diffraction. In 1979, Norton and Linzer [Nor79a], [Nor79b] published a backprojection-based method for reconstructing ultrasonic reflectivity. A more rigorous treatment and a further generalization of this approach were then presented in [Nor81], where different possible scanning configurations were also discussed. More recently, Dines [Din85] has shown experimental results that establish the feasibility of this imaging modality, although much work remains to be done to improve the quality of the reconstructed image. Also, recently, computer simulation results that show the usefulness of spectral extrapolation techniques in reflection tomography were presented in [Rob85].
8.6 References
[Din85] K. A. Dines, "Imaging of ultrasonic reflectivity," presented at the Symposium on Computers in Ultrasound, Philadelphia, PA, 1985.
[Fat80] M. Fatemi and A. C. Kak, "Ultrasonic B-scan imaging: Theory of image formation and a technique for restoration," Ultrason. Imaging, vol. 2, pp. 1-47, Jan. 1980.
[Fla81] S. W. Flax, G. H. Glover, and N. J. Pelc, "Textural variations in B-mode ultrasonography: A stochastic model," Ultrason. Imaging, vol. 3, pp. 235-257, 1981.
[Fla83] S. W. Flax, N. J. Pelc, G. H. Glover, F. D. Gutmann, and M. McLachlan, "Spectral characterization and attenuation measurements in ultrasound," Ultrason. Imaging, vol. 5, pp. 95-116, 1983.
[Ger74] R. W. Gerchberg, "Super-resolution through error energy reduction," Opt. Acta, vol. 21, pp. 709-720, 1974.
[Joh78] S. A. Johnson, J. F. Greenleaf, M. Tanaka, B. Rajagopalan, and R. C. Bahn, "Quantitative synthetic aperture reflection imaging with correction for refraction and attenuation: Application of seismic techniques in medicine," presented at the San Diego Biomedical Symposium, San Diego, CA, 1978.
[Nor79a] S. J. Norton and M. Linzer, "Ultrasonic reflectivity tomography: Reconstruction with circular transducer arrays," Ultrason. Imaging, vol. 1, no. 2, pp. 154-184, Apr. 1979.
[Nor79b] ——, "Ultrasonic reflectivity tomography in three dimensions: Reconstruction with spherical transducer arrays," Ultrason. Imaging, vol. 1, no. 2, pp. 210-231, 1979.
[Nor81] ——, "Ultrasonic reflectivity imaging in three dimensions: Exact inverse scattering solutions for plane, cylindrical and spherical apertures," IEEE Trans. Biomed. Eng., vol. BME-28, pp. 202-220, 1981.
[Pap75] A. Papoulis, "A new algorithm in spectral analysis and band limited extrapolation," IEEE Trans. Circuits Syst., vol. CAS-22, pp. 735-742, Sept. 1975.
[Rob85] B. A. Roberts and A. C. Kak, "Reflection mode diffraction tomography," Ultrason. Imaging, vol. 7, pp. 300-320, 1985.
[Wel77] P. N. T. Wells, Biomedical Ultrasonics. London, England: Academic Press, 1977.
Index
Algebraic equations solution by Kaczmarz method, 278 Algebraic reconstruction techniques, 28384 sequential, 289, 293 simultaneous, 285-92 Algebraic techniques reconstruction algorithms, 275-96 Algorithms cone beams, 104 filtered backprojection, 60-63, 72, 104 filtered backpropagation, 234-47 Gerchberg-Papoulis, 313-14 reconstruction, 49-112, 252-61, 275-96, 313-20 re-sorting, 92-93 SIRT, 295 see also Reconstruction algorithms Aliasing artifacts, 177-201 bibliography, 200 in 2-D images, 46 properties, 177-86 Approximations Born, 212-14, 248-53 comparison, 248-52 Rytov, 214-18, 249-53 to wave equation, 21 l-18 ART see Algebraic reconstruction techniques Artifacts aliasing, 177-201 beam hardening, 124 bibliography, 200 polychromaticity, 120-25 Attenuation compensation for positron tomography, 145-47 for single photon emission CT, 137-42 Authors affiliations, 329 Kak, Avinash C., 329 Slaney, Malcolm, 329 Backprojection, 179 filtered, 60-63, 65-72, 82, 84-85, 88, 104-7 star-shaped pattern, 184 weighted, 92, 106 3-D, 104-7
Backpropagation algorithm, 242-47, 262 filtered, 234-47 Bandlimited filter DFT, 74 Beam hardening, 118 artifacts, 120, 124 Bibliographic notes algebraic reconstruction algorithms, 29295 algorithms for reconstructions with nondiffracting sources, 107-10 aliasing artifacts and noise in CT images, 200 measurement of projection data, nondiffracting case, 168-69 reflection tomography, 321 tomographic imaging with diffracting sources, 268-70 Bones, 122 Born approximation, 215-18, 249-53, 258 evaluation, 24849 first, 212-14 Breasts mammograms, 159 sonograms, 302 B-scan imaging, 297-303, 315 commercial, 302 Cancer in breast, 159-60 Carcinoma sonogram, 302 Coincidence testing circuits, 143 of positron emission, 143 Collinear detectors equally spaced, 86-92 Compton effect, 114-15, 119 Computed tomography see Computerized tomography Computerized tomography applications, 132-33 emission, 134-47, 275 graduate courses, ix images, 177-201 noise, 177-201 scanners, 130 ultrasonic, 147-58 x-rays, 120-24
Cone beams algorithms, 104, 108-9 projection, 101 reconstruction, 102, 108-9 Continuous signals Fourier analysis, 11 Convolution, 8-9, 31-32, 83 aperiodic, 18 calculation, 15 circular, 18, 66 Fourier transforms, 39 CT see Computerized tomography Data collection process, 228-34 Data definition negative time, 25-26 positive time, 26 Data sequences, 26 padding, 23-25 resolution, 23 Data truncation effects, 27-28 Delta functions, 28-30 see also Dirac delta Detectors, 75, 101, 127, 192 arrays, 188 collinear, 86-92 equal spacing, 86-92 ray paths, 190 spacing, 78, 188 xenon gas, 128 DFT see Fourier transforms-discrete Diffracting sources filtered backpropagation algorithm, 23447 interpolation, 234-47 tomographic imaging, 203-73 Diffraction tomography reconstructions limitations, 247-51 Dirac delta, 5-7, 12, 30, 32, 222 see also Delta functions Display resolution in freauencv domain, 22-27 Distortions abasing, 177-201 Dogs heart, 154, 156-57 Education graduate, ix Emission computer tomography, 134-47 Equiangular rays, 77-86 Equispaced detectors collinear, 86-92 reconstruction algorithms, 87 Evanescent waves, 261 ignoring, 266 Fan beam reconstruction from limited number of views, 93-99
Fan beams, 78, 85 projections, 97 reconstruction, 93-99 re-sorting, 92 rotation, 126 scanners, 188 tilted, 105 Fan projections reconstruction, 75-93 FFT (Fast Fourier Transforms) inverse, 42, 240 l-D, 45 2-D, 45-47 FFT output interpretation, 20-22 Filters and filtering, 7 backprojection, 60-63, 65, 68, 72, 82, 84-85, 88, 104-7 backpropagation, 234-47 bandlimited, 74 ideal, 72 low pass, 40-41, 266 shift invariant, 8 Wiener, 306 Finite receiver length effects, 263-66 experiments, 266 reconstruction, 266 Forward projection process modeling, 286-88 Fourier analysis of function, 9-13, 33-35 Fourier diffraction theorem, 218-34, 25354, 259 short wavelength limit, 227-28 Fourier series, IO-13 triangle wave, 12 Fourier slice theorem, 228, 260, 307 tomographic imaging, 49, 56-61 Fourier transforms diffraction, 219, 223-27 discrete, 10, 13-15 fast, 16, 18, 20-26 finite, 10, 16-18, 42-45 generalized, 13 inverse, 13, 17, 34-35, 42, 226 line integrals, 318 Parseval theorem, 39, 44 s properties, 35-41 seismic orofilinrr. 233 I-D, 44145, 562-D, 34-35, 42-45, 222, 226-27, 229, 308 Frequency-shift method, 156-58 Functions continuous, 5-7 discrete, 5-7 Fourier representation, 9-13 Green 220-23 s, Hankel, 220
linear operations, 7-9 point spread, 29, 32 I-D, 5-7 Gibbs phenomenon, 178 Green function s decomposition, 220-23 Grids representation, 277 square, 276 superimposition, 276 Haunsfield, G. N., l-2, 107 Head phantom of Shepp and Logan, 51-53, 69, 103, 198, 255, 259, 285 Helmholtz equation, 224 Hilbert transforms, 68 Homogeneous wave equation, 204-8 acoustic pressure field, 204 electromagnetic field, 204 Human body x-ray propagation, 116, 195 Hyperplanes, 279 Ideal filter, 72 DFT, 74 Image processing, 28-47 Fourier analysis, 33-35 graduate courses, ix Images and imaging, 276-83 B-scan. 297-303. 315 CT, 177-201 magnetic resonance, 158-68 noise, 177-201 radar, 298-99 reconstructed, 190-94, 281 sag&al, 137 SPECT, 136 Impulse response convolution, 9 of ideal filter, 73 of shift invariant filter, 8-9 Inhomogeneous wave equation, 208-l 1 acoustics, 209 electromagnetic case, 208 Interpolation diffracting sources, 234-47 frequency domain, 236-42 Kaczmarz method for solving algebraic equations, 278 Kak, Avinash C. (Author) affiliations, 329 Limitations diffraction tomography 247-52 experimental, 261-68 mathematical, 247-48 reconstruction,
receivers, 268-70 reflection tomography, 309-13 Line integrals, 49-56 Fourier transforms, 3 18 Magnetic moments, 163 Magnetic resonance imaging, 158-68 Mammograms of female breasts, 159 Medical industry use of x-ray tomography, 132, 168 Modeling forward projection process, 286-88 Moire effect, 46, 178 MRI see Magnetic resonance imaging Negative time of data, 25-26 Noise in CT images, 177-201 in reconstructed images, 190-99 Nondestructive testing use of CT, 133 Nondiffracting sources measurement of projection data, 113-74 reconstruction, 49-112 Nyquist rate, 19-20, 180 Objects blurring, 192 broadband illumination, 235 Fourier transforms, 166, 235, 239 illumination, 300, 304 projections, 50, 59, 165, 182, 239 reflectivity, 300 Operators and operations linear, 7-9, 30-32 shift invariant, 8, 30-32 Parallel projections reconstruction algorithms, 60-75 Parseval theorem s Fourier transforms, 39, 44 Pencil beam of energy, 299 Phantoms, 122 reconstruction, 198, 262 x-rays, 127 Photoelectric effect, 114, 119 Photons emission, 146 emission tomography, 135-37 gamma-rays, 135, 138 Plane waves propagation, 207 Point sources, 28-30 Polychromaticity artifacts in x-ray CT, 120-25 Polychromatic sources for measuring projection data, 117-20
Positron emission tomography, 142-45 Positron tomography attenuation compensation, 145-47 Projection data measurement, 113-74 sound waves, 113 ultrasound, 113 x-rays, 113 Projections, 49-56 backpropagation, 245 cone beam, 101 curved lines, 95 diffracted, 204- 11 fan beams, 75-93, 97, 192 forward, 286-88 of cylinders, 2 of ellipse, 54, 62 of objects, 50, 59, 165, 182, 239 parallel, 51, 60-75, 77, 100, 185 representation, 276-83 uniform sampling, 238 x-ray, 114-16 3-D, 100-104, 165 Radar imaging, 298-99 Radioactive isotopes use in emission CT, 134-35 Radon transforms, 50, 52, 93-97 Received waves sampling, 261-62 Receivers effect of finite length, 263-66 limited views, 268-70 Reconstructed images continuous case, 190-94 discrete case, 194-99 noise, 190-99 Reconstruction algebraic, 280, 283-92 algorithms, 49-l 12 bones, 122 circular, 287 cone beams, 102 cylinders, 256, 310 diffraction tomography, 247 dog heart, 154, 156-57 s errors, 177-201 fan beams, 93-99 from fan projections, 75-93 iterative, 284, 289-90 large-sized, 282 limitations, 261-68 of ellipse, 178, 181 of images, 83, 122, 198, 281 of Shepp and Logan phantom, 70 phantom, 198 plane wave reflection, 310 refractive index, 154 simultaneous, 284-92
tumors. 290. 294 with n&diffracting sources, 49-l 12 2-D, 100 3-D, 99-107 Reconstruction algorithms, 3 13-20 algebraic, 275-96 cone beams, 103, 108-9 evaluation, 2.52-61 for equispaced detectors, 87 for parallel projections, 60-75 implementation, 288-92 Rectangle function limit, 29 2-D Fourier transform, 34 Reflection tomography, 297-322 experimental results, 320-21 limits, 309-13 of needles, 320 transducers, 307 vs. diffraction tomography, 307-9 with basic aim, 297 with point transmitter/receivers, 313-21 Refractive index tomography ultrasonic, 151-53 Re-sorting algorithm, 92-93 of fan beams, 92 Rocket motors nondestructive testing, 133-34 Rytov approximation, 249-53, 258 evaluation, 249 first. 2 14- 18 Sampling in real system, 186-89 of data, 19-20 of projection, 238 received waves, 261-62 SART see Simultaneous algebraic reconstruction technique Scanning and scanners B-scan imaging, 297-303 CT, 130 different methods, 126-32 fan beams, 188 fourth generation, 129 Scattering x-rays, 125-26 Seismic profiling experiment, 232 Fourier transforms,. 233 Shift invariant operations linear, 30-32 Short wavelength limit of Fourier diffraction theorem, 227-28 Signal processing fundamentals, 5 graduate courses, ix one-dimensional, 5 Simultaneous algebraic reconstruction tech-
nique, 285-92 implementation, 288 Simultaneous iterative reconstructive technique, 284 algorithm, 295 Single photon emission tomography, 135-37 attenuation compensation, 137-42 Sinograms, 94 SIRT see Simultaneous iterative reconstructive technique Skull simulated, 121 Slaney, Malcolm (Author) affiliations, 329 Sonograms of breast, 302 of carcinoma, 302 Sound waves projection data, 113 SPECT images, 136 Synthetic aperture tomography, 230-3 1 Tomography applications, 1 definition, 1 diffraction, 203, 221, 247, 307-9 emission computed, 134-47, 275 Fourier slice theorem, 49, 56-61 imaging with diffracting sources, 203-73 positron, 145-47 positron emission, 142-45 reflection, 297-322 simulations, 55 synthetic aperture, 230-31 ultrasonic, 147-58, 205 x-ray, l-3, 114-33 x-ray scanner, 1, 107 3-D simulations, 102 see also Computerized tomography, Reflection tomography, Single photon emission tomography, Ultrasonic computed tomography
Transducers plane wave, 303-7 pulse illumination, 3 12 reflection, 303-7 rotation, 316 ultrasonic, 149 Transforms see Fourier transforms, Hilbert transforms, Radon transforms Transmitter/receivers point, 313-21 reflection tomography, 3 13-2 1 Tumors reconstruction, 290-94 Ultrasonic beams propagation, 149, 153 Ultrasonic computed tomography, 147-58 applications, 157-58 attenuation, 153-57 fundamentals, 148-51 refractive index, 151-53 Ultrasonic signals, 152-53 Wave equation approximations, 211-18 homogeneous, 204-8 inhomogeneous, 208-I 1 Weighting functions, 99 backprojection, 92 Windows Hamming, 291, 293-94 smooth, 98 X-rays CT, 120-25 in human body, 195 monochromatic, 114-16 parallel beams, 116 phantoms, 127 photons, 128 projection data, 113 scatter, 125-26 sources, 129 tomography, 114-33 tubes, 115
Correction to Iwa75 reference on page 210 At the end of the first (partial) paragraph on page 210 we refer readers to [Iwa75]. The correct references are either:
- B. D. Coleman, "Foundations of linear viscoelasticity," Rev. Mod. Phys., Vol. 9, pp. 443-450, 1981.
- D. Nahamoo and A. C. Kak, "Ultrasonic Diffraction Imaging," Technical Report TR-EE-82-20, School of Electrical Engineering, West Lafayette, IN, 1982.