Image Registration and Data Fusion in Radiation Therapy

ABSTRACT. This paper provides an overview of image registration and data fusion techniques used in radiation therapy, and examples of their use. They are used at all stages of the patient management process; for initial diagnosis and staging, during treatment planning and delivery, and after therapy to help monitor the patients' response to treatment. Most treatment planning systems now support some form of interactive or automated image registration and provide tools for mapping information, such as tissue outlines and computed dose, from one imaging study to another. To complement this, modern treatment delivery systems offer means for acquiring and registering 2D and 3D image data at the treatment unit to aid patient setup. Techniques for adapting and customizing treatments during the course of therapy using 3D and 4D anatomic and functional imaging data are currently being introduced into the clinic. These techniques require sophisticated image registration and data fusion technology to accumulate properly the delivered dose and to analyse possible physiological and anatomical changes during treatment. Finally, the correlation of radiological changes after therapy with delivered dose also requires the use of image registration and fusion techniques.

Received 7 February 2006
Revised 17 March 2006
Accepted 24 April 2006

DOI: 10.1259/bjr/70617164

© 2006 The British Institute of Radiology
Data from multiple anatomical and functional imaging studies have become important components of patient management in radiation therapy. From initial diagnosis to treatment planning and from delivery to monitoring the patient post-therapy, these data drive the decisions about how the patient is treated and help assess the progress and efficacy of therapy. While X-ray CT remains the primary imaging modality for most aspects of treatment planning and delivery, the use of data from other modalities such as MRI and MR spectroscopy (MRS) and positron/single photon emission tomography (PET/SPECT) is becoming increasingly prevalent and valuable, especially when taking advantage of highly conformal treatment techniques such as intensity-modulated radiotherapy [1-3]. These additional imaging studies provide complementary information to help elucidate the condition of the patient before, during and after treatment. The use of time-series image data to assess physiological motion for initial planning as well as anatomical and functional changes for possible treatment adaptation is becoming more widespread as diagnostic imaging devices produce quality 4D image data and as X-ray imaging systems are incorporated into the treatment room.

In order to make use of the information from these multiple imaging studies in an integrated fashion, the data must be geometrically registered to a common coordinate system. This process is called image registration. Once different datasets are registered, information such as tissue boundaries, computed dose distributions and other image or image-derived information can be mapped between them and combined. This process is called data fusion. Figure 1a provides a simple example of these two processes.

Numerous techniques exist for both image registration and data fusion. The choice and advantage of one technique over another depends on the particular application and types of image data involved. While exhaustive and detailed reviews of image registration algorithms have appeared in the literature [4], this paper is meant to provide a broad overview as well as examples of image registration and data fusion techniques that are employed in radiation therapy.

Image registration

The basic task of image registration is to compute the geometric transformation that maps the coordinates of corresponding or homologous points between two imaging studies. While there are many different techniques used to carry this out, most approaches involve the same three basic components. The first and main component is the transformation model itself, which can range from a single global linear transformation for handling rotations and translations (six degrees of freedom; three rotations and three translations) to a completely free form deformation model where the transformation is represented by independent displacement vectors for each voxel in the image data (degrees of freedom can reach three times the number of voxels). The second component is the metric used to measure how well the images are (or are not) registered, and the third component is the optimizer and optimization scheme used to bring the imaging data into alignment. It is also worth mentioning that these general components, the transformation model, which defines the degrees of freedom or parameters, the metric or cost function used to measure the worth of the registration and the optimization engine used to reach a final solution, are completely analogous to the components required by inverse treatment planning systems.
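To make these three components concrete, the following minimal Python sketch (not taken from the original work or any particular clinical system; the synthetic data, function names and parameter values are illustrative assumptions) registers a translated copy of a small 2D array using a pure-translation transformation model, a sum-of-squared-differences metric and a general-purpose optimizer.

    # Minimal sketch of the three generic registration components:
    # a transformation model, a similarity metric and an optimizer.
    import numpy as np
    from scipy import ndimage, optimize

    # Synthetic Study A (fixed) and Study B (a shifted copy acting as the moving study)
    study_a = np.zeros((64, 64))
    study_a[20:40, 25:45] = 1.0
    study_b = ndimage.shift(study_a, (3.5, -2.0))

    # Component 1: the transformation model (here simply a 2D translation)
    def transform(moving, params):
        return ndimage.shift(moving, params, order=1)

    # Component 2: the registration metric (sum-of-squared differences)
    def metric(params):
        return np.sum((study_a - transform(study_b, params)) ** 2)

    # Component 3: the optimizer, which searches the transformation parameters
    result = optimize.minimize(metric, x0=[0.0, 0.0], method="Powell")
    print("Recovered translation:", result.x)   # approximately (-3.5, 2.0)

The same structure carries over to clinical systems, with rigid, affine or deformable models, correlation or mutual information metrics, and more elaborate optimization schemes.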
Figure 1. Schematic of the image registration and data fusion processes. (a) Anatomical information from a spin-echo MR is first registered and then fused with functional information from a 11C thymidine PET to create a synthetic MR-PET image volume. (b) General components of the registration process.
Although it is often desirable or necessary to register numerous imaging studies to each other, the process of registration is generally carried out by registering two datasets at a time. In radiation therapy, a common strategy is to register each of the imaging studies to the treatment planning CT, as it is used as the primary dataset for treatment planning and dose calculations. Transformations between studies that are not explicitly registered to each other can be easily derived by combining the appropriate transforms and inverse transforms between the different datasets and the planning CT. For the discussions that follow, the two datasets being registered are labelled Study A and Study B. Study A will be the base or reference dataset that is held fixed and Study B will be the homologous or moving dataset that is manipulated to be brought into geometric alignment with Study A. Study B' will refer to the transformed or registered version of Study B (Figure 1b).

Transformation model

The transformation model chosen to describe the mapping of coordinates between two studies depends on the clinical site, the imaging conditions and the particular application. In the ideal case, where the patient is positioned in an identical orientation during the different imaging studies and the scale and centre of the imaging coordinate systems coincide, the transformation is a simple identity transform I and xB = xA for all points in the two imaging studies. This situation most closely exists for the data produced by dual imaging modality devices such as PET-CT or SPECT-CT machines, especially if physiological motion is controlled or absent [6].

Naturally, it is common for the orientation of the patient to change between imaging studies, making more sophisticated transformations necessary. For situations involving the brain, where the position and orientation of the anatomy are defined by the rigid skull, a simple rotate-translate model can be accurately applied. In this case, a global linear transformation specified by three rotation angles (θx, θy, θz) and three translations (tx, ty, tz) can be used to map points from one image dataset to another. A more general linear transformation is an affine transform, which is a composition of rotations, translations, scaling (sx, sy, sz) and shearing (shx, shy, shz). A property of affine transformations is that they preserve collinearity ("parallel lines remain parallel"). Currently, the DICOM imaging standard uses affine transformations to specify the spatial relationship between two imaging studies [7]. Most commercial treatment planning systems only support image registration using affine transformations, although support for more sophisticated transformations should appear soon.
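In homogeneous coordinates such a rigid or affine mapping can be written as a single 4 x 4 matrix. The sketch below is an illustration only; the composition order shown is one convention among several and the parameter values are arbitrary assumptions.

    # Illustrative composition of a 4x4 homogeneous affine transform from
    # rotation, scaling, shearing and translation components.
    import numpy as np

    def rotation_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0, 0.0],
                         [s,  c, 0.0, 0.0],
                         [0.0, 0.0, 1.0, 0.0],
                         [0.0, 0.0, 0.0, 1.0]])

    def scaling(sx, sy, sz):
        return np.diag([sx, sy, sz, 1.0])

    def shearing_xy(shx):
        m = np.eye(4)
        m[0, 1] = shx                      # x is sheared as a function of y
        return m

    def translation(tx, ty, tz):
        m = np.eye(4)
        m[:3, 3] = [tx, ty, tz]
        return m

    # One possible composition order (a convention, not a standard):
    M = (translation(10.0, -5.0, 2.0) @ rotation_z(np.deg2rad(3.0))
         @ scaling(1.0, 1.0, 1.02) @ shearing_xy(0.01))

    point_a = np.array([100.0, 50.0, 25.0, 1.0])   # homogeneous point in Study A
    point_b = M @ point_a                           # mapped coordinates in Study B
    print(point_b[:3])

Setting the scale factors to one and the shears to zero reduces this to the six degree-of-freedom rigid model described above.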
The assumption of global rigid movement of anatomy is often violated, especially for sites other than the head and large image volumes that extend to the body surface. Differences in patient setup (arms up versus arms down), organ filling and uncontrolled physiological motion confound the use of a single affine transform to register two imaging studies. In some cases where local rigid motion can be assumed, it may be possible to use a rigid or affine transformation to register sub-volumes of two imaging studies. For example, the prostate itself may be considered rigid, but it certainly moves relative to the pelvis, depending on the filling of the rectum and bladder. By considering only a limited field-of-view that includes just the region of the prostate, it is often possible to use an affine transformation to accurately register the prostate anatomy in two studies [8-10]. One or more sub-volumes can be defined by simple geometric cropping or masks derived from one or more anatomical structures (Figure 2).

Figure 2. Various strategies for cropping data for limited field-of-view image data. (a) Simple geometric cropping. (b) Piecewise cropping. (c) Anatomically-based cropping.

Even with a limited field-of-view approach, there are many sites in which affine registration techniques are not sufficient to achieve acceptable alignment of anatomy. In these sites, an organ's size and shape may change as a result of normal organ behaviour or the motion of surrounding anatomy. For example, the lungs change in both size and shape during the breathing cycle, and the shape of the liver can be affected by the filling of the stomach. When registering datasets that exhibit these kinds of changes, a non-rigid or deformable model must be used to accurately represent the transformation between studies.

Deformable transformation models range in complexity from a simple extension of a global affine transformation using higher order polynomials with relatively few parameters, to a completely local or "free form" model where each point or voxel in the image volume can move independently and the number of parameters may reach three times the number of voxels considered. Between these two extremes are transformation models designed to handle various degrees of semi-local deformations using a moderate number of parameters, such as splines [11].

Global polynomials have been used successfully to model and remove image distortions in MR and other image data as a pre-processing step for image registration [12], but are not typically used for modelling deformation of anatomy because of undesirable oscillations that occur as the degree of the polynomial increases. Spline-based transformations, such as B-splines [11, 13], avoid this problem by building up the overall transformation, or deformation function, using a set of weighted basis functions defined over (or which contribute only over) a limited region. Figure 3 illustrates this approach for a one-dimensional cubic B-spline. The displacement or deformation, Δx, at a given point is computed as the weighted sum of basis functions centred at a series of locations called knots. Changing the weight or contribution w of each basis function affects only a specific portion of the overall deformation. By increasing the density of knots, more complex and localized deformations can be modelled.

Figure 3. B-spline deformation model. (a) 1D example of the cubic B-spline deformation model. The displacement Δx as a function of x is determined by the weighted sum of basis functions. The double arrow shows the region of the overall deformation affected by the weight factor w7. 3D deformations are constructed using 1D deformations for each dimension. (b) Multiresolution registration of lung data using B-splines. Both knot density and image resolution are varied during registration. This can help avoid local minima and decrease overall registration time.

Another spline-based transformation, called thin-plate splines, uses a set of corresponding control points defined on both image datasets and minimizes a bending energy term to determine the transformation parameters [14-16]. Unlike B-splines, the location of each control point does have some amount of global influence, meaning that changing the position of a control point in one area will affect the entire deformation in some capacity. Using more points reduces the influence of each point but this comes at a higher computational cost than with B-splines.
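The B-spline construction described above can be sketched in one dimension as follows (an illustration only; the knot spacing, weights and helper names are assumptions): the displacement at a position x is the weighted sum of cubic B-spline basis functions centred at regularly spaced knots.

    # Illustrative 1D cubic B-spline deformation: displacement(x) is a weighted
    # sum of basis functions centred at regularly spaced knots.
    import numpy as np

    def cubic_bspline(t):
        t = np.abs(t)
        out = np.zeros_like(t)
        core = t < 1.0
        edge = (t >= 1.0) & (t < 2.0)
        out[core] = 2.0 / 3.0 - t[core] ** 2 + 0.5 * t[core] ** 3
        out[edge] = (2.0 - t[edge]) ** 3 / 6.0
        return out

    knot_spacing = 10.0                                   # mm between knots (arbitrary)
    knots = np.arange(0.0, 101.0, knot_spacing)
    rng = np.random.default_rng(0)
    weights = rng.uniform(-3.0, 3.0, size=knots.size)     # deformation coefficients w

    def displacement(x):
        # Sum the contributions of all basis functions at position x
        return np.sum(weights * cubic_bspline((x - knots) / knot_spacing))

    x = np.linspace(0.0, 100.0, 11)
    print(np.round([displacement(xi) for xi in x], 2))

Because each basis function is non-zero over only a few knot spacings, changing a single weight alters only a local portion of the deformation; 3D deformations are built by applying such 1D functions to each coordinate.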
Finally, free-form or non-parametric transformation models are represented using vector fields of the explicit displacements for a grid of points, usually at the voxel locations or an integer sub-sample of these (Figure 4). Algorithms for solving for the displacements with non-parametric models use some form of local driving force to register the image data. Common models include fluid flow [17, 18], optical flow (based on intensity gradients) [19, 20] and finite element methods [21].

Figure 4. Visualization of (a) deformation computed between datasets registered using B-splines and (b) fluid flow model. The deformation or displacement is known for every voxel but only displayed for a subset of voxels for clarity ((b) image courtesy of Gustavo Olivera, University of Wisconsin).

Registration metric

In most registration algorithms, the parameters of a transformation model which bring two datasets into geometric alignment are determined by maximizing or minimizing a registration metric which measures the similarity or dissimilarity of the two image datasets. Most registration metrics in use today can be classified as either geometry-based or intensity-based. Geometry-based metrics make use of features extracted from the image data, such as anatomic or artificial landmarks and organ boundaries, while intensity-based metrics use the image data directly.

Geometry-based metrics

The most common geometry-based registration metrics involve the use of points [22], lines [23, 24] or surfaces [22, 25, 26]. For point matching, the coordinates of pairs of corresponding points from Study A and Study B are used to define the registration metric. These points can be anatomic landmarks or implanted or externally-placed fiducial markers. The registration metric is defined as the sum of the squared distances between corresponding points. To compute the rotations and translations for a rigid transformation, a minimum of three pairs of points are required and for affine transformations, a minimum of four pairs of non-coplanar points are required. Using more pairs of points reduces the bias that errors in the delineation of any one pair of points have on the estimated transformation parameters. However, accurately identifying more than the minimum number of corresponding points can be difficult as different modalities often produce different tissue contrasts (a major reason why multiple modalities are used in the first place) and placing or implanting larger numbers of markers is not always possible or desirable.

Alternatively, line and surface matching techniques do not require a one-to-one correspondence of specific points, but rather try to maximize the overlap between corresponding lines and surfaces extracted from two image studies, such as the brain or skull surface or pelvic bones. These structures can be easily extracted using automated techniques and minor hand editing. As with defining pairs of points, it may be inherently difficult or time consuming to accurately delineate corresponding lines and surfaces in both imaging studies. Furthermore, since the extracted geometric features are surrogates for the entire image volume, any anatomic or machine-based distortions in the image data away from these features will not be taken into account during the registration process.

Intensity-based metrics

To overcome some of the limitations of using explicit geometric features to register image data, another class of registration metric has been developed which uses the numerical greyscale information directly to measure how well two studies are registered. These metrics are also referred to as similarity measures since they determine the similarity between the distributions of corresponding voxel values from Study A and a transformed version of Study B. Several mathematical formulations are used to measure this similarity. The more common similarity measures in clinical use include: sum-of-squared differences and cross-correlation [27] for registration of data from X-ray CT studies and mutual information for registration of data from both similar and different imaging modalities [15, 28, 29].

The mutual information metric provides a measure of the information that is common between two datasets [30]. It is assumed that when two datasets are properly aligned, the mutual information of the pair is a maximum, which makes it an appropriate registration metric. It can be used for a wide range of image registration situations since there is no dependence on the absolute intensity values and it is very robust to missing or limited data. For example, a tumour might show up clearly on an MR study but be indistinct on a corresponding CT study. Over the tumour volume the mutual information is low, but no prohibitive penalties are incurred. In the surrounding healthy tissue the mutual information can be high, and this becomes the dominant factor in the registration.
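As a sketch of how such an intensity-based metric can be evaluated (an illustration only; the bin count, variable names and synthetic data are assumptions, and clinical implementations add interpolation and sampling refinements), mutual information can be estimated from the joint histogram of corresponding voxel values in Study A and the transformed Study B'.

    # Illustrative estimate of mutual information from a joint intensity histogram.
    import numpy as np

    def mutual_information(study_a, study_b_prime, bins=32):
        joint, _, _ = np.histogram2d(study_a.ravel(), study_b_prime.ravel(), bins=bins)
        p_ab = joint / joint.sum()               # joint probability distribution
        p_a = p_ab.sum(axis=1, keepdims=True)    # marginal distribution for Study A
        p_b = p_ab.sum(axis=0, keepdims=True)    # marginal distribution for Study B'
        nonzero = p_ab > 0
        return np.sum(p_ab[nonzero] * np.log(p_ab[nonzero] / (p_a @ p_b)[nonzero]))

    # An image shares more information with itself than with a shuffled copy.
    rng = np.random.default_rng(0)
    img = rng.normal(size=(64, 64))
    shuffled = rng.permutation(img.ravel()).reshape(64, 64)
    print(mutual_information(img, img), mutual_information(img, shuffled))

During registration this quantity is recomputed as the transformation parameters change, and the optimizer searches for the parameters that maximize it.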
Optimizer and registration scheme

Most image registration systems use optimization schemes such as gradient descent or problem specific adaptations of these. Registration of datasets is usually carried out in a hierarchical fashion, starting with downsized versions of the data and iteratively registering successively finer versions. The degrees of freedom of the geometric transformation can also be varied to speed the registration process. An example scheme might begin with simple translations, and then allow rotations, then low spatial frequency deformations and finally the full deformation model [12]. A hierarchical approach saves computation time and also helps avoid local minima, which become more likely as the degrees of freedom of the deformation model increase.

For deformable image registration problems using a large number of degrees of freedom, some form of regularization may also be imposed to discourage "unreasonable" deformations such as warping of bones and folding of tissue. One approach to this problem is to filter the deformations between iterations of the optimization [31]. Another approach is to include a regularization term in the registration metric that penalizes non-physical deformations. The regularization term can even be made spatially variant using known or estimated tissue properties [32].
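One generic way to express such a penalty (shown here as a sketch only; the simple gradient-based smoothness term, the weighting factor and the function names are assumptions rather than a specific published formulation) is to add a weighted regularization term to the image similarity metric.

    # Illustrative regularized registration metric: image dissimilarity plus a
    # weighted smoothness penalty on the displacement field.
    import numpy as np

    def ssd(study_a, study_b_prime):
        return np.sum((study_a - study_b_prime) ** 2)

    def smoothness_penalty(displacement_field):
        # displacement_field has shape (3, nz, ny, nx): the x, y and z displacement
        # of every voxel.  Large spatial gradients (folding, tearing) are penalized.
        penalty = 0.0
        for component in displacement_field:
            gz, gy, gx = np.gradient(component)
            penalty += np.sum(gz ** 2 + gy ** 2 + gx ** 2)
        return penalty

    def regularized_metric(study_a, study_b_prime, displacement_field, weight=0.1):
        # The optimizer minimizes this combined cost; "weight" trades image
        # agreement against physically reasonable (smooth) deformations.
        return ssd(study_a, study_b_prime) + weight * smoothness_penalty(displacement_field)

    # Example with a zero displacement field, for which the penalty term vanishes.
    a, b = np.random.rand(8, 16, 16), np.random.rand(8, 16, 16)
    print(regularized_metric(a, b, np.zeros((3, 8, 16, 16))))

Replacing the scalar weight with a spatially varying map gives the spatially variant regularization mentioned above.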
Data fusion

The motivation for registering imaging studies is to be able to map information derived from one study to another, or to directly combine or fuse the imaging data from the studies to create displays that contain relevant features from each modality. For example, a tumour volume may be more clearly visualized using a specific MR image sequence or coronal image plane rather than the axial treatment planning CT. If the geometric transformation between the MR study and the treatment planning CT study is known, the clinician is able to outline the tumour using images from the MR study and map these outlines to the images of the CT study. This process is called structure mapping (Figure 5).

Another approach to combining information from different imaging studies is to map directly the image intensity data from one study to another so that at each voxel there are two (or more) intensity values rather than one. The goal is to create a version of Study B (Study B') with images that match the size, location and orientation of those in Study A. These corresponding images can then be combined or fused in various ways to help elucidate the relationship between the data from the two studies. Various relevant displays are possible using this multistudy data. For example, functional information from a PET imaging study can be merged with the anatomic information from an MRI study and displayed as a colourwash overlay (Figure 6). This type of image synthesis is referred to as image fusion.

A variety of techniques exist to present fused data, including the use of overlays, pseudo-colouring and modified greyscales. For example, the hard bone features of a CT imaging study can be combined with the soft tissue features of an MRI study by adding the bone extracted from the CT to the MR dataset. Another method is to display anatomic planes in a side-by-side fashion (Figure 6). Such a presentation allows structures to be defined using both images simultaneously.
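As a simple sketch of one such display (illustrative only; the colour map, blending weight and array names are assumptions), a colourwash overlay can be produced by alpha-blending a colour-mapped functional image over the greyscale anatomic image once the two studies have been registered and resampled onto the same grid.

    # Illustrative colourwash fusion of a registered functional image (e.g. PET)
    # over a greyscale anatomic image (e.g. MR), both resampled to the same grid.
    import numpy as np
    import matplotlib.cm as cm

    def colourwash(anatomy, functional, alpha=0.4):
        a = (anatomy - anatomy.min()) / np.ptp(anatomy)        # normalize to [0, 1]
        f = (functional - functional.min()) / np.ptp(functional)
        grey_rgb = np.stack([a, a, a], axis=-1)                # greyscale as RGB
        func_rgb = cm.hot(f)[..., :3]                          # colour-mapped functional data
        return (1.0 - alpha) * grey_rgb + alpha * func_rgb     # blended RGB image

    # Synthetic example; in practice these would be the registered, reformatted studies.
    mr = np.random.rand(128, 128)
    pet = np.random.rand(128, 128)
    print(colourwash(mr, pet).shape)    # (128, 128, 3)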
Figure 5. Structure mapping. A tumour volume is outlined by the clinician on an MR study and then mapped to the treatment planning CT using the computed transformation.
Figure 6. Different approaches to display data from multiple studies which have been registered and reformatted. (a) Side-by-side display with linked cursor. (b) Split screen display. (c) Colourwash overlay.
In addition to mapping and fusing image intensities, 3D dose distributions computed in the coordinate system of one imaging study can be mapped to another. For example, doses computed using the treatment planning CT can be displayed over an MR study acquired after the start of therapy. With these data, regions of post-treatment radiological abnormality can be readily compared with the planned doses for the regions. With the introduction of volumetric imaging on the treatment units, treatment delivery CT studies can now be acquired to determine more accurately the actual doses delivered. By acquiring these studies over the course of therapy and registering them to a common reference frame, doses for the representative treatments can be reformatted and accumulated to provide a more likely estimate of the delivered dose. This type of data can be used as input into the adaptive radiotherapy decision process.

Validation

It is important to validate the results of a registration before making clinical decisions based on the results. To do this, most image registration systems provide some combination of numerical and visual verification tools. A common numerical evaluation technique is to define a set of landmarks for corresponding anatomic points on Study A and Study B and compute the distance between the actual location of the points defined on Study A and the resulting transformed locations of the points from Study B'. This calculation is similar to a "point matching" metric but, as discussed earlier, it may be difficult to accurately and sufficiently define the appropriate corresponding points, especially when registering multimodality data. Also, if deformations are involved, the evaluation is not valid for regions remote from the defined points.

Regardless of the output of any numerical technique used, which may only be a single number, it is important for the clinician to appreciate how well in three dimensions the information they define on one study is mapped to another. There are many possible visualization techniques to help to evaluate qualitatively the results of a registration. Most of these are based on data mapping and fusion display techniques. For example, paging through the images of a split screen display and moving a horizontal or vertical divider across regions where edges of structures from both studies are visible can help uncover even small areas of misregistration (Figure 7). Another interesting visual technique involves dynamically switching back and forth between corresponding images from the different studies at about once per second and focusing on particular regions of the anatomy to observe how well they are aligned.
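The landmark-based numerical check described above amounts to computing the residual distances between corresponding points after registration, often reported as a target registration error. A minimal sketch (the point coordinates and names are arbitrary illustrative values):

    # Illustrative landmark-based validation: distances between points defined on
    # Study A and the corresponding transformed points from Study B'.
    import numpy as np

    points_a = np.array([[12.0, 45.0, 30.0],
                         [60.5, 10.2, 33.0],
                         [25.0, 70.0, 41.5]])        # landmarks on Study A (mm)
    points_b_prime = np.array([[12.8, 44.1, 30.4],
                               [59.9, 11.0, 32.2],
                               [26.1, 69.2, 42.0]])  # same landmarks mapped from Study B

    residuals = np.linalg.norm(points_a - points_b_prime, axis=1)
    print("Per-landmark error (mm):", np.round(residuals, 2))
    print("Mean / max error (mm):", residuals.mean().round(2), residuals.max().round(2))

As noted above, such point-based figures of merit say little about regions far from the landmarks when deformable transformations are involved.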
Figure 7. Image-image visual validation using split screen displays of native MR and reformatted CT study.
Figure 8. Image-geometry visual validation structure overlay of CT defined brain outlines over MR images.
In addition to comparing how well the images from Study A and Study B' correspond at the periphery of anatomic tissues and organs, outlines from one study can be displayed over the images of the other. Figure 8 shows a brain surface which was automatically segmented from the treatment planning CT study and mapped to the MR study. The agreement between the CT-based outlines at the different levels and planes of the MR study demonstrates the accuracy of the registration.

In practice, the accuracy of the registration process depends on a number of factors. For multimodality registration of PET/CT/MR data in the brain, registration accuracy on the order of a voxel size of the imaging studies can be achieved. Outside the head, many factors confound single voxel level accuracy, such as machine induced geometric and intensity distortions as well as dramatic changes in anatomy and tissue loss or gain. Nevertheless, accuracy at the level of a few voxels is certainly possible in many situations.

Clinical applications

Image registration and data fusion are useful at each step of the patient management process in radiation therapy; for initial diagnosis and staging, during treatment planning and delivery, and after therapy to help monitor the patient's response. The overall purpose of these tools at each stage is the same; to help to integrate the information from different imaging studies in a quantitative manner to create a more complete representation of the patient. Over the past several years, treatment planning and treatment delivery systems have evolved to provide direct support for image registration and data fusion. Typical examples of how these techniques are used for treatment planning, delivery, and adaptation are described here.

Treatment planning

Most modern treatment planning systems permit the use of one or more datasets in addition to the treatment planning CT for structure delineation and visualization. These are sometimes referred to as "secondary" datasets. In order to transfer anatomic outlines and other geometric information from these datasets to the planning CT, the transformation between the secondary dataset and the planning CT is required. Furthermore, using the inverse of this transformation, it is also possible to transfer information computed using the planning CT, such as the planned dose, to the secondary dataset.

Incorporation of secondary or complementary data from MRI and nuclear medicine imaging studies is becoming increasingly common. MR provides superior soft tissue contrast relative to CT and the ability to image directly along arbitrary planes can aid in the visualization and delineation of certain anatomic structures, such as the optic nerves and chiasm. MR can also provide information on localized metabolite concentrations using spectroscopy [3, 33]. Incorporation of functional information from PET and SPECT can help remove ambiguities that might exist on the treatment planning CT between the tumour and other conditions such as atelectasis and necrosis [34]. These studies can also indicate nodal involvement and provide a map of local tissue function that can be used to construct objective functions for dose optimization [3, 35].
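Transferring outlines defined on such a secondary dataset to the planning CT, and carrying the planned dose back with the inverse transformation, can be sketched as applying the computed transformation to the contour points (an illustration only; the transformation matrix and point coordinates are arbitrary assumptions).

    # Illustrative structure mapping: contour points outlined on a secondary MR
    # study are mapped into the planning CT coordinate system; the inverse
    # transformation carries information the other way.
    import numpy as np

    M = np.eye(4)
    M[:3, 3] = [2.0, -1.5, 3.0]                       # assumed MR -> planning CT transform

    def map_points(points, matrix):
        homogeneous = np.hstack([points, np.ones((len(points), 1))])
        return (homogeneous @ matrix.T)[:, :3]

    mr_contour = np.array([[10.0, 20.0, 5.0],
                           [12.0, 21.0, 5.0],
                           [14.0, 19.5, 5.0]])        # outline points on the MR study (mm)
    ct_contour = map_points(mr_contour, M)             # same outline in planning CT space
    back_again = map_points(ct_contour, np.linalg.inv(M))
    print(np.allclose(back_again, mr_contour))         # True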
Figure 9 illustrates the use of MR as a secondary dataset for target and normal structure delineation. An axial and coronal MR study was acquired and registered to the treatment planning CT using a geometric transformation which allowed rotations and translations, as the anatomy in this region moves in a rigid fashion. Since the image data were from different modalities, the mutual information registration metric was used. Split-screen visualization of the registered datasets was used to validate the computed transformation which was judged to be accurate to within 1-2 mm over the image volume (Figure 7). The gross tumour volume (GTV) was defined as the region of enhancement in the post Gd-DTPA contrast MR studies. The clinician outlined this volume on both the axial and coronal sets of MR images. The optic nerves and chiasm were outlined on the coronal MR study. The outlines were used to generate a 3D surface description for each tissue and these were mapped to the coordinate system of the planning CT using the computed transformation. The outlines of these mapped surfaces for each CT image were derived by intersecting the transformed surfaces along the planes defined by each image (Figure 5). Because of differences in partial volume averaging between the axial and coronal MR images, the outlines derived from the axial and coronal MR data are not identical. In these cases, the clinician has the choice to use one or other outline or to generate a composite outline using a Boolean OR operation.

For this example, the CT greyscale data did not contribute any information for the definition of the GTV, optic nerves, or optic chiasm. Had the physician outlined a CT-based GTV, it could have been incorporated directly into the composite GTV or compared with the MR based outlines to reconcile potential ambiguities. At this point, the outlines of the tumour and normal structures were used for treatment planning as if they were derived using only the planning CT. The final planning target volume (PTV) was created by uniformly expanding the composite GTV surface by 5 mm to account for setup uncertainty. A treatment plan and dose distribution was generated using the CT data, PTV and normal structures. The CT-based dose distribution was then mapped back to the MR study for further visualization of the dose relative to the underlying anatomy.

Treatment delivery

Once a treatment plan is created, it is transferred to the treatment unit for delivery. The location and orientation of the patient on the treatment machine must be adjusted so that the centre and orientation of the coordinate system of the treatment plan coincide with that of the treatment unit. Image registration is typically used to carry out this process using images acquired in the treatment room and the planning CT. The most common practice is to generate a pair of orthogonal digitally reconstructed radiographs (DRRs) from the planning CT and register these simulated radiographs with actual radiographs acquired by a flat-panel imager attached to the treatment unit. It is now also possible to acquire volumetric image data at the treatment unit using cone-beam reconstruction of a set of projection images acquired by rotating the treatment gantry around the patient (see papers by Kirby and Glendinning, Moore et al and Chen and Pouliot in this issue). These cone-beam data can be registered directly with the planning CT to determine how to shift (and possibly rotate) the treatment table to properly position the patient for treatment [36, 37].

Figure 10 shows an example of an interface for 3D image-based alignment at the treatment unit using a cone-beam CT dataset and the planning CT. Automated image registration using successively finer data resolution and mutual information is used to determine the rotations and translations required to align the two datasets. These are then translated into machine parameters which can be automatically downloaded to the treatment unit and set-up. In this example, the accuracy of the registration is assessed using both image-image and structure-image overlay displays. These same tools and image data are also available off-line so that the clinician can track and analyse the progress of the treatments.

Treatment adaptation and customization

On-line imaging has made it more convenient to acquire image data of the patient over the course of treatment. Using these data, it is possible to uncover changes in patient anatomy or treatment setup that are significant and dictate changes to the original treatment plan. Better estimates of individual treatment doses can be computed using these data and the actual machine parameters. By registering these data to the "base" treatment planning CT, it is possible to construct a more complete model of the accumulated dose to the patient. This information can then be used to assess if and how a treatment plan should be adapted or further customized [38-41].
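The dose accumulation idea described above can be sketched as mapping each per-fraction dose distribution onto the reference (planning) anatomy using the registration result and forming a weighted sum (an illustration only; the generic warp function, the zero displacement fields and the fraction weights are assumptions).

    # Illustrative dose accumulation: per-fraction dose grids are mapped onto the
    # reference anatomy via the registration result and then summed.
    import numpy as np
    from scipy import ndimage

    def warp_to_reference(dose, displacement):
        # displacement has shape (3, nz, ny, nx): voxel displacements from the
        # deformable registration of this fraction's anatomy to the reference.
        grid = np.indices(dose.shape).astype(float)
        return ndimage.map_coordinates(dose, grid + displacement, order=1)

    def accumulate(doses, displacements, weights):
        total = np.zeros_like(doses[0])
        for dose, disp, w in zip(doses, displacements, weights):
            total += w * warp_to_reference(dose, disp)
        return total

    # Synthetic example: two breathing phases, equally weighted.
    shape = (16, 32, 32)
    doses = [np.random.rand(*shape), np.random.rand(*shape)]
    displacements = [np.zeros((3,) + shape), np.zeros((3,) + shape)]
    print(accumulate(doses, displacements, [0.5, 0.5]).shape)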
Figure 11 shows an example of dose accumulation for two datasets of the patient at different points in the breathing cycle. The dose distribution displayed on the left image was computed directly using the image dataset shown [42]. The dose distribution displayed on the middle image was computed from another dataset and mapped to the image data shown using the B-spline deformation field computed by registering the two datasets using a sum-of-squares difference metric. The dose distribution on the right is the weighted sum of the two distributions. This process can be continued throughout the course of therapy to provide up-to-date information on the delivered dose.

Summary

Over the past several years there has been an explosion of the use of image data from a variety of modalities to aid in treatment planning, delivery and evaluation. In order to make quantitative use of these data it is necessary to determine the transformation that relates the coordinates of the individual datasets to one another. The process of finding this transformation is referred to as image registration. Once the geometric relationship between the datasets has been determined it is possible to utilize the information they provide by mapping the image data, derived structures, and computed dose distributions between datasets using a process called data fusion. There are many different techniques that have been and are being studied to improve the accuracy and utility of both image registration and data fusion. Both processes are now essential components for modern treatment planning and delivery. As the need, availability and diversity of image data continues to increase, they will be even more important to each part of the patient management process. These tools, however, cannot replace clinical judgment. Different imaging modalities image the same tissues differently and, although tools may help us to understand better and differentiate between tumour and non-tumour, they cannot yet make the ultimate decision of what to treat and what not to treat. These decisions still lie with the clinician, although they now have more sophisticated tools to help them make these decisions.

Acknowledgments

Portions of the text and some of the figures presented here were published previously in Kessler ML, Roberson M. Image registration and data fusion for radiotherapy treatment planning. In: Schlegel W, Bortfeld T, Grosu A-L, editors. New technologies in radiation oncology. Springer, 2006.
Figure 11. Example of dose summation/accumulation using registered datasets (courtesy of Mihaela Rosu, The University of Michigan).
References

1. Webb S. The physical basis of IMRT and inverse planning. Br J Radiol 2003;76:678-89.
2. Eisbruch A. Intensity-modulated radiation therapy: a clinical perspective. Semin Radiat Oncol 2002;12:197-8.
3. Ling CC, Humm J, Larson S, Amols H, Fuks Z, Leibel S, et al. Towards multidimensional radiotherapy (MD-CRT): biological imaging and biological conformality. Int J Radiat Oncol Biol Phys 2000;47:551-60.
4. Maintz JB, Viergever MA. A survey of medical image registration. Med Image Anal 1998;2:1-36.
5. Hill DL, Batchelor PG, Holden M, Hawkes DJ. Medical image registration. Phys Med Biol 2001;46:R1-R45.
6. Townsend DW, Beyer T. A combined PET/CT scanner: the path to true image fusion. Br J Radiol 2002;75:24S-30S.
7. DICOM Part 3, PS3.3 - Service Class Specifications. National Electrical Manufacturers Association, Rosslyn, Virginia, USA, 2004.
8. McLaughlin PW, Narayana V, Meriowitz A, Troyer S, Roberson PL, Gonda R Jr, et al. Vessel sparing prostate radiotherapy: dose limitation to critical erectile vascular structures (internal pudendal artery and corpus cavernosum) defined by MRI. Int J Radiat Oncol Biol Phys 2005;61:20-31.
9. McLaughlin PW, Narayana V, Kessler M, McShan D, Troyer S, Marsh L, et al. The use of mutual information in registration of CT and MRI datasets post permanent implant. Brachytherapy 2004;3:61-70.
10. Roberson PL, McLaughlin PW, Narayana V, Troyer S, Hixson GV, Kessler ML. Use and uncertainties of mutual information for CT/MR registration post permanent implant of the prostate. Med Phys 2005;32:473-82.
11. Unser M, Aldroubi A, Eden M. B-spline signal processing: part I-theory. IEEE Trans Signal Processing 1993;41:821-33.
12. Kybic J, Unser M. Fast parametric elastic image registration. IEEE Trans Image Processing 2003;12:1427-42.
13. Maurer CR, Aboutanos GB, Dawant BM, et al. Effect of geometrical distortion correction in MR on image registration accuracy. J Comput Assist Tomogr 1996;20:666-79.
14. Bookstein F. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Trans Pattern Analysis Machine Intelligence 1989;567-85.
15. Meyer CR, Boes JL, Kim B, Bland PH, Zasadny KR, Kison PV, et al. Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations. Med Image Anal 1997;1:195-206.
16. Coselmon MM, Balter JM, McShan DL, Kessler ML. Mutual information based CT registration of the lung at exhale and inhale breathing states using thin-plate splines. Med Phys 2004;31:2942-8.
17. Lu W, Chen M, Olivera GH, Ruchala KJ, Mackie T. Fast free-form deformable registration via calculus of variations. Phys Med Biol 2004;49:3067-87.
18. Christensen GE, Carlson B, Chao KS, Yin P, Grigsby PW, Nguyen K, et al. Image-based dose planning of intracavitary brachytherapy: registration of serial-imaging studies using deformable anatomic templates. Int J Radiat Oncol Biol Phys 2001;51:227-43.
19. Thirion JP. Image matching as a diffusion process: an analogy with Maxwell's demons. Med Image Anal 1998;2:243-60.
20. Wang H, Dong L, Lii MF, Lee AL, de Crevoisier R, Mohan R, et al. Implementation and validation of a three-dimensional deformable registration algorithm for targeted prostate cancer radiotherapy. Int J Radiat Oncol Biol Phys 2005;61:725-35.
21. Brock KK, Sharpe MB, Dawson LA, Kim SM, Jaffray DA. Accuracy of finite element model (FEM)-based multi-organ deformable image registration. Med Phys 2005;32:1647-59.
22. Kessler ML, Pitluck S, Petti PL, Castro JR. Integration of multimodality imaging data for radiotherapy treatment planning. Int J Radiat Oncol Biol Phys 1991;21:1653-67.
23. Balter JM, Pelizzari CA, Chen GT. Correlation of projection radiographs in radiation therapy using open curve segments and points. Med Phys 1992;19:329-34.
24. Langmack KA. Portal imaging. Br J Radiol 2001;74:789-804.
25. Pelizzari CA, Chen GT, Spelbring DR, Weichselbaum RR. Accurate three-dimensional registration of CT, PET, and/or MR images of the brain. J Comput Assist Tomogr 1989;13:20-6.
26. van Herk M, Kooy HM. Automatic three-dimensional correlation of CT-CT, CT-MRI, and CT-SPECT using chamfer matching. Med Phys 1994;21:1163-78.
27. Kim J, Fessler JA. Intensity-based image registration using robust correlation coefficients. IEEE Trans Med Imaging 2004;23:1430-44.
28. Viola P, Wells WM. Alignment by maximization of mutual information. Int J Computer Vision 1997;137-54.
29. Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P. Multimodality image registration by maximization of mutual information. IEEE Trans Med Imaging 1997;16:187-98.
30. Roman S. Introduction to coding and information theory. Undergraduate Texts in Mathematics, ISBN 0-387-94704-3. New York, NY: Springer-Verlag, 1997.
31. Staring M, Klein S, Pluim JP. Nonrigid registration with adaptive, content-based filtering of the deformation field. Proc SPIE Medical Imaging 2005: Image Processing. 2005:212-21.
32. Ruan R, Fessler JA, Roberson M, Balter J, Kessler M. Nonrigid registration using regularization that accommodates for local tissue rigidity. Proc SPIE Medical Imaging, Vol 6144, 2006 (in press).
33. Graves EE, Pirzkall A, Nelson SJ, Larson D, Verhey L. Registration of magnetic resonance spectroscopic imaging to computed tomography for radiotherapy treatment planning. Med Phys 2001;28:2489-96.
34. Munley MT, Marks LB, Scarfone C, Sibley GS, Patz EF Jr, Turkington TG, et al. Multimodality nuclear medicine imaging in three-dimensional radiation treatment planning for lung cancer: challenges and prospects. Lung Cancer 1999;23:105-14.
35. Marks LB, Spencer DP, Bentel GC, et al. The utility of SPECT lung perfusion scans in minimizing and assessing the physiologic consequences of thoracic irradiation. Int J Radiat Oncol Biol Phys 1993;26:659-68.
36. Jaffray DA, Siewerdsen JH, Wong JW, Martinez AA. Flat-panel cone-beam computed tomography for image-guided radiation therapy. Int J Radiat Oncol Biol Phys 2002;53:1337-49.
37. Mackie TR, Kapatoes J, Ruchala K, Lu W, Wu C, Olivera G, et al. Image guidance for precise conformal radiotherapy. Int J Radiat Oncol Biol Phys 2003;56:89-105.
38. Smitsmans MH, Wolthaus JW, Artignan X, de Bois J, Jaffray DA, Lebesque JV, et al. Automatic localization of the prostate for on-line or off-line image-guided radiotherapy. Int J Radiat Oncol Biol Phys 2004;60:623-35.
39. Yan D, Wong J, Vicini F, Michalski J, Pan C, Frazier A, et al. Adaptive modification of treatment planning to minimize the deleterious effects of treatment setup errors. Int J Radiat Oncol Biol Phys 1997;38:197-206.
40. Yan D, Lockman D, Martinez A, Wong J, Brabbins D, Vicini F, et al. Computed tomography guided management of interfractional patient variation. Semin Radiat Oncol 2005;3:168-79.
41. Lam KL, Ten Haken RK, Litzenberg D, Balter JM, Pollock SM. An application of Bayesian statistical methods to adaptive radiotherapy. Phys Med Biol 2005;50:3849-58.
42. Rosu M, Chetty IJ, Balter JM, Kessler ML, McShan DL, Ten Haken RK. Dose reconstruction in deforming lung anatomy: dose grid size effects and clinical implications. Med Phys 2005;32:2487-95.