
The British Journal of Radiology, 79 (2006), S99–S108

Image registration and data fusion in radiation therapy


M L KESSLER, PhD

The University of Michigan, Ann Arbor, MI 48103, USA


ABSTRACT. This paper provides an overview of image registration and data fusion techniques used in radiation therapy, and examples of their use. They are used at all stages of the patient management process; for initial diagnosis and staging, during treatment planning and delivery, and after therapy to help monitor the patients' response to treatment. Most treatment planning systems now support some form of interactive or automated image registration and provide tools for mapping information, such as tissue outlines and computed dose, from one imaging study to another. To complement this, modern treatment delivery systems offer means for acquiring and registering 2D and 3D image data at the treatment unit to aid patient setup. Techniques for adapting and customizing treatments during the course of therapy using 3D and 4D anatomic and functional imaging data are currently being introduced into the clinic. These techniques require sophisticated image registration and data fusion technology to accumulate properly the delivered dose and to analyse possible physiological and anatomical changes during treatment. Finally, the correlation of radiological changes after therapy with delivered dose also requires the use of image registration and fusion techniques.

Received 7 February 2006. Revised 17 March 2006. Accepted 24 April 2006. DOI: 10.1259/bjr/70617164
© 2006 The British Institute of Radiology

Data from multiple anatomical and functional imaging studies have become important components of patient management in radiation therapy. From initial diagnosis to treatment planning and from delivery to monitoring the patient post-therapy, these data drive the decisions about how the patient is treated and help assess the progress and efficacy of therapy. While X-ray CT remains the primary imaging modality for most aspects of treatment planning and delivery, the use of data from other modalities such as MRI and MR spectroscopy (MRS) and positron/single photon emission tomography (PET/SPECT) is becoming increasingly prevalent and valuable, especially when taking advantage of highly conformal treatment techniques such as intensity-modulated radiotherapy [1–3]. These additional imaging studies provide complementary information to help elucidate the condition of the patient before, during and after treatment. The use of time-series image data to assess physiological motion for initial planning as well as anatomical and functional changes for possible treatment adaptation is becoming more widespread as diagnostic imaging devices produce quality 4D image data and as X-ray imaging systems are incorporated into the treatment room. In order to make use of the information from these multiple imaging studies in an integrated fashion, the data must be geometrically registered to a common coordinate system. This process is called image registration. Once different datasets are registered, information such as tissue boundaries, computed dose distributions and other image or image-derived information can be mapped between them and combined. This process is called data fusion. Figure 1a provides a simple example of these two processes. Numerous techniques exist for both image registration and data fusion. The choice and advantage of one technique over another depends on the particular application and types of image data involved. While exhaustive and detailed reviews of image registration algorithms have appeared in the literature [4], this paper is meant to provide a broad overview as well as examples of image registration and data fusion techniques that are employed in radiation therapy.

Image registration
The basic task of image registration is to compute the geometric transformation that maps the coordinates of corresponding or homologous points between two imaging studies. While there are many different techniques used to carry this out, most approaches involve the same three basic components. The first and main component is the transformation model itself, which can range from a single global linear transformation for handling rotations and translations (six degrees of freedom; three rotations and three translations) to a completely free form deformation model where the transformation is represented by independent displacement vectors for each voxel in the image data (degrees of freedom can reach three times the number of voxels). The second component is the metric used to measure how well the images are (or are not) registered, and the third component is the optimizer and optimization scheme used to bring the imaging data into alignment. It is also worth mentioning that these general components, the transformation model, which defines the degrees of freedom or parameters, the metric or cost function used to measure the worth of the registration and the optimization engine used to reach a final solution, are completely analogous to the components required by inverse treatment planning systems.
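The three components described above can be made concrete with a deliberately minimal sketch (not from the paper): a translation-only transformation model, a sum-of-squared-differences metric, and an exhaustive-search optimizer standing in for a real optimization scheme. The data and search range are invented for illustration.

```python
# Illustrative sketch only: the three generic components of a registration
# algorithm, reduced to 1D so each piece is visible.
import numpy as np

def transform(image, t):
    """Transformation model: a single translation parameter t (in samples)."""
    x = np.arange(image.size)
    return np.interp(x - t, x, image)   # shifted copy, linear interpolation

def metric(a, b):
    """Registration metric: sum of squared intensity differences."""
    return float(np.sum((a - b) ** 2))

def optimize(study_a, study_b, candidates):
    """Optimizer: exhaustive search over candidate translations."""
    return min(candidates, key=lambda t: metric(study_a, transform(study_b, t)))

# Study B is Study A shifted by 3 samples; registration should recover t = 3.
x = np.arange(50)
study_a = np.exp(-((x - 25.0) ** 2) / 20.0)
study_b = np.exp(-((x - 22.0) ** 2) / 20.0)
best_t = optimize(study_a, study_b, candidates=range(-10, 11))
```

Real systems differ only in scale: the transformation model may have millions of parameters, the metric may be mutual information, and the optimizer is typically gradient-based, but the division of labour is the same.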
S99

M L Kessler

Figure 1. Schematic of the image registration and data fusion processes. (a) Anatomical information from a spin-echo MR is first registered and then fused with functional information from a 11C-thymidine PET to create a synthetic MR-PET image volume. (b) General components of the registration process.

Although it is often desirable or necessary to register numerous imaging studies to each other, the process of registration is generally carried out by registering two datasets at a time. In radiation therapy, a common strategy is to register each of the imaging studies to the treatment planning CT, as it is used as the primary dataset for treatment planning and dose calculations. Transformations between studies that are not explicitly registered to each other can be easily derived by combining the appropriate transforms and inverse transforms between the different datasets and the planning CT. For the discussions that follow, the two datasets being registered are labelled Study A and Study B. Study A will be the base or reference dataset that is held fixed and Study B will be the homologous or moving dataset that is manipulated to be brought into geometric alignment with Study A. Study B′ will refer to the transformed or registered version of Study B (Figure 1b).
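The derivation of a transform between two studies that were each registered only to the planning CT can be sketched with homogeneous 4x4 matrices. The registrations below are invented numbers; the point is that mapping, say, PET coordinates into the MR frame requires only a matrix inverse and a product.

```python
# Sketch (assumption: rigid transforms, represented as 4x4 homogeneous
# matrices). If T_mr_to_ct maps MR -> planning CT and T_pet_to_ct maps
# PET -> planning CT, then inv(T_mr_to_ct) @ T_pet_to_ct maps PET -> MR
# without ever registering PET to MR directly.
import numpy as np

def rigid_matrix(theta_z_deg, tx, ty, tz):
    """Rotation about z followed by translation, as a homogeneous matrix."""
    th = np.radians(theta_z_deg)
    m = np.eye(4)
    m[0, 0], m[0, 1] = np.cos(th), -np.sin(th)
    m[1, 0], m[1, 1] = np.sin(th), np.cos(th)
    m[:3, 3] = (tx, ty, tz)
    return m

T_mr_to_ct = rigid_matrix(10.0, 5.0, -2.0, 0.0)   # hypothetical registrations
T_pet_to_ct = rigid_matrix(-4.0, 1.0, 3.0, 7.0)
T_pet_to_mr = np.linalg.inv(T_mr_to_ct) @ T_pet_to_ct

point_pet = np.array([10.0, 20.0, 30.0, 1.0])      # homogeneous coordinates
# Mapping PET -> MR directly must agree with mapping PET -> CT, then CT -> MR.
direct = T_pet_to_mr @ point_pet
via_ct = np.linalg.inv(T_mr_to_ct) @ (T_pet_to_ct @ point_pet)
```

The same composition idea applies to deformable transforms, although inverting those is considerably more involved than a matrix inverse.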

Transformation model
The transformation model chosen to describe the mapping of coordinates between two studies depends on the clinical site, the imaging conditions and the particular application. In the ideal case, where the patient is positioned in an identical orientation during the different imaging studies and the scale and centre of the imaging coordinate systems coincide, the transformation is a simple identity transform I, and xB = xA for all points in the two imaging studies. This situation most closely exists for the data produced by dual imaging modality devices such as PET-CT or SPECT-CT machines, especially if physiological motion is controlled or absent [6]. Naturally, it is common for the orientation of the patient to change between imaging studies, making more sophisticated transformations necessary. For situations involving the brain, where the position and orientation of the anatomy are defined by the rigid skull, a simple rotate-translate model can be accurately applied. In this case, a global linear transformation specified by three rotation angles (θx, θy, θz) and three translations (tx, ty, tz) can be used to map points from one image dataset to another. A more general linear transformation is an affine transform, which is a composition of rotations, translations, scaling (sx, sy, sz) and shearing (shx, shy, shz). A property of affine transformations is that they preserve collinearity (parallel lines remain parallel). Currently, the DICOM imaging standard uses affine transformations to specify the spatial relationship between two imaging studies [7]. Most commercial treatment planning systems only support image registration using affine transformations, although support for more sophisticated transformations should appear soon. The assumption of global rigid movement of anatomy is often violated, especially for sites other than the head and for large image volumes that extend to the body surface. Differences in patient setup (arms up versus arms down), organ filling and uncontrolled physiological motion confound the use of a single affine transform to register two imaging studies. In some cases where local rigid motion can be assumed, it may be possible to use a rigid or affine transformation to register sub-volumes of two imaging studies. For example, the prostate itself may be considered rigid, but it certainly moves relative to the pelvis, depending on the filling of the rectum and bladder. By considering only a limited field-of-view that includes just the region of the prostate, it is often possible to use an affine transformation to accurately register the prostate anatomy in two studies [8–10]. One or more subvolumes can be defined by simple geometric cropping or masks derived from one or more anatomical structures (Figure 2). Even with a limited field-of-view approach, there are many sites in which affine registration techniques are not sufficient to achieve acceptable alignment of anatomy. In these sites, an organ's size and shape may change as a result of normal organ behaviour or the motion of surrounding anatomy.
For example, the lungs change in both size and shape during the breathing cycle, and the shape of the liver can be affected by the filling of the stomach. When registering datasets that exhibit these kinds of changes, a non-rigid or deformable model must be used to accurately represent the transformation between studies. Deformable transformation models range in complexity from a simple extension of a global affine transformation using higher order polynomials with relatively few parameters, to a completely local or free form model


Figure 2. Various strategies for cropping image data to a limited field-of-view. (a) Simple geometric cropping. (b) Piecewise cropping. (c) Anatomically-based cropping.

where each point or voxel in the image volume can move independently and the number of parameters may reach three times the number of voxels considered. Between these two extremes are transformation models designed to handle various degrees of semi-local deformations using a moderate number of parameters, such as splines [11]. Global polynomials have been used successfully to model and remove image distortions in MR and other image data as a pre-processing step for image registration [12], but are not typically used for modelling deformation of anatomy because of undesirable oscillations that occur as the degree of the polynomial increases. Spline-based transformations, such as B-splines [11, 13], avoid this problem by building up the overall transformation, or deformation function, using a set of weighted basis functions defined over (or which contribute only over) a limited region. Figure 3 illustrates this approach for a one-dimensional cubic B-spline. The displacement or deformation, Δx, at a given point is computed as the weighted sum of basis functions centred at a series of locations called knots. Changing the weight or contribution w of each basis function affects only a specific portion of the overall deformation. By increasing the density of knots, more complex and localized deformations can be modelled. Another spline-based transformation, called thin-plate splines, uses a set of corresponding control points defined on both image datasets and minimizes a bending energy term to determine the transformation parameters [14–16]. Unlike B-splines, the location of each control point does have some amount of global influence, meaning that changing the position of a control point in one area will affect the entire deformation in some capacity. Using more points reduces the influence of each point, but this comes at a higher computational cost than with B-splines. Finally, free-form or non-parametric transformation models are represented using vector fields of the explicit displacements for a grid of points, usually at the voxel locations or an integer sub-sample of these (Figure 4). Algorithms for solving for the displacements with

Figure 3. B-spline deformation model. (a) 1D example of the cubic B-spline deformation model. The displacement Δx as a function of x is determined by the weighted sum of basis functions. The double arrow shows the region of the overall deformation affected by the weight factor w7. 3D deformations are constructed using 1D deformations for each dimension. (b) Multiresolution registration of lung data using B-splines. Both knot density and image resolution are varied during registration. This can help avoid local minima and decrease overall registration time.




Figure 4. Visualization of the deformation computed between datasets registered using (a) B-splines and (b) a fluid flow model. The deformation or displacement is known for every voxel but is only displayed for a subset of voxels for clarity ((b) image courtesy of Gustavo Olivera, University of Wisconsin).

non-parametric models use some form of local driving force to register the image data. Common models include fluid flow [17, 18], optical flow (based on intensity gradients) [19, 20] and finite element methods [21].
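The 1D cubic B-spline deformation model described above, and illustrated in Figure 3, can be sketched directly: the displacement at x is a weighted sum of shifted cubic B-spline basis functions centred on a regular grid of knots. The knot spacing and weights below are invented for illustration.

```python
# Sketch of a 1D cubic B-spline deformation. Each weight has only local
# influence: the basis function is nonzero only within two knot spacings.
import numpy as np

def cubic_bspline(u):
    """Cubic B-spline kernel, nonzero for |u| < 2."""
    u = abs(u)
    if u < 1.0:
        return (4.0 - 6.0 * u**2 + 3.0 * u**3) / 6.0
    if u < 2.0:
        return (2.0 - u) ** 3 / 6.0
    return 0.0

def displacement(x, knots, weights, spacing):
    """Deformation at x: sum_i w_i * B((x - k_i) / spacing)."""
    return sum(w * cubic_bspline((x - k) / spacing)
               for k, w in zip(knots, weights))

spacing = 10.0
knots = np.arange(0.0, 101.0, spacing)   # knots at x = 0, 10, ..., 100
weights = np.zeros_like(knots)
weights[5] = 2.0                          # perturb only the knot at x = 50

# Local support: the deformation vanishes more than two knot spacings
# away from the perturbed knot.
dx_near = displacement(50.0, knots, weights, spacing)
dx_far = displacement(90.0, knots, weights, spacing)
```

Refining the knot grid (halving the spacing) is exactly the multiresolution step shown in Figure 3b: it adds degrees of freedom so that more localized deformations can be represented.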

Registration metric
In most registration algorithms, the parameters of a transformation model which bring two datasets into geometric alignment are determined by maximizing or minimizing a registration metric which measures the similarity or dissimilarity of the two image datasets. Most registration metrics in use today can be classified as either geometry-based or intensity-based. Geometry-based metrics make use of features extracted from the image data, such as anatomic or artificial landmarks and organ boundaries, while intensity-based metrics use the image data directly.

Geometry-based metrics
The most common geometry-based registration metrics involve the use of points [22], lines [23, 24] or surfaces [22, 25, 26]. For point matching, the coordinates of pairs of corresponding points from Study A and Study B are used to define the registration metric. These points can be anatomic landmarks or implanted or externally-placed fiducial markers. The registration metric is defined as the sum of the squared distances between corresponding points. To compute the rotations and translations for a rigid transformation, a minimum of three pairs of points is required, and for affine transformations a minimum of four pairs of non-coplanar points is required. Using more pairs of points reduces the bias that errors in the delineation of any one pair of points have on the estimated transformation parameters. However, accurately identifying more than the minimum number of corresponding points can be difficult, as different modalities often produce different tissue contrasts (a major reason why multiple modalities are used in the first place), and placing or implanting larger numbers of markers is not always possible or desirable. Alternatively, line and surface matching techniques do not require a one-to-one correspondence of specific points, but rather try to maximize the overlap between corresponding lines and surfaces extracted from two image studies, such as the brain or skull surface or pelvic bones. These structures can be easily extracted using automated techniques and minor hand editing. As with defining pairs of points, it may be inherently difficult or time consuming to accurately delineate corresponding lines and surfaces in both imaging studies. Furthermore, since the extracted geometric features are surrogates for the entire image volume, any anatomic or machine-based distortions in the image data away from these features will not be taken into account during the registration process.

Intensity-based metrics
To overcome some of the limitations of using explicit geometric features to register image data, another class of registration metric has been developed which uses the numerical greyscale information directly to measure how well two studies are registered. These metrics are also referred to as similarity measures, since they determine the similarity between the distributions of corresponding voxel values from Study A and a transformed version of Study B. Several mathematical formulations are used to measure this similarity. The more common similarity measures in clinical use include sum-of-squared differences and cross-correlation [27] for registration of data from X-ray CT studies, and mutual information for registration of data from both similar and different imaging modalities [15, 28, 29]. The mutual information metric provides a measure of the information that is common between two datasets [30]. It is assumed that when two datasets are properly aligned, the mutual information of the pair is a maximum, which makes it an appropriate registration metric. It can be used for a wide range of image registration situations, since there is no dependence on the absolute intensity values and it is very robust to missing or limited data. For example, a tumour might show up clearly on an MR study but be indistinct on a corresponding CT study. Over the tumour volume the mutual information is low, but no prohibitive penalties are incurred. In the surrounding healthy tissue the mutual information can be high, and this becomes the dominant factor in the registration.
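The mutual information metric described in the section on intensity-based metrics is conventionally estimated from the joint intensity histogram of the two images: MI = sum over bins of p(a,b) log[ p(a,b) / (p(a) p(b)) ]. The following is a sketch of that standard formulation, not code from any clinical system; the images and bin count are invented for illustration.

```python
# Sketch: mutual information from a joint intensity histogram. A nonlinear
# intensity remapping (as between modalities) preserves statistical
# dependence, so MI stays high, while scrambling the voxels destroys it.
import numpy as np

def mutual_information(a, b, bins=8):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                    # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)         # marginal of image a
    p_b = p_ab.sum(axis=0, keepdims=True)         # marginal of image b
    nz = p_ab > 0                                  # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

rng = np.random.default_rng(0)
ct = rng.random((64, 64))                          # stand-in "CT" image
mr_like = np.sqrt(ct)                              # remapped intensities
scrambled = rng.permutation(mr_like.ravel()).reshape(64, 64)

mi_aligned = mutual_information(ct, mr_like)       # high: aligned, dependent
mi_scrambled = mutual_information(ct, scrambled)   # near zero: independent
```

Note that mi_aligned is high even though no voxel of mr_like equals the corresponding voxel of ct, which is exactly why the metric works across modalities where sum-of-squared differences would fail.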

Optimizer and registration scheme


Most image registration systems use optimization schemes such as gradient descent or problem specific adaptations of these. Registration of datasets is usually carried out in a hierarchical fashion, starting with downsized versions of the data and iteratively registering successively finer versions. The degrees of freedom of the geometric transformation can also be varied to speed the registration process. An example scheme might begin with simple translations, and then allow rotations, then low spatial frequency deformations and finally the full deformation model [12]. A hierarchical approach saves computation time and also helps avoid local minima, which become more likely as the degrees of freedom of the deformation model increase. For deformable image registration problems using a large number of degrees of freedom, some form of regularization may also be imposed to discourage unreasonable deformations such as warping of bones and folding of tissue. One approach to this problem is to filter the deformations between iterations of the optimization [31]. Another approach is to include a regularization term in the registration metric that penalizes non-physical deformations. The regularization term can even be made spatially variant using known or estimated tissue properties [32].
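The hierarchical coarse-to-fine scheme described above can be sketched in one dimension. This is an illustrative toy, with a translation-only model and exhaustive search standing in for gradient descent, not the optimizer of any particular system; the data and search windows are invented.

```python
# Sketch of coarse-to-fine registration: estimate the translation on a
# downsampled image first, then refine at full resolution over a small
# window around the coarse answer.
import numpy as np

def shift(img, t):
    x = np.arange(img.size)
    return np.interp(x - t, x, img)

def ssd(a, b):
    return float(np.sum((a - b) ** 2))

def best_shift(a, b, candidates):
    return min(candidates, key=lambda t: ssd(a, shift(b, t)))

x = np.arange(200)
study_a = np.exp(-((x - 100.0) ** 2) / 50.0)
study_b = np.exp(-((x - 88.0) ** 2) / 50.0)   # true shift is 12 samples

# Coarse pass at quarter resolution: a wide search is cheap here.
coarse = best_shift(study_a[::4], study_b[::4], range(-10, 11))
# Fine pass at full resolution: only a small window around 4 * coarse.
fine = best_shift(study_a, study_b, range(4 * coarse - 4, 4 * coarse + 5))
```

The fine pass evaluates nine candidates instead of eighty-odd, which is the computational saving; the coarse pass also smooths away small-scale structure, which is what helps avoid local minima.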

Data fusion
The motivation for registering imaging studies is to be able to map information derived from one study to another, or to directly combine or fuse the imaging data from the studies to create displays that contain relevant features from each modality. For example, a tumour volume may be more clearly visualized using a specific MR image sequence or coronal image plane rather than the axial treatment planning CT. If the geometric transformation between the MR study and the treatment planning CT study is known, the clinician is able to outline the tumour using images from the MR study and

map these outlines to the images of the CT study. This process is called structure mapping (Figure 5). Another approach to combining information from different imaging studies is to map directly the image intensity data from one study to another so that at each voxel there are two (or more) intensity values rather than one. The goal is to create a version of Study B (Study B′) with images that match the size, location and orientation of those in Study A. These corresponding images can then be combined or fused in various ways to help elucidate the relationship between the data from the two studies. Various relevant displays are possible using this multistudy data. For example, functional information from a PET imaging study can be merged with the anatomic information from an MRI study and displayed as a colourwash overlay (Figure 6). This type of image synthesis is referred to as image fusion. A variety of techniques exist to present fused data, including the use of overlays, pseudo-colouring and modified greyscales. For example, the hard bone features of a CT imaging study can be combined with the soft tissue features of an MRI study by adding the bone extracted from the CT to the MR dataset. Another method is to display anatomic planes in a side-by-side fashion (Figure 6). Such a presentation allows structures to be defined using both images simultaneously. In addition to mapping and fusing image intensities, 3D dose distributions computed in the coordinate system of one imaging study can be mapped to another. For example, doses computed using the treatment planning CT can be displayed over an MR study acquired after the start of therapy. With these data, regions of posttreatment radiological abnormality can be readily compared with the planned doses for the regions. With the introduction of volumetric imaging on the treatment units, treatment delivery CT studies can now be acquired to determine more accurately the actual doses delivered.
By acquiring these studies over the course of therapy and registering them to a common reference frame, doses for the representative treatments can be reformatted and accumulated to provide a more likely estimate of the delivered dose. This type of data can be used as input into the adaptive radiotherapy decision process.

Figure 5. Structure mapping. A tumour volume is outlined by the clinician on an MR study and then mapped to the treatment planning CT using the computed transformation.

Figure 6. Different approaches to display data from multiple studies which have been registered and reformatted. (a) Side-by-side display with linked cursor. (b) Split screen display. (c) Colourwash overlay.
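The colourwash overlay described in the data fusion discussion amounts to alpha-blending a greyscale anatomical image with a colour-mapped functional image. The arrays, colour map and blending weight below are invented for illustration.

```python
# Sketch of a colourwash fusion display: a greyscale "MR" image blended with
# a colour-mapped "PET" image. Values are assumed normalized to [0, 1].
import numpy as np

def colourwash(grey, functional, alpha=0.4):
    """Blend: (1 - alpha) * grey as RGB + alpha * colour-mapped functional."""
    grey_rgb = np.stack([grey] * 3, axis=-1)
    colour = np.zeros_like(grey_rgb)
    colour[..., 0] = functional            # crude "hot" map: red channel,
    colour[..., 1] = functional ** 2       # with green appearing at high values
    return (1.0 - alpha) * grey_rgb + alpha * colour

mr = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # toy anatomical image
pet = np.zeros((4, 4))
pet[1:3, 1:3] = 1.0                             # hypothetical hot spot
fused = colourwash(mr, pet)                     # RGB image for display
```

Within the hot spot the red channel dominates while the underlying greyscale anatomy remains visible through the (1 - alpha) term, which is the point of the display.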

Validation
It is important to validate the results of a registration before making clinical decisions based on the results. To do this, most image registration systems provide some combination of numerical and visual verification tools. A common numerical evaluation technique is to define a set of landmarks for corresponding anatomic points on Study A and Study B and compute the distance between the actual location of the points defined on Study A and the resulting transformed locations of the points from Study B. This calculation is similar to a point matching metric but, as discussed earlier, it may be difficult to accurately and sufficiently define the appropriate corresponding points, especially when registering multimodality data. Also, if deformations are involved, the evaluation is not valid for regions remote from the defined points.
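The numerical check described above amounts to applying the computed transform to the landmarks defined on Study B and reporting the residual distance to the corresponding landmarks on Study A. A minimal sketch, with an invented rigid transform and illustrative coordinates (units arbitrary, e.g. mm):

```python
# Sketch of landmark-based validation: per-landmark residual distances
# between Study A points and transformed Study B points.
import numpy as np

def landmark_errors(points_a, points_b, transform):
    """Distance between each Study A point and its transformed Study B mate."""
    mapped = np.array([transform(p) for p in points_b])
    return np.linalg.norm(points_a - mapped, axis=1)

# Hypothetical computed registration: 90 degree rotation about z plus a shift.
def computed_transform(p):
    x, y, z = p
    return np.array([-y + 1.0, x - 2.0, z + 0.5])

points_b = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 5.0]])
points_a = np.array([computed_transform(p) for p in points_b])
points_a[2] += [0.0, 1.0, 0.0]          # simulate a 1 mm landmark error

errors = landmark_errors(points_a, points_b, computed_transform)
```

As noted in the text, such numbers are only trustworthy near the landmarks themselves, especially when the transform is deformable.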

Regardless of the output of any numerical technique used, which may only be a single number, it is important for the clinician to appreciate how well in three dimensions the information they define on one study is mapped to another. There are many possible visualization techniques to help to evaluate qualitatively the results of a registration. Most of these are based on data mapping and fusion display techniques. For example, paging through the images of a split screen display and moving a horizontal or vertical divider across regions where edges of structures from both studies are visible can help uncover even small areas of misregistration (Figure 7). Another interesting visual technique involves dynamically switching back and forth between corresponding images from the different studies at about once per second and focusing on particular regions of the anatomy to observe how well they are aligned. In addition to comparing how well the images from Study A and Study B correspond at the periphery of anatomic tissues and organs, outlines from one study can be displayed over the images of the other. Figure 8 shows a brain surface which was automatically segmented

Figure 7. Image-image visual validation using split screen displays of a native MR and a reformatted CT study.


Figure 8. Image-geometry visual validation: structure overlay of CT-defined brain outlines over MR images.

from the treatment planning CT study and mapped to the MR study. The agreement between the CT-based outlines at the different levels and planes of the MR study demonstrates the accuracy of the registration. In practice, the accuracy of the registration process depends on a number of factors. For multimodality registration of PET/CT/MR data in the brain, registration accuracy on the order of the voxel size of the imaging studies can be achieved. Outside the head, many factors confound single-voxel-level accuracy, such as machine-induced geometric and intensity distortions as well as dramatic changes in anatomy and tissue loss or gain. Nevertheless, accuracy at the level of a few voxels is certainly possible in many situations.

Clinical applications
Image registration and data fusion are useful at each step of the patient management process in radiation therapy; for initial diagnosis and staging, during treatment planning and delivery, and after therapy to help monitor the patient's response. The overall purpose of these tools at each stage is the same: to help to integrate the information from different imaging studies in a quantitative manner to create a more complete representation of the patient. Over the past several years, treatment planning and treatment delivery systems have evolved to provide direct support for image registration and data fusion. Typical examples of how these techniques are used for treatment planning, delivery and adaptation are described here.

Treatment planning
Most modern treatment planning systems permit the use of one or more datasets in addition to the treatment planning CT for structure delineation and visualization. These are sometimes referred to as secondary datasets. In order to transfer anatomic outlines and other

geometric information from these datasets to the planning CT, the transformation between the secondary dataset and the planning CT is required. Furthermore, using the inverse of this transformation, it is also possible to transfer information computed using the planning CT, such as the planned dose, to the secondary dataset. Incorporation of secondary or complementary data from MRI and nuclear medicine imaging studies is becoming increasingly common. MR provides superior soft tissue contrast relative to CT, and the ability to image directly along arbitrary planes can aid in the visualization and delineation of certain anatomic structures, such as the optic nerves and chiasm. MR can also provide information on localized metabolite concentrations using spectroscopy [3, 33]. Incorporation of functional information from PET and SPECT can help remove ambiguities that might exist on the treatment planning CT between the tumour and other conditions such as atelectasis and necrosis [34]. These studies can also indicate nodal involvement and provide a map of local tissue function that can be used to construct objective functions for dose optimization [3, 35]. Figure 9 illustrates the use of MR as a secondary dataset for target and normal structure delineation. An axial and a coronal MR study were acquired and registered to the treatment planning CT using a geometric transformation which allowed rotations and translations, as the anatomy in this region moves in a rigid fashion. Since the image data were from different modalities, the mutual information registration metric was used. Split-screen visualization of the registered datasets was used to validate the computed transformation, which was judged to be accurate to within 1–2 mm over the image volume (Figure 7). The gross tumour volume (GTV) was defined as the region of enhancement in the post-Gd-DTPA contrast MR studies. The clinician outlined this volume on both the axial and coronal sets of MR images.
The optic nerves and chiasm were outlined on the coronal MR study. The outlines were used to generate a 3D surface description for each tissue and these were mapped to the coordinate system of the planning CT


Figure 9. Incorporation of MR image data into the treatment planning process.

using the computed transformation. The outlines of these mapped surfaces for each CT image were derived by intersecting the transformed surfaces along the planes defined by each image (Figure 5). Because of differences in partial volume averaging between the axial and coronal MR images, the outlines derived from the axial and coronal MR data are not identical. In these cases, the clinician has the choice to use one or the other outline or to generate a composite outline using a Boolean OR operation. For this example, the CT greyscale data did not contribute any information for the definition of the GTV, optic nerves, or optic chiasm. Had the physician outlined a CT-based GTV, it could have been incorporated directly into the composite GTV or compared with the MR-based outlines to reconcile potential ambiguities. At this point, the outlines of the tumour and normal structures were used for treatment planning as if they were derived using only the planning CT. The final planning target volume (PTV) was created by uniformly expanding the composite GTV surface by 5 mm to account for setup uncertainty. A treatment plan and dose distribution were generated using the CT data, PTV and normal structures. The CT-based dose distribution was then mapped back to the MR study for further visualization of the dose relative to the underlying anatomy.
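The final step, mapping the CT-computed dose onto another study, is a resampling: for each voxel of the target grid, apply the transformation to find the corresponding planning-CT voxel and pull the dose value. The sketch below reduces this to a 2D grid and a pure-translation transform with nearest-voxel lookup; grids, offsets and dose values are all illustrative.

```python
# Sketch of dose mapping by resampling. Real systems use the full (possibly
# deformable) transform and trilinear interpolation; a translation and
# nearest-voxel lookup keep the idea visible.
import numpy as np

def resample_translated(dose, dr, dc):
    """Pull dose values from the source (CT) grid into the target (MR) grid."""
    out = np.zeros_like(dose)
    n_r, n_c = dose.shape
    for r in range(n_r):
        for c in range(n_c):
            sr, sc = r + dr, c + dc          # corresponding CT voxel
            if 0 <= sr < n_r and 0 <= sc < n_c:
                out[r, c] = dose[sr, sc]     # outside the CT grid stays zero
    return out

dose_ct = np.zeros((20, 20))
dose_ct[8:12, 8:12] = 60.0                    # hypothetical 60 Gy region

# Toy transform: the MR frame is offset from the CT frame by (3, 5) voxels.
dose_on_mr = resample_translated(dose_ct, 3, 5)
```

Note the pull direction: the loop runs over target voxels and maps them back into the source, which guarantees every target voxel receives a value.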

Treatment delivery
Once a treatment plan is created, it is transferred to the treatment unit for delivery. The location and orientation of the patient on the treatment machine must be adjusted so that the centre and orientation of the coordinate system of the treatment plan coincide with those of the treatment unit. Image registration is typically used to carry out this process using images acquired in the treatment room and the planning CT. The most common practice is to generate a pair of orthogonal digitally reconstructed radiographs (DRRs) from the planning CT and register these simulated radiographs with actual radiographs acquired by a flat-panel imager attached to the treatment unit. It is now also possible to acquire volumetric image data at the treatment unit using cone-beam reconstruction of a set of projection images acquired by rotating the treatment gantry around the patient (see papers by Kirby and Glendinning, Moore et al and Chen and Pouliot in this issue). These cone-beam data can be registered directly with the planning CT to determine how to shift (and possibly rotate) the treatment table to properly position the patient for treatment [36, 37]. Figure 10 shows an example of an interface for 3D image-based alignment at the treatment unit using a cone-beam CT dataset and the planning CT. Automated image registration using successively finer data resolution and mutual information is used to determine the rotations and translations required to align the two datasets. These are then translated into machine parameters which can be automatically downloaded to the treatment unit and set up. In this example, the accuracy of the registration is assessed using both image-image and structure-image overlay displays. These same tools and image data are also available off-line so that the clinician can track and analyse the progress of the treatments.

Treatment adaptation and customization
On-line imaging has made it more convenient to acquire image data of the patient over the course of treatment. Using these data, it is possible to uncover changes in patient anatomy or treatment setup that are significant and dictate changes to the original treatment plan. Better estimates of individual treatment doses can be computed using these data and the actual machine parameters. By registering these data to the base treatment planning CT, it is possible to construct a more complete model of the accumulated dose to the patient. This information can then be used to assess if and how a treatment plan should be adapted or further customized [38–41]. Figure 11 shows an example of dose accumulation for two datasets of the patient at different points in the breathing cycle. The dose distribution displayed on the left image was computed directly using the image dataset shown [42]. The dose distribution displayed on

Image registration and data fusion in radiation therapy

Figure 10. Volumetric registration


at the treatment unit. A cone-beam CT acquired at the time of treatment is registered to the treatment planning CT (larger dataset) to properly position the patient on the treatment table (courtesy of Peter Monroe, PhD, Varian Medical Systems).

the middle image was computed from another dataset and mapped to the image data shown using the B-spline deformation field computed by registering the two datasets using a sum-of-squares difference metric. The dose distribution on the right is the weighted sum of the two distributions. This process can be continued throughout the course of therapy to provide up-to-date information on the delivered dose.
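The dose mapping and weighted summation just described can be illustrated with a small sketch. This is not the implementation used in [42]: the function names are hypothetical, the B-spline deformation field is assumed to have been evaluated beforehand into per-voxel displacements, and nearest-neighbour resampling is used for brevity where a clinical system would interpolate.

```python
import numpy as np

def map_dose(dose_src, deform_vox):
    """Pull a dose grid computed on another anatomy back onto the
    reference grid through a per-voxel displacement field.
    deform_vox has shape (3,) + grid, with displacements in voxel
    units; nearest-neighbour sampling is used for brevity."""
    idx = np.indices(dose_src.shape).astype(float)
    src = np.rint(idx + deform_vox).astype(int)
    for ax, n in enumerate(dose_src.shape):
        src[ax] = np.clip(src[ax], 0, n - 1)  # clamp at grid edges
    return dose_src[tuple(src)]

def accumulate(dose_ref, dose_mapped, w_ref, w_other):
    """Weighted sum of the dose computed on the reference anatomy and
    the dose mapped from another breathing phase (weights might, for
    example, reflect the fraction of the cycle spent in each state)."""
    return w_ref * dose_ref + w_other * dose_mapped
```

Repeating `map_dose` and `accumulate` for each new dataset acquired during the course of therapy yields the running estimate of accumulated dose described above.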

Summary

Over the past several years there has been an explosion in the use of image data from a variety of modalities to aid treatment planning, delivery and evaluation. To make quantitative use of these data, it is necessary to determine the transformation that relates the coordinates of the individual datasets to one another. The process of finding this transformation is referred to as image registration. Once the geometric relationship between the datasets has been determined, it is possible to utilize the information they provide by mapping the image data, derived structures and computed dose distributions between datasets, a process called data fusion. Many different techniques have been, and are being, studied to improve the accuracy and utility of both image registration and data fusion, and both processes are now essential components of modern treatment planning and delivery. As the need for, availability and diversity of image data continue to increase, they will become even more important to every part of the patient management process.

These tools, however, cannot replace clinical judgement. Different imaging modalities image the same tissues differently and, although these tools may help us to understand better and differentiate between tumour and non-tumour, they cannot yet make the ultimate decision of what to treat and what not to treat. These decisions still lie with the clinician, who now has more sophisticated tools to support them.
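As a concrete illustration of one of the registration measures discussed in this review, mutual information between two spatially aligned images can be estimated from their joint intensity histogram. This is a minimal sketch (the function name and bin count are arbitrary choices, not taken from any cited implementation):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two aligned images:
    MI = sum_xy p(x,y) * log( p(x,y) / (p(x) p(y)) )."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A registration algorithm would evaluate such a measure repeatedly while adjusting the transform parameters, seeking the alignment that maximizes it; an image is maximally informative about itself, so misalignment or shuffling lowers the score.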

Acknowledgments
Portions of the text and some of the figures presented here were published previously in Kessler ML, Roberson M. Image registration and data fusion for radiotherapy treatment planning. In: Schlegel W, Bortfeld T, Grosu A-L, editors. New technologies in radiation oncology. Springer, 2006.

Figure 11. Example of dose summation/accumulation using registered datasets (courtesy of Mihaela Rosu, The University of Michigan).

The British Journal of Radiology, Special Issue 2006



References
1. Webb S. The physical basis of IMRT and inverse planning. Br J Radiol 2003;76:678-89.
2. Eisbruch A. Intensity-modulated radiation therapy: a clinical perspective. Semin Radiat Oncol 2002;12:197-8.
3. Ling CC, Humm J, Larson S, Amols H, Fuks Z, Leibel S, et al. Towards multidimensional radiotherapy (MD-CRT): biological imaging and biological conformality. Int J Radiat Oncol Biol Phys 2000;47:551-60.
4. Maintz JB, Viergever MA. A survey of medical image registration. Med Image Anal 1998;2:1-36.
5. Hill DL, Batchelor PG, Holden M, Hawkes DJ. Medical image registration. Phys Med Biol 2001;46:R1-R45.
6. Townsend DW, Beyer T. A combined PET/CT scanner: the path to true image fusion. Br J Radiol 2002;75:24S-30S.
7. DICOM Part 3, PS3.3 Service Class Specifications. National Electrical Manufacturers Association, Rosslyn, Virginia, USA, 2004.
8. McLaughlin PW, Narayana V, Meriowitz A, Troyer S, Roberson PL, Gonda R Jr, et al. Vessel sparing prostate radiotherapy: dose limitation to critical erectile vascular structures (internal pudendal artery and corpus cavernosum) defined by MRI. Int J Radiat Oncol Biol Phys 2005;61:20-31.
9. McLaughlin PW, Narayana V, Kessler M, McShan D, Troyer S, Marsh L, et al. The use of mutual information in registration of CT and MRI datasets post permanent implant. Brachytherapy 2004;3:61-70.
10. Roberson PL, McLaughlin PW, Narayana V, Troyer S, Hixson GV, Kessler ML. Use and uncertainties of mutual information for CT/MR registration post permanent implant of the prostate. Med Phys 2005;32:473-82.
11. Unser M, Aldroubi A, Eden M. B-spline signal processing: Part I, theory. IEEE Trans Signal Processing 1993;41:821-33.
12. Kybic J, Unser M. Fast parametric elastic image registration. IEEE Trans Image Processing 2003;12:1427-42.
13. Maurer CR, Aboutanos GB, Dawant BM, et al. Effect of geometrical distortion correction in MR on image registration accuracy. J Comput Assist Tomogr 1996;20:666-79.
14. Bookstein F. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Trans Pattern Analysis Machine Intelligence 1989;11:567-85.
15. Meyer CR, Boes JL, Kim B, Bland PH, Zasadny KR, Kison PV, et al. Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations. Med Image Anal 1997;1:195-206.
16. Coselmon MM, Balter JM, McShan DL, Kessler ML. Mutual information based CT registration of the lung at exhale and inhale breathing states using thin-plate splines. Med Phys 2004;31:2942-8.
17. Lu W, Chen M, Olivera GH, Ruchala KJ, Mackie T. Fast free-form deformable registration via calculus of variations. Phys Med Biol 2004;49:3067-87.
18. Christensen GE, Carlson B, Chao KS, Yin P, Grigsby PW, Nguyen K, et al. Image-based dose planning of intracavitary brachytherapy: registration of serial-imaging studies using deformable anatomic templates. Int J Radiat Oncol Biol Phys 2001;51:227-43.
19. Thirion JP. Image matching as a diffusion process: an analogy with Maxwell's demons. Med Image Anal 1998;2:243-60.
20. Wang H, Dong L, Lii MF, Lee AL, de Crevoisier R, Mohan R, et al. Implementation and validation of a three-dimensional deformable registration algorithm for targeted prostate cancer radiotherapy. Int J Radiat Oncol Biol Phys 2005;61:725-35.
21. Brock KK, Sharpe MB, Dawson LA, Kim SM, Jaffray DA. Accuracy of finite element model (FEM)-based multi-organ deformable image registration. Med Phys 2005;32:1647-59.
22. Kessler ML, Pitluck S, Petti PL, Castro JR. Integration of multimodality imaging data for radiotherapy treatment planning. Int J Radiat Oncol Biol Phys 1991;21:1653-67.
23. Balter JM, Pelizzari CA, Chen GT. Correlation of projection radiographs in radiation therapy using open curve segments and points. Med Phys 1992;19:329-34.
24. Langmack KA. Portal imaging. Br J Radiol 2001;74:789-804.
25. Pelizzari CA, Chen GT, Spelbring DR, Weichselbaum RR. Accurate three-dimensional registration of CT, PET, and/or MR images of the brain. J Comput Assist Tomogr 1989;13:20-6.
26. van Herk M, Kooy HM. Automatic three-dimensional correlation of CT-CT, CT-MRI, and CT-SPECT using chamfer matching. Med Phys 1994;21:1163-78.
27. Kim J, Fessler JA. Intensity-based image registration using robust correlation coefficients. IEEE Trans Med Imaging 2004;23:1430-44.
28. Viola P, Wells WM. Alignment by maximization of mutual information. Int J Computer Vision 1997;24:137-54.
29. Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P. Multimodality image registration by maximization of mutual information. IEEE Trans Med Imaging 1997;16:187-98.
30. Roman S. Introduction to coding and information theory. Undergraduate Texts in Mathematics. New York, NY: Springer-Verlag, 1997. ISBN 0-387-94704-3.
31. Staring M, Klein S, Pluim JP. Nonrigid registration with adaptive, content-based filtering of the deformation field. Proc SPIE Medical Imaging 2005: Image Processing. 2005:212-21.
32. Ruan R, Fessler JA, Roberson M, Balter J, Kessler M. Nonrigid registration using regularization that accommodates for local tissue rigidity. Proc SPIE Medical Imaging, Vol. 6144, 2006 (in press).
33. Graves EE, Pirzkall A, Nelson SJ, Larson D, Verhey L. Registration of magnetic resonance spectroscopic imaging to computed tomography for radiotherapy treatment planning. Med Phys 2001;28:2489-96.
34. Munley MT, Marks LB, Scarfone C, Sibley GS, Patz EF Jr, Turkington TG, et al. Multimodality nuclear medicine imaging in three-dimensional radiation treatment planning for lung cancer: challenges and prospects. Lung Cancer 1999;23:105-14.
35. Marks LB, Spencer DP, Bentel GC, et al. The utility of SPECT lung perfusion scans in minimizing and assessing the physiologic consequences of thoracic irradiation. Int J Radiat Oncol Biol Phys 1993;26:659-68.
36. Jaffray DA, Siewerdsen JH, Wong JW, Martinez AA. Flat-panel cone-beam computed tomography for image-guided radiation therapy. Int J Radiat Oncol Biol Phys 2002;53:1337-49.
37. Mackie TR, Kapatoes J, Ruchala K, Lu W, Wu C, Olivera G, et al. Image guidance for precise conformal radiotherapy. Int J Radiat Oncol Biol Phys 2003;56:89-105.
38. Smitsmans MH, Wolthaus JW, Artignan X, de Bois J, Jaffray DA, Lebesque JV, et al. Automatic localization of the prostate for on-line or off-line image-guided radiotherapy. Int J Radiat Oncol Biol Phys 2004;60:623-35.
39. Yan D, Wong J, Vicini F, Michalski J, Pan C, Frazier A, et al. Adaptive modification of treatment planning to minimize the deleterious effects of treatment setup errors. Int J Radiat Oncol Biol Phys 1997;38:197-206.
40. Yan D, Lockman D, Martinez A, Wong J, Brabbins D, Vicini F, et al. Computed tomography guided management of interfractional patient variation. Semin Radiat Oncol 2005;15:168-79.
41. Lam KL, Ten Haken RK, Litzenberg D, Balter JM, Pollock SM. An application of Bayesian statistical methods to adaptive radiotherapy. Phys Med Biol 2005;50:3849-58.
42. Rosu M, Chetty IJ, Balter JM, Kessler ML, McShan DL, Ten Haken RK. Dose reconstruction in deforming lung anatomy: dose grid size effects and clinical implications. Med Phys 2005;32:2487-95.

