
Pattern Recognition Letters 30 (2009) 414–420


Automatic localization of the center of fiducial markers in 3D CT/MRI images for image-guided neurosurgery

Manning Wang, Zhijian Song *

Digital Medical Research Center, Shanghai Medical School, Fudan University, P.O. Box 251, 138 Yixueyuan Road, Shanghai 200032, People's Republic of China

Article info

Article history:
Received 14 September 2007
Received in revised form 3 October 2008
Available online 12 November 2008

Communicated by W. Zhao

Keywords:
Marker center localization
Image-guided neurosurgery
Image processing
Registration

Abstract

Image-guided neurosurgery systems (IGNS) play an important role in intracranial surgery. Adhesive markers on the skin are widely used for patient-to-image registration, with the centers of these markers serving as fiducial points in point-pair registration. In this paper, we propose a novel algorithm to automatically locate the center of these markers. The algorithm compares the marker model and the surface patches from Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images of the patient's head. Automatic localization of these centers will help to reduce the human error in registration and speed up the registration process. Experiments with clinical 3D CT and MRI data confirm that this algorithm can accurately locate the centers of these markers.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

Image-guided neurosurgery systems (IGNS) have become an important tool for planning and performing intracranial surgery, due to their ability to make surgery more effective and less invasive (Gumprecht et al., 1999; Haase, 1999; Raabe et al., 2003; Peters, 2006). In IGNS, patient-specific Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) images are used to guide the surgical operation. This guidance is realized by tracking the surgical instruments acting on the patient and visualizing them on the images. Registration, defined as the alignment of different coordinate systems, is an essential step in IGNS that determines the relationship between the patient space and the image space. The first registration device, known as stereotaxy technology (Al-Rodhan and Kelly, 1992), was introduced approximately 100 years ago. With this technology, a frame, which works as a reference for determining the relationship between the images and the patient, is attached to the patient's head during image acquisition and surgery. The disadvantage of this technology is that the frame is invasive to the patient and also hampers the surgical process. Substantial progress in computer technology and the invention of spatial tracking devices have given rise to image-guided neurosurgery based on a frameless stereotactic system involving space registration and instrument tracking.

* Corresponding author. Tel.: +86 21 54237054; fax: +86 21 54237797. E-mail addresses: [email protected] (M. Wang), [email protected] (Z. Song).
0167-8655/$ - see front matter © 2008 Elsevier B.V. All rights reserved.
doi:10.1016/j.patrec.2008.11.001

In a frameless stereotactic system, some common features are utilized to register the patient space with the image space. These features can be extracted and recorded in the image space as well as the patient space, and a rigid 3D transformation between these two spaces is calculated by matching these features. The technique currently used for patient-to-image registration in IGNS is based on either point-pair matching or surface matching (Eggers et al., 2006).

In point-pair matching, a set of at least three points is first selected in the image space, and then the corresponding points are recorded by touching them with a tracked pointing device in the patient space. Analytical methods have been developed to calculate the relationship between the two spaces from point pairs (Eggert et al., 1997). Three kinds of points can be used for this kind of registration in IGNS: bone-implanted screws, anatomical landmarks, and adhesive markers on the skin. Although point-pair registration using bone-implanted screws yields the highest accuracy, it is often avoided in practice because of its invasiveness. Anatomical landmarks provide only limited registration accuracy, according to some research, because of the indefinite position of superficial landmarks and the difficulty of accessing distinct bone landmarks at the site of surgery (Fright and Linney, 1997; Germano et al., 1999; Helm and Eckel, 1998).

Unlike point-pair matching, surface matching attempts to align the contour of the real patient with the corresponding surface extracted from the images. The Iterative Closest Point (ICP) method introduced by Chen and Medioni (1992) and Besl and McKay (1992) is the most common method used for correlating two surfaces, and a comprehensive survey of different extensions of ICP has been published (Rusinkiewicz and Levoy, 2001). Regions used for surface matching usually include the forehead, nose, and areas around the orbits. Since these regions are concentrated in the anterior part of the head, registration error in the occipital region is often large. Another problem with surface matching is the residual rotational error, which may be small near the registration surface but become noticeable in regions distal from the surface (Marmulla et al., 2006; Raabe et al., 2002). Eggers et al. (2006) have reviewed the techniques used in patient-to-image registration and have listed the advantages and disadvantages of each approach. Recent clinical research reported by Woerdeman et al. (2007) showed that point-pair registration based on adhesive markers on the skin provides much higher accuracy than point-pair registration based on anatomical landmarks or registration by surface matching. Thus, point-pair registration based on adhesive markers on the skin remains a widely used method in IGNS.

In point-pair registration based on adhesive markers on the skin, it is important that a clearly recognizable fiducial point be defined both in the images and on the patient. In current IGNS, all of the fiducial points in images are selected manually, and the resulting selection error may jeopardize the registration accuracy. In addition, the accurate selection of fiducial points in IGNS is time-consuming and difficult to teach to new users. Automatic and accurate localization of the fiducial points from adhesive markers on the skin can improve the accuracy of registration as well as speed up the registration process and the training of new users.

Some research has examined automatic detection of markers using 2D image processing techniques. Wang et al. (1996) described a two-step knowledge-based method to localize the centroids of implanted markers. Their method is limited to cylindrical markers, and the centroid of the marker is used as a fiducial point for CT and MRI image registration. When they tested their technique, the authors found a false marker rate of zero for CT images and 1.4% (two of 144) for MRI images. The average localization error was approximately 0.4 mm. Nevertheless, this method is not suitable for image-guided neurosurgery because the centroid of a cylindrical marker is inside the marker and untouchable, while the surgeon needs to touch the fiducial point on the patient during patient-to-image registration. Chen et al. (2006) presented a method based on edge map construction and curvature-based object detection to localize the center of the marker, but their method localizes markers in 2D slices instead of a 3D volume. Tan et al. (2006) proposed an approach to detect the markers in a 3D volume using a set of 2D templates. They also applied this method to image registration, reporting the successful detection of 429 of 430 markers. However, they did not describe how to localize the center of the marker, which is actually used as the fiducial point for patient-to-image registration. Gu and Peters (2004) designed a method based on 3D morphological operations, with an average localization error of 0.37 mm for simulated data sets and 0.31 mm for a CT phantom. However, they did not test their method on clinical data, which are generally much noisier.
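Once the marker centers are known in both spaces, the analytical point-pair solutions mentioned above (the family compared by Eggert et al., 1997) recover the rigid transformation in closed form. The following is a minimal sketch of one such solution, the SVD-based least-squares method, on toy data of our own; it is an illustration, not the implementation of any cited system.

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (R, t) with dst ~ R @ src + t.

    src, dst: (N, 3) arrays of corresponding fiducial points, N >= 3,
    solved via the SVD of the cross-covariance matrix (Kabsch-style).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy example: four image-space fiducials and their patient-space positions
img_pts = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
ang = np.deg2rad(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
pat_pts = img_pts @ R_true.T + np.array([5.0, -2.0, 1.0])

R, t = rigid_transform(img_pts, pat_pts)
print(np.allclose(img_pts @ R.T + t, pat_pts))  # True
```

With noise-free pairs the transform is recovered exactly; with noisy fiducials the same formula gives the least-squares optimum, which is why at least three non-collinear points are required.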

Fig. 1. Adhesive marker on the skin: (a) Photograph of the marker; (b) photograph of markers adhering to the patient’s head; (c) visualization of markers and the patient’s
head from CT volume data; (d) the projection height image of the marker model.

In this paper, we propose a new automatic method that uses the 3D shape of an adhesive marker on the skin to localize its center in 3D CT and MRI images. Experiments with clinical data show that this method is feasible and accurate. We present our approach in the Methods section. In Section 3, we show the results of experiments performed on actual clinical data sets, including both CT and MRI images. Finally, we conclude this paper with a discussion in Section 4.

2. Methods

2.1. Overview

Our method compares the marker model with the surface patch of the patient's head segmented from 3D CT or MRI images. The model can be generated from the physical dimensions of the marker. In this paper, we use the "MM3302 Multi-Modality IGS/CAS Fiducial Marker" from IZI Medical Products, which is widely used in image-guided neurosurgery. Fig. 1a is a photograph of the marker. The part that can be imaged is a 3.5 mm-thick donut with an inner diameter of 5 mm and an outer diameter of 15 mm. Underneath the donut lies a thin layer with a diameter of 21 mm that adheres the marker to the patient's head. The center of the marker bottom is used as the fiducial point in image-guided neurosurgery. Fig. 1b shows an image of a patient's head with several markers, and Fig. 1c shows the 3D visualization of a patient's head and markers based on CT volume data.

In order to compare the model with the surface patch extracted from images, we first project the marker model onto its bottom plane to obtain a Projection Height Image (PHI), as illustrated in Fig. 1d. The center of the PHI corresponds to the center of the marker bottom, and the value of each pixel corresponds to the height from the marker's upper surface to its bottom surface. In Fig. 1d, we digitize the PHI with a pixel size of 0.83 mm to obtain a 25-pixel-by-25-pixel image (the PHI is enlarged for a clear view). In the PHI of the model, the light gray donut corresponds to the imaged part of the marker, with a pixel value of 3.5/0.83 = 4.22. The dark gray donut corresponds to the edge of the marker, and the value of the pixels in this area and in the inner circle is 0. The black areas at the four corners do not belong to the projection of the model; thus, pixels in these areas are not used in the comparison.

We then iterate through all surface voxels in the volume data and, at each voxel, calculate a PHI by projecting the surface patch around this voxel onto a plane that is approximately parallel to the patch and that passes through the voxel. The value of each pixel on this PHI is the distance from the plane to the surface voxel on the patch that projects onto this pixel. If more than one surface voxel projects onto one pixel, the longest distance is used. We compare the PHI at each voxel with that of the model and record each voxel with a difference below a threshold as a candidate for the marker center. Subsequently, all candidates are clustered, and the number of resulting groups equals the number of markers. Within each group, the voxel with the smallest difference is taken to be the marker center.
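The model PHI described above can be generated directly from the dimensions quoted in this section (pixel size 0.83 mm, 25 × 25 pixels, 3.5 mm donut height, 5/15 mm inner/outer donut diameters, 21 mm bottom layer). A minimal sketch follows; the NOT_PROJECTED sentinel for the unused corner pixels is our own convention, not part of the paper.

```python
import numpy as np

PIXEL_MM = 0.83          # pixel size of the digitized PHI (from the text)
SIZE = 25                # 25 x 25 pixel PHI
R_INNER = 2.5            # inner radius of the imaged donut, mm (5 mm diameter)
R_OUTER = 7.5            # outer radius of the imaged donut, mm (15 mm diameter)
R_EDGE = 10.5            # radius of the adhesive bottom layer, mm (21 mm diameter)
HEIGHT = 3.5 / PIXEL_MM  # donut height expressed in pixel units (~4.22)
NOT_PROJECTED = -1.0     # sentinel: corner pixels outside the model projection

def model_phi():
    """Projection Height Image of the marker model, centered on the marker bottom."""
    c = (SIZE - 1) / 2.0
    phi = np.full((SIZE, SIZE), NOT_PROJECTED)
    for i in range(SIZE):
        for j in range(SIZE):
            r = np.hypot(i - c, j - c) * PIXEL_MM   # distance to center, mm
            if R_INNER <= r <= R_OUTER:
                phi[i, j] = HEIGHT   # light gray donut: the imaged part
            elif r < R_INNER or r <= R_EDGE:
                phi[i, j] = 0.0      # inner circle and marker edge: height 0
    return phi

phi = model_phi()
print(round(phi[12, 12], 2), round(phi[12, 17], 2))  # 0.0 4.22
```

The center pixel and inner circle come out 0, the donut pixels 3.5/0.83 ≈ 4.22, and the corners keep the sentinel, matching the description of Fig. 1d.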

Fig. 2. Visualization of a marker and its projection height image: (a) visualization of a patient’s head and the markers adhering to it; (b) the projection height image of a
surface patch around the center of a marker; this is the same marker as that surrounded by the square in (a); (c) the outer donut of the projection height image; (d) the final
projection height image after compensation.

The key step in this process is to find a projection direction that is approximately perpendicular to the surface patch. We use a two-step method to find this projection direction. First, we choose the vector from the voxel to the center of the volume as the initial direction. This is a good approximation of the direction perpendicular to the patch when dealing with a volume data set containing the whole head, which is required for image-guided neurosurgery. Second, we compare the initial PHI under this projection direction with that of the model, and we calculate the differences between these two PHIs in the area corresponding to the edge of the marker. A set of parameters for adjusting the projection direction is calculated and is then used to compute a compensation value for each pixel on the initial PHI. The final PHI is obtained by adding the compensation value to the initial value. In the following subsections, we describe in detail the projection of the surface patch, the calculation of the compensation parameters, and the center localization based on comparison of PHIs.

2.2. Projection of the surface patch at each surface voxel

For most commercial IGNS, a threshold is used to segment the patient from the background in CT or MRI images. The surface of the skin and the markers can be visualized after the segmentation. The threshold may be selected either automatically or manually. We define the volume data as a function defined on V (V ⊂ R³). A voxel is a grid point with coordinates denoted as X (X ∈ V), and v(X) is the value of voxel X. We denote N_X^n as the set of all voxels in an n × n × n subvolume centered at X, where n is an odd number. For any threshold T, we define the set of surface voxels as

V_s = { X | v(X) ≥ T and ∃ X′ ∈ N_X^3, v(X′) < T }.   (1)

For any surface voxel X_i ∈ V_s, we take the direction from X_i to the center of V as the initial projection direction and the plane passing through X_i as the projection plane. All voxels in N_{X_i}^{2r+1} are projected onto the projection plane, where r is the radius of the marker model measured in pixels. The value of each pixel on the projection image is the longest of all distances between the projection plane and the surface voxels projected onto that pixel. In this way, we obtain the initial PHI for a surface patch around voxel X_i. A marker on a patient's head is illustrated in Fig. 2a, and Fig. 2b shows the PHI at the center of this marker.

2.3. Computation of the compensation parameters

On the PHI, we denote the donut which corresponds to the projection of the outer edge as the "outer donut". For a PHI from a marker center, if the projection direction is perpendicular to the surface patch, the value of all pixels in the outer donut should be 0. Otherwise, the value of most of these pixels will not be 0, and the result is the same as if the patch had been rotated around an axis passing through the marker center before projection. The outer donut in Fig. 2b is extracted and illustrated in Fig. 2c. We can see that the left side of the marker is rotated upward, while the right side is rotated downward. The next steps in the method use the values of pixels in the outer donut to compensate for this rotation.

Fig. 3. Calculation of compensation parameters.

Fig. 3 illustrates how the compensation parameters are calculated. The square area is the extent of the PHI, and the circle covers the effective area within the PHI. Point O is the image center, and OX, OY are the coordinate axes. Any line passing through O on the image can be described as l(θ_i), where θ_i is the angle between the line and OX. If line OA is the rotation axis, the values of pixels on the line are 0; the pixel value is positive on one side of the line and negative on the other side. In practice, we search over all possible θ_i between 0° and 180° with an interval of 1°. The rotation axis is the line that divides the PHI into two parts such that the difference between the sums of all pixel values in the two parts is maximal. If the rotation axis is expressed as l(θ_axis), then

θ_axis = arg max_{θ_i} | Σ_{P_i ∈ S_1} v(P_i) − Σ_{P_i ∈ S_2} v(P_i) |,   (2)

where l(θ_axis) divides the PHI into two parts, S_1 is the set of all pixels on one side of l(θ_axis), S_2 is the set of all pixels on the other side, and v(P_i) is the value of pixel P_i.

Another parameter for compensation is the rotation angle α. If α is small, the compensation value for each pixel can be approximated by rotating this pixel around l(θ_axis) by α. This approximation is valid if the volume data contain the entire head and only the entire head, as required by IGNS. Suppose that OV is the line l(θ_axis + 90°) and the length from point V to O is 1; if the compensation value for point V is h, the compensation value for any point P_i on the image can be calculated as follows:

Comp(P_i) = h · |OP_i| · cos(θ_axis + 90° − θ_i),   (3)

where |OP_i| is the length from P_i to O, and l(θ_i) is the line passing through P_i and O. Here, h depends on the rotation angle, and, once h is calculated, the compensation value of each pixel on the initial PHI can be obtained. The objective of the compensation is that all pixels in the outer donut of the PHI have a value of 0. In practice, h is calculated by minimizing the following sum:

| Σ_{P_i ∈ S_3} [v(P_i) + Comp(P_i)] | + | Σ_{P_i ∈ S_4} [v(P_i) + Comp(P_i)] |,   (4)

where S_3 is the set of all pixels on one side of the rotation axis in the outer donut, and S_4 is the set of all pixels on the other side. After h is obtained, the compensation value for each pixel on the initial PHI can be calculated from formula (3), and the final PHI is obtained by adding the compensation value to the initial value. Fig. 2d is the final PHI after compensation. From this image, we can see that the projection direction of the surface patch is determined much more accurately.

2.4. Further processing

Two more procedures are needed before the marker center can be localized based on the PHI difference. The first procedure is to normalize the difference. The voxel size differs between data sets, so the resolution of the PHI also differs, since the marker model's PHI is digitized based on the voxel size. For example, if volume data are up-sampled to twice the resolution, the difference may become four times greater, since the resolution of the PHI is also doubled and four times more pixels are involved in the comparison. For this reason, the original differences of PHIs are not comparable between data sets, and no single threshold can be applied to all data. To find a universal threshold for all data sets, the difference must be normalized according to the number of voxels involved in the comparison.

The second procedure deals with isolated surface voxels. An isolated surface voxel is a voxel that has only a small number of surface voxels in its neighborhood. Isolated surface voxels may exist due to noise in the images or special anatomical structures, such as ears. In the PHI of these voxels, only a small number of pixels are covered by the projection of surface voxels and are therefore involved in computing the difference between PHIs. As a result, the calculated difference may be small; for example, if a surface voxel has no other neighbor voxels on the surface, the difference will be 0. A cover ratio of the projected patch is used to determine whether a voxel is isolated. The cover ratio is calculated by dividing the count of the pixels covered by the projected surface patch by the total number of pixels in the PHI. For example, the cover ratio of the projection in Fig. 2b is 0.93. Once the cover ratio for each projection is calculated, a threshold can be set, and all voxels with a cover ratio under the threshold are considered isolated surface voxels and are excluded as candidates for marker centers.

2.5. Clustering the candidates and localizing the marker center

After normalizing the difference and deleting the isolated surface voxels, a universal threshold can be set for selecting candidates for the marker centers. For each marker, there may be a number of voxels that meet these criteria; they are the voxels around the true center of the marker. The distance between every pair of candidates from the same marker is very small, while the distance between pairs from different markers is much larger. The candidates are clustered using the Nearest-Neighbor Algorithm (Duda et al., 2001) with the Euclidean distance as the similarity metric. The threshold used to terminate the algorithm is a distance of twice the diameter of the marker. After clustering, every group corresponds to a marker, and in each group, the voxel with the smallest normalized difference is considered the marker center.

For example, for the data set in Fig. 4, if the cover ratio threshold is set to 0.9 and the normalized difference threshold is set to 1, 111 marker candidates are recognized. All these candidates are clustered into five groups; within each group, the five candidates with the smallest normalized difference are listed in Table 1, and the candidate with a "*" after "No." is the localized marker center.

Fig. 4. Localization of the marker center. The lower right image is the 3D visualization of the head and the markers from a CT data set. The crosshairs in the other three
sections are moved to the localized center of the marker that is at the middle of two eyebrows; this corresponds to the marker surrounded by the square on the 3D image.
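The cover-ratio test of Section 2.4 and the candidate clustering of Section 2.5 can be sketched as follows. This is a simplified greedy nearest-neighbor grouping on toy data of our own; the sentinel value, coordinate units (mm), and parameter names are illustrative.

```python
import numpy as np

def cover_ratio(phi, not_projected=-1.0):
    """Fraction of PHI pixels actually covered by projected surface voxels."""
    return float(np.mean(phi != not_projected))

def cluster_candidates(candidates, diffs, marker_diameter_mm=21.0):
    """Greedy nearest-neighbor clustering of candidate voxels.

    A candidate joins a group if it lies within twice the marker diameter of
    any member; each group's voxel with the smallest normalized difference
    is returned as that marker's center.
    """
    threshold = 2.0 * marker_diameter_mm
    groups = []                                    # list of lists of indices
    for i, p in enumerate(candidates):
        for g in groups:
            if min(np.linalg.norm(p - candidates[j]) for j in g) < threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return [candidates[min(g, key=lambda j: diffs[j])] for g in groups]

# Toy example: two markers, three candidate voxels each (coordinates in mm)
cands = np.array([[10., 10., 10.], [11., 10., 10.], [10., 11., 10.],
                  [120., 50., 30.], [121., 50., 30.], [120., 51., 30.]])
diffs = np.array([0.60, 0.50, 0.70, 0.80, 0.65, 0.90])
centers = cluster_candidates(cands, diffs)
print(len(centers))  # 2
```

The two tight clumps are far beyond the 42 mm threshold from each other, so they form two groups, and the candidate with the smallest difference in each group (0.50 and 0.65 here) is reported as the marker center.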

Table 1
The clustering of marker center candidates for the data set in Fig. 4. The clustering produces five groups, each of which corresponds to one marker. For each candidate, the normalized difference between the marker model and the surface patch is given; the five candidates with the smallest normalized difference in each group are listed, and the candidate with "*" after "No." is the localized marker center.

Group  No.   X    Y    Z   Normalized difference
1      1    123   84   36  0.63
       2*   124   84   35  0.56
       3    124   84   36  0.60
       4    124   85   35  0.66
       5    124   85   36  0.65
2      6    130   34   63  0.71
       7    130   35   62  0.68
       8*   131   34   62  0.67
       9    131   35   62  0.70
       10   131   35   63  0.67
3      11   172  113   20  0.61
       12   173  112   20  0.49
       13*  173  113   20  0.48
       14   174  112   20  0.51
       15   174  113   20  0.51
4      16   214   48   52  0.81
       17   215   47   52  0.76
       18*  215   48   51  0.75
       19   215   48   53  0.77
       20   216   48   52  0.79
5      21   220   99   34  0.75
       22   221   98   34  0.68
       23   221   98   35  0.70
       24   221   99   34  0.65
       25*  222   98   35  0.64

2.6. Logical steps of the marker center localizing method

To summarize the preceding discussion, our algorithm to localize marker centers consists of the following steps:

(1) Calculation of the PHI of the marker model. The projection direction is perpendicular to the bottom of the marker, and the projection plane is the plane passing through the marker center. The center of the PHI is placed at the marker center.
(2) Calculation of the initial PHIs for all surface voxels. For each surface voxel, the surface patch in its neighborhood is projected along an initial projection direction.
(3) Calculation of the compensation parameters, and calculation of the final PHIs.
(4) Normalization of the final PHI difference and deletion of isolated surface voxels.
(5) Selection of candidates for the marker center by thresholding on the normalized PHI difference, and clustering of all candidates using the Nearest-Neighbor Algorithm.
(6) After clustering, each group corresponds to a marker, and the center of the marker is the surface voxel with the smallest normalized difference in the group.

3. Validation

We tested our method with 3D CT and MRI data sets, and we compared the localization of our method with the marker center carefully selected manually by an IGNS expert. Out of practical considerations, the accuracy of the results was not compared at the sub-voxel level. Instead, we judged the automatic localization to be perfect if it matched the voxel selected by the expert. If the marker center localized by our method and that selected by the expert had a one-pixel difference in one axis (Type 1), or a one-pixel difference in two axes (Type 2), the localization was considered acceptable. Otherwise, if the difference in any one axis was larger than 1, or there were differences in all three axes, the automatic localization was considered unacceptable.

All data were scanned under the imaging protocol for image-guided neurosurgery. The volume data contained continuous, non-overlapping slices, and the thickness of each slice was consistent. Scanning was performed with a standard soft tissue algorithm. A circular or square Field of View (FOV) was adopted, and the smallest FOV that encompassed the entire head was used. The resolution of both CT and MRI slices was 512 * 512 pixels, and the slice thickness ranged from 1.4 mm to 3.0 mm. All images were stored in DICOM format.

We tested our method on 15 CT volumes and 10 MRI volumes randomly selected from two hospitals. The CT and MRI volumes involved a total of 75 and 52 adhesive markers on the skin, respectively. Among them, 69 and 47 markers were perfectly identified, respectively, while six and five markers were acceptably identified. The results are summarized in Table 2. In these experiments, the threshold of the normalized difference was set to 1.5, and the threshold of the cover ratio was set to 80%. There was no falsely identified marker in our experiments.

Table 2
Performance of our automatic localization method. A total of 75 markers from 15 CT volume data sets were tested; 69 of them were localized perfectly, five with a Type 1 error, and one with a Type 2 error. A total of 52 markers from 10 MRI volumes were tested; 47 of them were localized perfectly, three with a Type 1 error, and two with a Type 2 error.

Modality  Patients tested  Markers used  Perfectly identified  Acceptably identified
                                                               Type 1    Type 2
CT        15               75            69                    5         1
MRI       10               52            47                    3         2

All experiments were performed on the platform of the IGNS developed in our laboratory. The workstation had two 3.0 GHz CPUs with Hyper-Threading technology and 2 GB of memory. Parallel programming of the algorithm was implemented. The execution time depended on the size and modality of the data. For CT data containing approximately 100 slices, the execution time was approximately 20 s, while for MRI data of the same size, the execution time was approximately 40 s.

4. Discussion

In this paper, we demonstrate a new automatic marker center localization algorithm that performs well on both CT and MRI data. Point-pair registration based on adhesive markers on the skin is a popular approach for patient-to-image registration in IGNS, and the IZI MM3002 is the most widely used marker. Our algorithm can be used in such IGNS for automatic localization of the marker center, helping to reduce human errors in patient-to-image registration and to speed up the registration process.

In our experiments, the execution time needed for MRI data was longer than that for CT data, even when the volume size was the same. This is because threshold segmentation produces a large number of surface voxels within the head in MRI data, especially at the boundary of the skull. The most time-consuming part of our algorithm is the projection of the surface patch for each surface voxel, so more surface voxels lead to longer projection time.

Our future work will focus mainly on optimizing the algorithm to reduce the execution time. We will try to find ways to speed up the projection of the surface patch, or find preprocessing methods that can eliminate surface voxels that cannot be the marker center. We will also test the algorithm on more clinical data, especially on MRI data obtained using different scanning protocols.

Acknowledgements

The authors are grateful for financial support from the Science and Technology Commission of the Shanghai Municipality of China (Grant No. 06dz22103) and from the National Science Fund of China (NSFC; Grant No. 30570506). This research was also supported in part by the Shanghai Leading Academic Discipline Project (Project No. B112).

References

Al-Rodhan, N.R., Kelly, P.J., 1992. Pioneers of stereotactic neurosurgery. Stereotact. Funct. Neurosurg. 58 (1–4), 60–66.
Besl, P.J., McKay, N.D., 1992. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Machine Intell. 14 (2), 239–256.
Chen, Y., Medioni, G., 1992. Object modelling by registration of multiple range images. Image Vision Comput. 10 (3), 145–155.
Chen, D., Tan, J., Chaudhary, V., Sethi, I.K., 2006. Automatic fiducial localization in brain images. Internat. J. Comput. Assisted Radiol. Surg. 1, 45–46.
Duda, R.O., Hart, P.E., Stork, D.G., 2001. Pattern Classification, second ed. John Wiley & Sons.
Eggers, G., Muhling, J., Marmulla, R., 2006. Image-to-patient registration techniques in head surgery. Internat. J. Oral Maxillofac. Surg. 35 (12), 1081–1095.
Eggert, D.W., Lorusso, A., Fisher, R.B., 1997. Estimating 3D rigid body transformations: A comparison of four major algorithms. Machine Vision Appl. 9, 272–290.
Fright, W.R., Linney, A.D., 1997. Registration of 3-D head surfaces using multiple landmarks. IEEE Trans. Med. Imaging 12, 515–520.
Germano, I.M., Villalobos, H., Silvers, A., Post, K.D., 1999. Clinical use of the optical digitizer for intracranial neuronavigation. Neurosurgery 45, 261–270.
Gu, L., Peters, T.M., 2004. 3D automatic fiducial marker localization approach for frameless stereotactic neuro-surgery navigation. Lecture Notes in Computer Science 3150, 329–336.
Gumprecht, H.K., Widenka, D.C., Lumenta, C.B., 1999. BrainLab VectorVision neuronavigation system: Technology and clinical experiences in 131 cases. Neurosurgery 44 (1), 97–104.
Haase, J., 1999. Image-guided neurosurgery/neuronavigation/the surgiscope: Reflexions on a theme. Minim. Invasive Neurosurg. 42 (2), 53–59.
Helm, P.A., Eckel, T.S., 1998. Accuracy of registration methods in frameless stereotaxis. Comput. Aided Surg. 3, 51–56.
Marmulla, R., Mühling, J., Lüth, T., Hassfeld, S., 2006. Physiological shift of facial skin and its influence on the change in precision of computer-assisted surgery. Br. J. Oral Maxillofac. Surg. 44, 273–278.
Peters, T.M., 2006. Image-guidance for surgical procedures. Phys. Med. Biol. 51, 505–540.
Raabe, A., Krishnan, R., Wolff, R., Hermann, E., Zimmermann, M., Seifert, V., 2002. Laser surface scanning for patient registration in intracranial image-guided surgery. Neurosurgery 50, 797–801.
Raabe, A., Krishnan, R., Seifert, V., 2003. Actual aspects of image-guided surgery. Surg. Technol. Int. 11, 314–319.
Rusinkiewicz, S., Levoy, M., 2001. Efficient variants of the ICP algorithm. In: Proc. 3rd Internat. Conf. on 3-D Digital Imaging and Modeling, Quebec: IEEE, pp. 145–152.
Tan, J., Chen, D., Chaudhary, V., Sethi, I.K., 2006. A template based technique for automatic detection of fiducial markers in 3D brain images. Internat. J. Comput. Assisted Radiol. Surg. 1, 47–48.
Wang, M.Y., Maurer, C.R., Fitzpatrick, J.M., Maciunas, R.J., 1996. An automatic technique for finding and localizing externally attached markers in CT and MR volume images of the head. IEEE Trans. Biomed. Eng. 43 (6), 627–637.
Woerdeman, P.A., Willems, P.W.A., Noordmans, H.J., Tulleken, C.A.F., Sprenkel, J.W.B., 2007. Application accuracy in frameless image-guided neurosurgery: A comparison study of three patient-to-image registration methods. J. Neurosurg. 106, 1012–1016.
