Biometric Recognition Using 3D Ear Shape
Abstract—Previous works have shown that the ear is a promising candidate for biometric identification. In prior work, however, the preprocessing of ear images has involved manual steps, and algorithms have not necessarily handled problems caused by hair and earrings. We present a complete system for ear biometrics, including automated segmentation of the ear in a profile view image and
3D shape matching for recognition. We evaluated this system with the largest experimental study to date in ear biometrics, achieving a
rank-one recognition rate of 97.8 percent for an identification scenario and an equal error rate of 1.2 percent for a verification scenario
on a database of 415 subjects and 1,386 total probes.
Index Terms—Biometrics, ear biometrics, 3D shape, skin detection, curvature estimation, active contour, iterative closest point.
1 INTRODUCTION
TABLE 1
Recent Ear Recognition Studies
images per person. The images were selected from a video stream. The first three of these are used as gallery images and the last three are probe images. They reported that the recognition rate for the registered people was approximately 100 percent and the rejection rate for unknown people was 100 percent.
Bhanu and Chen [4] presented a 3D ear recognition method using a local surface shape descriptor. Twenty range images from 10 individuals are used in the experiments and a 100 percent recognition rate is reported. In [8], Chen and Bhanu used a two-step ICP algorithm on a data set of 30 subjects with 3D ear images. They reported that this method yielded two incorrect matches out of 30 people. In these two works, the ears are manually extracted from profile images. They also presented an ear detection method in [7]. In the offline step, they built an ear model template from each of 20 subjects using the average histogram of the shape index [21]. In the online step, first, they used step edge detection and thresholding to find the sharp edge around the ear boundary and then applied dilation on the edge image and connected-component labeling to search for ear region candidates. Each potential ear region is a rectangular box, and it grows in four directions to find the minimum distance to the model template. The region with minimum distance to the model template is the ear region. They get 91.5 percent correct detection with a 2.5 percent false alarm rate. No recognition results are reported based on this detection method.
Hurley et al. [16] developed a novel feature extraction technique using force field transformation. Each image is represented by a compact characteristic vector, which is invariant to initialization, scale, rotation, and noise. The experiment displays the robustness of the technique to extract the 2D ear. Their extended research applies the force field technique to ear biometrics [17]. In the experiments, they used 252 images from 63 subjects, with four images per person collected during four sessions over a five-month period; any subject is excluded if the ear is covered by hair. A classification rate of 99.2 percent is claimed on this 63-person data set. The data set comes from the XM2VTS face image database [22].
Choras [10], [11] introduces an ear recognition method based on geometric feature extraction from 2D images of the ear. The geometric features are computed from the edge-detected intensity image. They claim that error-free recognition is obtained on "easy" images from their database. The "easy" images are images of high quality, with no earring or hair covering and without illumination changes. No detailed experimental setup is reported.
Pun and Moon [25] surveyed the literature on ear biometrics up to that point in time. They summarized elements of five approaches for which experimental results have been published [6], [16], [4], [5], [31]. In Table 1, we compare different aspects of these and other published works.
We previously looked at various methods of 2D and 3D ear recognition and found that an approach based on 3D shape matching gave the best performance. The detailed description of the comparison of different 2D and 3D methods can be found in [29].
Fig. 1. Sample images used in the experiments. (a) Two-dimensional image. (b) Minor hair covering. (c) Presence of earring. (d) Three-dimensional
depth image of (a). (e) Three-dimensional depth image of (b). (f) Three-dimensional depth image of (c).
Fig. 2. Examples of images discarded for quality control reasons. (a) Hair-covered ear. (b) Hair-covered ear. (c) Subject motion.
This work found that an ICP-based approach statistically significantly outperformed the other approaches considered for 3D ear recognition and also statistically significantly outperformed the 2D "eigen-ear" result [6]. Approaches that rely on the 2D intensity image alone can only take into account pose change in the image plane in trying to align the probe image to the gallery image. Approaches that take the 3D shape into account can account for more general pose change. Based on our previous work, an ICP-based approach for 3D ear shape is used as the matching algorithm in this current study.
Of the publications reviewed here, only two [8], [4] deal with biometrics based on 3D ear shape. The largest data set for 2D or 3D studies, in terms of number of people, is 110 [31]. The presence or absence of earrings is not mentioned, except for [30] and [6] in which earrings are excluded.
Comparing with the publications reviewed above, the work presented in this paper is unique in several aspects. We report results for the largest ear biometrics study to date in terms of number of people, which is 415, and in terms of number of images, which is 1,801. Our work is able to deal with the presence of earrings and with a limited amount of occlusion by hair. Ours is the only work to fully automatically detect the ear from a profile view and segment the ear from the surroundings.

3 EXPERIMENTAL METHODS AND MATERIALS
In each acquisition session, the subject sat approximately 1.5 meters away from the sensor with the sensor looking at the left side of the face. Data was acquired with a Minolta Vivid 910 range scanner. One 640 × 480 3D scan and one 640 × 480 color image were obtained in a period of several seconds. Examples of the raw data are shown in Figs. 1a and 1d. The Minolta Vivid 910 is a general-purpose 3D sensor, which is not specialized for application in face or ear biometrics.
From the 497 people who participated in two or more image acquisition sessions, there were 415 who had good-quality 2D and 3D ear images in two or more sessions. Among them, there are 237 males and 178 females. There are 70 people who wore earrings at least once and 40 people who have minor hair covering around the ear. This data is not a part of the Face Recognition Grand Challenge (FRGC) data set (http://face.nist.gov/frgc/), which contains frontal face images rather than profile images.
No special instructions were given to the participants to make the ear images particularly suitable for this study and, as a result, 455 out of 2,256 images were dropped for various quality control reasons: 381 instances with hair obscuring the ear and 74 cases with artifacts due to motion during the scan. See Fig. 2 for examples of these problems. Using the Minolta scanner in the high-resolution mode that we used may make the motion artifact problem more frequent, as it takes 8 seconds to complete a scan.
The earliest good image for each of the 415 people was enrolled to create the gallery for the experiments. The gallery is the set of images that a "probe" image is matched against.
Fig. 5. Ear region with skin detection. (a) Original 2D color image. (b) After preprocessing. (c) After skin detection.
that the hair and clothes are fully removed. Our skin detection
method is based on the work of Hsu et al. [15]. The major
obstacle to using color to detect the skin region is that the
appearance of skin-tone color can be affected by lighting. In
their work, a lighting compensation technique is introduced
to normalize the color appearance. In order to reduce the
dependence of skin-tone color on luminance, a nonlinear
transformation is applied to the luma, blue, and red chroma
(YCbCr) color space. A parametric ellipse in the color space is
then used as a model of skin color, as described in [15].
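As an illustration only (the paper's implementation is in C++ on VTK, and this is a sketch rather than the authors' code), the elliptical skin model can be expressed as an inside-ellipse test in the Cb-Cr plane. The ellipse center, axes, and rotation below are illustrative placeholders, not the values fitted by Hsu et al. [15], whose model also applies a nonlinear luma-dependent transform first:

```python
import numpy as np

def rgb_to_cbcr(rgb):
    """Convert an (H, W, 3) RGB image (0-255) to Cb and Cr channels
    using the standard BT.601 coefficients."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def skin_mask(cb, cr, center=(110.0, 152.0), axes=(25.0, 14.0), theta=2.53):
    """Classify pixels as skin via a rotated ellipse in the Cb-Cr plane.
    center, axes, and theta are illustrative, not the fitted values of [15]."""
    # Rotate coordinates into the ellipse's principal axes.
    x = np.cos(theta) * (cb - center[0]) + np.sin(theta) * (cr - center[1])
    y = -np.sin(theta) * (cb - center[0]) + np.cos(theta) * (cr - center[1])
    # Inside-ellipse test: (x/a)^2 + (y/b)^2 <= 1.
    return (x / axes[0]) ** 2 + (y / axes[1]) ** 2 <= 1.0
```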
Fig. 7. Varying ear pit location versus segmentation results. (a) Ear pit
(automatically found). (b) Ear pit (manually found).
Fig. 9. Active contour results using only color or depth information. (a) Only using color (incorrect segmentation). (b) Only using depth (incorrect
segmentation).
Fig. 10. Active contour growing on a real image. (a) Iteration = 0. (b) Iteration = 25. (c) Iteration = 75. (d) Iteration = 150.
do this, the $E_{Image}$ in (3) is replaced by (6). Consequently, the final energy $E$ is represented by (7):

$$E_{Image} = w_{depth}\,\nabla Image_{depth}(x,y) + w_{Cr}\,\nabla Image_{Cr}(x,y), \quad (6)$$

$$E = \int_0^1 \left[ \frac{1}{2}\left(|X'(s)|^2 + |X''(s)|^2\right) + w_{depth}\,\nabla Image_{depth}(x,y) + w_{Cr}\,\nabla Image_{Cr}(x,y) - w_{con}\,n(s) \right] ds. \quad (7)$$

In order to prevent the active contour from continuing to grow toward the face, we modify the internal energy of points to limit the expansion when there is no depth jump within a 3 × 5 window around the given point. The threshold for the maximum gradient within the window is set as 0.01. With these improvements, the active contour algorithm works effectively in separating the ear from the hair and earrings, and the active contour stops at the jawline close to the ear.
The initial contour is an ellipse with the ear pit as center. The major axis is approximately 15 mm and vertical; the minor axis is approximately 10 mm. Fig. 10 illustrates the steps of active contour growing for a real image. Fig. 11 shows examples in which the active contour deals with hair and earrings. The 3D shape within the final contour is cropped out of the image for use in the matching algorithm.
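To make the external term of (6) concrete, the sketch below assembles it from the gradient magnitudes of the depth image and the Cr chroma image. The weights w_depth and w_cr and the use of plain finite differences are assumptions for illustration; the paper does not fix those values here:

```python
import numpy as np

def external_energy(depth, cr, w_depth=1.0, w_cr=1.0):
    """E_Image of (6): a weighted sum of gradient magnitudes of the
    depth image and the Cr chroma image. In a snake implementation,
    this term (suitably signed) pulls the contour toward depth and
    skin-color edges."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))  # finite differences
        return np.hypot(gx, gy)
    return w_depth * grad_mag(depth) + w_cr * grad_mag(cr)
```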
5 MATCHING 3D EAR SHAPE FOR RECOGNITION
We have previously compared using an ICP approach on a point-cloud representation of the 3D data and a PCA-style approach on a range-image representation of the 3D data [29] and found better performance using an ICP approach on the point-cloud representation. The problem with using a range image representation of the 3D data is that landmark points must be selected ahead of time to use for normalizing the pose and creating the range image. Errors or noise in this process can lead to recognition errors in the PCA or other algorithms that use the range image. Our experience is that the ICP-style approach using the point cloud representation can better adapt to inexactness in the initial registration, though, of course, at the cost of some increase in the computation time for the matching step.
Given a set of source points P and a set of model points X, the goal of ICP is to find the rigid transformation T that best aligns P with X. Beginning with a starting estimate T_0, the algorithm iteratively calculates a sequence of transformations T_i until the registration converges. At each iteration, the algorithm computes correspondences by finding closest points and then minimizes the mean square distance between the correspondences. A good initial estimation of the transformation is required, and all source points in P are assumed to have correspondences in the model X. The ear pit location from the automatic ear extraction is used to give the initial translation for the ICP algorithm. The following sections outline our refinements to improve the ICP algorithm for use in matching ear shapes.
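The basic loop just described can be sketched as follows. This is an illustration in Python rather than the authors' C++/VTK code; the function names, the brute-force closest-point search, and the fixed iteration count are assumptions, while the SVD closed form for the rigid transform is the standard one for point-to-point ICP [2]:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t aligning src to dst
    (the SVD closed form commonly used for point-to-point ICP [2])."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp_align(probe, gallery, ear_pit_offset, n_iters=20):
    """Plain ICP: find closest points, fit a rigid transform, re-apply,
    repeat. probe and gallery are (N, 3) point clouds; the initial
    translation comes from the detected ear pit, as described above."""
    P = probe + ear_pit_offset
    for _ in range(n_iters):
        # Closest-point correspondences by brute force
        # (see Section 5.1 for the k-d tree speedup actually used).
        d2 = ((P[:, None, :] - gallery[None, :, :]) ** 2).sum(axis=2)
        idx = d2.argmin(axis=1)
        R, t = best_rigid_transform(P, gallery[idx])
        P = P @ R.T + t
    return P, float(np.sqrt(d2.min(axis=1)).mean())  # mean match distance
```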
Fig. 11. Active contour algorithm dealing with earring and blonde hair. (a) Earring and blonde hair. (b) Blonde hair. (c) Earring and blonde hair.
(d) Earring. (e) Earring and blonde hair. (f) Earring and blonde hair.
5.1 Computation Time Reduction
It is well known that the basic ICP algorithm can be time-consuming. In order to make it more practical for use in biometric recognition, we use a k-d tree data structure in the search for closest points, limit the maximum number of iterations to 40, and stop if the improvement in mean square difference between iterations drops below 0.001. This allows a probe shape to be matched against a gallery of 415 ear shapes in 10 minutes, or better than 40 shape matches per minute. This is with an average of 6,000 points in a gallery image and 1,400 in a probe image. The ICP algorithm is implemented in C++ based on the VTK 4.4 library [1] and run on a dual-processor 2.8-GHz Pentium Xeon system. The current computation speed is obviously more than sufficient for a verification scenario in which a probe is matched against a claimed identity. It is also sufficient for an identification scenario involving a few tens of subjects.
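A sketch of how these three controls fit into the matching loop is given below. SciPy's cKDTree stands in for whatever k-d tree the authors' C++/VTK implementation uses, and best_rigid_transform is the helper from the earlier sketch; the iteration cap of 40 and the 0.001 improvement threshold are the values stated above:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_fast(probe, gallery, max_iters=40, tol=1e-3):
    """ICP loop with the Section 5.1 controls: k-d tree closest-point
    search, at most 40 iterations, and an early stop once the mean
    square distance improves by less than 0.001 between iterations."""
    tree = cKDTree(gallery)            # built once per gallery shape
    P = probe.copy()
    prev_mse, mse = np.inf, np.inf
    for _ in range(max_iters):
        dists, idx = tree.query(P)     # O(log N) per probe point
        R, t = best_rigid_transform(P, gallery[idx])
        P = P @ R.T + t
        mse = float(np.mean(dists ** 2))
        if prev_mse - mse < tol:       # improvement below 0.001: stop
            break
        prev_mse = mse
    return P, mse
```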
5.2 Recognition Performance Improvement
Ideally, if two scans come from the same ear with the same pose, the error distance should be close to zero. However, with pose variation and scanning error, the registration results can be greatly affected by data quality. Our approach to improving performance focuses on reducing the effect of noise and on using a point-to-surface error metric for sparse range data.

5.2.1 Outlier Elimination
The general ICP algorithm requires no extracted features or curvature computation [2]. The only preprocessing of the range data is to remove "spike" outlier points. In a 3D face image, the eyes and mouth are common places for holes and spikes to occur. Three-dimensional ear images do exhibit some spikes and holes due to oily skin or sensor error, but these occur less frequently than in 3D face images.
An "outlier" match occurs when there is a poor match between a point on the probe and a point on the gallery. To improve performance, outlier match elimination is accomplished in two stages. During the calculation of the transformation matrix, the approach is based on the assumption that, for a given noise point p on the probe surface, the distance from p to the associated closest point g_p on the gallery surface will be much larger than the average distance [32], [19]. For each point p on the probe surface, we find the closest point g_p on the gallery surface. Let D = d(p, g_p) represent the distance between the two points. Only those pairs of points whose D is less than a threshold are used to calculate the transformation matrix. Here, the threshold is set as mean distance + R × 2, where R is the resolution of the probe surface.
The second stage occurs outside the transformation matrix calculation loop. After the first step, a transformation matrix is generated to minimize the error metric. We apply this transformation matrix to the source surface S and obtain a new surface S′. Each point on the surface S′ will have a distance to the closest point on the target surface. We sort all of the distance values and use only the lower 90 percent to calculate the final mean distance. Other thresholds (99, 95, 85, 80, and 70 percent) were tested and 90 percent gives the best performance, which is consistent with the experiments of other researchers [24].
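The two stages can be sketched as follows, under the same assumptions as the earlier snippets (gallery_tree is a k-d tree over the gallery points, and R is the probe resolution); the function names are illustrative:

```python
import numpy as np

def trimmed_pairs(P, gallery, gallery_tree, R):
    """Stage 1: keep only correspondences whose closest-point distance
    is below mean distance + R * 2 before fitting the transform."""
    dists, idx = gallery_tree.query(P)
    keep = dists < dists.mean() + R * 2
    return P[keep], gallery[idx[keep]]

def robust_score(S_aligned, gallery_tree, frac=0.90):
    """Stage 2: score the final alignment by the mean of the lowest
    90 percent of closest-point distances."""
    dists, _ = gallery_tree.query(S_aligned)
    k = int(frac * len(dists))
    return float(np.sort(dists)[:k].mean())
```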
5.2.2 Point-to-Point versus Point-to-Surface Approach
Two approaches are considered for matching points from the probe to points on the gallery: point-to-point [2] and point-to-surface [9]. In the point-to-point approach, we try to find the closest point on the target surface. In the point-to-surface approach, we use the output from the point-to-point algorithm first. Then, from the closest point obtained earlier on the target surface, all of the triangles around this point are extracted. The real closest point is then the point on any of these triangles with the minimum distance to the source point. In general, point-to-surface is slower, but also more accurate in some situations.
As shown in Table 2, the point-to-point approach is fast and accurate when all of the points on the source surface can find a good closest point on the target surface. But, if the gallery is subsampled, the point-to-point approach loses accuracy. Since the probe and gallery ear images are taken on different days, they vary in orientation. When both gallery and probe images are subsampled, it is difficult to match points on the probe surface to corresponding points on the gallery surface. This generally increases the overall mean distance value. But, this approach is much faster than point-to-surface.
On the other hand, the greatest advantage of the point-to-surface approach is that it is accurate through all of the different subsample combinations. Even when the gallery is subsampled by every four rows and columns, the performance is still acceptable.
Our final algorithm attempts to exploit the trade-off between performance and speed. The point-to-point approach is used during the iterations to compute the transformation matrix. One more point-to-surface iteration is done after obtaining the transformation matrix to compute the error distance. This revised algorithm works well due to the good quality of the gallery images, which makes it possible for the probe images to find the corresponding points. In a biometrics application, and especially in a verification scenario, we can assume that the gallery image is always of good quality and that the ear orientation exposes most of the ear region. The final results reflecting the revised algorithm are shown in Table 2.

TABLE 2
ICP Performance Using Point-to-Surface, Point-to-Point, and the Revised Version; Time Is for One Probe Matched to One Gallery Shape
*Recognition rates and execution times quoted elsewhere in the paper are for the G1, P2 instance of the algorithm using our "mixed" ICP.

Table 2 leads to two conclusions: The first is that, when the gallery and probe surfaces have similar resolution, the mixed algorithm is always more accurate than pure point-to-point matching and has similar computation time. The second is that, when the gallery surface is more densely sampled than the probe surface, the mixed algorithm is both faster and more accurate than point-to-surface ICP.

6 EXPERIMENTAL RESULTS
In an identification scenario, our algorithm achieves a rank-one recognition rate of 97.8 percent on our 415-subject data set with 1,386 probes. The cumulative match characteristic (CMC) curve is shown in Fig. 12a. In a verification scenario, our algorithm achieves an EER of 1.2 percent. The receiver operating characteristic (ROC) curve is shown in Fig. 12b. This is an excellent performance in comparison to previous work.
Fig. 12. The performance of ear recognition. (a) CMC curve. (b) ROC curve.
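For reference (and not as part of the original system), both figures of merit can be computed directly from a matrix of probe-versus-gallery error distances. A minimal sketch, assuming lower distances mean better matches and that genuine and impostor score arrays have been collected:

```python
import numpy as np

def rank_one_rate(scores, gallery_ids, probe_ids):
    """Fraction of probes whose lowest-distance gallery entry has the
    correct identity (the rank-one point of the CMC curve).
    scores has shape (num_probes, num_gallery)."""
    best = np.asarray(gallery_ids)[np.argmin(scores, axis=1)]
    return float(np.mean(best == np.asarray(probe_ids)))

def equal_error_rate(genuine, impostor):
    """EER: the operating point where the false reject rate equals the
    false accept rate as the decision threshold is swept."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine > t).mean() for t in thresholds])
    far = np.array([(impostor <= t).mean() for t in thresholds])
    i = int(np.argmin(np.abs(frr - far)))
    return float((frr[i] + far[i]) / 2.0)
```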
TABLE 3
Results of Off-Angle Experiments with a 24-Subject Data Set
Fig. 13. Examples of asymmetric ears. (a) Right ear. (b) Left ear. (c) Right
ear. (d) Mirrored left ear.
Fig. 14. Example images acquired for off-angle experiments. (a) Straight-on. (b) Fifteen degrees off. (c) Thirty degrees off. (d) Forty-five degrees off.
separating the ear from hair and earrings. The recognition subsystem uses an ICP-based approach for 3D shape matching. The experimental results demonstrate the power of our automatic ear extraction algorithm and 3D shape matching applied to biometric identification. The system has a 97.8 percent rank-one recognition rate and a 1.2 percent EER on a time-lapse data set of 415 persons with 1,386 probe images.
The system as outlined in this paper is a significant and important step beyond existing work in ear biometrics. It is fully automatic, handling preprocessing, cropping, and matching. The system addresses issues that plagued earlier attempts to use 3D ear images for recognition, specifically partial occlusion of the ear by hair and earrings.
There are several directions for future work. We presented techniques for extracting the ear image from hair and earrings, but there is currently no information on whether the system is robust when subjects wear eyeglasses. We intend to examine whether eyeglasses can cause a shape variation in the ear and whether this will affect the algorithm. Additionally, we are interested in further quantifying the effect of pose on ICP matching results. Further study should result in guidelines that provide best practices for the use of 3D images for biometric identification in production systems. Also, speed and recognition accuracy remain important issues. We have proposed several enhancements to improve the speed of the algorithm, but the algorithm might benefit from adding feature classifiers. We have both 2D and 3D data and they are registered with each other, which should make it straightforward to test multimodal algorithms.
The 2D and 3D image data sets used in this work are available to other research groups. See the Web page at www.nd.edu/~cvrl for the release agreement and details.

ACKNOWLEDGMENTS
Biometrics research at the University of Notre Dame is supported by the US National Science Foundation under Grant CNS01-30839, by the Central Intelligence Agency, by the US Department of Justice/National Institute for Justice under Grants 2005-DD-CX-K078 and 2006-IJ-CX-K041, by the National Geo-Spatial Intelligence Agency, and by UNISYS Corp. The authors would like to thank Patrick Flynn and Jonathon Phillips for useful discussions about this work. The authors would also like to thank the anonymous reviewers for providing useful feedback. These comments were important in improving the clarity and presentation of the research.

REFERENCES
[1] http://www.vtk.org, 2006.
[2] P. Besl and N. McKay, "A Method for Registration of 3-D Shapes," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 14, pp. 239-256, 1992.
[3] P.J. Besl and R.C. Jain, "Invariant Surface Characteristics for 3D Object Recognition in Range Images," Computer Vision, Graphics, and Image Processing, vol. 33, pp. 30-80, 1986.
[4] B. Bhanu and H. Chen, "Human Ear Recognition in 3D," Proc. Workshop Multimodal User Authentication, pp. 91-98, 2003.
[5] M. Burge and W. Burger, "Ear Biometrics in Computer Vision," Proc. 15th Int'l Conf. Pattern Recognition, vol. 2, pp. 822-826, 2000.
[6] K. Chang, K. Bowyer, and V. Barnabas, "Comparison and Combination of Ear and Face Images in Appearance-Based Biometrics," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 25, pp. 1160-1165, 2003.
[7] H. Chen and B. Bhanu, "Human Ear Detection from Side Face Range Images," Proc. Int'l Conf. Image Processing, pp. 574-577, 2004.
[8] H. Chen and B. Bhanu, "Contour Matching for 3D Ear Recognition," Proc. Seventh IEEE Workshop Application of Computer Vision, pp. 123-128, 2005.
[9] Y. Chen and G. Medioni, "Object Modeling by Registration of Multiple Range Images," Image and Vision Computing, vol. 10, pp. 145-155, 1992.
[10] M. Choras, "Ear Biometrics Based on Geometrical Feature Extraction," Electronic Letters on Computer Vision and Image Analysis, vol. 5, pp. 84-95, 2005.
[11] M. Choras, "Further Developments in Geometrical Algorithms for Ear Biometrics," Proc. Fourth Int'l Conf. Articulated Motion and Deformable Objects, pp. 58-67, 2006.
[12] L.D. Cohen, "On Active Contour Models and Balloons," Computer Vision, Graphics, and Image Processing: Image Understanding, vol. 53, no. 2, pp. 211-218, 1991.
[13] D. Cremers, "Statistical Shape Knowledge in Variational Image Segmentation," PhD dissertation, Dept. of Math. and Computer Science, Univ. of Mannheim, Germany, July 2002.
[14] P. Flynn and A. Jain, "Surface Classification: Hypothesis Testing and Parameter Estimation," Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 261-267, 1988.
[15] R.-L. Hsu, M. Abdel-Mottaleb, and A. Jain, "Face Detection in Color Images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, pp. 696-706, 2002.
[16] D. Hurley, M. Nixon, and J. Carter, "Force Field Energy Functionals for Image Feature Extraction," Image and Vision Computing J., vol. 20, pp. 429-432, 2002.
[17] D. Hurley, M. Nixon, and J. Carter, "Force Field Energy Functionals for Ear Biometrics," Computer Vision and Image Understanding, vol. 98, pp. 491-512, 2005.
[18] A. Iannarelli, Ear Identification. Paramont Publishing, 1989.
[19] A.E. Johnson, http://www-2.cs.cmu.edu/vmr/software/meshtoolbox, 2004.
[20] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active Contour Models," Int'l J. Computer Vision, vol. 1, pp. 321-331, 1987.
[21] J. Koenderink and A. van Doorn, "Surface Shape and Curvature Scales," Image and Vision Computing, vol. 10, pp. 557-565, 1992.
[22] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre, "XM2VTSDB: The Extended M2VTS Database," Audio- and Video-Based Biometric Person Authentication, pp. 72-77, 1999.
[23] B. Moreno, A. Sanchez, and J. Velez, "On the Use of Outer Ear Images for Personal Identification in Security Applications," Proc. IEEE Int'l Carnahan Conf. Security Technology, pp. 469-476, 1999.
[24] K. Pulli, "Multiview Registration for Large Data Sets," Proc. Second Int'l Conf. 3-D Imaging and Modeling, pp. 160-168, Oct. 1999.
[25] K. Pun and Y. Moon, "Recent Advances in Ear Biometrics," Proc. Sixth Int'l Conf. Automatic Face and Gesture Recognition, pp. 164-169, May 2004.
[26] H.-Y. Shum, M. Hebert, K. Ikeuchi, and R. Reddy, "An Integral Approach to Free-Form Object Modeling," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, pp. 1366-1370, 1997.
[27] B. Victor, K. Bowyer, and S. Sarkar, "An Evaluation of Face and Ear Biometrics," Proc. 16th Int'l Conf. Pattern Recognition, pp. 429-432, 2002.
[28] C. Xu and J. Prince, "Snakes, Shapes, and Gradient Vector Flow," IEEE Trans. Image Processing, vol. 7, pp. 359-369, 1998.
[29] P. Yan and K.W. Bowyer, "Ear Biometrics Using 2D and 3D Images," Proc. 2005 IEEE CS Conf. Computer Vision and Pattern Recognition (CVPR '05) Workshops, p. 121, 2005.
[30] P. Yan and K.W. Bowyer, "Empirical Evaluation of Advanced Ear Biometrics," Proc. 2005 IEEE CS Conf. Computer Vision and Pattern Recognition (CVPR '05) Workshops, p. 41, 2005.
[31] T. Yuizono, Y. Wang, K. Satoh, and S. Nakayama, "Study on Individual Recognition for Ear Images by Using Genetic Local Search," Proc. 2002 Congress Evolutionary Computation, pp. 237-242, 2002.
[32] Z. Zhang, "Iterative Point Matching for Registration of Freeform Curves and Surfaces," Int'l J. Computer Vision, vol. 13, pp. 119-152, 1994.
Ping Yan received the BS (1994) and MS (1999) degrees in computer science from Nanjing University and the PhD degree in computer science and engineering from the University of Notre Dame in 2006. Her research interests include computer vision, image processing, evaluation, and implementation of 2D/3D biometrics and pattern recognition. She is currently a postdoctoral researcher at the University of Notre Dame.

Kevin W. Bowyer currently serves as the chair of the Department of Computer Science and Engineering, University of Notre Dame. His research efforts have concentrated on data mining and biometrics. The Notre Dame Biometrics Research Group has been active as part of the support team for the US government's Face Recognition Grand Challenge program and Iris Challenge Evaluation program. His paper "Face Recognition Technology: Security Versus Privacy," published in IEEE Technology and Society, was recognized with a 2005 Award of Excellence from the Society for Technical Communication, Philadelphia Chapter. He is a fellow of the IEEE and a golden core member of the IEEE Computer Society. He has served as editor-in-chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence and on the editorial boards of Computer Vision and Image Understanding, Image and Vision Computing Journal, Machine Vision and Applications, International Journal of Pattern Recognition and Artificial Intelligence, Pattern Recognition, Electronic Letters on Computer Vision and Image Analysis, and Journal of Privacy Technology. He received an Outstanding Undergraduate Teaching Award from the University of South Florida College of Engineering in 1991 and the Teaching Incentive Program Awards in 1994 and 1997.