
2016 13th Conference on Computer and Robot Vision

Fusing Iris, Palmprint and Fingerprint in a Multi-Biometric Recognition System

Habibeh Naderi, Behrouz Haji Soleimani and Stan Matwin
Faculty of Computer Science, Dalhousie University, Halifax, Canada
Email: {hb283594, bh926751, st837183}@dal.ca

Babak Nadjar Araabi, Hamid Soltanian-Zadeh
Electrical and Computer Engineering Department, University of Tehran, Tehran, Iran
Email: [email protected], [email protected]

Abstract—This paper presents a trimodal biometric recognition system based on iris, palmprint and fingerprint. Wavelet transform and Gabor filter are used to extract features in different scales and orientations from iris and palmprint. Minutiae extraction and alignment are used for fingerprint matching. Six different fusion algorithms, including score-based, rank-based and decision-based methods, are used to combine the results of the three modules. We also propose a new rank-based fusion algorithm, Maximum Inverse Rank (MIR), which is robust with respect to variations in scores and also to bad rankings from a module. CASIA datasets for iris, palmprint and fingerprint are used in this study. The experiments show the effectiveness of our fusion method and of our trimodal biometric recognition system in comparison to existing multi-modal recognition systems.

Keywords—Decision Fusion; Multi-modal Biometrics; Iris Recognition; Fingerprint Recognition; Palmprint Recognition; Human Identification.

I. INTRODUCTION

A biometric system deals with inherent physical or behavioral characteristics of each individual to determine their identity. Biometric recognition has a wide variety of security-related applications such as access control, time and attendance management systems, government and law enforcement, passport-free automated border crossings, national ID systems, anti-terrorism, computer login, and cell-phone and other wireless-device based authentication [1]. Human identification using biometrics has attracted the attention of many researchers since it is very demanding, and getting near-perfect accuracy is crucial especially for security-related applications. In this study, we have built a multi-modal biometric recognition system based on iris, palmprint and fingerprint in order to increase the accuracy and reliability of the recognition.

Most of the works on iris recognition are based on using a Gabor filter to extract the iris signature [2]. A great body of work has been done on localizing the iris and detecting its boundaries using different methods such as the Hough transform, gradient-based edge detection and integro-differential operators [3]. In [4], a multi-resolution iris recognition method based on the Wavelet Packet Transform (WPT) is presented. The main motivation for using WPT is the observation that the dominant frequencies of iris texture are located in the low and middle frequency channels. Another iris recognition method is proposed in [1], in which the iris image is transformed using Haar and Kekre transformations, and the feature vector of the iris is extracted using the fractional energies of the transformed image.

A palmprint identification method was proposed in [5] using Real Gabor function (RGF) filtering to generate palm-codes. [6] introduced another palmprint recognition algorithm which extracts the Line Edge Map (LEM) of a palmprint and then utilizes the Hausdorff distance as a dissimilarity measure. In [7], another palmprint identification technique is proposed which transforms the line edge map of the palmprint using Cosine, Haar and Kekre transformations and makes use of the low frequency coefficients of the transformed image as palmprint features.

A minutiae-based fingerprint identification method is proposed in [8]. This technique first detects the minutiae points and, after aligning the query and template images, calculates the similarity score by counting the number of matched minutiae between the two images. Minutiae points are local descriptors of a fingerprint and cannot represent the global ridge structure. To overcome this issue, [9] proposed a wavelet-based algorithm which uses a bank of Gabor filters to extract a fingerprint signature after locating the reference point of the fingerprint. A more recent minutiae-based algorithm is presented in [10], which uses a descriptor-based Hough transform for minutiae alignment and matching.

Unimodal biometric systems may not achieve the required level of performance and reliability in particular applications. Problems like noise in the recorded data, non-universality, intra-class variations, inter-class similarities and spoof attacks affect the effectiveness and functionality of unimodal biometric systems. Some of these limitations can be overcome using multi-modal biometric systems, since they benefit from multiple sources of information.

Multi-modal biometric systems combine measurements from different biometric traits to enhance the strengths and mitigate the weaknesses of the individual measurements. In a multi-modal biometric system, information fusion can occur at various levels: sensor level, feature level, matching score level and decision level.

978-1-5090-2491-9/16 $31.00 © 2016 IEEE 327


DOI 10.1109/CRV.2016.18
[11] investigated the image-level fusion of palmprint and iris. It combines Band Limited Image Product (BLIP) and phase-based image matching to build a multi-biometric system. Another feature-level fusion method is proposed in [12]. It employs a frequency-based approach to make a homogeneous biometric feature vector from iris and fingerprint.

The rest of the paper is organized as follows: in Section II we explain our three unimodal biometric systems, including their preprocessing, feature extraction and matching. Section III describes our multi-modal recognition system, the fusion algorithms and their properties, and also our proposed MIR fusion method. The datasets used in this study are explained in Section IV. Experimental results and a comparison with existing multi-modal recognition systems are presented in Section V.

II. UNIMODAL BIOMETRIC SYSTEMS

A. Iris Recognition System

The iris is the annular region of the eye located between the pupil and the sclera. It has distinctive spatial patterns which make it unique for each person. Moreover, iris texture is not affected by aging and remains stable over time. Therefore, iris recognition is a very reliable and non-invasive method for human authentication.

In this paper, we have modified one of the state-of-the-art methods in iris recognition [13], which consists of three major phases: image preprocessing, feature extraction, and feature matching.

1) Image Preprocessing: In this phase, we undertake four steps: enhancement and de-noising, iris localization, polar-to-Cartesian texture mapping, and extraction of the iris valid region of interest (ROI). For enhancement and de-noising we use histogram equalization to increase the contrast. Afterwards, we detect the center of the eye and remove the light reflections from the pupil area. Then, to localize the iris, we use Canny edge detection and the Hough transform to detect the inner and outer boundaries of the iris. Since the inner and outer boundaries are represented by circles, we consider the iris area in polar coordinates and map it to Cartesian space for simplicity of further steps [14]. The iris valid ROI is then obtained by removing one third of the projected iris texture from the top. Figure 1 illustrates the preprocessing steps for iris images.

Figure 1: Different steps of our algorithm for iris image preprocessing: (a) Raw iris image (b) Histogram equalization (c) Filling pupil (d) Hough circles (e) Polar to Cartesian.

2) Feature Extraction: One of the crucial parts of every recognition system is to find a set of features that can best describe the texture and capture the most important information of the image. The Gabor filter, with its various orientation bandwidths and multi-resolution separability, has been found particularly appropriate for texture representation and discrimination. Hence, we utilize a modified 2D Gabor filter to extract the iris features effectively.

3) Feature Matching: Gabor filtering each block of the iris ROI gives us a binary code with a real and an imaginary component representing orthogonal directions. We then use the Hamming distance to measure the similarity between two sets of iris codes, comparing them bit by bit.

B. Palmprint Recognition System

The palm is the inner surface of the hand between the wrist and the fingers. Palmprint refers to the various lines on the palm, including the principal lines, the wrinkles and the fine ridges [15]. The human palmprint contains rich information which is unique for each person. This makes the palmprint a very suitable biometric feature for person recognition.

In this paper, we have implemented a multispectral palmprint recognition approach similar to the one introduced in [15], which consists of three main modules: region of interest extraction, feature extraction and decision making.

1) Region of Interest Extraction: In order to generate the ROI of a palmprint image, we need to define some reference landmarks which can describe the relative translation, scale and rotation between different images. Therefore, we follow several steps in this module to localize these landmarks. First of all, we use a thresholding technique to obtain a binary image. To do that, we plot the histogram of the gray values of the image to determine an appropriate threshold value. Then we apply a border tracing algorithm to get the contours of the hand shape. We next use binary pixel connectivity to remove all smaller objects which appear due to noise but are not connected to the hand. We also adopt the eight neighborhood directions while tracking the hand contour to normalize it. Afterwards, we use a binary hole filling algorithm to fill any holes that may exist within the hand pixels. After obtaining a binary hand image, we go through each column of it and calculate the gradient between every two consecutive rows in that column. Wherever the gradients become non-zero, we have a binary discontinuity that corresponds to the edges of the fingers. Having the edges of the fingers, we compute the gaps between the fingers in each column and find the mid points of all the gaps. By proceeding to the next columns and following the mid points of the gaps, we fit a second order polynomial to each valley's set of mid points to eventually reach the endpoint of each valley. The column-wise search finishes when we find all four endpoints between the fingers. Then we discard the endpoint of the valley between the thumb and the index finger and take the index-middle and ring-small finger endpoints as our two landmarks. Connecting these two landmarks forms a reference line segment which represents the Y-axis. To centralize the ROI over the palm, the X-axis is located at two thirds of the distance from the index-middle landmark point, perpendicular to the Y-axis. Finally, we extract the square with side length equal to 3/2 of the Y-axis segment as the palmprint's ROI. Figure 2 demonstrates all the steps in the ROI extraction of our palmprint algorithm.

Figure 2: Different steps of our algorithm to extract the ROI of a palmprint: (a) Raw palm image (b) Binary image (c) Tracking midpoints (d) Landmarks and Y-axis (e) Rotated image (f) Boundaries (g) Cropped image.

2) Feature Extraction: This module works similarly to the iris feature extraction part; we again employ a 2D Gabor filter to extract the most salient features of the palmprint's texture.

3) Decision Making: As in iris recognition, we utilize the Hamming distance to determine the degree to which two palmprints match. Each query image is assigned to the subject which gives the minimum Hamming distance.

C. Fingerprint Identification System

A fingerprint is a set of graphical configurations of ridges and furrows on the surface of a fingertip. These patterns and minute details are permanent for a given finger and unique to each individual person. Therefore, the fingerprint is one of the most widely used biometric features in human identification.

In this paper we have employed an advanced minutiae-based fingerprint recognition algorithm [8][16] which involves two fundamental steps: feature extraction and feature matching.

1) Feature Extraction: The major features of a fingerprint image are called minutiae. Minutiae points work as landmarks in a fingerprint, using which comparisons of one fingerprint with another can be made. In this research, we consider the two most important types of minutiae: ridge endings and ridge bifurcations. Each minutia is described by a quintuple containing its x and y coordinates, its orientation, and the corresponding ridge segment.

First we estimate the orientation field by looking at the local neighborhood of the pixels. Considering the fact that ridges are local maximum gray points of their neighborhood, we convert the fingerprint image to a binary one by assuming that anything that is not ridge is background. Then we use a thinning algorithm to minimize the width of the ridges; this way, minutiae extraction becomes much simpler. For each pixel on a ridge we count the number of ridge pixels among its eight neighbors. If it has only one ridge neighbor, we consider it a ridge ending, and if it has more than two, we consider it a ridge bifurcation. At this point, we have obtained a set of minutiae for each fingerprint which can be used for matching against others. Figure 3 illustrates the steps for extracting minutiae from fingerprints.

Figure 3: Different steps of our algorithm for fingerprint feature extraction: (a) Raw image (b) Binary image (c) Thinned image (d) Orientation map (e) Endpoints in red and bifurcation points in green (f) Minutiae extracted from the fingerprint.

2) Feature Matching: Since fingerprints can be recorded in different positions and rotations on the scanner, an alignment step is necessary before we can match two sets of minutiae. Therefore, the feature matching consists of two

major steps: an alignment step and a matching step. In the alignment step, we estimate the amount of translation, rotation, and scaling that may have occurred between the two images. Then the input minutiae are aligned to the template minutiae by applying the appropriate transformations. As described in [8], in the matching step we use a bounding box for each minutia of the template. This bounding box represents the acceptable region for the corresponding input minutia. This way, we can count the number of matched minutiae between two images and use it as a similarity score.

III. MULTI-MODAL RECOGNITION SYSTEMS

Designing a multimodal biometric recognition system has two substantial aspects: building the individual unimodal biometric systems (i.e. classifiers) and developing an effective fusion strategy to combine the outputs of the individual classifiers. In this study, we employed six well-known fusion methods to combine our three aforementioned biometric systems. We also propose our own fusion method, Maximum Inverse Rank (MIR), which works based on the ranking of the scores in the different classifiers. These fusion methods aggregate the information at either the decision level or the score level. Fusion algorithms can also be categorized into cardinal, ordinal and nominal methods. Cardinal fusion methods work with the raw score values generated by each classifier. Ordinal fusion methods make use of the ranking of the matching scores instead of their exact values. Lastly, nominal fusion methods use neither the score values nor their rankings, but work only with the labels produced by the classifiers.

Individual classifiers may calculate their matching scores in different ways. Hence, the outputs of different modalities are not necessarily homogeneous. For instance, one may output a similarity score while others output a distance-based score. Moreover, the output scores can also be on different numerical scales. For these reasons, a normalization step is inevitable to make all the matching scores homogeneous and comparable. Unlike our iris and palmprint modules, which provide distance-based scores, our fingerprint module outputs a similarity score. Thus, we first convert the fingerprint score to a dissimilarity score. Then we apply z-score normalization to bring the scores to the same scale. There exist other normalization techniques such as min-max, tanh and adaptive [17]. However, we chose z-score since it achieved the best recognition results.

Throughout the rest of the section we will use the following notation. Let K be the number of classifiers, C the number of subjects, p_ic the score value that the i-th classifier gives to the c-th subject, r_ic the rank of the c-th subject in the i-th classifier's ranking, and S_c the combined matching score given to the c-th class. Having S_c computed by the fusion algorithm, the class that has the maximum S_c will be the final fused decision: c* = \arg\max_c S_c. In the following, we briefly describe the chosen algorithms.

A. Borda Count

In this method each classifier provides a rank for every output class based on its score. Then the combined matching score for each class is calculated as follows:

    S_c = \sum_{i=1}^{K} (C - r_{ic} + 1)    (1)

B. Ordered Weighted Averaging (OWA)

This commonly used fusion method assigns a weight to each classifier which determines to what extent the final decision should be influenced by that classifier. The weights are assigned based on how confidently the classifiers have made their decisions. The weight for the i-th classifier is calculated as follows:

    w_i = \frac{|m_{i1} - m_{i2}|}{m_{i1}}    (2)

where m_i1 and m_i2 are the first and second greatest scores given to the output classes by the i-th classifier, respectively. OWA computes the combined matching score for each class as a weighted sum of the scores for that class:
    S_c = \frac{\sum_{i=1}^{K} w_i p_{ic}}{\sum_{i=1}^{K} w_i}    (3)

C. Greatest Max Selector

This non-trainable combiner considers the maximum matching score assigned to each class by the different classifiers as its combined matching score:

    S_c = \max_i p_{ic}    (4)

D. Greatest Mean Selector

This fusion method calculates the combined matching score for each class by averaging all the scores given to that specific class by the different classifiers:

    S_c = \frac{1}{K} \sum_{i=1}^{K} p_{ic}    (5)

E. Greatest Product Selector

In this method, the combined matching score for each class is obtained by multiplying all the scores given to that specific class by the different classifiers:

    S_c = \prod_{i=1}^{K} p_{ic}    (6)

F. Majority Voting

This is the most intuitive fusion method, which aggregates at the decision level. Each individual classifier makes its own decision and the final decision is the class that gets the maximum number of votes.

G. Maximum Inverse Rank (MIR)

In our MIR method, each classifier provides a ranking of the different classes by ordering the scores. Then the combined matching score for each class is calculated as the sum of that class's inverse ranks:

    S_c = \sum_{i=1}^{K} \frac{1}{r_{ic}}    (7)

Since our method works with inverse ranks, as the rank value increases its effect on the combined score fades gradually and becomes negligible with respect to the top ranks (i.e. low rank values). As a result, the top ranked classes contribute the most to the final combined score. This characteristic makes MIR a robust fusion algorithm that is not easily perturbed. In other words, if we have two or more fairly accurate classifiers, adding another less accurate or even random classifier will not significantly affect the overall ranking.

Among the above seven fusion methods, OWA and the other three non-trainable combiners (the greatest mean, max and product selectors) [18] belong to the cardinal fusion methods. Borda count and majority voting fall into the ordinal and nominal categories, respectively. MIR fits into the ordinal group of methods.

IV. DATASETS

A. Iris

For the iris recognition part we have used the CASIA-Iris-Interval dataset. The resolution of the images in this dataset is 320 × 280 pixels. It contains images from the left and right eyes of 249 subjects. Since the iris texture and pattern of the left and right eyes of a person are independent of each other, we cannot use both left and right images for the recognition of a person. In this paper we have only used images from the left eye. We need at least five pictures per person in order to use them as train and test pictures in 5-fold cross-validation. Since some of the subjects in the CASIA-Iris dataset have fewer than five images for the left eye, we only selected the subjects with at least five images for the left eye and randomly picked five of them. The resulting dataset contains 173 subjects and five pictures per subject.

B. Palmprint

For the palmprint recognition we have selected the CASIA-Palmprint dataset. The resolution of the images in this dataset is 640 × 480 pixels, which is higher than that of the iris images. This dataset contains 5502 palmprint images from the left and right hands of 312 subjects. Here again we have only used the images from the right hands because of the differences in texture and pattern between the right and left palmprints. Since we want to join the palmprint dataset to the iris dataset and use them in a multi-modal biometric recognition system, and we only have 173 subjects for the iris, we cannot use more than 173 subjects for the palmprint dataset. Therefore, we randomly picked 173 subjects and for each subject randomly chose five images from the right hand. The resulting palmprint dataset contains 173 subjects, five pictures each.

C. Fingerprint

As the fingerprint recognition database we have used the CASIA-Fingerprint dataset. The resolution of the images in this dataset is 328 × 356 pixels. This dataset contains 20,000 images from 500 subjects, 40 images per subject. For each subject, eight fingers (the thumb, index, middle and ring fingers) and five images per finger are captured. The five pictures of a finger have different rotations and levels of pressure on the scanner. This dataset is considered a difficult and noisy fingerprint dataset because of its significant intra-class variations. In order to be able to join this dataset with the other two and use them in a multi-modal recognition system, again we only selected 173 subjects. Moreover, for each subject we only picked the images from the right index finger, due to the independence between the prints of different fingers. Thus, the resulting fingerprint dataset consists of 173 subjects, five images each.
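For concreteness, the seven fusion rules of Section III can be sketched in Python. This is an illustrative sketch, not the authors' implementation; `scores` stands for a hypothetical K × C matrix of normalized similarity scores (one row per classifier, larger is better, assumed positive so that the OWA weights of Eq. (2) are well defined):

```python
import numpy as np

def ranks(scores):
    """Rank of each class per classifier: 1 = best (highest score).
    `scores` is a (K, C) array: K classifiers by C classes."""
    order = np.argsort(-scores, axis=1)               # best class first
    r = np.empty_like(order)
    rows = np.arange(scores.shape[0])[:, None]
    r[rows, order] = np.arange(1, scores.shape[1] + 1)
    return r

def borda(scores):                                    # Eq. (1)
    C = scores.shape[1]
    return (C - ranks(scores) + 1).sum(axis=0)

def owa(scores):                                      # Eqs. (2)-(3)
    top2 = -np.sort(-scores, axis=1)[:, :2]           # m_i1, m_i2 per classifier
    w = np.abs(top2[:, 0] - top2[:, 1]) / top2[:, 0]
    return (w[:, None] * scores).sum(axis=0) / w.sum()

def greatest_max(scores):                             # Eq. (4)
    return scores.max(axis=0)

def greatest_mean(scores):                            # Eq. (5)
    return scores.mean(axis=0)

def greatest_product(scores):                         # Eq. (6)
    return scores.prod(axis=0)

def majority_vote(scores):                            # decision-level fusion
    return np.bincount(scores.argmax(axis=1), minlength=scores.shape[1])

def mir(scores):                                      # Eq. (7)
    return (1.0 / ranks(scores)).sum(axis=0)

def decide(S):                                        # c* = argmax_c S_c
    return int(np.argmax(S))

if __name__ == "__main__":
    # 3 classifiers x 4 classes; the third classifier contradicts the others.
    p = np.array([[0.9, 0.1, 0.5, 0.3],
                  [0.8, 0.2, 0.6, 0.1],
                  [0.1, 0.9, 0.2, 0.3]])
    print(decide(mir(p)), decide(borda(p)))           # prints: 0 0
```

In this toy example the two accurate classifiers agree on class 0 and the third is adversarial; both MIR and Borda count still pick class 0, though MIR's top score (1 + 1 + 1/4 = 2.25) illustrates how low-ranked votes decay harmonically rather than linearly.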
V. RESULTS

A. Unimodal Recognition Results

For the iris and palmprint we first calculated all the pairwise intra-class and inter-class Hamming distances between the features extracted with the Gabor filter. Figures 4a and 4b illustrate the distributions of within-class and between-class distances for iris and palmprint, respectively. Figure 4c shows the intra-class and inter-class matching scores for fingerprint images. As we can see from the figure, the distributions of within-class and between-class Hamming distances for iris and palmprint are well separated. Consequently, we expect good recognition results for iris and palmprint. On the other hand, the distributions for the fingerprint overlap considerably. We can see from Figure 4c that there are no between-class fingerprint pairs with a high matching score, but there are many within-class pairs with low matching scores, because of the huge variations between different images of the same person. This leads to a low recognition rate for the fingerprint.

If we want to pick a threshold for deciding whether an iris or palmprint image belongs to a person or not, the optimal choice for such a threshold is the point where the two distributions collide. By increasing or decreasing the threshold, the false acceptance rate or the false rejection rate will increase, respectively. Fingerprint differs from the other two in that it gives a similarity score instead of a distance. The optimal choices of the threshold for iris, palmprint and fingerprint are 0.4298, 0.207 and 33, respectively.

For the purpose of evaluation we used 5-fold cross-validation, as we have five images per subject. In the first fold we randomly pick one image from each subject as a test image and the rest are considered training images. We then calculate the dissimilarity of each test image to all of the training images in the database and assign it to the subject with the highest similarity, but only if that similarity is greater than the deciding threshold. We then count the number of correct recognitions among the 173 test images to calculate the accuracy of the fold. In the second fold we randomly pick, for each subject, one of the images that has not been used for testing so far and calculate the accuracy. After the fifth fold, every single image in the database has been used for testing once. In order to calculate a reliable average and standard deviation of the accuracy we have repeated the 5-fold cross-validation 100 times. By using the optimal threshold for each biometric we achieved correct classification rates of 0.9729 ± 0.0101, 0.9690 ± 0.0066 and 0.3880 ± 0.0303 for iris, palmprint and fingerprint, respectively.

B. Multi-modal Recognition Results

There is no dataset that contains the iris, palmprint and fingerprints of the same subjects in one place. On the other hand, no relationship between the iris, palmprint and fingerprint of a person has been proven so far. Therefore, they are known to be fairly independent of each other. Thus, we can simply merge the datasets in any order and assume they belong to the same subjects. As multi-modal recognition systems, we first consider the three bimodal recognition systems (i.e. iris-palm, iris-finger and palm-finger) and then the combination of all three biometrics. For each combination of biometrics we have used all seven aforementioned decision fusion algorithms and compared the results. Z-score normalization is used to bring the matching scores from the different biometric components to the same scale. For the evaluation of the multi-modal recognition systems we again used 100 times repeated 5-fold cross-validation to get the average and standard deviation of the accuracies.

Table I presents the results of the three bimodal and also the trimodal recognition systems using the seven decision fusion methods. For each fusion algorithm the first row represents the Correct Classification Rate (CCR) and its standard deviation, and the second row shows the p-value of a t-test against the best performing multi-modal recognition system for that fusion algorithm. As we can see from the table, in most of the fusion algorithms the trimodal recognition gives the best accuracy. Since the fingerprint alone has the lowest recognition rate among the three biometrics, the bimodal systems containing the fingerprint module perform worse than iris-palmprint. Although fingerprint alone does not have high accuracy, it is still much better than random. Therefore, adding the fingerprint module to the iris-palm system to make a trimodal system will potentially increase the accuracy. We observe this in five out of seven fusion algorithms, but with the product selector and Borda count adding the fingerprint decreases the accuracy. This behavior is expected from those two fusion methods. The product selector multiplies the scores of the different modules, so if one of the modules gives an unreasonable score it will spoil the multiplication and override the others. Borda count adds up the ranks of the different modules, so if one of them has a bad ranking it will degrade the quality of the overall ranking. Therefore, trimodal biometric recognition is better than bimodal recognition if a robust and reliable fusion method is selected. Our proposed MIR fusion algorithm proves to be robust with respect to bad rankings, since its trimodal accuracy is significantly better than its bimodal ones. Moreover, the proposed MIR method is the best performing fusion method among all nominal and ordinal algorithms. By combining the three biometrics and building a trimodal recognition system we achieved an accuracy of 0.9976 ± 0.0035 using the OWA fusion algorithm.

The combinations of iris and palmprint, iris and fingerprint, and also palmprint and fingerprint as bimodal recognition systems have already been used in the literature, but the combination of iris, palmprint and fingerprint as a trimodal system has never been investigated. Table II shows the results of our multi-modal recognition systems as well as some of the existing results in the literature. The methods
Figure 4: Distribution of intra-class and inter-class Hamming distances for (a) Iris and (b) Palmprint, and (c) distribution of intra-class and inter-class matching scores for fingerprint.
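The intra-/inter-class separation summarized in Figure 4 comes from plain pairwise Hamming distance computations over binary codes. The following minimal sketch illustrates the idea with random stand-in codes (hypothetical data, not the Gabor-derived CASIA codes used in the paper); images of one "subject" are noisy copies of a subject template, so within-class distances cluster well below the ~0.5 expected between independent templates:

```python
import numpy as np

rng = np.random.default_rng(0)

def hamming(a, b):
    """Normalized Hamming distance between two binary code vectors."""
    return np.count_nonzero(a != b) / a.size

def make_subject(n_bits=512, n_images=5, flip=0.05):
    """Stand-in subject: n_images noisy copies of one binary template."""
    template = rng.integers(0, 2, n_bits)
    noise = rng.random((n_images, n_bits)) < flip     # flip ~5% of bits
    return np.where(noise, 1 - template, template)

a, b = make_subject(), make_subject()                 # two distinct subjects

# Within-class: all pairs of images of subject a.
intra = [hamming(a[i], a[j]) for i in range(5) for j in range(i + 1, 5)]
# Between-class: all pairs of one image of a with one image of b.
inter = [hamming(x, y) for x in a for y in b]

# Intra distances cluster near 2 * flip * (1 - flip); inter near 0.5,
# so a threshold between the two modes separates the subjects.
print(max(intra) < min(inter))                        # prints: True
```

This is exactly why a single decision threshold works well for iris and palmprint in the paper: the two distance distributions barely overlap, and the threshold is placed where they meet.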

Table I: Comparison of different fusion algorithms in various multi-biometric configurations. For each fusion algorithm the first row represents the average accuracy and its standard deviation, and the second row shows the p-value of a t-test against the best performing multi-biometric configuration for that fusion algorithm.

                   Iris+Palm+Finger   Iris+Palm         Iris+Finger       Palm+Finger
Greatest Max       0.9964 ± 0.0037    0.9910 ± 0.0063   0.9481 ± 0.0130   0.9191 ± 0.0158
                   1.0000             < 10^-4           < 10^-4           < 10^-4
Greatest Mean      0.9973 ± 0.0033    0.9966 ± 0.0039   0.9800 ± 0.0087   0.9783 ± 0.0098
                   1.0000             0.0733            < 10^-4           < 10^-4
Greatest Product   0.9584 ± 0.0127    0.9933 ± 0.0049   0.9151 ± 0.0159   0.9304 ± 0.0146
                   < 10^-4            1.0000            < 10^-4           < 10^-4
OWA                0.9976 ± 0.0035    0.9969 ± 0.0033   0.9799 ± 0.0095   0.8667 ± 0.0193
                   1.0000             0.0573            < 10^-4           < 10^-4
Majority Voting    0.9834 ± 0.0078    0.9579 ± 0.0099   0.6604 ± 0.0338   0.6717 ± 0.0325
                   1.0000             < 10^-4           < 10^-4           < 10^-4
Borda Count        0.8550 ± 0.0214    0.9823 ± 0.0088   0.7699 ± 0.0243   0.7690 ± 0.0239
                   < 10^-4            1.0000            < 10^-4           < 10^-4
MIR                0.9958 ± 0.0053    0.9938 ± 0.0050   0.8886 ± 0.0198   0.8855 ± 0.0209
                   1.0000             0.0004            < 10^-4           < 10^-4

shown in this table do not necessarily use the same dataset and configuration, but in general the table gives an overview of existing results. As we can see, in the bimodal configurations we have achieved a higher recognition rate than the existing systems. As a trimodal recognition system, our proposed combination of biometrics achieves better accuracy than the existing trimodal systems, even though our fingerprint dataset is a difficult and challenging one. This shows the reliability and robustness of the chosen biometrics and their combination.

VI. CONCLUSION

In this paper we have used iris, palmprint and fingerprint recognition as building blocks to create a multi-modal recognition system. Several well-known fusion algorithms have been used to combine the outputs of the individual modules. We have also proposed a new rank-based fusion method, Maximum Inverse Rank (MIR), which is more robust than other rank-based methods such as Borda count with respect to bad rankings from a module. For instance, even if one of the individual components generates a random ranking, MIR is not affected much. Rank-based methods are also less sensitive to variations of score values. These characteristics make MIR an interesting fusion method, with results comparable to the widely used OWA method. Our experiments also show the effectiveness of our trimodal recognition system, as it achieves a better recognition rate than the bimodal systems and also the existing trimodal biometric systems.

REFERENCES

[1] D. S. Thepade and P. R. Mandal, "Novel iris recognition technique using fractional energies of transformed iris images using Haar and Kekre transforms," International Journal of Scientific & Engineering Research, vol. 5, no. 4, 2014.

[2] J. Daugman, "How iris recognition works," Circuits and Systems for Video Technology, IEEE Transactions on, vol. 14, no. 1, pp. 21-30, 2004.

[3] S. Sheela and P. Vijaya, "Iris recognition methods-survey," International Journal of Computer Applications, vol. 3, no. 5, pp. 19-25, 2010.

[4] J. Wang and M. Xie, "Iris feature extraction based on wavelet packet analysis," in Communications, Circuits and Systems Proceedings, 2006 International Conference on, vol. 1. IEEE, 2006, pp. 31-34.
Table II: Comparison of accuracies of our bimodal and trimodal biometric systems with existing multi-modal recognition systems. For our proposed approach we display the best achieved accuracy as well as MIR's accuracy.

                                   3 modalities   Iris + Palm    Iris + Finger   Palm + Finger

Iris + Palm + Finger (Ours, best)  OWA 0.9976     OWA 0.9969     Mean 0.9800     Mean 0.9783
Iris + Palm + Finger (Ours, MIR)   MIR 0.9958     MIR 0.9938     MIR 0.8886      MIR 0.8855
Face + Finger + Finger Vein        [19] 0.9940    [20] 0.9906    [12] 0.9429     [21] 0.9667
Face + Finger + Hand Geometry      [22] 0.9860    [23] 0.9442    [24] 0.7200     [25] 0.8800
Face + Palm + Speech               [26] 0.9700

[5] A. Kumar and H. C. Shen, "Palmprint identification using palmcodes," in Image and Graphics (ICIG'04), Third International Conference on. IEEE, 2004, pp. 258-261.

[6] F. Li, M. K. Leung, and X. Yu, "Palmprint identification using Hausdorff distance," in Biomedical Circuits and Systems, 2004 IEEE International Workshop on. IEEE, 2004, pp. S3-3.

[7] S. D. Thepade and S. S. Gudadhe, "Palm print identification using fractional coefficient of transformed edge palm images with cosine, Haar and Kekre transform," in Information & Communication Technologies (ICT), 2013 IEEE Conference on. IEEE, 2013, pp. 1232-1236.

[8] A. K. Jain, L. Hong, S. Pankanti, and R. Bolle, "An identity-authentication system using fingerprints," Proceedings of the IEEE, vol. 85, no. 9, pp. 1365-1388, 1997.

[9] A. K. Jain, S. Prabhakar, L. Hong, and S. Pankanti, "Filterbank-based fingerprint matching," Image Processing, IEEE Transactions on, vol. 9, no. 5, pp. 846-859, 2000.

[10] A. A. Paulino, J. Feng, and A. K. Jain, "Latent fingerprint matching using descriptor-based Hough transform," Information Forensics and Security, IEEE Transactions on, vol. 8, no. 1, pp. 31-45, 2013.

[11] J. Liu, Y. Hou, J. Wang, Y. Li, Q. Wang, J. Man, H. Xie, and J. He, "Fusing iris and palmprint at image level for multi-biometrics verification," in Fourth International Conference on Machine Vision (ICMV 11). International Society for Optics and Photonics, 2011, pp. 83501Q-83501Q.

[12] V. Conti, C. Militello, F. Sorbello, and S. Vitabile, "A frequency-based approach for features fusion in fingerprint and iris multimodal biometric identification systems," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 40, no. 4, p. 384, 2010.

[13] H. Meng and C. Xu, "Iris recognition algorithms based on Gabor wavelet transforms," in Mechatronics and Automation, Proceedings of the 2006 IEEE International Conference on. IEEE, 2006, pp. 1785-1789.

[14] J. M. Ali and A. E. Hassanien, "An iris recognition system to enhance e-security environment based on wavelet theory," AMO-Advanced Modeling and Optimization, vol. 5, no. 2, pp. 93-104, 2003.

[15] Z. Khan, A. Mian, and Y. Hu, "Contour code: Robust and efficient multispectral palmprint encoding for human recognition," in Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011, pp. 1935-1942.

[16] L. Hong, Y. Wan, and A. Jain, "Fingerprint image enhancement: algorithm and performance evaluation," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 20, no. 8, pp. 777-789, 1998.

[17] R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain, "Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 27, no. 3, pp. 450-455, 2005.

[18] L. I. Kuncheva, Combining Pattern Classifiers: Methods and Algorithms. John Wiley & Sons, 2004.

[19] M. He, S.-J. Horng, P. Fan, R.-S. Run, R.-J. Chen, J.-L. Lai, M. K. Khan, and K. O. Sentosa, "Performance evaluation of score level fusion in multimodal biometric systems," Pattern Recognition, vol. 43, no. 5, pp. 1789-1800, 2010.

[20] A. Meraoumia, S. Chitroub, and A. Bouridane, "Multimodal biometric person recognition system based on fingerprint & finger-knuckle-print using correlation filter classifier," in Communications (ICC), 2012 IEEE International Conference on. IEEE, 2012, pp. 820-824.

[21] K. Krishneswari and S. Arumugam, "Multimodal biometrics using feature fusion," Journal of Computer Science, vol. 8, no. 3, pp. 431-435, 2012.

[22] A. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, no. 12, pp. 2270-2285, 2005.

[23] S. Hariprasath and T. Prabakar, "Multimodal biometric recognition using iris feature extraction and palmprint features," in Advances in Engineering, Science and Management (ICAESM), 2012 International Conference on. IEEE, 2012, pp. 174-179.

[24] A. Baig, A. Bouridane, F. Kurugollu, and G. Qu, "Fingerprint-iris fusion based identification system using a single Hamming distance matcher," in 2009 Symposium on Bio-inspired Learning and Intelligent Systems for Security. IEEE, 2009, pp. 9-12.

[25] B. V. Evelyn, "Multi-modal biometric template security: Fingerprint and palmprint based fuzzy vault," Journal of Biometrics & Biostatistics, vol. 3, no. 6, pp. 150-155, 2012.

[26] R. Raghavendra, "Novel mixture model-based approaches for person verification using multimodal biometrics," Signal, Image and Video Processing, vol. 7, no. 5, pp. 1015-1028, 2013.
