
Hindawi Publishing Corporation

Mathematical Problems in Engineering


Volume 2014, Article ID 585139, 15 pages
http://dx.doi.org/10.1155/2014/585139

Research Article
Strategies for Exploiting Independent Cloud Implementations of
Biometric Experts in Multibiometric Scenarios

P. Peer,1 Ž. Emeršič,1 J. Bule,1 J. Žganec-Gros,2 and V. Štruc3


1 Faculty of Computer and Information Science, University of Ljubljana, Tržaška cesta 25, 1000 Ljubljana, Slovenia
2 Alpineon d.o.o., Ulica Iga Grudna 15, 1000 Ljubljana, Slovenia
3 Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, 1000 Ljubljana, Slovenia

Correspondence should be addressed to V. Štruc; [email protected]

Received 18 October 2013; Revised 8 January 2014; Accepted 22 January 2014; Published 13 March 2014

Academic Editor: Yue Wu

Copyright © 2014 P. Peer et al. This is an open access article distributed under the Creative Commons Attribution License, which
permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cloud computing represents one of the fastest growing areas of technology and offers a new computing model for various
applications and services. This model is particularly interesting for the area of biometric recognition, where scalability, processing
power, and storage requirements are becoming a bigger and bigger issue with each new generation of recognition technology. Next
to the availability of computing resources, another important aspect of cloud computing with respect to biometrics is accessibility.
Since biometric cloud services are easily accessible, it is possible to combine different existing implementations and design new
multibiometric services that, next to almost unlimited resources, also offer superior recognition performance and, consequently,
ensure improved security for their client applications. Unfortunately, the literature on the best strategies for combining existing
implementations of cloud-based biometric experts into a multibiometric service is virtually nonexistent. In this paper, we try to close
this gap and evaluate different strategies for combining existing biometric experts into a multibiometric cloud service. We analyze
the (fusion) strategies from different perspectives such as performance gains, training complexity, or resource consumption and
present results and findings important to software developers and other researchers working in the areas of biometrics and cloud
computing. The analysis is conducted based on two biometric cloud services, which are also presented in the paper.

1. Introduction

Biometric technology is slowly gaining ground and is making its way into our daily lives. This development is exemplified best by the latest generation of smart-phones, which is starting to adopt fingerprint technology as a means of improving security and is bringing biometrics closer to our minds than ever. While biometric technology for personal devices, such as notebooks and mobile phones, is slowly gaining traction, its broader use on the Internet is still quite modest. The main reason for this pertains mainly to open issues with respect to the accessibility and scalability of the existing biometric technology [1]. Scalability issues are also of relevance to other deployment domains of biometrics, such as forensics or law enforcement, where biometric databases are expected to grow significantly over the next few years to accommodate several hundred million (or even billions of) identities [2]. To meet these demands, it is necessary to develop scalable biometric technology, capable of operating on large amounts of data, and to ensure sufficient storage capacity and processing power [1].

A possible solution for the outlined issues is the development of biometric technology for the cloud, where the cloud platform ensures appropriate scalability, a sufficient amount of storage, and parallel processing capabilities. With the widespread availability of mobile devices, the cloud also provides an accessible entry point for various applications and services relying on mobile clients [1]. The enormous potential of cloud-based biometric solutions has also been identified by various companies, which are currently developing or have only recently released their biometric cloud services to the market.

While a cloud platform can ensure the necessary infrastructure and resources for the next generation of biometric technology, the technology itself must ensure the best possible recognition (e.g., verification) performance. In this respect, it is necessary to stress that biometric techniques relying on a single biometric trait (i.e., unimodal biometric experts) can only be improved to a certain extent in terms of performance. From a certain point forward, it may be either too costly or not yet feasible to further improve their performance. However, if performance is of paramount importance, the use of multibiometrics may represent a feasible solution.

The term multibiometrics refers to biometric technology that exploits several biometric experts and, hence, relies on several biometric traits of the same individual for identity inference. Multibiometric systems can offer substantial improvements in terms of accuracy, as well as improvements in terms of flexibility and resistance to spoofing attacks. They also introduce higher tolerance to noise and data corruption and reduce the failure-to-enroll rate [3].

In this paper, we address the problem of building (cloud-based) multibiometric systems from existing implementations of unimodal biometric experts. Building (cloud-based) multibiometric systems from existing implementations of biometric experts, instead of developing the system from scratch, has several advantages. The most obvious advantage is the reduction in effort needed to implement a multibiometric system. Furthermore, it is possible to choose the single-expert systems from different vendors according to the desired specifications and performance capabilities. On the other hand, it is necessary to understand the process of combining the single-expert systems from the perspectives of potential performance gains, additional resources needed, implementation complexity, and the like. Ideally, we would like to combine different (existing) biometric cloud services into a multibiometric service with significant performance gains and hence large improvements in security, but without the need for large modifications of existing client applications. Such multibiometric services would be of great interest to existing end-users of biometric cloud services and would exhibit significant market value.

To better understand the problem outlined above, we present in this paper an analysis of different (fusion) strategies for combining existing cloud-based biometric experts. To be as thorough as possible, we conduct the analysis under the assumption that only limited information, such as classification decisions or similarity scores, can be obtained from the existing cloud services (i.e., from the unimodal biometric experts). The fusion strategies are analyzed from different perspectives such as performance gains, training complexity, or resource consumption. The analysis is carried out based on two biometric cloud services developed in the scope of the KC CLASS project [1], the first being a face recognition cloud service and the second being a fingerprint recognition cloud service. The results of the analysis are important to engineers, software developers, and other researchers working in the areas of biometrics, cloud computing, and other related areas.

The rest of the paper is structured as follows. In Section 2, prior work in the area of multibiometrics is surveyed. In Section 3, the baseline (cloud-based) unimodal biometric experts are introduced. In Section 4, different strategies for combining the biometric experts are presented and their characteristics are discussed. In Section 5, a detailed analysis of all fusion strategies is presented and nonperformance-related characteristics of the fusion strategies are also presented and elaborated on. The paper is concluded with some final comments and directions for future work in Section 6.

2. Related Work

The problem of combining unimodal biometric experts into multibiometric systems has been studied extensively in the literature (see, e.g., [3-9]). In general, the process of combining unimodal systems (usually referred to as fusion) can be conducted at the following levels [3].

(i) The Signal or Sensor Level. Sensor-level fusion can benefit multisample systems which capture multiple snapshots of the same biometric. The process commonly referred to as mosaicing, for example, captures two or more impressions of the same biometric trait and creates an enhanced composite biometric sample that is better suited for recognition [3, 10].

(ii) The Feature Level. Fusion at the feature level involves integrating the evidence of several biometric feature vectors of the same individual [3] obtained from multiple information sources. It is generally believed that fusion at this level ensures better recognition results than fusion at the later levels (i.e., the decision or matching score levels), as the feature sets typically contain richer information about the raw biometric samples [3, 11].

(iii) The Matching Score Level. The matching scores still contain relatively rich information about the input biometric samples, and it is also rather easy to combine matching scores of different experts. Consequently, information fusion at the matching score level is the most commonly used approach in multibiometric systems [3]. However, the matching score data from different biometric experts may not be homogeneous, may not be on the same numerical scale, and may not follow the same probability distribution [3]. These characteristics make score-level fusion a demanding problem.

(iv) The Decision Level. This type of fusion is sensible when the unimodal biometric experts provide access only to the final stage in the process of biometric recognition, namely, the final classification result [3]. Different techniques can be considered at this level, for example, the AND- and OR-rules, majority voting, weighted majority voting, and others [3, 9, 12].

Among the different types of fusion techniques studied in the literature, fusion techniques applied at the matching score level are by far the most popular. This is also evidenced by Table 1, where a short overview of recent studies on biometric fusion is presented. Note that matching score level fusion techniques clearly dominate the research in this area.
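The decision-level options listed under item (iv) above are simple enough to sketch directly. Below is a minimal, illustrative implementation of majority voting and weighted majority voting over binary expert decisions; the function names are ours and do not come from any system discussed in this paper.

```python
# Decision-level fusion by (weighted) majority voting.
# Each expert decision is binary: 1 = claim accepted, 0 = claim rejected.

def majority_vote(decisions):
    """Accept the identity claim when more than half of the experts accept it."""
    return 1 if sum(decisions) > len(decisions) / 2 else 0

def weighted_majority_vote(decisions, weights):
    """Like majority_vote, but each expert's opinion carries a weight."""
    support = sum(w * d for w, d in zip(weights, decisions))
    return 1 if support > sum(weights) / 2 else 0

print(majority_vote([1, 1, 0]))                      # -> 1
print(weighted_majority_vote([1, 0, 0], [5, 1, 1]))  # -> 1 (first expert dominates)
```

Note that with equal weights the weighted variant reduces to plain majority voting; the AND- and OR-rules used later in the paper are the two extreme special cases of such threshold-based voting.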

Table 1: A few multibiometric systems discussed in the recent literature.

Author and year | Biometric modalities | Fusion level | Approach used
Nandakumar et al., 2008 [4] | Face, fingerprint, speech, iris | Matching score level | Likelihood ratio-based fusion
Maurer and Baker, 2008 [38] | Fingerprint, speech | Matching score level, quality-based fusion | Quality estimates via a Bayesian belief network (modified sum-rule)
Poh et al., 2009 [39] | Face, fingerprint, iris | Matching score level | Benchmarking 22 different biometric fusion algorithms
Lin and Yang, 2012 [40] | Face | Matching score level | Enhanced score-level fusion based on boosting
Tao and Veldhuis, 2009 [21] | Face (two face recognition algorithms) | Matching score level, decision level | Optimal fusion scheme at decision level by AND- or OR-rule (score level: sum-rule, likelihood ratio, SVM)
Vatsa et al., 2010 [41] | Face (two face recognition algorithms) | Matching score level | Sequential fusion algorithm (likelihood ratio test + SVM)
Poh et al., 2010 [42] | Face, fingerprint | Matching score level | Quality-based score normalization
Poh et al., 2010 [43] | Face, fingerprint, iris | Matching score level | Addressing missing values in a multimodal system with the neutral point method
Nanni et al., 2011 [44] | Fingerprint, palm print, face | Matching score level | Likelihood ratio, SVM, AdaBoost of neural networks
Poh and Kittler, 2012 [45] | Face, fingerprint | Matching score level, quality-based fusion | A general Bayesian framework
Nagar et al., 2012 [46] | Face, fingerprint, iris | Feature level | Feature level fusion framework using biometric cryptosystems
Tao and Veldhuis, 2013 [47] | Face, speech | Matching score level | Native likelihood ratio via ROC

[Figure 1: Illustration of the basic architecture of the biometric cloud services. (a) Cloud implementation of the face recognition service: a face recognition engine (library) behind an MVC4 application on the cloud platform, backed by a database, queried by clients with a claimed identity and a face sample (PIN = 001?). (b) Cloud implementation of the fingerprint recognition service: a fingerprint recognition engine (library) behind an MVC3 application, backed by a database.]

When exploiting existing implementations of biometric experts, such as in our case, not all of the listed fusion levels are possible. Fusion at the first two levels requires data to be extracted right after the sample acquisition or the feature extraction process, which is usually not possible, as existing (e.g., commercial) services commonly do not expose APIs for accessing the required data (i.e., signals or features). Existing cloud services typically only allow access to the decision and/or the matching score level. Hence, these two levels also form the basis for our assessment presented in the experimental section.

3. Baseline Systems

To be able to evaluate different strategies for combining independent implementations of biometric experts into a multibiometric cloud service, we first require access to unimodal biometric cloud services. In this section, we briefly introduce the basics of the unimodal face and fingerprint services that were used for the evaluation presented in the experimental section.

3.1. The Cloud Implementations. As we can see from Figure 1, both (i.e., face and fingerprint) services share a similar architecture, which is more or less characteristic of biometric cloud services. Both services feature a background worker (typically implemented in the form of a programming library), which represents the recognition engine of the cloud service. The biometric database is implemented in the form of an SQL database, while the communication with potential clients of the services is conducted through a RESTful interface.
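To make this interaction concrete, the sketch below shows how a client might prepare a request for such a RESTful verification service and parse its reply. The endpoint URL, payload fields, and response format are illustrative assumptions only; they are not the actual API of the services described here.

```python
import json

FACE_VERIFY_URL = "https://example.com/face/verify"  # hypothetical endpoint

def build_verification_request(claimed_pin, sample_b64, want="score"):
    """Build a JSON payload asking the service for a 'score' or a 'decision'."""
    return json.dumps({
        "pin": claimed_pin,    # the claimed identity
        "sample": sample_b64,  # base64-encoded biometric sample
        "return": want,
    })

def parse_verification_response(body):
    """Pull the matching score (or the class label) out of a JSON response."""
    data = json.loads(body)
    return data["score"] if "score" in data else data["decision"]

# In a real client, the payload would be POSTed to FACE_VERIFY_URL (e.g., with
# urllib.request) and the returned scores fed into one of the fusion
# strategies of Section 4.
print(parse_verification_response('{"score": 0.42}'))  # -> 0.42
```

The important design point, reflected in the paper's analysis, is that a client can ask either for the final class label (decision-level information) or for the raw matching score (score-level information).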

Note that both services are implemented and optimized for the task of biometric verification (and not biometric identification) and as such are capable of returning either the verification result (i.e., the class label) or a matching score indicating the similarity between the input biometric sample and the template of the claimed identity. These characteristics are common to most cloud-based biometric verification systems and define the boundaries for the possible strategies that may be explored for combining existing implementations of biometric cloud services.

3.2. The Face Recognition Engine. The core component of the face recognition service is the face recognition engine, which relies on Gabor-LBP features (LBP: Local Binary Patterns). Below, we briefly summarize the basics.

(i) Face Detection, Localization, and Preprocessing. Facial images are first preprocessed to remove illumination artifacts and then subjected to the Viola-Jones face detector to extract the facial region [13]. Next, facial landmark localization with PSEF correlation filters is performed to find anchor points in the faces that serve as the basis for geometric normalization of the images [14]. The normalized images are rescaled to a fixed size of 128 × 128 pixels and finally subjected to the photometric normalization technique presented by Tan and Triggs in [15].

(ii) Feature Extraction and Supporting Representation. The main feature representation used by the face recognition service relies on Gabor magnitude responses and Local Binary Patterns (LBPs). Here, the normalized facial images are first filtered with a bank of 40 Gabor filters. The Gabor magnitude responses are then encoded with the LBP operator and local LBP histograms are computed from patches of all computed responses. The local histograms are ultimately concatenated into a global feature vector that forms the template for the given identity. To improve recognition performance, a vector of the first few DCT coefficients of the normalized facial image is also added to the template.

(iii) Verification. In the verification stage, the claim of identity is validated by comparing the template computed from the test/live/input image to the template of the claimed identity. Here, the Bhattacharyya distance is used to measure the similarity between the histograms of the LBP-encoded Gabor magnitude responses, and a simple whitened cosine similarity measure is used to match the DCT coefficients. Both similarity scores are then stacked together with image-quality measures (see [16] for details on the Q-stack) and the newly combined feature vector is subjected to an AdaBoost classifier to obtain the final matching score, based on which identity inference is conducted.

3.3. The Fingerprint Recognition Engine. The core component of the fingerprint recognition service is the minutiae-based fingerprint recognition engine first presented in [17]. Below, we briefly summarize the basics.

(i) Segmentation and Image Enhancement. Fingerprint scans are first subjected to a segmentation procedure, where the fingerprint pattern is separated from the background. Through the segmentation procedure, the processing time is shortened and the matching accuracy is increased. Since fingerprints are often degraded due to various external factors, the fingerprint patterns are enhanced by binarizing the captured fingerprint samples and ridge profiling [18].

(ii) Minutiae Extraction. The minutiae pattern is obtained from the binarized profiled image by thinning of the ridge structures, removal of structure imperfections from the thinned image, and the final process of minutiae extraction. For each detected minutia, its type (bifurcation or ending), spatial coordinates (x, y), and the orientation of the ridge containing the minutia are stored in the template for each given identity [18].

(iii) Matching and Verification. Given a claimed identity and an input fingerprint sample, the claim of identity is validated by comparing the template computed from the test/live/input sample and the template corresponding to the claimed identity using a minutiae-based matching algorithm. Here, two fingerprints match when a sufficient number of minutiae match by type, location, and orientation.

4. Fusion Strategies

The task of combining different experts into a multiexpert system is common to many problems in the areas of pattern recognition and machine learning and is not restricted solely to the area of biometric recognition. Nevertheless, each problem has its specifics and it is important to understand the fusion task in the context of the specific problem one is trying to solve. The cloud implementations of the two biometric experts presented in the previous section were designed for the problem of biometric verification. We, therefore, commence this section by formalizing the problem of biometric verification and introducing the fusion task with respect to the presented formalization. In the second part of the section, we introduce different fusion strategies and elaborate on their characteristics.
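Both experts described in Section 3 ultimately reduce a verification attempt to comparing a matching score against a decision threshold, with small scores denoting high similarity. A minimal sketch of this rule follows; the function name and the toy threshold are illustrative, not taken from the services themselves.

```python
# Threshold-based verification: the services return a dissimilarity score, so
# SMALL scores mean HIGH similarity, and the claim of identity is accepted
# when the score does not exceed the decision threshold.

C1, C2 = 1, 0  # binary-encoded class labels: genuine = 1, impostor = 0

def verify(score, threshold):
    """Classify one verification attempt from a single expert's score."""
    return C1 if score <= threshold else C2

print(verify(0.12, 0.3), verify(0.57, 0.3))  # -> 1 0
```

The choice of threshold sets the operating point of the system (the security versus convenience trade-off), which is exactly the degree of freedom the fusion strategies below either preserve or give up.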

4.1. Prerequisites. Let us assume that there are N identities registered in the given biometric system, that these identities are labeled ω_1, ω_2, ..., ω_i, ..., ω_N, and that there is a total of J biometric experts at our disposal. Furthermore, let us assume that we are given a feature vector x^(j) of the jth expert, where j ∈ {1, 2, ..., J}, and a claimed identity ω_i from the pool of the N enrolled identities (in general, there could be a different number of identities enrolled in each of the J biometric experts, but for the sake of simplicity, we assume that this number, N, is the same for all experts). The aim of biometric verification is to assign the pair (ω_i, x^(j)) to class C_1 (a genuine/client claim) if the claim of identity is found to be genuine and to class C_2 (an illegitimate/impostor claim) otherwise. Commonly, the validity of the identity claim is determined based on the so-called matching score d^(j), which is generated by comparing the feature vector x^(j) to the template corresponding to the claimed identity ω_i [19, 20]; that is,

(ω_i, x^(j)) → C_1 if d^(j) ≤ θ, and C_2 otherwise, for j ∈ {1, 2, ..., J},   (1)

where θ stands for the decision threshold. Here, we assume that small matching scores correspond to large similarities and large matching scores correspond to small similarities.

In multibiometric systems, several (i.e., J) biometric experts are available for classifying the given pair (ω_i, x^(j)), where i ∈ {1, 2, ..., N} and j ∈ {1, 2, ..., J}, with respect to (1). Thus, after the verification process, the following families of results are typically available:

C = {C_k^(j) | j = 1, 2, ..., J; k ∈ {1, 2}},
D = {d^(j) | j = 1, 2, ..., J},   (2)

where C_k^(j) denotes the classification result and d^(j) is the matching score produced by the jth expert; k ∈ {1, 2}.

Applying different functions to the results of the verification procedure in (2) gives rise to different fusion procedures. Those procedures that represent valid fusion strategies with respect to the two cloud implementations presented in the previous section are presented in the remainder of the paper. Note also that we will assume that J = 2 from this point on, since we only have two cloud services at our disposal. All presented strategies are, however, easily extended to the case where J > 2.

4.2. Decision-Level Fusion Rules. The first strategy one may consider for combining the verification results of two independent cloud implementations of biometric experts is to combine the results at the decision level. The decision level represents the most basic way of combining the expert opinions of several biometric systems. The experts are simply queried for the classification result C_k^(j) (for j = 1, 2, ..., J and k ∈ {1, 2}) and the results are then combined into the final decision:

ψ: {C_k^(1), C_k^(2), ..., C_k^(J)} → C_k^fused, where k ∈ {1, 2},   (3)

where C_k^fused is the combined classification result and k ∈ {1, 2}.

Several options are in general available for choosing the fusion function ψ, but two of the most common are the AND- and OR-rules [21]. In the context of the two cloud-based biometric experts at our disposal, the two rules, which assume that the class labels C_1 and C_2 are binary encoded, that is, C_1 = 1 and C_2 = 0, are defined as

ψ_AND(C_k^(1), C_k^(2)) = C_k^fused = C_k^(1) & C_k^(2),
ψ_OR(C_k^(1), C_k^(2)) = C_k^fused = C_k^(1) | C_k^(2),   (4)

where & and | denote the logical AND and OR operators, the superscript indices (1) and (2) stand for the face and fingerprint experts, respectively, and k ∈ {1, 2}.

While the decision-level fusion strategies are easy to implement and offer a straightforward way of consolidating expert opinions, potential client applications relying on these strategies do not possess the flexibility of freely choosing the operating point of the combined biometric system. Instead, the operating point is determined by the operating points of the single-expert systems.

4.3. Matching-Score-Level Fusion Rules. The second strategy that can be exploited to combine the verification results of several biometric experts is fusion at the matching score level using fixed fusion rules [5]. Most cloud-based biometric services (including our two) can be queried for a similarity score rather than the final classification decision. The client application then implements the classification procedure (see (1)) using a desired value of the decision threshold θ. Such an operating mode is implemented in most biometric services, as it gives the client applications the possibility of choosing their own operating points and, hence, selecting a trade-off between security and user convenience.

The general form for consolidating several expert opinions at the matching score level is

φ: {d^(1), d^(2), ..., d^(J)} → d_fused,   (5)

where φ is the fusion function and d_fused ∈ R represents the combined matching score that can be exploited for the final identity inference using (1). Note that the decision threshold for the fused scores needs to be recalculated for all desired operating points and can no longer be found in the specifications of the cloud services.

For our assessment presented in the experimental section, we implemented two fixed matching-score-level fusion rules, namely, the weighted sum-rule and the weighted product-rule. The two rules are defined as follows:

φ_SUM(d^(1), d^(2)) = d_fused = w d^(1) + (1 − w) d^(2),   (6)
φ_PRO(d^(1), d^(2)) = d_fused = (d^(1))^w ⋅ (d^(2))^(1−w),   (7)

where the superscript indices (1) and (2) again denote the face and fingerprint experts, respectively, d_fused represents the combined matching score, and the real-valued w ∈ [0, 1] stands for the weighting factor balancing the relative importance of the face and fingerprint scores. Note here that the weighted product-rule in (7) could also be represented as a weighted log-sum fusion rule, making it very similar to the weighted sum-rule in (6). However, as shown in [9], the two fusion rules are based on different assumptions. The interested reader is referred to [9] for more information on this topic.
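The rules in (4), (6), and (7) translate directly into code. A minimal sketch for the two-expert case follows; the function names are ours, and the scores are assumed to already lie on a common range.

```python
# Decision-level rules from (4): class labels are binary encoded (C1 = 1, C2 = 0).

def and_rule(c1, c2):
    """Accept the claim only if both experts accept it."""
    return c1 & c2

def or_rule(c1, c2):
    """Accept the claim if at least one expert accepts it."""
    return c1 | c2

# Fixed score-level rules from (6) and (7): d1 and d2 are the (normalized)
# matching scores of the face and fingerprint experts, and w lies in [0, 1].

def weighted_sum(d1, d2, w):
    return w * d1 + (1 - w) * d2

def weighted_product(d1, d2, w):
    return (d1 ** w) * (d2 ** (1 - w))

print(and_rule(1, 0), or_rule(1, 0))  # -> 0 1
print(weighted_sum(0.2, 0.6, 0.5))
```

Note how the AND-rule can only make the combined operating point stricter and the OR-rule only more permissive, whereas sweeping w and the decision threshold over the fused scores exposes a whole family of operating points.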

It should be emphasized here that the matching scores of independent biometric systems are typically of a heterogeneous nature: they are not on the same numerical range. Score normalization is, therefore, used to transform the scores to a common range prior to combining them [22, 23]. In this work, min-max normalization is used, as it is quite simple and typically gives satisfactory results. Min-max normalization transforms all the scores to a common range of [0, 1].

4.4. Fusion with Classifiers. The third strategy one may consider when combining biometric experts (again at the matching score level) is to use pattern classifiers. As with the fixed fusion rules presented in the previous section, it is first necessary to obtain similarity scores from the cloud services rather than classification labels. Rather than combining the scores into a single scalar value using fixed rules, the matching scores are concatenated into "new" feature vectors, which are then classified into one of two classes: "genuine" or "impostor" (i.e., classes C_1 and C_2 in (1)) [3]. In this setting, the classifier is actually used to indirectly learn the relationship between the vector of matching scores provided by the biometric experts and the a posteriori probabilities of the genuine and impostor classes [3]. Once trained, the discriminant function associated with the given classifier can be used to produce combined matching scores.

The described procedure can be formalized as follows:

ξ: {d^(1), d^(2), ..., d^(J)} → d_fused = δ(x′),   (8)

where x′ = [d^(1), d^(2), ..., d^(J)]^T denotes the new feature vector and δ(⋅) stands for the discriminant function of the given classifier.

The classifier learns a decision boundary between the two classes, which can be either linear or nonlinear, depending on the choice of classifier. In Figure 2, where a toy example is presented, the impostor class is shown in red and the genuine class in blue. The straight line represents a linear decision boundary between the two classes, whereas the curved line represents a nonlinear boundary. Note that during verification, any new matching score vector is classified into the genuine/impostor class depending on which side of the decision boundary it falls. Most existing implementations of the most common classifiers return the class label instead of the output of their discriminant functions. However, with a little bit of tweaking, most existing implementations can be altered to return the output of the discriminant function as well.

[Figure 2: The linear and nonlinear decision boundaries are displayed, dividing the genuine (in blue) and impostor (in red) classes.]

Different from the fixed fusion rules, classifiers are capable of learning the decision boundary irrespective of how the feature vectors are generated. Hence, the output scores of the different experts can be nonhomogeneous (distance or similarity metrics, different numerical ranges, etc.) and no processing is required (in theory) prior to training the classifier [3].

In the experimental section, we assess the relative usefulness of the classifier-based fusion strategy based on two classifiers: a Support Vector Machine (SVM) with a linear kernel [24, 25] and a Multilayer Perceptron (MLP) [26]. The former falls into the group of linear classifiers (in our case), while the latter represents an example of a nonlinear classifier.

5. Experiments

5.1. Database, Protocol, and Performance Measures. To evaluate the different fusion strategies, a bimodal chimeric database is constructed from the XM2VTS and FVC2002 databases [27, 28]. A chimeric database is a database in which biometric modalities from different databases are combined and assigned common identities. Since the biometric samples in the initial databases are not taken from the same identities, this procedure creates artificial (chimeric) subjects. Note that such a procedure is reasonable due to the fact that biometric modalities are generally considered to be independent of one another (e.g., a facial image says nothing about the fingerprint of the subject and vice versa) [29]. The constructed chimeric database consists of facial imagery and fingerprint data of 200 subjects, with each subject having a total of 8 biometric samples for each modality. A few sample images from the chimeric database are presented in Figure 3.

For the experiments, the data is divided into two disjoint parts of 100 subjects (with 8 biometric samples per modality). The first part is used for learning the open hyperparameters of the fusion procedures (e.g., fusion weights, decision thresholds, etc.), while the second is used for evaluating the fusion techniques on unseen testing data with fixed hyperparameters. Each of the experimental runs consists of enrolling each of the 800 biometric samples (i.e., face and fingerprint samples) from the given part into the corresponding (biometric) cloud service and matching the same 800 samples against all enrolled samples. This experimental setup results in 640,000 matching scores (800 × 800) for each of the training and testing parts, out of which 6400 correspond to genuine verification attempts and 633,600 correspond to illegitimate verification attempts. Note that prior to the experiments the matching scores are normalized using min-max score normalization [23].
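The min-max step used in this protocol, and the way normalized score pairs are assembled into the feature vectors x′ consumed by the classifier-based fusion of Section 4.4, can be sketched as follows. The score populations below are invented purely for illustration.

```python
# Min-max score normalization: each expert's scores are mapped to [0, 1]
# using the minimum and maximum observed on that expert's TRAINING scores.

def min_max_params(train_scores):
    """Learn the normalization range on training data."""
    return min(train_scores), max(train_scores)

def min_max_normalize(score, lo, hi):
    """Map a raw matching score into [0, 1]."""
    return (score - lo) / (hi - lo)

face_train = [1.0, 3.0, 5.0]       # toy raw score populations (illustrative)
finger_train = [10.0, 30.0, 50.0]

f_lo, f_hi = min_max_params(face_train)
g_lo, g_hi = min_max_params(finger_train)

# The fused feature vector x' = [d(1), d(2)]^T for one verification attempt;
# pairs like this would be fed to a trained classifier (e.g., SVM or MLP).
x_prime = [min_max_normalize(3.0, f_lo, f_hi),
           min_max_normalize(20.0, g_lo, g_hi)]
print(x_prime)  # -> [0.5, 0.25]
```

One practical caveat, consistent with the protocol above: the normalization range must be learned on the training part only, since using evaluation scores to set lo and hi would leak information into the test protocol.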
Mathematical Problems in Engineering 7

IDs: 0001 0002 0003 0004 0005

Figure 3: Sample images from the constructed chimeric database.

(FAR) and the false rejection error rate (FRR). The two error rates are defined as [30–33]

    FAR(θ) = |{d_imp | d_imp ≤ θ}| / |{d_imp}|,
    FRR(θ) = |{d_cli | d_cli > θ}| / |{d_cli}|,    (9)

where {d_cli} and {d_imp} represent the sets of client and impostor scores generated during the experiments, |·| denotes a cardinality measure, and θ represents the decision threshold. The inequalities assume that dissimilarity measures were used to produce the matching scores (i.e., it is assumed that large similarities between biometric samples result in small values of the matching scores and vice versa).

Note that both error rates, FAR and FRR, are functions of the decision threshold θ. Selecting different values of the decision threshold therefore results in different error rates that form the basis for various performance metrics. In this paper, three such metrics are used, namely, the equal error rate (EER), which is defined with the decision threshold that ensures equal values of the FAR and FRR on the training set, that is,

    EER = (1/2) (FAR(θ_eer) + FRR(θ_eer)),    (10)

where

    θ_eer = argmin_θ |FAR(θ) − FRR(θ)|;    (11)

the verification rate at a false acceptance error rate of 0.1% ([email protected]), which is defined as

    [email protected] = 1 − FRR(θ_ver01),    (12)

where

    θ_ver01 = argmin_θ |FAR(θ) − 0.001|;    (13)

and the verification rate at a false acceptance error rate of 0.01% ([email protected]):

    [email protected] = 1 − FRR(θ_ver001),    (14)

where

    θ_ver001 = argmin_θ |FAR(θ) − 0.0001|.    (15)

The presented performance metrics are typically computed based on the client and impostor score populations generated on the training data. To obtain an estimate of the generalization capabilities of a given fusion technique on unseen data, the thresholds θ_eer, θ_ver01, and θ_ver001 are applied to the client and impostor score populations generated on the evaluation data. Thus, during test time, the FAR and FRR defined in (9) are computed based on the fixed thresholds and then combined into the half-total error rate (HTER) as follows:

    HTER(θ_k) = (1/2) (FAR_e(θ_k) + FRR_e(θ_k)),    (16)

where k ∈ {eer, ver01, ver001} and the subscript index e indicates that the error rates FAR and FRR were computed on the evaluation set. Alternatively, it is also possible to evaluate the verification rate and the false acceptance error rate at a specific decision threshold set during training; that is,

    VER_e(θ_k) = 1 − FRR_e(θ_k), with FAR_e(θ_k),    (17)

where, in our case, k again stands for k ∈ {eer, ver01, ver001}.

In addition to the quantitative performance metrics, performance curves are also used to present the results of the experiments. Specifically, Receiver Operating Characteristic (ROC) curves and Expected Performance Curves (EPC) are generated during the experiments to better highlight the differences among the assessed techniques [34]. ROC curves plot the dependency of the verification rate (VER) and the false acceptance rate (FAR) with respect to varying values of the decision threshold θ. ROC curves are usually plotted using a linear scale on the x- and y-axes; however, to better highlight the differences among the assessed procedures at lower values of the false acceptance rate, a log scale is used for the x-axis in this paper.

To generate an EPC, two separate image sets are needed. The first image set, that is, the training set, is used to find a decision threshold that minimizes the following weighted error (WER) for different values of α:

    WER(θ, α) = α FAR(θ) + (1 − α) FRR(θ),    (18)
Figure 4: ROC curves of the experiments: face recognition (a) and fingerprint recognition (b).

where α denotes a weighting factor that controls the relative importance of the FAR and FRR in the above expression. Next, the second image set, that is, the testing/evaluation image set, is employed to estimate the value of the HTER at the given α with the estimated value of the decision threshold θ. Plotting the HTER (obtained on the testing/evaluation image set) against different values of the weighting factor α produces an EPC [35].

Table 2: Quantitative comparison of the biometric modalities.

    Procedure     EER      [email protected]   [email protected]
    Face          0.0720   0.6394        0.3748
    Fingerprint   0.0163   0.9691        0.9556

5.2. Single Expert Assessment. Before evaluating the feasibility and efficiency of different strategies for combining the cloud implementations of the biometric experts, it is necessary to establish the baseline performance of the unimodal biometric systems, that is, the face and fingerprint systems. To this end, the training data from our chimeric database is used to generate all of the relevant performance metrics and performance curves introduced in the previous section (note that this data represents the training data for the fusion techniques; for the unimodal systems, it is still valid testing/evaluation data). The results of the experiments are presented in the form of ROC curves in Figure 4 and with quantitative performance metrics in Table 2.

As expected, the fingerprint recognition system performs much better than the face recognition system, especially at the lower values of the false acceptance error rates. At the equal error rate, for which the cloud-based biometric experts were also optimized, the face modality results in an error of around 7%, while the fingerprint modality ensures an error rate of a little more than 1.5%.

It is interesting to look at the distribution of the client and impostor similarity scores of the single experts in the fingerprint-score versus face-score space; see Figure 5. Since the optimal decision boundary appears to be different from a horizontal or vertical line (which would correspond to conducting identity inference based on only one of the biometric experts), performance gains (at least on the matching score level) can be expected by combining the two experts. Different strategies for doing so are evaluated in the remainder.

5.3. Assessing Decision-Level Strategies. One possible strategy for combining the outputs of the cloud implementations of the face and fingerprint recognition experts is to consolidate the opinions of the two experts at the decision level. In this setting, the cloud services are asked to make a decision regarding the validity of the identity claim made with the given biometric sample. Since no similarity scores are sent to the client application, the operating point (i.e., the ratio of the FAR and FRR) of the cloud recognition service cannot be changed and is determined by the settings on the service side. In our case, the operating point of the cloud services is set to the equal error rate (EER).

Two decision-level fusion schemes are implemented for the experiments, namely, the AND- and OR-rules, as described in Section 4. The results of the experiments (on the training data) are shown in Table 3 in the form of various performance metrics. Note that it is not possible to generate ROC curves for this series of experiments, since no similarity scores are available.

Several observations can be made based on the presented results. Both fusion strategies result in similar performance in terms of the HTER, with the difference that the AND-rule favors small FARs, while the OR-rule favors small FRRs. When compared to the performance of the single
Figure 5: Plot of scores in the face-fingerprint-score plane.
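The two decision-level rules described above admit a one-line sketch each; the boolean inputs below stand in for the accept/reject decisions returned by the cloud services and are illustrative, not the actual service API.

```python
def and_rule(decisions):
    # AND-rule: accept the identity claim only if every expert accepts.
    # This pushes the combined system toward a lower FAR (and a higher FRR).
    return all(decisions)

def or_rule(decisions):
    # OR-rule: accept if at least one expert accepts.
    # This pushes the combined system toward a lower FRR (and a higher FAR).
    return any(decisions)

# Example: the face expert rejects a claim that the fingerprint expert accepts;
# the AND-rule rejects it, the OR-rule accepts it.
face_accepts, fingerprint_accepts = False, True
```

Because the services emit only these hard decisions, the combined operating point is fixed by the rule choice, which is exactly the limitation discussed above.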

Table 3: Quantitative comparison of the decision-level fusion rules (training data).

    Procedure   HTER     FAR      FRR
    AND-rule    0.0440   0.0011   0.0869
    OR-rule     0.0440   0.0862   0.0014

expert systems presented in Table 2, the decision-level fusion schemes performed better than the face expert but a little worse than the fingerprint expert. All in all, the decision-level fusion rules did not prove to be particularly useful for improving the recognition performance of the single expert systems; they should rather be considered a way of shifting the operating point of the combined system toward lower FARs or FRRs in cases where only decision-level outputs can be obtained from the cloud implementations of the biometric experts.

5.4. Assessing Matching-Score-Level Strategies. When assessing matching-score-level strategies for combining the two biometric experts, we first focus on the fixed fusion rules and present experiments related to classifier fusion strategies in the second part of this section.

The performance of the sum and product fusion rules is first examined on the training part of the constructed chimeric database. Before reporting the final performance, it is necessary to find (or learn) appropriate values for the open hyperparameter w of the sum and product fusion rules ((6) and (7)). To this end, the value of w is gradually increased from 0 to 1 with a step size of 0.1, and the values of the EER, [email protected], and [email protected] are computed for each value of w. The results of this series of experiments are shown in Figure 6. Note that both the sum and product fusion rules peak in their performance at a value of w = 0.3. This value is, therefore, selected for both fusion rules in all the following experiments.

To compare the performance of the sum and product fusion rules with fixed hyperparameters to that of the single expert systems, we generate ROC curves from the scores obtained on the training part of the chimeric database. The performance curves are shown in Figure 7 and the corresponding performance measures in Table 4. Note that the sum and product fusion rules perform very similarly; both are significantly better than the unimodal (single expert) systems. The EER, for example, falls by more than 50% with both fusion rules when compared to the better performing single expert system. While these results are encouraging, it needs to be taken into account that the ROC curves for the fusion techniques shown in Figure 7 were generated by optimizing the open hyperparameter w on the same data that was used for constructing the curves in the first place. This means that the performance of the fusion techniques may be somewhat biased. To analyze this issue, we present comparative experiments on the evaluation/testing data of the constructed chimeric database in Section 5.5.

Next to the fixed fusion rules, the second possibility for combining the similarity scores of the single expert systems is to stack the similarity scores into a two-dimensional feature vector and use the constructed vector with a classifier. To evaluate this possibility, the training part of the chimeric database is used to train SVM (Support Vector Machine [24, 25]) and MLP (Multilayer Perceptron [26]) classifiers. For the SVM classifier, a linear kernel is selected, and for the MLP classifier, an architecture with two hidden layers (each with 5 neurons) is chosen. This setting results in a classifier capable of finding a linear decision boundary between the client and impostor classes (i.e., the SVM) and a classifier capable of setting the decision boundary in a nonlinear manner (i.e., the MLP). Once trained, both classifiers are applied to the training data to compute performance metrics and construct performance curves. The results of this series of experiments are shown in Figure 8 and Table 5. Note that with most existing software solutions for training SVM and MLP classifiers (see, e.g., [36, 37]), a little tweaking is needed to obtain the similarity scores required for constructing ROC curves, as the classifiers usually output only class labels. When looking at the performance of the SVM and MLP classifiers, it is obvious that they did not ensure any additional performance improvements when compared to the fixed fusion rules. While this could be expected for the linear SVM classifier, it is somewhat unexpected that the MLP classifier did not improve the performance over
Figure 6: EERs, [email protected], and [email protected] for the sum and product fusion rules for different values of the hyperparameter w.

the fixed sum and product fusion rules. It seems that, without any additional information, such as quality or confidence measures, it is extremely difficult to significantly improve upon the performance of the fixed fusion rules on our chimeric database.

5.5. Generalization Capabilities of Different Strategies. The last series of verification experiments aimed at examining the generalization capabilities of the fusion strategies on the evaluation/testing part of the chimeric database. Here, all decision thresholds and hyperparameters of all assessed fusion strategies are fixed on the training part of the data. The testing/evaluation data is then used to generate the performance metrics and curves, which are shown in Figure 9 and Table 6 for this series of experiments.

The first thing to notice from the presented results is the fact that, similar to the training data, all fusion strategies (except for the decision-level fusion techniques) result in performance improvements when compared to either of the two single expert systems. Next, the performance achieved on the training data is also achieved on the evaluation/testing data, which suggests that no overfitting took place during training. Especially important here is also the fact that all results are very well calibrated, indicating that a desired operating point (i.e., the ratio between the FAR and FRR) can easily be selected and maintained even after the fusion process.

Last but not least, it is also important to stress that, among the matching-score-level fusion strategies, no particular strategy has a clear advantage in terms of performance on
Figure 7: ROC curves for the fusion rules (training data).

Figure 8: ROC curves for fusion techniques with classifiers (training data).
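The weighted sum- and product-rules evaluated above can be sketched as follows. The convex-combination form with a single weight w is an assumption consistent with the description of the rules (the full definitions appear in (6) and (7), earlier in the paper), and the scores are assumed to be min-max normalized to [0, 1].

```python
import numpy as np

def sum_rule(d_face, d_finger, w=0.3):
    # Weighted sum rule: convex combination of the two expert scores.
    # w = 0.3 is the value found by the grid search shown in Figure 6.
    return w * d_face + (1.0 - w) * d_finger

def product_rule(d_face, d_finger, w=0.3):
    # Weighted product rule: geometric counterpart of the sum rule
    # (scores assumed to lie in (0, 1] after min-max normalization).
    return (d_face ** w) * (d_finger ** (1.0 - w))

# Grid over which the open hyperparameter w is searched (step size 0.1).
W_GRID = np.round(np.arange(0.0, 1.01, 0.1), 1)
```

In the experiments, each value in the grid is evaluated on the training scores and the w with the best EER is kept fixed for all subsequent runs, which is how the value w = 0.3 was selected.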

Table 4: Quantitative comparison of the fusion rules with the learned parameter w (w = 0.3 for both techniques)—training data.

    Procedure      EER      [email protected]   [email protected]
    Product-rule   0.0066   0.9866        0.9733
    Sum-rule       0.0063   0.9875        0.9736

Table 5: Quantitative comparison of fusion techniques with classifiers (training data).

    Procedure   EER      [email protected]   [email protected]
    SVM         0.0064   0.9869        0.9733
    MLP         0.0063   0.9881        0.9731

our chimeric database. This suggests that criteria other than performance should also be taken into account when selecting the best strategy for combining different cloud-based biometric experts.

5.6. Subjective Analysis. In the previous sections, different fusion strategies for combining cloud implementations of biometric experts were assessed only from the perspective of performance. However, when combining different biometric experts into a multibiometric system, other criteria are important as well. One may, for example, be interested in how existing client applications need to be modified to enable multiexpert biometric recognition, how difficult it is to reach a specific operating point in the multibiometric system, whether additional libraries need to be included in the client applications, and so forth. To evaluate the fusion strategies based on other (nonperformance-related) criteria as well, a grade (low—L, medium—M, or high—H) was assigned to each strategy in seven different categories (note that these grades are of a subjective nature and reflect the perception of the authors). These categories include the following.

(i) Training complexity: the complexity of the training procedure for the given fusion strategy (e.g., training the classifier, setting hyperparameters, etc.).

(ii) Run-time complexity: the run-time complexity required to apply the given fusion strategy to the data (e.g., applying the trained classifier to the data, etc.).

(iii) Storage requirements: the memory requirements for storing the metadata needed by the given fusion strategy (e.g., for storing support vectors, hyperparameters, etc.).

(iv) Performance gains: a performance-related criterion that reflects the results of the experiments conducted in the previous sections.

(v) Client disturbance: relating to the amount of work needed to rewrite existing client applications and the need for including additional external resources (e.g., programming libraries, etc.).

(vi) Calibration: referring to the generalization capabilities of the given fusion strategy and the capability of ensuring the same operating point across different data sets.

(vii) OP complexity: relating to the complexity of setting a specific operating point for the multibiometric system (e.g., the EER operating point, etc.).

Our ratings are presented in Table 7 in the form of grades and in Figure 10 in the form of Kiviat graphs. In the generated graphs, a larger area represents a more suitable fusion strategy according to the selected criteria (note that the same weight has been given to all criteria here; if a certain criterion is considered more important than others, this could be reflected in the final Kiviat graphs).

Note that the fixed fusion rules (i.e., the sum- and product-rules) turned out to be suited best for combining
Figure 9: EPC curves of the experiments on the evaluation data: (a) EPC curves for the unimodal experts; (b) EPC curves for the fusion techniques.
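The evaluation protocol behind each EPC point in Figure 9 can be sketched as follows: for a given α, pick the threshold minimizing the WER of (18) on the training scores, then report the HTER at that fixed threshold on the evaluation scores. The synthetic score sets and the exhaustive threshold search below are illustrative assumptions, not the exact implementation used in the experiments.

```python
import numpy as np

def wer(far_value, frr_value, alpha):
    # Weighted error rate, cf. Eq. (18).
    return alpha * far_value + (1.0 - alpha) * frr_value

def epc_point(train_cli, train_imp, eval_cli, eval_imp, alpha):
    # Threshold selection on the training scores, HTER reported on the
    # evaluation scores (dissimilarity convention of Eq. (9)).
    def far(imp, t):
        return float(np.mean(np.asarray(imp) <= t))
    def frr(cli, t):
        return float(np.mean(np.asarray(cli) > t))
    candidates = np.sort(np.concatenate([train_cli, train_imp]))
    theta = min(candidates,
                key=lambda t: wer(far(train_imp, t), frr(train_cli, t), alpha))
    return 0.5 * (far(eval_imp, theta) + frr(eval_cli, theta))
```

Sweeping α over (0, 1) and plotting the resulting HTER values produces the EPC, so the curve reflects an unbiased, training-set-selected operating point at every α.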

Table 6: Quantitative comparison of the fusion rules on the evaluation/testing data. Here, the symbol n/a stands for the fact that this value is not computable.

                     @θ_eer                      @θ_ver01                    @θ_ver001
    Procedure      HTER_e   VER_e    FAR_e     HTER_e   VER_e    FAR_e     HTER_e   VER_e    FAR_e
    Face           0.0716   0.9280   0.0712    0.1808   0.6394   0.0010    0.3126   0.3748   9.95e−5
    Fingerprint    0.0133   0.9897   0.0162    0.0096   0.9819   0.0012    0.0129   0.9744   1.78e−4
    Sum-rule       0.0045   0.9973   0.0064    0.0034   0.9944   0.0011    0.0061   0.9880   1.14e−4
    Product-rule   0.0046   0.9972   0.0065    0.0033   0.9945   0.0011    0.0063   0.9875   1.25e−4
    AND-rule       0.0411   0.0811   0.0012    n/a      n/a      n/a       n/a      n/a      n/a
    OR-rule        0.0411   0.0013   0.0863    n/a      n/a      n/a       n/a      n/a      n/a
    SVM            0.0046   0.9972   0.0063    0.0032   0.9947   0.0011    0.0060   0.9881   1.36e−4
    MLP            0.0046   0.9973   0.0066    0.0036   0.9939   0.0012    0.0062   0.9877   1.10e−4
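The SVM and MLP rows above instantiate the score-stacking formulation of (8): the two expert scores form a feature vector x′ that a trained classifier maps to a genuine/impostor decision. A minimal sketch with a hand-set linear discriminant follows; the weights and bias are illustrative assumptions, not the classifiers trained in the experiments.

```python
import numpy as np

def stack_scores(face_score, fingerprint_score):
    # Build the feature vector x' = [d_face, d_fingerprint]^T, cf. Eq. (8).
    return np.array([face_score, fingerprint_score], dtype=float)

def linear_discriminant(x, weights, bias):
    # delta(x') for a linear classifier: the signed distance to the decision
    # boundary; positive values fall on the "genuine" side here.
    return float(np.dot(weights, x) + bias)

def classify(face_score, fingerprint_score, weights, bias):
    # Verification decision: which side of the boundary the stacked
    # score vector falls on (similarity scores assumed in [0, 1]).
    x = stack_scores(face_score, fingerprint_score)
    return "genuine" if linear_discriminant(x, weights, bias) > 0 else "impostor"
```

A trained SVM would learn the weights and bias from the training scores; an MLP replaces the linear discriminant with a nonlinear one, bending the boundary as in Figure 5.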
Table 7: Comparison of the fusion strategies based on the perception of the authors and the conducted experimentation. High, medium, and low are denoted by H, M, and L, respectively.

    Fusion           Training     Run-time     Storage        Performance   Client                      OP
    technique        complexity   complexity   requirements   gains         disturbance   Calibration   complexity
    Sum-rule         L            L            L              M             L             H             L
    Product-rule     L            L            L              M             L             H             L
    AND-rule         L            L            L              L             L             L             H
    OR-rule          L            L            L              L             L             L             H
    SVM              H            M            M              M             M             H             M
    Neural network   M            L            L              M             M             H             L

Figure 10: Kiviat graphs of the fusion techniques generated based on the selected evaluation criteria: (a) sum- and product-rule fusion; (b) decision-level fusion; (c) SVM fusion; (d) MLP fusion.

different cloud implementations of biometric experts into a multibiometric system, as they provide a good trade-off between the complexity of the training and run-time procedures, the expected performance gains, flexibility in setting the operating point, calibration, and the need for modifying existing client applications.

6. Conclusion

We have presented an analysis of different strategies for combining independent cloud implementations of biometric experts into a multibiometric recognition system. For the analysis, we used our own implementations of cloud-based face and fingerprint verification services and a specially constructed chimeric database. The results of our analysis suggest that fixed fusion rules that combine the single expert systems at the matching score level are the most suitable for the studied task, as they provide a good trade-off between expected performance gains and other important factors such as training complexity, run-time complexity, calibration, and client disturbance. As part of our future work, we plan to examine possibilities of including confidence measures in
the fusion strategies, as these have the potential to further improve the recognition performance of the combined multibiometric system. We also plan to develop biometric cloud services combining more than two single expert systems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The work presented in this paper was supported in part by the National Research Program P2-0250(C) Metrology and Biometric Systems, by the National Research Program P2-0214 Computer Vision, and by the EU, European Regional Fund, within the framework of the Operational Programme for Strengthening Regional Development Potentials for the Period 2007–2013, Contract no. 3211-10-000467 (KC Class), the European Union's Seventh Framework Programme (FP7-SEC-2011.20.6) under Grant agreement no. 285582 (RESPECT), and the European Union's Seventh Framework Programme (FP7-SEC-2010-1) under Grant agreement no. 261727 (SMART). The authors additionally appreciate the support of COST Actions IC1106 and IC1206.

References

[1] P. Peer, J. Bule, J. Zganec Gros, and V. Štruc, "Building cloud-based biometric services," Informatica, vol. 37, no. 1, pp. 115–122, 2013.
[2] E. Kohlwey, A. Sussman, J. Trost, and A. Maurer, "Leveraging the cloud for big data biometrics: meeting the performance requirements of the next generation biometric systems," in Proceedings of the IEEE World Congress on Services (SERVICES '11), vol. 1, pp. 597–601, July 2011.
[3] A. Ross, K. Nandakumar, and A. Jain, Handbook of Multibiometrics, Springer Science+Business Media, New York, NY, USA, 2006.
[4] K. Nandakumar, Y. Chen, S. C. Dass, and A. K. Jain, "Likelihood ratio-based biometric score fusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 342–347, 2008.
[5] L. Kuncheva, Combining Pattern Classifiers: Methods and Algorithms, Wiley-Interscience, Hoboken, NJ, USA, 2004.
[6] A. B. Khalifa and N. B. Amara, "Bimodal biometric verification with different fusion levels," in Proceedings of the 6th International Multi-Conference on Systems, Signals and Devices (SSD '09), vol. 1, pp. 1–6, March 2009.
[7] L. I. Kuncheva, "A theoretical study on six classifier fusion strategies," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 281–286, 2002.
[8] F. Alkoot and J. Kittler, "Experimental evaluation of expert fusion strategies," Pattern Recognition Letters, vol. 20, no. 11–13, pp. 1361–1369, 1999.
[9] J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, "On combining classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226–239, 1998.
[10] X. Xia and L. O'Gorman, "Innovations in fingerprint capture devices," Pattern Recognition, vol. 36, no. 2, pp. 361–369, 2003.
[11] A. Ross and R. Govindarajan, "Feature level fusion using hand and face biometrics," in Proceedings of the SPIE Conference on Biometric Technology for Human Identification, vol. 5779, pp. 196–204, March 2005.
[12] L. Lam and C. Y. Suen, "Application of majority voting to pattern recognition: an analysis of its behavior and performance," IEEE Transactions on Systems, Man, and Cybernetics A, vol. 27, no. 5, pp. 553–568, 1997.
[13] P. Viola and M. J. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, 2004.
[14] V. Štruc, J. Zganec Gros, and N. Pavešić, "Principal directions of synthetic exact filters for robust real-time eye localization," in Lecture Notes in Computer Science, vol. 6583, pp. 180–192, 2011.
[15] X. Tan and B. Triggs, "Enhanced local texture feature sets for face recognition under difficult lighting conditions," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1635–1650, 2010.
[16] K. Kryszczuk and A. Drygajlo, "Improving biometric verification with class-independent quality information," IET Signal Processing, vol. 3, no. 4, pp. 310–321, 2009.
[17] U. Klopčiv and P. Peer, "Fingerprint-based verification system: a research prototype," in Proceedings of the 17th International Conference on Systems, Signals and Image Processing (IWSSIP '10), A. Conci and F. Leta, Eds., vol. 1, pp. 150–153, 2010.
[18] J. Fieriez-Aguilar, J. Ortega-Garcia, J. Gonzalez-Rodriguez, and J. Bigun, "Kernel-based multimodal biometric verification using quality signals," in Proceedings of the Biometric Technology for Human Identification Conference, pp. 544–554, April 2004.
[19] A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 4–20, 2004.
[20] V. Štruc and N. Pavešić, "The corrected normalized correlation coefficient: a novel way of matching score calculation for LDA-based face verification," in Proceedings of the 5th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '08), pp. 110–115, Shandong, China, October 2008.
[21] Q. Tao and R. Veldhuis, "Threshold-optimized decision-level fusion and its application to biometrics," Pattern Recognition, vol. 42, no. 5, pp. 823–836, 2009.
[22] S. Chaudhary and R. Nath, "A multimodal biometric recognition system based on fusion of palmprint, fingerprint and face," in Proceedings of the International Conference on Advances in Recent Technologies in Communication and Computing (ARTCom '09), pp. 596–600, October 2009.
[23] A. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, no. 12, pp. 2270–2285, 2005.
[24] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
[25] V. Vapnik, Statistical Learning Theory, Wiley-Interscience, New York, NY, USA, 1998.
[26] C. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, Oxford, UK, 1995.
[27] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre, "Xm2vtsdb: the extended m2vts database," in Proceedings of the 2nd International Conference on Audio and Video-based Biometric Person Authentication (AVBPA '99), vol. 1, pp. 72–77, 1999.
[28] D. Maio, D. Maltoni, R. Cappelli, J. Wayman, and A. Jain, "FVC2002: second fingerprint verification competition," in Proceedings of the 16th International Conference on Pattern Recognition, vol. 1, pp. 811–814, 2002.
[29] N. Poh and S. Bengio, "Using chimeric users to construct fusion classifiers in biometric authentication tasks: an investigation," in Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP '06), vol. 1, pp. V1077–V1080, May 2006.
[30] R. Gajsek, F. Mihelic, and S. Dobrisek, "Speaker state recognition using an hmm-based feature extraction method," Computer Speech & Language, vol. 27, no. 1, pp. 135–150, 2013.
[31] R. Gajšek, S. Dobrišek, and F. Mihelič, Analysis and Assessment of State Relevance in Hmm-Based Feature Extraction Method, Lecture Notes in Computer Science, Springer, New York, NY, USA, 2012.
[32] B. Vesnicer and F. Mihelic, "The likelihood ratio decision criterion for nuisance attribute projection in gmm speaker verification," EURASIP Journal of Advances in Signal Processing, vol. 2008, Article ID 786431, 11 pages, 2008.
[33] M. Günther, A. Costa-Pazo, C. Ding et al., "The 2013 face recognition evaluation in mobile environment," in Proceedings of the 6th IAPR International Conference on Biometrics, June 2013.
[34] S. Bengio and J. Marithoz, "The expected performance curve: a new assessment measure for person authentication," in Proceedings of the Speaker and Language Recognition Workshop (Odyssey), pp. 279–284, Toledo, Spain, 2004.
[35] V. Štruc and N. Pavešić, "The complete gabor-fisher classifier for face recognition under variable lighting," EURASIP Journal of Advances in Signal Processing, vol. 2010, no. 31, pp. 1–26, 2010.
[36] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 1–27, 2011.
[37] S. Nissen, "Implementation of a fast artificial neural network library (fann)," Tech. Rep., Department of Computer Science, University of Copenhagen, København, Denmark, 2003.
[38] D. E. Maurer and J. P. Baker, "Fusing multimodal biometrics with quality estimates via a Bayesian belief network," Pattern Recognition, vol. 41, no. 3, pp. 821–832, 2008.
[39] N. Poh, T. Bourlai, J. Kittler et al., "Benchmarking quality-dependent and cost-sensitive score-level multimodal biometric fusion algorithms," IEEE Transactions on Information Forensics and Security, vol. 4, no. 4, pp. 849–866, 2009.
[40] W.-Y. Lin and C.-J. Yang, "An enhanced biometric score fusion scheme based on the adaboost algorithm," in Proceedings of the International Conference on Information Security and Intelligence Control (ISIC '12), pp. 262–265, 2012.
[41] M. Vatsa, R. Singh, A. Noore, and A. Ross, "On the dynamic selection of biometric fusion algorithms," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 470–479, 2010.
[42] N. Poh, J. Kittler, and T. Bourlai, "Quality-based score normalization with device qualitative information for multimodal biometric fusion," IEEE Transactions on Systems, Man, and Cybernetics A, vol. 40, no. 3, pp. 539–554, 2010.
[43] N. Poh, D. Windridge, V. Mottl, A. Tatarchuk, and A. Eliseyev, "Addressing missing values in kernel-based multimodal biometric fusion using neutral point substitution," IEEE Transactions on Information Forensics and Security, vol. 5, no. 3, pp. 461–469, 2010.
[44] L. Nanni, A. Lumini, and S. Brahnam, "Likelihood ratio based features for a trained biometric score fusion," Expert Systems with Applications, vol. 38, no. 1, pp. 58–63, 2011.
[45] N. Poh and J. Kittler, "A unified framework for biometric expert fusion incorporating quality measures," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 1, pp. 3–18, 2012.
[46] A. Nagar, K. Nandakumar, and A. K. Jain, "Multibiometric cryptosystems based on feature-level fusion," IEEE Transactions on Information Forensics and Security, vol. 7, no. 1, pp. 255–268, 2012.
[47] Q. Tao and R. Veldhuis, "Robust biometric score fusion by naive likelihood ratio via receiver operating characteristics," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 305–313, 2013.