Image Processing Methods for Biometric Applications

Ana Cernea and J. L. Fernández-Martínez
1 Introduction
2 Biometric Applications
Fig. 1 Iris samples for iris recognition from the CASIA iris database.
The first operational automatic iris recognition system was developed by Daugman in 1993 [11, 12], using 2D Gabor wavelets. The Hamming distance between the iris codes of the test and training iris images was used for recognition. Another important approach, proposed by Wildes, consists of computing a binary edge map followed by a Hough transform to detect circles. For matching two irises, Wildes applies a Laplacian of Gaussian filter at multiple scales to produce a template and computes the normalized correlation as a similarity measure [36]. A detailed survey of recent efforts on iris recognition can be found in [7].
Signature recognition can be divided into two main areas depending on the method of data acquisition: on-line and off-line signature recognition [29], [40]. In off-line signature recognition, the signature is available on a document, which is scanned to obtain its digital image representation. On-line signature recognition, also known as dynamic signature recognition, uses special hardware, such as a digitizing tablet or a pressure-sensitive pen, to acquire the signature in real time.
The most popular pattern recognition techniques applied to signature recognition are the dynamic time warping (DTW) algorithm, Gaussian mixture models, and hidden Markov models. Detailed surveys of the current techniques can be consulted in [31, 20].
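As an illustration of the first of these techniques, the following minimal sketch (ours, not taken from the surveys cited above) computes the DTW distance between two one-dimensional pen-trajectory signals, such as the x-coordinate sequences of two signatures:

import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best alignment ending at (i, j): match, insertion or deletion.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

# Example: two signals tracing similar shapes at different speeds.
s1 = np.sin(np.linspace(0, 2 * np.pi, 50))
s2 = np.sin(np.linspace(0, 2 * np.pi, 80))
print(dtw_distance(s1, s2))  # small value: the shapes align well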
The face recognition problem consists in identifying, given a new incoming face image, the corresponding individual from a database of face images of known individuals. The learning database (Bd) contains N face images corresponding to C individuals (or classes). We will assume that for each of these individuals the learning database contains $n_p$ poses that are used to perform the classification of the new input images, so the total number of poses in Bd verifies $N = n_p \cdot C$. Typically we have $n_p = 5$ poses for each individual.
In order to perform the classification, images are represented by a feature vector $v_k \in \mathbb{R}^s$, obtained for each method of analysis. Faces are described by different kinds of image attributes: statistical features, spectral features, and image segmentation/regional descriptor features (texture-based features). The dimension s of $v_k$ depends on the dimensionality reduction achieved by each technique. In the case of the spectral methods we have used the same energy threshold $\theta = 99.5\%$ to achieve the dimensionality reduction. All these attributes can be calculated for gray-scale and color images, both locally and globally. In the case of global analysis the attribute features are calculated over the whole image. In the case of local features, the analysis is performed by dividing the images into blocks; for each block the local attributes are computed, and the final feature vector is formed by merging all the local attributes into a unique vector, always in the same order. Figure 4 shows a sketch of this process. The use of local features increases the dimension of the attribute space; nevertheless, it is also expected to increase the discriminative power of the analysis. In this paper we have used a partition of the images into 8 × 4 blocks, although finer subdivisions could also be adopted.
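A minimal sketch of this local analysis follows; the per-block attribute function extract_attribute is a placeholder for any of the attributes described in the next sections:

import numpy as np

def local_features(image, n_rows=8, n_cols=4, extract_attribute=None):
    """Split an image into n_rows x n_cols blocks and concatenate
    the attribute vectors of all blocks in a fixed (row-major) order."""
    h, w = image.shape[:2]
    features = []
    for i in range(n_rows):
        for j in range(n_cols):
            block = image[i * h // n_rows:(i + 1) * h // n_rows,
                          j * w // n_cols:(j + 1) * w // n_cols]
            features.append(extract_attribute(block))
    return np.concatenate(features)

# Example with a simple attribute: the block mean intensity.
img = np.random.randint(0, 256, (112, 92))
v = local_features(img, extract_attribute=lambda b: np.array([b.mean()]))
print(v.shape)  # (32,): one value per block in an 8 x 4 partition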
The cosine criterion is related to the existence of a dot product (for instance, for p = 2). Although both criteria applied to the same image are equivalent, they provide different recognition accuracies when applied over the whole set of images (testing database). In the cases where $\cos(I, I_k)$ provides higher accuracies, this criterion performs better with the normalized images $n_I$, $n_{I_k}$, since
$$\cos(I, I_k) = \frac{v_I}{\|v_I\|} \cdot \frac{v_k}{\|v_k\|} = n_I \cdot n_{I_k}.$$
The algorithm presented in this paper belongs to the class of unsupervised classification methods, since the distance $d(J, I_k)$ and the cosine $\cos(J, I_k)$ are defined ad hoc; that is, no information from the learning database classes is used to optimize the classification of the testing database.
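Under these definitions, the classification rule reduces to a nearest-neighbor search in attribute space. A minimal sketch follows, assuming the feature vectors of the learning database have been precomputed and stacked row-wise:

import numpy as np

def classify(v_test, V_train, labels, criterion="cos", p=2):
    """Assign v_test the label of its nearest training feature vector,
    using either the L_p distance or the cosine similarity."""
    if criterion == "cos":
        sims = (V_train @ v_test) / (
            np.linalg.norm(V_train, axis=1) * np.linalg.norm(v_test))
        return labels[np.argmax(sims)]   # highest cosine wins
    dists = np.sum(np.abs(V_train - v_test) ** p, axis=1) ** (1.0 / p)
    return labels[np.argmin(dists)]      # smallest distance wins

# Example: 10 training vectors of dimension 5, two classes.
V = np.random.rand(10, 5)
y = np.array([0] * 5 + [1] * 5)
print(classify(np.random.rand(5), V, y, criterion="cos"))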
4.1 Histogram

The image histogram is the simplest attribute that can be calculated; it describes the frequency of the brightness values in the image. The shape of the histogram provides information about the nature of the image. For example, a narrow histogram indicates low contrast, while a histogram shifted to the right indicates a bright image; a histogram with two major peaks (called bimodal) implies the presence of an object that contrasts with the background, etc. [35]. Due to this fact, it is expected that the histogram will have great discriminative power. Also, this attribute is fast to compute and can be precalculated for all the images in the learning database.
For a gray-scale digital image I, the histogram represents the discrete probability distribution of the gray levels in the image. For this purpose the gray-scale space ([0, 255] for an 8-bit image) is divided into L bins, and the number of pixels $n_i$ in each bin $(i = 1, \ldots, L)$ is counted. In this case the attribute vector has dimension L:
$$H_I = (n_1, \ldots, n_L).$$
Relative frequencies can also be used by dividing the absolute frequencies $n_i$ by the total number of pixels in the image.
In the case of RGB images the same analysis can be performed considering the color channels $I_R$, $I_G$ and $I_B$ independently, and merging the three channel histograms together, as follows:
$$H_I = (H_{I_R}, H_{I_G}, H_{I_B}).$$
The histogram can be calculated either globally, over the whole image, or locally. Global histograms do not capture the spatial distribution of the color channels; local histograms provide this kind of information. Their calculation follows the general procedure for computing any local attribute explained above (see Figure 4).
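A minimal sketch of this attribute, assuming 8-bit images and L bins (np.histogram performs the binning):

import numpy as np

def histogram_feature(image, L=32, relative=True):
    """Histogram of one gray-scale channel over [0, 255] with L bins."""
    h, _ = np.histogram(image, bins=L, range=(0, 255))
    return h / image.size if relative else h

def rgb_histogram_feature(image, L=32):
    """Concatenate the histograms of the R, G and B channels."""
    return np.concatenate(
        [histogram_feature(image[..., c], L) for c in range(3)])

# Example on a random 8-bit color image.
img = np.random.randint(0, 256, (112, 92, 3))
print(rgb_histogram_feature(img).shape)  # (96,) for L = 32 bins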
4.2 Percentiles
The p-percentile of a color (or gray) channel $c_i$ is defined as the value $x_p$ such that the proportion of pixels with $c_i \le x_p$ equals p, that is, $F(x_p) = p$, where F is the cumulative distribution function of the channel. The p-percentiles are therefore related to the cumulative probability functions of each color channel. In practice, we compute the percentiles 1%, 99%, and 5% to 95% in steps of 5%. This produces a feature vector of dimension 21 for each color channel.
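A minimal sketch of this attribute using np.percentile:

import numpy as np

def percentile_feature(channel):
    """Percentiles 1, 5, 10, ..., 95, 99 of one color channel."""
    probs = [1] + list(range(5, 100, 5)) + [99]   # 21 probabilities
    return np.percentile(channel, probs)

img = np.random.randint(0, 256, (112, 92))
print(percentile_feature(img).shape)  # (21,)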
Percentiles and histograms are related because the histogram is the probability density function f(t) of each color channel, and the cumulative probability function F(x) is its integral:
$$F(x) = \int_{-\infty}^{x} f(t)\,dt \iff f(x) = F'(x).$$
4.3 Variogram
The variogram describes the spatial distribution of each color channel. In spatial statistics [17] the variogram describes the degree of spatial dependence of a spatial random field or stochastic process, the gray level in this case. For a given vector h, defined by a modulus and a direction, the variogram is an index of dissimilarity between all pairs of values separated by h.
The omnidirectional p-variogram is the mean of the p-th power of the absolute differences between the color values of the N(h) pairs of pixels separated by the lag vector h:
$$\gamma_i(h) = \frac{1}{N(h)} \sum_{k=1}^{N(h)} |c_i(x_k) - c_i(x_k + h)|^p. \tag{1}$$
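A minimal sketch of the omnidirectional variogram of Equation (1); pooling integer lags along rows and columns is a simplifying choice of this sketch:

import numpy as np

def variogram(channel, lags=(1, 2, 4, 8), p=2):
    """Omnidirectional p-variogram: for each lag h, average the
    p-th absolute differences of pixel pairs h apart (rows and columns)."""
    gamma = []
    for h in lags:
        d_rows = np.abs(channel[h:, :] - channel[:-h, :]) ** p
        d_cols = np.abs(channel[:, h:] - channel[:, :-h]) ** p
        gamma.append((d_rows.sum() + d_cols.sum())
                     / (d_rows.size + d_cols.size))
    return np.array(gamma)

img = np.random.randint(0, 256, (112, 92)).astype(float)
print(variogram(img))  # one value per lag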
4.4 Texture: The Gray-Level Co-occurrence Matrix (GLCM)

The gray-level co-occurrence matrix (GLCM) characterizes texture through the joint distribution of pairs of gray levels at a given displacement, and is computed for different pairs of d and θ. Figure 5 shows the spatial relationships between a pixel and its adjacent pixels, and the corresponding displacement vector (d, θ). To calculate the GLCM matrix for a given pair (d, θ), the algorithm proceeds as follows (a minimal code sketch is given at the end of this subsection):
• First, the matrix $F_{d,\theta}(i, j) \in M(n \times n)$ of absolute frequencies of pairs of pixels with gray levels i and j at a distance d in the direction θ is built. For instance, Figure 6 shows the $F_{d,\theta}$ matrix for a 4 × 4 image with 5 gray levels, for the displacement vector (1, 0).
• Secondly, $F_{d,\theta}$ is normalized as follows:
$$P_{d,\theta}(i, j) = \frac{F_{d,\theta}(i, j)}{\sum_{i=1}^{n} \sum_{j=1}^{n} F_{d,\theta}(i, j)}. \tag{2}$$
Different statistical moments can be calculated from the GLCM matrix [6]:
• Contrast is a measure of local image variation; it captures the intensity change between a pixel and its neighbor over the whole image:
$$C = \sum_{i=1}^{n} \sum_{j=1}^{n} |i - j|^2 P_{d,\theta}(i, j).$$
• Correlation measures the linear dependence between the gray levels of neighboring pixels:
$$Corr = \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{(i - \mu_i)(j - \mu_j)}{\sigma_i \sigma_j} P_{d,\theta}(i, j),$$
where
$$\mu_i = \sum_{i=1}^{n} \sum_{j=1}^{n} i\, P_{d,\theta}(i, j), \qquad \mu_j = \sum_{i=1}^{n} \sum_{j=1}^{n} j\, P_{d,\theta}(i, j),$$
$$\sigma_i^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} (i - \mu_i)^2 P_{d,\theta}(i, j), \qquad \sigma_j^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} (j - \mu_j)^2 P_{d,\theta}(i, j).$$
• Entropy measures the randomness of the gray-level distribution:
$$S = -\sum_{i=1}^{n} \sum_{j=1}^{n} P_{d,\theta}(i, j) \log P_{d,\theta}(i, j).$$
In the present case we have used a lag d = 1 for the directions 0°, 45°, 90° and 135°. This analysis provides an attribute vector of dimension 20 for each image.
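As announced above, a minimal sketch of the GLCM construction and two of its moments follows; expressing the displacement (d, θ) as a pixel offset (dy, dx) is an implementation choice of this sketch:

import numpy as np

def glcm(image, offset=(0, 1), levels=256):
    """Normalized gray-level co-occurrence matrix P for one offset."""
    dy, dx = offset
    h, w = image.shape
    F = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            F[image[y, x], image[y + dy, x + dx]] += 1
    return F / F.sum()

def glcm_moments(P):
    """Contrast and entropy of a normalized co-occurrence matrix."""
    i, j = np.indices(P.shape)
    contrast = np.sum((i - j) ** 2 * P)
    nz = P > 0                                  # avoid log(0)
    entropy = -np.sum(P[nz] * np.log(P[nz]))
    return contrast, entropy

img = np.random.randint(0, 8, (32, 32))
print(glcm_moments(glcm(img, offset=(0, 1), levels=8)))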
5 Edges

Edges are determined by sets of pixels where there is an abrupt change of intensity. If a pixel's gray-level value is similar to those around it, there is probably not an edge at that point; however, if a pixel has neighbors with widely varying gray levels, it may represent an edge. Thus, an edge is defined by a discontinuity in the gray-level values [35]. More precisely, we can consider an edge as a property associated to a pixel where the image function f(x, y) changes rapidly in its neighborhood; in this case the function $f : \mathbb{R}^2 \to \mathbb{R}$ represents the pixel intensities. Related to this definition are the gradient magnitude and direction:
$$|\nabla f(x, y)| = \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2},$$
$$\theta(x, y) = \arctan\left(\frac{\partial f}{\partial y} \Big/ \frac{\partial f}{\partial x}\right) \pm \frac{\pi}{2}.$$
Fig. 7 Canny edge detection method applied to a face sample from the ORL database.
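A minimal sketch of the gradient computation behind such edge maps, using finite differences (np.gradient); the fixed threshold used to binarize the magnitude is an assumption of this sketch, not part of the Canny detector itself:

import numpy as np

def gradient_edges(image, threshold=30.0):
    """Gradient magnitude/direction and a simple binary edge map."""
    fy, fx = np.gradient(image.astype(float))   # partial derivatives
    magnitude = np.sqrt(fx ** 2 + fy ** 2)
    direction = np.arctan2(fy, fx)
    return magnitude, direction, magnitude > threshold

img = np.random.randint(0, 256, (112, 92))
mag, ang, edges = gradient_edges(img)
print(edges.mean())  # fraction of pixels marked as edge candidates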
6 Spectral Attributes

In this section we briefly introduce the spectral methods that are used in this paper. Spectral decomposition methods consist in finding an orthonormal basis which best separates the image projections, reducing the dimensionality of the attribute space with respect to the pixel space $\mathbb{R}^{N_{pixels}}$. Spectral methods can be divided into two categories: 1) those that act on the whole database of images (PCA, 2DPCA and Fisher's LDA); 2) those that act on single images, although they could also be applied to the whole training image database: DCT, DWT and DWHT.
A gray digital image I(m, n) can be regarded as the matrix of a linear operator between two linear spaces $\mathbb{R}^n$ and $\mathbb{R}^m$. In the case of a color image there exist three different linear operators, one for each color channel (R, G and B).
Given an image I(m, n), it is possible to define, in several ways, two orthogonal transformations U, V such that
$$I = U S V^T, \tag{3}$$
• Matrix S is diagonal if U and V contain as columns the left and right singular vectors provided by the singular value decomposition (SVD) of I. In other cases, such as DCT and DWT, these orthogonal transformations serve to decorrelate the pixels of I by compressing its energy onto the first harmonics (spectral modes of the image). The pixel decorrelation is based on the fact that orthogonal transformations (in our case U and V) induce rotations of the principal axes of the image in $\mathbb{R}^m$ and $\mathbb{R}^n$. In the case of PCA, this rotation is induced by the orthonormal basis calculated through the experimental covariance matrix.
• Energy compression consists in finding the number of transformed pixels p, q of S such that
$$\|I - S(1\!:\!p, 1\!:\!q)\|_F < \theta,$$
where θ is a prescribed energy threshold and $S(1\!:\!p, 1\!:\!q)$ represents the p × q upper block of S. The dimensionality reduction is achieved from m × n pixels to p × q frequency components. It is interesting to remark that, if the SVD were used, only the first q singular values would be needed, because S is diagonal.
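A minimal sketch of this energy-compression step via the SVD, retaining the smallest number of singular values whose energy reaches the prescribed fraction of the Frobenius norm (the 99.5% threshold used in this paper):

import numpy as np

def svd_compress(image, energy=0.995):
    """Keep the first q singular values holding `energy` of ||I||_F^2."""
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    q = int(np.searchsorted(cumulative, energy)) + 1
    return s[:q]          # spectral feature vector of the image

img = np.random.randint(0, 256, (112, 92))
print(svd_compress(img).shape)  # typically far fewer than 92 values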
Orthonormal transformations based on a set of training images $\{I_1, I_2, \ldots, I_N\}$ follow similar principles.

6.2.1 PCA

PCA diagonalizes the experimental covariance (scatter) matrix of the training set,
$$S = \frac{1}{N} X_c X_c^T,$$
where $\mu = \frac{1}{N} \sum_{k=1}^{N} X_k$ is the image sample mean, N is the number of sample images contained in the learning database, $N_{pixels}$ is the number of pixels of each image (we suppose that all images in the sample have the same dimensions), $X = [X_1, X_2, \ldots, X_N] \in M_{N_{pixels} \times N}$, where $X_i \in \mathbb{R}^{N_{pixels}}$ are the database images transformed into 1-D column vectors, and $X_c = X - \mu$ is the centered image matrix. S is a symmetric positive semidefinite matrix; thus, it admits an orthogonal diagonalization as follows:
$$S = U D_1 U^T,$$
where $U \in M_{N_{pixels} \times N_{pixels}}$.
The dimensionality reduction is obtained by retaining the first q principal components $u_k \in U$, such that
$$\left| \sigma_{tot}^2 - \sum_{k=1}^{q} \lambda_k \right| < \theta,$$
where θ is the energy cutoff and $\lambda_k$ are the non-null eigenvalues of $D_1$. Finally, defining $W = U(:, 1\!:\!q)$, each image $X_i$ is projected onto the basis of the eigenvectors associated with the first q eigenvalues, obtaining the feature vector $Y_i = W^T X_i$ and achieving a dimensionality reduction from $N_{pixels}$ to the first q principal coordinates.
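A minimal sketch of this PCA feature extraction, assuming the training images are already flattened into the columns of X:

import numpy as np

def pca_features(X, energy=0.995):
    """Project centered images onto the first q principal components.
    X has one flattened image per column (N_pixels x N)."""
    mu = X.mean(axis=1, keepdims=True)
    Xc = X - mu
    # Eigenvectors of (1/N) Xc Xc^T via the SVD of Xc (more efficient).
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    lam = s ** 2 / X.shape[1]                  # eigenvalues of S
    cumulative = np.cumsum(lam) / np.sum(lam)
    q = int(np.searchsorted(cumulative, energy)) + 1
    W = U[:, :q]
    return W.T @ Xc, W, mu                     # q x N feature matrix

X = np.random.rand(92 * 112, 50)               # 50 training images
Y, W, mu = pca_features(X)
print(Y.shape)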
6.2.2 2DPCA
2DPCA for face recognition was introduced by Yang et al. [37]. As opposed to conventional PCA, 2DPCA is based on the 2D matrices $I_i$ rather than on the 1-D vectors $X_i$; that is, each image $I_i$ does not need to be previously transformed into a column vector, and the image covariance matrix is constructed directly from the original image matrices. This covariance matrix is, in contrast with the scatter matrix of PCA, much smaller.
Fig. 8 The first six eigenfaces for the ORL training database
2DPCA diagonalizes the following mean-centered covariance matrix to find the orthogonal projection basis:
$$S_M = \frac{1}{N} \sum_{i=1}^{N} (I_i - \bar{I})^T (I_i - \bar{I}),$$
where $\bar{I}$ is the mean image matrix calculated pixel by pixel over the learning database. As $S_M$ is a sum of symmetric positive semidefinite matrices, it admits an orthogonal diagonalization as in the PCA case. The dimensionality reduction and the feature projection follow the same logic as for PCA.
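A minimal sketch of the 2DPCA covariance and projection; the features $Y_i = I_i W$ follow the convention of [37]:

import numpy as np

def twodpca_features(images, energy=0.995):
    """2DPCA: diagonalize the image covariance S_M (n x n) and
    project every image onto its leading eigenvectors."""
    I_bar = images.mean(axis=0)                     # mean image
    centered = images - I_bar
    S_M = sum(C.T @ C for C in centered) / len(images)
    lam, U = np.linalg.eigh(S_M)                    # ascending order
    lam, U = lam[::-1], U[:, ::-1]                  # sort descending
    cumulative = np.cumsum(lam) / np.sum(lam)
    q = int(np.searchsorted(cumulative, energy)) + 1
    W = U[:, :q]
    return np.array([I @ W for I in images])        # N x m x q features

imgs = np.random.rand(50, 112, 92)                  # 50 training images
print(twodpca_features(imgs).shape)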
Fisher's Linear Discriminant Analysis (LDA) [15], Fisherfaces for short, was applied to the face recognition problem by Belhumeur et al. [5]. This technique is also based on a linear projection W, which attempts to reduce dimensionality while preserving as much of the class-discriminatory information as possible. More precisely, the transformation W is selected such that the projections of the sample database X have the maximum degree of separation. Fisher's solution to this problem is the calculation of the W which maximizes the differences between classes, normalized by the within-class scatter. For that purpose, the between- and within-class scatter matrices are defined:
$$S_B = \sum_{i=1}^{C} N_i (\mu_i - \mu)(\mu_i - \mu)^T,$$
$$S_W = \sum_{i=1}^{C} \sum_{X_k \in C_i} (X_k - \mu_i)(X_k - \mu_i)^T,$$
where C is the total number of classes in the database, $N_i$ is the number of images in each class $C_i$, and $\mu_i$ is the mean of the images in class i. A numerical implementation of this method can be consulted in [38]. Numerical experimentation has shown that Fisherfaces yield lower error rates than eigenfaces in face recognition [5]; this is due to the fact that Fisher's LDA is a supervised projection method.
Fig. 9 The first six Fisher’s faces for the ORL training database
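A minimal sketch of the scatter matrices and the Fisher projection; it solves the generalized eigenproblem $S_B w = \lambda S_W w$ via scipy.linalg.eigh, assuming $S_W$ is invertible (e.g., after a preliminary PCA step):

import numpy as np
from scipy.linalg import eigh

def fisherfaces(X, labels, q):
    """Fisher LDA projection. X: one flattened image per column."""
    mu = X.mean(axis=1, keepdims=True)
    d = X.shape[0]
    S_B = np.zeros((d, d))
    S_W = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mu_c = Xc.mean(axis=1, keepdims=True)
        S_B += Xc.shape[1] * (mu_c - mu) @ (mu_c - mu).T
        S_W += (Xc - mu_c) @ (Xc - mu_c).T
    # Generalized eigenproblem S_B w = lambda S_W w; keep largest ones.
    lam, W = eigh(S_B, S_W)
    return W[:, ::-1][:, :q]                # q most discriminative axes

X = np.random.rand(30, 100)                 # 30-dim features, 100 samples
y = np.repeat(np.arange(10), 10)            # 10 classes, 10 samples each
print(fisherfaces(X, y, q=9).shape)         # at most C-1 useful axes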
Conversely to the spectral methods described above, the following techniques do not involve diagonalization, and the reduced bases (and the corresponding projections of the images) are calculated for each individual image. Thus, these techniques are more suitable for implementation in the large databases of real authentication systems.
The discrete cosine transform (DCT) was applied to face recognition by Hafed and Levine [18]. For an image $I_k \in M(s \times n)$ it can be written as
$$DCT(u, v) = c(u)\, c(v) \sum_{i=0}^{s-1} \sum_{j=0}^{n-1} D(i, j),$$
where
$$D(i, j) = I_k(i, j) \cdot \cos\frac{\pi(2i + 1)u}{2s} \cos\frac{\pi(2j + 1)v}{2n},$$
$u = 0, \ldots, s - 1$, $v = 0, \ldots, n - 1$, and
$$c(\alpha) = \begin{cases} \dfrac{1}{\sqrt{N}}, & \text{if } \alpha = 0, \\[4pt] \dfrac{\sqrt{2}}{\sqrt{N}}, & \text{if } \alpha \neq 0. \end{cases}$$
N is either the number of rows (s) or of columns (n) of the image. The DCT can be expressed in matrix form as an orthogonal transformation
$$DCT = U_{DC} I_k V_{DC}^T,$$
where the matrices $U_{DC}$ and $V_{DC}$ are orthogonal. This transformation is separable and can be defined in higher dimensions. The feature vector of an image $I_k$ is constituted by the $q_1 \times q_2$ block of the DCT, $DCT(1\!:\!q_1, 1\!:\!q_2)$, where $q_1, q_2$ are determined by energy reconstruction considerations using the Frobenius norm of the image $I_k$. The energy methodology used to find the $q_1, q_2$ values is the same in all these spectral methods.
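A minimal sketch of this feature extraction using scipy.fft.dctn (the orthonormal 2D DCT, which preserves the Frobenius energy); growing a square low-frequency block rather than searching $q_1, q_2$ independently is a simplification of this sketch:

import numpy as np
from scipy.fft import dctn

def dct_features(image, energy=0.995):
    """Low-frequency DCT block holding `energy` of the Frobenius norm."""
    D = dctn(image.astype(float), norm="ortho")   # 2D orthonormal DCT
    total = np.sum(D ** 2)
    q = 1
    while np.sum(D[:q, :q] ** 2) < energy * total:
        q += 1                                    # grow the q x q block
    return D[:q, :q].ravel()

img = np.random.randint(0, 256, (112, 92))
print(dct_features(img).shape)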
Wavelets are compact functions (defined over a finite interval) with zero mean and some regularity conditions (vanishing moments). The wavelet transform converts a function into a linear combination of basis functions, called wavelets, obtained from a prototype wavelet through dilations, contractions and translations. The DWT was applied to face recognition by Kakarwal and Deshmukh [23].
The discrete wavelet transform (DWT) of an image $I \in M(m, n)$ is defined as follows:
$$DWT = U_W^T I V_W,$$
where H represents the low-pass (averaging) portion of the wavelet filter and G is the high-pass (differencing) portion. In all cases we have
$$DWT = \begin{pmatrix} H I H^T & H I G^T \\ G I H^T & G I G^T \end{pmatrix} = \begin{pmatrix} B & V \\ H & D \end{pmatrix},$$
where B is the blur, V are the vertical differences, H are the horizontal differences and D are the diagonal differences. The DWT can be applied several times to further reduce the dimension of the attribute vector. In the present case we apply the DWT twice and use the blur B as the feature attribute to solve the face recognition problem, employing the transform with the maximum number of vanishing moments for its support: the Daubechies-2 family.
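A minimal sketch of this two-level blur extraction, assuming the PyWavelets package (pywt) is available; pywt.dwt2 returns the blur (approximation) subband together with the three detail subbands:

import numpy as np
import pywt

def dwt_blur_feature(image, levels=2, wavelet="db2"):
    """Apply the 2D DWT `levels` times and keep only the blur B."""
    B = image.astype(float)
    for _ in range(levels):
        B, (H, V, D) = pywt.dwt2(B, wavelet)   # B: approximation subband
    return B.ravel()

img = np.random.randint(0, 256, (112, 92))
print(dwt_blur_feature(img).shape)  # roughly (m/4)*(n/4) coefficients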
The Hadamard transform is the projection of a signal onto a set of square waves called Walsh functions. The Walsh functions are real and only take the values +1 and −1.
The Hadamard matrix of order n can be obtained by defining its element in the i-th row and j-th column $(i, j = 0, 1, \ldots, 2^n - 1)$ as follows:
$$H_n(i, j) = (-1)^{\sum_{k=0}^{n-1} i_k j_k} = \prod_{k=0}^{n-1} (-1)^{i_k j_k},$$
where
$$i = \sum_{k=0}^{n-1} i_k 2^k = (i_{n-1} i_{n-2} \ldots i_1 i_0), \quad (i_k = 0, 1),$$
$$j = \sum_{k=0}^{n-1} j_k 2^k = (j_{n-1} j_{n-2} \ldots j_1 j_0), \quad (j_k = 0, 1),$$
are the binary representations of i and j. The Hadamard matrix is orthogonal up to scale:
$$H_m H_m^T = 2^m I_{2^m}.$$
The discrete Walsh-Hadamard transform (DWHT) of an image I is then
$$DWHT(u, v) = \sum_{j=1}^{N} \sum_{i=1}^{M} H_m(u, i)\, I(i, j)\, H_n(v, j) = \sum_{j=1}^{N} \sum_{i=1}^{M} I(i, j)\, (-1)^{\sum_{k=0}^{m-1} u_k i_k + \sum_{l=0}^{n-1} v_l j_l},$$
where u, v, i and j are represented in binary form, with $m = \log_2 M$ and $n = \log_2 N$ [30]. Furthermore, the DWHT can be written in matrix form:
$$DWHT(I) = H_m I H_n^T.$$
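A minimal sketch of this matrix form using scipy.linalg.hadamard; zero-padding the image to power-of-two dimensions is a choice of this sketch:

import numpy as np
from scipy.linalg import hadamard

def dwht(image):
    """DWHT(I) = H_m I H_n^T with zero-padding to powers of two."""
    m, n = image.shape
    M = 1 << (m - 1).bit_length()      # next power of two >= m
    N = 1 << (n - 1).bit_length()
    padded = np.zeros((M, N))
    padded[:m, :n] = image
    return hadamard(M) @ padded @ hadamard(N).T

img = np.random.randint(0, 256, (112, 92))
print(dwht(img).shape)  # (128, 128)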
To perform the numerical analysis we have used the ORL database of faces provided by AT&T Laboratories Cambridge. The ORL database contains 400 gray-scale images: ten different poses of 40 distinct individuals, taken over a period of two years. All the images were taken against a dark homogeneous background, varying the lighting, facial expressions and facial details. The database provides upright and frontal poses. The size of each image is 92 × 112 pixels, with 256 gray levels per pixel.
In all the experiments over ORL, the learning database is composed of five randomly selected poses of each individual. The remaining poses in the database are used as probe images for establishing the classification accuracy of each technique, using both global and local features. For each attribute the classification is performed 100 different times, randomly choosing the learning database and the set of probe images (200 images). Nevertheless, once the database has been generated, it is used to perform classification with all the different image processing methods under the same numerical conditions. For instance, in all cases the energy cutoff used in the spectral decompositions for the reconstruction has been fixed to 99.5% of the Frobenius norm of the transform.
Two different kinds of analysis are performed, using global and local attributes. Although the use of local features increases the dimension of the attribute space, it is expected to increase its discriminative power. Thus, in the algorithm presented in this paper, no individual classification with posterior fusion of scores is performed. Finally, we provide statistics for the classification calculated over 100 different simulations of each classifier (attribute): minimum, maximum, median and mean accuracy, as well as the interquartile range and standard deviation, for the distance computed using different norms and the cosine criterion.
Attribute Type of analysis Criterion min max median mean IQR std
Histogram Local cos 92.00 100.00 98.00 97.67 1.50 1.30
Variogram Local L2 84.50 94.50 90.00 90.03 3.00 1.89
Percentiles Local L2 90.50 98.50 95.50 95.13 2.00 1.61
Texture Local cos 84.50 93.50 91.00 90.80 2.50 1.83
Edges Local cos 53.50 68.50 62.75 62.80 4.00 2.96
PCA Local L2 91.00 98.50 95.00 94.79 2.50 1.77
Fisher Local L2 90.50 98.50 95.00 94.63 2.50 1.72
DCT Local L3 89.00 97.50 94.00 93.80 2.50 1.85
2DPCA Local L2 90.50 98.00 95.00 94.64 2.50 1.69
DWT Global L2 91.00 99.00 95.50 95.44 2.00 1.58
DWHT Local L3 88.50 97.50 94.50 93.90 2.75 1.91
Table 1 Accuracy statistics for the selected image attributes.
References
1. A.N. Akansu and R. Poluri. Walsh-like nonlinear phase orthogonal codes for direct sequence CDMA communications. IEEE Transactions on Signal Processing, 55(7):3800–3806, July 2007.
2. Selim Aksoy and Robert M. Haralick. Content-based image database retrieval using variances of gray level spatial dependencies. In Proc. of IAPR Intl. Workshop on Multimedia Information Analysis and Retrieval, pages 3–19, 1998.
3. A. Batool and A. Tariq. Computerized system for fingerprint identification for biometric
security. In Multitopic Conference (INMIC), 2011 IEEE 14th International, pages 102–106,
2011.
4. K.G. Beauchamp. Applications of Walsh and related functions, with an introduction to se-
quency theory. Microelectronics and signal processing. Academic Press, 1984.
5. Peter N. Belhumeur, João Hespanha, and David J. Kriegman. Eigenfaces vs. fisherfaces:
Recognition using class specific linear projection. In Bernard Buxton and Roberto Cipolla,
editors, Computer Vision ECCV ’96, volume 1064 of Lecture Notes in Computer Science,
pages 43–58. Springer Berlin Heidelberg, 1996.
6. Manish H. Bharati, J.Jay Liu, and John F. MacGregor. Image texture analysis: methods and
comparisons. Chemometrics and Intelligent Laboratory Systems, 72(1):57 – 71, 2004.
7. Kevin W. Bowyer, Karen Hollingsworth, and Patrick J. Flynn. Image understanding for iris
biometrics: A survey. Comput. Vis. Image Underst., 110(2):281–307, May 2008.
8. John Canny. A computational approach to edge detection. Pattern Analysis and Machine
Intelligence, IEEE Transactions on, PAMI-8(6):679–698, 1986.
9. Raffaele Cappelli, Matteo Ferrara, and Davide Maltoni. Minutiae-based fingerprint matching.
In Cross Disciplinary Biometric Systems, volume 37 of Intelligent Systems Reference Library,
pages 117–150. Springer Berlin Heidelberg, 2012.
10. M.A. Dabbah, W.L. Woo, and S.S. Dlay. Secure authentication for face recognition. In
IEEE Symposium on Computational Intelligence in Image and Signal Processing, 2007. CIISP
2007, pages 121–126, April 2007.
11. J. G. Daugman. High confidence visual recognition of persons by a test of statistical indepen-
dence. IEEE Trans. Pattern Anal. Mach. Intell., 15(11):1148–1161, November 1993.
12. J.G. Daugman. Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Transactions on Acoustics, Speech and Signal Processing, 36(7):1169–1179, 1988.
13. Davide Maltoni, Dario Maio, Anil K. Jain, and Salil Prabhakar. Handbook of Fingerprint Recognition. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2003.
14. Huimin Deng and Qiang Huo. Minutiae matching based fingerprint verification using delau-
nay triangulation and aligned-edge-guided triangle matching. In Takeo Kanade, Anil Jain,
and Nalini K. Ratha, editors, Audio- and Video-Based Biometric Person Authentication, vol-
ume 3546 of Lecture Notes in Computer Science, pages 270–278. Springer Berlin Heidelberg,
2005.
15. Ronald A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7:179–188, 1936.
16. G. N. Srinivasan and G. Shobha. Statistical texture analysis. In Proceedings of the World Academy of Science, Engineering and Technology, volume 36, pages 1264–1269, 2008.
17. P. Goovaerts. Geostatistics for natural resources evaluation. Applied geostatistics series.
Oxford University Press, Incorporated, 1997.
18. Ziad M. Hafed and Martin D. Levine. Face recognition using the discrete cosine transform.
Int. J. Comput. Vision, 43(3):167–188, July 2001.
19. M. Hassan, I. Osman, and M. Yahia. Walsh-Hadamard transform for facial feature extraction in face recognition. Int. J. of Comp. and Communication Eng., 1(7):436–440, 2007.
20. D. Impedovo and G. Pirlo. Automatic signature verification: The state of the art. Systems,
Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, 38(5):609–
635, 2008.
21. Rabia Jafri and Hamid R Arabnia. A survey of face recognition techniques. Journal of Infor-
mation Processing Systems, 5(2):41–68, June 2009.
22. A.K. Jain, S. Prabhakar, L. Hong, and S. Pankanti. Filterbank-based fingerprint matching.
Image Processing, IEEE Transactions on, 9(5):846–859, 2000.
23. S. Kakarwal and R. Deshmukh. Wavelet transform based feature extraction for face recogni-
tion. International Journal of Computer Science and Application Issue, –:–, 2010.
24. M. Kirby and L. Sirovich. Application of the Karhunen-Loève procedure for the characterization of human faces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(1):103–108, January 1990.
25. R. Mary Lourde and Dushyant Khosla. Fingerprint identification in biometric security systems. International Journal of Computer and Electrical Engineering, 2(5):852–855, October 2010.
26. M.M. Min and Y. Thein. Intelligent fingerprint recognition system by using geometry ap-
proach. In Current Trends in Information Technology (CTIT), 2009 International Conference
on the, pages 1–5, 2009.
27. Hyeonjoon Moon and P. Jonathon Phillips. Computational and performance aspects of PCA-based face-recognition algorithms. Perception, 30(3):303–321, 2001.
28. A. Nait-Ali. Hidden biometrics: Towards using biosignals and biomedical images for security
applications. In 7th International Workshop on Systems, Signal Processing and their Applica-
tions (WOSSPA), 2011, pages 352–356, May 2011.
29. Réjean Plamondon and Sargur N. Srihari. On-line and off-line handwriting recognition: A
comprehensive survey. IEEE Trans. Pattern Anal. Mach. Intell., 22(1):63–84, January 2000.
30. W.K. Pratt, J. Kane, and H.C. Andrews. Hadamard transform image coding. Proceedings of the IEEE, 57(1):58–68, January 1969.
31. K. R. Radhika, G. N. Sekhar, and M. K. Venkatesha. Pattern recognition techniques in on-line
hand written signature verification - a survey. In Multimedia Computing and Systems, 2009.
ICMCS ’09. International Conference on, pages 216–221, 2009.
32. Rauf Kh. Sadykhov, Vladimir A. Samokhval, and Leonid P. Podenok. Face recognition algo-
rithm on the basis of truncated Walsh-Hadamard transform and synthetic discriminant func-
tions. In FGR, pages 219–222. IEEE Computer Society, 2004.
33. L. Sirovich and M. Kirby. Low-dimensional procedure for the characterization of human
faces. Journal of the Optical Society of America A, 4(3):519–524, 1987.
34. Matthew Turk and Alex Pentland. Eigenfaces for recognition. Journal of Cognitive Neuro-
science, 3(1):71–86, January 1991.
35. Scott E Umbaugh. Computer Vision and Image Processing: A Practical Approach Using
CVIPtools. Prentice Hall Professional Technical Reference, 1998.
36. R.P. Wildes. Iris recognition: an emerging biometric technology. Proceedings of the IEEE,
85(9):1348–1363, 1997.
37. Jian Yang, David Zhang, Alejandro F. Frangi, and Jing-yu Yang. Two-dimensional PCA: A
new approach to appearance-based face representation and recognition. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 26:131–137, 2004.
38. Hua Yu and Jie Yang. A direct LDA algorithm for high-dimensional data with application to
face recognition. Pattern Recognition, 34:2067–2070, 2001.
39. W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld. Face recognition: A literature survey.
ACM Comput. Surv., 35(4):399–458, December 2003.
40. Alessandro Zimmer and Lee Luan Ling. A hybrid on/off line handwritten signature verifica-
tion system. In Proceedings of the Seventh International Conference on Document Analysis
and Recognition - Volume 1, ICDAR ’03, pages 424–, Washington, DC, USA, 2003. IEEE
Computer Society.