Face Recognition with GNU Octave/MATLAB
Philipp Wagner
http://www.bytefish.de
July 18, 2012
Contents
1 Introduction
2 Face Recognition
  2.1 Face Database
    2.1.1 Reading the images with GNU Octave/MATLAB
  2.2 Eigenfaces
    2.2.1 Algorithmic Description
    2.2.2 Eigenfaces in GNU Octave/MATLAB
  2.3 Fisherfaces
    2.3.1 Algorithmic Description
    2.3.2 Fisherfaces in GNU Octave/MATLAB
3 Conclusion
1 Introduction
In this document I'll show you how to implement the Eigenfaces [13] and Fisherfaces [3] method with GNU Octave/MATLAB, so you'll understand the basics of Face Recognition. All concepts are explained in detail, but a basic knowledge of GNU Octave/MATLAB is assumed. Originally this document was a Guide to Face Recognition with OpenCV. Since OpenCV now comes with the cv::FaceRecognizer, this document has been reworked into the official OpenCV documentation. I am doing all this in my spare time and I simply can't maintain two separate documents on the same topic any more. So I have decided to turn this document into a guide on Face Recognition with GNU Octave/MATLAB only. You'll find the very detailed documentation on the OpenCV cv::FaceRecognizer at:

http://docs.opencv.org/trunk/modules/contrib/doc/facerec/index.html

By the way, you don't need to copy and paste the code snippets; all code has been pushed into my github repository:

https://github.com/bytefish/facerecognition_guide
Everything in here is released under a BSD license, so feel free to use it for your projects. You are currently reading the GNU Octave/MATLAB version of the Face Recognition Guide; you can compile the Python version with make python.
2 Face Recognition
Face recognition is an easy task for humans. Experiments in [6] have shown that even one- to three-day-old babies are able to distinguish between known faces. So how hard could it be for a computer? It turns out we know little about human recognition to date. Are inner features (eyes, nose, mouth) or outer features (head shape, hairline) used for successful face recognition? How do we analyze an image and how does the brain encode it? It was shown by David Hubel and Torsten Wiesel that our brain has specialized nerve cells responding to specific local features of a scene, such as lines, edges, angles or movement. Since we don't see the world as scattered pieces, our visual cortex must somehow combine the different sources of information into useful patterns. Automatic face recognition is all about extracting those meaningful features from an image, putting them into a useful representation and performing some kind of classification on them.
Face recognition based on the geometric features of a face is probably the most intuitive approach to face recognition. One of the first automated face recognition systems was described in [9]: marker points (position of eyes, ears, nose, ...) were used to build a feature vector (distance between the points, angle between them, ...). The recognition was performed by calculating the Euclidean distance between feature vectors of a probe and reference image. Such a method is robust against changes in illumination by its nature, but has a huge drawback: the accurate registration of the marker points is complicated, even with state-of-the-art algorithms. Some of the latest work on geometric face recognition was carried out in [4]. A 22-dimensional feature vector was used, and experiments on large datasets have shown that geometrical features alone don't carry enough information for face recognition.
The Eigenfaces method described in [13] took a holistic approach to face recognition: A facial image is a point from a high-dimensional image space, and a lower-dimensional representation is found where classification becomes easy. The lower-dimensional subspace is found with Principal Component Analysis, which identifies the axes with maximum variance. While this kind of transformation is optimal from a reconstruction standpoint, it doesn't take any class labels into account. Imagine a situation where the variance is generated by an external source, let it be the light. The axes with maximum variance do not necessarily contain any discriminative information at all, hence a classification becomes impossible. So a class-specific projection with a Linear Discriminant Analysis was applied to face recognition in [3]. The basic idea is to minimize the variance within a class, while maximizing the variance between the classes at the same time (Figure 1).
Recently various methods for local feature extraction emerged. To avoid the high dimensionality of the input data, only local regions of an image are described; the extracted features are (hopefully) more robust against partial occlusion, illumination and small sample size. Algorithms used for local feature extraction are Gabor Wavelets ([14]), the Discrete Cosine Transform ([5]) and Local Binary Patterns ([1, 11, 12]). It's still an open research question how to preserve spatial information when applying a local feature extraction, because spatial information is potentially useful information.
2.1 Face Database

AT&T Facedatabase The AT&T Facedatabase, sometimes also known as the ORL Database of Faces, contains ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement). (Parts of the description are quoted from face-rec.org.)
Yale Facedatabase A The AT&T Facedatabase is good for initial tests, but it's a fairly easy database. The Eigenfaces method already has a 97% recognition rate on it, so you won't see any improvements with other algorithms. The Yale Facedatabase A is a more appropriate dataset for initial experiments, because the recognition problem is harder. The database consists of 15 people (14 male, 1 female), each with 11 grayscale images sized 320x243 pixels. There are changes in the light conditions (center light, left light, right light), facial expressions (happy, normal, sad, sleepy, surprised, wink) and glasses (glasses, no-glasses). The original images are not cropped or aligned. I've prepared a Python script available in src/py/crop_face.py that does the job for you.
Extended Yale Facedatabase B The Extended Yale Facedatabase B contains 2414 images of 38 different people in its cropped version. The focus is on extracting features that are robust to illumination; the images have almost no variation in emotion/occlusion/... I personally think that this dataset is too large for the experiments I perform in this document; you'd better use the AT&T Facedatabase. A first version of the Yale Facedatabase B was used in [3] to see how the Eigenfaces and Fisherfaces method (Section 2.3) perform under heavy illumination changes. [10] used the same setup to take 16128 images of 28 people. The Extended Yale Facedatabase B is the merge of the two databases.
The face images need to be stored in a folder hierarchy similar to <database name>/<subject name>/<filename>.<ext>. The AT&T Facedatabase for example comes in such a hierarchy, see Listing 1.
Listing 1:

philipp@mango:~/facerec/data/at$ tree
.
|-- README
|-- s1
|   |-- 1.pgm
|   |-- ...
|   |-- 10.pgm
|-- s2
|   |-- 1.pgm
|   |-- ...
|   |-- 10.pgm
...
|-- s40
|   |-- 1.pgm
|   |-- ...
|   |-- 10.pgm
2.1.1 Reading the images with GNU Octave/MATLAB

The function in Listing 3 can be used to read in the images for each subfolder of a given directory. Each directory is given a unique (integer) label; you probably want to store the folder name as well. The function returns the images as a data matrix and the corresponding classes, as well as the width and height of the images (we'll need this in later code). This function is really basic and there's much to enhance, but it does its job.
Listing 3: src/m/read_images.m

function [X y width height] = read_images(path_fn)
  % get files for a given path
  folder = list_files(path_fn);
  % initialize the empty return values
  X = [];
  y = [];
  width = 0;
  height = 0;
  % start counting with class index 1
  classIdx = 1;
  % for each file...
  for i = 1:length(folder)
    subject = folder{i};
    % ... get files in this subdir
    images = list_files([path_fn, filesep, subject]);
    % ... ignore a file or empty folder
    if(length(images) == 0)
      continue;
    end
    % ... for each image
    for j = 1:length(images)
      % ... get the absolute path
      filename = [path_fn, filesep, subject, filesep, images{j}];
      % ... read the image
      T = double(imread(filename));
      % ... get the image information
      [height width channels] = size(T);
      % ... and convert to grayscale if it's a color image
      if(channels == 3)
        T = 0.2989 * T(:,:,1) + 0.5870 * T(:,:,2) + 0.1140 * T(:,:,3);
      end
      % ... reshape into a row vector and append to data matrix
      X = [X; reshape(T, 1, width*height)];
      % ... append the corresponding class to the class vector
      y = [y, classIdx];
    end
    % ... increase the class index
    classIdx = classIdx + 1;
  end % ... for-each folder.
end
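
With read_images defined, loading a dataset is a one-liner. As a small usage sketch ('/path/to/at' is a placeholder for wherever you extracted the AT&T images):

% read the AT&T images into a data matrix and class vector
[X, y, width, height] = read_images('/path/to/at');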
2.2 Eigenfaces
The problem with the image representation we are given is its high dimensionality. Two-dimensional p x q grayscale images span an m = pq-dimensional vector space, so an image with 100x100 pixels already lies in a 10,000-dimensional image space. That's way too much for any computations, but are all dimensions really useful for us? We can only make a decision if there's any variance in the data, so what we are looking for are the components that account for most of the information. The Principal Component Analysis (PCA) was independently proposed by Karl Pearson (1901) and Harold Hotelling (1933) to turn a set of possibly correlated variables into a smaller set of uncorrelated variables. The idea is that a high-dimensional dataset is often described by correlated variables, and therefore only a few meaningful dimensions account for most of the information. The PCA method finds the directions with the greatest variance in the data, called principal components.
2.2.1 Algorithmic Description

Let X = \{x_1, x_2, \ldots, x_n\} be a random vector with observations x_i \in R^d.

1. Compute the mean \mu:

   \mu = \frac{1}{n} \sum_{i=1}^{n} x_i   (1)

2. Compute the covariance matrix S:

   S = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T   (2)

3. Compute the eigenvalues \lambda_i and eigenvectors v_i of S:

   S v_i = \lambda_i v_i, \quad i = 1, 2, \ldots, n   (3)

4. Order the eigenvectors descending by their eigenvalue. The k principal components are the eigenvectors corresponding to the k largest eigenvalues.

The k principal components of the observed vector x are then given by:

   y = W^T (x - \mu)   (4)

where W = (v_1, v_2, \ldots, v_k). The reconstruction from the PCA basis is given by:

   x = W y + \mu   (5)
The Eigenfaces method then performs face recognition by:
1. Projecting all training samples into the PCA subspace (using Equation 4).
2. Projecting the query image into the PCA subspace (using Listing 5).
3. Finding the nearest neighbor between the projected training images and the projected query
image.
Still there's one problem left to solve. Imagine we are given 400 images sized 100x100 pixels. The Principal Component Analysis solves the covariance matrix S = XX^T, where size(X) = 10000x400 in our example. You would end up with a 10000x10000 matrix, roughly 0.8GB. Solving this problem isn't feasible, so we'll need to apply a trick. From your linear algebra lessons you know that an M x N matrix with M > N can only have N - 1 non-zero eigenvalues. So it's possible to take the eigenvalue decomposition of X^T X of size N x N instead:

   X^T X v_i = \lambda_i v_i   (6)

and get the original eigenvectors of S = XX^T with a left multiplication of the data matrix:

   XX^T (X v_i) = \lambda_i (X v_i)   (7)

The resulting eigenvectors are orthogonal; to get orthonormal eigenvectors they need to be normalized to unit length. Please look into [7] for the derivation and proof of the equations.
2.2.2 Eigenfaces in GNU Octave/MATLAB

Listing 4: src/m/pca.m

function [W, mu] = pca(X, y, k)
  [n, d] = size(X);
  mu = mean(X);
  Xm = X - repmat(mu, rows(X), 1);
  if(n > d)
    C = Xm' * Xm;
    [W, D] = eig(C);
    % sort eigenvalues and eigenvectors
    [D, i] = sort(diag(D), 'descend');
    W = W(:, i);
    % keep k components
    W = W(:, 1:k);
  else
    C = Xm * Xm';
    % C = cov(Xm);
    [W, D] = eig(C);
    % multiply with data matrix
    W = Xm' * W;
    % normalize eigenvectors
    for i = 1:n
      W(:, i) = W(:, i) / norm(W(:, i));
    end
    % sort eigenvalues and eigenvectors
    [D, i] = sort(diag(D), 'descend');
    W = W(:, i);
    % keep k components
    W = W(:, 1:k);
  end
end
The observations are given by row, so the projection in Equation 4 needs to be rearranged a little:
Listing 5: src/m/project.m

function Y = project(W, X, mu)
  if(nargin < 3)
    Y = X * W;
  else
    Y = (X - repmat(mu, rows(X), 1)) * W;
  end
end
Listing 6: src/m/reconstruct.m

function X = reconstruct(W, Y, mu)
  if(nargin < 3)
    X = Y * W';
  else
    X = Y * W' + repmat(mu, rows(Y), 1);
  end
end
Now that everything is defined, it's time for the fun stuff. The face images are read with Listing 3 and then a full PCA (see Listing 4) is performed.
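
As a minimal sketch, with X and y read as above, the full PCA boils down to a single call (keeping as many components as there are observations):

% perform a full PCA on the images read with read_images
[W, mu] = pca(X, y, size(X, 1));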
That's it already. Pretty easy, no? Each principal component has the same length as the original image, thus it can be displayed as an image. [13] referred to these ghostly looking faces as Eigenfaces; that's where the Eigenfaces method got its name from. We'll now want to look at the Eigenfaces, but first of all we need a method to turn the data into a representation GNU Octave/MATLAB understands. The eigenvectors we have calculated can contain negative values, but the image data is expected as unsigned integer values in the range of 0 to 255. So we need a function to normalize the data first (Listing 8):
Listing 8: src/m/normalize.m

function X = normalize(X, l, h)
  minX = min(X(:));
  maxX = max(X(:));
  %% Normalize to [0...1].
  X = X - minX;
  X = X ./ (maxX - minX);
  %% Scale to [low...high].
  X = X .* (h - l);
  X = X + l;
end
Listing 9 then turns the image into the expected representation, by first normalizing the values to the range [0, 255] and then casting the matrix to unsigned integer values.
Listing 9: src/m/toGrayscale.m

function Y = toGrayscale(X, width, height)
  Y = normalize(X, 0, 255);
  if(nargin == 3)
    Y = reshape(Y, height, width);
  end
  Y = uint8(Y);
end
Listing 10 then does a subplot for the first (at most) 16 Eigenfaces.
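
A minimal sketch of such a plot, assuming X, W, mu, width and height from the previous snippets, could look like this:

% Sketch: display the first (at most) 16 Eigenfaces with the jet colormap.
figure;
colormap(jet(256));
for i = 1:min(16, size(W, 2))
  subplot(4, 4, i);
  imagesc(toGrayscale(W(:, i), width, height));
  title(sprintf('Eigenface #%d', i));
end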
I've used the jet colormap, so you can see how the grayscale values are distributed within the specific Eigenfaces. You can see that the Eigenfaces do not only encode facial features, but also the illumination in the images (see the left light in Eigenface #4 and the right light in Eigenface #5):
We've already seen in Equation 5 that we can reconstruct a face from its lower-dimensional approximation. So let's see how many Eigenfaces are needed for a good reconstruction. I'll do a subplot with 10, 30, ..., 310 Eigenfaces:
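
A minimal sketch of that experiment, reconstructing the first sample from an increasing number of Eigenvectors:

% Sketch: reconstruct the first face from 10, 30, ..., 310 Eigenfaces.
steps = 10:20:310;
figure;
for i = 1:numel(steps)
  numEvs = steps(i);
  % project onto the first numEvs components and reconstruct from them
  P = project(W(:, 1:numEvs), X(1, :), mu);
  R = reconstruct(W(:, 1:numEvs), P, mu);
  subplot(4, 4, i);
  imshow(toGrayscale(R, width, height));
  title(sprintf('%d Eigenvectors', numEvs));
end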
10 Eigenvectors are obviously not sufficient for a good image reconstruction, while 50 Eigenvectors may already be sufficient to encode important facial features. You'll get a good reconstruction with approximately 300 Eigenvectors for the AT&T Facedatabase. There are rules of thumb for how many Eigenfaces you should choose for a successful face recognition, but it heavily depends on the input data. [15] is the perfect point to start researching this.
The k-Nearest Neighbor matching is left out for this example. Please see the GNU Octave/MATLAB code at https://github.com/bytefish/facerec/tree/master/m to see how it is implemented; it's all there.
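
If you just want the idea, a minimal 1-nearest-neighbor sketch (a simplification, not the repository implementation) could look like this:

% Sketch: 1-nearest neighbor with the Euclidean distance.
% P holds one projected training sample per row, y the class labels,
% q is the projected query image (a row vector).
function c = knn1(P, y, q)
  distances = sqrt(sum((P - repmat(q, rows(P), 1)).^2, 2));
  [d, idx] = min(distances);
  c = y(idx);
end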
2.3 Fisherfaces
The Linear Discriminant Analysis was invented by the great statistician Sir R. A. Fisher, who successfully used it for classifying flowers in his 1936 paper "The use of multiple measurements in taxonomic problems" [8]. But why do we need another dimensionality reduction method, if the Principal Component Analysis (PCA) did such a good job?

The PCA finds a linear combination of features that maximizes the total variance in data. While this is clearly a powerful way to represent data, it doesn't consider any classes, and so a lot of discriminative information may be lost when throwing components away. Imagine a situation where the variance is generated by an external source, let it be the light. The components identified by a PCA do not necessarily contain any discriminative information at all, so the projected samples are smeared together and a classification becomes impossible.

Figure 1: This figure shows the scatter matrices S_B and S_W for a 3-class problem. \mu represents the total mean and [\mu_1, \mu_2, \mu_3] are the class means.
In order to find the combination of features that separates best between classes, the Linear Discriminant Analysis maximizes the ratio of between-classes to within-classes scatter. The idea is simple: the same classes should cluster tightly together, while different classes are as far away as possible from each other. This was also recognized by Belhumeur, Hespanha and Kriegman, and so they applied a Discriminant Analysis to face recognition in [3].
2.3.1 Algorithmic Description

Let X be a random vector with samples drawn from c classes:

   X = \{X_1, X_2, \ldots, X_c\}   (8)
   X_i = \{x_1, x_2, \ldots, x_n\}   (9)

The scatter matrices S_B and S_W are calculated as:

   S_B = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T   (10)
   S_W = \sum_{i=1}^{c} \sum_{x_j \in X_i} (x_j - \mu_i)(x_j - \mu_i)^T   (11)

where \mu is the total mean:

   \mu = \frac{1}{N} \sum_{i=1}^{N} x_i   (12)

and \mu_i is the mean of class i \in \{1, \ldots, c\}:

   \mu_i = \frac{1}{|X_i|} \sum_{x_j \in X_i} x_j   (13)
Fisher's classic algorithm now looks for a projection W that maximizes the class separability criterion:

   W_{opt} = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|}   (14)

Following [3], a solution for this optimization problem is given by solving the General Eigenvalue Problem:

   S_B v_i = \lambda_i S_W v_i
   S_W^{-1} S_B v_i = \lambda_i v_i   (15)
There's one problem left to solve: The rank of S_W is at most (N - c), with N samples and c classes. In pattern recognition problems the number of samples N is almost always smaller than the dimension of the input data (the number of pixels), so the scatter matrix S_W becomes singular (see [2]). In [3] this was solved by performing a Principal Component Analysis on the data and projecting the samples into the (N - c)-dimensional space. A Linear Discriminant Analysis was then performed on the reduced data, because S_W isn't singular anymore.
The optimization problem can then be rewritten as:

   W_{pca} = \arg\max_W |W^T S_T W|   (16)
   W_{fld} = \arg\max_W \frac{|W^T W_{pca}^T S_B W_{pca} W|}{|W^T W_{pca}^T S_W W_{pca} W|}   (17)

where S_T is the total scatter matrix. The transformation matrix W that projects a sample into the (c - 1)-dimensional space is then given by:

   W = W_{fld}^T W_{pca}^T   (18)
One final note: Although S_W and S_B are symmetric matrices, the product of two symmetric matrices is not necessarily symmetric, so you have to use an eigenvalue solver for general matrices. OpenCV's cv::eigen only works for symmetric matrices in its current version; since eigenvalues and singular values aren't equivalent for non-symmetric matrices, you can't use a Singular Value Decomposition (SVD) either.
2.3.2 Fisherfaces in GNU Octave/MATLAB

Listing 12: src/m/lda.m

function [W, mu] = lda(X, y, k)
  [n, d] = size(X);
  c = max(y);
  if(nargin < 3)
    k = c - 1;
  end
  k = min(k, c - 1);
  mu = mean(X); % total mean
  Sw = zeros(d, d);
  Sb = zeros(d, d);
  for i = 1:c
    Xi = X(find(y == i), :); % samples for current class
    n = rows(Xi);
    mu_i = mean(Xi); % mean vector for current class
    Xi = Xi - repmat(mu_i, n, 1);
    Sw = Sw + Xi' * Xi;
    Sb = Sb + n * (mu_i - mu)' * (mu_i - mu);
  end
  % solve general eigenvalue problem
  [W, D] = eig(Sb, Sw);
  % sort eigenvectors
  [D, i] = sort(diag(D), 'descend');
  W = W(:, i);
  % keep at most (c-1) eigenvectors
  W = W(:, 1:k);
end
The functions to perform a PCA (Listing 4) and LDA (Listing 12) are now defined, so we can go
ahead and implement the Fisherfaces from Equation 18.
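
A minimal sketch of such a function, combining pca (Listing 4), lda (Listing 12) and project (Listing 5) as in Equation 18, could look like this:

% Sketch: Fisherfaces as a PCA reduction to (N-c) dimensions followed by an LDA.
function [W, mu] = fisherfaces(X, y, k)
  N = rows(X);
  c = max(y);
  if(nargin < 3)
    k = c - 1;
  end
  % reduce to (N-c) dimensions, so Sw does not become singular
  [Wpca, mu] = pca(X, y, N - c);
  % perform the LDA on the projected samples
  Wlda = lda(project(Wpca, X, mu), y, k);
  % the combined transformation matrix
  W = Wpca * Wlda;
end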
For this example I am going to use the Yale Facedatabase A, just because the plots are nicer. Each Fisherface has the same length as an original image, thus it can be displayed as an image. We'll again load the data, learn the Fisherfaces and make a subplot of the first 16 Fisherfaces.
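
A minimal sketch of that experiment ('/path/to/yalefaces' is a placeholder for your copy of the database):

% Sketch: learn the Fisherfaces on the Yale Facedatabase A and plot them.
[X, y, width, height] = read_images('/path/to/yalefaces');
[W, mu] = fisherfaces(X, y);
figure;
colormap(jet(256));
for i = 1:min(16, size(W, 2))
  subplot(4, 4, i);
  imagesc(toGrayscale(W(:, i), width, height));
  title(sprintf('Fisherface #%d', i));
end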
The Fisherfaces method learns a class-specific transformation matrix, so they do not capture illumination as obviously as the Eigenfaces method. The Discriminant Analysis instead finds the facial features to discriminate between the persons. It's important to mention that the performance of the Fisherfaces heavily depends on the input data as well. Practically speaking: if you learn the Fisherfaces for well-illuminated pictures only and you try to recognize faces in badly illuminated scenes, then the method is likely to find the wrong components (just because those features may not be predominant in badly illuminated images). This is somewhat logical, since the method had no chance to learn the illumination.
The Fisherfaces allow a reconstruction of the projected image, just like the Eigenfaces did. But since we only identified the features to distinguish between subjects, you can't expect a nice approximation of the original image. We can rewrite Listing 11 for the Fisherfaces method into Listing 15, but this time we'll project the sample image onto each of the Fisherfaces instead. So you'll have a visualization of which features each Fisherface describes.
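
As a sketch, projecting the first sample onto each Fisherface and reconstructing from it:

% Sketch: visualize which features each single Fisherface describes.
figure;
for i = 1:min(16, size(W, 2))
  P = project(W(:, i), X(1, :), mu);
  R = reconstruct(W(:, i), P, mu);
  subplot(4, 4, i);
  imshow(toGrayscale(R, width, height));
  title(sprintf('Fisherface #%d', i));
end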
(Figure: the sample projected onto Fisherface #1, #2, #3 and #4.)
The k-Nearest Neighbor matching is left out for this example. Please see the GNU Octave/MATLAB code at https://github.com/bytefish/facerec/tree/master/m to see how it is implemented; it's all there.
3 Conclusion
This document explained and implemented the Eigenfaces [13] and the Fisherfaces [3] method with GNU Octave/MATLAB. It gave you some ideas to get started and research this highly active topic. I hope you had fun reading, and I hope you think cv::FaceRecognizer is a useful addition to OpenCV.
More maybe here:
http://www.opencv.org
http://www.bytefish.de/blog
http://www.github.com/bytefish
References
[1] Ahonen, T., Hadid, A., and Pietikainen, M. Face Recognition with Local Binary Patterns. Computer Vision - ECCV 2004 (2004), 469–481.
[2] Raudys, S. J., and Jain, A. K. Small sample size effects in statistical pattern recognition: Recommendations for practitioners. IEEE Transactions on Pattern Analysis and Machine Intelligence 13, 3 (1991), 252–264.
[3] Belhumeur, P. N., Hespanha, J., and Kriegman, D. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 7 (1997), 711–720.
[4] Brunelli, R., and Poggio, T. Face recognition through geometrical features. In European Conference on Computer Vision (ECCV) (1992), pp. 792–800.
[5] Cardinaux, F., Sanderson, C., and Bengio, S. User authentication via adapted statistical models of face images. IEEE Transactions on Signal Processing 54 (January 2006), 361–373.
[6] Turati, C., Macchi Cassia, V., Simion, F., and Leo, I. Newborns' face recognition: Role of inner and outer facial features. Child Development 77, 2 (2006), 297–311.
[7] Duda, R. O., Hart, P. E., and Stork, D. G. Pattern Classification (2nd Edition), 2 ed. November 2001.
[8] Fisher, R. A. The use of multiple measurements in taxonomic problems. Annals of Eugenics 7 (1936), 179–188.
[9] Kanade, T. Picture processing system by computer complex and recognition of human faces. PhD thesis, Kyoto University, November 1973.
[10] Lee, K.-C., Ho, J., and Kriegman, D. Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 27, 5 (2005).
[11] Maturana, D., Mery, D., and Soto, A. Face recognition with local binary patterns, spatial pyramid histograms and naive Bayes nearest neighbor classification. 2009 International Conference of the Chilean Computer Science Society (SCCC) (2009), 125–132.
[12] Rodriguez, Y. Face Detection and Verification using Local Binary Patterns. PhD thesis, Ecole Polytechnique Federale de Lausanne, October 2006.
[13] Turk, M., and Pentland, A. Eigenfaces for recognition. Journal of Cognitive Neuroscience 3 (1991), 71–86.
[14] Wiskott, L., Fellous, J.-M., Kruger, N., and von der Malsburg, C. Face recognition by elastic bunch graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (1997), 775–779.
[15] Zhao, W., Chellappa, R., Phillips, P., and Rosenfeld, A. Face recognition: A literature survey. ACM Computing Surveys (CSUR) 35, 4 (2003), 399–458.