Face Recognition Techniques: ECE533 - Image Processing Project
Face Recognition Techniques
Jorge Orts
Index
Introduction
Approach
Work performed
Results
Discussion
References
INTRODUCTION
This project deals with the topic of face recognition techniques using
digital image processing. Face recognition has always been a very
challenging task for researchers. On the one hand, its applications can be
very useful for personal verification and recognition. On the other hand, it
has always been very difficult to implement because of the wide variety of
conditions under which a human face may appear.[6] Nevertheless, the
approaches of the last decades have been decisive for the development of
face recognition. Because of the difficulty of the task, the number of
techniques is large and diverse, and the applications cover a huge number
of situations.
As stated above, the applications of face recognition are very varied. We
can divide them into two big groups: applications that require face
identification and those that need face verification. The difference is that
identification matches a face against other faces in a database, whereas
verification tries to verify a human face from a given sample of that face.[6]
Face recognition can also be divided into two groups according to the field
of application. The main driver of the technique is law enforcement;
however, it is also used in commercial applications.
Among the law enforcement applications, some representative examples are
mug shot albums, video surveillance and shoplifting prevention.[3] Among
commercial applications we can distinguish entertainment (video games,
virtual reality and training programs), smart cards (driver's licenses,
passports and voter registration) and information security (TV parental
control, cell phones and database security).[7]
It has already been stated that face recognition has always been a very
challenging task for researchers because of its many difficulties and
limitations. Human faces are not an invariant characteristic; in fact, a
person's face can change a great deal over short periods of time (from one
day to another) as well as over long periods (a difference of months or
years). One problem of face recognition is that different faces can look very
similar, so a discrimination task is needed. On the other hand, when we
analyze the same face, many characteristics may have changed. Some of
the most important problems are changes in illumination, variability in facial
expression and the presence of accessories (glasses, beards, etc.); finally,
the rotation of a face may change many facial characteristics.[6]
APPROACH
This paper principally deals with the comparison of two different methods
for face recognition. The project is based on two articles that describe these
techniques; they are listed in the references as sources [3] and [4]. The
methods are "Face Recognition Using Eigenfaces" and "Face recognition
using line edge map".
Finally, a discussion of the best characteristics of each method will be
carried out. Depending on the technique and, more importantly, on the work
performed for each article, different situations of face position, lighting, etc.
will be commented on. The main goal of this paper is to find a good face
recognition technique for each situation.
WORK PERFORMED
During this section, the two face recognition algorithms will be explained
in order to understand the basis of each one. They have been selected
because two main reasons: they are very known and spread techniques for
face recognition; moreover, they represent different ways to approach the
problem as it was stated before. The first technique is based on the so-
called Karhunen-Loève transformation using eigenfaces for recognition. The
second one tries a new algorithm using line edge maps to improve the
previous methods such as the eigenfaces.
Face recognition using eigenfaces
The recognition procedure can be summarized in the following steps [4]:
1. Collect a set of training face images and calculate the eigenfaces, which
define the face space.
2. Project the new input image onto the face space to obtain its weight
vector.
3. Determine whether the image is a face; to do so, we check whether it is
close enough to the face space.
The last step is to classify a face image. We just need to transform the new
image into its eigenface components, i.e., project it onto the face space. We
have to calculate the vector of weights Ω^T = [ω_1, ω_2, ..., ω_{M′}], where
ω_k = u_k^T (I − Ψ) for k = 1, 2, ..., M′; here M′ is not the total number of
eigenfaces, but the number of those with the largest eigenvalues. The
criterion for determining the matched face image is to find the face class k
that gives the minimum Euclidean distance ε_k = ||Ω − Ω_k||, where Ω_k is
the weight vector that describes face class k. We can see an example of this
procedure in Figure 2. Images (a) and (b) represent the case where the input
image is near the face space (it is a face) and near a face class (face
matched). On the other hand, image (c) shows an example of an input image
distant from the face space (in fact, it is a flower, not a human face) and not
near any known face class. We could also find an input image that is not
near the face space but is still near a face class; it would be detected as a
false positive, and whether this occurs depends on the value of the threshold
set for the Euclidean distance explained previously.
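As an illustration of this classification step, a minimal NumPy sketch is given below. The function name, thresholds and data layout are assumptions for illustration; the eigenfaces u_k are assumed to be precomputed, orthonormal, and stored as rows.

```python
import numpy as np

def classify_face(image, mean_face, eigenfaces, known_weights, eps_face, eps_class):
    """Project an image onto the face space and classify it.

    image         : flattened input image I, shape (N,)
    mean_face     : average training face Psi, shape (N,)
    eigenfaces    : the M' eigenfaces u_k as orthonormal rows, shape (M', N)
    known_weights : weight vectors Omega_k of the known face classes, shape (K, M')
    eps_face      : threshold on the distance to the face space
    eps_class     : threshold on the distance to the nearest face class
    """
    diff = image - mean_face
    omega = eigenfaces @ diff                # weights w_k = u_k^T (I - Psi)
    reconstruction = eigenfaces.T @ omega    # projection back into image space
    dist_to_face_space = np.linalg.norm(diff - reconstruction)
    if dist_to_face_space > eps_face:
        return "not a face"                  # case (c): far from the face space
    dists = np.linalg.norm(known_weights - omega, axis=1)  # eps_k = ||Omega - Omega_k||
    k = int(np.argmin(dists))
    if dists[k] < eps_class:
        return f"face class {k}"             # cases (a)/(b): face matched
    return "unknown face"                    # near face space, no known class
```

The two thresholds correspond to the two decisions discussed above: the distance to the face space decides face vs. non-face, and the distance to the nearest face class decides match vs. unknown.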
Face recognition using line edge map
This algorithm describes a newer technique based on line edge maps
(LEM) to accomplish face recognition. In addition, it proposes a line
matching technique to make this task possible. In contrast to other
algorithms, LEM uses physiological features of human faces to solve the
problem; it mainly uses the mouth, nose and eyes as the most characteristic
ones.
In order to measure the similarity of human faces, the face images are
first converted into gray-level pictures. The images are then encoded into
binary edge maps using the Sobel edge detection algorithm. This system is
very similar to the way human beings perceive other people's faces, as has
been stated in many psychological studies. The main advantage of line edge
maps is their low sensitivity to illumination changes, because a LEM is an
intermediate-level image representation derived from the low-level edge
map representation.[3] The algorithm has another important advantage: low
memory requirements, because of the kind of data used. Figure 3 shows an
example of a face line edge map; it can be noticed that it keeps the facial
features, but at a very simplified level.
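The encoding step described above can be sketched as follows. This is a minimal NumPy sketch with an illustrative threshold; the subsequent step of fitting line segments to the edge pixels to obtain the LEM is not shown.

```python
import numpy as np

def sobel_edge_map(gray, threshold):
    """Compute a binary edge map from a gray-level image with Sobel kernels.

    gray      : 2-D array of gray-level intensities
    threshold : gradient magnitudes above this value become edge pixels
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                          # vertical gradient
    h, w = gray.shape
    padded = np.pad(gray.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):              # correlate the image with both 3x3 kernels
        for j in range(3):
            window = padded[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    magnitude = np.hypot(gx, gy)    # gradient magnitude per pixel
    return (magnitude > threshold).astype(np.uint8)  # binary edge map
```

Thresholding the gradient magnitude is what makes the map binary; the threshold value trades off edge completeness against noise, and in practice would be tuned to the image set.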
One of the most important parts of the algorithm is the Line Segment
Hausdorff Distance (LHD), defined to accomplish an accurate matching of
face images. This method is not oriented toward matching exact lines
between different images; its main characteristic is its flexibility with respect
to size, position and orientation. Given two LEMs M^l = {m_1^l, m_2^l, ...,
m_p^l} (face from the database) and T^l = {t_1^l, t_2^l, ..., t_q^l} (input
image to be matched), the distance between two line segments is
represented by the vector d(m_i^l, t_j^l), whose elements are three different
distance measurements: the orientation distance, the parallel distance and
the perpendicular distance, respectively:

    d(m_i^l, t_j^l) = [ d_θ(m_i^l, t_j^l), d_∥(m_i^l, t_j^l), d_⊥(m_i^l, t_j^l) ]^T

with

    d_θ(m_i^l, t_j^l) = f(θ(m_i^l, t_j^l)),   d_∥(m_i^l, t_j^l) = min(l_∥1, l_∥2),   d_⊥(m_i^l, t_j^l) = l_⊥

These three components are combined into a single scalar distance:

    d(m_i^l, t_j^l) = sqrt( d_θ²(m_i^l, t_j^l) + d_∥²(m_i^l, t_j^l) + d_⊥²(m_i^l, t_j^l) )
After having defined the distance between two lines, the line segment
Hausdorff distance (pLHD) is defined as

    H_pLHD(M^l, T^l) = max( h(M^l, T^l), h(T^l, M^l) )

where

    h(M^l, T^l) = ( 1 / Σ_{m_i^l ∈ M^l} l_{m_i^l} ) · Σ_{m_i^l ∈ M^l} l_{m_i^l} · min_{t_j^l ∈ T^l} d(m_i^l, t_j^l)

and l_{m_i^l} is the length of segment m_i^l.
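Taking the per-segment distances d(m_i^l, t_j^l) as precomputed inputs, the pLHD defined above can be sketched as follows (a minimal NumPy sketch; the function name and argument layout are illustrative):

```python
import numpy as np

def plhd(lengths_m, lengths_t, dist):
    """Primary line segment Hausdorff distance between two LEMs.

    lengths_m : lengths l_{m_i} of the p segments of M^l, shape (p,)
    lengths_t : lengths l_{t_j} of the q segments of T^l, shape (q,)
    dist      : pairwise segment distances d(m_i, t_j), shape (p, q)
    """
    lengths_m = np.asarray(lengths_m, dtype=float)
    lengths_t = np.asarray(lengths_t, dtype=float)
    dist = np.asarray(dist, dtype=float)

    # h(M, T): length-weighted average of each segment's nearest-neighbour distance
    h_mt = np.sum(lengths_m * dist.min(axis=1)) / lengths_m.sum()
    # h(T, M): the same with the roles of the two maps swapped
    h_tm = np.sum(lengths_t * dist.min(axis=0)) / lengths_t.sum()
    # the pLHD takes the worse (larger) of the two directed distances
    return max(h_mt, h_tm)
```

Weighting each nearest-neighbour distance by the segment's length means that long, prominent edges (e.g. the jaw line) influence the match more than short noisy fragments.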
RESULTS
Since the objective of this project is not the implementation of the
algorithms but their description and comparison, the results reported here
come from the experiments performed by the authors of the articles. Both
methods were tested under variations of face orientation, illumination and
size.
In the first experiment, no face was rejected as unknown because an
infinite threshold was used; statistics were collected by measuring the mean
accuracy as a function of the difference between the training conditions and
the test conditions.[4] The results were 96% accuracy with illumination
changes, 85% with orientation variation and 64% when the size changed.
The second experiment tried both a low threshold and a high one in order
to compare the recognition accuracy and the number of rejected images.
With a low value of θ_ε, many images were rejected as not belonging to the
database; however, a high correct classification percentage was achieved.
On the other hand, using a high value of θ_ε, the great majority of images
were accepted but the errors increased. Finally, adjusting the threshold to
obtain 100% correct recognition, the unknown rates were 19% for lighting
variations, 39% for orientation and 60% for size. If θ_ε was set to obtain only
a 20% unknown rate, correct recognition was 100%, 94% and 74%,
respectively.[4]
Tables with the corresponding results are shown in order to make a good
comparison with the other algorithm discussed in this paper. The results
show the probabilities of correct detection for the different algorithms, and
some experiments include the variation of parameters such as the number of
eigenvectors or the light direction.
Each table is labeled so that its content can be understood. Not all the
results from the articles are shown in this project, but enough are included
to make a good comparison of the general characteristics of the algorithms.
Table 1: Results for size variations for edge map, eigenface (20 eigenvectors)
and LEM.
Table 4: Results for the three algorithms and different face poses
DISCUSSION
Since results for the eigenfaces algorithm are reported by both articles,
this project will draw its conclusions from the results shown in [4]. The main
reason is that the report in [3] of the eigenfaces method does not describe
the exact procedure; therefore, those results may not be as reliable as
expected. Another reason is that an article that tries to demonstrate that the
LEM algorithm is better than others such as eigenfaces, as [3] does, might
not be fully objective.
The first conclusion that can be drawn from the results of the eigenfaces
algorithm concerns the threshold used to determine a match for the input
image. It was demonstrated that perfect recognition accuracy can be
achieved; however, the number of images rejected as unknown then
increases. The dependence of accuracy on changing features is another
characteristic to take into account. The results show that accuracy changes
little under lighting variations, whereas size changes make it fall very
quickly. In order to address this most important weakness, a multiscale
approach should be added to the algorithm.[4]
As predicted, the LEM algorithm kept high levels of correct recognition
under lighting variations. In addition, the LEM method always achieved the
highest accuracy compared with eigenfaces and the edge map. The
disagreement between the two articles about the eigenfaces results under
lighting variations could be due to a difference of definition: [4] took into
account only the images recognized as faces when testing the algorithm,
whereas [3] might have also considered the rejected images when compiling
the statistics.
REFERENCES
[1] L.C. Jain, "Intelligent Biometric Techniques in Fingerprint and Face Recognition".
Boca Raton: CRC Press, 1999.
[3] Y. Gao, M.K.H. Leung, "Face recognition using line edge map". IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 24, no. 6, June 2002, pp. 764-779.
[4] M.A. Turk, A.P. Pentland, "Face Recognition Using Eigenfaces". Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition, 3-6 June
1991, Maui, Hawaii, USA, pp. 586-591.
[6] O. de Vel, S. Aeberhard, "Line-based face recognition under varying pose". IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 10,
Oct. 1999, pp. 1081-1088.