
International Conference and Workshop on Emerging Trends in Technology (ICWET 2010) – TCET, Mumbai, India

Face Recognition Using PCA Based Algorithm and Neural Network
S R Barahate
Department of Computer Engineering
Yadavrao Tasgaonkar College of Engineering and Management, Karjat.
M: +91-9322623336
[email protected]

J Saturwar
Department of Computer Engineering
Shivajirao S. Jondhale College of Engineering, Dombivali(E).
M: +91-9821472959
[email protected]

ABSTRACT
In this paper we develop a computational model to identify an unknown person's face by comparing characteristics of the face to those of known individuals. Principal Component Analysis, based on information theory concepts, seeks a computational model that best describes a face. The eigenface approach is a principal component analysis method in which a small set of characteristic pictures is used to describe the variation between face images. The goal is to find the eigenvectors (eigenfaces) of the covariance matrix of the distribution spanned by a training set of face images. Every face image is then represented by a linear combination of these eigenvectors. The eigenface algorithm has been applied to extract the basic face from the human face images stored in a database of faces (e.g. the ORL face database). Recognition is performed by projecting a new image into the subspace spanned by the eigenfaces and then classifying the face by comparing its position in face space with the positions of known individuals. In this approach we treat face recognition as an intrinsically two-dimensional (2-D) recognition problem rather than one requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by 2-D characteristic views.

Categories and Subject Descriptors
I.4.7 Image Processing and Computer Vision

General Terms
Algorithms, Measurement, Performance, Security, Verification, Experimentation.

Keywords
Biometrics, Face recognition, Eigenfaces, Eigenvalues, Eigenvector, Feature vector.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
ICWET'10, February 26-27, 2010, Mumbai, Maharashtra, India.
Copyright 2010 ACM 978-1-60558-812-4…$10.00.

1. INTRODUCTION
The face is our primary focus of attention in social intercourse, playing a major role in conveying identity and emotion. Although the ability to infer intelligence or character from facial appearance is suspect, the human ability to recognize faces is remarkable. Face recognition has become an important issue in many applications such as security systems, credit card verification and criminal identification. For example, the ability to model a particular face and distinguish it from a large number of stored face models would make it possible to vastly improve criminal identification. Although it is clear that people are good at face recognition, it is not at all obvious how faces are encoded or decoded by the human brain. Human face recognition has been studied for more than twenty years.

The first step of human face identification is to extract the relevant features from facial images. Research in the field primarily intends to generate sufficiently reasonable familiarities of human faces so that another human can correctly identify the face. The question naturally arises as to how well facial features can be quantized. If such a quantization is possible, then a computer should be capable of recognizing a face given a set of features. Face recognition is one of the most successful applications of image analysis and understanding and has gained much attention in recent years. It is an emerging field of research with many challenges, such as large sets of images and improper illumination conditions. Various algorithms have been proposed, and research groups across the world have reported different and often contradictory results when comparing them. These different approaches fall into two major categories, given below.

1.1 Feature based Recognition
This is based on the extraction of the properties of individual organs located on a face, such as the eyes, nose and mouth, as well as their relationships with each other. Feature vectors describing the characteristics of face images are evaluated by using deformable templates and active contour models, where excessive geometry and the minimization of energy functions are involved.

1.2 Principal Component Analysis
This approach is based on information theory concepts; it seeks a computational model that best describes a face by extracting the most relevant information contained in that face.

Kirby and Sirovich [1] developed a technique for efficiently representing pictures of faces using principal component analysis. Starting with an ensemble of original face images, they calculated a best coordinate system for image compression, where each coordinate is actually an image that they termed an "eigenpicture". They argued that, at least in principle, any collection of face images can be approximately reconstructed by storing a small collection of weights for each face and a small set of standard pictures (the eigenpictures). The weights describing each face are found by projecting the face image onto each eigenpicture.
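The eigenpicture compression idea can be sketched in a few lines of NumPy; the ensemble size, image resolution and the use of an SVD to obtain the eigenpictures are illustrative assumptions rather than details taken from [1]:

```python
# Sketch of the eigenpicture idea: approximate each face in an ensemble by the
# mean picture plus a weighted sum of a small set of "standard pictures".
# Sizes and data here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((100, 64 * 64))       # 100 face images, flattened to vectors

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# The eigenpictures are the principal directions of the centered ensemble.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 10
eigenpictures = vt[:k]                   # k standard pictures, each image-sized

# Each face is now stored as only k weights instead of 4096 pixel values.
weights = centered @ eigenpictures.T     # shape (100, k)

# Approximate reconstruction from the stored weights and eigenpictures.
reconstructed = mean_face + weights @ eigenpictures
```

Keeping more eigenpictures lowers the reconstruction error, at the cost of storing more weights per face.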
Turk and A. Pentland [2] argued that, if a multitude of face images can be reconstructed by a weighted sum of a small collection of characteristic features or eigenfaces, then an efficient way to learn and recognize faces would be to build up the characteristic features by experience over time and to recognize particular faces by comparing the feature weights needed to approximately reconstruct them with the weights associated with known individuals. Each individual is therefore characterized by the small set of feature (eigenface) weights needed to describe and reconstruct them.

The aim of this paper is to develop a system to extract features from a grey-scale image of a human frontal face and to represent those features using eigenfaces.

2. TYPICAL FACE RECOGNITION SYSTEM
The proposed face recognition system passes through three main phases during a face recognition process. Three major functional units are involved in these phases, and they are depicted in Fig. 1. The basic steps involved in face recognition using the eigenfaces approach [3][4] are as follows:

2.1 Initialization
1. Acquire an initial set of face images, known as the training set.
2. Calculate eigenfaces from the training set, keeping only the M' images that correspond to the highest eigenvalues. These M' images define the face space.
3. Calculate the distribution in this M-dimensional space for each known person by projecting their face images onto this face space.

2.2 To Recognize New Face Images
1. For a given input image, calculate a set of weights based on the M' eigenfaces by projecting the new image onto each of the eigenfaces.
2. Determine whether the image is a face by checking whether it is sufficiently close to the face space.
3. If the image is a face, classify the weight pattern as belonging to either a known or an unknown person.
4. The weight pattern can be compared with known weight patterns to match faces.

Fig. 1: Face recognition system

• Face Library Formation Phase
In this phase, the acquisition and preprocessing of the face images that are going to be added to the face library are performed. Face images are stored in a face library in the system, and every action, such as training-set or eigenface formation, is performed on this face library. The face library is initially empty; in order to start the face recognition process, it has to be filled with face images. The proposed face recognition system operates on 128 x 128 x 8, HIPS-formatted image files. Each face is represented by two entries in the face library: one entry corresponds to the face image itself (for the sake of speed, no data compression is performed on the stored face image) and the other corresponds to the weight vector associated with that face image. The weight vectors of the face library members are empty until a training set is chosen and eigenfaces are formed.

• Training Phase
After adding face images to the initially empty face library, the system is ready to perform training-set and eigenface formation. The face images that are going to be in the training set are chosen from the entire face library. Because the face library entries are normalized, no further preprocessing is necessary at this step. After choosing the training set, eigenfaces are formed and stored for later use. Eigenfaces are calculated from the training set, keeping only the M images that correspond to the highest eigenvalues; these M eigenfaces define the M-dimensional "face space". As new faces are experienced, the eigenfaces can be updated or recalculated. The corresponding distribution in the M-dimensional weight space is calculated for each face library member by projecting its face image onto the "face space" spanned by the eigenfaces. The weight vector of each face library member, initially empty, has now been updated, and the system is ready for the recognition process.

• Recognition and Learning Phase
After choosing a training set and constructing the weight vectors of the face library members, the system is ready to perform the recognition process. The user initiates recognition by choosing a face image. After its weight vector is obtained, it is compared with the weight vector of every face library member within a user-defined "threshold". If there exists at least one face library member that is similar to the acquired image within that threshold, the face image is classified as "known". Otherwise, a miss has occurred and the face image is classified as "unknown". After being classified as unknown, the new face image can be added to the face library with its corresponding weight vector for later use (learning to recognize).

3. EIGENFACE APPROACH
The training set of images is given as input to find the eigenspace. Using these images, the average face image is computed. The difference of each image from the average is captured by the covariance matrix, which is used to calculate eigenvectors and eigenvalues. These are the eigenfaces, which represent various face features.

The eigenvalues are sorted and the higher ones are kept, since they represent the maximum variation. This yields an eigenspace, spanned by the eigenfaces, of lower dimension than the original images. Two test images can now be projected onto this eigenspace to give a weight vector, also known as the face key, for each image.
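As a rough illustration (an assumed implementation, not the authors' code), the projection and threshold comparison described above might look as follows in NumPy; the toy eigenfaces, library keys and threshold are made-up values:

```python
# Sketch of the recognition flow in Sections 2-3: project a probe image onto
# the eigenfaces to obtain its face key, then compare that key against every
# stored library weight vector using Euclidean distance and a threshold.
import numpy as np

def face_key(image, mean_face, eigenfaces):
    """Project a flattened face image onto the eigenface space."""
    return eigenfaces @ (image - mean_face)

def classify(probe, library_keys, mean_face, eigenfaces, threshold):
    """Return (member index, distance) for a "known" face, (None, distance) otherwise."""
    key = face_key(probe, mean_face, eigenfaces)
    dists = np.linalg.norm(library_keys - key, axis=1)
    best = int(np.argmin(dists))
    if dists[best] <= threshold:
        return best, float(dists[best])   # within threshold: classified "known"
    return None, float(dists[best])       # a miss: classified "unknown"

# Toy demonstration with two made-up eigenfaces and two library members.
mean = np.zeros(4)
eigfs = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
library = np.array([[1.0, 0.0],           # precomputed face keys
                    [0.0, 2.0]])
match, dist = classify(np.array([1.1, 0.0, 0.0, 0.0]), library, mean, eigfs, 0.5)
# match == 0: the probe falls within the threshold of library member 0
```

On a miss, the probe and its key can simply be appended to the library, which corresponds to the "learning to recognize" step described above.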

The Euclidean distance between these two face-key vectors is then calculated. If it is below some threshold value, the two images are said to match, i.e. they belong to the same person. From this result, the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are found; these are used to adjust the value of the threshold.

The eigenfaces procedure [3] is as follows:

1. Assume the training set of images is Γ1, Γ2, ..., Γm, with each image being I(x, y). Convert each image into a vector and stack the vectors into a full-size matrix of dimension m × p, where m is the number of training images and p is the number of pixels per image.

2. Find the mean face:

   Ψ = (1/m) Σ_{i=1}^{m} Γi                                        (1)

3. Calculate the mean-subtracted faces:

   Φi = Γi − Ψ,  i = 1, 2, ..., m                                   (2)

   and obtain the mean-subtracted matrix A = [Φ1, Φ2, ..., Φm], of size m × p (Amp).

4. By implementing the matrix transformations, the matrix is reduced by:

   Cmm = Amp · A^T_pm                                              (3)

   where C is the covariance matrix and T denotes the transpose.

5. Find the eigenvectors Vmm and eigenvalues λm of the C matrix using the Jacobi method [4-7], and order the eigenvectors by highest eigenvalue.

6. Apply the eigenvector matrix Vmm to the adjusted faces Φm. These vectors determine linear combinations of the training-set images that form the eigenfaces Uk:

   Uk = Σ_{i=1}^{m} Vki Φi,  k = 1, 2, ..., m'                      (4)

   Instead of using all m eigenfaces, only m' < m are kept, where each individual (class) contributes more than one training image and m' is the total number of classes used.

7. Based on the eigenfaces, each image has its face vector:

   Wk = U^T_k (Γ − Ψ),  k = 1, 2, ..., m'                           (5)

   where Γ − Ψ is the mean-subtracted vector of size p × 1 and the eigenface matrix is Upm'. The weights form a feature vector:

   Ω^T = [w1, w2, ..., wm']

8. A face can be reconstructed from its feature vector Ω^T and the eigenfaces as:

   Γ' = Ψ + Φf                                                     (6)

   where Φf = Σ_{k=1}^{m'} wk Uk.

4. RESULTS & DISCUSSION
The experiment was done using 180 images of 20 different persons (9 images each) from the database of the Olivetti Research Laboratory (ORL) in the U.K., all taken between April 1992 and April 1994. Diverse facial details and expressions made the major difference between the images of each subject. The images are all 92 x 112 pixels, with 256 gray levels per pixel. The goal of the experiment was to find the value of M' needed to classify new images of already known persons and recognize them.

4.2 Effect of choice of M'
Results were obtained for various values of M' less than the actual value M, which is 60 for the ORL database considered. It is clear that by using all M eigenvectors, the success rate is 100%. The total number of images considered for testing is 180, while the training-set size is 60. The following table gives an idea of the error and success rates for various values of M'.

Table 1: Different choices of M'

  M'    Errors (Quant / Rate)    Success (Quant / Rate)
  10    13 / 10.83%              130 / 72.23%
  20     6 /  5.00%              140 / 77.78%
  30     6 /  1.67%              150 / 83.34%
  50     2 /  1.67%              160 / 88.89%
  60     0 /  0.00%              180 / 100%

4.3 Neural Network
A back-propagation neural network is used for classification and recognition. Table 2 shows the training results using the neural network. In this experiment, 8 patterns are used, with 8 inputs per pattern, 5 hidden neurons, 3 output neurons, 0.9 for momentum, 0.7 for the learning rate, and an error of 0.001 as the stopping condition. Only a few training examples were required for this method to reach a performance of about 80 percent and, judging by the learning curve, this seems to be the maximum performance possible with this set of data.

Table 2: Training result using Backpropagation Neural Network

  Pattern   Actual Pattern   Network Output           MSE
  S1        0 0 0            0.0660 0.0021 0.0081     0.0014
  S2        0 0 1            0.0064 0.0069 0.9957     0.0003
  S3        0 1 0            0.0068 0.9992 0.0131     0.0000
  S4        0 1 1            0.0068 0.9987 0.9903     0.0000
  S5        1 0 0            0.8935 0.0052 0.0121     0.0038
  S6        1 0 1            0.9881 0.0015 0.9971     0.0001
  S7        1 1 0            0.9837 0.9997 0.04824    0.0008
  S8        1 1 1            0.9658 0.9994 0.95567    0.0010

In the recognition step, the identity of a human face is determined if any network output error value is less than the error threshold (0.001). The recognition rate was perfect when the entire training pattern set was used for recognition. The recognition performance decreases dramatically if only one image per class is used in the learning phase. However, when face images with different poses are added in the learning step, the recognition rate increases.

5. CONCLUSION
In the proposed scheme, eigenfaces are used to represent the feature vectors of human faces. The features are extracted from the original image to represent a unique identity and are used as inputs to the neural network to measure similarity in classification and recognition. The eigenfaces have proven capable of providing the significant features while reducing the input size for the neural network. Thus, the network's recognition speed is raised.
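As a closing illustration, a back-propagation network configured as in Section 4.3 (8 inputs, 5 hidden neurons, 3 output neurons, momentum 0.9, learning rate 0.7, stopping error 0.001) might be sketched as follows; the sigmoid activation, the random input patterns, the weight initialization and the batch update rule are assumptions that the paper does not specify:

```python
# Minimal batch back-propagation sketch with the Section 4.3 topology:
# 8 inputs -> 5 hidden neurons -> 3 output neurons, momentum 0.9,
# learning rate 0.7, stopping when the mean squared error drops below 0.001.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((8, 8))                       # 8 patterns, 8 inputs per pattern
# 3-bit target codes 000..111, one per pattern (as in Table 2).
T = np.array([[int(b) for b in format(i, "03b")] for i in range(8)], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 0.5, (8, 5))            # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (5, 3))            # hidden -> output weights
dW1 = np.zeros_like(W1)
dW2 = np.zeros_like(W2)
lr, momentum, stop_error = 0.7, 0.9, 0.001

history = []
for epoch in range(10000):
    H = sigmoid(X @ W1)                      # hidden activations
    Y = sigmoid(H @ W2)                      # network outputs
    err = Y - T
    mse = float(np.mean(err ** 2))
    history.append(mse)
    if mse < stop_error:                     # stopping condition from the paper
        break
    d_out = err * Y * (1.0 - Y)              # back-propagate through sigmoids
    d_hid = (d_out @ W2.T) * H * (1.0 - H)
    dW2 = momentum * dW2 - lr * (H.T @ d_out) / len(X)
    dW1 = momentum * dW1 - lr * (X.T @ d_hid) / len(X)
    W2 += dW2
    W1 += dW1
```

With this large a learning rate and momentum, the batch updates can oscillate; the per-pattern (online) updates common in classic back-propagation implementations are an alternative design choice.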


6. REFERENCES
[1] Kirby, M. and L. Sirovich, 1990, "Application of the Karhunen-Loeve procedure for the characterization of human faces", IEEE Trans. Pattern Analysis and Machine Intelligence, 12: 103-108.
[2] Turk, M.A. and A.P. Pentland, 1991, "Face recognition using eigenfaces", Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition, pp: 586-591.
[3] Zhujie and Y.L. Yu, 1994, "Face recognition with eigenfaces", Proc. IEEE Intl. Conf. Industrial Technology, pp: 434-438.
[4] Firdaus, M. et al., 2005, "Face recognition using neural network", Proc. Intl. Conf. Intelligent Systems (ICIS).
[5] Firdaus, M. et al., 2006, "Dimensions reductions for face recognition using principal component analysis", Proc. 11th Intl. Symp. Artificial Life and Robotics (AROB 11th '06).
[6] Nazish, et al., 2001, "Face recognition using neural network", Proc. IEEE INMIC 2001, pp: 277-281.

