
International Journal of Engineering Research & Technology (IJERT)

ISSN: 2278-0181
Vol. 2 Issue 2, February- 2013

An Application of Linear Algebra for the Optimal Image Recognition

Neeraj Kumar (1), Nirvikar (2)

(1) Assistant Professor, CSE, Institute of Technology Roorkee, Roorkee
(2) Assistant Professor, CSE, IEC College of Engineering & Technology, Greater Noida

Abstract: Real-time detection and identification of human faces remains a difficult problem, and systems for it are still under development. In this article we consider two-dimensional recognition of faces, taking advantage of the fact that faces are normally upright. Face images are projected onto a feature space, the face space. The Eigenface method uses Principal Component Analysis (PCA) to linearly project the image space onto a low-dimensional feature space. Linear Discriminant Analysis (LDA) enhances the Eigenface method: it maximizes the ratio of between-class scatter to within-class scatter and therefore discriminates better than PCA, although, like PCA, it effectively sees only the Euclidean structure of the face space. Experimental results suggest that the proposed Eigenfaces-with-LDA approach provides a better representation and achieves lower error rates in face recognition.

Keywords: Feature Space, Eigenface, PCA, LDA, Euclidean Structure, Face Recognition.

1. Introduction:

Face recognition is a biometric approach: a person's face is scanned and matched against a database of known faces. Face recognition is defined as the identification of a person from an image of their face, and it is performed in two ways, namely face identification and face verification. The task of recognizing human faces is quite complex. The human face is full of information, but working with all of it is time consuming and less efficient; it is better to extract the unique and important information and discard the rest, so that the system stays efficient. Face recognition systems can be widely used in areas where more security is needed, such as airports, military bases and government offices.

Automatic face recognition by computer can be divided into two approaches, namely content-based and face-based. In the content-based approach, recognition is based on the relationship between human facial features such as the eyes, mouth, nose, profile silhouettes and face boundary. The success of this approach relies heavily on accurate feature extraction, which is difficult: every human face has similar facial features, and a small deviation in the extraction may introduce a large classification error. The face-based approach attempts to capture and define the face as a whole. The face is treated as a two-dimensional pattern of intensity variation, and under this approach a face is matched by identifying its underlying statistical regularities.

However, common PCA-based methods suffer from two limitations: poor discriminatory power and a large computational load. It is well known that PCA gives a very good representation of faces. Given two images of the same person, the similarity measured under the PCA representation is very high; yet given two images of different persons, the measured similarity is still high. That means the PCA representation has poor discriminatory power, which can be improved by adding Linear Discriminant Analysis (LDA); but to get a precise result, a large number of samples for each class is required. The second problem with PCA-based methods is the high computational load of finding the eigenvectors. The computational complexity is O(d^2), where d is the number of pixels in the training images, with a typical value of 128x128; this cost is beyond the power of most existing computers. Fortunately, from matrix theory we know that if the number of training images N is smaller than d, the computational complexity is reduced to O(N^2). Yet if N increases, the computational load grows in cubic order. In view of these limitations of the existing PCA-based approach, we propose a new way of using PCA: applying PCA on LDA sub-bands for feature extraction.
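The reduction from O(d^2) to O(N^2) mentioned above can be made concrete with a short sketch. The following Python/NumPy fragment is our own illustration, not code from the paper; the function name and the (N, d) array layout are assumptions. It recovers eigenvectors of the large d x d scatter matrix B B^T from the small N x N matrix B^T B:

```python
import numpy as np

def eigenfaces_via_gram(images):
    """Obtain eigenvectors of the d x d scatter matrix B B^T by
    decomposing the much smaller N x N matrix B^T B (valid when N < d).

    images: array of shape (N, d), one flattened face image per row.
    """
    N, d = images.shape
    mean = images.mean(axis=0)
    B = (images - mean).T                  # d x N matrix of centred images
    gram = B.T @ B                         # N x N instead of d x d
    vals, vecs = np.linalg.eigh(gram)      # eigenpairs of the small matrix
    order = np.argsort(vals)[::-1]         # sort by decreasing eigenvalue
    vals, vecs = vals[order], vecs[:, order]
    # If (B^T B) v = lambda v, then (B B^T)(B v) = lambda (B v),
    # so B v is an eigenvector of the full scatter matrix.
    U = B @ vecs
    U /= np.linalg.norm(U, axis=0)         # normalise the eigenfaces
    return vals, U, mean
```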

1.1. Digital Image Processing:
An image can be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair (x, y) is the grey level of the image at that point. For example, a grey-level image can be represented as

$f = [f_{ij}]$, where $f_{ij} = f(x_i, y_j)$   (1.1)

When x, y and the amplitude values of f are finite, discrete quantities, the image is called a digital image. The finite set of digital values are called picture elements, or pixels. Typically, the pixels are stored in computer memory as a two-dimensional array or matrix of real numbers. Colour images are formed by a combination of individual 2D images, and many of the image processing techniques for monochrome images can be extended to colour (3D) images by processing the three component images individually.
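As a small illustration of equation (1.1), a grey-level image can be held as a two-dimensional array whose entry f[i, j] is the grey level f(x_i, y_j), and a colour image stacks three such arrays. The values and shapes below are made up for the example, not taken from the paper:

```python
import numpy as np

# A tiny 3 x 4 "digital image": f[i, j] is the grey level at pixel (x_i, y_j).
f = np.array([[ 12,  40,  53,  20],
              [ 90, 255, 128,  64],
              [  7,  31, 200, 143]], dtype=np.uint8)

print(f.shape)    # (3, 4): the image stored as a matrix of pixels
print(f[1, 2])    # grey level of a single pixel, here 128

# A colour image combines individual 2D images, one per channel,
# which can be processed component by component.
rgb = np.stack([f, f, f], axis=-1)    # shape (3, 4, 3)
```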
1.2. PCA (Principal Component Analysis):
PCA is the most widely used and best known of the "standard" multivariate methods. It was invented by Pearson (1901) and Hotelling (1933), and first applied in ecology by Goodall (1954) under the name "factor analysis" ("principal factor analysis" is a synonym of PCA). It is a way of identifying patterns in data and expressing the data in such a way as to highlight their similarities and differences. Since patterns can be hard to find in data of high dimension, where the luxury of graphical representation is not available, PCA is a powerful tool for analysing data. The other main advantage of PCA is that, once these patterns in the data have been found, the data can be compressed by reducing the number of dimensions without much loss of information; this technique is used in image compression. PCA takes a data matrix of n objects by p variables, which may be correlated, and summarizes it by uncorrelated axes (principal components or principal axes) that are linear combinations of the original p variables, such that the first k components display as much as possible of the variation among objects.

1.3. Geometric Rationale of PCA:
The objects are represented as a cloud of n points in a multidimensional space, with an axis for each of the p variables. The centroid of the points is defined by the mean of each variable, and the variance of each variable is the average squared deviation of its n values around the mean of that variable:

$V_i = \frac{1}{n-1} \sum_{m=1}^{n} (X_{im} - \bar{X}_i)^2$   (2.1.1)

The degree to which the variables are linearly correlated is represented by their covariances:

$C_{ij} = \frac{1}{n-1} \sum_{m=1}^{n} (X_{im} - \bar{X}_i)(X_{jm} - \bar{X}_j)$   (2.1.2)

The objective of PCA is to rigidly rotate the axes of this p-dimensional space to new positions (principal axes) that have the following properties: they are ordered such that principal axis 1 has the highest variance, axis 2 the next highest variance, ..., and axis p the lowest variance; and the covariance among each pair of principal axes is zero (the principal axes are uncorrelated).
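Equations (2.1.1)-(2.1.2) and the rigid rotation described above can be sketched as follows. This NumPy fragment is our own illustration under assumed naming and array layout, not code from the paper:

```python
import numpy as np

def principal_axes(X):
    """X: data matrix of n objects by p variables.
    Returns the principal axes and the variance along each one."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)           # centre the cloud on its centroid
    C = (Xc.T @ Xc) / (n - 1)         # p x p covariance matrix, eqs. (2.1.1)-(2.1.2)
    var, axes = np.linalg.eigh(C)     # eigen-decomposition of C
    order = np.argsort(var)[::-1]     # axis 1 gets the highest variance, ...
    return axes[:, order], var[order]

# The projections Xc @ axes are uncorrelated: their covariance matrix
# is (numerically) diagonal, which is the rotation property stated above.
```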

1.4. PCA for images, or Eigen-faces:
The Eigenface method is based on linearly projecting the image space onto a low-dimensional feature space. The method, which uses principal component analysis (PCA) for dimensionality reduction, yields projection directions that maximize the total scatter across all classes, i.e., across all images of all faces.

Let us consider a set of N sample images {x_1, x_2, ..., x_N} taking values in an n-dimensional image space, and assume that each image belongs to one of the classes {X_1, X_2, ..., X_c}. Let us also consider a linear transformation mapping the original n-dimensional image space into an m-dimensional feature space, where m < n. The new feature vectors $y_k \in \mathbb{R}^m$ are defined by the linear transformation

$y_k = W^T x_k, \quad k = 1, 2, \ldots, N$   (2.2.1)

where $W \in \mathbb{R}^{n \times m}$ is a matrix with orthonormal columns. If the total scatter matrix $S_T$ is defined as

$S_T = \sum_{k=1}^{N} (x_k - \mu)(x_k - \mu)^T$   (2.2.2)

where $\mu \in \mathbb{R}^n$ is the mean image of all samples, then after applying the linear transformation $W^T$ the scatter of the transformed feature vectors {y_1, y_2, ..., y_N} is $W^T S_T W$. In PCA, the projection $W_{opt}$ is chosen to maximize the determinant of the total scatter matrix of the projected samples, i.e.

$W_{opt} = \arg\max_W \left| W^T S_T W \right|$   (2.2.3)

$W_{opt} = [w_1, w_2, \ldots, w_m]$   (2.2.4)

where $\{w_i \mid i = 1, 2, \ldots, m\}$ is the set of n-dimensional eigenvectors of $S_T$ corresponding to the m largest eigenvalues $\{\lambda_i \mid i = 1, 2, \ldots, m\}$, i.e.

$S_T w_i = \lambda_i w_i, \quad i = 1, 2, \ldots, m$   (2.2.5)

Since these eigenvectors have the same dimension as the original images, they are referred to as Eigen-pictures or Eigen-faces. Classification is performed using a nearest neighbour classifier in the reduced feature space. Most Expressive Features (MEF): the vectors obtained using PCA; they show the tendency of PCA to capture major variations in the training set, such as lighting direction.
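A minimal sketch of the projection in equation (2.2.1) and of the nearest-neighbour classification described above is given below. It is our own NumPy illustration, with W and mu assumed to come from the training procedure of Section 1.5; the function names are not from the paper:

```python
import numpy as np

def project(W, mu, images):
    """Eq. (2.2.1) applied to mean-centred images: y_k = W^T (x_k - mu)."""
    return (images - mu) @ W              # one m-dimensional feature vector per row

def classify(W, mu, gallery, labels, probe):
    """Assign the probe image the label of its nearest neighbour
    in the reduced feature space."""
    Y = project(W, mu, gallery)           # gallery: (K, n) known face images
    y = project(W, mu, probe[None, :])[0]
    distances = np.linalg.norm(Y - y, axis=1)
    return labels[int(np.argmin(distances))]
```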
1.5. Algorithm for Training:
Step-1: Align the training images X_1, X_2, ..., X_N.
Step-2: Compute the average face $u = \frac{1}{N} \sum_i X_i$.
Step-3: Compute the difference images $\varphi_i = X_i - u$.
Step-4: Compute the covariance matrix (total scatter matrix) $S_T = \frac{1}{N} \sum_i \varphi_i \varphi_i^T = B B^T$, with $B = [\varphi_1, \varphi_2, \ldots, \varphi_N]$.
Step-5: Compute the eigenvectors W of the covariance matrix.
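The five steps above can be written out directly as a short sketch (our own illustration; the images are assumed to be already aligned, since the paper does not specify an alignment procedure):

```python
import numpy as np

def train_eigenfaces(X):
    """Steps 1-5 of the training algorithm.
    X: (N, d) array of flattened, already aligned training images."""
    N = X.shape[0]
    u = X.mean(axis=0)                  # Step-2: average face u
    Phi = X - u                         # Step-3: difference images phi_i
    B = Phi.T                           # columns of B are the phi_i
    S_T = (B @ B.T) / N                 # Step-4: total scatter matrix B B^T / N
    vals, W = np.linalg.eigh(S_T)       # Step-5: eigenvectors of S_T
    order = np.argsort(vals)[::-1]
    return W[:, order], vals[order], u

# For large d, forming the d x d matrix S_T is expensive; the N x N
# trick sketched after the Introduction gives the same eigenfaces.
```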
2. Linear Discriminant Analysis (LDA):
LDA selects the eigenvectors U in such a way that the ratio of the between-class scatter to the within-class scatter is maximized; PCA, on the other hand, does not take any class differences into account. LDA computes the projection U that maximizes the ratio

$U_{opt} = \arg\max_U \frac{\left| U^T S_B U \right|}{\left| U^T S_W U \right|}$   (2.1)

where $S_B$ and $S_W$ are the between-class scatter matrix and the within-class scatter matrix respectively, such that

$S_B = \sum_{i=1}^{M} N_i (\mu_i - \mu)(\mu_i - \mu)^T$   (2.2)

and

$S_W = \sum_{i=1}^{M} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T$   (2.3)

where M is the number of classes, $N_i$ is the number of samples in class i and $\mu_i$ is the mean of class i. $U_{opt}$ can be found by solving the generalized eigenvalue problem $S_B u_i = \lambda_i S_W u_i$. Most Discriminating Features (MDF): the features (projections) obtained using LDA.
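Equations (2.1)-(2.3) and the generalized eigenvalue problem can be sketched as follows. This is our own NumPy illustration; solving the problem through the pseudo-inverse of S_W is an assumption made here, because S_W can be singular when the number of samples per class is small:

```python
import numpy as np

def lda_projection(X, y, m):
    """X: (N, n) feature vectors (for example PCA coefficients), y: class labels.
    Returns m columns of U_opt maximizing |U^T S_B U| / |U^T S_W U|."""
    y = np.asarray(y)
    n = X.shape[1]
    mu = X.mean(axis=0)
    S_B = np.zeros((n, n))
    S_W = np.zeros((n, n))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        d = (mu_c - mu)[:, None]
        S_B += Xc.shape[0] * (d @ d.T)          # eq. (2.2)
        S_W += (Xc - mu_c).T @ (Xc - mu_c)      # eq. (2.3)
    # Generalized eigenvalue problem S_B u = lambda S_W u.
    vals, vecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)
    order = np.argsort(vals.real)[::-1][:m]
    return vecs[:, order].real
```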
LDA assumes that the whole dataset is given in advance and is trained in one batch. In a streaming environment, however, new samples are presented continuously, possibly without end. The addition of these new samples leads to changes in the original mean vector µ, the within-class scatter matrix S_W and the between-class scatter matrix S_B, so the whole discriminant eigenspace model should be updated. Let X and Y be two sets of observations, where X is the already presented observation set and Y is a set of new observations, and let their discriminant eigenspace models be $\Omega = (S_{Wx}, S_{Bx}, \mu_x, N)$ and $\Psi = (S_{Wy}, S_{By}, \mu_y, L)$, respectively. The updating problem is to compute the new Fisher-space model $\Phi = (S_{Wv}, S_{Bv}, \mu_v, N + L)$ from the Fisher-space models $\Omega$ and $\Psi$.
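As a hedged illustration of this updating problem (not the paper's update rule), the sketch below merges the sample counts, the means and one scatter matrix of two batches with the standard pooled-scatter identity; the per-class updates of S_W and S_B would apply the same pattern class by class:

```python
import numpy as np

def merge_models(S_x, mu_x, N, S_y, mu_y, L):
    """Combine the scatter statistics of an old batch (S_x, mu_x, N)
    and a new batch (S_y, mu_y, L) without revisiting the raw samples."""
    mu_v = (N * mu_x + L * mu_y) / (N + L)                 # merged mean
    diff = (mu_x - mu_y)[:, None]
    # Pooled scatter: S_v = S_x + S_y + (N L / (N + L)) (mu_x - mu_y)(mu_x - mu_y)^T
    S_v = S_x + S_y + (N * L / (N + L)) * (diff @ diff.T)
    return S_v, mu_v, N + L
```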

3. Experiments and Interpretations:
A set of face images was used for the PCA approach, and the results are interpreted below; the input facial images and their Eigen-faces are shown in Figure 1 and Figure 2.

Figure 1: The input facial images (PCA approach)

Figure 2: The Eigen-faces for the set of input facial images (PCA approach)

A set of face images was used for the LDA approach, and the results are interpreted below; the input facial images and their Eigen-faces are shown in Figure 3 and Figure 4.

Figure 3: The input facial images (LDA approach)

Figure 4: The Eigen-faces for the set of input facial images (LDA approach)
4. PCA vs LDA (Results Comparison):

Training Images   Testing Images   PCA (%)   LDA (%)
       2                 8            72        78
       3                 6            73        79
       4                 6            74        82
       5                 5            79        87

[Chart: graphical summary of the values in the table above.]
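The PCA and LDA rates in the table can be read as the percentage of correctly identified test images; a small helper such as the following (our own sketch, with made-up labels rather than the paper's data) computes that figure:

```python
import numpy as np

def recognition_rate(predicted, truth):
    """Percentage of probe images assigned the correct identity."""
    predicted, truth = np.asarray(predicted), np.asarray(truth)
    return 100.0 * np.mean(predicted == truth)

# Example: 7 of 8 probes identified correctly -> 87.5
print(recognition_rate([1, 2, 3, 4, 1, 2, 3, 4],
                       [1, 2, 3, 4, 1, 2, 3, 1]))
```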
5. Conclusion:
PCA is well suited to dimensionality reduction, but are the maximal-variance dimensions found by PCA the relevant dimensions to preserve? LDA performs dimensionality reduction "while preserving as much of the class discriminatory information as possible", and it is well suited to pattern classification when the number of training samples of each class is large. It seeks the directions along which the classes are best separated, taking into consideration not only the scatter within classes but also the scatter between classes. In face recognition, for example, it is more capable of distinguishing image variation due to identity from variation due to other sources such as illumination and expression. LDA therefore performs slightly better than PCA for Eigen-face recognition, though within some limitations, and it can be improved further with other, better techniques.

6. References:
[1] M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, 1991.
[2] W. Zhao, A. Krishnaswamy, R. Chellappa, "Discriminant Analysis of Principal Components for Face Recognition," in Proceedings, International Conference on Automatic Face and Gesture Recognition, pp. 336-341.
[3] M. Turk, "A random walk through eigenspace," IEICE Trans. Inf. & Syst., vol. E84-D, no. 12, pp. 1586-1695, December 2001.
[4] S. Kim, S. T. Chung, S. Jung, and S. Cho, "An improved illumination normalization based on anisotropic smoothing for face recognition," International Journal of Computer Science and Engineering, Vol. 2, No. 3, pp. 89-95, 2008.
[5] K. R. Singh, M. A. Zaveri, and M. M. Raghuwanshi, "Illumination and pose invariant face recognition: a technical review," International Journal of Computer Information Systems and Industrial Applications (IJCISIM), Vol. 2, pp. 29-38, 2010.
[6] X. Zhang and Y. Gao, "Face recognition across pose: A review," Pattern Recognition, Vol. 42, No. 11, pp. 2876-2896, 2009.
[7] G. Shakhnarovich and B. Moghaddam, "Face recognition in subspaces," Springer, Heidelberg, May 2004.
[8] A. Jain, L. Hong, and S. Pankanti, "Biometric identification," Communications of the ACM, Vol. 43, No. 2, Feb. 2000.
[9] Imola K. Fodor, "A survey of dimension reduction techniques," June 2002.
[10] www.elsivercomputerscience.com
[11] MATLAB 7.0 "Image Processing Toolbox".
