PCA Versus LDA in Implementing Neural Classifiers for Face Recognition
*Military Technical Academy, Bucharest, Romania, **University Titu Maiorescu, Bucharest, Romania,
Abstract: This paper investigates methods for better recognition of people's faces in images. Using principal component analysis and linear discriminant analysis for feature selection, I conducted experiments implementing two Self-Organizing Map classification methods (SOM and CSOM1). I tried to obtain the best recognition rates for different color components, followed by the concatenation of selected features. The applications were trained on images from our own database. The analysis was performed by varying the number of trained neurons from 80 to 360 in steps of 40, in order to obtain a recognition rate of 100%. Finally, I draw conclusions about the best feature-selection variants for the two methods and propose directions for future research.
Keywords: principal component analysis, linear discriminant analysis, Self-Organizing Map, number of neurons, recognition rate.
1. METHODS OF EXTRACTION AND SELECTION OF FEATURES FROM IMAGES

The extraction and selection of characteristics for the faces of people in an image are two important stages for the success and performance of face recognition and detection (fig. 1).

Fig. 1. Extraction and selection of features in an image

A correct implementation of the recognition of the regions obtained from the image segmentation process requires representing the input video data in a stable, analyzable form, by eliminating redundant information and retaining the information needed for face recognition.

The process of obtaining such a representation of the region of interest is known as the description / feature retrieval step. The description is directly related to the data structure chosen for the representation, on which the developed application strongly depends.

The possibilities for describing a region are diverse, depending on the methods implemented for the selection of features:
• characterization of the shape of the region contour (contour descriptors);
• characterization of the region based on its interior (regional or moment descriptors);
• topological description of the region of interest (textures);
• morphological description of the region of interest (morphological descriptors).

Choosing a proper description is essential for the success of the shape recognition process. Also, a fundamental principle governing the construction of shape descriptors is their invariance to various types of linear or nonlinear transformations applied to the form of interest. The desired invariance of the set of descriptors concerns the starting point, scaling, translation, rotation and reflection.
Practical experience shows that the most important aspect in the recognition of forms is the selection of the characteristics / properties or descriptors used.

Feature selection is a data-compression process and can be equated with a linear or nonlinear transformation of the initial n-dimensional observation space into a space with fewer dimensions. The transformation is performed with conservation of information and enables the development of real-time algorithms that are efficient in terms of computation time and that require only small memory resources.

For a single class of forms, feature selection is considered optimal if the dimensionality reduction is achieved while preserving the majority of the original information. If there are several classes of shapes, the efficiency of feature selection is given in particular by the separability of the classes, which depends mostly on the distribution of the classes and on the selected classifier.

As we have shown in various previous works [1-7] on the performance of classifiers, the probability of error can be taken as a measure of the effectiveness of the selected features.

Note that most transformations used to select characteristics are linear, but nonlinear transformations can also be used, even if they are more difficult to implement. They can provide a higher efficiency, better expressing the dependence between the observed raw data and the characteristics extracted and selected from those forms.

2. METHODS FOR THE SELECTION OF CHARACTERISTICS

The size of the feature space influences to a large extent the efficiency and performance of classification algorithms. Thus, a number of classification algorithms that are effective in small spaces become impractical in larger spaces.

Therefore we sought to implement transformations that prioritize the importance of the characteristics and allow the transformed space to be reduced in size by removing the least significant data, while retaining the information essential for classification. In this paper we experimented with selecting those features that contain the greatest amount of information about the form. We present the results of selection methods that serve an application in an area of military interest, their performance being proven by the recognition rate.

2.1. Principal components analysis (PCA). Principal Component Analysis (PCA) is a standard method of data analysis that enables the detection of the most prominent trends in a set of data. PCA reduces the number of variables, that is, the size of a data set.

Fig. 2. Version for representation using PCA projection

The picture shows a cloud of points in a two-dimensional subspace. PCA is used to view the data by reducing their dimensionality: the three variables are reduced to a smaller number, namely two new variables called principal components (PC). Using PCA, we can identify the two-dimensional plane that best describes the variation of the data.

Using PCA, the space of the original data is rotated so that the axes of the new system follow the directions of largest variation of the data. The new axes, and the new variables called principal components, are ordered by variation: the first component, PC1, is the direction with the largest variance of the data; PC2 is the direction, orthogonal to the first component, with the largest remaining variance. The representation allows one to obtain the required number of components covering the space and the desired amount of variance.
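As a minimal illustration of this reduction (a sketch with synthetic data; the variable names and values are assumptions for the example, not taken from the experiments), three correlated variables can be projected onto two principal components:

import numpy as np
from sklearn.decomposition import PCA

# Synthetic example: 200 observations of 3 correlated variables.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = np.hstack([t + 0.1 * rng.normal(size=(200, 1)),
               2.0 * t + 0.1 * rng.normal(size=(200, 1)),
               -1.0 * t + 0.1 * rng.normal(size=(200, 1))])

# Keep the two components with the largest variance (PC1 and PC2).
pca = PCA(n_components=2)
scores = pca.fit_transform(X)           # data projected onto PC1 and PC2
print(pca.explained_variance_ratio_)    # fraction of variance per component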
Let X be a cloud of data points. The principal components of this set are the directions along which the elongation of the cloud is most significant. Knowing these directions can serve both for classification purposes and for determining the most important characteristics of the analyzed point cloud.

Most transformations used to select characteristics are linear; nonlinear transformations, while having a higher complexity and being more difficult to implement, may offer a higher efficiency, better expressing the dependence between the observed raw data and the characteristics selected from those forms [11].
The Karhunen-Loeve transform is a linear method for selecting features. Let X be an n-dimensional random vector. We look for an orthogonal transformation that enables an optimal representation of the vector X with respect to the minimum mean square error criterion.

By projecting the cloud onto the directions given by its principal components, the immediate effect is a compression of the information contained in that cloud.

According to reference [8], identifying the principal components of a data cloud reduces to determining the eigenvectors and eigenvalues of the dispersion matrix of the analyzed cloud.
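A minimal sketch of this computation (an illustrative implementation with assumed names, not the code used for the experiments):

import numpy as np

def principal_components(X):
    """Eigenvalues/eigenvectors of the dispersion (covariance) matrix.

    X: array of shape (n_samples, n_features), one observation per row.
    """
    Xc = X - X.mean(axis=0)               # center the data cloud
    C = np.cov(Xc, rowvar=False)          # dispersion (covariance) matrix
    eigvals, eigvecs = np.linalg.eigh(C)  # eigh: C is symmetric
    order = np.argsort(eigvals)[::-1]     # sort by decreasing variance
    return eigvals[order], eigvecs[:, order]

# Projecting onto the first k eigenvectors compresses the cloud:
#   Y = (X - X.mean(axis=0)) @ eigvecs[:, :k]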
The linear nature of the standard PCA method, which linearly projects the analyzed data onto the principal components or directions, implies a number of major shortcomings in the processing of the input data. Therefore, a series of nonlinear generalizations of the classical variant have been developed, an example being the Kernel PCA algorithm presented in reference [9].
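For orientation only, a hedged sketch of such a nonlinear variant using the KernelPCA class from scikit-learn (the kernel and its parameters below are illustrative assumptions, not the settings of [9]):

import numpy as np
from sklearn.decomposition import KernelPCA

X = np.random.default_rng(1).normal(size=(100, 3))   # placeholder data
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)
Y = kpca.fit_transform(X)   # nonlinear projection onto 2 components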
2.2. Linear discriminant analysis (LDA). Linear Discriminant Analysis (LDA), like principal component analysis, is a statistical method for selecting characteristics.

Unlike PCA, where the projection seeks to maximize the total covariance matrix, here one seeks a projection that maximizes the interclass covariance matrix while minimizing the covariance accumulated inside the classes. LDA tries to find the projection directions along which vectors belonging to different classes are best separated.

Fig. 3. Variant of projection for representation using LDA

The algorithm and the applications based on it are described in [10]. Applications of the two-dimensional LDA transform were also tested, as well as the pixel classification of soil and vegetation in images using the LDA transform.
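This criterion corresponds to the classical Fisher formulation, solved through the generalized eigenproblem inv(Sw) * Sb * w = lambda * w, where Sw and Sb are the within-class and between-class scatter matrices. A minimal sketch of that standard formulation (an added illustration, not the implementation used here):

import numpy as np

def lda_directions(X, y):
    """Directions maximizing between-class over within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    n = X.shape[1]
    Sw = np.zeros((n, n))   # within-class scatter
    Sb = np.zeros((n, n))   # between-class (interclass) scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (d @ d.T)
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]   # strongest separation first
    return eigvecs[:, order].real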
3. EXPERIMENTAL RESULTS

Through various experiments we tried to validate the theoretical conclusions arising from the comparative study of the PCA and LDA feature-selection methods, using SOM and CSOM1 neural classifiers with 80-400 neurons, trained in steps of 40.

The database, presented in [6], contains 556 personal photos of 46 subjects who were asked to simulate different physiognomic states, under pale and normal lighting.

The main criterion for validating the importance of the analyzed selection methods is the performance in the subsequent classification / recognition of the input forms; as classifier we used a trained neural network. Experiments were carried out to determine the most effective selection method.

The experiment consisted in selecting 4 of the 46 subjects of the database, with 7 test images / 7 training images.
The 7 training images and 7 test images were chosen at random, and they are analyzed in the RGB color space and in the forms C1, C1C2 and C1C2C3. The original images were reduced in size to 90/120 pixels using Paint.

Table 1. Results using the database 7i / 7t: recognition rate [%] for the PCA method on C1C2C3 components, feature selection followed by concatenation
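To sketch how a SOM can act as the classifier in such an experiment, the example below uses the third-party MiniSom package; the map size, training parameters and majority-vote labeling are assumptions for illustration, not the configuration used in this paper. Each trained neuron is labeled with the majority class of the training vectors it wins, and a test vector receives the label of its best-matching neuron. In the CSOM variant, as usually described, one such map is trained per class and a test vector is assigned to the class whose map yields the smallest quantization error.

from collections import Counter, defaultdict
import numpy as np
from minisom import MiniSom   # pip install minisom

def train_som_classifier(X_train, y_train, shape=(10, 10), iters=5000):
    som = MiniSom(shape[0], shape[1], X_train.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(X_train, iters)
    # Label each neuron by the majority class of the samples it wins.
    votes = defaultdict(Counter)
    for x, label in zip(X_train, y_train):
        votes[som.winner(x)][label] += 1
    labels = {pos: cnt.most_common(1)[0][0] for pos, cnt in votes.items()}
    return som, labels

def predict(som, labels, X_test):
    return [labels.get(som.winner(x)) for x in X_test]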
The same conditions were used as in the previous experiment, giving the following results:

Table 4. Results using the database 7i / 7t: recognition rate [%] for the PCA method on RGB components, using the CSOM classifier
4. CONCLUSIONS