CHAPTER 1
INTRODUCTION
1.1 OBJECTIVES

Our project is mainly concerned with developing a face recognition system using the eigenface recognition algorithm. Eigenface recognition is one of the most successful techniques used in image recognition and compression.

1.2 FACE RECOGNITION

Computerized human face recognition has been an active research area for the last 20 years. It has many practical applications, such as bankcard identification, access control, mug shot searching, security monitoring, and surveillance systems. Face recognition is used to identify one or more persons from still images or a video image sequence of a scene by comparing input images with faces stored in a database. It is a biometric system that employs automated methods to verify or recognize the identity of a person based on his/her physiological characteristics. In general, a biometric identification system makes use of either physiological characteristics or behavior patterns to identify a person. Because of humans' inherent protectiveness of their eyes, some people are reluctant to use eye identification systems. Face recognition has the benefit of being a passive, non-intrusive system that verifies personal identity in a natural and friendly way.

The face is our primary focus of attention in social intercourse, playing a major role in conveying identity and emotion. Hence face recognition has become an important issue in many applications such as security systems, credit card verification and criminal identification. Face recognition is an emerging field of research with many challenges, such as large sets of images and improper illumination conditions.

Much of the work in face recognition by computers has focused on detecting individual features such as the eyes, nose, mouth and head outline, and defining a face model by the position, size, and relationships among these features. Such approaches have proven to depend on precisely located features. Computational models of face recognition are interesting because they can contribute not only to theoretical knowledge but also to practical applications. Unfortunately, developing a computational model of face detection and recognition is
quite difficult because faces are complex, multidimensional and meaningful visual stimuli. The user should focus his attention toward developing a sort of early, preattentive pattern recognition capability that does not depend on having three-dimensional information or detailed geometry. He should develop a computational model of face recognition that is fast, reasonably simple, and accurate.

The eigenface approach is one of the simplest and most efficient methods that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. This approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. In the eigenface approach, after the dimensional reduction of the face space, the distance is measured between two images for recognition. If the distance is less than some threshold value, the image is considered a known face; otherwise it is an unknown face. Face recognition is a very high-level computer vision task, in which many early vision techniques can be involved. Face recognition can be divided into two parts.

1.3 FACE DETECTION

Face detection is largely motivated by the need for surveillance and security, and for human-computer intelligent interaction. Detecting faces is challenging due to the wide variety of face appearances and the complexity of the backgrounds. The methods proposed for face detection so far generally fall into two major categories: feature-based methods and classification-based approaches.

1.3.1 FEATURE BASED METHODS

These methods detect faces by searching for facial features and grouping them into faces according to their geometrical relationships. Since the performance of feature-based methods primarily depends on the reliable location of facial features, it is susceptible to partial occlusion, excessive deformation, and low quality of images.

1.3.2 CLASSIFICATION BASED METHODS

These methods have used the intensity values of images as the input features of the underlying classifier. The Gabor filter banks, whose kernels are similar to the 2-D
receptive field profiles of the mammalian cortical simple cells, exhibit desirable characteristics of spatial locality and orientation selectivity. As a result, the Gabor filter features extracted from face images should be robust to variations due to illumination and facial expression changes. We propose a classification-based approach using Gabor filter features for detecting faces.
Eigenpictures obtained from the averaged covariance of the ensemble of faces were first used by Sirovich and Kirby to represent face images. Later, M. Turk and A. Pentland proposed a face recognition method based on the eigenfaces approach.

1.4.1 PRINCIPAL COMPONENT ANALYSIS METHOD
We have focused our research toward developing a sort of unsupervised pattern recognition scheme that does not depend on excessive geometry and computation, unlike elastic bunch graph matching. The eigenfaces approach seemed to be an adequate method for face recognition due to its simplicity, speed and learning capability.

A previous work based on the eigenfaces approach was done by M. Turk and A. Pentland, in which faces were first detected and then identified. In this report, a face recognition system based on the eigenfaces approach, similar to the one presented by M. Turk and A. Pentland, is proposed. The scheme is based on an information theory approach that decomposes face images into a small set of characteristic feature images called eigenfaces, which may be thought of as the principal components of the initial training set of face images. Recognition is performed by projecting a new image onto the subspace spanned by the eigenfaces and then classifying the face by comparing its position in the face space with the positions of known individuals. The actual system is capable of both recognizing known individuals and learning to recognize new face images. The eigenface approach used in this scheme has advantages over other face recognition methods in its speed, simplicity, learning capability and robustness to small changes in the face image.
Fischler and Elschlager attempted to measure similar features automatically. They described a linear embedding algorithm that used local feature template matching and a global measure of fit to find and measure facial features. This template matching approach has been continued and improved by the recent work of Yuille and Cohen. Their strategy is based on deformable templates, which are parameterized models of the face and its features in which the parameter values are determined by interactions with the face image.

Connectionist approaches to face identification seek to capture the configurational nature of the task. Kohonen, and Kohonen and Lehtio, describe an associative network with a simple learning algorithm that can recognize face images and recall a face image from an incomplete or noisy version input to the network. Fleming and Cottrell extend these ideas using nonlinear units, training the system by backpropagation.

Others have approached automated face recognition by characterizing a face by a set of geometric parameters and performing pattern recognition based on those parameters. Kanade's face identification system was the first system in which all steps of the recognition process were automated, using a top-down control strategy directed by a generic model of expected feature characteristics. His system calculated a set of facial parameters from a single face image and used a pattern classification technique to match the face from a known set, a purely statistical approach depending primarily on local histogram analysis and absolute gray-scale values.

Recent work by Burt uses a smart sensing approach based on multiresolution template matching. This coarse-to-fine strategy uses a special-purpose computer built to calculate multiresolution pyramid images quickly, and has been demonstrated identifying people in near real time.
idea of eigenfaces for face detection and identification. Mathematically, eigenfaces are the principal components of the distribution of faces, or the eigenvectors of the covariance matrix of the set of face images (this is explained in chapter 2). The eigenvectors account for the variation between the images in a set. A face can be represented by a linear combination of the eigenvectors. Later it was shown that illumination normalization [6] is required for the eigenface approach. Pentland et al [3] used the eigenface recognition technique and extended it to facial features such as eyes, nose, and mouth, which can be called eigenfeatures. This modular eigenspace approach, comprising the eigenfeatures eigeneyes, eigennose and eigenmouth, was found to be less sensitive to changes in appearance than the standard eigenface method. To summarize, the eigenface method is not invariant to illumination and changes in scale.
gallery of 112 neutral frontal-view faces. They reported 86.5 percent and 66.4 percent for matching 111 faces of 15-degree and 110 faces of 30-degree rotation to a gallery of 112 neutral frontal faces. In general, dynamic link architecture is superior to other face recognition techniques in terms of rotation invariance, but the matching process is computationally expensive.
recognition rate of 90 percent on a database of 47 people. In these methods, typically 35 to 45 feature points per face were generated. Geometrical feature matching depends on precisely measured distances between features, which in turn depend on the accuracy of the feature location algorithms.
2.8 TEMPLATE MATCHING
In this method a test image is represented as a two-dimensional array of intensity values and is compared, using a metric such as the Euclidean distance, with a single template representing the whole face. Each viewpoint of an individual can be used to generate a template, so multiple templates can represent each person in the database. A single template corresponding to a viewpoint can be made up of smaller distinctive templates [24] [10]. Brunelli and Poggio [10] extracted four features, namely the eyes, nose, mouth and the entire face, and compared the performance of their geometrical method with the template method. The template matching method was found to be superior to the geometrical matching technique. Drawbacks of template matching include the complexity of the computation and the description of the templates. As the recognition system has to be tolerant to certain discrepancies between the test image and the template, this tolerance might average out the differences that make a face unique.
There are six main functional blocks, whose responsibilities are given below:
Histogram equalization is applied in order to improve the contrast of the images. The peaks in the image histogram, indicating the commonly used grey levels, are widened, while the valleys are compressed.
2.11.3 Median Filtering

For noisy images, especially those obtained from a camera or a frame grabber, median filtering can clean the image without losing information.

2.11.4 High Pass Filtering

Feature extractors that are based on facial outlines may benefit from the results obtained from an edge detection scheme. High-pass filtering emphasizes the details of an image, such as contours, which can dramatically improve edge detection performance.

2.11.5 Background Removal

In order to deal primarily with the facial information itself, the face background can be removed. This is especially important for face recognition systems in which the entire information contained in the image is used.
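As an illustration, a minimal Matlab sketch of these pre-processing steps might look as follows. The file name, filter size and high-pass kernel are assumptions for the example, not values prescribed by this report.

% Read a face image and convert it to grayscale (file name is hypothetical)
img = imread('face.jpg');
if size(img, 3) == 3
    img = rgb2gray(img);
end

% Histogram equalization: widen the histogram peaks, compress the valleys
eqImg = histeq(img);

% Median filtering: clean camera/frame-grabber noise without losing detail
medImg = medfilt2(eqImg, [3 3]);

% High-pass filtering: emphasize contours for edge-based feature extractors
kernel = [0 -1 0; -1 4 -1; 0 -1 0];   % one common Laplacian-style kernel
hpImg = imfilter(double(medImg), kernel, 'replicate');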
When the locations of the eyes, nose and mouth are given for a face image, this data can be used to rotate the image so that all the face images are positioned exactly the same way. When the positions of the right and left eyes are known, the angle between the lines l1 and l2 connecting the mid-points of the two eyes can be calculated using the inverse tangent. The image can then be rotated by the calculated angle. This process is shown in Figure 2.3.
Figure 2.3: The rotation correction procedure when eye, nose and mouth locations are given.
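As a rough sketch of this rotation correction in Matlab (the image variable img and the eye coordinates below are assumed, and the sign of the angle depends on the image coordinate convention):

% Assumed (x, y) eye locations in pixel coordinates -- hypothetical values
leftEye  = [30 45];
rightEye = [70 43];

% Inverse tangent of the slope of the line connecting the two eyes
angleDeg = atan2d(rightEye(2) - leftEye(2), rightEye(1) - leftEye(1));

% Rotate so that the eyes lie on a horizontal line
rotImg = imrotate(img, angleDeg, 'bilinear', 'crop');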
2.11.7 Masking

By using a mask which simply has a face-shaped region, the effect of background change is minimized. The effect of masking is studied only on FERET database images. The mask used in this study is shown in Figure 2.4.
2.11.8 Translational and Rotational Normalizations

In some cases, it is possible to work on a face image in which the head is somehow shifted or rotated. Especially for face recognition systems that are based on the frontal views of faces, it may be desirable that the pre-processing module determines and, if possible, normalizes the shifts and rotations in the head position.

2.11.9 Illumination Normalization

Face images taken under different illuminations can degrade recognition performance, especially for face recognition systems based on principal component analysis, in which the entire face information is used for recognition. A picture can be equivalently viewed as an array of reflectivities r(x). Thus, under a uniform illumination I, the corresponding picture is given by

    p(x) = I r(x)

The normalization comes in imposing a fixed level of illumination I0 at a reference point x0 on the picture. The normalized picture is given by

    p'(x) = (I0 / p(x0)) p(x)

so that p'(x0) = I0. In actual practice, the average of two reference points, such as one under each eye, each consisting of a 2 x 2 array of pixels, can be used in place of p(x0).
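A minimal Matlab sketch of this normalization, assuming the image img and the reference patch coordinates are known (the values below are hypothetical):

% Two 2x2 reference patches, e.g. one under each eye (coordinates assumed)
patch1 = double(img(50:51, 30:31));
patch2 = double(img(50:51, 70:71));

% Estimate the current illumination level at the reference points
Iref = mean([patch1(:); patch2(:)]);

% Impose a fixed illumination level I0 at the reference points
I0 = 128;                               % hypothetical reference level
normImg = uint8(double(img) * (I0 / Iref));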
CHAPTER 3
they are depicted in Figure 3.1. The characteristics of these phases in conjunction with the three functional units are given below:
Figure 3.1: Functional block diagram of the proposed face recognition system
speed, no data compression is performed on the face image that is stored in the face library) and the other corresponds to the weight vector associated with that face image. Weight vectors of the face library members are empty until a training set is chosen and the eigenfaces are formed.
image is classified as "known". Otherwise, a miss has occurred and the face image is classified as "unknown".
The images are mean centered by subtracting the mean image from each image vector. Let m represent the mean image,

    m = (1/M) * sum_{i=1..M} x_i

and let w_i be the mean-centered image,

    w_i = x_i - m
Our goal is to find a set of e_i's which have the largest possible projection onto each of the w_i's. We wish to find a set of M orthonormal vectors e_i for which the quantity

    lambda_i = (1/M) * sum_{n=1..M} (e_i^T w_n)^2

is maximized, subject to the orthonormality constraint.
It has been shown that the e_i's and lambda_i's are given by the eigenvectors and eigenvalues of the covariance matrix

    C = W W^T
where W is a matrix composed of the column vectors w_i placed side by side. The size of C is N x N, which could be enormous. For example, images of size 64 x 64 create a covariance matrix of size 4096 x 4096. It is not practical to solve for the eigenvectors of C directly. A common theorem in linear algebra states that the vectors e_i and scalars lambda_i can be obtained by solving for the eigenvectors and eigenvalues of the M x M matrix W^T W. Let d_i and mu_i be the eigenvectors and eigenvalues of W^T W, respectively:

    W^T W d_i = mu_i d_i

Multiplying both sides by W gives

    W W^T (W d_i) = mu_i (W d_i)
which means that the first M-1 eigenvectors e_i and eigenvalues lambda_i of W W^T are given by W d_i and mu_i, respectively. W d_i needs to be normalized in order to be equal to e_i. Since we only sum up a finite number of image vectors, M, the rank of the covariance matrix cannot exceed M-1 (the -1 comes from the subtraction of the mean vector m). The eigenvectors corresponding to non-zero eigenvalues of the covariance matrix produce an orthonormal basis for the subspace within which most image data can be represented with a small amount of error. The eigenvectors are sorted from high to low according to their corresponding eigenvalues. The eigenvector associated with the largest eigenvalue is the one that reflects the greatest variance in the images; the smallest eigenvalue is associated with the eigenvector that finds the least variance. The eigenvalues decrease roughly exponentially, meaning that about 90% of the total variance is contained in the first 5% to 10% of the dimensions. A facial image can be projected onto M' (<< M) dimensions by computing

    Omega = [v_1, v_2, ..., v_M']^T
where v_i = e_i^T w. Here v_i is the i-th coordinate of the facial image in the new space, which is called the i-th principal component. The vectors e_i are also images, the so-called eigenimages, or eigenfaces in our case. They can be viewed as images and indeed look like faces. So, Omega describes the contribution of each eigenface in representing the facial image by treating
the eigenfaces as a basis set for facial images. The simplest method for determining which face class provides the best description of an input facial image is to find the face class k that minimizes the Euclidean distance

    epsilon_k = || Omega - Omega_k ||,   for k = 1, ..., M'

where Omega_k is a vector describing the k-th face class. If epsilon_k is less than some predefined threshold theta, the face is classified as belonging to class k.
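A compact Matlab sketch of this projection and classification step. It assumes the eigenface matrix E (N x M', one eigenface per column), the mean image m (N x 1), and the library feature vectors Omega (M' x L, one column per known face) have already been computed; the threshold value is hypothetical.

% Project the mean-centred test image onto the face space
w     = double(testImg(:)) - m;     % mean-centred test image, N x 1
omega = E' * w;                     % feature vector, M' x 1

% Euclidean distance to every known face class
dists = sqrt(sum(bsxfun(@minus, Omega, omega).^2, 1));
[epsMin, k] = min(dists);

% Accept as a known face only if the distance is below the threshold
theta = 1e4;                        % hypothetical threshold
if epsMin < theta
    fprintf('Recognized as face library member %d\n', k);
else
    fprintf('Unknown face\n');
end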
The weights form a feature vector

    Omega^T = [v_1, v_2, ..., v_M']
that describes the contribution of each eigenface in representing the input face image, treating the eigenfaces as a basis set for face images. The feature vector is then used in a standard pattern recognition algorithm to find which of a number of predefined face classes, if any, best describes the face. The face classes Omega_k can be calculated by averaging the results of the eigenface representation over a small number of face images (as few as one) of each individual. In the proposed face recognition system, face classes contain only one representation of each individual. Classification is performed by comparing the feature vectors of the face library members with the feature vector of the input face image. This comparison requires the Euclidean distance between the two members to be smaller than a user-defined threshold theta. If the comparison falls within the user-defined threshold, the face image is classified as known; otherwise it is classified as unknown.
The face image under consideration is rebuilt by adding each eigenface, with a contribution of v_i, to the average of the training set images:

    x_rebuilt = m + sum_{i=1..M'} v_i e_i

The degree of fit, or the "rebuild error ratio", can be expressed by means of the Euclidean distance between the original and the reconstructed face image.
It has been observed that the rebuild error ratio increases as the training set members differ heavily from each other. This is due to the addition of the average face image: when the members differ from each other (especially in image background), the average face image becomes messier, and this increases the rebuild error ratio.

There are four possibilities for an input image and its pattern vector:
1. Near face space and near a face class.
2. Near face space but not near a known face class.
3. Distant from face space and near a face class.
4. Distant from face space and not near a known face class.

In the first case, an individual is recognized and identified. In the second case, an unknown individual is presented. The last two cases indicate that the image is not a face image. Case three typically shows up as a false classification. It is possible to avoid this false classification in this system.
where theta_k is a user-defined threshold for the faceness of the input face images belonging to the k-th face class.
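Continuing the same sketch, the rebuild and rebuild error ratio could be computed as follows (E, m and omega are the assumed variables from the earlier snippet; the error ratio shown is one plausible definition, not necessarily the exact one used in this report):

% Rebuild the face from the average image plus weighted eigenfaces
xRebuilt = m + E * omega;           % N x 1 reconstructed image vector

% Rebuild error: Euclidean distance between original and reconstruction
x = double(testImg(:));
rebuildErr = norm(x - xRebuilt);
errRatio   = rebuildErr / norm(x);  % one plausible error ratio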
CHAPTER 4
PROJECT IMPLEMENTATION
4.1 FLOWCHART
The flowchart consists of the following steps:

Start
1. Acquire a training set of face images.
2. Image size reduction and grayscale conversion.
3. Face detection.
4. Principal component analysis: finding the eigenfaces.
5. For the test image: image size reduction, grayscale conversion and face detection.
6. Project the test image onto the face space (test image feature vector).
7. Compare the test feature vector with the library feature vectors.
8. Match / No match.
Stop
4.3 DATA ACQUISITION
Here a directory dialog box is opened using Matlab commands to select the folder where the training set of images is stored. The dialog box lets the user select any folder, avoiding hard-coding a particular folder path. This dialog box is shown below:
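A minimal Matlab sketch of this data acquisition step; the image extension and working resolution are assumptions for the example:

% Let the user pick the training-set folder instead of hard-coding a path
folder = uigetdir(pwd, 'Select the folder containing the training images');

% Collect all JPEG images in the folder (extension is an assumption)
files = dir(fullfile(folder, '*.jpg'));

% Read each image, convert to grayscale at a fixed size, store as a column
sz = [100 100];                         % hypothetical working resolution
X  = zeros(prod(sz), numel(files));
for i = 1:numel(files)
    im = imread(fullfile(folder, files(i).name));
    if size(im, 3) == 3
        im = rgb2gray(im);
    end
    X(:, i) = double(reshape(imresize(im, sz), [], 1));
end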
There are two schemes of selecting training sets to enhance recognition performance. One is to make the training set more representative of the gallery; the other is to enlarge the training set size in order to increase the dimensionality of the subspace. Let the training set of images be x_1, x_2, x_3, ..., x_M.
4.5.3 Normalized Training Set

In image processing, normalization is a process that changes the range of pixel intensity values. Applications include photographs with poor contrast due to glare, for example. Normalization is sometimes called contrast stretching. In more general fields of data processing, such as digital signal processing, it is referred to as dynamic range expansion. The purpose of dynamic range expansion in the various applications is usually to bring the image, or other type of signal, into a range that is more familiar or normal to the senses, hence the term normalization. Often, the motivation is to achieve consistency in dynamic range for a set of data, signals, or images to avoid mental distraction or fatigue; for example, a newspaper will strive to make all of the images in an issue share a similar range of grayscale. If the intensity range of the image is 50 to 180 and the desired range is 0 to 255, the process entails subtracting 50 from each pixel intensity, making the range 0 to 130. Then each pixel intensity is multiplied by 255/130, making the range 0 to 255.
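The worked example above translates directly into a few lines of Matlab (shown here as an illustration, using the image's own minimum and maximum rather than the fixed values 50 and 180):

% Contrast stretching: map the current intensity range to [0, 255]
img = double(img);
lo  = min(img(:));                      % 50 in the text's example
hi  = max(img(:));                      % 180 in the text's example
normImg = uint8((img - lo) * (255 / (hi - lo)));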
4.6 MEAN
The mean is defined as the arithmetic average of the training image vectors at each pixel point, and its size is (N x 1). The mean face m is defined by

    m = (1/M) * sum_{i=1..M} x_i

Subtracting the mean face from each training image gives the difference of the training image from the mean image (size N x 1):

    w_i = x_i - m
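In Matlab, with the training images stored as columns of the matrix X from the earlier data-acquisition sketch, these two steps are one line each:

% Mean face: average of the training image vectors (N x 1)
m = mean(X, 2);

% Mean-centred training set: column i holds w_i = x_i - m
W = bsxfun(@minus, X, m);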
An important property of the eigenface method is how it obtains the eigenvectors of the covariance matrix. For a face image of size (Nx x Ny) pixels, the covariance matrix is of size (P x P), P being (Nx x Ny). This covariance matrix is very hard to work with due to its huge dimension, which causes computational complexity. Instead, the eigenface method calculates the eigenvectors of the (Mt x Mt) matrix, Mt being the number of face images, and obtains the eigenvectors of the (P x P) matrix from the eigenvectors of the (Mt x Mt) matrix. Thus, it
is possible to obtain the eigenvectors of X (the large P x P covariance matrix) by using the eigenvectors of Y (the small Mt x Mt matrix). A matrix of size (Mt x Mt) is utilized instead of a matrix of size (P x P) (i.e. [{Nx x Ny} x {Nx x Ny}]). This formulation brings substantial computational efficiency.
The resulting eigenvectors are sorted in descending order of their corresponding eigenvalues. The eigenvector having the largest eigenvalue is marked as the first eigenvector, and so on. In this manner, the most generalizing eigenvector comes first in the eigenvector matrix.
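A short Matlab sketch of this computational trick, continuing with the mean-centred matrix W from the earlier snippets (the number of retained eigenfaces is a hypothetical choice):

% Solve the small (Mt x Mt) eigenproblem instead of the huge (P x P) one
[D, MU] = eig(W' * W);                  % d_i in columns of D, mu_i on diag(MU)

% Sort eigenpairs by decreasing eigenvalue
[mu, idx] = sort(diag(MU), 'descend');
D = D(:, idx);

% Map back to eigenvectors of W*W' and normalize each column to unit length
E = W * D;                              % column i is W*d_i
E = bsxfun(@rdivide, E, sqrt(sum(E.^2, 1)));

% Keep only the leading M' eigenfaces
Mprime = 10;                            % hypothetical choice
E = E(:, 1:Mprime);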
4.9 EIGENFACES
Eigenfaces are a set of eigenvectors used in the computer vision problem of human face recognition. The approach of using eigenpictures of face images was developed by Sirovich and Kirby beginning in 1987, and was applied to face recognition by Matthew Turk and Alex Pentland; it is considered the first facial recognition technology that worked. These eigenvectors are derived from the covariance matrix of the probability distribution of the high-dimensional vector space of possible faces of human beings. To generate a set of eigenfaces, a large set of digitized images of human faces, taken under the same lighting conditions, is normalized to line up the eyes and mouths. The images are then all resampled at the same pixel resolution. Eigenfaces can be extracted from the image data by means of a mathematical tool called Principal Component Analysis (PCA).
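Since each eigenface is just an N x 1 eigenvector, it can be viewed as an image by reshaping it back to the training resolution; a quick Matlab sketch (the 100 x 100 size matches the assumption used in the earlier snippets):

% Display the first four eigenfaces as images
sz = [100 100];                         % must match the training image size
for i = 1:4
    subplot(2, 2, i);
    imshow(reshape(E(:, i), sz), []);   % rescale each eigenface for display
    title(sprintf('Eigenface %d', i));
end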
4.11 CLASSIFICATION
The process of classifying a new (unknown) face to one of the classes (known faces) can be carried out in two ways:

4.11.1 Bayesian Classifier
The Bayesian classifier is one of the most widely applied statistical approaches in standard pattern classification. It assumes that the classification problem is posed in probabilistic terms, and that all the relevant probability values are known. The decision process in statistical pattern recognition can be explained as follows: first, the new image is transformed into its eigenface components, and the resulting weights form the weight vector Omega_new^T.
4.11.2 Nearest Mean Classifier

The Nearest Mean Classifier is an approach analogous to the Nearest Neighbour Rule (NN-Rule). In the NN-Rule, after the classification system is trained by the samples, a test sample is fed into the system and is classified into the class of the nearest training sample in the data space with respect to the Euclidean distance. In the Nearest Mean Classifier, the Euclidean distance from each class mean is computed instead to decide the class of the test data. In mathematical terms, the Euclidean distance between the test sample x and each face class mean mu_i is

    d_i(x) = || x - mu_i ||
where x is a d-dimensional input vector. After computing the distance to each class mean, the test data is classified into the class with the minimum Euclidean distance.
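A minimal Matlab sketch of the nearest mean decision rule; classMeans and x are assumed inputs:

% classMeans: d x K matrix, one class-mean column mu_i per face class
% x: d x 1 test vector
dists = sqrt(sum(bsxfun(@minus, classMeans, x).^2, 1));
[~, kBest] = min(dists);                % class with minimum Euclidean distance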
4.13 Matlab Output
Based on the least value of the Euclidean distance, face recognition is carried out.
The above output is obtained when we select a different test image of a person, one that is not itself present in the database of images. The algorithm still finds the correct match.
CHAPTER 5
FACE DETECTION
5.1 INTRODUCTION
Detecting human faces automatically is becoming a very important and challenging task in computer vision research. The significance of the problem can be easily illustrated by its vast applications, as face detection is the first step towards intelligent vision-based human computer interaction. Face recognition, face tracking, pose estimation and expression recognition all require robust face detecting algorithms for successful implementation. Segmenting facial regions in images or video sequences can also lead to more efficient coding schemes [1], content-based representation
(MPEG4) [2], three-dimensional human face model fitting, image enhancement and audio-visual speech integration [3]. Although a major area of interest, many problems still need to be solved, as segmenting a human face successfully depends on many parameters such as skin-tones under varying lighting conditions, complexity level of the background in the image to be segmented and application for which the segmentation is
required. Inherent differences due to the existence of different ethnic backgrounds, gender and age groups also complicate the face detection paradigm.
With so many new applications, the development of faster and more robust face detection algorithms has become a major area of research over the last few years. Techniques based on knowledge of rules that capture the relationship between facial features, feature invariant approaches that tend to define structural features that exist even when the pose, viewpoint and lighting condition vary, and template matching methods that use several standard templates to describe a face, are all being thoroughly investigated. A comprehensive survey on the methods used to detect faces in images can be found in [4]. However, many of the successful claims reported in the literature either use data sets that are too small or the test images acquired are not standard images and so can show biased results favouring one method over another. The comparison difficulties are due to the fact that much less attention has been paid to the development of a standard face detection database that can be used as a benchmark to test the performance of these new algorithms.
right axis (out-of-plane rotation), or both. The newer algorithms take into account variations in the image or video by factors such as face appearance, lighting, and pose.
Fig. 5.1 shows the DFFS map for the luminance image salesman#1. The global minimum is marked by the white circle. Though there is a local minimum at the true face position, the best match is in the background region, leading to a false detection. The DFFS is high for non-facial image regions with a large change in intensity (e.g. the area that includes parts of the light-colored shirt and the dark background). On the other hand, the DFFS becomes relatively small in non-facial regions with little variance in intensity, like parts of the background at the right side of the test image. This is due to the fact that an image pattern that can be modeled by noise can be better represented by the eigenfaces than a non-facial image pattern containing a strong edge. Therefore, detection based on the DFFS becomes difficult even in images with a simple background if the face region does not cover the main part of the test image. We now apply a principal component analysis to the skin probability image defined by equation (2), using the same projection matrix. The DFFS map for the skin probability image shown in fig. 2 is displayed in fig. 4a. The true face region is characterized by a local minimum. Similar to the luminance case, the error criterion is also low in background regions with little variance. These regions represent non-skin-colored background, so the probability image is near zero in these areas.
In 5.1 the eigenspace decomposition is used to estimate the complete probability density for a certain object class (e.g. the face class). Assuming a normal distribution, the estimate for the Mahalanobis distance is also a weighted sum of the DFFS and the DIFS, similar to the equation above. A main difference is that our scaling factor c is much smaller than the one used there, because the DIFS provides more information when using the skin probability image instead of the luminance image. The distance map using the combined error criterion is shown in fig. 5.1. The global minimum (marked by the white circle) lies in the true face region and the face is detected correctly. Fig. 5.1 shows the detected face region superimposed on the luminance component. To detect faces at multiple scales, the detection algorithm is performed on several scaled versions of the skin probability image, and the global minimum of the resulting multi-scale distance maps is selected.
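For reference, the DFFS criterion behind these maps can be written in a few lines of Matlab, reusing the assumed eigenface matrix E and mean image m from the chapter 4 snippets; window stands for one test window of the luminance or skin probability image:

% Distance from face space: residual after projecting onto the eigenfaces
w    = double(window(:)) - m;       % mean-centred pattern, N x 1
proj = E * (E' * w);                % component lying inside the face space
dffs = norm(w - proj);              % distance from face space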
Detection results for several images of MPEG test sequences are shown in fig. 5.2. The global minimum detection is based on the assumption that the image contains exactly one face. To detect several faces and reject non-facial images, a threshold can be introduced which allows a trade-off between false detection and false rejection. If the error criterion at a certain spatial position is less than this predetermined threshold, the subimage located at this position is classified as a face. To prevent overlapping regions, only the global minimum is selected in the next step. For the search of the second (local) minimum, all spatial positions which lead to overlapping regions are discarded. This procedure is repeated until no position with error less than the threshold can be found which is not already covered by another detected region. The result of a multiple face detection is given in fig. 5.2; a schematic version of this selection procedure is sketched after the applications list below.

5.4 APPLICATIONS

1. Biometrics
2. A Facial Recognition System
3. Video Surveillance
4. Human Computer Interface
5. Image Database Management
6. Webcam etc.
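A schematic Matlab version of the greedy selection procedure described above; the error map, window size and threshold are all stand-in values for illustration:

% Stand-in inputs for illustration only
errMap    = rand(120, 160);         % per-position error criterion map
winSz     = 20;                     % assumed detection window size
threshold = 0.05;                   % assumed acceptance threshold

covered = false(size(errMap));      % positions already claimed by a face
faces   = zeros(0, 2);              % detected (row, col) positions
while true
    e = errMap;
    e(covered) = Inf;               % discard positions causing overlaps
    [val, pos] = min(e(:));         % next global minimum
    if val >= threshold, break; end % nothing left below the threshold
    [r, c] = ind2sub(size(e), pos);
    faces(end+1, :) = [r c];        %#ok<AGROW>
    % Block out all positions whose window would overlap this detection
    r1 = max(1, r-winSz+1); r2 = min(size(e,1), r+winSz-1);
    c1 = max(1, c-winSz+1); c2 = min(size(e,2), c+winSz-1);
    covered(r1:r2, c1:c2) = true;
end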
Face detection is used in biometrics, often as a part of (or together with) a facial recognition system. It is also used in video surveillance, human-computer interfaces and image database management. Some recent digital cameras use face detection for autofocus [1]. Face detection is also useful for selecting regions of interest in photo slideshows that use a pan-and-scale Ken Burns effect. Face detection is gaining the interest of marketers: a webcam can be integrated into a television and detect any face that walks by. The system then estimates the race, gender, and age range of the face. Once the information is collected, a series of advertisements specific to the detected race/gender/age can be played.
With increasing research in the area of face segmentation, new methods for detecting human faces automatically are being developed. However, less attention is
being paid to the development of a standard face image database to evaluate these new algorithms. This work recognizes the need for a colour face image database and describes such a database for direct benchmarking of automatic face detection algorithms. The database has two parts. Part one contains colour pictures of faces having a high degree of variability in scale, location, orientation, pose, facial expression and lighting conditions, while part two has manually segmented results for each of the images in part one of the database. This allows direct comparison of algorithms. These images are acquired from a wide variety of sources, such as digital cameras, pictures scanned in with a photo-scanner, and the World Wide Web. The database is intended for distribution to researchers. Details of the face database, such as the development process and file information, along with a common criterion for performance evaluation measures, are also discussed.
CHAPTER 6
6.2 Advantages
1. It is simple and fast, and it needs only a small amount of memory; PCA basically performs dimensionality reduction.
2. Smaller representation of the database, because we only store the training images in the form of their projections on the reduced basis.
3. Noise is reduced, because we choose the maximum-variation basis, and features like backgrounds with small variations are automatically ignored.
4. It is quite efficient and accurate in terms of success rate.
6.3 Limitations
1. Mainly a frontal face view is required.
2. Multiscaling is a problem.
3. Background variations and lighting conditions still pose a difficulty.
Entertainment
1. Video Games
2. Virtual Reality
3. Training Programs
4. Human-Computer Interaction
5. Human Robotics
6. Family Photo Albums
Smart Cards
1. Drivers Licenses
2. Passports
3. Voter Registration
4. Welfare Fraud
Information Security
1. TV Parental Control
2. Desktop Logon
3. Personal Device Logon
4. Database Security
5. Medical Access
6. Internet Access
Multimedia Management
1. Face-based database search
CONCLUSION
We are currently extending the system to deal with a range of aspects (other than full frontal views) by defining a small number of classes for each known person, corresponding to characteristic views. Because of the speed of the recognition, the system has many chances within a few seconds to attempt to recognize several slightly different views. In this project we store a set of images in the database; whenever we input an image to be tested, if it is in the database it will be recognized using the eigenface algorithm, and the reconstructed face will be the output image. Here we are not using any filters; we simply recognize by reconstructing images of the input. In parallel, the Euclidean distances are also measured.
The eigenface approach to face recognition was motivated by information theory, leading to the idea of basing face recognition on a small set of image features that best approximate the set of known face images, without requiring that they correspond to our intuitive notions of facial parts and features. Although it is not an elegant solution to the general object recognition problem, the eigenface approach does provide a practical solution that is well fitted to the problem of face recognition. It is fast, relatively simple, and has been shown to work well in a somewhat constrained environment.
FUTURE SCOPE
This project is based on the eigenface approach, which gives a maximum accuracy of 92.5%. There is scope in future for using Neural Network and Active Appearance Model techniques, which can give better results compared to the eigenface approach; with the help of neural network techniques, accuracy can be improved. The whole software depends on the database, and the database depends on the resolution of the camera, so in future, if a good-resolution digital or analog camera is used, the results will be better. Real-time video processing is another direction; as of now, the threshold value is difficult to set in real time. Using OpenCV, an external platform originally developed by Intel consisting of inbuilt libraries and functions, face recognition can be made easier.
REFERENCES
[1] Zhujie and Y. L. Yu, "Face Recognition with Eigenfaces", Proceedings of the IEEE International Conference on Industrial Technology, pp. 434-438, 1994.
[2] E. Lizama, D. Waldoestl and B. Nickolay, "An Eigenfaces-Based Automatic Face Recognition System", IEEE International Conference on Systems, Man, and Cybernetics, Vol. 1, Oct. 1997, pp. 174-177.
[3] M. Turk and A. Pentland, "Face Recognition Using Eigenfaces", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, 1991.
[4] Pablo Navarrete and Javier Ruiz-del-Solar, "Analysis and Comparison of Eigenspace-Based Face Recognition Approaches", IJPRAI 16(7), pp. 817-830, 2002.
[5] H. K. Ekenel, J. Stallkamp, H. Gao, M. Fischer and R. Stiefelhagen, "Face Recognition for Smart Interactions", IEEE International Conference on Multimedia & Expo, Beijing, China, July 2007, pp. 1007-1010.
[6] Alex Pentland, Baback Moghaddam and Thad Starner, "View-Based and Modular Eigenspaces for Face Recognition", IEEE Conference on Computer Vision and Pattern Recognition, MIT Media Laboratory Tech. Report No. 245, 1994.