Title:
Face Recognition Using Principal Component Analysis (PCA)
Presenter:
Mrs. Komal Gadekar
International Center of Excellence In Engineering and
Management
Introduction
What is Face Recognition?
A computer vision task that involves identifying or verifying a person from a digital
image or video.
Importance in Real-World Applications:
Security systems (e.g., surveillance, access control).
Social media platforms (e.g., tagging, photo management).
Authentication systems (e.g., smartphones, banking).
Overview of PCA (Principal Component Analysis)
•What is PCA?
• A statistical technique that simplifies high-dimensional data by transforming it
into a set of orthogonal (uncorrelated) components.
•Goal:
• Reduce dimensionality while retaining as much variance as possible.
• Useful in reducing data size while maintaining essential features.
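A minimal sketch of this goal, using scikit-learn's PCA on synthetic data; the array sizes and the 95% variance threshold here are illustrative assumptions, not part of the slides.

import numpy as np
from sklearn.decomposition import PCA

# Toy data: 200 samples with 50 correlated features (purely illustrative).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 5))            # 5 underlying factors
X = latent @ rng.normal(size=(5, 50)) + 0.1 * rng.normal(size=(200, 50))

# Keep just enough components to retain ~95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)

print("original dimensions:", X.shape[1])                       # 50
print("reduced dimensions:", X_reduced.shape[1])                 # far fewer
print("variance retained:", pca.explained_variance_ratio_.sum())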
Why Use PCA for Face Recognition?
•High Dimensionality of Image Data:
• Face images contain large amounts of pixel data (e.g., 100x100 images
have 10,000 features).
• PCA helps in reducing the number of dimensions, which makes the face
recognition task computationally feasible.
•Feature Extraction:
• PCA captures key features (principal components) of faces that are most
informative for distinguishing individuals.
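As a sketch of the dimensionality involved, assuming 100x100 grayscale images stored as NumPy arrays; the random data is only a stand-in for real face images.

import numpy as np

# Each 100x100 face image flattens to a 10,000-dimensional pixel vector;
# a dataset of such images becomes one large data matrix.
rng = np.random.default_rng(0)
images = rng.random((300, 100, 100))          # 300 stand-in "face images"
X = images.reshape(len(images), -1)
print(X.shape)                                # (300, 10000): 10,000 features per face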
How PCA Works (Concept)
Step-by-step Process:
1. Data Collection:
 • Gather a dataset of images (e.g., a database with many face images).
2. Preprocessing:
 • Convert each image into a vector.
 • Normalize the images (scale pixels to a common range) and subtract the mean face.
3. Covariance Matrix:
 • Calculate the covariance matrix to find the correlations between features.
4. Eigenvectors and Eigenvalues:
 • Find the eigenvectors (principal components) and eigenvalues (which represent the variance captured by each component).
5. Feature Selection:
 • Select the top 'k' eigenvectors with the largest eigenvalues to form the new feature space.
6. Projection:
 • Project the original images into the reduced feature space.
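A compact NumPy sketch of the six steps above; the 32x32 random images stand in for a real face database, and mean-centering is assumed as part of the normalization step.

import numpy as np

rng = np.random.default_rng(0)
images = rng.random((40, 32, 32))             # step 1: 40 stand-in face images

# Step 2: vectorize and normalize (here: centre each pixel on the mean face).
X = images.reshape(len(images), -1)           # shape (40, 1024)
mean_face = X.mean(axis=0)
X_centered = X - mean_face

# Step 3: covariance matrix of the pixel features.
C = np.cov(X_centered, rowvar=False)          # shape (1024, 1024)

# Step 4: eigenvectors and eigenvalues (eigh, since C is symmetric).
eigvals, eigvecs = np.linalg.eigh(C)

# Step 5: keep the top-k eigenvectors (eigh sorts eigenvalues in ascending order).
k = 20
W = eigvecs[:, -k:]                           # shape (1024, 20)

# Step 6: project the images into the reduced feature space.
Y = X_centered @ W                            # shape (40, 20)
print(Y.shape)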
Mathematics Behind PCA
•Covariance Matrix (C):
 • $C = \frac{1}{N-1} X^T X$, where $X$ is the mean-centered data matrix and $N$ is the number of data points.
•Eigenvectors and Eigenvalues:
 • Solve the equation $C \cdot v = \lambda \cdot v$, where $v$ are the eigenvectors and $\lambda$ are the eigenvalues.
•Projection:
 • Project the original data onto the selected eigenvectors: $Y = X \cdot W$, where $W$ is the matrix whose columns are the selected eigenvectors.
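A small numerical check of these formulas on random data; the matrix sizes and the choice of three components are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))
X = X - X.mean(axis=0)                        # centre the data first

N = X.shape[0]
C = (X.T @ X) / (N - 1)                       # covariance matrix C = X^T X / (N - 1)

eigvals, eigvecs = np.linalg.eigh(C)
v, lam = eigvecs[:, -1], eigvals[-1]          # leading eigenvector / eigenvalue
print(np.allclose(C @ v, lam * v))            # True: C v = lambda v holds

W = eigvecs[:, -3:]                           # top-3 eigenvectors as columns
Y = X @ W                                     # projection Y = X W
print(Y.shape)                                # (50, 3)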
Eigenfaces
•Definition of Eigenfaces:
• The principal components (eigenvectors) derived from a collection of
face images are called "eigenfaces."
• These are not actual faces but abstract representations that capture the
main variations in facial appearance across the dataset.
•Visualization:
• Display a few examples of eigenfaces.
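One possible way to display eigenfaces, assuming the Olivetti faces dataset bundled with scikit-learn (downloaded on first use) and matplotlib for plotting; the choice of 16 components is illustrative.

import matplotlib.pyplot as plt
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA

faces = fetch_olivetti_faces()                # 400 grayscale faces, 64x64 pixels
pca = PCA(n_components=16).fit(faces.data)    # eigenfaces = principal components

fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for ax, component in zip(axes.ravel(), pca.components_):
    ax.imshow(component.reshape(64, 64), cmap="gray")   # eigenvector shown as an image
    ax.axis("off")
fig.suptitle("First 16 eigenfaces")
plt.show()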
Face Recognition with PCA
•Training Phase:
• Apply PCA to the training images to extract eigenfaces.
• Represent each image as a linear combination of the eigenfaces.
• Store the reduced feature representation (e.g., projection on the
eigenfaces).
•Testing Phase:
• Project the test image onto the eigenfaces space.
• Compare the projection with stored projections (e.g., using
Euclidean distance).
• Classify the face based on the closest match.
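A sketch of both phases, again assuming the Olivetti faces as a stand-in database and a plain Euclidean nearest-neighbour match; the 50 components and 75/25 split are illustrative choices.

import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# Training phase: extract eigenfaces and store the training projections.
pca = PCA(n_components=50).fit(X_train)
train_proj = pca.transform(X_train)

# Testing phase: project each test face and pick the closest stored projection.
test_proj = pca.transform(X_test)
distances = np.linalg.norm(test_proj[:, None, :] - train_proj[None, :, :], axis=2)
predictions = y_train[np.argmin(distances, axis=1)]

print("recognition accuracy:", (predictions == y_test).mean())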
Example: Face Recognition Workflow
1. Collect Image Dataset:
 • Gather a dataset of faces, e.g., the ORL or Yale Face Database.
2. Apply PCA:
 • Perform dimensionality reduction to create eigenfaces.
3. Training Phase:
 • Project training images onto the eigenface space.
4. Testing Phase:
 • Project a test image and find the closest match from the training dataset.
5. Recognition:
 • Output the predicted identity based on the closest match.
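The same workflow can be written as a single scikit-learn pipeline, with a 1-nearest-neighbour classifier performing the closest-match step; the component count and split are again illustrative assumptions, and the Olivetti faces stand in for ORL/Yale.

from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()                                    # step 1: dataset
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# Steps 2-3: PCA builds the eigenface space; the classifier stores training projections.
model = make_pipeline(PCA(n_components=50), KNeighborsClassifier(n_neighbors=1))
model.fit(X_train, y_train)

# Steps 4-5: project a test image and output the identity of the closest match.
print("predicted identity:", model.predict(X_test[:1])[0])
print("overall accuracy:", model.score(X_test, y_test))

Using a pipeline keeps PCA fitted only on the training images, so test images are projected into the same eigenface space during recognition.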
Advantages of PCA for Face Recognition
•Dimensionality Reduction:
• Reduces computational complexity by decreasing the number of features.
•Captures Major Variations:
• Focuses on the most significant variations in the data, improving accuracy.
•Efficiency:
• Fast for recognition once eigenfaces are computed.
Limitations of PCA in Face Recognition
•Sensitivity to Lighting and Pose:
• PCA may not perform well under different lighting conditions or facial
orientations.
•Limited to Linear Variations:
• PCA assumes linear relationships, which may not capture complex facial
features.
•Data Variability:
• Requires a large and diverse dataset for accurate recognition.
Applications of PCA in Face Recognition
•Security Systems:
• Used for access control, surveillance, and identification.
•Social Media Platforms:
• For automatic face tagging in photos.
•Authentication Systems:
• Used in mobile phones and biometric authentication systems.
Alternative Methods to PCA
•Linear Discriminant Analysis (LDA):
• Focuses on maximizing class separability, unlike PCA, which maximizes
overall variance.
•Deep Learning (CNNs):
• Modern approaches like Convolutional Neural Networks (CNNs) are more
effective at handling complex variations in faces.
Conclusion
•Summary:
• PCA is a powerful technique for dimensionality reduction and feature
extraction in face recognition.
• While effective, it has limitations that modern techniques like deep
learning aim to address.
•Future Outlook:
• Integration with advanced AI methods for more robust and accurate face
recognition systems.
Thank You