
AGE ESTIMATION USING IMAGE USING MACHINE LEARNING
Identify your age by using just an image

[Example output shown on the slide: Age 24]
UNDER THE SUPERVISION OF
Asst. Prof. Dr. Rituparna Saha
Department of Computer Science & Engineering

Submitted By
Gaganprit Singh (Univ. Roll 10300120054)
Patanjali Pandey (Univ. Roll 10300121206)
Ankur Kumar Maity (Univ. Roll 10300120022)
Atul Kumar Pradhan (Univ. Roll 10300120039)
Serial Number Topics
1. Abstract
2. Introduction
3. Literature survey
4. Methodology
5. Implementation
6. Result
7. Conclusion
ABSTRACT
Facial age estimation plays a pivotal role in numerous
applications, ranging from security systems to personalized
user experiences.
This project explores the realm of age estimation by
leveraging facial images through advanced machine-learning
techniques.
The objective is to develop a robust and accurate model
capable of estimating the age of an individual based on 6
facial features.
The project begins with a comprehensive review of existing
literature, delving into various methodologies and algorithms
employed in the domain of age estimation.
INTRODUCTION
• Face recognition is one of the biometric methods used to identify individuals by features
of the face. Biometrics have a significant advantage over traditional authentication
techniques because the biometric characteristics of an individual are unique to every person.

• The problem of personal verification and identification is an actively growing area of
research. Face, voice, fingerprint, iris, ear, and retina are the most commonly used
authentication methods.

• Traditionally, face recognition is used for the identification of documents such as land
registration records, passports, and driver’s licenses, and for recognition of a person in a
security area.

• In this project, an effective age-group estimation method using face features such as
texture and shape extracted from human face images is proposed.

• For better performance, geometric features of the facial images such as wrinkle
geography, face angle, left-to-right eye distance, eye-to-nose distance, eye-to-chin
distance, and eye-to-lip distance are calculated.
LITERATURE SURVEY
• Traditional face recognition includes methods such as eigenfaces or principal component
analysis (PCA) and Fisherfaces or linear discriminant analysis (LDA). These techniques
extract facial features from an image and use them to search the face database for images
with matching features (see the sketch after this survey).

• The skin texture analysis technique uses the visual details of the skin, as captured in
standard digital or scanned images, and turns the unique lines, patterns, and spots
apparent in a person’s skin into a mathematical space.

• In human-computer interaction, aging effects on human faces have been studied for two
main reasons:
1. Automatic age estimation from a face image.
2. Automatic age progression for face recognition.

• A system has been developed to classify face images into one of three age groups:
infants, young adults, and senior adults.

• In that paper, key landmarks were extracted from face images, and distances between
those landmarks were calculated. Then, ratios of those distances were used to classify
face images as those of infants or adults.
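
As a concrete illustration of the eigenface/PCA matching idea mentioned in the first point, here is a minimal sketch: faces are flattened into vectors, projected onto a few principal components, and a query is matched to the nearest gallery face in that reduced space. The random gallery, image size, and label names are illustrative stand-ins, not data from this project.

```python
# Minimal eigenface-style matching sketch (illustrative stand-in data:
# 20 flattened 64x64 "faces" with made-up identity labels).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
gallery = rng.random((20, 64 * 64))          # flattened grayscale face images
labels = [f"person_{i % 5}" for i in range(20)]

pca = PCA(n_components=10)                   # project faces onto top eigenfaces
gallery_feats = pca.fit_transform(gallery)

def identify(face_vec):
    """Return the label of the gallery face closest in eigenface space."""
    feat = pca.transform(face_vec.reshape(1, -1))
    dists = np.linalg.norm(gallery_feats - feat, axis=1)
    return labels[int(np.argmin(dists))]

print(identify(gallery[3]))                  # matches gallery entry 3's label
```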
METHODOLOGY
The methodology for the age estimation project using facial images with a CNN involves several key
steps, from data preparation to model training and evaluation. The general process is as follows.

• Data Collection
Gathered a diverse dataset of facial images with corresponding age labels.
Ensured a balanced distribution across age groups, ethnicities, and expressions.
Augmented the dataset where needed to increase variability.

• Data Preprocessing
1. Loaded and inspected the image, as shown in the image below.
METHODOLOGY
• Data Preprocessing
2. Resized images to a consistent size, as shown in the image.
3. Converted the image into a grayscale image using OpenCV, as shown in the figure below.
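
These loading, resizing, and grayscale-conversion steps can be sketched with OpenCV and imutils as follows; the file name and target width are illustrative placeholders, not values taken from the project script.

```python
# Preprocessing sketch: load, resize, and convert to grayscale
# (requires opencv-python and imutils; the file path is a placeholder).
import cv2
import imutils

image = cv2.imread("face.jpg")                   # 1. load and inspect the image
if image is None:
    raise FileNotFoundError("face.jpg not found")
print("original shape:", image.shape)

image = imutils.resize(image, width=500)         # 2. resize to a consistent width
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # 3. grayscale for later detection
print("resized:", image.shape, "grayscale:", gray.shape)
```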
METHODOLOGY
• Data Preprocessing: Face Detection
- Utilize a pre-trained face detection model (e.g., Haar cascades or a deep-learning-based
face detector) to identify and extract faces from the images.
- Crop the images to include only the detected faces, ensuring consistent facial regions
for age estimation.
- Resize the cropped face image.

• Data Preprocessing: Eye Detection
- Integrate an eye detection algorithm (e.g., Haar cascades for eyes or a specialized eye
detector) to locate eyes within the cropped face region.
- Extract the eye regions to capture additional facial features that may contribute to age
estimation.
- Resize the eye images to a common size before feeding them into the model (see the
sketch below).
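
A minimal sketch of the face-detection and eye-detection steps using OpenCV's bundled Haar cascades, assuming the `image` and `gray` arrays from the preprocessing sketch above; the crop and patch sizes are illustrative choices.

```python
# Face and eye detection sketch using OpenCV's bundled Haar cascades
# (assumes `image` and `gray` from the preprocessing sketch above).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
if len(faces) == 0:
    raise RuntimeError("no face detected")

x, y, w, h = faces[0]                                        # first detected face
face_roi = cv2.resize(image[y:y + h, x:x + w], (200, 200))   # consistent face size
face_gray = cv2.cvtColor(face_roi, cv2.COLOR_BGR2GRAY)

# Detect eyes inside the cropped face and resize each eye patch to a common size.
eyes = eye_cascade.detectMultiScale(face_gray, scaleFactor=1.1, minNeighbors=5)
eye_patches = [cv2.resize(face_roi[ey:ey + eh, ex:ex + ew], (40, 20))
               for (ex, ey, ew, eh) in eyes]
```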
METHODOLOGY
• Data Preprocessing: Mouth Detection
- Employ a mouth detection algorithm (e.g., Haar cascades or a dedicated mouth detector)
to pinpoint the mouth region within the cropped face.
- Extract the mouth regions for additional features relevant to age classification.
- Resize the mouth images to a consistent size for integration into the overall input data.

• Data Preprocessing: Nose Detection
- Implement a nose detection algorithm, such as Haar cascades or a specialized nose
detector, to accurately locate the nose region within the cropped face.
- Isolate the nose regions to gather supplementary features pertinent to age classification.
- Standardize the size of the nose images by resizing them consistently, ensuring seamless
integration into the overall input data (see the sketch below).
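
Continuing the same pattern for the mouth and nose regions, here is a sketch that assumes the `face_roi` and `face_gray` arrays from the previous sketch. OpenCV ships a smile cascade that is commonly used for the mouth region; the nose cascade file name below is only a placeholder for a cascade file obtained separately.

```python
# Mouth and nose region extraction sketch (assumes `face_roi` and `face_gray`
# from the face-detection sketch above).
import cv2

mouth_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")         # used here for the mouth
nose_cascade = cv2.CascadeClassifier("haarcascade_mcs_nose.xml")  # placeholder file,
                                                                  # must be downloaded separately

def extract_regions(cascade, size):
    """Detect regions inside the cropped face and resize them to a common size."""
    boxes = cascade.detectMultiScale(face_gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(face_roi[y:y + h, x:x + w], size) for (x, y, w, h) in boxes]

mouth_patches = extract_regions(mouth_cascade, (60, 30))
nose_patches = extract_regions(nose_cascade, (40, 40))
print(len(mouth_patches), "mouth region(s),", len(nose_patches), "nose region(s)")
```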
METHODOLOGY
• Feature Extraction
A combination of global and grid features is extracted from the face images. Global
features such as the distance between the two eyeballs, eye to nose tip, eye to chin, and
eye to lip are calculated as shown in Fig. 2. Using these four distance values, four
features F1, F2, F3, and F4 are calculated as follows:
a) Feature 1 (F1):
F1 = (distance from left to right eyeball) / (distance from eye to nose)
b) Feature 2 (F2):
F2 = (distance from left to right eyeball) / (distance from eye to lip)
METHODOLOGY
• Feature Extraction (continued)
c) Feature 3 (F3):
F3 = (distance from eye to nose) / (distance from eye to chin)
d) Feature 4 (F4):
F4 = (distance from eye to nose) / (distance from eye to lip)
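
Once the four distances have been measured in pixels, the ratio features follow directly from the formulas above. A minimal sketch; the distance values passed in the example call are illustrative placeholders, not measurements from the project.

```python
# Ratio-feature sketch following the formulas above.

def age_features(eye_to_eye, eye_to_nose, eye_to_chin, eye_to_lip):
    """Compute the four global ratio features F1..F4 from pixel distances."""
    f1 = eye_to_eye / eye_to_nose    # F1 = eye-to-eye / eye-to-nose
    f2 = eye_to_eye / eye_to_lip     # F2 = eye-to-eye / eye-to-lip
    f3 = eye_to_nose / eye_to_chin   # F3 = eye-to-nose / eye-to-chin
    f4 = eye_to_nose / eye_to_lip    # F4 = eye-to-nose / eye-to-lip
    return f1, f2, f3, f4

# Illustrative distances (in pixels), not values measured by the project.
print(age_features(eye_to_eye=96, eye_to_nose=80, eye_to_chin=150, eye_to_lip=123))
```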
IMPLEMENTATION
• Importing the required libraries.

• Loading Haar Cascades
- Haar cascades for eyes, nose, and mouth are loaded using cv2.CascadeClassifier.

• Loading and Preprocessing the Image
- The script loads an image using OpenCV and resizes it using imutils.resize. The image
is then converted to grayscale for face detection.
IMPLEMENTATION
• Detecting Facial Features
- The script uses Haar cascades to detect eyes, nose, and mouth in the grayscale image.

• Calculating Ratios and Composite Metric

• Visualizing Results
- The script draws rectangles and circles around the detected eyes, nose, and mouth to
visualize the results.
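
As an illustration of this visualization step, here is a sketch that draws boxes and eye-centre markers on the image; the box-list names (`eye_boxes`, `nose_boxes`, `mouth_boxes`) are assumed outputs of the earlier detection steps, not names from the project script.

```python
# Visualization sketch: draw boxes around detected regions and mark eye centres
# (assumes `image` plus `eye_boxes`, `nose_boxes`, `mouth_boxes` lists of
# (x, y, w, h) tuples from the detection steps; names are illustrative).
import cv2

for (x, y, w, h) in eye_boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)       # eyes: blue boxes
    cv2.circle(image, (x + w // 2, y + h // 2), 3, (0, 255, 255), -1)  # eye centre marker
for (x, y, w, h) in nose_boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)       # nose: green box
for (x, y, w, h) in mouth_boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)       # mouth: red box

cv2.imshow("Detected facial features", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```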
RESULT
The outcomes our model gave for the test image are given below.

Image          F1      F2      F3      F4      F5
[test image]   1.20    0.78    0.53    0.65    666.69


CONCLUSION
In this age estimation project leveraging facial images, a comprehensive methodology has
been undertaken.

• The project commenced with meticulous data collection, ensuring diversity across age
groups, ethnicities, and expressions.

• Data preprocessing involved image loading, resizing, conversion to grayscale, and the
implementation of face, eye, mouth, and nose detection algorithms to extract relevant
facial features consistently.

• Feature extraction was a crucial aspect, incorporating both global and grid features to
capture distinctive facial characteristics.

• Features such as inter-eye distances, eye-to-nose and eye-to-lip ratios, and forehead and
eyelid pixel distributions were computed, contributing to a nuanced representation of
facial attributes for age classification.
THANK YOU
