Age Estimation from Images Using Machine Learning
UNDER THE SUPERVISION OF
Asst. Prof. Dr. Rituparna Saha

Submitted by
Department of Computer Science & Engineering
In this project, effective age group estimation using facial features such as texture and shape, extracted from human face images, is proposed.
For better performance, geometric features of the facial images, such as wrinkle geography, face angle, left-to-right eye distance, eye-to-nose distance, eye-to-chin distance, and eye-to-lip distance, are calculated.
LITERATURE SURVEY
Traditional face recognition includes methods such as eigenfaces, or principal component analysis (PCA), and fisherfaces, or linear discriminant analysis (LDA). These techniques extract facial features from an image and use them to search the face database for images with matching features.
Skin texture analysis technique uses the visual details of the skin, as captured in
standard digital or scanned images, and turns the unique lines, patterns, and spots
apparent in a person’s skin into a mathematical space.
A system has been developed to classify face images into one of the three age
groups: infants, young adults, and senior adults.
In another study, key landmarks were extracted from face images, and distances between those landmarks were calculated. Ratios of those distances were then used to classify face images as those of infants or adults.
METHODOLOGY
The methodology for the age estimation project using facial images with a CNN involves several key steps, from data preparation to model training and evaluation. The general process is as follows.
Data collection
Gathered a diverse dataset of facial images with corresponding age labels.
Ensured a balanced distribution across age groups, ethnicities, and expressions.
Augmented the dataset if needed to increase variability; a sketch of simple augmentations follows.
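A minimal sketch of the kind of augmentation this step refers to, assuming OpenCV is used; the flip, brightness, and rotation values are illustrative choices, not settings taken from the project.

import cv2

def augment(image):
    """Return simple augmented variants of a face image (illustrative only)."""
    variants = [image]
    # Horizontal flip: a mirrored face keeps the same age label.
    variants.append(cv2.flip(image, 1))
    # Brightness jitter: simulates different lighting conditions.
    for beta in (-30, 30):
        variants.append(cv2.convertScaleAbs(image, alpha=1.0, beta=beta))
    # Small rotation: simulates a slight head tilt.
    h, w = image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle=10, scale=1.0)
    variants.append(cv2.warpAffine(image, m, (w, h)))
    return variants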
Data Preprocessing
1. Loaded and inspected the images.
2. Resized all images to a consistent size; a minimal sketch of these two preprocessing steps follows.
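A minimal sketch of the loading, inspection, and resizing steps, assuming OpenCV; the 200×200 target size and the directory-based loading are illustrative assumptions rather than the project's actual settings.

import os
import cv2

TARGET_SIZE = (200, 200)  # assumed output size; the project's actual size is not shown

def load_and_resize(image_dir):
    """Load every image in image_dir, report its original shape, and resize it."""
    images = []
    for name in sorted(os.listdir(image_dir)):
        path = os.path.join(image_dir, name)
        img = cv2.imread(path)          # BGR image, or None if the file is not readable
        if img is None:
            continue
        print(name, "original shape:", img.shape)    # inspection step
        images.append(cv2.resize(img, TARGET_SIZE))  # consistent size for later stages
    return images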
Feature Extraction
A combination of global and grid features is extracted from face images. The global features, such as the distance between the two eyeballs, eye to nose tip, eye to chin, and eye to lip, are calculated as shown in Fig. 2. Using these four distance values, four features F1, F2, F3, and F4 are calculated as follows:
a) Feature 2 (F2): F2 = (distance from left to right eyeball) / (distance from eye to lip)
b) Feature 3 (F3): F3 = (distance from eye to nose) / (distance from eye to chin)
c) Feature 4 (F4): F4 = (distance from eye to nose) / (distance from eye to lip)
IMPLEMENTATION
Importing the required libraries.
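The slide does not list the libraries themselves; a plausible set for this kind of pipeline (image handling, numerical work, plotting) might look like the following, assuming an OpenCV-based script.

import cv2                       # face detection, image loading and resizing
import numpy as np               # numerical work on pixel arrays and feature vectors
import matplotlib.pyplot as plt  # displaying images and results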
Visualizing Results
• The script draws rectangles and circles around the detected eyes,
nose, and mouth to visualize the results.
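A minimal sketch of this visualization step, assuming the detection uses OpenCV Haar cascades (the project's exact detectors are not shown). Stock OpenCV bundles face and eye cascades only, so the sketch draws a rectangle around the face and circles on the eyes; the nose and mouth detectors mentioned above would need additional cascade files.

import cv2

# Cascades bundled with stock OpenCV.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def visualize(image):
    """Draw a rectangle around the face and circles on the detected eyes."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            center = (x + ex + ew // 2, y + ey + eh // 2)
            cv2.circle(image, center, ew // 2, (255, 0, 0), 2)
    return image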
RESULT
The outcomes produced by our model for the test images are given below. For each image, the computed feature values F1, F2, F3, F4, and F5 are reported.
Feature extraction was a crucial aspect, incorporating both global and grid features
to capture distinctive facial characteristics.
Features such as inter-eye distances, eye-to-nose and lip ratios, and forehead and
eyelid pixel distributions were computed, contributing to a nuanced representation
of facial attributes for age classification.
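As a closing illustration, a minimal sketch of how such a feature vector could feed an age-group classifier; a scikit-learn decision tree stands in here for whatever classifier the project actually used, and the training rows and labels are placeholders, not project data.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Placeholder training data: one row of features (F1..F5) per face,
# with an age-group label (0 = infant, 1 = young adult, 2 = senior adult).
X_train = np.array([[0.90, 0.60, 0.45, 0.75, 0.1],
                    [0.80, 0.50, 0.40, 0.70, 0.4],
                    [0.70, 0.45, 0.35, 0.65, 0.8]])
y_train = np.array([0, 1, 2])

clf = DecisionTreeClassifier(max_depth=3)
clf.fit(X_train, y_train)

# Classify a new face from its computed feature vector.
print(clf.predict([[0.85, 0.55, 0.42, 0.72, 0.2]]))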
THANK YOU