
Getting Started with Facial Recognition Model

We will try to create a face detection and facial feature recognition model using
the Deepface framework (a Python library that wraps several state-of-the-art
recognition models) to identify and distinguish between a set of images. We will
also compare the results using two of the many built-in models in the framework
and predict the age of the faces present in the images.

So let’s start!

Creating The Model


We will first install the Deepface library, which provides the modules we need.
This can be done by running the following command:

!pip install deepface #install the Deepface Library

We will now import and call our modules from the framework. We will also use OpenCV
to help our model with image processing and matplotlib to plot the results.

#calling the dependencies


from deepface import DeepFace
import cv2
import matplotlib.pyplot as plt
Next, we import the images and set their paths. We will use three images of the
same face to test facial recognition, and one image of a different face to
cross-validate the result.

#importing the images


img1_path = '/content/Img1.jpg'
img2_path = '/content/Img2.jpg'
img3_path = '/content/Img3.jpg'
#loading the images from their paths
img1 = cv2.imread(img1_path)
img2 = cv2.imread(img2_path)
We will now plot the images to check that they have been imported correctly.

plt.imshow(img1[:, :, ::-1]) #reverse the channel order from BGR (OpenCV) to RGB (matplotlib)


plt.show()
plt.imshow(img2[:, :, ::-1 ])
plt.show()
Here are our images

We will now call our first recognition model, VGG-Face. When we call the model,
it loads a deep learning network along with its pre-trained weights.

#calling VGGFace
model_name = "VGG-Face"
model = DeepFace.build_model(model_name)
Verifying the Results
We store the output of the verify function in a variable called result and use
it to validate the images.

result = DeepFace.verify(img1_path, img2_path) #compare the faces in the two images
result #display the comparison result
We will get the following output:

{'distance': 0.2570606020004992,
'max_threshold_to_verify': 0.4,
'model': 'VGG-Face',
'similarity_metric': 'cosine',
'verified': True}
Here, the distance tells us how far apart the two face embeddings are, i.e. how
dissimilar the two faces look to the model. We can also see that the image
verification result is TRUE, telling us that the compared faces belong to the
same person.
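To make the distance figure concrete, here is a minimal sketch (not DeepFace's internal code) of how a cosine distance between two embedding vectors translates into a verified/not-verified decision. The embedding vectors below are illustrative, and the 0.4 threshold is the one reported in the output above:

```python
import numpy as np

def cosine_distance(a, b):
    # cosine distance = 1 - cosine similarity
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# hypothetical embeddings for two face images (real ones have thousands of dimensions)
emb1 = [0.2, 0.8, 0.1, 0.5]
emb2 = [0.25, 0.75, 0.15, 0.45]

distance = cosine_distance(emb1, emb2)
verified = distance <= 0.4  # the threshold reported in the verify output above
print(distance, verified)
```

Identical vectors give a distance of 0, orthogonal vectors a distance of 1; the verification decision is simply a comparison of the distance against the model's threshold.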

Cross Validating Our Model


We will now cross-validate our model and check that the earlier result holds up.
For this, we will use an image of a different face and verify it against one of
our first face images.

img4_path = '/content/JAN.jpg' #setting path for different image


img4 = cv2.imread(img4_path)
#plotting the image
plt.imshow(img4[:, :, ::-1 ])
plt.show()

Comparing the two images,

#comparing the faces in images using VGG Face


DeepFace.verify("Img1.jpg","JAN.jpg")
Here’s our result

{'distance': 0.6309288770738648,
'max_threshold_to_verify': 0.4,
'model': 'VGG-Face',
'similarity_metric': 'cosine',
'verified': False}
As we can notice, the distance this time is well above the threshold, and the
verification result is FALSE, telling us that the compared faces belong to two
different people!

Testing using a Different Model


We will now build a second model using a different recognition network, Facenet,
compare the first two images again, and see how its result differs from the
VGG-Face model's.

#calling the model


model_name = 'Facenet'
#storing the result in a variable named resp
resp = DeepFace.verify(img1_path = img1_path, img2_path = img2_path, model_name = model_name)
resp #display our result
Here’s the output for Facenet:

{'distance': 0.42664666323609834,
'max_threshold_to_verify': 0.4,
'model': 'Facenet',
'similarity_metric': 'cosine',
'verified': True}
Comparing the faces in the first two images, Facenet also verifies them as the
same person, but its distance sits much closer to the threshold. For this pair,
VGG-Face separates the match from the threshold by a wider margin than Facenet
does.
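One way to line the two models up side by side is to rank them by how far below the threshold each distance falls; a larger margin suggests a more confident match. The numbers here are copied from the verify outputs above, and the ranking heuristic itself is an illustration rather than part of DeepFace:

```python
# (model, distance, threshold) values copied from the verify outputs above
results = [
    ("VGG-Face", 0.2570606020004992, 0.4),
    ("Facenet", 0.42664666323609834, 0.4),
]

# rank models by distance minus threshold: more negative = wider matching margin
ranked = sorted(results, key=lambda r: r[1] - r[2])
for model, distance, threshold in ranked:
    print(f"{model}: distance={distance:.4f} (threshold {threshold})")
```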

We can also match and rank the similarity of faces using a different image of the
same person.

#storing match and ranks by creating a dataframe


df = DeepFace.find(img_path = '/content/Other.jpg', db_path ='/content/')
df.head() #show top matches

Result :
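The find function returns a pandas DataFrame of candidate matches. As a hedged illustration (the column name and distance values below are hypothetical, since the actual table is not reproduced here), ranking the candidates by distance looks like this:

```python
import pandas as pd

# hypothetical find() output: candidate identities with cosine distances
df = pd.DataFrame({
    "identity": ["/content/Img1.jpg", "/content/Img2.jpg", "/content/Img3.jpg"],
    "VGG-Face_cosine": [0.21, 0.18, 0.35],  # illustrative distances
})

# rank candidates: the smallest distance is the best match
best = df.sort_values("VGG-Face_cosine").reset_index(drop=True)
print(best.head())
```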

Facial Feature Analysis using Deepface


Using Deepface, we can also analyze facial features. One can estimate the age,
race, emotion and gender of a face using Deepface's analyze function.

#creating an object to analyze facial features


obj = DeepFace.analyze(img_path = "Img2.jpg", actions = ['age', 'gender', 'race', 'emotion'])
print(obj["age"], " years old ", obj["dominant_race"], " ", obj["dominant_emotion"], " ", obj["gender"])

Analyzing this image, the model tells us the following:

32 years old white neutral Woman

Analyzing the next face tells us the following:

28 years old white happy Woman

ENDNOTES
In this article, we implemented and learned how to create a Face Recognition &
Facial Feature Detection model to analyze faces from a set of images. You can
also try other models present in Deepface such as “OpenFace”, “DeepID”,
“ArcFace” and “Dlib”, and check their recognition accuracy as well. The full
Colab file for the above can be accessed from here.

Happy Learning!
