Synopsis
Let's first introduce the terminology used in this advanced python project on gender and age detection.
What is OpenCV?
OpenCV is short for Open Source Computer Vision. Intuitively by the name, it is an
open-source Computer Vision and Machine Learning library. It can process images and video in real time and also offers analytical capabilities. It supports the Deep Learning frameworks TensorFlow, Caffe, and
PyTorch.
What is a CNN?
A Convolutional Neural Network is a deep neural network (DNN) widely used for image recognition, image processing, and natural language processing (NLP). Also known as a ConvNet, a
CNN has input and output layers, and multiple hidden layers, many of which are
convolutional. In a way, CNNs are regularized multilayer perceptrons.
In this project, we will:
Detect faces
Classify into Male/Female
Classify into one of the 8 age ranges
Put the results on the image and display it
The Dataset
For this python project, we'll use the Adience dataset; the dataset is available in
the public domain and you can find it here. This dataset serves as a benchmark for
face photos and is inclusive of various real-world imaging conditions like noise,
lighting, pose, and appearance. The images have been collected from Flickr albums
and distributed under the Creative Commons (CC) license. It has a total of 26,580
photos of 2,284 subjects in eight age ranges (as mentioned above) and is about 1GB
in size. The models we will use have been trained on this dataset.
Prerequisites
You'll need to install OpenCV (cv2) to be able to run this project. You can do this with pip:
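pip install opencv-python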
Other than that, you'll need the following pre-trained model files, plus a few pictures to try the project on:
opencv_face_detector.pbtxt
opencv_face_detector_uint8.pb
age_deploy.prototxt
age_net.caffemodel
gender_deploy.prototxt
gender_net.caffemodel
For face detection, we have a .pb file- this is a protobuf file (protocol buffer);
it holds the graph definition and the trained weights of the model. We can use this
to run the trained model. And while a .pb file holds the protobuf in binary format,
one with the .pbtxt extension holds it in text format. These are TensorFlow files.
For age and gender, the .prototxt files describe the network configuration and
the .caffemodel file defines the internal states of the parameters of the layers.
2. We use the argparse library to create an argument parser so we can get the image argument from the command prompt. The parser takes the argument holding the path to the image for which we want to classify gender and age.
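As a rough sketch (the --image flag name matches how we'll run the script later, but the exact code isn't given in the text):

import argparse

# Build a parser that takes the path to the image we want to classify
parser = argparse.ArgumentParser()
parser.add_argument('--image', help='Path to the input image (omit it to use the webcam)')
args = parser.parse_args()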
3. For face, age, and gender, initialize the protocol buffer (network configuration) and model (weights) files.
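A possible sketch of this step, using the file names listed in the prerequisites; the variable names are my own:

faceProto = "opencv_face_detector.pbtxt"       # face detector configuration (text protobuf)
faceModel = "opencv_face_detector_uint8.pb"    # face detector weights (binary protobuf)
ageProto = "age_deploy.prototxt"               # age network configuration
ageModel = "age_net.caffemodel"                # age network weights
genderProto = "gender_deploy.prototxt"         # gender network configuration
genderModel = "gender_net.caffemodel"          # gender network weights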
4. Initialize the mean values for the model and the lists of age ranges and genders
to classify from.
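The text doesn't spell out the actual numbers; the values below are the ones commonly used with these Caffe age/gender models and the eight Adience age ranges, so treat them as assumptions:

MODEL_MEAN_VALUES = (78.4263377603, 87.7689143744, 114.895847746)  # per-channel means for the blob
ageList = ['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']
genderList = ['Male', 'Female']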
5. Now, use the readNet() method to load the networks. The first parameter holds
trained weights and the second carries network configuration.
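Using the variables from step 3, the loading could look like this; cv2.dnn.readNet() takes the weights file first and the configuration file second, matching the description above:

import cv2

faceNet = cv2.dnn.readNet(faceModel, faceProto)
ageNet = cv2.dnn.readNet(ageModel, ageProto)
genderNet = cv2.dnn.readNet(genderModel, genderProto)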
6. Let's capture the video stream in case you'd like to classify on a webcam's stream. Set the padding to 20.
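A sketch of this step; cv2.VideoCapture() accepts either a file path or a webcam index (0), which is why the --image argument is optional:

# Open the image/video passed on the command line, or fall back to the webcam
video = cv2.VideoCapture(args.image if args.image else 0)
padding = 20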
7. Now, until any key is pressed, we read the stream and store its content in hasFrame and frame. If no frame could be read (for instance, once a single input image has been processed), we call waitKey() from cv2 and then break out of the loop.
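The reading loop might look like this:

# Keep going until any key is pressed
while cv2.waitKey(1) < 0:
    hasFrame, frame = video.read()
    if not hasFrame:
        # Nothing left to read (e.g. a single image has already been processed)
        cv2.waitKey()
        break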
8. Let's make a call to the highlightFace() function with the faceNet and frame parameters; we store what it returns in resultImg and faceBoxes. If faceBoxes is empty, there was no face to detect.
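Inside the loop from step 7, the call could look like this (the printed message is just an illustration):

    # Detect faces on the current frame
    resultImg, faceBoxes = highlightFace(faceNet, frame)
    if not faceBoxes:
        print("No face detected")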
Here, net is faceNet- this model is the DNN Face Detector and takes up only about 2.7MB on disk.
Create a copy of frame and get its height and width.
Create a blob from the copy.
Set the input and make a forward pass through the network.
faceBoxes starts as an empty list. For each detection the network returns, read its confidence (between 0 and 1). Wherever the confidence is greater than the confidence threshold of 0.7, we compute the x1, y1, x2, and y2 coordinates and append a list of those to faceBoxes.
Then, we draw rectangles on the image for each such list of coordinates and return two things: the copy and the list of faceBoxes.
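Putting the points above together, a sketch of highlightFace() could look like the following; in the full script it would be defined before the loop. The 300x300 input size and the (104, 117, 123) mean values are the usual ones for this OpenCV face detector and are not stated in the text, so treat them as assumptions:

def highlightFace(net, frame, conf_threshold=0.7):
    frameOpencvDnn = frame.copy()            # work on a copy of the frame
    frameHeight = frameOpencvDnn.shape[0]
    frameWidth = frameOpencvDnn.shape[1]
    # Build a blob from the copy
    blob = cv2.dnn.blobFromImage(frameOpencvDnn, 1.0, (300, 300), [104, 117, 123], True, False)
    net.setInput(blob)
    detections = net.forward()               # forward pass through the face detector
    faceBoxes = []
    for i in range(detections.shape[2]):     # loop over every detection
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:      # keep detections above the 0.7 threshold
            x1 = int(detections[0, 0, i, 3] * frameWidth)
            y1 = int(detections[0, 0, i, 4] * frameHeight)
            x2 = int(detections[0, 0, i, 5] * frameWidth)
            y2 = int(detections[0, 0, i, 6] * frameHeight)
            faceBoxes.append([x1, y1, x2, y2])
            # Draw a rectangle around the detected face
            cv2.rectangle(frameOpencvDnn, (x1, y1), (x2, y2), (0, 255, 0),
                          int(round(frameHeight / 150)), 8)
    return frameOpencvDnn, faceBoxes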
9. But if there are indeed faceBoxes, then for each of them we extract the face and create a 4-dimensional blob from it. In doing this, we scale it, resize it, and pass in the mean values.
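For each face box, the cropping and blob creation might look like this, still inside the loop; the 227x227 size is the usual input size of these Caffe models and is an assumption here:

    for faceBox in faceBoxes:
        # Cut the face out of the frame, with the padding set in step 6
        face = frame[max(0, faceBox[1] - padding):min(faceBox[3] + padding, frame.shape[0] - 1),
                     max(0, faceBox[0] - padding):min(faceBox[2] + padding, frame.shape[1] - 1)]
        # 4-dimensional blob: scaled, resized, and with the mean values subtracted
        blob = cv2.dnn.blobFromImage(face, 1.0, (227, 227), MODEL_MEAN_VALUES, swapRB=False)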
10. We feed the input and give the network a forward pass to get the confidences of the two classes. Whichever is higher is the gender of the person in the picture.
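As a sketch, continuing inside the face loop:

        # Forward pass through the gender network; pick the class with the higher confidence
        genderNet.setInput(blob)
        genderPreds = genderNet.forward()
        gender = genderList[genderPreds[0].argmax()]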
11. Similarly, we feed the same blob to the age network and take the age range with the highest confidence as the predicted age.
12. We'll add the gender and age texts to the resulting image and display it with imshow().
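A sketch of steps 11 and 12; the text color and position are my own choices:

        # Forward pass through the age network; take the age range with the highest confidence
        ageNet.setInput(blob)
        agePreds = ageNet.forward()
        age = ageList[agePreds[0].argmax()]

        # Write the results above the face box and display the image
        cv2.putText(resultImg, f'{gender}, {age}', (faceBox[0], faceBox[1] - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv2.LINE_AA)
        cv2.imshow("Detecting age and gender", resultImg)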
We'll go to the command prompt, run our script with the image option, and specify an image to classify:
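For example, assuming the script was saved as gad.py and there is a picture called sample.jpg in the same folder (both names are placeholders):

python gad.py --image sample.jpg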
Output: (the result screenshots, showing each detected face framed by a rectangle with the predicted gender and age range drawn on the image, are omitted here)
Summary
In this python project, we implemented a CNN to detect gender and age from a single picture of a face. Did you finish the project with us? Try this on your own pictures. Check out more cool python projects with source code published by DataFlair.
If you enjoyed the above python project, do comment and let us know your thoughts.