Interesting Python Project of Gender and Age Detection With OpenCV
If these projects are helping you, please share your feedback with us. It
will motivate us to provide more such useful content for you.
Without further ado, let’s start the gender and age detection Python project. In this project, we will:
Detect faces
Classify into Male/Female
Classify into one of the 8 age ranges
Put the results on the image and display it
The Dataset
For this Python project, we’ll use the Adience dataset; the dataset is available
in the public domain and you can find it here. This dataset serves as a
benchmark for face photos and is inclusive of various real-world imaging
conditions like noise, lighting, pose, and appearance. The images have been
collected from Flickr albums and distributed under the Creative Commons
(CC) license. It has a total of 26,580 photos of 2,284 subjects in eight age
ranges (as mentioned above) and is about 1GB in size. The models we will use
have been trained on this dataset.
Prerequisites
You’ll need to install OpenCV (cv2) to be able to run this project. You can do
this with pip:
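For example, using the opencv-python package from PyPI:

    pip install opencv-python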
Other packages you’ll be needing are math and argparse, but those come as
part of the standard Python library. Besides these, you’ll need the following files:
opencv_face_detector.pbtxt
opencv_face_detector_uint8.pb
age_deploy.prototxt
age_net.caffemodel
gender_deploy.prototxt
gender_net.caffemodel
a few pictures to try the project on
For face detection, we have a .pb file: this is a protobuf (protocol buffer) file; it
holds the graph definition and the trained weights of the model. We can use
this to run the trained model. And while a .pb file holds the protobuf in binary
format, one with the .pbtxt extension holds it in text format. These are
TensorFlow files. For age and gender, the .prototxt files describe the network
configuration and the .caffemodel file defines the internal states of the
parameters of the layers.
2. We use the argparse library to create an argument parser so we can get the
image argument from the command prompt. The parser takes a single argument
holding the path to the image for which we want to classify gender and age.
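A minimal sketch of this parser; the flag name --image is an assumption, chosen to match how the script is run later:

    import argparse

    parser = argparse.ArgumentParser()
    # Path to the input image; omitting it lets the script fall back to the webcam
    parser.add_argument('--image', help='Path to the image to classify')
    args = parser.parse_args()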
3. For face, age, and gender, initialize the protocol buffer (network configuration) and model (trained weights) file paths.
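For example (the variable names here are illustrative, not necessarily the original script’s; the file names are the ones listed above):

    # Face detector: TensorFlow graph definition (text) and trained weights (binary)
    faceProto = "opencv_face_detector.pbtxt"
    faceModel = "opencv_face_detector_uint8.pb"
    # Age and gender: Caffe network configuration and trained parameters
    ageProto = "age_deploy.prototxt"
    ageModel = "age_net.caffemodel"
    genderProto = "gender_deploy.prototxt"
    genderModel = "gender_net.caffemodel"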
4. Initialize the mean values for the model and the lists of age ranges and
genders to classify from.
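A sketch of these initializations; the mean values and the eight Adience age brackets shown are the ones commonly used with these pre-trained Caffe models, and should be checked against the models you download:

    MODEL_MEAN_VALUES = (78.4263377603, 87.7689143744, 114.895847746)
    ageList = ['(0-2)', '(4-6)', '(8-12)', '(15-20)',
               '(25-32)', '(38-43)', '(48-53)', '(60-100)']
    genderList = ['Male', 'Female']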
5. Now, use the readNet() method to load the networks. The first parameter
holds the trained weights and the second carries the network configuration.
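A minimal sketch of this step, reusing the file-path variables from step 3:

    import cv2

    # readNet(model, config): weights first, network configuration second
    faceNet = cv2.dnn.readNet(faceModel, faceProto)
    ageNet = cv2.dnn.readNet(ageModel, ageProto)
    genderNet = cv2.dnn.readNet(genderModel, genderProto)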
7. Now, until any key is pressed, we read the stream and store its contents in
the variables hasFrame and frame. If no frame was read (as happens once a single
image, rather than a video, has been processed), we call up waitKey() from cv2
to hold the window open, then break.
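A sketch of this loop; the capture-object name video, and falling back to the webcam (device 0) when no image is given, are assumptions:

    video = cv2.VideoCapture(args.image if args.image else 0)
    # waitKey() returns -1 while no key is pressed, so the loop runs until a keypress
    while cv2.waitKey(1) < 0:
        hasFrame, frame = video.read()
        if not hasFrame:
            cv2.waitKey()
            break
        # steps 8-12 below run inside this loop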
8. Let’s make a call to the highlightFace() function with the faceNet and frame
parameters; what this returns, we will store in the variables resultImg and
faceBoxes. And if we got 0 faceBoxes, it means there was no face to detect.
Here, net is faceNet: this model is the DNN Face Detector and takes up only
about 2.7MB on disk. The function works as follows (a sketch of it appears after
this description):
Create a copy of frame and get its height and width.
Create a blob from the copy.
Set the input and make a forward pass through the network.
faceBoxes starts as an empty list. For each detection in the network's
output, read its confidence (a value between 0 and 1). Wherever the
confidence is greater than the confidence threshold, which is 0.7, we get
the x1, y1, x2, and y2 coordinates and append a list of those to faceBoxes.
Then, we draw rectangles on the image for each such list of
coordinates and return two things: the copy of the frame and the list of
faceBoxes.
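Putting that together, a sketch of highlightFace(); the 300x300 input size and the mean values [104, 117, 123] are the standard settings for this OpenCV face detector and are assumptions here:

    def highlightFace(net, frame, conf_threshold=0.7):
        frameOpencvDnn = frame.copy()
        frameHeight = frameOpencvDnn.shape[0]
        frameWidth = frameOpencvDnn.shape[1]
        # Blob at the detector's expected input size, with channel-mean subtraction
        blob = cv2.dnn.blobFromImage(frameOpencvDnn, 1.0, (300, 300),
                                     [104, 117, 123], True, False)
        net.setInput(blob)
        detections = net.forward()
        faceBoxes = []
        for i in range(detections.shape[2]):
            confidence = detections[0, 0, i, 2]
            if confidence > conf_threshold:
                # Detections hold coordinates as fractions of the frame size
                x1 = int(detections[0, 0, i, 3] * frameWidth)
                y1 = int(detections[0, 0, i, 4] * frameHeight)
                x2 = int(detections[0, 0, i, 5] * frameWidth)
                y2 = int(detections[0, 0, i, 6] * frameHeight)
                faceBoxes.append([x1, y1, x2, y2])
                cv2.rectangle(frameOpencvDnn, (x1, y1), (x2, y2),
                              (0, 255, 0), int(round(frameHeight / 150)), 8)
        return frameOpencvDnn, faceBoxes

The call from step 8 would then look like:

    resultImg, faceBoxes = highlightFace(faceNet, frame)
    if not faceBoxes:
        print("No face detected")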
9. But if there are indeed faceBoxes, then for each of them, we extract the face
and create a 4-dimensional blob from it. In doing this, we scale it, resize it,
and pass in the mean values, as sketched below.
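A sketch of this step; the 227x227 input size is what these Caffe models expect, and the 20-pixel padding around the face box is an assumption:

    padding = 20
    for faceBox in faceBoxes:
        # Crop the face with a margin, clamped to the frame boundaries
        face = frame[max(0, faceBox[1] - padding):
                     min(faceBox[3] + padding, frame.shape[0] - 1),
                     max(0, faceBox[0] - padding):
                     min(faceBox[2] + padding, frame.shape[1] - 1)]
        # 4-D blob: scaled, resized, with the mean values subtracted
        blob = cv2.dnn.blobFromImage(face, 1.0, (227, 227),
                                     MODEL_MEAN_VALUES, swapRB=False)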
10. We feed the blob to the gender network and make a forward pass to get the
confidences of the two classes. Whichever is higher, that is the gender of the
person in the picture.
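A sketch, reusing genderList and ageList from step 4; the age network is queried in exactly the same way:

    genderNet.setInput(blob)
    genderPreds = genderNet.forward()
    gender = genderList[genderPreds[0].argmax()]

    ageNet.setInput(blob)
    agePreds = ageNet.forward()
    age = ageList[agePreds[0].argmax()]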
12. We’ll add the gender and age text to the resulting image and display it
with imshow().
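A sketch of this final step; the font, color, and window title are arbitrary choices:

    # Inside the per-face loop from step 9, label the box on the result image
    cv2.putText(resultImg, f'{gender}, {age}', (faceBox[0], faceBox[1] - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv2.LINE_AA)
    cv2.imshow("Detecting age and gender", resultImg)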
We’ll go to the command prompt and run our script with the image option,
specifying an image to classify:
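For example (the script and image file names here are placeholders; substitute your own):

    python gad.py --image girl1.jpg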
Output: the script displays each example image with a rectangle drawn around
every detected face, labeled with the predicted gender and age range.