Karneesh CS
The predicted information is overlaid on the original image, providing a visual representation of the program's analysis. The annotated image is displayed to the user using matplotlib.
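As an illustration of this display step, the following minimal sketch overlays a prediction string on an image and shows the result with matplotlib; the file name "sample.jpg" and the label text are hypothetical, and the complete program appears under SOURCE CODE.

import cv2
import matplotlib.pyplot as plt

image = cv2.imread("sample.jpg")           # hypothetical input file
label = "Male, (25-32)"                    # example prediction string
cv2.putText(image, label, (20, 40),
            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))   # matplotlib expects RGB
plt.axis('off')
plt.show()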
OBJECTIVE
Images are selected through a Tkinter-based file dialog, ensuring ease of use for individuals exploring the application.
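A minimal sketch of this selection step (the full version is the select_image() function under SOURCE CODE; the file-type filter shown here is an assumption):

from tkinter import Tk, filedialog

Tk().withdraw()                                   # hide the empty root window
file_path = filedialog.askopenfilename(
    filetypes=[("Image files", "*.jpg *.jpeg *.png")])
print("Selected:", file_path)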
EXISTING SYSTEM
4. Demonstration of Integration: The code showcases the integration of multiple technologies, including OpenCV for image processing, deep neural networks for face detection and age-gender prediction, and multimedia libraries such as matplotlib and pygame. It serves as an example of how diverse tools can be seamlessly integrated.
INPUT DATA
Image File:
TYPES OF OUTPUT
1. Visual Output:
2. Text Output:
Console Messages: The code prints messages to the
console to communicate information such as whether
a face is detected or if there are any errors.
3. Interactive Input:
4. Audio Output: Background music is played with pygame while the application runs (see the sketch after this list).
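A minimal sketch of the audio output described in item 4, assuming pygame is installed and the track 'zoro.mp3' used in SOURCE CODE is available:

import pygame

pygame.mixer.init()
pygame.mixer.music.load('zoro.mp3')   # background track named in the source code
pygame.mixer.music.play(-1)           # -1 loops the track indefinitely
# ... run the application ...
pygame.mixer.music.stop()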
IMPLEMENTATION
Libraries Used:
Main Functions:
3. predict_gender_age(face, age_net, gender_net): Takes a face region and uses the age and gender prediction models to estimate the age category and gender of the person in that region.
4. select_image(): Uses Tkinter to open a file dialog, allowing the user to select an image file.
5. main(): Orchestrates the workflow: loads the models, plays the background music, asks the user to select an image, detects faces, predicts age and gender, and displays the annotated result (see the sketch after this list for how these functions chain together).
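A minimal sketch of how these functions chain together, assuming the definitions given under SOURCE CODE:

import cv2

face_net, age_net, gender_net = load_models()
image_path = select_image()
image = cv2.imread(image_path)
for x1, y1, x2, y2 in detect_faces(image, face_net):
    face_roi = image[y1:y2, x1:x2]                 # crop the detected face
    gender, predicted_age = predict_gender_age(face_roi, age_net, gender_net)
    print(gender, predicted_age)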
Overall Structure:
The face detection model is based on OpenCV's deep
neural network module (`cv2.dnn`).
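A minimal sketch of that detector, assuming the two model files named under SOURCE CODE are present; "sample.jpg" is a hypothetical input image:

import cv2

face_net = cv2.dnn.readNet("opencv_face_detector_uint8.pb",
                           "opencv_face_detector.pbtxt")
image = cv2.imread("sample.jpg")
blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300),
                             [104, 117, 123], swapRB=True, crop=False)
face_net.setInput(blob)
detections = face_net.forward()   # shape [1, 1, N, 7]; column 2 holds the confidence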
Usage:
Dependencies:
- Python
- OpenCV
- Matplotlib
- Tkinter
- Pygame
SYSTEM ANALYSIS
Record the origin of and the reason for every requirement. This is the first step in establishing traceability back to the customer.
Use multiple views of requirements. Building data, functional, and behavioral models provides the software engineer with three different views. This reduces the likelihood that something will be missed and increases the likelihood that inconsistencies will be recognized.
Rank requirements. Tight deadlines may preclude the implementation of every software requirement, so the requirements to be delivered in the first increment must be identified.
SYSTEM PLANNING
An open-ended approach, called evolutionary prototyping, uses the prototype as the first part of an analysis activity that will be continued into design and construction; the prototype of the software is the first evolution of the finished system.
FEASIBILITY STUDY
Technology:
This system is technically feasible because it runs on computers and recent technology. We use client/server technology, which is powerful and very user friendly.
Finance:
It is financially feasible. There is no need for extra spending; the system is built mainly from existing devices. Since we use Visual Studio .NET as a front end, it is powerful, small, and portable across platforms and operating systems at both the source and the binary level. This project also reduces labour costs.
Time:
This system's time-to-market beats the competition, because the system was developed within a short time span and works on time-based events. The time taken to access the account is very short and avoids the unnecessary waiting found in the traditional system. Although it takes less time, its performance is very good.
Resources:
This system uses well-known resources; there is no need for any special kind of resource. It uses only the required databases and tables.
SOFTWARE REQUIREMENTS
SOURCE CODE
import cv2
import pygame
import matplotlib.pyplot as plt
from tkinter import Tk, filedialog

# The mean values and label lists below are the standard ones published for
# these Caffe models; they are assumed here because the original listing omitted them.
MODEL_MEAN_VALUES = (78.4263377603, 87.7689143744, 114.895847746)
age_category = ['(0-2)', '(4-6)', '(8-12)', '(15-20)',
                '(25-32)', '(38-43)', '(48-53)', '(60-100)']
gender_list = ['Male', 'Female']

def load_models():
    # Load the pre-trained face detection and age/gender prediction networks.
    face_net = cv2.dnn.readNet("opencv_face_detector_uint8.pb",
                               "opencv_face_detector.pbtxt")
    age_net = cv2.dnn.readNet("age_net.caffemodel",
                              "age_deploy.prototxt")
    gender_net = cv2.dnn.readNetFromCaffe("gender_deploy.prototxt",
                                          "gender_net.caffemodel")
    return face_net, age_net, gender_net

def detect_faces(image, face_net, conf_threshold=0.7):
    # Run the face detector and return bounding boxes above the confidence threshold.
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300),
                                 [104, 117, 123], swapRB=True, crop=False)
    face_net.setInput(blob)
    detections = face_net.forward()
    face_boxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            x1 = int(detections[0, 0, i, 3] * w)
            y1 = int(detections[0, 0, i, 4] * h)
            x2 = int(detections[0, 0, i, 5] * w)
            y2 = int(detections[0, 0, i, 6] * h)
            face_boxes.append([x1, y1, x2, y2])
    return face_boxes

def predict_gender_age(face, age_net, gender_net):
    # Estimate gender and age category for a cropped face region.
    blob = cv2.dnn.blobFromImage(face, 1.0, (227, 227),
                                 MODEL_MEAN_VALUES, swapRB=False)
    gender_net.setInput(blob)
    gender_preds = gender_net.forward()
    gender = gender_list[gender_preds[0].argmax()]
    age_net.setInput(blob)
    age_preds = age_net.forward()
    predicted_age = age_category[age_preds[0].argmax()]
    return gender, predicted_age

def select_image():
    # Open a Tkinter file dialog and return the chosen image path.
    Tk().withdraw()
    file_path = filedialog.askopenfilename(
        filetypes=[("Image files", "*.jpg *.jpeg *.png")])
    return file_path

def main():
    face_net, age_net, gender_net = load_models()
    # Play background music in a loop while the application runs.
    pygame.mixer.init()
    pygame.mixer.music.load('zoro.mp3')
    pygame.mixer.music.play(-1)
    while True:
        image_path = select_image()
        if not image_path:
            break
        image = cv2.imread(image_path)
        if image is None:
            print("Could not read the selected image.")
            continue
        face_boxes = detect_faces(image, face_net)
        if not face_boxes:
            print("No face detected in the image.")
            continue
        for face_box in face_boxes:
            # Crop the face with a 15-pixel margin, clamped to the image borders.
            face_roi = image[max(0, face_box[1]-15):min(face_box[3]+15,
                             image.shape[0]-1),
                             max(0, face_box[0]-15):min(face_box[2]+15,
                             image.shape[1]-1)]
            gender, predicted_age = predict_gender_age(face_roi,
                                                       age_net, gender_net)
            # Overlay the prediction on the original image.
            cv2.rectangle(image, (face_box[0], face_box[1]),
                          (face_box[2], face_box[3]), (0, 255, 0), 2)
            cv2.putText(image, f"{gender}, {predicted_age}",
                        (face_box[0], face_box[1]-10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        plt.figure(figsize=(7, 7))
        plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
        plt.axis('off')
        plt.show()
        # Prompt wording is an assumption; the original only checks for 'yes'.
        choice = input("Analyse another image? (yes/no): ").strip().lower()
        if choice != 'yes':
            pygame.mixer.music.stop()
            break

if __name__ == "__main__":
    main()
OUTPUT
CONCLUSION
BIBLIOGRAPHY