Safety For Drivers Using OpenCV in Python
https://doi.org/10.22214/ijraset.2022.47254
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 10 Issue X Oct 2022- Available at www.ijraset.com
Abstract: The majority of road accidents in the country are caused by cell-phone use or drowsiness while driving. Drivers who travel long distances, or who drive trucks and taxis day and night, can become sleep-deprived and drowsy, which can lead to accidents. In this project we build a system using Python and OpenCV that detects the age and gender of the driver and detects whether a mobile phone is present in the vicinity. It also detects whether the driver is falling asleep and sounds an alarm accordingly. With this project, we aim to reduce the number of accidents happening around us.
Keywords: OpenCV, Keras, NumPy, Pygame, Face recognition, Object detection, CNN
I. INTRODUCTION
The goal of this paper is to develop Python code for driver safety that detects a person's age and gender. It also detects whether the person is falling asleep, by checking whether the eyes remain closed for a few seconds, or is using a mobile phone, and it alerts the driver with an alarm.
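The eye-closure check described above can be sketched as a simple frame counter: a score rises while both eyes stay closed, falls while they are open, and an alarm fires once the score crosses a threshold. The helper function below is an illustrative sketch of that idea (the threshold of 15 matches the one used later in the code listing):

```python
def update_score(score, left_closed, right_closed, threshold=15):
    """Update the drowsiness score for one video frame.

    The score rises while both eyes are closed and falls otherwise;
    the second return value says whether the alarm should sound.
    """
    if left_closed and right_closed:
        score += 1
    else:
        score = max(score - 1, 0)  # never let the score go negative
    return score, score > threshold

# Simulate 20 consecutive frames with both eyes closed
score, alarm = 0, False
for _ in range(20):
    score, alarm = update_score(score, True, True)
print(score, alarm)  # -> 20 True
```

Because the score decays frame by frame instead of resetting, an ordinary blink never reaches the threshold; only sustained eye closure does.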
II. PROCEDURE
A. Gender and Age Detection
Two custom CNNs are used, one for age-group estimation and one for gender estimation. Each network is trained over many face images for its classification task. The trained CNNs can be loaded with OpenCV and run on a live camera feed to detect age and gender. The CNNs are built using the Caffe deep learning framework.
Using Caffe, there are four steps to training a CNN:
1) Step 1: The first step is data preparation, where we clean the photos and store them in a Caffe-compatible format. We'll create a
Python script to take care of the pre-processing and storage of the images.
©IJRASET: All Rights are Reserved | SJ Impact Factor 7.538 | ISRA Journal Impact Factor 7.894 | 1549
2) Step 2: A CNN architecture is selected in this stage, and its parameters are defined in a configuration file with the .prototxt extension.
3) Step 3: Model optimization is the responsibility of the solver. The solver parameters are specified in a separate configuration file, also with the .prototxt extension.
4) Step 4: Training the model entails running a single Caffe command from the terminal. When training completes, we receive the trained model in a file with the .caffemodel extension.
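As an illustration of Step 3, a Caffe solver configuration might look like the following. The file names and hyper-parameter values here are hypothetical placeholders, not the exact settings used for our models:

```
net: "train_val_gender.prototxt"   # hypothetical network definition file
base_lr: 0.01            # starting learning rate
lr_policy: "step"        # drop the learning rate in fixed steps
gamma: 0.1               # multiply the learning rate by this at each drop
stepsize: 10000          # iterations between learning-rate drops
momentum: 0.9
weight_decay: 0.0005
max_iter: 50000          # total training iterations
snapshot: 5000           # save a .caffemodel every 5000 iterations
snapshot_prefix: "gender_net"
solver_mode: GPU
```

Running `caffe train --solver=solver.prototxt` with such a file produces the snapshot `.caffemodel` weight files referred to in Step 4.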
gender_net.caffemodel: The pre-trained model weights for gender detection.
deploy_gender.prototxt: The model architecture for the gender detection model (a plain-text file with a JSON-like structure containing all of the neural network layers' definitions).
res10_300x300_ssd_iter_140000_fp16.caffemodel: The pre-trained model weights for face detection.
deploy.prototxt.txt: The model architecture for the face detection model.
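As a sketch of how these files fit together, the gender network can be loaded and applied to a face crop with OpenCV's dnn module roughly as follows. The mean pixel values and the input size of 227x227 follow the common convention for these public Caffe age/gender models; treat the file paths as placeholders.

```python
GENDER_LIST = ['Male', 'Female']
# Mean pixel values commonly used with these public Caffe gender models
MODEL_MEAN = (78.4263377603, 87.7689143744, 114.895847746)

def label_from_scores(scores, labels=GENDER_LIST):
    """Map a network's output scores to the highest-scoring label."""
    best = max(range(len(labels)), key=lambda i: scores[i])
    return labels[best]

def classify_gender(face_img,
                    proto="deploy_gender.prototxt",
                    weights="gender_net.caffemodel"):
    """Run the pre-trained Caffe gender net on a BGR face crop."""
    import cv2  # imported here so the helper above stays dependency-free
    net = cv2.dnn.readNet(weights, proto)
    blob = cv2.dnn.blobFromImage(face_img, 1.0, (227, 227),
                                 MODEL_MEAN, swapRB=False)
    net.setInput(blob)
    preds = net.forward()
    return label_from_scores(preds[0])
```

The face detector files are used the same way: `cv2.dnn.readNet` takes the `.caffemodel` weights and the `.prototxt` architecture as a pair.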
B. Cellphone Detection
The cv::dnn::DetectionModel class from OpenCV's Deep Neural Network (dnn) module is used to detect the cellphone, after integrating a pre-trained object detection network with OpenCV.
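A minimal sketch of how cv::dnn::DetectionModel can be used from Python for this purpose is shown below, assuming an SSD MobileNet model trained on COCO (whose class list includes "cell phone"); the weight and config file names are placeholders. The filtering helper is kept separate from the OpenCV call so the detection logic stays easy to test.

```python
def find_cell_phones(class_ids, confidences, boxes, class_names,
                     min_conf=0.5):
    """Keep only detections whose COCO label is 'cell phone'.

    class_ids are 1-based COCO ids, as returned by DetectionModel.detect,
    so id N maps to class_names[N - 1].
    """
    hits = []
    for cid, conf, box in zip(class_ids, confidences, boxes):
        if conf >= min_conf and class_names[cid - 1].lower() == "cell phone":
            hits.append((conf, box))
    return hits

def detect_on_frame(frame, class_names,
                    weights="frozen_inference_graph.pb",
                    config="ssd_mobilenet_v3_large_coco.pbtxt"):
    """Run an SSD MobileNet detector on one frame; return phone detections."""
    import cv2  # local import: only this function needs OpenCV
    model = cv2.dnn_DetectionModel(weights, config)
    model.setInputSize(320, 320)            # network input resolution
    model.setInputScale(1.0 / 127.5)        # scale pixels to roughly [-1, 1]
    model.setInputMean((127.5, 127.5, 127.5))
    model.setInputSwapRB(True)              # BGR frame -> RGB network input
    ids, confs, boxes = model.detect(frame, confThreshold=0.5)
    return find_cell_phones(ids.flatten(), confs.flatten(), boxes, class_names)
```

If `detect_on_frame` returns a non-empty list, the main loop can draw the returned boxes and warn the driver.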
III. CODE
# Excerpt from the main per-frame loop: age/gender readout, eye-state
# classification, drowsiness scoring, and cellphone detection.
gender = genderList[genderPreds[0].argmax()]
print(f'Gender: {gender}')

ageNet.setInput(blob)
agePreds = ageNet.forward()
age = ageList[agePreds[0].argmax()]
print(f'Age: {age[1:-1]} years')

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (100, 100, 100), 1)

for (x, y, w, h) in right_eye:
    r_eye = frame[y:y + h, x:x + w]
    count = count + 1
    r_eye = cv2.cvtColor(r_eye, cv2.COLOR_BGR2GRAY)
    r_eye = cv2.resize(r_eye, (24, 24))
    r_eye = r_eye / 255
    r_eye = r_eye.reshape(24, 24, -1)
    r_eye = np.expand_dims(r_eye, axis=0)
    # rpred = model.predict_classes(r_eye)
    predict_x = model.predict(r_eye)
    rpred = np.argmax(predict_x, axis=1)
    if rpred[0] == 1:
        lbl = 'Open'
    if rpred[0] == 0:
        lbl = 'Closed'
    break

for (x, y, w, h) in left_eye:
    l_eye = frame[y:y + h, x:x + w]
    count = count + 1
    l_eye = cv2.cvtColor(l_eye, cv2.COLOR_BGR2GRAY)
    l_eye = cv2.resize(l_eye, (24, 24))
    l_eye = l_eye / 255
    l_eye = l_eye.reshape(24, 24, -1)
    l_eye = np.expand_dims(l_eye, axis=0)
    # lpred = model.predict_classes(l_eye)
    predict_x = model.predict(l_eye)
    lpred = np.argmax(predict_x, axis=1)
    if lpred[0] == 1:
        lbl = 'Open'
    if lpred[0] == 0:
        lbl = 'Closed'
    break

if rpred[0] == 0 and lpred[0] == 0:
    score = score + 1
    cv2.putText(frame, "Closed", (10, height - 20), font,
                1, (255, 255, 255), 1, cv2.LINE_AA)
    cv2.putText(frame, f'{gender}, {age}', (faceBox[0], faceBox[1] - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv2.LINE_AA)
    if len(classIds) != 0:
        for classId, confidence, box in zip(classIds.flatten(),
                                            confs.flatten(), bbox):
            if classNames[classId - 1].upper() == "CELL PHONE":
                cv2.rectangle(frame, box, color=(0, 255, 0), thickness=2)
                cv2.putText(frame, classNames[classId - 1].upper(),
                            (box[0] + 10, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)
# if(rpred[0]==1 or lpred[0]==1):
else:
    score = score - 1
    cv2.putText(frame, "Open", (10, height - 20), font,
                1, (255, 255, 255), 1, cv2.LINE_AA)
    cv2.putText(frame, f'{gender}, {age}', (faceBox[0], faceBox[1] - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv2.LINE_AA)
    if len(classIds) != 0:
        for classId, confidence, box in zip(classIds.flatten(),
                                            confs.flatten(), bbox):
            if classNames[classId - 1].upper() == "CELL PHONE":
                cv2.rectangle(frame, box, color=(0, 255, 0), thickness=2)
                cv2.putText(frame, classNames[classId - 1].upper(),
                            (box[0] + 10, box[1] + 30),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)

if score < 0:
    score = 0
cv2.putText(frame, 'Score:' + str(score), (100, height - 20), font,
            1, (255, 255, 255), 1, cv2.LINE_AA)
cv2.putText(frame, f'{gender}, {age}', (faceBox[0], faceBox[1] - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 255), 2, cv2.LINE_AA)
if len(classIds) != 0:
    for classId, confidence, box in zip(classIds.flatten(),
                                        confs.flatten(), bbox):
        if classNames[classId - 1].upper() == "CELL PHONE":
            cv2.rectangle(frame, box, color=(0, 255, 0), thickness=2)
            cv2.putText(frame, classNames[classId - 1].upper(),
                        (box[0] + 10, box[1] + 30),
                        cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)

if score > 15:
    # person is feeling sleepy so we beep the alarm
    cv2.imwrite(os.path.join(path, 'image.jpg'), frame)
    try:
        sound.play()
    except:  # isplaying = False
        pass
    if thicc < 16:
        thicc = thicc + 2
    else:
        thicc = thicc - 2
        if thicc < 2:
            thicc = 2
    cv2.rectangle(frame, (0, 0), (width, height), (0, 0, 255), thicc)
    # cv2.imshow('frame', frame)

if cv2.waitKey(1) & 0xFF == ord('q'):
    break
cv2.imshow("Detecting age and gender", frame)
IV. RESULTS
V. CONCLUSION
We have developed a Python program that runs successfully and helps ensure the safety of drivers. It raises an alarm when the person is falling asleep or when a mobile phone is detected, and it displays the gender and age of the person with about 80% accuracy. Haar-like features offer the potential for high accuracy at low computational cost. In the coming years, OpenCV is likely to become even better known among Python programmers in the IT industry.