

DRIVER DROWSINESS DETECTION BASED ON FACE FEATURE AND PERCLOS


V. Vijayalakshmi*, T. Kokila*, S. Lavanya*, M. Thithiya*, V. Vijayakumari**
*B.E. Student, Department of Computer Science Engineering, The Kavery College of Engineering
**Assistant Professor, Department of Computer Science Engineering, The Kavery College of Engineering, Salem

Abstract:- Driving a vehicle is a complex task that requires undivided attention to prevent road accidents. Fatigue and distraction are major risk factors that cause traffic accidents, severe injuries, and a high risk of death. Some progress has been made in driver drowsiness detection using contact-based methods that rely on vehicle parts (such as the steering angle and the pressure on the pedal) and physiological signals (electrocardiogram and electromyogram). However, a contactless system is more practical for real-world conditions. In this study, we propose a computer vision based method to detect a driver's drowsiness from video captured by a camera. The method first recognizes the face and then detects the eyes in every frame. From the detected eyes, the iris regions of the left and right eyes are used to calculate the PERCLOS measure (the percentage of total time that the eye is closed). The proposed method was evaluated on the public YawDD video dataset. The results show that the PERCLOS value when the driver is alert is lower than when the driver is drowsy.

Keywords – Facial Images, PERCLOS, ResNet architecture, YawDD, electrocardiogram, traffic accidents, driver drowsiness detection.

1. INTRODUCTION

In today's fast moving world, people depend heavily on their means of transport. Feeling drowsy and fatigued during a long drive or after a short night's sleep is common. This physical feeling of tiredness brings down the driver's level of concentration. Such conditions are unfavourable while driving and result in an increase in accidents. Driver drowsiness and exhaustion are prime contributors to road accidents, and the number of car accidents caused by driver drowsiness is increasing at a shocking pace. Recent figures indicate that 10% to 40% of all road accidents are due to drivers feeling exhausted and sleepy. In the trucking industry, about 60% of fatal accidents are caused by driver fatigue. For the reasons stated above, it is important to develop systems that continuously monitor the driver's concentration on the road and level of drowsiness, and alert the driver. Researchers and innovators have been working on producing such systems for the betterment of the human race. Years of research show that the best predictors of such behaviour are physical factors like breathing, heart rate, pulse rate, brain waves, etc. Such systems never made it to public use because they required sensors and electrodes to be attached to the driver's body, causing frustration. Some representative projects in this line are the MIT Smart Car and the ASV (Advanced Safety Vehicle) project carried out by Toyota, Nissan and Honda. Other proposed systems monitored the movement of the pupils and of the head using specialised helmets and optical lenses. Even though such systems were not intrusive, they were not accepted because their production costs were prohibitive. Some indirect methods were also introduced to detect drowsiness in a driver by reading the maneuvering of the steering wheel, the positioning of the wheel axles, etc. These systems were also not adopted because of other difficulties, such as dependence on the type of vehicle, environmental conditions, driver experience, geometric aspects, the state of the road, etc. Moreover, the time these methods take to analyse driver behaviour is too long, so they cannot respond to eye blinks or micro-sleeps. In this line we can find an important Spanish project called TCD (Tech CO Driver) and the Mitsubishi advanced safety vehicle system.

People with exhaustion or fatigue show visual behaviours that are easily noticeable in the physical features of the face, such as the eyes and the movement of the face and head. Computer vision is a non-intrusive and natural approach to monitoring the driver's vigilance. In this context, it is critical to use new and better technologies to design and build systems that are able to monitor drivers and compute their level of concentration during the whole process of driving. In this project, a module for Advanced Assistance to Driver Drowsiness (AADD) is presented in order to reduce the number of accidents caused by driver drowsiness and thus improve transport safety. The system automatically detects driver drowsiness using machine vision and artificial intelligence. We present an algorithm to capture, locate and analyse both the driver's face and eyes in order to measure PERCLOS (percentage of eye closure).

2. SYSTEM ANALYSIS

2.1 Existing System

The existing system evaluates whether changes in the eye-steering correlation can indicate distraction. The auto-correlation and cross-correlation of the horizontal eye position and the steering wheel angle show that eye movements associated with road scanning produce a low eye-steering correlation. The eye-steering correlation describes this relationship on a straight road; a straight road leads to a low correlation between steering movement and eye glances. The aim of this system is to detect driver distraction based on the visual behaviour or the performance of the driver, and for this purpose it defines the relationship between visual behaviour and vehicle control. The system evaluates the eye-steering correlation on a straight road under the assumption that it shows a qualitatively and quantitatively different relationship compared with a curvy road and that it is sensitive to distraction. The relationship between visual behaviour and vehicle control reflects a fundamental perception-control mechanism that plays a major role in driving, and a strong eye-steering correlation associated with this process has been observed on curvy roads.
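To make the eye-steering correlation concrete, the following minimal sketch computes a zero-lag correlation and a full cross-correlation for two synthetic signals standing in for the horizontal eye position and the steering wheel angle. The signals, the lag and the normalisation are illustrative assumptions, not data or code from this paper.

```python
import numpy as np

# Illustrative signals: horizontal eye position and steering wheel angle over time.
t = np.linspace(0, 10, 500)
eye_position = np.sin(t)            # driver's gaze sweeping with the road
steering_angle = np.sin(t - 0.3)    # steering follows the gaze with a short lag

# Zero-lag (Pearson) correlation: high on curvy roads, low on straight roads.
zero_lag = np.corrcoef(eye_position, steering_angle)[0, 1]

# Full cross-correlation of the mean-removed signals; the position of its peak,
# relative to the centre, indicates the relative lag between the two signals.
e = eye_position - eye_position.mean()
s = steering_angle - steering_angle.mean()
xcorr = np.correlate(e, s, mode="full") / (np.std(e) * np.std(s) * len(e))
peak_offset = np.argmax(xcorr) - (len(e) - 1)

print(f"zero-lag correlation: {zero_lag:.2f}, peak offset: {peak_offset} samples")
```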
2.2 Proposed System

In the proposed system, driver fatigue and distraction are detected by processing only the eye region. The main symptoms of driver fatigue and distraction appear in the driver's eyes when the driver falls asleep at the wheel. Among the many fatigue detection methods available today, the most effective approach is to capture the eyes in real time with a web camera and detect the physical responses of the eyes. Moreover, processing the eye region instead of the whole face region has lower computational complexity.
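As an illustration of how the PERCLOS measure used in this work could be computed from per-frame eye-closure decisions, here is a minimal sketch. The PerclosMeter name, the window size and the example threshold are assumptions for illustration, not the authors' implementation.

```python
from collections import deque


class PerclosMeter:
    """Tracks PERCLOS: the fraction of recent frames in which the eye was closed."""

    def __init__(self, window_size=150):
        # window_size: number of recent frames considered (roughly 5 s at 30 fps)
        self.closed_flags = deque(maxlen=window_size)

    def update(self, eye_closed):
        """Record whether the eye was judged closed in the current frame."""
        self.closed_flags.append(bool(eye_closed))

    def value(self):
        """Return PERCLOS as a percentage of the frames seen so far."""
        if not self.closed_flags:
            return 0.0
        return 100.0 * sum(self.closed_flags) / len(self.closed_flags)


# Example: eyes closed in 40 of the last 150 frames gives PERCLOS of about 27%,
# which would exceed a typical alertness threshold.
meter = PerclosMeter(window_size=150)
for i in range(150):
    meter.update(eye_closed=(i % 15 < 4))
print(meter.value())
```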
3. SYSTEM DESIGN

System design is the process or art of defining the architecture, components, modules, interfaces, and data of a system to satisfy specified requirements. This chapter deals with the various designs and functions of the system.

3.1 System Architecture

Fig. 3.1 Architecture Diagram

3.2 Usecase Diagram

[Figure: use case diagram in which the driver and camera actors interact with the use cases start detection (OpenCV), face detection, eye detection, find eye region, eye parameter calculation, drowsiness level determination, and intimation.]

Fig. 3.2 Usecase Diagram

A use case diagram is usually referred to as a behavior diagram, used to describe a set of actions that some system should or can perform in collaboration with one or more external users of the system.

3.3 Sequence Diagram

A sequence diagram is an interaction diagram that shows how objects operate with one another and in what order. It is a construct of a message sequence chart. A sequence diagram shows object interactions arranged in time sequence.

[Figure: sequence diagram between the camera and the driver: start camera, load the OpenCV library, find the face and eyes with the help of a Haar cascade, face detected, drowsiness level calculated, intimation via alert.]

Fig. 3.3 Sequence Diagram

3.4 Class Diagram

A class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes and their operations.

[Figure: class diagram with an OpenCV camera class, an eye detection class with a find operation, and a drowsiness level determination class that decides whether the driver is asleep and triggers intimation.]

Fig. 3.4 Class Diagram
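The structure summarised in Fig. 3.4 could be expressed in Python roughly as follows. The class and method names follow the diagram labels; the method bodies, cascade handling and threshold are placeholder assumptions rather than the authors' code.

```python
import cv2


class EyeDetection:
    """Wraps the OpenCV Haar cascades used to locate the face and the eye region."""

    def __init__(self, face_cascade_path, eye_cascade_path):
        self.face_cascade = cv2.CascadeClassifier(face_cascade_path)
        self.eye_cascade = cv2.CascadeClassifier(eye_cascade_path)

    def find_eye_region(self, gray_frame):
        """Return eye bounding boxes (x, y, w, h) inside the first detected face."""
        for (x, y, w, h) in self.face_cascade.detectMultiScale(gray_frame, 1.3, 5):
            return self.eye_cascade.detectMultiScale(gray_frame[y:y + h, x:x + w])
        return []


class DrowsinessLevelDetermination:
    """Decides whether the driver is asleep from the recent eye-closure history."""

    def __init__(self, perclos_threshold=20.0):
        self.perclos_threshold = perclos_threshold   # percent; illustrative value

    def find_operation(self, perclos_value):
        """Return True (drowsy, trigger intimation) when PERCLOS exceeds the threshold."""
        return perclos_value > self.perclos_threshold
```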

3.5 Collaboration Diagram

A collaboration diagram, also known as a communication diagram, is an illustration of the relationships and interactions among software objects in the Unified Modeling Language (UML).

[Figure: collaboration diagram between the driver and the camera: 1: start camera; 2: load the OpenCV library; 3: find the face and eyes with the help of a Haar cascade; 4: face detected; 5: drowsiness level calculated; 6: intimation via alert.]

Fig. 3.5 Collaboration Diagram
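The message flow of Fig. 3.3 and Fig. 3.5 (start the camera, load the OpenCV cascades, find the face and eyes, calculate the drowsiness level, and raise an alert) could be wired together as in the sketch below. The cascade files, the closed-eye heuristic and the 20% PERCLOS threshold are illustrative assumptions, not the authors' implementation.

```python
import cv2

# Standard cascade files shipped with OpenCV; the eye cascade is an assumption,
# since the paper only names haarcascade_frontalface_alt.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_alt.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                 # 1: start camera (2: OpenCV already loaded)
closed_history = []                       # per-frame eye-closed flags for PERCLOS

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)      # 3: find face
    eyes = []
    for (x, y, w, h) in faces:                               # 4: face detected
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        break
    # Crude closed-eye proxy: a detected face in which no eyes are found.
    closed_history.append(len(faces) > 0 and len(eyes) == 0)
    closed_history = closed_history[-150:]                   # keep roughly 5 s at 30 fps
    perclos = 100.0 * sum(closed_history) / len(closed_history)  # 5: drowsiness level
    if perclos > 20.0:                                       # illustrative threshold
        print("ALERT: driver appears drowsy")                # 6: intimation via alert
    cv2.imshow("driver", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```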
4. SYSTEM IMPLEMENTATION

4.1 Modules

• start detection (camera, OpenCV)
• driver eye detection
• eye parameters calculation
• drowsiness level determination
• intimation

4.2 Start Detection (CAMERA OPENCV)

• This is the first module of the system; it is used to open a camera with the OpenCV library (a minimal sketch follows below).
• Once the camera is initialized, it is ready to detect the human face, i.e. the driver's face.
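A minimal sketch of this first module, assuming the default webcam at index 0:

```python
import cv2

cap = cv2.VideoCapture(0)        # open the default camera through OpenCV
if not cap.isOpened():
    raise RuntimeError("Camera could not be opened")

ok, frame = cap.read()           # grab one frame to confirm the stream is live
print("camera ready:", ok, "frame shape:", frame.shape if ok else None)
cap.release()
```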

4.3 Driver Eye Detection

• This is the second module; with its help the human eye is detected through the haarcascade_frontalface_alt XML file.
• With the help of this Haar cascade file, the x, y coordinates of the eye are found (a sketch follows below).
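A sketch of how the face cascade named above, together with an eye cascade, could yield the eye coordinates. The detect_eye_coordinates helper and the use of haarcascade_eye.xml are assumptions added for illustration.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_alt.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")   # assumed companion cascade


def detect_eye_coordinates(frame):
    """Return a list of (x, y, w, h) eye boxes in full-frame coordinates."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes_found = []
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_roi = gray[fy:fy + fh, fx:fx + fw]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi):
            # Offset the eye box back into the coordinate system of the whole frame.
            eyes_found.append((fx + ex, fy + ey, ew, eh))
    return eyes_found
```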
4.4 Eye Parameters Calculation

• This module recognizes the face of the user. If the user covers the face, for example with a helmet, the module detects whether the face is covered or uncovered (see the sketch below).
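One simple way this covered/uncovered check could be realised with the same Haar cascades, under the assumption that a face or eye region that stays undetectable for several consecutive frames is treated as covered:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_alt.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")


def face_covered(frames, min_misses=10):
    """Return True when the face or eyes stay undetectable for min_misses consecutive frames."""
    misses = 0
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        eyes = []
        for (x, y, w, h) in faces:
            eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
            break
        misses = misses + 1 if (len(faces) == 0 or len(eyes) == 0) else 0
        if misses >= min_misses:
            return True
    return False
```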
5. SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations, and each type of test addresses a specific testing requirement.

5.1 Unit Testing

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results. Unit testing is usually conducted as part of a combined code and test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.
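As a concrete example in this spirit, a PERCLOS helper could be unit-tested in isolation as follows (pytest-style; the perclos function here is a self-contained illustration, not the authors' code):

```python
# test_perclos.py -- run with `pytest`
def perclos(closed_flags):
    """Percentage of frames in which the eye was closed."""
    return 100.0 * sum(closed_flags) / len(closed_flags) if closed_flags else 0.0


def test_all_eyes_open_gives_zero():
    assert perclos([False] * 30) == 0.0


def test_half_closed_gives_fifty_percent():
    assert perclos([True, False] * 15) == 50.0


def test_empty_history_is_handled():
    assert perclos([]) == 0.0
```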
5.2 Test Strategy and Approach

Field testing will be performed manually and functional tests will be written in detail.

5.3 Test Objectives

• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.

5.4 Features to Be Tested

• Verify that the entries are of the correct format.
• No duplicate entries should be allowed.
• All links should take the user to the correct page.

5.5 Integration Testing

Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that, although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
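A sketch of an integration test that runs the frame format, the Haar cascade and the PERCLOS calculation together on synthetic frames; the blank-frame heuristic and the expected value are assumptions made for illustration.

```python
# test_pipeline_integration.py -- run with `pytest` (requires opencv-python and numpy)
import cv2
import numpy as np


def test_cascade_and_perclos_run_together_on_synthetic_frames():
    """Blank frames should yield no detections, driving the PERCLOS proxy to 100%."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_alt.xml")
    assert not face_cascade.empty()                      # cascade file loaded

    closed_flags = []
    for _ in range(10):
        frame = np.zeros((240, 320, 3), dtype=np.uint8)  # synthetic black frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        closed_flags.append(len(faces) == 0)             # no visible face or eyes

    perclos = 100.0 * sum(closed_flags) / len(closed_flags)
    assert perclos == 100.0                              # blank frames count as fully closed
```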

5.6 Functional Test

Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input: identified classes of valid input must be accepted.
Invalid Input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage of business process flows, data fields, predefined processes, and successive processes must be considered for testing.

5.7 System Test

System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

5.8 Acceptance Testing

User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

6. CONCLUSION

In this paper, we have presented the concept and implemented a system to detect driver drowsiness using computer vision, which notifies the driver if he is drowsy. The proposed system has the capability to detect the real-time state of the driver in day and night conditions with the help of a camera. The detection of the face and eyes is applied based on symmetry. We have developed a non-intrusive prototype of a computer vision-based system for real-time monitoring of the driver's drowsiness.

7. SCREENSHOTS

