
DRIVER DROWSINESS DETECTOR USING

MACHINE LEARNING
A MINI PROJECT REPORT
Submitted by

KUMARA VISHNU. E 142221205056


KARTHIKEYAN. M 142221205050
VINOTH RAJ. V 142221205310

In partial fulfilment for the award of the degree


of
BACHELOR OF TECHNOLOGY
in
INFORMATION TECHNOLOGY

SRM VALLIAMMAI ENGINEERING COLLEGE


(AN AUTONOMOUS INSTITUTION)

SRM NAGAR, KATTANKULATHUR,


CHENGALPATTU

ANNA UNIVERSITY: CHENNAI 600 025

APRIL 2024
ANNA UNIVERSITY: CHENNAI-600025

BONAFIDE CERTIFICATE

Certified that this project report “Driver Drowsiness Detection Using Machine
Learning” is the bonafide work of “E. KUMARA VISHNU (142221205056),
M. KARTHIKEYAN (142221205050) and V. VINOTH RAJ (142221205310)”,
who carried out the project work under my supervision.

SIGNATURE SIGNATURE

DR. S. NARAYANAN DR. S. SEKAR


HEAD OF DEPARTMENT SUPERVISOR
Associate Professor Assistant Professor
Department of Information Technology Department of Information Technology
SRM Valliammai Engineering College SRM Valliammai Engineering College
(An Autonomous Institution) (An Autonomous Institution)
SRM Nagar, Kattankulathur, SRM Nagar, Kattankulathur,
Chengalpattu - 603 203. Chengalpattu - 603 203.

Submitted for the viva voce held on ………………………


INTERNAL EXAMINER EXTERNAL EXAMINER
ACKNOWLEDGMENT

First and foremost, we would like to extend our heartfelt respect and gratitude
to the Management, Director Dr. B. Chidhambara Rajan, M.E., Ph.D., Principal
Dr. M. Murugan, and Vice Principal Dr. S. Visalakshi, M.E., Ph.D., who
helped us in our endeavours.

We also extend our heartfelt respect to our beloved Dr. S. NARAYANAN,
B.E., M.Tech., Ph.D., Associate Professor and Head of the Department, for
offering sincere support throughout the project work.

We thank our Project Coordinator Dr. S. SEKAR, M.E., Ph.D., Assistant
Professor (Sel.G), for his consistent guidance and encouragement throughout
the progress of the project.

We thank our Project Guide Dr. S. SEKAR, M.E., Ph.D., Assistant
Professor (Sel.G), for his valuable guidance, support, and active interest in the
successful implementation of the project.

We also thank all the Teaching and Non-Teaching staff members of
our department for their constant support and encouragement throughout the
course of this project work.

Finally, the constant support from our lovable Parents and Friends is untold
and immeasurable.

ABSTRACT
Drowsiness and tiredness are primary factors contributing to traffic accidents.
Precautions can be taken to avoid them, such as ensuring sufficient sleep before
driving, having coffee or an energy drink, or taking a break when drowsiness is
noticed. Conventional drowsiness detection relies on intricate techniques such
as EEG and ECG; although highly accurate, these require contact sensors and
are poorly suited to monitoring driver fatigue and drowsiness in real-time
driving situations. This paper presents an alternative approach to
identifying signs of drowsiness in drivers by assessing the rate of eye closure
and yawning. The study details the process of detecting eyes and mouth
movements from video recordings of an experiment carried out by IIROS
(Indian Institute of Road Safety). The participant engages in a driving
simulation while being filmed by a webcam positioned in front of the simulator.
The video footage captures the progression from alertness to fatigue and
eventually drowsiness. The system is designed to recognize facial features in
these images, focusing on isolating the eyes and mouth. By detecting the face
area, the program can identify and analyse the eyes and mouth through
algorithms for detecting the left and right eyes and the mouth. The analysis of
eye and mouth movements is based on frames extracted from the video footage.
The positions of the eyes and mouth can be detected using this method. When
the eyes are located, changes in their intensity serve as indicators of open or
closed eyes. If the eyes remain closed for four consecutive frames, it indicates
that the driver is experiencing drowsiness.

Keywords: Driver drowsiness; Eye detection; Yawn detection; Blink pattern;


Fatigue
TABLE OF CONTENTS

CHAPTER NO TITLE PAGE NO

ABSTRACT

LIST OF TABLES

LIST OF FIGURES

LIST OF ABBREVIATIONS

1 INTRODUCTION

1.1 Introduction
1.2 Objective
1.3 Significance of project
1.4 Background of study
1.5 Problem statement
1.6 Scope of study
1.7 Relevancy of the project

2 LITERATURE REVIEW

2.1 Literature review

2.2 Drowsiness and fatigue

2.3 Electroencephalography (EEG)

Detection

2.4 Face drowsiness detection system

2.5 PERCLOS (Percentage of Eye Closure)
2.6 Yawning detection system
3 SYSTEM DESIGN
3.1 Use case diagram
3.2 Architecture diagram
3.3 Activity diagram
3.4 Class diagram
3.5 Data flow diagram
3.6 Sequence diagram
3.7 Collaboration diagram

4 METHODOLOGY
4.1 Functional requirement
4.2 Non-functional requirement
4.3 System configuration
4.3.1 Software requirements
4.3.2 Hardware requirements
4.4 Technology used
4.5 Packages used
4.5.1 OpenCV
4.5.1.1 OpenCV-Python
4.5.2 NumPy
4.5.2.1 What is NumPy
4.5.2.2 Why use NumPy
4.5.2.3 Why is NumPy faster than lists
4.5.2.4 Which language is NumPy
written in
4.5.3 Pandas
4.5.3.1 Working with Pandas
4.5.4 TensorFlow
4.5.4.2 What is TensorFlow and how it
is used
4.5.4.3 Why is it called TensorFlow
4.5.4.4 What is the TensorFlow backend

5 IMPLEMENTATION
5.1 Important libraries
5.2 Driver alertness monitoring dataset
5.3 Face detection
5.4 Eye state tracking
5.5 Yawning detection
5.6 Alert system module
5.7 Head tilt alert
5.8 Real time monitoring module
5.9 Data logging and storage module

6 INTEGRATION AND TESTING
6.1 Testing
6.2 Types of tests
6.3 Function test
6.4 System testing
6.5 White box testing
6.6 Unit testing
6.7 Integration
6.8 Acceptance testing
6.9 Test objectives

7 RESULT AND DISCUSSION

7.1 Result
8 CONCLUSION AND FUTURE SCOPE
8.1 Conclusion
8.2 Future scope

APPENDIX
SOURCE CODE

REFERENCE

LIST OF TABLES

TABLE NO TABLE NAME PAGE NO

2.1 Literature review
6.1 Test case result 1
6.2 Test case result 2
LIST OF FIGURES

FIGURE NO DESCRIPTION PAGE NO

3.1 Use case diagram
3.2 Architecture diagram
3.3 Activity diagram
3.4 Class diagram
3.5 Data flow diagram
3.6 Sequence diagram
3.7 Collaboration diagram

LIST OF ABBREVIATIONS

ABBREVIATION EXPANSION
EEG Electroencephalography
PERCLOS Percentage of Eye Closure
CHAPTER 01
INTRODUCTION
1.1 INTRODUCTION

Driver fatigue is a critical factor in countless road accidents. Recent
estimates attribute roughly 1,200 deaths and 76,000 injuries every year to
fatigue-related crashes. Driver drowsiness and fatigue are therefore major
contributors to vehicle accidents. Developing and maintaining technologies
that can effectively detect or prevent drowsiness at the wheel, and alert the
driver before a disaster, is a significant challenge in the field of accident
prevention systems. Because of the danger drowsiness poses on the road,
methods must be developed to prevent and monitor its effects. With the
advent of modern technology and real-time scanning systems using cameras,
major road disasters can be prevented by warning drivers who feel drowsy
through a drowsiness detection system. The aim of this endeavour is to
develop a prototype drowsiness detection system. The focus is on designing
a system that will accurately and continuously monitor the open or closed
state of the driver's eyes. By observing the eyes, it is believed that the
symptoms of driver fatigue can be detected early enough to avoid a crash.
Detection of fatigue involves observing eye movements and blink patterns
across a sequence of images of a face.

Detection systems have been designed based on estimating the driver's
drowsiness, which can be observed by a camera. The behaviour-based
approach detects drowsiness by applying image processing to the driver's
facial movements captured by cameras.
1.2 OBJECTIVES

Driver safety is today one of the most sought-after systems for avoiding
accidents. The goal of this undertaking is to strengthen that safety framework.
To improve safety, we detect the driver's eye blinks, assess the driver's state,
and control the vehicle accordingly. The project primarily centres on these
objectives:

1. To recommend approaches for detecting fatigue and sleepiness while
driving.

2. To examine the eyes and mouth in the video images of participants in the
driving-simulation experiment conducted by IIROS, which can be used as
indicators of fatigue and drowsiness.

3. To examine the physical signs of fatigue and drowsiness.

4. To build a system that uses eye closure and yawning as a way to detect
fatigue and drowsiness.

5. To enable the vehicle's speed to be reduced.

6. To support traffic management by reducing accidents.


1.3 SIGNIFICANCE OF THE PROJECT

The driver drowsiness detection project holds significant importance


across various domains. Primarily, it addresses a critical aspect of road safety by
mitigating the risks associated with driver fatigue, a leading cause of accidents
worldwide. By leveraging machine learning algorithms, the project aims to
detect signs of drowsiness in real-time, thereby alerting drivers and preventing
potential accidents, ultimately saving lives and reducing injuries. Moreover, the
project represents a notable advancement in technological innovation within the
automotive industry, showcasing the practical application of AI and data science
to improve vehicle safety. Economically, the project has implications in terms of
reducing the substantial financial losses incurred due to road accidents caused
by drowsy driving, encompassing property damage, medical expenses, and
productivity losses. Additionally, the project aligns with regulatory standards
and guidelines related to road safety, contributing to compliance with industry
regulations mandating the implementation of safety measures in vehicles.
Furthermore, the project serves as a platform for raising public awareness about
the dangers of drowsy driving and promoting responsible driving behaviour
through education and technology-based solutions. Overall, the project's
significance lies in its multifaceted impact on enhancing road safety, fostering
technological innovation, and promoting societal well-being.

1.4 BACKGROUND OF STUDY

Every year there is an increase in road accident cases involving
vehicles large and small, such as cars, lorries and trucks. Drowsiness and
fatigue are among the outstanding factors contributing to road casualties.
Driving in this condition can have dreadful consequences because it impairs
the driver's judgement and concentration. Falling asleep at the wheel can be
avoided if drivers take measures such as getting adequate rest before driving,
taking caffeine, or stopping for a while to rest when the signs of fatigue and
drowsiness appear. In practice, however, drivers often will not take any of
these measures even when they realise that they are affected by fatigue, and
will continue to drive. Detecting drowsiness is therefore crucial as one of the
ways to prevent road accidents. This project proposes that yawning and eye
closure are the obvious signs of fatigue and drowsiness.

1.5 PROBLEM STATEMENT

The aim is to design a prototype Drowsiness Detection system that will
focus on continuously and accurately monitoring the state of the driver's
eyes in real time, to check whether they remain closed for more than a given
period. Current drowsiness detection systems that monitor the driver's
condition demand complex computation and costly equipment that is not
comfortable to wear while driving and is not suitable for real driving
conditions, for instance Electroencephalography (EEG) and
Electrocardiography (ECG), which detect brain-wave frequency and measure
heart rhythm respectively. A driver drowsiness detection system that uses a
camera placed in front of the driver is more suitable for real use, but the
physical signs that indicate drowsiness must first be identified in order to
arrive at a drowsiness detection algorithm that is reliable and accurate.
Lighting intensity, and the driver tilting their face left or right, are the issues
that arise during detection of the eye and mouth regions.
1.6 SCOPE OF STUDY

In this project, we will focus on the following procedures:

 Basic concept of drowsiness detection system

 Familiarize with the signs of drowsiness

 Determine drowsiness from these parameters: eye blink, area of the pupils
detected at the eyes, and yawning.

 Data collection and measurement.

 Integration of the methods chosen.

 Coding development and testing.

 Complete testing and improvement.

1.7 RELEVANCY OF THE PROJECT

This project is relevant because fatigued and drowsy drivers contribute a
significant share of road accidents. Much research has been conducted on
implementing safe-driving systems in order to reduce road accidents.
Detecting the driver's alertness and drowsiness is an efficient way to prevent
road accidents. With this system, drivers who are drowsy will be alerted by
an alarm to restore the consciousness, attention and concentration of the
driver. This will help to reduce the number of road accidents. This project is
an active topic that is still being enhanced and improved by researchers, and
it can be applied in many areas such as detecting the attention level of
students in classrooms and lectures. It is also relevant to the three authors'
field of study, since it requires them to apply and combine knowledge of
electronics, programming and algorithms.
CHAPTER 02
LITERATURE REVIEW
There have been many previous studies on driver drowsiness detection
systems that can be used as references for developing a real-time system for
detecting driver drowsiness. Several methods take different approaches to
detecting the signs of drowsiness. According to IIROS (Indian Institute of
Road Safety), from 2007 until 2020 there were 2,300 road accident cases
investigated by the IIROS crash team [1].

TITLE: Driver Inattention Monitoring System for Intelligent Vehicles
AUTHORS: Y. Dong, Z. Hu, Uchimura and N. Murayama
METHODOLOGY: The complete information process is carried out by an
algorithm, and the system compares the result to a reference value.
YEAR: 2020

TITLE: Real-Time Driver-Drowsiness Detection System Using Facial Features
AUTHORS: WANGHUA DENG AND RUOXUE WU
METHODOLOGY: Proposed a system called DriCare, which detects the
driver's fatigue status and uses a new algorithm for face tracking.
YEAR: 2019

TITLE: Detecting driver drowsiness based on sensors
AUTHORS: A. Sahayadhas, K. Sundaraj, M. Murugappan
METHODOLOGY: Uses infrared rays to detect drowsiness; more than 80%
of the test results passed.
YEAR: 2018

There have also been numerous experiments with OpenCV for Android,
which is available even for inexpensive mobile phones. Various experiments
have achieved maximum accuracy when the camera was positioned at
different locations. OpenCV is predominantly a means of real-time image
processing that offers free-of-cost implementations of the most recent
computer vision algorithms. It has all the necessary computer vision
algorithms.

2.1 LITERATURE REVIEW


2.2 DROWSINESS AND FATIGUE

Antoine Picot et al. stated that drowsiness is the state in which a person is
between being awake and asleep. This condition leads the driver to stop
paying attention to their driving. Consequently, the vehicle can no longer be
controlled because the driver is in a semi-conscious state. According to
Gianluca Borghini et al., mental fatigue is a factor in drowsiness, and it
leaves the person who experiences it unable to perform, because it reduces
the efficiency of the brain in responding to unexpected events.

2.3 ELECTROENCEPHALOGRAPHY (EEG) DETECTION

Figure 2.1: EEG Data Collecting Samples

Electroencephalography (EEG) is a technique that measures the brain's
electrical activity. It can be used to measure heartbeat, eye blinks and even
major physical movement such as head movement. It can be applied to
humans or animals as subjects to obtain brain activity. It uses special
equipment that places sensors around the top of the head to sense any
electrical brain activity. Authors have noted that, of the methods applied by
past researchers to detect signs of sleepiness, the EEG method is well suited
to drowsiness and fatigue detection. In this technique, EEG has four types of
frequency components that can be analysed: alpha, beta, theta and delta.
When the power increases in the alpha and delta frequency bands, it shows
that the driver is experiencing fatigue and drowsiness. The drawback of this
method is that it is highly sensitive to noise around the sensors. For instance,
while a person is undergoing the EEG experiment, the surrounding area
must be completely silent, as noise will interfere with the sensors that detect
brain activity. Another weakness of this method is that even though the result
may be accurate, it is not practical for real driving applications. Imagine a
person driving while wearing something on their head full of wires: when
the driver moves their head, the wires may come loose. Although it is not
convenient for real-time driving, for experimental purposes and data
collection it remains one of the best methods so far.
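The alpha/delta band-power comparison described above can be illustrated with a minimal NumPy sketch. This is not part of the report's system; the sampling rate, band edges, and the synthetic one-channel signal below are all assumptions for illustration only.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean spectral power of `signal` in the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return power[mask].mean()

# Synthetic 2-second EEG-like trace sampled at 256 Hz: a strong 10 Hz
# (alpha-band) component plus a little noise.
fs = 256
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))

alpha = band_power(eeg, fs, 8, 13)   # alpha band (8-13 Hz)
beta = band_power(eeg, fs, 13, 30)   # beta band (13-30 Hz)
print(alpha > beta)  # True: alpha dominates in this synthetic trace
```

A rise of `alpha` (and delta) power relative to the waking bands is the fatigue indicator the EEG literature cited above relies on.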

2.4 FACE DROWSINESS DETECTION SYSTEM


Drowsiness can be detected using face-region detection. The methods for
identifying drowsiness within the face region vary. Since the signs of
drowsiness are more visible and clearer in the face region, we can detect the
eye area. From eye detection, the authors state that there are four types of
eyelid movement that can be used for drowsiness detection: fully open, fully
closed, and the transitions in between, where the eyes go from open to
closed and vice versa. The algorithm processes the captured image in
grayscale, and the image is then converted to black and white. Working with
black-and-white images is easier because only two values have to be
measured. The authors then perform edge detection to find the edges of the
eyes so that the area of the eyelid region can be calculated. The problem
with this method is that the size of the eye region may vary from one person
to another. Someone may have small eyes and appear drowsy when they are
not. Besides that, if the person is wearing glasses, it is harder to detect the
eye region. The captured images must also be within a certain range of the
camera, because when the distance from the camera is large, the images
are blurred.

2.5 PERCLOS (PERCENTAGE OF EYE CLOSURE)


Drowsiness can be captured by detecting eye blinks and the percentage of
eye closure (PERCLOS). For eye-blink detection, a method is proposed that
learns the pattern of how long the eyelids stay closed. The method measures
the time for which a person closes their eyes; if the eyes are closed longer
than the normal eye-blink duration, it is possible that the person is falling
asleep. It is mentioned that approximately 310.3 ms is the average duration
of a normal person's eye blink.
Figure 2.2: Fatigue Condition

After reviewing the research papers and the existing techniques, this project
proposes that eye and yawning detection methods be used. Eye-blink
duration provides the knowledge that the longer people close their eyes, the
drowsier they are considered to be. The PERCLOS method suggests that
drowsiness is measured by calculating the percentage by which the eyelids
'droop'. Sets of images of open and closed eyes have been stored in the
software library to be used as a reference to distinguish whether the eyes are
fully open or fully closed. Eyelid drooping happens much more slowly as the
person gradually falls asleep; consequently, the progression of the driver's
drowsiness can be recorded.

Figure 2.3: Difference Between Eyes in Open and Close State


In this way, the PERCLOS method sets a proportional threshold: when the
eyes are 80% closed, which is nearly fully closed, it assumes that the driver
is drowsy. This method is not convenient for real-time driving, as it needs a
fixed threshold value of illumination for the PERCLOS method to perform
accurately. The two methods for detecting drowsiness, the eye-blink pattern
and PERCLOS, share the same problem: the camera must be placed at a
specific angle to obtain a good video image without interference from
eyebrows or shadows that cover the eyes.
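The PERCLOS measure above (the fraction of frames in which the eyelid is at least 80% closed) can be sketched as follows. The per-frame closure levels and the 0.5 alert threshold are hypothetical illustration values; the report only specifies the 80% closure criterion.

```python
def perclos(closure_levels, closed_threshold=0.8):
    """PERCLOS: fraction of frames in which the eyelid is at least
    `closed_threshold` (80%) closed. `closure_levels` holds one closure
    fraction per frame (0.0 = fully open, 1.0 = fully closed)."""
    closed = [c >= closed_threshold for c in closure_levels]
    return sum(closed) / len(closed)

# Hypothetical closure levels for a 10-frame window of a drowsy driver.
window = [0.1, 0.2, 0.9, 0.95, 1.0, 1.0, 0.85, 0.9, 1.0, 0.9]
score = perclos(window)
print(score)  # 0.8 -> 8 of 10 frames effectively closed

DROWSY = score > 0.5  # example alert threshold, not taken from the report
print(DROWSY)  # True
```

In a live system `closure_levels` would be produced per frame by the eye detector, and `perclos` would be evaluated over a sliding window.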

2.6 YAWNING DETECTION METHOD


According to the literature, a person's drowsiness is often noticed by
watching their face and behaviour. The authors propose an approach in
which drowsiness is detected from mouth positioning, and the images are
processed using the cascade of classifiers proposed by Viola and Jones for
faces. The photos are compared against a set of image data for mouths and
yawning. Some people cover their mouth with their hand while yawning,
which is an obstacle to obtaining good images; but since yawning is clearly
a sign of a person experiencing drowsiness and fatigue, the patterns of the
yawning detection method are used in this study.

Figure 2.4: Yawning Detection


After reading the research papers and the existing methods, this project
proposes that eye and yawning detection techniques be used. Blink duration
provides the knowledge that the longer people close their eyes, the drowsier
they will be considered; this is because when a person is in a drowsy state,
their eyes stay closed longer than a normal eye blink. Besides that, yawning
is one of the signs of drowsiness: it is a normal human response signalling
that the person feels drowsy or tired.
CHAPTER 03
SYSTEM DESIGN
3.1 USE CASE DIAGRAM

A use case diagram illustrates the interactions between actors (users or
external systems) and a system to achieve specific goals. In the context of the
driver drowsiness detector, the use case diagram represents the interaction
between the driver and the system.

Figure 3.1: Use Case Diagram


3.2 ARCHITECTURE DIAGRAM
An architecture diagram is a graphical representation that illustrates the
structure, components, interactions, and relationships within a system or
application. It provides a high-level overview of how different parts of the
system are organized and how they interact with each other to achieve the
system's objectives. In the context of driver drowsiness detection, an
architecture diagram would illustrate the high-level structure and components of
the system responsible for detecting and alerting drivers of drowsiness.

Figure 3.2: Architecture Diagram


3.3 ACTIVITY DIAGRAM
An activity diagram is a behavioural diagram in UML (Unified Modelling
Language) used to model the flow of activities and actions within a system or
business process. It depicts the sequential flow of activities, along with
decisions, branching, parallelism, and concurrency. An activity diagram for
driver drowsiness detection is a visual representation that illustrates the
sequence of actions and interactions involved in detecting drowsiness in a
driver. It depicts the flow of activities starting from capturing video frames from
a camera to generating alerts when drowsiness is detected. Key activities may
include preprocessing frames, extracting features, analyzing features, and
generating alerts.

Figure 3.3: Activity Diagram


3.4 CLASS DIAGRAM
A class diagram is a type of static structure diagram in the Unified Modelling
Language (UML) that represents the structure of a system by depicting the
classes, attributes, methods, relationships, and constraints among them. It
provides a visual representation of the objects and their interactions within a
software system.

Figure 3.4: Class Diagram


3.5 DATA FLOW DIAGRAM

A data flow diagram (DFD) is a graphical representation that illustrates the flow
of data within a system or process. It visually depicts how data moves through
various processes, data stores, and external entities in a system. DFDs are
commonly used in system analysis and design to model the data flow and
transformations within a system, helping to understand and document its
functionality. In the context of driver drowsiness detection, a data flow diagram
(DFD) would illustrate how video frames captured from a camera are processed,
analysed, and used to generate alerts when drowsiness is detected.

Figure 3.5: Data Flow Diagram

3.6 SEQUENCE DIAGRAM
A sequence diagram is a type of interaction diagram in the Unified Modelling
Language (UML) that illustrates the interactions and messages exchanged
between objects or components in a system over time. It depicts the sequence of
events and the order in which interactions occur among objects, helping to
visualize the dynamic behaviour of a system. In the context of driver drowsiness
detection, a sequence diagram would illustrate the sequence of events and
interactions among system components involved in detecting driver drowsiness.
It shows how various components communicate and collaborate to process
video frames, extract features, analyse them, and generate alerts when
drowsiness is detected.

Figure 3.6: Sequence Diagram


3.7 COLLABORATION DIAGRAM
A collaboration diagram, also known as a communication diagram, is a type of
interaction diagram in the Unified Modelling Language (UML) that depicts the
interactions and relationships among objects or components within a system.
Unlike sequence diagrams, which focus on the sequence of messages exchanged
over time, collaboration diagrams emphasize the structural organization of
objects and the messages exchanged between them. In the context of driver
drowsiness detection, a collaboration diagram illustrates the structural
relationships and interactions among various system components involved in
detecting drowsiness in drivers. It shows how objects or components
communicate and collaborate to process video frames, extract features, analyse
them, and generate alerts when drowsiness is detected.

Figure 3.7: Collaboration Diagram


CHAPTER 04
METHODOLOGY
4.1 FUNCTIONAL REQUIREMENT
A functional requirement is defined as a feature or element of a product, in
the overall process of software engineering, that the end user specifically
demands as a basic facility the system should offer.

• Recording the driver’s behaviour, the moment the trip begins.

• Continuous evaluation of the driver’s facial features over the course of a long trip.

• Raising an alarm if the driver feels drowsy.
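The alarm requirement above can be sketched as a minimal per-frame state machine, using the four-consecutive-closed-frames rule stated in the report's abstract. The eye-state stream from the detector is simulated here; the actual alert (e.g. playing a sound) is outside this sketch.

```python
CLOSED_FRAME_LIMIT = 4  # consecutive closed-eye frames before alarming,
                        # the threshold used in this report's abstract

def monitor(eye_states):
    """Yield True (raise alarm) or False for each frame, given a stream
    of per-frame eye states ('open' / 'closed') from the detector."""
    closed_run = 0
    for state in eye_states:
        closed_run = closed_run + 1 if state == "closed" else 0
        yield closed_run >= CLOSED_FRAME_LIMIT

# Hypothetical frame stream: the eyes close and stay closed.
frames = ["open", "closed", "closed", "closed", "closed", "closed"]
alarms = list(monitor(frames))
print(alarms)  # [False, False, False, False, True, True]
```

Note that the counter resets on any open frame, so normal blinks (shorter than four frames) never trigger the alarm.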

4.2 NON-FUNCTIONAL REQUIREMENT

Non-functional requirements are basically the quality constraints that the system
must satisfy according to the project contract. These are also called non-
behavioural requirements.

• Camera capturing the video should be of high resolution.

• System should work even in low light conditions.

• Alarm raised should be of high volume to wake the driver up.

4.3 SYSTEM CONFIGURATION

4.3.1 SOFTWARE REQUIREMENTS

• Operating system: Windows 10/8 (incl. 64-bit), macOS, Linux

• Language: Python 3

• IDE: Visual Studio Code


4.3.2 HARDWARE REQUIREMENTS

• Processor: 64-bit, quad-core, 2.5 GHz minimum per core

• RAM: 4 GB or more

• Display: 1024 x 768 or higher resolution monitor

• Camera: A webcam

4.4 TECHNOLOGY USED


•a. PYTHON - Python is an interpreted, high-level, general-purpose
programming language. Python's design philosophy emphasises code
readability with its notable use of significant whitespace. Its language
constructs and object-oriented approach aim to help programmers write
clear, logical code for small and large-scale projects. Python is dynamically
typed and supports multiple programming paradigms, including procedural,
object-oriented, and functional programming.

•b. IMAGE PROCESSING - In computer science, digital image processing
is the use of computer algorithms to perform image processing on digital
images.

4.5 PACKAGES USED


● OPEN CV
● NUMPY
● PANDAS
● FRAMES
● TENSOR FLOW
4.5.1 OPEN CV
• OpenCV was started at Intel in 1999 by Gary Bradski, and the first release
came out in 2000. Vadim Pisarevsky joined Gary Bradski to manage Intel's
Russian software OpenCV team. In 2005, OpenCV was used on Stanley, the
vehicle that won the 2005 DARPA Grand Challenge. Later, its active
development continued under the support of Willow Garage, with Gary
Bradski and Vadim Pisarevsky leading the project. OpenCV now supports a
large number of algorithms related to Computer Vision and Machine
Learning and is expanding day by day.

•OpenCV supports a wide variety of programming languages such as C++,
Python, Java, etc., and is available on different platforms including Windows,
Linux, OS X, Android, and iOS. Interfaces for high-speed GPU operations
based on CUDA and OpenCL are also under active development.

•OpenCV-Python is the Python API for OpenCV, combining the best
qualities of the OpenCV C++ API and the Python language.

4.5.1.1 OPENCV-PYTHON

•OpenCV-Python is a library of Python bindings designed to solve computer
vision problems.

•Python is a general-purpose programming language started by Guido van
Rossum that became popular very quickly, mainly because of its simplicity
and code readability. It enables the programmer to express ideas in fewer
lines of code without reducing readability.

•Compared to languages like C/C++, Python is slower. That said, Python can
be easily extended with C/C++, which allows us to write computationally
intensive code in C/C++ and create Python wrappers that can be used as
Python modules. This gives us two advantages: first, the code is almost as
fast as the original C/C++ code (since it is the actual C++ code working in
the background), and second, it is easier to code in Python than in C/C++.
OpenCV-Python is a Python wrapper around the original OpenCV C++
implementation.

•OpenCV-Python makes use of NumPy, which is a highly optimised library
for numerical operations with a MATLAB-style syntax. This also makes it
easier to integrate with other libraries that use NumPy, such as SciPy and
Matplotlib.

4.5.2 NUMPY
•NumPy is a Python library that provides a simple yet powerful data
structure: the n-dimensional array. This is the foundation on which
almost all the power of Python's data science toolkit is built, and learning
NumPy is the first step on any Python data scientist's journey.

•NumPy is an open-source numerical Python library.

•NumPy contains multi-dimensional array and matrix data structures. It can
be used to perform a variety of mathematical operations on arrays, such as
trigonometric, statistical, and algebraic routines. Pandas objects rely
heavily on NumPy objects.

4.5.2.1 WHAT IS NUMPY

•NumPy is a Python library used for working with arrays.

•It also has functions for working in the domains of linear algebra, Fourier
transforms, and matrices.

•NumPy was created in 2005 by Travis Oliphant. It is an open-source project
and you can use it freely.

•NumPy stands for Numerical Python.

4.5.2.2 WHY USE NUMPY

•In Python we have lists that serve the purpose of arrays, but they are slow
to process.

•NumPy aims to provide an array object that is up to 50x faster than
traditional Python lists.

•The array object in NumPy is called ndarray; it provides a lot of supporting
functions that make working with ndarray very easy.

•Arrays are frequently used in data science, where speed and resources
are very important.

4.5.2.3 WHY IS NUMPY FASTER THAN LISTS

•NumPy arrays are stored at one continuous place in memory, unlike lists,
so processes can access and manipulate them efficiently.

•This behaviour is called locality of reference in computer science.

•This is the main reason why NumPy is faster than lists. It is also
optimized to work with the latest CPU architectures.
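The contiguous-memory point above can be made concrete with a small timing sketch; timings vary by machine, so treat the measured speed-up as indicative rather than exact:

```python
import time

import numpy as np

n = 1_000_000
py_list = list(range(n))
np_array = np.arange(n)

# Sum with a plain Python built-in iterating over a list of objects
t0 = time.perf_counter()
list_total = sum(py_list)
list_time = time.perf_counter() - t0

# Sum with NumPy's vectorised routine over a contiguous ndarray
t0 = time.perf_counter()
array_total = int(np_array.sum())
array_time = time.perf_counter() - t0

print(list_total == array_total)  # True: both compute the same sum
print(f"list: {list_time:.4f}s, ndarray: {array_time:.4f}s")
```

On a typical machine the ndarray sum runs several times faster, precisely because the data sits in one contiguous block that the CPU can stream through.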

4.5.2.4 WHICH LANGUAGE IS NUMPY WRITTEN IN

•NumPy is a Python library and is written partially in Python, but most of
the parts that require fast computation are written in C or C++.

4.5.3 PANDAS
•Pandas is a Python library used to analyse data.

•Pandas is a fast, powerful, flexible, and easy-to-use open-source data
analysis and manipulation tool, built on top of the Python programming
language.

•We can perform basic operations on rows/columns like selecting, deleting,
adding, and renaming.

4.5.3.1 WORKING WITH PANDAS

Loading and saving data with Pandas: when you want to use Pandas for data
analysis, you will usually use it in one of three different ways:

•Convert a Python list, dictionary, or NumPy array to a Pandas DataFrame.

•Open a local file using Pandas, normally a CSV file, though it could also be
a delimited text file, an Excel file, and so on.

•Open a remote file or database such as a CSV or JSON on a website through a
URL, or read from a SQL table/database.
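A minimal sketch of the first two loading routes listed above; the column names and the temporary CSV path are illustrative only:

```python
import os
import tempfile

import pandas as pd

# 1. Convert a Python dictionary (or list / NumPy array) to a DataFrame
df = pd.DataFrame({"frame": [1, 2, 3], "ear": [0.31, 0.28, 0.12]})

# 2. Save to and reload from a local CSV file (a temp path keeps the sketch tidy)
csv_path = os.path.join(tempfile.mkdtemp(), "ear_log.csv")
df.to_csv(csv_path, index=False)
reloaded = pd.read_csv(csv_path)

# Basic row/column operations: selecting rows and adding a column
drowsy = reloaded[reloaded["ear"] < 0.2]    # select rows below a threshold
reloaded["closed"] = reloaded["ear"] < 0.2  # add a derived boolean column

print(len(drowsy))   # 1
print(list(reloaded.columns))
```

The same `pd.read_csv` call accepts a URL for the remote-file route, and `pd.read_sql` covers the database route.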

4.5.4 TENSORFLOW
TensorFlow makes it simple for beginners and experts to create machine
learning models for desktop, mobile, web, and cloud.

•TensorFlow is a Python library for fast numerical computing created and
released by Google. It is a foundation library that can be used to create
deep learning models directly or by using wrapper libraries that simplify
the process, built on top of TensorFlow.

4.5.4.1 WHY TENSORFLOW IS USED IN PYTHON

•A tensor is a container which can hold data in N dimensions, along with its
linear operations, though there is nuance in what tensors technically are
and what we refer to as tensors in practice.

4.5.4.2 WHAT IS TENSORFLOW AND HOW IT IS USED

•Created by the Google Brain team, TensorFlow is an open-source library for
numerical computation and large-scale machine learning. TensorFlow bundles
together a large number of machine learning and deep learning models and
algorithms and makes them useful by way of a common metaphor.

4.5.4.3 WHY IS IT CALLED TENSORFLOW

The name TensorFlow derives from the operations that such neural networks
perform on multidimensional data arrays, which are referred to as tensors.
During the Google I/O Conference in June 2016, Jeff Dean stated that 1,500
repositories on GitHub mentioned TensorFlow, of which only 5 were from
Google.

4.5.4.4 WHAT IS TENSORFLOW BACKEND


•It does not itself handle low-level operations such as tensor products,
convolutions, and so on. Instead, it relies on a specialized, well-optimized
tensor manipulation library to do so, serving as the "backend engine" of the
framework developed by Google.

CHAPTER 05
IMPLEMENTATION
5.1 IMPORT LIBRARIES

• NumPy is used for handling the data from dlib and for mathematical
functions. OpenCV helps us gather the frames from the webcam, write over
them, and display the resulting frames.

• In our program we use dlib, a pre-trained model trained on the HELEN
dataset, to detect human faces using the pre-defined 68 landmarks.

Figure 5.1: Landmarked Image of a Person by Dlib


Figure 5.2: HELEN Data sample

figure 5.3: HELEN Data sample 2


• Dlib is used to extract features from the face and predict the landmarks
using its pre-trained face landmark detector.

• Dlib is an open-source toolkit written in C++ that has a variety of machine
learning models implemented and optimized. Preference is given to dlib over
other libraries, and over training our own model, because it is fairly
accurate, fast, well documented, and available for academic, research, and
even commercial use.

• Dlib's accuracy and speed are comparable with state-of-the-art neural
networks, and because the scope of this project is not to train one, we use
dlib's Python wrapper.

Figure 5.4: Facial Landmarks


• The hypot function from the math library calculates the hypotenuse of a
right-angled triangle, i.e. the distance between two points (the Euclidean
norm).

• Here we prepare our capture call to OpenCV’s video capture method that will
capture the frames from the webcam in an infinite loop till we break it and stop
the capture.

• Dlib’s face and facial landmark predictors

• Keep the downloaded landmark detection .dat file in the same folder as this
code file or provide a complete path in the dlib.shape_predictor function.

• This will prepare the predictor for further prediction.

detector = dlib.get_frontal_face_detector()

predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

• We create a function to calculate the midpoint from two given points.

• As we are going to use this more than once in a call, we create a separate
function for this.

def mid(p1, p2):

    return int((p1.x + p2.x)/2), int((p1.y + p2.y)/2)
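The midpoint function can be exercised without dlib by standing in a minimal point object that exposes .x and .y attributes; the Point stub below is only for illustration, and mid is repeated so the sketch runs on its own:

```python
from collections import namedtuple

# Stand-in for dlib's point type, which exposes .x and .y attributes
Point = namedtuple("Point", ["x", "y"])

def mid(p1, p2):
    """Integer midpoint of two landmark points."""
    return int((p1.x + p2.x) / 2), int((p1.y + p2.y) / 2)

# e.g. the midpoint of two landmarks along the upper eyelid
print(mid(Point(100, 50), Point(110, 54)))  # (105, 52)
```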


5.2 DRIVER ALERTNESS MONITORING DATASET

 The initial focus lies on gathering the necessary data for our drowsiness
detection system. Leveraging the dlib library, we process the video feed
frame by frame to identify and isolate crucial facial features such as the
left and right eyes.

 This step is essential as it provides us with the primary elements required


for monitoring driver drowsiness. Once the eyes are detected, OpenCV
comes into play, enabling us to draw contours around each eye region
accurately.

 These contours serve as visual markers, aiding in the subsequent analysis


of eye movements. With the contours in place, we proceed to calculate
the Eye Aspect Ratio (E.A.R.) and the Mouth Aspect Ratio. Utilizing
the Euclidean function from SciPy, these ratios are computed by
measuring specific distances between key points within the eye and
mouth regions respectively.

 The E.A.R. serves as a critical metric for assessing the level of eye
closure, while the Mouth Aspect Ratio helps in monitoring mouth
movements, providing additional insights into the driver's state of
alertness.

 By expanding our dataset with these calculated ratios, we enhance the


accuracy and effectiveness of our drowsiness detection system, laying a
solid foundation for subsequent stages of model development and
implementation.
Figure 5.5: Driver Alertness Monitoring System

5.3 FACE DETECTION

 In the Face Identification phase, the primary objective is to precisely


locate the positions of the eyes and mouth within the detected face region.
Once the face is recognized within the image, the focus shifts to the
YCbCr color space, where skin segmentation takes place.

 This technique involves separating the skin tones from the rest of the
image, effectively isolating the facial regions of interest. By leveraging
the YCbCr color space, which separates luminance (Y) from chrominance
(Cb and Cr), skin segmentation can be achieved with greater accuracy
compared to other color spaces.
 This process results in the extraction of the skin areas while discarding a
significant portion of non-face elements present in the image. Through
skin segmentation, the system can effectively filter out irrelevant
background noise, ensuring that subsequent analyses are performed
exclusively on the facial features of interest.

 Facial Landmark - dlib provides an inbuilt HOG + SVM based classifier used
to determine the positions of the 68 (x, y) coordinates that map to facial
structures on the face.

 It is mainly used for image and video processing and analysis, including
object detection and face detection. Facial landmarks are used to localize
and represent important regions of the face, such as the mouth, eyes, and
eyebrows.

Figure 5.6: Facial Landmarks by OpenCV


Figure 5.7: 68 Facial Landmarks by Dlib
5.4 EYE STATE TRACKING

 In the Eye State Analysis phase, the primary objective is to isolate and
analyse the state of each eye with a focus on symmetrical properties. This
process begins with the separation of both eyes using edge identification
techniques, which prioritize the preservation of symmetrical features.
Edge detection is crucial for accurately delineating the boundaries of the
eyes and capturing subtle changes in pixel intensities indicative of eye
movements.

 To achieve this, the Sobel operator is employed for edge discovery due to
its effectiveness in avoiding image blurring while maintaining clear and
distinct edges. The Sobel operator operates by convolving the image with
a pair of kernels, which effectively capture gradients in the horizontal and
vertical directions. This approach ensures that edges are accurately
identified while preserving image clarity.
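In the real pipeline OpenCV's built-in Sobel operator would normally be used; the pure-NumPy sketch below only illustrates the principle of convolving the image with the standard 3x3 Sobel kernel pair and combining the two gradients:

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel(image):
    """Gradient magnitude of a 2-D grayscale image (valid region only)."""
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * KX)  # horizontal gradient
            gy[i, j] = np.sum(patch * KY)  # vertical gradient
    return np.hypot(gx, gy)

# A vertical step edge produces a strong horizontal gradient response
img = np.zeros((5, 6))
img[:, 3:] = 255.0
print(sobel(img).max())  # 1020.0 -- the strongest response sits on the edge
```

The equivalent OpenCV call would be `cv2.Sobel` with `dx=1` or `dy=1`, which also handles border pixels and data types for you.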

 The edge discovery process is visualized as a pathway to confining pixel


power changes, allowing for precise localization of eye features. Once the
edges are detected, the resolution of the eye's focus is determined to
assess the state of vigilance. In the usual state of vigilance where the eyes
are open, the system continues monitoring without intervention.

 However, precautionary measures are swiftly activated in the event of


closed eyes to prevent accidents. These measures may include issuing
alerts to the driver or initiating automated responses to ensure the safety
of both the driver and other road users.

 Through rigorous eye state analysis coupled with edge detection


techniques, the system can accurately identify signs of drowsiness and
take proactive measures to mitigate potential risks. Below is the formula
for eye state detection:

Eye Detection Accuracy = Total number of eyes detected / (Total number of
eyes detected + Total number of undetected eyes)

Figure 5.8: Eye Points Detected by Dlib


 Starting from the left corner and moving clockwise, we find the ratio of
the height and width of the eye to infer the open or closed state of the
eye: blink-ratio = (|p2-p6| + |p3-p5|) / (2|p1-p4|). The ratio falls to
approximately zero when the eye is closed but remains roughly constant
when the eyes are open.

Figure 5.9: Plot Over Eye State Analysis


def eye_aspect_ratio(eye_landmark, face_roi_landmark):

    left_point = (face_roi_landmark.part(eye_landmark[0]).x,
                  face_roi_landmark.part(eye_landmark[0]).y)

    right_point = (face_roi_landmark.part(eye_landmark[3]).x,
                   face_roi_landmark.part(eye_landmark[3]).y)

    center_top = mid(face_roi_landmark.part(eye_landmark[1]),
                     face_roi_landmark.part(eye_landmark[2]))

    center_bottom = mid(face_roi_landmark.part(eye_landmark[5]),
                        face_roi_landmark.part(eye_landmark[4]))

    hor_line_length = hypot((left_point[0] - right_point[0]),
                            (left_point[1] - right_point[1]))

    ver_line_length = hypot((center_top[0] - center_bottom[0]),
                            (center_top[1] - center_bottom[1]))

    ratio = hor_line_length / ver_line_length

    return ratio

Figure 5.10: Eye State Analysis
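Stripped of the dlib objects, the same ratio can be computed directly from six (x, y) tuples, which makes the open/closed threshold easy to demonstrate. The sketch below uses the (|p2-p6| + |p3-p5|) / (2|p1-p4|) convention; the 0.25 threshold and the sample coordinates are illustrative only:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def blink_ratio(eye):
    """(|p2-p6| + |p3-p5|) / (2|p1-p4|) from six (x, y) landmark tuples."""
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

EAR_THRESHOLD = 0.25  # illustrative cut-off between open and closed

# p1/p4 are the eye corners; p2, p3 upper lid; p5, p6 lower lid
open_eye = [(0, 0), (1, -2), (2, -2), (3, 0), (2, 2), (1, 2)]
closed_eye = [(0, 0), (1, -0.2), (2, -0.2), (3, 0), (2, 0.2), (1, 0.2)]

print(blink_ratio(open_eye) > EAR_THRESHOLD)    # True  -> eye open
print(blink_ratio(closed_eye) > EAR_THRESHOLD)  # False -> eye closed
```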


5.5 YAWNING DETECTION

 In the context of fatigue detection in drivers, the utilization of K-Means


clustering for Gauntlet Analysis plays a pivotal role in identifying and
understanding distinctive signs of fatigue, including the gauntlet
phenomenon. This technique involves leveraging K-Means clustering to
separate the middle region identified by Viola-Jones, a robust face
detection algorithm commonly used in computer vision applications.

 The Viola-Jones algorithm effectively locates facial features within the
image, providing a starting point for further analysis. Once the middle
region is isolated, the next step involves coordinating correlation
coefficients to define the boundaries of this region more precisely.

 By applying K Means clustering, which partitions the data into distinct


clusters based on similarity, the system can effectively delineate the
boundaries of the middle region. This segmentation enables a focused
analysis of the facial area most susceptible to fatigue related indicators,
such as the gauntlet phenomenon.

 Fatigue detection in drivers encompasses a multifaceted approach that


involves examining distinctive signs of exhaustion, including the gauntlet
appearance. This phenomenon refers to the characteristic drooping of the
head and the repeated downward motion, resembling the motion of a
gauntlet or glove. To understand fatigue's impact on drivers, exploration
of the body's reflexes during exhaustion is essential.

 Fatigue impairs cognitive function and physical coordination, leading to


slower reaction times and compromised alertness levels. As fatigue
progresses, drivers may exhibit nodding movements and display the
gauntlet appearance more frequently.

 By incorporating K-Means clustering for Gauntlet Analysis, the system


can enhance its ability to identify and quantify fatigue related indicators,
facilitating more accurate and timely detection of drowsiness or
exhaustion in drivers. This comprehensive approach to fatigue detection
aids in improving road safety by enabling proactive interventions or alerts
to prevent potential accidents caused by driver fatigue.

 Through ongoing research and refinement of detection algorithms,


advancements in fatigue detection systems can contribute significantly to
reducing the risks associated with drowsy driving and enhancing overall
transportation safety.

 Similarly, we define the mouth ratio function for finding out if a person is
yawning or not. This function gives the ratio of height to width of mouth.
If height is more than width it means that the mouth is wide open.

 For this as well we use a series of points from the dlib detector to find the
ratio.
def mouth_aspect_ratio(lips_landmark, face_roi_landmark):

    left_point = (face_roi_landmark.part(lips_landmark[0]).x,
                  face_roi_landmark.part(lips_landmark[0]).y)

    right_point = (face_roi_landmark.part(lips_landmark[2]).x,
                   face_roi_landmark.part(lips_landmark[2]).y)

    center_top = (face_roi_landmark.part(lips_landmark[1]).x,
                  face_roi_landmark.part(lips_landmark[1]).y)

    center_bottom = (face_roi_landmark.part(lips_landmark[3]).x,
                     face_roi_landmark.part(lips_landmark[3]).y)

    hor_line_length = hypot((left_point[0] - right_point[0]),
                            (left_point[1] - right_point[1]))

    ver_line_length = hypot((center_top[0] - center_bottom[0]),
                            (center_top[1] - center_bottom[1]))

    if hor_line_length == 0:
        return ver_line_length

    ratio = ver_line_length / hor_line_length

    return ratio
• We create a counter variable to count the number of frames for which the
eye has been closed or the person has been yawning, and later use it to
define drowsiness in the driver drowsiness detection system.

• Also, we declare the font for writing on images with OpenCV.

count = 0

font = cv2.FONT_HERSHEY_TRIPLEX

Figure 5.11: Facial Landmarks Associated with The Lips


Figure 5.12: Yawn Detection

Figure 5.13: Yawn Alert

5.6 ALERT SYSTEM MODULE

 In the detection process, once drowsiness is identified through the


analysis of various physiological and behavioural indicators, the system
initiates a notification or alarm to alert the driver.
 This crucial step aims to promptly bring the driver's attention to their
compromised state of alertness, mitigating the risk of potential accidents
caused by drowsy driving.

 The warning process encompasses a range of auditory, visual, and tactile


alerts designed to effectively capture the driver's attention and prompt
them to take corrective action. Auditory alerts may include beeping
sounds or spoken warnings, while visual alerts could involve flashing
lights or messages displayed on dashboard screens.

 Additionally, haptic feedback mechanisms such as vibrating seats or


steering wheel vibrations may be employed to provide tactile alerts. By
offering multiple warning signals through various sensory channels, the
system increases the likelihood of the driver receiving and responding to
the alert, thus enhancing overall safety on the road.

 The warning process acts as a critical intervention mechanism, enabling


drivers to acknowledge their drowsiness and take appropriate measures to
ensure their safety and that of others on the road.

 The formula for alert system is given below:

Drowsiness Detection Accuracy = Total number of alarm triggers / (Total
number of alarm triggers + Total number of silent alarms)
Figure 5.14: Alert Emerges
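The frame-counting logic described in this module can be sketched as a small pure-Python routine: the alarm fires only once the eyes have stayed closed for a run of consecutive frames, and a brief blink resets the counter. Both thresholds below are illustrative, not values fixed by the project:

```python
EAR_THRESHOLD = 0.25  # below this, the eye is treated as closed in a frame
FRAME_CHECK = 20      # consecutive closed frames required before alerting

def drowsiness_alerts(ear_per_frame):
    """Return the frame indices at which the alarm fires."""
    alerts, count = [], 0
    for i, ear in enumerate(ear_per_frame):
        if ear < EAR_THRESHOLD:
            count += 1
            if count >= FRAME_CHECK:
                alerts.append(i)  # eyes closed long enough -> raise the alarm
        else:
            count = 0  # eye reopened: a normal blink resets the counter
    return alerts

# A short blink (3 closed frames) raises no alert; sustained closure does
frames = [0.3] * 10 + [0.1] * 3 + [0.3] * 5 + [0.1] * 25
print(drowsiness_alerts(frames))  # alerts begin at frame 37
```

In the live system, each index where an alert fires is where the auditory, visual, or haptic warning described above would be triggered.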

5.7 HEAD TILT ALERT

 The implementation of the head tilt driver drowsiness detection system


involves several key steps. First, we utilize OpenCV, a powerful
computer vision library, along with Python programming language for
ease of implementation. We begin by employing a pre-trained face
detection model, such as Haar cascades or a deep learning-based detector
like MTCNN or Dlib, to locate and extract the driver's face from the
video feed in real-time.
 Once the face region is detected, we leverage facial landmark detection to
identify critical points on the face, including the eyes, nose, and mouth.
With the facial landmarks identified, we proceed to calculate the head tilt
angle. This is accomplished by analysing the spatial relationship between
specific facial landmarks, typically focusing on the positions of the eyes
and the nose.

 By measuring the deviation from a reference position, such as the


horizontal plane, we can infer the degree of head tilt. A significant
deviation from the baseline angle suggests a potential indication of
drowsiness. To refine the drowsiness detection, we integrate the head tilt
angle with other indicators, such as eye closure detection and yawning
detection.

 Combining multiple cues enhances the robustness and accuracy of the


system, allowing for more reliable identification of drowsiness-related
behaviors. The final step involves implementing a decision-making
mechanism to classify the driver's state based on the collected features.
This can range from simple thresholding of feature values to more
sophisticated machine learning models trained on labelled datasets.

 Throughout the implementation process, considerations for efficiency and


real-world applicability are paramount. Optimizing the algorithms for
real-time performance ensures timely detection of drowsiness events,
while accounting for factors like varying lighting conditions and driver
appearance enhances the system's reliability across different
environments. The resulting implementation provides a practical solution
for mitigating the risks associated with drowsy driving, contributing to
overall road safety.
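A minimal sketch of the head-tilt (roll) angle computed as the deviation of the eye line from the horizontal plane; the 15-degree threshold and the pixel coordinates are assumptions for illustration only:

```python
from math import atan2, degrees

def head_roll_deg(left_eye, right_eye):
    """Roll angle, in degrees, of the line through the two eye centres
    relative to the horizontal. Each argument is an (x, y) image coordinate."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return degrees(atan2(dy, dx))

TILT_THRESHOLD = 15.0  # degrees of sustained tilt treated as a drowsiness cue

level = head_roll_deg((100, 200), (160, 200))   # eyes level -> 0 degrees
tilted = head_roll_deg((100, 200), (160, 240))  # right eye 40 px lower

print(round(level, 1))               # 0.0
print(abs(tilted) > TILT_THRESHOLD)  # True -> flag a possible head droop
```

Full 3-D pitch and yaw estimation uses the solvePnP approach shown in the appendix; this roll-only sketch captures the simplest head-tilt cue.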
5.8 REAL TIME MONITORING MODULE
Figure 5.15: Head Turn Alert

 In the real-time process of driver drowsiness detection, the system


continuously monitors the driver's state during vehicle operation to ensure
timely intervention and prevent potential accidents. This process involves
the integration of various components, including a trained machine
learning model, to make predictions based on extracted features in real-
time.

 At the core of the real-time process is the continuous monitoring of the


driver's physiological and behavioural indicators using sensors or
cameras installed within the vehicle. These sensors capture data such as
eye movements, facial expressions, head position, and other relevant
features of fatigue.
 Once the data is captured, it is fed into the system's components for
analysis. This includes preprocessing steps to extract relevant features
from the raw data, such as calculating the Eye Aspect Ratio (E.A.R.) and
the mouth aspect ratio, or identifying the middle region using techniques
like Viola-Jones face detection.

 The extracted features are then input into the trained machine learning
model, which has been previously trained on labeled datasets to recognize
patterns associated with drowsiness. The model employs algorithms such
as Convolutional Neural Networks (CNNs) or Support Vector Machines
(SVMs) to make real-time predictions about the driver's state of alertness.

 Based on the predictions made by the machine learning model, the system
initiates appropriate actions to alert the driver or trigger precautionary
measures. This may include issuing auditory or visual alerts, activating
haptic feedback mechanisms, or sending notifications to the driver or
vehicle control systems.
Figure 5.16: Real Time Monitoring

5.9 DATA LOGGING AND STORAGE MODULE

 In the driver drowsiness detection system, the analysis and storage of data
play essential roles in improving system performance and ensuring
comprehensive evaluation of driver behaviour over time. The Analysis
Data component of the system is responsible for recording various data
points obtained during the drowsiness detection process.

 This includes information such as eye closure duration, head position,


facial expressions, and any other relevant physiological or behavioural
indicators of drowsiness. By collecting this data, the system can conduct
further analysis to identify patterns, trends, and correlations that
contribute to more accurate drowsiness detection algorithms.
 Additionally, the Analysis Data module facilitates continuous system
improvement by enabling the refinement of detection algorithms based on
real-world data and feedback. On the other hand, the Storage Data
component serves as a repository for storing historical data captured
during the operation of the drowsiness detection system. Historical data
may include previous instances of drowsiness alerts, driver responses to
warnings, and overall system effectiveness in mitigating drowsy driving
incidents.

 By maintaining a centralized repository of historical data, the system can


support longitudinal analysis, trend identification, and retrospective
evaluation of system performance. This accumulated knowledge is
invaluable for refining algorithms, optimizing system parameters, and
informing future enhancements to the drowsiness detection system.
Overall, the Analysis support continuous improvement, innovation and
road safety.
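The storage side of this module can be sketched with the standard-library csv module, appending one row per detection event. The field names here are an assumed schema for illustration, not one fixed by the project:

```python
import csv
import os
import tempfile
from datetime import datetime, timezone

LOG_FIELDS = ["timestamp", "event", "ear", "duration_s"]  # assumed schema

def log_event(path, event, ear, duration_s):
    """Append one drowsiness event to a CSV log, writing a header if new."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "ear": round(ear, 3),
            "duration_s": duration_s,
        })

log_path = os.path.join(tempfile.mkdtemp(), "drowsiness_log.csv")
log_event(log_path, "eyes_closed", 0.12, 1.4)
log_event(log_path, "yawn", 0.31, 2.0)

with open(log_path) as f:
    rows = list(csv.DictReader(f))
print(len(rows))         # 2
print(rows[0]["event"])  # eyes_closed
```

A CSV log of this shape is enough for the longitudinal analysis described above; a production system would more likely use a proper database.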

Figure 5.17: Database And Storage


CHAPTER 06
INTEGRATION AND TESTING
6.1 TESTING:
The purpose of testing is to discover errors. Testing is the process of
trying to discover every conceivable fault or weakness in a work product.
It provides a way to check the functionality of components, sub-assemblies,
assemblies and/or a finished product. It is the process of exercising
software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable
manner. There are various types of tests. Each test type addresses a
specific testing requirement.

6.2 TYPES OF TESTS

UNIT TESTING


 Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated.

 It is the testing of individual software units of the application. It is
done after the completion of an individual unit and before integration.
This is structural testing that relies on knowledge of the unit's
construction and is invasive.

 Unit tests perform basic tests at component level and test a specific business
process, application, and/or system configuration. Unit tests ensure that each
unique path of a business process performs accurately to the documented
specifications and contains clearly defined inputs and expected results.
6.3 FUNCTIONAL TEST
Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals.

Functional testing is centred on the following items:

Valid Input: Identified classes of valid input are accepted.

Invalid Input: Identified classes of invalid input are rejected. Functions:


Identified functions are exercised.

Output: Identified classes of application outputs are exercised.

Systems/Procedures: Interfacing systems or procedures must be invoked.


Organization and preparation of functional tests is focused on requirements,
key functions, or special test cases. In addition, systematic coverage
pertaining to identified business process flows, data fields, predefined
processes, and successive processes must be considered for testing. Before
functional testing is complete, additional tests are identified and the
effective value of current tests is determined.

6.4 SYSTEM TESTING

System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results.
An example of system testing is the configuration-oriented system integration
test. System testing is based on process descriptions and flows, emphasizing
pre-driven process links and integration points.
6.5 WHITE BOX TESTING
White Box Testing is testing in which the software tester has knowledge of
the inner workings, structure, and language of the software, or at least its
purpose. It is used to test areas that cannot be reached from a black-box
level.

o Black Box Testing is testing the software without any knowledge of the
inner workings, structure, or language of the module being tested. Black-box
tests, as most other kinds of tests, must be written from a definitive
source document, such as a specification or requirements document.

o It is testing in which the software under test is treated as a black box:
you cannot "see" into it. The test provides inputs and responds to outputs
without considering how the software works.

6.6 UNIT TESTING:


Unit testing is usually conducted as part of a combined code and unit test phase
of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.

6.7 INTEGRATION TESTING

 Software integration testing is the incremental integration testing of two or


more integrated software components on a single platform to produce failures
caused by interface defects.
 The task of the integration test is to check that components or software
applications (e.g. components in a software system or, one step up, software
applications at the company level) interact without error.

6.8 ACCEPTANCE TESTING

User Acceptance Testing is a critical phase of any project and requires


significant participation by the end user. It also ensures that the system meets
the functional requirements.

6.9 TEST OBJECTIVES

1. Assess accuracy across varied driving conditions.

2. Measure response time for drowsiness detection alerts.

3. Analyse false alarm frequency.

4. Test robustness against environmental and behavioural factors.

5. Validate performance across diverse driving scenarios.

6. Evaluate user experience and interface integration.

Test Results:

All the test cases mentioned above passed successfully. No defects encountered.
TABLE 6.1: Test Case Result 1

TABLE 6.2: Test Case Result 2


CHAPTER 07
RESULT AND DISCUSSION
7.1 RESULT

Figure 7.1: Model Accuracy Graph


The existing system for detecting driver
drowsiness typically relies on simple algorithms or rules-based approaches. It
often utilizes basic metrics like steering wheel movement, lane deviation, or eye
closure duration to infer drowsiness. However, it lacks adaptability and may
struggle to perform well under varying conditions or for all individuals.
Additionally, its ability to accurately distinguish between drowsiness and other
factors affecting driving behaviour is limited. Moreover, the existing system
may not provide real-time feedback or intervention to prevent accidents caused
by drowsy driving. In contrast, the proposed system using machine learning
offers significant improvements. It utilizes advanced machine learning
algorithms such as deep neural networks, support vector machines, or decision
trees to learn patterns of drowsiness from data. This system can incorporate a
wider range of features. Furthermore, it offers better adaptability and
generalization as the system can continuously learn and improve from new data.
The proposed system can provide real-time detection, alerting the driver or
even autonomously triggering safety mechanisms and sending alerts to
emergency services.
CHAPTER 08
CONCLUSION & FUTURE SCOPE
8.1 CONCLUSION:
The Driver Anomaly Monitoring System has been developed to swiftly
identify signs of drowsiness, intoxication, and carelessness in drivers.
Specifically, the Drowsiness Detection System utilizes eye closure patterns
to differentiate between normal eye blinks and indicators of tiredness,
effectively pinpointing instances of drowsiness while driving. This device
is designed to prevent accidents resulting from driver sleepiness, and it
works even for drivers wearing spectacles and under low-light conditions,
provided the camera yields high-quality output. Through a suite of
self-developed image processing algorithms, the system gathers data on the
position of the driver's head and eyes. By continuously monitoring these
parameters, the system discerns whether the driver's eyes are open or
closed. In cases where prolonged eye closure is detected, the system issues
a warning signal, thereby assessing the driver's level of alertness based
on patterns of eye closure.

8.2 FUTURE SCOPE


The future scope for driver drowsiness prediction encompasses various avenues
for advancement and application. Integrating biometric sensors like heart rate
monitors and EEG sensors could offer deeper insights into the driver's
physiological state. Advanced AI-driven prediction models, including deep
learning algorithms, promise improved accuracy by analysing complex
behavioural patterns and physiological signals. Real-time feedback systems,
incorporating auditory alerts or adaptive cruise control. Considering
environmental factors like road conditions and weather could further refine
prediction algorithms. Wearable devices tailored for drowsiness monitoring
offer a portable solution. Integration with autonomous vehicles ensures
passenger safety by enabling appropriate responses to driver fatigue. It's vital to
address data privacy and ethical concerns surrounding the collection and use of
sensible data.
APPENDIX
SOURCE CODE

from scipy.spatial import distance
import numpy as np
from imutils import face_utils
from pygame import mixer
import imutils
import dlib
import cv2

mixer.init()
mixer.music.load("C:\\Users\\kumaravishnu\\Desktop\\mini project\\music.wav")

def eye_aspect_ratio(eye):
    # EAR = (vertical distances) / (2 * horizontal distance)
    A = distance.euclidean(eye[1], eye[5])
    B = distance.euclidean(eye[2], eye[4])
    C = distance.euclidean(eye[0], eye[3])
    ear = (A + B) / (2.0 * C)
    return ear

def mouth_aspect_ratio(mouth):
    A = distance.euclidean(mouth[14], mouth[18])
    B = distance.euclidean(mouth[12], mouth[16])
    C = distance.euclidean(mouth[0], mouth[6])
    mar = (A + B) / (2.0 * C)
    return mar

def get_head_pose(shape, frame_shape):
    image_points = np.array([
        shape[30],  # Nose tip
        shape[8],   # Chin
        shape[36],  # Left eye left corner
        shape[45],  # Right eye right corner
        shape[48],  # Left mouth corner
        shape[54]   # Right mouth corner
    ], dtype="double")

    # Generic 3D face model points corresponding to the landmarks above
    model_points = np.array([
        (0.0, 0.0, 0.0),           # Nose tip
        (0.0, -330.0, -65.0),      # Chin
        (-225.0, 170.0, -135.0),   # Left eye left corner
        (225.0, 170.0, -135.0),    # Right eye right corner
        (-150.0, -150.0, -125.0),  # Left mouth corner
        (150.0, -150.0, -125.0)    # Right mouth corner
    ])

    focal_length = frame_shape[1]
    center = (frame_shape[1] / 2, frame_shape[0] / 2)
    camera_matrix = np.array([[focal_length, 0, center[0]],
                              [0, focal_length, center[1]],
                              [0, 0, 1]], dtype="double")
    dist_coeffs = np.zeros((4, 1))  # Assuming no lens distortion

    (success, rotation_vector, translation_vector) = cv2.solvePnP(
        model_points, image_points, camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE)

    rvec_matrix = cv2.Rodrigues(rotation_vector)[0]
    proj_matrix = np.hstack((rvec_matrix, translation_vector))
    eulerAngles = cv2.decomposeProjectionMatrix(proj_matrix)[6]
    return eulerAngles

frame_check = 20
detect = dlib.get_frontal_face_detector()
predict = dlib.shape_predictor("C:\\Users\\kumaravishnu\\Desktop\\mini project\\shape_predictor_68_face_landmarks.dat")

(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["right_eye"]
(mStart, mEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["mouth"]

cap = cv2.VideoCapture(0)
head_turn_flag = 0
eye_closed_flag = 0
yawn_flag = 0

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    subjects = detect(gray, 0)

    for subject in subjects:
        shape = predict(gray, subject)
        shape = face_utils.shape_to_np(shape)

        eulerAngles = get_head_pose(shape, frame.shape)
        # Extract the yaw angle (head turning left/right)
        yaw_angle = eulerAngles[1]

        # Extract eye landmarks and compute eye aspect ratio
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)
        ear = (leftEAR + rightEAR) / 2.0

        # Extract mouth landmarks and compute mouth aspect ratio
        mouth = shape[mStart:mEnd]
        mar = mouth_aspect_ratio(mouth)

        # Draw contours around eyes and mouth
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        mouthHull = cv2.convexHull(mouth)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [mouthHull], -1, (0, 255, 0), 1)

        # Check for eye closure
        if ear < 0.25:
            eye_closed_flag += 1
            if eye_closed_flag >= frame_check:
                cv2.putText(frame, "Eye Alert", (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                mixer.music.play()
        else:
            eye_closed_flag = 0

        # Check for head turn
        if abs(yaw_angle) > 30:  # Adjust the threshold as needed
            head_turn_flag += 1
            if head_turn_flag >= frame_check:
                cv2.putText(frame, "Head Turn Alert", (10, 60),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                mixer.music.play()
        else:
            head_turn_flag = 0

        # Check for yawn
        if mar > 0.5:  # Adjust threshold as needed
            yawn_flag += 1
            if yawn_flag >= frame_check:
                cv2.putText(frame, "Yawn Alert", (10, 90),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                mixer.music.play()
        else:
            yawn_flag = 0

    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()