
A Project Report on

DROWSINESS DETECTION AND ALERT SYSTEM

Submitted in partial fulfillment of the requirements for the award of the degree
of
BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE & ENGINEERING
By
A NAVEEN KRISHNA-------- (16911A05C4)
M SUSHANTH REDDY------- (16911A05F2)
SHAIK SHAHID---------------- (16911A05H3)
YASHWANTH HOLLA------- (16911A05J0)

Under the Esteemed Guidance of

Dr. D. ARUNA KUMARI


Professor

Department of Computer Science & Engineering

VIDYA JYOTHI INSTITUTE OF TECHNOLOGY


(An Autonomous Institution)
(Approved by AICTE, New Delhi & Affiliated to JNTUH, Hyderabad)
Aziz Nagar Gate, C.B. Post, Hyderabad-500075
2019-2020

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

CERTIFICATE

This is to certify that the project report titled “Drowsiness Detection and Alert
System” is being submitted by A. NAVEEN KRISHNA (16911A05C4), M.
SUSHANTH REDDY (16911A05F2), SHAIK SHAHID (16911A05H3), and
YASHWANTH HOLLA (16911A05J0) in partial fulfilment for the award of the
Degree of Bachelor of Technology in Computer Science & Engineering, and is a record
of bonafide work carried out by them under my guidance and supervision. The
results embodied in this project report have not been submitted to any other University
or Institute for the award of any degree or diploma.

Internal Guide Head of Department


Dr. D. Aruna Kumari Dr. B. Vijaya Kumar
Professor Professor

External Examiner

DECLARATION

We, A. Naveen Krishna, M. Sushanth Reddy, Shaik Shahid, and


Yashwanth Holla, bearing Roll Numbers 16911A05C4, 16911A05F2, 16911A05H3,
and 16911A05J0, hereby declare that the project entitled “Drowsiness Detection and
Alert System”, submitted for the degree of Bachelor of Technology in Computer
Science and Engineering, is original and has been done by us, and this work has not
been copied or submitted anywhere for the award of any degree.

Date: A NAVEEN KRISHNA

(16911A05C4)

M SUSHANTH REDDY

Place: HYDERABAD (16911A05F2)

SHAIK SHAHID

(16911A05H3)

YASHWANTH HOLLA

(16911A05J0)

ACKNOWLEDGEMENT

I wish to express my sincere gratitude to the project guide, Dr. D. ARUNA


KUMARI, Professor, Vidya Jyothi Institute of Technology, Hyderabad for her timely
cooperation and valuable suggestions while carrying out this work. It is her kindness
that made me learn more from her.

I am grateful to Dr. B. Vijayakumar, Professor and HOD, department of CSE,


for his help and support during my academic year.

I whole-heartedly convey my gratitude to Principal Dr. A. Padmaja for her


constructive encouragement.

I would like to take this opportunity to express my gratitude to our Director Dr.
P. VENUGOPAL REDDY for providing necessary infrastructure to complete this
project.

I would thank my parents and all the faculty members who have contributed to
my progress through the course to come to this stage.

A NAVEEN KRISHNA

(16911A05C4)

M SUSHANTH REDDY

(16911A05F2)

SHAIK SHAHID

(16911A05H3)

YASHWANTH HOLLA

(16911A05J0)

ABSTRACT

DROWSINESS DETECTION AND ALERT SYSTEM


Drowsiness and fatigue of drivers are amongst the significant
causes of road accidents. Every year, they increase the number of
deaths and severe injuries globally. In this project, a module for an
Advanced Driver Assistance System (ADAS) is presented to reduce the
number of accidents due to driver fatigue and hence increase
transportation safety; this system performs automatic driver
drowsiness detection based on visual information and artificial
intelligence. We propose an algorithm to locate, track, and analyse
both the driver’s face and eyes to measure PERCLOS, a scientifically
supported measure of drowsiness associated with slow eye closure.

Nowadays, more and more professions require long-term


concentration. Drivers must keep a close eye on the road, so they can
react to sudden events immediately. Driver fatigue often becomes a
direct cause of many traffic accidents. Therefore, there is a need to
develop systems that will detect and notify a driver of his/her bad
psychophysical condition, which could significantly reduce the
number of fatigue-related car accidents. However, the development of
such systems encounters many difficulties related to fast and proper
recognition of a driver’s fatigue symptoms. One of the technical
possibilities for implementing driver drowsiness detection systems is
to use a vision-based approach. The technical aspects of using a
vision system to detect driver drowsiness are also discussed.

INDEX

S.NO TITLE

Abstract

1. Introduction

2. Currently Used Driver Fatigue Detection System

3. Using a Vision-Based System to Detect Fatigue

3.1. Block Diagram

4. How the Driver Drowsiness Detection System Works

4.1. Drowsiness Detection with OpenCV

5. Drowsiness Detection Algorithm

5.1. Building the Drowsiness Detector with OpenCV
5.2. pip install --upgrade imutils
5.3. pip install playsound
5.4. Facial Landmarks Produced by dlib

6. System Analysis and Requirements

6.1. Machine Learning
6.2. Machine Learning Methods
6.3. System Requirements

7. System Design
7.1. Functional Specifications
7.2. Non-Functional Specifications
7.3. Use Case Diagram
7.4. Class Diagram
7.5. Sequence Diagram
7.6. Activity Diagram
7.7. State Chart Diagram
7.8. Component Diagram
7.9. Object Diagram
7.10. Collaboration Diagram
7.11. Deployment Diagram
7.12. Data Flow Diagram

8. Testing the OpenCV Drowsiness Detector

9. Conclusion

CHAPTER-1

INTRODUCTION

The development of technology allows introducing more


advanced solutions in everyday life. This makes work less exhausting
for employees, and also increases work safety. Vision-based
systems are becoming more popular and are more widely used in
different applications. These systems can be used in industry (e.g.
sorting systems), transportation (e.g. traffic monitoring), airport
security (e.g. suspect detection systems), and in the end-user
complex products such as cars (car parking camera). Such complex
systems could also be used to detect vehicle operator fatigue using
vision-based solutions. Fatigue is a psychophysical condition which
does not allow for full concentration. It influences the human
response time, because a tired person reacts much slower compared
to a rested one. The appearance of the first signs of fatigue
can become very dangerous, especially for such professions like
drivers. Nowadays, more and more professions require long-term
concentration. People, who work for transportation business (car
and truck drivers, steersmen, airplane pilots), must keep a close eye
on the road, so they can react to sudden events (e.g. road accidents,
animals on the road, etc.) immediately. Long hours of driving cause
driver fatigue and, consequently, reduce his/her response time.
According to the results of the study presented at the International
Symposium on Sleep Disorders, fatigue of drivers is responsible for
30% of road accidents. The British journal “What Car?” presented
results of the experiment conducted with the driving simulator and
they concluded that a tired driver is much more dangerous than a
person whose blood alcohol level is 25% above the allowed limit.
Driver fatigue can cause a microsleep (e.g. loss of concentration, a
short sleep lasting from 1 to 30 seconds), and falling asleep behind
the wheel. Therefore, there is a need to develop a system that will
detect and notify a driver of his/her bad psychophysical condition,
which could significantly reduce the number of fatigue-related car
accidents. However, the biggest difficulties in development of such a
system are related to fast and proper recognition of a driver’s fatigue
symptoms. Due to the increasing number of vehicles on the road,
which translates directly into road accidents, equipping a car
with the fatigue detection system is a must. One of the technical
possibilities to implement such a system is to use a vision-based
approach. With the rapid development of image analysis techniques
and methods, and a number of ready commercial off-the-shelf
solutions (e.g. high-resolution cameras, embedded systems, sensors),
it can be envisaged, that introducing such systems into widespread
use should be easy. Car drivers, truck drivers, taxi drivers, etc. should
be allowed to use this solution to increase the safety of the
passengers, other road users and the goods they carry.

Currently Used Driver Fatigue Detection System

One of the examples of a system detecting a driver’s fatigue is


the system implemented into the Driver Assistant in Ford cars. It
analyses rapid steering movements, driving onto lines separating
lanes, irregular and rapid braking or acceleration. The system collects
and processes these data, and assigns the driver one of five
concentration levels (5 – the driver is concentrated and drives properly;
1 – the driver is very tired and should immediately stop driving and rest).
When the rating falls to level 1, the driver is notified by beeps and
warnings on the instrument panel's middle screen. The system can be
reset and the warnings will disappear, only when the driver stops
and opens the door. Skoda cars use a similar system. It analyses the
steering movements and compares them to the movements in normal
driving. The system begins to analyse how the vehicle performs 15
minutes after starting the engine and at the speeds of more than 65
km/h. When the system detects that driving is abnormal, the driver's
fatigue status is displayed on the screen, followed by a beep,
informing the driver to take a break. Volkswagen uses the Bosch
Driver Drowsiness Detection system. It also analyses how a car
behaves on the road. Based on the information from the power
assisted steering sensor and the steering angle sensor, the system
detects sudden changes in the trajectory of the vehicle, which
translates into driver’s fatigue. Some driver fatigue detection
methods use the heart rate analysis. The psychophysical state is
determined by the HRV (heart rate variability). DENSO
(manufacturer of car parts and systems) at the Detroit Auto Show
presented a system that relies on a driver's heart rate analysis and
the use of the cameras to observe a driver’s eyes. Such a solution
allows detecting a fatigue at the operator of the vehicle. There are
also ideas for the use of electroencephalogram (EEG) to detect the
driver's brain wave changes, which may indicate the first symptom of
fatigue. The fatigue-information panel view is shown below.

View of driver fatigue information (Ford Driver Assistant)

The PSA Group (formerly PSA Peugeot Citroën), in


collaboration with the Lausanne University of Technology, is
working on a camera-based system to analyse the facial expressions
of a driver. It is interesting to note that the very early aim of this
system was the detection of a driver’s emotions, but they decided to
develop it into a fatigue detection system. It is based on the
analysis of eye movement, the closing and opening of the eyelids as
well as the movement of the mouth. It allows detecting the first
symptoms of fatigue. Information provided by this system will
inform a driver about his/her psychophysical state.

Driver Drowsiness Detection System using EEG

Using a vision-based system to detect fatigue

Fatigue detection is not an easy task. It requires taking into account


many factors. Using a video system for this purpose can be a good
solution. This system would allow for precise detection of a fatigue in
real time. The speed of such a system is very important because even
slight delays in the operation of such a system could be fatal
(excessive reaction of the system while traveling along the highway).
An important issue in the design of the vision-based driver fatigue
detection system is the right choice of the analysed symptoms of
fatigue. In a situation, where it is not possible to monitor all potential
symptoms of fatigue, detection should be limited to the most
important ones, such as: closing of the eyelids, slow eye
movements, yawning and drooping of the head. The basis of the fatigue
detection system are the algorithms responsible for detecting facial
features and their motion. There are many methods that allow
detecting individual facial elements. They are based both on the
vector operations and the pattern classification. Particular methods
are based on an image filtering in complex space or an image
processing in spatial-frequency domain. Some methods are very
effective in detecting characteristic facial features, but sensitive to
changing lighting conditions. There are also methods that rely on
analysing a 3D image. The most popular methods are Principal
Component Analysis, neural networks, Gabor filters, and frequency-
spatial methods. Principal Component Analysis (PCA) is based on
eigenvectors. It is often used during pre-processing to remove noise
from the image (noise components correspond to the small variances
associated with the corresponding eigenvectors). The method based
on neural networks is used for processing input data. Neural
networks are used for identification and classification of pattern data,
and therefore they are also used in face detection and recognition
systems. Gabor filters are one of the most commonly used methods
for representing facial features, using complex functions. Frequency-
spatial methods are based on frequency analysis of the image in
conjunction with the methods based on a geometric model.
Frequency-spatial methods allow for the proper isolation of the
characteristic facial features and minimizing the influence of lighting
conditions during the acquisition. In vision-based systems it is
important to correctly identify the specific elements as well as to
analyse their movement. Common methods used to detect a motion
in video systems are differential and gradient methods. Differential
methods determine the difference between the subsequent image
frames. This allows determining the brightness level in the grayscale
or the colour intensity of the pixel during the frame changes. So, the
movement of the object can be detected (this is related to the change
in the brightness of the pixels that appear next to each other in the
image). This is a simple way to detect a movement, however, its
implementation can be tedious. One of the limitations of this method
is the need for a stationary background; the lighting should be constant,
and the noise in the video should be reduced to a minimum (otherwise,
the algorithm may not work properly). Additionally, in order to
improve a movement detection, the moving object should contrast
with the background. Gradient methods rely on the optical flow. They
use spatial and temporal derivatives of the consecutive video frames.
In order to make an effective use of this group of the methods, the
following conditions should be met: invariability of the light, a small
displacement of moving objects in one sequence and spatial
coherence of the contiguous dots. Two most popular gradient
algorithms are the Lucas-Kanade and Horn-Schunck algorithms. The
principle of operation of the first algorithm rests on a characteristic
assumption: the brightness of the points in the image is unchanged
over time and the motion is constant within a small neighbourhood of
each tracked point. Lucas-Kanade is thus a local method, estimating
motion only within a small window around each point, so it is suited
to small displacements between consecutive frames. Its performance
can be improved by implementing the algorithm in a pyramidal form
(the image is first analysed at a coarse resolution and the estimate is
then gradually refined at finer resolutions). The Horn-Schunck
algorithm is based on
conditions: the brightness of the dots (the pixel brightness of the
moving object in the image is constant) and the speed of the dots (the
speed of the pixels belonging to the moving object are close to each
other, the motion field changes smoothly). This method belongs to
the global methods. Thanks to this, we get a high density of the flow
vector, which results in a more accurate information about the
movement of the object, including information about the area under
investigation in which the object is moving. The disadvantage of this
algorithm is that it is more susceptible to interference compared to
the local methods (e.g. Lucas-Kanade). The example of the application
of an optical flow is shown below.

Example of using optical flow
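For illustration, a minimal sketch of sparse pyramidal Lucas-Kanade tracking
with OpenCV is given below. The camera index and the tracking parameters
(maxCorners, qualityLevel, minDistance) are illustrative assumptions, not
values prescribed by this report:

import cv2

# open the default webcam (index 0 is an assumption)
cap = cv2.VideoCapture(0)
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# select corner-like points that are easy to track between frames
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
    qualityLevel=0.3, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # estimate where each tracked point moved in the new frame;
    # status[i] == 1 means point i was found again
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
        pts, None)

    # draw the motion vector of each successfully tracked point
    for new, old in zip(next_pts[status == 1], pts[status == 1]):
        x1, y1 = old.ravel()
        x2, y2 = new.ravel()
        cv2.line(frame, (int(x1), int(y1)), (int(x2), int(y2)),
            (0, 255, 0), 2)

    cv2.imshow("Lucas-Kanade optical flow", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

    # the current frame and points become the reference for the next pass
    prev_gray = gray
    pts = next_pts[status == 1].reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()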

When designing a video system that records moving objects, one


should choose algorithms that are resistant to interference.
Interference occurrence may disturb the processing and the analysis
of the data, which may lead to misinterpretation by the system. If the
selected methods are susceptible to interference, then the system will
not analyse the movement of the elements correctly. It may lead to a kind of
dynamic "jumps" of the system between the observed objects. The
next procedure is to register and analyse the movement of the
classified features. It should be remembered that real-time systems
require a rapid response to the changes in observed objects. For
example, if the eyelid is closed for a long time, the system’s response
(“eyelids closed”) should be immediate. Any delay in the identification of
a fatigue can have catastrophic consequences. If the system does not
respond in time to the driver's microsleep, an accident may occur.
Problems related to design vision-based fatigue detection system
operating in real-time also apply to the hardware used for the video
signal acquisition and processing. One of the issues is the number of
cameras recording the object of the interest. We can install three
cameras in front of a driver, apply algorithms that generate a 3D
image, and then implement the appropriate data analysis methods.
Information will be the most accurate, but associated costs of the
final system may disqualify it for the industrial scale. Limiting the
number of the cameras to one will allow you to cut costs
considerably. The quality of the recording equipment is also of great
importance. Using equipment of poor quality, recorded video signal
may become noisy, affecting the performance of the algorithms for
video processing and analysing, which ultimately can lead to
misinterpretation of the results by the system. In addition, since
capturing details of the face is of great importance to the system,
the vision system should use high-resolution cameras.

BLOCK DIAGRAM
In order to reduce the number of road accidents resulting from
a driver fatigue, it is of great importance to introduce to the
automotive industry a system that would effectively detect the first
signs of a fatigue and notify the driver. A system based on real-time
face analysis can be one of the most effective approaches for
detecting fatigue symptoms. There are many problems associated
with its design such as uneven illumination of a driver’s face or the
selection of effective real-time data processing algorithms to name a
few. Current technological advances in video recording and
processing help reduce and even eliminate such problems. It is
envisaged that integrating such a system with other on-board car
driving systems would definitely increase road safety. The block
diagram of the hypothetical system, and the principle of its operation,
is presented below. The investigations of the proposed drowsiness
detection vision system will be continued and the results of the
research will be delivered.

Block diagram of driver drowsiness detection vision system



CHAPTER-2

How the Driver Drowsiness Detection System Works

During long journeys, it’s possible that the driver may lose
attention because of drowsiness, which may be a potential cause of
fatal accidents. With technologies like Driver Drowsiness
Detection, it is possible to detect driving behaviour
that may prove fatal to the vehicle as well as the people on board.

Having such a sleep detection system embedded in vehicles


could protect precious lives and property worth billions of dollars. The
outcome would be positive – it would be suitable for fleet owners as
well as individual vehicle users. In either case, the objective is
identical: sleep detection while driving.

In this chapter, we’ll discuss how Driver Drowsiness Detection works in

vehicles.
Driving a vehicle involves coordination of the locomotor system
along with the healthy function of the brain. When the driver feels
drowsy, it may unsettle the balance and may lead to erratic driving
causing potential accidents.

While driving, you may feel drowsy when you’re under driving
fatigue because of continuous driving for several hours. It’s here that
the driver drowsiness detection plays a significant role in preventing
accidents that could otherwise cause massive loss of life and
property.

How does Drowsiness Detection System work?

The driver drowsiness detection system, mounted on the vehicle’s
dashboard, uses image processing to analyse the driver’s eye blink
pattern.

If the eyelid movements are more abnormal than usual, then the detection
system triggers the alarm, thus alerting the driver about the
condition.

According to the National Highway Traffic Safety Administration, around

72,000 crashes, 44,000 injuries and 800+ deaths were reportedly caused
by drowsiness in the year 2013.

Importance of sleep detection – who needs it?


Life is precious, and no number of words suffices to evaluate it. It is,
therefore, imperative to protect it from fatal consequences while
driving a vehicle. This applies especially if you own one or more
fleets of vehicles: your vehicles comprise heavy capital assets, which
you need to protect from potential losses because of fatal accidents.

There are diverse products that protect and ensure the safety of
vehicles but most of them don’t come with a built-in drowsiness
detection sensor.

If you’re looking for a feature such as sleep detection in vehicles
and happen to be a fleet owner, then you have to deal with multiple
devices and the multiple apps associated with them.

Practically, dealing with multiple devices is not an ideal option for
fleet owners or even for individual car owners.

Drowsiness detection with OpenCV

A car rigged with a drowsiness detector


We could also use a Raspberry Pi 3, owing to (1) its small form factor
and (2) the real-world implications of building a driver drowsiness
detector on very affordable hardware. Note, however, that the Raspberry
Pi isn’t quite fast enough for real-time facial landmark detection.

CHAPTER-3

The drowsiness detection algorithm

The general flow of our drowsiness detection algorithm is fairly


straightforward.

First, we’ll set up a camera that monitors a stream for faces:

Look for faces in the input video stream.

If a face is found, we apply facial landmark detection and extract the


eye regions:

Apply facial landmark localization to extract the eye regions from the
face.

Now that we have the eye regions, we can compute the eye aspect
ratio to determine if the eyes are closed:

Figure 5: Step #3 — Compute the eye aspect ratio to determine if the eyes are
closed.

If the eye aspect ratio indicates that the eyes have been closed for a
sufficiently long enough amount of time, we’ll sound an alarm to
wake up the driver:

Figure 6: Step #4 — Sound an alarm if the eyes have been closed for a
sufficiently long enough time.

In the next section, we’ll implement the drowsiness detection


algorithm detailed above using OpenCV, dlib, and Python.

Building the drowsiness detector with OpenCV


To start our implementation, open up a new file, name
it detect_drowsiness.py, and use the below packages:

1 # import the necessary packages


2 from scipy.spatial import distance as dist
3 from imutils.video import VideoStream
4 from imutils import face_utils
5 from threading import Thread
6 import numpy as np
7 import playsound
8 import argparse
9 import imutils
10 import time
11 import dlib
12 import cv2
Lines 2-12 import our required Python packages.

The SciPy package is used to compute the Euclidean distance


between facial landmark points in the eye aspect ratio calculation
(not strictly a requirement, but you should have SciPy installed if you
intend on doing any work in the computer vision, image processing,
or machine learning space).
The imutils package provides computer vision and image processing
convenience functions that make working with OpenCV easier.
If you don’t already have imutils installed on your system, you can
install/upgrade imutils via:

pip install --upgrade imutils



We import the Thread class so we can play our alarm in a separate thread from
the main thread to ensure our script doesn’t pause execution while
the alarm sounds.
In order to actually play our WAV/MP3 alarm, we need
the playsound library, a pure Python, cross-platform implementation
for playing simple sounds.
The playsound library is conveniently installable via pip:

pip install playsound

To detect and localize facial landmarks we’ll need the dlib


library which is imported on Line 11.

Next, we need to define our sound_alarm function which accepts


a path to an audio file residing on disk and then plays the file:
14 def sound_alarm(path):
15 # play an alarm sound
16 playsound.playsound(path)

Next, we define the eye_aspect_ratio function, which is used to compute
the ratio of distances between the vertical eye landmarks and the
distances between the horizontal eye landmarks:

17 def eye_aspect_ratio(eye):
18     # compute the euclidean distances between the two sets of
19     # vertical eye landmarks (x, y)-coordinates
20     A = dist.euclidean(eye[1], eye[5])
21     B = dist.euclidean(eye[2], eye[4])
22
23     # compute the euclidean distance between the horizontal
24     # eye landmark (x, y)-coordinates
25     C = dist.euclidean(eye[0], eye[3])
26
27     # compute the eye aspect ratio
28     ear = (A + B) / (2.0 * C)
29
30     # return the eye aspect ratio
31     return ear

The return value of the eye aspect ratio will be approximately


constant when the eye is open. The value will then rapidly
decrease towards zero during a blink.

If the eye is closed, the eye aspect ratio will again remain
approximately constant, but will be much smaller than the ratio
when the eye is open.

Top-left: A visualization of eye landmarks when the eye is open.


Top-right: Eye landmarks when the eye is closed.
Bottom: Plotting the eye aspect ratio over time.
The dip in the eye aspect ratio indicates a blink.

On the top-left we have an eye that is fully open with the eye facial
landmarks plotted. Then on the top-right we have an eye that is
closed. The bottom then plots the eye aspect ratio over time.
As we can see, the eye aspect ratio is constant (indicating the eye
is open), then rapidly drops to zero, then increases again,
indicating a blink has taken place.

In our drowsiness detector case, we’ll be monitoring the eye


aspect ratio to see if the value falls but does not increase again,
thus implying that the person has closed their eyes.
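As a quick sanity check, the function above can be called with hand-picked
landmark coordinates. The numbers below are made up purely for illustration
and do not come from a real face:

# illustrative (x, y)-coordinates, ordered p1..p6 (not from a real face)
open_eye = np.array([[0, 5], [10, 0], [20, 0], [30, 5], [20, 10], [10, 10]])
closed_eye = np.array([[0, 5], [10, 4], [20, 4], [30, 5], [20, 6], [10, 6]])
print(eye_aspect_ratio(open_eye))    # (10 + 10) / (2 * 30) ≈ 0.33
print(eye_aspect_ratio(closed_eye))  # (2 + 2) / (2 * 30) ≈ 0.07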

34 # construct the argument parse and parse the arguments


35 ap = argparse.ArgumentParser()
36 ap.add_argument("-p", "--shape-predictor", required=True,
37 help="path to facial landmark predictor")
38 ap.add_argument("-a", "--alarm", type=str, default="",
39 help="path alarm .WAV file")
40 ap.add_argument("-w", "--webcam", type=int, default=0,
41 help="index of webcam on system")
42 args = vars(ap.parse_args())

Our drowsiness detector requires one command line argument


followed by two optional ones, each of which is detailed below:

• --shape-predictor: This is the path to dlib’s pre-trained facial


landmark detector.
• --alarm: Here you can optionally specify the path to an input audio
file to be used as an alarm.
• --webcam: This integer controls the index of your built-in
webcam/USB camera.

Now that our command line arguments have been parsed, we


need to define a few important variables:

44 # define two constants, one for the eye aspect ratio to indicate a
45 # blink and then a second constant for the number of consecutive
46 # frames the eye must be below the threshold for to set off the
47 # alarm
48 EYE_AR_THRESH = 0.3
49 EYE_AR_CONSEC_FRAMES = 48
50
51 # initialize the frame counter as well as a boolean used to
52 # indicate if the alarm is going off
53 COUNTER = 0
54 ALARM_ON = False

Line 48 defines the EYE_AR_THRESH. If the eye aspect ratio


falls below this threshold, we’ll start counting the number of
frames the person has closed their eyes for.
If the number of frames the person has closed their eyes in
exceeds EYE_AR_CONSEC_FRAMES (Line 49), we’ll sound an
alarm.
Experimentally, I’ve found that an EYE_AR_THRESH of 0.3 works
well in a variety of situations (although you may need to tune it
yourself for your own applications).
I’ve also set the EYE_AR_CONSEC_FRAMES to be 48, meaning that
if a person has closed their eyes for 48 consecutive frames, we’ll
play the alarm sound.
You can make the drowsiness detector more
sensitive by decreasing the EYE_AR_CONSEC_FRAMES —
similarly, you can make the drowsiness detector less sensitive by
increasing it.
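As a rough worked example (the frame rate is an assumption, since it
depends on the camera and the hardware): at approximately 16 frames per
second, 48 consecutive frames corresponds to 48 / 16 = 3 seconds of closed
eyes before the alarm sounds.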
Line 53 defines COUNTER, the total number of consecutive
frames where the eye aspect ratio is below EYE_AR_THRESH.
If COUNTER exceeds EYE_AR_CONSEC_FRAMES, then we’ll
update the Boolean ALARM_ON (Line 54).
The dlib library ships with a histogram of oriented gradient
based face detector along with a facial landmark predictor — we
instantiate both of these in the following code block:

56 # initialize dlib's face detector (HOG-based) and then create


57 # the facial landmark predictor
58 print("[INFO] loading facial landmark predictor...")
59 detector = dlib.get_frontal_face_detector()
60 predictor = dlib.shape_predictor(args["shape_predictor"])

The facial landmarks produced by dlib are an indexable list, as
visualized below:

Visualizing the 68 facial landmark coordinates from the iBUG 300-W dataset.

Therefore, to extract the eye regions from a set of facial landmarks,


we simply need to know the correct array slice indexes:

61 # grab the indexes of the facial landmarks for the left and
62 # right eye, respectively
63 (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
64 (rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]

Using these indexes, we’ll easily be able to extract the eye regions via
an array slice.
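As a quick check (assuming the standard 68-point iBUG layout used by dlib’s
shape predictor), the two slices should evaluate as follows:

# sanity check of the eye slice indexes (68-point iBUG layout assumed)
from imutils import face_utils
print(face_utils.FACIAL_LANDMARKS_IDXS["left_eye"])   # expected: (42, 48)
print(face_utils.FACIAL_LANDMARKS_IDXS["right_eye"])  # expected: (36, 42)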

We are now ready to start the core of our drowsiness detector:

67 # start the video stream thread
68 print("[INFO] starting video stream thread...")
69 vs = VideoStream(src=args["webcam"]).start()
70 time.sleep(1.0)
71
72 # loop over frames from the video stream
73 while True:
74     # grab the frame from the threaded video file stream,
75     # resize it, and convert it to
76     # grayscale channels
77     frame = vs.read()
78     frame = imutils.resize(frame, width=450)
79     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
80
81     # detect faces in the grayscale frame
82     rects = detector(gray, 0)

On Line 69 we instantiate our VideoStream using the supplied
--webcam index.
We then pause for a second to allow the camera sensor to warm
up (Line 70).
On Line 73 we start looping over frames in our video stream.
Line 77 reads the next frame, which we then pre-process by
resizing it to have a width of 450 pixels and converting it to
grayscale (Lines 78 and 79).
Line 82 applies dlib’s face detector to find and locate the face(s) in
the image.
The next step is to apply facial landmark detection to localize each
of the important regions of the face:

84     # loop over the face detections
85     for rect in rects:
86         # determine the facial landmarks for the face region, then
87         # convert the facial landmark (x, y)-coordinates to a NumPy
88         # array
89         shape = predictor(gray, rect)
90         shape = face_utils.shape_to_np(shape)
91
92         # extract the left and right eye coordinates, then use the
93         # coordinates to compute the eye aspect ratio for both eyes
94         leftEye = shape[lStart:lEnd]
95         rightEye = shape[rStart:rEnd]
96         leftEAR = eye_aspect_ratio(leftEye)
97         rightEAR = eye_aspect_ratio(rightEye)
98
99         # average the eye aspect ratio together for both eyes
100        ear = (leftEAR + rightEAR) / 2.0

We loop over each of the detected faces on Line 85 — in our


implementation (specifically related to driver drowsiness), we
assume there is only one face — the driver — but I left this
for loop in here just in case you want to apply the technique to
videos with more than one face.
For each of the detected faces, we apply dlib’s facial landmark
detector (Line 89) and convert the result to a NumPy array (Line
90).
Using NumPy array slicing we can extract the (x, y)-coordinates of
the left and right eye, respectively (Lines 94 and 95).
Given the (x, y)-coordinates for both eyes, we then compute their
eye aspect ratios on Lines 96 and 97. Line 100 then averages both eye
aspect ratios together to obtain a better estimation.
We can then visualize each of the eye regions on our frame by
using the cv2.drawContours function below — this is often helpful
when we are trying to debug our script and want to ensure that the
eyes are being correctly detected and localized:

102         # compute the convex hull for the left and right eye, then
103         # visualize each of the eyes
104         leftEyeHull = cv2.convexHull(leftEye)
105         rightEyeHull = cv2.convexHull(rightEye)
106         cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
107         cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)

We are now ready to check to see if the person in our video stream
is starting to show symptoms of drowsiness:
109         # check to see if the eye aspect ratio is below the blink
110         # threshold, and if so, increment the blink frame counter
111         if ear < EYE_AR_THRESH:
112             COUNTER += 1
113
114             # if the eyes were closed for a sufficient number of frames,
115             # then sound the alarm
116             if COUNTER >= EYE_AR_CONSEC_FRAMES:
117                 # if the alarm is not on, turn it on
118                 if not ALARM_ON:
119                     ALARM_ON = True
120
121                     # check to see if an alarm file was supplied,
122                     # and if so, start a thread to have the alarm
123                     # sound played in the background
124                     if args["alarm"] != "":
125                         t = Thread(target=sound_alarm,
126                             args=(args["alarm"],))
127                         t.daemon = True
128                         t.start()
129
130                 # draw an alarm on the frame
131                 cv2.putText(frame, "DROWSINESS ALERT!", (10, 30),
132                     cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
133
134         # otherwise, the eye aspect ratio is not below the blink
135         # threshold, so reset the counter and alarm
136         else:
137             COUNTER = 0
138             ALARM_ON = False

On Line 111 we make a check to see if the eye aspect ratio is


below the “blink/closed” eye threshold, EYE_AR_THRESH.

If it is, we increment COUNTER, the total number of consecutive


frames where the person has had their eyes closed.
If COUNTER exceeds EYE_AR_CONSEC_FRAMES (Line 116),
then we assume the person is starting to doze off.
Another check is made, this time on Line 118 and 119 to see if the
alarm is on — if it’s not, we turn it on.
Lines 124-128 handle playing the alarm sound, provided an --
alarm path was supplied when the script was executed. We take
special care to create a separate thread responsible for
calling sound_alarm to ensure that our main program isn’t blocked
until the sound finishes playing.
Lines 131 and 132 draw the text DROWSINESS ALERT! on
our frame — again, this is often helpful for debugging, especially if
you are not using the playsound library.
Finally, Lines 136-138 handle the case where the eye aspect ratio
is larger than EYE_AR_THRESH, indicating the eyes are open. If
the eyes are open, we reset COUNTER and ensure the alarm is
off.
The final code block in our drowsiness detector handles displaying
the output frame to our screen:

140         # draw the computed eye aspect ratio on the frame to help
141         # with debugging and setting the correct eye aspect ratio
142         # thresholds and frame counters
143         cv2.putText(frame, "EAR: {:.2f}".format(ear), (300, 30),
144             cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
145
146     # show the frame
147     cv2.imshow("Frame", frame)
148     key = cv2.waitKey(1) & 0xFF
149
150     # if the `q` key was pressed, break from the loop
151     if key == ord("q"):
152         break
153
154 # do a bit of cleanup
155 cv2.destroyAllWindows()
156 vs.stop()

CHAPTER-4

System Analysis and Requirements: --

MACHINE LEARNING: --

Machine Learning is a sub-area of artificial intelligence, whereby the term


refers to the ability of IT systems to independently find solutions to
problems by recognizing patterns in databases. In other words: Machine
Learning enables IT systems to recognize patterns on the basis of existing
algorithms and data sets and to develop adequate solution concepts.
Therefore, in Machine Learning, artificial knowledge is generated on the
basis of experience. In order to enable the software to independently
generate solutions, the prior action of people is necessary. For example, the
required algorithms and data must be fed into the systems in advance and
the respective analysis rules for the recognition of patterns in the data stock
must be defined. Once these two steps have been completed, the system can
perform the following tasks by Machine Learning:

• Finding, extracting and summarizing relevant data


• Making predictions based on the analysis data
• Calculating probabilities for specific results
• Adapting to certain developments autonomously
• Optimizing processes based on recognized patterns

ADVANTAGES OF MACHINE LEARNING

Easily identifies trends and patterns: --

Machine Learning can review large volumes of data and


discover specific trends and patterns that would not be apparent to
humans. For instance, for an e-commerce website like Amazon, it serves
to understand the browsing behaviours and purchase histories of its users
to help cater to the right products, deals, and reminders relevant to them.
It uses the results to reveal relevant advertisements to them.

No human intervention needed (automation): --



With ML, you don’t need to babysit your project every step
of the way. Since it means giving machines the ability to learn, it lets
them make predictions and also improve the algorithms on their own. A
common example of this is anti-virus software; they learn to filter new
threats as they are recognized. ML is also good at recognizing spam.

Continuous Improvement: --

As ML algorithms gain experience, they keep improving in


accuracy and efficiency. This lets them make better decisions. Say you
need to make a weather forecast model. As the amount of data, you have
keeps growing, your algorithms learn to make more accurate predictions
faster.

Handling multi-dimensional and multi-variety data: --

Machine Learning algorithms are good at handling data that are


multi-dimensional and multi-variety, and they can do this in dynamic or
uncertain environments.

Wide Applications: --

You could be an e-tailer or a healthcare provider and make ML


work for you. Where it does apply, it holds the capability to help deliver a
much more personal experience to customers while also targeting the
right customers.

MACHINE LEARNING METHODS

SUPERVISED LEARNING: --

These algorithms are trained using labeled examples, in


different scenarios, as an input where the desired outcome is already
known. A piece of equipment, for instance, could have data points such as "F"
and "R" where "F" represents "failed" and "R" represents "runs". A
learning algorithm will receive a set of input instructions along with the
corresponding accurate outcomes. The learning algorithm will then
compare the actual outcome with the accurate outcome and flag an error,
if there is any discrepancy. Using different methods, such as regression,
classification, gradient boosting, and prediction, supervised learning uses
different patterns to proactively predict the values of a label on extra
unlabelled data. This method is commonly used in areas where historical
data is used to predict events that are likely to occur in the future. For
instance, anticipate when a credit card transaction is likely to be
fraudulent or predict which insurance customers are likely to file their
claims.
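As a minimal sketch of the idea (scikit-learn and the tiny made-up data set
below are illustrative assumptions, not part of this project):

# a toy supervised-learning example: learn "failed"/"runs" labels
# from two made-up sensor readings per machine
from sklearn.linear_model import LogisticRegression

X = [[80, 1.2], [95, 3.1], [60, 0.4], [99, 4.0]]  # e.g. temperature, vibration
y = ["R", "F", "R", "F"]                          # known outcomes: runs/failed

model = LogisticRegression().fit(X, y)
print(model.predict([[90, 2.5]]))                 # label predicted for a new reading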

UNSUPERVISED LEARNING: --

This method of ML finds its application in areas where data has no


historical labels. Here, the system will not be provided with the
"right answer" and the algorithm should identify what is being
shown. The main aim here is to analyse the data and identify a
pattern and structure within the available data set. Transactional
data serves as a good source of data set for unsupervised learning.
For instance, this type of learning identifies customer segments
with similar attributes and then lets the business to treat them
similarly in marketing campaigns. Similarly, it can also identify
attributes that differentiate customer segments from one another.
Either way, it is about identifying a similar structure in the
available data set. Besides, these algorithms can also identify
outliers in the available data sets. Some of the widely used
techniques of unsupervised learning are –

• k-means clustering (a minimal sketch follows this list)

• self-organizing maps

• singular value decomposition

• nearest-neighbour mapping
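A minimal k-means sketch (scikit-learn and the made-up customer data are
illustrative assumptions):

# a toy unsupervised-learning example: group customers into two
# segments using only their attributes, with no labels provided
from sklearn.cluster import KMeans

X = [[25, 300], [27, 320], [45, 1500], [48, 1450]]  # e.g. age, monthly spend
kmeans = KMeans(n_clusters=2, n_init=10).fit(X)
print(kmeans.labels_)  # cluster assigned to each customer, e.g. [0 0 1 1]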

SEMI-SUPERVISED LEARNING: --

This kind of learning is used and applied to the same kind of


scenarios where supervised learning is applicable. However, one
must note that this technique uses both unlabelled and labelled data
for training. Ideally, a small set of labelled data, along with a large
volume of unlabelled data is used, as it takes less time, money and
efforts to acquire unlabelled data. This type of machine learning is
often used with methods, such as regression, classification and
prediction. Companies that usually find it challenging to meet the
high costs associated with labelled training process opt for semi-
supervised learning.

REINFORCEMENT LEARNING: --

This is mainly used in navigation, robotics and gaming. Actions


that yield the best rewards are identified by algorithms that use trial
and error methods. There are three major components in
reinforcement learning, namely, the agent, the actions and the
environment. The agent in this case is the decision maker, the
actions are what an agent does, and the environment is anything
that an agent interacts with. The main aim in these kinds of
learning is to select the actions that maximize the reward, within a
specified time. By following a good policy, the agent can achieve
the goal faster. Hence, the primary idea of reinforcement learning
is to identify the best policy or the method that helps businesses in
achieving the goals faster. While humans can create a few good
models in a week, machine learning is capable of developing
thousands of such models in a week.

SYSTEM REQUIREMENTS

HARDWARE REQUIREMENTS: --

System: Broadcom Processor

Hard Disk: 100MB.

Monitor: 15 VGA Colour

Mouse: Logitech.

Ram: 1 GB

Small Display: LCD Display

Camera : WebCam

SOFTWARE REQUIREMENTS: --

Operating system: Windows/Linux

Coding language: Python 3



CHAPTER-5

SYSTEM DESIGN

PROPOSED SYSTEM: --

In the existing system, an alarm is generated based only on the
speed of the driver, whereas in the proposed system an alarm is
generated whenever drowsiness of the driver is detected.

Advantages: --

• Increase in safety

• Better transportation services

• Avoid accidents

FUNCTIONAL SPECIFICATIONS: --

The system should be able to meet the following functionalities:

1. To detect drowsiness in driver.

2. Issue alert when drowsiness is detected

NON-FUNCTIONAL SPECIFICATIONS: --

Unified Modelling Language (UML) is a general-purpose


modelling language. The main aim of UML is to define a standard way to
visualize the way a system has been designed. It is quite similar to
blueprints used in other fields of engineering. UML is not a programming
language; it is rather a visual language. We use UML diagrams to portray
the behaviour and structure of a system. UML helps software engineers,
businessmen and system architects with modelling, design and analysis.

The goal is for UML to become a common language for creating


models of object-oriented computer software. In its current form UML is
comprised of two major components: a Meta-model and a notation. In the
future, some form of method or process may also be added to, or
associated with, UML. The Unified Modelling Language is a standard
language for specifying, visualizing, constructing and documenting the
artifacts of software system, as well as for business modelling and other
non-software systems. The UML represents a collection of best
engineering practices that have proven successful in the modelling of
large and complex systems. The UML is a very important part of
developing object-oriented software and the software development
process. The UML uses mostly graphical notations to express the design
of software projects.

GOALS: --

The Primary goals in the design of the UML are as follows:

1. Provide users with a ready-to-use, expressive visual modelling language so


that they can develop and exchange meaningful models.

2. Provide extendibility and specialization mechanisms to extend the core


concepts.

3. Be independent of particular programming languages and development


process.

4. Provide a formal basis for understanding the modeling language.

5. Encourage the growth of OO tools market.

6. Support higher level development concepts such as collaborations,


frameworks, patterns and components.

7. Integrate best practices

USE CASE DIAGRAM: --

A use case diagram in the Unified Modelling Language (UML) is a


type of behavioural diagram defined by and created from a Use-case
analysis. Its purpose is to present a graphical overview of the
functionality provided by a system in terms of actors, their goals
(represented as use cases), and any dependencies between those use
cases. The main purpose of a use case diagram is to show what system
functions are performed for which actor. Roles of the actors in the system
can be depicted. A use case can be described as a specific way of using
the system from a user’s (actor’s) perspective.

A use case diagram is usually simple. It does not show the detail of the
use cases:

• It only summarizes some of the relationships between use cases and
actors, and

• It does not show the order in which steps are performed to achieve
the goals of each use case.

Use Case Diagram

CLASS DIAGRAM: --

Class diagrams are the main building block in object-oriented


modelling. They are used to show the different objects in a system, their
attributes, their operations and the relationships among them.

Class diagram in the Unified Modelling Language (UML) is a type


of static structure diagram that describes the structure of a system by
showing the system's classes, their attributes, operations (or methods),
and the relationships among the classes. It explains which class contains
information.

Class Diagram

SEQUENCE DIAGRAM: --

A sequence diagram simply depicts interaction between objects in


a sequential order i.e. the order in which these interactions take place. We
can also use the terms event diagrams or event scenarios to refer to a
sequence diagram. Sequence diagrams describe how and in what order
the objects in a system function. A sequence diagram in the Unified
Modelling Language (UML) is a kind of interaction diagram that shows
how processes operate with one another and in what order. It is a
construct of a Message Sequence Chart. Sequence diagrams are
sometimes called event diagrams, event scenarios, and timing diagrams.

Sequence Diagram

ACTIVITY DIAGRAM: --

Activity diagram focuses on the execution and flow of the behavior


of a system instead of implementation. It is also called an object-oriented
flowchart. Activity diagrams consist of activities that are made up of
actions, which apply to behavioural modelling technology. Activity
diagrams are graphical representations of workflows of stepwise activities
and actions with support for choice, iteration and concurrency. In the
Unified Modelling Language, activity diagrams can be used to
describe the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of
control.

Activity Diagram

STATECHART DIAGRAM: --

A Statechart diagram describes a state machine. A state machine can


be defined as a machine which defines different states of an object and
these states are controlled by external or internal events. It describes
different states of a component in a system. The states are specific to a
component/object of a system. A Statechart diagram describes the flow of
control from one state to another state. States are defined as a condition
in which an object exists, and they change when some event is triggered.
The most important purpose of a Statechart diagram is to model the
lifetime of an object from creation to termination.

Statechart Diagram

COMPONENT DIAGRAM: --

When modelling large object-oriented systems, it is necessary to


break down the system into manageable subsystems. UML component
diagrams are used for modelling large systems into smaller subsystems
which can be easily managed. A component is a replaceable and
executable piece of a system whose implementation details are hidden. A
component provides the set of interfaces that the component realizes or
implements. Components also require interfaces to carry out a function.

UML Component diagrams are used to represent different components


of a system.

Component Diagram

OBJECT DIAGRAM: --

An Object Diagram can be referred to as a screenshot of the


instances in a system and the relationship that exists between them.
Object diagrams are vital to portray and understand the functional
requirements of a system. It is a diagram that shows a complete or partial
view of the structure of a modelled system at a specific time.

Object Diagram

COLLABORATION DIAGRAM: --

Collaboration diagrams are used to show how objects interact to


perform the behaviour of a particular use case, or a part of a use case.
Along with sequence diagrams, collaboration diagrams are used by designers to
define and clarify the roles of the objects that perform a particular flow of
events of a use case. They are the primary source of information used to
determine class responsibilities and interfaces.

Collaboration Diagram

DEPLOYMENT DIAGRAM: --

Deployment Diagram is a type of diagram that specifies the


physical hardware on which the software system will execute. It also
determines how the software is deployed on the underlying hardware. It
maps software pieces of a system to the devices that are going to execute
them. The deployment diagram maps the software architecture created in
design to the physical system architecture that executes it.

DATA FLOW DIAGRAM: --

The DFD is also called a bubble chart. It is a simple graphical


formalism that can be used to represent a system in terms of the input
data to the system, the various processing carried out on this data, and
the output data generated by this system.

The data flow diagram (DFD) is one of the most important


modelling tools. It is used to model the system components. These
components are the system process, the data used by the process, an
external entity that interacts with the system and the information flows in
the system.

DFD shows how the information moves through the system and
how it is modified by a series of transformations. It is a graphical
technique that depicts information flow and the transformations that are
applied as data moves from input to output.

A DFD may be used to represent a system at any level of abstraction,
and it may be partitioned into levels that represent increasing
information flow and functional detail.

Data Flow Diagram



CHAPTER-6

Testing the OpenCV Drowsiness Detector

$ python detect_drowsiness.py \
    --shape-predictor shape_predictor_68_face_landmarks.dat \
    --alarm alarm.wav

The detector builds on two important computer vision techniques:

• Facial landmark detection


• Eye aspect ratio

Facial Landmark Prediction is the process of localizing key facial


structures on a face, including the eyes, eyebrows, nose, mouth, and
jawline.
Specifically, in the context of drowsiness detection, we only need
the eye regions. We then use the eye aspect ratio to determine if the
eyes are closed. If the eyes have been closed for a sufficiently long
period of time, we can assume the user is at risk of falling asleep and
sound an alarm to grab their attention. More details on the eye aspect
ratio are given below.

Eye blink detection with OpenCV, Python, and dlib


Our blink detection is divided into four parts.

In the first part, we discuss the eye aspect ratio and how it can be
used to determine if a person is blinking or not in a given video frame.
Then we write Python, OpenCV, and dlib code to
(1) perform facial landmark detection and
(2) detect blinks in video streams.
Based on this implementation we’ll apply our method to detecting
blinks in example webcam streams along with video files.

Finally, we discuss methods to improve our blink detector.

Understanding the “eye aspect ratio” (EAR)


We apply facial landmark detection to localize important regions of the
face, including the eyes, eyebrows, nose, ears, and mouth:

Detecting facial landmarks in a video stream in real time.

This also implies that we can extract specific facial
structures by knowing the indexes of the particular face parts:

Applying facial landmarks to localize various regions of the face, including eyes,
eyebrows, nose, mouth, and jawline.

In terms of blink detection, we are only interested in two sets of


facial structures — the eyes.

Each eye is represented by 6 (x, y)-coordinates, starting at the left
corner of the eye (as if you were looking at the person), and then
working clockwise around the remainder of the region:

The 6 facial landmarks associated with the eye.

Based on this image, we should take away one key point:

There is a relation between the width and the height of these


coordinates.

Drawing on the paper Real-Time Eye Blink Detection Using Facial
Landmarks, we can then derive an equation that reflects this relation,
called the eye aspect ratio (EAR):

$$\mathrm{EAR} = \frac{\lVert p_2 - p_6 \rVert + \lVert p_3 - p_5 \rVert}{2\,\lVert p_1 - p_4 \rVert}$$

where p1, …, p6 are 2D facial landmark locations.


The numerator of this equation computes the distance between the
vertical eye landmarks while the denominator computes the distance
between horizontal eye landmarks, weighting the denominator
appropriately since there is only one set of horizontal points
but two sets of vertical points.
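As a worked example with illustrative distances: if the two vertical
distances are each 12 pixels and the horizontal distance is 30 pixels, then
EAR = (12 + 12) / (2 × 30) = 0.4 for the open eye; during a blink the
vertical distances collapse towards zero while the width stays roughly
fixed, so the ratio falls towards zero as well.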

TEST CASES AND REPORT: --



CHAPTER-7

Test-Cases

The tests were conducted in various conditions including:

1. Different light conditions.

2. The driver’s posture and the position of the automobile driver’s
face.

3. Drivers with spectacles.

Test case 1: -- When there is ambient light

Result: --
As shown in the figure, when there is a sufficient amount of ambient
light, the automobile driver's face and eyes are successfully detected.
Test case 2: -- Position of the automobile driver’s face
When the automobile driver’s head is tilted.

Result: --
As shown in the figure, when the automobile driver’s face is tilted
more than 30 degrees from the vertical plane, it was observed that the
detection of the face and eyes failed.
The system was also extensively tested in real-world scenarios; this was
achieved by placing the camera on the visor of the car, focusing on the
automobile driver. It was found that the system gave positive output
unless there was direct light falling on the camera.
Test case 3: -- Using spectacles
Result: --
As shown in the screen snapshot below, when the automobile driver is
wearing spectacles, the face, eyes, eye blinks, and drowsiness were
successfully detected.

Limitations: --
The following are some of the limitations of proposed system.
1. The system fails, if the automobile driver is wearing any kind of
sunglasses.
2. The system does not function if there is light falling directly on
the camera.
3. The system fails when the person is not blinking his eyes.
4. The system fails when the driver sleeps with his eyes open.
CONCLUSION:
Four features that make our system different from existing ones are:
(a) Focus on the driver, which is a direct way of detecting the
drowsiness.
(b) A real-time system that detects the face, iris, blinks, and driver
drowsiness.
(c) A completely non-intrusive system, and
(d) Cost effective.
