Driver Drowsiness Detection System Final Draft
By
Road crashes are among the most common causes of injury and death worldwide, and the major causes of these accidents are usually intoxication, drowsiness, and reckless driving. According to the World Health Organization, the number of road traffic deaths has risen to about 1.25 million per year worldwide, making driver drowsiness detection a critical issue and a potential area for reducing the number of sleep-related road accidents. This project proposes a method for detecting drowsiness using machine learning, in which the driver is alerted in real time to avoid a collision. The model uses the Haar Cascade algorithm in conjunction with the OpenCV library to monitor real-time video of the driver and to detect the driver's eyes. The system uses the Eye Aspect Ratio (EAR) to determine whether the eyes are open or closed. A dataset of facial images is also supplied, and its feature data points are used to train the machine learning algorithm. The model inspects each frame of the video, which aids in determining the driver's state. In addition, a Raspberry Pi single-board computer with a camera module and an alarm system allows the project to simulate a compact drowsiness detection system adaptable to a variety of automobiles.
Supervisor ……………………………………………………
This work is the fruit of countless and arduous sacrifices. Through the researcher's effort, this work is heartily and proudly dedicated to the people who serve as an inspiration: my parents, my brother, and the circle of colleagues who extended their help in the midst of problems while doing this work. May the dear Lord bless you abundantly.
“If you keep doing what you are doing, you keep getting what you are getting.”
1.1. Background
Drowsy driving is among the leading causes of fatal car accidents. According to the Global Status Report on Road Safety 2015, which is based on data from over 160 countries, roughly one out of every five road accidents is caused by drowsy driving, accounting for approximately 21 percent of all road accidents, and this percentage is increasing year after year. This highlights the fact that a large share of the total number of road traffic deaths worldwide is due to driver drowsiness. Driver fatigue, drunk driving, and carelessness are cited as major causes of such road accidents; many lives and families have been lost, and various countries are being impacted as a result. One of the most effective measures that can be used is real-time drowsy driving detection, which helps drivers become aware of drowsy driving conditions [10]. This type of driver behavioral state detection system can assist in catching drowsiness early. Ocular measurement can be performed without a physical connection: since a camera can detect the eye's open/closed state non-intrusively, ocular measures of the driver's eye condition and vision-based eye-closure detection are well suited for real-world driving conditions [8]. In a real-time driver drowsiness system based on image processing, the eye state of the driver is captured using computer vision.
Drowsy or fatigued driving is a condition in which drivers are tired and exhausted but continue to drive their vehicle [9]. Driving in a fatigued state can impair the driver's attention span and endangers the surrounding vehicles as well. Even for those with a smaller build-up of sleep debt, insufficient sleep and fatigue can impair reaction time and decision making when one is behind the wheel.
The computer vision method is the most feasible method for a driver drowsiness detection system [5], because it does not rely on any outside factor that might cause a false positive, nor does it require any physical connection with the driver that could divert the driver's attention. The computer vision domain makes use of a number of machine learning algorithms to ascertain drowsiness, for example the Support Vector Machine (SVM) [3], an algorithm that classifies objects by separating data items [6][8][9]. Using a dataset, it detects the eyes and other facial features; however, it produces less accurate results and has a higher error rate, particularly in large or noisy datasets. Another algorithm of this type is the Convolutional Neural Network (CNN) model, which detects drowsiness using neural networks that mimic the operation of the human brain on a computer [4, 5].
It proves to be extremely accurate, but it also necessitates a device that can detect a significant feature of the driver's face to indicate the fatigue level and wake the driver [5]. The device should be able to detect whether the driver's eyes are on the road and whether the driver is awake. If the driver is drowsy, or has lost focus operating the radio so that he or she cannot see where the vehicle is heading, the system should be able to alert the driver.
This chapter holds the vital content of this research by stating its background, scope, and purpose, and it sets out the expected outcome of this work through its aims and objectives.
The design aims to ensure that drivers are assisted during driving and that accidents caused by lack of concentration are reduced: the driver is alerted when they start to doze off or close their eyes, reducing accidents and deaths caused by driver error. The objectives of the project are as shown below:
• To design a cost-effective and compatible system that benefits drivers during driving.
• To build a system that alerts the driver when they are losing focus by not looking at the road, and alerts the driver when they are falling asleep whilst driving.
• To build a system that is reliable and convenient for the user.
Chapter 1 – This part gives a clear picture and outline of the project. It gives the problem statement, the aim and objectives, the project scope, and the forecast, to offer a clearer course for the project.
Chapter 2 – This chapter provides a detailed analysis, literature review, and theoretical analysis of the instruments used in this research. The instruments include the Pi Camera.
Chapter 3 – This chapter states the methodology used, the system design, the interfacing of components, and the data collection methods and procedures.
Chapter 4 – This chapter presents the gathered results and the tuning strategies utilized; an analysis of the results, and the difficulties, choices, and solutions used, are clearly stated.
LITERATURE REVIEW
2.1. Introduction
The automobile industry is currently on the rise all over the world. As a result, the number of vehicles on the road is increasing at an exponential rate, which has exacerbated the problem of road accidents. In every country there has been an increase in the number of road accidents, and they have proven to be a significant threat to the general public's safety, let alone the driver's.
In its Global Status Report on Road Safety, the World Health Organization identified drowsiness, alcohol impairment, and carelessness as critical causes of road accidents. The resulting fatalities and related costs represent a genuine danger to families across the world. Existing drowsiness detection methods are not widely used because of their significant expense and restricted availability, making them unsuitable for standard, non-luxury vehicles.
There is therefore a growing requirement for a smart and affordable drowsiness detection system that the various vehicles in the industry can rapidly adopt. The fields of machine learning and artificial intelligence have made numerous ground-breaking advances, which use various algorithms to train a system to be smart and autonomous.
2.2. BACKGROUND
2.2.1. Fatigue
According to the National Sleep Foundation's 2005 Sleep in America poll [11], approximately 60% of Americans have experienced driving whilst feeling sleepy, and 36% have admitted to having fallen asleep whilst driving. This disturbing figure is clear proof that drowsiness and fatigue affect drivers today. Society is constantly warned not to operate machinery or drive when intoxicated; however, people do not consider that fatigue can also contribute to road carnage [5]. This is because tiredness or fatigue lessens the response time, attention, and alertness of a person undertaking activities that need full attention, in this case driving motor vehicles. This similarly results in slower and poorer judgment and decision-making [12] (The Royal Society for the Prevention of Accidents).
The figure shows examples of EEG data collection (A. Picot). B. T. Jap et al. noted that, among the techniques used by previous researchers to observe signs of sleepiness, the EEG method is suitable for fatigue and drowsiness detection. In this technique, the EEG signal has four types of frequency components that may be analyzed: theta (θ), beta (β), alpha (α), and delta (δ). When there is an increase in power in the alpha (α) and delta (δ) frequency bands, it shows that the driver is experiencing fatigue and drowsiness [4] (B. T. Jap). The disadvantage of the technique is that it is very sensitive to noise in the regions around the sensors: when conducting an EEG experiment, for example, the surrounding area should be absolutely quiet, since noise interferes with the sensors that detect brain activity. The other disadvantage of this technique is that, even though the results may be precise, it cannot qualify for use in real driving applications [10]. Consider somebody driving while wearing headgear full of wires: when the driver moves their head, a wire could come loose. Although it is not convenient for real-time driving, it is probably the best technique so far for analysis purposes and data collection [2].
The most suitable technique for a driver drowsiness detection system amongst the three mentioned above is the computer vision approach. This technique neither depends on any outside factor that could produce a false positive, nor does it need any physical connection with the driver that could distract the driver.
The computer vision domain uses an assortment of machine learning algorithms to determine drowsiness, such as the Support Vector Machine (SVM) algorithm, which classifies objects by separating data items (Savas, B. K., and Becerikli, Y.). The system detects the eyes and other facial features using a dataset, but it gives less accurate results and has a higher error rate, particularly in noisy or large datasets. Another comparable algorithm is the Convolutional Neural Network (CNN) model, which performs drowsiness detection using neural networks that reproduce the working of the human brain on a computer (Donahue J. et al. [7]). It proves to be more accurate, but it also requires a very high computational expense and an enormous dataset to train the model, because of which it is not the best fit for the drowsiness detection system.
Another critical model considered for this project uses the Haar Cascade algorithm, which uses the facial features of the driver to detect drowsiness [6] (Viola, P. and Jones, M.). It is the second most accurate and fastest algorithm after CNNs, and it works with a low computational expense and a smaller training set, which makes the system economical and the best model for our purpose.
The facial recognition used in the project was created on Ubuntu and programmed in Python. In the system, pip was installed, a package management system that simplifies the installation and management of software packages written in Python. Pip is used to install NumPy, Dlib, scikit-image, OpenCV 3, and OpenCV 4. NumPy is a package that contains an N-dimensional array object, tools for integrating C/C++ and Fortran code, sophisticated (broadcasting) functions, Fourier transforms, linear algebra, and random number capabilities. Scikit-image is an open-source image processing library for the Python programming language; it includes algorithms for geometric transformations, segmentation, color space manipulation, analysis, morphology, feature detection, filtering, and more. It is designed to work alongside the Python numerical and scientific libraries such as NumPy [16] (Wikipedia). OpenCV is a Python library designed to solve computer vision problems; the Python bindings wrap the original C++ library, and all OpenCV array structures are converted to and from NumPy arrays [17]. Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real-world problems. It is used in academia and industry in a wide range of domains, including embedded devices, robotics, cell phones, and large high-performance computing environments. Dlib's open-source licensing permits it to be used for free in any application.
[17] (SparkFun, Start Something) Using OpenCV and Dlib, facial landmarks are detected in an image. The landmarks (key points) that are of significance for the device are the ones that describe the shape of face attributes such as the eyes, nose, mouth, jawline, and eyebrows. These points give great insight into the analyzed face structure [19], which can be extremely valuable for a wide range of applications, including face animation, face recognition, blink detection, emotion recognition, and photography. Dlib offers the Face Landmark Detection algorithm, which is an implementation of the Ensemble of Regression Trees (ERT) introduced in 2014 by Kazemi and Sullivan.
Fig2.5 Facial Diagram Built with Dlib(Landmarks)
Through a training program running in Python, the training options within Dlib use several parameters, for example:
• Tree Depth – It determines the depth of the trees used in each cascade. The "capacity" of the model is represented by this parameter. In terms of pure accuracy, an optimal value is 4; however, a value of 3 is a good trade-off between accuracy and model size.
• Cascade Depth – It is the number of cascades used to train the model. This parameter influences both the accuracy and the size of a model. A value of 10–12 is viewed as good, while a value of 15 is the right balance between a reasonable model size and maximum accuracy.
• Feature Pool Size – It indicates the number of pixels used to produce the features for the random trees at each cascade. A larger number of pixels will lead the algorithm to become more precise and robust but to execute more slowly. A value of 400 achieves great accuracy with a good runtime speed. If speed is not an issue, setting the parameter value to 800 (or even 1000) will yield superior accuracy. With a value somewhere in the range of 100 to 150, it is still possible to obtain good accuracy with an outstanding runtime speed; this last range is appropriate for embedded and cell phone applications.
• Num Test Splits – It is the number of split features sampled at each node, and is the parameter responsible for selecting the best features at each cascade during the training process. The parameter influences the training speed and the model accuracy. The default value of the parameter is 20. This parameter is very valuable when, for instance, we need to train a model with good accuracy while still keeping it small in size; this is done by increasing the number of test splits to 100 or even 300, to build up the model accuracy without increasing the model's size.
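For reference, the recommended values above can be gathered in one place. This is a plain-dictionary sketch of the trade-offs described in the text; in dlib these choices correspond to attributes of `shape_predictor_training_options`, which is assumed here rather than exercised:

```python
# The "balanced" values recommended in the text above, gathered as a plain
# dictionary. In dlib these map to attributes of
# dlib.shape_predictor_training_options() with the same names.
training_options = {
    "tree_depth": 4,           # depth of each tree; 3 trades accuracy for size
    "cascade_depth": 12,       # number of cascades; 10-12 is considered good
    "feature_pool_size": 400,  # pixels per cascade; 100-150 suits embedded use
    "num_test_splits": 20,     # splits sampled per node; raise to 100+ for accuracy
}
```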
The Pi Camera module is a small, lightweight camera module that supports the Raspberry Pi. The Pi Camera communicates with the Pi through the MIPI camera serial interface protocol. Ordinarily it is used in machine learning, image processing, or surveillance projects. It is widely used in surveillance drones, since the camera's payload is very small. Apart from these modules, the Pi can use ordinary USB webcams of the kind used with a PC.
The Viola–Jones object detection framework may be used to detect several classes of objects; however, it is mostly centered on detecting the face and its features. The algorithm makes use of the idea of rectangle features, which involve the sums of pixels in rectangular areas [15]. The sum of the pixels inside the white rectangles is subtracted from the sum of the pixels in the grey rectangles. The value of a two-rectangle feature, represented by A and B, is the difference between the sums of pixels in the two rectangular areas. The areas have the same shape and size, are vertically or horizontally oriented, and are adjacent to each other [22]. A three-rectangle feature, represented as C, generates the sum in the two outside rectangles subtracted from the sum in a middle rectangle. Lastly, a four-rectangle feature, represented as D, generates the difference between the diagonal pairs of rectangles.
The rectangle features are computed rapidly through the use of an intermediate representation of the picture known as the integral image.
METHODOLOGY
3. Introduction
This chapter covers the processes, tools, and tasks accompanying this project's completion. It involves an analysis of every stage featured in carrying out the project.
This chapter explains the methods that have been put into practice to reach the set aims and objectives of the project, and takes a closer look at the project implementation. The selection and accomplishment of each method implemented in this project will be explained for each stage until the completion of the project. The project makes use of computer vision software. The methods used are for the detection of the mouth area, face, and nose.
This technique is an intrusive method in which electrodes are used to obtain brain activity, pulse rate, and heart rate. ECG is used to calculate heart rate variations and to detect various conditions of drowsiness [12]. Correlations between different signals such as the EEG (electroencephalogram), ECG (electrocardiogram), and EMG (electromyogram) are made, and an output is generated indicating whether the driver is drowsy or not.
In this technique, the eye and head pose, blinking frequency, etc. of a person are monitored by a camera, and the driver is alerted if any of the signs of drowsiness are sensed.
This technique continuously monitors the position of the car in the lane, the steering wheel position, and the pressure on the acceleration pedal. By measuring all these parameters, the system indicates whether the driver is drowsy or not.
At this stage, a review was carried out of the preceding studies related to this project. This subject observes the connection between drowsiness conditions and handling a motor vehicle. A thorough review was conducted of the existing techniques for detecting drowsiness and of the different parameters used in previous research. Focusing on the parameters that detect the mouth and eyes helps to narrow down the angle of the project.
At this stage, it was found that one of the most satisfactory ways to detect eyes and yawning is through an algorithm, and some of the present-day algorithms related to this task were reviewed to assist in developing the project. In [10], the proposed technique measures the time a person takes to close their eyes; if the eyes are closed for longer than the regular eye-blink time, it is possible that the person is falling asleep. Based on research into human eye blinks, it has been recognized that the average human blink takes approximately 202.25 ms, while the blink of a drowsy person takes approximately 258.58 ms.
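As an illustration of how these two averages could separate a normal blink from a drowsy one, the sketch below uses their midpoint as a cut-off; the midpoint threshold is an assumption for illustration, not taken from the studies cited:

```python
# Classify a measured blink duration against the two averages quoted above:
# ~202.25 ms for an alert blink, ~258.58 ms for a drowsy one.
ALERT_MS = 202.25
DROWSY_MS = 258.58
THRESHOLD_MS = (ALERT_MS + DROWSY_MS) / 2  # midpoint, ~230.4 ms (assumption)

def looks_drowsy(blink_ms: float) -> bool:
    """Return True when a blink lasts longer than the midpoint threshold."""
    return blink_ms > THRESHOLD_MS
```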
Garcia et al. explain that a few algorithms and approaches have been used in the process of detecting the eyes, face, and mouth. The Cascade Object Detector is the approach used; it utilizes the Viola–Jones algorithm to discover people's faces, noses, eyes, mouths, or upper bodies.
The value of the integral image at the point (x, y) is the sum of all the pixels above and to the left of it. Based on the integral image, the sum of the pixels within rectangle D can be calculated using four array references. The integral image value at position 1 is the sum of the pixels in rectangle A; the value at position 2 is A + B; the value at position 3 is A + C; and the value at position 4 is A + B + C + D. Hence the sum over D equals value(4) + value(1) − value(2) − value(3).
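The four-reference computation above can be sketched in plain Python; `integral_image` and `rect_sum` are illustrative helper names, not from the project code:

```python
# Build an integral image and use the four-reference rule to sum a rectangle.
def integral_image(img):
    """ii[y][x] = sum of img[0..y-1][0..x-1]; padded with a zero row/column."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in [top..bottom) x [left..right) via four references:
    value(4) + value(1) - value(2) - value(3)."""
    return (ii[bottom][right] + ii[top][left]
            - ii[top][right] - ii[bottom][left])
```

Once the integral image is built, any rectangle sum costs four lookups, which is what makes evaluating thousands of Haar features per window affordable.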
3.1.8.2. Cascade of Classifiers
In a regular 24x24 pixel window, there are around 45,397 possible features to be evaluated. This is too large and prohibitively expensive a number to compute. To improve the recognition performance, more features need to be included in the classifiers; however, with each added feature the computation time increases and the detection process becomes slower [16]. Consequently, a cascade of classifiers is built to increase the recognition performance while drastically reducing the computation time.
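The cascade idea can be illustrated with a toy sketch; the numeric stage thresholds and window "scores" below are hypothetical stand-ins for real Haar-feature stage evaluations:

```python
# Attentional cascade sketch: cheap stages run first and most windows are
# rejected early, so few windows ever reach the costlier later stages.
def make_stage(threshold):
    """A stage passes a window only if its score clears the threshold."""
    return lambda window_score: window_score >= threshold

def cascade_detect(window_score, stages):
    """Return True only if the window survives every stage in order."""
    for stage in stages:
        if not stage(window_score):
            return False  # rejected early: later stages never run
    return True

stages = [make_stage(t) for t in (1, 3, 5)]  # increasingly strict stages
```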
To be able to detect the mouth and eye areas, the face area must first be detected. Nevertheless, this measure reduces system performance and speed due to the large detection area. Since the project aims to identify signs of sleepiness in the eyes and mouth, this project limits the detection area to the eyes and mouth, which improves system performance. Tests are required to ensure the system meets the required parameters.
I) Raspberry Pi Module
II) Buzzer
III) Pi Camera Module
IV) Connecting Wires
Fig 3.2 Block diagram of Driver Drowsiness Detection System
3.2.4. OpenCV
Eyes
Eyebrows
Jawline
Mouth
Facial landmarks have been successfully applied to face alignment, head pose estimation, face swapping, blink detection, and much more [13]. In the context of facial landmarks, we intend to detect important facial structures on the face through the use of shape prediction methods. The detection of facial landmarks is consequently a two-step process:
1. Localize the face within the image: the face is localized by means of Haar feature-based cascade classifiers, which were mentioned in the first step of our algorithm, i.e. face detection [12].
2. Detect the key facial structures on the face ROI. There are many facial landmark detectors; however, all strategies attempt to localize and label the following facial regions: mouth, right eye, left eye, left eyebrow, right eyebrow, and nose.
The shape predictor also relies on priors, or, more specifically, the probability of the distance between pairs of input pixels. The pre-trained facial landmark detector in the Dlib library is used to estimate the location of the 68 (x, y)-coordinates that map to the facial structures on the face.
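The 68-coordinate layout can be sliced into the regions listed earlier; a small sketch, assuming the standard iBUG 68-point indexing used by Dlib's pre-trained predictor (right eye 36–41, left eye 42–47, and so on):

```python
# Slice the 68-point dlib landmark layout into named facial regions.
LANDMARK_REGIONS = {
    "jawline":       range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow":  range(22, 27),
    "nose":          range(27, 36),
    "right_eye":     range(36, 42),
    "left_eye":      range(42, 48),
    "mouth":         range(48, 68),
}

def region_points(landmarks, region):
    """Pick out the (x, y) points of one facial region from all 68."""
    return [landmarks[i] for i in LANDMARK_REGIONS[region]]
```

Blink detection only needs the two six-point eye regions, which is why limiting the analysis to them keeps the per-frame cost low.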
The eye position may be estimated from optical flow, by sparse tracking, or by frame-to-frame intensity differencing and adaptive thresholding (H. Seifoory et al.). Finally, a decision is made as to whether or not the eyes are covered by the eyelids. A different technique is to deduce the state of the eye opening from a single image, e.g. by correlation matching with open and closed eye templates, a heuristic horizontal or vertical image intensity projection over the eye region, a parametric model fitted to find the eyelids, or active shape models [15]. Therefore, we propose a simple but efficient algorithm to locate eye blinks through the use of a recent facial landmark detector. A single scalar quantity that reflects the degree of the eye opening is derived from the landmarks. Eventually, given a per-frame sequence of the eye-opening estimates, the eye blinks are located by an SVM classifier that is trained on examples of blinking and non-blinking patterns [18].
The eye landmarks are detected for every video frame, and the Eye Aspect Ratio (EAR) between the height and the width of the eye is computed, in which p1, ..., p6 are the 2D landmark locations shown in the diagram. The EAR is mostly constant while an eye is open and approaches 0 as the eye closes. It is partly person- and head-pose-insensitive: the aspect ratio of the open eye has a small variance amongst individuals, and it is completely invariant to a uniform scaling of the image and an in-plane rotation of the face. Since eye blinking is performed by both eyes synchronously, the EAR of both eyes is averaged [16].
Finally, the decision about the eye state is made based on the EAR calculated in the preceding step. If the ratio is 0 or close to 0, the eye state is classified as "closed"; otherwise the eye state is classified as "open".
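The scalar described above is commonly computed as the sum of the two vertical landmark distances divided by twice the horizontal distance (the Soukupová and Čech formulation); a minimal sketch under that assumption:

```python
import math

def eye_aspect_ratio(p):
    """EAR from six eye landmarks p[0]..p[5] (= p1..p6 in the text):
    (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)."""
    vert1 = math.dist(p[1], p[5])  # p2 - p6, one vertical extent
    vert2 = math.dist(p[2], p[4])  # p3 - p5, the other vertical extent
    horiz = math.dist(p[0], p[3])  # p1 - p4, the horizontal extent
    return (vert1 + vert2) / (2.0 * horiz)
```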
The final step of the algorithm is to determine the person's condition based on pre-set conditions for drowsiness. The average blink of a person lasts 100–400 milliseconds (i.e. 0.1–0.4 of a second) [13]. Hence, if someone is drowsy, his or her eye closure should exceed this interval. We set a time frame of five seconds: if the eyes remain closed for three or more seconds within it, drowsiness is detected and an alert about the drowsiness is triggered.
The Viola–Jones algorithm is well suited for image detection, since it is a framework built for object detection. OpenCV is used for image manipulation, image processing, and machine learning. The eye state will be measured through the Eye Aspect Ratio (EAR) to determine the degree of eye closure before the buzzer is triggered.
4. Introduction
This chapter discusses the results achieved in the context of the final year project.
Apart from that, this chapter covers the information for simulating the algorithm. The
steps to detect eyes are also explained in this chapter.
Runtime video capture with a camera was successful. The captured video was separated into frames, and each individual frame was analyzed. Face recognition followed by eye recognition was successful. When prolonged eye closure is detected, it is classified as a sleepy state; otherwise it is considered a normal blink, and the loop of image capture and state analysis is performed over and over. In this implementation, during the sleep state, the eye is not circled or detected, and the appropriate message is displayed.
Yawn Detected
Driver is Awake
While executing the program, the dataset trains the model using the facial features of a human face, and the program recognizes the driver's eyes in real time. The upper right corner of the screen displays the EAR, or aspect ratio, of the eye. This ratio determines the "openness" of the eyes and falls below 0.17 (the threshold used to determine fatigue) when the system detects closed eyes. The EAR is calculated for each frame, and a counter variable tracks the number of consecutive frames in which the driver's eyes are closed. When the counter reaches the model's limit, in this case five consecutive frames, the driver is deemed drowsy and the Raspberry Pi sends an output message to the alarm system by means of the GPIO library.
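The counter logic described above can be sketched off-device; the `alarm` callback below stands in for the GPIO output call, a hypothetical simplification rather than the project's actual code:

```python
# EAR below 0.17 for five consecutive frames trips the alarm, as described.
EAR_THRESHOLD = 0.17
CONSEC_FRAMES = 5

class DrowsinessMonitor:
    def __init__(self, alarm):
        self.alarm = alarm      # called once when drowsiness is declared
        self.closed_frames = 0  # consecutive frames with eyes closed

    def update(self, ear):
        """Feed one frame's EAR; fire the alarm on the 5th closed frame."""
        if ear < EAR_THRESHOLD:
            self.closed_frames += 1
            if self.closed_frames == CONSEC_FRAMES:
                self.alarm()
        else:
            self.closed_frames = 0  # any open-eye frame resets the counter
```

On the Raspberry Pi, the `alarm` callback would drive the buzzer pin, e.g. `GPIO.output(BUZZER, GPIO.HIGH)`.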
To achieve the end result, a massive number of images and videos were taken and their accuracy in identifying eye blinks and drowsiness was examined. An external buzzer is used to produce an alert sound in order to wake the driver when drowsiness exceeds a certain threshold. The device was tested on different people in different ambient lighting situations (daylight and night-time), including with the webcam backlight turned ON while the face was in view.
4.4. LIMITATIONS
1. Dependence on ambient lighting – In bad lighting conditions, even though the face is easily detected, the system is sometimes unable to detect the eyes, so it gives an erroneous result that needs to be taken care of. In real-time situations, infrared backlights should be used to avoid bad lighting conditions.
2. Orientation of the face – When the face is tilted to a certain extent it can still be detected, but beyond this our device fails to detect the face; and when the face is not detected, the eyes are also not detected. This problem can be resolved through tracking capabilities that follow any motion and rotation of the objects in an image. A classifier trained on tilted faces and tilted eyes can additionally be used to avoid this kind of problem.
3. Multiple faces detected – If multiple faces are detected by the webcam, then our device gives an inaccurate result. This problem is not critical, since we need to detect the drowsiness of a single driver.
4. Optimum range required for detection – When the distance between the face and the webcam is not in the optimal range, certain issues arise. When the face is very close to the webcam (less than 30 cm), the system finds it difficult to detect the face in the picture, so it only shows the video as output, because the algorithm is designed to locate the eyes within the face region. This may be resolved by detecting the eyes directly, applying the Haar detect-object capabilities to the whole picture rather than to the face area, so that the eyes may be monitored even if no face is detected. When the face is far from the webcam (greater than 70 cm), the backlight is not enough to illuminate the face properly; therefore the eyes are not detected with high accuracy, which introduces errors into the drowsiness detection. This problem is not seriously taken into consideration, as in a real-time scenario the gap between the driver's face and the webcam does not exceed 50 cm, so the problem never arises. Considering the mentioned difficulties, the optimal distance range for the detection of drowsiness is about 40–70 cm.
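The distance constraints above can be summarized in a small helper; this is a sketch only, and the status strings are hypothetical labels:

```python
# Flag the camera-to-face distance issues listed above: under 30 cm the face
# detector struggles, over 70 cm the eyes are poorly lit; ~40-70 cm is quoted
# as the optimum working range.
def distance_status(distance_cm: float) -> str:
    if distance_cm < 30:
        return "too close: face detection may fail"
    if distance_cm > 70:
        return "too far: eyes may be poorly illuminated"
    return "ok"
```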
4.5. Conclusion
The results came out as expected and as required. The system works very well under certain conditions; when there is not enough ambient lighting, or when people drive with spectacles, the system has difficulty detecting the eyes of the driver. Also, when there are many faces within the camera's detection space, the system has difficulty with eye detection.
5. Introduction
The project intends to plan and develop a low-cost Driver Drowsiness Detection System using OpenCV. The buzzer is for alerting the driver by producing sound signals, which wakes the driver in real time to avoid road carnage. Eye detection is performed using the Haar cascade classifier, and the computation of the Eye Aspect Ratio additionally reduces false eye detections, an issue faced by other drowsiness detection systems that use only the OpenCV library. Even so, the remaining false alarms are insignificant, which increases the capability of the system. Taking the Eye Aspect Ratio (EAR) over several consecutive frames helps to eliminate those insignificant mistakes and to effectively ascertain drowsiness.
A non-invasive system was developed to locate the eyes and to monitor fatigue. Information about the positioning of the head and eyes is acquired through various image processing algorithms that were developed. During monitoring, the system can decide whether the eyes are open or closed; if they are closed for too long, a warning signal sounds. In addition, the system can automatically detect eye position errors that occur during monitoring.
The overall objectives of the research were met. A working prototype was designed, built, and tested, showing that the theory is workable and flexible and can be developed further.
Several challenges were met during the design and production of the low-cost driver drowsiness detection system. We failed to gather all the best-suited components for the project and ended up using alternatives that work similarly: the Pi Camera was expensive, so a webcam was used instead. Also, to be able to install the operating code, we had to use Anaconda Spyder (a programming platform), which requires a high-performance computer with a large amount of RAM; to curb this challenge, the RAM was upgraded to 8 GB. Installation of Raspbian OS on the Raspberry Pi 3 was also a challenge. A monitor was needed as a screen for the Raspberry Pi, and one was not easily available, as they are hard to get; this challenge was solved by connecting a projector to the Raspberry Pi.
5.4. Recommendations
Adaptive binarization can be added to help make the system more robust. This eliminates the requirement for the noise removal function, reducing the number of computations needed to locate the eyes, and thereby allows adaptability to changes in ambient lighting.
MICROCONTROLLER CODE
ALGORITHM
1. Capture the picture of the driver from the camera.
2. Send the captured picture to the Haar Cascade file for face detection.
3. If a face is detected, crop the picture to the face only. If the driver is distracted, a face will not be detected, so sound the buzzer.
4. Send the face picture to the Haar Cascade file for eye detection.
5. If the eyes are detected, crop only the eyes and extract the left and right eye from that picture. If both eyes are not located, then the driver is looking sideways, so sound the buzzer.
6. The cropped eye pictures are sent to the Hough transform for detecting pupils, in order to decide whether they are open or closed.
7. If the eyes are found to be closed for 5 continuous frames, the driver should be alerted by sound from the buzzer.
import face_recognition
import cv2
import numpy as np
import time
import eye_game
import RPi.GPIO as GPIO

# Buzzer wired to BCM pin 23
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
BUZZER = 23
GPIO.setup(BUZZER, GPIO.OUT)

previous = "Unknown"
count = 0

video_capture = cv2.VideoCapture(0)
# frame = (video_capture, file)
file = 'image_data/image.jpg'

# Load a sample picture and learn how to recognize it.
img_image = face_recognition.load_image_file("img.jpg")
img_face_encoding = face_recognition.face_encodings(img_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    img_face_encoding
]