Advanced Driver Assistance System For Drivers Using Machine Learning and Artificial Intelligence Techniques
https://doi.org/10.22214/ijraset.2022.43260
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 10 Issue V May 2022- Available at www.ijraset.com
Abstract: Machine learning techniques, an application of artificial intelligence, have been used to predict the condition and emotion of a driver and thereby provide information that improves safety on the road. The face, an important part of the body, conveys a lot of information: when a driver is fatigued, facial expressions, e.g., the frequency of blinking and yawning, differ from those in the normal state. In this paper, we propose a system called the "Advanced Driver Assistant System", which detects the driver's fatigue status, such as yawning, blinking, and duration of eye closure, from video images, without attaching any devices to the driver's body. Machine learning is a method by which systems can automatically learn and improve without being explicitly programmed. A driver's condition can be estimated from bio-indicators, driving behavior, and facial expressions. We present an all-inclusive survey of recent work on driver drowsiness detection and alert systems, and review the machine learning techniques, such as the CNN algorithm, Haar-based cascade classifiers, and OpenCV, that are used to determine the driver's condition. Finally, we identify the challenges faced by current systems and present the corresponding research opportunities.
Keywords: Convolutional neural network, fatigue detection, feature location, face tracking, Artificial Intelligence, Autonomous
Vehicle Technology, Drowsiness Detection, Machine Learning.
I. INTRODUCTION
Transport systems are now an essential part of human activities. Anyone can fall victim to drowsiness while driving, whether after too short a night's sleep, in an altered physical condition, or during a long journey. Drowsiness reduces the driver's level of vigilance, producing dangerous situations and increasing the probability of accidents. Driver drowsiness and fatigue are among the leading causes of road accidents, and every year they add to the number of deaths and serious injuries globally.
In this context, it is important to use new technologies to design and build systems that can monitor drivers and measure their level of attention throughout the driving process.
In this paper, a module for an ADAS (Advanced Driver Assistance System) is presented to reduce the number of accidents caused by driver fatigue and thus improve road safety. The system performs automatic detection of driver drowsiness based on visual information and artificial intelligence.
©IJRASET: All Rights are Reserved | SJ Impact Factor 7.538 | ISRA Journal Impact Factor 7.894 | 3885
A. Facial Landmarks Recognition
The purpose of facial keypoint recognition is to obtain the locations of the eyebrows, eyes, lips, and nose in the face. With the development of deep learning, Sun was the first to introduce DCNN, based on CNN, to detect human facial keypoints. This algorithm recognizes only 5 facial keypoints, albeit very quickly. To obtain higher precision, Zhou [11] employed FACE++, which optimizes DCNN and can recognize 68 facial keypoints, but its model is large and its operation is complicated. Wu proposed the Tweaked Convolutional Neural Network (TCNN), which uses a Gaussian Mixture Model (GMM) to improve different layers of the CNN; however, the robustness of TCNN depends excessively on the data. Kowalski introduced the Deep Alignment Network (DAN) to recognize facial keypoints, which performs better than the other algorithms; unfortunately, DAN needs large models and computation based on complicated functions. Therefore, to meet the real-time performance requirement, DriCare uses Dlib to recognize facial keypoints.
DriCare is built from a commercial in-vehicle camera, a cloud server that processes the video data, and a commercial cell phone that stores the results. While driving, the automobile's camera captures the driver's portrait and uploads the video stream to the cloud server in real time. The cloud server then analyzes the video and detects the driver's degree of drowsiness. In this stage, three main parts are analyzed: the driver's face tracking, facial key-region recognition, and the driver's fatigue state. To meet the real-time performance requirement, we use the MC-KCF algorithm to track the driver's face and recognize the facial key regions based on keypoint detection. The cloud server then estimates the driver's state from changes in the states of the eyes and mouth.
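Although the paper does not specify the exact eye-state metric, a common way to quantify eye closure from the six Dlib eye landmarks is the eye aspect ratio (EAR). The sketch below illustrates the idea; the landmark coordinates and the 0.2 threshold are illustrative assumptions, not values from the paper:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks p1..p6 in Dlib's ordering.

    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops toward 0
    as the eyelids close and stays roughly constant while open.
    """
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = dist(p1, p4)
    return vertical / (2.0 * horizontal)

# Illustrative landmark positions for an open and a nearly closed eye.
open_eye   = [(0, 0), (1, 1.0), (3, 1.0), (4, 0), (3, -1.0), (1, -1.0)]
closed_eye = [(0, 0), (1, 0.1), (3, 0.1), (4, 0), (3, -0.1), (1, -0.1)]

EAR_THRESHOLD = 0.2  # assumed value; tuned per camera and driver in practice
print(eye_aspect_ratio(open_eye))    # 0.5
print(eye_aspect_ratio(closed_eye))  # 0.05
```

Averaging the EAR over both eyes and tracking how long it stays below the threshold gives the blink frequency and eye-closure duration that the system monitors.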
B. System Architecture
The following figure shows the architecture of the system.
The fully connected layers produce class scores from the activations, which are used for classification.
In the proposed method, 4 convolutional layers and one fully connected layer are used. Extracted key images of size 128 × 128 are passed as input to convolution layer 1 (Conv2d_1), where the input image is convolved with 84 filters of size 3 × 3. After convolution, batch normalization, a non-linear ReLU transformation, and max pooling over 2 × 2 cells are applied, followed by dropout with a rate of 0.25. Conv2d_1 requires 840 parameters; Batch_normalization_1 requires 336 parameters. The output of convolution layer 1 is fed into convolution layer 2 (Conv2d_2), where the input is convolved with 128 filters of size 5 × 5 each. After convolution, batch normalization, ReLU, and max pooling over 2 × 2 cells with stride 2 are applied, followed by dropout with a rate of 0.25. Conv2d_2 requires 268,928 parameters; Batch_normalization_2 requires 512 parameters. The output of convolution layer 2 is fed into convolution layer 3 (Conv2d_3), where the input is convolved with 256 filters of size 5 × 5 each, followed by the same batch normalization, ReLU, 2 × 2 max pooling with stride 2, and dropout with a rate of 0.25. Conv2d_3 requires 819,456 parameters; Batch_normalization_3 requires 1,024 parameters.
The output of convolution layer 3 is fed into convolution layer 4 (Conv2d_4), where the input is convolved with 512 filters of size 5 × 5 each, again followed by batch normalization, ReLU, 2 × 2 max pooling with stride 2, and dropout with a rate of 0.25. Conv2d_4 requires 3,277,312 parameters; Batch_normalization_4 requires 2,048 parameters. The fully connected layer, dense_1, requires 8,388,864 parameters. In total, the proposed CNN model has 12,757,874 trainable parameters. Because the classifier distinguishes two states, the output layer has only two outputs. The Adam method is used for optimization, and a softmax classifier is used for classification. In our proposed CNN framework, the 256 outputs of the fully connected layer are the deep features retrieved from the input eye images, and the final 2 outputs are linear combinations of these deep features.
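The layer-by-layer parameter counts quoted above can be reproduced with a short calculation. The sketch assumes a single-channel 128 × 128 input, 'same' convolutions, and 2 × 2 pooling after each block, so the flattened Conv2d_4 output is 8 × 8 × 512 = 32,768 features:

```python
def conv_params(filters, kernel, in_channels):
    # kernel*kernel*in_channels weights per filter, plus one bias each
    return (kernel * kernel * in_channels + 1) * filters

def bn_params(channels):
    # gamma + beta (trainable) and moving mean + variance (non-trainable)
    return 4 * channels

def dense_params(units, in_features):
    return (in_features + 1) * units

counts = {
    "Conv2d_1": conv_params(84, 3, 1),          # 840
    "BN_1": bn_params(84),                      # 336
    "Conv2d_2": conv_params(128, 5, 84),        # 268928
    "BN_2": bn_params(128),                     # 512
    "Conv2d_3": conv_params(256, 5, 128),       # 819456
    "BN_3": bn_params(256),                     # 1024
    "Conv2d_4": conv_params(512, 5, 256),       # 3277312
    "BN_4": bn_params(512),                     # 2048
    "dense_1": dense_params(256, 8 * 8 * 512),  # 8388864
    "output": dense_params(2, 256),             # 514
}

# A BatchNorm layer's moving statistics (2 per channel) are counted as
# non-trainable, so subtract them to obtain the trainable total.
non_trainable = 2 * (84 + 128 + 256 + 512)
trainable = sum(counts.values()) - non_trainable
print(trainable)  # 12757874, matching the total reported above
```

Each per-layer figure matches the count stated in the text, and the trainable total agrees with the reported 12,757,874.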
IV. ADVANTAGES
1) It helps prevent accidents caused by the driver becoming drowsy.
2) Driver drowsiness detection helps avoid crashes caused by fatigue by advising drivers to take a break in time.
3) It is possible to play music while driving.
4) It helps the driver search for data on Google while driving, using text-to-speech.
5) Traffic management is improved by reducing accidents.
V. LIMITATIONS
1) Dependence on Ambient Light: Under poor lighting conditions, even though the face is easily detected, the system is sometimes unable to detect the eyes, giving an erroneous result that must be handled. In real-time scenarios, infrared backlights should be used to compensate for poor lighting conditions.
2) Optimum Range Required: When the distance between the face and the webcam is outside the optimum range, problems arise. When the face is too close to the webcam (less than 30 cm), the system is unable to detect the face in the image and only shows the video as output, since the algorithm is designed to detect eyes within the face region. This can be resolved by detecting the eyes directly from the complete image with a Haar cascade detector instead of the face region, so that the eyes can be monitored even if the face is not detected.
3) Hardware Requirements: Our system was run on a PC with a 1.6 GHz Pentium dual-core processor and 1 GB of RAM. Although the system runs fine on higher configurations, on an inferior configuration it may not run smoothly and drowsiness detection will be slow. This problem was resolved by using dedicated hardware in real-time applications, so there are no issues with frame buffering or slow detection.
VIII. CONCLUSION
This paper provides a comparative study of papers related to driver drowsiness detection and alert systems. To detect the state of drowsiness, an arithmetic-based method is used. The system uses eye movement, detected with a camera, to recognize the symptoms of fatigue and thereby avoid accidents; it is based on the concept of eye tracking. To obtain finer results, one hundred and fifty images of different people were used. When a state of fatigue is identified, an alarm is triggered. In this paper, we presented the design and implementation of a vision-based system for detecting driver drowsiness that warns the driver when he is in a drowsy state.
The face and eye regions are detected using the Viola-Jones detection algorithm. A stacked deep convolutional neural network is developed to extract features and is used in the learning phase. A softmax layer in the CNN classifier classifies the driver as sleepy or non-sleepy. The proposed system achieved 96.42% accuracy; it effectively identifies the state of the driver and alerts with an alarm when the model predicts the drowsy output state continuously.
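The "alert when the model predicts the drowsy state continuously" behavior can be sketched as a simple counter over per-frame predictions; the consecutive-frame threshold below is an illustrative assumption, not a value from the paper:

```python
class DrowsinessAlarm:
    """Raise the alarm only after N consecutive 'drowsy' frames,
    so a single misclassified frame does not trigger it."""

    def __init__(self, consecutive_frames=15):
        self.threshold = consecutive_frames
        self.count = 0

    def update(self, is_drowsy: bool) -> bool:
        # Extend the drowsy run, or reset it on an alert frame.
        self.count = self.count + 1 if is_drowsy else 0
        return self.count >= self.threshold

alarm = DrowsinessAlarm(consecutive_frames=3)
predictions = [True, True, False, True, True, True, True]
print([alarm.update(p) for p in predictions])
# [False, False, False, False, False, True, True]
```

The counter resets whenever the classifier reports an alert frame, so only a sustained drowsy run, not an isolated blink, sounds the alarm.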
REFERENCES
[1] Fouzia, R. Roopalakshmi, J. A. Rathod, A. S. Shetty, and K. Supriya, "Driver Drowsiness Detection System Based on Visual Features," Alva's Institute of Engineering and Technology, Mijar, Moodbidri, India.
[2] W. Deng and R. Wu, "Real-Time Driver-Drowsiness Detection System Using Facial Features," Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, China; School of Software, Yunnan University, China.
[3] V. B. Navya Kiran, A. Rahman, R. Raksha, K. N. Varsha, and N. P. Nagamani, "Driver Drowsiness Detection," International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, NCAIT 2020 Conference Proceedings, Vol. 8, Issue 15; Dept. of Information Science and Engineering, JSS Academy of Technical Education, Bengaluru, India.
[4] B. Alshaqaqi, A. S. Baquhaizel, M. E. A. Ouis, M. Boumehed, A. Ouamri, and M. Keche, "Driver Drowsiness Detection System," Laboratory of Signals and Images (LSI), University of Sciences and Technology of Oran Mohamed Boudiaf (USTO-MB), Oran, Algeria.
[5] V. R. R. Chirra, S. R. Uyyala, and V. K. K. Kolli, "Deep CNN: A Machine Learning Approach for Driver Drowsiness Detection Based on Eye State," Department of Computer Applications, National Institute of Technology, Tiruchirappalli 620015, India; Department of Computer Science & Engineering, VFSTR, Guntur 522213, India.
[6] A. Tewari, S. Khan, A. Krishnan, T. Rauth, and J. Singh, "Smart Driver Assistant," Dept. of Computer Engineering, VESIT, Chembur, Mumbai 400074, India.
[7] W. Deng and R. Wu, "Real-Time Driver-Drowsiness Detection System Using Facial Features," Beijing University of Technology, Beijing 100124, China; School of Software, Yunnan University, Kunming 650000, China.
[8] J. Gwak, A. Hirao, and M. Shino, "An Investigation of Early Detection of Driver Drowsiness Using Ensemble Machine Learning Based on Hybrid Sensing," Institute of Industrial Science, The University of Tokyo, Japan; Nissan Motor Co., Ltd., Kanagawa 243-0192, Japan.
[9] A. M. Kailasam, K. Devi, A. Mohan, and B. F. Prabhakar, "Drowsiness Detection and Parking Assistance for Drivers," Department of Computer Science and Engineering, SRM Valliammai Engineering College, Kattankulathur, Tamil Nadu, India.
[10] S. Shaily, S. Krishnan, S. Natarajan, and ikumar Periyasamy, "Smart Driver Monitoring System."