Accident Detection and Alerting System Using Deep Learning Techniques
https://doi.org/10.22214/ijraset.2023.50701
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 11 Issue IV Apr 2023- Available at www.ijraset.com
Abstract: This project addresses the late response of emergency services to road accidents, a delay that costs many lives. To close that gap, the project focuses on gathering images, selecting informative frames, tracking the accident location, and sending notifications. Traditional machine learning approaches have relied on feature engineering methods to select an optimal set of features. In contrast, deep learning models such as Convolutional Neural Networks (CNNs) extract features automatically and can reduce the computational cost. The proposed implementation aims to increase the survival rate by shortening response time. Computation is most efficient when the algorithm fully utilizes the GPU and the right functions.
I. INTRODUCTION
Accident statistics show that tunnel accidents are increasing day by day. Our main objective is to minimize the response time when an accident occurs, and the time emergency responders take to reach the accident scene, in order to reduce deaths caused by tunnel accidents. The accident detection system identifies the location of the accident so that responders can reach it easily. Every second is valuable for the ambulance.
A notification is sent immediately as soon as the crash takes place, so that lives are not lost to the delayed arrival of the ambulance. To treat injured people, we first need to know where the accident happened, which is done through location tracking and sending a message to the emergency services.
In this project, when a vehicle meets with an accident, an immediate notification is sent to the emergency services. The emergency response to accidents is crucial. People injured in a crash need to be taken to the nearest hospital promptly to prevent their condition from worsening; moreover, serious crashes often cause non-recurrent congestion if emergency response or clearance is not carried out on time. To mitigate these negative impacts, tunnel crashes need to be quickly detected.
The autonomous-vehicle black-box system is simply used as a video recording device for accident identification. Traditional detection methods require manual extraction of features by experts over time. Unlike these methods, deep learning-based methods do not require manual feature extraction; given a dataset as input, they can detect accidents. With video input, deep learning-based methods perform image classification and object localization. This motivates our use of a deep learning model.
©IJRASET: All Rights are Reserved | SJ Impact Factor 7.538 | ISRA Journal Impact Factor 7.894 | 3167
2) YOLO Algorithm
We re-frame object detection as a single regression problem, straight from image pixels to bounding box coordinates and class probabilities. YOLO is refreshingly simple: a single convolutional network simultaneously predicts multiple bounding boxes and class probabilities for those boxes. YOLO trains on full images and directly optimizes detection performance. This unified model has several benefits over traditional methods of object detection. Because YOLO sees the entire image during training and test time, it implicitly encodes contextual information about classes as well as their appearance. Fast R-CNN, a top detection method, mistakes background patches in an image for objects because it cannot see the larger context; YOLO makes less than half the number of background errors compared to Fast R-CNN.
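YOLO's "single regression" idea can be made concrete by looking at the shape of its output. The sketch below uses the grid layout from the original YOLOv1 paper (an S x S grid, B boxes of 5 values each, C class scores per cell); the numbers are from that paper, not from this project's code, and the decoding helper is illustrative only.

```python
import random

# YOLOv1 layout: 7x7 grid, 2 boxes per cell, 20 classes (PASCAL VOC).
S, B, C = 7, 2, 20

# One grid cell's raw prediction vector: B boxes * 5 values + C class scores.
cell = [random.random() for _ in range(B * 5 + C)]

def decode_cell(cell, B, C):
    """Split one cell's vector into its boxes and class probabilities."""
    boxes = [cell[i * 5:(i + 1) * 5] for i in range(B)]  # x, y, w, h, conf
    class_probs = cell[B * 5:]  # class scores, shared by the cell's boxes
    return boxes, class_probs

boxes, class_probs = decode_cell(cell, B, C)
```

Every cell is decoded the same way in one pass, which is why the network predicts all boxes and classes simultaneously rather than proposing regions first.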
A. System Flow
Our project is divided into three modules: Processing Input, the Accident Detection Module, and the Notification System.
We take CCTV footage as input and divide it into frames. The vehicle's speed in the current frame is compared with that in the previous frame; if the difference in speed exceeds the threshold value, the frame is flagged as a crash frame. Crash frames are stored in the crash-images folder. When a crash frame is detected, the notification system sends an immediate alert, along with the crash images, to the concerned department.
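The three-module flow above can be sketched as a minimal pipeline. The function names, the threshold value, and the alert format below are illustrative stand-ins, not the project's actual code; the real system uses a detection model for speeds and an SMS API for alerts.

```python
# Minimal skeleton of the three modules: input, detection, notification.
SPEED_THRESHOLD = 25.0  # illustrative; the paper tunes its threshold manually

def frames_from_video(frames):
    """Stand-in for the input module (really a cv2.VideoCapture loop)."""
    for frame in frames:
        yield frame

def is_crash(prev_speed, cur_speed, threshold=SPEED_THRESHOLD):
    """Flag a frame when the speed change between frames exceeds a threshold."""
    return abs(cur_speed - prev_speed) > threshold

def notify(frame_id):
    """Stand-in for the notification module (really an SMS API call)."""
    return f"ALERT: crash detected in frame {frame_id}"

# Toy run: per-frame vehicle speeds; the last frame shows a sudden drop.
speeds = [30.0, 31.0, 29.5, 2.0]
alerts = [notify(i) for i in range(1, len(speeds))
          if is_crash(speeds[i - 1], speeds[i])]
```

Only the sudden drop from 29.5 to 2.0 crosses the threshold, so a single alert is produced for the final frame.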
V. MODULE DESCRIPTION
A. Processing Input
The project is designed to take data from a live CCTV camera feed, but for feasibility we have worked on pre-recorded traffic videos. This module takes the video feed, breaks the stream into a number of frames, processes each frame as required by the object detection model, and passes it to the next module.
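A minimal sketch of this module is shown below. In the real system the frames would come from OpenCV's `cv2.VideoCapture`; here a small list of fake frames stands in so the logic is runnable without OpenCV or a video file, and the preprocessing step is a placeholder for whatever resizing and normalisation the detection model expects.

```python
def read_frames(video_source):
    """Yield frames one by one (stand-in for a cv2.VideoCapture read loop)."""
    for frame in video_source:
        yield frame

def preprocess(frame, target_size=(640, 640)):
    """Placeholder for the resize/normalise step the detector requires."""
    return {"pixels": frame, "size": target_size}

# Three tiny fake "frames" in place of decoded video frames.
fake_video = [[0] * 4, [1] * 4, [2] * 4]
batch = [preprocess(f) for f in read_frames(fake_video)]
```

With a real video, `read_frames` would loop over `cap.read()` until the capture is exhausted; the per-frame flow into the detector is otherwise the same.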
Bounding boxes can be represented in multiple ways; the most common formats are:
1) Storing the coordinates of the corners [x-min, y-min, x-max, y-max]
2) Storing the coordinates of the center and the box dimensions [x, y, width, height]
In our project, we used the first representation, storing the coordinates of the corners. Every frame received from the previous module is passed to the RetinaNet object detection model, which returns the bounding boxes for each vehicle in the format specified above. We track each vehicle by associating an object with it that contains the vehicle's information: its current position, its link to the previous frame, and its speed.
For every new object detected in the current frame, we find the old object from the previous frame whose distance to the new object is the smallest among all old-new pairs. The old object with the least distance to the new object is almost certainly the same object, because an object can only move so far between subsequent frames; this distance will almost always be smaller than the distance between two different objects. We then establish a link between the objects with minimum distance. If the old object has not been assigned before, we establish a dual link (from both sides). If the old object has already been assigned to another new object, there is a conflict: two new objects map to a single old object. To resolve it, we compare the distance between the old object and its already-assigned current-frame object with the distance between the old object and the conflicting new current-frame object, and decide which new object the old object actually corresponds to. The incorrect new object is marked as not found, meaning it has been seen for the first time and will later be assigned a completely new index. After the link has been established, we update the new object's number of frames detected.
Next, we calculate a vector for each object in the current frame, from the object's midpoint at the 1st frame to its midpoint at the 5th frame. These midpoints are stored in a deque inside previous_frame_objects and/or cur_frame_objects, which is kept updated every frame by removing the oldest midpoint and adding the latest one. We then compare the previous vector magnitude with the current vector magnitude; if the change in magnitude is greater than the threshold value, the current frame is flagged as a crash frame. The threshold value was tuned manually, by checking the value at which crashes are detected accurately.
C. Notification System
This module takes the crash-detected frame as input. As soon as a crash is detected, an immediate notification is sent to the concerned department via SMS (Short Message Service). The SMS contains an alert message stating that a crash has occurred, along with a link to where the crash images are stored. The SMS is sent through an API (Application Programming Interface).
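A hedged sketch of the notification step is shown below. The paper does not name its SMS provider, so the endpoint URL, phone number, and payload fields here are placeholders; any real provider (Twilio, Fast2SMS, etc.) has its own field names and authentication scheme.

```python
import json
from urllib import request

SMS_API_URL = "https://sms.example.com/send"  # placeholder endpoint

def build_alert(crash_link, api_key):
    """Assemble the SMS payload: alert text plus a link to the crash images."""
    return {
        "to": "+10000000000",  # concerned department's number (placeholder)
        "message": f"Crash detected. Images: {crash_link}",
        "api_key": api_key,
    }

def send_sms(payload, url=SMS_API_URL):
    """POST the payload as JSON (shape only; real providers differ)."""
    req = request.Request(url, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

payload = build_alert("http://server/crash_images/42.jpg", "DEMO_KEY")
```

Separating payload construction from the network call keeps the alert logic testable without actually sending messages during development.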