
VISVESVARAYA TECHNOLOGICAL UNIVERSITY

JNANA SANGAMA, BELAGAVI-590018, KARNATAKA

Project Phase-II

“Vision-Based System Design and Implementation for Accident
Detection and Analysis via Traffic Surveillance Video”
Submitted in partial fulfilment of the requirements for the Degree of
BACHELOR OF ENGINEERING
in

COMPUTER SCIENCE AND ENGINEERING

Submitted By
AJAY P R [1JV20CS001]
MOHAMMED FARAZ ULLA SHARIFF [1JV20CS008]
NITHYASHREE [1JV20CS010]
SHAIK FARDHIN AHAMAD [1JV20CS011]

Under the Guidance of


Mrs. Niveditha S
HoD & Assistant Professor
Dept. of Computer Science and Engineering
JVIT, Bidadi – 562109

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


JNANA VIKAS INSTITUTE OF TECHNOLOGY
Vidya Soudha, Padmashree G.D. Goyalji Campus, Bangalore-Mysore Highway
BIDADI, BANGALORE – 562109.
JNANA VIKAS INSTITUTE OF TECHNOLOGY
BIDADI- 562109
(Affiliated to Visvesvaraya Technological University, Belagavi, Approved by AICTE, New Delhi)

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE
Certified that the Project Phase-II work entitled “Vision-Based System Design and
Implementation for Accident Detection and Analysis via Traffic Surveillance Video” carried
out by AJAY P R [1JV20CS001], MOHAMMED FARAZ ULLA SHARIFF
[1JV20CS008], NITHYASHREE [1JV20CS010] and SHAIK FARDHIN AHAMAD
[1JV20CS011], bonafide students of Jnana Vikas Institute of Technology, in partial
fulfilment for the award of the degree of Bachelor of Engineering in Computer Science and
Engineering of the Visvesvaraya Technological University, Belagavi during the year 2023-
2024. It is certified that all corrections/suggestions indicated for Internal Assessment have
been incorporated in the report deposited in the department library. The project report has
been approved as it satisfies the academic requirement in respect of project work prescribed
for the said degree.

Signature of Guide Signature of HOD Signature of Principal


NIVEDITHA S NIVEDITHA S Dr. AV SEETHA GIRISHA
Assistant Professor HoD & Assistant Professor Principal
Department of CSE Department of CSE JVIT, Bidadi
JVIT, Bidadi JVIT, Bidadi

Name of the Examiners Signature with Date

1. ________________ _________________

2. ________________ _________________
JNANA VIKAS INSTITUTE OF TECHNOLOGY
BIDADI-562109
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

DECLARATION
We, AJAY P R [1JV20CS001], MOHAMMED FARAZ ULLA SHARIFF [1JV20CS008],
NITHYASHREE [1JV20CS010] and SHAIK FARDHIN AHAMAD [1JV20CS011]
students of 8th semester B.E in Computer Science And Engineering, Jnana Vikas Institute
of Technology, Bidadi, hereby declare that the Project Phase-II work entitled “Vision-Based
System Design and Implementation for Accident Detection and Analysis via Traffic
Surveillance Video” has been carried out by us and submitted in partial fulfilment of the course
requirements for the award of degree of Bachelor of Engineering, from Visvesvaraya
Technological University during 2023-2024 is a record of an original work done by us under the
guidance of NIVEDITHA S, HoD & Assistant Professor in Department of Computer Science
and Engineering, JVIT, Bidadi. The results embodied in this work have not been submitted to
any other University or Institute for the award of any degree.

AJAY P R
USN: 1JV20CS001
Department of Computer Science and Engineering,
Jnana Vikas Institute of Technology
Bengaluru-562109 Signature of the Student

MOHAMMED FARAZ ULLA SHARIFF


USN: 1JV20CS008
Department of Computer Science and Engineering,
Jnana Vikas Institute of Technology
Bengaluru-562109 Signature of the Student

NITHYASHREE
USN: 1JV20CS010
Department of Computer Science and Engineering,
Jnana Vikas Institute of Technology
Bengaluru-562109 Signature of the Student

SHAIK FARDHIN AHAMAD


USN: 1JV20CS011
Department of Computer Science and Engineering,
Jnana Vikas Institute of Technology
Bengaluru-562109 Signature of the Student
ACKNOWLEDGEMENT

A project of this magnitude cannot be accomplished by an individual alone. We are therefore
grateful to a number of individuals whose professional guidance, assistance and
encouragement have made it a pleasant endeavour to undertake this project work.

We have great pleasure in expressing our deep sense of gratitude to our founder Sir C.M
Lingappa, Chairman, for having provided us with a great infrastructure and well-furnished
labs.

We take this opportunity to express our profound gratitude to Dr. A V SEETHA GIRISHA,
Principal for his constant support and encouragement.

We would like to express our heartfelt thanks to NIVEDITHA S, Head of the
Department, Computer Science and Engineering, who has guided us in all aspects. We
are grateful for her unfailing encouragement and the suggestions given to us in the course of
our project work.

We would like to mention special thanks to all our Teaching Faculties of Department of
Computer Science and Engineering, JVIT, Bidadi for their valuable support and guidance.

We would like to thank our families and friends for their unwavering support and
encouragement.

AJAY P R [1JV20CS001]
MOHAMMED FARAZ ULLA SHARIFF [1JV20CS008]
NITHYASHREE [1JV20CS010]
SHAIK FARDHIN AHAMAD [1JV20CS011]

ABSTRACT

With the increasing number of vehicles on roads worldwide, ensuring road safety has
become paramount. In this context, automated accident detection systems leveraging computer
vision techniques have garnered significant attention due to their potential to enhance
emergency response and reduce road fatalities. This paper presents the design and
implementation of a vision-based system for accident detection and analysis using traffic
surveillance video. Furthermore, the system incorporates advanced analytics capabilities to
provide post-incident analysis. By analyzing the captured video footage, the system can extract
valuable insights into the causes and dynamics of accidents, aiding in accident reconstruction
and forensic investigations. Additionally, the system generates comprehensive reports with
statistical data on accident occurrences, contributing to traffic management and policy-making
efforts. To evaluate the effectiveness of the proposed system, extensive experiments were
conducted using real-world traffic surveillance videos. The results demonstrate the system's
high accuracy in detecting accidents and its ability to provide valuable insights for post-incident
analysis. Overall, the developed vision-based system holds great promise for enhancing road
safety and improving the efficiency of emergency response operations in urban environments.

TABLE OF CONTENTS

DETAILS Page No.


ACKNOWLEDGEMENT i
ABSTRACT ii
TABLE OF CONTENTS iii
LIST OF FIGURES v

Chapter 1 INTRODUCTION 1
1.1 Introduction 1
1.2 Overview 2
1.3 Existing System 3
1.4 Proposed System 4
1.5 Problem Definition 5

Chapter 2 LITERATURE SURVEY 7

Chapter 3 SYSTEM REQUIREMENT SPECIFICATION 10


3.1 Feasibility Study 10
3.1.1 Economic Feasibility 10
3.1.2 Technical Feasibility 10
3.1.3 Social Feasibility 11
3.2 Functional Requirement 11
3.3 Non-Functional Requirement 11
3.4 System Requirements 12
3.4.1 Hardware Requirements 12
3.4.2 Software Requirements 12

Chapter 4 SYSTEM DESIGN 13


4.1 Introduction 13
4.2 Architectural Design 13
4.3 Data Flow Diagram 13
4.4 Class Diagram 14

4.5 Use Case Diagram 15
4.6 Sequence Diagram 16

Chapter 5 SYSTEM IMPLEMENTATION 17


5.1 Importing Libraries 17

5.2 Approaches 17

5.2.1 Frame Differencing Method 17

5.2.2 Background Subtraction Method 17

5.2.3 Feature Based Method 19

5.2.4 Motion Based Method 19

5.3 System Analysis 19

5.3.1 Experimental Setup and Testing Conditions 20

5.3.2 Key Features 20

Chapter 6 VEHICLE DETECTION AND FEATURE EXTRACTION 22
6.1 Vehicle Detection 22

6.2 Background Subtraction 23

6.3 Threshold and Morphological Processing 24

6.4 Code 25

Chapter 7 RESULTS 30
7.1 Results 30

7.2 Snapshots 30

Chapter 8 CONCLUSION AND FUTURE SCOPE 34


8.1 Conclusion 34

8.2 Future Scope 34

REFERENCES

LIST OF FIGURES

Figure No. Name of the Figure Page No.

1.1.1 Components of Traffic Management Centre 1

1.2.1 An example of Vehicle Tracking 3

1.4.1 Block diagram of the proposed accident detection system 5

1.5.1 Overview of the accident detection system 6

4.2.1 Architectural diagram of Accident Detection 13

4.3.1 Data Flow Diagram 14

4.4.1 Class Diagram 14

4.5.1 Use Case Diagram 15

4.6.1 Sequence Diagram 16

5.3.1.1 Frame sequences from test video saigon01.avi 20

5.3.2.1 Frame sequences from test video saigon02.avi 21

6.1.1 Description of vehicle detection system 22

6.2.1 Examples of background subtraction 23

6.3.1 Illustration of thresholding 25

7.2.1 Detection of Objects 30

7.2.2 Accuracy Detection 31

7.2.3 Accident Detection 31

7.2.4 Crash Detection 32

7.2.5 Person Detection 32

7.2.6 Collision Detection 33

7.2.7 Indication of Danger and Safety Accidents 33


CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION
Over the past few decades, the significance of utilizing technologies for traffic monitoring
has increased. Crash detection is heavily dependent on human oversight at the traffic
management centre. On the one hand, it is difficult for people to promptly recognize all traffic
accidents in a city, which means that the injured may not receive adequate treatment in many
instances. On the other hand, because it is difficult to obtain trajectory and speed information
from surveillance footage, manual investigation of the cause of a traffic accident occasionally
results in errors. Therefore, technologies that automatically recognize and analyse traffic
incidents are required. Figure 1.1.1 below shows the components of a traffic management
centre.

Fig 1.1.1: Components of Traffic Management Centre

force model. The methods of vision-based accident detection in the past two decades have been
developed in three ways: modelling of traffic flow patterns, analysing vehicle activities, and
modelling of vehicle interactions.

In the first method, the typical traffic patterns are modelled based on traffic rules from
large data samples. If the trajectory of a vehicle is inconsistent with typical trajectory patterns,
it would be determined as an accident. However, it is difficult to detect collisions because the
trajectory data of collisions in the real world is limited. The second method detects accidents
by calculating vehicle motion characteristics, such as speed, acceleration, and distance between
two vehicles, which means that all vehicles need to be tracked continuously. As a result, the
accuracy of the method in a crowded traffic environment is usually limited by computational
capacity. In the third method, the interaction of vehicles is modelled with the application of the
social force model and the intelligent driver model.

This method requires a lot of training samples, and the performance is poor since it
detects collisions based on the change of vehicle speed only. One of the widely used
applications of traffic surveillance systems is vehicle detection and tracking. By detecting and
tracking vehicles, we can determine vehicle velocity, trajectory, traffic density, etc., and if
there is any abnormality, the recorded information can be sent to the traffic authorities to take
necessary action. The main advantage of video monitoring systems over existing systems,
such as physical detectors that use magnetic loops, is the cost efficiency involved in installing
and maintaining these video systems, together with the possibility of video storage and
transmission for analysing the detected events.

Thus, video-based traffic surveillance systems have been preferred all over the world.
Video surveillance systems are being used for security monitoring, anomaly detection, traffic
monitoring and many other purposes. Video surveillance systems have decreased the need for
human presence to monitor activities captured by video cameras. Another advantage of visual
surveillance systems is that videos can be stored and analysed for future reference. One of the
important applications of video surveillance systems is traffic surveillance, and extensive
research has been done in the field of video traffic surveillance.

1.2 OVERVIEW
Although a considerable amount of research has been done to develop systems that can
detect an accident through video surveillance, real-time implementation of these systems has
not been realized yet. Real-time implementation of accident detection through video-based
traffic surveillance has always been challenging, since one has to strike the right balance
between the speed of the system and its performance, such as correctly detecting accidents
while also reducing the false alarm rate. Ideally, we want a system that maximizes the number
of frames processed per second while achieving an acceptable performance rate. This brings
us to the goal of this research: to develop an accident detection module at roadway
intersections through video processing that is suitable for real-time implementation. In this
thesis we developed an accident detection module that uses the parameters extracted from the
detected and tracked vehicles and is able to achieve good real-time performance.

An important stage in automatic vehicle crash monitoring systems is the detection of
vehicles in each video frame and accurate tracking of the vehicles across multiple frames.
With such tracking, vehicle information such as speed, change in speed and change in
orientation can be determined to facilitate the process of crash detection. As shown in
Figure 1.2.1, given the detected vehicles, tracking can be viewed as a correspondence problem
in which the goal is to determine which detected vehicle in the next frame corresponds to a
given vehicle in the current frame. While for a human analyst the task of tracking can often be
performed effortlessly, it is quite challenging for a computer. Therefore, in this thesis more
emphasis has been given to the real-time implementation of vehicle detection and tracking.

Figure 1.2.1: An example of vehicle tracking. (a) Frame at time t (b) Frame at time t+1
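As a concrete, if simplistic, reading of this correspondence problem, a greedy nearest-centroid
matcher pairs each vehicle detected in the current frame with the closest detection in the next
frame. The helper below (match_vehicles, a hypothetical name) is only a sketch of the idea,
not the tracker developed later in this report, which weights several low-level features:

import numpy as np

def match_vehicles(curr_centroids, next_centroids, max_dist=50.0):
    # Greedy nearest-centroid correspondence between consecutive frames.
    # Returns (current_index, next_index) pairs; real trackers would add
    # motion prediction and appearance cues on top of plain distance.
    matches, used = [], set()
    for i, (cx, cy) in enumerate(curr_centroids):
        dists = [np.hypot(cx - nx, cy - ny) for (nx, ny) in next_centroids]
        if not dists:
            break
        j = int(np.argmin(dists))
        if dists[j] < max_dist and j not in used:
            matches.append((i, j))
            used.add(j)
    return matches

# Example: three vehicles, each moving a few pixels between frames
print(match_vehicles([(100, 50), (200, 80), (310, 40)],
                     [(104, 51), (206, 79), (315, 42)]))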

1.3 EXISTING SYSTEM

A Vision-Based System Design and Implementation for Accident Detection and Analysis via
Traffic Surveillance Video involves the utilization of advanced computer vision techniques to
enhance road safety and traffic management. This system integrates sophisticated algorithms

and machine learning models to analyse real-time video feeds from traffic surveillance
cameras, aiming to detect and assess road accidents promptly. The system typically employs
object detection and tracking algorithms to identify vehicles, pedestrians, and other relevant
entities in the video stream. By continuously monitoring the traffic scene, the system can detect
unusual patterns or events indicative of accidents, such as sudden changes in vehicle speed,
unexpected stops, or collisions.

Once an accident is identified, the system can trigger immediate alerts to relevant
authorities, emergency services, or traffic management centres. Additionally, the system may
capture and store footage surrounding the incident for further analysis and investigation. The
implementation of such a vision-based system requires a robust hardware infrastructure,
including high-resolution cameras, powerful processing units, and efficient storage solutions.
Furthermore, it necessitates the development and training of machine learning models tailored
to accurately recognize and classify various objects and behaviours within the traffic
environment.

The benefits of this system extend beyond accident detection, as it also facilitates post-incident
analysis. By examining the recorded video data, authorities can gain insights into the factors
contributing to accidents, such as traffic flow, weather conditions, and road infrastructure. This
information is invaluable for formulating targeted strategies to improve road safety and
mitigate the risk of future accidents. Overall, a Vision-Based System for Accident Detection
and Analysis via Traffic Surveillance Video represents a cutting-edge approach to enhance road
safety through the fusion of computer vision, machine learning, and real-time monitoring
technologies.

1.4 PROPOSED SYSTEM


The proposed system for Vision-Based System Design and Implementation aims to
enhance road safety through the utilization of advanced computer vision techniques for
accident detection and analysis. This innovative approach leverages traffic surveillance videos
to monitor and interpret the dynamics of vehicular movement. The system will employ state-
of-the-art image processing algorithms to identify potential accidents in real-time, such as
sudden collisions, erratic vehicle behaviour, or pedestrians in dangerous proximity to traffic.
Through the integration of machine learning models, the system will continuously learn and
adapt to diverse traffic scenarios, improving its accuracy over time. Upon detecting an incident,
the system will trigger immediate alerts to relevant authorities, emergency services, and even
nearby vehicles equipped with compatible communication systems. Furthermore, the system

will facilitate post-incident analysis by providing detailed reports, including the sequence of
events leading to the accident, contributing factors, and potential improvements to road
infrastructure or traffic management.

Fig 1.4.1: Block diagram of the proposed accident detection system

The proposed system employs state-of-the-art computer vision algorithms for real-time
video analysis, including object detection, tracking, and trajectory analysis. By integrating
these techniques, the system can accurately identify potential accidents based on abnormal
vehicle behaviors such as sudden stops, erratic movements, and collisions. Upon detecting an
incident, the system triggers an alert mechanism to notify relevant authorities and emergency
services promptly.

To evaluate the effectiveness of the proposed system, extensive experiments were
conducted using real-world traffic surveillance videos. The results demonstrate the system's
high accuracy in detecting accidents and its ability to provide valuable insights for post-incident
analysis. Overall, the developed vision-based system holds great promise for enhancing road
safety and improving the efficiency of emergency response operations in urban environments.

1.5 PROBLEM DEFINITION

The problem addressed in this research is a preliminary approach to the real-time
implementation of accident detection systems at traffic intersections. Therefore, the main

objective of this research is to design and implement an accident detection system through
video processing that is suitable for real-time implementation. Figure 1.5.1 below gives a brief
overview of the system. As discussed in Section 1.2, real-time implementation of accident
detection through video-based traffic surveillance has always been challenging, since one has
to strike the right balance between the speed of the system and its performance, such as
correctly detecting accidents while also reducing the false alarm rate.

Fig 1.5.1: Overview of the accident detection system

For the last two decades researchers have spent considerable time developing different
methods that can be applied in the field of video-based traffic surveillance. Some of the
applications of video-based surveillance include vehicle tracking, counting the number of
vehicles, calculating vehicle velocity, finding vehicle trajectory, classifying vehicles,
estimating traffic density, finding the traffic flow, license plate recognition, etc. Of late, the
focus of video-based traffic surveillance has shifted to detecting incidents on roadways, such
as vehicle accidents, traffic congestion, and unexpected traffic blocks. From research and
surveys, it was found that there is a greater necessity to detect accidents on highways and at
roadway intersections, as vehicle accidents cause huge loss of lives and property.


CHAPTER 2

LITERATURE SURVEY

[1] Synergies of electric urban transport systems and distributed energy resources in
smart cities:

In urban areas, transportation systems and buildings consume the most energy. Although
these two domains are distinct (transport services and the built environment), their potential
for cooperation is frequently overlooked, preventing the anticipated benefits of coordinated
operation and management from being realized. Considering dynamic private and public
transport services such as electric vehicles and the underground railway, this study presents a
novel prioritized strategy for determining the optimal load shifting and for coordinating
distributed energy resources (DER) in an urban setting. The practical advantages of this
coordinated operation are the primary focus of the study. The idea is that surplus energy from
the public transportation system, such as energy recovered from braking, can be stored in the
batteries of electric vehicles (EVs) or used to run additional trains. Several case studies based
on data from a Madrid metro line and the private sector are presented. According to the
results obtained, coordinated operation conserves a significant amount of energy across the
whole infrastructure, with the metro system seeing the greatest reduction in power
consumption.

[2] Video analytics for surveillance:

The challenge of automatically interpreting events observed by a network of surveillance
cameras has changed considerably in recent years. Until recently, scene-understanding
systems have not been able to analyse complicated events on their own, and this research is
still ongoing. Video feeds from the many surveillance cameras around the world are not
monitored consistently, which makes such installations ineffective for incident prevention,
response, and victim assistance, all of which are pressing operational issues; consequently,
this is a significant problem. Video analytics also helps analysts quickly understand what
happened during post-event review of recorded footage.

[3] Motion interaction field for accident detection in traffic surveillance video:

A novel approach to identifying vehicle accidents from the interactions of moving objects
is presented in this article. Motivated by the way ripples spread and interfere when objects
disturb a water surface, object interactions are described with a Motion Interaction Field
(MIF) modelled using Gaussian kernels. By exploiting the symmetric properties of the MIF,
traffic accidents can be detected and localized without the need for time-consuming vehicle
tracking. According to the reported results, the technique outperforms existing methods for
detecting and localizing traffic incidents.

[4] Using the visual intervention influence of pavement marking for rutting mitigation—
Part II: Visual intervention timing based on the finite element simulation:

A companion study (Part I) observed that visual intervention through pavement marking
has a strong tendency to influence driving behaviour, reducing the concentration of wheel
tracks and the resulting rutting stress. This study presents a rutting-rate-based intervention
policy and a rut-depth prediction method built on a finite element model. It also presents a
three-stage intervention strategy in which the rutting deformation rate curve is used to
evaluate the intervention timing for three different asphalt pavement structures. The predicted
rut development is piecewise continuous. The SUPERPAVE asphalt surface shows promise
against early rutting, whereas the AC surface performs better in later stages. The study also
found that the resistance to deformation is inversely correlated with the time it takes for the
deformation to progress to the next (steady) stage. For the same asphalt surface, rutting in the
longitudinal-grade section develops more quickly than in the level section. With properly
timed intervention, the service life of the asphalt surface may improve by 16-31%.

[5] Bridging the past, present and future: Modelling scene activities from event
relationships and global rules:

The temporal relationships that govern scene activities, and their discovery in complex
perceptual contexts, are the focus of this study. The authors offer a novel topic model that
takes into account the two primary factors that influence activity occurrences: 1) which
activities may occur is partly determined by global scene-level rules; 2) activities are also
triggered by preceding activities after temporal delays, reflecting the local context. These
complementary cues are combined through a paired random variable in the probabilistic
generative process to determine which of the two rules governs each activity occurrence.
Model parameters are estimated using a collapsed Gibbs sampling inference scheme. The
ability of the model to discover temporal processes at various scales is demonstrated on
several datasets: the scene-level rules capturing Markovian dynamics, and the learned
relationships between activities, which predict that one activity will occur after another with a
given delay, together contribute to a complete awareness of the scene's dynamic behaviour.

[6] A Markov clustering topic model for mining behaviour in video:

This work focuses on mining behaviours from public-space surveillance footage. The
proposed Markov Clustering Topic Model (MCTM) outperforms existing dynamic Bayesian
network models (such as hidden Markov models) and Bayesian topic models (such as Latent
Dirichlet Allocation) in terms of accuracy, interpretability, and computational efficiency. By
hierarchically grouping visual events into activities and these activities into global behaviours,
and then modelling the transitions of these behaviours over time, the method demonstrates
strong performance.


CHAPTER 3
SYSTEM REQUIREMENT SPECIFICATION

A software requirements specification (SRS), a requirements specification for a software
system, is a complete description of the behaviour of a system to be developed. In addition to
a description of the software functions, the SRS also contains non-functional requirements.
Software requirements are a sub-field of software engineering that deals with the elicitation,
analysis, specification, and validation of requirements for software.

3.1 FEASIBILITY STUDY


The feasibility of the project is analysed in this phase and a business proposal is put forth
with a very general plan for the project and some cost estimates. During system analysis the
feasibility study of the proposed system is carried out, to ensure that the proposed system is
not a burden to the company. For feasibility analysis, some understanding of the major
requirements for the system is essential.

Three key considerations involved in the feasibility analysis are

• Economic Feasibility
• Technical Feasibility
• Social Feasibility

3.1.1 ECONOMIC FEASIBILITY

This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and
development of the system is limited, and the expenditures must be justified. The developed
system is well within the budget, which was achieved because most of the technologies used
are freely available. Only the customized products had to be purchased.

3.1.2 TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on the
available technical resources, as this would lead to high demands being placed on the client.
The developed system must have modest requirements, as only minimal or no changes are
required for implementing this system.

3.1.3 SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user, which
includes the process of training the user to use the system efficiently. The user must not feel
threatened by the system, but must accept it as a necessity. The level of acceptance by the
users depends on the methods employed to educate users about the system and to make them
familiar with it. Their level of confidence must be raised so that they are also able to offer
constructive criticism, which is welcomed, as they are the final users of the system.

3.2 FUNCTIONAL REQUIREMENT

The principal requirement is the user interface through which the external users interact
with the system.
• Hardware Interface: The external hardware interface consists of the users' computers,
which may be laptop PCs with wireless LAN, as the internet connection provided
will be wireless.
• Reliability: The software should be user-friendly, so that users can operate it easily
and it does not require much training time.
• Portability: It should be easy to implement on any platform.
• Software Interfaces: The operating system can be any version of Windows.
• Performance Prerequisites: The computers used should be at least Pentium 4-class
machines so that they can give optimal performance of the product.

3.3 NON-FUNCTIONAL REQUIREMENT

Non-functional requirements are the qualities offered by the system. They include time
constraints and constraints on the development process and standards. The non-functional
requirements are as follows:

• Speed: The system should process the given input and produce output within an
appropriate time.
• Ease of use: The software should be user-friendly, so that users can operate it easily
and it does not require much training time.
• Reliability: The rate of failures should be low to make the system more reliable.
• Portability: It should be easy to implement on any platform.

3.4 SYSTEM REQUIREMENTS

Designing a vision-based system for accident detection and analysis via traffic
surveillance video involves both hardware and software components. Here is a general outline
of the requirements for such a system.

3.4.1 HARDWARE REQUIREMENTS

➢ 3-D Camera
➢ Sensor integration
➢ Power supply
➢ User interface hardware
➢ Advanced hardware for AIML
➢ Processing unit
➢ Traffic parameters
➢ High speed processor
➢ Speedometer
➢ Accelerometer
➢ Sound sensor
➢ Pulse sensor

3.4.2 SOFTWARE REQUIREMENTS

➢ Accident detection algorithms


➢ Python Idle
➢ PyCharm
➢ Firebase
➢ MIT Application
➢ Visual studio
➢ Operating system
➢ Computer vision libraries
➢ Communication protocols

The specific requirements may vary based on the scale of the deployment, the complexity
of the environment, and any specific regulations or standards that need to be followed in
the targeted region.


CHAPTER 4
SYSTEM DESIGN
4.1 INTRODUCTION

System design is the process of defining the architecture, components, modules,
interfaces, and data for a system to satisfy specified requirements. Systems design could be
seen as the application of systems theory to product development.

4.2 ARCHITECTURAL DESIGN

System architecture is a conceptual model that defines the structure and behaviour of the
system. It comprises the system components and the relationships describing how they work
together to implement the overall system.

Fig 4.2.1: Architectural diagram of Accident Detection

4.3 DATA FLOW DIAGRAM

A data flow diagram is a graphical representation of the "flow" of data through an
information system, modelling its process aspects. A DFD is often used as a preliminary step
to create an overview of the system without going into great detail, which can later be
elaborated. DFDs can also be used for the visualization of data processing.

A DFD shows what kind of information will be input to and output from the system, how
the data will advance through the system, and where the data will be stored. Unlike a
flowchart, it does not show information about the timing of processes or about whether
processes will operate in sequence or in parallel.


Fig. 4.3.1: Data Flow Diagram

4.4 CLASS DIAGRAM

A class diagram is a static diagram. It represents the static view of an application. The
class diagram is not only used for visualizing, describing, and documenting different aspects
of a system but also for constructing executable code of the software application. A class
diagram shows a collection of classes, interfaces, associations, collaborations, and constraints.
It is also known as a structural diagram.

Fig. 4.4.1: Class Diagram

4.5 USE CASE DIAGRAM

A use case diagram at its simplest is a representation of a user's interaction with the system
that shows the relationship between the user and the different use cases in which the user is
involved. A use case diagram can identify the different types of users of a system and the
different use cases and will often be accompanied by other types of diagrams as well.

Fig. 4.5.1: Use Case Diagram

While a use case itself might drill into a lot of detail about every possibility, a use case
diagram can help provide a higher-level view of the system. It has been said that "use case
diagrams are the blueprints for your system": they provide a simplified and graphical
representation of what the system must actually do.

4.6 SEQUENCE DIAGRAM

A sequence diagram shows object interactions arranged in time sequence. It depicts the
objects and classes involved in the scenario and the sequence of messages exchanged between
the objects needed to carry out the functionality of the scenario. Sequence diagrams are
typically associated with use case realizations in the Logical View of the system under
development. Sequence diagrams are sometimes called event diagrams or event scenarios.

A sequence diagram shows, as parallel vertical lines (lifelines), different processes or
objects that live simultaneously, and, as horizontal arrows, the messages exchanged between
them, in the order in which they occur. This allows the specification of simple runtime
scenarios in a graphical manner.

Fig. 4.6.1: Sequence Diagram


CHAPTER 5
SYSTEM IMPLEMENTATION
5.1 IMPORTING LIBRARIES

• OpenCV: The 'import cv2' statement brings the OpenCV library into the Python script,
allowing access to its functions for computer vision and image processing.
• NumPy: NumPy is a Python module for scientific computing. This library is utilized
throughout the project and is imported as 'np'.
• OS: The OS module in Python provides functions for interacting with the operating
system. 'os' comes under Python's standard utility modules.
• Time: The time module is used to work with time in Python scripts: retrieving the
current time, waiting during code execution, and measuring the efficiency of code.
• PySerial: PySerial is a Python library that provides access to the serial ports on a variety
of operating systems.
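Taken together, the modules described above reduce to a short import header; PySerial is
conventionally imported under the name serial:

import os      # operating-system interaction (paths, directories)
import time    # timing, delays, and simple performance measurement
import cv2     # OpenCV: computer vision and image processing
import numpy as np   # numerical arrays for frames and masks
import serial  # PySerial: access to serial ports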

5.2 APPROACHES

There are four main approaches to detecting vehicle regions:

1. Frame differencing method

2. Background subtraction method

3. Feature based method

4. Motion based method

5.2.1 Frame Differencing Method

In the frame difference method, moving vehicle regions are detected by subtracting two
consecutive frames of the image sequence. This works well under uniform illumination
conditions; otherwise it creates non-vehicular regions. Frame differencing also does not work
well if the time interval between the frames being subtracted is too large.
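A minimal sketch of frame differencing, assuming a sample clip named traffic.avi:
consecutive grayscale frames are subtracted and the absolute difference is thresholded into a
binary motion mask.

import cv2

cap = cv2.VideoCapture("traffic.avi")  # assumed sample clip
ret, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Absolute difference between consecutive frames highlights motion
    diff = cv2.absdiff(prev_gray, gray)
    # Threshold the difference into a binary motion mask
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    prev_gray = gray  # the current frame becomes the next reference

cap.release()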

5.2.2 Background Subtraction Method

Background subtraction is one of the most widely used methods to detect moving
vehicle regions. In this step, either an already stored background frame or a background
generated from the accumulated image sequence is subtracted from the input frame to detect
the moving vehicle regions. The difference image is then thresholded to extract the vehicle
regions.

The problem with stored background frames is that they are not adaptive to changing
illumination and weather conditions, which may create non-existent vehicle regions, and they
work only for stationary backgrounds. Therefore, there is a need to generate a background that
adapts to the illumination and weather conditions. Various methods based on statistics and
parametric models have been used. Some approaches assume a Gaussian probability
distribution for each pixel in the image; the Gaussian distribution model is then updated with
the pixel values from each new frame in the image sequence.

Single-Gaussian background modelling works well if the background is relatively
stationary; it fails if the background contains shadows and unimportant moving regions (e.g.,
tree branches). This led researchers to use more than one Gaussian to build more robust
background modelling techniques. In Mixture of Gaussians methods, the colour of a pixel in a
background object is described by multiple Gaussian distributions. These methods were able
to produce good background models. In all the above-described methods, several parameters
need to be estimated from the data to achieve accurate density estimation for the background.
However, most of the time this information is not known beforehand.

Non-parametric methods do not assume any fixed model for the probability distribution
of background pixels. These methods are known to deal with multimodality in background
pixel distributions without determining the number of modes in the background. However,
these systems do not adapt to sudden changes in illumination, so a few methods based on
Support Vector Machines (SVM) and robust recursive learning were proposed to dynamically
update the background. Some methods used a Kalman filter to model the foreground and
background, and other methods used depth and colour information to model the background
using a stereo camera. Background subtraction methods produce better segmentation results
than frame differencing due to better background modelling. But the disadvantages of
background modelling for detecting vehicle regions are high computational complexity,
making it difficult to operate in real time, and increased sensitivity to changes in lighting
conditions.

5.2.3 Feature Based Method

Since background subtraction methods need accurate modelling of the background to
detect moving vehicle regions, researchers shifted their focus to detecting moving vehicle
regions using feature-based methods. These methods make use of sub-features such as the
edges or corners of vehicles. These features are then grouped by analysing their motion
between consecutive frames; a group of features thus segments a moving vehicle from the
background. The advantages of these methods are that occlusion between vehicle regions can
be handled well, feature-based methods have lower computational complexity than
background subtraction, the sub-features can be further analysed for classifying the vehicle
type, and a stationary camera is not necessary. The disadvantage of these systems is that if the
features are not grouped accurately, vehicles may not be detected correctly; also, some of
these systems are computationally complex and need fast processing computers for real-time
implementation.
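A minimal illustration of the feature-based idea, assuming a sample clip named traffic.avi:
Shi-Tomasi corners serve as the sub-features, pyramidal Lucas-Kanade gives their
frame-to-frame motion, and the resulting motion vectors are what a grouping stage would
cluster into individual vehicles:

import cv2

cap = cv2.VideoCapture("traffic.avi")  # assumed sample clip
ret, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Shi-Tomasi corners are the sub-features to be grouped into vehicles
corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
ret, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Pyramidal Lucas-Kanade tracks each corner into the next frame
next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, corners, None)
ok = status.flatten() == 1
good_old, good_new = corners[ok], next_pts[ok]
# Per-corner motion vectors; corners moving coherently would be
# clustered together to segment one moving vehicle from the background
motion_vectors = good_new - good_old
cap.release()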

5.2.4 Motion Based Method


Motion-based approaches have also been used to detect vehicle regions in image
sequences. Optical flow-based approaches have been used to detect moving objects; these
methods are effective even on small moving objects. Wixson proposed an algorithm to detect
salient motion by integrating frame-to-frame optical flow over time, making it possible to
predict the motion pattern of each pixel. This approach assumes that an object tends to move
in a consistent direction over time and that foreground motion has a distinct saliency. The
drawbacks of optical flow-based methods are that the calculation of optical flow is
time-consuming and that the inner points of a large homogeneous object (e.g., a car of a single
colour) cannot be characterized by optical flow. Some approaches used spatio-temporal
intensity variations to detect motion and thus segment the moving vehicle regions.
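A dense optical-flow sketch using OpenCV's Farneback implementation (the clip name is
again an assumption); the per-pixel flow magnitude is thresholded to obtain a motion mask:

import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.avi")  # assumed sample clip
ret, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
ret, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Dense Farneback optical flow: one (dx, dy) motion vector per pixel
flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
# Pixels whose flow magnitude exceeds a threshold are treated as moving;
# homogeneous vehicle interiors may still be missed, as noted above
moving_mask = (magnitude > 2.0).astype(np.uint8) * 255
cap.release()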

5.3 SYSTEM ANALYSIS


An "accident detection system" typically refers to a technological solution designed to
automatically detect and report accidents or emergencies, particularly in scenarios like road
accidents, industrial mishaps, or other hazardous situations. Here's a systematic analysis of
such a system.

Once an accident is detected, the system needs to transmit relevant information to
emergency services, fleet managers, or designated contacts. This could be done through
cellular networks, Wi-Fi, or dedicated communication protocols.

5.3.1 Experimental Setup and Testing Conditions

For the purpose of testing our algorithm we used two video sequences obtained from a
traffic intersection in Saigon, Vietnam. These videos were used to test the performance of the
tracking algorithm, as they provided good scenarios for a busy traffic intersection. The testing
was done offline and the results were used to improve the performance of the tracking
algorithm. A few frames from the test videos are shown below: Figure 5.3.1.1 shows frame
sequences from the video saigon01.avi and Figure 5.3.2.1 shows frame sequences from the
video saigon02.avi.

Figure 5.3.1.1: Frame sequences from test video saigon01.avi

5.3.2 Key Features:

➢ Real-time Monitoring: Constantly monitors the environment for signs of accidents
or emergencies.
➢ Accuracy: Minimizes false positives to prevent unnecessary alerts or disruptions.
➢ Reliability: Operates effectively under various environmental conditions (e.g.,
weather, lighting) and is resistant to tampering or interference.
➢ Scalability: Adaptable to different contexts, including vehicles, industrial sites, or
smart city infrastructure.

➢ Integration: Seamlessly integrates with existing emergency response systems, such
as 911 services or corporate safety protocols.

Figure 5.3.2.1: Frame sequences from test video saigon02.avi

Overall, an accident detection system represents a critical technology for enhancing
safety and emergency response across various domains, with ongoing advancements aimed at
further improving its effectiveness and usability.


CHAPTER 6

VEHICLE DETECTION AND FEATURE EXTRACTION

6.1 VEHICLE DETECTION


Vehicle detection is an important stage of the accident detection system, in which the
moving vehicles are segmented from the background. Figure 6.1.1 gives a brief description of
the vehicle detection system.

Figure 6.1.1: Description of vehicle detection system

The method used for detecting moving vehicles is background subtraction. Since the
research focused on real-time implementation of the system, background modelling
techniques that have a high computational cost were not tried. Also, since the testing of the
algorithm is done offline and the position of the camera recording the video sequence is static,
we used a stored background frame for background subtraction. Once the vehicle regions are
detected, suitable low-level features are extracted from the vehicle regions.

6.2 BACKGROUND SUBTRACTION
Background subtraction is a fundamental technique utilized in vehicle detection within
video streams. Initially, a model of the background scene is constructed from a series of frames
where no moving objects are present, establishing a baseline for stationary elements in the
environment. Subsequently, each frame is compared to this background model to identify
pixels that deviate significantly, indicating potential moving objects like vehicles. To refine this
identification, a thresholding step is often applied to filter out minor changes and retain only
substantial deviations from the background. The resulting foreground regions, representing
potential vehicles, are then grouped into blobs using techniques like contour detection. To
enhance accuracy, these blobs may undergo further processing to eliminate noise and false
positives. Finally, the detected vehicles can be tracked over time, providing information on
their movement patterns, speed, and trajectories. While background subtraction is effective in
controlled environments, challenges such as variations in lighting, shadows, and occlusions
necessitate additional techniques for robust vehicle detection, including adaptive background
modelling and machine learning-based approaches.

Figure 6.2.1: Examples of background subtraction. (a) Input frame (b) Background frame
(c) Difference image
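Since this work uses a stored background frame (Section 6.1), the pipeline reduces to a few
OpenCV calls. A minimal sketch, in which background.jpg is an assumed pre-stored
background image matching the video resolution:

import cv2

background = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)  # assumed stored background
cap = cv2.VideoCapture("saigon01.avi")  # one of the test videos from Section 5.3.1

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Difference image: deviations of the current frame from the background
    diff = cv2.absdiff(background, gray)
    # Keep only substantial deviations, i.e. candidate vehicle pixels
    _, fg = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    # Group foreground pixels into blobs via contour detection
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

cap.release()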

6.3 THRESHOLD AND MORPHOLOGICAL PROCESSING
Detecting vehicles in images or video streams is crucial for various applications like
traffic monitoring, surveillance, and autonomous navigation. One effective method for vehicle
detection involves the combined use of thresholding and morphological processing. In this
approach, the first step is image preprocessing, which typically involves converting the input
image to grayscale to simplify processing and reduce computational load. Optionally, noise
reduction techniques such as Gaussian blur can be applied to enhance the quality of the image.
Next comes thresholding, a technique that separates objects of interest from the background by
converting the grayscale image into a binary image. This is achieved by applying a threshold
value, above or below which pixel intensities are classified as foreground (potentially
containing vehicles) or background. Various thresholding methods exist, including global
thresholding techniques like Otsu's method or adaptive thresholding, which adjusts the
threshold locally based on image characteristics.

Following thresholding, morphological processing is employed to refine the binary
image obtained. Morphological operations such as erosion and dilation are used to manipulate
the shape and structure of the binary image. Erosion can help remove small noise pixels or
isolate thin features, while dilation can fill gaps between foreground regions and connect
nearby pixels, enhancing the continuity of potential vehicle shapes.

Once the binary image has been processed morphologically, further analysis is
conducted to identify regions or blobs that likely correspond to vehicles. Techniques such as
contour detection or connected component analysis are commonly used for this purpose. These
techniques enable the extraction of spatial features such as area, aspect ratio, and circularity,
which can help distinguish vehicles from other objects or artifacts in the scene. Post-processing
steps may involve refining the detected vehicle regions by eliminating false positives or
tracking vehicles across multiple frames in a video sequence. This can improve the robustness
and accuracy of the detection system, particularly in dynamic environments with changing
lighting conditions or occlusions.
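The three stages described above, Otsu thresholding, morphological clean-up, and blob
extraction with spatial filtering, can be sketched as follows (the input file name and the area
threshold are assumptions):

import cv2
import numpy as np

gray = cv2.imread("difference.jpg", cv2.IMREAD_GRAYSCALE)  # assumed difference image
# Otsu's method picks the binarization threshold automatically
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Erosion removes isolated noise pixels; dilation fills gaps in blobs
kernel = np.ones((3, 3), np.uint8)
binary = cv2.erode(binary, kernel, iterations=1)
binary = cv2.dilate(binary, kernel, iterations=2)
# Connected components give one labelled blob per candidate vehicle,
# with per-blob statistics (x, y, width, height, area)
num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
# Spatial filtering: discard blobs too small to be vehicles
vehicles = [stats[i] for i in range(1, num)
            if stats[i, cv2.CC_STAT_AREA] > 150]  # assumed area threshold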

Continuous refinement and optimization of the thresholding and morphological
processing parameters are essential to adapt the detection method to different environments
and improve its performance over time. Additionally, integration with other sensor modalities
or higher-level decision-making algorithms may be necessary for more advanced applications
such as autonomous driving.

Finally, the detected vehicles can be visualized by overlaying bounding boxes on the
original image or outputting the detection results in a suitable format for further analysis or
integration into larger systems.

Figure 6.3.1: Illustration of thresholding. (a) Difference image (b) Thresholded image

6.4 CODE

import cv2
import numpy as np


def compute_iou(box_a, box_b):
    # Intersection-over-union of two [x, y, w, h] bounding boxes
    xa = max(box_a[0], box_b[0])
    ya = max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0


# Function to detect accidents in the video using YOLO object detection
def detect_accidents(input_video_path, output_video_path,
                     conf_threshold=0.5, nms_threshold=0.4,
                     iou_threshold=0.5, consecutive_frames=3):
    # Load YOLO (paths to the weights and configuration files)
    net = cv2.dnn.readNet(r"C:\Users\FOLDER\OneDrive\Desktop\yolov4.weights",
                          r"C:\Users\FOLDER\OneDrive\Desktop\yolov4.cfg")

    # Load names of classes
    with open("coco.names", "r") as f:
        classes = f.read().strip().split("\n")

    # Initialize video capture and read the frame rate and resolution
    cap = cv2.VideoCapture(input_video_path)
    frame_rate = int(cap.get(cv2.CAP_PROP_FPS))
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Define the codec and create the VideoWriter object
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter(output_video_path, fourcc, frame_rate, (width, height))

    prev_boxes = []      # bounding boxes kept from the previous frame
    accident_frames = 0  # consecutive frames with strongly overlapping boxes

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        # Create a blob from the frame and perform a forward pass
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        layer_outputs = net.forward(net.getUnconnectedOutLayersNames())

        # Collect bounding boxes, confidences, and class IDs
        boxes, confidences, class_ids = [], [], []
        for output in layer_outputs:
            for detection in output:
                scores = detection[5:]
                class_id = int(np.argmax(scores))
                confidence = float(scores[class_id])
                # Filter out weak detections below the confidence threshold
                if confidence > conf_threshold:
                    # Scale the normalized box coordinates to the frame size
                    center_x = int(detection[0] * width)
                    center_y = int(detection[1] * height)
                    w = int(detection[2] * width)
                    h = int(detection[3] * height)
                    x = int(center_x - w / 2)
                    y = int(center_y - h / 2)
                    boxes.append([x, y, w, h])
                    confidences.append(confidence)
                    class_ids.append(class_id)

        # Non-maximum suppression to eliminate overlapping duplicate boxes
        indices = cv2.dnn.NMSBoxes(boxes, confidences,
                                   conf_threshold, nms_threshold)
        kept = np.array(indices).flatten() if len(indices) > 0 else []
        kept_boxes = [boxes[i] for i in kept]

        # Collision cue: boxes overlapping boxes from the previous frame
        # (IoU above the threshold) for several consecutive frames
        overlap = any(compute_iou(b, p) > iou_threshold
                      for b in kept_boxes for p in prev_boxes)
        accident_frames = accident_frames + 1 if overlap else 0
        if accident_frames >= consecutive_frames:
            cv2.putText(frame, "Accident Detected", (50, 50),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
        prev_boxes = kept_boxes

        # Draw bounding boxes and labels for the kept detections
        for i in kept:
            x, y, w, h = boxes[i]
            label = f"{classes[class_ids[i]]}: {confidences[i]:.2f}"
            # Green by default; red when the confidence exceeds 0.9 (90%)
            color = (0, 0, 255) if confidences[i] > 0.9 else (0, 255, 0)
            cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
            cv2.putText(frame, label, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)

        # Write the frame into the output video and display it
        out.write(frame)
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # Release everything once the job is finished
    cap.release()
    out.release()
    cv2.destroyAllWindows()


# Main function
if __name__ == "__main__":
    input_video_path = r"C:\Users\FOLDER\OneDrive\Desktop\1519.mp4"  # input video
    output_video_path = r"C:\Users\FOLDER\OneDrive\Desktop\output_video.mp4"  # output video
    detect_accidents(input_video_path, output_video_path)

Dept. of CSE, JVIT Page 29


Accident Detection and Analysis via Traffic Surveillance Video 2023-24

CHAPTER 7

RESULTS
7.1 RESULTS

➢ Accuracy: Report on the overall accuracy of the system in detecting accidents. This
could include metrics such as true positives, true negatives, false positives, and false
negatives.
➢ Detection Rate: Measure the system's ability to detect accidents in real-time or near
real-time scenarios. This could be expressed as a percentage of accidents detected out of
the total number of accidents occurring in the surveillance area.
➢ Response Time: Evaluate how quickly the system can detect accidents from the
moment they occur to the moment the system sends out alerts or notifications. This metric
is crucial for timely emergency response.

7.2 SNAPSHOTS

Figure 7.2.1: Detection of objects

Dept. of CSE, JVIT Page 30


Accident Detection and Analysis via Traffic Surveillance Video 2023-24

Figure 7.2.2 Accuracy Detection

Figure 7.2.3: Accident Detection

Dept. of CSE, JVIT Page 31


Accident Detection and Analysis via Traffic Surveillance Video 2023-24

Figure 7.2.4: Crash Detection

Figure 7.2.5: Person Detection

Dept. of CSE, JVIT Page 32


Accident Detection and Analysis via Traffic Surveillance Video 2023-24

Figure 7.2.6: Collision Detection

Figure 7.2.7: Indication of Danger and Safety Accidents

Dept. of CSE, JVIT Page 33


Accident Detection and Analysis via Traffic Surveillance Video 2023-24

CHAPTER 8

CONCLUSION AND FUTURE SCOPE


8.1 CONCLUSION

In this thesis a crash detection system at traffic intersections that is capable of operating in
real-time is presented. More emphasis has been given to vehicle detection and tracking stage,
since they are essential to extract suitable vehicle features and vehicle parameters that can be
used as factors for determining crashes at intersections. In this work, a tracking algorithm that
uses a weighted combination of low-level features extracted from moving vehicles and low-
level vision analysis on vehicle regions extracted from different frames is presented. The
vehicle detection rate of the proposed algorithm is about 90-93% and tracking rate is about
88-92% on two test videos. The average processing speed of the algorithm is about 5 second
for the two test videos used. The detection and tracking rate of the algorithm was decremented
due to the problem of shadows and occlusion, which the algorithm did not address in this
work. If the problems of shadows and occlusion are addressed, the performance rate of the
proposed algorithm is expected to go high. Overall, from the work presented it is shown that
proper combination of low-level features that have low computational complexity are
sufficient for vehicle tracking compared to complex feature tracking methods.

Using the low-level features and vehicle velocity of the correctly tracked vehicles,
crashes were detected by the system using an accident index calculated from speed, area,
orientation and position indexes of the tracked vehicles. The proposed crash detection system
has a precision (correct detection rate) of 87.5% and detection rate of 100% for test crashes
created using experimental test-bed. Overall, the performance of the collision detection
system is good particularly considering the fact that the algorithm is capable of operating in
real-time. Also, the method used a low-level feature instead of any learning algorithm such as
Hidden Markov Model, Neural Networks, etc. that can consume a lot of time for computation
and take decision. It is believed that with more analysis of traffic crashes data and more
training for the collision detection algorithm, it can be implemented for monitoring real-time
traffic scenarios.

8.2 FUTURE WORK


To improve the performance of the detection and tracking algorithm, problems created
by shadows and occlusion is planned to be addressed by using better background modelling
techniques and re-segmentation of the segmented vehicle region using average colour and

Dept. of CSE, JVIT Page 34


Accident Detection and Analysis via Traffic Surveillance Video 2023-24
lightness distance of blocks in the segmented vehicle regions. And also, it is planned to make
the vehicle detection and tracking algorithm operate under night conditions. Currently studies
having been done on determining the features to be used to detect and track vehicles in night
conditions. Also, it is planned to collect more traffic data from different camera angles to
make the algorithm robust to various conditions and situations. Also, the current focus is on
analysing the parts of the algorithm that can be optimized to increase the processing speed of
the detection and tracking algorithm.

To improve the performance of the collision detection algorithm, it is planned to


collect more crash cases from real-traffic situations by installing a camera at busy intersection
and to analyse the performance of the algorithm on real-traffic situations. Currently the focus
is on to determine the factors that can be added with existing low-level features and velocity
information of the vehicle that can improve the overall performance and increase the
robustness of the system.

Dept. of CSE, JVIT Page 35


REFERENCES
[1] C. Regazzoni, A. Cavallaro, Y. Wu, J. Konrad, and A. Hampapur, “Video analytics for
surveillance: Theory and practice,” IEEE Signal Process. Mag., vol. 27, no. 5, pp. 16–17,
Sep. 2010.

[2] X. Zhu, Z. Dai, F. Chen, X. Pan, and M. Xu, “Using the visual intervention influence of
pavement marking for rutting mitigation— Part II: Visual intervention timing based on the
finite element simulation,” Int. J. Pavement Eng., vol. 20, no. 5, pp. 573–584, May 2019.

[3] C. F. Calvillo, A. Sánchez-Miralles, and J. Villar, “Synergies of electric urban transport


systems and distributed energy resources in smart cities,” IEEE Trans. Intell. Transp. Syst.,
vol. 19, no. 8, pp. 2445–2453, Aug. 2018.

[4] K. Yun, H. Jeong, K. M. Yi, S. W. Kim, and J. Y. Choi, “Motion interaction field for
accident detection in traffic surveillance video,” in Proc. 22nd Int. Conf. Pattern Recognit.,
Aug. 2014, pp. 3062–3067.

[5] J. Varadarajan, R. Emonet, and J. Odobez, “Bridging the past, present and future: Modeling
scene activities from event relationships and global rules,” in Proc. IEEE Conf. Comput.
Vis. Pattern Recognit., Jun. 2012, pp. 2096–2103.

[6] T. Hospedales, S. Gong, and T. Xiang, “A Markov clustering topic model for mining
behaviour in video,” in Proc. IEEE 12th Int. Conf. Comput. Vis., Sep. 2009, pp. 1165–
1172.

[7] W. Hu, X. Xiao, Z. Fu, D. Xie, T. Tan, and S. Maybank, “A system for learning statistical
motion patterns,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 9, pp. 1450–1464,
Sep. 2006.

[8] S. Sadeky, A. Al-Hamadiy, B. Michaelisy, and U. Sayed, “Real-time automatic traffic


accident recognition using HFG,” in Proc. 20th Int. Conf. Pattern Recognit., Aug. 2010,
pp. 3348–3351. Volume 12, Issue 01, Jan 2023 ISSN 2456 – 5083 Page 484.

[9] Y.-K. Ki, “Accident detection system using image processing and MDR,” Int. J. Comput.
Sci. Netw. Secur., vol. 7, no. 3, pp. 35–39, 2007.
[10] D. Zeng, J. Xu, and G. Xu, “Data fusion for traffic incident detector using D-S evidence
theory with probabilistic SVMs,” J. Comput., vol. 3, no. 10, pp. 36–43, Oct. 2008.

[11] R. Mehran, A. Oyama, and M. Shah, “Abnormal crowd behavior detection using social
force model,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2009, pp. 935–
942.

[12] W. Sultani and J. Y. Choi, “Abnormal traffic detection using intelligent driver model,” in
Proc. 20th Int. Conf. Pattern Recognit., Aug. 2010, pp. 324–327.

[13] H.-N. Hu et al., “Joint monocular 3D vehicle detection and tracking,” in Proc. IEEE/CVF
Int. Conf. Comput. Vis. (ICCV), Oct. 2019, pp. 5389–5398.

[14] A. Kuramoto, M. A. Aldibaja, R. Yanase, J. Kameyama, K. Yoneda, and N. Suganuma,


“Mono-camera based 3D object tracking strategy for autonomous vehicles,” in Proc. IEEE
Intell. Vehicles Symp. (IV), Jun. 2018, pp. 459–464.

[15] H. Phat, D. Trong-Hop, and Y. Myungsik, “A probability-based algorithm using image


sensors to track the LED in a vehicle visible light communication system,” Sensors, vol.
17, no. 2, p. 347, 2017

You might also like