
A

Project Report
On

VEHICLE SPEED AND DISTANCE DETECTION


Submitted
in partial fulfillment for the award of the degree of
Bachelor of Technology
in
Department of Computer Science & Engineering

Submitted to:                                  Submitted By:

Prof. C. P. Gupta                              Kumar Tarun Sundaram (20EUCCS031)
Department of Computer Science                 Madan Palsaniya (20EUCCS033)
                                               Tanvi Sharma (20EUCCS061)
                                               Tejas Prajapati (20EUCCS063)

Semester/Year: VIII Sem/IV Year

Department of Computer Science & Engineering


University Department,
Rajasthan Technical University, Kota
May-2024
CERTIFICATE

We hereby declare that the project work being presented to the Department of Computer
Science and Engineering, Rajasthan Technical University Kota, entitled “Vehicle Speed and
Distance Detection”, was carried out and written by us to the best of our knowledge, under the
guidance of Dr. Gowri Choudhary, Assistant Professor, Department of Computer Science and
Engineering, Rajasthan Technical University Kota.

The results contained in this report have not been submitted, in part or in full, to any other
university or institute for the award of any degree or diploma, to the best of our knowledge.

Dr. Gowri Choudhary Kumar Tarun Sundaram (20EUCCS031)


Department of Computer Science and Madan Palsaniya (20EUCCS033)
Engineering Tanvi Sharma (20EUCCS061)
Rajasthan Technical University, Kota Tejas Prajapati (20EUCCS063)

CERTIFICATE

This is to certify that the final semester project, entitled “Vehicle Speed and Distance
Detection”, has been successfully carried out by Tanvi Sharma (Enrolment No. 20EUCCS061),
Madan Palsaniya (Enrolment No. 20EUCCS033), Kumar Tarun Sundaram (Enrolment No.
20EUCCS031) and Tejas Prajapati (Enrolment No. 20EUCCS063) under my guidance, in partial
fulfillment of the requirements of the “Bachelor of Technology in Computer Science and
Engineering” degree, from the Department of Computer Science and Engineering, Rajasthan
Technical University Kota, for the academic year 2023-24.

Dr. Gowri Choudhary Kumar Tarun Sundaram (20EUCCS031)


Department of Computer Science and Madan Palsaniya (20EUCCS033)
Engineering Tanvi Sharma (20EUCCS061)
Rajasthan Technical University, Kota Tejas Prajapati (20EUCCS063)

ACKNOWLEDGEMENT

We are thankful to our project supervisor, Dr. Gowri Choudhary, for their continuous support,
conviction, encouragement, and invaluable advice throughout this project work. We would also like
to thank Prof. C. P. Gupta for helping us throughout the literature review and presentation
preparation process.

Tanvi Sharma (20EUCCS061)


Madan Palsaniya (20EUCCS033)
Kumar Tarun Sundaram (20EUCCS031)
Tejas Prajapati (20EUCCS063)

ABSTRACT

Vehicle speed detection and tracking play an important role in protecting civilian lives and
preventing mishaps. Such a module is significant in traffic monitoring, where efficient
management and the safety of citizens are the main concerns. In this report, we discuss potential
methods for detecting a vehicle and estimating its speed; substantial research has already been
conducted and published in this area. The proposed method consists of three main steps:
background subtraction, feature extraction, and vehicle tracking. Speed is determined from the
distance travelled by a vehicle over a number of frames, together with the frame rate. For vehicle
detection, we use techniques and algorithms such as the background subtraction method,
feature-based methods, frame differencing and motion-based methods, the Gaussian mixture model,
and the blob detection algorithm. Vehicle detection is the part of speed detection in which the
vehicle is located using these algorithms before its speed is determined. The speed detection
pipeline is: 1) input video, 2) pre-processing, 3) moving vehicle detection, 4) feature
extraction, 5) vehicle tracking, 6) speed detection. Many accidents and mishaps can be avoided
if vehicle detection and speed tracking techniques are implemented.

CONTENTS

CHAPTER 1: INTRODUCTION ABOUT THE COURSE................................................................. 1-2


1.1 Objectives ........................................................................................................................................... 1
1.2 Motivation .......................................................................................................................................... 1
1.3 About the company............................................................................................................................. 2
1.4 Layout of the Report ........................................................................................................................... 2
CHAPTER 2: BASICS OF FULL STACK WEB DESIGN .................................................................... 3
CHAPTER 3: HTML .............................................................................................................................. 4-8
3.1 Identifying the parts that make up an HTML tag ............................................................................... 4
3.2 Determining when to use specific HTML tags ................................................................................... 5
3.3 Correctly structuring nested HTML content....................................................................................... 8
CHAPTER 4: CSS ................................................................................................................................. 9-12
4.1 Introduction about CSS ...................................................................................................................... 9
4.2 Identifying the benefit of separating style from content ..................................................................... 9
4.3 Using CSS to style a website .............................................................................................................. 9
4.4 Structure of CSS ............................................................................................................................... 10
4.5 Targeting things in CSS.................................................................................................................... 10
4.6 Some basic properties of CSS .......................................................................................................... 11
CHAPTER 5: JAVASCRIPT ............................................................................................................. 13-18
5.1 Introduction to JavaScript................................................................................................................. 13
5.2 JavaScript Language Basics ............................................................................................................. 13
5.3 JavaScript Loops, Condition ............................................................................................................ 16
5.4 JavaScript Objects ............................................................................................................................ 16
5.5 Basics of ES6.................................................................................................................................... 17
5.6 JavaScript DOM ............................................................................................................................... 18
CHAPTER 6: REACT JS……………………………………………………………..……………..19-21
6.1 Introduction to React.js……………………………………………………………………………..19
6.2 React.js Fundamentals……………………………………………………………………………...19
6.3 React.js Advanced Concepts…………………………………………………….……………...….20
6.4 Project Implementation with React.js………………………………………………………………21

CHAPTER 7: NODE JS…………………………………………………………………………..….22-24
7.1 Introduction to Node.js…………………………………………………………………………....22
7.2 Node.js Environment Setup…………………………………………………………………….…22
7.3 Node.js Modules and CommonJS………………………………………………………………....22
7.4 Asynchronous JavaScript and Callbacks…………………………………………………….…….23
7.5 Working with npm (Node Package Manager)……………………………………………….…….23
7.6 Express.js Framework……………………………………………………………………….…….23
7.7 RESTful APIs with Express…………………………………………………………………….…24
7.8 Deploying Node.js Applications…………………………………………………………………..24
CHAPTER 8: MONGODB DATABASE ……………………………………………………….…..25-27

8.1 Introduction to MongoDB…………………………………………………………….…………..25


8.2 Installing and Setting Up MongoDB……………………………………………………………..25
8.3 CRUD Operations in MongoDB…………………………………………………………………26
8.4 Aggregation Framework………………………………………………………………………….26
8.5 Data Validation and Schema Design……………………………………………………………..26
8.6 Sharding and Scalability………………………………………………………………………….27
8.7 MongoDB and Node.js Integration………………………………………………………………27
8.8 Geospatial Data and Indexing……………………………………………………………………27
CHAPTER 9: PROJECT………………………………………………………………………………..28
CONCLUSION ......................................................................................................................................... 29
REFERENCES .......................................................................................................................................... 30

List of Figures

Figure No. Name of Figure Page No.


3.1 Structure of an HTML webpage 5
4.1 Linking HTML and CSS file using link tag 9
4.2 Structure of CSS 10
9.1 Crowdfunding Project made using HTML, CSS and JavaScript 19
9.2 Create a new Campaign for funding 33
9.3 Connect wallet and ongoing funding project 34
9.4 Ongoing crowdfunding Projects 35
9.5 Connecting metamask to the Funding Project 36

List of Tables

Table No. Name of Table Page No.


3.1 Basic HTML tags 6
3.2 Formatting tags 6-7
3.3 Forms and Input tags 7
3.4 Images tags 7
3.5 Links tags 8
3.6 Lists tags 8
3.7 Tables tags 8
5.1 Data Types in JavaScript 14
5.2 Object Properties in JavaScript Example 16
6.2 Object Methods in JavaScript Example 17

CHAPTER 1: INTRODUCTION ABOUT THE PROJECT

1.1 Background and Motivation


The advancement in technology has led to the development of various systems aimed at enhancing
safety and efficiency in transportation. One such area of focus is the automatic detection of vehicle
speed and distance. Automatic systems for speed and distance detection are essential components
of modern traffic management and automotive safety systems. These systems utilize a combination
of sensors, data processing algorithms, and communication technologies to accurately measure the
speed and distance between vehicles on the road.

The motivation behind this project stems from the increasing need for effective traffic management
solutions to address the growing challenges of congestion, accidents, and pollution on roadways.
By developing an automatic vehicle speed and distance detection system, we aim to contribute to
the improvement of road safety, traffic flow optimization, and overall transportation efficiency.

1.2 Problem Statement


Traditional methods of speed and distance measurement, such as radar guns and manual
observations, have limitations in terms of accuracy, reliability, and scalability. Human error,
weather conditions, and other factors can affect the precision of these methods, leading to potential
safety hazards and inefficiencies on the road.

The problem statement for this project revolves around the need to develop a robust and reliable
automatic system for detecting vehicle speed and distance. This system should be capable of
accurately measuring the speed of moving vehicles and calculating the distance between them in
real-time, without relying on manual intervention or external factors that may compromise
accuracy.

1.3 Objectives of the Project


The primary objectives of this project are as follows:

Design and develop a prototype automatic vehicle speed and distance detection system.

Implement algorithms for speed detection and distance measurement using sensor data.

Evaluate the performance of the system in real-world conditions.

Identify potential challenges and limitations of the system and propose solutions for improvement.

Explore potential applications and implications of the developed system in traffic management
and automotive safety.

1.4 Scope and Limitations


The scope of this project encompasses the design, development, implementation, and evaluation
of an automatic vehicle speed and distance detection system. The system will be designed to work
in typical road conditions, including urban and highway environments, and will be capable of
detecting speeds and distances within a specified range.

However, it's important to acknowledge certain limitations of the project, including:

The prototype system may not achieve the same level of accuracy and reliability as commercial-
grade solutions.

Environmental factors such as weather conditions and terrain may impact the performance of the
system.

The system may have limitations in detecting vehicles with certain characteristics or in complex
traffic scenarios.

Despite these limitations, the project aims to provide valuable insights into the feasibility and
effectiveness of automatic vehicle speed and distance detection systems.

1.5 Overview of the Report


This report is structured into several chapters, each focusing on different aspects of the project.

Chapter 2 provides a comprehensive review of existing literature and technologies related to


automatic vehicle speed and distance detection systems.

Chapter 3 outlines the methodology employed in the design and development of the system,
including the hardware and software components used.

Chapter 4 presents the results of experiments and tests conducted to evaluate the performance of
the system.

Chapter 5 discusses the implications of the results, addresses challenges, and suggests avenues for
future research.

Finally, Chapter 6 summarizes the key findings of the project and provides concluding remarks.

CHAPTER 2: LITERATURE REVIEW

2.1 Introduction to Automatic Vehicle Speed and Distance Detection Systems


Automatic Vehicle Speed and Distance Detection Systems play a pivotal role in modern
transportation infrastructure, offering real-time insights into traffic dynamics and vehicle
behaviors. These systems leverage a range of technologies and methodologies to accurately
measure vehicle speeds and distances, thereby enhancing road safety, optimizing traffic flow, and
improving transportation efficiency. Among the key techniques utilized in these systems are the
optical flow method and background subtraction method, each offering unique advantages and
challenges in speed and distance detection.

2.2 Optical Flow Method


The optical flow method is a popular approach employed in automatic vehicle speed and distance
detection systems. It relies on analyzing the apparent motion of pixels in consecutive video frames
captured by cameras mounted along roadways. By tracking the displacement of pixels between
frames, the optical flow method can estimate the speed and direction of vehicle movement,
enabling real-time speed detection and traffic monitoring.

Advantages of Optical Flow Method:

• Real-Time Performance: Optical flow algorithms can provide speed estimates in real time,
  making them suitable for applications requiring immediate feedback, such as traffic
  management and surveillance.
• Non-Intrusive: Optical flow methods are non-intrusive and can be deployed without the
  need for physical infrastructure or sensors embedded in the road surface, reducing
  installation and maintenance costs.
• Suitability for Various Environments: Optical flow algorithms are versatile and can be
  applied in various environmental conditions, including daylight, low light, and adverse
  weather, making them suitable for outdoor traffic monitoring applications.

Challenges of Optical Flow Method:

• Complexity in Crowded Scenes: Optical flow algorithms may face challenges in accurately
  estimating vehicle speeds in crowded traffic scenarios with overlapping motion patterns
  and occlusions, leading to inaccuracies and false detections.
• Sensitivity to Lighting Conditions: Changes in lighting conditions, such as shadows, glare,
  and reflections, can affect the performance of optical flow methods, leading to variations
  in speed estimates and reduced accuracy.
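To make the principle concrete, the displacement of pixels between two frames can be estimated with a crude exhaustive block-matching search, a minimal stand-in for dense optical flow algorithms such as Farnebäck's. This is an illustrative sketch: the function name and the synthetic frames are assumptions, not part of any cited system.

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Estimate the dominant (dy, dx) displacement between two frames by
    exhaustively searching integer shifts that minimise the sum of squared
    differences (a crude block-matching form of optical flow)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            err = np.sum((shifted - curr) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Synthetic frames: a bright 5x5 "vehicle" moving 2 px down, 1 px right.
frame1 = np.zeros((20, 20)); frame1[5:10, 5:10] = 255.0
frame2 = np.roll(np.roll(frame1, 2, axis=0), 1, axis=1)

dy, dx = estimate_shift(frame1, frame2)
print(dy, dx)  # recovered displacement in pixels between the two frames
```

Multiplying the recovered per-frame displacement by the frame rate and a metres-per-pixel calibration factor would then yield a speed estimate.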

2.3 Background Subtraction Method
Another widely used technique in automatic vehicle speed and distance detection systems is the
background subtraction method. This method involves subtracting a reference background image
from the current frame to isolate moving objects, including vehicles, on the roadway. By analyzing
the motion of foreground objects over time, the background subtraction method can estimate
vehicle speeds and distances with high accuracy.

Advantages of Background Subtraction Method:

• Robustness to Environmental Changes: Background subtraction methods are robust to
  changes in lighting conditions, shadows, and environmental clutter, making them suitable
  for outdoor traffic monitoring applications.
• Accurate Detection of Moving Objects: By isolating moving objects from the background
  scene, background subtraction methods can accurately detect vehicles and estimate their
  speeds and trajectories, even in complex traffic scenarios.

Challenges of Background Subtraction Method:

• Adaptability to Dynamic Backgrounds: Background subtraction methods may encounter
  challenges in environments with dynamic backgrounds, such as moving trees, foliage, or
  pedestrians, leading to false detections and inaccuracies in speed estimation.
• Parameter Sensitivity: The performance of background subtraction algorithms is highly
  sensitive to parameter settings, such as threshold values and background model parameters,
  requiring careful tuning for optimal performance in different scenarios.
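A common way to obtain the reference background these methods subtract is an exponential running average. The sketch below is illustrative only: the update rate `alpha` and threshold `tau` are assumed values, not parameters from the literature reviewed here.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average: the model slowly absorbs the scene,
    so short-lived moving objects barely affect it."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, tau=30.0):
    """Pixels differing from the background model by more than tau
    are flagged as moving foreground."""
    return np.abs(frame - bg) > tau

# A static 10x10 scene at intensity 50; settle the model on it.
bg = np.full((10, 10), 50.0)
for _ in range(20):
    bg = update_background(bg, np.full((10, 10), 50.0))

frame = np.full((10, 10), 50.0)
frame[3:6, 3:6] = 200.0            # a bright 3x3 "vehicle" enters
mask = foreground_mask(bg, frame)
print(mask.sum())  # number of pixels flagged as foreground
```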

2.4 Integration of Optical Flow and Background Subtraction Methods


In recent years, researchers have explored the integration of optical flow and background
subtraction methods to leverage the complementary strengths of both techniques. By combining
the robustness of background subtraction with the real-time performance of optical flow, integrated
systems can achieve more accurate and reliable speed and distance detection in diverse traffic
scenarios.

Advancements in Integration Techniques:

• Feature-Based Fusion: Integration techniques based on feature extraction and fusion allow
  for the combination of motion information extracted from optical flow with foreground
  detection results obtained from background subtraction, enhancing the accuracy and
  robustness of speed and distance estimation.
• Machine Learning Approaches: Machine learning algorithms, such as neural networks and
  support vector machines, have been employed to learn the relationships between optical
  flow features and background subtraction results, enabling more effective integration and
  adaptation to varying traffic conditions.

2.5 Summary
This chapter provided an in-depth exploration of automatic vehicle speed and distance detection
systems, focusing on the optical flow method and background subtraction method as key
techniques. By examining the advantages, challenges, and integration possibilities of these
methods, this literature review sets the stage for the subsequent chapters, where we will discuss
the methodology employed in this project to leverage these techniques for the development of an
effective automatic speed and distance detection system.

CHAPTER 3: METHODOLOGY

3.1 Vehicle Detection Techniques and Approach


Recognizing the change in location of a non-stationary object across a series of images, captured
of a definite region at equal intervals of time, is an interesting topic in computer vision. A
plethora of applications are deployed to function in real-time environments: video surveillance,
identifying objects lying underwater, and diagnosing abnormalities in patients to provide proper
treatment in the medical field. Among these applications is the detection of vehicles in traffic
and the identification of their speed. However, certain factors must be considered when detecting
constantly moving vehicles at every interval of time. Three main techniques are used to detect a
vehicle, namely:

1) Background Subtraction Methods

2) Feature Based Methods

3) Frame Differencing and motion-based methods

1) Background Subtraction Methods:

Background subtraction retrieves a mobile object from a scene with a fixed background; the
retrieved object results from thresholding the image difference. This technique is predominantly
used to detect a vehicle in an image frame. However, the results degrade in poor lighting or bad
climatic conditions, which is a drawback of this method. Background subtraction calculates the
foreground mask by subtracting the current frame from a background model containing the static
part of the scene or, more generally, everything that can be considered background given the
characteristics of the observed scene.

Studies suggest that statistical and parametric methodologies are primarily used for background
subtraction, with some techniques using a Gaussian distribution model for every pixel in the
image. Every pixel (i, j) is then categorized as either foreground (a moving vehicle, also known
as a blob) or background, based on the knowledge procured from the model, using equation (i):

|I(i, j) − Mean(i, j)| < C × Std(i, j) …(i)

where I(i, j) is the intensity of the pixel,

C is a constant,

Mean(i, j) is the mean, and

Std(i, j) is the standard deviation of that pixel over the background model. A pixel satisfying
(i) is classified as background; otherwise it is foreground.
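The per-pixel Gaussian rule from equation (i) can be sketched in a few lines of numpy. This is an illustrative example only: the constant C = 3 and the synthetic frames are assumptions made for the sketch.

```python
import numpy as np

# Per-pixel statistics learned from 100 background-only frames (8x8 scene).
rng = np.random.default_rng(0)
frames = 50.0 + rng.normal(0.0, 2.0, size=(100, 8, 8))
mean = frames.mean(axis=0)
std = frames.std(axis=0)

C = 3.0                                  # the constant from equation (i)
frame = mean.copy()
frame[2:4, 2:4] += 100.0                 # 4 pixels occupied by a vehicle

# Foreground where |I - mean| exceeds C * std (the complement of eq. (i)).
foreground = np.abs(frame - mean) >= C * std
print(foreground.sum())  # count of pixels classified as foreground blob
```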

Figure 1: Background Subtraction Methods

2) Feature-based modelling:

Feature-based modelling identifies the image displacements that are easiest to interpret. The
technique finds edges, corners, and other structures that are well localized in the
two-dimensional image plane and traces these features as they move between frames. It comprises
two stages: finding the features in multiple images, and matching these features between frames.

Stage 1: Features are found in a series of two or more images. If carried out well, this stage
works efficiently with little overhead and reduces the amount of extraneous information to be
processed.

Stage 2: The features found in Stage 1 are matched between frames. In the most common scenario,
two frames are used and two sets of features are matched into a single resultant set of motion
vectors. The features in one frame are used as seed points from which other techniques determine
the flow.

Both stages of feature-based modelling have drawbacks. In the detection stage, features must be
located with precision and good reliability; this is of immense significance, and much research
has been performed on feature detectors. Matching is also ambiguous unless it is known in advance
that the image displacement is smaller than the distance between features.

3) Frame Differencing and motion-based methods:

Frame differencing finds the difference between two consecutive images from a sequence in order
to segregate the moving object (the vehicle) from the background. A change in pixel values
implies a change of position between the two frames. Motion-based methods detect a vehicle in a
trail of images by isolating the moving objects, also known as blobs, based on their speed,
movement, and orientation.

It is recommended to use intraframe, interframe, and tracking levels as frameworks to identify
and control the motion of vehicles in a frame. Through quantitative evaluation it has been shown
that the interframe and intraframe levels can handle partially detected images, while the
tracking level can handle fully occluded images efficiently.

An approach to calculating the frame difference is as follows.

Difference between two consecutive frames:

Let Ik be the kth frame in the trail of images and Ik+1 be the (k+1)th frame. The absolute
difference image is then calculated as

Id(k, k+1) = |Ik+1 − Ik|

Conversion of the absolute differential image to a binary image:

The resulting picture contains holes in the areas of non-stationary objects, and their mapped
regions are not closed. Before binarization, each frame is converted to grayscale using the
luminance transform

Y = 0.299·R + 0.587·G + 0.114·B

and the differential image is then thresholded to obtain a binary image.
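The grayscale conversion and the absolute frame difference above can be combined into a short sketch. The synthetic frames and the threshold value are illustrative assumptions.

```python
import numpy as np

def to_gray(rgb):
    """Luminance conversion: Y = 0.299 R + 0.587 G + 0.114 B."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def frame_difference(f_k, f_k1, threshold=25.0):
    """Binary motion mask from two consecutive grayscale frames:
    Id(k, k+1) = |I(k+1) - I(k)|, then thresholded."""
    return np.abs(f_k1 - f_k) > threshold

# Two synthetic RGB frames; a white 3x3 object moves one column right.
frame_k = np.zeros((10, 10, 3)); frame_k[4:7, 2:5, :] = 255.0
frame_k1 = np.zeros((10, 10, 3)); frame_k1[4:7, 3:6, :] = 255.0

mask = frame_difference(to_gray(frame_k), to_gray(frame_k1))
print(mask.sum())  # pixels that changed between the two frames
```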

Limitations:

This approach is not effective in windy conditions, because it also detects motion caused by
movement of air. The possibility of the camera not remaining fixed in its position due to wind
cannot be neglected, which results in spurious motion and the formation of holes in the binary
image.

B. Gaussian Mixture Model:

A Gaussian mixture model is a probabilistic model that represents a data set as a combination of
normally distributed subpopulations. These models do not require prior knowledge of which
subpopulation cluster a data point belongs to, which allows the model to learn in an unsupervised
manner. Gaussian mixture models are typically used for feature extraction when tracking numerous
objects, using the number of mixture components and their means to estimate the location of an
object in every frame of the image series or video. [6]

The primary aim of this approach is a vehicle detection and tracing algorithm that can be used to
monitor traffic. The model maintains an observation pattern for each pixel in the image matrix.
Further, the Mahalanobis distance of each Gaussian is calculated based on the observed pattern
change, the colour intensity, and the mean of the Gaussian component.
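For intuition, a minimal two-component EM fit in one dimension is sketched below. Real trackers fit a mixture per pixel and update it online, but the estimation principle is the same. All names and values here are illustrative assumptions, not the report's implementation.

```python
import numpy as np

def fit_gmm_1d(x, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture model."""
    mu = np.array([x.min(), x.max()])          # spread the initial means
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        d = x[:, None] - mu[None, :]
        p = pi * np.exp(-0.5 * d ** 2 / var) / np.sqrt(2.0 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances.
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / n + 1e-6
    return mu, var, pi

rng = np.random.default_rng(1)
# Intensities of one pixel over time: background near 50, vehicles near 200.
samples = np.concatenate([rng.normal(50, 5, 300), rng.normal(200, 5, 100)])
mu, var, pi = fit_gmm_1d(samples)
print(sorted(mu))  # the two component means, roughly near 50 and 200
```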

C. Blob Detection Algorithm:

The blob detection algorithm tracks the motion of non-stationary objects in the frame. A blob is
defined as a collection of pixels that are identified as one object. The algorithm determines the
location of the blob in consecutive frames; pixels with similar intensity values or colour codes
are grouped to form a blob. The algorithm is capable of detecting multiple blobs in the same
image and differentiating their speed and motion. It estimates factors such as size, location,
and colour to determine whether a new blob resembles the previous one, so that the blob keeps the
same label.

for each pixel in the image matrix {
    if pixel is blob colour  { label pixel = 1 }
    else                     { label pixel = 0 }
}
for each subsequent pixel {
    if pixel is blob colour and an adjacent pixel is labelled 1  { label pixel = 1 }
    else                                                         { label pixel = 2 }
}
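The labelling idea above is essentially connected-component analysis. Below is a self-contained sketch using a breadth-first flood fill; it is an illustrative implementation, not the report's exact algorithm.

```python
import numpy as np
from collections import deque

def label_blobs(binary):
    """4-connected component labelling: groups adjacent foreground pixels
    into blobs, returning a label image and the blob count."""
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                q = deque([(i, j)])
                while q:                        # flood-fill this blob
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count

img = np.zeros((8, 8), dtype=bool)
img[1:3, 1:3] = True     # blob 1
img[5:7, 4:7] = True     # blob 2
labels, n = label_blobs(img)
print(n)  # number of distinct blobs found
```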

D. Speed Tracking:

The presented methodology determines the speed of a vehicle moving towards a camera situated at a
considerable distance by tracking the motion of the vehicle through a series of images. The
proposed methodology consists of the steps shown in the figure below.

1) Pre-Processing:

First, the video is converted into individual frames, and parameters such as the number of
frames, frame rate, colour format, and frame size are extracted. A background subtraction
algorithm then removes the background from each frame: an average of all frames is computed,
leaving only the background, which is subtracted to isolate the main feature. The output is then
passed through thresholding and morphological operations. The object and its centroid are
detected with the connected-component method, and the centroid is obtained for every frame. The
velocity of the vehicle is finally calculated from the distance travelled by the vehicle and the
frame rate of the input video.

2) Detection of Moving Vehicle:

The main challenge faced during vehicle detection and speed tracking is detecting the main object
(in our case, the vehicle). To detect a moving object there are various approaches, such as the
temporal differencing method, the optical flow algorithm, and the background subtraction
algorithm. [7] In the temporal differencing method, the background image is extracted from two
adjacent frames, with the drawback that the video must be slow. The optical flow algorithm
detects the object independently of camera motion but becomes complex for real-time applications.
In background subtraction, the absolute difference between a background model and each
instantaneous frame is taken to detect the moving object; the background model is an image
containing no moving objects. In this work we use the background subtraction algorithm, which
consists of three stages.

a) Background Extraction:

Video recorded on a highway contains objects along with the background, and it is very difficult
to capture an image without any object. The background extraction method is therefore used: an
average of all frames is taken, and the moving objects are subtracted out, leaving the background
alone. This extracted image defines the ROI (Region of Interest). Each frame is then converted
from RGB to grayscale and multiplied with the ROI. This suppresses unwanted noise from waving
trees and vehicles outside the region of interest, which helps increase accuracy. The absolute
difference of each instantaneous frame and the background model, after multiplying both with the
extracted ROI, is taken to detect only the moving vehicles.

b) Thresholding:

Image segmentation is done using thresholding, in which grayscale images are converted into
binary images; selecting the threshold value is very important. Thresholding is used here to
separate the foreground vehicle from the static background:

g(x, y) = 0 for f(x, y) < T

g(x, y) = 1 for f(x, y) >= T

where g(x, y) is the thresholded image, T is the selected threshold value, and f(x, y) is the
instantaneous frame. In this work, the result contains the vehicle as the object, plus some noise.
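In numpy, this thresholding rule is a one-liner; the frame values and T below are illustrative.

```python
import numpy as np

T = 128
f = np.array([[10, 200],
              [130, 90]])               # instantaneous grayscale frame
g = (f >= T).astype(np.uint8)           # g(x,y) = 1 if f(x,y) >= T else 0
print(g)
```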

c) Morphological Operations:

Morphological operations remove noise from imperfect segmentations and are well suited to binary
images. They are performed on the output image of the thresholding phase. Opening, closing, and
dilation are applied: opening and closing remove holes in the detected foreground, while dilation
combines a structuring element with the foreground pixels. The structuring element is a small
binary image. After this process, the selected object pixels are passed to connected-component
analysis, which is applied to binary and grayscale images and identifies connected pixel regions
by scanning the image pixel by pixel, using either 8-pixel or 4-pixel connectivity.
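A minimal pure-numpy sketch of binary erosion, dilation, and opening with a 3×3 cross structuring element follows. It is illustrative only (border pixels are treated permissively in erosion); libraries such as OpenCV or scipy.ndimage provide production-grade versions.

```python
import numpy as np

def dilate(b):
    """Binary dilation: a pixel is set if it or any cross neighbour is set."""
    out = b.copy()
    out[1:, :] |= b[:-1, :]; out[:-1, :] |= b[1:, :]
    out[:, 1:] |= b[:, :-1]; out[:, :-1] |= b[:, 1:]
    return out

def erode(b):
    """Binary erosion: a pixel survives only if its cross neighbours are set."""
    out = b.copy()
    out[1:, :] &= b[:-1, :]; out[:-1, :] &= b[1:, :]
    out[:, 1:] &= b[:, :-1]; out[:, :-1] &= b[:, 1:]
    return out

def opening(b):
    """Erosion followed by dilation: removes small noise specks."""
    return dilate(erode(b))

mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True     # the vehicle blob
mask[0, 8] = True         # a single-pixel noise speck
cleaned = opening(mask)
print(cleaned[0, 8], cleaned[4, 4])  # speck removed, blob centre survives
```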

3) Feature Extraction based on Background Subtraction:

The features in feature extraction are simply independent characteristics of the vehicle, such as
speed, colour, shape, centroid, and edges. The result of connected-component analysis is used to
draw a bounding box around each vehicle. In this work, the centroid and the histogram of the
vehicle region inside the bounding box are selected as features.

4) Detection of Vehicle:

The vehicle detection process is based on feature detection: the extracted features are tracked
over sequential frames. [Mohit] A matching algorithm determines whether a detection is the same
object or a different one; the Mahalanobis distance is used in the object matching algorithm.

During the past decade, Mahalanobis distance learning has attracted a lot of interest. The Mahalanobis distance between two d-dimensional numerical vectors x and x′ is defined by

d²(x, x′) = (x − x′)ᵀ M (x − x′),

where M is a d × d matrix. The similarity or dissimilarity between two groups is measured using the Mahalanobis distance; when the covariance matrix equals the identity matrix, the Mahalanobis distance reduces to the Euclidean distance. In object matching, the Mahalanobis distance between the features of an object in the previous frame and in the instantaneous frame is computed and compared against a preset threshold. If the distance is less than the threshold, the object in the previous frame and the instantaneous frame is the same. On this basis, a match ID is assigned to each object, which is then tracked over sequential frames.
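The matching rule can be illustrated as follows; the threshold value of 10.0 is hypothetical, since in practice it is tuned per scene:

```python
import numpy as np

def mahalanobis(x, x_prev, M):
    """d^2 = (x - x')^T M (x - x'); M is typically the inverse covariance."""
    d = np.asarray(x, float) - np.asarray(x_prev, float)
    return float(np.sqrt(d @ M @ d))

# with M = identity, the Mahalanobis distance reduces to Euclidean distance
M = np.eye(2)
d = mahalanobis([0.0, 0.0], [3.0, 4.0], M)   # Euclidean distance of 5.0

# matching rule: same object if the feature distance is below the threshold
THRESHOLD = 10.0          # illustrative value; tuned per scene in practice
same_object = d < THRESHOLD
```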

5) Speed Determination:
The vehicle with a particular ID is observed over a series of sequential frames, and the number of frames in which it appears is noted:

Total Frames Covered (TF) = frame n − frame 0

where frame 0 is the first frame in which the object enters the region of interest, frame n is the last frame before the object leaves the region of interest, and the real-world distance is mapped onto the image. The total frame count is multiplied by the duration of one frame, which is obtained from the frame rate of the video, to give the travel time. The distance is fixed and is mapped from the real world onto the image:

Speed = Distance / (TF × Frame duration), where Frame duration = 1 / Frame rate
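A worked numeric example of this computation (all numbers are illustrative):

```python
# A vehicle crosses a 20 m region of interest that has been mapped from the
# real world onto the image.
frame_0, frame_n = 120, 150               # first/last frame inside the ROI
frame_rate = 15.0                         # frames per second

total_frames = frame_n - frame_0          # TF = 30 frames
travel_time = total_frames / frame_rate   # TF * (1 / frame rate) = 2.0 s

distance_m = 20.0
speed_mps = distance_m / travel_time      # 10 m/s
speed_kmph = speed_mps * 3.6              # 36 km/h
```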

Figure 2: Raw frame before the optical flow method

Thus, from the distance and the travel time of the detected vehicle, its speed is determined using the above formula.

CHAPTER 4: OPTICAL FLOW METHOD AND BACKGROUND
SUBTRACTION FOR VEHICLE DETECTION AND SPEED
TRACKING

4.1 Introduction
In this chapter, we delve into the utilization of the optical flow method and background subtraction
technique for vehicle detection and speed tracking in real-world traffic scenarios. Optical flow
captures the apparent motion of pixels between consecutive frames, enabling the estimation of
vehicle speeds, while background subtraction isolates moving objects from the static background,
facilitating vehicle detection. We explore the principles, implementation, and integration of these
methods to develop a robust system for traffic monitoring and management.

4.2 Optical Flow Method


The optical flow method is a fundamental technique used in computer vision to analyze the motion
of objects within an image or video sequence. It calculates the displacement of pixels between
consecutive frames, representing the apparent motion of objects in the scene. By estimating the
flow vectors of pixels, optical flow algorithms provide insights into the direction and speed of
object movement.

Principles of Optical Flow:

Optical flow is based on the assumption of spatial and temporal coherence, where neighboring
pixels exhibit similar motion patterns over time.

The optical flow equation models the relationship between image brightness variations and the
motion field, seeking to minimize the difference between observed and predicted pixel intensities.

Implementation of Optical Flow:

Optical flow algorithms can be categorized into dense and sparse methods. Dense methods
estimate flow vectors for every pixel in the image, while sparse methods focus on key points or
features.

Figure 3: Vehicle speed detection at night

Common optical flow algorithms include Lucas-Kanade, Horn-Schunck, and Farneback. These
algorithms differ in their assumptions, optimization criteria, and computational complexity.
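As a minimal illustration of the Lucas-Kanade idea, a single least-squares solve of the optical flow constraint over one window can be sketched in NumPy; the synthetic frames and the one-pixel shift are fabricated for demonstration:

```python
import numpy as np

# two synthetic frames: frame2 is frame1 shifted one pixel along x
yy, xx = np.mgrid[0:20, 0:20].astype(float)
frame1 = xx * yy
frame2 = (xx - 1.0) * yy

# spatial gradients of frame1 and the temporal gradient between frames
Iy, Ix = np.gradient(frame1)    # np.gradient returns d/drow, d/dcol
It = frame2 - frame1

# Lucas-Kanade: solve Ix*u + Iy*v = -It in the least-squares sense
A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
b = -It.ravel()
(u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
# (u, v) ~ (1, 0): one pixel of apparent motion per frame along x
```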

Figure 4: Vehicle speed detection in daytime

Integration with Vehicle Speed Tracking:

In the context of vehicle speed tracking, optical flow is utilized to measure the displacement of
vehicles between consecutive frames.

By analyzing the motion vectors of vehicle pixels, the speed of individual vehicles can be estimated
based on the distance traveled over time.

4.3 Background Subtraction


Background subtraction is a technique used to segment moving objects from a static background
in video sequences. It involves subtracting a reference background image from the current frame
to isolate foreground objects, which typically represent moving vehicles. Background subtraction
algorithms play a crucial role in vehicle detection and tracking systems, providing a basis for
identifying and analyzing dynamic objects in the scene.

Principles of Background Subtraction:

Background subtraction relies on the assumption that the background scene remains relatively
static over time, while foreground objects introduce temporal changes in pixel values.

Common approaches to background subtraction include frame differencing, Gaussian mixture


models (GMM), and adaptive background modeling.

Implementation of Background Subtraction:

Background subtraction algorithms typically consist of three main steps: background modeling,
foreground segmentation, and post-processing.

Background modeling involves estimating the static background scene from input video frames
using techniques such as temporal averaging or statistical modeling.

Foreground segmentation identifies moving objects by detecting pixels that deviate significantly
from the background model.

Post-processing steps, such as morphological operations or contour analysis, are applied to refine
the segmentation results and extract meaningful object boundaries.
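The three steps above can be sketched with a simple temporal-averaging background model; the threshold and the learning rate alpha are illustrative, and a production system would more likely use a GMM such as OpenCV's MOG2:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Background modeling: temporal averaging (running average)."""
    return (1.0 - alpha) * bg + alpha * frame

def segment_foreground(bg, frame, T=30.0):
    """Foreground segmentation: pixels deviating from the model by more than T."""
    return np.abs(frame.astype(float) - bg) > T

# toy scene: a dark static background with one bright moving object
bg = np.zeros((8, 8))                      # learned background (all dark)
frame = np.zeros((8, 8))
frame[2:5, 2:5] = 200.0                    # a bright "vehicle" enters

fg = segment_foreground(bg, frame)         # True exactly on the vehicle
bg = update_background(bg, frame)          # background slowly absorbs changes
```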

Figure 5 : Shadow detection results based on HSV color space.

Integration with Vehicle Detection:

In vehicle detection applications, background subtraction is used to segment moving vehicles from
the background scene.

Detected foreground regions are further processed and analyzed to extract vehicle features, such
as size, shape, and motion characteristics, enabling robust vehicle detection and localization.

4.4 Integration of Optical Flow and Background Subtraction


The integration of optical flow and background subtraction techniques offers complementary
strengths for vehicle detection and speed tracking systems. By combining the capabilities of both
methods, we can enhance the accuracy, robustness, and efficiency of the overall system.

Integration Framework:

Optical flow provides fine-grained motion estimation, capturing subtle movements of vehicles
within the scene.

Background subtraction offers scene segmentation, isolating moving vehicles from the static
background and reducing false detections.

Joint Processing Pipeline:

In the integrated framework, optical flow and background subtraction methods are applied
sequentially or concurrently to process input video frames.

Optical flow estimates the motion vectors of pixels, while background subtraction segments
moving objects from the background scene.

The results from both methods are fused or combined to refine vehicle detection and speed
estimation, leveraging the complementary information provided by each technique.
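One simple fusion strategy, sketched under the assumption that a dense flow field and a foreground mask have already been computed, is to average the flow only over foreground pixels so that background noise does not contaminate the velocity estimate:

```python
import numpy as np

def object_velocity(flow_u, flow_v, fg_mask):
    """Mean flow vector over foreground pixels only: background motion
    (noise, camera jitter) is masked out by the subtraction result."""
    if not fg_mask.any():
        return 0.0, 0.0
    return float(flow_u[fg_mask].mean()), float(flow_v[fg_mask].mean())

# illustrative inputs: uniform 2 px/frame rightward flow on the vehicle,
# small spurious flow elsewhere
u = np.full((6, 6), 0.1)
v = np.zeros((6, 6))
u[2:4, 1:5] = 2.0
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:5] = True                  # foreground from background subtraction

vel = object_velocity(u, v, mask)      # per-frame pixel velocity of the object
```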

System Optimization and Calibration:

The integrated system requires careful optimization and calibration to ensure coherent operation
and accurate results.

Parameters such as flow regularization, background modeling thresholds, and object tracking
criteria are fine-tuned to achieve optimal performance in different traffic scenarios and
environmental conditions.

4.5 Performance Evaluation


The performance of the integrated optical flow and background subtraction system is evaluated
using quantitative metrics and qualitative assessments. Key performance indicators include
detection accuracy, speed estimation error, computational efficiency, and robustness to noise and
occlusions.

Quantitative Metrics:

Detection accuracy is measured by comparing system outputs with ground truth annotations,
calculating metrics such as precision, recall, and F1 score.

Speed estimation error quantifies the difference between estimated and ground truth vehicle
speeds, providing insights into the accuracy of speed tracking.
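The metric computations themselves are straightforward; the counts and speed values below are fabricated for illustration:

```python
import numpy as np

# detection metrics from counts against ground-truth annotations
tp, fp, fn = 8, 2, 2                      # illustrative counts

precision = tp / (tp + fp)                # 0.8
recall = tp / (tp + fn)                   # 0.8
f1 = 2 * precision * recall / (precision + recall)

# speed estimation error: mean absolute error against ground-truth speeds
est = np.array([48.0, 61.5, 35.0])        # estimated speeds (km/h)
gt = np.array([50.0, 60.0, 36.0])         # ground-truth speeds (km/h)
mae = np.abs(est - gt).mean()             # (2 + 1.5 + 1) / 3 = 1.5 km/h
```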

Qualitative Assessments:

Visual inspection of system outputs allows for qualitative evaluation of detection and tracking
results.

The system's robustness to challenging scenarios, such as occlusions, lighting variations, and
complex motion patterns, is assessed through real-world testing and validation.

4.6 Case Study: Real-World Implementation


A case study demonstrates the real-world implementation of the integrated optical flow and
background subtraction system in a traffic monitoring application. The system is deployed at a
roadside location equipped with surveillance cameras, capturing live traffic footage for analysis
and processing.

Deployment Scenario:

Surveillance cameras are strategically positioned along the roadway to provide comprehensive
coverage of the traffic scene.

The integrated system is deployed on dedicated hardware platforms, enabling real-time processing
of video streams and extraction of vehicle speed and position information.

System Performance:

The performance of the deployed system is evaluated under varying traffic conditions, including
different vehicle speeds, densities, and environmental factors.

Real-world testing allows for validation of the system's accuracy, reliability, and robustness in
practical traffic monitoring scenarios.

4.7 Conclusion
In conclusion, the integration of optical flow and background subtraction techniques offers a
powerful approach for vehicle detection and speed tracking in real-world traffic environments. By
leveraging the complementary strengths of both methods, we can achieve accurate, robust, and
efficient systems for traffic monitoring and management. The systematic integration framework,
optimization strategies, and performance evaluation methodologies outlined in this chapter
provide a comprehensive guide for the development and deployment of advanced traffic
surveillance systems. Through continued research and innovation, we can further enhance the
capabilities and effectiveness of these systems in addressing the challenges of modern
transportation.

CHAPTER 5: RESULT

5.1. Shadow Detection Results.


The video of moving vehicles is captured by a camera fixed on an overpass. The video is in AVI format with a frame rate of 15 frames/s, and the number of moving vehicles is sufficient for the experiment. The experimental results of shadow detection are shown in Figure 5, where the left column shows the original images and the right column shows the detection results for the shadow area. In the right column, the white area indicates the shadow detected by the shadow detection algorithm based on the HSV color space. To obtain a complete shadow area, a morphological close operation is performed after image binarization based on threshold segmentation.
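The HSV shadow criterion (a pixel is classified as shadow when its value channel is attenuated within a band while hue and saturation remain close to the background) can be sketched as follows; the thresholds are illustrative rather than the tuned values used in the experiments:

```python
import numpy as np

def shadow_mask(frame_hsv, bg_hsv, alpha=0.4, beta=0.9, tau_s=40.0, tau_h=30.0):
    """Shadow: V ratio in [alpha, beta], small S and H differences."""
    f = frame_hsv.astype(float)
    b = bg_hsv.astype(float)
    ratio = f[..., 2] / np.maximum(b[..., 2], 1e-6)
    return ((alpha <= ratio) & (ratio <= beta)
            & (np.abs(f[..., 1] - b[..., 1]) <= tau_s)
            & (np.abs(f[..., 0] - b[..., 0]) <= tau_h))

# toy 2x2 HSV images: background is uniform (H=90, S=100, V=200)
bg = np.tile(np.array([90.0, 100.0, 200.0]), (2, 2, 1))
frame = bg.copy()
frame[0, 0, 2] = 120.0            # V darkened (ratio 0.6) -> shadow
frame[1, 1] = (30.0, 100.0, 120.0)  # hue changed too -> real object, not shadow

m = shadow_mask(frame, bg)
```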

5.2. Comparison of the Proposed Method and the Traditional Optical Flow Method
To verify the effectiveness of the proposed method, we compare it with the traditional optical flow method. Figure 6 shows the detection results based on the traditional optical flow method: Figure 6(a) shows the original image, Figure 6(b) the grayscale image after gray processing, Figure 6(c) the optical flow vectors after optical flow calculation, Figure 6(d) the image after morphological filtering, and Figure 6(e) the detection result for the moving vehicle. Figures 7 and 8 show the detection results based on the proposed method, where Figure 7 shows the results for frame #40 and Figure 8 the results for frame #100. Figures 7(a) and 8(a) show the original images, and Figures 7(b) and 8(b) show the grayscale images after gray processing.

5.3 Tracking Results


In this section, we present the results of the vehicle tracking experiments based on the proposed
tracking algorithm. Experimental findings are illustrated in Figures 5 to 7, showcasing the tracking
outcomes in the 2nd, 10th, and 20th frames of the captured video sequence. Specifically, the red
points in Figures 5 to 7 represent the detection results obtained using the optical flow method,
while the blue points in Figures 5 and 6 depict the trajectory of the tracked moving vehicle.

Figure 6: Shadow detection results based on HSV color space. (a) Frame #12. (b) Frame #23.

The tracking experiments encompassed various scenarios, including normal traveling conditions
of vehicles on the highway, as well as abnormal conditions such as illumination changes, similar
neighboring objects, occlusions, and scale variations. These diverse scenarios aimed to validate
the effectiveness of the proposed tracking algorithm under different environmental conditions and
challenges.

Figure 7: Detection result of the 100th frame. (a) The original image. (b) Grayscale image. (c) Optical flow vector. (d) Morphological filtering. (e) Test results.

To ensure consistency and fairness in the comparison, the proposed tracking algorithm was
benchmarked against two typical target tracking algorithms: the Kalman filter algorithm and the
Camshift algorithm. Parameter settings for the particle filter, a key component of the proposed
algorithm, were configured with a particle number of 100. Other relevant parameters, including

system noise and observation noise, were selected to be as consistent as possible across all three
algorithms.

Given that the tracking results of the test videos were manually labeled, they serve as the ground
truth against which the performance of the tracking algorithms is evaluated. The comparison
focuses on metrics such as tracking accuracy, robustness to environmental variations, and
computational efficiency.
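For reference, a minimal constant-velocity Kalman filter of the kind used as a baseline here (not the proposed particle-filter tracker) can be sketched for a 1-D position track; the noise settings and measurements are fabricated:

```python
import numpy as np

# constant-velocity model: state = [position, velocity]
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # transition (one frame per step)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = np.eye(2) * 1e-4                     # process noise
R = np.array([[0.25]])                   # measurement noise

x = np.array([0.0, 0.0])                 # initial state
P = np.eye(2)                            # initial covariance

for z in [1.0, 2.1, 2.9, 4.2, 5.0]:      # noisy positions, true velocity ~1
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
# x[1] converges toward the true velocity of ~1 unit per frame
```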

5.4 Conclusion
The tracking experiments conducted in this study provide valuable insights into the performance
of the proposed tracking algorithm for vehicle tracking in challenging real-world scenarios. By
leveraging the optical flow method and a particle filter-based tracking framework, the proposed
algorithm demonstrates promising capabilities in accurately tracking moving vehicles despite
varying environmental conditions and challenges.

Comparison with traditional tracking algorithms such as the Kalman filter and Camshift algorithm
highlights the effectiveness and advantages of the proposed approach. With consistent parameter
settings and rigorous evaluation against ground truth labels, the proposed algorithm showcases
superior tracking accuracy, robustness, and efficiency, particularly in scenarios involving
illumination changes, occlusions, and scale variations.

Overall, the tracking results underscore the potential of the proposed algorithm to enhance the
capabilities of vehicle tracking systems in real-world applications, contributing to improved traffic
monitoring, surveillance, and safety on roadways. Continued research and refinement of the
algorithm can further enhance its performance and applicability in diverse traffic environments.

CHAPTER 6: CONCLUSION

This study introduces a novel approach for moving vehicle detection and tracking in complex
transportation environments, leveraging optical flow and immune particle filter algorithms. The
proposed method begins by utilizing the optical flow method to initially detect moving vehicles.
Subsequently, a shadow detection algorithm based on the HSV color space is employed to
accurately identify moving vehicles, overcoming the challenges posed by shadow interference.
Finally, the moving vehicles are robustly tracked using the proposed immune particle filter
algorithm.

Experimental evaluations conducted under complex traffic scenes with shadow interference
demonstrate the efficacy of the proposed method in mitigating the impact of shadows on moving
vehicle detection. The method achieves accurate detection and robust tracking of moving vehicles,
leading to higher accuracy compared to existing algorithms such as Camshift and Kalman filter.
However, it's worth noting that the proposed method is currently limited to daytime conditions
with varying illumination levels, both good and poor. Future research will explore extending the
method to nighttime conditions by considering the utilization of infrared images of moving
vehicles.

Furthermore, the study envisions transferring the experimental results to a cloud computing
platform via a wireless sensor network. This would enable policymakers to access and analyze the
data, enhancing vehicle management strategies. The availability of data used in this study is subject
to request from the corresponding author, facilitating transparency and reproducibility of the
findings.

In summary, the proposed method presents a promising approach to addressing the challenges of moving vehicle detection and tracking in complex transportation environments. With further research and integration into cloud-based platforms, it has the potential to significantly improve vehicle management.

REFERENCES

[1] Raad Ahmed Hadi, Ghazali Sulong, and Loay Edwar George, "Vehicle detection and tracking techniques: A concise review," Signal & Image Processing: An International Journal (SIPIJ), vol. 5, no. 1, February 2014.
[2] https://docs.opencv.org/master/d1/dc5/tutorial_background_subtraction.html
[3] https://users.fmrib.ox.ac.uk/~steve/review/review/node2.html
[4] Z. Wei et al., "Multilevel Framework to Detect and Handle Vehicle Occlusion," IEEE Transactions on Intelligent Transportation Systems, vol. 9, pp. 161–174, 2008.
[5] Nishu Singla, "Motion Detection Based on Frame Difference Method," International Journal of Information & Computation Technology, ISSN 0974-2239, vol. 4, no. 15 (2014), pp. 1559–.
[6] B. Suresh, K. Triveni, Y. V. Lakshmi, P. Saritha, K. Sriharsha, and D. Srinivas Reddy, "Determination of Moving Vehicle Speed using Image Processing," International Journal of Engineering Research & Technology (IJERT), ISSN 2278-0181, www.ijert.org, NCACSPV-2016 Conference Proceedings.
[7] Y. Ma, X. Song, X. Li, and J. Liu, "Research and implementation of real-time monitoring algorithm based on embedded traffic flow," LCD & Display, vol. 33, no. 9, pp. 787–792, 2018.
[8] T. Gao, Z.-g. Liu, S.-h. Yue, J. Zhang, J.-q. Mei, and W.-c. Gao, "Robust background subtraction in traffic video sequence," Journal of Central South University of Technology, vol. 17, no. 1, pp. 187–195, 2010.
[9] J. Xu, M. Fang, and H. Yang, Motion Detection and Tracking in Computer Vision, National Defense Industry Press, Beijing, China, 2012.
[10] D. Forsyth, "Object detection with discriminatively trained part-based models," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 32, no. 9, pp. 1627–1645, 2014.

