Smart Traffic Monitoring System
Agenda/Presentation Outline
Introduction:
The Smart Traffic Monitoring System is designed to enhance road safety and traffic
management through advanced image processing techniques. The project integrates
cutting-edge technologies such as OpenCV and YOLOv8 for efficient vehicle detection,
classification, counting, and speed estimation. This system aims to provide real-time
insights into traffic patterns, enabling effective traffic management strategies.
Image Acquisition:
The process begins with image acquisition through cameras strategically placed in the
target area. High-quality images are crucial for accurate vehicle detection and subsequent
analysis. The acquired images serve as input data for the YOLOv8 algorithm, ensuring a
comprehensive view of the traffic scenario.
Vehicle Detection:
The YOLOv8 algorithm excels in object detection and plays a pivotal role in identifying
vehicles within the acquired images. Real-time detection enables the system to respond
promptly to dynamic traffic conditions, ensuring a robust foundation for subsequent
analysis.
2
Vehicle Classification:
Following detection, the system classifies vehicles based on their size. The algorithm
distinguishes between various vehicle types, such as cars, trucks, and motorcycles. This
classification provides valuable information for traffic planners and law enforcement to
understand the composition of the traffic flow.
Vehicle Counting:
Accurate vehicle counting is crucial for traffic flow analysis. The system employs
advanced counting algorithms to tally the number of vehicles passing through the
monitored area. This data aids in assessing traffic density and identifying peak hours,
facilitating informed decision-making for traffic management.
Speed Estimation:
To enhance road safety, the system estimates the speed of vehicles exceeding specified
limits. By tracking the movement of vehicles over consecutive frames, the system
calculates the speed of each vehicle. This information is particularly useful for identifying
potential traffic violations and implementing appropriate measures.
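The frame-to-frame calculation described here can be sketched as follows; the frame rate and pixels-per-metre scale below are illustrative assumptions that would have to be calibrated for a real camera, not values taken from this project.

```python
def estimate_speed_kmh(p1, p2, frames_elapsed, fps=25.0, pixels_per_metre=8.0):
    """Estimate vehicle speed from two centroid positions.

    p1, p2: (x, y) centroids in pixels in two (possibly non-adjacent) frames.
    fps and pixels_per_metre are assumed calibration values.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    distance_m = (dx ** 2 + dy ** 2) ** 0.5 / pixels_per_metre
    time_s = frames_elapsed / fps
    return distance_m / time_s * 3.6  # m/s -> km/h

# A vehicle moving 80 px in 10 frames at 25 fps with 8 px/m:
# 10 m in 0.4 s = 25 m/s = 90 km/h
print(estimate_speed_kmh((0, 0), (80, 0), 10))
```

A real deployment would calibrate `pixels_per_metre` per lane, since perspective makes the scale vary across the image.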
The final output, including vehicle count, classification, and speed estimation, is presented
in a user-friendly interface. Visualizations such as graphs and charts provide a
comprehensive overview of the traffic scenario. Additionally, real-time alerts can be
generated for speeding vehicles, allowing for immediate intervention by law enforcement.
ABSTRACT
The system utilizes YOLOv8, a robust object detection algorithm, for accurate and real-time vehicle detection. Once vehicles are detected, they are classified based on their size,
facilitating a basic categorization of vehicle types. The system then counts the number of
vehicles present in the scene, providing valuable insights into traffic density. One of the
significant features of the proposed system is the ability to estimate the speed of vehicles
that exceed predefined speed limits. This is achieved by tracking the movement of
vehicles over consecutive frames, allowing for the calculation of their speed. To
implement this system, images are acquired through various sources such as cameras, and
then processed using OpenCV to prepare them for input into the YOLOv8 model. The
detected vehicles are subsequently classified, counted, and their speeds are estimated. The
final output is displayed, providing a comprehensive overview of the traffic scenario. This
Smart Traffic Monitoring System offers a versatile and efficient solution for traffic
management, providing real-time insights into vehicle presence, classification, density,
and speed violations. The integration of YOLOv8 ensures robust and accurate object
detection, making the system suitable for diverse traffic environments.
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF ABBREVIATIONS
1 INTRODUCTION
2 LITERATURE SURVEY
3 REQUIREMENT ANALYSIS
4.4 PROJECT MANAGEMENT PLAN
CONCLUSION
REFERENCES
LIST OF FIGURES
4.1 YOLO Architecture
4.2 Flow diagram for Vehicle Detection
LIST OF ABBREVIATIONS
CHAPTER 1
INTRODUCTION
Counting vehicles is a critical aspect of traffic management. By accurately tallying the number of vehicles, the system provides valuable data for traffic flow analysis, congestion prediction, and infrastructure planning. Estimating the speed of vehicles that exceed specified limits adds another layer of functionality: by utilizing the temporal information from consecutive frames, the system can calculate vehicle speeds, enabling the identification of potential traffic violators. Finally, the system displays the processed output in a user-friendly interface. This output may include real-time statistics, visualizations, and alerts, empowering traffic authorities to make informed decisions and respond promptly to dynamic traffic situations, reducing the number of accidents and thereby saving lives. Object detection is a fascinating field in computer vision, and it goes to a whole new level when we are dealing with video data.
OpenCV is a huge open-source library for computer vision that is used for object detection and image processing. OpenCV has plenty of pre-trained classifiers that can be used to identify objects such as eyes, faces, trees, number plates, cars, etc. We can use any of the pre-trained classifiers as per our requirements.
We know that a video is essentially a collection of image frames played in a continuous stream. A moving vehicle therefore changes coordinates, and hence its location, in each successive frame, and only the pixels representing the vehicle change across consecutive frames. The frame differencing method aims to detect these changes in the pixels and location of the moving vehicle. Using deep learning to detect vehicles in video streams, track them, and apply speed estimation to obtain the speed (in mph or km/h) of each moving vehicle can give good results.
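A minimal, dependency-free sketch of this frame differencing idea (a real pipeline would use `cv2.absdiff` and `cv2.threshold` on NumPy frames; the toy 2-D lists here just stand in for grayscale frames):

```python
def frame_difference(prev, curr, threshold=25):
    """Return a binary mask marking pixels that changed between two
    grayscale frames (given as 2-D lists of 0-255 intensities)."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

# A "vehicle" (bright block) moves one column to the right:
prev = [[0, 200, 0, 0],
        [0, 200, 0, 0]]
curr = [[0, 0, 200, 0],
        [0, 0, 200, 0]]
mask = frame_difference(prev, curr)
# Changed pixels appear both where the vehicle left and where it arrived:
# mask == [[0, 1, 1, 0], [0, 1, 1, 0]]
```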
Problem statement
The Smart Traffic Monitoring System utilizing YOLO (You Only Look Once) v5
addresses the growing challenges associated with conventional traffic monitoring
methods. Traditional systems often struggle to provide real-time and accurate
information due to limitations in object detection and tracking. The YOLO v5
framework, known for its efficiency in real-time object detection, offers a solution to
enhance traffic monitoring capabilities. The problem statement involves improving the
accuracy, speed, and reliability of traffic monitoring by implementing YOLO v5. This
system aims to detect and track vehicles, pedestrians, and other relevant objects in a
traffic environment, providing valuable data for traffic management, safety analysis, and
overall optimization of transportation systems. The integration of YOLO v5 into the
Smart Traffic Monitoring System seeks to address the limitations of existing
technologies and contribute to the development of more advanced and effective traffic
monitoring solutions.
CHAPTER 2
LITERATURE SURVEY
This report focuses on constructing a traffic monitoring system for estimating traffic
parameters, such as vehicle counting, classification, and speed estimation. Vision-based vehicle
detection and classification play an important role in real-time traffic management systems.
Real-time vehicle detection in video streams relies heavily on image processing techniques,
such as motion segmentation, edge detection, digital filtering, etc. The vehicle detection
techniques developed by researchers have been given in this literature survey.
Ali Tourani and Asadollah Shahbahrami [1] presented a vehicle counting method based on digital image processing algorithms to estimate important traffic parameters from video sequences. They introduced a new "linearity" feature in vehicle representation. This method differs from traditional methods and detects vehicles with good accuracy. However, the implementation uses C++, which lacks some of the ready-made libraries available in Python; there is no mention of speed estimation or nighttime evaluation; and no masking method is used for detecting shadows in the traffic video. These problems can be addressed with BackgroundSubtractorMOG2(), which detects shadows efficiently.
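As an aside, the idea behind MOG2's shadow handling can be illustrated with a toy, dependency-free sketch. The thresholds and the single static background frame are simplifications of my own; the real `cv2.createBackgroundSubtractorMOG2` maintains a per-pixel Gaussian mixture, though it does mark shadow pixels as 127 as shown here:

```python
def subtract_background(background, frame, fg_threshold=40, shadow_ratio=0.5):
    """Toy illustration of background subtraction with shadow suppression:
    pixels darker than the background, but proportional to it, are treated
    as shadow rather than foreground. Thresholds are made-up values."""
    mask = []
    for brow, frow in zip(background, frame):
        row = []
        for b, f in zip(brow, frow):
            if abs(f - b) <= fg_threshold:
                row.append(0)    # background
            elif shadow_ratio * b <= f < b:
                row.append(127)  # shadow (MOG2 also uses 127 for shadows)
            else:
                row.append(255)  # foreground object
        mask.append(row)
    return mask

background = [[100, 100, 100]]
frame      = [[100,  55, 230]]
# subtract_background(background, frame) -> [[0, 127, 255]]
```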
Amit Kumar, Mansour H. Assaf, Sunil R. Das, Satyendra N. Biswas, Emil M. Petriu, and Voicu Groza [2] presented an image processing based system for the classification of vehicles for parking purposes. The system only detects the type of vehicle being parked; there is no mention of vehicle counting or speed estimation. MATLAB is used for image processing, the technique differs from other traditional approaches, and the accuracy is comparatively low. Moreover, classification is limited to still images, whereas real-world traffic data is in video form, for which many more advanced object detection techniques exist.
S. Kul, S. Eken, and A. Sayar [3] presented distributed and collaborative real-time vehicle detection and classification over video streams, which detects vehicles and classifies them, using BackgroundSubtractorMOG2() to eliminate shadows to some extent. The algorithms used for classification are ANN, SVM, and AdaBoost; however, algorithms such as YOLO, R-CNN, CNN, and LSTM can detect and classify vehicles better, and there is no mention of estimating vehicle speed or counting vehicles.
Pandu Ranga, H.T.; Ravi Kiran, M.; Raja Shekar, S.; Naveen Kumar [4] proposed vehicle detection and classification based on a morphological technique, which can detect, classify, and count vehicles using image thresholding techniques without any masking method. The methodology is good at detecting vehicles of a specific size, as it uses a vertical edge detection method to identify them. However, there is no mention of speed detection, and the system is limited to images and cannot be extended further with this methodology.
Jyotsna Tripathi, Kavita Chaudhary, Akanksha Joshi, and Prof. J. B. Jawale [5] proposed automatic vehicle counting and classification. The algorithms used are adaptive background subtraction for background elimination and color-based classification for detecting and classifying vehicles. The model classifies vehicles only as small, medium, or heavy and does not identify the vehicle type. There is no mention of speed estimation or nighttime data, and the accuracy is limited because the methodology is somewhat outdated; far more efficient algorithms are now available.
Rashid, N.U. and Rahman, S.M.M. [6] proposed classification of vehicles from video using multiple time-spatial images, which detects and classifies vehicles and uses a background subtractor for detecting shadows. A new method, MVDL-based detection, was used for detection and classification, and this methodology was used to distinguish the correspondence of vehicles. However, there is no mention of speed detection or nighttime data, and the KNN algorithm is not efficient for classifying vehicles.
Zhao, R. and Wang, X. [7] proposed counting vehicles from semantic regions for intelligent transportation. Their methodology is based on tracking and clustering feature points: a traffic scene is segmented into local regions, which are then processed. By counting vehicles on each path separately, the challenging problem is broken into simpler ones. In extremely crowded scenes, however, vehicles are not perfectly located and detected; vehicles on adjacent lanes may be very close in space and move side by side at the same speed, which makes trajectory clustering difficult. This problem can be alleviated to some extent by first excluding trajectories outside a lane before clustering.
Prem Kumar Bhaskar and Suet-Peng Yong [9] proposed an image processing based vehicle detection and tracking method. They developed an algorithm for vehicle recognition and tracking using the Gaussian mixture model and blob detection methods. They differentiated the foreground from the background in frames by learning the background: the foreground detector detects the object, and a binary computation defines rectangular regions around every detected object. To detect moving objects correctly and remove noise, morphological operations are applied, which improve the model's performance and make detection straightforward and accurate.
[1] The methodology used is not very efficient, and the accuracy is not that good; Python and its integrated libraries make object detection and image processing faster and more efficient compared to C++. There is no mention of nighttime vehicles or speed estimation; the frame rate is limited to 29 frames per second, and the resolution is limited to 960x540.
[2] The methodology used is not as efficient as newer methodologies, and it is limited to images and cannot be extended further using the same approach. MATLAB is used for image processing, detection is limited to specific vehicles, and classification is restricted to certain features only.
[3] The process they developed is not applicable to real-time detection because of scalability and performance problems. Since video data is streaming data and very large in size, processing it is quite difficult, so real-time object detection functionality was not implemented, and no background noise is removed to improve the accuracy.
[4] The proposed algorithm is based on several techniques, including image differencing, thresholding, edge detection, and binary morphological processing. To reduce noise in the output image, multiple thresholding passes were performed with different threshold values. No masking methodology was used for detecting shadows.
[5] They used adaptive background subtraction and color-based classification for detecting and classifying vehicles. This single system provides outputs for multiple domains, such as tracking criminal vehicles by color and vehicle type. More work is needed to reduce occlusions. Vehicles are classified only as small, medium, or heavy, and the type of vehicle is not identified. Many more advanced algorithms for identifying and classifying objects give better performance than this methodology.
[6] A new method, MVDL-based detection, was used for detection and classification. The TSI methodology was used to distinguish the correspondence of vehicles. There is no mention of speed detection or nighttime data, and the KNN algorithm is not efficient for classifying vehicles.
[7] Their methodology breaks the problem into simpler problems by counting vehicles on each path separately. Modeling each path and its source and sink adds strong regularization on the motion and sizes of vehicles and can thus significantly improve the accuracy of vehicle counting. The approach has some limitations and can be improved in several aspects: a semantic region could be falsely detected if pedestrians frequently walk through a zebra crossing, though pedestrian paths and vehicle paths can be distinguished by simple human intervention or by the distributions of object sizes and speeds along the paths.
[8] The approach used to detect vehicles is quite good, and the authors also analyzed nighttime data, comparing night and daytime performance. The work can be extended by adding features such as speed estimation and more feature selection methods. The number of predicted classes can also be extended by using a pre-trained YOLO model.
[9] The methodology used is interesting and impressive. Noise elimination methods are applied to improve the model's accuracy, feature extraction is performed to obtain the necessary features, and vehicle recognition and tracking using the Gaussian mixture model and blob detection methods identify and classify the vehicles. However, there is no mention of speed estimation.
The main problems in the existing systems are the models' accuracy and the lack of advanced deep learning frameworks that can efficiently detect vehicles and classify them by class. Classification in a few models is limited to only a few classes, although the set of classes can be extended. The majority of researchers performed detection on images, whereas real-world data is in video format, so those systems cannot be used to monitor traffic flow in practice. Researchers have also not concentrated much on removing noise from the traffic video to improve detection accuracy. Algorithms such as KNN, SVM, and AdaBoost have been used for detecting and classifying objects, but better algorithms such as R-CNN, YOLO, and LSTM are available. Accuracy on nighttime data is poor, and many researchers have not concentrated on vehicle speed, which plays a vital role in identifying traffic rule violators. Many more advancements in technology now improve model performance. This project aims to solve some of these problems by using various deep learning and machine learning methodologies for monitoring traffic.
CHAPTER 3
REQUIREMENT ANALYSIS
This study of detecting the type of vehicles, calculating the vehicle count, and estimating their speed raised many questions. Collecting a proper traffic video was an essential task for the project. After obtaining the video, removing noise from it played a crucial role, as noise affects the model's accuracy; the BackgroundSubtractor() method can remove it. Resolving all these issues was a tough task. Once they were resolved, it was observed that the model was detecting the shadows of vehicles, which led to multiple detections at a time; this would later disturb the vehicle count and also affect the computed speed. To remove the shadows, the BackgroundSubtractorMOG2() method is used, which cleans up the noise and shadows well. Objects (vehicles) that were not well detected by BackgroundSubtractorMOG2() are handled using the YOLO (You Only Look Once) algorithm. YOLO is one of the most efficient algorithms in object detection: as the name suggests, it takes the whole input once and, in one forward pass, produces predictions with the help of a model pre-trained on the COCO dataset. The YOLO algorithm takes every frame of the video and classifies the detected objects into their respective classes. With the frames handled properly, adjusting the imaginary line for counting the vehicles was the next challenge. Determining this line correctly is important, as it is used both for estimating the speed of the vehicles and for counting them. Selecting an appropriate value for estimating the speed is the toughest task: speed is calculated from the time an object takes to move from one frame to another, and formulas are then applied to estimate the speed of that vehicle.
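The imaginary counting line described above can be implemented as a sign-change test on tracked centroids. The `tracks` dictionary below is a hypothetical tracker output used for illustration, not this project's actual data structure:

```python
def count_line_crossings(tracks, line_y):
    """Count vehicles whose centroid crosses a horizontal counting line.

    tracks: dict mapping a track id to its list of (x, y) centroids
    over time (assumed output of some upstream tracker).
    """
    count = 0
    for path in tracks.values():
        for (x0, y0), (x1, y1) in zip(path, path[1:]):
            if (y0 - line_y) * (y1 - line_y) < 0:  # sign change => crossed
                count += 1
                break  # count each track at most once
    return count

tracks = {
    1: [(50, 10), (52, 40), (55, 80)],  # crosses y=60
    2: [(90, 10), (91, 30)],            # stays above the line
}
# count_line_crossings(tracks, line_y=60) -> 1
```

Tracking ids (rather than raw detections) is what prevents the same vehicle from being counted in every frame it touches the line.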
3.2 SOFTWARE REQUIREMENTS SPECIFICATION DOCUMENT
3.2.1 INTRODUCTION:
There is a need for a smart traffic management system for controlling and understanding traffic flow. Vehicle detection using deep learning can be carried out with various algorithms; YOLO is one of them, and it is fast and accurate. The YOLO algorithm uses neural networks to provide real-time object detection. This smart traffic management system helps the traffic police identify the vehicle count and the speed of every individual vehicle, so that they can keep track of the number of vehicles and find how many cross the specified speed limit.
We are developing a deep learning model that can detect vehicles, count them, and estimate their speed. The functional requirements of the Smart Traffic Monitoring System are:
1. Collecting a proper video containing vehicles.
2. Passing the collected video as input to the model.
3. The developed model needs to detect, count, and classify the vehicles accordingly.
4. YOLO is an efficient algorithm to detect and classify vehicles with the help of pre-trained weights such as YOLOv5n and YOLOv5x.
5. Estimating the speed of every vehicle and saving the results.
Graphics Processing Units (GPUs) increase the available processing power and therefore the performance. With the YOLO algorithm, a pre-trained model can be used to improve performance further; YOLO is famous for its speed and accuracy, so using it gives good results.
3.2.5. Design Constraints
With the software currently available, systems can detect only a few things, such as counting and classifying vehicles. The Smart Traffic Monitoring System is a new system that adds a few more features and improved accuracy, with an easy-to-use interface.
3.2.6. Non-Functional Attributes
The system must be designed so that background tasks can continue while the user performs foreground tasks. The response time of the model should be low. The system must be stable and allow results to be reused in the future. The system must work in real time, allowing users to view it 24/7. The system shall be maintainable and extensible in the future.
Deployment of the smart traffic monitoring application should be done on the cloud, with GPU capabilities to process the real-time data for the traffic monitoring system.
3.2.7. Appendices
Software Requirements:
Installing Anaconda Individual Edition 64-bit (Python 3.8).
Use Jupyter Notebook in Anaconda Navigator for running the project's Python notebook.
The Python notebook also works in Google Colab and the Kaggle notebook editor.
GitHub repositories are used for cloning the repositories and saving the results.
Hardware Requirements:
License: Free use and redistribution under the terms of the EULA for Anaconda
Individual Edition.
Operating system: Windows 8 or newer, 64-bit macOS 10.13+, or Linux, including
Ubuntu, RedHat, CentOS 7+, and others.
System architecture: Windows: 64-bit x86, 32-bit x86; macOS: 64-bit x86; Linux: 64-bit x86, 64-bit aarch64 (AWS Graviton2 / arm64), 64-bit Power8/Power9, s390x (Linux on IBM Z & LinuxONE).
Minimum 5 GB disk space to download and install
CHAPTER 4
Goal
The main aim of the proposed system is to detect and classify objects (vehicles), count them, and estimate their speed from a traffic video.
a) Detection of vehicles:
Many algorithms, such as Faster R-CNN, work by detecting possible regions of interest using a Region Proposal Network and then performing recognition on those regions separately; YOLO performs all of its predictions with the help of a single fully connected layer, so in one forward pass we can make all our predictions. Methods that use Region Proposal Networks thus perform multiple iterations for the same image, while YOLO gets away with a single iteration.
b) Count of vehicles:
After detection, counting must also be done, as it is an essential task for traffic flow analysis. Some methodologies that do not use shadow elimination do not give good results, because they detect the shadow as another vehicle, which affects the model's accuracy. In YOLO, a step called non-maximum suppression eliminates duplicate detections of the same object.
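Non-maximum suppression, the duplicate-elimination step mentioned above, can be sketched as follows; the boxes and scores are made-up values, and real detectors apply this per class:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(detections, iou_threshold=0.5):
    """Keep the highest-scoring box and drop overlapping duplicates.
    detections: list of (box, score) pairs."""
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in detections:
        if all(iou(box, k) < iou_threshold for k, _ in kept):
            kept.append((box, score))
    return kept

dets = [((0, 0, 10, 10), 0.9),    # same vehicle, two overlapping boxes
        ((1, 1, 11, 11), 0.8),
        ((50, 50, 60, 60), 0.7)]  # a second vehicle
kept = non_max_suppression(dets)
# kept contains two boxes: one per vehicle
```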
c) Classification of vehicles:
With the help of a model pre-trained on the COCO dataset, the system can predict the class of each vehicle. If the classification is not accurate, creating a custom dataset and training on it is a crucial task to improve the model's accuracy.
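The size-based categorization described earlier can be sketched as below; the pixel-area thresholds and the mapping to class names are purely illustrative assumptions (a calibrated system, or YOLO's own COCO class labels, would replace them):

```python
def classify_by_size(box, small_max=2000, medium_max=8000):
    """Classify a detected vehicle by bounding-box area in pixels.
    box: (x1, y1, x2, y2). The area thresholds are made-up values that
    would need tuning for a real camera viewpoint."""
    area = (box[2] - box[0]) * (box[3] - box[1])
    if area <= small_max:
        return "motorcycle"
    if area <= medium_max:
        return "car"
    return "truck"

# classify_by_size((0, 0, 40, 40))   -> "motorcycle"  (area 1600)
# classify_by_size((0, 0, 80, 80))   -> "car"         (area 6400)
# classify_by_size((0, 0, 120, 120)) -> "truck"       (area 14400)
```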
Initially, collecting the traffic video is an essential task; this video is used for the detection of vehicles. The OpenCV computer vision library is used to load images and read the video into frames, and BackgroundSubtractorMOG2() is used to distinguish foreground objects from the background. First, import all necessary libraries into the notebook. Then apply frame differencing on every pair of consecutive frames, perform image dilation on the output image, find contours in the output image, and shortlist the contours appearing in the detection zone, saving them along with the final counter.
The YOLO algorithm can detect and identify the type of vehicle with the help of a model pre-trained on the COCO dataset. XML files are used for storing the bounding boxes so that they can be identified easily and accurately. The Object Detection API consumes these XML files and converts them into CSV format, which can further be converted into the record format required to train the model.
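The XML-to-CSV step can be sketched with the standard library alone. The Pascal VOC-style field names (`filename`, `object`, `bndbox`, `xmin`, ...) are an assumption about the annotation layout, since the exact schema is not given here:

```python
import csv
import io
import xml.etree.ElementTree as ET

# A hypothetical Pascal VOC-style annotation for one frame:
XML = """
<annotation>
  <filename>frame_0001.jpg</filename>
  <object>
    <name>car</name>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
</annotation>
"""

def xml_to_rows(xml_text):
    """Extract (filename, class, xmin, ymin, xmax, ymax) rows from one XML file."""
    root = ET.fromstring(xml_text)
    fname = root.findtext("filename")
    rows = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        rows.append([fname, obj.findtext("name"),
                     bb.findtext("xmin"), bb.findtext("ymin"),
                     bb.findtext("xmax"), bb.findtext("ymax")])
    return rows

# Write the rows as CSV (here to an in-memory buffer):
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["filename", "class", "xmin", "ymin", "xmax", "ymax"])
writer.writerows(xml_to_rows(XML))
```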
The speed of an object is a measure of how far it moves in a set time period. Therefore, the first step in calculating the speed of an object on screen is to calculate the distance the object moved. To estimate the speed of vehicles, one needs to view the video frame by frame and calculate the distance an object moves from one frame to the next. Speed estimation plays a vital role in traffic data management: knowing the speed of each vehicle, one can identify the number of vehicles crossing the speed limit, and saving the results to a file brings the project to its end.
The block diagram of the proposed system is shown in the above figure.
It starts with collecting a proper traffic video for the detection of vehicles. All the necessary packages are imported, such as OpenCV for detecting the objects (vehicles) and the YOLO (You Only Look Once) algorithm for classifying them. The dataset we use represents the data as XML files and the corresponding images; each XML file contains the information of all the bounding boxes. After importing the packages, the video is read frame by frame with OpenCV, and BackgroundSubtractorMOG2() is applied to separate moving vehicles from the background. Preprocessing of the frames is done to detect the vehicles, and the YOLO algorithm classifies the type of each vehicle. With proper estimations and calculations, we find the speed of the vehicles. After all the required tasks are done, the model is saved.
ANACONDA NAVIGATOR:
Anaconda Navigator is included in the Anaconda distribution and allows users to launch applications and manage conda packages, environments, and channels without using command-line commands. Navigator can search for packages, install them in an environment, run them, and update them. With the Anaconda command prompt, we can install all necessary packages.
GIT-HUB:
We use GitHub for cloning repositories such as YOLO, and for uploading and saving our files. GitHub also supports the development of open-source software, whose source code anyone can inspect, modify, and enhance. We might also use Kaggle for training the model, as it supports GPUs and therefore faster execution.
YOLO V8:
The SMART Traffic Monitoring System leverages the YOLO (You Only Look Once)
version 8 algorithm for efficient and accurate object detection in real-time video
streams. YOLO v8 represents a state-of-the-art object detection model renowned for its
speed and accuracy, making it particularly suitable for applications like traffic
monitoring. The YOLO v8 algorithm employs a single neural network to simultaneously
predict bounding boxes and class probabilities for multiple objects within an image or
video frame. This unified approach enhances speed by eliminating the need for multiple
passes through the neural network. In the context of the SMART Traffic Monitoring
System, YOLO v8 excels at detecting and tracking vehicles, pedestrians, and other
relevant objects on the road. Moreover, the system incorporates speed estimation
capabilities, allowing it to analyze the movement of detected objects and provide real-time speed information. This combination of YOLO v8's robust object detection and
speed estimation enhances the overall effectiveness of the traffic monitoring system,
enabling swift and accurate responses to dynamic traffic scenarios.
Future Enhancement
A SMART TRAFFIC MONITORING SYSTEM employing YOLOv8 for object
detection and incorporating speed estimation represents a cutting-edge solution for
enhancing traffic management and safety. YOLOv8 (You Only Look Once version 8), a
state-of-the-art real-time object detection algorithm, enables efficient identification and
tracking of various objects such as vehicles, pedestrians, and cyclists in live video
streams. The inclusion of speed estimation capabilities further elevates the system's
functionality by providing real-time insights into the velocity of moving objects within
the monitored area. This information can be pivotal for traffic authorities, allowing them
to promptly respond to potential incidents, optimize traffic flow, and enhance overall
road safety. Future enhancements to this system could explore advanced machine
learning techniques for predicting and adapting to traffic patterns, integrating additional
sensors for more comprehensive data collection, and incorporating intelligent algorithms
for predicting potential congestion areas. Additionally, the integration of a user-friendly
interface and connectivity with other smart city systems could streamline
communication and decision-making processes, ultimately contributing to a more
responsive and adaptive urban traffic infrastructure.
REFERENCES
[1] Dhingra, Swati, et al. "Internet of things-based fog and cloud computing
technology for smart traffic monitoring." Internet of Things 14 (2021): 100175.
[2] Lee, Wei-Hsun, and Chi-Yi Chiu. "Design and implementation of a smart traffic
signal control system for smart city applications." Sensors 20.2 (2020): 508.
[3] Ali Tourani and Asadollah Shahbahrami, "Vehicle Counting Method Based on Digital Image Processing Algorithms," 2nd International Conference on Pattern Recognition and Image Analysis (IPRIA 2015), March 11-12, 2015.
on Contemporary Computing and Informatics (IC3I)
[7] Jiang, Ding. "The construction of a smart city information system based on the
Internet of Things and cloud computing." Computer Communications 150 (2020):
158-166
[10] Ming Zhu, Mingqiang Wei, Qiufeng Lin, et al., "Real-time Traffic Monitoring and Analysis System Based on Deep Learning Techniques," IEEE Access, 2022. DOI: 10.1109/ACCESS.2022.3090410.
[11] A.H.S. Lai, G.S.K. Fung, and N.H.C. Yung, "Vehicle Type Classification from Visual-Based Dimension Estimation," Proceedings of the 2001 IEEE Intelligent Transportation Systems Conference, Oakland, CA, USA, pp. 201-206, 25-29 Aug. 2001.
[12] Bouvie, C.; Scharcanski, J.; Barcellos, P.; Lopes Escouto, F., "Tracking and counting vehicles in traffic video sequences using particle filtering," 2013 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), pp. 812-815, 6-9 May 2013.
[13] Mithun, N.C.; Rashid, N.U.; Rahman, S.M.M., "Detection and Classification of Vehicles from Video Using Multiple Time-Spatial Images," IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 3, pp. 1215-1225, Sept. 2012.
[14] S. Kul, S. Eken, and A. Sayar, "Distributed and collaborative real-time vehicle detection and classification over the video streams," Int. J. Adv. Robot. Syst., vol. 14, no. 4, Jul. 2017.
[15] Zhao, R.; Wang, X., "Counting Vehicles from Semantic Regions," IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 2, pp. 1016-1022, June 2013.