
TABLE OF CONTENTS

ABSTRACT

LIST OF TABLES

LIST OF FIGURES

LIST OF SYMBOLS AND ABBREVIATIONS

1. INTRODUCTION

1.1 Introduction

1.1.1 Real-time Traffic Management Using YOLOv8


1.1.2 YOLOv8

1.1.3 How is YOLOv8 helpful in traffic management?


1.1.4 Object Classification in Traffic Management Using
YOLOv8

1.2 Problem statement

1.3 Objectives

1.4 Strengths and weaknesses

2 LITERATURE SURVEY

3 SYSTEM ANALYSIS

3.1 Existing System

3.2 Proposed System

3.3 System Architecture

4 MODULE DESCRIPTION

4.1 Data Flow Diagram

4.1.1 Module Description

4.2 UML Diagrams

4.2.1 Use Case Diagram

4.2.2 Class Diagram


4.2.3 Sequence Diagram

4.2.4 Activity Diagram


5 SAMPLE CODING

5.1 Code

5.2 Output

6 CONCLUSION AND FUTURE SCOPE

6.1 Conclusion

6.2 Future Scope

7 REFERENCES
LIST OF FIGURES

Figure Number Title

Fig. 3.3 Architecture Diagram

Fig. 4.1 Data Flow Diagram

Fig. 4.2.1 Use Case Diagram

Fig. 4.2.2 Class Diagram

Fig. 4.2.3 Sequence Diagram

Fig. 4.2.4 Activity Diagram


ABSTRACT

In the face of growing urbanization and increasing vehicular congestion, intelligent traffic management systems have become essential for ensuring road safety and efficient traffic flow. This project presents a Real-Time Traffic Management System leveraging the capabilities of YOLOv8 (You Only Look Once, version 8), a state-of-the-art object detection model. The system is designed to detect, track, and classify vehicles in real time using live video feeds from traffic cameras. By utilizing YOLOv8’s enhanced accuracy and speed, the model can efficiently identify various types of vehicles including cars, buses, bikes, and trucks, even in complex traffic conditions. The extracted data is further processed to analyse traffic density, monitor violations such as signal jumping or wrong-way driving, and dynamically control traffic signals to reduce congestion. The proposed system aims to serve as a scalable and cost-effective solution for smart city traffic monitoring, enabling improved urban mobility and road safety through AI-powered automation.
CHAPTER-1

INTRODUCTION

1.1 Introduction

As urban areas continue to grow rapidly, cities face increasing challenges in managing traffic congestion, ensuring road safety, and minimizing travel time. Conventional traffic control systems often rely on fixed timers and manual monitoring, which are insufficient in responding to dynamic traffic conditions. The need for intelligent, automated solutions has led to the integration of advanced computer vision and machine learning techniques into traffic management systems. By harnessing the power of artificial intelligence and sensor technology, this system offers a comprehensive solution to monitor and analyse live traffic feeds, extracting the crucial information needed to manage traffic efficiently. With a user-friendly interface and precise data analysis, the Real-Time Traffic Management System built on YOLOv8 ensures a seamless experience for traffic police, the general public, and drivers, eliminating the frustration of long waits in traffic. By continuously monitoring vehicle density and movement, the system can dynamically adjust traffic signal timings to reduce congestion and prevent bottlenecks, especially during peak hours. As a pioneering solution in the realm of smart traffic management, this platform sets a new standard for convenience, sustainability, and efficiency. It demonstrates the practical implementation of YOLOv8 in a smart traffic surveillance environment and highlights its potential in enabling smarter, safer cities.
1.1.1 Real-time Traffic Management Using YOLOv8

Real-Time Traffic Management using YOLOv8 is an intelligent, automated system that leverages computer vision and deep learning, specifically the YOLOv8 (You Only Look Once, version 8) object detection model, to monitor, analyse, and manage vehicular traffic using live video feeds from surveillance cameras. The system detects and classifies various types of vehicles (such as cars, bikes, buses, and trucks) in real time, enabling dynamic traffic analysis, congestion control, and violation detection. By processing each video frame with YOLOv8, the system can track vehicle movements, evaluate traffic density, identify infractions (such as red-light violations or wrong-way driving), and support the adaptive control of traffic signals. This modern approach enhances traditional traffic systems by providing a scalable, efficient, and intelligent solution for urban traffic monitoring and management. By integrating advanced algorithms and sensors, the platform can quickly assess traffic density and update this information in real time.

1.1.2 YOLOv8

YOLOv8, short for "You Only Look Once, version 8", is the latest and most
advanced version of the YOLO family of object detection models developed by
Ultralytics. It is a state-of-the-art deep learning model designed for real-time
object detection, classification, and segmentation tasks. YOLOv8 improves
significantly upon its predecessors in terms of accuracy, speed, and flexibility,
making it suitable for a wide range of computer vision applications, including
surveillance, traffic monitoring, robotics, and industrial automation. Built using
the PyTorch framework, YOLOv8 supports both image and video processing
and is capable of identifying multiple objects in a single frame with high
precision. It introduces key improvements such as a new neural architecture,
anchor-free detection, better performance on small object detection, and support
for tasks like instance segmentation and pose estimation. The model is also
lightweight and optimized for edge devices, allowing for real-time processing
even on modest hardware. With an easy-to-use API, support for custom datasets,
and pre-trained models available, YOLOv8 has quickly become a popular
choice for developers and researchers aiming to build intelligent, vision-based
systems.
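To make this concrete, the following minimal sketch uses the Ultralytics API to load a pre-trained model and print the detected classes for a single image; the weights file and image path below are placeholder choices, not fixed requirements.

from ultralytics import YOLO

model = YOLO('yolov8n.pt')           # pre-trained YOLOv8 nano model (placeholder choice)
results = model('traffic.jpg')       # run detection on a single image (placeholder path)

for box in results[0].boxes:         # one Results object per input image
    cls_id = int(box.cls[0])         # class index into model.names
    conf = float(box.conf[0])        # confidence score for this detection
    print(model.names[cls_id], round(conf, 2))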

1.1.3 How is YOLOv8 helpful in traffic management?

YOLOv8 plays a critical role in enhancing modern traffic management systems by providing fast, accurate, and reliable real-time object detection capabilities.
In traffic scenarios, YOLOv8 can automatically detect and classify different
types of vehicles—such as cars, buses, trucks, and motorcycles—directly from
live video feeds captured by surveillance cameras. This enables continuous
monitoring of roads and intersections without the need for manual intervention.
Its high speed and accuracy allow for real-time analysis of vehicle movement
and density, which can be used to dynamically control traffic signals, reducing
congestion and improving flow. Moreover, YOLOv8 facilitates automatic
detection of traffic violations including red-light jumping, wrong-way driving,
and illegal parking, thereby aiding law enforcement agencies in maintaining
road discipline. The system is also capable of tracking incidents such as stalled
vehicles or accidents, enabling quicker emergency response. By leveraging
YOLOv8, traffic authorities and urban planners can gather valuable data for
long-term planning, infrastructure improvement, and smart city development.
Its flexibility, ease of integration with existing infrastructure, and low
computational requirements make YOLOv8 a cost-effective and scalable
solution for intelligent traffic management.

1.1.4 Object Classification in Traffic Management Using YOLOv8

YOLOv8 classifies objects in traffic management by analysing each video frame captured from traffic surveillance cameras using a deep convolutional
neural network (CNN). The model processes the entire image in a single pass
and predicts bounding boxes around objects along with their respective class
labels. In the context of traffic management, these objects typically include cars, trucks, buses, motorcycles, and bicycles, as well as pedestrians.
YOLOv8 utilizes a predefined set of classes—usually derived from a trained
dataset like COCO or a custom traffic-specific dataset—to identify and
categorize each object. Each detected object is assigned a confidence score that
indicates how certain the model is about its prediction. YOLOv8’s architecture
allows it to detect multiple objects in real time with high accuracy and speed,
even in complex or crowded scenes. This ability to distinguish between various
vehicle types and other road elements is crucial for tasks such as traffic density
estimation, lane management, rule violation detection, and dynamic signal
control. By continuously classifying and tracking these objects, YOLOv8
enables intelligent, automated traffic monitoring that supports the development
of smart city infrastructure.
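As an illustration of this classification step, the sketch below tallies detections per class for one frame. The vehicle class names assume a COCO-trained model, and the 0.4 confidence threshold is an illustrative assumption rather than a tuned value.

from collections import Counter

import cv2
from ultralytics import YOLO

VEHICLE_CLASSES = {'car', 'bus', 'truck', 'motorcycle', 'bicycle'}

model = YOLO('yolov8s.pt')
frame = cv2.imread('intersection.jpg')      # placeholder traffic-camera frame

counts = Counter()
for box in model(frame)[0].boxes:
    name = model.names[int(box.cls[0])]     # COCO class name for this detection
    if name in VEHICLE_CLASSES and float(box.conf[0]) > 0.4:  # assumed threshold
        counts[name] += 1

print(dict(counts))                         # e.g. {'car': 7, 'bus': 1}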

1.2 Problem Statement

In rapidly growing urban areas, the increasing number of vehicles has led to
severe traffic congestion, longer travel times, and higher chances of road
accidents. Traditional traffic management systems typically depend on fixed-
timer traffic signals and manual surveillance, which are inefficient and unable to
adapt to real-time traffic conditions. These limitations not only lead to poor
traffic flow but also contribute to increased fuel consumption, environmental
pollution, and frustration among road users. Moreover, the manual detection of
traffic violations and road incidents is time-consuming and resource-intensive
for traffic enforcement agencies.

A major shortcoming of conventional systems is their lack of intelligence and automation. They do not have the ability to monitor traffic dynamically, analyse road usage patterns, or react to changes such as sudden congestion, illegal
road usage patterns, or react to changes such as sudden congestion, illegal
parking, or traffic rule violations. There is a growing need for intelligent traffic
monitoring solutions that can operate in real-time and provide accurate insights
to traffic authorities for better decision-making and control.

This project aims to address these issues by developing a Real-Time Traffic Management System using YOLOv8, a cutting-edge deep learning model
specialized in object detection. YOLOv8 can process live video feeds from
roadside surveillance cameras to accurately detect and classify various objects
such as cars, bikes, trucks, buses, and pedestrians. The system can continuously
monitor traffic density, track vehicle movements, and automatically detect
violations like red-light jumping or wrong-way driving. This real-time data can
be used to optimize traffic signals, improve traffic flow, and alert authorities in
case of emergencies or unusual activities.

By integrating computer vision and artificial intelligence into traffic management, the proposed system offers a scalable, cost-effective, and highly
efficient solution for modern cities. It empowers traffic authorities with accurate
data, reduces human dependency, and contributes to the development of
smarter, safer, and more sustainable urban transport infrastructure.

1.3 Objectives

The primary objective of the proposed system is to design and implement a Real-Time Traffic Management System that utilizes YOLOv8, an advanced
deep learning model, to improve the efficiency, safety, and intelligence of urban
traffic monitoring. The system aims to automatically detect and classify
different types of vehicles—including cars, buses, trucks, and motorcycles—
from live surveillance video feeds with high accuracy and real-time
responsiveness. The following are specific objectives:

To monitor traffic flow and density in real-time: Continuously track vehicle movement and count vehicles to analyse traffic congestion and flow patterns at intersections and highways. Object tracking algorithms such as Deep SORT or ByteTrack help follow vehicles across frames to count them and analyse their movement over time, providing real-time traffic density analysis.

To detect and report traffic rule violations automatically: Identify violations such as red-light jumping, wrong-way driving, and lane changes to enhance road safety and aid law enforcement. Custom violation-detection logic checks vehicle behaviour against traffic rules using zone-based detection (virtual lines, regions of interest) to identify violations; a minimal sketch of this logic appears after this list.

To store and analyse traffic data for future planning: Using data logging and analytics tools (e.g., SQLite, Pandas, NumPy), detected and classified traffic data are stored in a database or in files and later processed with data analysis libraries to generate insights for city planners and traffic authorities.

To reduce manual monitoring efforts and human error: The integration of object
detection, tracking, and automated alert systems ensures minimal human
intervention, allowing traffic surveillance to run 24/7 with high reliability.
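As referenced in the second objective, the zone-based violation check can be sketched as follows. The stop-line coordinate, the direction of approach, and the signal-state input are all assumptions made for illustration.

STOP_LINE_Y = 400      # y-coordinate of the virtual stop line (placeholder)

last_cy = {}           # track_id -> centre y in the previous frame

def red_light_violation(track_id, cy, signal_state):
    """Flag a vehicle whose centre crossed the stop line while the light was red.

    Assumes vehicles approach the line from below (decreasing y); signal_state
    is supplied by the signal controller and is an assumption here.
    """
    prev = last_cy.get(track_id)
    last_cy[track_id] = cy
    crossed = prev is not None and prev > STOP_LINE_Y >= cy
    return crossed and signal_state == 'red'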

1.4 Strengths and Weaknesses

Proposed Strengths:

High Detection Accuracy: YOLOv8 is one of the most advanced object detection models available, capable of identifying and classifying multiple
vehicle types and traffic entities with a high degree of precision, even in
challenging environments (low light, occlusion, etc.).

Real-Time Processing: YOLOv8 is optimized for speed, allowing the system to detect and analyse traffic conditions from live video feeds in real-time. This is
crucial for applications like signal optimization, violation detection, and
emergency response.
Intelligent Automation: By using deep learning, the system can automatically
detect rule violations, monitor vehicle flow, and make decisions without human
intervention, reducing manual labour and human error.

Scalable and Flexible: The model supports different configurations (YOLOv8n, YOLOv8s, YOLOv8m, etc.) that can be deployed on a variety of hardware,
from high-end servers to low-cost edge devices, making it suitable for small-
scale intersections to large-scale smart city networks.

Easy Integration: YOLOv8 is built using PyTorch and has a user-friendly interface via the Ultralytics API, making it easy to integrate with existing CCTV
infrastructure, dashboards, or cloud services.

Data Logging and Analytics Ready: The system can store real-time data for later
analysis, helping in long-term infrastructure planning, traffic forecasting, and
policy-making.

Supports Multi-Task Learning: Besides object detection, YOLOv8 supports segmentation, classification, and pose estimation, which can be extended for
applications like pedestrian tracking, parking management, or accident
detection.

Existing Weaknesses:

Lack of Real-Time Adaptability: The existing systems operate on fixed signal timings or manually configured rules, which cannot adapt to sudden changes in
traffic flow, leading to unnecessary delays and congestion.

Limited Detection Capabilities: Traditional systems use inductive loops or basic motion sensors to count vehicles but cannot differentiate between types of
vehicles (cars, trucks, buses, bikes) or detect pedestrians and obstructions
accurately.
Manual Monitoring and High Human Dependency: CCTV footage is often
monitored manually, making it time-consuming and prone to human error. This
also limits the system's ability to monitor multiple intersections simultaneously
and effectively.

No Automatic Violation Detection: Existing systems generally do not have built-in mechanisms to automatically detect and report traffic rule violations
like red-light jumping, wrong-way driving, or illegal parking.

Lack of Data-Driven Insights: Most existing systems do not collect or analyse long-term traffic data for planning or decision-making, missing out on
opportunities to improve infrastructure or design smarter traffic policies.

Poor Performance in Adverse Conditions: Traditional detection systems often fail in poor weather conditions such as heavy rain, fog, or snow, reducing the
system's reliability in such environments.
CHAPTER 2

LITERATURE SURVEY

1. M. Zichichi, S. Ferretti, and G. D’angelo, ‘‘A framework based on distributed ledger technologies for data management and services in intelligent transportation systems,’’ IEEE Access, vol. 8, pp. 100384–100402, 2020.

M. Zichichi, S. Ferretti, and G. D’angelo delve into the realm of smart traffic management, observing that data are becoming the cornerstone of many businesses and entire system infrastructures. Intelligent Transportation Systems (ITS) are no different. The ability of intelligent vehicles and devices to acquire and share environmental measurements in the form of data is leading to the creation of smart services for the benefit of individuals. Their work consists of a system architecture to promote the development of ITS using distributed ledgers and related technologies. The platform offers an architecture based on Distributed Ledger Technologies (DLTs) to provide features such as immutability, traceability, and verifiability of data.

2. A. Khan, F. Ullah, Z. Kaleem, S. Ur Rahman, H. Anwar, and Y.-Z. Cho, ‘‘EVP-STC: Emergency vehicle priority and self-organising traffic control at intersections using Internet-of-Things platform,’’ IEEE Access, vol. 6, pp. 68242–68254, 2018.

Their study presents an Internet-of-Things-based platform for emergency vehicle priority and self-organised traffic control (EVP-STC) management at intersections. With the increasing number of automobiles, traffic jams in urban areas are becoming a critical issue. Traffic jams, especially those at intersections, not only increase delays for drivers but also increase fuel consumption and air pollution. The authors propose a novel platform and protocol called EVP-STC that contains three main systems. The first system, called the
intersection controller, is installed at traffic lights and collects emergency
vehicle position information and vehicle density data at each road segment
approaching an intersection. The second system is installed at each road
segment and contains force resistive sensors to detect vehicles. It transmits the
detected information to the intersection controller via ZigBee. The third system
is installed in emergency vehicles and provides GPS coordinates to the
intersection controller to avoid any waiting time for emergency vehicles at
intersections.

3. A. Pundir, S. Singh, M. Kumar, A. Bafila, and G. J. Saxena, ‘‘Cyber physical systems enabled transport networks in smart cities: Challenges and enabling technologies of the new mobility era,’’ IEEE Access, vol. 10, pp. 16350–16364, 2022.

A. Pundir, S. Singh, M. Kumar, A. Bafila, and G. J. Saxena presented a paper titled “Cyber physical systems enabled transport networks in smart cities: Challenges and enabling technologies of the new mobility era”. Their work describes how wireless communication technologies, smart sensors, enormously enhanced computational capabilities, and intelligent controls merge to form Cyber-Physical Systems (CPSs). The synergy achieved through this integration will considerably transform how humans interact with engineered systems in future smart cities. Such cities will leverage technologies to design, develop, and implement intelligent solutions to provide inclusive development, efficient community infrastructure, and a clean and sustainable environment. One of the domains likely to witness a paradigm shift in future smart cities is transport. The development of urban structures, functionality, and prosperity is intricately connected to how a city designs its mobility infrastructure.
4. Z. Khan, A. Koubaa, B. Benjdira, and W. Boulila, ‘‘A game theory approach for smart traffic management,’’ Comput. Electr. Eng., vol. 110, Sep. 2023, Art. no. 108825.

In their study, Z. Khan, A. Koubaa, B. Benjdira, and W. Boulila examine how the rapid increase in population and transportation resources presents numerous challenges, including traffic congestion and accidents. Their research emphasizes the importance of a smart traffic management (STM) framework that combines the Internet of Vehicles (IoV) and game theory to manage traffic loads at road intersections. The intersection is modelled as a non-cooperative game, where traffic flow for each route is determined by the Nash Equilibrium (NE) to ensure that no individual can improve their performance by changing their strategy.

5. U. S. Shanthamallu, A. Spanias, C. Tepedelenlioglu, and M. Stanley, ‘‘A brief survey of machine learning methods and their sensor and IoT applications,’’ in Proc. 8th Int. Conf. Inf., Intell., Syst. Appl. (IISA), Aug. 2017, pp. 1–8.

U. S. Shanthamallu, A. Spanias, C. Tepedelenlioglu, and M. Stanley conducted a brief survey of the algorithms and concepts used in machine learning and its applications, which are used in the development of smart traffic management. The survey covers various learning modalities, including supervised and unsupervised methods and deep learning paradigms, as well as the applications of machine learning algorithms in fields including pattern recognition, sensor networks, anomaly detection, and the Internet of Things (IoT), along with some of the associated software tools.

6. J. P. P. Cunha, C. Cardeira, and R. Melício, ‘‘Traffic lights control prototype using wireless technologies,’’ in Proc. Int. Conf. Renew. Energies Power Quality, Madrid, Spain.

J. P. P. Cunha, C. Cardeira, and R. Melício conducted a study on a traffic light control system based on wireless communication technologies. Traffic density is increasing at an alarming rate in developing countries, which calls for intelligent dynamic traffic light control systems to replace the conventional manual and time-based ones. The approach followed is based on a secure wireless sensor network that feeds real-time data to the intelligent traffic light controller. A physical prototype was implemented for experimental validation. The prototype showed robustness against local failures and unforeseen cases, showing that the communication between modules maintains an acceptable packet-received ratio.
CHAPTER 3

SYSTEM ANALYSIS

3.1 Existing System


The existing traffic management system is a combination of manual and semi-
automated methods used to regulate the movement of vehicles and pedestrians
on roads. Traditional traffic control relies on infrastructure such as traffic
signals, signboards, lane markings, and traffic personnel to manage flow and
ensure road safety. Traffic police are often deployed at busy intersections to
manually guide vehicles during peak hours, emergencies, or special events.
While these methods have served their purpose for decades, they are often
unable to cope with the increasing number of vehicles in modern cities.

In many urban areas, traffic signals are operated on fixed timers that do not
adjust based on real-time traffic conditions. This can lead to unnecessary delays
and longer queues at intersections. Some regions also use basic sensors like
inductive loops embedded in roads, which can detect the presence of vehicles
and trigger signal changes. Surveillance cameras are installed in major cities for
monitoring traffic violations, but in many places, these are used more for
enforcement than for real-time traffic optimization.

Data collected from cameras and sensors is usually sent to centralized traffic
control rooms, where traffic operators can monitor conditions and respond to
incidents like accidents or breakdowns. However, these responses are often
reactive rather than proactive, as the system lacks the ability to adapt to real-
time changes automatically. In smaller cities or rural areas, the dependence is
still primarily on manual control, with limited use of technology.

Overall, while the existing system provides basic traffic control and
enforcement, it is often inefficient, especially in handling sudden changes in
traffic volume or unexpected events. The lack of real-time adaptability, limited
data integration, and manual dependency highlight the need for more advanced
solutions such as intelligent traffic management systems that use real-time data,
automation, and smart technologies to improve efficiency, safety, and
sustainability in urban mobility.

3.2 Proposed System

The proposed traffic management system introduces an advanced, AI-powered approach that uses YOLOv8 (You Only Look Once version 8) for real-time
vehicle detection and traffic monitoring. YOLOv8 is a state-of-the-art object
detection algorithm known for its high accuracy and fast processing speed,
making it ideal for traffic-related applications. The system utilizes live video
feeds from traffic cameras, which are processed using YOLOv8 to detect and
classify vehicles such as cars, buses, trucks, and two-wheelers in real time.

Once the vehicles are detected, the system counts them, tracks their movement,
and analyses traffic flow at intersections or along major roads. This real-time
data helps determine vehicle density in different lanes, detect congestion, and
identify violations such as signal jumping or wrong-way driving. Based on this
information, the system can automatically adjust traffic signal timings using an
adaptive algorithm to reduce waiting time and improve traffic flow efficiency.
In cases of abnormal behaviour or accidents, the system can instantly alert
authorities for a quicker response.

In addition to traffic signal optimization, the YOLOv8-based system can also be integrated with a central control dashboard that displays live analytics, heat
maps, and video monitoring. This enables traffic operators to make data-driven
decisions and manage traffic more effectively, especially during peak hours or
emergencies. The system can also store historical data for future analysis,
helping city planners identify traffic patterns and plan infrastructure
improvements accordingly.
Overall, this proposed system offers a modern, intelligent solution to traffic
management by combining computer vision, machine learning, and real-time
automation. It reduces dependency on manual monitoring and outdated fixed-
time signals, resulting in a more responsive, scalable, and efficient traffic
control system. By leveraging the power of YOLOv8, the system enhances road
safety, reduces congestion, and contributes to the development of smarter and
more sustainable cities.

3.3 System Architecture

Fig. 3.3 System Architecture

The proposed traffic management system leverages the power of YOLOv8 (You
Only Look Once version 8), a state-of-the-art deep learning algorithm designed
for real-time object detection. By using live video feeds from traffic
surveillance cameras, YOLOv8 detects and classifies vehicles such as cars,
trucks, motorcycles, and buses on the road. The system analyses traffic flow at
intersections and along major roads by detecting vehicle presence, tracking
movement, and monitoring congestion. YOLOv8’s ability to process images
quickly and accurately makes it ideal for handling large volumes of traffic data
in real time. This section outlines the key components and functionalities of the proposed system, highlighting its innovative features and its working.

Video Input Layer (Traffic Surveillance Cameras): The system begins with real-
time video feed captured from high-resolution CCTV cameras installed at traffic
signals, junctions, and highways. These cameras act as the primary source of
data, continuously recording vehicle movement from multiple angles.

Preprocessing Module: The raw video stream is passed through a preprocessing module, where each video frame is resized, formatted, and optimized for fast
processing. This stage may also include noise reduction, brightness adjustment,
and frame extraction, preparing the input for YOLOv8.
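A minimal sketch of such a preprocessing step with OpenCV is shown below; the target size and filter parameters are illustrative assumptions, not calibrated values.

import cv2

def preprocess(frame, size=(1020, 500)):
    frame = cv2.resize(frame, size)                         # fixed detector input size
    frame = cv2.GaussianBlur(frame, (3, 3), 0)              # mild noise reduction
    frame = cv2.convertScaleAbs(frame, alpha=1.1, beta=10)  # slight brightness/contrast lift
    return frame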

YOLOv8-Based Object Detection Engine: YOLOv8, a powerful deep learning model, is the core of the system. It processes each frame and performs real-time
object detection, identifying and classifying vehicles such as cars, buses,
motorcycles, trucks, and bicycles. YOLOv8 not only detects the objects but also
tracks their positions and movements across consecutive frames using unique
object IDs.

Postprocessing: Post-processing is a crucial phase in the real-time traffic management system that follows the object detection stage performed by
YOLOv8. While YOLOv8 provides raw detection outputs such as bounding
boxes, class labels, and confidence scores for each detected object (e.g.,
vehicles), post-processing refines this data to extract meaningful insights for
decision-making. The main objective of post-processing is to convert low-level
detection data into high-level traffic information such as vehicle counts, traffic
density, lane-wise distribution, and violation detection.
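One possible form of this refinement is lane-wise counting from bounding-box centres, sketched below; the lane x-ranges are placeholders that would be calibrated per camera.

# Lane boundaries as x-ranges on the resized frame (illustrative placeholders)
LANES = {'lane1': (0, 340), 'lane2': (340, 680), 'lane3': (680, 1020)}

def lane_counts(boxes_xyxy):
    """boxes_xyxy: iterable of (x1, y1, x2, y2) detections for one frame."""
    counts = {lane: 0 for lane in LANES}
    for x1, y1, x2, y2 in boxes_xyxy:
        cx = (x1 + x2) / 2                  # horizontal centre of the box
        for lane, (lo, hi) in LANES.items():
            if lo <= cx < hi:
                counts[lane] += 1
                break
    return counts                           # e.g. {'lane1': 3, 'lane2': 6, 'lane3': 1}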

Object Tracker: Object tracking is a fundamental component of real-time traffic management systems that enhances the capabilities of object detection
performed by YOLOv8. Object tracking assigns a unique ID to each detected
vehicle and maintains that identity as the vehicle moves across multiple frames.
This process is essential for monitoring traffic flow, calculating vehicle speed
and trajectory, and detecting traffic violations. To implement object tracking
alongside YOLOv8, additional tracking algorithms such as Deep SORT (Simple
Online and Realtime Tracking) or ByteTrack are commonly used.
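Ultralytics also exposes these trackers directly through its tracking API; a minimal sketch follows, with the video path as a placeholder.

import cv2
from ultralytics import YOLO

model = YOLO('yolov8s.pt')
cap = cv2.VideoCapture('traffic.mp4')       # placeholder video source

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # persist=True carries tracker state across frames; bytetrack.yaml selects ByteTrack
    results = model.track(frame, persist=True, tracker='bytetrack.yaml')
    if results[0].boxes.id is not None:
        print(results[0].boxes.id.int().tolist())   # stable per-vehicle IDs

cap.release()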

Data Analytics & Traffic Analysis Module: The output from YOLOv8
(bounding boxes and class labels) is sent to the analytics module. Here, vehicle
count, lane-wise density, traffic flow direction, and rule violations are
calculated. This module can also analyse time-based trends, such as rush hours
or off-peak times, and detect anomalies like stalled vehicles or accidents.

Traffic Analyser: Based on the real-time vehicle count and traffic density, the
system dynamically adjusts traffic signal durations. For example, if YOLOv8
detects a higher vehicle count on one road compared to another, the green signal
time can be extended to clear congestion. This decision-making logic is
programmed using threshold-based or AI-based rules.
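A sketch of such threshold-based logic is given below; all timing constants are illustrative assumptions, not calibrated values.

BASE_GREEN = 20     # seconds of green by default (assumed)
MAX_GREEN = 60      # upper bound on any green phase (assumed)
PER_VEHICLE = 2     # extra seconds per queued vehicle above the threshold (assumed)
THRESHOLD = 5       # queue length considered "normal" (assumed)

def green_duration(vehicle_count):
    """Extend green time in proportion to queue length, capped at MAX_GREEN."""
    extra = max(0, vehicle_count - THRESHOLD) * PER_VEHICLE
    return min(BASE_GREEN + extra, MAX_GREEN)

# Example: green_duration(12) -> 20 + 7 * 2 = 34 seconds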

Cloud Storage & Traffic Logs DB: All detected events, traffic data, and video
footage are logged and stored in a cloud or local server for future reference.
This historical data can be used for training machine learning models,
improving accuracy, or assisting in long-term city planning.
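A minimal sketch of such logging using SQLite from the Python standard library is shown below; the database file, table, and column names are illustrative.

import sqlite3
import time

conn = sqlite3.connect('traffic_logs.db')   # placeholder database file
conn.execute("""CREATE TABLE IF NOT EXISTS traffic_log (
                    ts REAL, junction TEXT, vehicle_class TEXT, count INTEGER)""")

def log_counts(junction, counts):
    """Insert one row per vehicle class detected at this junction."""
    rows = [(time.time(), junction, cls, n) for cls, n in counts.items()]
    conn.executemany("INSERT INTO traffic_log VALUES (?, ?, ?, ?)", rows)
    conn.commit()

log_counts('J-12', {'car': 8, 'bus': 2})    # example usage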

Notification System: After detecting and analysing traffic data, the system must
effectively communicate important information to various stakeholders,
including traffic authorities, emergency services, and the general public. The
notification system serves this purpose by generating real-time alerts, reports,
and updates based on the insights gathered from object detection and tracking
modules.

Monitoring Dashboard: A central control panel or dashboard visualizes all the real-time analytics, live video streams, and system performance metrics. Traffic
authorities can monitor each junction, override automated settings if needed,
and receive alerts for unusual conditions like wrong-way driving or emergency
vehicles.
CHAPTER 4

MODULE DESCRIPTION

4.1 DATA FLOW DIAGRAM

Fig 4.1 Data Flow Diagram

4.1.1 MODULE DESCRIPTION


Data Acquisition Module: Captures real-time video feeds from traffic cameras,
drones, or vehicle-mounted cameras.
Components:
IP camera feeds / CCTV input
Video stream reader (e.g., OpenCV, FFmpeg)
Frame grabber for real-time processing
Object Detection Module (YOLOv8): Detects traffic-related objects such as
cars, bikes, buses, trucks, pedestrians, traffic lights, etc.
Tools:
YOLOv8 (via Ultralytics implementation in Python)
Pre-trained or custom-trained model on traffic datasets (e.g., MS COCO,
BDD100K)
Vehicle Tracking Module: Tracks detected objects across frames for motion
analysis and congestion detection.
Common Algorithms:
DeepSORT, ByteTrack, or other MOT algorithms
Integration with YOLOv8 detections
Traffic Analysis Module: Analyses object trajectories and densities to infer
traffic patterns and behaviours.
Functionalities:
Vehicle counting
Lane occupancy estimation
Speed estimation
Violation detection (e.g., red light running, illegal U-turns)
Violation Detection Module: Identifies traffic incidents such as accidents,
stalled vehicles, or pedestrian crossings in unsafe zones.
Methods:
Abnormal motion detection
Sudden stop detection
Collision proximity alerts
Notification Module: Sends alerts or traffic updates to authorities or display systems (a minimal sketch follows at the end of this module list).
Channels:
Email/SMS/Push notifications
API integration with traffic control systems
Dashboards or mobile apps
Visualization & Dashboard Module: Displays real-time feeds, object
overlays, stats, and alerts.
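As referenced in the Notification Module above, a minimal sketch of an e-mail alert channel using Python's standard library follows; the SMTP host, credentials, and addresses are placeholders.

import smtplib
from email.message import EmailMessage

def send_alert(subject, body):
    msg = EmailMessage()
    msg['Subject'] = subject
    msg['From'] = 'tms-alerts@example.com'          # placeholder sender
    msg['To'] = 'traffic-control@example.com'       # placeholder recipient
    msg.set_content(body)
    with smtplib.SMTP('smtp.example.com', 587) as server:   # placeholder host
        server.starttls()                           # upgrade to an encrypted session
        server.login('user', 'password')            # placeholder credentials
        server.send_message(msg)

# send_alert('Wrong-way vehicle', 'Junction J-12, lane 2, 14:32:05')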
4.2 UML DIAGRAMS
UML stands for Unified Modeling Language. UML is a standardized
general-purpose modeling language in the field of object-oriented software
engineering. The standard is managed, and was created by, the Object
Management Group.
The goal is for UML to become a common language for creating models
of object-oriented computer software. In its current form, UML is comprised of two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems, as well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have
proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.

GOALS:

The Primary goals in the design of the UML are as follows:


1. Provide users a ready-to-use, expressive visual modeling Language so
that they can develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core
concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations,
frameworks, patterns and components.
7. Integrate best practices.

4.2.1 USE CASE DIAGRAM


A use case diagram in the Unified Modeling Language (UML) is a type
of behavioral diagram defined by and created from a Use-case analysis. Its
purpose is to present a graphical overview of the functionality provided by a
system in terms of actors, their goals (represented as use cases), and any
dependencies between those use cases. The main purpose of a use case diagram
is to show what system functions are performed for which actor. Roles of the
actors in the system can be depicted.

Purpose of the Diagram

The primary purpose of this use case diagram is to show at a high level what the
system does from the perspective of an external observer (the user). It doesn't
delve into how the system operates internally or how the functionalities are
implemented; instead, it focuses on what the system does in terms of user
interactions. This diagram is a valuable tool for understanding the requirements
and functionalities of the system without getting into the technical details of
how those requirements are fulfilled.
Fig 4.2.1 Use case Diagram

The use case diagram for the “Real-time Traffic Management System” visually
represents the interactions between the Developer and User (Actors) and the
system’s key functionalities. Here's a breakdown of its components and what
each part signifies:

Actor
User: A user is anyone who interacts with or benefits from the system's
functionalities. Depending on the use case, there are several types of users with
different levels of access and responsibilities.
In this system, the user can be:
Traffic Control Operator (Primary User): An active user who interacts with the system daily: monitoring live camera feeds and detection overlays, watching real-time object detection (YOLOv8) results, viewing vehicle counts, traffic density, and lane occupancy, and responding to system-detected incidents (accidents, stalled vehicles).
Enforcement Officer / Police: An enforcement-focused user who uses the system for violation detection, receiving real-time alerts for traffic violations such as:
o Red-light running
o Illegal U-turns
o Speeding
System Administrator: A technical user responsible for backend maintenance. The system administrator deploys and manages YOLOv8 model updates, maintains detection pipelines and hardware integration, manages logs, security, storage, and uptime, and troubleshoots model performance issues (false positives, misses).
General Public (Passive User): Indirect user who benefits from the system.
The General Public User sees live traffic updates on public display boards or
apps and adjusts travel plans based on congestion reports.

Use cases:
Vehicle Detection and Classification: Detects vehicles in live video (cars, bikes, buses, trucks, etc.), identifying what types of vehicles are on the road and in which lanes.
Vehicle Counting: Each detected vehicle is counted as it crosses a virtual line or region, which is used to measure flow rates and detect congestion.
Illegal Turn / Lane Change Detection: The vehicle path is tracked and compared against road rules to identify vehicles making illegal maneuvers.
Pedestrian Detection in Unsafe Zones: YOLOv8 identifies pedestrians crossing roads unsafely, raising an alert for jaywalking and improving pedestrian safety.
4.2.2 CLASS DIAGRAM
In software engineering, a class diagram in the Unified Modeling Language
(UML) is a type of static structure diagram that describes the structure of a
system by showing the system's classes, their attributes, operations (or
methods), and the relationships among the classes. It shows which class contains which information.

Fig 4.2.2 Class Diagram


4.2.3 SEQUENCE DIAGRAM

A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in
what order. It is a construct of a Message Sequence Chart. Sequence diagrams
are sometimes called event diagrams, event scenarios, and timing diagrams.

Fig 4.2.3 Sequence Diagram


4.2.4 ACTIVITY DIAGRAM

Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the
Unified Modeling Language, activity diagrams can be used to describe the
business and operational step-by-step workflows of components in a system. An
activity diagram shows the overall flow of control.
Fig 4.2.4 Activity Diagram

CHAPTER 5
SAMPLE CODING
5.1 Code
test.py
import cv2
from ultralytics import YOLO

model = YOLO('yolov8s.pt')   # pre-trained YOLOv8-small model

# Mouse callback: prints the cursor position over the window, useful for
# finding the pixel coordinates of virtual lines on the frame.
def RGB(event, x, y, flags, param):
    if event == cv2.EVENT_MOUSEMOVE:
        print([x, y])

cv2.namedWindow('RGB')
cv2.setMouseCallback('RGB', RGB)

cap = cv2.VideoCapture('vidyolov8.mp4')

# Load COCO class names (one per line)
with open('coco.txt', 'r') as my_file:
    class_list = my_file.read().split('\n')
print(class_list)

count = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    count += 1
    if count % 3 != 0:           # process every third frame to save time
        continue
    frame = cv2.resize(frame, (1020, 500))

    # show=True also opens Ultralytics' own annotated preview window
    results = model.predict(frame, show=True)
    cv2.imshow('RGB', frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc exits
        break

cap.release()
cv2.destroyAllWindows()

tracker.py
import math

class Tracker:
    def __init__(self):
        # Latest centre position of each tracked object: {id: (cx, cy)}
        self.center_points = {}
        # Running ID counter; each new object increases the count by one
        self.id_count = 0

    def update(self, objects_rect):
        # Bounding boxes with assigned IDs for this frame
        objects_bbs_ids = []

        for rect in objects_rect:
            # main.py passes corner coordinates (x1, y1, x2, y2)
            x1, y1, x2, y2 = rect
            cx = (x1 + x2) // 2
            cy = (y1 + y2) // 2

            # Find out if that object was detected already
            same_object_detected = False
            for id, pt in self.center_points.items():
                dist = math.hypot(cx - pt[0], cy - pt[1])
                if dist < 35:    # same object if the centre moved less than 35 px
                    self.center_points[id] = (cx, cy)
                    objects_bbs_ids.append([x1, y1, x2, y2, id])
                    same_object_detected = True
                    break

            # New object detected: assign it the next ID
            if same_object_detected is False:
                self.center_points[self.id_count] = (cx, cy)
                objects_bbs_ids.append([x1, y1, x2, y2, self.id_count])
                self.id_count += 1

        # Clean the dictionary to remove IDs that are no longer in use
        new_center_points = {}
        for obj_bb_id in objects_bbs_ids:
            _, _, _, _, object_id = obj_bb_id
            new_center_points[object_id] = self.center_points[object_id]

        self.center_points = new_center_points.copy()
        return objects_bbs_ids

main.py
import cv2
import pandas as pd
from ultralytics import YOLO
from tracker import Tracker
import time

model = YOLO('yolov8s.pt')

# Mouse callback: prints the cursor position over the window, which is useful
# for choosing the y-coordinates of the two virtual counting lines.
def RGB(event, x, y, flags, param):
    if event == cv2.EVENT_MOUSEMOVE:
        print([x, y])

cv2.namedWindow('RGB')
cv2.setMouseCallback('RGB', RGB)

cap = cv2.VideoCapture('veh2.mp4')

# Load COCO class names (one per line)
with open('coco.txt', 'r') as my_file:
    class_list = my_file.read().split('\n')

count = 0
tracker = Tracker()

cy1 = 322      # y-coordinate of virtual line 1
cy2 = 368      # y-coordinate of virtual line 2
offset = 6     # tolerance in pixels when testing a line crossing

vh_down = {}   # id -> time of crossing line 1 (downward direction)
counter = []   # ids already counted going down

vh_up = {}     # id -> time of crossing line 2 (upward direction)
counter1 = []  # ids already counted going up

while True:
    ret, frame = cap.read()
    if not ret:
        break
    count += 1
    if count % 3 != 0:    # process every third frame to save time
        continue
    frame = cv2.resize(frame, (1020, 500))

    results = model.predict(frame)
    a = results[0].boxes.data
    px = pd.DataFrame(a).astype('float')

    detections = []
    for index, row in px.iterrows():
        x1 = int(row[0])
        y1 = int(row[1])
        x2 = int(row[2])
        y2 = int(row[3])
        d = int(row[5])
        c = class_list[d]
        if 'car' in c:
            detections.append([x1, y1, x2, y2])

    bbox_id = tracker.update(detections)
    for bbox in bbox_id:
        x3, y3, x4, y4, id = bbox
        cx = int(x3 + x4) // 2
        cy = int(y3 + y4) // 2
        cv2.rectangle(frame, (x3, y3), (x4, y4), (0, 0, 255), 2)

        ##### going down #####
        if cy1 < (cy + offset) and cy1 > (cy - offset):
            vh_down[id] = time.time()            # vehicle touched line 1
        if id in vh_down:
            if cy2 < (cy + offset) and cy2 > (cy - offset):
                elapsed_time = time.time() - vh_down[id]
                if counter.count(id) == 0:
                    counter.append(id)
                    distance = 10                # metres between the two lines
                    a_speed_ms = distance / elapsed_time
                    a_speed_kh = a_speed_ms * 3.6
                    cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
                    cv2.putText(frame, str(id), (cx, cy),
                                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
                    cv2.putText(frame, str(int(a_speed_kh)) + 'km/h', (x4, y4),
                                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)

        ##### going up #####
        if cy2 < (cy + offset) and cy2 > (cy - offset):
            vh_up[id] = time.time()              # vehicle touched line 2
        if id in vh_up:
            if cy1 < (cy + offset) and cy1 > (cy - offset):
                elapsed_time = time.time() - vh_up[id]
                if counter1.count(id) == 0:
                    counter1.append(id)
                    distance1 = 10               # metres between the two lines
                    a_speed_ms1 = distance1 / elapsed_time
                    a_speed_kh1 = a_speed_ms1 * 3.6
                    cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
                    cv2.putText(frame, str(id), (cx, cy),
                                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
                    cv2.putText(frame, str(int(a_speed_kh1)) + 'km/h', (x4, y4),
                                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)

    # Draw the two virtual lines and the per-direction counts
    cv2.line(frame, (267, cy1), (829, cy1), (255, 255, 255), 1)
    cv2.putText(frame, '1line', (274, 318),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    cv2.line(frame, (167, cy2), (932, cy2), (255, 255, 255), 1)
    cv2.putText(frame, '2line', (181, 363),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    cv2.putText(frame, 'goingdown:' + str(len(counter)), (60, 40),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    cv2.putText(frame, 'goingup:' + str(len(counter1)), (60, 130),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)

    cv2.imshow('RGB', frame)
    if cv2.waitKey(1) & 0xFF == 27:    # Esc exits
        break

cap.release()
cv2.destroyAllWindows()
speed.py
import cv2
import pandas as pd
from ultralytics import YOLO
from tracker import Tracker
import time

model = YOLO('yolov8s.pt')

# Mouse callback: prints the cursor position over the window
def RGB(event, x, y, flags, param):
    if event == cv2.EVENT_MOUSEMOVE:
        print([x, y])

cv2.namedWindow('RGB')
cv2.setMouseCallback('RGB', RGB)

cap = cv2.VideoCapture('veh2.mp4')

with open('coco.txt', 'r') as my_file:
    class_list = my_file.read().split('\n')

count = 0
tracker = Tracker()

cy1 = 322      # y-coordinate of virtual line L1
cy2 = 368      # y-coordinate of virtual line L2
offset = 6     # crossing tolerance in pixels

vh_down = {}   # id -> time of crossing L1 (downward)
counter = []   # ids counted going down

vh_up = {}     # id -> time of crossing L2 (upward)
counter1 = []  # ids counted going up

while True:
    ret, frame = cap.read()
    if not ret:
        break
    count += 1
    if count % 3 != 0:    # process every third frame
        continue
    frame = cv2.resize(frame, (1020, 500))

    results = model.predict(frame)
    a = results[0].boxes.data
    px = pd.DataFrame(a).astype('float')

    detections = []
    for index, row in px.iterrows():
        x1 = int(row[0])
        y1 = int(row[1])
        x2 = int(row[2])
        y2 = int(row[3])
        d = int(row[5])
        c = class_list[d]
        if 'car' in c:
            detections.append([x1, y1, x2, y2])

    bbox_id = tracker.update(detections)
    for bbox in bbox_id:
        x3, y3, x4, y4, id = bbox
        cx = int(x3 + x4) // 2
        cy = int(y3 + y4) // 2
        cv2.rectangle(frame, (x3, y3), (x4, y4), (0, 0, 255), 2)

        ##### going down #####
        if cy1 < (cy + offset) and cy1 > (cy - offset):
            vh_down[id] = time.time()
        if id in vh_down:
            if cy2 < (cy + offset) and cy2 > (cy - offset):
                elapsed_time = time.time() - vh_down[id]
                if counter.count(id) == 0:
                    counter.append(id)
                    distance = 10    # metres between the two lines
                    a_speed_ms = distance / elapsed_time
                    a_speed_kh = a_speed_ms * 3.6
                    cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
                    cv2.putText(frame, str(id), (x3, y3),
                                cv2.FONT_HERSHEY_COMPLEX, 0.6, (255, 255, 255), 1)
                    cv2.putText(frame, str(int(a_speed_kh)) + 'Km/h', (x4, y4),
                                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)

        ##### going up #####
        if cy2 < (cy + offset) and cy2 > (cy - offset):
            vh_up[id] = time.time()
        if id in vh_up:
            if cy1 < (cy + offset) and cy1 > (cy - offset):
                elapsed1_time = time.time() - vh_up[id]
                if counter1.count(id) == 0:
                    counter1.append(id)
                    distance1 = 10   # metres between the two lines
                    a_speed_ms1 = distance1 / elapsed1_time
                    a_speed_kh1 = a_speed_ms1 * 3.6
                    cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
                    cv2.putText(frame, str(id), (x3, y3),
                                cv2.FONT_HERSHEY_COMPLEX, 0.6, (255, 255, 255), 1)
                    cv2.putText(frame, str(int(a_speed_kh1)) + 'Km/h', (x4, y4),
                                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)

    # Draw the virtual lines and the per-direction counts
    cv2.line(frame, (274, cy1), (814, cy1), (255, 255, 255), 1)
    cv2.putText(frame, 'L1', (277, 320),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    cv2.line(frame, (177, cy2), (927, cy2), (255, 255, 255), 1)
    cv2.putText(frame, 'L2', (182, 367),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    cv2.putText(frame, 'goingdown:-' + str(len(counter)), (60, 90),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    cv2.putText(frame, 'goingup:-' + str(len(counter1)), (60, 130),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)

    cv2.imshow('RGB', frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()

tracker.py (revised version with trajectory history)
from collections import defaultdict
import math

class Tracker:
    def __init__(self, max_distance=35, max_history=30):
        # {id: [(cx, cy), (cx, cy), ...]} -- recent centre positions per object
        self.track_history = defaultdict(list)
        self.id_count = 0
        self.max_distance = max_distance   # max centre shift (px) to match an object
        self.max_history = max_history     # trajectory points retained per object

    def update(self, objects_rect):
        objects_bbs_ids = []

        for rect in objects_rect:
            x1, y1, x2, y2 = rect
            cx = (x1 + x2) // 2
            cy = (y1 + y2) // 2

            same_object_detected = False
            for obj_id, track in self.track_history.items():
                prev_center = track[-1]
                dist = math.hypot(cx - prev_center[0], cy - prev_center[1])
                if dist < self.max_distance:
                    self.track_history[obj_id].append((cx, cy))
                    if len(self.track_history[obj_id]) > self.max_history:
                        # Retain only the last 'max_history' points
                        self.track_history[obj_id].pop(0)
                    objects_bbs_ids.append([x1, y1, x2, y2, obj_id])
                    same_object_detected = True
                    break

            if not same_object_detected:
                self.track_history[self.id_count].append((cx, cy))
                objects_bbs_ids.append([x1, y1, x2, y2, self.id_count])
                self.id_count += 1

        # Keep history only for IDs matched in this frame
        new_track_history = defaultdict(list)
        for obj_bb_id in objects_bbs_ids:
            _, _, _, _, object_id = obj_bb_id
            new_track_history[object_id] = self.track_history[object_id]

        self.track_history = new_track_history.copy()
        return objects_bbs_ids
5.2 OUTPUT
CHAPTER 6

CONCLUSION AND FUTURE SCOPE

6.1 Conclusion

The implementation of real-time traffic management using YOLOv8 demonstrates the effectiveness of modern deep learning techniques in enhancing
urban mobility and safety. YOLOv8, with its superior speed and accuracy,
proved to be well-suited for detecting and tracking vehicles, pedestrians, and
other traffic-related entities in real-time scenarios. By leveraging live video
feeds and integrating the detection pipeline into a traffic monitoring system, the
project successfully showcased how intelligent surveillance can support better
traffic flow analysis, congestion management, and even rule enforcement.

The results indicate significant potential for scalability and deployment in smart
city infrastructures. However, the system's performance can be further improved
by integrating additional features such as license plate recognition, predictive
modeling using historical data, and adaptive traffic light control.

In conclusion, this project marks a meaningful step toward intelligent transportation systems, demonstrating that computer vision solutions like
YOLOv8 can play a critical role in modernizing traffic management and paving
the way for safer and more efficient urban environments.

6.2 FUTURE SCOPE

The success of this project opens up numerous avenues for future development
and real-world applications of real-time traffic management using YOLOv8.
Some promising directions include:

1. Integration with Smart Traffic Lights: By connecting detection outputs with traffic signal systems, traffic lights can be dynamically controlled
based on real-time congestion levels, reducing wait times and improving
flow efficiency.

2. Vehicle Type and License Plate Recognition: Extending YOLOv8 with modules for identifying vehicle types (car, truck, bike, etc.) and
recognizing license plates can help in traffic law enforcement, tolling
systems, and stolen vehicle tracking.

3. Pedestrian Safety Enhancements: Adding pedestrian movement tracking can assist in preventing accidents and enhancing safety at
crosswalks and intersections.

4. Accident Detection and Alert Systems: The model can be trained to detect anomalies such as crashes or stalled
vehicles, triggering immediate alerts to emergency services or control
centers.

5. Traffic Violation Detection: Integrating rule-based logic with object detection can help identify violations such as wrong-way driving, red-light jumping, or illegal parking.
6. Scalability Across Cities: The system can be adapted for deployment
across different cities by retraining YOLOv8 with region-specific
datasets, making it highly scalable and versatile.

7. Edge Computing and IoT Integration: Deploying the model on edge devices like smart cameras or embedded systems (e.g., Jetson Nano,
Raspberry Pi) can minimize latency and enable decentralized traffic
management.

8. Data Analytics and Prediction: Combining YOLOv8 with time-series models can enable prediction of traffic patterns, which can be useful for
urban planning, event management, and congestion forecasting.

9. Environmental Monitoring: Traffic data can be cross-referenced with environmental sensors to assess pollution levels and design eco-friendly
urban transport strategies.
CHAPTER 7

REFERENCES

[1] M. Zichichi, S. Ferretti, and G. D’angelo, ‘‘A framework based on distributed ledger technologies for data management and services in intelligent transportation systems,’’ IEEE Access, vol. 8, pp. 100384–100402, 2020.
[2] A. Pundir, S. Singh, M. Kumar, A. Bafila, and G. J. Saxena, ‘‘Cyber physical
systems enabled transport networks in smart cities: Challenges and enabling
technologies of the new mobility era,’’ IEEE Access, vol. 10, pp. 16350–16364,
2022.
[3] Z. Khan, A. Koubaa, B. Benjdira, and W. Boulila, ‘‘A game theory approach
for smart traffic management,’’ Comput. Electr. Eng., vol. 110, Sep. 2023, Art.
no. 108825.
[4] U. S. Shanthamallu, A. Spanias, C. Tepedelenlioglu, and M. Stanley, ‘‘A
brief survey of machine learning methods and their sensor and IoT
applications,’’ in Proc. 8th Int. Conf. Inf., Intell., Syst. Appl. (IISA), Aug. 2017,
pp. 1–8.
[5] N. Choudhury, R. Matam, M. Mukherjee, and L. Shu, ‘‘Beacon synchronization and duty-cycling in IEEE 802.15.4 cluster-tree networks: A review,’’ IEEE Internet Things J., vol. 5, no. 3, pp. 1765–1788, Jun. 2018.
[6] N. Choudhury, R. Matam, M. Mukherjee, J. Lloret, and E. Kalaimannan,
‘‘NCHR: A nonthreshold-based cluster-head rotation scheme for IEEE 802.15.4
cluster-tree networks,’’ IEEE Internet Things J., vol. 8, no. 1, pp. 168–178, Jan.
2021.
[7] A. B. M. Adam, M. S. A. Muthanna, A. Muthanna, T. N. Nguyen, and A. A.
A. El-Latif, ‘‘Toward smart traffic management with 3D placement optimization
in UAV-assisted NOMA IIoT networks,’’ IEEE Trans. Intell. Transp. Syst., vol.
24, no. 12, pp. 15448–15458, Dec. 2023.
[8] I. García-Magariño, M. M. Nasralla, and S. Nazir, ‘‘Real-time analysis of
online sources for supporting business intelligence illustrated with Bitcoin
investments and IoT smart-meter sensors in smart cities,’’ Electronics, vol. 9,
no. 7, p. 1101, Jul. 2020.
[9] K. Cao, Y. Liu, G. Meng, and Q. Sun, ‘‘An overview on edge computing
research,’’ IEEE Access, vol. 8, pp. 85714–85728, 2020.
[10] X. Xiong, K. Zheng, L. Lei, and L. Hou, ‘‘Resource allocation based on
deep reinforcement learning in IoT edge computing,’’ IEEE J. Sel. Areas
Commun., vol. 38, no. 6, pp. 1133–1146, Jun. 2020.
[11] C. Chakraborty, K. Mishra, S. K. Majhi, and H. Bhuyan, ‘‘Intelligent
latency-aware tasks prioritization and offloading strategy in distributed fog-
cloud of things,’’ IEEE Trans. Ind. Informat., vol. 19, no. 2, pp. 2099–2106,
Feb. 2023.
[12] A. Khan, F. Ullah, Z. Kaleem, S. Ur Rahman, H. Anwar, and Y.-Z. Cho,
‘‘EVP-STC: Emergency vehicle priority and self-organising traffic control at
intersections using Internet-of-Things platform,’’ IEEE Access, vol. 6, pp.
68242–68254, 2018.
[13] I. García-Magariño, M. M. Nasralla, and J. Lloret, ‘‘A repository of method
fragments for agent-oriented development of learning-based edge computing
systems,’’ IEEE Netw., vol. 35, no. 1, pp. 156–162, Jan. 2021.
[14] S. Kaleem, A. Sohail, M. U. Tariq, and M. Asim, ‘‘An improved big data
analytics architecture using federated learning for IoT-enabled urban intelligent
transportation systems,’’ Sustainability, vol. 15, no. 21, p. 15333, Oct. 2023.
[15] Y. K. Teoh, S. S. Gill, and A. K. Parlikad, ‘‘IoT and fog-computing-based
predictive maintenance model for effective asset management in Industry 4.0
using machine learning,’’ IEEE Internet Things J., vol. 10, no. 3, pp. 2087–
2094, Feb. 2023.
