Real Time Traffic
ABSTRACT
LIST OF TABLES
LIST OF FIGURES
1. INTRODUCTION
1.1 Introduction
1.3 Objectives
2 LITERATURE SURVEY
3 SYSTEM ANALYSIS
4 MODULE DESCRIPTION
5.1 Code
5.2 Output
6.1 Conclusion
7 REFERENCES
INTRODUCTION
1.1 Introduction
1.1.2 YOLOv8
YOLOv8, short for "You Only Look Once, version 8", is the latest and most
advanced version of the YOLO family of object detection models developed by
Ultralytics. It is a state-of-the-art deep learning model designed for real-time
object detection, classification, and segmentation tasks. YOLOv8 improves
significantly upon its predecessors in terms of accuracy, speed, and flexibility,
making it suitable for a wide range of computer vision applications, including
surveillance, traffic monitoring, robotics, and industrial automation. Built using
the PyTorch framework, YOLOv8 supports both image and video processing
and is capable of identifying multiple objects in a single frame with high
precision. It introduces key improvements such as a new neural architecture,
anchor-free detection, better performance on small object detection, and support
for tasks like instance segmentation and pose estimation. The model is also
lightweight and optimized for edge devices, allowing for real-time processing
even on modest hardware. With an easy-to-use API, support for custom datasets,
and pre-trained models available, YOLOv8 has quickly become a popular
choice for developers and researchers aiming to build intelligent, vision-based
systems.
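The inference flow described above can be sketched with a small helper that turns raw detector output into labelled boxes. The helper name `summarize_detections` and the file names in the commented usage are illustrative assumptions, not part of the Ultralytics API; the commented lines require the ultralytics package and downloaded weights.

```python
def summarize_detections(rows, class_names):
    """Turn raw [x1, y1, x2, y2, conf, cls] rows into (label, box) pairs."""
    out = []
    for x1, y1, x2, y2, conf, cls in rows:
        out.append((class_names[int(cls)],
                    (int(x1), int(y1), int(x2), int(y2))))
    return out

# Typical usage with a pre-trained model (needs the ultralytics package):
#   from ultralytics import YOLO
#   model = YOLO("yolov8s.pt")                 # weights download on first use
#   results = model.predict("road_scene.jpg")  # any traffic image
#   print(summarize_detections(results[0].boxes.data.tolist(),
#                              results[0].names))
```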
In rapidly growing urban areas, the increasing number of vehicles has led to
severe traffic congestion, longer travel times, and higher chances of road
accidents. Traditional traffic management systems typically depend on fixed-
timer traffic signals and manual surveillance, which are inefficient and unable to
adapt to real-time traffic conditions. These limitations not only lead to poor
traffic flow but also contribute to increased fuel consumption, environmental
pollution, and frustration among road users. Moreover, the manual detection of
traffic violations and road incidents is time-consuming and resource-intensive
for traffic enforcement agencies.
1.3 Objectives
To store and analyse traffic data for future planning: Using data logging and analytics tools (e.g., SQLite, Pandas, NumPy), detected and classified traffic data are stored in a database or in files and later processed with data-analysis libraries to generate insights for city planners and traffic authorities.
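The analysis step described above can be sketched as follows. This is a minimal illustration, not the project's actual pipeline; the column names ("timestamp", "lane", "vehicles") and the helper name `peak_hours` are assumptions made for the example.

```python
import pandas as pd

def peak_hours(df, top=2):
    """Return the `top` busiest hours by average vehicle count."""
    hourly = (df.assign(hour=df["timestamp"].dt.hour)
                .groupby("hour")["vehicles"].mean())
    return hourly.sort_values(ascending=False).head(top)

if __name__ == "__main__":
    # Synthetic counts standing in for logged detection data
    df = pd.DataFrame({
        "timestamp": pd.to_datetime(["2024-01-01 08:05", "2024-01-01 08:35",
                                     "2024-01-01 13:10", "2024-01-01 18:20"]),
        "lane": [1, 2, 1, 1],
        "vehicles": [40, 55, 12, 48],
    })
    print(peak_hours(df))
```

The same grouped aggregation generalises to per-lane or per-day summaries for planning reports.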
To reduce manual monitoring efforts and human error: The integration of object
detection, tracking, and automated alert systems ensures minimal human
intervention, allowing traffic surveillance to run 24/7 with high reliability.
Proposed Strengths:
Data Logging and Analytics Ready: The system can store real-time data for later
analysis, helping in long-term infrastructure planning, traffic forecasting, and
policy-making.
LITERATURE REVIEW
M. Zichichi, S. Ferretti, and G. D'Angelo explore smart traffic management systems in their study, observing that data are becoming the cornerstone of many businesses and entire system infrastructures. Intelligent Transportation Systems (ITS) are no different. The ability of intelligent vehicles and devices to acquire and share environmental measurements in the form of data is leading to the creation of smart services for the benefit of individuals. Their work proposes a system architecture to promote the development of ITS using distributed ledgers and related technologies. The platform is built on Distributed Ledger Technologies (DLTs) and offers features such as immutability, traceability, and verifiability of data.
In their study, Khan and Koubaa examine how the rapid increase in population and transportation resources presents numerous challenges, including traffic congestion and accidents. Their research proposes a smart traffic management (STM) framework that combines the Internet of Vehicles (IoV) and game theory to manage traffic loads at road intersections. The intersection is modelled as a non-cooperative game, where the traffic flow for each route is determined by the Nash Equilibrium (NE), ensuring that no individual can improve their outcome by unilaterally changing their strategy.
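To make the equilibrium idea concrete, here is a toy two-route example (illustrative only, not taken from the cited paper): flow splits between two routes until both have equal travel cost, so no driver gains by switching. The cost functions and the helper name `equilibrium_split` are assumptions for this sketch.

```python
def equilibrium_split(l1, l2, total=1.0, iters=60):
    """Bisect for the flow f on route 1 such that l1(f) == l2(total - f)."""
    lo, hi = 0.0, total
    for _ in range(iters):
        mid = (lo + hi) / 2
        if l1(mid) < l2(total - mid):
            lo = mid   # route 1 is still cheaper, so more flow shifts onto it
        else:
            hi = mid
    return (lo + hi) / 2

if __name__ == "__main__":
    # Route 1 cost grows as f; route 2 has a fixed cost plus a gentler slope
    f1 = equilibrium_split(lambda f: f, lambda f: 0.5 + f / 2)
    print(round(f1, 3))  # about 0.667: two thirds of the flow takes route 1
```

At f1 = 2/3, both routes cost 2/3, so the split is stable, which is exactly the no-unilateral-improvement property the NE formalises.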
6. J. P. P. Cunha, C. Cardeira, and R. Melício, "Traffic lights control prototype using wireless technologies," in Proc. Int. Conf. Renew. Energies Power Quality, Madrid, Spain.
SYSTEM ANALYSIS
In many urban areas, traffic signals are operated on fixed timers that do not
adjust based on real-time traffic conditions. This can lead to unnecessary delays
and longer queues at intersections. Some regions also use basic sensors like
inductive loops embedded in roads, which can detect the presence of vehicles
and trigger signal changes. Surveillance cameras are installed in major cities for
monitoring traffic violations, but in many places, these are used more for
enforcement than for real-time traffic optimization.
Data collected from cameras and sensors is usually sent to centralized traffic
control rooms, where traffic operators can monitor conditions and respond to
incidents like accidents or breakdowns. However, these responses are often
reactive rather than proactive, as the system lacks the ability to adapt to real-
time changes automatically. In smaller cities or rural areas, the dependence is
still primarily on manual control, with limited use of technology.
Overall, while the existing system provides basic traffic control and
enforcement, it is often inefficient, especially in handling sudden changes in
traffic volume or unexpected events. The lack of real-time adaptability, limited
data integration, and manual dependency highlight the need for more advanced
solutions such as intelligent traffic management systems that use real-time data,
automation, and smart technologies to improve efficiency, safety, and
sustainability in urban mobility.
Once the vehicles are detected, the system counts them, tracks their movement,
and analyses traffic flow at intersections or along major roads. This real-time
data helps determine vehicle density in different lanes, detect congestion, and
identify violations such as signal jumping or wrong-way driving. Based on this
information, the system can automatically adjust traffic signal timings using an
adaptive algorithm to reduce waiting time and improve traffic flow efficiency.
In cases of abnormal behaviour or accidents, the system can instantly alert
authorities for a quicker response.
The proposed traffic management system leverages the power of YOLOv8 (You
Only Look Once version 8), a state-of-the-art deep learning algorithm designed
for real-time object detection. By using live video feeds from traffic
surveillance cameras, YOLOv8 detects and classifies vehicles such as cars,
trucks, motorcycles, and buses on the road. The system analyses traffic flow at
intersections and along major roads by detecting vehicle presence, tracking
movement, and monitoring congestion. YOLOv8’s ability to process images
quickly and accurately makes it ideal for handling large volumes of traffic data
in real time. This section outlines the key components and functionalities of the proposed system, highlighting its innovative features and how it works.
Video Input Layer (Traffic Surveillance Cameras): The system begins with real-
time video feed captured from high-resolution CCTV cameras installed at traffic
signals, junctions, and highways. These cameras act as the primary source of
data, continuously recording vehicle movement from multiple angles.
Data Analytics & Traffic Analysis Module: The output from YOLOv8
(bounding boxes and class labels) is sent to the analytics module. Here, vehicle
count, lane-wise density, traffic flow direction, and rule violations are
calculated. This module can also analyse time-based trends, such as rush hours
or off-peak times, and detect anomalies like stalled vehicles or accidents.
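The lane-wise density calculation described above can be sketched as below. This is a simplified illustration under assumed pixel boundaries; the function name `lane_counts` and the lane edges are not from the actual system.

```python
def lane_counts(boxes, lane_edges):
    """Count detections per lane.

    boxes: [x1, y1, x2, y2] bounding boxes from the detector.
    lane_edges: sorted x-coordinates marking lane boundaries in the frame.
    """
    counts = {i: 0 for i in range(len(lane_edges) - 1)}
    for x1, y1, x2, y2 in boxes:
        cx = (x1 + x2) / 2  # assign each box by its horizontal center
        for i in range(len(lane_edges) - 1):
            if lane_edges[i] <= cx < lane_edges[i + 1]:
                counts[i] += 1
                break
    return counts

if __name__ == "__main__":
    boxes = [[100, 200, 180, 260], [420, 210, 500, 280], [460, 300, 540, 380]]
    print(lane_counts(boxes, [0, 340, 680, 1020]))  # {0: 1, 1: 2, 2: 0}
```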
Traffic Analyser: Based on the real-time vehicle count and traffic density, the
system dynamically adjusts traffic signal durations. For example, if YOLOv8
detects a higher vehicle count on one road compared to another, the green signal
time can be extended to clear congestion. This decision-making logic is
programmed using threshold-based or AI-based rules.
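A threshold-based rule of the kind mentioned above can be as simple as the sketch below. The base time, per-vehicle extension, and cap are assumed values for illustration, not tuned parameters from the system.

```python
def green_time(vehicle_count, base=20, per_vehicle=2, max_green=90):
    """Extend a base green phase by 2 s per queued vehicle, capped at 90 s."""
    return min(base + per_vehicle * vehicle_count, max_green)

if __name__ == "__main__":
    for n in (0, 10, 50):
        print(n, green_time(n))  # 20 s, 40 s, and the 90 s cap
```

In practice the rule would compare counts across approaches and allocate the cycle accordingly, but the thresholded mapping from density to duration is the core idea.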
Cloud Storage & Traffic Logs DB: All detected events, traffic data, and video
footage are logged and stored in a cloud or local server for future reference.
This historical data can be used for training machine learning models,
improving accuracy, or assisting in long-term city planning.
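A minimal sketch of such a logging layer using Python's standard sqlite3 module is shown below; the table schema and function names are assumptions for illustration, not the system's actual storage design.

```python
import sqlite3

def open_log(path=":memory:"):
    """Open (or create) the traffic log database."""
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS traffic_log (
        ts TEXT, lane INTEGER, vehicle_class TEXT, speed_kmh REAL)""")
    return con

def log_event(con, ts, lane, vehicle_class, speed_kmh):
    """Append one detected-vehicle record."""
    con.execute("INSERT INTO traffic_log VALUES (?, ?, ?, ?)",
                (ts, lane, vehicle_class, speed_kmh))
    con.commit()

if __name__ == "__main__":
    con = open_log()  # in-memory DB for the demo; use a file path in practice
    log_event(con, "2024-01-01T08:05:00", 1, "car", 42.5)
    n, = con.execute("SELECT COUNT(*) FROM traffic_log").fetchone()
    print(n)  # 1
```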
Notification System: After detecting and analysing traffic data, the system must
effectively communicate important information to various stakeholders,
including traffic authorities, emergency services, and the general public. The
notification system serves this purpose by generating real-time alerts, reports,
and updates based on the insights gathered from object detection and tracking
modules.
MODULES DESCRIPTION
GOALS:
The primary purpose of this use case diagram is to show at a high level what the
system does from the perspective of an external observer (the user). It doesn't
delve into how the system operates internally or how the functionalities are
implemented; instead, it focuses on what the system does in terms of user
interactions. This diagram is a valuable tool for understanding the requirements
and functionalities of the system without getting into the technical details of
how those requirements are fulfilled.
Fig 4.2.1 Use case Diagram
The use case diagram for the “Real-time Traffic Management System” visually
represents the interactions between the Developer and User (Actors) and the
system’s key functionalities. Here's a breakdown of its components and what
each part signifies:
Actor
User: A user is anyone who interacts with or benefits from the system's
functionalities. Depending on the use case, there are several types of users with
different levels of access and responsibilities.
In this system the user can be
Traffic Control Operator (Primary User): Active user who interacts with the
system daily and monitors live camera feeds and detection overlays, watches
real-time object detection (YOLOv8) results, views vehicle counts, traffic
density, lane occupancy and responds to system-detected incidents (accidents,
stalled vehicles).
Enforcement Officer / Police: Enforcement-focused user who uses the system
for violation detection. He gets real-time alerts for traffic violations like:
o Red-light running
o Illegal U-turns
o Speeding
System Administrator: Technical user responsible for backend maintenance.
The technical user or system administrator deploys and manages YOLOv8
model updates, maintains detection pipelines and hardware integration,
Manages logs, security, storage, and uptime and troubleshoots model
performance issues (false positives, misses).
General Public (Passive User): Indirect user who benefits from the system.
The General Public User sees live traffic updates on public display boards or
apps and adjusts travel plans based on congestion reports.
Use cases:
Vehicle Detection and Classification: Detects vehicles in live video (cars,
bikes, buses, trucks, etc.). And can know what types of vehicles are on the road
and in which lanes.
Vehicle Counting: Each detected vehicle is counted as it crosses a virtual line
or region. Measure flow rates, detect congestion.
Illegal Turn / Lane Change Detection: Vehicle path is tracked and compared
against road rules. Identify vehicles making illegal maneuvers.
Pedestrian Detection in Unsafe Zones: YOLOv8 identifies pedestrians
crossing roads unsafely. It gives an alert for jaywalking, improve pedestrian
safety.
4.2.2 CLASS DIAGRAM
In software engineering, a class diagram in the Unified Modeling Language
(UML) is a type of static structure diagram that describes the structure of a
system by showing the system's classes, their attributes, operations (or
methods), and the relationships among the classes. It also indicates which class holds which information.
CHAPTER 5
SAMPLE CODING
5.1 Code
test.py
import cv2
from ultralytics import YOLO

def RGB(event, x, y, flags, param):
    # Print the cursor position to help calibrate on-screen coordinates
    if event == cv2.EVENT_MOUSEMOVE:
        print([x, y])

model = YOLO('yolov8s.pt')
cv2.namedWindow('RGB')
cv2.setMouseCallback('RGB', RGB)
cap = cv2.VideoCapture('vidyolov8.mp4')

count = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    count += 1
    if count % 3 != 0:  # process every third frame to reduce load
        continue
    frame = cv2.resize(frame, (1020, 500))
    results = model.predict(frame, show=True)
    cv2.imshow("RGB", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc key exits
        break

cap.release()
cv2.destroyAllWindows()
tracker.py
import math

class Tracker:
    def __init__(self):
        # Store the center positions of the objects
        self.center_points = {}
        # Keep the count of the IDs; each new object increases it by one
        self.id_count = 0

    def update(self, objects_rect):
        # Match each new box to a known object by center distance
        objects_bbs_ids = []
        for x1, y1, x2, y2 in objects_rect:
            cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
            same_object_detected = False
            for object_id, pt in self.center_points.items():
                if math.hypot(cx - pt[0], cy - pt[1]) < 35:
                    self.center_points[object_id] = (cx, cy)
                    objects_bbs_ids.append([x1, y1, x2, y2, object_id])
                    same_object_detected = True
                    break
            if not same_object_detected:
                self.center_points[self.id_count] = (cx, cy)
                objects_bbs_ids.append([x1, y1, x2, y2, self.id_count])
                self.id_count += 1
        # Clean the dictionary by center points to remove IDs not used anymore
        self.center_points = {obj[4]: self.center_points[obj[4]]
                              for obj in objects_bbs_ids}
        return objects_bbs_ids
main.py
import cv2
import pandas as pd
from ultralytics import YOLO
from tracker import *
import time

def RGB(event, x, y, flags, param):
    # Print the cursor position to help place the counting lines
    if event == cv2.EVENT_MOUSEMOVE:
        print([x, y])

model = YOLO('yolov8s.pt')
cv2.namedWindow('RGB')
cv2.setMouseCallback('RGB', RGB)
cap = cv2.VideoCapture('veh2.mp4')

# COCO class names used to filter detections
with open("coco.txt", "r") as my_file:
    class_list = my_file.read().split("\n")

count = 0
tracker = Tracker()
cy1 = 322     # y-coordinate of the first (upper) virtual line
cy2 = 368     # y-coordinate of the second (lower) virtual line
offset = 6    # pixel tolerance when testing line crossings
vh_down = {}  # id -> time when the vehicle crossed line 1 going down
counter = []  # ids counted in the downward direction
vh_up = {}    # id -> time when the vehicle crossed line 2 going up
counter1 = [] # ids counted in the upward direction

while True:
    ret, frame = cap.read()
    if not ret:
        break
    count += 1
    if count % 3 != 0:  # process every third frame
        continue
    frame = cv2.resize(frame, (1020, 500))
    results = model.predict(frame)
    a = results[0].boxes.data
    px = pd.DataFrame(a).astype("float")
    boxes = []
    for index, row in px.iterrows():
        x1, y1, x2, y2 = int(row[0]), int(row[1]), int(row[2]), int(row[3])
        d = int(row[5])
        c = class_list[d]
        if 'car' in c:
            boxes.append([x1, y1, x2, y2])
    bbox_id = tracker.update(boxes)
    for bbox in bbox_id:
        x3, y3, x4, y4, id = bbox
        cx = (x3 + x4) // 2
        cy = (y3 + y4) // 2
        cv2.rectangle(frame, (x3, y3), (x4, y4), (0, 0, 255), 2)
        ##### going down #####
        if cy1 < (cy + offset) and cy1 > (cy - offset):
            vh_down[id] = time.time()
        if id in vh_down:
            if cy2 < (cy + offset) and cy2 > (cy - offset):
                elapsed_time = time.time() - vh_down[id]
                if counter.count(id) == 0:
                    counter.append(id)
                    distance = 10  # metres between the two lines
                    a_speed_ms = distance / elapsed_time
                    a_speed_kh = a_speed_ms * 3.6
                    cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
                    cv2.putText(frame, str(id), (cx, cy),
                                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
                    cv2.putText(frame, str(int(a_speed_kh)) + 'km/h', (x4, y4),
                                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
        ##### going up #####
        if cy2 < (cy + offset) and cy2 > (cy - offset):
            vh_up[id] = time.time()
        if id in vh_up:
            if cy1 < (cy + offset) and cy1 > (cy - offset):
                elapsed1_time = time.time() - vh_up[id]
                if counter1.count(id) == 0:
                    counter1.append(id)
                    distance1 = 10  # metres between the two lines
                    a_speed_ms1 = distance1 / elapsed1_time
                    a_speed_kh1 = a_speed_ms1 * 3.6
                    cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
                    cv2.putText(frame, str(id), (cx, cy),
                                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
                    cv2.putText(frame, str(int(a_speed_kh1)) + 'km/h', (x4, y4),
                                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    cv2.line(frame, (267, cy1), (829, cy1), (255, 255, 255), 1)
    cv2.putText(frame, '1line', (274, 318),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    cv2.line(frame, (167, cy2), (932, cy2), (255, 255, 255), 1)
    cv2.putText(frame, '2line', (181, 363),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    d = len(counter)
    cv2.putText(frame, 'goingdown:' + str(d), (60, 40),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    u = len(counter1)
    cv2.putText(frame, 'goingup:' + str(u), (60, 130),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    cv2.imshow("RGB", frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()
speed.py
import cv2
import pandas as pd
from ultralytics import YOLO
from tracker import *
import time

def RGB(event, x, y, flags, param):
    # Print the cursor position to help place the counting lines
    if event == cv2.EVENT_MOUSEMOVE:
        print([x, y])

model = YOLO('yolov8s.pt')
cv2.namedWindow('RGB')
cv2.setMouseCallback('RGB', RGB)
cap = cv2.VideoCapture('veh2.mp4')

with open("coco.txt", "r") as my_file:
    class_list = my_file.read().split("\n")

count = 0
tracker = Tracker()
cy1 = 322
cy2 = 368
offset = 6
vh_down = {}
counter = []
vh_up = {}
counter1 = []

while True:
    ret, frame = cap.read()
    if not ret:
        break
    count += 1
    if count % 3 != 0:
        continue
    frame = cv2.resize(frame, (1020, 500))
    results = model.predict(frame)
    a = results[0].boxes.data
    px = pd.DataFrame(a).astype("float")
    boxes = []
    for index, row in px.iterrows():
        x1, y1, x2, y2 = int(row[0]), int(row[1]), int(row[2]), int(row[3])
        d = int(row[5])
        c = class_list[d]
        if 'car' in c:
            boxes.append([x1, y1, x2, y2])
    bbox_id = tracker.update(boxes)
    for bbox in bbox_id:
        x3, y3, x4, y4, id = bbox
        cx = (x3 + x4) // 2
        cy = (y3 + y4) // 2
        cv2.rectangle(frame, (x3, y3), (x4, y4), (0, 0, 255), 2)
        ##### going down #####
        if cy1 < (cy + offset) and cy1 > (cy - offset):
            vh_down[id] = time.time()
        if id in vh_down:
            if cy2 < (cy + offset) and cy2 > (cy - offset):
                if counter.count(id) == 0:
                    counter.append(id)
        ##### going up #####
        if cy2 < (cy + offset) and cy2 > (cy - offset):
            vh_up[id] = time.time()
        if id in vh_up:
            if cy1 < (cy + offset) and cy1 > (cy - offset):
                if counter1.count(id) == 0:
                    counter1.append(id)
    cv2.line(frame, (274, cy1), (814, cy1), (255, 255, 255), 1)
    cv2.putText(frame, 'L1', (277, 320), cv2.FONT_HERSHEY_COMPLEX, 0.8,
                (0, 255, 255), 2)
    cv2.line(frame, (177, cy2), (927, cy2), (255, 255, 255), 1)
    cv2.putText(frame, 'L2', (182, 367), cv2.FONT_HERSHEY_COMPLEX, 0.8,
                (0, 255, 255), 2)
    d = len(counter)
    u = len(counter1)
    cv2.putText(frame, 'goingdown:-' + str(d), (60, 90),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    cv2.putText(frame, 'goingup:-' + str(u), (60, 130),
                cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 255, 255), 2)
    cv2.imshow("RGB", frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()
tracker.py
from collections import defaultdict
import math

class Tracker:
    def __init__(self, max_distance=35, max_history=30):
        self.track_history = defaultdict(list)  # {id: [(x, y), (x, y), ...]}
        self.id_count = 0
        self.max_distance = max_distance
        self.max_history = max_history

    def update(self, objects_rect):
        objects_bbs_ids = []
        new_track_history = defaultdict(list)
        for x1, y1, x2, y2 in objects_rect:
            cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
            same_object_detected = False
            for object_id, hist in list(self.track_history.items()):
                if hist and math.hypot(cx - hist[-1][0],
                                       cy - hist[-1][1]) < self.max_distance:
                    # Known object: extend its trail, bounded by max_history
                    new_track_history[object_id] = (hist + [(cx, cy)])[-self.max_history:]
                    objects_bbs_ids.append([x1, y1, x2, y2, object_id])
                    same_object_detected = True
                    break
            if not same_object_detected:
                self.track_history[self.id_count].append((cx, cy))
                new_track_history[self.id_count] = self.track_history[self.id_count]
                objects_bbs_ids.append([x1, y1, x2, y2, self.id_count])
                self.id_count += 1
        self.track_history = new_track_history.copy()
        return objects_bbs_ids
5.2 OUTPUT
CHAPTER 6
CONCLUSION
6.1 Conclusion
The results indicate significant potential for scalability and deployment in smart
city infrastructures. However, the system's performance can be further improved
by integrating additional features such as license plate recognition, predictive
modeling using historical data, and adaptive traffic light control.
The success of this project opens up numerous avenues for future development
and real-world applications of real-time traffic management using YOLOv8.
Some promising directions include:
REFERENCES