
DENSITY BASED TRAFFIC LIGHT MANAGEMENT SYSTEM

Team Members

Sangram Singh Yadav (21104095)
Arwinder Singh (21104024)
Reetesh Kumar (21104083)

Supervised by:
Dr Balwinder Raj
(Department of ECE)
INTRODUCTION

Traffic congestion is becoming a serious problem as the number of cars on the roads grows. The queue of vehicles waiting to be served at an intersection lengthens sharply as traffic flow increases, and traditional fixed-time traffic lights cannot schedule it efficiently.

Inefficient utilization of traffic signals leads to traffic congestion.

Vehicles have to wait longer than necessary, which is avoidable.

Waiting longer than necessary with engines running degrades the environment through harmful exhaust gases.
OBJECTIVES
Develop a traffic light management system that adjusts
signal timing based on real-time traffic density.
YOLO v3: Implement object detection using the YOLO v3 model to count vehicles.
Arduino: Use an Arduino to control the traffic lights based on the detected vehicle density.
COMPONENTS
Camera: A high-resolution camera to capture
real-time traffic video.
YOLO v3 Model: A pre-trained deep learning
model for object detection.
Arduino: A microcontroller to control the traffic
lights.
Control Algorithm: An algorithm that adjusts traffic light timing based on the vehicle counts produced by the machine-learning detector.
DATA ACQUISITION
A camera is positioned at a strategic location, such as an intersection, to capture video streams of traffic flow.
The video feed is processed to detect vehicles using YOLO v3.

Object Detection with YOLO v3

The YOLO (You Only Look Once) v3 model is a deep learning-based object detection framework known for its speed and accuracy. We use a YOLO v3 model pre-trained on the COCO dataset to detect different vehicle types, including cars, trucks, and motorcycles. The model outputs bounding boxes and class labels, enabling vehicle counting.
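As an illustration of this step, the sketch below runs one frame through a pre-trained YOLO v3 network with OpenCV's DNN module and counts the detected vehicles. The file names, the confidence threshold, and the set of vehicle classes are assumptions for the sketch, not values fixed by the project; in practice non-max suppression (shown later) would be applied before counting so the same vehicle is not counted twice.

import cv2
import numpy as np

# Assumed names for the files downloaded from the YOLO v3 website.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
with open("coco.names") as f:
    classes = [line.strip() for line in f]

VEHICLE_CLASSES = {"car", "truck", "bus", "motorbike"}  # COCO labels treated as vehicles
CONF_THRESHOLD = 0.5                                    # assumed confidence cut-off

def count_vehicles(frame):
    """Return the number of vehicles YOLO v3 detects in one video frame."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    count = 0
    for output in outputs:        # one output tensor per YOLO detection scale
        for detection in output:  # row = box coordinates, objectness, class scores
            scores = detection[5:]
            class_id = int(np.argmax(scores))
            if scores[class_id] > CONF_THRESHOLD and classes[class_id] in VEHICLE_CLASSES:
                count += 1
    return count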

Arduino-Based Control
An Arduino microcontroller manages the traffic light system. It receives vehicle count data from the YOLO v3 model and
adjusts traffic light timing accordingly. The control algorithm ensures a balanced flow of traffic and prioritizes lanes with higher vehicle density.
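A minimal sketch of how the host computer could pass the vehicle counts to the Arduino over a serial link, assuming the pyserial library, a hypothetical port name, and a simple comma-separated message format. The Arduino sketch on the other end would parse this message and drive the signal pins.

import serial  # pyserial

# Assumed port and baud rate; adjust to the actual Arduino connection.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def send_counts(lane_counts):
    """Send per-lane vehicle counts, e.g. [12, 3, 7, 5], as the line 'C:12,3,7,5'."""
    message = "C:" + ",".join(str(c) for c in lane_counts) + "\n"
    arduino.write(message.encode("ascii"))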

Control Algorithm
The control algorithm uses a dynamic approach to adjust traffic light timing based on real-time vehicle counts. It sets
minimum and maximum green light times and uses vehicle density to adjust within these limits. If one lane has a
significantly higher vehicle count, it receives a longer green light time to reduce congestion.
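A minimal sketch of the timing rule described above, assuming illustrative minimum and maximum green times: each lane's green time is scaled by its share of the total vehicle count and clamped within the limits. The actual limits would be tuned for the intersection.

MIN_GREEN = 10  # seconds, assumed lower limit
MAX_GREEN = 60  # seconds, assumed upper limit

def green_time(lane_count, total_count):
    """Allocate green time in proportion to a lane's share of the traffic."""
    if total_count == 0:
        return MIN_GREEN          # no traffic detected: fall back to the minimum
    share = lane_count / total_count
    proposed = MIN_GREEN + share * (MAX_GREEN - MIN_GREEN)
    return int(min(MAX_GREEN, max(MIN_GREEN, proposed)))

For example, a lane holding 30 of 40 waiting vehicles would receive roughly 47 seconds of green time, while a near-empty lane stays close to the 10-second minimum.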
Implementation
Hardware Setup
Install high-definition cameras at strategic locations to capture live traffic footage.

Set up Arduino microcontrollers at each intersection to process data and control traffic lights.

Connect the cameras and Arduino boards to a central processing unit, such as a computer or
edge device, for data processing and decision-making.
Integration
Establish communication protocols between the hardware components (cameras, Arduino boards)
and the central processing unit hosting the software modules.

Integrate the object detection algorithm into the system to process the live camera feed and detect
vehicles.
Integrate the traffic light control algorithm to analyze vehicle density data and adjust signal timings
accordingly.
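Putting the pieces together, one possible integration loop is sketched below. It reuses the hypothetical count_vehicles, send_counts, and green_time helpers sketched earlier and assumes one camera per monitored lane; a deployed system would also handle camera failures and synchronize with the Arduino's light cycle.

import cv2

# Assumed: one VideoCapture per monitored lane (device indices 0-3).
cameras = [cv2.VideoCapture(i) for i in range(4)]

while True:
    lane_counts = []
    for cam in cameras:
        ok, frame = cam.read()
        # count_vehicles is the YOLO v3 counting helper sketched earlier
        lane_counts.append(count_vehicles(frame) if ok else 0)

    send_counts(lane_counts)                       # forward counts to the Arduino
    total = sum(lane_counts)
    timings = [green_time(c, total) for c in lane_counts]
    # The controller then cycles the lights using these per-lane green times.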
ABOUT YOLO

A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation.

The YOLO architecture passes the image through the neural network only once, and the output is the prediction.

The architecture splits the input image into an m×m grid. Bounding boxes are generated for each detected object, and a 7-valued vector is calculated for each grid cell (see the encoding sketch after this list).

A bounding box is often larger than the grid cell itself.
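As a small illustration of this grid encoding, the sketch below maps a bounding box to the grid cell that owns its centre and builds a 7-valued vector for that cell. It assumes the common two-class layout [pc, bx, by, bh, bw, c1, c2] (objectness, box geometry relative to the cell and image, two class probabilities); the exact layout used on the slides may differ.

import numpy as np

def encode_cell(box, image_size, m):
    """Map a box (cx, cy, w, h) in pixels to its responsible cell of an m x m grid
    and build an assumed 7-valued target vector for that cell."""
    cx, cy, w, h = box
    W, H = image_size
    col = int(cx / W * m)      # which of the m x m cells owns the box centre
    row = int(cy / H * m)
    bx = cx / W * m - col      # centre offset inside the cell (0..1)
    by = cy / H * m - row
    bw = w / W                 # width and height relative to the whole image,
    bh = h / H                 # which is why a box can exceed its own cell
    pc = 1.0                   # an object is present in this cell
    c1, c2 = 1.0, 0.0          # assumed one-hot label for an assumed two-class case
    return (row, col), np.array([pc, bx, by, bh, bw, c1, c2])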
PROBLEMS
ABOUT YOLO (VECTOR REPRESENTATION)
ABOUT YOLO (THE NEURAL NETWORK)
ABOUT YOLO (NON-MAX SUPPRESSION)
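This slide covers non-max suppression, which discards overlapping boxes that describe the same object, keeping only the highest-scoring one. A minimal sketch of the idea follows, assuming boxes in (x, y, w, h) format and an assumed IoU threshold; OpenCV's cv2.dnn.NMSBoxes provides the same operation.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.4):
    """Keep the best box and drop any lower-scoring box that overlaps it too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep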
ABOUT YOLO
Advantages
Speed (On a Pascal Titan X, YOLO v3 processes images at 30 FPS
and has a mAP of 57.9% on COCO test-dev).

The network learns generalizable representations, so it transfers better to new image domains.

A faster version with a smaller architecture (Tiny YOLO) is also available.

Open Source: https://pjreddie.com/darknet/yolo/

Limitation
The model struggles with small objects that appear in
groups, such as flocks of birds.
CONCEPTS AND LIBRARIES USED
Image Input - Real-time traffic images are used as input.
Object Detection - The YOLO v3 model is used for object detection.

NumPy - For handling arrays.
Matplotlib - To display the results in a window (see the display sketch at the end of this section).
OpenCV - Python library used to load and run the YOLO v3 model in the project.
YOLO v3 - For object detection. The weights, configuration, and COCO class-name files have been downloaded from the YOLO v3 website.
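As a small illustration of the display step mentioned above, the sketch below draws detection boxes on a frame with OpenCV and shows the result in a Matplotlib window. The (x, y, w, h) box format and label list are assumptions; they would come from the detection step.

import cv2
import matplotlib.pyplot as plt

def show_detections(frame, boxes, labels):
    """Draw labelled (x, y, w, h) boxes on the frame and display it."""
    for (x, y, w, h), label in zip(boxes, labels):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # OpenCV frames are BGR
    plt.axis("off")
    plt.show()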
METRICS
These metrics are reported on the COCO dataset (Common Objects in Context, by Microsoft).

COCO is a large-scale object detection, segmentation, and captioning dataset.

COCO features: 330K images, 80 object categories, and 1.5 million object instances.
THANK YOU
