
VEHICLE COUNTING FOR TRAFFIC MANAGEMENT

USING OPENCV AND PYTHON

Abstract: One of the main components of the smart traffic concept is vehicle identification and
monitoring. Without a thorough understanding of a city's current traffic patterns, modern city
planning and construction is impossible. Surveillance footage is an undervalued stream of traffic
data that can be exploited using a range of IT methods and solutions, including machine
learning techniques. A solution for real-time vehicle traffic control, logging, and counting has
been proposed for Jelgava, Latvia. It locates vehicles in images from an exterior surveillance
camera using an object tracking model. Detected vehicles are forwarded to the tracking
module, which builds each vehicle's trajectory and counts it. In this project, OpenCV and Python
are used to detect and count vehicles. This study is part of the RETRACT initiative (Enabling Resilient
Urban Transit Networks in Smart Cities).
Software requirements: Python 3.8 with the OpenCV, NumPy, and time libraries.
Hardware requirements: keyboard, mouse, monitor, and a traffic-camera video file (video.mp4).
Introduction: Building new roads is not always a feasible way to relieve heavy traffic, as
construction costs are high and space is scarce in the urban areas where relief is most needed. As
reported in the US Federal Highway Administration's Highway Statistics Summary to 1995, both the
number of vehicles and the number of vehicle miles driven more than doubled in the preceding 25 years;
this leads to reduced speeds and increased travel time, which causes instabilities in flow, more
commonly known as "stop-and-go traffic." Likewise, the latest report from the Department for
Transport in the UK indicates a large increase in traffic every year from 1994 to 2018, as shown in the
figure below. As a result, the trend has shifted toward Active Traffic Management Centers (ATMC) and the
use of Intelligent Transportation Systems (ITS). The goal is therefore not only to build new roads or
repair infrastructure, but to introduce information systems into transportation management. These
systems make elements within the transportation system intelligent by incorporating
sensors that communicate with each other over wireless technologies, thereby increasing safety and
traveler convenience. For traffic management centers to operate at maximum efficiency,
easy operations and useful information should be provided to the public and to traffic control personnel
in a timely manner. A smart urban traffic management system should perform the
following functions: data collection; data fusion, analysis, and processing; decision; and action. Traffic
management projects normally target some or all of the following: congestion avoidance, priority-based
traffic management, and average waiting time reduction. According to Folds et al. (1993), the mission
of an ideal traffic management center is "to facilitate the safe movement of people and goods, with
minimal delay, throughout the roadway system."
Using video cameras instead of other sensors has several advantages: easy maintenance, high
flexibility, and a compact hardware and software structure, which enhance mobility and performance
(Thomessen, 2017). In contrast, intrusive traffic sensing technologies cause traffic disruption during
installation and are unable to detect slow or static vehicles (Mandellos et al., 2011). Ethical
and privacy considerations related to the use of video footage for traffic monitoring are out of the scope
of the current research. These topics are usually regulated by state or municipal laws (e.g., in Latvia,
outdoor signs warn that video recording takes place in a particular area).

The Annual Increment of Road Traffic

Hierarchal Functionality of Urban Traffic Control System

Existing Methodologies:

Vehicle counting plays a significant role in vehicle behavior analysis and traffic incident detection for
established video surveillance systems on expressways. Since the existing sensor methods and the
traditional image processing methods suffer from difficult installation, high cost, and low
precision, a novel vehicle counting method is proposed that achieves efficient counting based on
multivehicle detection and multivehicle tracking. For the multivehicle detection task, a new expressway
dataset was constructed, consisting of a large number of high-resolution (1920 × 1080) sample images
captured from real-world expressway scenes (covering diverse climatic conditions and visual
angles) by Pan-Tilt-Zoom (PTZ) cameras, for which vehicle categories and annotation rules are defined.
Laser sensors: Laser sensors are used to detect vehicles, to measure the distance between the sensor
and the vehicles, and to measure the speed and shape of the vehicles. This kind of sensor cannot detect
fast vehicles, is susceptible to rain, and has difficulty detecting two-wheeled vehicles [1]. A vision-
based system is chosen here for several reasons: the data are much richer and more complete
compared to the information coming from radar, inductive loop detectors (ILD), or lasers. Furthermore,
the computational power of contemporary computers is able to meet the requirements of image processing.

Proposed Methodology
The proposed system uses OpenCV and Python together with Gaussian filtering and background elimination.
OpenCV is a cross-platform library with which we can develop real-time computer vision
applications. It mainly focuses on image processing and on video capture and analysis, including
features such as face detection and object detection.
In image processing, a Gaussian blur (also known as Gaussian smoothing) is the result of
blurring an image with a Gaussian function (named after the mathematician and scientist Carl
Friedrich Gauss). It is a widely used effect in graphics software, typically applied to reduce image noise
and detail.
Background subtraction:

1. Read data from videos or image sequences using cv::VideoCapture;

2. Create and update the background model using the cv::BackgroundSubtractor class;

3. Get and show the foreground mask using cv::imshow.

Related work:

Background Subtraction: The main aim of this section is to provide a brief summary of state-of-the-
art moving object detection methods based on a reference image. Existing background subtraction
methods can be divided into two categories [7]: nonparametric and parametric.
Parametric approaches use a series of parameters that determine the characteristics of the statistical
functions of the model, whereas nonparametric approaches automate the selection of the model
parameters as a function of the data observed during training.

Moving Vehicle Extraction and Counting:

Synopsis: In this work, we have developed a system that automatically detects and counts vehicles. The
proposed system consists of five main functions: motion detection, shadow removal, occlusion
management, vehicle tracking, and trajectory counting. The input of the system is video footage
(the current version of the system uses a prerecorded video), and the output is the absolute number of
vehicles. The following sections describe the different processing steps of the counting system.
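The final step, trajectory counting, can be sketched as follows. This is a hypothetical simplification, not the system's actual tracker: precomputed centroid trajectories are assumed as input, and a vehicle is counted when its trajectory crosses a virtual counting line:

```python
COUNT_LINE_Y = 100  # assumed position of the virtual counting line

def count_crossings(trajectories, line_y=COUNT_LINE_Y):
    """Count trajectories whose consecutive centroids straddle the line."""
    total = 0
    for traj in trajectories:
        for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
            if (y0 - line_y) * (y1 - line_y) < 0:  # opposite sides of the line
                total += 1
                break  # count each vehicle at most once
    return total

# Two trajectories cross the line; the third stays on one side.
trajs = [
    [(50, 80), (52, 95), (55, 110)],
    [(90, 130), (88, 105), (86, 90)],
    [(20, 60), (22, 70), (25, 80)],
]
print(count_crossings(trajs))  # → 2
```

Counting whole trajectories rather than per-frame detections is what makes the output an absolute vehicle count instead of a sum of noisy detections.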

Motion Detection: Motion detection, which classifies pixels as either foreground
or background, is a critical task in many computer vision applications. A common approach to detecting
moving objects is background subtraction, in which each new frame is compared to the estimated
background model.

Synopsis of the proposed system for vehicle counting.

Background subtraction using Gaussian mixture model:


Synopsis of the motion detection module.

Moving region detection:

Results:
The evaluation work was divided into two stages.

During the first stage, we acquired three different datasets at the same site. This site is also equipped
with inductive loops, which are convenient for comparison purposes. The first dataset (named Cloudy)
was shot during cloudy weather, and thus with cloudy illumination and without shadows. The second
one (Sunny) was shot on a very sunny day, with severe shadows. The third one (Transitions) was
shot in the presence of sparse clouds, leading to sudden illumination changes. The three datasets are
each ∼20 min long and contain between 1300 and 1500 vehicles, according to the ground truth. During
the second stage, a longer dataset was shot at another site; it contains many difficulties due to
shadows. It contains 3111 vehicles in a 37-min-long video. Shadows cast by vehicles are more
spread and stretched due to the sun position. In the observed scene, there are two kinds of shadows:
those that are stationary, created by road panels, and those that move, coming from swaying
branches. Moreover, as the camera is next to an exit road, the road marking is denser. Table 1 shows the
vehicle counting and classification results. The ground truth was obtained manually. For each
vehicle class, from the results automatically computed by our system, the numbers of false negatives
(undetected vehicles), false positives (mistakenly counted vehicles), and misclassified vehicles (assigned
to a wrong class) are calculated. The system is evaluated according to:
• classification performance, using recall = true positives / ground truth and precision = true positives /
(detected vehicles − misclassified); "total recall" and "total precision" are the averages of the values
obtained for the three vehicle categories;
• detection performance, using detection rate = 1 − false negatives / ground truth;
false detection rate = false
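The metric definitions above can be written out as small helper functions; the counts below are hypothetical, chosen only to exercise the formulas:

```python
def recall(true_positives, ground_truth):
    # recall = true positives / ground truth
    return true_positives / ground_truth

def precision(true_positives, detected, misclassified):
    # precision = true positives / (detected vehicles - misclassified)
    return true_positives / (detected - misclassified)

def detection_rate(false_negatives, ground_truth):
    # detection rate = 1 - false negatives / ground truth
    return 1 - false_negatives / ground_truth

# Hypothetical counts for one vehicle class (not from Table 1):
gt, tp, detected, misclassified, fn = 1400, 1350, 1430, 10, 40
print(round(recall(tp, gt), 3))                           # → 0.964
print(round(precision(tp, detected, misclassified), 3))   # → 0.951
print(round(detection_rate(fn, gt), 3))                   # → 0.971
```

"Total recall" and "total precision" would then be the averages of these per-class values over the three vehicle categories.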

Background elimination, step 1: Gaussian filter


Conclusions:
In this work, we developed an advanced road vehicle counting system. The aim of such a system is to
replace or complement, in the future, the old systems based on ILD. The system has been tested under
different kinds of illumination changes (cloudy, sunny, transitions between sun and clouds), obtaining
better results than ILD. The developed algorithm is able to eliminate several kinds of shadows,
depending on the time of day. Another particular strength of the proposed method is its ability to
deal with severe occlusions between vehicles. Multicore programming allows us to achieve real-time
performance in software alone. A perspective of this work is, with the same sensor, to go on to
calculate traffic indicators such as occupation rate or vehicle density. These two indicators could be
used to compute more global congestion indicators. Infrastructure operators are very interested in
having such statistics in real time for management purposes.

References:

1. “Sensors for intelligent transport systems,” 2014, http://www.transport-intelligent.net/english-sections/technologies-43/captors/?lang=en.

2. “Citilog website,” 2015, http://www.citilog.com.

3. “FLIR Systems, Inc.,” 2016, http://www.flir.com/traffic/.

4. S. Birchfield, W. Sarasua, and N. Kanhere, “Computer vision traffic sensor for fixed and pan-tilt-zoom
cameras,” Technical Report Highway IDEA Project 140, Transportation Research Board, Washington, DC
(2010).

5. C. C. C. Pang, W. W. L. Lam, and N. H. C. Yung, “A method for vehicle count in the presence of
multiple-vehicle occlusions in traffic images,” IEEE Trans. Intell. Transp. Syst. 8, 441–459 (2007).

6. M. Haag and H. H. Nagel, “Incremental recognition of traffic situations from video image sequences,”
Image Vis. Comput. 18, 137–153 (2000).
7. L. Unzueta et al., “Adaptive multicue background subtraction for robust vehicle counting and
classification,” IEEE Trans. Intell. Transp. Syst. 13, 527–540 (2012).

8. S. Greenhill, S. Venkatesh, and G. A. W. West, “Adaptive model for foreground extraction in adverse
lighting conditions,” Lec. Notes Comput. Sci. 3157, 805–811 (2004).

9. K. Kim et al., “Real-time foreground-background segmentation using codebook model,” Real-Time Imaging 11, 172–185 (2005).

10. C. R. Wren et al., “Pfinder: real-time tracking of the human body,” IEEE Trans. Pattern Anal. Mach.
Intell. 19, 780–785 (1997).

11. G. Gordon et al., “Background estimation and removal based on range and color,” in IEEE Computer
Society Conf. on Computer Vision and Pattern Recognition, pp. 459–464 (1999).

12. C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in
IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, pp. 2246–2252 (1999).

13. C. Stauffer and W. E. L. Grimson, “Learning patterns of activity using real-time tracking,” IEEE Trans.
Pattern Anal. Mach. Intell. 22, 747– 757 (2000).

14. M. Harville, G. G. Gordon, and J. I. Woodfill, “Foreground segmentation using adaptive mixture
models in color and depth,” in Proc. IEEE Workshop on Detection and Recognition of Events in Video, pp.
3–11 (2001).

15. J. Rittscher et al., “A probabilistic background model for tracking,” in Proc. ECCV, pp. 336–350 (2000).
