
2022 IEEE 7th International Conference for Convergence in Technology (I2CT)

Pune, India. Apr 07-09, 2022

Vehicle Counting System in Urban Areas: A Practical Case

Jonathan Almeida
Departamento de Ciencias de la Computación
Universidad de las Fuerzas Armadas ESPE
Sangolquí, Ecuador
[email protected]
Smart Lab, Escuela Politécnica Nacional, Quito, Ecuador

Steven Guamán
Departamento de Ciencias de la Computación
Universidad de las Fuerzas Armadas ESPE
Sangolquí, Ecuador
[email protected]
Smart Lab, Escuela Politécnica Nacional, Quito, Ecuador

Sang Guun Yoo (Corresponding Author)
Departamento de Informática y Ciencias de la Computación
Escuela Politécnica Nacional, Quito, Ecuador
[email protected]
Smart Lab, Escuela Politécnica Nacional, Quito, Ecuador

978-1-6654-2168-3/22/$31.00 ©2022 IEEE | DOI: 10.1109/I2CT54291.2022.9823982

Abstract—Controlling vehicular traffic problems has become a priority in modern cities. In this work, a vehicle counting system is presented that quantifies the number of vehicles that travel on a road by detecting them, using the discipline known as computer vision. The present work describes the details of the implementation of a functional prototype, which was developed using the iterative development methodology. Automating the vehicle counting process with a technology of this type allows obtaining more accurate results in a faster and more comfortable way, facilitating the decision-making performed by the authorities on subjects of vehicle mobility and city planning.

Keywords—car counting, computer vision, OpenCV, object recognition

I. INTRODUCTION

Nowadays, cities around the world are experiencing rapid growth in different aspects, and one of the fastest-growing is urban traffic, which is reaching its maximum capacity [1]. Given the number of vehicles that transit urban areas, access to congested areas is becoming more difficult each day; trying to solve this problem, the authorities in charge of urban planning analyze and process information concerning the number of vehicles that circulate in the different streets and avenues of the city. However, many cities still carry out the process of counting the vehicles that transit their streets and avenues in an old-style way. These processes range from people recording the number of vehicles manually to counting them using sensors. The first method has the limitations that a person must be present in the place to count the number of cars and that the data gathered by the person may not be accurate. On the other hand, the usage of sensors can have some limitations, such as the difficulty of implementation, high costs, and the easiness of damage due to external factors [2].

Faced with this situation, computer vision technology appears as an interesting alternative for counting vehicles. For example, a study carried out by F. Nelli explains that the area of computer vision has undergone great development, being able to solve problems that were previously impossible [3]. Additionally, there are several works that indicate the advantages of using computer vision in different areas of our lives [4, 5, 6].

Based on this background, the present work has carried out the development of a car counting system using computer vision technology that allows delivering reliable data (with little effort and cost of implementation) to city administrators, so that they can do proper planning. Additionally, the present work intends to share the experience of the implementation of the aforementioned system, so that other works can use it as a case study and a guide for the implementation of more advanced systems.

The rest of the paper is organized as follows. Section 2 briefly describes some previous works. Then, Section 3 explains the methodology used in this work for developing the vehicle counting system. Later, in Section 4, the different iterations performed to reach the final prototype of the proposed system are detailed. Finally, Section 5 concludes the present work.

II. PREVIOUS WORKS

To carry out this work, the analysis of similar works was taken into account, in which computer vision is applied to vehicle recognition and counting. The previous works developed by Ferreira [7], Tang et al. [8], Allamehzadeh et al. [9], Kadiķis et al. [10] and Shirazi et al. [11] use a model similar to the one proposed in this work, i.e., identifying and counting vehicles on urban roads.

In the case of Ferreira's work, the author proposes an identification and counting system for vehicles that circulate in a certain period of time, that is, it makes use of pre-recorded videos. In Tang et al.'s work, the limitations caused by changes in lighting and by speed variations of vehicles are described. On the other hand, in Kadiķis et al.'s work, the authors draw a line in the video that allows, in addition to identifying the vehicles, counting them. Finally, in Allamehzadeh et al.'s work, the authors achieve a correct detection accuracy of 94%; meanwhile, in Shirazi et al.'s work, the authors achieve an accuracy between 89% and 94% in counting vehicles.

Even though there are several previous works, the intention of the present work is to create our own know-how in developing vehicle counting systems. The system proposed in this work uses segmentation of the area of interest where the vehicles circulate and two lines that detect the movement of the vehicles, improving the detection percentage of previous works such as [9] and [11] and overcoming some of the limitations of other works such as [8, 10].

An additional improvement over the previous works analyzed in this article is that we have developed a prototype



Authorized licensed use limited to: KLE Technological University. Downloaded on January 21,2024 at 06:51:03 UTC from IEEE Xplore. Restrictions apply.
that is portable, allowing its execution on a Raspberry Pi or on a traditional computer, providing ease of use anywhere.

III. METHODOLOGY

For the development of this work, an adaptation of the agile development methodology known as Iterative Development [12] was selected, which is characterized by better capturing changing requirements and by dividing a project into several iterations. As a result of each iteration, an improved product is obtained, which is analyzed to see if there is any limitation that needs to be addressed; if there is one, a new iteration is proposed to solve the found limitations and restrictions (see Fig. 1). In the present work, seven iterations were carried out until reaching the final prototype.

Fig. 1. Methodology used for the development of the proposed work

IV. PROPOSED SYSTEM

The prototype has been developed using Python 3.7, with the OpenCV library as the main tool providing the computer vision functionality. The general process of the system is shown in Fig. 2. In the proposed flow, a raw video is delivered as input; the system processes it and outputs another video in which the objects (i.e., vehicles) are detected and counted.

Fig. 2. General Process of the Prototype

A. First iteration

Architecture: In the first iteration, the architecture indicated in Fig. 3 was considered. The proposed solution identifies the vehicles from videos that are delivered as inputs. The internal process divides the video into frames (images) and the backgrounds of such images are subtracted. Then, the frames are converted into black and white images, with the aim of identifying the objects (i.e., vehicles) easily. Subsequently, erosion and dilation methods are applied, so that the image becomes cleaner and free of noise. Finally, the objects are detected, and a square is drawn around each detected object.

Fig. 3. System Architecture for the First Iteration

Length of Test Videos: For the present work, videos with a length between 30 and 60 seconds were used. This range of time was chosen based on the data delivered by previous works [8-10].

Kernel Build and Image Processing: In this iteration, a 3x3 square kernel was used, with a ksize and anchor of 3. Additionally, as indicated previously, the video was divided into frames (images) and such images were processed (see Fig. 4) before detecting the vehicles.

Fig. 4. Frame before and after image processing

Tests and Results: The minimum-size method was used to detect a vehicle in the video: the minimum height and width that an object must have to be considered a vehicle were initialized. The algorithms considered for vehicle detection were Mixture of Gaussians (MOG), k-Nearest Neighbors (KNN) and Geometric Multigrid (GMG). These algorithms were tested on three different videos of between 30 and 60 seconds in duration, recorded from different perspectives. Table I compares the car detection results of the three algorithms. After analyzing the results, it was determined that the MOG algorithm presents a lower average relative error (11.86%) than the other two, which delivered errors of 25.18% (KNN) and 17.44% (GMG). Based on this analysis, it was determined that the MOG algorithm was the most suitable for the development of the present prototype.

In addition, in the initial tests, it was possible to identify that the large percentage of errors was due to false positives (different objects detected as vehicles), false negatives (vehicles in the video that were not detected) and multiple detections (the same vehicle detected several times during video playback).

TABLE I. COMPARISON OF DIFFERENT ALGORITHMS

Video 1 (Length: 60 seconds, dimension 1280x720)
Algorithm | Absolute Error | Relative Error | False Positives | False Negatives | Multiple Detections
MOG | 7.00 | 12.5% | 2 | 4 | 3
KNN | 24.00 | 42.86% | 14 | 8 | 11
GMG | 11.00 | 19.64% | 15 | 1 | 5

Video 2 (Length: 32 seconds, dimension 1280x720)
Algorithm | Absolute Error | Relative Error | False Positives | False Negatives | Multiple Detections
MOG | 1.00 | 1.92% | 0 | 0 | 0
KNN | 7.00 | 13.46% | 5 | 4 | 2
GMG | 6.00 | 11.54% | 16 | 2 | 5

Video 3 (Length: 40 seconds, dimension 1280x720)
Algorithm | Absolute Error | Relative Error | False Positives | False Negatives | Multiple Detections
MOG | 11.00 | 21.15% | 0 | 3 | 6
KNN | 10.00 | 19.23% | 0 | 4 | 8
GMG | 11.00 | 21.15% | 2 | 7 | 24

Limitations in this iteration and possible solution: there are many false positives, false negatives and multiple detections. It is proposed to draw a line on the playback screen and to plot a point on each detected vehicle in order to reduce the aforementioned errors.

B. Second iteration

In the second iteration, the solution proposed in the first one was applied, i.e., a line is drawn on the playback screen and a point is plotted on each detected vehicle; when the point of a vehicle goes through the line, the vehicle is counted.

Architecture: For this iteration, the initial architecture was modified by adding the vehicle counting module (see Fig. 5).

Fig. 5. Second iteration's Architecture

Vehicle Counting Module: This module processes each frame of the video. When the program detects a vehicle, it draws a rectangle around the object and a point in its center, in such a way that when the plotted point in the center of the object crosses the line plotted on the screen, the object is counted as a new vehicle. With this method, a considerable reduction of false positives is expected, since objects that are not in movement will not cross the collision line plotted on the video.

Tests and Results: The car counting module was tested on the same videos of the previous iteration; one example is shown in Fig. 6. In this iteration, an average accuracy of 90% was obtained. It was possible to considerably reduce the false positive error, which occurs when objects other than vehicles are detected or when the same vehicle is detected several times. However, the presence of false negatives was still detected; these occur when the center point drawn on the detected vehicle does not cross the line drawn on the screen, or when the crossing is not detected. This issue was registered to be solved in the next iteration.

Furthermore, an additional test was considered in this iteration, which consisted of working with long-duration videos (i.e., 70 minutes). This additional test was meant to verify whether the system maintains its stability with the usage of heavy videos; the results were satisfactory, showing that the system can support long videos without its performance being affected.

Fig. 6. Vehicle Detection in Second Iteration

C. Third Iteration

In this iteration, the car counting was modified to improve its performance. The behavior of the algorithm was revised, and we noticed that, at the moment a vehicle crossed the line plotted on the screen, it was not counted in some situations. It was determined that the line plotted on the screen was very thin, and sometimes the point plotted in the center of the detected vehicle passed through the line without touching it, thus omitting the car count. Given this situation, this iteration used a wider line, together with a flag variable in each detected object to avoid multiple counting when crossing the wide line.

Tests and Results: With the improvement made in this iteration, the accuracy of the system rose to 92%. In this iteration, it was possible to learn that: (1) by enlarging the line plotted on the video, false negatives were reduced, and (2) multiple counting of the same detected object was also reduced with the use of a flag variable on the detected object of interest.

D. Fourth Iteration

In this iteration, an improvement of the system was made in terms of the car detection area. Two lines were generated to create a car detection area (threshold), so that the rectangle around a detected object is drawn only within this area.
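The counting rule of the second and third iterations (a center point, a line on the screen, and a per-object flag) can be sketched independently of OpenCV as plain bookkeeping over per-frame centroids. This is a minimal sketch, not the prototype's code: the nearest-centroid matching, the matching radius and the band half-width are all assumptions.

```python
# Sketch of the second/third-iteration counting rule: a vehicle's center
# point is matched frame to frame, and it is counted once when it enters a
# widened counting line (a horizontal band); a per-track "counted" flag
# prevents multiple counts. Matching radius and band width are assumptions.

def count_crossings(centroid_frames, line_y, band=8, match_radius=40):
    """centroid_frames: list of per-frame lists of (x, y) center points."""
    tracks = []   # each track: {"pos": (x, y), "counted": bool}
    total = 0
    for centroids in centroid_frames:
        new_tracks = []
        for (x, y) in centroids:
            # nearest-centroid match against the previous frame's tracks
            best = None
            for t in tracks:
                tx, ty = t["pos"]
                if abs(tx - x) <= match_radius and abs(ty - y) <= match_radius:
                    if best is None or abs(ty - y) < abs(best["pos"][1] - y):
                        best = t
            counted = best["counted"] if best else False
            # count once when the center point lies inside the wide line
            if not counted and abs(y - line_y) <= band:
                counted = True
                total += 1
            new_tracks.append({"pos": (x, y), "counted": counted})
        tracks = new_tracks
    return total
```

With a zero-width band the center point can step over the line between two frames without ever touching it, which is exactly the false negative the third iteration fixed by widening the line.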

Thus, if a detected car crosses the first line, it will be counted and will no longer be tracked when it passes through the second line (see Fig. 7).

Additionally, some technical improvements were performed: (1) the MOG algorithm was replaced by MOG2 for background subtraction, to reduce the level of leftover objects and deliver a more precise detection; (2) for the creation of the kernel, the np.ones() method was used, producing an array filled with ones, together with the np.uint8 type, which holds integers between 0 and 255, needed to form the kernel area; and (3) a mask for the frame was created using the MORPH_OPEN operation, which erodes the frame and then dilates it, and is mainly useful for eliminating the noise that can be produced in the image when the background is subtracted. Then, the inverse operation, MORPH_CLOSE, was applied, which dilates the image shown in the frame and then erodes it; this is useful to close the small holes that can appear inside the objects in the foreground. Subsequently, to detect the best possible contour of the vehicle to be counted, the cv2.CHAIN_APPROX_NONE option was used.

Tests and Results: A simple test was carried out with one video used in the previous iteration, to rapidly verify the improvements. The test video had a duration of 46 seconds and the vehicle counting accuracy was 100%. This result shows an improvement over the last iteration; however, it is limited, since only one video was tested.

Fig. 7. Vehicle Detection in Fourth Iteration

With posterior tests, it was possible to learn that: (1) holes in the streets or roads affect the accuracy of the system, (2) the videos used must be recorded without camera movement, since camera movements generate noise that affects the vehicle detection and counting effectiveness, and (3) the better the quality of the videos used (higher than 720p), the better the results in terms of accuracy.

E. Fifth Iteration

In this iteration, the generation of a region of interest (ROI) is proposed. The ROI is the area of the video on which the vehicle counting is performed. By reducing the area of the video to the one where the vehicles transit, the level of error could be reduced. It is important to indicate that, since the Vehicle Counting System can be configured according to the position of the camera on the road, the ROI solution can be applied in real situations.

For the present iteration, the ROI was created by including only the road in front of the camera, with the aim of excluding the detection of objects outside this area (see Fig. 8). To achieve this, the numpy.zeros_like() method was applied, which returns an array full of zeros with the same shape and type as the array given when calling it.

Fig. 8. ROI in Fifth Iteration

Finally, the cv2.bitwise_and() method received the original frame and the transformed frame as parameters, producing the combination of the matrices that make up both images. To do this, each position of the matrices is compared bit by bit in search of coincidences; where they are found, the value is stored in a third matrix, which is returned as the result. In execution, the application of the ROI is reflected in the transformation of the video, as shown in Fig. 9.

Fig. 9. Image after cv2.bitwise_and()

Tests and Results: The configurations for the creation of the ROI are internal to the code, so videos with the same characteristics were used in the performance tests, to determine whether the ROI positively affects the final count. Each test performed on the test videos had a specification of the vertices that form the desired polygon.

For this test, 8 different videos were used. The accuracy of the system ranged from 90.3% to 100%, with an average accuracy of 93.26%. The most common errors were false positives caused by light reflections on the roads, shadows detected as additional vehicles, and the misdetection of long trucks as several vehicles.

F. Sixth Iteration

For this iteration, reflected in Fig. 10, a functionality was included to store the information regarding the processing of the videos for future analysis. To do this, a test file was created in which the behavior of the system is stored in detail.

Fig. 10. Sixth iteration's Architecture

Additionally, a graphical interface was generated, organizing the content of the panels and presenting text boxes that indicate the characteristics of the video and the real-time result of the execution of the application (see Fig. 11).

Fig. 11. User interface of the prototype

G. Seventh Iteration

In this iteration, the system was implemented on a Raspberry Pi 4, with the intention of having a portable solution. For this iteration, some libraries were changed, since they are not compatible with Raspbian (e.g., “from PIL import ImageGrab” was replaced by “import pyscreenshot as ImageGrab”).

V. CONCLUSIONS

The development of this work, based on the agile development methodology known as Iterative Development, went through seven iterations; three of them consisted of solving the raised limitations and building the first version of the prototype. When carrying out the corresponding tests, it was verified that it was necessary to improve the prototype by altering the code and the internal processing of the video. In order to obtain a more accurate vehicle count, two additional iterations were performed, including ideas such as a car detection area and a region of interest (ROI). The final iteration was used to create a portable solution based on the Raspberry Pi.

The developed prototype had an average accuracy of 93.26% on the test videos. The result presented in this work is not directly comparable to previous works, since different videos were used to evaluate the methods; nevertheless, the results present a similar or improved level of accuracy with respect to previous works. We hope that the experience of implementing the prototype shared in this work can be used as a case study and a guide for the implementation of more advanced systems.

ACKNOWLEDGMENT

The authors gratefully acknowledge the financial support provided by the Escuela Politécnica Nacional for the development of the project PVS-2018-023 – “Conteo automático de automóviles”.

REFERENCES

[1] J. Barriga et al., “Smart Parking: A Literature Review from the Technological Perspective,” Appl. Sci., vol. 9(29), 2019.
[2] F. T. Espinoza, B. G. Gabriel and M. J. Barros, “Computer vision classifier and platform for automatic counting: More than cars,” 2017 IEEE Second Ecuador Technical Chapters Meeting (ETCM), 2017, pp. 1-6, doi: 10.1109/ETCM.2017.8247454.

[3] F. Nelli, “Image Analysis and Computer Vision with OpenCV,” in Python Data Analytics, Apress, Berkeley, CA, doi: 10.1007/978-1-4842-3913-1_14.
[4] Y. Li and Y. Zhang, “Application Research of Computer Vision Technology in Automation,” 2020 International Conference on Computer Information and Big Data Applications (CIBDA), 2020, pp. 374-377, doi: 10.1109/CIBDA50819.2020.00090.
[5] J. Asmuth et al., “Multimedia applications of computer vision,” Proceedings Fourth IEEE Workshop on Applications of Computer Vision, WACV'98 (Cat. No.98EX201), 1998, pp. 290-291, doi: 10.1109/ACV.1998.732910.
[6] J. Asmuth et al., “Multimedia applications of computer vision,” Proceedings Fourth IEEE Workshop on Applications of Computer Vision, WACV'98 (Cat. No.98EX201), 1998, pp. 290-291, doi: 10.1109/ACV.1998.732910.
[7] J. Ferreira, “Urban Traffic Management System by Videomonitoring,” Advances in Computational Science, Engineering and Information Technology, pp. 1-9, 2013.
[8] N. Tang, C. Do, T.B. Dinh and T.B. Dinh, “Urban Traffic Monitoring System,” Lecture Notes in Computer Science, vol. 6839, 2012, doi: 10.1007/978-3-642-25944-9_74.
[9] A. Allamehzadeh, M.S. Aminian, M. Mostaed and C. Olaverri-Monreal, “Automatic Vehicle Counting Approach Through Computer Vision for Traffic Management,” Lecture Notes in Computer Science, vol. 10672, 2018, doi: 10.1007/978-3-319-74727-9_48.
[10] R. Kadiķis and K. Freivalds, “Efficient Video Processing Method for Traffic Monitoring Combining Motion Detection and Background Subtraction,” Lecture Notes in Electrical Engineering, vol. 221, 2013, doi: 10.1007/978-81-322-0997-3_12.
[11] M.S. Shirazi and B. Morris, “Vision-Based Vehicle Counting with High Accuracy for Highways with Perspective View,” Lecture Notes in Computer Science, vol. 9475, 2015, doi: 10.1007/978-3-319-27863-6_76.
[12] C. Larman, Agile and Iterative Development: A Manager's Guide. Addison-Wesley, 2004.
