Traffic Analysis Based On Digital Image Processing in Python
Abstract: The vehicle counting process provides appropriate information about traffic flow, vehicle crash occurrences and traffic peak times on roadways. An acceptable technique for achieving these goals is applying digital image processing methods to roadway camera video outputs. This paper presents a vehicle counter-classifier based on a combination of different video-image processing methods, including object detection, edge detection, frame differentiation and the Kalman filter. An implementation of the proposed technique has been performed using the Python programming language. This paper describes the methodology used for image-processing-based traffic flow counting and classification using different libraries and algorithms on real-time images.

Keywords: Vehicle Counting, Vehicle Detection, Traffic Analysis, Object Detection, Video-Image Processing.

I. INTRODUCTION

Vehicle detection and counting are done with non-intrusive sensors (infrared, magnetic, radar, ultrasonic, acoustic, and video imaging sensors) or with intrusive sensors, which include pneumatic road tubes, piezoelectric sensors, magnetic sensors, and inductive loops. Non-intrusive technology has advantages over intrusive technology, which requires closing traffic lanes and puts construction workers in harm's way; non-intrusive sensors are mounted above the roadway surface and typically do not require a stop in traffic or a lane closure. Both types of sensors have advantages and disadvantages, but the accuracy of manual counting from video or digital footage is very high compared to other technologies.

However, such image processing is time consuming and requires some automation to save time in image counting and classification. In the current era, Python-like programming languages have added many image processing capabilities, saving time in vehicle detection, counting and classification. This paper describes the image processing approach, the types of filters used, and a proposed technique able to detect, count and classify vehicles in images accurately.

II. BACKGROUND INFORMATION

A. Video Processing:

Video processing is a subcategory of Digital Signal Processing techniques where the input and output signals are video streams. In computers, one of the best ways to reach video analysis goals is to use image processing methods on each video frame. In this case, motions are simply detected by comparing sequential frames[7]. Video processing includes pre-filters, which can perform contrast changes and noise elimination, along with video frame pixel size conversions[6]. Highlighting particular areas of videos, deleting unsuitable lighting effects, eliminating camera motions and removing edge artifacts can all be done using video processing methods[29]. The OpenCV library for Python is equipped with functions that allow us to manipulate videos and images. OpenCV-Python makes use of Numpy, which is a library for numerical operations with a MATLAB-style syntax. All the OpenCV array structures are converted to and from Numpy arrays. This also makes it easier to integrate with other libraries that use Numpy, such as SciPy and Matplotlib[34].

B. RGB to Grayscale Conversion:

In video analysis, converting an RGB color image to grayscale mode is done with image processing methods. The main reason for this conversion is that processing grayscale images can provide more acceptable results in comparison to the original RGB images[11]. In video processing techniques, the sequence of captured video frames should be transformed from RGB color mode to a 0 to 255 gray level. When converting an RGB image to grayscale mode, the RGB values of each pixel are taken, and a single value reflecting the brightness of that pixel is produced as the output[2].

C. Power-Law Transformation:

Enhancing an image provides better contrast and a more detailed image compared to a non-enhanced one. There are several image enhancement techniques, such as power-law transformation, the linear method and the logarithmic method. Image enhancement can be done through one of these grayscale transformations. Among them, the power-law transformation is an appropriate technique, which has the basic form below:

V = A·v^γ (1)
141
International Journal of Electrical Electronics & Computer Science Engineering
Special Issue - ICSCAAIT-2018 | E-ISSN : 2348-2273 | P-ISSN : 2454-1222
Available Online at www.ijeecse.com
Where V and v are the output and input gray levels, γ is the gamma value and A is a positive constant (in the common case, A = 1). The Python code that implements the power-law transformation is:

power_law_transformation = cv2.pow(gray, 0.6)

The second argument is the gamma value. Consequently, choosing a proper value of γ plays an important role in the image enhancement process and in keeping suitable details identifiable in the image.

D. Canny Edge Detection:

Object detection can be performed using image matching functions and edge detection. Edges are points in digital images at which the image brightness or gray level changes suddenly[33]. The main task of edge detection is locating all pixels of the image that correspond to the edges of the objects seen in the image. Among the different edge detection methodologies, the Canny algorithm is a simple and powerful edge detection method. Since edge detection is susceptible to noise in the image, the first step is to remove the noise with a 5x5 Gaussian filter. The smoothed image is then filtered with a Sobel kernel in both the horizontal and vertical directions to get the first derivatives in the horizontal direction (Gx) and the vertical direction (Gy)[9]. From these two images, we can find the edge gradient and direction for each pixel as follows:

Edge_Gradient(G) = √(Gx² + Gy²) (2)

Angle(θ) = tan⁻¹(Gy/Gx) (3)

The gradient direction is always perpendicular to edges. It is rounded to one of four angles representing the vertical, horizontal and two diagonal directions. After getting the gradient magnitude and direction, a full scan of the image is done to remove any unwanted pixels which may not constitute an edge. For this, every pixel is checked to see whether it is a local maximum in its neighborhood in the direction of the gradient. OpenCV puts all of the above into a single function, cv2.Canny() [12].

E. The Kalman Filter:

Images typically have a lot of speckles caused by noise, which should be removed by means of filtration. The Kalman filter is a powerful and useful tool for estimating a process using some kind of feedback information[14]. The Kalman filter is used to provide an improved estimate based on a series of noisy estimates. This filter specifies that the underlying process must be modeled by a linear dynamical structure:

xk = Fk-1 xk-1 + wk-1 (4)

yk = Hk xk + vk (5)

Where xk and yk are the state and measurement vectors, wk and vk are the process and measurement noise, Fk and Hk are the transition and measurement matrices, and k is the desired time step[28]. The Kalman filter also specifies that the measurements and the error terms follow a Gaussian distribution, which means that in vehicle detection each vehicle can only be tracked by one Kalman filter [22],[31]. Therefore the number of Kalman filters applied to each video frame depends on the number of detected vehicles.

III. PREVIOUS WORKS

Using image/video processing and object detection methods for vehicle detection and traffic flow estimation purposes has attracted huge attention for several years. Vehicle detection/tracking processes have been performed using one of these methodologies[8]:

Matching
Threshold and segmentation
Point detection
Edge detection
Frame differentiation
Optical flow methods

It can be said that one of the most important research efforts in the object detection field, which resulted in the Autoscope video detection system, is introduced in [15]. In some works such as [21], a forward and backward image differencing method was used to extract moving vehicles in a roadway view. Some studies like [17] and [4] proved that the use of feature vectors from image regions can be extremely efficient for vehicle detection goals. Others presented accurate vehicle dimension estimation using a set of coordinate mapping functions, as can be seen in [16]. Furthermore, some studies have developed a variety of boosting algorithms for object detection using machine learning methods which can detect and classify moving objects by both type and color, such as [18] and [19]. The named approaches all have their advantages and disadvantages.

IV. PROPOSED TECHNIQUE

Different from previous works, the method proposed in this paper uses a combination of both "Frame Differentiation" and "Edge Detection" algorithms to provide better quality and accuracy for vehicle detection. By using the Kalman filter, the position of each vehicle is estimated and tracked correctly. This filter is also used to classify detected vehicles into different specified groups and count them separately, providing useful information for traffic flow analysis. The flowchart of the method is represented in Figure 1.

Fig. 1. Flowchart of the proposed method (stages include Video Frames, Detection Zone Definition and Classification)

In Figure 2, frames B and C are grayscale versions with gamma values 0.6 and 0.9, respectively. The results of Figure 2 can be obtained by using the Python code shown in Figure 3.

Fig. 2. Input RGB Video Frame (A) and Grayscale Converted with Different γ Values (B and C)
The traditional Kalman filter assumes that the model parameters are known beforehand. The KalmanFilter class (from the pykalman library), however, can learn parameters using KalmanFilter.em() (fitting is optional). Then the hidden sequence of states can be predicted using KalmanFilter.smooth():

from pykalman import KalmanFilter
kf = KalmanFilter(initial_state_mean=0, n_dim_obs=2)
measurements = [[1,0], [0,0], [0,1]]
kf.em(measurements).smooth([[2,0], [2,1], [2,2]])[0]

The total number of passed vehicles, which is shown in yellow, helps to analyze traffic flow over a period of time. Also, by calculating the bounding box height and width in pixels, vehicle types can be distinguished and counted by the related counters. Furthermore, for both counted vehicles, the edges are covered with green rectangles, which shows that they belong to Type 2 (the green numbers inside the bounding boxes confirm this result).
REFERENCES:

[1] D. Beymer, P. McLauchlan, B. Coifman, J. Malik, "A Real-time Computer Vision System for Measuring Traffic Parameters," IEEE Conference on Computer Vision and Pattern Recognition, 1997.

[2] M. Fathy, M. Y. Siyal, "An Image Detection Technique Based on Morphological Edge Detection and Background Differencing for Real-time Traffic Analysis," Pattern Recognition Letters, Vol. 16, pp. 1321-1330, 1995.

[3] V. Kastrinaki, M. Zervakis, K. Kalaitzakis, "A Survey of Video Processing Techniques for Traffic Applications," Image and Vision Computing, Vol. 21, pp. 359-381, 2003.

[4] D. A. Forsyth, J. Ponce, "Computer Vision: A Modern Approach," Prentice Hall, 2003.

[5] T. R. Currin, "Turning Movement Counts," in Introduction to Traffic Engineering: A Manual for Data Collection and Analysis, Stamford: Wadsworth Group, pp. 13-23, 2001.

[6] W. Yao, J. Ostermann, Y. Q. Zhang, "Video Processing and Communications," Signal Processing Series, ISBN: 0-13-017547-1, Prentice Hall, 2002.

[7] P. Choudekar, S. Banerjee, M. K. Muju, "Real Time Traffic Light Control Using Image Processing," Indian Journal of Computer Science and Engineering, Vol. 2, No. 1, ISSN: 0976-5166.

[8] N. Chintalacheruvu, V. Muthukumar, "Video Based Vehicle Detection and Its Application in Intelligent Transportation Systems," Journal of Transportation Technologies, Vol. 2, pp. 305-314, 2012.

[9] S. Gil, R. Milanese, T. Pun, "Comparing Features for Target Tracking in Traffic Scenes," Pattern Recognition, Vol. 29, No. 8, pp. 1285-1296, 1996.

[13] J. Zhou, D. Gao, D. Zhang, "Moving Vehicle Detection for Automatic Traffic Monitoring," IEEE Transactions on Vehicular Technology, Vol. 56, No. 1, 2007.

[14] G. Welch, G. Bishop, "An Introduction to the Kalman Filter," The University of North Carolina at Chapel Hill, 2006.

[15] P. G. Michalopoulos, "Vehicle Detection Video Through Image Processing: The Autoscope System," IEEE Transactions on Vehicular Technology, Vol. 40, No. 1, 1991.

[16] A. H. S. Lai, G. S. K. Fung, N. H. C. Yung, "Vehicle Type Classification from Visual-Based Dimension Estimation," IEEE Intelligent Transportation Systems Conference, pp. 201-206, 2001.

[17] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, pp. 91-110, 2004.

[18] P. T. Martin, G. Dharmavaram, A. Stevanovic, "Evaluation of UDOT's Video Detection Systems: System's Performance in Various Test Conditions," Report No: UT-04.14, 2004.

[19] O. Hasegawa, T. Kanade, "Type Classification, Color Estimation, and Specific Target Detection of Moving Targets on Public Streets," Machine Vision and Applications, Vol. 16, No. 2, pp. 116-121, 2005.

[20] R. Cucchiara, M. Piccardi, P. Mello, "Image Analysis and Rule-based Reasoning for a Traffic Monitoring System," IEEE Transactions on Intelligent Transportation Systems, Vol. 1, Issue 2, pp. 119-130, 2000.

[21] Q. Cai, A. Mitiche, J. K. Aggarwal, "Tracking Human Motion in an Indoor Environment," International Conference on Image Processing, USA, Vol. 1, pp. 215-218, 1995.
[22] T. Le, M. Combs, Q. Yang, "Vehicle Tracking based on Kalman Filter Algorithm," Technical Reports Published by the MSU Department of Computer Science, ID: MSU-CS-2013-02, 2013.

[23] Sh. Agarwal, A. Awan, D. Roth, "Learning to Detect Objects in Images via a Sparse, Part-based Representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004.

[24] Sh. Agarwal, D. Roth, "Learning a Sparse Representation for Object Detection," European Conference on Computer Vision, Vol. 1, ISBN: 978-3-540-43748-2, pp. 113-127, 2002.

[25] A. Torralba, K. P. Murphy, W. T. Freeman, "Shared Features for Multiclass Object Detection," Toward Category-Level Object Recognition, ISBN: 978-3-540-68794-8, pp. 345-361, 2006.

[26] L. Bohang, L. Qingbing, C. Duiyong, S. Hailong, "Pattern Recognition of Vehicle Types and Reliability Analysis of Pneumatic Tube Test Data under Mixed Traffic Condition," 2nd International Asia Conference on Informatics in Control, Automation and Robotics, ISSN: 1948-3414, pp. 44-47, 2010.

[27] L. Feng, W. Liu, B. Chen, "Driving Pattern Recognition for Adaptive Hybrid Vehicle Control," SAE 2012 World Congress and Exhibition, pp. 169-179, 2012.

[28] D. Simon, "Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches," Wiley-Interscience, 2006.

[29] R. Gonzalez, R. E. Woods, "Digital Image Processing," 2nd Edition, Prentice-Hall, 2002.

[30] S. Siang Teoh, T. Bräunl, "A Reliability Point and Kalman Filter-based Vehicle Tracking Technique," International Conference on Intelligent Systems, pp. 134-138, 2012.

[31] K. Markus, "Using the Kalman Filter to Track Human Interactive Motion: Modeling and Initialization of the Kalman Filter for Translational Motion," Technical Report, University of Dortmund, Germany, 1997.

[32] D. Nagamalai, E. Renault, M. Dhanuskodi, "Implementation of LabVIEW Based Intelligent System for Speed Violated Vehicle Detection," First International Conference on Digital Image Processing and Pattern Recognition, ISSN: 1865-0929, pp. 23-33, 2011.

[33] I. E. Igbinosa, "Comparison of Edge Detection Technique in Image Processing Techniques," International Journal of Information Technology and Electrical Engineering, ISSN: 2306-708X, Vol. 2, Issue 1, 2013.

[34] G. Bradski, A. Kaehler, "Learning OpenCV: Computer Vision with the OpenCV Library," O'Reilly Media, 2008.