
International Journal of Electrical Electronics & Computer Science Engineering

Special Issue - ICSCAAIT-2018 | E-ISSN : 2348-2273 | P-ISSN : 2454-1222


Available Online at www.ijeecse.com

Vehicle Detection and Counting Method Based on Digital Image Processing in Python

Reha Justin¹, Dr. Ravindra Kumar²
¹Intern, ²Principal Scientist, CSIR-Central Road Research Institute, Transportation Planning Division, Delhi, India
¹[email protected], ²[email protected]

Abstract: The vehicle counting process provides appropriate information about traffic flow, vehicle crash occurrences and traffic peak times on roadways. An acceptable technique to achieve these goals is applying digital image processing methods to roadway camera video outputs. This paper presents a vehicle counter-classifier based on a combination of different video-image processing methods, including object detection, edge detection, frame differentiation and the Kalman filter. An implementation of the proposed technique has been performed using the Python programming language. This paper describes the methodology used for image processing for traffic flow counting and classification using different libraries and algorithms with real-time images.

Keywords: Vehicle Counting, Vehicle Detection, Traffic Analysis, Object Detection, Video-Image Processing.

I. INTRODUCTION

Vehicle detection and counting can be done with non-intrusive sensors or by manual/video methods. Non-intrusive sensors include infrared, magnetic, radar, ultrasonic, acoustic and video imaging sensors, while intrusive sensors include pneumatic road tubes, piezo-electric sensors, magnetic sensors and inductive loops. Non-intrusive technology has advantages over intrusive technology, which requires closing traffic lanes and puts construction workers in harm's way; non-intrusive sensors are mounted above the roadway surface and typically do not require a stop in traffic or a lane closure. Both types of sensors have advantages and disadvantages, but the accuracy of manual counting from video or digital recordings is very high compared to other technologies.

However, manual image-based counting is time consuming and requires some automation to save time in counting and classification. The current era of programming languages such as Python offers many additions for image processing, saving time in vehicle detection, counting and classification. This paper describes the image processing workflow and the types of filters used, and shows that the proposed technique is able to detect, count and classify vehicles accurately.

II. BACKGROUND INFORMATION

A. Video Processing:

Video processing is a subcategory of digital signal processing techniques where the input and output signals are video streams. In computers, one of the best ways to reach video analysis goals is to apply image processing methods to each video frame; in this case, motion is simply detected by comparing sequential frames [7]. Video processing includes pre-filters, which can apply contrast changes and noise elimination along with video frame pixel size conversions [6]. Highlighting particular areas of videos, deleting unsuitable lighting effects, eliminating camera motion and removing edge artifacts can all be performed using video processing methods [29]. The OpenCV library for Python is equipped with functions that allow us to manipulate videos and images. OpenCV-Python makes use of NumPy, a library for numerical operations with a MATLAB-style syntax. All OpenCV array structures are converted to and from NumPy arrays, which also makes it easier to integrate with other libraries that use NumPy, such as SciPy and Matplotlib [34].

B. RGB to Grayscale Conversion:

In video analysis, converting an RGB color image to grayscale mode is done by image processing methods. The main motivation for this conversion is that processing grayscale images can provide more acceptable results than processing the original RGB images [11]. In video processing techniques, the sequence of captured video frames should be transformed from RGB color mode to a 0 to 255 gray level. When converting an RGB image to grayscale mode, the RGB values of each pixel are taken and a single value reflecting the brightness of that pixel is produced as output [2].

C. Power-Law Transformation:

Enhancing an image provides better contrast and more detail compared to a non-enhanced one. There are several image enhancement techniques, such as the power-law transformation, the linear method and the logarithmic method. Image enhancement can be done through one of these grayscale transformations. Among them, the power-law transformation is an appropriate technique, which has the basic form:

V = A·v^γ (1)
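As a hedged NumPy illustration of equation (1) (OpenCV's cv2.pow applies the same element-wise power), the transformation can be sketched as follows — a minimal sketch, not the paper's exact code:

```python
import numpy as np

def power_law(gray, gamma=0.6, A=1.0):
    """Equation (1): V = A * v**gamma, applied on gray levels
    normalized to [0, 1] and rescaled back to the 0-255 range."""
    v = gray.astype(float) / 255.0
    V = A * np.power(v, gamma)
    return (V * 255.0).astype(np.uint8)

# A gamma below 1 brightens dark regions: gray level 64 maps upward
frame = np.array([[0, 64, 128, 255]], dtype=np.uint8)
enhanced = power_law(frame, gamma=0.6)
```

Black (0) and white (255) are fixed points of the transformation, while mid-dark levels are lifted, which is why γ < 1 improves contrast in dim roadway footage.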


Where V and v are the output and input gray levels, γ is the gamma value and A is a positive constant (in the common case, A = 1). The Python code that implements the power-law transformation is:

power_law_transformation = cv2.pow(gray, 0.6)

The second argument is the gamma value. Consequently, choosing the proper value of γ plays an important role in the image enhancement process and in making suitable details identifiable in the image.

D. Canny Edge Detection:

Object detection can be performed using image matching functions and edge detection. Edges are points in digital images at which the image brightness or gray level changes suddenly [33]. The main task of edge detection is locating all pixels of the image that correspond to the edges of the objects seen in the image. Among the different edge detection methodologies, the Canny algorithm is a simple and powerful edge detection method. Since edge detection is susceptible to noise in the image, the first step is to remove the noise with a 5x5 Gaussian filter. The smoothened image is then filtered with a Sobel kernel in both the horizontal and vertical directions to get the first derivatives in the horizontal direction (Gx) and the vertical direction (Gy) [9]. From these two images, we can find the edge gradient and direction for each pixel as follows:

Edge_Gradient(G) = √(Gx² + Gy²) (2)

Angle(θ) = tan⁻¹(Gy/Gx) (3)

The gradient direction is always perpendicular to edges. It is rounded to one of four angles representing the vertical, horizontal and two diagonal directions. After getting the gradient magnitude and direction, a full scan of the image is done to remove any unwanted pixels which may not constitute an edge. For this, every pixel is checked to see whether it is a local maximum in its neighborhood in the direction of the gradient. OpenCV puts all of the above in a single function, cv2.Canny() [12].

E. The Kalman Filter:

Images typically have a lot of speckles caused by noise, which should be removed by means of filtration. The Kalman filter is a powerful and useful tool that estimates a process using some kind of feedback information [14]; it provides an improved estimate based on a series of noisy estimates. This filter specifies that the fundamental process must be modeled by a linear dynamical structure:

xk = Fk-1·xk-1 + wk-1 (4)

yk = Hk·xk + vk (5)

Where xk and yk are the state and measurement vectors, wk and vk are the process and measurement noise, Fk and Hk are the transition and measurement matrices, and k is the desired time step [28]. The Kalman filter also specifies that the measurements and the error terms follow a Gaussian distribution, which means that in vehicle detection each vehicle can only be tracked by one Kalman filter [22], [31]. Therefore the number of Kalman filters applied to each video frame depends on the number of detected vehicles.

III. PREVIOUS WORKS

Using image/video processing and object detection methods for vehicle detection and traffic flow estimation has attracted a huge amount of attention for several years. Vehicle detection/tracking processes have been performed using one of these methodologies [8]:

 Matching
 Threshold and segmentation
 Point detection
 Edge detection
 Frame differentiation
 Optical flow methods

It can be said that one of the most important research efforts in the object detection field, which resulted in the Autoscope video detection system, is introduced in [15]. In some works, such as [21], a forward and backward image differencing method is used to extract moving vehicles in a roadway view. Some studies, like [17] and [4], proved that the use of feature vectors from image regions can be extremely efficient for vehicle detection goals. Others presented accurate vehicle dimension estimation using a set of coordinate mapping functions, as can be seen in [16]. Furthermore, some studies have developed a variety of boosting algorithms for object detection using machine learning methods which can detect and classify moving objects by both type and color, such as [18] and [19]. The named approaches all have their advantages and disadvantages.

IV. PROPOSED TECHNIQUE

Different from previous works, the method proposed in this paper uses a combination of both "Frame Differentiation" and "Edge Detection" algorithms to provide better quality and accuracy for vehicle detection. By using the Kalman filter, the position of each vehicle is estimated and tracked correctly. This filter is also used to classify detected vehicles into different specified groups and count them separately, providing useful information for traffic flow analysis. The


flowchart of the method is represented in Figure 1.

[Figure 1: flowchart — Video Frames → Edge Detection → Motion Analysis → Detection Zone Definition → The Kalman Filter → Classification → Counting → Result]

Fig. 1. Flowchart of the Technique

Based on Figure 1, the technique includes these steps: image enhancement, edge detection, motion analysis using a combination of different techniques, detection zone definition, the Kalman filter, and vehicle type classification and counting. It is necessary to say that some assumptions are made in this work:

 No sudden changes of direction are expected
 No car accidents or crashes are expected
 There are both physical and legal size limitations for vehicles
 Motion scenes are captured with a view from above the roadway surface

The proposed technique to detect and count vehicles is presented below.

A. Grayscale Image Generation and Image Enhancement:

To get better results, the vehicle detection process should be performed in the grayscale image domain. Hence an RGB to grayscale conversion is performed on each video frame. To achieve an appropriate threshold level and make the results more suitable than the input image, each frame should be brought into contrast with the background. Among several grayscale transformations, the power-law method has been used in this work. For color conversion we use the function cv2.cvtColor(input_image, flag), where flag determines the type of conversion; to convert to grayscale we use the flag cv2.COLOR_BGR2GRAY. Experimental results in different situations showed that the best results appear when the γ value is set to 0.6, as can be seen in Figure 2. This figure shows the result of applying different γ values to the grayscale converted image, where section A is the input RGB color frame and B and C are grayscale versions with gamma values 0.6 and 0.9, respectively. The implementation of the Figure 2 results can be obtained by using the Python code shown in Figure 3.

Fig. 2. Input RGB Video Frame (A) and Grayscale Converted With Different γ Values (B and C)

Fig. 3. Code for Conversion from RGB to Grayscale and Image Enhancement

B. Edge Detection:

Each image (video frame) has three significant features for achieving detection goals: edges, contours and points. Among the mentioned features, an appropriate option is to use edge pixels. Processing of image pixels enables us to find edge pixels, which are the main features of passing vehicles in a roadway video frame. One of the most common ways to find the edges of an image is to use the Canny operator, which has been used in this work. The result is presented in Figure 4 and the corresponding code is presented in Figure 5. As can be seen, the output of the edge detection process is a binary (thresholded) image containing the detected edge pixels.

Fig. 4. A: Original Image B: Edge Detection Result

Fig. 5. Code for Canny Edge Detection


The next step is to extract moving edges from sequential
video frames and process the resulting edge information
to obtain quantitative geometric measurements of passing
vehicles.
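The gradient computation of equations (2) and (3) can be sketched in NumPy; here np.gradient stands in for the Sobel filtering that cv2.Canny performs internally, so this is a hedged illustration of the equations rather than the library's implementation:

```python
import numpy as np

def edge_gradient(gray):
    """Equations (2) and (3): per-pixel gradient magnitude and direction.
    np.gradient approximates the Sobel derivative step of Canny."""
    gy, gx = np.gradient(gray.astype(float))  # derivatives along rows and columns
    magnitude = np.sqrt(gx ** 2 + gy ** 2)    # Edge_Gradient(G)
    angle = np.arctan2(gy, gx)                # Angle(theta), in radians
    return magnitude, angle

# A frame with a vertical step edge between columns 3 and 4
frame = np.zeros((6, 8))
frame[:, 4:] = 255.0
mag, ang = edge_gradient(frame)
# The magnitude peaks at the step and is zero in the flat regions;
# the angle there is 0, i.e. the gradient points horizontally,
# perpendicular to the vertical edge.
```

In the full Canny pipeline this map is then thinned by non-maximum suppression along the gradient direction and binarized by hysteresis thresholding.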
C. Background Subtraction:

Using the provided threshold, the static parts of sequential video frames should be cleaned. The main challenge here is that the performance of image analysis algorithms suffers from darkness, glare, long shadows or bad illumination at night, which may cause strong noise [3], [13]. Therefore, the grayscale image might be unspecified in these situations, making the detection task a bit more complex. Edges essentially separate two regions: the static region (the roadway) and the dynamic region (moving vehicles). The static background is then deleted to locate moving objects in each frame. The result leaves only vehicles and some details as moving objects in sequential images, which change from frame to frame. A combination of the forward and backward image differencing method and the Sobel edge detector has been used in this work. According to this method, three sequential frames are chosen and the middle one is compared to its previous and next frames. Consequently, the edges of each frame extracted by the Canny edge detection of the previous section are used here. Then the differences between frames can be obtained by subtracting each sequential pair of generated binary images, as in equation (6):

BinaryImage(Canny(Fn-1) ∩ Canny(Fn)) − BinaryImage(Canny(Fn) ∩ Canny(Fn+1)) (6)

Where Fn-1 is the previous frame, Fn is the current frame and Fn+1 is the next frame. This process continues to the last three sequential video frames. The output result is demonstrated in Figure 6 and the Python code is presented in Figure 7. In Figure 6, A, B and C represent three sequential frames, while D demonstrates the output of the background subtraction method. Using this technique, moving vehicles are detected across three sequential frames.

Fig. 6. Proposed Moving Vehicle Detection Technique and Background Subtraction (A, B, C and D)

Fig. 7. Code for Background Subtraction

D. Detection Zone:

As an observation (detection) zone, a region should be defined to display a moving vehicle's edges in a bounding box at the time the vehicle enters it. This zone is in the middle of the screen and covers 1/3 of its height and 3/5 of its width (considering the minimum and maximum sizes of detectable passing vehicles in pixels). This area, which contains the most traffic, can embed both small and long vehicles, and the main goal of defining it is to avoid perspective challenges and wrong type counts. Based on the method proposed in the background subtraction step, a vehicle is detected in three sequential frames. When a moving vehicle is detected, a bounding box enclosing the vehicle's borders is drawn in the binary image.
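The frame-differencing core of equation (6) can be sketched on binary edge maps as follows — a minimal NumPy sketch, assuming the Canny outputs have already been thresholded to 0/1 arrays (the paper's own Figure 7 code is not reproduced here):

```python
import numpy as np

def moving_edges(prev_e, curr_e, next_e):
    """Equation (6): the intersection of the previous and current edge
    maps minus the intersection of the current and next edge maps."""
    forward = np.logical_and(prev_e, curr_e).astype(int)   # Canny(Fn-1) ∩ Canny(Fn)
    backward = np.logical_and(curr_e, next_e).astype(int)  # Canny(Fn) ∩ Canny(Fn+1)
    return np.clip(forward - backward, 0, 1)               # keep the result binary

# Three tiny edge maps of an object moving right along one row
f_prev = np.array([[1, 0, 0, 0]])
f_curr = np.array([[1, 1, 0, 0]])
f_next = np.array([[0, 1, 1, 0]])
diff = moving_edges(f_prev, f_curr, f_next)
```

Static edges appear in both intersections and cancel out, so only pixels whose edge membership changes between the frame pairs survive the subtraction.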
E. The Kalman Filter:
The bounding boxes could also be used to count and
classify passing vehicles. This can be done by the Kalman
filtering technique. In roadway videos, the edge detection


function provides an inaccurate position of moving vehicles, but the knowledge of each vehicle's current position needs to be improved. Since perfect measurements cannot be guaranteed due to the movement of objects, the measurements should be filtered to produce the best estimate of the accurate track.

The Kalman filter can optimally estimate the current position of each vehicle and also predict the location of vehicles in future video frames by minimizing noise disorders. It is also used to stop tracking vehicles proceeding in the opposite direction in the captured roadway video. Although edge detection can find moving objects, the Kalman filter makes an optimal estimate of positions based on a sequence of localization measurements.

The linear Kalman filter is simpler and is used in the proposed technique. Consider parameter A as the area of a vehicle's bounding box, which has been detected in the frame differentiation phase, and p(x, y) as the center point of the vehicle, where x and y are its distances from the horizontal and vertical edges. Integrating these parameters into equations (4) and (5) results in the following vectors [30]:

xk = [x, y, A, vx, vy, vA]^T (7)

yk = [x, y, A]^T (8)

Where vA is the rate of change of the vehicle's bounding box area, and vx and vy are the rates of change of the movement of the vehicle's center point. Subsequently, using the Kalman filtering technique, the position of each vehicle can be estimated and tracked better. Finally, an identifier is allocated to each passing vehicle for counting and classification purposes.

The Kalman filter is an unsupervised algorithm for tracking a single object in a continuous state space. Given a sequence of noisy measurements, the Kalman filter is able to recover the "true state" of the underlying object being tracked. It is implemented using the pykalman library of Python.

Sample code:

from pykalman import KalmanFilter
kf = KalmanFilter(initial_state_mean=0, n_dim_obs=2)

The traditional Kalman filter assumes that model parameters are known beforehand. The KalmanFilter class, however, can learn parameters using KalmanFilter.em() (fitting is optional). Then the hidden sequence of states can be predicted using KalmanFilter.smooth():

measurements = [[1,0], [0,0], [0,1]]
kf.em(measurements).smooth([[2,0], [2,1], [2,2]])[0]

array([[ 0.85819709],
       [ 1.77811829],
       [ 2.19537816]])

Common uses for the Kalman filter include radar and sonar tracking and state estimation in robotics. This module implements two algorithms for tracking: the Kalman filter and the Kalman smoother. In addition, model parameters which are traditionally specified by hand can also be learned by the implemented EM algorithm without any labeled training data. All three algorithms are contained in the KalmanFilter class of this module.

F. Counting and Classification Functions:

Vehicle counters are used in computing capacity, establishing structural design criteria and computing expected roadway user revenue [10]. In the proposed technique, vehicles are classified into four common types:

 Type 1: bicycles, motorcycles
 Type 2: motorcars
 Type 3: pickups, minibuses
 Type 4: buses, trucks, trailers

It is necessary to have the width and length of each vehicle's bounding box in pixels to diagnose which of the mentioned types a passing vehicle belongs to. The area of each bounding box determines which type should be allocated to the vehicle. Each vehicle type can be shown by a special rectangle color: Type 1 is represented by red, while Types 2, 3 and 4 are characterized by green, blue and yellow rectangles, respectively.

In the counting step, four isolated counters are used, one for each vehicle type, and a total counter is needed to store their sum. All counters should count only the vehicles which are passing in a specific direction, so if a vehicle stops, turns or moves in the wrong direction in the detection zone, it should not be counted. In this technique, counting is based on the number of moving vehicles detected in the detection zone and classified into one of the mentioned groups.

The total of passed vehicles, which is shown in yellow, helps to analyze traffic flow over a period of time. Also, by calculating the bounding boxes' heights and widths in pixels, vehicle types can be distinguished and counted by the related counters. Furthermore, the edges of counted vehicles are covered with green rectangles, which shows that they belong to Type 2 (even the green numbers inside the bounding boxes confirm this result).
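The predict/update cycle of equations (4)-(5) with the state and measurement vectors of equations (7)-(8) can be sketched in plain NumPy (a hedged, minimal sketch — the noise covariances Q and R below are assumed values, not the authors' tuning, and pykalman wraps this same cycle):

```python
import numpy as np

# State x = [x, y, A, vx, vy, vA]^T as in equation (7); the detector
# supplies a measurement z = [x, y, A]^T as in equation (8).
dt = 1.0
F = np.eye(6)
F[0, 3] = F[1, 4] = F[2, 5] = dt        # transition matrix of equation (4)
H = np.zeros((3, 6))
H[0, 0] = H[1, 1] = H[2, 2] = 1.0       # measurement matrix of equation (5)
Q = np.eye(6) * 1e-4                    # process noise covariance (assumed)
R = np.eye(3) * 1e-2                    # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle of the linear Kalman filter."""
    x = F @ x                                       # predict, equation (4)
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    x = x + K @ (z - H @ x)                         # correct, equation (5)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Track a vehicle whose center moves 2 px/frame right and 1 px/frame
# down, with a constant bounding-box area of 50 px^2
x, P = np.zeros(6), np.eye(6)
for k in range(1, 11):
    x, P = kalman_step(x, P, np.array([2.0 * k, 1.0 * k, 50.0]))
# x now holds the estimated position and area plus their rates of change
```

One such filter would be run per detected vehicle, matching the paper's statement that the number of Kalman filters per frame equals the number of detected vehicles.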


V. CONCLUSION

In this paper a methodology based on the Python programming language has been proposed. Python has very good libraries, such as NumPy, Matplotlib and SciPy, which can help to count traffic, classify it and save the engineer's time. Traffic flow is basic data for the transportation planning process, and processing it accurately within a limited time frame is a challenging task for transportation and highway engineers. This tool will be very useful in field applications, from road construction design and traffic planning points of view.

VI. REFERENCES

[1] D. Beymer, P. McLauchlan, B. Coifman, J. Malik, "A Real-time Computer Vision System for Measuring Traffic Parameters," IEEE Conference on Computer Vision and Pattern Recognition, 1997.

[2] M. Fathy, M. Y. Siyal, "An Image Detection Technique Based on Morphological Edge Detection and Background Differencing for Real-time Traffic Analysis," Pattern Recognition Letters, Vol. 16, pp. 1321-1330, 1995.

[3] V. Kastrinaki, M. Zervakis, K. Kalaitzakis, "A Survey of Video Processing Techniques for Traffic Applications," Image and Vision Computing, Vol. 21, pp. 359-381, 2003.

[4] D. A. Forsyth, J. Ponce, "Computer Vision: A Modern Approach," Prentice Hall, 2003.

[5] T. R. Currin, "Turning Movement Counts," in Introduction to Traffic Engineering: A Manual for Data Collection and Analysis, Stamford: Wadsworth Group, pp. 13-23, 2001.

[6] W. Yao, J. Ostermann, Y. Q. Zhang, "Video Processing and Communications," Signal Processing Series, ISBN: 0-13-017547-1, Prentice Hall, 2002.

[7] P. Choudekar, S. Banerjee, M. K. Muju, "Real Time Traffic Light Control Using Image Processing," Indian Journal of Computer Science and Engineering, Vol. 2, No. 1, ISSN: 0976-5166.

[8] N. Chintalacheruvu, V. Muthukumar, "Video Based Vehicle Detection and Its Application in Intelligent Transportation Systems," Journal of Transportation Technologies, Vol. 2, pp. 305-314, 2012.

[9] R. Milanese, S. Gil, T. Pun, "Comparing Features for Target Tracking in Traffic Scenes," Pattern Recognition, Vol. 29, No. 8, pp. 1285-1296, 1996.

[10] E. Atkociunas, R. Blake, A. Juozapavicius, M. Kazimianec, "Image Processing in Road Traffic Analysis," Nonlinear Analysis: Modelling and Control, Vol. 10, No. 4, pp. 315-332, 2005.

[11] X. Fu, Z. Wang, D. Liang, J. Jiang, "The Extraction of Moving Object in Real-Time Web-Based Video Sequence," 8th International Conference on Digital Object Identifier, Vol. 1, pp. 187-190, 2004.

[12] V. Khorramshahi, A. Behrad, N. K. Kanhere, "Over-Height Vehicle Detection in Low Headroom Roads Using Digital Video Processing," World Academy of Science, Engineering and Technology, 2008.

[13] J. Zhou, D. Gao, D. Zhang, "Moving Vehicle Detection for Automatic Traffic Monitoring," IEEE Transactions on Vehicular Technology, Vol. 56, No. 1, 2007.

[14] G. Welch, G. Bishop, "An Introduction to the Kalman Filter," The University of North Carolina at Chapel Hill, 2006.

[15] P. G. Michalopoulos, "Vehicle Detection Video Through Image Processing: The Autoscope System," IEEE Transactions on Vehicular Technology, Vol. 40, No. 1, 1991.

[16] A. H. S. Lai, G. S. K. Fung, N. H. C. Yung, "Vehicle Type Classification from Visual-Based Dimension Estimation," IEEE Intelligent Transportation Systems Conference, pp. 201-206, 2001.

[17] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, pp. 91-110, 2004.

[18] P. T. Martin, G. Dharmavaram, A. Stevanovic, "Evaluation of UDOT's Video Detection Systems: System's Performance in Various Test Conditions," Report No: UT-04.14, 2004.

[19] O. Hasegawa, T. Kanade, "Type Classification, Color Estimation, and Specific Target Detection of Moving Targets on Public Streets," Machine Vision and Applications, Vol. 16, No. 2, pp. 116-121, 2005.

[20] R. Cucchiara, M. Piccardi, P. Mello, "Image Analysis and Rule-based Reasoning for a Traffic Monitoring System," IEEE Transactions on Intelligent Transportation Systems, Vol. 1, Issue 2, pp. 119-130, 2000.

[21] Q. Cai, A. Mitiche, J. K. Aggarwal, "Tracking Human Motion in an Indoor Environment," International Conference on Image Processing, USA, Vol. 1, pp. 215-218, 1995.

[22] T. Le, M. Combs, Q. Yang, "Vehicle Tracking based on Kalman Filter Algorithm," Technical Reports Published by the MSU Department of Computer Science, ID: MSU-CS-2013-02, 2013.

[23] Sh. Agarwal, A. Awan, D. Roth, "Learning to Detect Objects in Images via a Sparse, Part-based Representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004.

[24] Sh. Agarwal, D. Roth, "Learning a Sparse Representation for Object Detection," European Conference on Computer Vision, Vol. 1, ISBN: 978-3-540-43748-2, pp. 113-127, 2002.

[25] A. Torralba, K. P. Murphy, W. T. Freeman, "Shared Features for Multiclass Object Detection," Toward Category-Level Object Recognition, ISBN: 978-3-540-68794-8, pp. 345-361, 2006.

[26] L. Bohang, L. Qingbing, C. Duiyong, S. Hailong, "Pattern Recognition of Vehicle Types and Reliability Analysis of Pneumatic Tube Test Data under Mixed Traffic Condition," 2nd International Asia Conference on Informatics in Control, Automation and Robotics, ISSN: 1948-3414, pp. 44-47, 2010.

[27] L. Feng, W. Liu, B. Chen, "Driving Pattern Recognition for Adaptive Hybrid Vehicle Control," SAE 2012 World Congress and Exhibition, pp. 169-179, 2012.

[28] D. Simon, "Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches," Wiley-Interscience, 2006.

[29] R. Gonzalez, R. E. Woods, "Digital Image Processing," 2nd Edition, Prentice-Hall, 2002.

[30] S. Siang Teoh, T. Bräunl, "A Reliability Point and Kalman Filter-based Vehicle Tracking Technique," International Conference on Intelligent Systems, pp. 134-138, 2012.

[31] K. Markus, "Using the Kalman Filter to Track Human Interactive Motion: Modeling and Initialization of the Kalman Filter for Translational Motion," Technical Report, University of Dortmund, Germany, 1997.

[32] D. Nagamalai, E. Renault, M. Dhanuskodi, "Implementation of LabVIEW Based Intelligent System for Speed Violated Vehicle Detection," First International Conference on Digital Image Processing and Pattern Recognition, ISSN: 1865-0929, pp. 23-33, 2011.

[33] I. E. Igbinosa, "Comparison of Edge Detection Technique in Image Processing Techniques," International Journal of Information Technology and Electrical Engineering, ISSN: 2306-708X, Vol. 2, Issue 1, 2013.

[34] G. Bradski, A. Kaehler, "Learning OpenCV: Computer Vision with the OpenCV Library," O'Reilly Media, 2008.
