Traffic Flow Detection Using LabVIEW
$Y_j = \sum_{i=0}^{n} X_{i,j}$    (1)

$T = \sum_{j=1}^{n} Y_j$    (2)
where $X_{i,j}$ represents the number of vehicles of type $i$ detected in step $j$ ($i$ denotes the vehicle type), $Y_j$ represents the number of vehicles of all types detected in one step, and $T$ represents the overall number of vehicles, of all types, after all steps have been executed. $T$ is the final value provided by the application. The software procedure can be seen in Fig. 4.
Fig. 4. The procedure that provides the final vehicle number
As we can see, this procedure provides the total number of vehicles as well as the number of vehicles of each type existing in the database created earlier. This ensures a better granularity of the detection process.
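The LabVIEW procedure itself is graphical, but, as a rough textual sketch, the accumulation described by equations (1) and (2) could look as follows in Python; the counts layout and the example values are assumptions made only for illustration.

# Hypothetical per-step detection results: counts[j][i] is the number of
# vehicles of type i reported by template matching in step j.
counts = [
    [2, 0, 1],  # step 0: e.g. 2 cars, 0 buses, 1 truck (illustrative values)
    [1, 1, 0],  # step 1
    [3, 0, 2],  # step 2
]

# Equation (1): Y_j, the number of vehicles of all types detected in step j.
per_step_totals = [sum(step) for step in counts]

# Equation (2): T, the overall number of vehicles after all steps.
total_vehicles = sum(per_step_totals)

# Per-type totals give the finer granularity mentioned above.
per_type_totals = [sum(col) for col in zip(*counts)]

print(per_step_totals)   # [3, 2, 5]
print(total_vehicles)    # 10
print(per_type_totals)   # [6, 1, 3]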
3. Combining edge detection and the square curve-fitting algorithm to achieve traffic flow detection
Another image processing method that can be used for traffic flow detection is edge detection. Unlike the template method, the edge method is not a straightforward procedure because, on its own, it cannot provide the desired information. It outputs raw data obtained solely by processing the image, without considering or identifying what kind of objects are detected in the video stream. Because of this drawback, the procedure is used in combination with other functions, such as the object detection function applied to the region of interest and the square curve-fitting algorithm.
The edge detection function has the goal of extracting the edges from a gray-level image. The LabVIEW function can be connected with an image mask, which is an 8-bit image. The source image, which supports the image processing, must be created with a frame (border) large enough for the dimension of the processing matrix; for example, a 3x3 matrix requires a minimum frame size of one. The size of the destination image frame is not relevant [5].
The Threshold field represents the minimum pixel value that will appear in the destination image. Given that the processed images are very dark and not very dynamic, a threshold value greater than zero is rarely useful; because of this, the default threshold (0) was chosen.
Another important setting of the function is the Method field, which specifies the type of filter used for the edge detection process. For this LabVIEW function there are a few options available [5] (a short sketch of the Method and Threshold settings is given after the list):
- Differentiation (default) - processes with a 2x2 matrix;
- Gradient - processes with a 2x2 matrix;
- Prewitt - processes with a 3x3 matrix;
- Roberts - processes with a 2x2 matrix;
- Sigma - processes with a 3x3 matrix;
- Sobel - processes with a 3x3 matrix.
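Outside LabVIEW, the combined effect of the Method and Threshold fields can be approximated with SciPy's derivative filters. This is only a rough sketch of the behaviour described above, not the IMAQ implementation; the function name, the defaults and the restriction to the Prewitt and Sobel methods are assumptions made for illustration.

import numpy as np
from scipy import ndimage

def edge_detect(gray, method="Prewitt", threshold=0):
    """Approximate the Method/Threshold behaviour described above.

    gray      : 2-D NumPy array holding an 8-bit gray-level image.
    method    : "Prewitt" or "Sobel" (the 3x3 filters available in SciPy).
    threshold : minimum edge value kept in the destination image.
    """
    img = gray.astype(float)
    if method == "Prewitt":
        gx, gy = ndimage.prewitt(img, axis=1), ndimage.prewitt(img, axis=0)
    elif method == "Sobel":
        gx, gy = ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0)
    else:
        raise ValueError("only Prewitt/Sobel are sketched here")

    edges = np.maximum(np.abs(gx), np.abs(gy))   # edge strength per pixel
    edges[edges < threshold] = 0                 # apply the Threshold field
    return np.clip(edges, 0, 255).astype(np.uint8)

For the dark, low-dynamic frames discussed above, the default threshold of 0 keeps every computed edge value.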
The software procedure used for edge detection is shown in Fig.5.
Fig. 5. The procedure for the edge detection process.
The region of interest from the previous methods is used here as well, but the ROI (region of interest) is applied after the edge detection process has been performed on the original video stream. After the region of interest has been configured, the Count Objects function is applied to detect the objects that appear in the region of interest. The output values represent the raw data that will be processed to obtain the total vehicle number.
A Prewitt filter has been used for the edge detection process. This is a spatial, non-linear high-pass filter that uses a 3x3 matrix. A Prewitt filter extracts the outer edges of the objects, highlighting the light intensity variations along the horizontal and vertical axes. Each pixel is assigned the maximum of the absolute values of its horizontal and vertical gradients, obtained beforehand with two convolution kernels [6], [7], [8].
$\begin{pmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{pmatrix}$    (3)

$\begin{pmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}$    (4)
If K(i,j) represents the intensity of the pixel K with coordinates (i,j), the pixel intensities surrounding K(i,j) can be arranged in the following way (for a 3x3 matrix) [9]:
$\begin{pmatrix} K(i-1,\,j-1) & K(i,\,j-1) & K(i+1,\,j-1) \\ K(i-1,\,j) & K(i,\,j) & K(i+1,\,j) \\ K(i-1,\,j+1) & K(i,\,j+1) & K(i+1,\,j+1) \end{pmatrix}$    (5)
Considering the matrix (5) and the two Prewitt kernels we can define
the following equation:
$K(i,j) = \max\big[\,|K(i+1,j-1) - K(i-1,j-1) + K(i+1,j) - K(i-1,j) + K(i+1,j+1) - K(i-1,j+1)|,\; |K(i-1,j+1) - K(i-1,j-1) + K(i,j+1) - K(i,j-1) + K(i+1,j+1) - K(i+1,j-1)|\,\big]$    (6)
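A minimal NumPy sketch of kernels (3)-(4) and equation (6), assuming a floating-point gray-level image; it is intended only to make the per-pixel computation explicit, not to mirror the LabVIEW code.

import numpy as np
from scipy.ndimage import correlate

# Prewitt kernels (3) and (4): horizontal and vertical gradient estimates.
KX = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=float)
KY = np.array([[-1, -1, -1],
               [ 0,  0,  0],
               [ 1,  1,  1]], dtype=float)

def prewitt_edges(gray):
    """Equation (6): each pixel receives the maximum of the absolute values
    of its horizontal and vertical gradients."""
    img = np.asarray(gray, dtype=float)
    gx = correlate(img, KX, mode="nearest")   # right column minus left column
    gy = correlate(img, KY, mode="nearest")   # lower row minus upper row
    return np.maximum(np.abs(gx), np.abs(gy))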
Fig. 6. The video stream before (left) and after (right) processing.
The video stream, processed using the Prewitt Filter, is presented in Fig.6
and will be fed into the IMAQ Count Objects virtual instrument shown in Fig.7.
Fig. 7. IMAQ Count Objects virtual instrument [10].
This instrument is used to locate and measure the objects (particles of the image) found in a rectangular area. To achieve this goal, a threshold is defined and applied to the pixel intensities in order to detach the objects from the image background.
The image fed into this virtual instrument is first converted into a gray-level image.
The Settings field allows the definition of certain parameters used for the detection process. In this way it is possible to ignore the objects touching the edges of the region of interest, the objects smaller than a defined size, or the objects bigger than a pre-established level. There is also the option of choosing whether bright or dark objects will be detected [10]. For this particular case, bright object detection has been chosen.
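The IMAQ Count Objects VI is proprietary, but the behaviour described above (intensity threshold, bright objects, size limits, optional rejection of objects touching the border) can be approximated as follows; the function name, the default values and the use of scipy.ndimage are assumptions made only for illustration.

import numpy as np
from scipy import ndimage

def count_objects(gray_roi, threshold=128, min_size=20, max_size=None,
                  ignore_border=True):
    """Rough stand-in for the described Count Objects step: detect bright
    objects in a rectangular ROI after thresholding the pixel intensities."""
    bright = gray_roi >= threshold                 # detach objects from background
    labels, n = ndimage.label(bright)              # connected components
    count = 0
    for k in range(1, n + 1):
        mask = labels == k
        size = int(mask.sum())
        if size < min_size or (max_size is not None and size > max_size):
            continue                               # too small / too large
        rows, cols = np.nonzero(mask)
        touches = (rows.min() == 0 or cols.min() == 0 or
                   rows.max() == gray_roi.shape[0] - 1 or
                   cols.max() == gray_roi.shape[1] - 1)
        if ignore_border and touches:
            continue                               # object touches the ROI edge
        count += 1
    return count

Applied frame by frame to the ROI, the returned count plays the role of the raw values plotted in Fig. 8.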
The values provided by this virtual instrument are the input of the peak detection algorithm that will provide the total number of vehicles.
Fig. 8. The values generated by the count objects VI.
The peak detection algorithm is applied to the values presented in Fig.8 to determine the traffic flow volume. The Prewitt filter helps provide a more homogeneous set of values, which makes the detection process easier.
The peak detection uses a defined window width of 3 and a threshold value of 12. With these settings, only the peaks whose value is equal to or greater than 12 will be detected. In Fig.8 we can see the threshold applied to these values.
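As a sketch of these settings, the thresholded peak count can be approximated with scipy.signal.find_peaks; the series below is invented for illustration, and the width-3 quadratic fit used by the actual LabVIEW peak detector (discussed next) is not reproduced here.

import numpy as np
from scipy.signal import find_peaks

# Hypothetical object-count series such as the one plotted in Fig. 8.
object_counts = np.array([3, 4, 5, 9, 14, 16, 13, 6, 4, 5, 11, 15, 12, 7, 3])

# Keep only peaks whose value is at least the chosen threshold of 12.
peaks, _ = find_peaks(object_counts, height=12)

vehicle_count = len(peaks)   # one detected peak per passing vehicle
print(vehicle_count)         # 2 for the illustrative series above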
The peak detection algorithm uses a square (quadratic) curve-fitting method to determine the location and the number of peaks present in a signal. The chart in Fig.9 illustrates a 6-degree polynomial curve fitted on the values.
Fig. 9. Fitting a 6 degree polynomial over the obtained values.
Given that the fitting function of Excel is limited to a 6-degree polynomial, the fitting of a higher-degree polynomial (8-degree) is performed using the LabVIEW peak detection function presented in Fig.10. This curve describes better the values obtained earlier in the detection process.
Fig. 10. The peak detection function [10].
The equations defining the two curves are shown below:
$y = 5\cdot10^{-10}x^{6} - 7\cdot10^{-8}x^{5} + 2\cdot10^{-6}x^{4} - 0.054x^{2} + 1.165x - 0.872$    (7)

$y = 1.65\cdot10^{-12}x^{8} - 6.6\cdot10^{-10}x^{7} + 1.03\cdot10^{-7}x^{6} - 7.8\cdot10^{-6}x^{5} + 2.91\cdot10^{-4}x^{4} + 4.27\cdot10^{-3}x^{3} + 4.66\cdot10^{-4}x^{2} + 3.25\cdot10^{-5}x + 3.65\cdot10^{-9}$    (8)
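As an aside, trend fits of this kind can be reproduced on any value series with NumPy; the series below is synthetic and merely stands in for the real detection values, so its coefficients will not match (7) or (8).

import numpy as np
from numpy.polynomial import Polynomial

# Synthetic stand-in for the values produced by the count-objects step.
x = np.arange(200, dtype=float)
rng = np.random.default_rng(0)
values = 8 + 6 * np.sin(x / 15.0) + rng.normal(0, 1, x.size)

# 6-degree fit (the Excel trendline limit) versus an 8-degree fit
# (possible with the LabVIEW-side processing). Polynomial.fit rescales x
# internally, which keeps the high-degree fit well conditioned.
p6 = Polynomial.fit(x, values, 6)
p8 = Polynomial.fit(x, values, 8)

sse6 = np.sum((values - p6(x)) ** 2)
sse8 = np.sum((values - p8(x)) ** 2)
print(sse6, sse8)   # the 8-degree curve follows the values more closely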
Despite the capability of fitting an 8-degree polynomial, the peak detection algorithm will use the square curve-fitting method.
The drawback of this method is its inability to provide the vehicle classification information; thereby, only the total number of vehicles is provided to the traffic management system.
To be able to extract the classification information, the same signal is averaged and the resulting signal is used to fit another square-fit polynomial. After the signal has been averaged, the number of detected peaks represents the number of heavy (long) vehicles that passed through the region of interest.
The number of small vehicles is obtained by subtracting the heavy vehicle count from the total count.
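A sketch of this classification step under the same assumptions as the previous snippets: the raw peak count gives the total, the averaged signal gives the heavy vehicles, and the small vehicles follow by subtraction. The smoothing window length and the threshold value are illustrative assumptions.

import numpy as np
from scipy.signal import find_peaks

def classify_vehicles(object_counts, threshold=12, smooth_window=5):
    """Total / heavy / small vehicle counts from the object-count signal.

    Heavy (long) vehicles keep their peaks after averaging because they occupy
    the region of interest for more frames; the short peaks left by small
    vehicles are flattened below the threshold.
    """
    counts = np.asarray(object_counts, dtype=float)

    # Total vehicles: peaks of the raw signal above the threshold.
    total = len(find_peaks(counts, height=threshold)[0])

    # Moving-average (averaged) signal.
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.convolve(counts, kernel, mode="same")

    # Heavy vehicles: peaks that survive the averaging.
    heavy = len(find_peaks(smoothed, height=threshold)[0])

    small = total - heavy
    return total, heavy, small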
In Fig.11 we can see the procedure that makes the vehicle classification possible.
Fig. 11. The procedure for vehicle classification after the edge detection processing
4. Conclusions
The authors propose the development of an efficient traffic video detection algorithm, capable of reducing the costs needed for implementing such a system. In order to achieve this, a couple of video processing methods have been developed and tested to obtain a higher efficiency.
The video processing and the necessary programming were performed using the LabVIEW 7.1 programming language with the LabVIEW Run-Time module version 8.6 and the IMAQ Vision module version 6.1.
It is clear that the two video traffic detection methods presented above use different technologies, which implies different benefits and drawbacks.
The first processing method, the template video processing, is a straightforward method that provides the desired information (the number of vehicles) directly at the output of the virtual instrument. This saves resources, processing power and valuable time. Another benefit is the granular classification of the road vehicles that can be provided to the traffic management system. With this classification, a better approximation of the road space occupied by the vehicles at a stop line is possible.
The database containing the types of vehicles is dynamic and can be enriched by the user, up to the level desired or to a level that keeps this granulation process practical.
The second video detection method is more elaborate and combines edge detection, the Prewitt filter, the count objects function, peak detection and square curve fitting to obtain the total number of vehicles. It is not a straightforward method, because it needs a chain of video processing functions and mathematical algorithms to obtain the final information. Moreover, this method cannot provide a granular classification without an algorithm that subtracts the heavy vehicle number from the total number of vehicles to obtain the small vehicle number; the heavy vehicle number is obtained after a signal averaging process and another peak detection pass.
The results show that the edge detection method presented in the third chapter outperforms the template image method which, despite its better classification capability, is not able to perform congestion detection. The edge detection method also has a good curve-fitting statistical factor when the entire set of values resulting from the image manipulation is considered (Fig.9). The image template method has no comparable statistical factor because it does not use the curve-fitting algorithm.
Further research will include the integration of these methods with night detection algorithms and also the development of a proper communication topology capable of integrating these traffic flow detection algorithms.
Acknowledgements
The work has been funded by the Operational Programme Human
Resources Development 2007-2013 of the Romanian Ministry of Labour, Family
and Social Protection through the Financial Agreement
POSDRU/107/1.5/S/76813.
REFERENCES
[1] Peter T. Martin, Yuqi Feng, Xiaodong Wang, Detector Technology Evaluation, Mountain-Plains Consortium, November 2003.
[2] Marius Minea, Florin Domnel Grafu, Telematică în Transporturi - noțiuni fundamentale și aplicații (Telematics in Transport - Fundamental Principles and Applications), Matrixrom, Bucharest, 2007.
[3] Luz Elena Y. Mimbela, Lawrence A. Klein, A Summary of Vehicle Detection and Surveillance Technologies Used in Intelligent Transportation Systems, The Vehicle Detector Clearinghouse, New Mexico State University, Fall 2000.
[4] NI Vision 2009 for LabVIEW Help, National Instruments, June 2009.
[5] IMAQ Vision for G, National Instruments, June 1997.
[6] A. K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, 1989.
[7] J. C. Russ, The Image Processing Handbook, Third Edition, CRC Press, Springer, IEEE Press, 1999.
[8] Frank Y. Shih, Image Processing and Pattern Recognition, IEEE Press, 2010.
[9] Thomas Klinger, Image Processing with LabVIEW, 11 June 2003.
[10] NI Vision 2010 for LabVIEW Help, National Instruments, June 2010.