A Novel Background Extraction and Updating Algorithm For Vehicle Detection and Tracking
Jun Kong¹,², Ying Zheng¹,², Yinghua Lu¹, Baoxue Zhang²
¹ Computer School, Northeast Normal University, Changchun, Jilin Province, China
² Key Laboratory for Applied Statistics of MOE, China
{kongjun, zhengy043, luyh}@nenu.edu.cn
Abstract
This paper proposes a new adaptive background extraction and updating algorithm for vehicle detection and tracking. Gray-level quantification and two attenuation weights are introduced into the background extraction method to reduce the impact of environmental lighting conditions, and two discriminant functions are employed to distinguish false moving objects from true moving objects, which solves the deadlock problem of background updating. Experimental results show that the proposed method is more robust, accurate and powerful than traditional methods, and that it is simple to implement and suitable for real-time vehicle detection and tracking.
1. Introduction
Recently, Intelligent Traffic Systems (ITS) [1][2] have been developed to make existing traffic control systems more efficient. Identifying moving objects in a video sequence is a fundamental and critical task in many computer-vision applications, such as traffic monitoring and analysis. A common approach to identifying moving objects is background subtraction [4][9], in which each video frame is compared against a reference, or background, model. Pixels in the current frame that deviate significantly from the background are considered to belong to moving objects. These foreground pixels are further processed for object localization and tracking. Background extraction and updating is therefore often the first and most critical step in many computer-vision applications.
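As a minimal sketch of the background-subtraction idea just described (not the paper's own implementation; the threshold value is a hypothetical choice):

```python
import numpy as np

def background_subtraction(frame, background, threshold=30):
    """Mark pixels that deviate significantly from the background model.

    frame, background: grayscale images as 2-D uint8 arrays.
    threshold: assumed deviation threshold (the paper does not specify one).
    Returns a boolean foreground mask.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold
```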
There are several ways to extract and update background images from traffic video streams. The literature [6] introduces a background image extraction algorithm based on the changes of pixel colors in each frame of the video sequence. Each pixel in the previous frame is compared with the pixel at the same location in several consecutive frames. If their color values stay the same, it is assumed that this pixel is not occupied by moving vehicles, and its value can be taken as the background value at that location.

Corresponding author. This work is supported by the Science Foundation for Young Teachers of Northeast Normal University, No. 20061002, China.
2. Background extraction
In a series of traffic image frames, the color value of a pixel at location $(x, y)$ at time $t$ is denoted $F(x, y, t)$. In most cases the pixel color values are in the Red, Green, and Blue (RGB) color space, so $F(x, y, t)$ is a vector with three elements: $F_R(x, y, t)$, $F_G(x, y, t)$, and $F_B(x, y, t)$. For a given pixel $(x, y)$, its color values from time $t_1$ to time $t_n$ are represented by the matrix $M(x, y)$ as follows:

$$M(x, y) = \begin{bmatrix} F_R(x, y, t_1) & F_G(x, y, t_1) & F_B(x, y, t_1) \\ \vdots & \vdots & \vdots \\ F_R(x, y, t_n) & F_G(x, y, t_n) & F_B(x, y, t_n) \end{bmatrix}$$
[Fig. 2: Histograms of each channel at location A from the 0th frame to the 140th frame. (a) Red channel; (b) Green channel; (c) Blue channel.]
Gray-level quantification divides the 256 intensity levels into regions of the form

$$[ls, (l+1)s - 1], \quad l = 0, 1, 2, \ldots, \frac{256}{s} - 1$$

where $s$ is the size of each range and $l$ is the gray level after quantification.
We adopt the method used in [11] and introduce two parameters for each region $i$: one is the regional mean $\mu_i$, which is the weighted average of all pixel intensities falling in gray-level range $i$; the other is the regional counter $C_i$, which counts how many frames of the video sequence have their pixel intensity in gray-level range $i$. Both are updated as the video sequence advances. To counter the heavy weight of the current frame, two attenuation weights are introduced to reduce the impact of environmental lighting conditions. First, determine the region $i$ to which the current pixel intensity belongs, and update $\mu_i$ and $C_i$ by formulas (1) and (2); then sort the regions by $C_i$ in descending order; finally, take the $\mu_i$ of the region with the largest $C_i$ as the background value of pixel $(x, y)$. Repeating this procedure for all pixels of the frames in the video sequence yields a background image.
$$\mu_{i,n} = \begin{cases} \dfrac{\lambda_1 C_{i,n-1}\,\mu_{i,n-1} + F_n}{\lambda_1 C_{i,n-1} + 1}, & F_n \in [is, (i+1)s - 1] \\[2mm] \mu_{i,n-1}, & F_n \notin [is, (i+1)s - 1] \end{cases} \quad (1)$$

$$C_{i,n} = \begin{cases} \lambda_2 C_{i,n-1} + 1, & F_n \in [is, (i+1)s - 1] \\ \lambda_2 C_{i,n-1}, & F_n \notin [is, (i+1)s - 1] \end{cases} \quad (2)$$

where $F_n$ is the pixel intensity in the $n$th frame, and $\lambda_1$ and $\lambda_2$ are the two attenuation weights.
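The following sketch shows how these per-region updates might be implemented for a single pixel over a grayscale sequence. It is a minimal illustration under stated assumptions: the region size s and the attenuation weights lam1 and lam2 are hypothetical values, since the paper does not give concrete settings here.

```python
import numpy as np

def extract_background_pixel(intensities, s=16, lam1=0.9, lam2=0.95):
    """Region-based background estimate for one pixel (sketch of Eqs. (1)-(2)).

    intensities: sequence of gray values (0..255) of one pixel over time.
    s: size of each gray-level region (assumed value).
    lam1, lam2: attenuation weights (assumed values).
    """
    n_regions = 256 // s
    mu = np.zeros(n_regions)   # regional means
    c = np.zeros(n_regions)    # regional counters

    for f in intensities:
        i = int(f) // s        # region containing the current intensity
        mu[i] = (lam1 * c[i] * mu[i] + f) / (lam1 * c[i] + 1)  # Eq. (1)
        c *= lam2              # Eq. (2): attenuate all counters...
        c[i] += 1              # ...and credit the matched region

    # The mean of the region with the largest counter is the background value.
    return mu[int(np.argmax(c))]
```

Applying this routine independently at every pixel location yields the extracted background image.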
3. Background updating
Traditional background updating algorithms update only the background pixels, excluding the pixels of moving objects. They make the system model more stable, but because they consider only pixel-level updating, they can cause the background updating deadlock problem: when objects that belong to the background start moving, or when moving objects come to a standstill, background subtraction using the traditional updating algorithm yields a false object. Selective background updating methods are a form of pixel-level updating, so they are of no help for region-level updating, and the false object continues to be detected as a moving object. A novel method is presented to solve this problem. In what follows, the details of the presented updating method are described. First, to make the method easier to understand, two discriminant functions, T and S, from the literature [5] are introduced to distinguish false moving objects from true moving objects.
Let $D_n$ denote the inter-frame difference at a pixel at time $n$; $T$ is the mean of the differences over the last $k$ frames and $S$ is their sample variance:

$$T = \frac{1}{k}\sum_{j=0}^{k-1} D_{n-j} \quad (3)$$

$$S = \frac{k\sum_{j=0}^{k-1} D_{n-j}^2 - \left(\sum_{j=0}^{k-1} D_{n-j}\right)^2}{k(k-1)}, \quad k \in [5, 10] \quad (4)$$
$$B_n(x, y) = \begin{cases} (1 - \alpha) B_{n-1}(x, y) + \alpha F_n(x, y), & F_n(x, y) \notin \{\text{moving objects}\} \\ B_{n-1}(x, y), & \text{others} \end{cases} \quad (5)$$

where $B_n(x, y)$ and $B_{n-1}(x, y)$ are the intensities of the background at pixel $(x, y)$ at times $n$ and $n-1$, respectively, and $\alpha$ is the updating rate.
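The selective update of Eq. (5) can be sketched as follows (a minimal NumPy illustration; the updating rate alpha = 0.05 is an assumed value, and the foreground mask is whatever the detection step produces):

```python
import numpy as np

def update_background(background, frame, foreground_mask, alpha=0.05):
    """Selective running-average update of the background (Eq. (5)).

    Pixels flagged as moving objects keep their old background value;
    all other pixels blend toward the current frame with rate alpha.
    """
    b = background.astype(float)
    f = frame.astype(float)
    updated = (1.0 - alpha) * b + alpha * f
    return np.where(foreground_mask, b, updated)
```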
4. Experimental results
In order to analyze the performance of the proposed background extraction method, image sequences captured by a traffic-surveillance CCD camera were used to compare our method with those of [3][8]. The video sequence contains 424 frames. The backgrounds extracted using 65 frames are compared in Figure 3. Figure 3(a) shows the original 65th frame, and Figures 3(b), (c) and (d) show the backgrounds extracted by the FDI (Frame Difference Iteration) method, the GM (Gaussian Mixture) method, and the proposed method, respectively.
[Fig. 3: Comparison of backgrounds extracted using 65 frames. (a) The 65th frame; (b) the result of the FDI method; (c) the result of the GM method; (d) the result of the proposed method.]

[Fig. 4: Comparison of backgrounds extracted using 200 frames. (a) The 200th frame; (b) the result of the FDI method; (c) the result of the GM method; (d) the result of the proposed method.]

[Fig. 5: Background updating when vehicles go from stillness to moving. (a) Background after the traditional updating method; (b) falsely detected vehicles using background (a); (c) background after the proposed updating method; (d) truly detected vehicles using background (c).]

[Fig. 6: Background updating when vehicles go from moving to stillness. (a) Background after the traditional updating method; (b) falsely detected vehicles using background (a); (c) background after the proposed updating method; (d) truly detected vehicles using background (c).]
Two quantitative measures are used to evaluate the extracted backgrounds: the accurate vehicle detection rate AVD and the peak signal-to-noise ratio PSNR:

$$\mathrm{AVD} = \frac{N_t}{N_t + N_f} \times 100\% \quad (6)$$

$$\mathrm{PSNR} = 10 \log_{10} \frac{\sum_{x=1}^{M}\sum_{y=1}^{N} [B(x, y)]^2}{\sum_{x=1}^{M}\sum_{y=1}^{N} [B(x, y) - \hat{B}(x, y)]^2} \quad (7)$$

where $N_t$ and $N_f$ are the numbers of truly and falsely detected vehicles, $B(x, y)$ is the reference background, $\hat{B}(x, y)$ is the extracted background, and the image size is $M \times N$.
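For reference, a minimal sketch of the PSNR computation in Eq. (7) (the AVD term is omitted because it needs per-vehicle detection counts rather than pixel data):

```python
import numpy as np

def psnr(reference, extracted):
    """PSNR between a reference background and an extracted one (Eq. (7))."""
    ref = reference.astype(float)
    ext = extracted.astype(float)
    signal = (ref ** 2).sum()
    noise = ((ref - ext) ** 2).sum()
    return 10.0 * np.log10(signal / noise)
```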
Table 1. Quantitative comparison of the three background extraction methods.

Method    | Time per frame | Frames required | AVD | PSNR
----------|----------------|-----------------|-----|--------
FDI       | 12.249 ms      | >300            | 85% | 15.3451
GM        | 48.285 ms      | >700            | 94% | 29.8107
Proposed  | 10.394 ms      | <100            | 97% | 61.2654
5. Conclusion
Background extraction and updating is a crucial step in a video-based traffic surveillance and detection system. The results show that the proposed background method can quickly obtain a good initial background image. Compared with the frame difference iterative method and the Gaussian mixture model, it adapts well to variations in environmental lighting conditions and saves significant processing time, which is important for real-time vehicle detection and tracking. The updating algorithm updates moving objects and false objects separately, resolving the deadlock problem of background updating.
6. References
[1] S. Kamijo, Y. Matsushita, K. Ikeuchi, and M. Sakauchi, "Traffic monitoring and accident detection at intersections", IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 2, Jun. 2000, pp. 108-118.
[2] D. Beymer, P. McLauchlan, B. Coifman, and J. Malik, "A real-time computer vision system for measuring traffic parameters", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Juan, PR, Jun. 1997, pp. 496-501.
[3] D. M. Ha, J.-M. Lee, and Y.-D. Kim, "Neural-edge-based vehicle detection and traffic parameter extraction", Image and Vision Computing, vol. 22, 2004, pp. 899-907.