
A Novel Background Extraction and Updating Algorithm for Vehicle Detection and Tracking
Jun Kong¹,², Ying Zheng¹,², Yinghua Lu¹, Baoxue Zhang²
¹ Computer School, Northeast Normal University, Changchun, Jilin Province, China
² Key Laboratory for Applied Statistics of MOE, China
{kongjun, zhengy043, luyh}@nenu.edu.cn
Abstract
This paper proposes a new adaptive background extraction and updating algorithm for vehicle detection and tracking. Gray-level quantification and two attenuation weights are introduced to reduce the impact of environmental lighting conditions in the background extraction method, and two discriminant functions are employed to distinguish false moving objects from true moving objects, solving the deadlock problem of background updating. The experimental results show that the proposed method is more robust, accurate and powerful than traditional methods, and that it is simple to implement and suitable for real-time vehicle detection and tracking.

1. Introduction
Recently, Intelligent Traffic Systems (ITS) [1][2] have been developed to make existing traffic control systems more efficient. Identifying moving objects in a video sequence is a fundamental and critical task in many computer vision applications, such as traffic monitoring and analysis. A common approach to identifying moving objects is background subtraction [4][9], where each video frame is compared against a reference or background model. Pixels in the current frame that deviate significantly from the background are considered to be moving objects. These foreground pixels are further processed for object localization and tracking, so background extraction and updating is often the first and critical step in many computer vision applications.
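To make the idea concrete, the following is a minimal background-subtraction sketch in Python/NumPy; the function name and the threshold value are illustrative assumptions, not part of any cited method:

```python
import numpy as np

def subtract_background(frame, background, threshold=30):
    """Flag pixels that deviate strongly from the background model.

    frame and background are grayscale images (2-D uint8 arrays);
    threshold is an assumed deviation, in gray levels, beyond which
    a pixel is treated as part of a moving object.
    """
    # int16 arithmetic avoids uint8 wraparound in the subtraction
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold  # boolean foreground mask
```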
There are several ways to extract and update background images from traffic video streams. The authors of [6] introduce a background image extraction algorithm based on the changes of pixel colors across the frames of a video sequence. Each pixel in the previous frame is compared with the same location in several consecutive frames. If its color values stay the same, it is assumed that this pixel is not occupied by moving vehicles, and the color values are assigned to the corresponding pixel of the background image. This process is iterated until the color values of all pixels in the background image are determined. This algorithm only proves to work well on freeways under non-congested conditions. Gupte et al. [7] use a frame difference iterative method to extract background images. They first calculate a binary motion mask from the subtraction of two successive frames: any pixel whose color values differ between the two frames is assumed to be part of a moving object. The motion mask is used as a gating function to extract the background image from traffic images by filtering out moving objects. After a sequence of frames is processed, the entire background image can be extracted. This method uses more parameters and frames to extract the initial background; we give comparative experiments in the results section. Elgammal et al. [8] study each pixel's values in the three color channels (red, green, and blue) of an image sequence and try to find the distribution of these values. They assume the values follow a Gaussian distribution, from which the probability of a pixel being a background pixel can be estimated. The decision on whether a pixel is background or foreground is then made by comparing the estimated probability with a given threshold. This method can be used for both freeways and intersections, but it requires more computing time and needs more frames to extract the initial background image. Stauffer et al. [10] propose an adaptive on-line multi-color model in which the background color of each pixel is modeled using multiple clusters, each with a Gaussian distribution. This method updates the background well; however, the learning process takes excessive time and uses more frames to extract the initial background.
In this paper, we propose a novel adaptive background extraction and updating algorithm. The algorithm quickly constructs a background estimation model based on gray-level quantification and two attenuation weights, which greatly reduce the impact of environmental lighting conditions. Two discriminant functions are employed to distinguish false moving objects from true moving objects, and the two kinds of objects are then updated separately to resolve the deadlock problem of background updating.

Corresponding author.
This work is supported by the science foundation for young teachers of Northeast Normal University, No. 20061002, China.
The rest of this paper is organized as follows: Section 2 details the proposed adaptive background extraction method; Section 3 describes the background updating method; the experimental results are given in Section 4; Section 5 presents the conclusions.


2. Background extraction
In a series of traffic image frames, the color value of the pixel at location (x, y) at time t is denoted F(x, y, t). In most cases the pixel color values are in the Red, Green, and Blue (RGB) color space, so F(x, y, t) is a vector with three elements: F^R(x, y, t), F^G(x, y, t), and F^B(x, y, t). For a given pixel (x, y), its color values from time t_1 to time t_n are represented by the set M(x, y) as follows:

M(x, y) = { F(x, y, t_1), F(x, y, t_2), F(x, y, t_3), ..., F(x, y, t_n) }


Since the background is motionless, the color values of a background pixel would be approximately the same during the entire analysis time (here we assume the analysis time is short enough to ignore luminance changes of the scene). Therefore, we assume that the color vector occurring most frequently in M(x, y) is the background-pixel color vector.

In order to analyze how the intensity of each pixel varies in the traffic video sequence, we take point A as an example, record its intensity in every frame, and plot histograms of the Red, Green, and Blue (RGB) channels separately. We take one frame of the video sequence as a sample frame; the location of pixel A is (222, 106), as illustrated in Fig 1. Fig 2 shows the variation of the intensity of pixel A from the 0th frame to the 140th frame in each of the RGB channels.

Fig 1. The sample frame of the video sequence.

Fig 2. Histogram of each channel of location A from the 0th frame to the 140th frame. (a) Histogram of the Red channel. (b) Histogram of the Green channel. (c) Histogram of the Blue channel.

As Fig 2 illustrates, passing vehicles cause the curves to fluctuate. Most of the time, the value of each channel stays around a constant; these constants are the intensities of pixel A in each channel. Although the background is motionless, the background gray values are not always exactly the same because of disturbances in lighting, atmosphere, cameras, etc. To accommodate these small variations of background pixel values, gray-level quantification is introduced to aggregate the gray values within a small range into one bin, so the 256 gray levels are regrouped into 256/s ranges, where s is the size of each range. The choice of the parameter s is done empirically (in our experiment s = 4), which makes the proposed algorithm more robust. The range of gray level l is

[l·s, (l+1)·s − 1],   l = 0, 1, 2, ..., 256/s − 1,

where s is the size of the range and l is the gray level after quantification.
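As an illustration of this quantification step, a short Python sketch follows, with s = 4 as in the experiment; the helper name is an assumption for illustration:

```python
import numpy as np

S = 4                   # range size s, chosen empirically in the paper
NUM_RANGES = 256 // S   # the 256 gray levels regroup into 64 ranges

def quantize(gray):
    """Map gray values 0..255 to their range index l, so that small
    disturbances within one range [l*s, (l+1)*s - 1] collapse together."""
    return np.asarray(gray) // S   # l in 0 .. NUM_RANGES - 1
```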
We adopt the method used in [11] and introduce two parameters for each region i: one is the regional mean μ_i, the weighted average of all pixel intensities falling in gray-level range i; the other is the regional counter C_i, the number of frames of the video sequence whose pixel value falls in gray-level range i. Both are updated as the video sequence advances. To account for the heavy weight of the current frame, two attenuation weights are introduced to reduce the impact of the environmental lighting conditions. First, determine the region i that the current pixel intensity belongs to, then update μ_i and C_i by formulas (1) and (2) and sort the regions by C_i in descending order; finally, take the μ_i of the region with the biggest C_i as the background value of pixel (x, y). Repeating this work for all pixels over the frames of the video sequence yields a background image.


μ_{i,n} = (α₁·C_{i,n−1}·μ_{i,n−1} + F_n) / (α₁·C_{i,n−1} + 1),   if F_n ∈ [i·s, (i+1)·s − 1]        (1)
μ_{i,n} = μ_{i,n−1},   if F_n ∉ [i·s, (i+1)·s − 1]

C_{i,n} = α₂·C_{i,n−1} + 1,   if F_n ∈ [i·s, (i+1)·s − 1]        (2)
C_{i,n} = α₂·C_{i,n−1},   if F_n ∉ [i·s, (i+1)·s − 1]

where α₁, α₂ ∈ [0, 1] are used to reduce the impact of the history information. The slower the illumination changes between frames, the bigger the values of these speed coefficients are selected.
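The per-pixel bookkeeping implied by formulas (1) and (2) can be sketched as follows. The attenuation-weight values and the class structure are illustrative assumptions layered on the reconstructed equations, not the authors' exact implementation:

```python
import numpy as np

S, NUM_RANGES = 4, 64    # range size s and 256 // s ranges
A1, A2 = 0.9, 0.9        # attenuation weights alpha_1, alpha_2 (assumed values)

class PixelModel:
    """Regional means mu_i and counters C_i for one pixel location."""
    def __init__(self):
        self.mu = np.zeros(NUM_RANGES)   # regional mean of range i
        self.c = np.zeros(NUM_RANGES)    # regional counter of range i

    def update(self, f):
        """Fold the current intensity f into its gray-level range."""
        i = int(f) // S
        # formula (1): attenuated running mean of intensities seen in range i
        self.mu[i] = (A1 * self.c[i] * self.mu[i] + f) / (A1 * self.c[i] + 1)
        # formula (2): all counters decay by alpha_2; the hit range gains 1
        self.c *= A2
        self.c[i] += 1

    def background_value(self):
        """Regional mean of the most frequently hit range = background value."""
        return self.mu[np.argmax(self.c)]
```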

3. Background updating
Traditional background updating algorithms update all background pixels except the pixels of moving objects. They make the system model more stable, but they only consider pixel-level updating, so they may cause the background updating deadlock problem: when an object that belongs to the background starts moving, or when a moving object becomes stationary, background subtraction with a traditional updating algorithm yields a false object. Selective background updating methods are a form of pixel-level updating and are helpless for region-level updating, so the false object keeps being detected as a moving object. A novel method is presented to solve this problem; its details are described in what follows. First, to make the method easier to understand, two discriminant functions T and S from [5] are introduced to distinguish false moving objects from true moving objects.

T = max{ |D_n − D_{n−j}| },   j ∈ [1, 5],        (3)

where D_n and D_{n−j} are foreground regions belonging to the difference image F_n − B_n and to the previous five difference images. T is the maximum absolute gray-level difference between the foreground region of the current difference image and the foreground regions of the previous five difference images.

S = sqrt( ( k·Σ_{j=0}^{k−1} D_{n−j}² − ( Σ_{j=0}^{k−1} D_{n−j} )² ) / ( k·(k − 1) ) ),   k ∈ [5, 10],        (4)

where S denotes the degree to which the difference-image gray values vary over a period of time; the choice of the parameter k is done empirically. We use background subtraction to obtain the foreground objects and calculate T and S for each foreground object: small T and S indicate a false moving object; otherwise it is a true moving object. In what follows, the details of our background updating method are described.
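A sketch of the two discriminant functions follows. It assumes each foreground region is summarized by its mean gray value per difference image (the paper does not spell out this reduction), and the decision thresholds are illustrative:

```python
import numpy as np

def discriminants(d_history):
    """d_history: mean gray value of one foreground region in each of the
    last k difference images (k >= 6 values, most recent last)."""
    d = np.asarray(d_history, dtype=float)
    k = len(d)
    # formula (3): largest absolute change against the previous 5 frames
    t = max(abs(d[-1] - d[-1 - j]) for j in range(1, 6))
    # formula (4): sample standard deviation of D over the window
    s = np.sqrt((k * np.sum(d ** 2) - np.sum(d) ** 2) / (k * (k - 1)))
    return t, s

def is_false_object(d_history, t_thresh=5.0, s_thresh=2.0):
    """Small T and S mean the region barely changes over time, i.e. a
    false moving object; the threshold values here are assumptions."""
    t, s = discriminants(d_history)
    return t < t_thresh and s < s_thresh
```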

3.1. Background Updating Algorithm

Step 1. Extract the initial background using the proposed extraction method.
Step 2. Use background subtraction to obtain a difference image; then apply a series of simple morphological operations and a mean filter for noise removal.
Step 3. Distinguish true moving objects from false objects using the two discriminant functions (3) and (4).
Step 4. Use formula (5) to update the true moving objects and the false objects separately.
B_n(x, y) = α·B_{n−1}(x, y) + (1 − α)·F_n(x, y),   if F_n(x, y) ∈ {moving objects}
B_n(x, y) = (1 − α)·B_{n−1}(x, y) + α·F_n(x, y),   if F_n(x, y) ∈ {false objects}        (5)
B_n(x, y) = B_{n−1}(x, y),   otherwise

where B_n(x, y) and B_{n−1}(x, y) are the background intensities of frame n and frame n − 1, and F_n(x, y) is the intensity of frame n. α ∈ [0, 1] is a constant that controls the rate of adaptation of the background to the current frame and determines the sensitivity of the update to variations. The stronger the illumination changes (or the presence of moving objects) between frames, the lower the value (close to zero) of this speed coefficient is selected, so that the background model is mostly influenced by the current image.
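Formula (5) translates into a per-pixel update such as the following sketch, where the boolean masks come from the discriminant step and the value of alpha is an assumption:

```python
import numpy as np

def update_background(bg_prev, frame, moving_mask, false_mask, alpha=0.9):
    """Apply formula (5) pixel-wise: keep the background nearly unchanged
    under true moving objects, pull it quickly toward the current frame
    under false objects, and leave every other pixel as it was."""
    bg_prev = bg_prev.astype(float)
    frame = frame.astype(float)
    bg = bg_prev.copy()
    bg[moving_mask] = alpha * bg_prev[moving_mask] + (1 - alpha) * frame[moving_mask]
    bg[false_mask] = (1 - alpha) * bg_prev[false_mask] + alpha * frame[false_mask]
    return bg
```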

4. Experimental results
To analyze the performance of the proposed background extraction method, image sequences captured by a traffic surveillance CCD camera were used to compare our method with those of [3] and [8]. The video sequence contains 424 frames. The backgrounds extracted using 65 frames are compared in Fig 3: Fig 3(a) shows the original 65th frame, and Fig 3(b), (c) and (d) show the backgrounds extracted by the FDI (frame difference iterative) method [3], the GM method [8], and the proposed method, respectively. The proposed method is closest to the true background. Fig 4 shows the comparison of the backgrounds extracted using 200 frames; again, the result of the proposed method is the best.

Fig 3. Comparison of backgrounds extracted using 65 frames. (a) The 65th frame. (b) Result of the FDI method. (c) Result of the GM method. (d) Result of the proposed method.

Fig 4. Comparison of backgrounds extracted using 200 frames. (a) The 200th frame. (b) Result of the FDI method. (c) Result of the GM method. (d) Result of the proposed method.

Background initialization and updating are both crucial tasks in background subtraction. The first problem is resolved by the proposed background extraction method; the following experiments concern background updating, where the proposed method resolves the deadlock problem. Fig 5 and Fig 6 illustrate background updating and vehicle detection when a vehicle goes from stillness to moving and when a moving vehicle becomes stationary. In these cases, background subtraction based on traditional updating detects false moving objects, shown as white rectangles in Fig 5(b) and Fig 6(b).

Fig 5. Background updating when a vehicle goes from stillness to moving. (a) Background after the traditional updating method. (b) Falsely detected vehicles using background (a). (c) Background after the proposed updating method. (d) Correctly detected vehicles using background (c).

Fig 6. Background updating when a vehicle goes from moving to stillness. (a) Background after the traditional updating method. (b) Falsely detected vehicles using background (a). (c) Background after the proposed updating method. (d) Correctly detected vehicles using background (c).

The performance measures used in this paper are the accuracy of vehicle detection (AVD) and the power signal-to-noise ratio (PSNR). They are defined as follows, where N_T is the number of true vehicles, N_F is the number of vehicles detected in error by the algorithm, B is the standard background image, B̂ is the background extracted by the algorithm, and the image size is N × N:

AVD = N_T / (N_T + N_F) × 100%        (6)

PSNR = 10·log₁₀( Σ_{x=1}^{N} Σ_{y=1}^{N} [B̂(x, y)]² / Σ_{x=1}^{N} Σ_{y=1}^{N} [B(x, y) − B̂(x, y)]² )        (7)
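Under these definitions, both measures reduce to a few lines. This sketch uses the counts n_true and n_false for N_T and N_F:

```python
import numpy as np

def avd(n_true, n_false):
    """Accuracy of vehicle detection, formula (6), as a percentage."""
    return 100.0 * n_true / (n_true + n_false)

def psnr(b_ref, b_est):
    """Power signal-to-noise ratio, formula (7), between the standard
    background b_ref and the extracted background b_est (N x N arrays)."""
    b_ref = b_ref.astype(float)
    b_est = b_est.astype(float)
    return 10.0 * np.log10(np.sum(b_est ** 2) / np.sum((b_ref - b_est) ** 2))
```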

Table 1 lists the experimental results of the three methods. Our proposed method uses fewer frames to obtain an initial background image, and its processing time per frame is the lowest, so its computational cost is the smallest and it can well satisfy the real-time requirement. The PSNR values show that our background is closest to the true background. Since the purpose of background extraction and updating is to detect vehicles accurately by background subtraction, the accuracy of vehicle detection (AVD) in Table 1 shows that our method is more accurate than the others.
Table 1. A comparison of the performance of the various methods: 1. FDI method; 2. GM method; 3. the proposed method.

Exp. | Processing time per frame | Frames for initialization | AVD | PSNR
  1  | 12.249 ms                 | >300 frames               | 85% | 15.3451
  2  | 48.285 ms                 | >700 frames               | 94% | 29.8107
  3  | 10.394 ms                 | <100 frames               | 97% | 61.2654

5. Conclusion
Background extraction and updating is a crucial step in a video-based traffic surveillance and detection system. The results show that the proposed background extraction method can quickly obtain a good initial background image. Compared with the frame difference iterative method and the Gaussian mixture model, it adapts well to variations in environmental lighting and saves significant processing time, which is important for real-time vehicle detection and tracking. The updating algorithm updates moving objects and false objects separately, resolving the deadlock problem of background updating.

6. References

[1] S. Kamijo, Y. Matsushita, K. Ikeuchi, and M. Sakauchi, "Traffic monitoring and accident detection at intersections," IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 2, Jun. 2000, pp. 108-118.
[2] D. Beymer, P. McLauchlan, B. Coifman, and J. Malik, "A real-time computer vision system for measuring traffic parameters," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Juan, PR, Jun. 1997, pp. 496-501.
[3] D. M. Ha, J.-M. Lee, and Y.-D. Kim, "Neural-edge-based vehicle detection and traffic parameter extraction," Image and Vision Computing, vol. 22, 2004, pp. 899-907.
[4] Z. Kim and J. Malik, "Fast vehicle detection with probabilistic feature grouping and its application to vehicle tracking," Proceedings of the IEEE International Conference on Computer Vision, vol. 1, 2003, pp. 524-531.
[5] X.-P. Dai, H.-W. Wang, and H. Huang, "Novel method for background modeling and update," Application Research of Computers, May 2005, pp. 239-241.
[6] R. P. Avery, Y. Wang, and S. G. Rutherford, "Length-based vehicle classification using images from uncalibrated video cameras," Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems, Oct. 2004, pp. 737-742.
[7] S. Gupte, O. Masoud, R. F. K. Martin, and N. P. Papanikolopoulos, "Detection and classification of vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 1, Mar. 2002, pp. 37-47.
[8] A. Elgammal, D. Harwood, and L. Davis, "Non-parametric model for background subtraction," Proceedings of the IEEE ICCV'99 FRAME-RATE Workshop, 1999.
[9] J. Zheng, Y. Wang, N. L. Nihan, and M. Hallenbeck, "Detecting cycle failures at signalized intersections using video image processing," Preprint CD-ROM, 84th Annual Meeting of the Transportation Research Board, Paper 05-2002, Jan. 2005.
[10] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Ft. Collins, CO, Jun. 1999, pp. 246-252.
[11] B. Liu, M.-X. Wei, and H.-Q. Zhou, "A zone-distribution-based adaptive background abstraction algorithm," Pattern Recognition and Artificial Intelligence, vol. 18, no. 3, Jun. 2005, pp. 316-321.

