
RADIOENGINEERING, VOL. 17, NO. 4, DECEMBER 2008

Application of a Cumulative Method for Car Borders Specification in Image

Martin DOBROVOLNÝ, Pavel BEZOUŠEK, Martin HÁJEK

Faculty of Electrical Engineering and Informatics, Univ. of Pardubice, Studentská 95, 532 10 Pardubice, Czech Republic

[email protected], [email protected], [email protected]

Abstract. The paper deals with low-level car detection methods in images. Car detection is an integral part of all intelligent car cruise systems. This article describes a method for the suppression of the edges produced by classical edge operators, based on application of the cumulative method. The designed method uses the non-stationary property of the picture background in the time realizations of the image signal.

Keywords
Image processing, cumulative method, Intelligent Driver Systems (IDS), car detection, edge detection.

1. Intelligent Driver Systems
Intelligent driver systems (IDS) will certainly contribute to enhanced safety on roads in the future, as well as to a further reduction of the negative consequences of accidents. One part of an IDS is a subsystem based on an on-board camera observing the car vicinity. This system with a built-in camera represents an intelligent fellow-rider, monitoring the driver, the car and the situation around the vehicle. It also gives navigation information to the driver and informs him about critical situations.

A lot of partial problems have to be solved in this area. One of the most important is the great complexity of the captured traffic scene. The usually applied edge detection methods [1], [2] lead to over-segmentation. The whole problem is further complicated by the non-stationary character of the background [3]. The standard methods applied in image analysis with a non-stationary background use, for instance, optical flow analysis [5], [6], but these methods often fail in the case of a highly complicated image. The optical flow linearly decreases towards the axonometric center of the image, where it completely vanishes [4]. Another approach to segmentation is often based on a suitable color conversion: the input image is converted into a convenient color space (mostly HSV), which suppresses the brightness sensitivity [7]. These methods are unusable in infra-vision systems [8], [1].

In the next part of the article, the original method for suppression of the edges produced by standard edge detectors, based on application of the cumulative method, is described in detail. At the input of the presented method a vector of approximate car coordinates is used, and at the output we get an image with only the car outlines depicted while the other edges are suppressed.

2. The Cumulative Method for Car Outlines Search
Traffic scene objects can be split into several groups according to the speed of the intensity function variations in equidistantly captured pictures. The main presumption is a difference between a small optical flow in the area of the car and a massive optical flow in the background pixels. Consequently, only a minimal optical flow inside the outlines of the moving object and its constant size can be assumed. Due to the movement of the camera carried by a moving car, the background is also non-stationary. For this reason the background, including the road surface and other objects, embodies a massive optical flow and fast size variations in time due to axonometric distortion. But these assumptions are not fully satisfied. For instance, the intensity function progression inside the car outlines is affected by variable lighting conditions during the car movement, and the size variations of the car picture depend on the relative speed of the object and the car with the camera. The background optical flow is variable, too: due to the axonometric distortion the flow is zero at the middle of the horizon, whereas at the edges of the scene it reaches its maxima.

If suitable transforms suppressing some of these effects are applied, it is possible to separate the objects in successive images, emphasize the car borders and at the same time suppress the background by application of the cumulative method.

2.1 The Method Description
The method starts with an initial frame I_n and with coordinates S = {x_1, y_1, …, x_p, y_p} of p potential objects obtained by one of the car-detection methods described in [4].
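As a rough illustration of this initialization step, the sketch below crops a square template T_i around an approximate object centre and derives the search window W_i of Eq. (1), clamped to the frame bounds. This is not the authors' code: the function names, the clamping behaviour and the synthetic frame are assumptions made for the example.

```python
import numpy as np

def crop_template(frame, cx, cy, lam):
    # Template T_i = (x_i ± λ_i, y_i ± λ_i) around the approximate centre.
    return frame[cy - lam:cy + lam + 1, cx - lam:cx + lam + 1]

def search_window_bounds(cx, cy, lam, n, ds_max, shape):
    # Search window W_i = (S_i ± n·ΔS_max) from Eq. (1): the template
    # half-size grown by the largest expected shift, clamped to the frame.
    r = lam + n * ds_max
    h, w = shape
    return (max(0, cx - r), min(w, cx + r + 1),
            max(0, cy - r), min(h, cy + r + 1))

# usage on a synthetic 100x100 frame
frame = np.zeros((100, 100), dtype=np.uint8)
T = crop_template(frame, cx=50, cy=40, lam=10)            # 21x21 template
x0, x1, y0, y1 = search_window_bounds(50, 40, lam=10, n=1, ds_max=5,
                                      shape=frame.shape)
print(T.shape, (x0, x1, y0, y1))                          # (21, 21) (35, 66, 25, 56)
```

Note how the window grows with the number of frames n, exactly as Eq. (1) prescribes: the further ahead the template is searched, the larger the admissible shift.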
For each object O_i, i = 1…p, it is then necessary to find the equivalent image segments in the next n frames with the relevantly shifted objects. For this reason the template of size T_i = (x_i ± λ_i, y_i ± λ_i) is stripped from the initial frame, where λ_i is the smallest estimated neighborhood radius of object i and x_i, y_i are the inaccurate coordinates of the object center. This template is searched for in at most the next k frames, up to I_{n+k}. If the maximum shift of the object O_i from the initial point in the image plane can be restricted to ΔS_max, then the selective scanning window can be defined as:

W_i = (S_i ± n·ΔS_max) .    (1)

The search is then realized only in the window W_i, which increases the computation speed.

In Fig. 1 an example of the input frame I_n is presented, including the first object axis S_1 (green). In the figure the template T_1 = (x_1 ± λ_1, y_1 ± λ_1) is also depicted in red; it will be searched for in the search window W_1 (yellow) in at most the next k frames.

Fig. 1. The example input image with the depicted center and the search window W_i.

The search for the position of the template T_i in the window W_i with proportions (u, v) is usually realized by a two-dimensional correlation algorithm [9]. It is possible to partially improve and optimize this calculation with regard to the computational speed. When the mean value of the images I_{n+k} is removed, the normalization can be left out (only the relative maximum value r_i of the correlation coefficient is of interest), and it is then possible to simplify the computation:

r_i^corr2(u, v) = Σ_x Σ_y I_i(x + u, y + v) · T_i(x, y) .    (2)

The result of relation (2) is the matrix R_i(u, v) of correlation coefficients between the search window W_i segments and the template T_i from the initial image (Fig. 2, left). The best template T_i position is then localized by a search for the global maximum in the matrix R_i(u, v). It is convenient to apply a Gaussian 2D low-pass filter before the search starts, to reduce the possibility of sticking in some local extreme. During the search for the positions of the template T_i in k frames by (2), as many as k·(u−x)·(v−y)·x·y multiplications have to be performed, which slows down the computation. A speed improvement may be achieved by reducing the size of the search window W_i, which is however limited by condition (1), or by reducing the size of the template T_i, which restrains the position accuracy.

A significant speed improvement without the undesirable effects connected with the size reduction of the matrices W_i or T_i may be achieved by exchanging the standard two-dimensional correlation for the popular SAD¹ algorithm [10]:

r_i^SAD(u, v) = Σ_x Σ_y |I_i(x + u, y + v) − T_i(x, y)| .    (3)

Due to its simplicity the SAD algorithm is extremely fast and effective, and it can be easily implemented on most of today's processors. The main finesse of this algorithm is the substitution of a number of multiplications by the same number of additions, which require a much shorter computation time. A further advantage of the SAD algorithm over the two-dimensional correlation is its insensitivity to the mean value of the W_i and T_i matrices, which eliminates the necessity of the mean value computation.

The result of (3) is again a matrix R_i(u, v), similar to relation (2). The best template T_i position is found by a search for the global minimum in the matrix R_i(u, v), and again the Gaussian 2D low-pass filter is applied before the search starts.

Fig. 2. Examples of the matrix R_i(u, v) obtained by two-dimensional correlation and by the SAD algorithm for I_{n+3} (blue marks the minimum, red marks the maximum).

In Fig. 2 we may compare the result matrix r_i^corr2(u, v) with the marked maximum and the matrix r_i^SAD(u, v) with the marked minimum for the frame I_{n+3} following the initial frame in Fig. 1. We may see that the SAD algorithm achieves the same accuracy with a more than six times shorter² computation time.

¹ SAD – Sum of Absolute Differences.
² For the window size W_i = 151 × 101 pixels and the template size T_i = 21 × 21 pixels.
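The exchange of the correlation (2) for the SAD criterion (3) can be illustrated with a brute-force matcher. This is a sketch only: a real implementation would use an optimized SAD routine, and the nested Python loops here merely make the definition of Eq. (3) explicit; the data are synthetic.

```python
import numpy as np

def match_sad(window, template):
    # Eq. (3): r_i^SAD(u, v) = sum_x sum_y |I_i(x+u, y+v) - T_i(x, y)|.
    # Unlike the correlation (2), the best position is the global MINIMUM.
    th, tw = template.shape
    t = template.astype(np.int64)
    out_h = window.shape[0] - th + 1
    out_w = window.shape[1] - tw + 1
    r = np.empty((out_h, out_w), dtype=np.int64)
    for v in range(out_h):
        for u in range(out_w):
            patch = window[v:v + th, u:u + tw].astype(np.int64)
            r[v, u] = np.abs(patch - t).sum()
    return r

# hide a 10x10 patch of a random window and recover its offset
rng = np.random.default_rng(0)
window = rng.integers(0, 256, size=(40, 60), dtype=np.uint8)
template = window[12:22, 25:35].copy()            # true position: v=12, u=25
r = match_sad(window, template)
v_best, u_best = np.unravel_index(np.argmin(r), r.shape)
print(u_best, v_best)                             # 25 12
```

Because an exact copy of the template yields a SAD of zero, the global minimum recovers the true offset without any mean removal — the insensitivity to the mean value mentioned above.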
When the maximum realistic object shift speed in the image plane between the frames is assumed, the search window of objects near the image edges can fall partially out of the image due to the real 3D-scene motion. This situation is typical particularly in the case of arriving or turning cars. In such a case it is useful to restrict the maximum change of the coordinates by the following condition:

S_i^new = (S_i ± n·ΔS_max),  S_i^new ∈ (size(I_{n+k}) − ξ) ,    (4)

where ξ represents the protective zone at the I_{n+k} image edge. The protective zone allows a partial shift of the search window W_i outside the I_{n+k} image coordinates. In this case only a part of the searched object is found in the search window, and the image has to be extended by pixels with null brightness. The further search in the next I_{n+k} images is then stopped (Fig. 3, the last snap I_{n+4}).

The result of the procedure described above is a matrix of images representing objects shifted against their starting positions over the non-stationary background. In Fig. 3 the result of the search for the first object from Fig. 1 is presented. The object centre (marked by the red cross) always stays in the center of the newly found window W_1. The shifting border ξ, which limits the calculation process at the image edges, is indicated by the green line. The situation with the object extruded partially out of the image is presented in the last frame (I_{n+4}); in this case the calculation process is stopped (condition (4) is not satisfied).

Fig. 3. The resulting series of a tracked object.

2.2 The Scale of the Found Objects Normalization

Fig. 3 represents the object moving in the image plane. The realistic movement of objects proceeds in a 3D space and comprises approaching as well as receding objects. For this reason the picture sizes of the objects are changing.

The object size should then be normalized to the size from the initial frame. The normalization of the I_{n+k} frame is realized by creating a series of images with variable zoom in the range from z_min to z_max (typically 0.5–2). These are compared with the initial image:

Z_i^SAD(z) = Σ_x Σ_y |I_i^n(x, y) − zoom(I_i^{n+k}, z)(x, y)| .    (5)

In Fig. 4 the response of the function Z(z) over z_min–z_max is presented, showing a minimum for images with an equivalent scale. Application of this procedure to all the images from Fig. 3 leads to the series J_i(1..k) of identical objects with the same shift and scale in the image (Fig. 5).

Fig. 4. The example of the function Z_i^SAD for the frame I_{n+3}.

Fig. 5. The resulting images with the normalized size of objects.

2.3 The Cumulative Method Application

After application of the previous operations it is possible to use a cumulative method on the image signal to enhance the pixels representing the cars and their borders. The cumulative addition method is based on the relatively simple principle of the enhancement of correlated signals in a non-correlated background. The maximum effect can be obtained in the case of ideally correlated object pixels in a totally uncorrelated background (white noise) [9]. In our case the ideal signal is represented by the highly correlated car pixels encased in the non-correlated pixels of the moving background.

But neither of these presumptions is fully satisfied in the real signal. Though the shape and the size of the objects are equalized after the previous transforms, the car pixels have some properties reducing the pixel correlations:
• The capture system produces noise which negatively affects the image signal.
• The car pixel intensity is affected by variable local and global lighting conditions.
• The previous transforms add complex and hardly describable degradations to the signal (change of the scale followed by the filtration, image extensions, ...).

The background pixels do not have a purely white-noise character either. When the car is not moving and the background is stationary, the cumulative method cannot be used, because after fixation of the moving objects by the above-described methods there is only a small difference between the car pixels and the background pixels. If the vehicle is moving, then we may suppose a massive optical flow also at the background pixels.
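The scale normalization of Section 2.2 can be sketched as follows. The paper does not specify the zoom operator, so a centre-anchored nearest-neighbour resampling is assumed here; Eq. (5) is evaluated over a grid of zoom factors, and a smooth synthetic gradient stands in for a tracked object.

```python
import numpy as np

def zoom(img, z):
    # Assumed stand-in for the paper's zoom operator: nearest-neighbour
    # resampling about the image centre, output kept at the input size.
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    sy = np.clip(((yy - h / 2) / z + h / 2).round().astype(int), 0, h - 1)
    sx = np.clip(((xx - w / 2) / z + w / 2).round().astype(int), 0, w - 1)
    return img[sy, sx]

def best_scale(initial, later, z_values):
    # Eq. (5): Z_i^SAD(z) = sum |I^n - zoom(I^{n+k}, z)|; the minimum
    # marks the zoom that matches the later frame to the initial one.
    scores = [np.abs(initial - zoom(later, z)).sum() for z in z_values]
    return z_values[int(np.argmin(scores))]

# a smooth radial gradient shrunk by 0.8 is best re-matched near z = 1/0.8
yy, xx = np.mgrid[0:64, 0:64]
initial = 255.0 - 4.0 * np.hypot(xx - 32, yy - 32)
later = zoom(initial, 0.8)                     # the object appears smaller
zs = np.arange(0.5, 2.01, 0.05)                # the typical range 0.5 - 2
z_best = float(best_scale(initial, later, zs))
print(f"best zoom: {z_best:.2f}")
```

The minimum of the score curve plays the role of the dip in Fig. 4: it singles out the zoom factor at which the two frames have an equivalent scale.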
Due to the car fixation by the previously described methods, in the individual frames the entire optical flow is concentrated mainly in the background pixels. With the growing speed difference between the car and the background the correlation falls and the method becomes effective.

Before the cumulative method is applied, the mean value of each signal realization is removed (Fig. 5). In the next step the individual realizations are added to the cumulative signal. Due to the fact that the signal is not ideal, the signal-to-noise ratio (SNR) enhancement decreases with an increasing number of cumulated frames. That is why the SNR parameter is evaluated in each step, and if the SNR enhancement is less than a defined limit α, the addition process is stopped. In the same way special situations are handled, for instance when a change of the light conditions causes the further addition not to increase the output signal quality substantially. The SNR parameter is represented by the number of edges acquired by an edge operator in the region of interest (in the vicinity of the middle of the chosen window W_i) against the number of edges in the cumulative image S created from the J_i^n images:

SNR_i = E[edge(W_i)] / E[edge(S)] .    (6)

The result of this procedure is the cumulative image S with greatly highlighted car pixels. The resulting accumulated image of the first object from Fig. 1³ is shown in Fig. 6. In the figure the car body segments acquired by the previous process can be easily recognized. The other parts of the car, like the chassis, the front window, etc., are only minimally emphasized due to the variations in the lighting. The spacing between the highlighted parts and the background is at minimum about one order of magnitude. The sky segment is also considerably correlated and the optical flow there is minimal; using this fact, the sky can be very well suppressed with the sky mask obtained by the clustering methods described in [4]. Since the sky does not contain abrupt variations of the intensity function, it does not cause over-segmentation of the image.

2.4 The Outlines Extraction from the Summary Image

In Fig. 6 the procedure of the car outlines extraction is presented. The horizontal and vertical car outlines are extracted separately (Fig. 6, 1st column). In the first step the Sobel edge operator with forced orientation extracts the edges. There are artifacts in the picture made by the summation of the extended images (Fig. 5). To suppress over-segmentation, a correction of the boundary artifacts is applied first (Fig. 6, 2nd column). Then a morphological opening using structural elements in the shape of 1×3 and 3×1 pixel vectors is applied to the images of the vertical and horizontal edges (Fig. 6, 3rd column), removing the edges of an undesirable shape. In the image produced this way the segment with the greatest number of edges is chosen. In this segment the gravity centre is calculated (Fig. 6, 4th column), representing the newly specified vertical and horizontal center of the car. If vertical and horizontal object symmetry is assumed, then it is possible to flip the object image over the centre and add it up. After this the car boundaries can be determined with maximum accuracy (Fig. 6, 5th and 6th columns).

Fig. 6. The boundaries extraction from the summary image.

In this way the new corrected center and shape of the car are obtained. The obtained outlines of the car represent only the areas highlighted by the cumulative method. In the real 3D scene the boundary parts of the cars have more variable lighting during the car motion due to the carriage curvature, and that is why they are not highlighted by the previous method as much as the inner parts. This fact is corrected by a multiplicative factor η (typically η = 1.2), which magnifies the predicted boundary range (Fig. 6, red box) to a better representation of the real car outlines.

Fig. 7. The car outlines detection by the cumulative method.

³ For a better contrast, a color map emphasizing the maxima (red) and the minima (blue) of the applied signal is chosen.
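The cumulative addition with the SNR-based stopping rule of Section 2.3 can be sketched as below. Two simplifications are mine, not the paper's: the SNR of Eq. (6) is replaced by a mean absolute-value ratio between the object region and its surroundings instead of an edge count, and the "car" is a synthetic constant patch repeated over fresh noise realizations.

```python
import numpy as np

def cumulative_enhance(frames, roi, alpha=0.01):
    # Add mean-removed realizations; correlated object pixels reinforce
    # while the uncorrelated background averages out. Stop when the SNR
    # gain of one more frame drops below the limit alpha.
    y0, y1, x0, x1 = roi
    acc = np.zeros_like(frames[0], dtype=float)
    prev_snr, used, snr = 0.0, 0, 0.0
    for f in frames:
        acc += f - f.mean()                 # remove the mean value first
        used += 1
        mask = np.ones(acc.shape, dtype=bool)
        mask[y0:y1, x0:x1] = False
        inside = np.abs(acc[y0:y1, x0:x1]).mean()
        outside = np.abs(acc[mask]).mean()
        snr = inside / max(outside, 1e-12)  # stand-in for Eq. (6)
        if used > 1 and snr - prev_snr < alpha:
            break                           # further addition stops helping
        prev_snr = snr
    return acc, used, snr

# a constant "car" patch over fresh background noise in every frame
rng = np.random.default_rng(2)
car = np.zeros((40, 40))
car[15:25, 15:25] = 50.0
frames = [car + rng.normal(0.0, 20.0, size=car.shape) for _ in range(10)]
acc, used, snr = cumulative_enhance(frames, roi=(15, 25, 15, 25))
print(f"frames used: {used}, contrast ratio: {snr:.1f}")
```

With ideal data the contrast grows roughly with the square root of the number of cumulated frames, which is why the per-frame gain eventually falls under α and the addition is stopped.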
3. Conclusion
Fig. 7 presents examples of the detection of other objects (red outlines) together with the summary images (3rd column) acquired by the above-described techniques. In the top line of the images in Fig. 7 the second object from Fig. 1 is presented; in the bottom line an object moving on a slightly right-hand curved road is presented. To demonstrate the contribution of the procedure, the images created with the Sobel dual-direction edge operator on the original and on the cumulative picture (2nd column) are displayed. The pictures show the great difference between the number of edges in the car area and the number of edges in the suppressed neighborhood of the car (4th column).

In both cases the leading cross (green) indicating the car centre is also depicted. The position of the cross is successively corrected and the car contours are calculated using the above-described procedures. With these procedures the car borders are correctly specified even in the case of highly inaccurate input centre coordinates.

Further examples of object detections are presented in Fig. 8. For clarity the whole input picture I_1 including the search window W_1 is presented. The new centers of the objects are indicated by green lines and their outlines by red lines.

Fig. 8. The other objects detection examples.

The method described in this article effectively filters out the undesirable edges produced by edge operators in the area of the background pixels. The detection of the cars is robust even in the case of a very complex image. The efficiency of the described method increases with the increasing difference of the object speed against the non-stationary background. An advantage of the method is the possibility to change the accuracy/speed ratio by scaling the search window W_i and the search template T_i. The computation time was improved using the SAD algorithm while the accuracy does not change substantially.

Acknowledgements
The research described in the paper was financially supported by the Czech traffic department under grant No. CG743-037-520.

References
[1] BETKE, M., HARITAOGLU, E., DAVIS, L. Multiple vehicle detection and tracking in hard real time. In IEEE Intelligent Vehicles Symposium, 1996, pp. 351–356.
[2] ZHAO, T., NEVATIA, R. Car detection in low resolution aerial images. In IEEE Int. Conf. on Computer Vision, 2001, pp. 710–717.
[3] YING, R., CHIN-SENG, C., YEONG, K. Motion detection with nonstationary background. Machine Vision and Applications, Springer-Verlag, 2003.
[4] DOBROVOLNY, M. On-road Vehicle Detection Based on Image Processing. Doctoral dissertation, University of Pardubice, 2008.
[5] CHRISTOPHE, P. B. Discrete wavelet analysis for fast optic flow computation. Applied and Computational Harmonic Analysis, July 2001, vol. 11, no. 1, pp. 32–63.
[6] BRUHN, A., WEICKERT, J., SCHNÖRR, C. Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods. International Journal of Computer Vision, 2005, vol. 61, no. 3, pp. 211–231.
[7] CHATURVEDI, P. Real-time identification of driveable areas in semi-structured terrain. In Proc. SPIE, vol. 4364, Unmanned Ground Vehicle Technology III, 2001, pp. 302–312.
[8] ANDREONE, L., ANTONELLO, P. C., BERTOZZI, M., BROGGI, A., FASCIOLI, A., RANZATO, D. Vehicle detection and localization in infra-red images. In The IEEE 5th International Conference on Intelligent Transportation Systems, Singapore, 3–6 September 2002, pp. 141–146.
[9] CASTLEMAN, K. Digital Image Processing. Prentice Hall, 1995. ISBN 0-13-211467-4.
[10] FORSYTH, D., PONCE, J. Computer Vision: A Modern Approach. Prentice Hall, 2002. ISBN 0-13-085198-1.

About Authors...
Martin DOBROVOLNÝ was born in the Czech Republic in December 1976. He received his M.Sc. (2003) and Ph.D. (2008) degrees from the Jan Perner Transport Faculty, University of Pardubice, Czech Republic. He is interested in image and signal processing and computer networks.

Pavel BEZOUŠEK was born in 1943. He received his M.Sc. degree from the CTU, Prague in 1966 and his Ph.D. degree from the same university in 1980. He was with the Radio Research Inst. of Tesla Pardubice from 1966 till 1994, where he was engaged in microwave circuits and systems design. Since then he has been with the University of Pardubice, now at the Inst. of Electrical Engineering and Informatics. Presently he is engaged in radar systems design.

Martin HÁJEK was born in the Czech Republic in January 1978. He received his M.Sc. degree from the Jan Perner Transport Faculty, University of Pardubice in 2003. Since 2003 he has been a Ph.D. student at the same faculty. His research interests include digital signal processing, applications of FMCW radars and implementing various methods of signal processing into real hardware.
