Blind-Spot Vehicle Detection Using Motion and Static Feature
the similar-speed side vehicles can also be detected. (v) The static and motion features are sequentially treated to detect moving vehicles and reduce the false alarm.
The remainder of this paper is organized as follows. The proposed method is presented in Section II. In Section III, experiments and their results are reported to demonstrate the performance of the proposed methods. Finally, the conclusions are presented in Section IV.

[Fig. 1 (flowchart of the proposed method): initialize and self-test; input images; 2-D histogram analysis; traffic sign removal; pyramidal image construction; edge detection (first image only); multi-resolution optical flow estimation; optical flow pruning and clustering; checks on positive optical flow, underneath shadow, significant edges, and negative optical flow leading to an alarm; the previous image data is preserved for the next frame.]

[Fig. 2 residue: the detection area; the ground detection area (11.4 m); detection regions R1-R4.]
Fig. 2. Definition of detection area. (a) The definition of detection area. (b) Four detection regions of the ground detection area.

$$w(j) = \frac{1}{\mathrm{length}(j)}, \quad (1)$$

where length(j) is the length of the line segment within the ground detection area at vertical image coordinate j, as shown in Fig. 3, where part of length(j) lies outside of the image.
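Eq. (1) downweights optical flows found on image rows where the ground detection area spans more pixels, presumably so that each row contributes comparably regardless of its width in the image. A minimal sketch; the segment lengths below are hypothetical, not taken from the paper's geometry:

```python
def row_weight(length_j):
    """Eq. (1): w(j) = 1 / length(j) for the ground-detection-area
    segment of the given pixel length at image row j."""
    if length_j <= 0:
        raise ValueError("segment length must be positive")
    return 1.0 / length_j

# Hypothetical segment lengths for three rows, far to near:
weights = [row_weight(n) for n in (40.0, 80.0, 160.0)]
```

Longer (nearer) segments thus receive proportionally smaller per-flow weights.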
International Journal of Machine Learning and Computing, Vol. 4, No. 6, December 2014
gradient magnitude.
In general, only four materials appear in the ground detection area: asphalt road surface, lane marks and traffic signals, underneath shadows of vehicles (USV), and vehicle bodies. Moreover, the pixels of each material show similar gray levels and thus form a normal (Gaussian) distribution in brightness. Lane marks and traffic signals on the ground are brighter than the asphalt road surface and have sharp edges. Underneath shadows of vehicles are always darker than the road surface. Vehicle bodies may be brighter or darker than the road surface; they cannot be separated by brightness, but they have different gradient magnitudes. Based on the above facts, we want to find threshold values that separate the four materials in the ground detection area.

The pixels in the ground detection area are represented by a multivariate Gaussian mixture model (GMM) to model their distribution in the 2D histogram. At first, the 2D histogram is smoothed by a bilateral filter to eliminate noise while preserving the edge information. Second, the parameters of the GMM are estimated by means of the Expectation-Maximization (EM) algorithm to maximize the posterior probability [16]. Assume that there are K Gaussian functions and the k-th Gaussian function is represented by $G_k = G(\alpha_k, \mu_k, \Sigma_k)$, $k = 1, 2, \ldots, K$, where $\alpha_k$ is the mixing parameter of the k-th Gaussian function, $0 \le \alpha_k \le 1$, and $\sum_{k=1}^{K} \alpha_k = 1$; $\mu_k$ and $\Sigma_k$ are the 2D mean vector and covariance matrix of the k-th Gaussian function, respectively.

The EM algorithm is an iterative procedure that modifies the model parameters until they reach a stable state. Assume that there are N pixels $x_i$ in the ground detection area and that K Gaussian classes are considered. The model parameters $\mu_k$, $\Sigma_k$, and $\alpha_k$ for the next step are iteratively updated as

$$\mu_k^{t+1} = \frac{\sum_{i=1}^{N} p^t(k \mid x_i)\, x_i}{\sum_{i=1}^{N} p^t(k \mid x_i)}, \qquad
\Sigma_k^{t+1} = \frac{\sum_{i=1}^{N} p^t(k \mid x_i)\,\bigl(x_i - \mu_k^t\bigr)\bigl(x_i - \mu_k^t\bigr)^{\mathsf T}}{\sum_{i=1}^{N} p^t(k \mid x_i)}, \qquad
\alpha_k^{t+1} = \frac{1}{N} \sum_{i=1}^{N} p^t(k \mid x_i), \quad (2)$$

where

$$p^t(k \mid x_i) = \frac{\alpha_k^t\, p^t(x_i \mid k)}{p^t(x_i)} = \frac{\alpha_k^t\, p^t(x_i \mid k)}{\sum_{j=1}^{K} \alpha_j^t\, p^t(x_i \mid j)} \quad (3)$$

is the posterior (conditional) probability that a given pixel $x_i$ belongs to the k-th model $G_k$ at the t-th step, and $p^t(x_i \mid k)$ is the prior (conditional) probability of $x_i$ when $x_i$ is selected from the k-th model.

The recursive refinement is executed until the sum of the model error is less than a pre-defined threshold value. A 2D histogram modeled by a Gaussian mixture model is shown in Fig. 5.

Consequently, the largest group (i.e., the Gaussian function whose $\alpha_i$ is the largest) is selected as the decision group (DG), which is used to find the three threshold values. The threshold values are decided as shown in Fig. 6. If $\mu = (\mu_x, \mu_y)$ and $\sigma = (\sigma_x, \sigma_y)$ are the mean and standard deviation vectors of the largest 2D Gaussian model, respectively, then the three threshold values $T_{imin}$, $T_{imax}$, and $T_e$ are determined as

$$T_{imin} = \mu_x - 2\sigma_x, \qquad T_{imax} = \mu_x + 2\sigma_x, \qquad T_e = \mu_y + 2\sigma_y, \quad (4)$$

where $T_{imin}$ is used to separate the asphalt road surface from the underneath shadows of vehicles (USV), $T_{imax}$ is used to separate the asphalt road surface from the lane marks/traffic signals, and $T_e$ is used to extract the significant edge pixels that will be used to compute valuable optical flows. Before motion and static feature extraction, the pixels belonging to lane marks or traffic signals on the ground are pre-eliminated to reduce false detections.

Fig. 4. The proposed 2D histogram. (a) A blind-spot image with pixels pi and pj in the ground detection area. (b) The corresponding 2D histogram of the ground detection area, where hi and hj are the corresponding points of pi and pj in the 2D histogram.

Fig. 5. The 2D histogram of a ground detection area. (a) An original image. (b) The GMM of the 2-D histogram of (a).

[Fig. 6 residue: intensity axis marked with Timin and Timax; gradient-magnitude axis marked with Te.]
Fig. 6. Threshold values are defined from the largest group of pixels in a 2D histogram.

B. Multi-Resolution Optical Flow Detection
Optical flow is the main feature for detecting approaching vehicles. Multi-resolution optical flows were estimated by the least-squares version of the Lucas-Kanade (L-K) differential optical flow estimation method [17]. The optical flows are estimated at significant edge points, and the edge points are selected based on the pre-determined edge threshold value $T_e$ described in the last section.
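The EM updates of Eqs. (2) and (3) and the threshold rule of Eq. (4) can be sketched as follows. This is a minimal illustration on synthetic (intensity, gradient-magnitude) samples, not the authors' implementation: the cluster parameters, the initialization, and the iteration count are all assumptions, and the bilateral pre-smoothing of the 2D histogram is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic (intensity, gradient magnitude) samples; the cluster
# parameters below are assumptions for illustration only.
road = rng.normal([120.0, 10.0], [8.0, 3.0], size=(800, 2))    # asphalt surface
marks = rng.normal([200.0, 60.0], [10.0, 8.0], size=(200, 2))  # lane marks/signs
x = np.vstack([road, marks])
N, K = len(x), 2

# Assumed initialization (the paper does not state one).
mu = np.array([[110.0, 8.0], [190.0, 55.0]])
cov = np.stack([np.cov(x.T)] * K)
alpha = np.full(K, 1.0 / K)

def gauss(x, m, c):
    """2D Gaussian density p(x | k) evaluated at every sample."""
    d = x - m
    maha = np.einsum('ni,ij,nj->n', d, np.linalg.inv(c), d)
    return np.exp(-0.5 * maha) / (2.0 * np.pi * np.sqrt(np.linalg.det(c)))

for _ in range(50):
    # E-step, Eq. (3): posterior p(k | x_i).
    p = np.stack([a * gauss(x, m, c) for a, m, c in zip(alpha, mu, cov)])
    p /= p.sum(axis=0, keepdims=True)
    # M-step, Eq. (2): re-estimate mu_k, Sigma_k, alpha_k.
    nk = p.sum(axis=1)
    mu = (p @ x) / nk[:, None]
    for k in range(K):
        d = x - mu[k]
        cov[k] = (p[k, :, None] * d).T @ d / nk[k]
    alpha = nk / N

# Decision group: the component with the largest mixing weight, Eq. (4).
dg = int(np.argmax(alpha))
mx, my = mu[dg]
sx, sy = np.sqrt(np.diag(cov[dg]))
t_imin, t_imax, t_e = mx - 2 * sx, mx + 2 * sx, my + 2 * sy
```

With these synthetic clusters, the decision group recovers the dominant road-surface component, and the thresholds bracket its intensity mean at two standard deviations.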
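The least-squares L-K step that underlies the multi-resolution estimation can be sketched as a single-level solve of $[I_x\ I_y]\,v = -I_t$ over a local window. The synthetic quadratic test image and the window size are assumptions for illustration; in the coarse-to-fine pyramid this solve is repeated per level, warping by the propagated flow.

```python
import numpy as np

def lk_flow(img0, img1, row, col, win=7):
    """One least-squares L-K solve: find v minimizing ||A v - b|| with
    A = [Ix Iy] and b = -It over a (win x win) window."""
    h = win // 2
    Iy, Ix = np.gradient(img0)           # spatial gradients (rows, cols)
    It = img1 - img0                     # temporal difference
    sl = (slice(row - h, row + h + 1), slice(col - h, col + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                             # (vx, vy) in pixels per frame

# Synthetic quadratic "image" shifted one pixel to the right, so the
# true flow at the center is (1, 0); central differences are exact for
# a quadratic, which makes the solve essentially exact here.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
img0 = ((xx - 32) ** 2 + (yy - 32) ** 2) / 64.0
img1 = ((xx - 33) ** 2 + (yy - 32) ** 2) / 64.0
vx, vy = lk_flow(img0, img1, 32, 32)
```

On real images the single-level solve only captures small displacements, which is exactly why the paper propagates coarse flows down the pyramid before refining.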
The procedure of optical-flow detection consists of three main stages: estimation, filtering, and clustering. In the estimation stage, the ground detection area is divided into three subareas, as shown in Fig. 7, to calculate three levels of optical-flow vectors. The three subareas are named O1, O2, and O3 from near to far on the ground detection area, with decision boundaries at x = d1, d2, and d3. Three levels of optical flows are estimated for subarea O1, two levels for O2, and only one level for O3. Multi-level optical flows are estimated by a coarse-to-fine strategy, and the coarser optical flows are propagated to the finer level to calculate the finer optical flows, such that a long range of optical flows can all be extracted.

[Fig. 7 residue: subareas O1, O2, O3; boundaries at x = 0, d1, d2, d3; horizontal axis to 319.]
Fig. 7. Three subareas of the ground detection area are defined for estimating multi-level optical flows.

At last, all remaining positive and negative optical flows are clustered into groups by a simple clustering method. The results of the positive and negative optical-flow groups are shown in Fig. 8; in the figure, the red blocks are positive groups and the blue blocks are negative groups.

To concentrate the local detection and to distinguish different threat degrees, a detection mechanism is proposed based on the extracted positive optical flows. In the detection area, we accumulate the number of positive optical flows in each detection region $R_i$, $i = 1, \ldots, 4$. If $R_i$ has more than m optical flows, we say that there is a vehicle candidate in $R_i$ and then increase $a_i$ by one. Initially, $a_i$ is zero. As time progresses, the captured images are sequentially examined one by one. If a vehicle candidate appears in $R_i$, $a_i$ is increased by one until $a_i$ equals six; if there is no vehicle candidate in $R_i$, $a_i$ is decreased by one until $a_i$ equals zero. At any moment, if the condition

$$5 \le a_i + a_{i+1} \le 12,\ i = 1, 2, 3 \quad \text{or} \quad 3 \le a_i \le 6,\ i = 1, 2, 3, 4 \quad (8)$$

is met, one approaching vehicle is said to be detected based on the motion features.
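The counter mechanism around Eq. (8) can be sketched as follows. The per-frame flow counts and the value of m are hypothetical, since m is not specified in this excerpt.

```python
# M_FLOWS (the paper's m) is an assumed value for illustration.
M_FLOWS = 5

def update(a, flow_counts, m=M_FLOWS):
    """One frame: a_i saturates upward at 6 when R_i holds a vehicle
    candidate (more than m positive flows), and downward at 0 otherwise."""
    for i, n in enumerate(flow_counts):
        a[i] = min(a[i] + 1, 6) if n > m else max(a[i] - 1, 0)
    return a

def approaching(a):
    """Eq. (8): 5 <= a_i + a_{i+1} <= 12 for some i in {1,2,3}, or
    3 <= a_i <= 6 for some i in {1,...,4} (1-based indices)."""
    return (any(5 <= a[i] + a[i + 1] <= 12 for i in range(3))
            or any(3 <= ai <= 6 for ai in a))

a = [0, 0, 0, 0]
for _ in range(4):            # a vehicle candidate persists in R1
    update(a, [9, 0, 0, 0])   # hypothetical per-region flow counts
```

The saturating increment/decrement gives the decision temporal hysteresis: a candidate must persist for several frames before an alarm, and a single noisy frame does not clear it.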
adjusted by Eq. (1). Other than the above decision on positive optical flows, the negative optical flows are directly used to reject possible approaching vehicles. The usage of motion features and static features for blind-spot vehicle detection is specially treated as described above and shown in the last part of Fig. 1.

III. EXPERIMENTS

The proposed approach was conducted to demonstrate the performance of blind-spot vehicle detection. All experimental RGB color images were captured from a waterproof digital camera; they are all of size 320×240 pixels. The digital camera was mounted below the right-side wing mirror. The proposed methods were implemented in the C language with the standard ANSI C library on a general PC with an Intel Core 2 Duo P8700 2.53 GHz CPU and the Microsoft Windows 7 operating system. The source code was also compiled to execute on an embedded system (TI DaVinci DM6737) to evaluate the execution performance.

There are a total of 4,268 images in eight weather conditions or environment situations for examination. These conditions and situations are sunny day, cloudy day, tree shadow influence, high-speed approaching vehicle, in tunnel, traffic sign on ground, at night, and heavily rainy day. Samples of detected vehicles in the eight conditions and situations are illustrated in Fig. 9.

The detection performance is quantitatively evaluated by the theory of statistical hypothesis testing. The four relations between the detection results and the actual situation, true positive (TP), true negative (TN), false positive (FP), and false negative (FN), are represented in Table I. From the definitions, the accuracy of detection is given as

$$\text{accuracy}\ (\%) = \frac{TP + TN}{TP + TN + FP + FN} \times 100. \quad (9)$$

The experimental results are summarized in Table II. The overall accuracy is 95.67%. Among the four weather conditions (sunny day, cloudy day, at night, and heavily rainy day), the sunny and cloudy days have the highest accuracies. The road surface is dusky and unclear at night and in heavily rainy days, so all features (optical flow, edge, and underneath shadow) are unstable to detect; thus the accuracy in those conditions is lower.

From the experimental results, we can find that all four environment situations (tree shadow, high-speed approaching vehicle, in tunnel, and traffic sign on ground) do not influence the detection rate of our blind-spot detection.

The proposed system was executed on the mentioned personal computer; the average execution time for one frame is about 28 milliseconds. The numbers of feature points (edge points) in sunny and cloudy days are greater than those at night and in heavily rainy days; thus, the execution time in sunny and cloudy days is slightly greater.

IV. CONCLUSION

A blind-spot detection system was proposed in this study.
The proposed blind-spot detection system consists of four stages: estimation of weather-adaptive threshold values, multi-resolution optical flow detection, static feature detection, and detection decision.

The proposed approach has the following properties: (i) The approach is adaptive to various weather conditions. (ii) The detection is not influenced by building shadows and traffic signals on the ground. (iii) The detection is not influenced by a complicated background, owing to the pre-defined detection area. (iv) Multi-resolution optical flow is used to detect vehicles at far or near distances. (v) Multi-resolution optical flow is able to detect approaching vehicles at various speeds. (vi) Static and motion features were proposed to detect vehicles such that similar-speed side vehicles can also be detected. (vii) The static and motion features were sequentially treated to improve the detection rate and reduce the false alarms.

REFERENCES
[1] R. Sosa and G. Velazquez, "Obstacles detection and collision avoidance system developed with virtual models," in Proc. IEEE Int. Conf. on Vehicular Electronics and Safety, Beijing, China, Dec. 13-15, 2007, pp. 1-8.
[2] T. Mondal, R. Ghatak, and S. R. B. Chaudhuri, "Design and analysis of a 5.88 GHz microstrip phased array antenna for intelligent transport systems," in Proc. Int. Symp. on Antennas and Propagation, Toronto, Ontario, Canada, July 11-17, 2010, pp. 1-4.
[3] J. Teizer, B. S. Allread, and U. Mantripragada, "Automating the blind spot measurement of construction equipment," Automation in Construction, vol. 19, pp. 491-501, 2010.
[4] O. Achler and M. M. Trivedi, "Vehicle wheel detector using 2D filter banks," in Proc. IEEE Intelligent Vehicles Symp., Parma, Italy, June 14-17, 2004, pp. 25-30.
[5] N. Blanc, B. Steux, and T. Hinz, "LaRASideCam - a fast and robust vision-based blindspot detection system," in Proc. IEEE Intelligent Vehicles Symp., Istanbul, Turkey, June 13-15, 2007, pp. 480-485.
[6] E. Y. Chung, H. C. Jung, E. Chang, and I. S. Lee, "Vision based for lane change decision aid system," in Proc. 1st Int. Forum on Strategic Technology, Ulsan, Korea, Oct. 18-20, 2006, pp. 10-13.
[7] M. Krips, J. Velten, A. Kummert, and A. Teuner, "AdTM tracking for blind spot collision avoidance," in Proc. IEEE Intelligent Vehicles Symp., Parma, Italy, June 14-17, 2004, pp. 544-548.
[8] B.-F. Wu, W.-H. Chen, C.-W. Chang, C.-J. Chen, and M.-W. Chung, "A new vehicle detection with distance estimation for lane change warning systems," in Proc. IEEE Intelligent Vehicles Symp., Istanbul, Turkey, June 13-15, 2007, pp. 698-703.
[9] J. Zhou, D. Gao, and D. Zhang, "Moving vehicle detection for automatic traffic monitoring," IEEE Trans. on Vehicular Technology, vol. 56, no. 1, pp. 51-59, 2007.
[10] S. Mota, E. Ros, E. M. Ortigosa, and F. J. Pelayo, "Bio-inspired motion detection for a blind spot overtaking monitor," Int. Journal of Robotics and Automation, vol. 19, no. 4, pp. 190-196, 2004.
[11] S. Mota, E. Ros, J. Díaz, G. Botella, F. Vargas-Martin, and A. Prieto, "Motion driven segmentation scheme for car overtaking sequences," in Proc. 10th Int. Conf. on Vision in Vehicles, Granada, Spain, Sep. 7-10, 2003, pp. 1-6.
[12] N. Ohta and K. Niijima, "Detection of approaching cars via artificial insect vision," Electronics and Communications in Japan, vol. 88, no. 10, pp. 57-65, 2005.
[13] J. D. Alonso, E. R. Vidal, A. Rotter, and M. Mühlenberg, "Lane-change decision aid system based on motion-driven vehicle tracking," IEEE Trans. on Vehicular Technology, vol. 57, no. 5, pp. 2736-2746, 2008.
[14] J. Wang, G. Bebis, and R. Miller, "Overtaking vehicle detection using dynamic and quasi-static background modeling," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Reno, Nevada, June 20-26, 2005, pp. 64-71.
[15] M. A. Sotelo, J. Barriga, D. Fernández, I. Parra, J. E. Naranjo, M. Marrón, S. Alvarez, and M. Gavilán, "Vision-based blind spot detection using optical flow," in Proc. 11th Int. Conf. on Computer Aided Systems Theory, Las Palmas, Spain, Feb. 12-16, 2007, pp. 1113-1118.
[16] M.-H. Yang and N. Ahuja, "Gaussian mixture model for human skin color and its applications in image and video databases," in Proc. IS&T/SPIE Conf. on Storage and Retrieval for Image and Video Databases VII, San Jose, CA, Jan. 23, 1999, pp. 458-466.
[17] J.-Y. Bouguet, Pyramidal Implementation of the Lucas Kanade Feature Tracker: Description of the Algorithm, Intel Microprocessor Research Labs, 2007.

Din-Chang Tseng received his Ph.D. degree in information engineering from National Chiao Tung University, Hsinchu, Taiwan, in June 1988. He has been a professor in the Department of Computer Science and Information Engineering at National Central University, Jhongli, Taiwan, since 1996. He is a member of the IEEE. His current research interests include computer vision, image processing, and virtual reality, especially the topics of computer vision systems for advanced safety vehicles, computer vision techniques for human-computer interaction, and view-dependent multi-resolution terrain modeling.

Chang-Tao Hsu received his B.S. and M.S. degrees in the Department of Biomedical Engineering from Chung Yuan Christian University, Taiwan, in 1996 and 1998, respectively. He is currently working toward his Ph.D. degree in the Department of Computer Science and Information Engineering at National Central University, Jhongli, Taiwan. His research interest is real-time image processing in intelligent transportation systems.

Wei-Shen Chen received his B.S. degree in the Department of Computer Science and Information Engineering from Yuan Ze University, Jhongli, Taiwan, in 2010, and his M.S. degree in the Institute of Computer Science and Information Engineering from National Central University, Jhongli, Taiwan, in 2012. He is now working at VIA Technologies Co. as a software engineer.