
International Journal of Machine Learning and Computing, Vol. 4, No. 6, December 2014

Blind-Spot Vehicle Detection Using Motion and Static Features

Din-Chang Tseng, Member, IACSIT, Chang-Tao Hsu, and Wei-Shen Chen

Abstract—When driving a vehicle on a road, a driver who wants to change lanes must glance at the rear and side mirrors of the vehicle and turn his head to scan for approaching vehicles on the side lanes. However, the view scope of this behavior is limited; an invisible blind-spot area remains. To avoid possible traffic accidents during lane changes, we here propose a lane-change assistance system. Two cameras are mounted under the side mirrors of the host vehicle to capture rear-side-view images for detecting approaching vehicles. The proposed system consists of four stages: estimation of weather-adaptive threshold values, optical flow detection, static feature detection, and detection decision. The proposed system can detect side vehicles at various approaching speeds; moreover, it can also adapt to varying weather conditions and environment situations. In experiments with 14 videos covering eight different environments and weather conditions, the results reveal a 96% detection rate with few false alarms.

Index Terms—Advanced driver assistance system, blind spot detection, optical flow, underneath shadow features.

Manuscript received May 12, 2014; revised June 28, 2014. This work was supported in part by the National Science Council, Taiwan under the grant of the research project NSC 100-2221-E-008-115-MY3. Din-Chang Tseng, Chang-Tao Hsu, and Wei-Shen Chen are with the Institute of Computer Science and Information Engineering, National Central University, Jhongli, Taiwan 32001 (e-mail: [email protected], [email protected], [email protected]). DOI: 10.7763/IJMLC.2014.V4.465

I. INTRODUCTION

In recent decades, the rapidly increasing number of vehicles, combined with factors of road situation, driving environment, and human attention, has produced a large number of traffic accidents and casualties. If there were a mechanism that helped drivers perceive the road situation and driving environment and then provided useful information to warn them, such dangers could be avoided. Advanced driver assistance systems (ADAS) are actively developed to help drivers avoid possible dangers and to assist them when driving in special environments [1]. A blind spot detection (BSD) system is an ADAS mechanism for assisting the driver in changing lanes: if there are nearby vehicles on the destination lane into which the host vehicle wants to change, the system warns the driver to abort the lane change.

A beneficial BSD system should be accurate, stable, fast, and inexpensive. Two kinds of BSD systems have been developed to detect overtaking vehicles: radar-based systems and vision-based systems. Radar-based systems are stable and perform well, but their size is large, their detection function is limited, and their price is very high [1]-[3]. Vision-based systems capture a sequence of images for detection; moreover, both static and motion information can be used in the detection process. Static-feature detection methods extract features such as corner points, edge points, and the underneath shadows of vehicles to detect possible approaching vehicles [4]-[9]. Motion-detection methods use motion information such as optical flow to detect approaching vehicles on a side lane at various speeds [10]-[12].

The main challenge for vision-based detection methods is adapting to various weather conditions and environment situations. Static methods may raise false alarms on static objects, building shadows, and traffic signals on the ground. Optical-flow vectors provide motion information for detecting relatively moving vehicles; however, optical-flow features alone cannot detect similar-speed side vehicles [13], [14]. Moreover, optical flows of different lengths may not always be detected over the long distance range of approaching vehicles on the side lane.

Sotelo et al. [15] proposed a vehicle detection method using optical flow and a Kalman filter. First, they detected edge points; second, these edge points were clustered into groups. Third, they calculated the optical-flow vectors of the edge points in each group. Fourth, the optical flows of the groups were used to generate vehicle candidates. Finally, a support vector machine (SVM) algorithm was used to recognize whether a vehicle candidate is the front part of a vehicle. The detection rate of the method depends on the training samples of the SVM. An approaching vehicle has different appearances at various distances, and this fact influences the recognition rate of the SVM algorithm. Moreover, similar-speed vehicles require an extra Kalman filter to decide; the process is time-consuming and more complicated.

To improve the performance of BSD systems, we here separately use static and motion features to detect approaching vehicles and then combine the detection results to improve the detection rate and reduce the false alarm rate. The proposed blind-spot detection system is depicted in Fig. 1. The system consists of four stages: estimation of weather-adaptive threshold values, multi-resolution optical flow detection, static feature detection, and detection decision.

The proposed approach possesses the following properties. (i) We specially treat the threshold values such that the proposed system is able to adapt to various weather conditions. (ii) The specially treated threshold values also make the proposed system insensitive to building shadows and traffic signals on the ground. (iii) We use multi-resolution optical flow to extract moving vehicles such that vehicles can be detected at far or near distances. (iv) We combine static and motion features to detect vehicles such that similar-speed side vehicles can also be detected. (v) The static and motion features are treated sequentially to detect moving vehicles and reduce false alarms.

The remainder of this paper is organized as follows. The proposed method is presented in Section II. In Section III, experiments and their results are reported to demonstrate the performance of the proposed methods. Finally, conclusions are presented in Section IV.
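To preview how the four stages interact, the predicate below gives one possible reading of the decision flow in Fig. 1 and Section II-D; the boolean inputs stand for the stage outputs described later, and this sketch is not the authors' C implementation.

```python
def blind_spot_alarm(positive_flow: bool, negative_flow: bool,
                     shadow_significant: bool, edges_significant: bool) -> bool:
    """Combine the stage outputs into the final alarm decision.

    negative_flow:      leaving-vehicle motion evidence (vetoes the alarm)
    positive_flow:      approaching-vehicle motion evidence, condition (8)
    shadow/edges flags: static cues from Section II-C
    """
    if negative_flow:          # leaving vehicles directly reject the alarm
        return False
    return positive_flow or shadow_significant or edges_significant
```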
[Fig. 1. The procedure of the proposed method (flowchart): initialize and self-test; preserve the previous image data on interrupt; input images; 2-D histogram analysis (first image only); traffic-sign removal; pyramidal image construction; edge detection; multi-resolution optical-flow estimation; optical-flow pruning and clustering; positive/negative optical-flow tests; underneath-shadow and significant-edge detection; alarm decision.]
II. THE PROPOSED METHOD
The procedure of the proposed system is presented sequentially as follows. Before vehicle detection, the detection area in images is defined as shown in Fig. 2(a) to avoid interference from objects in the background. Moreover, to concentrate the local detection and to distinguish different threat degrees, the ground detection area is further divided into four far-to-near regions R1, R2, R3, and R4, as shown in Fig. 2(b); the ground detection area is included within the detection area to specify targets on the ground.

[Fig. 2. Definition of the detection area. (a) The detection area and the ground detection area (11.4 m long). (b) The four detection regions R1-R4 of the ground detection area.]
At different distances from the host vehicle, side vehicles have different sizes in the images; this is called the perspective effect. At a long distance, a side vehicle appears small and is detected unstably; if we relax the threshold value, many false alarms may then arise. That is, the perspective effect always influences the detection rate. To improve the performance, we define a distance weighting function to compensate for the perspective effect. If j is the vertical coordinate of the image, the distance weighting function w(j) is defined as

w(j) = 1 / length(j),   (1)

where length(j) is the length of the line segment within the ground detection area at vertical image coordinate j, as shown in Fig. 3; part of length(j) may lie outside the image.

[Fig. 3. The definition of the distance weighting function: length(j) is measured along image row j.]
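As a small illustration, the weighting of Eq. (1) can be tabulated once per camera setup. The sketch below is a minimal sketch, assuming the ground detection area is supplied as a binary mask; it precomputes w(j) for every image row.

```python
import numpy as np

def distance_weights(ground_mask: np.ndarray) -> np.ndarray:
    """Precompute w(j) = 1 / length(j) for each image row j (Eq. (1)).

    ground_mask: boolean H x W array marking the ground detection area.
    Rows whose segment falls entirely outside the area get weight 0.
    """
    length = ground_mask.sum(axis=1).astype(np.float64)  # length(j) in pixels
    weights = np.zeros_like(length)
    valid = length > 0
    weights[valid] = 1.0 / length[valid]
    return weights
```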

A. Estimation of Weather-Adaptive Threshold Values

In this study, we use three threshold values to judge edge pixels, lane marks, and the underneath shadows of vehicles (USV); however, the threshold values vary with the weather conditions. Here we provide a 2D-histogram analytic method to adaptively extract the three threshold values.

A (1D) intensity histogram is the number distribution of pixel intensities in an image. The proposed 2D histogram is similar to the traditional 1D histogram but has two independent variables: one is intensity and the other is gradient magnitude; an example is shown in Fig. 4(b). The origin of the 2D histogram is at the lower-left corner; the horizontal axis represents the pixel intensity, and the vertical axis represents the gradient magnitude. The brightness of a point in the 2D histogram represents the number of pixels with a specific intensity and gradient magnitude.

[Fig. 4. The proposed 2D histogram. (a) A blind-spot image with pixels p_i and p_j in the ground detection area. (b) The corresponding 2D histogram of the ground detection area, where h_i and h_j are the points corresponding to p_i and p_j.]
In general, only four materials appear in the ground detection area: the asphalt road surface; lane marks and traffic signals; the underneath shadows of vehicles (USV); and vehicle bodies. Moreover, the pixels of each material show similar gray levels and thus form a normal (Gaussian) distribution in brightness. Lane marks and traffic signals on the ground are brighter than the asphalt road surface and have sharp edges. Underneath shadows of vehicles are always darker than the road surface. Vehicle bodies may be brighter or darker than the road surface; they cannot be separated by brightness, but they have different gradient magnitudes. Based on the above facts, we want to find threshold values that separate the four materials in the ground detection area.
The pixels in the ground detection area are represented by a multivariate Gaussian mixture model (GMM) to model their distribution in the 2D histogram. First, the 2D histogram is smoothed by a bilateral filter to eliminate noise while preserving the edge information. Second, the parameters of the GMM are estimated by means of the Expectation-Maximization (EM) algorithm to maximize the posterior probability [16]. Assume that there are $K$ Gaussian functions and that the $k$-th Gaussian function is represented by $G_k = G(\pi_k, \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)$, $k = 1, 2, \ldots, K$, where $\pi_k$ is the mixing parameter of the $k$-th Gaussian function, $0 \le \pi_k \le 1$ and $\sum_{k=1}^{K} \pi_k = 1$, and $\boldsymbol{\mu}_k$ and $\boldsymbol{\Sigma}_k$ are the 2D mean vector and covariance matrix of the $k$-th Gaussian function, respectively.

The EM algorithm is an iterative procedure that updates the model parameters toward a stable state. Assume that there are $N$ pixels $\mathbf{x}_i$ in the ground detection area and that $K$ Gaussian classes are considered. The model parameters $\pi_k$, $\boldsymbol{\mu}_k$, and $\boldsymbol{\Sigma}_k$ for the next step are iteratively updated as

$$\boldsymbol{\mu}_k^{t+1} = \frac{\sum_{i=1}^{N} p^t(k \mid \mathbf{x}_i)\,\mathbf{x}_i}{\sum_{i=1}^{N} p^t(k \mid \mathbf{x}_i)}, \qquad \boldsymbol{\Sigma}_k^{t+1} = \frac{\sum_{i=1}^{N} p^t(k \mid \mathbf{x}_i)\,\left\|\mathbf{x}_i - \boldsymbol{\mu}_k^{t}\right\|^2}{\sum_{i=1}^{N} p^t(k \mid \mathbf{x}_i)}, \qquad \pi_k^{t+1} = \frac{1}{N} \sum_{i=1}^{N} p^t(k \mid \mathbf{x}_i), \tag{2}$$

where

$$p^t(k \mid \mathbf{x}_i) = \frac{\pi_k^t\, p^t(\mathbf{x}_i \mid k)}{p^t(\mathbf{x}_i)} = \frac{\pi_k^t\, p^t(\mathbf{x}_i \mid k)}{\sum_{j=1}^{K} \pi_j^t\, p^t(\mathbf{x}_i \mid j)} \tag{3}$$

is the posterior (conditional) probability that a given pixel $\mathbf{x}_i$ belongs to the $k$-th model $G_k$ at the $t$-th step, and $p^t(\mathbf{x}_i \mid k)$ is the prior (conditional) probability of $\mathbf{x}_i$ when $\mathbf{x}_i$ is drawn from the $k$-th model.

The recursive refinement is executed until the sum of the model errors is less than a pre-defined threshold value. A 2D histogram modeled in this way by a Gaussian mixture model is shown in Fig. 5.

[Fig. 5. The 2D histogram of a ground detection area. (a) An original image. (b) The GMM of the 2-D histogram of (a).]
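For illustration, a minimal sketch of fitting the (intensity, gradient-magnitude) samples of the ground detection area with a GMM is given below. It uses scikit-learn's GaussianMixture as a stand-in for the authors' EM implementation, and the bilateral pre-smoothing of the 2D histogram is omitted; the raw pixel samples are fitted instead.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_ground_gmm(gray, grad_mag, mask, n_components=4, seed=0):
    """Model the (intensity, gradient-magnitude) distribution of the
    ground detection area with a GMM; fit() runs the iterative EM
    updates of Eqs. (2)-(3)."""
    samples = np.column_stack([gray[mask], grad_mag[mask]]).astype(np.float64)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed)
    return gmm.fit(samples)
```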
Consequently, the largest group (i.e., the Gaussian function whose mixing parameter $\pi_k$ is largest) is selected as the decision group (DG), which is used to find the three threshold values; the threshold values are decided as shown in Fig. 6. If $\boldsymbol{\mu} = (\mu_x, \mu_y)$ and $\boldsymbol{\sigma} = (\sigma_x, \sigma_y)$ are the mean and standard-deviation vectors of the largest 2D Gaussian model, respectively, then the three threshold values $T_i^{\min}$, $T_i^{\max}$, and $T_e$ are determined as

$$T_i^{\min} = \mu_x - 2\sigma_x, \qquad T_i^{\max} = \mu_x + 2\sigma_x, \qquad T_e = \mu_y + 2\sigma_y, \tag{4}$$

where $T_i^{\min}$ is used to separate the asphalt road surface from the underneath shadows of vehicles (USV), $T_i^{\max}$ is used to separate the asphalt road surface from the lane marks/traffic signals, and $T_e$ is used to extract the significant edge pixels that will be used to compute valuable optical flows. Before motion and static feature extraction, the pixels belonging to lane marks or traffic signals on the ground are pre-eliminated to reduce false detections.

[Fig. 6. Threshold values are defined from the largest group of pixels in a 2D histogram: $T_i^{\min}$ and $T_i^{\max}$ on the intensity axis, $T_e$ on the gradient-magnitude axis.]
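Continuing the sketch above, the three weather-adaptive thresholds of Eq. (4) follow directly from the dominant mixture component; the sketch assumes that the component with the largest weight models the asphalt road surface.

```python
import numpy as np

def weather_adaptive_thresholds(gmm):
    """Derive (Timin, Timax, Te) from the largest GMM component, Eq. (4)."""
    k = int(np.argmax(gmm.weights_))                 # decision group: largest pi_k
    mu_x, mu_y = gmm.means_[k]                       # mean intensity / gradient
    sx, sy = np.sqrt(np.diag(gmm.covariances_[k]))   # per-axis std. deviations
    return mu_x - 2 * sx, mu_x + 2 * sx, mu_y + 2 * sy
```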
B. Multi-Resolution Optical Flow Detection

Optical flow is the main feature for detecting approaching vehicles. Multi-resolution optical flows are estimated by the least-squares version of the Lucas-Kanade (L-K) differential optical-flow estimation method [17]. The optical flows are estimated at significant edge points, and the edge points are selected based on the pre-determined edge threshold value $T_e$ described in the last section.
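Since [17] describes the algorithm behind OpenCV's pyramidal Lucas-Kanade tracker, this stage can be sketched as below; the window size, pyramid depth, and the exact edge-point selection are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def edge_point_flow(prev_gray, curr_gray, grad_mag, t_e):
    """Estimate optical flow at significant edge points (|grad| > T_e)
    using pyramidal Lucas-Kanade; returns start points and flow vectors."""
    ys, xs = np.nonzero(grad_mag > t_e)
    pts = np.float32(np.column_stack([xs, ys])).reshape(-1, 1, 2)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None,
        winSize=(15, 15), maxLevel=2)        # 3 pyramid levels: 0, 1, 2
    ok = status.ravel() == 1
    return pts[ok].reshape(-1, 2), (nxt[ok] - pts[ok]).reshape(-1, 2)
```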


The procedure of optical-flow detection consists of three main stages: estimation, filtering, and clustering. In the estimation stage, the ground detection area is divided into three subareas, as shown in Fig. 7, to calculate three levels of optical-flow vectors. The three subareas are named O1, O2, and O3 from near to far on the ground detection area, with decision boundaries at x = d1, d2, and d3. Three levels of optical flows are estimated for subarea O1, two levels for O2, and only one level for O3. The multi-level optical flows are estimated by a coarse-to-fine strategy, and the coarser optical flows are propagated to the finer level to calculate the finer optical flows, such that a long range of optical flows can all be extracted.

[Fig. 7. Three subareas O1-O3 of the ground detection area, with boundaries at x = d1, d2, d3 over the image width x = 0-319, defined for estimating multi-level optical flows.]

In the filtering stage, optical-flow vectors are filtered by their lengths and directions. Approaching vehicles at different distances have optical-flow vectors of different lengths even when the vehicles travel at the same speed; thus, the optical flows are separated into three classes for filtering. The classification follows the optical-flow subareas O1, O2, and O3 shown in Fig. 7, and the three subareas use different threshold values to filter the optical flows. The decision rule for length is defined as

$$\mathbf{u} \text{ is retained if } \begin{cases} \mathbf{u} \in O_1 \text{ and } \|\mathbf{u}\| \ge 2.00 \text{ pixels}, \\ \mathbf{u} \in O_2 \text{ and } \|\mathbf{u}\| \ge 1.00 \text{ pixel}, \\ \mathbf{u} \in O_3 \text{ and } \|\mathbf{u}\| \ge 0.25 \text{ pixels}, \end{cases} \tag{5}$$

where $\mathbf{u}$ is an optical-flow vector.

The direction of an optical-flow vector is useful for judging whether a feature point is approaching or leaving the host vehicle. In this study, optical-flow vectors are classified into positive and negative optical flows: a positive optical flow is an approaching motion vector, and a negative optical flow is a leaving motion vector, relative to the host vehicle. The decision rules for positive and negative optical flows are defined as

$$\mathbf{u} \text{ is a positive optical flow if } \{\mathbf{u} \in O_1,\ -20^\circ < \theta < 20^\circ\} \text{ or } \{\mathbf{u} \in O_2,\ -15^\circ < \theta < 15^\circ\} \text{ or } \{\mathbf{u} \in O_3,\ -10^\circ < \theta < 10^\circ\}, \tag{6}$$

$$\mathbf{u} \text{ is a negative optical flow if } \{\mathbf{u} \in O_1,\ 160^\circ < \theta < 200^\circ\} \text{ or } \{\mathbf{u} \in O_2,\ 165^\circ < \theta < 195^\circ\} \text{ or } \{\mathbf{u} \in O_3,\ 170^\circ < \theta < 190^\circ\}, \tag{7}$$

where $\theta$ is the phase angle of $\mathbf{u}$.
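A compact way to express the pruning rules (5)-(7) is a per-subarea table of length and angle limits. The sketch below labels each flow vector as positive, negative, or discarded; the subarea lookup from the d1-d3 boundaries is assumed to happen outside this function.

```python
import numpy as np

# per subarea: (min length in pixels, positive half-angle, negative angle band)
RULES = {
    "O1": (2.00, 20.0, (160.0, 200.0)),
    "O2": (1.00, 15.0, (165.0, 195.0)),
    "O3": (0.25, 10.0, (170.0, 190.0)),
}

def classify_flow(u, subarea):
    """Apply rules (5)-(7); returns 'positive', 'negative', or None."""
    min_len, pos_half, (neg_lo, neg_hi) = RULES[subarea]
    if np.hypot(u[0], u[1]) < min_len:         # rule (5): too short, prune
        return None
    theta = np.degrees(np.arctan2(u[1], u[0])) % 360.0
    if theta < pos_half or theta > 360.0 - pos_half:   # rule (6)
        return "positive"
    if neg_lo < theta < neg_hi:                        # rule (7)
        return "negative"
    return None
```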
At last, all the remaining positive and negative optical flows are clustered into groups by a simple clustering method. The results of positive and negative optical-flow grouping are shown in Fig. 8; in the figure, the red blocks are positive groups and the blue blocks are negative groups.

[Fig. 8. Examples of optical-flow clustering: (a) positive optical-flow groups; (b) negative optical-flow groups.]

To concentrate the local detection and to distinguish different threat degrees, a detection mechanism is proposed based on the extracted positive optical flows. In the detection area, we accumulate the number of positive optical flows in every detection region $R_i$, $i = 1, \ldots, 4$. If $R_i$ has more than $m$ optical flows, we say that there is a vehicle candidate in $R_i$ and then increase $a_i$ by one. Initially, $a_i$ is zero. As time progresses, the captured images are detected sequentially one by one. If a vehicle candidate appears in $R_i$, $a_i$ is increased by one until $a_i$ equals six; if there is no vehicle candidate in $R_i$, $a_i$ is decreased by one until $a_i$ equals zero. At any moment, if the condition

$$5 \le a_i + a_{i+1} \le 12,\ i = 1, 2, 3, \quad \text{or} \quad 3 \le a_i \le 6,\ i = 1, 2, 3, 4, \tag{8}$$

is met, an approaching vehicle is said to be detected based on the motion features.
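The temporal accumulators $a_i$ and condition (8) can be sketched as follows; the per-region candidate test (more than $m$ positive flows, with $m$ unspecified in the paper) is passed in as a boolean per region, and the six-frame saturation comes from the text.

```python
def update_and_decide(a, candidates):
    """Update accumulators a[0..3] for regions R1-R4 and test condition (8).

    a: list of four ints in [0, 6]; candidates: four booleans, one per region,
    true when the region holds more than m positive optical flows this frame.
    """
    for i in range(4):
        a[i] = min(a[i] + 1, 6) if candidates[i] else max(a[i] - 1, 0)
    pair = any(5 <= a[i] + a[i + 1] <= 12 for i in range(3))
    single = any(3 <= a[i] <= 6 for i in range(4))
    return pair or single      # condition (8): approaching vehicle detected
```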
C. Static Feature Detection

To avoid the false detection of similar-speed vehicles by the motion-feature detection, we additionally use two static features, the underneath shadow of vehicles and the sum of edge magnitudes, to extract all possible vehicle candidates in the ground detection area. Based on the threshold value $T_i^{\min}$ defined in Section II-A, the pixels whose gray levels are less than $T_i^{\min}$ are classified as members of the underneath shadow of vehicles. These pixels are then clustered. If any detection region $R_i$ has a significant cluster of underneath-shadow pixels, a vehicle is detected.

The underneath shadow is unclear in dark hours, so it cannot be used to detect approaching vehicles then. However, approaching vehicles show headlights in the dark hours; thus, edge clusters can be used to detect approaching vehicles at night. Based on the threshold value $T_e$ defined in Section II-A, the pixels whose gradient magnitudes are greater than $T_e$ are classified as significant edge points. If any detection region $R_i$ has a significant number of edge points, a vehicle is detected.
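A minimal sketch of the two static cues follows: pixels darker than $T_i^{\min}$ vote for an underneath shadow, and pixels with gradient magnitude above $T_e$ vote for a headlight/edge cluster at night. The cluster-significance tests are simplified here to pixel counts, and the minimum sizes are assumed values, not the paper's.

```python
import numpy as np

def static_vehicle_cue(gray, grad_mag, region_mask, t_imin, t_e,
                       min_shadow=50, min_edges=80):
    """Return True if region_mask holds a significant underneath-shadow
    cluster or (for dark hours) a significant set of edge points."""
    shadow = np.count_nonzero((gray < t_imin) & region_mask)
    edges = np.count_nonzero((grad_mag > t_e) & region_mask)
    return shadow >= min_shadow or edges >= min_edges
```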
D. Detection Decision

To compensate for the perspective effect, the gray levels and edge magnitudes in all the above detection criteria need to be adjusted by Eq. (1). Besides the above decision based on positive optical flows, the negative optical flows are directly used to reject possible approaching vehicles. This usage of motion and static features for blind-spot vehicle detection is treated specially, as described above and shown in the last part of Fig. 1.

III. EXPERIMENTS

The proposed approach was conducted to demonstrate the performance of blind-spot vehicle detection. All experimental RGB color images were captured by a waterproof digital camera; they are all of size 320x240 pixels. The digital camera was mounted below the right-side wing mirror. The proposed methods were implemented in the C language with the standard ANSI C library on a general PC with an Intel Core 2 Duo P8700 2.53 GHz CPU and the Microsoft Windows 7 operating system. The source code was also compiled to execute on an embedded system (TI DaVinci DM6737) to evaluate the execution performance.

There are in total 4,268 images covering eight weather conditions or environment situations for examination. These conditions and situations are sunny day, cloudy day, tree-shadow influence, high-speed approaching vehicle, in tunnel, traffic sign on ground, at night hour, and heavily rainy day. Samples of vehicle detection in the eight conditions and situations are illustrated in Fig. 9.

[Fig. 9. Examples of approaching-vehicle detection in different weather conditions and environment situations. (a) Sunny day. (b) Cloudy day. (c) Tree-shadow influence. (d) High-speed approaching vehicle. (e) In tunnel. (f) Traffic sign on ground. (g) At night hour. (h) Heavily rainy day.]

The detection performance is quantitatively evaluated by the theory of statistical hypothesis testing. The four relations between the detection results and the actual situation, true positive (TP), true negative (TN), false positive (FP), and false negative (FN), are represented in Table I. From these definitions, the accuracy of detection is given as

$$\text{accuracy } (\%) = \frac{TP + TN}{TP + TN + FP + FN} \times 100. \tag{9}$$
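Eq. (9) in code form, with a sanity check against the sunny-day row of Table II (assuming the table's error counts are per-frame):

```python
def accuracy(tp, tn, fp, fn):
    """Detection accuracy of Eq. (9), in percent."""
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

# Sunny-day row of Table II: 717 frames with 12 Type I and 4 Type II
# errors give (717 - 12 - 4) / 717 = 97.77 %.
assert round(accuracy(tp=542, tn=717 - 542 - 12 - 4, fp=12, fn=4), 2) == 97.77
```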
TABLE I: FOUR RELATIONS BETWEEN DETECTION RESULTS AND THE ACTUAL SITUATION

Detection \ Actual situation   Vehicle exists                        No vehicle
Vehicle detected               True positive (TP)                    False positive (FP) (Type I error)
No vehicle detected            False negative (FN) (Type II error)   True negative (TN)

The experimental results are summarized in Table II. The overall accuracy is 95.67%. Among the four weather conditions (sunny day, cloudy day, night hour, and heavily rainy day), the sunny and cloudy days have the highest accuracy; the accuracies at night and on heavily rainy days are lower. The road surface is dusky and unclear at night and on heavily rainy days, and all the features (optical flow, edge, and underneath shadow) are unstable to detect; thus, the accuracy there is not as good.

From the experimental results, we find that all four environment situations (tree shadow, high-speed approaching vehicle, in tunnel, and traffic sign on ground) do not influence the detection rate of our blind-spot detection.

The proposed system was executed on the mentioned personal computer; the average execution time for one frame is about 28 milliseconds. The numbers of feature points (edge points) on sunny and cloudy days are greater than those at night and on heavily rainy days; thus, the execution time on sunny and cloudy days is slightly greater than that at night and on heavily rainy days.
TABLE II: THE STATISTICAL RESULTS FOR EACH WEATHER CONDITION

Weather condition   Total frames   Vehicle number   # of detected vehicles   Type I/Type II errors   Accuracy
Sunny               717            546              542                      12/4                    97.77%
Cloudy              274            162              157                      8/5                     95.26%
Tree shadow         350            220              218                      4/2                     98.29%
Quick vehicle       209            57               55                       3/2                     97.61%
In tunnel           694            317              307                      21/10                   95.53%
Traffic sign        476            321              317                      10/4                    97.06%
Night hour          874            432              409                      39/23                   92.91%
Heavily rainy       674            537              512                      13/25                   94.36%

IV. CONCLUSION

A blind-spot detection system was proposed in this study.


The proposed blind-spot detection system consists of four stages: estimation of weather-adaptive threshold values, multi-resolution optical flow detection, static feature detection, and detection decision.

The proposed approach has the following properties: (i) The approach is adaptive to various weather conditions. (ii) The detection is not influenced by building shadows or traffic signals on the ground. (iii) The detection is not influenced by a complicated background, owing to the pre-defined detection area. (iv) Multi-resolution optical flow is used to detect vehicles at far or near distances. (v) Multi-resolution optical flow is able to detect approaching vehicles at various speeds. (vi) Static and motion features were proposed to detect vehicles such that similar-speed side vehicles can also be detected. (vii) The static and motion features were treated sequentially to improve the detection rate and reduce false alarms.
REFERENCES

[1] R. Sosa and G. Velazquez, "Obstacles detection and collision avoidance system developed with virtual models," in Proc. IEEE Int. Conf. on Vehicular Electronics and Safety, Beijing, China, Dec. 13-15, 2007, pp. 1-8.
[2] T. Mondal, R. Ghatak, and S. R. B. Chaudhuri, "Design and analysis of a 5.88 GHz microstrip phased array antenna for intelligent transport systems," in Proc. Int. Symp. on Antennas and Propagation, Toronto, Ontario, Canada, July 11-17, 2010, pp. 1-4.
[3] J. Teizer, B. S. Allread, and U. Mantripragada, "Automating the blind spot measurement of construction equipment," Automation in Construction, vol. 19, pp. 491-501, 2010.
[4] O. Achler and M. M. Trivedi, "Vehicle wheel detector using 2D filter banks," in Proc. IEEE Intelligent Vehicles Symp., Parma, Italy, June 14-17, 2004, pp. 25-30.
[5] N. Blanc, B. Steux, and T. Hinz, "LaRASideCam - a fast and robust vision-based blindspot detection system," in Proc. IEEE Intelligent Vehicles Symp., Istanbul, Turkey, June 13-15, 2007, pp. 480-485.
[6] E. Y. Chung, H. C. Jung, E. Chang, and I. S. Lee, "Vision based for lane change decision aid system," in Proc. 1st Int. Forum on Strategic Technology, Ulsan, Korea, Oct. 18-20, 2006, pp. 10-13.
[7] M. Krips, J. Velten, A. Kummert, and A. Teuner, "AdTM tracking for blind spot collision avoidance," in Proc. IEEE Intelligent Vehicles Symp., Parma, Italy, June 14-17, 2004, pp. 544-548.
[8] B.-F. Wu, W.-H. Chen, C.-W. Chang, C.-J. Chen, and M.-W. Chung, "A new vehicle detection with distance estimation for lane change warning systems," in Proc. IEEE Intelligent Vehicles Symp., Istanbul, Turkey, June 13-15, 2007, pp. 698-703.
[9] J. Zhou, D. Gao, and D. Zhang, "Moving vehicle detection for automatic traffic monitoring," IEEE Trans. on Vehicular Technology, vol. 56, no. 1, pp. 51-59, 2007.
[10] S. Mota, E. Ros, E. M. Ortigosa, and F. J. Pelayo, "Bio-inspired motion detection for a blind spot overtaking monitor," Int. Journal of Robotics and Automation, vol. 19, no. 4, pp. 190-196, 2004.
[11] S. Mota, E. Ros, J. Díaz, G. Botella, F. Vargas-Martin, and A. Prieto, "Motion driven segmentation scheme for car overtaking sequences," in Proc. 10th Int. Conf. on Vision in Vehicles, Granada, Spain, Sep. 7-10, 2003, pp. 1-6.
[12] N. Ohta and K. Niijima, "Detection of approaching cars via artificial insect vision," Electronics and Communications in Japan, vol. 88, no. 10, pp. 57-65, 2005.
[13] J. D. Alonso, E. R. Vidal, A. Rotter, and M. Mühlenberg, "Lane-change decision aid system based on motion-driven vehicle tracking," IEEE Trans. on Vehicular Technology, vol. 57, no. 5, pp. 2736-2746, 2008.
[14] J. Wang, G. Bebis, and R. Miller, "Overtaking vehicle detection using dynamic and quasi-static background modeling," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Reno, Nevada, June 20-26, 2005, pp. 64-71.
[15] M. A. Sotelo, J. Barriga, D. Fernández, I. Parra, J. E. Naranjo, M. Marrón, S. Alvarez, and M. Gavilán, "Vision-based blind spot detection using optical flow," in Proc. 11th Int. Conf. on Computer Aided Systems Theory, Las Palmas, Spain, Feb. 12-16, 2007, pp. 1113-1118.
[16] M.-H. Yang and N. Ahuja, "Gaussian mixture model for human skin color and its applications in image and video databases," in Proc. IS&T/SPIE Conf. on Storage and Retrieval for Image and Video Databases VII, San Jose, CA, Jan. 23, 1999, pp. 458-466.
[17] J.-Y. Bouguet, Pyramidal Implementation of the Lucas Kanade Feature Tracker: Description of the Algorithm, Intel Microprocessor Research Labs, 2007.

Din-Chang Tseng received his Ph.D. degree in information engineering from National Chiao Tung University, Hsinchu, Taiwan, in June 1988. He has been a professor in the Department of Computer Science and Information Engineering at National Central University, Jhongli, Taiwan, since 1996. He is a member of the IEEE. His current research interests include computer vision, image processing, and virtual reality, especially the topics of computer vision systems for advanced safety vehicles, computer vision techniques for human-computer interaction, and view-dependent multi-resolution terrain modeling.

Chang-Tao Hsu received his B.S. and M.S. degrees from the Department of Biomedical Engineering at Chung Yuan Christian University, Taiwan, in 1996 and 1998, respectively. He is currently working toward his Ph.D. degree in the Department of Computer Science and Information Engineering at National Central University, Jhongli, Taiwan. His research interest is real-time image processing in intelligent transportation systems.

Wei-Shen Chen received his B.S. degree from the Department of Computer Science and Information Engineering at Yuan Ze University, Jhongli, Taiwan, in 2010, and his M.S. degree from the Institute of Computer Science and Information Engineering at National Central University, Jhongli, Taiwan, in 2012. He is now working at VIA Technologies, Co. as a software engineer.