CAAI Transactions on Intelligence Technology

ISSN 2468-2322, pp. 1-9

Volume 2, Issue 4, December 2017

A Lane Detection Algorithm Based on Temporal–Spatial Information Matching and Fusion

Jun Wang*, Bin Kong, Tao Mei, Hu Wei


Jun Wang, Bin Kong, Tao Mei, and Hu Wei are with the Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei 230031, China. E-mail: [email protected].

Jun Wang and Bin Kong are also with the Department of Automation, University of Science and Technology of China, Hefei 230026, China.

* To whom correspondence should be addressed.

Manuscript received: 2017-09-12; accepted: 2017-11-16

Abstract: Lane detection is one of the key technologies of unmanned systems. To address the limited information in a single image and the poor anti-interference ability of existing methods, a lane detection algorithm based on temporal–spatial information matching and fusion via reverse perspective transformation is proposed in this study. Using this algorithm, the images captured by a camera are first segmented, with binary features extracted from the effective ranges. Next, the feature images are transformed by reverse perspective into the top-view space and combined with the INS information of the unmanned vehicle. The continuous multi-frame feature image data undergo temporal–spatial information matching and fusion in the top-view space, thereby yielding lane feature images with abundant data over a wider range. Cluster analysis is then conducted on the matched feature information to group the feature data of each lane and to filter out interference. Using the least-squares curve-fitting algorithm, the clustered lane feature data are fitted to a parametric curve. Finally, a prediction model is established between two adjacent frame images to achieve lane tracking and to further improve the accuracy of lane detection. Experimental results show that the proposed algorithm offers strong anti-interference, high detection efficiency, and excellent stability and timeliness.

Key words: reverse perspective transformation; binary features; temporal–spatial information


matching and fusion; least-squares curve-fitting algorithm; lane tracking prediction

1 Introduction
Unmanned vehicles have a significant influence on the development of national defense, society, the economy, and science and technology, and they are currently among the strategic research objectives of all countries in the high-tech field [1-3]. Unmanned systems consist of multiple functional modules, including environmental perception, behavioral decision-making, path planning, and vehicle motion control. Lane detection is one

element of environmental perception and has a very important role in the behavioral
decision-making and path planning of unmanned vehicles [4-6].

Lane detection has been a hot research topic since the 1990s [11–18] and has been widely used in driver assistance systems, for example as a lane departure warning system for highway environments with simple road conditions and well-defined lanes [19–21]. Studies show that the use of lane departure warning systems in the United States has reduced lane departure crashes by 10% for small cars and by 30% for large pickups [7, 8]. In the University of California PATH Program, for example, magnetic markers were embedded in the road and sensed by a magnetometer mounted to the front of the car [9]. Although this system is very reliable, its implementation cost is too high to be practical on all roads. Another system, implemented at the IV Lab of the University of Minnesota, combines differential GPS information with high-resolution digital maps to determine lane position and has been demonstrated to work in all weather conditions [10]. However, this system requires digital maps with sub-meter accuracy, which do not yet exist at a wide enough scale to make the system practical. Camera-based systems provide an extremely rich stream of data produced by imaging equipment, typically mounted in the cabin, to identify lane lines.

In recent years, studies on lane detection have focused on the applicability, robustness, timeliness, and other characteristics of the algorithms [22–26]. In terms of their features, detection algorithms can be categorized into two classes: (1) feature-based and (2) model-based algorithms.

The former primarily uses road surface features, such as gradient, color, and texture, and extracts road information by detecting region growth and variations in the edge gradient [27–35]. In Ref. [29], an inverse perspective projection transformation is applied to search the virtual image of the ground plane for horizontal brightness contours and to detect the segmented region by adapting the brightness threshold. In Ref. [30], the lane direction in the near-vision field is found via the edge distribution function (EDF), and a direction-adjustable filter is used to process the image, suppress the interference of unwanted information and noise, and effectively enhance the lane information. However, edge features depend on the quality of the input image, so such algorithms perform poorly when shadows, watermarks, or other noise interfere. To solve these problems, a gradient-enhancing algorithm is proposed in Ref. [31]: a color conversion vector is generated and applied, yielding a gray-level image with the maximum lane gradient and effectively reducing shadow interference. Ref. [32] proposes converting the RGB image into the CIE color space for saliency analysis and using k-means clustering to compute the lane section obtained from image segmentation. In Ref. [33], linear analysis is conducted on each color channel of the RGB image, which facilitates computing the color distribution of the road surface and markings and segmenting and extracting the lane region. In Ref. [34], the computation is simplified: the input color image is downscaled, a Bayes classifier is used to determine the road-surface likelihood of each pixel, and the image is segmented into road and non-road sections. Feature-based lane detection algorithms are chiefly characterized by insensitivity to road shape and relatively high robustness; however, they are sensitive to shadows and watermarks and involve a relatively complicated calculation process.

The model-based algorithm transforms road detection into the solution of a road model [36–47, 51, 52]. Considering that the road has a specific shape, it is modeled to impose constraints on and simplify the detection process. Given that the model accounts for the overall road status,


with the parametric representation of the road, the model-based algorithm has a strong resistance to interference from shadows, watermarks, and other external influences.

The model-based algorithm can be subdivided into image space-based and top-view space-based algorithms. The former establishes lane line models to detect the lane lines in the original image space. Representative studies on vanishing point-based models include the following. In Ref. [42], a lane detection algorithm based on a spline curve model is proposed. The spline curve characterizes the lane line model in the original image space, and the concept of the vanishing point follows from the perspective projection transformation. By segmenting the image plane and detecting lines through the Hough transform, the vanishing point is predicted, and the control points of the spline curve are used to fit the lane, ultimately obtaining the final lane by fitting the spline curve through the control points. In Ref. [43], the CHEVP algorithm is put forward, which estimates the probability of the vanishing line based on a vanishing point model; the vanishing point and the vanishing line are predicted by iteratively applying the Hough transform. In Ref. [44], the vanishing point and the vanishing line are tracked based on the CHEVP algorithm by introducing a particle filter. Because the results of adjacent frames are considered in the lane detection process, the algorithm is robust against occlusions, shadows, and traffic signs. In Ref. [50], a new structured road detection algorithm based on a well-distributed, aperiodic B-spline model is proposed. The edge image is extracted using the Canny edge detector, and least squares is used to fit the road surface markings; the middle line of the road is finally obtained from the extracted markings, thereby fitting the road. To accurately locate the curve, the control points of the road model are solved using the maximum deviation of the position shift (MDPS). In Ref. [51], a lane detection algorithm based on the B-spline is proposed, with the borders of the road assumed to be two parallel straight lines; road detection is abstracted as finding the middle line of the road, and the B-spline curve is used to fit it. Algorithms based on the original image space model are vulnerable to non-edge interference because the adopted lane model cannot reflect the actual road structure, and their detection rate is not very high.

The algorithm based on the top-view space model maps the original image into the top-view space via reverse perspective transformation and establishes a lane model there to detect the lane. As mentioned in Ref. [41], Pomerleau et al. strengthened the constraints of the algorithm with a clothoid-based lane model under the assumption that the lane width is fixed. The algorithm rapidly locates the road surface markings that meet the constraints in the image and has good timeliness in a relatively simple road environment. When the road model is complex, that is, when the road widens or a lane is missing, the detection results are poor. In Ref. [45], a rapid structured road detection algorithm based on the hyperbola is introduced; it extracts two parallel marking lines of the road via the mid-to-side method. The algorithm is simple and has strong anti-interference but adopts the hyperbola for fitting the road, so when many interference points appear, the fitted road curve fluctuates violently. A statistical Hough transform-based lane detection algorithm with gradient constraints is introduced for lane detection in Ref. [47]: the original image is converted into the top-view space by reverse perspective transformation; the brightness, gradient magnitude, and gradient direction of the image are calculated; the lane features are extracted by means of the gradient constraints; and the lane is finally detected through the statistical Hough transform. This method is effective on freeways and other straight road environments but performs poorly on curved roads, especially those marked with dotted lines. The TLC algorithm, used in Ref. [46] as a typical algorithm for lane detection in the top-view space, directly converts the original image captured with a camera into the top-view image by


creating a geometric correspondence between the visual plane and the top-view plane, extracting the edge features from the top-view image, searching for the lane features, and finally fitting the lane with a quadratic curve. A lane detection algorithm based on the top-view space is offered in Ref. [40]. First, the contours are extracted after adaptive binarization of the original image. Next, the image is converted into the top-view space through reverse perspective transformation. Third, the lane features in the top-view space are searched using linear cluster analysis. Finally, the lane is fitted using the cubic B-spline. This algorithm constrains the detection process by introducing the actual road model parameters. However, because only single-frame image information is used for detection, and because a single frame contains little information and much interference in the real road scene (e.g., shadows, stains, and lane wear), such methods are likely to make mistakes in lane detection.

Considering these issues, a lane detection algorithm based on matching and fusion in the top-view space is proposed in this study. The segmented image features are converted into the top-view space using reverse perspective transformation, and constraints on the detection process are derived by introducing the actual road model parameters. Combined with INS information, the continuous multi-frame lane feature information is integrated into a single-frame feature image using the matching algorithm. Consequently, lane information containing abundant data over a wider range is obtained, the interference of unrelated edges is reduced, and the anti-interference ability of lane detection is improved. The detection results of adjacent frames are tracked and predicted, thereby establishing a long-distance, wide-range lane tracking model in the top-view space and further improving the accuracy of lane detection.

2 Implementation of the Algorithm

2.1 Algorithm flowchart

The algorithm proposed in this study is divided into the following parts. (1) Image segmentation is mainly designed for image binarization and noise filtering. (2) Reverse perspective transformation converts the results of image segmentation into the top-view space to facilitate subsequent processing; the core of the lane detection algorithm in this study operates in the top-view space. (3) The position relations between multi-frame images in the top-view space over a continuous time are predicted using the unmanned vehicle's INS information, specifically through the matching and fusion of the feature information; the continuous multi-frame lane data are thereby matched and fused, and more abundant lane data are obtained. (4) The lane search results are clustered: the lane data from the previous step are classified through fuzzy clustering analysis, and the feature data of each lane are obtained. (5) Lane fitting is conducted: the clustered lane feature data are fitted to a parametric curve using the least-squares fitting algorithm. (6) Lane tracking is then applied using the Kalman filter algorithm to establish a prediction-tracking relationship between consecutive frames and further improve the accuracy and efficiency of lane detection. The entire process of the lane detection algorithm is shown in Fig. 1.


[Fig. 1 flowchart: initialize algorithm → read a single-frame image and INS information → image segmentation → reverse perspective transformation (combined with historical data) → feature fusion → clustering search → curve fitting (with an established tracking model) → output]

Fig. 1 Flowchart of the lane detection algorithm.

2.2 Image segmentation


Urban roads are structured roads with particular white and yellow markings. Road detection essentially consists of conducting lane-marking detection in the captured image. The most commonly used method is image segmentation, a process in which the captured image is divided into a plurality of specific regions with unique properties. It isolates the objects of interest and is the key step in the shift from image processing to image analysis.

Image segmentation typically involves contour extraction and edge detection. The simplest method of contour extraction is global binarization, which converts the gray-level image into a binary image based on a single global threshold. This method loses apparent image detail and hardly meets the requirements of timely road image processing. In the local binarization method, the whole image is divided into N windows according to certain rules, and all the pixels of each window are binarized into two parts with a unified threshold T. The local adaptive binarization method [40] builds on local binarization by optimizing the threshold settings: the binarization threshold of each pixel depends on the surrounding pixels, thereby better preserving the details of the binary image.

The edge features contain important lane feature information. In the road image, the gray level at the junction of the lane marking and the road surface changes dramatically and can be analyzed by edge detection. The basic idea of edge detection is to highlight the edges of the image locally using an edge enhancement operator and to extract the edge point set by thresholding. Commonly used detection operators include the differential operator, the Laplacian operator, and the Canny operator [48, 49]. The Canny edge detector performs Gaussian filtering, estimates the magnitude and direction of the gradient via finite differences of the first-order partial derivatives, applies non-maximum suppression to the gradient magnitude, detects and connects edges via a dual-threshold algorithm, and achieves a good balance between edge detection and noise reduction. Given that lanes extend substantially in the longitudinal direction, the lane edge features can be effectively extracted along the y direction.


The image segmentation method in this study combines contour extraction and edge detection: the binary contour of the image is extracted using the local adaptive binarization algorithm, and the contour noise is filtered; using the edge detection algorithm, the edge features of the image in the longitudinal direction are extracted, and the boundary noise in the horizontal direction is filtered. Finally, the binary images obtained from the two methods are combined using the logical AND operator (&&), thereby attaining the desired image segmentation. The implementation of the image segmentation algorithm is shown in Fig. 2.

[Fig. 2 flowchart: the input image passes through self-adaptive binary contour extraction and contour filtration on one branch and through Canny edge extraction and edge filtration on the other; the two binary results are combined with a logical AND to produce the output image]

Fig. 2 Flowchart of image segmentation.

Fig. 3b shows the contour extraction results, which are good but contain boundary noise in the horizontal direction. Fig. 3c shows the edge extraction results: the boundary noise in the horizontal direction is effectively suppressed, but too much contour noise is produced. Fig. 3d shows the final result after combining the two, which effectively filters both the contour noise and the boundary noise.


Fig. 3 Effect drawing of image segmentation: (a) the original image captured with a video camera, (b) the
image after contour extraction and filtration, (c) the image after edge extraction and filtration, and (d) the
final results of image segmentation.
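As a concrete illustration of this pipeline, the following sketch combines local adaptive binarization, Canny edge extraction, and the logical AND of Fig. 2, assuming OpenCV's Python bindings; the window size, offset, Canny thresholds, and morphological filtering are illustrative choices, not the authors' settings.

```python
import cv2

def segment_lane_features(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Local adaptive binarization: each pixel's threshold is derived from
    # its neighborhood mean, preserving detail better than a global threshold.
    contour = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                    cv2.THRESH_BINARY, 25, -5)
    # Morphological opening as a simple stand-in for contour noise filtration.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    contour = cv2.morphologyEx(contour, cv2.MORPH_OPEN, kernel)
    # Canny edge extraction; dilation thickens the thin edges so the AND
    # below retains the lane-marking pixels shared by both maps.
    edges = cv2.dilate(cv2.Canny(gray, 50, 150), kernel)
    # Logical AND of the two binary maps, as in Fig. 2.
    return cv2.bitwise_and(contour, edges)
```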

2.3 Reverse perspective transformation


Given the included angle between the optical axis of the camera and the road during shooting, some visual attributes of the lane are difficult to represent intuitively in the camera plane; for instance, different lane lines are parallel and have relatively fixed spacing. In addition, lane image matching between different frames is hard to implement in the camera plane. Thus, the lane data on the camera plane must be converted into the top view. In the top-view coordinate system, the shadows and the interference of other vehicles within the lane are eliminated using the lane attributes, and between-frame data matching and fusion are implemented.

The visual system of the unmanned vehicle involves three coordinate systems: the image, camera, and world coordinate systems. The image coordinate system is the coordinate system of the image captured by the video camera; each pixel serves as the unit, and the coordinates (u, v) of a pixel represent the column and row, respectively, where the pixel is located. The camera coordinate system is a 3D coordinate system with the camera at its center. The relationship between the image and camera coordinate systems is established by the camera imaging model, usually the pinhole camera model, which assumes that the light is cast onto the image plane through an infinitely small hole, as shown in Fig. 4. In the figure, (xc, yc, zc) represents a 3D scene point in the camera coordinate system; Xc, Yc, and Zc represent the camera coordinate axes; and f represents the focal length of the camera.

Fig. 4 Perspective projection in the pinhole camera model.

The world coordinate system (WCS) refers to the absolute coordinate system of the camera in the real environment, i.e., the coordinate system of the unmanned vehicle system or, as used in this study, the coordinate system of the top-view space, with axes Xw, Yw, and Zw. The relationship among the image coordinate system, the camera coordinate system, and the WCS is shown in Fig. 5. In the figure, Xw, Yw, and Zw represent the WCS axes; Xc, Yc, and Zc represent the camera coordinate axes; and u and v represent the image coordinate axes.

Fig. 5 Relationship between the camera and WCS.

The conversion relationship between the image coordinate system and the WCS is given by Equation (1):

 1 
 0 u0 
 xw 
u   dX  f 0 0
t   yw 
0
R
s v  =  0 v0   0 0 0   T
1
1  zw 
f
 dY  0
l   0 1 0   
1 
0
 0 0 1
 
(1)

 xw 
 fx 0
t   yw 
0 u0
R
=  0 0   T = M 1M 2 X w = MX w
1  zw 
fy v0
 0 0
0 1 0   
1

where dX and dY represent the physical dimensions of each pixel along the X and Y axes, respectively; s represents the scale factor; R represents the rotation matrix; t represents the translation vector; fx = f/dX and fy = f/dY represent the scale factors along the u and v axes, respectively; M1 represents the internal parameter matrix of the camera; M2 represents the external parameter matrix of the camera; and M represents the perspective projection matrix.

Reverse perspective transformation is the inverse of the perspective projection: the image coordinates (u, v) on the camera plane are converted into the top-view space coordinates (Xw, Yw, Zw). The reverse perspective transformation matrix is obtained from the mapping between the image coordinate system and the top-view space coordinate system. Rewriting Equation (1):

\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = N \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \qquad (2)

where N represents the 3×4 matrix determined by the internal and external parameters of the camera. Rewriting Equation (2):

\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = N' \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \qquad (3)

where N′ represents the 4×3 reverse perspective transformation matrix, which satisfies

N N' = E_3 \qquad (4)

The reverse perspective transformation matrix N′ can be obtained by calibrating the camera [50], and the (Xw, Yw, Zw) corresponding to each image pixel (u, v) on the camera plane can be calculated by Equation (3). Fig. 6 shows an effect drawing of the reverse perspective transformation after image segmentation.


Fig. 6 Effect drawing of reverse perspective transformation: (a) the camera coordinate system and (b)
top-view space coordinate system after reverse perspective transformation.
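Because lane points lie on the road plane (Zw = 0), a 3×3 homography can play the role of N′ in practice. The sketch below is a minimal example assuming OpenCV and four hand-picked image/ground correspondences (placeholders, not the paper's calibration data):

```python
import cv2
import numpy as np

# Pixel positions (u, v) of four road-plane points in the camera image and
# their known top-view coordinates (Xw, Yw) in meters; placeholder values.
src = np.float32([[260, 300], [380, 300], [620, 470], [20, 470]])
dst = np.float32([[-1.75, 20.0], [1.75, 20.0], [1.75, 5.0], [-1.75, 5.0]])

H = cv2.getPerspectiveTransform(src, dst)  # stands in for N' on the Zw = 0 plane

# Map an arbitrary image pixel (u, v) into the top-view space.
uv = np.float32([[[320, 400]]])
Xw, Yw = cv2.perspectiveTransform(uv, H)[0, 0]
print(Xw, Yw)
```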

2.4 Matching and fusion of feature information


A single frame of image information contains limited data and, in a real road scene, considerable interference, such as shadows, stains, and wear, which are likely to cause lane detection errors. The lane feature information in the sequence of multi-frame images captured by the camera is therefore retained. Given the INS speed and angular speed information of the unmanned vehicle, the multi-frame lane information is matched and fused, which retains the historical information of the road images captured while driving, builds a lane image over a wider range, and improves the accuracy and robustness of the current lane detection.

Given the speed and angular velocity of the unmanned vehicle, the equation for the position conversion is as follows:

E = B + Vt cos(wt )(5)


where E represents the end position, B the start position, V the vehicle speed, w the angular speed, and Δt the camera sampling cycle.

For the multi-frame camera images converted into the top-view space within a period of time, an appropriate policy must be selected to fuse the image sequence into one image and form continuous, effective information. The lane coordinates in the top-view space constantly change with the current position of the unmanned vehicle, because the origin of coordinates moves while driving. Thus, the lane positions of the previous frames must be re-described every time the top-view space image is updated; that is, the positions must be converted according to the laws of motion. In the X–Y top-view coordinate system, the travel direction of the unmanned vehicle is assumed to be along the Y-axis, and the lane position at time t is (xt, yt). When the acceleration of the unmanned vehicle can be ignored, the lane position at t+1 is calculated by the following equations:

y 't +1 = yt + Vt (6)

x't +1 = xt (7)
where V represents the speed of the unmanned vehicle and Δt represents the top-view image update cycle. When the angular speed of the unmanned vehicle is w, the rotation of the lane position in the coordinate system is calculated by the following equations:


x_{t+1} = \sqrt{x_t'^2 + y_t'^2}\,\cos\!\left(\operatorname{arccot}\frac{x_t'}{y_t'} + w\Delta t\right) \qquad (8)

y_{t+1} = \sqrt{x_t'^2 + y_t'^2}\,\sin\!\left(\operatorname{arccot}\frac{x_t'}{y_t'} + w\Delta t\right) \qquad (9)

Through this conversion of the spatial coordinate system, the motion of the unmanned vehicle is decomposed into translational and rotational motion. The lane position (x′t+1, y′t+1) under translational motion is obtained using Equations (6) and (7), and the lane position (xt+1, yt+1) under rotational motion is obtained using Equations (8) and (9). The effect of feature matching and fusion is shown in Fig. 7.


Fig. 7 Effect drawing of feature matching and fusion: (a) the image of the current frame in the top-view
space and (b) image after the matching and fusion of the feature information.
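A minimal numpy sketch of this fusion step, following the sign conventions of Eqs. (6)–(9); v and w come from the INS, dt is the top-view update cycle, and all names are illustrative:

```python
import numpy as np

def propagate(pts, v, w, dt):
    # pts: (N, 2) array of (x, y) top-view feature points from earlier frames.
    # Translation along the travel direction (Y-axis), Eqs. (6)-(7).
    pts = pts + np.array([0.0, v * dt])
    # Rotation by the heading change w*dt, Eqs. (8)-(9):
    # x -> x cos a - y sin a, y -> x sin a + y cos a.
    a = w * dt
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return pts @ R.T

def fuse(history, current, v, w, dt):
    # Stack the propagated historical features with the current frame's
    # features to build the wider-range fused feature image.
    return np.vstack([propagate(history, v, w, dt), current])
```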

2.5 Lane clustering algorithm

After the feature matching and fusion of the sequence image data in the top-view space are completed, a lane feature image with abundant data is obtained. This image contains the data of multiple lanes, and the data of each lane are grouped into one class through cluster analysis. Cluster analysis is conducted on the feature image to classify the data by lane, with each cluster representing the feature data set of one lane, as in Fig. 8. In this study, a linear cluster analysis algorithm based on density features is adopted: by locating high-density regions with linear features that are separated by low-density regions, the feature data sets of different lanes are differentiated.


Fig. 8 Diagram of the cluster analysis: (a) the original feature image and (b) the cluster analysis results.

Clustering is implemented on the lane feature image in this study as discussed below.

An overview of the presented cluster algorithm is given in Algorithm 1.

Algorithm 1 Clustering algorithm proposed in this study

1. for each column q from the start column to the end column do
2.   for each row s from the start row to the end row do
3.     if pixel(s, q) > 0 then
4.       match the pixel against the existing clusters (k-means-style assignment from the current column onward);
5.       if a matching cluster is found then
6.         add the pixel to that cluster and set its value to 0;
7.       else
8.         start a new cluster from the pixel;
9.       end if
10.    end if
11.  end for
12. end for
13. return the set of clusters
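Because Algorithm 1 leaves several details open, the sketch below is only an illustrative stand-in for the density-based linear clustering: feature pixels are scanned row by row, and a pixel joins an existing cluster when it falls within a lateral gate of that cluster's most recent point. The gate width and minimum cluster size are assumed values.

```python
import numpy as np

def cluster_lane_pixels(binary, gate=8, min_pts=20):
    # binary: 2D array, nonzero where lane features were detected.
    # Returns a list of clusters, each an (M, 2) array of (row, col) points.
    clusters, tails = [], []                  # growing clusters and their last points
    for r in range(binary.shape[0]):          # scan along the travel direction
        for c in np.flatnonzero(binary[r]):
            for i, (tr, tc) in enumerate(tails):
                if abs(int(c) - tc) <= gate:  # near an existing lane line
                    clusters[i].append((r, int(c)))
                    tails[i] = (r, int(c))
                    break
            else:                             # no match: start a new lane cluster
                clusters.append([(r, int(c))])
                tails.append((r, int(c)))
    return [np.array(cl) for cl in clusters if len(cl) >= min_pts]
```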

2.6 Curve fitting

The general shape of a real urban road is regular, with a unitary trend within a limited distance, so the road shape is fully captured by a quadratic curve rather than a cubic curve, which is prone to divergence caused by a small amount of data interference. The general quadratic equation is given by:

y = c_0 + c_1 x + c_2 x^2 \qquad (10)

Using the least-squares method, the undetermined coefficients c0, c1, and c2 are solved so that the quadratic curve lies as close to the corresponding data points as


possible. In general, the data points do not all fall exactly on the quadratic curve, which indicates that the corresponding coordinates have certain errors and do not satisfy the curve equation exactly. The overall error of all the data points is expressed by the following residual sum of squares:

F(c_0, c_1, c_2) = \sum_{i=1}^{k}\left[(c_0 + c_1 x_i + c_2 x_i^2) - y_i\right]^2 \qquad (11)

F is a function of the three variables c0, c1, and c2, and k represents the total number of data points. The parameters are determined by minimizing this objective function, i.e., by setting

F F F
= = = 0 (12)
c0 c1 c2

The following normal equations of the quadratic polynomial are obtained, from which the minimum point of the objective function is found:

\begin{bmatrix} k & \sum_{i=1}^{k} x_i & \sum_{i=1}^{k} x_i^2 \\ \sum_{i=1}^{k} x_i & \sum_{i=1}^{k} x_i^2 & \sum_{i=1}^{k} x_i^3 \\ \sum_{i=1}^{k} x_i^2 & \sum_{i=1}^{k} x_i^3 & \sum_{i=1}^{k} x_i^4 \end{bmatrix}
\begin{bmatrix} c_0 \\ c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} \sum_{i=1}^{k} y_i \\ \sum_{i=1}^{k} x_i y_i \\ \sum_{i=1}^{k} x_i^2 y_i \end{bmatrix} \qquad (13)

This system is solved by Gaussian elimination with complete pivoting, yielding c0, c1, and c2 and thereby determining the quadratic equation; the fitting results are shown in Fig. 9.


Fig. 9 Effect drawing of curve fitting: (a) the original feature image and (b) the clustering results (white points) with the least-squares curve-fitting results (red curve).
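A minimal sketch of this fit: the normal-equation system of Eq. (13) is assembled and solved directly, with numpy's solver standing in for complete-pivoting Gaussian elimination; the sample data below are made up.

```python
import numpy as np

def fit_quadratic(x, y):
    # Assemble the 3x3 normal equations of Eq. (13).
    A = np.array([[len(x),        x.sum(),      (x**2).sum()],
                  [x.sum(),      (x**2).sum(),  (x**3).sum()],
                  [(x**2).sum(), (x**3).sum(),  (x**4).sum()]])
    b = np.array([y.sum(), (x * y).sum(), (x**2 * y).sum()])
    return np.linalg.solve(A, b)  # (c0, c1, c2)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.9, 8.2, 17.8, 32.1])  # roughly y = 2 x^2
print(fit_quadratic(x, y))
```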

2.7 Lane tracking

The shaking of the vehicle body, light and shade changes, vehicle interference, and other factors during driving may cause image jitter, damaged lane markings, dirty pavement, and shadows, resulting in failures to recognize the lane in single or multiple frames. Given that the video


sequence changes continuously over time, the lane also has connected properties along the timeline, and a position correlation between two adjacent frames can be established, thereby allowing the detection results of the previous image to assist lane detection in the current image. The algorithm proposed in this study is based on the top-view space model, where the lane changes between adjacent frames are very small. These are approximated by the following linear equations:

X'(k+1) = A(k)X(k) + B(k) \qquad (14)

A(k) = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \qquad (15)

B(k) = \begin{bmatrix} -(x_k + \Delta x_k)\cos\theta - (y_k + \Delta y_k)\sin\theta + x_k \\ -(x_k + \Delta x_k)\sin\theta - (y_k + \Delta y_k)\cos\theta + y_k \end{bmatrix} \qquad (16)

where X(k) represents the lane detection results at time k, X′(k+1) represents the prediction for time k+1, θ represents the included angle between the unmanned vehicle body and the road ahead, xk and yk represent the location of the unmanned vehicle in the image at time k, and Δxk and Δyk represent the changes in the location of the unmanned vehicle in the horizontal and longitudinal directions at time k. These data can be obtained from the INS information. Using Equations (14)–(16), the most likely close-range lane region in the next frame can be predicted from the lane detection results of the current frame, thereby achieving lane detection and tracking, narrowing the lane feature search scope, and improving the detection accuracy. An effect drawing of lane tracking is shown in Fig. 10.


Fig. 10 Effect drawing of lane tracking: (a) the image after matching and fusion in the top-view space and (b) the lane tracking results, marked in green, with the curve-fitting results marked in red.
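A minimal sketch of the prediction step of Eqs. (14)–(16); theta, xk, yk, dxk, and dyk would come from the INS, and the function and variable names are ours.

```python
import numpy as np

def predict_lane(X, theta, xk, yk, dxk, dyk):
    # X: (N, 2) lane points detected at time k; returns predicted X'(k+1).
    A = np.array([[ np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])  # Eq. (15)
    B = np.array([-(xk + dxk) * np.cos(theta) - (yk + dyk) * np.sin(theta) + xk,
                  -(xk + dxk) * np.sin(theta) - (yk + dyk) * np.cos(theta) + yk])  # Eq. (16)
    return X @ A.T + B  # Eq. (14)
```

The predicted points then define a search gate around the expected lane position in the next frame, narrowing the feature search as described above.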

3 Results
The algorithm proposed in this study is applied to the "Pioneer Smart" unmanned vehicle (Fig. 11) developed by the Hefei Institutes of Physical Science, Chinese Academy of Sciences. The vehicle is furnished with several visual and INS sensors for lane detection, a camera for image acquisition (640 pixels × 480 pixels, 15 Hz), and hardware and software platforms (Intel Core™ i5-3120M


CPU at 2.5 GHz, 2.5 GB RAM, VS2008, and OpenCV). Figs. 12–18 show the lane detection results of the algorithm on several typical urban roads: (a) the original image captured by the camera, (b) the feature image after image segmentation, (c) the feature image after matching and fusion, and (d) the image after lane tracking and fitting, where the lane tracking results are marked in green and the fitting results in red.

Fig. 11 Installation of the sensors of the unmanned vehicle.

Fig. 12 shows an undulating road with double yellow lines in the middle. Given that the algorithm proposed in this study is based on reverse perspective transformation in the top-view space, the real lane model can be visualized intuitively there. In an actual road environment, the lanes should be parallel to one another with fixed spacing and are directly constrained by the actual road model parameters, so the double yellow lines are not mistakenly detected as two lanes.


Fig. 12 Effect drawing of lane detection and tracking with double yellow lines.

As shown in Figs. 13 and 14, despite the presence of passing vehicles and shadows in the actual road environment, the algorithm proposed in this study has a good filtering effect: the shadows are filtered by image binarization, and the vehicle interference is eliminated based on the wider-range, abundant lane feature data obtained after the reverse perspective transformation in the top-view space and the temporal–spatial information matching and fusion.


Fig. 13 Effect drawing of lane detection and tracking in the presence of shadows and vehicle interferences.


Fig. 14 Effect drawing of lane detection and tracking in the presence of shadows and serious vehicle interference.

Fig. 15 shows a road containing intersections with zebra crossings, traffic signs, and other interfering factors; the lane feature information in the binary image of the current frame is blurred. The wealth of wider-range data and the lane tracking mechanism introduced in this study allow the full use of historical information in the lane detection of the current frame, thereby avoiding false or missed detections caused by the lack of lane feature information in the current frame.


Fig. 15 Effect drawing of lane detection and tracking in the presence of zebra crossings and arrow signs.

Figs. 16 and 17 show curves marked by dotted lines. In Fig. 16, the feature information of three lanes with four lines is well matched and fused, with good detection results. However, in Fig. 17, because the curvature of the dotted-line curve is too large, the field of view of the camera contains only the left lane and the present lane, with a total of three lines for matching and fusion. Thus, the final detection results are limited to the left lane and the current lane.


Fig. 16 Effect drawing of lane detection and tracking of the curve depicted by dotted lines.


Fig. 17 Effect drawing of lane detection and tracking with an excessively large curvature.

Fig. 18 shows a dotted-line curve with large curvature that contains a small intersection with zebra crossings and traffic signs. The algorithm proposed in this study eliminates the interference of the dotted-line curve and other non-lane interference under such road conditions, successfully achieving matching, fusion, and tracking. The final test results are accurate, and the algorithm exhibits good flexibility and strong anti-interference.


Fig. 18 Effect drawing of lane detection and tracking with curvature and in the presence of zebra crossings
and arrow signs.

To verify the superiority of the algorithm proposed in this study, two typical algorithms are selected for comparison: the CHEVP algorithm proposed by Wang Yue and the TLC algorithm proposed by Gianni Cario. The CHEVP algorithm is a lane detection algorithm based on the original image space model, and the TLC algorithm is a typical lane detection algorithm based on the top-view space model.


In Figs. 19–23, the detection results of the algorithm proposed in this study are compared with those of the CHEVP and TLC algorithms: (a) the lane detection results of the CHEVP algorithm, (b) those of the TLC algorithm, and (c) those of the algorithm proposed in this study (all results are marked in red).

Under undulating road conditions with double yellow lines, as shown in Fig. 19, neither the CHEVP algorithm nor the TLC algorithm is free of detection errors. The CHEVP algorithm can detect only the current lane and, without introducing the actual lane model, often mistakenly detects the double yellow lines on the left side as the left side line of the lane, as shown in Fig. 19(a). The TLC algorithm performs image processing in the top-view space, which allows the introduction of the actual lane model; however, given the limited information in the current frame, its detection results for the left and right-side lanes are not very good, as shown in Fig. 19(b). The algorithm proposed in this study overcomes the shortcomings of the CHEVP and TLC algorithms and can effectively detect the side lines of the three lanes, as shown in Fig. 19(c).


Fig. 19 Comparison of the algorithms in terms of their detection results.

The CHEVP algorithm mistakenly detects zebra crossings and arrow signs as lane features, as shown in Fig. 20(a). Fig. 20(b) shows the detection results of the TLC algorithm: the zebra crossings and arrow signs cause a failure to detect the left side line of the current lane and errors in detecting the right side line. By contrast, the detection results of the algorithm proposed in this study are correct, as shown in Fig. 20(c).


Fig. 20 Comparison of the algorithms and their detection results.

Fig. 21 shows a curve marked by dotted lines. The CHEVP algorithm predicts the vanishing point via linear detection using the Hough transform and detects the left and right side lines of the


current lane by seeking the optimal lines on the left and right sides. This gives solid lines a higher weight than dotted lines and leads to the vanishing point being estimated directly from the left yellow solid line of the left lane and the right white solid line of the right lane, as shown in Fig. 21(a). The TLC algorithm fails to detect the left side line of the lane and makes errors in detecting the right line because of failed clustering, as shown in Fig. 21(b). By contrast, the results of the algorithm proposed in this study are correct, as shown in Fig. 21(c).


Fig. 21 Comparison of the algorithms and their detection results.

Under the road conditions shown in Fig. 22, the CHEVP algorithm fails to correctly detect the left side line of the lane within the dotted–solid line transition region because of the large curvature, as shown in Fig. 22(a). The clustering analysis of the TLC algorithm is difficult to complete because of insufficient information in the current frame, and only a very small part of the lane is detected, as shown in Fig. 22(b). By contrast, the algorithm proposed in this study detects the lane correctly. However, given the very large curvature, the field of view of the camera does not cover the right lane, so the algorithm detects only the side lines of the current and left-side lanes, as shown in Fig. 22(c).


Fig. 22 Comparison of the algorithms and their detection results.

Fig. 23 shows a dotted-line curve containing zebra crossings and arrow signs. Both the CHEVP and TLC algorithms make detection errors because of the interfering factors, as shown in Figs. 23(a) and 23(b). By contrast, the results of the algorithm proposed in this study are correct, as shown in Fig. 23(c).


Fig. 23 Comparison of the algorithms and their detection results.

To further prove the effectiveness and robustness of the algorithm proposed in this study, several sections of freeways and urban roads in Hefei, Anhui Province, are chosen for testing. These sections involve a wide variety of road conditions, such as tree shadows, sidewalks, vehicle interference, strong light, and horizontal linear lanes. During the test, the accuracy of the algorithm is evaluated by comparing the detection results with the ground truth. All the collected data contain lane line information despite the variety of interference; therefore, every video frame is assumed to contain a detectable lane. The correct detection rate is defined as the ratio of the number of frames in which the lane is correctly detected to the total number of frames.

The effectiveness of the experimental results can be verified in several aspects. (1) Lane shape and trend: the effectiveness of the detection results is judged by comparing the detected lane shape and trend with those in the original image. (2) Lane position: the experimental data were collected by a human-driven experimental vehicle, which stayed near the middle of the current lane most of the time, so the effectiveness can be judged from the detected lane position. (3) Lane width: lanes on real roads have a relatively fixed width, so a detection of the current lane is considered correct when it falls within a certain width. Table 1 shows the detection results of the CHEVP and TLC algorithms and the algorithm proposed in this study on several statistically tested video sequences; the results cover the current lane only, and the urban road statistics exclude intersections. (4) Practical application: the method has been applied to a series of unmanned vehicles developed by the Chinese Academy of Sciences and has achieved good results.

Table 1 Comparison of the algorithm proposed in this study and the CHEVP/TLC algorithms in terms of detection rate.

Algorithm                           Freeway    Urban road
CHEVP algorithm                     89.76%     76.24%
TLC algorithm                       93.12%     82.67%
Algorithm proposed in this study    99.59%     88.70%

As Table 1 shows, the CHEVP algorithm handles the original camera image well; however, given its limited practical range, it is susceptible to non-lane interference and has a poor lane edge detection rate. The TLC algorithm, which performs image processing in the top-view space, easily incorporates the actual road model and therefore effectively eliminates simple road surface interference. However, because it relies on single-frame image processing, it cannot exploit wider-range, abundant lane data from the top-view image and is not very good at eliminating complex road surface interference, such as traffic signs and zebra crossings. The algorithm proposed in this study has wider-range, more complete lane feature data, reduces the interference of irrelevant edges, and achieves a higher detection rate on both freeways and urban roads. Although interference from vehicles and some traffic signs exists, freeways have very few changes in road conditions and more complete lane markings, and the detection rate of the proposed algorithm there is close to 100%. Given serious lane wear and more complex interference in some sections of urban roads, the detection rate of the algorithm decreases slightly but remains better than those of the CHEVP and TLC algorithms.

Fig. 24 shows the preview point calculated using the CHEVP and TLC algorithms and the algorithm proposed in this study. The preview point is the center point of the target lane at a certain distance ahead of the unmanned vehicle; based on practical experience, the center point of the current lane 20 m ahead of the vehicle is taken as the preview point in this study. In the unmanned vehicle system, the decision-making system does not directly use lane information from the sensing system but converts the lane information into preview point information to guide the direction of motion of the vehicle. Thus, the accuracy and reliability of the preview point information directly affect the travel of the unmanned vehicle. The preview point data in Fig. 24(b) are obtained by applying the three algorithms to the same set of lane data. As the figure shows, the fluctuations of the preview point obtained from the lane detection algorithm proposed in this study, marked in red in Fig. 24(b), are smaller than those obtained from the other two algorithms, marked in blue and green, respectively. This indicates that the algorithm proposed in this study is superior to the CHEVP and TLC algorithms in terms of the accuracy and reliability of lane detection.


Fig. 24 Statistics of the three algorithms in calculating the preview point.
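As an illustration of the preview-point computation described above, the sketch below takes the lane center 20 m ahead as the midpoint of the two fitted boundary curves, assuming each boundary gives the lateral offset as a quadratic in the look-ahead distance per Eq. (10); the coefficients are invented.

```python
import numpy as np

def lateral_offset(c, d):
    c0, c1, c2 = c
    return c0 + c1 * d + c2 * d**2  # Eq. (10) evaluated at distance d

def preview_point(left, right, d=20.0):
    # Lane center at look-ahead distance d: midpoint of the two boundaries.
    x = 0.5 * (lateral_offset(left, d) + lateral_offset(right, d))
    return np.array([x, d])

print(preview_point((-1.8, 0.01, 0.002), (1.7, 0.01, 0.002)))
```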

The algorithm proposed in this study also has an ideal processing speed, which was verified by collecting timing statistics over the captured videos. Table 2 presents the average processing times of the CHEVP algorithm, the TLC algorithm, and the algorithm proposed in this study. As the table shows, the CHEVP algorithm is the most time consuming because it repeatedly applies the Hough transform for line calculation. Both the TLC algorithm and the algorithm proposed in this study adopt reverse perspective transformation. However, the TLC algorithm applies the transformation directly to the original image, whereas the algorithm proposed in this study applies it to the binary feature image obtained after segmentation, which makes it less time consuming.
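
To make this ordering concrete, the following minimal Python sketch (using OpenCV) warps the single-channel binary feature image rather than the three-channel original frame. The homography H and the Canny-based binarization are placeholders of our own, not the calibration or feature extraction used in the paper; the point is only that the warp then touches one byte per pixel instead of three.

```python
# A hedged sketch of the ordering discussed above: segment first, then apply
# the reverse perspective transformation to the one-channel binary image.
# The homography H is assumed to come from an offline camera calibration;
# the values below are placeholders, and Canny stands in for the paper's
# contour-plus-edge feature binarization.
import cv2
import numpy as np

H = np.array([[1.0, 0.4, -200.0],
              [0.0, 2.0, -300.0],
              [0.0, 0.005,   1.0]])  # hypothetical image-to-top-view homography
TOP_VIEW_SIZE = (400, 600)           # (width, height) of the top-view image, px

def to_top_view_after_segmentation(frame_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mask = cv2.Canny(gray, 80, 160)  # binary feature image (one channel)
    # Warping one byte per pixel instead of three is what saves time here.
    return cv2.warpPerspective(mask, H, TOP_VIEW_SIZE, flags=cv2.INTER_NEAREST)
```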

Table 2 Comparison of the algorithm proposed in this study and the CHEVP/TLC algorithms in terms of speed.

Algorithm Time (ms)
CHEVP algorithm 60
TLC algorithm 18
Algorithm proposed in this study 16

5 Conclusion
This study describes a lane detection algorithm based on feature matching and fusion in the reverse-projection (top-view) space. The algorithm improves the binarization of lane features by combining contour extraction with edge detection. It obtains wider-range, multi-lane feature data by converting the binary image into the top-view space via reverse perspective transformation and by matching and fusing the features of continuous multi-frame images with the INS information in that space. It thereby overcomes the shortcomings of conventional lane algorithms that work in the camera plane, namely the difficulty of representing the lane model intuitively and the lack of wider-range lane data in a single image, and it improves the anti-interference capability of lane detection. Finally, the algorithm builds a lane prediction model for tracking lane detection, establishes the correspondence between two adjacent frames of the lane image, and further increases reliability and robustness.
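
As an illustration of the matching-and-fusion step summarized above, the following sketch registers the accumulated top-view features to the current frame using the inter-frame ego-motion reported by the INS and then merges in the newest binary features. The metre-per-pixel scale, the bottom-centre vehicle origin, and the sign conventions are assumptions for illustration, not the paper's actual implementation.

```python
# A minimal sketch, under stated assumptions, of temporal-spatial feature
# fusion in the top-view space. The resolution, vehicle origin, and sign
# conventions are illustrative; the INS interface is hypothetical.
import cv2
import numpy as np

M_PER_PX = 0.05  # assumed top-view resolution: 5 cm per pixel

def fuse_with_ins(accumulated: np.ndarray, current_mask: np.ndarray,
                  dx_m: float, dy_m: float, dyaw_rad: float) -> np.ndarray:
    """Warp the accumulated binary features by the ego-motion since the last
    frame, then OR in the current features so lane evidence builds up."""
    h, w = accumulated.shape
    cx, cy = w / 2.0, float(h)  # vehicle assumed at the bottom centre
    # Rigid 2-D transform: rotate about the vehicle, then translate in pixels.
    M = cv2.getRotationMatrix2D((cx, cy), np.degrees(dyaw_rad), 1.0)
    M[0, 2] -= dx_m / M_PER_PX  # lateral ego-motion shifts features sideways
    M[1, 2] += dy_m / M_PER_PX  # forward motion moves old features down the image
    shifted = cv2.warpAffine(accumulated, M, (w, h), flags=cv2.INTER_NEAREST)
    return cv2.bitwise_or(shifted, current_mask)
```

Because old evidence is re-registered rather than re-detected, lane pixels behind and beside the vehicle persist across frames, which is what yields the wider-range feature data discussed above.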
Several road scenes are selected for lane detection, and the results indicate the effectiveness and robustness of the algorithm proposed in this study. In addition, the algorithm is compared with two typical algorithms, the CHEVP and TLC algorithms, in terms of lane detection results. The comparison shows that the algorithm proposed in this study is superior to the two conventional algorithms under the tested road conditions. Several sections of freeways and urban roads are also selected for testing, and the statistics show that the algorithm proposed in this study outperforms the two conventional algorithms in terms of detection rate and reliability. Furthermore, it is less time consuming, offers good timeliness, and satisfies the real-time requirements of motion planning for unmanned vehicle systems.

Although the algorithm proposed in this study achieves good results, some problems remain to be addressed. For instance, when the road surface undulates, the accuracy of continuous multi-frame feature matching and fusion declines. In certain sections, such as long tunnels, the accuracy of the INS data may decrease, thereby reducing the accuracy of continuous multi-frame feature matching and fusion and weakening lane detection and tracking. In future research, multiple road features will be combined with additional sensor information to enhance the flexibility and robustness of the proposed algorithm.

Acknowledgment

The authors gratefully acknowledge the help of our team members, whose contributions were essential for the development of the unmanned vehicle. The authors also acknowledge the support of two National Natural Science Foundation of China projects: "Key technologies and platform for unmanned vehicle in urban integrated environment" (91120307) and "Key technologies and platform for unmanned vehicle based on visual and auditory cognitive mechanism" (91320301).

Author Contributions

All four authors contributed to this work throughout all of its phases. Jun Wang was responsible for the literature search, algorithm design, and data analysis. Tao Mei, Bin Kong, and Hu Wei made substantial contributions to the planning and design of the experiments. Jun Wang wrote the article, and Hu Wei helped revise it. All the listed authors approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References
1. B. Wilcox, L. Matthies, D. Gennery, B. Cooper, T. Nguyen, T. Litwin, A. Mishkin, and H. Stone, Robotic vehicles for planetary exploration, in Proceedings of IEEE International Conference on Robotics and Automation, 1992, pp. 175-180.

2. G. Bekey, R. Ambrose, V. Kumar, A. Sanderson, B. Wilcox, and Y. Zheng, WTEC Panel Report on International Assessment of Research and Development in Robotics, World Technology Evaluation Center, 2006.

3. U. Franke, S. Mehring, A. Suissa, and S. Hahn, The Daimler-Benz steering assistant: a spin-off from autonomous driving, in Proceedings of the Intelligent Vehicles '94 Symposium, Paris, 1994.

4. R. Bishop, A survey of intelligent vehicle applications worldwide, in Proceedings of the IEEE Intelligent Vehicles Symposium (IV'2000), 2000.

5. D. J. Verburg, A.C.M. van der Knaap, and J. Ploeg, VEHIL: developing and testing intelligent vehicles, in
Proceedings of the IEEE Intelligent Vehicle Symposium (IV’2002), 2002, vol. 2, pp. 537-544.

6. T. Mei, H. W. Liang, B. Kong, J. Yang, H. Zhu, B. C. Li, J. J. Chen, P. Zhao, T. J. Xu, X. Tao, W. Z. Zhang, Y.
Song, H. Wei, and J. Wang, Development of ‘Intelligent Pioneer’ unmanned vehicle, in Proceedings of the
Intelligent Vehicles Symposium (IV), 2012, pp. 938-943.

7. B. H. Wilson, Safety benefits of a road departure crash warning system, Transportation Research Board
87th Annual Meeting, 2008, no. 08-1656.

8. C. Visvikis, T. L. Smith, M. Pitcher, and R. Smith, Study on lane departure warning and lane change assistant systems, Transport Research Laboratory, Rep. no. PPR 374, 2008.

9. R. Rajamani, H. S. Tan, B. K. Law and W. B. Zhang, Demonstration of integrated longitudinal and lateral
control for the operation of automated vehicles in platoons, IEEE Transactions on Control Systems Technology,
vol. 8, no. 4, pp. 695-708, 2000.

10. R. Bishop, Intelligent Vehicle Technology and Trends. Norwood, MA: Artech House, 2005.

11. K. Kluge, Extracting road curvature and orientation from image edge points without perceptual grouping
into features, in Proceedings of IEEE Intelligent Vehicles Symposium, 1994, pp. 109-114.

12. Z. W. Kim, Robust Lane Detection and Tracking in Challenging Scenarios, IEEE Transactions on Intelligent
Transportation System, vol. 9, no. 1, pp. 16-26, 2008.

13. A. Houser, J. Pierowicz, and D. Fuglewicz, Concept of operations and voluntary operational
requirements for lane departure warning systems (LDWS) on-board commercial motor vehicles,
Federal Motor Carrier Safety Administration, Washington, DC, 2005.

14. H. Y. Cheng, B. S. Jeng, P. T. Tseng, and K. C. Fan, Lane detection with moving vehicles in the traffic
scenes, IEEE Transactions on Intelligent Transportation System, vol. 7, no. 4, pp. 571-582, 2006.

15. Q. B. Truong and B. R. Lee, New lane detection algorithm for autonomous vehicles using computer vision, in Proceedings of the IEEE International Conference on Control, Automation and Systems, Seoul, Korea: IEEE, 2008, pp. 1208-1213.

16. A. Amditis and M. Bimpas, A situation-adaptive lane-keeping support system: overview of the SAFELANE approach, IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 3, pp. 617-629, 2010.

17. M. A. Lalimi, S. Ghofrani, and D. Mclernon, A vehicle license plate detection method using region and
edge based methods, Computers & Electrical Engineering, vol. 39, pp. 834–845, 2013.

18. S. Jung, J. Youn, and S. Sull, Efficient Lane Detection Based on Spatiotemporal Images, IEEE Transactions on Intelligent Transportation Systems, vol. 17, pp. 289-295, 2016.

19. S. B. Yacoub and J. M. Jolion, Hierarchical line extraction, IEEE Proc Vis Image Signal Process, vol. 142, no. 1,
pp. 7–14, 1995.

20. P. H. Batavia, Driver-adaptive lane departure warning systems, CMU-RI-TR-99-25, 1999.

21. C. R. Jung and C. R. Kelber, A lane departure warning system based on a linear-parabolic lane model, in IEEE Intelligent Vehicles Symposium, 2004, pp. 891-895.

22. H. Yoo, U. Yang, and K. Sohn, Gradient-enhancing conversion for illumination-robust lane detection, IEEE
Transactions on Intelligent Transportation Systems, vol. 14, no. 3, pp. 1083-1094, 2013.

23. A. Borkar, M. Hayes, and M. T. Smith, An efficient method to generate ground truth for evaluating lane detection systems, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2010, pp. 1090-1093.

24. A. B. Hillel, R. Lerner, D. Levi, and G. Raz, Recent progress in road and lane detection: a survey, Machine
Vision and Applications, vol. 25, pp. 727–745, 2014.

25. B. S. Shin, Z. Xu, and R. Klette, Visual lane analysis and higher-order tasks: a concise review, Machine
Vision and Applications, vol. 25, pp. 1519–1547, 2014.

26. J. Son, H. Yoo, S. Kim, and K. Sohn, Real-time illumination invariant lane detection for lane departure warning system, Expert Syst. Appl., vol. 42, no. 4, pp. 1816-1824, 2015.

27. K. Y. Chiu and S. F. Lin, Lane detection using color-based segmentation, in Proceedings of the IEEE
Intelligent Vehicles Symposium, Washington D. C., USA: IEEE, 2005, pp. 706-711.

28. S. Azali, T. Jason, M. H. A. Hijazi, and S. Jumat, Fast lane detection with randomized Hough transform, in Proceedings of the International Symposium on Information Technology, Kuala Lumpur, Malaysia: IEEE, 2008, pp. 1-5.

29. M. Bertozzi and A. Broggi, GOLD: A Parallel Real-Time Stereo Vision System for Generic Obstacle and Lane Detection, IEEE Transactions on Image Processing, vol. 7, no. 1, pp. 62-81, 1998.

30. C. R. Jung and C. R. Kelber, A robust linear-parabolic model for lane following, in Proceedings of the 17th Brazilian Symposium on Computer Graphics and Image Processing, Curitiba, Brazil, 2004, vol. 10, pp. 72-79.

31. H. Yoo, U. Yang, and K. Sohn, Gradient-Enhancing Conversion for Illumination-Robust Lane Detection,
IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 3, pp. 1083-1094, 2013.

32. C. Ma and M. Xie, A method for lane detection based on color clustering, in Proceedings of International
Conference on Knowledge Discovery and Data Mining, Phuket, 2010, pp. 200-203.

33. Z. Tao, P. Bonnifait, V. Fremont, and J. Ibanez-Guzman, Lane marking aided vehicle localization, in Proceedings of the 2013 16th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2013.

34. A. Geiger, M. Lauer, C. Wojek, C. Stiller, and R. Urtasun, 3D Traffic Scene Understanding From Movable Platforms, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 5, pp. 1012-1025, 2014.

35. Q. Q. Li, L. Chen, M. Li, S. L. Shaw, and A. Nuchter, A sensor-fusion drivable-region and lane-detection
system for autonomous vehicle navigation in challenging road scenarios, IEEE Transactions on Vehicular
Technology, vol. 63, no. 2, pp. 540-555, 2014.

36. H. J. Liu, Z. B. Guo, J. F. Lu, and J. Y. Yang, A fast method for vanishing point estimation and tracking and its application in road images, in Proceedings of the 6th International Conference on ITS Telecommunications, Chengdu, China: IEEE, 2006, pp. 106-109.

37. M. Meuter, S. Muller-Schneiders, A. Mika, S. Hold, C. Nunn, and A. Kummert, A novel approach to lane detection and tracking, in Proceedings of the 12th International IEEE Conference on Intelligent Transportation Systems, St. Louis, USA: IEEE, 2009, pp. 1-6.

38. B. G. Zheng, B. X. Tian, J. M. Duan, and D. Z. Gao, Automatic detection technique of preceding lane and vehicle, in Proceedings of the IEEE International Conference on Automation and Logistics, Qingdao, China: IEEE, 2008, pp. 1370-1375.

39. A. Watanabe, T. Naito and Y. Ninomiya, Lane detection with roadside structure using on-board
monocular camera, in Proceedings of the IEEE Intelligent Vehicles Symposium, Xi’an, China: IEEE, 2009, pp.
191-196.

40. J. Wang, T. Mei, B. Kong, and H. Wei, An approach of lane detection based on Inverse Perspective
Mapping, in Proceedings of 2014 IEEE 17th International Conference on Intelligent Transportation Systems
(ITSC), 2014, pp. 35-38.

41. D. Pomerleau and T. Jochem, Rapidly adapting machine vision for automated vehicle steering, Machine Vision, vol. 11, no. 2, pp. 19-27, 1996.

42. Y. Wang, D. Shen, and E. K. Teoh, Lane detection using Catmull-Rom spline, in Proceedings of IEEE Int. Conf. Intelligent Vehicles, 1998, pp. 51-57.

43. Y. Wang, E. K. Teoh, and D. Shen, Lane detection and tracking using B-Snake, Image Vision Computing, vol.
22, pp. 269-280, 2004.

44. Y. Wang, L. Bai, and M. Fairhurst, Robust road modeling and tracking using condensation, IEEE Transactions on Intelligent Transportation Systems, vol. 9, pp. 570-579, 2008.

45. Q. Chen and H. Wang, A real-time lane detection algorithm based on a hyperbola-pair model, in Proceedings of the IEEE Intelligent Vehicles Symposium, Tokyo, Japan: IEEE, 2006, pp. 510-515.

46. G. Cario, A. Casavola, G. Franze, and M. Lupia, Predictive Time-to-Lane-Crossing Estimation for Lane
Departure Warning Systems, National Highway Traffic Safety Administration, 2009.

47. Y. Z. Peng and H. F. Gao, Lane detection method of statistical Hough transform based on gradient
constraint, Int. J. Intell. Inf. Syst., vol. 4, no. 2, pp. 40-45, 2015.

48. C. Y. Chang and C. H. Lin, An efficient method for lane-mark extraction in complex conditions, in
Proceedings of International Conference on Ubiquitous Intelligence and Computing and Autonomic and Trusted
Computing, Fukuoka, 2012, pp. 330-336.

49. Y. J. Fan, W. G. Zhang, X. Li, L. Zhang, and Z. Cheng, A robust lane boundaries detection algorithm based on gradient distribution features, in Proceedings of the 2011 8th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Shanghai, 2011, no. 3, pp. 1714-1718.

50. M. B. Du, T. Mei, H. W. Liang, J. J. Chen, R. L. Huang, and P. Zhao, Drivers' Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving, Sensors, vol. 16, no. 1, article 102, 2016.

51. H. R. Xu, X. D. Wang, and Q. Fang, Structure Road Detection Algorithm Based on B-spline Curve Model,
Acta Automatica Sinica, vol. 37, no. 3, pp. 270-275, 2011.

52. J. Deng, J. Kim, H. Sin, and Y. Han, Fast lane detection based on the B-Spline fitting, Int. J. Res. Eng. Technol.,
vol. 2, no. 4, pp. 134 -137, 2013.

Jun Wang was born in 1982. He received the Bachelor Degree in Electronic Information Engineering in 2005 and the Master Degree in Control Theory and Engineering in 2008 from Wuhan University of Science and Technology. Since 2008, he has worked in the Biomimetic Vision Laboratory, Institute of Intelligent Machines, Chinese Academy of Sciences. In 2016, he received the Doctor Degree in Control Science and Engineering from the University of Science and Technology of China. His research interests include robot vision and unmanned vehicle environment perception, and he has participated in several projects supported by the Natural Science Foundation of China.

Bin Kong, Ph.D., Professor. She received the Doctor Degree from the University of Science and Technology of China in 2005. Since 1986, she has worked at the Institute of Intelligent Machines, Chinese Academy of Sciences. Her main interests are the research and application of image processing and computer vision. She has been responsible for and has participated in several projects supported by the Natural Science Foundation of China, the Fund of the Chinese Academy of Sciences, the National "973" Plan, and the National "863" Plan, and she has developed several industrial machine vision systems.

Tao Mei was born in 1962. Ph.D., research fellow, and doctoral advisor, he received the Doctor Degree from the University of Science and Technology of China in 2001. He is Chairman of the Central Research Institute of China Security Technology Co., Ltd., Vice Chairman of the Youth Federation of the Chinese Academy of Sciences, and Committee Director of the Chinese Association of Automation. He receives a special government allowance from the State Council and a special allowance from the Anhui Provincial Government, and he has received funding from the Outstanding Youth Science and Technology Fund of Anhui Province. He has been responsible for or involved in over ten research projects, including the National Key Research program and the National "973" and "863" Plans.

Dr. Hu Wei was born in 1974. He received his Master of Science in Applied Mathematics in 2006 and his Doctor of Engineering in Manufacturing Engineering of Aerospace Vehicles in 2011 from Nanjing University of Aeronautics and Astronautics. Since 2011, he has been working at the Biomimetic Vision Laboratory, Institute of Intelligent Machines, Chinese Academy of Sciences. His research fields are machine vision and industrial automation.
