
2004 IEEE Intelligent Vehicles Symposium
University of Parma
Parma, Italy, June 14-17, 2004

An Integrated, Robust Approach to Lane Marking Detection and Lane Tracking


Joel C. McCall and Mohan M. Trivedi

[email protected], [email protected]
Computer Vision and Robotics Research Laboratory
University of California, San Diego
La Jolla, CA, 92093-0434
http://cvrr.ucsd.edu

Abstract

Lane detection is a difficult problem because of the varying road conditions that one can encounter while driving. In this paper we propose a method for lane detection using steerable filters. Steerable filters provide robustness to lighting changes and shadows and perform well in picking out both circular reflector road markings and painted line road markings. The filter results are then processed to eliminate outliers based on the expected road geometry and used to update a road and vehicle model along with data taken internally from the vehicle. Results are shown for a 9000-frame image sequence that includes varying lane markings, lighting conditions, shadowing, and occlusion by other vehicles.

1 Introduction

Lane detection is a well-researched area of computer vision with applications in autonomous vehicles and driver support systems. This is in part because, despite the perceived simplicity of finding white markings on a dark road, it can be very difficult to determine lane markings on various types of road. These difficulties arise from shadows, occlusion by other vehicles, changes in the road surface itself, and differing types of lane markings. A lane detection system must be able to pick out all manner of markings from cluttered roadways and filter them to produce a reliable estimate of the vehicle position and trajectory relative to the lane, as well as the parameters of the lane itself, such as its curvature and width.

Figure 1 shows examples of the varying highway markings that must be handled when creating a lane detector. The left-hand lane markings in this scene are Bots Dots (circular reflectors), while a solid white line marks the right-hand lane. Both lane markings show poor contrast. The trees on the side of the highway in Figure 1 cast shadows over the lane markers, further cluttering the image.

In this paper we propose a method for lane detection that can work on a variety of different road types under a variety of lighting changes and shadowing. To do this we use steerable filters [1], which can be convolved with the input image and provide features that allow both Bots Dots and solid lines to be detected while providing robustness to clutter and lighting changes.

Figure 1: An example of a highway road; lane markings vary from Bots Dots to solid lines and are cluttered by shadows and other vehicles.

1.1 Previous Work

Many researchers have shown lane detectors based on a wide variety of techniques. A commonly used technique is to detect edges and fit lines to them via the Hough transform [2]. The Hough transform is often sensitive to clutter from shadows and to varying types of lane markings. Neural networks have also been used to detect lanes and control vehicles [3], but have difficulties on roads not included in their training set. Techniques using tangent vectors have also been shown to be quite robust on well-marked roads, but can fail when lane markings are not well defined [10].

Others have attempted to overcome the problem of differing lane markings by using multiple detectors. For example, Gehrig et al. [4] detect Bots Dots on California highways using a specific matched-filter detector for Bots Dots and detect solid lane markings using more classical methods. Others, such as Southall et al. [5], propose stochastic methods to overcome lighting and road changes, while Broggi et al. [6] developed the GOLD system for robust obstacle and lane detection. Studies of the state of the art in lane detection have also been performed [7]. While these methods are all very effective at solving particular problems, they tend to be very specific to particular road types or conditions. Robust lane detection remains an unsolved problem because a robust lane detector must be invariant to all manner of road markings, road conditions, lighting changes, shadowing, and occlusion.

0-7803-8310-9/04/$20.00 ©2004 IEEE

Authorized licensed use limited to: Univ of Calif Irvine. Downloaded on May 26, 2010 at 22:22:29 UTC from IEEE Xplore. Restrictions apply.

2 Steerable Filters for Lane Detection


Steerable filters have a number of desirable properties that make them excellent for a lane detection application. First, they can be created to be separable in order to speed processing. By separating the filter into an X and a Y component, the convolution of the filter with an image can be split into two convolutions using the X and Y components separately. Second, only a finite number of rotation angles of a specific steerable filter are needed to form a basis set for all angles of that steerable filter. This allows us to see the response of a filter at a given angle and therefore to tune the filter to specific lane angles or to look at all angles at once.

2.1 Formulation of the steerable filters

The steerable filters used for Bots Dots and lane detection are based on second derivatives of two-dimensional Gaussians. Figure 2 shows an example of a steerable filter basis set constructed from equations 1, 2, and 3:

G_{xx}(x, y) = \frac{\partial^2}{\partial x^2} e^{-\frac{x^2 + y^2}{2\sigma^2}} = \left(\frac{x^2}{\sigma^4} - \frac{1}{\sigma^2}\right) e^{-\frac{x^2 + y^2}{2\sigma^2}}    (1)

G_{xy}(x, y) = \frac{\partial^2}{\partial x \, \partial y} e^{-\frac{x^2 + y^2}{2\sigma^2}} = \frac{xy}{\sigma^4} e^{-\frac{x^2 + y^2}{2\sigma^2}}    (2)

G_{yy}(x, y) = \frac{\partial^2}{\partial y^2} e^{-\frac{x^2 + y^2}{2\sigma^2}} = \left(\frac{y^2}{\sigma^4} - \frac{1}{\sigma^2}\right) e^{-\frac{x^2 + y^2}{2\sigma^2}}    (3)

Figure 2: A basis set for steerable filters based on the second derivatives of a two-dimensional Gaussian.

It has been shown that the response of any rotation of the G_2 filter can be computed using equation 4 [1]:

G_2^{\theta}(x, y) = G_{xx} \cos^2\theta + G_{yy} \sin^2\theta - 2 G_{xy} \cos\theta \sin\theta    (4)

Taking the derivative of (4), setting it equal to 0, and solving for \theta, we can find the values of \theta that correspond to the minimum and maximum responses. These responses can be computed by the formulas given in equations 5 and 6:

R_{max} = \frac{G_{xx} + G_{yy} + A}{2}    (5)

R_{min} = \frac{G_{xx} + G_{yy} - A}{2}    (6)

where

A = \sqrt{G_{xx}^2 - 2 G_{xx} G_{yy} + G_{yy}^2 + 4 G_{xy}^2}.

2.2 Application of steerable filters to road marking detection

Using formulas 4, 5, and 6, we can find the values and angles of the minimum and maximum responses, or the response at a given angle. This is useful for detecting Bots Dots because, for circular objects, the minimum and maximum responses will be very similar. For detecting lanes, the response in the direction of the lane should be near the maximum, and the minimum response should be low.
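The formulation above can be sketched in NumPy. This is an illustrative implementation rather than the authors' code: the kernel radius and sigma are assumed values, and the per-pixel responses G_xx, G_xy, G_yy would come from convolving these kernels with the image. Building the 2-D kernels from 1-D factors also demonstrates the separability noted at the start of this section.

```python
import numpy as np

def g2_kernels(sigma=2.0, radius=6):
    """Second-derivative-of-Gaussian kernels (equations 1-3).

    Each kernel is an outer product of 1-D factors, so the 2-D
    convolution can be run as a 1-D x-pass followed by a 1-D y-pass.
    """
    r = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-r**2 / (2 * sigma**2))            # 1-D Gaussian
    d1 = -(r / sigma**2) * g                      # first derivative
    d2 = (r**2 / sigma**4 - 1 / sigma**2) * g     # second derivative
    gxx = np.outer(g, d2)    # rows index y, columns index x
    gyy = np.outer(d2, g)
    gxy = np.outer(d1, d1)
    return gxx, gxy, gyy

def steered_response(rxx, rxy, ryy, theta):
    """Response of the filter rotated to angle theta (equation 4)."""
    c, s = np.cos(theta), np.sin(theta)
    return rxx * c**2 + ryy * s**2 - 2 * rxy * c * s

def min_max_response(rxx, rxy, ryy):
    """Minimum and maximum responses over all angles (equations 5 and 6)."""
    a = np.sqrt(rxx**2 - 2 * rxx * ryy + ryy**2 + 4 * rxy**2)
    return (rxx + ryy - a) / 2, (rxx + ryy + a) / 2
```

Given the three per-pixel basis responses, `min_max_response` yields the extreme responses used for the Bots Dots and lane tests, without ever sweeping over angles explicitly.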
By applying a threshold to the minimum filter response we can find the Bots Dots within an image. Also, by applying a threshold to the difference between the response in the direction of the lane marking and the minimum response,
we can detect lanes of a specific angle. Figure 3 shows a typical California highway scene with lane markings consisting of both Bots Dots and lines. Figure 4 shows the image after being filtered and thresholded by the minimum value. Figure 5 shows the response to lines in the orientation of the current lane parameters.

Figure 3: A typical highway scene in California.

Figure 4: Results of filtering for Bots Dots.

Figure 5: Results from the filter for a line tuned to the lane angle.

Figure 7: The lane tracking system flow chart.
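The two thresholding rules just described can be written as a short fragment. This is a sketch, not the paper's implementation: the threshold values, and the names `r_min` and `r_lane` for the per-pixel minimum response and the response steered to the current lane angle, are assumptions.

```python
import numpy as np

def detect_markings(r_min, r_lane, dot_thresh=0.5, lane_thresh=0.5):
    """Classify pixels from steerable filter responses.

    r_min  : per-pixel minimum response over all angles (equation 6)
    r_lane : per-pixel response steered to the current lane angle
    """
    # Bots Dots are circular, so the response is nearly isotropic and
    # even the minimum response is strong.
    dots = r_min > dot_thresh
    # Painted lines respond strongly along the lane direction but
    # weakly in the minimum direction.
    lines = (r_lane - r_min) > lane_thresh
    return dots, lines
```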

These results show the usefulness of the steerable filter set for relatively normal highway conditions. This filtering technique is also very useful for dealing with shadowed regions of road. Figure 6 shows a road section that is shadowed by trees and the corresponding filter response.

Figure 6: Filter results when lane markings are shadowed with complex shadows and non-uniform road materials.

3 Lane Detection System

The overall system that we have implemented is diagrammed in Figure 7. The video input to the system is taken from a forward-looking rectilinear camera for our test results, but can be taken from any number of cameras on our test bed vehicle. For more information on this test bed, please refer to [11]. Information about the vehicle's state, including wheel velocities and steering angle, is acquired from the car via the internal CAN bus. These inputs are then fed into the tracking system to determine the state of the vehicle and road. The lane angles in image coordinates are then fed back into the filtering algorithm in order to tune the filters to the specific lanes.

3.1 Road Modeling

The road model used in our system is similar in form to that used in [8]. The state variables include the vehicle offset from the center of the lane, the vehicle heading with respect to the lane, the rate of change of the lane heading, the steering angle of the vehicle, and the velocity of the vehicle. Currently, lane curvature is estimated using the steering angle and the rate of change of the lane heading, and is only estimated at the vehicle's location. The camera pitch and lane width are assumed constant in this implementation. The vehicle state is updated in time each frame as well as updated from the measurements via a Kalman filter, which is described at the end of the next section.

3.2 Lane Tracking

In order to perform robust tracking, some additional post-processing on the filter results is performed. First, only the filter candidates within the vicinity of the lanes are used in updating the lanes. This removes outliers from other vehicles and extraneous road markings. Secondly, for each lane, the first and second moments of the point candidates are computed. Straight lane markings should be aligned so that there is a high variance in the lane heading direction and a low variance in the other direction. Outliers are then removed based on these statistics. Because the algorithm uses a local search about

the lanes for candidates, it requires initialization. In testing, it was sufficient to initialize the lane tracker position and trajectory to zero (corresponding to the center of the lane).
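The vicinity gating and moment-based outlier rejection of section 3.2 can be sketched as follows; this is a minimal version assuming candidates already expressed in road coordinates, and the gating width `gate` and deviation multiplier `k` are hypothetical tuning values, not taken from the paper.

```python
import numpy as np

def filter_lane_candidates(points, lane_x, lane_dir, gate=1.5, k=2.5):
    """Reject filter candidates inconsistent with a straight lane marking.

    points   : (N, 2) array of (x, y) candidate positions
    lane_x   : estimated lateral position of the lane marking
    lane_dir : unit vector along the lane heading
    """
    pts = np.asarray(points, dtype=float)
    # 1) keep only candidates in the vicinity of the predicted lane
    pts = pts[np.abs(pts[:, 0] - lane_x) < gate]
    if len(pts) < 3:
        return pts
    # 2) first and second moments of the surviving candidates,
    #    projected onto the direction normal to the lane heading
    normal = np.array([-lane_dir[1], lane_dir[0]])
    across = (pts - pts.mean(axis=0)) @ normal
    # 3) straight markings have low variance across the lane, so drop
    #    candidates more than k standard deviations off the marking
    return pts[np.abs(across) <= k * across.std()]
```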
These computed headings and positions in image space are then transformed into real-world coordinates via an inverse perspective calculation. The state variables are then updated using these measurements as well as measurements of steering angle and wheel velocity provided by the vehicle's CAN bus.

Figure 9: Results during a lane change. (Frame 258 of the test sequence)
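A minimal sketch of the inverse perspective calculation, assuming a flat road, a pinhole camera, and the constant camera pitch noted in section 3.1; the focal length, principal point, camera height, and pitch values are hypothetical, not the test bed's calibration.

```python
import numpy as np

def image_to_road(u, v, f=800.0, cu=320.0, cv=240.0, h=1.2, pitch=0.04):
    """Map a pixel (u, v) to road-plane coordinates (flat-road assumption).

    f      : focal length in pixels (assumed)
    cu, cv : principal point (assumed)
    h      : camera height above the road in meters (assumed)
    pitch  : camera pitch below the horizon in radians (assumed constant)
    Returns (lateral offset, distance ahead) in meters.
    """
    c, s = np.cos(pitch), np.sin(pitch)
    a = (v - cv) / f                      # normalized vertical image angle
    # intersect the viewing ray with the ground plane
    z = h * (c - a * s) / (a * c + s)     # distance ahead
    x = (u - cu) / f * (h * s + z * c)    # lateral offset
    return x, z
```

Note the denominator: rays at or above the horizon (a * c + s <= 0) never intersect the road, so only pixels below the horizon line should be mapped.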
These measurements are then fed into a discrete-time Kalman filter for the road and vehicle state, as described in section 3.1. The system and measurement equations at time k are given in equations 7 and 8, and the state is then corrected using the standard Kalman update.

\hat{x}_{k+1|k} = A \hat{x}_{k|k} + B u_k    (7)

y_k = C x_k    (8)

Figure 10: Results with lighting changes from going through an overpass. (Frame 1657 of the test sequence)
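The predict/correct cycle built around equations 7 and 8 can be sketched as follows. The matrices A, B, C and the noise covariances Q, R are placeholders to be supplied by the road and vehicle model; this is a generic Kalman step, not the paper's actual model.

```python
import numpy as np

def kalman_step(x, P, u, y, A, B, C, Q, R):
    """One predict/correct cycle of a discrete-time Kalman filter.

    Predict with the system model (equation 7), then correct the
    prediction with the measurement y (equation 8).
    """
    # predict
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # correct
    S = C @ P_pred @ C.T + R                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```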

4 Experimental Results and Evaluation

Testing was performed on southern California highways. These highways contained overpasses, both Bots Dots lane markers and painted line lane markers, shadowing from trees and vehicles, and changes in road surface material. Below are images taken from these tests that demonstrate the system working under these conditions. The lines to the side of the vehicle are the detected lanes, the line emanating from the center of the vehicle is the vehicle's trajectory based on internal sensors, and the horizontal line is the distance to the lead vehicle, which is obtained from a laser radar range finder built into the vehicle.

Figure 8: Results with clutter from nearby cars and tree shadows. (Frame 13 of the test sequence)

Figure 11: Results with shadowing from trucks and only Bots Dots lane markings. (Frame 4043 of the test sequence)

The following graphs (Figures 12 and 13) show the lane tracking and vehicle state information extracted from the run shown in Figures 8-11 above.

Figure 12: Lane position output from the lane tracking system.

Figure 13: Steering angle output from the lane tracking system.

Lane changes occur around frames 500, 2800, and 5200 and are successfully detected by the system. The road is straight in frames 0 through 3700. After that the road curves, as can be seen in the sustained steering angles.

5 Summary and Conclusions

Lane detection is often complicated by varying road markings, clutter from other vehicles and complex shadows, lighting changes from overpasses, occlusion from vehicles, and varying road conditions. In this paper we have presented a solution to the lane detection problem that shows robustness to these conditions. We have shown that using a steerable filter bank provides robustness to lighting changes, road marking variation, and shadowing. Further post-processing based on the statistics of the road marking candidates increases the robustness to occlusion by other vehicles and changing road conditions. Future work will include expanding the road model to incorporate piecewise estimates of trajectory and curvature. This system can also be combined with a vehicle surround analysis system such as that described in Huang et al. [9] to create an intelligent vehicle driver assistance system.

Acknowledgments

The authors of this paper would like to thank the UC Discovery Grant (Digital Media Innovations program), Nissan Motor Co., Ltd., and their colleagues at the Computer Vision and Robotics Research Laboratory, especially Dr. Tarak Gandhi.

References

[1] W. T. Freeman and E. H. Adelson, "The design and use of steerable filters," IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(9):891-906, 1991.
[2] J. McDonald, "Detecting and tracking road markings using the Hough transform," Proc. of the Irish Machine Vision and Image Processing Conference, 2001.
[3] D. Pomerleau, "RALPH: Rapidly adapting lateral position handler," Proc. IEEE Symposium on Intelligent Vehicles, September 25-26, 1995.
[4] S. Gehrig, A. Gern, S. Heinrich, and B. Woltermann, "Lane recognition on poorly structured roads - the Bot Dot problem in California," Proc. of the 5th Conference on Intelligent Transportation Systems, 2002.
[5] J. B. Southall and C. J. Taylor, "Stochastic road shape estimation," International Conference on Computer Vision, pp. 205-212, June 2001.
[6] M. Bertozzi and A. Broggi, "GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection," IEEE Transactions on Image Processing, 1997.
[7] J. Kosecka, R. Blasi, C. Taylor, and J. Malik, "A comparative study of vision-based lateral control strategies for autonomous highway driving," Proc. IEEE Int. Conf. on Robotics and Automation, pp. 1903-1908, May 1998.
[8] C. J. Taylor, J. Malik, and J. Weber, "A real-time approach to stereopsis and lane-finding," Proc. IEEE Intelligent Vehicles Symposium, 1996, pp. 207-212.
[9] K. Huang, M. M. Trivedi, and T. Gandhi, "Driver's view and vehicle surround estimation using omnidirectional video stream," Proc. IEEE Intelligent Vehicles Symposium, Columbus, OH, pp. 444-449, June 9-11, 2003.
[10] H. Furusho, R. Shirato, and M. Shimakage, "A lane recognition method using the tangent vectors of white lane markers," 6th International Symposium on Advanced Vehicle Control, Sept. 9-13, 2002.
[11] J. McCall, O. Achler, and M. M. Trivedi, "The LISA-Q human-centered intelligent vehicle test bed," to appear in Proc. IEEE Intelligent Vehicles Symposium, Parma, Italy, June 14-17, 2004.
