© June 2021| IJIRT | Volume 8 Issue 1 | ISSN: 2349-6002

Lane Line Detection System in Python using OpenCV

Raman Shukla1, Rajat Shukla2, Sarthak Garg3, Sharad Singh4, Pooja Vajpayee5
1,2,3,4,5 Computer Science, Raj Kumar Goel Institute of Technology, Uttar Pradesh, India

Abstract - During the driving operation, humans use their optical vision for vehicle maneuvering. The road lane markings act as a constant reference for vehicle navigation. One of the prerequisites for a self-driving car is therefore the development of an Automatic Lane Detection system using an algorithm.
Computer vision is a technology that can enable cars to make sense of their surroundings. It is a branch of artificial intelligence that enables software to understand the content of images and video. Modern computer vision has come a long way thanks to advances in deep learning, which enables it to recognize different objects in images by examining and comparing millions of examples and gleaning the visual patterns that define each object. While especially effective for classification tasks, deep learning suffers from serious limitations and can fail in unpredictable ways. This means that a driverless car might crash into a truck in broad daylight or, worse, accidentally hit a pedestrian. The computer vision technology currently used in autonomous vehicles is also vulnerable to adversarial attacks that manipulate the AI's input channels to force it to make mistakes. For instance, researchers have shown that they can trick a self-driving car into not recognizing stop signs by sticking black and white labels on them.

Index Terms - Deep Learning (DL), Machine Learning (ML), Convolutional Neural Networks, Computer Vision.

I. INTRODUCTION

Traffic accidents have become one of the most serious problems in today's world. Roads are the most commonly chosen mode of transportation and provide the finest connections among all modes. The most frequently occurring traffic problem is the negligence of drivers, and it has become more and more serious with the increase in the number of vehicles. These road accidents can be reduced with the help of road lanes or white markers that assist the driver in identifying the road area and the non-road area. A lane is a marked part of the road that can be used by a single line of vehicles, controlling and guiding drivers so that traffic conflicts are reduced.
Increasing safety and saving human lives is one of the basic functions of an Intelligent Transportation System (ITS). Intelligent transportation systems are advanced applications which aim to provide innovative services relating to different modes of transport and traffic management. Such a system enables various users to be better informed and to make safer, more coordinated, and smarter use of transport networks.
Most roads, such as highways, have at least two lanes, one for traffic in each direction, separated by lane markings. Major highways often have two roadways separated by a median, each with multiple lanes. To detect these road lanes, some system must be employed that can help the driver drive safely.
Lane line detection is a critical component for self-driving cars and for computer vision in general. This concept is used to describe the path for self-driving cars and to avoid the risk of drifting into another lane. Using computer vision techniques in Python, we will identify the road lane lines within which autonomous cars must run. This is a critical part of autonomous driving, as a self-driving car should not cross its lane or move into the opposite lane, in order to avoid accidents.

II. LITERATURE REVIEW

Despite the perceived simplicity of finding white markings on a simple road, it can be very difficult to determine lane markings on various types of road. The difficulties include shadows, occlusion by other vehicles, changes in the road surface itself, and different types of lane markings.
A lane detection system must be able to detect all manner of markings from roadways and filter them to produce a reliable estimate of the vehicle's position relative to the lane. To detect road markings and road boundaries, various methodologies are used, such as the Hough Transform, the Canny edge detection algorithm and the bilateral filter. The main working of each of these is as follows.

2.2.1 Hough Transform
The Hough transform is a feature extraction technique used in image analysis and digital image processing. The classical Hough Transform originally dealt with the identification of lines in an image, but it has since been extended to identifying the positions of shapes such as circles and ellipses. In the automated analysis of digital images there was a problem of detecting simple geometric shapes such as straight lines and circles[1]. So, in the pre-processing stage, an edge detector is used to obtain points in the image that lie on the desired curve in image space. Due to imperfections in the image data or in the edge detector, some pixels may be missing from the desired curve, and there may be spatial deviation between the ideal geometric shape and the noisy edge pixels obtained by the edge detector. The Hough transform addresses this problem: the grouping of edge pixels into an object class is performed by choosing appropriate pixels from a set of parametric image objects[2]. The simplest case of the Hough transform is finding straight lines that are hidden in large amounts of image data. For detecting lines in images, the image is first converted into a binary image using some form of thresholding, and then the positive or suitable instances are added to the dataset. The main part of the Hough transform is the Hough space. Each point (d, θ) in Hough space corresponds to a line at angle θ and distance d from the origin in the data space. The value of a function in Hough space gives the point density along a line in the data space.
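
To make the (d, θ) parametrisation concrete, the sketch below is an illustrative toy implementation, not code from the paper; the binary edge map `edges` is assumed to come from an earlier edge detection step. It accumulates one vote per edge pixel for every candidate angle, so that peaks in the accumulator correspond to dominant straight lines.

```python
import numpy as np

def hough_accumulator(edges, num_angles=180):
    """Vote in (d, theta) space for each non-zero pixel of a binary edge map."""
    height, width = edges.shape
    d_max = int(np.ceil(np.hypot(height, width)))     # largest possible |d|
    thetas = np.deg2rad(np.arange(num_angles) * 180.0 / num_angles)
    accumulator = np.zeros((2 * d_max, num_angles), dtype=np.int32)

    ys, xs = np.nonzero(edges)                         # coordinates of edge pixels
    for x, y in zip(xs, ys):
        # A line through (x, y) with normal angle theta lies at distance
        # d = x*cos(theta) + y*sin(theta) from the origin.
        ds = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        accumulator[ds + d_max, np.arange(num_angles)] += 1

    return accumulator, thetas, d_max
```

In practice, OpenCV's cv2.HoughLines and cv2.HoughLinesP perform this voting internally and far more efficiently; the loop above is only meant to show what a point in Hough space represents.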

2.2.2 Edge Detection
Edge detection is based on identifying the points in a digital image at which the image brightness changes sharply. The points at which the image brightness changes sharply are organized into a set of curved line segments termed edges. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction. Applying an edge detection algorithm to an image may significantly reduce the amount of data to be processed and may therefore filter out information that may be regarded as less relevant, while preserving the important structural properties of the image. If the edge detection step is successful, the subsequent task of interpreting the information content of the original image may be substantially simplified. However, it is not always possible to obtain such ideal edges from real-life images of moderate complexity.
The Canny edge detector is an edge detection algorithm that uses a multi-stage procedure to detect edges in images[3]. Its aim is to achieve optimal edge detection. In this definition, an optimal edge detector satisfies the following criteria:
• Good detection – the algorithm should be able to detect as many real edges in the image as possible.
• Good localization – edges marked by the algorithm should be as close as possible to the edges in the real image.
• Minimal response – a given edge in the image should only be marked once, so as to reduce false edges.
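
As a small illustration of this data reduction, the sketch below compares the number of non-zero pixels before and after edge detection (the file name and the thresholds 50 and 150 are assumptions, not values from the paper):

```python
import cv2
import numpy as np

gray = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)   # multi-stage Canny detector: gradient, non-maximum suppression, hysteresis

total_pixels = gray.size
edge_pixels = int(np.count_nonzero(edges))
print(f"{edge_pixels} of {total_pixels} pixels remain as edge candidates "
      f"({100.0 * edge_pixels / total_pixels:.1f}%)")
```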

2.2.3 Bilateral Filter
The bilateral filter is a simple and non-iterative scheme which smooths the image while preserving the edges. The basic idea behind the bilateral filter is that two pixels should influence each other only if they are close to one another, both spatially and in intensity[4]. The filter splits an image into large-scale features, i.e. structure, and small-scale features, i.e. texture. In this filter, every sample is replaced by a weighted average of its neighbors. These weights reflect two forces: the closeness of a neighbor to the center sample, so that a larger weight is assigned to closer samples, and the similarity between a neighbor and the center sample, so that a larger weight is assigned to more similar samples[5].
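
For comparison, a bilateral filter can be applied in OpenCV as sketched below (a minimal example; the neighborhood diameter and the two sigma values are illustrative assumptions, not values taken from the paper):

```python
import cv2

image = cv2.imread("road.jpg")

# Arguments: neighborhood diameter, sigmaColor (how dissimilar a neighbor may be and
# still contribute), sigmaSpace (how far away a neighbor may be and still contribute).
smoothed = cv2.bilateralFilter(image, 9, 75, 75)
cv2.imwrite("smoothed.jpg", smoothed)
```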

Source Computer Vision", which is a package that has technique, we can find lines from the pixel outputs of
many useful tools for analyzing images.The tools we the canny edge detection output.
have are color selection, region of interest selection,
grayscaling, Gaussian smoothing, Canny Edge IV. CONCLUSION
Detection and Hough Tranform line detection. The
stages of the proposed system are shown in Fig. Below In this paper, we proposed the approach to detect
lanes, detect and track multiple vehicles for lane
change support around the test vehicle. For lane
detection, to detect lane in real-time, we use EDLines
algorithm which can detect line segments between 10
ms and 20 ms on 2.2 GHz CPU, and EDlines was
applied to ROI.Therefore, lane detection method has
been implemented in the3.3 GHz Intel CPU and it
takes about 13 ms with each the image.
With frontal view, our algorithm detects three lane
1. Color Selection
areas, frontallane, left-side lane. Moreover, both rear-
First let us select some colors. For Instance: Lane
side lane were detectedwith two rear-side cameras by
Lines are usually White in color and we know the RGB
using our method.
value of White is (255,255,255). Here we will define
Based on the detected lane areas combined with
a color threshold in the variables red_threshold,
horizontal edge feature of vehicles, vehicle candidates
green_threshold and blue_threshold and populate
are detected in each lane, and then the vertical edge are
rgb_threshold with these values. This vector contains
used to verify the vehicle candidates. With wrong
minimum values for red,green and blue(R,G,B).
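
A minimal sketch of this step is shown below (illustrative only; the input file name and the threshold value of 200 for each channel are assumptions, since the paper does not give concrete numbers):

```python
import cv2
import numpy as np

image = cv2.imread("road.jpg")          # BGR image as loaded by OpenCV

# Minimum intensity each channel must reach for a pixel to be kept as "lane-like".
red_threshold = 200
green_threshold = 200
blue_threshold = 200
rgb_threshold = [red_threshold, green_threshold, blue_threshold]

# Pixels below any of the three thresholds are blacked out; near-white pixels survive.
blue, green, red = image[:, :, 0], image[:, :, 1], image[:, :, 2]
color_mask = (red < rgb_threshold[0]) | (green < rgb_threshold[1]) | (blue < rgb_threshold[2])
color_selected = np.copy(image)
color_selected[color_mask] = [0, 0, 0]
cv2.imwrite("color_selected.jpg", color_selected)
```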

2. Region Masking
We assume that the front-facing camera that took the image is mounted in a fixed position on the car, so that the lane lines will always appear in the same general region of the image. Next, we take advantage of this by adding a criterion that only considers pixels for color selection in the region where we expect to find the lane lines.
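
One way to express such a region of interest is a polygon mask over the lower part of the frame, as sketched below (the triangular vertices and their fractions are illustrative assumptions, not values from the paper):

```python
import cv2
import numpy as np

image = cv2.imread("road.jpg")
height, width = image.shape[:2]

# Triangle covering the lower part of the frame where lane lines are expected:
# the two bottom corners and a point slightly below the image center.
vertices = np.array([[(0, height), (width // 2, int(height * 0.55)), (width, height)]],
                    dtype=np.int32)

mask = np.zeros_like(image)
cv2.fillPoly(mask, vertices, (255, 255, 255))   # white inside the region of interest
masked_image = cv2.bitwise_and(image, mask)     # everything outside the region is blacked out
cv2.imwrite("region_masked.jpg", masked_image)
```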

3. Canny Edge Detection
Now we apply the Canny detector to the grayscaled image, and our output is another image called edges. low_threshold and high_threshold are the thresholds for edge detection.
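
A sketch of this stage, including the grayscaling and Gaussian smoothing mentioned above, might look as follows (the kernel size and the threshold values 50 and 150 are assumptions, not figures given in the paper):

```python
import cv2

image = cv2.imread("region_masked.jpg")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # grayscaling
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # Gaussian smoothing to suppress noise

low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blurred, low_threshold, high_threshold)
cv2.imwrite("edges.jpg", edges)
```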

4. Hough Transform
Now that we have detected edges in the region of interest, we want to identify the lines which indicate lane lines. This is where the Hough transform comes in handy. The Hough transformation converts an "x vs. y" line into a point in "gradient vs. intercept" space, so points in the image correspond to lines in Hough space. An intersection of lines in Hough space therefore corresponds to a line in Cartesian space. Using this technique, we can find lines from the pixel output of the Canny edge detection step.
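
A sketch of this final stage with OpenCV's probabilistic Hough transform is given below (the resolution, threshold, minLineLength and maxLineGap values are illustrative assumptions and would need tuning):

```python
import cv2
import numpy as np

edges = cv2.imread("edges.jpg", cv2.IMREAD_GRAYSCALE)
image = cv2.imread("road.jpg")

# rho = 2 px and theta = 1 degree set the resolution of the Hough accumulator;
# threshold is the minimum number of votes a candidate line needs.
lines = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=100)

if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        # Draw each detected lane segment in red on the original frame.
        cv2.line(image, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 3)

cv2.imwrite("lanes.jpg", image)
```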

IV. CONCLUSION

In this paper we proposed an approach to detect lanes and to detect and track multiple vehicles for lane change support around the test vehicle. For lane detection in real time, we use the EDLines algorithm, which can detect line segments in between 10 ms and 20 ms on a 2.2 GHz CPU, and EDLines was applied to the region of interest (ROI). The lane detection method has been implemented on a 3.3 GHz Intel CPU and takes about 13 ms per image.
With the frontal view, our algorithm detects three lane areas, including the frontal lane and the left-side lane. Moreover, both rear-side lanes were detected with two rear-side cameras using our method.
Based on the detected lane areas combined with the horizontal edge features of vehicles, vehicle candidates are detected in each lane, and the vertical edges are then used to verify the candidates. For wrong detection cases, we use a Kalman filter to predict and track the vehicle target. The time for vehicle detection is about 30 ms, and the total time for lane detection and vehicle detection is about 43 ms. From the outcome of the experiments, our method achieves its goal of supporting lane change and warning.

REFERENCES

[1] J. Long, E. Shelhamer, and T. Darrell, "Lane Detection Techniques - A Review," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431-3440.
[2] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. Torr, "A Layered Approach to Robust Lane Detection at Night," 2015, pp. 1529-1537.
[3] V. Badrinarayanan, A. Handa, and R. Cipolla, "SegNet: A deep convolutional encoder-decoder architecture for robust semantic pixel-wise labelling," arXiv preprint arXiv:1505.07293, 2015.
[4] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The cityscapes dataset for semantic urban scene understanding," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[5] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," arXiv preprint arXiv:1512.03385, 2015.
[6] P. Shopa, N. Sumetha, and P. S. K. Pathra, "Traffic sign detection and recognition using OpenCV," in International Conference on Information Communication and Embedded Systems (ICICES 2014), 2014.
