Lane Line Detection Using Hough Transform (Pipeline for Videos)
A PROJECT REPORT
Submitted by
SOURAV DEY-17BCE0019
ISHA -17BCE0282
Slot – L55+L56
Faculty:
ABSTRACT
Chapter 1: INTRODUCTION
Chapter 2: LITERATURE REVIEW
Chapter 3: METHODOLOGY
Chapter 5: CONCLUSION AND FUTURE WORK
APPENDICES
REFERENCES
ABSTRACT
According to The Times of India, about 150,000 people were killed in road accidents in the
last year. Among the many causes, lane changing is one of the most common causes of
accidents. Experts say these accidents occur mostly due to distraction, often caused by the
belief that one can multitask while driving. Another risk factor is driving slowly in the right
lane, which compels other drivers to speed up in order to switch lanes. When we drive, we
use our eyes to decide where to go; the lines on the road that mark the lanes act as our
constant reference for where to steer the vehicle. Lane detection can therefore be considered
a potential solution to such accidents. It is a system devised to warn the driver when the
vehicle begins to move out of its current lane. The system detects the lines in the input
image (or video) and selects the lane markers of the road surface from them. The Hough
transform is used to extract the lines from an image. Such a system can warn a driver who
drifts out of the lane without being aware of it.
CHAPTER 1: INTRODUCTION
Reference 1 describes the application of the Hough transform to road lanes, much like this
project. The paper also compares the Randomised Hough Transform (RHT) and the
Standard Hough Transform (SHT). The RHT computation consists of selecting two random
pixels from the edge image and calculating the parameters of the line connecting them. The
RHT algorithm rests on the fact that each point in the ρ-θ plane corresponds to one line
through two points in the original binary edge image. The SHT, on the other hand, finds
the local maxima that represent line segments of the image and extracts the line segments
from the positions of those maxima; its disadvantages are computational complexity and
high memory consumption. The paper thoroughly describes these two methods with respect
to their thresholds, including the various test images it uses. It is cited here because its
approach is very similar to that of this project.
Reference 2 discusses ways to simplify lane line detection based on the Hough transform,
proposing an algorithm that identifies lane lines directly in Hough space. The Hough
transform is applied to the image, and the points conforming to the parallelism, length,
angle, and intercept characteristics of lane lines are selected in Hough space. These points
are converted directly into lane line equations. The lane lines are then fused and their
properties identified. The experimental results showed that lanes are identified more
reliably on expressways and structured roads; in comparison with a traditional algorithm,
the identification is effectively improved.
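As an illustrative sketch only (the pairing rule and all thresholds below are assumptions, not the paper's exact criteria), selecting lane candidates directly in Hough space might look like:

```python
# From a list of Hough-space peaks (rho, theta), keep pairs whose angles are
# nearly equal (the parallel characteristic) and whose rho separation lies in
# a plausible lane-width range (the intercept characteristic).
def select_lane_pairs(peaks, angle_tol=0.05, min_sep=100, max_sep=500):
    pairs = []
    for i in range(len(peaks)):
        for j in range(i + 1, len(peaks)):
            (r1, t1), (r2, t2) = peaks[i], peaks[j]
            if abs(t1 - t2) < angle_tol and min_sep <= abs(r1 - r2) <= max_sep:
                pairs.append((peaks[i], peaks[j]))
    return pairs
```

The surviving (ρ, θ) pairs can then be converted straight into lane line equations, skipping a separate image-space verification pass.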
Reference 3 gives the methodology of how the transform is applied and how the code
works, providing the background needed to understand the code used here. It explains the
following techniques in detail: colour selection, Canny edge detection, region-of-interest
selection, and Hough transform line detection.
CHAPTER 3: METHODOLOGY
The coloured image taken as input is first converted into a grayscale image. This helps to
reduce work as otherwise work would have to be done on the RGB scale. Then Gaussian
Blur is applied to the image. This is done to reduce image noise. Then the Canny Edge
Detection algorithm is applied on the smoothened image. Then a mask is applied to the
image from the Edge to filter the region of interest. This is done to remove parts of the
image where it is not expected to have roads.
The Hough lines algorithm is then applied to the masked edge image, yielding the line
segments lying near the lane lines. Once the lines are obtained, the sign of the slope
distinguishes left from right: a negative slope identifies the left lane line and a positive
slope the right lane line. All the left line segments are averaged and an extrapolated left
line is drawn; the same method gives an extrapolated right line. The output is the pair of
lane boundary lines, which can be used to alert the driver when the vehicle moves out of
the lane.
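A minimal sketch of the slope-based separation and extrapolation just described (the function name and interface are my own; `y_bottom` and `y_top` mark the assumed vertical extent of the region of interest):

```python
import numpy as np

# Split Hough segments [x1, y1, x2, y2] into left/right by slope sign,
# average each side's (slope, intercept), and extrapolate to full lines.
def average_lane_lines(segments, y_bottom, y_top):
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # skip vertical segments (undefined slope)
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        (left if m < 0 else right).append((m, b))

    def extrapolate(fits):
        if not fits:
            return None
        m, b = np.mean(fits, axis=0)
        # solve x = (y - b) / m at the bottom and top of the region of interest
        return (int((y_bottom - b) / m), y_bottom, int((y_top - b) / m), y_top)

    return extrapolate(left), extrapolate(right)
```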
Figure 1: System Overview
Python and OpenCV are used to detect lane lines on the road. We develop a processing
pipeline that works on a series of individual images and then apply the result to a video
stream.
Pipeline architecture
CHAPTER 5: CONCLUSION AND FUTURE WORK
Several lane detection algorithms have been proposed to solve this problem, but they have
a few disadvantages:
Robust lane detection under shadows and low-illumination conditions cannot detect
high-dynamic-range portions of the image.
Most other detection methods cannot verify the colours of the lane markers.
Through the Hough transform, however, this can be done using lane hypothesis
verification. We therefore conclude that the Hough transform is a more suitable solution
for a variety of lane detection problems.
In future, this transform could be modified to work on curved and circular roads as
well.
APPENDICES
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
get_ipython().run_line_magic('matplotlib', 'inline')
image = mpimg.imread('test_images/solidWhiteRight.jpg')
def grayscale(img):
    """Convert an RGB image to grayscale."""
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

def region_of_interest(img, vertices):
    """Keep only the region of the image inside the polygon "vertices"."""
    mask = np.zeros_like(img)
    ignore_mask_color = 255
    # filling pixels inside the polygon defined by "vertices" with the fill color
    cv2.fillPoly(mask, vertices, ignore_mask_color)
    return cv2.bitwise_and(img, mask)
# state carried over between video frames
prev_left_x1 = 0
prev_left_x2 = 0
prev_right_x1 = 0
prev_right_x2 = 0
prev_left_avg_m = -1   # previous average slopes (left negative, right positive)
prev_right_avg_m = 1
prev_left_b = 0
prev_right_b = 0
prev_left_line = 0
prev_right_line = 0
# slope = (y2 - y1) / (x2 - x1)
slopes = (lines[:, 3] - lines[:, 1]) / (lines[:, 2] - lines[:, 0])
valid = ~np.isinf(slopes) & ~np.isnan(slopes)  # drop degenerate segments, keeping lines and slopes aligned
lines = lines[valid]
slopes = slopes[valid]
left_lines = lines[slopes < -0.5]    # left lines should have negative slope, threshold = -0.5
right_lines = lines[slopes > 0.5]    # right lines should have positive slope, threshold = +0.5
left_slopes = slopes[slopes < -0.5]
right_slopes = slopes[slopes > 0.5]
global prev_left_avg_m
global prev_right_avg_m
global prev_left_b
global prev_right_b
keep_prev_left = False
keep_prev_right = False
prev_left_avg_m = left_avg_m
prev_right_avg_m = right_avg_m
prev_left_b = left_b
prev_right_b = right_b
import os
os.listdir("test_images/")
img = mpimg.imread('test_images/solidWhiteCurve.jpg')
img_shape= img.shape
img_gray = grayscale(img)
img_blur = gaussian_blur(img_gray, kernel_size=5)
img_edge = canny(img_blur, low_threshold=50, high_threshold=150)
vertices = np.array([[(0,img_shape[0]),(425, 315), (540, 315),
(img_shape[1],img_shape[0])]], dtype=np.int32)
img_masked_edges = region_of_interest(img_edge, vertices)
img_hough_lines = hough_lines(img_masked_edges, rho=1, theta=np.pi/180,
                              threshold=40, min_line_len=60, max_line_gap=20)
img_lanes = weighted_img(img=img_hough_lines, initial_img=img, α=0.8, β=1., λ=0.)
plt.imshow(img_lanes)
img = mpimg.imread('test_images/solidWhiteRight.jpg')
img_shape= img.shape
img_gray = grayscale(img)
img_blur = gaussian_blur(img_gray, kernel_size=5)
img_edge = canny(img_blur, low_threshold=50, high_threshold=150)
vertices = np.array([[(0,img_shape[0]),(425, 315), (540, 315),
(img_shape[1],img_shape[0])]], dtype=np.int32)
img_masked_edges = region_of_interest(img_edge, vertices)
img_hough_lines = hough_lines(img_masked_edges, rho=1, theta=np.pi/180,
                              threshold=40, min_line_len=60, max_line_gap=40)
img_lanes = weighted_img(img=img_hough_lines, initial_img=img, α=0.8, β=1., λ=0.)
plt.imshow(img_lanes)
img = mpimg.imread('test_images/solidYellowCurve.jpg')
img_shape= img.shape
img_gray = grayscale(img)
img_blur = gaussian_blur(img_gray, kernel_size=5)
img_edge = canny(img_blur, low_threshold=50, high_threshold=150)
vertices = np.array([[(0,img_shape[0]),(425, 315), (540, 315),
(img_shape[1],img_shape[0])]], dtype=np.int32)
img_masked_edges = region_of_interest(img_edge, vertices)
img_hough_lines = hough_lines(img_masked_edges, rho=1, theta=np.pi/180,
                              threshold=40, min_line_len=60, max_line_gap=30)
img_lanes = weighted_img(img=img_hough_lines, initial_img=img, α=0.8, β=1., λ=0.)
plt.imshow(img_lanes)
img = mpimg.imread('test_images/solidYellowCurve2.jpg')
img_shape= img.shape
img_gray = grayscale(img)
img_blur = gaussian_blur(img_gray, kernel_size=5)
img_edge = canny(img_blur, low_threshold=50, high_threshold=150)
vertices = np.array([[(0,img_shape[0]),(425, 315), (540, 315),
(img_shape[1],img_shape[0])]], dtype=np.int32)
img_masked_edges = region_of_interest(img_edge, vertices)
img_hough_lines = hough_lines(img_masked_edges, rho=1, theta=np.pi/180,
                              threshold=60, min_line_len=60, max_line_gap=30)
img_lanes = weighted_img(img=img_hough_lines, initial_img=img, α=0.8, β=1., λ=0.)
plt.imshow(img_lanes)
img = mpimg.imread('test_images/solidYellowLeft.jpg')
img_shape= img.shape
img_gray = grayscale(img)
img_blur = gaussian_blur(img_gray, kernel_size=5)
img_edge = canny(img_blur, low_threshold=50, high_threshold=150)
vertices = np.array([[(0,img_shape[0]),(425, 315), (540, 315),
(img_shape[1],img_shape[0])]], dtype=np.int32)
img_masked_edges = region_of_interest(img_edge, vertices)
img_hough_lines = hough_lines(img_masked_edges, rho=1, theta=np.pi/180,
                              threshold=60, min_line_len=60, max_line_gap=30)
img_lanes = weighted_img(img=img_hough_lines, initial_img=img, α=0.8, β=1., λ=0.)
plt.imshow(img_lanes)
img_hough_lines_extrapolated = hough_lines(img_masked_edges, rho=1, theta=np.pi/180,
                                           threshold=60, min_line_len=60,
                                           max_line_gap=30, extrapolate=True)
img_lanes_extrapolated = weighted_img(img=img_hough_lines_extrapolated,
                                      initial_img=img, α=0.8, β=1., λ=0.)
plt.imshow(img_lanes_extrapolated)
img = mpimg.imread('test_images/whiteCarLaneSwitch.jpg')
img_shape= img.shape
img_gray = grayscale(img)
img_blur = gaussian_blur(img_gray, kernel_size=5)
img_edge = canny(img_blur, low_threshold=50, high_threshold=150)
vertices = np.array([[(0,img_shape[0]),(425, 315), (540, 315),
(img_shape[1],img_shape[0])]], dtype=np.int32)
img_masked_edges = region_of_interest(img_edge, vertices)
img_hough_lines = hough_lines(img_masked_edges, rho=1, theta=np.pi/180,
                              threshold=80, min_line_len=60, max_line_gap=30)
img_lanes = weighted_img(img=img_hough_lines, initial_img=img, α=0.8, β=1., λ=0.)
plt.imshow(img_lanes)
def process_image(image):
    # same pre-processing steps as used on the still images above
    img_shape = image.shape
    img_gray = grayscale(image)
    img_blur = gaussian_blur(img_gray, kernel_size=5)
    img_edge = canny(img_blur, low_threshold=50, high_threshold=150)
    vertices = np.array([[(0, img_shape[0]), (425, 315), (540, 315),
                          (img_shape[1], img_shape[0])]], dtype=np.int32)
    img_masked_edges = region_of_interest(img_edge, vertices)
    img_hough_lines_extrapolated = hough_lines(img_masked_edges,
                                               rho=1, theta=np.pi/180, threshold=40,
                                               min_line_len=10, max_line_gap=70,
                                               extrapolate=True)
    img_lanes_extrapolated = weighted_img(img=img_hough_lines_extrapolated,
                                          initial_img=image, α=0.8, β=1., λ=0.)
    return img_lanes_extrapolated
from moviepy.editor import VideoFileClip

white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image)  # NOTE: this function expects color images!
get_ipython().run_line_magic('time',
    'white_clip.write_videofile(white_output, audio=False)')
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
get_ipython().run_line_magic('time',
    'yellow_clip.write_videofile(yellow_output, audio=False)')
REFERENCES