
ABSTRACT

This project aims to develop a car lane detection system using Python, NumPy,
and OpenCV. Lane detection is a crucial component of advanced driver assistance
systems (ADAS) and autonomous vehicles, aiding in lane keeping and navigation. The
proposed system utilizes computer vision techniques to analyze images or video streams
from a car's onboard camera and identify lane markings on the road.

The process involves several key steps: preprocessing the input images to enhance
features and reduce noise, detecting edges using techniques like Canny edge detection,
identifying the region of interest (ROI) where lane markings are expected to appear, and
finally, detecting and extrapolating lane lines using methods like the Hough Transform.

Throughout the development process, parameter tuning and optimization are
crucial to ensure accurate and reliable lane detection under various environmental
conditions, such as different lighting, road surfaces, and lane markings.

The system's performance is evaluated using sample images and videos, and
potential improvements, such as curve fitting for more accurate lane detection or
integration with vehicle control systems for real-time lane keeping, are explored.

By implementing this car lane detection system, the project aims to contribute to
the advancement of driver assistance technologies, enhancing road safety and facilitating
the development of autonomous driving systems.

TABLE OF CONTENTS

ABSTRACT ............................................................................................................ i
TABLE OF CONTENTS ....................................................................................... ii
LIST OF FIGURES ............................................................................................... iv
CHAPTER-1 INTRODUCTION ............................................................................ 1
CHAPTER-2 LITERATURE SURVEY ................................................................. 9
    2.1.1 An efficient and scalable simulation model for autonomous vehicles with economical hardware ......... 9
    2.1.2 Edge intelligence-assisted smoke detection in foggy surveillance environments - Feb-2020 ............ 10
    2.1.3 Lane line detection with ripple lane line detection network and Wasserstein GAN ...................... 10
    2.2 Existing System ....................................................................................... 11
    2.2.1 Drawbacks ............................................................................................ 11
    2.3 Proposed System ..................................................................................... 12
    2.3.1 Advantages ........................................................................................... 13
    2.4 Modules Description ............................................................................... 14
    2.5 Feasibility Study ..................................................................................... 16
CHAPTER-3 REQUIREMENT ANALYSIS ...................................................... 18
    3.1 Software Requirements ........................................................................... 18
    3.2 Functional Requirements ........................................................................ 18
    3.3 Hardware Requirements ......................................................................... 18
CHAPTER-4 SOFTWARE DESIGN .................................................................. 19
    4.1 System Architecture ................................................................................ 19
    4.2 Data Flow Diagram ................................................................................. 20
    4.3 Methodology ........................................................................................... 21
CHAPTER-5 IMPLEMENTATION .................................................................... 22
    5.1 Code ........................................................................................................ 22
    5.2 OpenCV ................................................................................................... 32
    5.3 NumPy ..................................................................................................... 33
CHAPTER-6 TESTING ...................................................................................... 35
CHAPTER-7 RESULT ........................................................................................ 38
CHAPTER-8 CONCLUSION AND FUTURE ENHANCEMENT ................... 40
    8.1 Conclusion ............................................................................................... 40
    8.2 Future Enhancement ............................................................................... 40
BIBLIOGRAPHY ................................................................................................ 42
REFERENCES .................................................................................................... 42

LIST OF FIGURES

Figure 1: Activity Diagram ................................................................................. 19
Figure 2: Block Diagram .................................................................................... 20
Figure 3: Image Processing ................................................................................ 21
Figure 4: Applying Image Processing to the Video ........................................... 34
Figure 5: Detecting and Counting the Vehicles ................................................. 34
Car Lane Detection Using NumPy and OpenCV

CHAPTER-1
INTRODUCTION
With the advent of autonomous driving technologies and advanced driver
assistance systems (ADAS), the development of robust car lane detection systems has
become increasingly important. Lane detection plays a pivotal role in ensuring safe and
efficient navigation on roads, providing vital information to vehicles about lane
boundaries and positioning.

This project focuses on the development of a car lane detection system using the
Python programming language, along with the powerful libraries NumPy and OpenCV.
By harnessing the capabilities of computer vision, the system aims to analyze images or
video feeds captured by onboard cameras and accurately identify lane markings on the
road.

The significance of lane detection extends beyond autonomous vehicles; it also
finds applications in driver assistance features such as lane departure warning systems,
lane-keeping assistance, and adaptive cruise control. By accurately detecting and tracking
lane markings, vehicles can maintain proper lane positioning, mitigate the risk of
accidents, and enhance overall driving safety.

Throughout this project, we will delve into various techniques and algorithms
commonly employed in lane detection, including edge detection, region of interest (ROI)
selection, and line detection using the Hough Transform. Additionally, we will explore
methods to optimize and fine-tune parameters to ensure robust performance across
diverse driving conditions.

Furthermore, the project aims to provide a platform for experimentation and
exploration, allowing for the integration of advanced techniques such as curve fitting and
machine learning for more sophisticated lane detection capabilities. Ultimately, the
development of an effective car lane detection system holds great promise for
revolutionizing the automotive industry, paving the way for safer, more efficient, and
ultimately autonomous transportation systems.

PYTHON INSTALLATION

What is Python:

Dept. of CSE, BVCEC(A), Odalarevu. 1


Here are some facts about Python. Python is at present one of the most widely
used multipurpose, high-level programming languages. It allows programming in both
object-oriented and procedural paradigms. Python programs are commonly smaller than
equivalent programs in languages like Java: programmers have to write relatively little
code, and the indentation requirement of the language keeps that code readable. Python
is used by almost all the tech-giant companies, such as Google, Amazon, Facebook,
Instagram, Dropbox, and Uber. The biggest strength of Python is its huge collection of
standard libraries, which can be used for the following:

 Machine Learning

 GUI applications (such as Kivy, Tkinter, PyQt, etc.)

 Web frameworks such as Django (used by YouTube, Instagram, Dropbox)

 Image processing (such as Opencv and Pillow)

 Web scraping (such as Scrapy, Beautiful Soup and Selenium)

 Test Framework

 Multimedia

Advantages of Python:
Let's see how Python outperforms other languages.

Advantages of Python Over Other Languages

1. Less Coding

For the same task, Python generally requires less code than other languages.
Python also has excellent standard library support, so there is often no need to search for
third-party libraries to get the job done. This is the main reason why many people
recommend that beginners learn Python.

2. Affordable

Python is free and open source, so individuals, small businesses, and large
organizations can use it at no cost to build applications. Python also benefits from
popular and widespread community support.

3. Python is for Everyone

Python can run in any environment, be it Linux, Mac, or Windows. Programmers
often need to learn different languages for different jobs, but with Python you can
professionally create web applications, perform data analysis and machine learning,
automate tasks, do web scraping, and build powerful games and visualizations. It is a
true all-terrain programming language.

Disadvantages of Python
We have seen why Python is the best choice for our project. But if you adopt it,
you should also be aware of its limitations. Let us now look at the disadvantages of
choosing Python over another language.

1.Speed Limitations

We have seen that Python code executes line by line. Because Python is
interpreted, its execution speed is comparatively slow. This is not a problem unless speed
is the focus of the project; in most cases, the benefits Python provides outweigh its speed
limitations.

2.Weak in Mobile Computing and Browsers

Although Python is a good server-side language, it is rarely seen on the client
side. Most importantly, it is rarely used to implement smartphone-based applications;
one such application is called Carbonnelle. Although Brython exists for running Python
in the browser, it is not very popular, partly because of security concerns.

3.Design Restrictions

Python is a dynamically typed language. This means that you do not need to
declare variable types when writing code. It uses duck typing: if it looks like a duck, it
is treated as a duck. This is convenient for programmers during coding, but it can give
rise to runtime type errors.
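As a generic illustration of duck typing (a sketch, not code from this project): any object that provides the expected method is accepted, and a mismatch only surfaces when the program actually runs.

```python
class Duck:
    def quack(self):
        return "Quack!"

class Robot:
    def quack(self):
        return "Beep, quack."

def make_it_quack(thing):
    # No declared parameter type: any object with a quack() method works.
    return thing.quack()

print(make_it_quack(Duck()))    # works
print(make_it_quack(Robot()))   # also works: "if it quacks, it's a duck"

# The type mismatch is only discovered at runtime, not at compile time:
try:
    make_it_quack(42)           # int has no quack() method
except AttributeError as err:
    print("Runtime error:", err)
```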

4. Underdeveloped database access layer

Compared to the most widely used technologies such as JDBC (Java Database
Connectivity) and ODBC (Open Database Connectivity), Python's database access layer
is a bit underdeveloped, so Python is used less often in large companies.

5.Simple

We are not kidding. Python's simplicity is indeed a problem. I don't do Java; I'm
more of a Python person. To me, Python's syntax is so simple that Java code seems
unnecessarily verbose.


Install Python on Windows and Mac Step by Step:

Step 1: Go to the official Python site using Google Chrome or any other web browser to
download and install Python, or click on the following link: https://www.pytho

Now check for the latest version that matches your operating system.

Step 2: Click on the Download tab.

Step 3: You can select the yellow "Download Python 3.7.4" button for Windows, or you
can scroll down and click the download link for the corresponding version. Here, we are
downloading the latest version of Python for Windows, 3.7.4.


Step 4: Scroll down the page until you find the "File" option.

Step 5: Here you will see different versions of Python and operating systems.

 To download 32-bit Python for Windows, you can select the Windows x86
embeddable zip file, the Windows x86 executable installer, or the Windows x86
web-based installer.

 To download 64-bit Python for Windows, you can select any one of the three
options: the Windows x86-64 embeddable zip file, the Windows x86-64
executable installer, or the Windows x86-64 web-based installer.

Here we will use the Windows x86-64 web-based installer. With this, the first part,
downloading Python, is complete. Now we will move on to the second part, installing
Python.

Note: You can click on the release link of a version to see the changes or updates made
in that version.


Installation of Python

Step 1: Go to your Downloads folder and open the downloaded Python installer to carry
out the installation process.

Step 2: Before you click on Install Now, make sure to put a tick on "Add Python 3.7 to
PATH".

Step 3: Click on Install Now. After the installation is successful, click on Close.

With the above three steps, you have successfully and correctly installed Python. Now it
is time to verify the installation.

Note: The installation process might take a couple of minutes.

Verify the Python Installation

Step 1: Click on Start


Step 2: In the Windows Run Command, type “cmd”.

Step 3: Open the Command prompt option.

Step 4: Let us test whether Python is correctly installed. Type python -V and press
Enter.

Step 5: You will get the answer Python 3.7.4.

Note: If you have any earlier version of Python already installed, you must first
uninstall it and then install the new one.

Check how the Python IDLE works

Step 1: Click on Start

Step 2: In the Windows Run command, type “python idle”.


Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program

Step 4: To go ahead with working in IDLE, you must first save the file. Click on File,
then click on Save.

Step 5: Name the file, and set "Save as type" to Python files. Click on SAVE. Here I
have named the file Hey World.

Step 6: Now, for example, enter a print statement.
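For instance, assuming the file was saved as Hey World above, a first program might simply print a message; run it in IDLE with Run > Run Module (F5):

```python
message = "Hey World"
print(message)
```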

The scikit-learn library has been used to import machine learning algorithms. The data
set is divided into training and test sets in the proportions 50:50, 70:30, and 90:10. Each
classifier uses the training set for training, and the test set is used to evaluate the
performance of the classifier, measured by its precision, false negative rate, and false
positive rate.


CHAPTER-2
LITERATURE SURVEY
2.1 INTRODUCTION
2.1.1 An efficient and scalable simulation model for autonomous vehicles with
economical hardware

(Muhammad Sajjad, Muhammad Irfan, Khan Muhammad, Javier Del Ser, Javier
Sanchez-Medina) - Mar. 2021

The concept of autonomous vehicles, also known as self-driving or driverless cars, has
gained significant traction in recent years with various companies and research labs
worldwide investing in their development. These vehicles have the potential to
revolutionize transportation by offering safer, more efficient, and reliable alternatives to
conventional driving. They aim to address the alarming rate of car accidents globally,
which result in millions of fatalities and injuries annually. Through advancements in
technology such as sensors, communication systems, operating systems, and
computational hardware, researchers are striving to enhance the capabilities of
autonomous vehicles. Competitions like the DARPA Grand and Urban challenges have
spurred innovation in this field, leading to collaborations between IT companies,
automakers, and research institutions. Connected Vehicle (CV) technology, facilitating
communication between vehicles and infrastructure, holds promise for improving traffic
management and safety. A key focus is on developing lightweight deep learning models
that can operate in real-time on resource-constrained devices like Raspberry Pi, enabling
autonomous driving functionalities such as obstacle detection, traffic sign recognition,
and maneuvering. This paper outlines a framework for a self-driving model car based on
Raspberry Pi, discussing its capabilities and contributions to advancing autonomous
driving technology.


2.1.2 Edge intelligence-assisted smoke detection in foggy surveillance environments -
Feb-2020

(K. Muhammad, S. Khan, V. Palade, I. Mehmood, and V. H. C. de Albuquerque)

Traditional methods face challenges such as limited accuracy and high false alarms,
prompting research into AI-assisted detection. Techniques range from color-based to deep
learning approaches, aiming to improve accuracy and efficiency. Recent developments
include dynamic texture analysis and deep CNN-based methods, addressing issues like
computational complexity and accuracy. However, existing methods often struggle in
foggy surveillance environments, necessitating novel solutions. This work contributes a
lightweight CNN-based method tailored for foggy scenes, achieving improved accuracy
and reduced false alarms. Comprehensive experiments on benchmark datasets validate its
effectiveness, highlighting its suitability for real-world deployment in smart cities and
industrial settings. The paper concludes with insights into future research directions to
further enhance smoke detection capabilities in surveillance systems.

2.1.3 Lane line detection with ripple lane line detection network and wasserstein GAN
(Y. Zhang, Z. Lu, D. Ma, J.-H. Xue, and Q. Liao) – 2021

In today's automotive landscape, cars are integral to transportation, and ensuring safety
amidst increasingly complex traffic conditions is paramount. Advanced Driver Assistance
Systems (ADAS) serve as vital aids, employing sensors to gather environmental data and
analyze it for potential hazards, with lane line detection being a crucial component.
Detecting lane lines aids drivers in understanding road conditions and preparing for
maneuvers, thereby preventing accidents caused by driver fatigue or inattention. Lane
line detection systems face various challenges, including different lane markings,
occlusion, defects, and interference. Traditional methods rely on visual lane features,
employing steps like image preprocessing, feature extraction, and lane detection.
However, these methods are often limited by assumptions and specific features, making
their performance susceptible to interference. In contrast, learning-based approaches,
particularly deep learning methods, have emerged to overcome these limitations.
Convolutional Neural Networks (CNNs) have been utilized to directly detect lane lines
from images, showcasing promising performance. However, challenges such as noise
interference, shadows, and varying weather conditions persist. To address these issues
and advance lane line detection, we propose RiLLD-Net, a simple yet efficient network
that effectively highlights lane line properties and removes interference in common


scenarios. Building upon RiLLD-Net, we introduce Ripple-GAN, a more robust lane line
detection network capable of handling challenging scenes with partial lane line
information. Ripple-GAN integrates RiLLD-Net with Wasserstein Generative Adversarial
Network (WGAN) and semantic segmentation, effectively addressing occlusion and
complex scenarios. Experimental validation demonstrates the effectiveness of our
proposed methods. The paper concludes with insights into future research directions.

2.2 Existing System


The existing systems for car lane detection primarily rely on computer vision
techniques implemented through software algorithms. These systems typically utilize
cameras installed in vehicles to capture images or video streams of the road ahead. The
captured data is then processed using image processing and computer vision algorithms to
detect and track lane markings.

One common approach involves edge detection algorithms such as the Canny edge
detector to identify edges in the image that correspond to lane markings. Following edge
detection, techniques like the Hough Transform are often employed to detect straight lines
representing lane boundaries.

However, existing systems may face challenges such as sensitivity to varying lighting
conditions, noise from other vehicles or road markings, and difficulties in accurately
detecting curved or discontinuous lane markings.

Moreover, real-time performance is critical for practical deployment in vehicles,
necessitating efficient algorithms and hardware optimization to achieve timely processing
of image data. Despite these challenges, existing systems have demonstrated significant
progress and have been integrated into various advanced driver assistance systems
(ADAS) to enhance vehicle safety and autonomy.

2.2.1 Drawbacks:

 Sensitivity to Environmental Conditions: Existing systems may struggle to
perform consistently under varying lighting conditions, weather conditions (such
as rain or fog), or when faced with complex road scenarios, such as sharp curves
or intersections. Changes in lighting and shadows can affect the visibility of lane
markings, leading to inaccurate detection.


 Limited Robustness: These systems may have difficulty distinguishing lane
markings from other road features or markings, such as cracks, shadows, or
temporary obstacles. This can result in false positives or missed detections,
compromising the system's reliability and safety.

 Difficulty in Detecting Curved Lanes: Most existing systems are designed to
detect straight lane markings using techniques like the Hough Transform.
However, they may struggle to accurately detect and track curved or irregularly
shaped lanes, particularly on winding roads or highways with varying curvature.

 Computational Complexity: Real-time performance is essential for practical
deployment in vehicles, but many existing algorithms can be computationally
intensive, requiring significant processing power and resources. This can pose
challenges for implementation on embedded systems with limited computational
capabilities.

 Limited Adaptability: Existing systems may lack adaptability to different road
types, lane configurations, and traffic conditions. They may not be able to
effectively handle scenarios such as temporary lane closures, construction zones,
or unconventional lane markings.

 Dependency on Calibration and Tuning: Fine-tuning parameters and calibrating
the system for optimal performance under different conditions can be labor-
intensive and may require expertise. This dependency on manual calibration
makes the system less robust and increases the risk of errors or inconsistencies.

2.3 Proposed System


The proposed system for car lane detection represents a significant advancement
in the field, as it eliminates the need for additional information such as lane width, time to
lane crossing, or offset between lane centers. Moreover, it operates without requiring
camera calibration or coordinate transformation, simplifying the setup process and
reducing computational overhead.

This system has been meticulously designed and rigorously tested to ensure
robust performance across diverse road conditions and lighting scenarios. Through
extensive experimentation, including scenarios with changing illumination and shadows
on various road types, the system has demonstrated its ability to reliably detect lane
markings without being constrained by speed limits.


By leveraging innovative algorithms and advanced image processing techniques,
the proposed system excels in accurately identifying lane boundaries, even in challenging
environments where traditional systems may falter. Its adaptability and robustness make it
a promising solution for enhancing road safety and supporting autonomous driving
technologies in real-world applications.

2.3.1 Advantages:

Simplicity and Ease of Implementation: The proposed system eliminates the need for
additional information such as lane width or camera calibration, making it simpler to
implement and deploy. This simplification streamlines the setup process and reduces the
complexity of the system.

Robust Performance: Extensive testing under various road conditions and lighting
scenarios has demonstrated the system's robustness. It consistently performs well in
detecting lane markings without being significantly affected by factors such as changing
illumination or shadows.

Versatility: The system's robust performance extends to different road types and
environments, making it versatile and adaptable. Whether on highways, urban streets, or
rural roads, the system reliably detects lane boundaries without the need for manual
adjustments or recalibration.

Real-time Operation: The proposed system operates in real-time, allowing for timely
processing of image data from onboard cameras. This capability is essential for practical
deployment in vehicles, ensuring that lane detection occurs promptly to support driver
assistance features or autonomous driving functionalities.

Reduced Dependency on External Factors: By not requiring additional information
such as lane width or time to lane crossing, the system reduces its dependency on external
factors. This independence enhances its reliability and makes it less susceptible to errors
or inaccuracies arising from variations in road conditions or environmental factors.

Enhanced Safety: Accurate lane detection is crucial for ensuring road safety, particularly
in the context of advanced driver assistance systems (ADAS) and autonomous vehicles.
By reliably identifying lane boundaries, the proposed system contributes to safer driving
experiences and helps prevent accidents caused by lane departure or drifting.


2.4 MODULES DESCRIPTION


 Display the Lane Using OpenCV

 Preprocessing

 Define Color Masks

 Edge Detection and Select Region

 Lane Detection

 Lane Departure Recognition

Display the Lane Using OpenCV Module:

The system allows users to stream live video from their system's camera and
connects the application to access this video feed. Once the application accesses the
video, it initiates the lane detection process, marking detected lanes with bounding boxes.
To cease lane tracking, users can simply press the "esc" key on their keyboard, prompting
the system to halt the streaming of live video. This user-friendly functionality enables
seamless interaction with the lane detection application, empowering users to control the
process effortlessly.

Preprocessing Module:

Image processing plays a crucial role in achieving accurate lane detection results.
The initial step involves converting color images into grayscale, a process essential for
subsequent analysis. Grayscale processing simplifies image representation by eliminating
color information, thus facilitating further processing steps. Following grayscale
conversion, the next crucial step is binarization, wherein the grayscale image is
transformed into a binary image, typically consisting of black and white pixels. Various
algorithms have been proposed for binarization, each aiming to accurately distinguish
between foreground and background pixels. Notably, a novel approach introduces an
adaptive thresholding technique to enhance traditional binarization algorithms. By
dynamically adjusting the threshold based on local image characteristics, this adaptive
method significantly improves binarization performance, particularly for images with
varying lighting conditions or aged appearances. This innovation underscores the
importance of continual advancements in image processing techniques to optimize lane
detection accuracy across diverse scenarios.

Define Color Masks Module:


Color masks serve as a crucial tool in image processing, enabling the selective
extraction of pixels based on their color properties. Specifically, the objective is to isolate
white and yellow pixels while disregarding other colors, effectively segmenting the image
into regions of interest. This segmentation process is instrumental in preparing the image
for subsequent edge detection algorithms. Additionally, prior to edge detection, it is
advantageous to apply a smoothing filter to the image. This smoothing operation helps
mitigate the impact of noise, ensuring that artificial edges are not erroneously detected,
thereby enhancing the accuracy of edge detection results.

Overall, the strategic utilization of color masks and smoothing techniques
contributes to the refinement and optimization of the image processing pipeline for robust
lane detection performance.

Edge Detection and Select Region Module:

One of the primary motivations behind these steps lies in the necessity to compute
line equations from discrete pixels, a crucial aspect of the lane detection process. To
facilitate this computation, edges in the image are initially detected through the
application of an Edge Filter on the grayscale representation of the image. This filter
identifies strong edge pixels, characterized by significant gradients, which surpass a
predefined threshold. Subsequently, a mask, initially represented as a matrix of zeroes
matching the dimensions of the grayscale image, is employed. This mask, applied to the
grayscale image via bitwise AND operation, effectively isolates regions of interest
corresponding to the detected edges. By selectively highlighting these edge pixels, the
algorithm sets the stage for subsequent analysis, enabling the extraction and computation
of line equations essential for accurate lane detection.

Lane Detection Module:

The methodology employed for lane line determination varies based on the type
of lane curvature. For straight lanes, the approach leverages the starting and ending points
to establish the lane lines. Conversely, for curved lanes, the algorithm computes the
curvature direction, particularly focusing on the right base, and employs a least squares
fitting technique to delineate the waveform of the lane. Wei et al. introduced
enhancements to the traditional Hough Transform method by incorporating the concept of
vanishing points and adjacent pixels. This refined approach enables more accurate
lane detection by refining the classic Hough Transform. By representing lines as points
and employing sinusoidal or linear representations (based on the coordinate system used),
the Hough Transform identifies lines passing through these points. Subsequently, the
algorithm deduces that points lying on the same line share common characteristics,
facilitating precise lane identification and localization.


Lane Departure Recognition Module:

When the distance between the vehicle and the left lane line is noticeably smaller
than the gap to the right lane line, the car is veering towards the left; when the right-hand
gap is smaller, it is veering towards the right. The extent of deviation is quantified by the
ratio of the deviation distance to the width of the driveway depicted in the image. When
this ratio exceeds a predetermined threshold, the Lane Departure Warning System
(LDWS) activates, prompting the driver to realign the vehicle within the safe driving zone
delineated by the lanes.
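The deviation ratio and threshold test can be expressed as a short sketch; the function names and the 0.25 threshold are hypothetical, since the report does not state the exact value used:

```python
def departure_ratio(vehicle_x, left_x, right_x):
    """Signed deviation of the vehicle from the lane centre,
    normalised by the lane width (hypothetical formulation)."""
    lane_width = right_x - left_x
    centre = (left_x + right_x) / 2.0
    return (vehicle_x - centre) / lane_width

def should_warn(vehicle_x, left_x, right_x, threshold=0.25):
    """Trigger the LDWS when the deviation ratio exceeds the threshold."""
    return abs(departure_ratio(vehicle_x, left_x, right_x)) > threshold
```

For example, a vehicle at x = 350 between lane lines at x = 300 and x = 700 sits well left of centre, so the warning fires, while a vehicle near x = 520 does not trigger it.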

2.5 Feasibility study:


A feasibility study for the car lane detection project would assess various aspects to
determine whether the project is viable and worth pursuing. Here are key considerations
for the feasibility study:

Technical Feasibility: This aspect evaluates whether the project's technical requirements
can be met using available resources, technology, and expertise. It involves assessing the
feasibility of implementing image processing algorithms, integrating libraries like NumPy
and OpenCV, and achieving real-time performance for lane detection on different
hardware platforms.

Market Feasibility: Investigate the market demand and potential for the proposed car
lane detection system. Analyze the market trends, competition, and the need for advanced
driver assistance systems (ADAS) and autonomous driving technologies. Determine if
there is a market opportunity for the project, considering factors such as safety
regulations, consumer preferences, and industry adoption.

Financial Feasibility: Assess the financial viability of the project by estimating the costs
associated with development, testing, and deployment. Consider expenses such as
hardware/software procurement, personnel costs, research and development expenses, and
potential revenue streams (e.g., product sales, licensing, partnerships). Determine if the
projected returns justify the investment and if funding or financing options are available.

Operational Feasibility: Evaluate the practicality and effectiveness of implementing the
car lane detection system within operational environments. Consider factors such as
system integration with existing vehicles or infrastructure, compatibility with different
vehicle models, ease of installation and maintenance, and user acceptance. Assess any
logistical challenges or operational constraints that may impact the project's
implementation.

Legal and Regulatory Feasibility: Ensure compliance with legal and regulatory
requirements related to automotive safety standards, data privacy, intellectual property
rights, and any other relevant regulations. Identify potential legal hurdles or regulatory
barriers that may affect the project's development, deployment, or commercialization.

Environmental and Social Feasibility: Consider the environmental and social
implications of the project, including its impact on road safety, traffic management, and
environmental sustainability. Assess whether the proposed car lane detection system
aligns with societal values, promotes safety, and contributes positively to the
transportation ecosystem.


CHAPTER-3 REQUIREMENT ANALYSIS


3.1 Software Requirements
For developing the application, the following are the Software Requirements:

 Python

 OpenCV

 NumPy

Operating Systems supported:

 Windows 10 and above

 macOS or Linux

Debugger and Emulator:

 Any browser (particularly Chrome)

3.2 Functional Requirements


 Real-time image processing

3.3 Hardware Requirement


For developing the application, the following are the Hardware Requirements:

 Computer or embedded system capable of running Python

 Webcam or camera


CHAPTER-4 SOFTWARE DESIGN


4.1 System architecture:
The system architecture for the car lane detection project is designed to facilitate
the accurate identification of lane markings from input images or video streams. It
comprises several interconnected modules, starting with the Input Module, which
acquires real-time data from cameras. The Preprocessing Module enhances the input data
through image processing techniques like edge detection and Gaussian blur. The Lane
Detection Module analyzes the preprocessed images to detect potential lane boundaries,
while the Lane Line Extraction Module refines and extracts these boundaries for
improved accuracy. The Visualization Module displays the processed data with overlaid
lane markings, providing real-time visual feedback. Optional components include a User
Interface for user interaction and Integration and Deployment modules for seamless
system integration. Testing and Validation modules ensure the system's functionality and
reliability, while Logging and Monitoring modules track performance and record
important events. Configuration Management modules handle system settings and
parameters, enabling customization and adaptation to various environments. This
architecture offers a structured approach to building a robust, scalable, and maintainable
car lane detection system.


Figure 1: Activity Diagram

4.2 Data flow diagram:


In addition to the intended application of the vision lane detection system, it is
important to evaluate the type of conditions that are expected to be encountered. Road
markings can vary greatly between regions and over nearby stretches of road. Roads can
be marked by well-defined solid lines, segmented lines, circular reflectors, physical
barriers, or even nothing at all. The road surface can be comprised of light or dark
pavements or combinations.

An example of the variety of road conditions is presented. Some roads present a
relatively simple scene, with lane markings consisting of both solid and dashed lines.
Lane position in such a scene can be considered relatively easy to determine because of
the clearly defined markings and uniform road texture. In more complex scenes, however,
where the road surface varies and markings consist of circular reflectors as well as solid
lines, lane detection is not an easy task. Furthermore, shadows obscuring road markings
make the edge detection phase more complex.


Figure 2: Block Diagram

4.3 Methodology
In certain proposed systems like lane detection, the focus is on localizing specific
elements such as road markings on painted surfaces. While some systems achieve
satisfactory results, detecting road lanes remains challenging in adverse conditions like
heavy rain, degraded lane markings, and unfavorable meteorological and lighting
conditions. Factors such as the presence of other vehicles occluding road markings and
shadows from surrounding objects further complicate the process. To address these
challenges, a microcontroller serves as the project's main memory, with Python image
processing techniques employed for lane detection. If the vehicle deviates from its lane,
this information is displayed on an LCD screen. Ultrasonic sensors measure the distance
between the vehicle and obstacles ahead, allowing for automatic speed reduction to
prevent accidents. Data updates, including accident alerts triggered by abnormal values
from MEMS sensors, are transmitted to an IoT webpage. In the event of an accident, the
controller notifies rescue teams or family members with the accident location via IoT
communication. It's important to note that the system assumes the vehicle is operating on
flat, straight roads or roads with gentle curves.


Figure 3: Image Processing

CHAPTER-5 IMPLEMENTATION
The provided implementation showcases a comprehensive approach to car lane detection,
leveraging Python, NumPy, OpenCV, and segmentation models. Initially, the dataset is
prepared by downloading and organizing images along with their corresponding lane
labels. Subsequently, data augmentation techniques are applied to the training images to
enhance dataset variability, crucial for robust model training. Through a custom dataset
class, named `CarlaLanesDataset`, images and labels are loaded, with provisions for
augmentation and preprocessing transformations.
Model creation constitutes a pivotal step, where the architecture is defined using
the segmentation_models_pytorch library. The chosen model architecture is a Feature
Pyramid Network (FPN) with an efficientnet-b0 encoder, adept at capturing hierarchical
features essential for accurate lane detection. Training proceeds with the specification of a
custom loss function, MultiDiceLoss, optimizer, and evaluation metrics. A train epoch
runner facilitates the training process, allowing for dynamic adjustments in learning rates
to optimize model performance.
Following training, the model undergoes rigorous testing on a separate test dataset to assess its
generalization capabilities. Visualization of predicted lane masks alongside ground truth
masks provides insights into the model's accuracy and efficacy. Integration of OpenCV and
NumPy enriches the implementation by providing robust image processing capabilities and
efficient array manipulation.
5.1 CODE

!pip install -q kaggle

# Upload the kaggle.json API token here
from google.colab import files
files.upload()

!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json

!kaggle datasets download -d thomasfermi/lane-detection-for-carla-driving-simulator
!unzip lane-detection-for-carla-driving-simulator.zip

# Car Lane Detection Using NumPy and OpenCV in Python - enhance driver
# assistance systems with a lane detection algorithm for improved vehicle navigation.
!pip install matplotlib

import os

import matplotlib.pyplot as plt
import numpy as np   # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)

DATA_DIR = '/content/'
x_train_dir = os.path.join(DATA_DIR, 'train')
y_train_dir = os.path.join(DATA_DIR, 'train_label')
x_valid_dir = os.path.join(DATA_DIR, 'val')
y_valid_dir = os.path.join(DATA_DIR, 'val_label')
# helper function for data visualization
def visualize(**images):
    """Plot images in one row."""
    n = len(images)
    plt.figure(figsize=(16, 5))
    for i, (name, image) in enumerate(images.items()):
        plt.subplot(1, n, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.title(' '.join(name.split('_')).title())
        plt.imshow(image)
    plt.show()

import re

import cv2
from torch import LongTensor
from torch.utils.data import DataLoader, Dataset

class CarlaLanesDataset(Dataset):
    """Read images, apply augmentation and preprocessing transformations.

    Args:
        images_dir (str): path to images folder
        masks_dir (str): path to segmentation masks folder
        class_values (list): values of classes to extract from segmentation mask
        augmentation (albumentations.Compose): data transformation pipeline
            (e.g. flip, scale, etc.)
        preprocessing (albumentations.Compose): data preprocessing
            (e.g. normalization, shape manipulation, etc.)
    """

    CLASSES = ['background', 'left_marker', 'right_marker']

    def __init__(
        self,
        images_dir,
        masks_dir,
        classes=None,
        augmentation=None,
        preprocessing=None,
    ):
        self.ids = os.listdir(images_dir)
        # random.shuffle(self.ids)
        self.images_fps = [os.path.join(images_dir, image_id) for image_id in self.ids]
        get_label_name = lambda fn: re.sub(".png", "_label.png", fn)
        self.masks_fps = [os.path.join(masks_dir, get_label_name(image_id))
                          for image_id in self.ids]

        # convert str names to class values on masks
        self.class_values = [self.CLASSES.index(cls.lower()) for cls in classes]
        self.augmentation = augmentation
        self.preprocessing = preprocessing

    def __getitem__(self, i):
        # read data
        image = cv2.imread(self.images_fps[i])
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        mask = cv2.imread(self.masks_fps[i], 0)

        # apply augmentations
        if self.augmentation:
            sample = self.augmentation(image=image, mask=mask)
            image, mask = sample['image'], sample['mask']

        # apply preprocessing
        if self.preprocessing:
            sample = self.preprocessing(image=image, mask=mask)
            image, mask = sample['image'], sample['mask']

        return image, LongTensor(mask)

    def __len__(self):
        return len(self.ids)

dataset = CarlaLanesDataset(x_train_dir, y_train_dir,
                            classes=CarlaLanesDataset.CLASSES)
image, mask = dataset[4]  # get a sample
visualize(image=image, label=mask)

import albumentations as albu

def get_training_augmentation():
    train_transform = [
        albu.ShiftScaleRotate(scale_limit=0.1, rotate_limit=0.,
                              shift_limit=0.1, p=1, border_mode=0),
        albu.IAAAdditiveGaussianNoise(p=0.2),
        albu.OneOf(
            [
                albu.CLAHE(p=1),
                albu.RandomBrightness(p=1),
                albu.RandomGamma(p=1),
            ],
            p=0.6,
        ),
        albu.OneOf(
            [
                albu.IAASharpen(p=1),
                albu.Blur(blur_limit=3, p=1),
                albu.MotionBlur(blur_limit=3, p=1),
            ],
            p=0.6,
        ),
        albu.OneOf(
            [
                albu.RandomContrast(p=1),
                albu.HueSaturationValue(p=1),
            ],
            p=0.6,
        ),
    ]
    return albu.Compose(train_transform)

def get_validation_augmentation():
    return None

def to_tensor(x, **kwargs):
    return x.transpose(2, 0, 1).astype('float32')

def get_preprocessing(preprocessing_fn):
    _transform = [
        albu.Lambda(image=preprocessing_fn),
        albu.Lambda(image=to_tensor),
    ]
    return albu.Compose(_transform)

# Visualize augmented images and masks
augmented_dataset = CarlaLanesDataset(
    x_train_dir,
    y_train_dir,
    augmentation=get_training_augmentation(),
    classes=CarlaLanesDataset.CLASSES,
)

# same image with different random transforms
for i in range(3):
    image, mask = augmented_dataset[1]
    visualize(image=image, label=mask)

!pip install segmentation_models_pytorch

import torch
import segmentation_models_pytorch as smp

loss_string = 'multi_dice_loss'
ENCODER = 'efficientnet-b0'
ENCODER_WEIGHTS = 'imagenet'
ACTIVATION = 'softmax2d'
DEVICE = 'cuda'

# create segmentation model with pretrained encoder
model = smp.FPN(
    encoder_name=ENCODER,
    encoder_weights=ENCODER_WEIGHTS,
    classes=len(CarlaLanesDataset.CLASSES),
    activation=ACTIVATION,
    # encoder_depth=4
)

preprocessing_fn = smp.encoders.get_preprocessing_fn(ENCODER, ENCODER_WEIGHTS)

train_dataset = CarlaLanesDataset(
    x_train_dir,
    y_train_dir,
    augmentation=get_training_augmentation(),
    preprocessing=get_preprocessing(preprocessing_fn),
    classes=CarlaLanesDataset.CLASSES,
)

valid_dataset = CarlaLanesDataset(
    x_valid_dir,
    y_valid_dir,
    augmentation=get_validation_augmentation(),
    preprocessing=get_preprocessing(preprocessing_fn),
    classes=CarlaLanesDataset.CLASSES,
)

bs_train = 8
bs_valid = 8
train_loader = DataLoader(train_dataset, batch_size=bs_train, shuffle=True)
valid_loader = DataLoader(valid_dataset, batch_size=bs_valid, shuffle=False)

from segmentation_models_pytorch.utils import base
from segmentation_models_pytorch.utils.losses import DiceLoss
from segmentation_models_pytorch.utils.metrics import Accuracy

label_left = CarlaLanesDataset.CLASSES.index('left_marker')
label_right = CarlaLanesDataset.CLASSES.index('right_marker')

class MultiDiceLoss(base.Loss):

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.BinaryDiceLossLeft = DiceLoss()
        self.BinaryDiceLossRight = DiceLoss()

    def forward(self, y_pr, y_gt):
        # y_pr.shape = (bs, 3, 512, 1024); y_gt.shape = (bs, 512, 1024)
        left_gt = (y_gt == label_left)
        right_gt = (y_gt == label_right)
        loss_left = self.BinaryDiceLossLeft.forward(y_pr[:, label_left, :, :], left_gt)
        loss_right = self.BinaryDiceLossRight.forward(y_pr[:, label_right, :, :], right_gt)
        return (loss_left + loss_right) * 0.5

metrics = []
loss = MultiDiceLoss()
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-4)

# create epoch runners
# (a simple loop iterating over the dataloader's samples)
train_epoch = smp.utils.train.TrainEpoch(
    model,
    loss=loss,
    metrics=metrics,
    optimizer=optimizer,
    device=DEVICE,
    verbose=True,
)

valid_epoch = smp.utils.train.ValidEpoch(
    model,
    loss=loss,
    metrics=metrics,
    device=DEVICE,
    verbose=True,
)

best_loss = 1e10

for i in range(0, 5):
    print('\nEpoch: {}'.format(i))
    train_logs = train_epoch.run(train_loader)
    valid_logs = valid_epoch.run(valid_loader)

    # save the model whenever the validation loss improves
    if best_loss > valid_logs[loss_string]:
        best_loss = valid_logs[loss_string]
        torch.save(model, './best_model_{}.pth'.format(loss_string))
        print('Model saved!')

    if i == 3:
        optimizer.param_groups[0]['lr'] = 1e-5
        print('Decrease decoder learning rate to 1e-5!')

best_model = torch.load('./best_model_multi_dice_loss.pth')

test_best_model = True
if test_best_model:
    # create test dataset
    test_dataset = CarlaLanesDataset(
        x_valid_dir,
        y_valid_dir,
        augmentation=get_validation_augmentation(),
        preprocessing=get_preprocessing(preprocessing_fn),
        classes=CarlaLanesDataset.CLASSES,
    )
    test_dataloader = DataLoader(test_dataset)

    # evaluate model on test set
    test_epoch = smp.utils.train.ValidEpoch(
        model=best_model,
        loss=loss,
        metrics=metrics,
        device=DEVICE,
    )
    logs = test_epoch.run(test_dataloader)

# test dataset without transformations for image visualization
test_dataset_vis = CarlaLanesDataset(
    x_valid_dir,
    y_valid_dir,
    classes=CarlaLanesDataset.CLASSES,
    preprocessing=get_preprocessing(preprocessing_fn),
)

for i in range(3):
    n = np.random.choice(len(test_dataset_vis))
    image_vis = test_dataset_vis[n][0].astype('uint8')
    image, gt_mask = test_dataset_vis[n]

    x_tensor = torch.from_numpy(image).to(DEVICE).unsqueeze(0)
    pr_mask_left = best_model.predict(x_tensor)[0, 1, :, :]
    pr_mask_left = pr_mask_left.cpu().numpy()

    pr_mask_right = best_model.predict(x_tensor)[0, 2, :, :]
    pr_mask_right = pr_mask_right.cpu().numpy()

    visualize(
        ground_truth_mask=gt_mask,
        predicted_mask_left=pr_mask_left,
        predicted_mask_right=pr_mask_right,
    )

5.2 OpenCV
OpenCV, short for Open Source Computer Vision Library, originated at Intel in
1999 under the stewardship of Gary Bradski. The initial release was made available in
2000, with Vadim Pisarevsky joining Bradski to oversee Intel's Russian OpenCV
software team. A significant milestone for OpenCV occurred in 2005 when it was utilized
in Stanley, the autonomous vehicle that emerged victorious in the 2005 DARPA Grand
Challenge.
Following this success, OpenCV's development continued with the support of
Willow Garage, a robotics research lab, with Bradski and Pisarevsky at the helm. Over
time, OpenCV has evolved into a comprehensive platform, boasting a wide range of
algorithms related to computer vision and machine learning. Its versatility is evident in its
compatibility with various programming languages such as C++, Python, and Java,
making it accessible to a diverse community of developers.

Furthermore, OpenCV is platform-agnostic, with support for multiple operating
systems including Windows, Linux, macOS, Android, and iOS. This cross-platform
compatibility ensures that developers can leverage OpenCV's capabilities across a range
of devices and environments. Additionally, OpenCV interfaces with high-speed GPU
operations through technologies like CUDA, enabling accelerated processing for
computationally intensive tasks.
The continued growth and expansion of OpenCV underscore its significance in
the field of computer vision and its relevance in advancing technologies such as
autonomous vehicles, robotics, augmented reality, and more. With active development
and a thriving community, OpenCV remains at the forefront of innovation in the realm of
computer vision.

5.3 NumPy
NumPy is a Python library renowned for its simplicity and versatility in handling
multidimensional arrays. These arrays serve as the fundamental data structure
underpinning Python's extensive data science toolkit. One of the key advantages of
NumPy lies in its ability to deliver exceptional performance. By leveraging optimized
algorithms implemented in C, NumPy achieves remarkable speed, executing
computations in nanoseconds rather than seconds.
Moreover, NumPy facilitates more efficient coding practices by reducing reliance
on explicit loops. This means developers can streamline their code and avoid the
complexities associated with managing loop indices. As a result, the code becomes clearer
and more concise, resembling mathematical equations rather than cumbersome procedural
logic.
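The loop-avoidance point can be illustrated with a small sketch (the array values are arbitrary):

```python
import numpy as np

# Explicit loop: index bookkeeping, executed in the Python interpreter.
pixels = np.array([10, 20, 30, 40], dtype=np.float64)
halved_loop = np.empty_like(pixels)
for i in range(len(pixels)):
    halved_loop[i] = pixels[i] * 0.5

# Vectorised form: one expression, executed by NumPy's optimised C routines.
halved_vec = pixels * 0.5

# Both produce the same result; the vectorised form is shorter and faster.
assert np.array_equal(halved_loop, halved_vec)
```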
Furthermore, NumPy boasts a vibrant community of contributors dedicated to
maintaining its high standards of performance, usability, and reliability. This collective
effort ensures that NumPy remains fast, user-friendly, and free from bugs. Overall,
NumPy empowers developers and data scientists to efficiently manipulate and analyze
large datasets, enabling them to tackle complex computational tasks with ease and
precision.
We tested this system on a laptop powered by an Intel® Core™ (2.00 GHz) CPU
with 4 GB RAM, equipped with an HP TrueVision HD camera. Image sequences of
highway scenes were tested, and the system is able to track and count the number of
vehicles passing on the highway.


Figure 4: Applying Image Processing to the Video

Figure 5: Detecting and Counting the Vehicles

Firstly, the provided video is divided into sequences of images and image
processing is applied to them. Thereafter, the moving objects are detected and counted as
they pass the line set up by the device. Our project has an accuracy of 99%: it detects
multiple vehicles at a time, displays the total number of cars detected on screen, and can
also accept different traffic videos as input.


CHAPTER-6 TESTING
System Test
The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a way to
check the functionality of components, sub-assemblies, assemblies and/or a finished
product. It is the process of exercising software with the intent of ensuring that the
software system meets its requirements and user expectations and does not fail in an
unacceptable manner. There are various types of test, each addressing a specific testing
requirement.
TYPES OF TESTS

Unit Testing
Unit testing involves the design of test cases that validate that the internal program logic
is functioning properly, and that program inputs produce valid outputs. All decision
branches and internal code flow should be validated. It is the testing of individual
software units of the application, and it is done after the completion of an individual unit
before integration. This is structural testing that relies on knowledge of the unit's
construction and is invasive. Unit tests perform basic tests at component level and test a
specific business process, application, and/or system configuration. Unit tests ensure that
each unique path of a business process performs accurately to the documented
specifications and contains clearly defined inputs and expected results.
Integrated Testing
Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components
were individually satisfactory, as shown by successful unit testing, the combination of
components is correct and consistent. Integration testing is specifically aimed at exposing
the problems that arise from the combination of components.
Functional Testing
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user
manuals.

Functional testing is centered on the following items:

Valid Input: identified classes of valid input must be accepted.
Invalid Input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage pertaining to identifying
business process flows, data fields, predefined processes, and successive processes must
be considered for testing. Before functional testing is complete, additional tests are
identified and the effective value of current tests is determined.
System Test
System testing ensures that the entire integrated software system meets requirements. It
tests a configuration to ensure known and predictable results. An example of system
testing is the configuration oriented system integration test. System testing is based on
process descriptions and flows, emphasizing pre-driven process links and integration
points.
White Box Testing
White Box Testing is a testing in which in which the software tester has
knowledge of the inner workings, structure and language of the software, or at least its
purpose. It is purpose. It is used to test areas that cannot be reached from a black box
level.
Black Box Testing
Black Box Testing is testing the software without any knowledge of the inner
workings, structure or language of the module being tested. Black box tests, as most other
kinds of tests, must be written from a definitive source document, such as specification or
requirements document, such as specification or requirements document. It is a testing in
which the software under test is treated, as a black box. you cannot “see” into it. The test
provides inputs and responds to outputs without considering how the software works.
Unit Testing
Unit testing is usually conducted as part of a combined code and unit test phase of
the software lifecycle, although it is not uncommon for coding and unit testing to be
conducted as two distinct phases.

Test strategy and approach


Field testing will be performed manually and functional tests will be written
in detail.


Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.
Features to be tested
• Verify that the entries are of the correct format
• No duplicate entries should be allowed
• All links should take the user to the correct page.
Integration Testing
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by
interface defects.
The task of the integration test is to check that components or software applications (e.g.
components in a software system or, one level up, software applications at the company
level) interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects were
encountered.

Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

Test Results: All the test cases mentioned above passed successfully. No defects were
encountered.

CHAPTER-7 RESULT
TEST CASE 1:


TEST CASE 2:

TEST CASE 3:


CHAPTER-8
CONCLUSION AND FUTURE ENHANCEMENT
8.1 Conclusion
In conclusion, a real-time vision-based lane detection method was proposed,
aiming to enhance road safety by accurately identifying lane boundaries under various
conditions. The methodology involved several key steps, including image segmentation to
isolate road features and removal of shadows for improved clarity. The Canny edge
detection operator was then applied to identify edges corresponding to road lanes or
boundaries.
To address challenges such as occlusion and imperfect road conditions, a
hyperbola-pair road model was introduced, offering robustness in lane detection. The use
of Hough transformation with a restricted search area enabled efficient detection of lanes,
with the intersection of detected lanes forming a crucial reference point known as the
horizon.
Furthermore, a lane scan boundary phase was devised to locate the left and right
vector points representing the road lanes. By utilizing edge images along with left and
right Hough lines, the system effectively allocated lane points, demonstrated through the
representation of two hyperbola lines.
Experimental results indicated that the proposed system met standard
requirements, providing valuable information to drivers for safer navigation. Overall, the
methodology showcases the efficacy of real-time vision-based lane detection in
enhancing road safety and driver awareness, highlighting its potential for practical
implementation in vehicular safety systems.

8.2 Future Enhancement


In future iterations, several enhancements can be pursued to augment the lane
detection system's capabilities. These include delving into advanced lane modeling
techniques such as spline interpolation or Bezier curves to better capture intricate lane
geometries. Integrating deep learning methodologies like convolutional neural networks
(CNNs) could significantly boost accuracy and robustness, especially in challenging
environments. Additionally, semantic segmentation can be employed to differentiate
between various road elements, enabling comprehensive scene understanding. Dynamic
parameter adjustments based on environmental conditions and multi-sensor fusion
techniques can further enhance detection accuracy and adaptability. Real-time
performance optimization, lane change detection, and robust error handling mechanisms
are also critical areas for improvement. Moreover, integrating intuitive human-machine
interaction interfaces and seamlessly integrating with autonomous driving systems will
ensure the system remains relevant and effective in enhancing road safety and driving
experiences.
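As a hint of the curve-modeling direction mentioned above, even a second-order polynomial fit of detected lane points captures gentle curvature that a straight Hough line cannot; this is a toy sketch of the idea, not part of the implemented system:

```python
import numpy as np

def fit_curved_lane(xs, ys, degree=2):
    """Model a lane as x = f(y); a second-order polynomial already
    bends with the road where a straight line would cut across it."""
    return np.polyfit(ys, xs, degree)

# Points on a gently curving lane: x = 0.002*y^2 + 0.1*y + 30.
ys = np.linspace(0.0, 100.0, 25)
xs = 0.002 * ys**2 + 0.1 * ys + 30.0
a, b, c = fit_curved_lane(xs, ys)
print(round(a, 4), round(b, 4), round(c, 4))   # recovers 0.002 0.1 30.0
```

Spline interpolation or Bezier curves generalise this further by allowing the curvature itself to vary along the lane, at the cost of more parameters to estimate robustly.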


BIBLIOGRAPHY

REFERENCES

[1] M. Sajjad et al., “An efficient and scalable simulation model for autonomous
vehicles with economical hardware,” IEEE Trans. Intell. Transp. Syst., vol. 22,
no. 3, pp. 1718–1732, Mar. 2021.

[2] K. Muhammad, S. Khan, V. Palade, I. Mehmood, and V. H. C. de Albuquerque,
“Edge intelligence-assisted smoke detection in foggy surveillance environments,”
IEEE Trans. Ind. Informat., vol. 16, no. 2, pp. 1067–1075, Feb. 2020.

[3] K. Muhammad, T. Hussain, J. Del Ser, V. Palade, and V. H. C. de Albuquerque,
“DeepReS: A deep learning-based video summarization strategy for
resource-constrained industrial surveillance scenarios,” IEEE Trans. Ind.
Informat., vol. 16, no. 9, pp. 5938–5947, Sep. 2020.

[4] Z. M. Chng, J. M. H. Lew, and J. A. Lee, “RONELD: Robust neural network
output enhancement for active lane detection,” 2020, arXiv:2010.09548. [Online].
Available: https://arxiv.org/abs/2010.09548.

[5] Y.-B. Liu, M. Zeng, and Q.-H. Meng, “Heatmap-based vanishing point boosts lane
detection,” 2020, arXiv:2007.15602. [Online]. Available:
https://arxiv.org/abs/2007.15602.

[6] Y. Zhang, Z. Lu, D. Ma, J.-H. Xue, and Q. Liao, “RippleGAN: Lane line detection
with ripple lane line detection network and Wasserstein GAN,” IEEE Trans. Intell.
Transp. Syst., vol. 22, no. 3, pp. 1532–1542, Mar. 2021.

[7] W. Cheng, H. Luo, W. Yang, L. Yu, and W. Li, “Structure-aware network for lane
marker extraction with dynamic vision sensor,” 2020, arXiv:2008.06204. [Online].
Available: https://arxiv.org/abs/2008.06204.

[8] Z. Qin, H. Wang, and X. Li, “Ultrafast structure-aware deep lane detection,” in
Proc. Eur. Conf. Comput. Vis., 2020, pp. 276–291.

[9] L. Tabelini, R. Berriel, T. M. Paixão, C. Badue, A. F. De Souza, and T. Oliveira-
Santos, “Keep your eyes on the lane: Real-time attention-guided lane
detection,” 2020, arXiv:2010.12035. [Online]. Available:
https://arxiv.org/abs/2010.12035.

[10] S. Yoo et al., “End-to-end lane marker detection via row-wise classification,” in
Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW),
Jun. 2020, pp. 4335–4343.