The document discusses exploring the pre-trained YOLOP model for lane detection in autonomous vehicles. It describes the YOLOP architecture which uses CSPDarknet as the backbone and FPN/SPP modules in the neck. The researchers used the BDD100K dataset to train YOLOP for lane detection, drivable area segmentation, and object detection. They tested YOLOP on various challenging scenarios like curved lanes, intersections, and pedestrian lanes to evaluate its performance for an autonomous vehicle prototype using a Jetson Nano device.


Ricky C. Layderos, Bryan N. Lomerio, and Aaron Francis L. Pacardo

road. The proposed system’s functionality ranges from displaying road line positions to advanced applications such as recognizing lane switching [1].
Wangfeng Cheng et al. [2023] present an instance segmentation-based lane line detection algorithm for complex traffic scenes. Using RepVgg-A0 for encoding and a multi-size asymmetric shuffling convolution model for feature extraction, the algorithm employs an adaptive upsampling decoder. Successfully deployed on a Jetson Nano, it achieves 96.7 percent accuracy at a real-time speed of 77.5 FPS. The study reviews traditional and deep learning-based methods, highlighting the proposed algorithm’s effectiveness in enhancing autonomous driving safety [2].

Figure 1: YOLOP Architecture
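Throughput figures such as the 77.5 FPS reported above are typically obtained by timing repeated forward passes and dividing iteration count by elapsed time. A minimal sketch of that measurement, using a stand-in function since the cited paper's deployment code is not shown here:

```python
import time

def detect_lanes(frame):
    """Stand-in for a model forward pass (the real model is not shown here)."""
    time.sleep(0.005)  # simulate ~5 ms of inference work per frame
    return frame

frame = [[0] * 640 for _ in range(360)]  # dummy 640x360 frame

n_iters = 20
start = time.perf_counter()
for _ in range(n_iters):
    detect_lanes(frame)
elapsed = time.perf_counter() - start

fps = n_iters / elapsed  # frames processed per second of wall-clock time
print(f"average throughput: {fps:.1f} FPS")
```

In practice a few warm-up iterations are run first, since the initial passes on an accelerator like the Jetson Nano include one-time setup costs.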

2 METHODOLOGY
2.1 Research Design
The researchers used exploratory research for this study. It was used to determine the suitability of YOLOP for lane detection and the possible optimization of drivable area segmentation by using a custom dataset for training, and finally to test an autonomous vehicle prototype using a Jetson Nano device. Exploratory research is often conducted when there is little existing knowledge on a topic or when a new angle or perspective is being considered. In this case, the researchers were trying to gain an understanding of how well YOLOP performed in this specific context and whether it could be effectively utilized on the target device, the Jetson Nano. By conducting exploratory research, they aimed to gather preliminary data and insights that could help guide decision-making in future research.

2.2 Theorems, Algorithm, and Mathematical Framework
This part presents the theorems, algorithms, and mathematical framework needed in this study. The following will be utilized in this research, entitled “Exploring Pre-trained YOLOP for Lane Detection in Autonomous Vehicle.”

YOLOP Architecture. YOLOP is a real-time multi-task detection architecture with a unified encoder (backbone and neck networks), as illustrated in Figure 1. It utilizes YOLOv4’s CSPDarknet for the backbone, addressing gradient duplication issues. The neck network integrates features through spatial pyramid pooling (SPP) and feature pyramid network (FPN) modules, ensuring information from various scales and semantic levels. YOLOP employs concatenation to merge features effectively. The architecture includes three decoders for object detection, drivable area segmentation, and lane detection, aiming to enhance real-time multi-task detection performance.

Dataset. The research utilized the BDD100K dataset for autonomous vehicle applications, comprising 100,000 driving images from over 50,000 rides. Each image has a resolution of 1280x640 and includes diverse scenes such as city streets, residential areas, and highways, along with varying weather conditions. The dataset incorporates annotations for lane detection, object detection, semantic segmentation, instance segmentation, panoptic segmentation, multi-object tracking, and segmentation tracking. It consists of 70,000 training images, 20,000 testing images, and 10,000 validation images, providing comprehensive data for scene tagging, lane marking, drivable area, and object detection. The utilization of this dataset ensures significant time and resource savings while maintaining data quality and diversity, crucial for prototyping in object detection, lane, and drivable area segmentation development.

Figure 2: Dataset used for Training

Test Cases. The diverse set of test cases encompasses various challenging road driving scenarios, ensuring a comprehensive evaluation of the prototype’s performance. The absence of a midline lane, the presence of a single white lane, and double solid yellow lanes simulate real-world situations, testing the system’s ability to navigate different road markings accurately. Additionally, scenarios such as curved left and right lanes, blurry line lanes, intersections, and pedestrian lanes further challenge the prototype’s adaptability and responsiveness, providing a robust assessment of its capabilities in handling complex driving environments. Here are the sample test cases:

Together, these instruments provided the necessary tools for developing an autonomous vehicle prototype capable of real-time lane keeping and testing of a new deep learning algorithm.
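The shared-encoder, three-decoder layout described above can be sketched in PyTorch. The layer sizes below are illustrative placeholders, not YOLOP's actual CSPDarknet/SPP/FPN configuration; the point is the structure of one encoder feeding three task heads:

```python
import torch
import torch.nn as nn

class YOLOPStyleNet(nn.Module):
    """Illustrative sketch of a YOLOP-style shared encoder with three decoders.
    Channel counts and depths are placeholders, not the published configuration."""

    def __init__(self):
        super().__init__()
        # Shared encoder: stands in for the CSPDarknet backbone + SPP/FPN neck.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Three task-specific decoders, mirroring the YOLOP design.
        self.detect_head = nn.Conv2d(64, 18, 1)              # object detection
        self.drivable_head = nn.Sequential(                  # drivable-area segmentation
            nn.Conv2d(64, 2, 1), nn.Upsample(scale_factor=4))
        self.lane_head = nn.Sequential(                      # lane-line segmentation
            nn.Conv2d(64, 2, 1), nn.Upsample(scale_factor=4))

    def forward(self, x):
        feats = self.encoder(x)  # features shared by all three tasks
        return self.detect_head(feats), self.drivable_head(feats), self.lane_head(feats)

net = YOLOPStyleNet().eval()
with torch.no_grad():
    det, drivable, lane = net(torch.zeros(1, 3, 64, 64))
print(det.shape, drivable.shape, lane.shape)
```

Because the encoder runs once per frame and only the lightweight heads are task-specific, this layout is what makes multi-task inference feasible on a constrained device like the Jetson Nano.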
