
2023 IEEE World Conference on Applied Intelligence and Computing (AIC)

High Accuracy Lane Line Detection System using Enhanced Yolo V3

Manvendra Singh, Gautam Jagyasi, Hemant Pachar, Saffrine Kingsly
Electronics and Telecommunication Department, Symbiosis Institute of Technology,
Symbiosis International (Deemed University), Pune, India

DOI: 10.1109/AIC57670.2023.10263826

Abstract—The YOLO V3 lane line detection system is a computer vision technique created to precisely recognize and follow the lane markers on a road. It is based on the You Only Look Once (YOLO) object detection algorithm. Real-time lane line detection is accomplished using a deep neural network that has been trained on a sizable collection of road photos. The proposed system uses a variety of improvements to increase accuracy and performance: the images are partitioned into s × 2s grids to enhance vertical detection density, and the detection scale has been optimized to detect small targets such as lane lines. The results show that, even in difficult lighting and weather conditions, the system is able to concurrently detect and track numerous lanes. The average detection accuracy (mAP) of the system is 89.23%. Applications of the YOLO V3 lane line detection technology include traffic management, advanced driver assistance systems, and autonomous driving.

Index Terms—YOLOv3, Lane Line Detection, autonomous driving

I. INTRODUCTION

YOLOv3 [1-2] (You Only Look Once version 3) is an advanced object detection technique in computer vision applications. It provides real-time results by utilizing a deep neural network to find and locate objects in an image or video. YOLOv3 is a popular option for a variety of applications, including autonomous driving, due to its excellent accuracy and speed.

A lane detection method based on improved YOLOv3 is presented in [3], increasing efficiency and real-time performance. Lane detection based on YOLOv3 and the Hough transform is proposed in [4-6]: the YOLOv3 network is trained to detect lane markers, and the Hough transform is used to fit lines to the detected markers. The authors demonstrate that their system achieves high accuracy on the TuSimple lane detection dataset. The papers [7-8] propose a YOLOv3-based lane detection system for autonomous driving. The YOLOv3 and OpenCV [9] network is trained to detect lane markers, and a post-processing algorithm is used to extract the lane boundaries. The authors demonstrate that their system achieves high accuracy on the CULane and Caltech Lanes datasets. A lane detection and departure warning system using YOLOv3 and colour clustering is shown in [10]: the YOLOv3 network is trained to detect lane markers, and a departure warning algorithm warns the driver if the vehicle departs from the lane. The authors demonstrate that their system achieves high accuracy on the KITTI dataset. A YOLOv3-based lane detection method for agricultural robots is proposed in [11]: the YOLOv3 network is trained to detect crop rows as lane markers, and a post-processing algorithm is used to extract the crop rows. The authors demonstrate that their system achieves high accuracy on various crop datasets.

The main objective of a lane detection system is to assist the driver in keeping the vehicle in the designated lane while it is being driven. Computer vision and image processing methods are used by lane detection systems [2], [14-20] to recognize and track the lane lines on the road. The most commonly used lane line detection systems are those based on computer vision algorithms, which typically use cameras or other sensors to detect lane markings on the road. OpenCV [9] is a well-known open-source computer vision package that offers many lane line recognition techniques; it is widely used in both research and industry and is easily adaptable to specific needs. Advanced Driver-Assistance Systems (ADAS) [12] are another existing class of lane line detection systems. These are electronic systems designed to help drivers while driving, increasing convenience and safety. ADAS systems monitor the area around the vehicle and give the driver real-time feedback using a mix of sensors, cameras, and algorithms.
The problem with the existing methods is that lane line detection is not accurate enough in low-light or poor weather conditions and cannot detect temporary lane markings. Lane lines are hard to see in bad weather, which makes it more difficult for drivers to drive safely and impairs their ability to see temporary markers on maintenance routes.

This paper introduces a lane detection and tracking system that operates in real-time and utilizes YOLOv3. By training the YOLOv3 network to recognize lane markers and implementing a Kalman filter, the system is able to track the detected lanes as they move over time. The system is capable of achieving real-time performance even when operating on a low-power embedded system. Specific objectives are:

• Providing the driver immediate feedback on where their car is in relation to the lanes.
• Minimising the chance of lane departure collisions, thereby increasing driving safety.
• Maintaining lane detection accuracy at high speeds.
• Adapting to various road circumstances, such as changing lighting, weather, and road markers.

II. LANE LINE DETECTION SYSTEM

A. Yolo V3

YOLOv3 is essentially an object detection [1] algorithm made to find objects in pictures or videos. Nevertheless, it can be used for lane line detection with a few adjustments. A general overview of using YOLOv3 for lane line detection follows.

1. Collect and annotate the data-set: Assemble a database of photographs of roads with distinct lane markings. Bounding boxes can be used to annotate the lane pixels or to label the lane lines.

2. Train the YOLOv3 model: Train the YOLOv3 model on the annotated data-set, either from pre-trained weights or from scratch. Due to its ability to recognise tiny objects and deal with occlusions, the YOLOv3 architecture is well suited for lane line detection.

3. Post-processing: Once the YOLOv3 model has identified the lane lines, the output can be post-processed to obtain the lane information. This can be accomplished by extracting the lane lines from the bounding boxes, or by identifying the lane pixels using clustering or segmentation.

4. Lane tracking: A variety of algorithms, such as Kalman filters, particle filters, or deep-learning-based approaches, can be used to track the lanes over time; a minimal tracking sketch follows this list.
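To make step 4 concrete, here is a minimal sketch of Kalman-filter lane tracking. It is not the authors' implementation: the two-element state (a lane line's lateral offset and its rate of change), the constant-velocity model, and the noise values are all assumptions chosen for illustration.

```python
import numpy as np

class LaneKalmanTracker:
    """Constant-velocity Kalman filter over one lane line's lateral offset (sketch)."""

    def __init__(self, q=1e-3, r=1e-1):
        self.x = np.zeros(2)           # state: [offset, offset_rate]
        self.P = np.eye(2)             # state covariance
        self.F = np.array([[1., 1.],   # transition: offset += rate each frame
                           [0., 1.]])
        self.H = np.array([[1., 0.]])  # we only measure the offset
        self.Q = q * np.eye(2)         # process noise (assumed value)
        self.R = np.array([[r]])       # measurement noise (assumed value)

    def step(self, measured_offset):
        # Predict one frame ahead
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the offset measured from the current YOLO detection
        y = measured_offset - (self.H @ self.x)[0]    # innovation
        S = self.H @ self.P @ self.H.T + self.R       # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K.flatten() * y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                              # smoothed offset

# Usage: feed per-frame lane offsets derived from YOLO bounding boxes.
tracker = LaneKalmanTracker()
for offset in (2.0, 2.1, 2.3, 2.2):
    smoothed = tracker.step(offset)
```

A full system would run one such filter per detected lane and handle track creation and deletion; the sketch only shows the predict-update cycle.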
B. How the YOLOv3 algorithm improves on existing ADAS algorithms

1. Real-time detection: The real-time lane detection and image processing capabilities of YOLOv3 give drivers more immediate feedback.

2. Accuracy: YOLOv3 uses a deep learning model that can be trained on large data-sets to increase its accuracy over time. This means that YOLOv3 can achieve better lane detection accuracy than conventional ADAS systems.

3. Flexibility: YOLOv3 is simple to modify and adapt to various lane detection settings. For instance, it can be trained to recognise different lane markers, such as solid lines, dashed lines, and double lines.

C. Crashes due to errors in lane line detection

When a vehicle is moving at high speed or through congested traffic, errors in lane line identification can result in crashes or accidents.

Although they are rare, lane departure collisions brought on by lane line detection system mistakes can be quite serious. Around 16 percent of fatal collisions in 2018 were the result of a vehicle departing its lane, according to data released by the National Highway Traffic Safety Administration (NHTSA) in the United States. Table I shows data on various causes of accidents due to driver behaviour. Although not all of these collisions were caused by lane line recognition system mistakes, it seems likely that a sizable portion of them were.

TABLE I
ACCIDENT DATA

Driver Behaviour                      Accidents   Percentage (%)
Improper lane change                      20           13
Wrong direction                           19           12
Sudden steering/braking                   16           10
Following too closely                      9            6
Overtaking on undivided road               4            2
Overtaking from left side                  4            2
Turning suddenly without indication        4            2
Disobeying traffic signal                  4            2
Vehicle speeding related                  52           33
Driver distraction                        39           25
Total accidents analysed                 171          100

III. LANE DETECTION SYSTEM DESIGN USING ENHANCED YOLO V3

Lane line detection, learning and training, and data preparation are the main components of the lane detection system, and they are built around an improved YOLOv3 network structure. Figure 1 shows the implementation procedure. Collecting, labelling, screening, and pre-processing lane line samples falls to the data preparation component. Two techniques are used to gather samples, one of which is using the vehicle's high-definition camera to take pictures while driving. Data labelling is the process of recognising and tagging the various lane lines in the images, such as solid and dotted white and yellow lines. Selecting high-quality photos and formatting them for the YOLOv3 neural network are the goals of the screening and pre-processing step. The algorithm mainly uses nearest-neighbour down-sampling as its image pre-processing technique.

Each pixel coordinate in the final image is iterated over at the beginning of the process. The corresponding coordinates in the original image are then found using the nearest-neighbour interpolation technique specified by formula (1). In this equation, the final image's coordinates (x′, y′) are represented by t(x′, y′), the RGB value at the original image's coordinates (x, y) is denoted by s(x, y), and the rounding operation is denoted by int. The algorithm assigns the relevant pixel value from the original image to the final image coordinate and saves it.

t(x′, y′) = s(int(x + 0.5), int(y + 0.5))    (1)

Fig. 1. Implementation process of lane line detection system
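As an illustration of formula (1), the following NumPy sketch (an assumed rendering of the pre-processing step, not the authors' code) maps each output pixel back to the source image and copies the nearest source pixel:

```python
import numpy as np

def nearest_neighbour_downsample(src: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """t(x', y') = s(int(x + 0.5), int(y + 0.5)), with (x, y) the output
    coordinates scaled back into the source image (formula (1))."""
    in_h, in_w = src.shape[:2]
    dst = np.empty((out_h, out_w) + src.shape[2:], dtype=src.dtype)
    for yp in range(out_h):                  # iterate over every output pixel
        for xp in range(out_w):
            x = xp * in_w / out_w            # corresponding source coordinates
            y = yp * in_h / out_h
            dst[yp, xp] = src[min(int(y + 0.5), in_h - 1),
                              min(int(x + 0.5), in_w - 1)]  # round and copy
    return dst

# e.g. shrink a road photo to the 416 x 416 network input size:
# resized = nearest_neighbour_downsample(image, 416, 416)
```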
The YOLOv3 algorithm has been adapted to detect lane lines in images. The algorithm processes the preprocessed images and extracts lane line features at multiple layers, such as layers 67, 75, 89, and 97. These features are then used to train the Yolo detection model, which runs for a predetermined number of training cycles. Once the training process is completed, the final detection model is formed and the training phase concludes. Having trained the detection model on lane line photographs, the system is capable of detecting lane lines in images and providing accurate results.

IV. YOLO V3 ALGORITHM

A. Algorithm principle

Yolo [13] is a real-time target detection method that has become a widely used technique for object recognition in many applications, such as autonomous driving. Yolo can detect objects at an impressive rate of 45 frames per second and has a significantly higher mAP value than other real-time detection algorithms.

Yolo v1 approaches the detection problem as a regression task, whereby the image is first convolved and then scaled to a uniform size of 448 × 448 pixels. The target is then detected using the fully connected layer.

IoU(Pred, Truth) = area(Box_t ∩ Box_p) / area(Box_t ∪ Box_p)    (2)

To achieve this, the algorithm divides the input image into s × s grids. If the center point of an identified target falls within a grid, that grid is responsible for detecting the target. Each grid has to forecast the probability Pr(class i | object) for each of C condition categories, the confidence level conf for each boundary box, and the predicted values of B boundary boxes. The number of parameters (Num) that must be estimated for each grid is shown in formula (3) [4].

Num = s × s × (B × (4 + 1) + C)    (3)

A bounding box is a rectangular area that encloses an object of interest in an image. To define a bounding box, we need to know its center point (x, y) and its width (W) and height (H); these parameters give the bounding box's position and dimensions. The confidence score (conf) of a bounding box indicates how likely the box is to contain the target object and how precise the prediction is. Formula (4) shows the mathematical definition of the confidence score.

Conf = Pr(Object) · IoU(Pred, Truth)    (4)

Formula (2) uses IoU (Intersection over Union) to measure the similarity between a predicted box and the actual boundary box, defined as the intersection area of the two boxes divided by their union area. When the predicted box perfectly aligns with the actual box, IoU is 1; when they have no overlap, IoU is 0. The presence of a target in a bounding box is indicated by Pr(object), which equals 1 if the target is present and 0 if it is not.

Each grid predicts the likelihood of a single category of targets, and the probability of a certain category given the presence of a target, Pr(class i | object), is calculated on this assumption [2]. Formula (5) provides its definition.

Pr(Class_i | object) = Pr(Class_i) / Pr(object)    (5)

Pr(class i) in formula (5) represents the likelihood that the detection target belongs to a certain class, and Pr(object) is the likelihood that the detection target is part of the bounding box.
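The quantities in formulas (2), (4), and (5) translate directly into code. The sketch below assumes boxes given as (x1, y1, x2, y2) corner coordinates, a representation chosen here for simplicity; multiplying (4) by (5) also shows that the Pr(object) terms cancel, leaving Pr(Class_i) · IoU as the class-specific confidence.

```python
def iou(box_t, box_p):
    """Formula (2): intersection area over union area for (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_t[0], box_p[0]), max(box_t[1], box_p[1])
    ix2, iy2 = min(box_t[2], box_p[2]), min(box_t[3], box_p[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_t = (box_t[2] - box_t[0]) * (box_t[3] - box_t[1])
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    union = area_t + area_p - inter
    return inter / union if union > 0 else 0.0

def class_confidence(pr_class_given_object, conf):
    """Formula (5) times formula (4): Pr(Class_i | object) * Pr(Object) * IoU,
    which reduces to Pr(Class_i) * IoU and is used to rank detections."""
    return pr_class_given_object * conf

# A perfectly aligned box gives IoU = 1; disjoint boxes give IoU = 0.
assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0
assert iou((0, 0, 1, 1), (2, 2, 3, 3)) == 0.0
```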
Yolo v1 was the foundation for the Yolo v2 real-time target detection method, which was released at the end of 2016. During testing it was found that the mAP value of Yolo v2 is 76.8% at 67 frames per second and 78.6% at 40 frames per second.

Fig. 2. Yolo V3's network structure model.

The primary difference between Yolo v1 and Yolo v2 is that a smaller image input is sent to the network: the input image is reduced from 448 × 448 to 416 × 416 pixels. Essential network layer modifications are also made. Yolo v2 employs the darknet-19 network for classification, which consists of 5 max-pooling layers and 19 convolution layers. In testing it was compared with the Darknet network adopted by Yolo v1, and darknet-19's target classification mAP improved by roughly 3.2%.

Yolo V3 (You Only Look Once V3), a single-stage target detection technique, was proposed at the beginning of 2018. The network structure model of Yolo v3 [3] is shown in Figure 2. It increases prediction accuracy while preserving the algorithm's operating speed, and it is currently a popular real-time target detection algorithm.

The main differences between Yolo V3 and Yolo V2 are: the use of the darknet-53 basic network model (106 layers in total, including 53 convolution layers); the use of the independent logistic classifier in place of the prior softmax function; and the use of an FPN-like technique for multi-scale feature prediction. Yolo V3's predecessor could only make predictions on a single feature map. The ability to recognise targets at three scales is Yolo V3's most notable feature: larger targets are detected using 13 × 13 feature maps, medium-sized targets using 26 × 26 feature maps, and tiny targets using 52 × 52 feature maps. As a result, the detection network adapts to the size of the target [20].

The three feature maps in Yolo V3 have detection steps of 32, 16, and 8, respectively. For images with an input resolution of 416 × 416 pixels, the first detection layer is situated at the 82nd layer of the darknet-53 network; because its detection step is 32, its feature map has a resolution of 13 × 13. The 94th layer contains the second detection layer; because its detection step is 16, its feature map has a resolution of 26 × 26. Layer 106 of the darknet-53 network is home to the third detection layer; its feature map has a dimension of 52 × 52, as its detection step is 8.

In Yolo V3 there are B = 3 bounding boxes per grid and C = 80 target categories. Formula (3) estimates the number of predictions for the first (13 × 13), second (26 × 26), and third (52 × 52) detection layers, for a total of 172380 results over the three detection layers. As can be seen, Yolo V3 makes more predictions in order to raise target recognition accuracy, but this slows the algorithm's processing speed.
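For concreteness, formula (3) can be evaluated per detection scale with the B = 3 and C = 80 values given above. This quick sketch applies the formula exactly as printed; note that it does not reproduce the quoted total of 172380, which evidently follows a different counting convention, so treat the outputs as illustrative only.

```python
def num_predictions(s: int, b: int = 3, c: int = 80) -> int:
    """Formula (3): Num = s * s * (B * (4 + 1) + C) for an s x s grid."""
    return s * s * (b * (4 + 1) + c)

# The three Yolo V3 detection scales for a 416 x 416 input:
for s in (13, 26, 52):
    print(f"{s:>2} x {s:<2} grid -> {num_predictions(s)} estimated parameters")
```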
B. Algorithm related parameters

The loss function, an essential component of the Yolo V3 method, is used to quantify the discrepancies between the model's predicted values and the actual values. The algorithm's training seeks to lower the value of the loss function; the model's robustness increases as the value decreases. The mean square error loss function and the cross-entropy loss function are among the frequently used loss functions. Formula (8) defines the Yolo V3 algorithm's loss function L, which is made up of three components: the target classification prediction error Lclass, the bounding box confidence error Lconf, and the bounding box coordinate prediction error Lcoord.

$$L_{coord} = \lambda_{coord} \sum_{i=0}^{s^2} \sum_{j=0}^{B} I_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 + (w_i - \hat{w}_i)^2 + (h_i - \hat{h}_i)^2 \right] \tag{6}$$

$$L_{conf} = \sum_{i=0}^{s^2} \sum_{j=0}^{B} I_{ij}^{obj} (C_i - \hat{C}_i)^2 + \lambda_{noobj} \sum_{i=0}^{s^2} \sum_{j=0}^{B} I_{ij}^{noobj} (C_i - \hat{C}_i)^2 \tag{7}$$

$$L = L_{coord} + L_{conf} + L_{class} \tag{8}$$

The calculation of bounding box coordinates involves predicting the location of an object within an image. This prediction can be inaccurate, and the difference between the actual and predicted values is the coordinate error Lcoord defined in formula (6), which accounts for the s² grids in the image, the number of predicted bounding boxes per grid (B), and a weight coefficient λcoord for the coordinate error. In this formula, i indexes a grid within the image and j indexes a prediction box within that grid; the indicator I_ij^obj takes the value 1 if the object of interest is present in the jth prediction box of the ith grid and 0 if it is not. The values (x_i, y_i, w_i, h_i) are the true center point coordinates, width, and height of the actual bounding box in the ith grid, while the hatted values denote the predicted bounding box's center point coordinates, width, and height. The goal is to minimize the difference between the actual and predicted bounding box coordinates, as represented by Lcoord.

Formula (7) defines the confidence error Lconf, which takes into account the penalty coefficient λnoobj and the real and predicted confidence values. The parameters in this formula have the same meanings as those in formula (6).
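The loss terms can be written compactly with array masks. The sketch below is an illustrative NumPy rendering of formulas (6)-(8) under assumed tensor shapes (it is not the authors' training code, and the classification term Lclass is omitted for brevity); obj_mask plays the role of the indicator I_ij^obj, and the default weights are values commonly used in the Yolo literature rather than ones stated in this paper.

```python
import numpy as np

def yolo_loss(pred, truth, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Formulas (6)-(8) for one image, with Lclass omitted.

    pred, truth: (S*S, B, 5) arrays holding (x, y, w, h, conf) per box
    obj_mask:    (S*S, B) array of 0/1 indicators, i.e. I_ij^obj
    """
    noobj_mask = 1.0 - obj_mask
    # Formula (6): coordinate error over boxes responsible for an object
    coord_err = ((pred[..., :4] - truth[..., :4]) ** 2).sum(axis=-1)
    l_coord = lambda_coord * (obj_mask * coord_err).sum()
    # Formula (7): confidence error, down-weighting boxes with no object
    conf_err = (pred[..., 4] - truth[..., 4]) ** 2
    l_conf = (obj_mask * conf_err).sum() + lambda_noobj * (noobj_mask * conf_err).sum()
    # Formula (8), without the omitted Lclass term
    return l_coord + l_conf

# Toy usage with an S = 13 grid and B = 3 boxes per grid cell:
s2, b = 13 * 13, 3
pred, truth = np.random.rand(s2, b, 5), np.random.rand(s2, b, 5)
obj_mask = np.zeros((s2, b)); obj_mask[0, 0] = 1.0
loss = yolo_loss(pred, truth, obj_mask)
```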

V. EXPERIMENT AND RESULT ANALYSIS

A. Experimental environment

To enhance computing speed and minimize running time, the hardware and software configuration used for testing comprised an Intel i7-10700F CPU, an NVIDIA GTX 1060 graphics card, 16 GB of memory, and the Ubuntu operating system. Python was chosen as the implementation language.

B. Experimental data set

The selection of experimental data sets and the annotation of features are essential for target detection, and their accuracy directly impacts the detection effectiveness and training efficiency. For this purpose, a set of 500 urban road images was chosen, consisting of 400 training examples and 100 test samples. Figure 3 shows the usual representation of lanes on roads.

Fig. 3. Normal illumination

After preprocessing the learning examples using the nearest-neighbour down-sampling method, the lane lines in the images were marked using the expert data set marking program labelme. Figure 4 and Figure 5 are examples of images marked on rainy days and in dark light at night. The learning sample images were manually annotated and saved as JSON files to ensure accuracy and consistency.

Fig. 4. Rainy days
Fig. 5. Dark light at night
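For reference, labelme stores each marked polyline in a "shapes" list with "label" and "points" fields. The sketch below (the file name and label strings are hypothetical) collects the annotated lane lines from one such JSON file:

```python
import json

def load_lane_annotations(path):
    """Group the lane line polylines in one labelme JSON file by label."""
    with open(path, "r", encoding="utf-8") as f:
        ann = json.load(f)
    lanes = {}
    for shape in ann["shapes"]:          # one entry per marked lane line
        label = shape["label"]           # e.g. "solid_white" (hypothetical value)
        points = [(float(x), float(y)) for x, y in shape["points"]]
        lanes.setdefault(label, []).append(points)
    return lanes

# lanes = load_lane_annotations("road_0001.json")   # hypothetical file name
```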
C. Experimental results

The designated lane line images are fed to the YOLOv3 network for training. The images used in the training stage are 416 × 416 pixels in size. Significant index parameters of the method were recorded dynamically during the training phase. Figure 6 shows the lane lines detected by the proposed system.

Fig. 6. Lane lines detected by the proposed system

For the safe operation of autonomous vehicles and driver assistance systems, lane lines on roads and highways must be detected reliably, and a variety of computer vision techniques are used to do so. Common measures used to assess these systems' performance include accuracy, precision, recall, and F1-score.
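These measures reduce to counts of true positives (TP), false positives (FP), and false negatives (FN); a minimal sketch, with the example counts below chosen arbitrarily:

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Precision, recall and F1-score from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# e.g. 90 correctly detected lane lines, 10 spurious, 10 missed:
print(detection_metrics(90, 10, 10))    # -> (0.9, 0.9, 0.9)
```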
Overall, lane line detection experiments reveal that deep-learning-based methods frequently outperform conventional computer vision methods. To enhance the efficiency and precision of these systems, new algorithms and methods are continually being created as part of ongoing research in this area. Table II compares the proposed method with other existing methods. From the table we can see that the proposed method gives a higher mAP and a lower missed detection rate than the other techniques.

TABLE II
COMPARISON OF TEST PERFORMANCE OF DIFFERENT ALGORITHMS

Algorithm           FPS   mAP (%)   Average missed detection rate (%)
Yolo V3               8    84.87     2.4
R-CNN                 3    87.35     4.2
Fast R-CNN            4    86.27     8.1
Enhanced Yolo V3     12    89.23     1.7

A lane line detection system's performance may also be affected by factors such as the quality of the road surface, the weather, and the illumination. It is crucial to test the system under various conditions and to train the model on a broad set of training data to increase the robustness of the system.

VI. CONCLUSION

The research on YOLOv3 lane line detection has shown its potential for various applications, including autonomous driving, agricultural robots, and ADAS. Researchers have proposed various modifications to the YOLOv3 architecture to improve its performance and adapt it to specific tasks. The proposed work presents a detection system that employs the YOLOv3 algorithm to overcome the limitations of the traditional lane detection method. The key improvements are:

• The images are partitioned into s × 2s grids to enhance vertical detection density. This approach is particularly useful since lane line images have irregular vertical and horizontal distribution densities.
• The detection scale has been optimized to detect small targets like lane lines. The system now uses four detection scales: 13 × 13, 26 × 26, 52 × 52, and 104 × 104.

The experiments indicate that the modified algorithm shows good detection performance on a level road, although it may be susceptible to large slopes in the road. Future research will focus on developing a solution to address the lane line detection issue in such scenarios.

REFERENCES

[1] Redmon J, Farhadi A. YOLOv3: An Incremental Improvement[C]. IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[2] Redmon J, Farhadi A. YOLO9000: Better, Faster, Stronger[C]. Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 6517-6525.
[3] Gaoqing Ji, Yunchang Zheng. Lane Line Detection System Based on Improved Yolo V3 Algorithm, 13 October 2021, PREPRINT (Version 1), available at Research Square.
[4] Deng G, Wu Y F. Double Lane Line Edge Detection Method Based on Constraint Conditions Hough Transform[C]. IEEE 2018 17th International Symposium on Distributed Computing and Applications for Business Engineering and Science, Jiangsu, China, 2018: 4.
[5] Shi Linjun, Yu Su. Lane line detection method based on Hough transform under multiple constraints[J]. Computer Measurement and Control, 2018, 26(9): 9-12.
[6] Olson C F. Constrained Hough Transforms for Curve Detection[J]. Computer Vision & Image Understanding, 2016, 73(3): 329-345.
[7] Wang S, Wang Y, Wang L. YOLOv3-based Lane Detection with Lane Width Estimation for Autonomous Driving[C]. Proceedings, 2020.
[8] Mohanapriya S, Natesan P, Indhumathi P, Mohanapriya P, Monisha R. Object and lane detection for autonomous vehicle using YOLO V3 algorithm[C]. AIP Conference Proceedings, 2021, 2387: 140009. doi: 10.1063/5.0068836.
[9] Zhang X, Yang W, Tang X L, Liu J. A fast learning method for accurate and robust lane detection using two-stage feature extraction with YOLO v3[J]. Sensors, 2018, 18(12): 4308; Culjak I, Abram D, Pribanic T, Dzapo H, Cifrek M. A brief introduction to OpenCV[C]. Proceedings of the 35th International Convention MIPRO, Opatija, Croatia, 2012: 1725-1730.
[10] Ma Chao, Xie Mei. A method for lane detection based on color clustering[C]. Third International Conference on Knowledge Discovery and Data Mining, Phuket: IEEE Computer Society, 2010: 200-203.
[11] Qiu Zhengjun, Zhao Nan, Zhou Lei, Wang Mengcen, Yang Liangliang, Fang Hui, He Yong, Liu Yufei. Vision-Based Moving Obstacle Detection and Tracking in Paddy Field Using Improved Yolov3 and Deep SORT[J]. Sensors, 2020, 20: 4082. doi: 10.3390/s20154082.
[12] Abraham H, McAnulty H, Mehler B, Reimer B. Case Study of Today's Automotive Dealerships: Introduction and Delivery of Advanced Driver Assistance Systems[J]. Transportation Research Record: Journal of the Transportation Research Board, 2017, 2660: 7-14.
[13] Redmon J, Divvala S, Girshick R, et al. You Only Look Once: Unified, Real-time Object Detection[C]. IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 779-788.
[14] Sun Ting, Qi Yingchun, Geng Guohua. Moving target detection algorithm based on inter-frame difference and background difference[J]. Journal of Jilin University (Engineering Edition), 2016, 46(04): 1325-1329.
[15] Xu Jiuqiang, Jiang Pingping, Zhu Hongbo, Zuo Wei. Improvement of ViBe algorithm for moving target detection[J]. Journal of Northeastern University (Natural Science Edition), 2015, 36(09): 1227-1231.
[16] Zheng Yuping. Detection and extraction of moving targets in complex background based on embedded platform[D]. Shanghai Normal University, 2020.
[17] Lee S, Kim J, Kweon I S. VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition[C]. International Conference on Computer Vision, Venice, Italy, 2017: 2380-2384.
[18] Shenyang Institute of Automation Researchers Provide New Data on Applied Sciences (The Application of Improved YOLO V3 in Multi-Scale Target Detection)[J]. Science Letter, 2019.
[19] Tian Juan-Xiu, Liu Guo-Cai, Gu Shan-Shan, Ju Zhong-Jian, Liu Jin-Guang, Gu Dong-Dong. Deep learning in medical image analysis and its challenges[J]. Acta Automatica Sinica, 2018, 44(03): 401-424.
[20] Liang Leying. Research on lane detection algorithm based on deep learning[D]. Beijing: Beijing Jiaotong University, 2018: 26-30.

