Advanced Driver Assistance System (ADAS) on FPGA
Original Article
Received: 30 April 2023 Revised: 07 June 2023 Accepted: 22 June 2023 Published: 06 July 2023
Abstract - Advanced Driver-Assistance Systems (ADAS) can help drivers in the driving process and increase driving safety by automatically detecting objects, performing basic classification, implementing safeguards, etc. ADAS integrates multiple subsystems, including object detection, scene segmentation, lane detection, and so on. In this paper, we establish a framework for the computer vision features of ADAS, i.e., lane detection, object detection, object distance estimation and traffic sign recognition. Established algorithms such as Canny edge detection are used for lane detection, and a CNN-based approach is used for object detection. The deployed system aims to achieve a high frame rate, targeting 55 Frames Per Second (FPS) for one channel. The performance of the FPGA is optimized by software and hardware co-design. The system is realized on the DE10-Nano board with a Cyclone V FPGA and a dual-core ARM Cortex-A9, which meets real-time processing requirements. The increasing amount of automotive electronic hardware and software is driving significant changes in the modern automobile design process to address the convergence of conflicting goals - increased reliability, reduced costs, and shorter development cycles. The prospect of reducing car accident occurrences makes ADAS even more critical. This paper proposes an efficient solution for ADAS on FPGA.
The former is often considered superior to the latter for several reasons, such as accurate edge localization and automatic thresholding. This paper uses the Canny edge detection algorithm.

3. Materials and Methods
One way to implement ADAS is to use an FPGA as the primary hardware platform. An FPGA is a type of programmable hardware that can be configured to perform a wide range of digital functions. It is often used in applications where flexibility and fast response times are important.

The processing unit in an ADAS system is responsible for analysing the information from the sensors and making decisions based on it. Sometimes, this may involve running complex algorithms or machine learning models to interpret the data and determine the best course of action. The processing unit may be implemented using a microcontroller, a processor, or an FPGA, depending on the application's specific requirements. The FPGA used in this project is the Intel DE10-Nano, a compact, low-power development board featuring an Intel Cyclone V FPGA. The DE10-Nano is equipped with a dual-core Arm Cortex-A9 processor, which can be used to run software applications.

Lane detection is a technology used in the ADAS of self-driving cars to identify the location and boundaries of lanes on the road. This paper uses the Canny edge detection [4] algorithm for detecting lanes, as described in Figure 2.

Fig. 1 Block Diagram: Camera Input → HPS → Autonomous Car Simulator/Prototype → Output (VNC Viewer)

The Hough transform [5] is an image-processing technique used in edge detection to improve the accuracy and robustness of lane detection. It is also used in a variety of other applications, such as detecting circles, ellipses, and other geometric shapes in images.

The breakthrough of neural networks is that object detection no longer has to be a hand-crafted coding exercise. Neural networks allow features to be learned automatically from training examples [6].

The pipeline for object detection and distance estimation is shown in Figure 3. This research uses the YOLOv5 [7] architecture for object detection. Object detection, a use case for which YOLOv5 is designed, involves creating features from input images. These are passed through a prediction system to draw boxes, called bounding boxes, around objects and predict their classes.
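The prediction-and-filtering step can be sketched in plain Python. The detection tuple format and the 0.5 confidence threshold below are illustrative assumptions, not the paper's YOLOv5 deployment settings:

```python
# Sketch of the post-prediction step: keep confident detections and report
# their bounding boxes and classes. The (x1, y1, x2, y2, score, label)
# format and the threshold are hypothetical, for illustration only.

def filter_detections(predictions, score_threshold=0.5):
    """Keep predictions whose confidence score meets the threshold."""
    return [p for p in predictions if p[4] >= score_threshold]

raw = [
    (100, 50, 220, 180, 0.92, "car"),
    (300, 60, 340, 150, 0.31, "person"),   # low confidence: dropped
    (400, 40, 440, 160, 0.88, "person"),
]
kept = filter_detections(raw)
print([d[5] for d in kept])   # → ['car', 'person']
```

In a full pipeline, the kept detections would then be passed to the box-drawing and distance-estimation stages of Figure 3.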
Fig. 2 Lane detection pipeline: Region Masking → Hough Transform → Draw Lines
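The Region Masking → Hough Transform flow of Figure 2 can be mirrored in a short Python/numpy sketch. The paper's pre-processing runs in Verilog on the FPGA; this sketch only illustrates the algorithm, and the image size, mask row and synthetic edge map are made-up values:

```python
import numpy as np

def region_mask(edges, top_row):
    """Zero out everything above top_row, keeping only the road region."""
    masked = np.zeros_like(edges)
    masked[top_row:, :] = edges[top_row:, :]
    return masked

def hough_peak(edges, n_theta=180):
    """Vote edge pixels into a (rho, theta) accumulator; return the peak."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))        # 0..179 degrees
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, np.rad2deg(thetas[t])

# Synthetic 16x16 edge map with one straight edge along row y = 8.
edges = np.zeros((16, 16), dtype=np.uint8)
edges[8, :] = 1
rho, theta_deg = hough_peak(region_mask(edges, top_row=4))
# The accumulator peak recovers the line: rho = 8 at theta near 90 degrees.
```

The masked-out upper region keeps sky and background edges from polluting the accumulator, which is the purpose of the region masking stage in Figure 2.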
Mayank Kumar et al./ IJVSP, 10(2), 22-26, 2023
Fig. 3 Object detection and distance estimation pipeline: distance predictions → Draw Bounding Boxes → Distance Estimation → Visualize Final Results on image
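Single-camera distance estimation (as in [9]) is commonly based on the pinhole model: an object of known real-world height H that appears h pixels tall under a focal length of f pixels is roughly d = f·H/h away. A minimal sketch; the focal length and object height below are illustrative, not the paper's calibration:

```python
def estimate_distance(focal_px, real_height_m, bbox_height_px):
    """Pinhole-model distance: d = f * H / h (metres)."""
    return focal_px * real_height_m / bbox_height_px

# A car ~1.5 m tall whose bounding box spans 100 px, with a 700 px focal
# length, is estimated to be about 10.5 m away.
d = estimate_distance(700, 1.5, 100)
print(d)   # → 10.5
```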
4.2. Object Detection, Object Distance Estimation, Traffic Light Detection
Figure 7 describes the features of object detection, object distance estimation and traffic light detection. The bounding boxes have different colours for different objects - orange for cars, red for people, etc. At the header of each bounding box are the identified object's label and a number between 0 and 1 called objectness. Objectness indicates the confidence with which the algorithm has identified the object. Further, a green bounding box enclosing a traffic signal signaling 'green for go' is also seen. In the case of traffic light detection, the bounding box labels the signal identified.

The improvement in the performance of the lane detection system due to pre-processing on the FPGA can be seen in Table 1.

Table 1. Comparison Results
Factor                      FPGA        Processor
Frames per Second           81.3        27.01
Time for one image frame    36.99 ms    77.84 ms

The pre-processing is written in the Verilog hardware description language. The function completes in 12.3 ms, which corresponds to a performance of 81.3 fps. This result is 3.01 times better than the computer solution (2.5 GHz Intel Core i5) and 18.4 times better than the ARM Cortex-A9 solution (667 MHz).
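The throughput and speedup figures follow directly from the reported per-frame time; a quick check of the arithmetic:

```python
# Frame time to FPS, using the reported 12.3 ms FPGA frame time and the
# 27.01 fps Core i5 baseline from Table 1.
fpga_fps = 1.0 / 0.0123          # frames per second on the FPGA
speedup = fpga_fps / 27.01       # FPGA vs. 2.5 GHz Core i5
print(round(fpga_fps, 1), round(speedup, 2))   # → 81.3 3.01
```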
References
[1] The Synopsys website, 2023. [Online]. Available: https://www.synopsys.com/designware-ip/technical-bulletin/deep-learning-dwtb-
q217.html
[2] Juan Borrego-Carazo et al., “Resource-Constrained Machine Learning for ADAS: A Systematic Review,” IEEE Access, vol. 8, pp.
40573-40598, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[3] Hamish Simmonds et al., “Autonomous Vehicle Development Using FPGA for Image Processing,” 2019 International Conference on
Field-Programmable Technology, 2019. [CrossRef] [Google Scholar] [Publisher Link]
[4] Ghassan Mahmoud Husien Amer, and Ahmed Mohamed Abushaala, “Edge Detection Methods,” 2015 2nd World Symposium on Web
Applications and Networking, 2015. [CrossRef] [Google Scholar] [Publisher Link]
[5] A Complete Guide On Hough Transform, 2022. [Online]. Available: https://www.analyticsvidhya.com/blog/2022/06/a-complete-guide-
on-hough-transform/
[6] YOLO: Algorithm for Object Detection Explained [+Examples], 2023. [Online]. Available: https://www.v7labs.com/blog/yolo-object-
detection
[7] M. Karthi et al., “Evolution of YOLO-V5 Algorithm for Object Detection: Automated Detection of Library Books and Performace
validation of Dataset,” 2021 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical
Systems, 2021. [CrossRef] [Google Scholar] [Publisher Link]
[8] M.R. Ezilarasan, and J. Brittopari, “An Efficient FPGA-Based Adaptive Filter for ICA Implementation in Adaptive Noise Cancellation,”
SSRG International Journal of Electrical and Electronics Engineering, vol. 10, no. 1, pp. 117-127, 2023. [CrossRef] [Publisher Link]
[9] Shi-Huang Chen, and Ruie-Shen Chen, “Vision-Based Distance Estimation for Multiple Vehicles Using Single Optical Camera,” 2011
Second International Conference on Innovations in Bio-inspired Computing and Applications, 2011. [CrossRef] [Google Scholar]
[Publisher Link]
[10] Abdulhakam AM. Assidiq et al., “Real time Lane Detection for Autonomous Vehicles,” 2008 International Conference on Computer
and Communication Engineering, 2008. [CrossRef] [Google Scholar] [Publisher Link]