
International Journal of Engineering Trends and Technology, Volume 71, Issue 8, 225-242, August 2023
ISSN: 2231-5381 / https://doi.org/10.14445/22315381/IJETT-V71I8P220 © 2023 Seventh Sense Research Group®

Review Article

Artificial Intelligence in Self-Driving: Study of Advanced Current Applications

Guirrou Hamza 1, Mohamed Zeriab Es-sadek 2, Youssef Taher 3
1,2 ENSAM, Mohammed V University, Rabat, Morocco.
3 Center of Guidance and Planning of Education, Rabat, Morocco.
1 Corresponding Author: [email protected]

Received: 17 April 2023 Revised: 12 June 2023 Accepted: 05 August 2023 Published: 15 August 2023

Abstract - In this paper, we investigate the advances of Artificial Intelligence (AI) in the field of self-driving technology. We provide an overview of the key processes involved in autonomous navigation, including perception, mapping, localization, path planning, and motion control. We highlight the crucial role of AI in the development of self-driving technologies, in particular Machine Learning (ML), Deep Learning Networks (DLN), and Computer Vision Techniques (CVT). Special attention is also given to various existing navigation approaches and to the role of ADAS in assisting the driver in various tasks. We discuss how AI is used to solve the various environmental challenges faced by automotive sensors, and the contribution of V2X communication and the SLAM system to safe and efficient navigation. Finally, we conclude with potential future research directions and opportunities for AI in the self-driving industry. Overall, this study emphasizes the growing importance of AI in the development of self-driving technology and its potential to revolutionize the transportation industry.

Keywords - Artificial Intelligence, Self-driving, Navigation, Perception, Path planning, Vehicle control, ADAS, V2X, SLAM, Sensor fusion.

This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

1. Introduction
Autonomous Vehicles (AVs) are vehicles that can navigate and drive without human intervention. They use a combination of sensors and advanced AI algorithms to detect their environment and make navigation decisions. AVs can potentially increase safety, improve efficiency, and reduce the need for human drivers [1].

AI has helped transform various aspects of the transportation industry, including perception, localization, mapping, path planning, and motion control. Different AI models, such as ML and DLN, have been used to improve these automatic processes and make them more efficient and accurate [5], [7], [11]. AI has been applied to perception to enhance perceptual and data analytic skills. Processing and analyzing data collected from various relevant sources (cameras, radar, and LiDAR devices) using real-time ML algorithms allows AVs to make judgments based on their environment. As a result, safety has improved, and the number of incidents brought on by human mistakes has decreased. Localization and mapping have also been improved by AI, which is able to pinpoint the vehicle's location and create maps of its surroundings. This information is used to assist with path planning and navigation. AI has revolutionized cognition and path planning, being capable of processing large volumes of data in real-time and making informed decisions about the best way forward. DLN algorithms have been used to develop cognitive systems that can respond to changing conditions and make decisions based on their understanding of the environment. AI has also been utilized to enhance motion control, which results in higher safety and fewer human errors due to its ability to control vehicle movement precisely. This has been particularly important in developing advanced driver-assistance (ADAS) and vehicle-to-everything (V2X) communication systems. AI has been applied to sensor fusion to increase the precision of data from various sensors so that cars can make better judgments about their surroundings. This has been crucial in developing ADAS systems since they depend on accurate and complete data from various sensors for proper operation.

The history of AVs can be traced back to the early 1900s, when pioneering engineers and inventors first experimented with self-driving vehicles. However, it was not until the second half of the 20th century that technology advanced to the point where AVs could be used in real-world applications. In the 1980s, researchers at Carnegie Mellon University developed the first AV, a modified Chevrolet van called "Navlab". This vehicle used basic CVTs and sensor technology to navigate roads and avoid obstacles [2]. In the following years, several other universities and research institutions developed similar prototypes that laid the foundation for the development of AVs. The early 2000s saw significant advances in the field of AVs with the creation of the DARPA Grand Challenge, a competition to promote the development of self-driving vehicles.

In 2005, a team from Stanford University won the competition, proving the feasibility of AVs for the first time [3]. In the years that followed, the automotive industry began to invest heavily in AV technology, with major companies such as Tesla, Google, and Uber leading the way. The integration of AI, particularly ML and CVT, has been instrumental in enabling vehicles to make real-time decisions and navigate complex environments. Although AV technology is still in development, it has the potential to completely transform mobility and transportation.

The use of AVs is anticipated to increase dramatically in the upcoming years as businesses attempt to overcome the technical and regulatory barriers preventing their commercialization. In general, the development of AVs has been a long and continuous process, with significant technological and governmental advancements. The incorporation of AI has significantly accelerated the development of these vehicles, and AI is expected to play a significant role in AVs in the future.

The Society of Automotive Engineers (SAE) defines five levels of driving automation. According to the standard [4], level zero represents no automation. Basic driver-assistance systems such as adaptive cruise control, antilock brakes, and stability control appear at level one. Level two is partial automation, where advanced assistance systems such as emergency braking or collision avoidance systems are integrated. With the accumulated knowledge of vehicle control and industry experience, level two automation is a feasible technology. Beyond this stage, the real challenge begins. The third level is conditional automation, where the driver can focus on tasks other than driving during normal operation.

However, the driver must respond quickly to vehicle warnings in an emergency and be ready to take control. In addition, level 3 autonomous driving (AD) systems can only be used within a limited operational design domain, such as on highways. Levels 4 and 5 do not require human attention at all. However, level 4 can only be operated in a limited area where dedicated infrastructure or detailed maps are available. When the vehicle leaves these areas, it must end its journey by stopping automatically. The fully automated level five system can be used on any road network and in any weather. Currently, no production vehicles achieve levels 4 or 5 of driving automation. Table 1 shows the human intervention in driving and the vehicle features at each level.

The application of AI in AD has been a growing area of research and development in recent years. Several studies have looked into how AI can be used in AVs for perception, control, and decision-making. We examine current research in the field and recent publications as follows.

Table 1. SAE Levels of Driving Automation [4]

SAE L0 (No automation): You are driving whenever these driver-assistance features are activated, even if your feet are off the pedals and you are not steering; you must constantly monitor the assistance systems and steer, brake, or accelerate as necessary to ensure safety. These features are restricted to issuing alerts and short-term assistance. Feature examples: automatic emergency braking, blind-spot warning, lane-departure warning.

SAE L1 (Driver assistance): You are driving and must constantly supervise, as at L0. These driving assistance features assist the driver with acceleration, braking, or steering. Feature examples: lane centering or adaptive cruise control.

SAE L2 (Partial automation): You are driving and must constantly supervise, as at L0. These features assist the driver with steering, braking, and acceleration at the same time. Feature examples: lane centering and adaptive cruise control operating together.

SAE L3 (Conditional automation): You are not driving when these AD features are activated, even if you are in "the driver's seat", but you must drive if the feature demands it. These AD features have limited driving capabilities and will not work until all necessary requirements are met. Feature example: traffic-jam chauffeur.

SAE L4 (High automation): There is no need for you to take over driving thanks to these automatic driving capabilities. These AD features have limited driving capabilities and will not work until all necessary requirements are met. Feature example: local driverless taxi; pedals and steering wheel may or may not be installed.

SAE L5 (Full automation): There is no need for you to take over driving. This feature enables the car to drive itself in any situation. Feature example: the same as Level 4, but with the added ability to drive anywhere and in any circumstances.


Ben Elallid et al. concentrated on techniques based on DLN and Reinforcement Learning (RL) for scene understanding, motion planning, decision-making, vehicle control, social behavior, and communication. They also outlined the outstanding issues and suggested potential future study trajectories [5]. A general survey of current advancements in AV software systems was presented by Pendleton et al. They highlighted recent advancements in each field and gave an outline of the fundamental elements of AV software [6]. An overview of the state of the art of DLN technologies for AD was presented by Grigorescu et al. They started by introducing recurrent neural networks (NN), the deep RL paradigm, and AI-based architectures for AD. They studied both End2End systems, which translate sensory data directly into steering commands, and the modular perception, planning, and action pipeline, each module of which is developed using DLN techniques. They also examined current issues with AI architectures for AD development, such as their security, training data sources, and computing hardware [7]. Ma et al. investigated how AI may support three key AV functions: vision, localization and mapping, and decision-making. In order to comprehend the potential applications of AI as well as the difficulties and problems involved in its implementation, they provided insights into potential opportunities for using AI in conjunction with other emerging technologies:
• High-resolution maps, Big Data, and high-performance computing.
• Augmented reality/virtual reality as an advanced simulation platform.
• 5G communications for networked AVs.

An overview of the most recent planning and control algorithms, with a focus on the urban environment, has been provided by Paden et al. They examined a variety of strategies and their efficacy, the models of vehicle motion used, the presumptions made about the structure of the environment, and the computational requirements [8]. The challenges of vision, localization, path planning, and motion control were examined by Naz et al. in an overview of numerous contemporary AI algorithms employed by AVs [9]. D. Sagar and T. S. Nanjundeswaraswamy presented a comprehensive overview of an artificially intelligent vehicle, including its various components, several approaches such as NN and fuzzy logic (FL), and their benefits and drawbacks. They highlighted how various sensors and map generation make an AV more robust. Finally, they described the incorporation of ML and fuzzy neural control into vehicle systems [10].

Several ML and DLN algorithms utilized in AD architectures for tasks like motion planning, vehicle localisation, pedestrian detection, traffic sign recognition, road marking identification, automated parking, vehicle cybersecurity, and fault diagnostics were described by R. Bachute et al. The technical features of the ML and DLN algorithms utilized in AD systems were also investigated. These algorithms were examined using parameters such as the mean union overlap rate, average precision, missed detection rate, false positive rate per image, and average number of erroneous image detections [11].

To the best of our knowledge, no review article comprehensively presents the application of AI to self-driving cars, including:
• Perception, data analysis, and addressing environmental issues.
• Navigation and path planning approaches and algorithms.
• The effect of V2X communication and the SLAM system on safe and efficient navigation.
• The role of ADAS and vehicle motion control in assisting drivers with various tasks and controlling the vehicle's movement.

This motivated us to fill this gap in the literature and present the summary of our work.

We begin by examining the perception process constraints, sensor combination and fusion, and data collection and processing. Later, we discuss the advantages of V2X communication for AVs in terms of better traffic control and road optimization. Following that, we study several navigation approaches and road simultaneous localization and mapping. Finally, we look into AV motion control and advanced vehicle driving assistance. Figure 1 summarizes our research process.

Fig. 1 Research process: (1) investigation of the perception process, data analysis and sensor fusion; (2) V2X communication, advantages and applications for AVs; (3) study of navigation approaches and road simultaneous localization and mapping; (4) examination of AV motion control and advanced vehicle driving assistance.


2. Perception and Data Fusion
AI has revolutionized perception and data fusion in several areas. In perception, AI techniques such as computer vision enable the interpretation and understanding of visual information. By enabling data analysis and prediction, AI-powered ML algorithms further improve perception systems. In order to provide a more thorough and accurate picture of a particular situation, data fusion employs AI to combine and evaluate data from several sources, including sensors, databases, and other data streams. Improvements in speed, accuracy, and dependability in various applications, such as transportation, military, and environmental monitoring, are among the main advantages of AI-powered perception and data fusion. Figure 2 shows how AVs perceive their environment using five sensing modalities (camera, LiDAR, long-range radar, medium- and short-range radar, and ultrasound) to obtain full coverage of the surrounding area.

2.1. Perception and Data Processing
AD sensors face various environmental conditions that can affect their performance, such as weather, lighting, and road conditions (Zhang et al. [12]; Vergas et al. [13]). AI plays a critical role in addressing these challenges and improving the accuracy and reliability of AD systems. The following are some environmental concerns and how AI might help to resolve them:

2.1.1. Weather Conditions
Cameras, LiDAR, and radar sensors can all be affected by unfavorable weather conditions like snow, rain, and fog. Under these circumstances, object detection and classification accuracy can be improved by AI algorithms using methods like semantic segmentation and deep learning. To test how fog and snow affect the performance of different LiDARs, Jokela et al. conducted both indoor and outdoor tests [14]. They found that the denser the fog and the farther away the target, the more performance degrades; they also found that a darker target is more challenging for the sensors to detect than a brighter one. By converting map images to edge profiles to depict road markings as a series of LiDAR signal reflection peaks, Aldibaja et al. were able to identify the general causes of lateral drift in localization. While moist material from snow-rain weather leaves a path of low-reflectivity lines on the road, accumulated snow on the roadside produces abrupt intensity peaks with an erratic distribution for LiDARs [15].

Sheeny et al. explored sensory data perception for autonomous and assisted driving using a large-scale radar dataset in bad weather. They provided instructions for setting up, calibrating, and labeling sensors, as well as examples of data gathered in various road and weather conditions [16]. DLN-based self-supervised ego-motion estimation was proposed by Almalioglu et al. as a reliable and complementary method for localization in inclement weather. The recommended approach is a geometry-aware method that combines the strong representational capabilities of visual sensors with the weather-independent data provided by radars, using an attention-based learning mechanism [17].

Fig. 2 How AVs perceive the environment [58]


2.1.2. Lighting
The effectiveness of sensors can be impacted by various lighting conditions, including shadows and reflections. These impacts can be taken into account by AI algorithms using methods like adaptive thresholding and histogram equalization. A DLN-based image-enhancement method for AD at night was introduced by Li et al. They created a convolutional NN-based light enhancement network. Before using it to produce image pairs for model development, they first developed a generation pipeline to transform images taken in bright light into images taken in low light. Based on their findings, they concluded that the light enhancement network, which offers more detail and less noise with less computational effort, can better enhance low-light photos [18]. Rashed et al. suggested using motion data from both camera and LiDAR sensors to create a reliable, real-time convolutional NN architecture for moving object detection in low light. They created a dataset called "Dark-KITTI" to show the effects of their technique on the KITTI dataset by simulating a low-light situation. Compared to their baselines, they achieve a 10.1% relative improvement on Dark-KITTI and a 4.25% relative improvement on standard KITTI [19].

2.1.3. Road Conditions
Potholes, gravel, and uneven road surfaces can all influence sensor accuracy and make it challenging to determine the position and orientation of a vehicle. By merging data from several sensors and utilizing methods like particle filters and Kalman filters, AI algorithms can increase the precision of vehicle localization. A method for detecting roads that considers surface type variation, identifies paved and unpaved surfaces, and detects damage and other information on road surfaces that may be relevant to driving safety was presented by Rateke and Wangenheim [20]. Chen et al. suggested a new semi-supervised approach based on adversarial learning to extract road networks from remote sensing photos. A small number of poorly annotated data and a sizable amount of weakly annotated data are used for training in this method [21]. The You Only Look Once version 3 CVT model library was used by Bucko et al. to achieve automatic pothole detection. This study aimed to investigate the effects of unfavorable circumstances on pothole identification [22].
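
To make the sensor-merging idea above concrete, the following minimal sketch shows a one-dimensional Kalman filter that blends a wheel-odometry prediction with a noisy GPS-like position measurement. The noise values, time step, and variable names are illustrative assumptions for this example and are not taken from the cited works.

```python
import numpy as np

def kalman_1d(z_measurements, u_speeds, dt=0.1,
              process_var=0.5, meas_var=2.0):
    """Fuse odometry (speed) predictions with noisy position measurements."""
    x, p = 0.0, 1.0                 # initial position estimate and its variance
    estimates = []
    for z, u in zip(z_measurements, u_speeds):
        # Prediction step: propagate the state with the motion model x += u*dt
        x = x + u * dt
        p = p + process_var
        # Update step: correct the prediction with the position measurement z
        k = p / (p + meas_var)      # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Example: constant 5 m/s motion observed through noisy position readings
true_pos = np.arange(0, 10, 0.5)
noisy_gps = true_pos + np.random.normal(0, 1.5, size=true_pos.size)
print(kalman_1d(noisy_gps, u_speeds=[5.0] * true_pos.size))
```

The same predict-and-correct pattern generalizes to the multi-dimensional filters used for full vehicle pose estimation.
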
2.1.4. Dynamic Objects
Dynamic objects such as other vehicles, pedestrians, and bicycles can pose a challenge to AD systems because they are constantly changing and can suddenly appear or disappear. AI algorithms can improve the accuracy of object detection and classification by using techniques such as DLN and convolutional NNs. E. Gomez Hernandez et al. proposed a technique for detecting moving objects in the environment of an AV by combining a DLN detector model and dynamic Bayesian occupancy. The goal of their work is to detect moving objects in traffic scenes by fusing semantic information with occupancy grid estimates. Furthermore, they use a Bayesian occupancy approach with a highly parallelized design to obtain the occupancy grid estimates [23]. Dangle et al. introduced an improved translation approach to convert thermal infrared images to visual color images using a unique convolutional NN architecture. They created a pedestrian detection system combining image enhancement, object recognition, and colorization. The recognition model is given the colored and improved images using a pre-trained You Only Look Once version 5 architecture. Based on the coordinates of the edges surrounding the pedestrians, bounding boxes are generated on the resulting photos [24]. Using a monocular camera and LiDAR, Zhao et al. introduced a complete system for dynamic object tracking in three dimensions [25]. The system also includes a re-tracking mechanism that resumes tracking when the target reappears in the camera's field of view.

AI is essential for enhancing the accuracy and dependability of AD systems, particularly in difficult environmental circumstances. By utilizing real-time AI approaches, AD systems can better comprehend their surroundings, make decisions based on current information, and protect the safety of passengers and other road users.

2.2. Sensor Fusion
Sensor fusion refers to the process of integrating multiple sensor inputs to provide a more accurate, comprehensive, and reliable representation of the environment. The following are some ways AI is used in sensor fusion:

2.2.1. Data Fusion
Data fusion involves integrating information from multiple sensors to obtain a comprehensive and accurate understanding of the environment. AI techniques such as ML, DLN, FL, and Bayesian networks are used to process and analyze the massive amounts of data generated by AV sensors. ML and DLN algorithms can be trained on large datasets to recognize patterns, localize objects, and make decisions based on sensor data. FL is useful for modeling imprecise or uncertain information, while Bayesian networks provide a probabilistic framework for reasoning about data. Rubaital et al. proposed a multi-sensor data fusion for vehicle detection in AV applications. They explored the problem of data fusion of camera and LiDAR sensors and suggested a novel 6D data representation (RGB+XYZ) to facilitate visual inference [26].
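
As an illustration of the kind of RGB+XYZ representation mentioned above, the sketch below projects LiDAR points into a camera image and attaches the sampled pixel color to each 3D point. The intrinsic/extrinsic matrices, array shapes, and filtering thresholds are assumptions made for the example; they do not reproduce the representation of [26].

```python
import numpy as np

def fuse_lidar_camera(points_xyz, image_rgb, K, T_cam_from_lidar):
    """Return an (N, 6) array [R, G, B, X, Y, Z] for LiDAR points visible in the image."""
    n = points_xyz.shape[0]
    homog = np.hstack([points_xyz, np.ones((n, 1))])          # (N, 4) homogeneous points
    cam = (T_cam_from_lidar @ homog.T).T[:, :3]                # points in the camera frame
    in_front = cam[:, 2] > 0.1                                 # keep points ahead of the camera
    cam = cam[in_front]
    pix = (K @ cam.T).T                                        # pinhole projection
    u = (pix[:, 0] / pix[:, 2]).astype(int)
    v = (pix[:, 1] / pix[:, 2]).astype(int)
    h, w, _ = image_rgb.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)            # inside the image bounds
    rgb = image_rgb[v[valid], u[valid]]                        # sample pixel colors
    return np.hstack([rgb, cam[valid]])

# Toy example with an identity extrinsic and a simple pinhole intrinsic matrix
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
T = np.eye(4)
pts = np.random.uniform([-5, -2, 2], [5, 2, 30], size=(1000, 3))
img = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)
print(fuse_lidar_camera(pts, img, K, T).shape)
```
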


A real-time data fusion network with fault tolerance and fault diagnosis features was created by Pan et al. The features of the input data are extracted in real time by introducing early features to create a lightweight network. By estimating the global and local reliability of sensors, they provided a novel approach to evaluating sensor dependability [27]. A multi-sensor data fusion technique was created by Liu et al. to process the measurements from three common sensor types and generate better navigational data for autonomous surface vehicle operation in a practical environment [28].

2.2.2. Sensor Selection
Sensor selection refers to the process of choosing the most appropriate sensors for a given task or environment. AI techniques such as ML, DLN, and RL can be used to optimize this process. ML algorithms can analyze large volumes of sensor data to find patterns and correlations that can be utilized to enhance sensor choice. DLN can be used to extract features from sensor data in order to train algorithms for sensor selection. By learning from experience, RL can be utilized to gradually improve the performance of the sensor selection system. To enhance robustness without sacrificing efficiency, Malawad et al. introduced a HydraFusion-based strategy for selective sensor fusion [29]. This method learns to identify the current driving environment and fuses the optimum sensor combinations accordingly.

2.2.3. Sensor Calibration
Sensor calibration is the process of adjusting sensors to assure their accuracy and dependability. The calibration of sensors for AVs can be optimized using AI techniques like ML and DLN. Large volumes of sensor data can be analyzed by ML algorithms to find patterns or anomalies that might point to calibration problems. Moreover, DLN techniques can be used to create more precise models of sensor behavior, increasing calibration accuracy. In order to ensure that the sensors deliver correct and trustworthy data for AV systems, AI approaches can also be employed to adjust the sensors in real time based on changing environmental conditions. OpenCalib is a toolkit presented by Yan et al. that includes numerous sensor calibration techniques for AD vehicles. The most popular sensors are covered by OpenCalib, including LiDAR, cameras, IMUs, and radar. It also includes a variety of application scenarios, including manual and automatic road scene calibration, assembly line calibration, and online calibration [30]. Ponton et al. have suggested employing static object data for an effective extrinsic calibration of multi-sensor 3D LiDAR systems for AVs. They demonstrated an effective calibration approach for sensors fixedly installed in an AV, utilizing both time- and space-related information and proprioceptive/perceptual information [31].

A data-driven miscalibration detection system for a camera placed on a vehicle was presented by Jiang et al. They suggested a data-driven RGB camera miscalibration detection approach to identify incorrectly calibrated internal camera parameters. The procedure entails calibrating the raw picture with an erroneous internal parameter to obtain incorrectly calibrated image data, which is paired with the correctly calibrated internal camera parameters; this incorrectly calibrated image data is then used as input data to train the NN, generating a network model that detects the incorrectly calibrated parameters [32].

In general, the application of AI in sensor fusion results in a more precise and trustworthy representation of the environment, which is crucial for applications like AVs, robots, and Internet of Things devices.

3. Vehicle-To-Everything Communication
V2X communication is a critical aspect of AVs, as it allows vehicles to communicate with other vehicles, road infrastructure, and other devices in the environment. AI has the potential to play an important role in improving the functionality of V2X communications and making AVs safer and more efficient. Here are some applications of AI in V2X communications for AVs:

3.1. Traffic Management
AI systems can monitor traffic trends, forecast congestion, and make real-time adjustments to enhance traffic flow using V2X communication data. Large volumes of V2X data can be analyzed using ML techniques to find patterns and correlations that can be used to enhance traffic management. AI may also be utilized to evaluate the enormous amounts of data created by V2X interactions in real time in order to facilitate quick and precise decision-making. In this application, ML, a type of AI that can learn from past data and generate predictions from it, is frequently employed. Another kind is rule-based systems, which base choices on a set of predetermined rules. Wagner et al. suggest using a digital twin to implement the SPaT/MAP V2X connection between vehicles and traffic lights. The primary outcome of the suggested remedy is a comprehensive and adaptable traffic control system that makes use of an industrial PLC and ensures a standardized V2X protocol [33]. Kim et al. studied a path rerouting method based on V2X communication to enhance traffic flow and showed that V2X communication may enhance traffic flow in the case of a traffic jam [34]. A DLN technique based on the unidirectional long short-term memory model was proposed by R. Abdellah et al. to estimate traffic in V2X networks. They explored the prediction challenges under various scenarios based on the quantity of packets sent each second. Processing time, mean square error, and mean absolute error percentage are used to gauge the accuracy of predictions [98].

3.2. Real-time Decision-Making
AI algorithms can use V2X communication data to make real-time decisions in complex and unpredictable driving situations, such as entering a highway or avoiding an obstacle. AI can also be used to process the vast amounts of data generated by V2X communications in real-time to enable fast and accurate decision-making. One type of AI commonly used in this application is ML, which can learn from previous data and make predictions based on it. Another type is rule-based


systems, which use a set of predefined rules to make decisions. Xu et al. proposed real-time AI perception of complex roads based on 5G-V2X for smart city safety. They combined AI algorithms and the 5G-V2X framework to propose a real-time street perception method [36]. A real-time regional route planning model for connected vehicles based on V2X communication was presented by Wang et al. They suggested a technique for route planning that accounts for the timing and phase of traffic lights on metropolitan road networks. They used real-time driving data from vehicles to dynamically calculate the resistance values of road segments based on the traffic signal timing and phase information gathered over V2X. Then, all candidate routes produced by Dijkstra's algorithm are listed in accordance with the topology of the current road network. The best route is then determined by calculating the projected travel time of each alternative route and choosing the one with the shortest predicted travel time [37].
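
The following minimal sketch illustrates the kind of Dijkstra-based route selection described above, using predicted travel times as edge weights. The tiny road graph and its weights are invented for the example and are not taken from [37].

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a dict graph {node: [(neighbor, travel_time_s), ...]}."""
    queue = [(0.0, start, [start])]          # (accumulated travel time, node, path so far)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, travel_time in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + travel_time, neighbor, path + [neighbor]))
    return float("inf"), []

# Toy road network: edge weights are predicted travel times in seconds
road_graph = {
    "A": [("B", 120), ("C", 90)],
    "C": [("B", 20), ("D", 180)],
    "B": [("D", 60)],
    "D": [],
}
print(shortest_route(road_graph, "A", "D"))   # -> (170.0, ['A', 'C', 'B', 'D'])
```

In a V2X setting, the edge weights would be refreshed continuously from reported signal timing and traffic speed rather than fixed in advance.
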

3.3. Route Optimization
Considering traffic patterns, road conditions, and other aspects, AI systems can use V2X communication data to optimize routing for AVs. ML, DLN, and RL can be applied to route planning. While DLN can be used to find patterns in data and enhance route planning, ML can be used to forecast traffic patterns and optimize routes based on past data. RL can be used to improve route design over time by taking into account feedback from drivers. Rasheed et al. suggested adaptive 3D beam alignment intelligent vehicular network routing for mmWave 5G-based V2X communications. They first suggested a 3D-based beam alignment and selection technique for location detection. A safe path for trusted data transmissions was then chosen using a group-based routing method [38].

Intersection-based V2X routing through RL in vehicular ad hoc networks was presented by Luo et al. They suggested an intersection-based V2X routing protocol that includes real-time network state monitoring and a learning routing strategy based on past traffic flows via Q-learning. A multi-dimensional Q-table is set up to choose the best road segments for packet forwarding at junctions, and an improved greedy technique is used to choose the best relays on the pathways. Together, these two elements form the hierarchical routing protocol. The monitoring models can identify network congestion and make timely routing adjustments to avoid it. This technique reduces communication delay and overhead while ensuring dependable packet transfer [39].

The functionality of V2X communications for AVs could be considerably enhanced by AI, making them safer, more effective, and more efficient.

4. Navigation and Path Planning
The ability of a vehicle to navigate its environment without human input or supervision is referred to as autonomous navigation. It includes perception, to gather data about the environment and identify obstacles; localization and mapping, to determine the position of the vehicle in the environment; path planning, where algorithms analyze the environment to generate a route; and motion control and decision-making, to move the vehicle along the generated path, avoid obstacles, and re-plan the path as needed to keep the vehicle traveling. Figure 3 displays the fundamental navigation procedures for a vehicle. The autonomous driver's decisions are applied to the powertrain and vehicle dynamics to provide acceleration and braking, regulate steering, and perform other functions.

Fig. 3 Flow diagram for vehicle navigation: the vehicle in the real-world environment feeds perception (sensing, data extraction and interpretation), which supports localization and map building; the resulting environmental model/local map supports cognition and path planning, which drives motion control (acting and path execution).


AI has several applications in path planning, which involves finding the optimal path for an AV to follow. Some of the most common applications of AI in path planning include autonomous navigation, real-time path planning, obstacle detection and avoidance, traffic management, and route optimization. Digital maps are created and updated using AI algorithms, which are also used to evaluate traffic patterns, produce motion plans, and optimize driving routes. These algorithms determine the most effective way by considering variables including the current environmental circumstances, vehicle restrictions, and task objectives.

Navigation strategies can be categorized by the prior knowledge of the surroundings needed for path planning. The terms "local navigation" and "global navigation" are broadly distinguished. While the vehicle does not need prior information about the surroundings in local navigation, global navigation requires the vehicle to have knowledge of the environment, the location of the obstacles, and the desired position. Global navigation techniques function in a known environment; local navigation techniques deal with uncharted or uncertain terrain.

4.1. Artificial Potential Field
In this approach, the target and obstacles act as charged surfaces, and the total potential creates an imaginary force on the vehicle. This imaginary force pulls the vehicle toward the target and keeps it away from the obstacles. A method for motion planning using harmonic functions, which uses the analytical description of the solution of Laplace's equation, was presented by Szulczyński et al. They consider an elliptical obstacle in a two-dimensional environment with static and dynamic targets. This method ensures collision avoidance and approach to the target [40]. On the basis of an enhanced artificial potential field algorithm, Wang et al. suggested obstacle avoidance path planning for AD vehicles. Duan et al. proposed an algorithm for active obstacle avoidance trajectory planning and tracking for AVs using an improved artificial potential field [100]. In order to complete trajectory planning for automatic driving, Li et al. suggested an enhanced artificial potential field approach that added a distance adjustment factor, a dynamic road repulsion field, a speed repulsion field, and an acceleration repulsion field. To overcome the issues with the conventional artificial potential field technique, they developed an invasive weed optimization algorithm [42].
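
A minimal sketch of the basic attractive/repulsive potential-field idea described above follows. The gain values, influence radius, and toy obstacle layout are illustrative assumptions rather than the formulations used in [40], [42], or [100].

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0,
                         influence=3.0, step=0.1):
    """Move one step along the combined attractive and repulsive forces."""
    force = k_att * (goal - pos)                        # attractive force toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:                        # repulsion only inside the influence radius
            force += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (diff / d)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

pos = np.array([0.0, 0.0])
goal = np.array([10.0, 10.0])
obstacles = [np.array([4.0, 5.0])]
for _ in range(400):                                    # iterate until the goal is reached
    pos = potential_field_step(pos, goal, obstacles)
    if np.linalg.norm(goal - pos) < 0.2:
        break
print("final position:", pos)
```

The enhancements cited above (dynamic repulsion fields, speed- and acceleration-dependent terms) are refinements of exactly this kind of force composition.
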
and lane keeping. A dataset of sensor data gathered from the
4.2. Cell Decomposition
In this approach, the area is divided into a grid of smaller cells, each of which is assigned a unique identifier. Each cell is then analyzed and characterized based on its features, such as road type, traffic volume, and obstacles. This information is then stored in a database and used to create a map of the area. Mark et al. presented a greedy depth-first search algorithm and a cell decomposition approach based on a genetic algorithm (GA) for the path planning of a manipulator that can perform multiple activities simultaneously in a 3D environment [43]. A homotopy class algorithm and cell decomposition for robot motion planning was described by Wahdan et al. In this method, the motion planning problem of a rigid-body vehicle is divided into two subproblems. First, a given free space is decomposed into a finite number of simply shaped regions to make the second subproblem natural and simple. Then a detailed motion is planned from the start position to the goal using the global path mentioned above [44].

4.3. Roadmap Approach
This method is frequently applied in GPS navigation systems, which give vehicles instructions and real-time traffic updates. The method typically entails entering a starting point and a destination, after which the algorithm determines the most effective path between them. Alternative routes, traffic updates, and an anticipated arrival time might all be included. The main objective of a vehicle navigation roadmap approach is to give drivers a precise and detailed strategy for getting to their destination quickly and safely. Niu et al. introduced a novel Voronoi visibility path planning method that combines the benefits of a visibility graph with a Voronoi diagram to overcome the path planning issue for unmanned ground vehicles. To compare roads, they employed the procedure known as "the Voronoi shortest path refined by minimizing the number of waypoints" [45]. A modified probabilistic roadmap algorithm-based intelligent vehicle path planning was described by Li et al. To improve the quality of the generated sample points, they created a pseudo-random sampling method based on uniform sampling. Next, they added random incrementation to change the sample points' fluctuation range and successfully avoid the obstacle space. Finally, they used a two-way incremental collision detection strategy to set the connection threshold between road points and lower the number of collision detection calls [46]. To eliminate uncertain path calculations associated with the high time and space complexity of roadmap path planning methods in complex environments for mobile robots, Ayawli et al. introduced a roadmap algorithm with morphological dilation of the Voronoi diagram [44].

4.4. Neural Network
The NN approach to vehicle navigation involves using a NN to process data from sensors on the vehicle to make decisions about the path and movement of the vehicle. This can include tasks such as path planning, obstacle avoidance, and lane keeping. A dataset of sensor data gathered from the vehicle travelling in various settings and situations is used to train the NN. After being trained, the network can forecast in real time the optimum move for the vehicle. Ren et al. proposed a hybrid intelligent approach for real-time optimal control based on deep NNs to improve the autonomy and intelligence of navigation control of automatically controlled vehicles [48]. An NN-based prediction model for mission planning was put forward by Biswas et al. A group of AVs must work together to go to a set of destinations in an


environment with static and moving impediments. They offered a three-layer solution for mission routing [49]. Motion planning for highly automated road vehicles was reported by Hegedüs et al. utilizing a hybrid strategy combining nonlinear optimization and artificial NNs. They suggested a trajectory planning system based on nonlinear optimization to dynamically construct viable, comfortable, and adjustable movements for highly automated or autonomous road vehicles using model-based vehicle motion prediction [50].

4.5. Particle Swarm Optimization
Particle swarm optimization (PSO) is a population-based optimization method inspired by the social behavior of flocks of birds or schools of fish. It can be utilized to solve problems in many different areas, including vehicle navigation. In this method, the particles stand for candidate paths or path plans that could solve the navigation problem. A fitness function that computes the quality of a solution based on variables like distance, fuel consumption, and obstructions is used to evaluate the position of each particle. Particles then move and update their positions based on the best solutions found by themselves and by other particles. The process is repeated until a satisfactory solution is found. For the purpose of optimizing the reentry trajectory of hypersonic vehicles with a navigation information model, Wu et al. presented a hybrid Gaussian pseudospectral technique [51]. Based on a modified particle swarm optimization technique, Guo et al. developed global trajectory planning and multi-objective trajectory control for autonomous surface vehicles [52], while Mao et al. proposed a full-width deviation correction method for trajectory planning of horizontal-axis roadheaders based on an improved particle swarm optimization algorithm [53]. A motion planning algorithm that can be viewed as a component of a hierarchical framework addressing the challenging problem of driving was suggested by Arrigoni et al. The suggested approach involves numerically solving an optimization problem with an MPC formulation using accelerated particle swarm optimization. The algorithm can operate in an urban setting while taking into account moving impediments and constraints, including vehicle dynamics and road boundaries [54].
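
The following sketch shows the plain PSO update loop described above applied to choosing a single intermediate waypoint that trades off path length against obstacle clearance. The fitness function, weights, and bounds are illustrative assumptions, not the formulations of [51]-[54].

```python
import numpy as np

rng = np.random.default_rng(0)
start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacle = np.array([5.0, 6.0])

def fitness(waypoint):
    """Lower is better: total path length plus a penalty for passing near the obstacle."""
    length = np.linalg.norm(waypoint - start) + np.linalg.norm(goal - waypoint)
    clearance = np.linalg.norm(waypoint - obstacle)
    return length + 20.0 / (clearance + 0.1)

n, dims, iters = 30, 2, 100
x = rng.uniform(0, 10, size=(n, dims))          # particle positions (candidate waypoints)
v = np.zeros_like(x)                            # particle velocities
pbest = x.copy()                                # each particle's best position so far
pbest_val = np.array([fitness(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()        # best position found by the whole swarm

for _ in range(iters):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    vals = np.array([fitness(p) for p in x])
    improved = vals < pbest_val                 # update personal bests
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()    # update the global best

print("best waypoint:", gbest, "fitness:", fitness(gbest))
```

A full planner would optimize a sequence of waypoints with the same velocity/position update rule rather than a single point.
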
activity was proposed by AI et al. By combining semantic
4.6. Fuzzy Logic
The FL approach is a mathematical method that allows one to deal with uncertain, imprecise, or vague information in a way that resembles human reasoning. Unlike traditional Boolean logic, which uses only binary true or false values, FL uses degrees of truth represented by real numbers between 0 and 1. In an FL method, the vehicle navigation system processes sensor data and makes decisions using fuzzy rules and membership functions. The system may, for instance, employ a fuzzy rule that says, "The vehicle should slow down if it is near an obstacle and the obstacle is moving." The degree to which the vehicle is "near" an obstacle and the speed at which the obstacle is "moving" are determined by the membership functions. Because FL can accommodate the uncertainty and imprecision of sensor data and simulate human decision-making processes, it can be employed in vehicle navigation. This is especially helpful in unexpected and dynamic circumstances, like traffic and changing weather conditions. Song et al. suggested a dynamic path planning approach based on FL and enhanced ant colony optimization (ACO). To discover the best path in a road network using the idea of virtual path length, the FL ant colony optimization, the classical ACO, and the enhanced ACO were each first applied independently [55]. Chen et al. suggested a conditional deep Q-network for directional planning and used it for end-to-end AD, where the global path directs the vehicle from the starting point to the destination. They utilize the concept of fuzzy control to address the dependence of various motion commands in Q-nets and create a defuzzification method to increase the stability of predicting the values of various motion commands [56]. A real-time traffic circle identification and navigation system for smart cities and automobiles was presented by A. H. Ali et al. employing laser simulator FL algorithms and sensor fusion in a road environment [57].
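
The example rule quoted above can be captured with a tiny fuzzy-rule evaluation like the sketch below. The membership function shapes and threshold values are assumptions made for illustration, not taken from [55]-[57].

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def slow_down_degree(distance_m, obstacle_speed_ms):
    """Fuzzy rule: IF obstacle is near AND obstacle is moving THEN slow down."""
    near = max(0.0, (20.0 - distance_m) / 20.0)     # 1.0 at 0 m, fading to 0.0 at 20 m
    moving = tri(obstacle_speed_ms, 0.0, 5.0, 10.0) # degree of 'moving', peaking at 5 m/s
    return min(near, moving)                        # AND = minimum; result drives braking

for d, s in [(5.0, 4.0), (15.0, 4.0), (25.0, 4.0)]:
    print(f"distance={d} m, speed={s} m/s -> slow-down degree {slow_down_degree(d, s):.2f}")
```

A complete controller would combine several such rules and defuzzify their aggregated output into a crisp speed or steering command.
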


5. Simultaneous Localization and Mapping
Simultaneous Localization and Mapping (SLAM) is a critical technology for AVs. It allows the vehicle to create a map of its surroundings in real-time and determine its location within that map. Various types of AI are employed in SLAM, including:

5.1. Machine Learning
ML techniques like NNs and decision trees can be used to analyze sensor data and produce predictions about the environment. These predictions can then be used to increase the SLAM system's accuracy. Semantic monocular visual localization and mapping in dynamic contexts was proposed by Xiao et al. They developed a comprehensive SLAM framework called Dynamic-SLAM, a semantic monocular visual simultaneous localization and mapping system that makes use of DLN to enhance performance in dynamic situations [59]. A method for RGB-D SLAM that is reliable and stable in situations with high levels of dynamic activity was proposed by Ai et al. By combining semantic segmentation and multiview geometry, they can recognize moving objects [60]. A method for unsupervised multichannel visual-LiDAR SLAM that can combine visual and LiDAR data was proposed by An et al. Their SLAM system consists of a 3D mapping component, a DLN-based loop closure detection component, and an unsupervised multichannel visual-LiDAR odometry component. A multichannel recurrent convolutional NN is used in the visual-LiDAR odometry component. RGB pictures and 360-degree 3D LiDAR data create depth images of the front, left, and right viewpoints. The properties of a deep convolutional NN were employed to detect loop closures. The 3D mapping component of this method can immediately build 3D environment maps without the need for ground-truth data for training [61].

5.2. Computer Vision
CVTs such as object recognition and feature extraction can be used to detect and identify landmarks on the roads that can be utilized as reference points for the SLAM system. Sualeh and Kim suggested a 3D MODT-based semantics-aware dynamic SLAM. To address the challenges of the dynamic world, they combined SLAM with visual LiDAR-based 3D MODT. Considering the finite processing resources and real-time needs, the suggested system conducts temporal classification of tracked objects. An efficient tracker based on IMM-UKF-JPDAF keeps track of the objects geographically while preserving the class association history to address the real-time limitations and defects of object identification. They created a dynamic object mask that, when applied to a classified LiDAR point cloud, may imitate cutting-edge semantic segmentation approaches. SLAM intelligently chooses the visual elements for tracking and mapping tasks using the dynamic mask provided by MODT [62].

5.3. Reinforcement Learning
Based on the rewards and penalties it receives for its actions, RL techniques can be used to optimize the behavior of the AV. This can enhance both the effectiveness and safety of vehicle mobility. Botteghi et al. investigated using RL as an effective and robust solution to explore unknown indoor environments and reconstruct their maps. They used the SLAM algorithm for real-time robot localization and mapping [63]. A. Castellanos and A. Placed presented an Active SLAM Deep RL method. By incorporating conventional utility functions based on optimal trial design theory into the rewards, they were able to simplify the costly computations of previous techniques and describe the Active SLAM paradigm in terms of model-free Deep RL [101]. Path planning for active SLAM based on Deep RL in uncharted areas is suggested by Wen et al. They use fully convolutional residual networks to find the obstacles and get a depth image. They use the Dueling DQN algorithm for robot navigation to plan the obstacle avoidance path, and they simultaneously use FastSLAM to produce a 2D map of the surrounding area [65]. Each of these AI methods improves the SLAM system differently and adds to the overall accuracy and dependability of the AV.

6. Motion Control and Advanced Driver Assistance
Motion control and ADAS are two different systems that serve different purposes in AVs and also use AI in different ways. Motion control is in charge of regulating the vehicle's movement, including steering, accelerating, and braking. Motion control systems use a number of sensors to gather information about the surrounding area and the position, speed, and orientation of the vehicle. AI systems then process the data to decide how best to control the vehicle's movements. On the other hand, ADAS systems are intended to help drivers with various activities, including monitoring the environment, operating the vehicle, and preventing collisions. Using sensors, cameras, and other technologies, ADAS systems may identify objects and potential collision hazards in the environment and alert the driver or take preventative action to avoid a collision. AI algorithms process this sensor data to identify objects and potential dangers and make decisions about what action to take.

Motion control employs various AI methods, including rule-based systems, FL, ML, and DLN. Rule-based systems make judgments on vehicle movements, such as steering, stopping, and accelerating, using a set of predetermined rules. FL, which can be helpful in directing the vehicle in challenging driving situations, uses linguistic variables to express ambiguous and inaccurate information. ML techniques that can learn from data include decision trees, support vector machines, and random forests. Artificial NNs are used to handle enormous amounts of data in a process known as DLN, which can be used to identify objects, detect obstacles, and forecast movements.

Moreover, ADAS employs AI in a number of different ways. One sort of AI that enables ADAS systems to learn from data and enhance their effectiveness over time is ML. Another form of AI, CVT, enables ADAS systems to detect and recognize items like other vehicles, pedestrians, and traffic signals using cameras and sensors. ADAS systems also use natural language processing to facilitate speech recognition and communication between drivers and vehicles. RL is a sort of AI used in AD to provide the vehicle with the ability to learn from its actions and improve its behavior to accomplish a particular objective, such as navigating through traffic or avoiding hazards. Figure 2 shows how vehicles perceive their surroundings to enable driver assistance.


6.1. Lane-Keeping
AI algorithms are utilized to locate a car within a lane and track lane markers on the road. These algorithms build a 3D representation of the surrounding area and forecast the vehicle's future trajectory using sensor data from cameras and other sensors. Based on this data, the AI algorithms adjust the car's steering to keep it in the center of the lane and at a safe distance from other moving vehicles. In response to shifting road circumstances like curves or shifting lane markers, the AI algorithms can also modify the vehicle's speed and direction in real-time. Lane departure prevention mode and lane-keeping co-pilot mode are two switchable assistance modes that Bian et al. introduced in their enhanced lane-keeping assistance system [67], while a lane-keeping assistance system for an AV employing a support vector ML method was proposed by Karthikeyan et al. [66]. Zhou et al. presented a lane departure assistance system based on model predictive control using the linear programming method. The linear programming alternative is less computationally intensive than other models, such as the quadratic programming-based model, making it a preferred model for electronic control units in commercial vehicles [68].
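
As a minimal illustration of the steering correction described above, the sketch below implements a simple proportional lane-centering law driven by the lateral offset and heading error, together with a toy bicycle-model update. The gains and kinematic parameters are assumptions for the example, not the controllers of [66]-[68].

```python
import math

def lane_keeping_steering(lateral_offset_m, heading_error_rad,
                          k_offset=0.4, k_heading=1.2, max_steer_rad=0.5):
    """Return a bounded steering command that pushes the car back to the lane center."""
    steer = -(k_offset * lateral_offset_m + k_heading * heading_error_rad)
    return max(-max_steer_rad, min(max_steer_rad, steer))

# Tiny simulation of a kinematic vehicle drifting 1 m off-center at 20 m/s
y, heading, speed, wheelbase, dt = 1.0, 0.0, 20.0, 2.7, 0.05
for step in range(100):
    steer = lane_keeping_steering(y, heading)
    heading += speed / wheelbase * math.tan(steer) * dt   # bicycle-model heading update
    y += speed * math.sin(heading) * dt                   # lateral position update
print(f"lateral offset after 5 s: {y:.3f} m")
```

Production systems replace this hand-tuned law with learned or model-predictive controllers, but the feedback structure is the same.
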

6.2. Traffic Sign Recognition
Several types of AI can be used in traffic sign recognition, depending on the specific application requirements and available resources. A typical strategy is using ML methods, such as convolutional NNs, to categorize traffic signs according to their visual characteristics. Convolutional NNs are DLN models demonstrated to achieve high accuracy in traffic sign recognition applications; they are particularly well adapted to image recognition challenges. An alternative strategy is using rule-based expert systems that encode knowledge about traffic signs and their attributes, such as shape, color, and symbols. These systems, which recognize traffic signs using predetermined rules and heuristics, can be helpful when there is not enough data to train ML models.

Additionally, before using ML or rule-based algorithms, certain traffic sign identification systems preprocess the images using CVTs, including edge detection, image segmentation, and feature extraction. Using an efficient convolutional NN, Bangquan et al. proposed an embedded real-time traffic sign recognition system [69]. Alghamgham et al. developed an autonomous traffic and road sign recognition system that recognizes real-time traffic sign images based on a deep convolutional NN [70], while an enhanced traffic sign recognition method for intelligent vehicles was put forward by Cao et al. [102].
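
The following PyTorch sketch shows the kind of small convolutional NN classifier described above. The layer sizes, 32x32 input resolution, and 43-class output are illustrative assumptions and do not reproduce the networks of [69], [70], or [102].

```python
import torch
import torch.nn as nn

class TrafficSignCNN(nn.Module):
    """Small convolutional classifier for 32x32 RGB traffic-sign crops."""
    def __init__(self, num_classes=43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TrafficSignCNN()
dummy_batch = torch.randn(4, 3, 32, 32)          # four fake sign crops
logits = model(dummy_batch)
print(logits.shape)                               # -> torch.Size([4, 43])
```
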
kinematics and dynamics of the vehicle system selects a
6.3. Adaptive Cruise Control collision-free route after a dynamic threat assessment model
The ability for a vehicle to automatically change its speed continuously evaluates the danger of collisions and
in response to traffic circumstances is known as adaptive destabilization. The nonlinearity of the tire's cornering
cruise control. This technology relies heavily on AI, and there behavior and unknown external disturbances are considered
are various types of AI that can be applied to increase the by constructing a lateral motion controller at the motion
effectiveness of adaptive cruise control. ML algorithms can control level. To follow a collision-free trajectory and ensure
analyze vehicle sensor data and adjust speed accordingly. the closed loop is robust and stable, a backstepping sliding
CVT algorithms can detect other vehicles on the road and mode control based on an assessment of tire side force is used
predict their movements so the system can maintain a safe [77].
6.4. Collision Avoidance
Behzadan and Munir proposed an adversarial RL framework for benchmarking the behavior of collision avoidance mechanisms operating with an optimal adversarial agent trained to place the system in unsafe states [75].

An unexpected-collision avoidance method was put forth by Kim et al.: using Deep RL, they created an intelligent self-driving approach that reduces the severity of injuries in unforeseen situations involving traffic light violations at an intersection [76]. He et al. suggested a hierarchical control architecture with decision-making and motion control levels as the building blocks of an emergency steering control method. At the decision-making level, a path planner based on the kinematics and dynamics of the vehicle system selects a collision-free route after a dynamic threat assessment model continuously evaluates the danger of collisions and destabilization. At the motion control level, a lateral motion controller is constructed that accounts for the nonlinearity of the tire's cornering behavior and for unknown external disturbances, and a backstepping sliding mode control based on an assessment of the tire side force is used to follow the collision-free trajectory and keep the closed loop robust and stable [77].
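A very small example of the kind of check a dynamic threat assessment module can run is given below: a time-to-collision (TTC) computed from the gap and the closing speed. This is only a sketch under assumed thresholds and is unrelated to the specific assessment model of [77].

```python
# Time-to-collision (TTC) based risk flag, a common building block of threat
# assessment. The 2.5 s threshold is an assumed illustrative value.
def time_to_collision_s(gap_m: float, closing_speed_mps: float) -> float:
    """Closing speed > 0 means the gap is shrinking; returns inf when opening."""
    return gap_m / closing_speed_mps if closing_speed_mps > 1e-3 else float("inf")

def collision_risk(gap_m: float, ego_speed_mps: float, obstacle_speed_mps: float,
                   ttc_threshold_s: float = 2.5) -> bool:
    ttc = time_to_collision_s(gap_m, ego_speed_mps - obstacle_speed_mps)
    return ttc < ttc_threshold_s

# Example: an obstacle 20 m ahead, closing at 10 m/s, gives a TTC of 2 s and
# is flagged as a collision risk.
print(collision_risk(gap_m=20.0, ego_speed_mps=20.0, obstacle_speed_mps=10.0))
```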


6.5. Emergency Braking
AI is essential to the efficient operation of emergency braking, a crucial safety component of contemporary vehicles. Emergency braking systems can use various AI techniques such as DLN, CVT, and ML. ML algorithms can analyze vehicle sensor data to determine whether emergency braking is required. A vehicle's path may contain pedestrians or other moving objects, which CVT algorithms can identify and assess for danger, and the system can be trained to recognize various threats and react accordingly using DLN algorithms. Socha et al. presented an ML-based automatic emergency braking system for pedestrians with a complete safety case [78]. A sliding mode slip ratio controller and a rule-based mechanism for allocating braking torque were used by Chen et al. to design an emergency brake control strategy [79], while a nonlinear model predictive deceleration control was used by Mu et al. to build an automated emergency braking approach [80].
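To illustrate one way an emergency braking decision can be derived from sensor data, the sketch below compares the deceleration required to stop within the remaining gap against a fraction of the vehicle's braking capability. It is a simplified stand-in, not the strategies of [78], [79], or [80], and all numerical values are assumptions.

```python
# Simplified AEB trigger: brake when the deceleration needed to avoid the
# obstacle approaches the braking capability of the vehicle.
# max_decel_mps2 and trigger_ratio are assumed illustrative values.
def aeb_should_brake(gap_m: float, closing_speed_mps: float,
                     max_decel_mps2: float = 8.0, trigger_ratio: float = 0.8) -> bool:
    if closing_speed_mps <= 0.0:
        return False  # the obstacle is not getting closer
    required_decel = closing_speed_mps ** 2 / (2.0 * max(gap_m, 0.1))
    return required_decel >= trigger_ratio * max_decel_mps2

# Example: closing at 15 m/s with 16 m left requires about 7 m/s^2, which
# exceeds the trigger level, so braking is commanded.
print(aeb_should_brake(gap_m=16.0, closing_speed_mps=15.0))
```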
6.6. Parking Assistance
Parking assistance systems use AI to help drivers park their vehicles safely and efficiently. These systems employ a variety of AI techniques, such as sensor fusion, ML, and CVT. Machine vision algorithms are used to discern patterns and identify objects in the car's environment, such as other vehicles, curbs, and barriers. Large data sets of parking scenarios are used to train ML algorithms that forecast the vehicle's most effective path and give drivers instructions. Sensor fusion combines data from several sensors, such as cameras and ultrasonic sensors, to build a complete picture of the surrounding area and determine the exact distances to nearby objects.

By integrating these forms of AI, parking assistance systems can reduce stress for drivers and make parking simpler, safer, and more efficient. Wijaya et al. presented a method for real-time semi-autonomous parking in which a visual parking assistance system provides maneuvering recommendations to the driver for reverse parking; to generate these recommendations, the proposed system includes wide-angle lens correction, a global bird's-eye view, and user-guided vision-based parking line recognition [81]. A laser-based SLAM system for automatic parallel parking and tracking control was reported by Song et al.; it incorporates environment perception and reconstruction, parking path planning, and path tracking [82].
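The sensor fusion step can be illustrated with a toy example: two independent distance estimates of the same obstacle, one ultrasonic and one camera-based, are combined by inverse-variance weighting, which is the core of a one-dimensional Kalman-style update. The readings and variances below are assumed; real parking systems such as those in [81] and [82] use far richer models.

```python
# Toy sensor-fusion sketch: inverse-variance weighting of an ultrasonic and a
# camera-based distance estimate to an obstacle behind the car.
# All readings and variances are assumed illustrative values.
def fuse_distances(d_ultrasonic: float, var_ultrasonic: float,
                   d_camera: float, var_camera: float) -> tuple:
    w_u = 1.0 / var_ultrasonic
    w_c = 1.0 / var_camera
    fused = (w_u * d_ultrasonic + w_c * d_camera) / (w_u + w_c)
    fused_var = 1.0 / (w_u + w_c)
    return fused, fused_var

# Example: a 0.42 m ultrasonic reading (sigma ~2 cm) and a 0.50 m camera
# estimate (sigma ~8 cm) fuse to roughly 0.425 m, dominated by the more
# precise ultrasonic sensor.
print(fuse_distances(0.42, 0.02 ** 2, 0.50, 0.08 ** 2))
```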
6.7. Blind Spot Detection
Modern cars have a critical safety feature called blind spot detection that helps drivers avoid crashes with vehicles that may be in their blind areas, and AI is essential to making this technology effective. Blind spot detection systems frequently employ various AI techniques, including CVT, ML, and NNs. CVT is employed to recognize objects in the vehicle's environment, ML algorithms are trained on vast data sets to discover patterns and foresee potential dangers, and NNs process sensor data in real time so that the system can provide precise predictions and alert the driver when a car is spotted in the blind zone. By combining these forms of AI, blind spot detection systems are able to increase driver safety and reduce traffic accidents. A camera-based blind spot identification system was created by Kwon et al.; the established research framework consisted of five stages: data preprocessing, feature extraction, fully connected network model learning, vehicle blind spot adjustment, and false alarm reduction [83]. As a replacement for the conventional radar-based approach, Zhao et al. proposed a camera-based DLN technique that accurately detects other cars in the blind spot [103]. To enhance blind spot detection, Lee et al. suggested employing generative adversarial networks to augment nighttime data [85].
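As a small, assumed example of the false-alarm reduction idea mentioned above, the sketch below smooths per-frame detection confidences (taken to come from a separate camera-based NN detector) and applies hysteresis before raising the warning; it does not reproduce the specific pipelines of [83], [103], or [85].

```python
# Illustrative false-alarm reduction step for a camera-based blind spot warning:
# per-frame NN detection scores are smoothed over time, and the alert uses
# hysteresis so brief misdetections do not toggle the warning.
# All thresholds are assumed values.
class BlindSpotAlert:
    def __init__(self, on_threshold: float = 0.7, off_threshold: float = 0.4,
                 smoothing: float = 0.3):
        self.score = 0.0          # smoothed confidence that a vehicle is present
        self.active = False
        self.on_threshold = on_threshold
        self.off_threshold = off_threshold
        self.smoothing = smoothing

    def update(self, detection_score: float) -> bool:
        """Feed one frame's detection confidence in [0, 1]; returns alert state."""
        self.score = (1 - self.smoothing) * self.score + self.smoothing * detection_score
        if not self.active and self.score >= self.on_threshold:
            self.active = True
        elif self.active and self.score <= self.off_threshold:
            self.active = False
        return self.active
```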
Although both motion control and ADAS systems use AI, they perform different functions in AVs. Motion control systems are responsible for the physical control of the vehicle, while ADAS systems monitor the environment and assist the driver.

7. Discussion
The use of AI in self-driving cars brings numerous benefits, including improved safety, increased efficiency, greater convenience, and better accessibility. AI technology enables self-driving cars to monitor and interpret complex traffic situations in real time and to make decisions faster and more accurately than human drivers, reducing the number of accidents caused by human error. Driving becomes more practical and economical thanks to AI's ability to optimize routes and driving techniques to save fuel and travel time. Self-driving cars with AI capabilities may also park autonomously, drive through traffic, and adjust to changing road conditions, improving accessibility for elderly and disabled individuals. According to recent studies, AI-powered self-driving cars might cut road fatalities by up to 90%, enhance traffic flow, and expand mobility for millions of people [86].

Achieving fully autonomous cars remains challenging, with several technical and societal obstacles to overcome. One of the largest problems is creating algorithms that can effectively perceive and comprehend the environment, including recognizing and responding to a wide variety of objects and circumstances; this requires addressing challenges in CVT, natural language processing, and decision-making under uncertainty [47][88]. Furthermore, it is crucial to guarantee the dependability and robustness of AI systems, because even small biases or errors in the algorithms can have major negative effects in the real world. Other challenges include clarifying the ethical and legal issues related to using AI in AVs and establishing standards for testing and validating AI-based systems [89]. Current research has concentrated on enhancing the interpretability and transparency of AI algorithms, creating AI systems that can learn from human demonstrations, and addressing moral questions surrounding the use of AI in autonomous cars.

There are several research segments for AD with AI, including:

7.1. Perception
Developing algorithms that can accurately perceive the environment, including object detection and recognition, scene understanding, and localization.

7.2. Planning and Decision Making
Developing algorithms capable of making safe and efficient decisions based on the perceived environment, including path planning, trajectory optimization, and motion control.


7.3. Human-Machine Interaction
Creating user interfaces based on natural language processing, gesture recognition, and facial expression recognition to enable secure and effective communication between people and autonomous cars.

7.4. Reinforcement Learning
Developing algorithms that enable AVs to learn from their own experiences and improve their performance over time.

7.5. Explainable AI
Developing algorithms that can provide interpretable and transparent explanations for the decisions made by AVs, enabling greater trust in and understanding of the technology.

7.6. Multi-agent Systems
Creating algorithms that let autonomous cars communicate with each other and work together to accomplish shared objectives, such as enhancing traffic flow or preventing crashes.

7.7. Cybersecurity
Creating methods to safeguard the privacy and security of autonomous cars, including defense against cyberattacks and secure data transfer.

7.8. Testing and Validation
Creating procedures for evaluating the dependability and safety of AD systems, such as certification, field testing, and simulation.

These are only a few of the research topics being investigated in the AD area.

As of early 2023, no fully autonomous self-driving cars are available for purchase on the market. However, several companies produce vehicles with advanced driver assistance features, such as lane departure warnings, adaptive cruise control, and automatic emergency braking. The most cutting-edge driving assistance technology, known as "Autopilot," is installed in Tesla vehicles and presently runs at autonomy level 2. General Motors also builds automobiles with cutting-edge driver-aid features through its Super Cruise system, which currently functions at level 2 autonomy. Other automakers such as Audi, BMW, and Mercedes-Benz likewise build vehicles with advanced driver assistance features, although these technologies normally function at autonomy level 2 or 3. Even though these systems are not entirely autonomous, they show substantial advancement toward the creation of AVs that can operate without human interference [90][91][92][93][94].

Several businesses are attempting to reach increasing levels of autonomy in the quickly developing AV industry. Autonomy levels 4 and 5 would let vehicles operate in all conditions without requiring human involvement, and companies like Tesla, Waymo, and General Motors are striving toward this goal. Reaching this level of autonomy requires advanced sensing and mapping technologies as well as more sophisticated AI and ML algorithms. Companies are also seeking to integrate self-driving vehicles into existing transportation networks, such as ride-sharing services and public transportation systems [95][96]. The global market for self-driving cars is predicted to grow from 20.3 million units in 2021 to 62.4 million units by 2030, according to the Global Forecast Study [97]. With sales projected to reach nearly $326 billion by the end of 2030, the automotive industry is focused on developing driver assistance systems that will pave the way for self-driving cars.

8. Conclusion
In conclusion, the use of AI in self-driving technologies has the potential to completely transform the transportation sector. AI algorithms, ML, DLN, and CVT techniques are being developed that will eventually result in advanced AVs able to navigate challenging road settings and adapt instantly to changing conditions.

The advantages are clear, even though certain obstacles remain to be solved, such as guaranteeing the security and reliability of self-driving technology. In addition to improving transportation alternatives for those with disabilities or limited mobility, self-driving cars offer the potential to reduce traffic congestion and accidents caused by human error.

Moreover, AI can be used in self-driving technologies for purposes other than personal transportation. Drones and self-driving trucks could revolutionize the transport sector, making it quicker, safer, and more effective.

Overall, new opportunities for the future of transportation have been made possible by incorporating AI into self-driving technologies. In the upcoming years, we may anticipate seeing increasingly sophisticated and trustworthy AVs on the road, thanks to ongoing research and development.

References
[1] Ján Ondruš et al., “How Do Autonomous Cars Work?,” Transportation Research Procedia, vol. 44, pp. 226-233, 2020. [CrossRef]
[Google Scholar] [Publisher Link]
[2] C. Thorpe et al., “Vision and Navigation for the Carnegie-Mellon Navlab,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 10, no. 3, 1988. [CrossRef] [Google Scholar] [Publisher Link]


[3] Sebastian Thrun et al., “Stanley: The Robot that Won the DARPA Grand Challenge,” Journal of Field Robotics, 2006. [CrossRef]
[Google Scholar] [Publisher Link]
[4] Society of Automotive Engineers, 2021. [Online]. Available: https://www.sae.org/standards/content/j3016_202104/
[5] Badr Ben Elallid et al., “A Comprehensive Survey on the Application of Deep and Reinforcement Learning Approaches in AD,” Journal
of King Saud University - Computer and Information Sciences, vol. 34, pp. 7366–7390, 2022. [CrossRef] [Google Scholar] [Publisher
Link]
[6] Scott Drew Pendleton et al., “Perception, Planning, Control, and Coordination for Autonomous Vehicles,” Machines, vol. 5, no. 6, 2017.
[CrossRef] [Google Scholar] [Publisher Link]
[7] Sorin Grigorescu et al., “A Survey of Deep Learning Techniques for Autonomous Driving,” Journal of Field Robotics, 2019.[CrossRef]
[Google Scholar] [Publisher Link]
[8] Yifang Ma et al., “Artificial Intelligence Applications in the Development of Autonomous Vehicles: A Survey,” IEEE/CAA Journal of
Automatica Sinica, vol. 7, no. 2, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[9] Neelma Naz et al., “Intelligence of Autonomous Vehicles: A Concise Revisit,” Journal of Sensors, pp. 1-11, 2022. [CrossRef] [Google
Scholar] [Publisher Link]
[10] Vinyas D. Sagar, and T. S. Nanjundeswaraswamy, “Artificial Intelligence in Autonomous Vehicles - A Literature Review,” i-Manager’s
Journal on Future Engineering & Technology, vol. 14, no. 3, 2019. [Google Scholar] [Publisher Link]
[11] Brian Paden et al., “A Survey of Motion Planning and Control Techniques for Self-Driving Urban Vehicles,” IEEE International
Conference on Intelligence and Safety for Robotics, 2016.[CrossRef] [Google Scholar] [Publisher Link]
[12] Yuxiao Zhang et al., “Perception And Sensing for Autonomous Vehicles Under Adverse Weather Conditions: A Survey,” ISPRS Journal
of Photogrammetry and Remote Sensing, vol. 196, pp. 146–177, 2023. [CrossRef] [Google Scholar] [Publisher Link]
[13] Jorge Vargas et al., “An Overview of Autonomous Vehicles Sensors and Their Vulnerability to Weather Conditions,” Sensors, vol. 21,
no. 16, 2021. [CrossRef] [Google Scholar] [Publisher Link]
[14] Maria Jokela, Matti Kutila, and Pasi Pyykönen, “Testing and Validation of Automotive Point Cloud Sensors in Adverse Weather
Conditions,” Applied Sciences, vol. 9, no. 11, 2019. [CrossRef] [Google Scholar] [Publisher Link]
[15] Mohammad Aldibaja et al., “Lateral Road-mark Reconstruction Using Neural Network for Safe Autonomous Driving in Snow-wet
Environments,” IEEE International Conference on Intelligence and Safety for Robotics, 2018. [CrossRef] [Google Scholar] [Publisher
Link]
[16] Marcel Sheeny et al., “RADIATE: A Radar Dataset for Automotive Perception in Bad Weather,” IEEE International Conference on
Intelligence and Safety for Robotics, pp. 1-7, 2021. [CrossRef] [Google Scholar] [Publisher Link]
[17] Yasin Almalioglu et al., “Deep Learning-Based Robust Positioning for All-Weather Autonomous Driving,” Nature Machine
Intelligence, vol. 4, pp. 749–760, 2022. [Google Scholar] [Publisher Link]
[18] Guofa Li et al., “A Deep Learning Based Image Enhancement Approach for Autonomous Driving at Night,” Knowledge-Based Systems,
2020. [CrossRef] [Google Scholar] [Publisher Link]
[19] Hazem Rashed et al., “FuseMODNet: Real-Time Camera and LiDAR-based Moving Object Detection for Robust Low-light Autonomous
Driving,” International Conference on Computer Vision Workshop, 2019. [CrossRef] [Google Scholar] [Publisher Link]
[20] Thiago Rateke, and Aldo von Wangenheim, “Road Surface Detection and Differentiation Considering Surface Damages,” Autonomous
Robots, vol. 45, pp. 299–312, 2021. [CrossRef] [Google Scholar] [Publisher Link]
[21] Hao Chen et al., “SW-GAN: Road Extraction from Remote Sensing Imagery Using Semi-Weakly Supervised Adversarial Learning,”
Remote Sensing, vol. 14, no.17, 2022. [CrossRef] [Google Scholar] [Publisher Link]
[22] Boris Bucko et al., “Computer Vision Based Pothole Detection under Challenging Conditions,” Sensors, vol. 22, no. 22, 2022. [CrossRef]
[Google Scholar] [Publisher Link]
[23] Andrés Eduardo Gómez Hernandez, Özgür Erkent, and Christian Laugier, “Recognize Moving Objects Around an Autonomous Vehicle
Considering a Deep-learning Detector Model, Dynamic Bayesian Occupancy,” International Conference on Control, Automation,
Robotics and Vision, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[24] Anagha Danglea et al., “Enhanced Colorization of Thermal Images for Pedestrian Detection using Deep Convolutional Neural Networks,”
Procedia Computer Science, vol. 218, pp. 2091–2101,2023. [CrossRef] [Google Scholar] [Publisher Link]
[25] Lin Zhao et al., “Dynamic Object Tracking for Self-Driving Cars Using Monocular Camera and LIDAR,” IEEE/RSJ International
Conference on Intelligent Robots and Systems, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[26] Abu Hasnat Mohammad Rubaiyat et al., “Multi-sensor Data Fusion for Vehicle Detection in Autonomous Vehicle Applications,” IS&T
International Symposium on Electronic Imaging Autonomous Vehicles and Machines Conference, 2018. [CrossRef] [Google Scholar]
[Publisher Link]
[27] Huihui Pan et al., “Deep Learning Based Data Fusion for Sensor Fault Diagnosis and Tolerance in Autonomous Vehicles,” Chinese
Journal of Mechanical Engineering, vol. 34, no. 72, 2021. [CrossRef] [Google Scholar] [Publisher Link]


[28] Wenwen Liu, Yuanchang Liu, and Richard Bucknall, “Filtering Based Multi-Sensor Data Fusion Algorithm for a Reliable Unmanned
Surface Vehicle Navigation,” Journal of Marine Engineering and Technology, pp. 67-83, 2022. [CrossRef] [Google Scholar] [Publisher
Link]
[29] Arnav Vaibhav Malawade, Trier Mortlock, and Mohammad Abdullah Al Faruque, “HydraFusion: Context-Aware Selective Sensor
Fusion for Robust and Efficient Autonomous Vehicle Perception,” ACM/IEEE 13th International Conference on Cyber-Physical Systems,
pp. 68-79, 2022. [CrossRef] [Google Scholar] [Publisher Link]
[30] Guohang Yan et al., “OpenCalib: A Multi-Sensor Calibration Toolbox for Autonomous Driving,” Software Impacts, vol. 14, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[31] Brahayam Ponton et al., “Efficient Extrinsic Calibration of Multi-Sensor 3D LiDAR Systems for Autonomous Vehicles using Static
Objects Information,” IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022. [CrossRef] [Google Scholar]
[Publisher Link]
[32] Haiyang Jiang, Yuanyao Lu, and Jingxuan Wang, “A Data-Driven Miscalibration Detection Algorithm for a Vehicle-Mounted
Camera,” Mobile Information Systems, 2022. [CrossRef] [Google Scholar] [Publisher Link]
[33] Tamás Wágner et al., “SPaT/MAP V2X Communication Between Traffic Light and Vehicles and a Realization with Digital Twin,”
Computers and Electrical Engineering, vol. 106, 2023. [CrossRef] [Google Scholar] [Publisher Link]
[34] Kyungtae Kim, Seokjoo Koo, and Ji-Woong Choi, “Analysis on Path Rerouting Algorithm based on V2X Communication for Traffic
Flow Improvement,” International Conference on Information and Communication Technology Convergence, pp. 251-254, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[35] Priyanka Paygude et al., “Self-Driving Electrical Car Simulation using Mutation and DNN,” SSRG International Journal of Electronics
and Communication Engineering, vol. 10, no. 6, pp. 27-34, 2023. [CrossRef] [Google Scholar] [Publisher Link]
[36] Cheng Xu et al., “A Real-Time Complex Road AI Perception Based on 5G-V2X for Smart City Security,” Wireless Communications
and Mobile Computing, 2022. [CrossRef] [Google Scholar] [Publisher Link]
[37] Pangwei Wang et al., “Real-Time Urban Regional Route Planning Model for Connected Vehicles Based on V2x Communication,”
Journal of Transport and Land Use, vol. 13, no. 1, pp. 517-538, 2020. [Google Scholar] [Publisher Link]
[38] Iftikhar Rasheed et al., “Intelligent Vehicle Network Routing with Adaptive 3D Beam Alignment for mmWave 5G-Based V2X
Communications,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 5, 2021. [CrossRef] [Google Scholar]
[Publisher Link]
[39] Long Luo et al., “Intersection-Based V2X Routing via Reinforcement Learning in Vehicular Ad Hoc Networks,” IEEE Transactions on
Intelligent Transportation Systems, vol. 23, no. 6, pp. 5446-5459, 2022. [CrossRef] [Google Scholar] [Publisher Link]
[40] Paweł Szulczyński, Dariusz Pazderski, and Krzysztof Kozłowski, “Real-Time Obstacle Avoidance using Harmonic Potential Functions,”
Journal of Automation, Mobile Robotics & Intelligent Systems, vol. 5, no. 3, 2011. [Google Scholar] [Publisher Link]
[41] Wen-Kung Tseng, and Hou-Yu Chen, “The Study of Tracking Control for Autonomous Vehicle,” SSRG International Journal of
Mechanical Engineering, vol. 7, no. 11, pp. 57-62, 2020. [CrossRef] [Publisher Link]
[42] Yongyi Li et al., “Research on Automatic Driving Trajectory Planning and Tracking Control Based on Improvement of the Artificial
Potential Field Method,” Sustainability, vol. 14, no. 19, 2022. [CrossRef] [Google Scholar] [Publisher Link]
[43] Gill et al., “A Cell Decomposition-Based Collision Avoidance Algorithm for Robot Manipulators,” Cybernetics and Systems: An
International Journal, vol. 29, no. 2, pp. 113-135, 1998. [CrossRef] [Google Scholar] [Publisher Link]
[44] Mahmoud Wahdan, and Mohamed M.Elgazzar, “Homotopy Classes and Cell Decomposition Algorithm to Path Planning for Mobile
Robot Navigation,” International Journal of New Innovations in Engineering and Technology, vol. 11, no. 3, 2019. [Google Scholar]
[Publisher Link]
[45] Hanlin Niu et al., “Voronoi-Visibility Roadmap-Based Path Planning Algorithm for Unmanned Surface Vehicles,” Journal of Navigation,
vol. 72, no. 4, pp. 850-874, 2018. [CrossRef] [Google Scholar] [Publisher Link]
[46] Qiongqiong Li et al., “Smart Vehicle Path Planning Based on Modified PRM Algorithm,” Sensors, vol. 22, no. 17, 2022. [CrossRef]
[Google Scholar] [Publisher Link]
[47] Gourav Bathla et al., “Autonomous Vehicles and Intelligent Automation: Applications, Challenges, and Opportunities,” Mobile
Information Systems, 2022. [CrossRef] [Google Scholar] [Publisher Link]
[48] Zhigang Ren et al., “Deep Neural Networks-Based Real-Time Optimal Navigation for an Automatic Guided Vehicle with Static and
Dynamic Obstacles,” Neurocomputing, vol. 443, pp. 329-344, 2021. [CrossRef] [Google Scholar] [Publisher Link]
[49] Sumana Biswas, Sreenatha G. Anavatti, and Matthew A. Garratt, “Multiobjective Mission Route Planning Problem: A Neural Network-
Based Forecasting Model for Mission Planning,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no.1, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[50] Ferenc Hegedüs et al., “Motion Planning for Highly Automated Road Vehicles with a Hybrid Approach Using Nonlinear Optimization
and Artificial Neural Networks,” Journal of Mechanical Engineering, vol. 65, pp. 148-160, 2019.[Google Scholar] [Publisher Link]


[51] Yu Wu et al., “A Hybrid Particle Swarm Optimization-Gauss Pseudo Method Forreentry Trajectory Optimization of Hypersonic Vehicle
Withnavigation Information Model,” Aerospace Science and Technology, vol. 118, 2021. [CrossRef] [Google Scholar] [Publisher Link]
[52] Xinghai Guo et al., “Global Path Planning And Multi-Objective Path Control for Unmanned Surface Vehicle Based on Modified Particle
Swarm Optimization Algorithm,” Ocean Engineering, vol. 216, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[53] Qinghua Mao et al., “Deviation Correction Path Planning Method of Full-Width Horizontal Axis Roadheader based on Improved Particle
Swarm Optimization Algorithm,” Mathematical Problems in Engineering, 2023. [CrossRef] [Google Scholar] [Publisher Link]
[54] Stefano Arrigoni et al., “Non-linear MPC Motion Planner for Autonomous Vehicles based on Accelerated Particle Swarm Optimization
Algorithm,” AEIT International Conference of Electrical and Electronic Technologies for Automotive, pp. 1-6, 2019. [CrossRef] [Google
Scholar] [Publisher Link]
[55] Qi Song et al., “Dynamic Path Planning for Unmanned Vehicles Based on Fuzzy Logic and Improved Ant Colony Optimization,” IEEE
Access, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[56] Long Chen et al., “Conditional DQN-Based Motion Planning with Fuzzy Logic for Autonomous Driving,” IEEE Transactions on
Intelligent Transportation Systems, vol. 23, no. 4, pp. 2966-2977, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[57] Mohammed A. H. Ali et al., “Autonomous Road Roundabout Detection and Navigation System for Smart Vehicles and Cities using
Laser Simulator–Fuzzy Logic Algorithms and Sensor Fusion,” Sensors, vol. 20, no. 13, 2020. [CrossRef] [Google Scholar] [Publisher
Link]
[58] Intellias Global Technology Partners, How Autonomous Vehicles Sensors Fusion Helps Avoid Deaths, Intellias Blog, 2018. [Publisher
Link]
[59] Linhui Xiao et al., “Dynamic-Slam: Semantic Monocular Visual Localization and Mapping Based on Deep Learning in Dynamic
Environment,” Robotics and Autonomous Systems, vol. 117, pp. 1–16, 2019. [CrossRef] [Google Scholar] [Publisher Link]
[60] Yongbao Ai et al., “DDL-SLAM: A Robust RGB-D SLAM in Dynamic Environments Combined with Deep Learning,” IEEE Access,
vol. 8, pp. 162335-162342, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[61] Yi An et al., “Visual-LiDAR SLAM Based on Unsupervised Multi-Channel Deep Neural Networks,” Cognitive Computation, vol. 14,
pp. 1496–1508, 2022. [CrossRef] [Google Scholar] [Publisher Link]
[62] Muhammad Sualeh, and Gon-Woo Kim, “Semantics Aware Dynamic SLAM Based on 3D MODT,” Sensors, vol. 21, no. 19, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[63] N. Botteghi et al., “Reinforcement Learning Helps Slam: Learning to Build Maps,” International Archives of the Photogrammetry,
Remote Sensing and Spatial Information Sciences, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[64] Manasa R, K. Karibasappa, and J Rajeshwari, "Autonomous Path Finder and Object Detection using an Intelligent Edge Detection
Approach," SSRG International Journal of Electrical and Electronics Engineering, vol. 9, no. 8, pp. 1-7, 2022. [CrossRef] [Publisher
Link]
[65] Shuhuan Wen et al., “Path Planning for Active Slam Based on Deep Reinforcement Learning Under Unknown Environments,” Intelligent
Service Robotics, vol. 13, pp. 263-272, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[66] M. Karthikeya, S. Sathiamoorthy, and M. Vasudevan, “Lane Keep Assist System for an Autonomous Vehicle Using Support Vector
Machine Learning Algorithm,” Innovative Data Communication Technologies and Application, vol. 46, pp. 101-108, 2020. [CrossRef]
[Google Scholar] [Publisher Link]
[67] Yougang Bian et al., “An Advanced Lane-Keeping Assistance System with Switchable Assistance Modes,” IEEE Transactions on
Intelligent Transportation Systems, vol. 21, no. 1, pp. 385-396, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[68] Xingyu Zhou et al., “Individualizable Vehicle Lane Keeping Assistance System Design: A Linear- Programming-Based Model Predictive
Control Approach,” IFAC PapersOnLine, vol. 55, no. 37, pp. 518–523, 2022. [CrossRef] [Google Scholar] [Publisher Link]
[69] Xie Bangquan, and Weng Xiao Xiong, “Real-Time Embedded Traffic Sign Recognition Using Efficient Convolutional Neural Network,”
IEEE Access, vol. 7, pp. 53330-53346, 2019. [CrossRef] [Google Scholar] [Publisher Link]
[70] Danyah A. Alghmghama et al., “Autonomous Traffic Sign (ATSR) Detection and Recognition using Deep CNN,” Procedia Computer
Science, vol. 163, pp. 266–274, 2019. [CrossRef] [Google Scholar] [Publisher Link]
[71] Tran Ngoc Son, and Lai Khac Lai, “Research on Predictive Control for the Damping System of Autonomous Vehicles in the Public
Transport on the Basis of Artificial Intelligence,” SSRG International Journal of Electronics and Communication Engineering, vol. 10,
no. 3, pp. 1-9, 2023. [CrossRef] [Publisher Link]
[72] Yuan Lin, John McPhee, and Nasser L. Azad, “Comparison of Deep Reinforcement Learning and Model Predictive Control for Adaptive
Cruise Control,” IEEE Transactions on Intelligent Vehicles, vol. 6, no. 2, pp. 221-231, 2021. [CrossRef] [Google Scholar] [Publisher
Link]
[73] Zifei Nie, and Hooman Farzaneh, “Adaptive Cruise Control for Eco-Driving Based on Model Predictive Control Algorithm,” Applied
Sciences, vol. 10, no. 15, 2020. [CrossRef] [Google Scholar] [Publisher Link]


[74] Javier Bas et al., “Policy and Industry Implications of the Potential Market Penetration of Electric Vehicles with Eco-Cooperative
Adaptive Cruise Control,” Transportation Research Part A: Policy and Practice, vol. 164, pp. 242–256, 2022. [CrossRef] [Google
Scholar] [Publisher Link]
[75] Vahid Behzadan, and Arslan Munir, “Adversarial Reinforcement Learning Framework for Benchmarking Collision Avoidance
Mechanisms in Autonomous Vehicles,” IEEE Intelligent Transportation Systems Magazine, vol. 13, no. 2, pp. 236-241, 2021. [CrossRef]
[Google Scholar] [Publisher Link]
[76] Myounghoe Kim, Seongwon Lee, and Jaehyun Lim, “Unexpected Collision Avoidance Driving Strategy Using Deep Reinforcement
Learning,” IEEE Access, vol. 8, pp. 17243-17252, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[77] Xiangkun He et al., “Emergency Steering Control of Autonomous Vehicle for Collision Avoidance and Stabilization,” International
Journal of Vehicle Mechanics and Mobility, vol. 57, no. 8, 2018. [CrossRef] [Google Scholar] [Publisher Link]
[78] Kasper Socha, Markus Borg, and Jens Henriksson. “SMIRK: A Machine Learning-Based Pedestrian Automatic Emergency Braking
System with a Complete Safety Case,” Software Impacts, vol. 13, 2022. [CrossRef] [Google Scholar] [Publisher Link]
[79] Zhaomeng Chen et al., “A Novel Emergency Braking Control Strategy for Dual-Motor Electric Drive Tracked Vehicles Based on
Regenerative Braking,” Applied Sciences, vol. 9, no. 12, 2019. [CrossRef] [Google Scholar] [Publisher Link]
[80] Hongyuan Mu et al., “An Autonomous Emergency Braking Strategy Based on Non-Linear Model Predictive Deceleration Control,” IET
Intelligent Transport Systems, 2022. [CrossRef] [Google Scholar] [Publisher Link]
[81] Kevin Tirta Wijaya et al., “Vision-Based Parking Assist System with Bird-Eye Surround Vision for Reverse Bay Parking Maneuver
Recommendation,” International Electronics Symposium, pp. 102-107, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[82] Jie Song et al., “Laser-Based Slam Automatic Parallel Parking Path Planning and Tracking for Passenger Vehicle,” IET Intelligent
Transport Systems, vol. 13, no. 10, pp. 1557-1568, 2019. [CrossRef] [Google Scholar] [Publisher Link]
[83] Donghwoon Kwon et al., “A Study on Development of the Camera-Based Blind Spot Detection System Using the Deep Learning
Methodology,” Applied Sciences, vol. 9, no. 14, 2019. [CrossRef] [Google Scholar] [Publisher Link]
[84] R. Manasa et al., “Adaptive Learning of Radial Basis Function Neural Networks Based on Traffic Sign Recognition using Principal
Component Analysis,” SSRG International Journal of Electronics and Communication Engineering, vol. 10, no. 6, pp. 1-6, 2023.
[CrossRef] [Publisher Link]
[85] Hongjun Lee, Moonsoo Ra, and Whoi-Yul Kim, “Nighttime Data Augmentation Using GAN for Improving Blind-Spot Detection,” IEEE
Access, vol. 8, pp. 48049-48059, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[86] McKinsey & Company, Autonomous Driving’s Future: Convenient and Connected, 2023. [Online]. Available:
https://www.mckinsey.com/industries/automotive-and-assembly/our-insights
[87] Promita Maitra et al., “Introducing Autonomous Car Methodology in WSN,” International Journal of Computer & Organization Trends,
vol. 5, no. 1, pp. 51-54, 2015. [CrossRef] [Google Scholar] [Publisher Link]
[88] Darsh Parekh et al., “A Review on Autonomous Vehicles: Progress, Methods and Challenges,” Electronics, vol. 11, no. 14, 2022.
[CrossRef] [Google Scholar] [Publisher Link]
[89] Kareem Othman, “Public Acceptance and Perception of Autonomous Vehicles: A Comprehensive Review,” AI and Ethics, vol. 1, pp.
355-387, 2021. [CrossRef] [Google Scholar] [Publisher Link]
[90] Tesla, Autopilot, 2022. [Online]. Available: https://www.tesla.com/autopilot/
[91] General Motors, Super Cruise, 2022. [Online]. Available: https://www.gm.com/gmsafetytechnology/super-cruise.html
[92] Audi, Driver Assistance Systems Retrieved, 2022. [Online]. Available: https://www.audi-mediacenter.com/en/audi-technology-lexicon-
7180/driver-assistance-systems-7184
[93] BMW, Driver Assistance Systems, 2022. [Online]. Available: https://www.bmw.com/en/innovation/driver-assistance.html
[94] Mercedes-Benz, Driver Assistance Systems, 2022. [Online]. Available: https://www.mercedes-benz.com/en/innovation/driving-
assistance-systems/
[95] Manzoor Ahmed Khan et al., “Level-5 Autonomous Driving-Are We There Yet? A Review of Research Literature,” ACM Computing
Surveys, vol. 55, no. 2, pp. 1-38, 2022. [CrossRef] [Google Scholar] [Publisher Link]
[96] Autonomous/Driverless Car Market - Growth, Trends, COVID-19 Impact, and Forecast, 2021. [Google Scholar] [Publisher Link]
[97] Self-driving Cars Market by Component (Radar, LiDAR, Ultrasonic, & Camera Unit), Vehicle (Hatchback, Coupe & Sports Car, Sedan,
SUV), Level of Autonomy (L1, L2, L3, L4, L5), Mobility Type, EV and Region - Global Forecast to 2030. 2022. [Online]. Available:
https://www.marketresearch.com/MarketsandMarkets-v3719/Self-driving-Cars-Component-Radar-30653755/
[98] Ali R. Abdellah et al., “Deep Learning for Predicting Traffic in V2X Networks,” Applied Sciences, vol. 12, no. 19, 2022. [CrossRef]
[Google Scholar] [Publisher Link]
[99] Ankur Saharia, and Rishi Sarswat, “Evolution of Autonomous Cars,” SSRG International Journal of Electronics and Communication
Engineering, vol. 3, no. 5, pp. 7-12, 2016. [CrossRef] [Publisher Link]
[100] Pengwei Wang et al., “Obstacle Avoidance Path Planning Design for Autonomous Driving Vehicles Based on an Improved Artificial
Potential Field Algorithm,” Energies, vol. 12, no. 12, 2019. [CrossRef] [Google Scholar] [Publisher Link]


[101] Julio A. Placed, and José A. Castellanos, “A Deep Reinforcement Learning Approach for Active SLAM,” Applied Sciences, vol. 10, no.
23, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[102] Jingwei Cao et al., “Improved Traffic Sign Detection and Recognition Algorithm for Intelligent Vehicles,” Sensors, vol. 19, no. 18, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[103] Yiming Zhao et al., “Camera-Based Blind Spot Detection with a General Purpose Lightweight Neural Network,” Electronics, vol. 8, no.
2, 2019. [CrossRef] [Google Scholar] [Publisher Link]
