
Deep Learning Based Maneuvering for Automated Vehicles in GPS/Communication Denied Environment⋆

No Author Given

No Institute Given

Abstract. Autonomous vehicles (AVs) that operate without human intervention are known for enhancing road safety and reducing traffic congestion. Current AV systems rely heavily on external communication. Our research aims to develop a deep learning-based AI agent capable of performing essential maneuvers. By collecting human driving behavior and sensory data, the AI agent learns to execute maneuvers without external communication, relying instead on an Inertial Navigation System (INS). The AI agent's performance is evaluated in dynamic environments using MATLAB® simulation. The results show that the system can navigate safely by maintaining a Constant Time Headway (CTH). This approach advances AV technology by providing reliability in GPS/communication-denied environments and adaptability in dynamic, complex scenarios.

Keywords: Autonomous Vehicle · Maneuvering · Constant Time Headway (CTH)

1 Introduction
Autonomous vehicles (AVs) or smart vehicles are self-driving vehicles that use
technologies like sensors, actuators, artificial intelligence, and computer vision
to navigate through diverse traffic scenarios without any human intervention.
AVs contribute to reducing road accidents, alleviating traffic congestion, and
mitigating air pollution. Technologies such as ADAS (advanced driver-assistance
systems) and collision avoidance systems are preferred for energy efficiency, improved
convenience, and boosting the productivity of AVs [1]. The calculated modifications
made to a vehicle's path, direction, and speed to achieve a desired result
are called maneuvering. Maneuvering is crucial for AVs to navigate effectively
through diverse traffic scenarios. Current systems rely heavily on Vehicle-to-
Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communications to perform
these maneuvers, but these methods are not always reliable, especially in GPS
or communication denied environments such as tunnels, space, underwater, etc.
This gap highlights the need for AVs to function autonomously without any
external communication, ensuring safety and effectiveness even in challenging
scenarios.

Supported by organization x.

Building a deep learning-based module capable of learning from real-world


traffic data is a challenging task, especially in GPS-denied environments without
any communication. This involves using sensors and advanced algorithms such
as Simultaneous Localization and Mapping (SLAM) to dynamically map the
environment and navigate. The proposed module takes data from vehicle-mounted
sensors, together with real-world traffic data and human driving behavior data,
and outputs maneuvering protocols. This sensor data, filtered by a Kalman filter
and fused through data fusion techniques, informs maneuvering protocols
based on detected obstacles, free space, speed of neighboring cars, and discipline
constraints. These protocols encompass maneuvers like merging, lane changing,
splitting and joining, following, overtaking, and stopping. The primary challenge
lies in enabling autonomous vehicles (AVs) to accurately detect other vehicles
and their surroundings, making intelligent, real-time decisions without relying on
GPS or communication networks. By simulating human driving behaviors and
integrating them into AV systems, we can develop more adaptive maneuvering
capabilities.
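The Kalman filtering mentioned above can be sketched in miniature. The following is an illustrative one-dimensional filter in Python, not the paper's actual implementation; the process and measurement noise values q and r are arbitrary assumptions chosen for the example.

```python
# Minimal 1-D Kalman filter sketch for smoothing a noisy sensor reading.
# The noise parameters (q, r) and initial state (x0, p0) are illustrative only.
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: state assumed constant, add process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update estimate with measurement residual
        p = (1 - k) * p           # update error covariance
        estimates.append(x)
    return estimates
```

Fed a stream of noisy range readings, the estimate converges toward the underlying value, which is the smoothing behavior the fusion pipeline relies on.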
Our research aims to develop an AI agent that can perform essential maneuvers
in GPS/communication-denied environments using SLAM. The agent will be
modular, incorporating modules for different maneuvers such as lane changing,
overtaking, following, merging, joining, splitting, U-turn, and parking.
We propose to collect and analyze data on human driving behavior and dynamic
traffic patterns to train this AI-agent. The goal is to create a system that can
independently execute maneuvers using sensor inputs and without relying on
external communication. This approach not only fills a significant gap in current
AV technology but also enhances the reliability and safety of AVs in real-world
scenarios where communication infrastructure is limited or unavailable.
In our AV system, inputs from various sources are integrated to inform the decision-
making process, guiding the vehicle through complex traffic scenarios. These
inputs encompass human driving behavior data, sensory data capturing the en-
vironment, traffic condition data, as well as parameters such as vehicle model
and time and distance headway, adhering to the safety rule of Constant Time
Headway (CTH). Our system generates outputs that dictate the vehicle’s ma-
neuvers, ensuring it maintains a safe temporal and spatial distance from other
vehicles at all times. By prioritizing safety principles like CTH, our system en-
deavors to navigate in real-time through diverse traffic scenarios while upholding
the highest standards of safety and performance.
Our contribution employs MATLAB® Automated Driving Toolbox™ to sim-
ulate our project. We start by creating a driving scenario, placing an ego vehicle,
and defining the road layout, barriers, and other vehicles. We equip the ego vehi-
cle with an Inertial Navigation System (INS) sensor, taking into account distance
and time headway. Additionally, we obtain human driving behavior data from
real-time videos of human driving. A while loop runs continuously to generate
sensor data until the maneuver is complete. Inside the loop, we calculate the
positions of targets relative to the ego vehicle, get the current state of the ego

vehicle, and set up variables to store different types of sensor data. Each sensor
generates obstacle data, which we gather into a single data structure (‘allData‘).
This dataset, combined with the human driving behavior data, will be used
further in our project to train the AI agent.
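The data-collection loop described above can be mocked up as follows. This is a hedged Python sketch of the loop structure only; read_ego_state and read_sensor are hypothetical stand-ins for the MATLAB Automated Driving Toolbox calls, and all numbers are arbitrary.

```python
import random

# Illustrative mock-up of the sensor-collection loop: run until the maneuver
# (here, a fixed duration) is complete, gathering per-step data into one
# structure, analogous to the paper's `allData`.
def read_ego_state(t):
    # hypothetical ego state: constant-speed motion along the x-axis
    return {"time": t, "position": (5.0 * t, 0.0), "speed": 5.0}

def read_sensor(ego_state):
    # pretend the sensor reports one obstacle at a noisy relative distance
    gap = 30.0 - ego_state["position"][0]
    return {"obstacle_distance": gap + random.uniform(-0.1, 0.1)}

def run_scenario(duration=5, dt=1):
    all_data = []
    t = 0
    while t < duration:                 # loop until the maneuver is complete
        ego = read_ego_state(t)
        detections = read_sensor(ego)
        all_data.append({"ego": ego, "detections": detections})
        t += dt
    return all_data
```

The resulting list of per-step records is the kind of dataset that, combined with human driving behavior data, would be used to train the agent.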
Ultimately, this research advances the state-of-the-art in autonomous vehicle
technology by integrating all maneuvers into a single model. It operates effec-
tively in GPS or communication-denied environments and can navigate safely in
traffic conditions with both human drivers and automated vehicles.

Fig. 1. Automated driving simulation on MATLAB incorporating Ego and Actor ve-
hicles.

2 Literature Review

Previously, a lot of research has been done on smart-car maneuvering, in which
artificial intelligence, especially deep learning algorithms, has played a major role.
One such study addresses overtaking as a maneuver, using ADAS features such as
ACC (adaptive cruise control) and a TIS (technology-independent sensor) to
minimize the risk of collision. Simulation shows safe overtaking, but the model
was trained only on highways, and simulation models do not always behave the
same in the real world [2]. Another paper discusses safe and comfortable driving
in diverse and unpredictable conditions. Previous approaches had limitations when
it came to unpredictable conditions, so Chowdhury et al. proposed combining a
predictive maneuver plan (PMP) with deep reinforcement learning (DRL) to handle
the randomness of real-world scenarios [3]. In testing it worked well, but it needs
a lot of training data and is complex.
Since reinforcement learning (RL) controllers normally offer no safety guarantees,
[4] proposed a method called SECRM (a safe, efficient, comfortable RL-based
car-following model) that balances traffic-efficiency maximization and jerk
minimization. This model helps maintain a target speed in both regular and
emergency braking test scenarios while offering a safety guarantee. In one study the

author addresses the car-following maneuver by combining RL and MPC (model
predictive control) to follow a planned path and maintain a safe distance from
the followed vehicle in dynamic road scenarios [5]. This technique has proven
effective in control and safety and is able to find optimal solutions, but it is
computationally expensive and complex to train.
As the number of vehicles has vastly grown, traffic conditions have also become
complex. One study uses ADAS to identify lane changing and lane keeping [6].
The results show that, based on acceleration features, the classification accuracy
for lane-changing-to-right, lane-keeping, and lane-changing-to-left exceeds 96
percent. Another work addresses the challenge of parking in a crowded parking
lot [7], using positioning, perception, maneuvering, and decision-making algorithms
with only a single LiDAR sensor. However, LiDAR cannot perceive every obstacle,
and it is not reliable in adverse weather conditions.
To drive safely a vehicle needs to be localized, but since GPS is not reliable,
[8] proposed a method combining a state-space model and a deep learning model
to localize the vehicle where there is no access to GPS. The state-space model
captures the dynamics of the vehicle and its environment, while the deep learning
model learns the relationship between the vehicle state and the localization
error. As cities are complex and ever-changing, Mirowski et al. introduced a way
to teach an agent to navigate long distances in cities without maps using deep
reinforcement learning (DRL). The approach uses a dual-pathway architecture
that encapsulates locale-specific features recognizing landmarks of a particular
place, while also learning generalized policies that transfer across cities [9].

3 Methodology
3.1 Dataset Collection
The development of the AI-agent begins with gathering datasets from sensors,
real-world traffic, and human driving behaviors, which is crucial for training
the agent to maneuver effectively under complex scenarios. The collected data
undergoes preprocessing to ensure it is suitable for use, which involves trans-
forming data from different sources like sensors and maps, and dividing it into
training and testing subsets. Once preprocessing is complete, we apply different
algorithms to train the model, including finite state machines (FSMs), decision
trees, Kalman filters, PID controllers, Model Predictive Control (MPC),
Simultaneous Localization and Mapping (SLAM), deep neural networks (DNNs),
reinforcement learning, and Advanced Driver Assistance Systems (ADAS).
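The division into training and testing subsets mentioned above can be sketched as follows. This is an illustrative Python snippet; the 80/20 ratio is a common default, not a figure taken from the paper.

```python
import random

# Shuffle the collected samples and split them into training and testing
# subsets. Seeding makes the split reproducible across runs.
def split_dataset(samples, train_fraction=0.8, seed=42):
    rng = random.Random(seed)
    shuffled = samples[:]           # copy, so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]
```

Shuffling before splitting matters here because driving data is collected sequentially, and a plain head/tail split would put entire scenarios in only one subset.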

3.2 Algorithm Selection


Algorithm selection depends on the specific maneuver being developed. For in-
stance, a lane-following maneuver might combine FSMs, PID controllers, and

Fig. 2. Bird’s Eye View plot from Vehicle-Mounted Camera.

DNNs, whereas a parking maneuver could use MPC, SLAM algorithms, and
reinforcement learning. The trained models are then simulated in a digital en-
vironment that represents the vehicle and its surroundings, testing the system
under various scenarios and conditions. The final phase involves implementing
the algorithms on the vehicle’s hardware and sensor systems, followed by real-
world testing to evaluate performance on actual roads.

3.3 State-of-The-Art
The state-of-the-art in autonomous vehicle technology is evolving to encompass
a range of maneuvers and address challenges in GPS-denied and communication
denied environments. While previous literature may not cover all maneuvers
or focus on such environments, our project targets these specific complexities.
We plan to develop a system that caters to environments like tunnels, space,
underwater, or anywhere GPS and communication are limited or absent.
Moreover, our model is uniquely trained to handle traffic scenarios involving both
human drivers and automated vehicles, incorporating data on human driving
behavior, sensory inputs, and diverse traffic conditions. This approach ensures
that our AI agent can navigate safely through environments with a mix of human
and automated vehicles, contributing to the advancement of autonomous driving
technology in real-world scenarios where traditional methods may fall short.

3.4 Proposed Approach


The proposed approach involves developing a black box system that processes
inputs to generate desired outputs, facilitating autonomous vehicle maneuvering.
Our inputs encompass human driving behavior data, sensory data, and traffic
condition data, along with initialized time and distance headway and the ego
vehicle model. Once these inputs are acquired, we train our model using selected
algorithms to produce maneuver modules that dictate when to perform specific
maneuvers. These modules adjust the speed, distance, and dimensions of the

ego vehicle to navigate safely based on the maneuver modules. Sensory data on
traffic conditions distinguishes between free spaces and obstacles, providing de-
tails on obstacle distances relative to the ego vehicle. Information from the ego
vehicle, including its position, heading, speed, and dimensions, is integrated into
the model.

Fig. 3. Proposed Approach Schematic

3.5 Problem Formulation

Assumptions on Driving Dynamics: In developing our autonomous vehicle


system, we make several key assumptions about driving dynamics to simplify and
standardize the modeling process.
Assumption 1 is that all obstacles encountered on the road are cars,
whether they are driven by humans or automated systems. These cars are as-
sumed to travel at the same speed and in the same direction, ensuring a uniform
traffic flow and reducing the complexity involved in predicting and responding
to the behaviors of other vehicles.
Assumption 2 is that our ego vehicle, which is the autonomous vehicle
being controlled, is always visible to all other human drivers and automated vehi-
cles on the road. This visibility assumption is crucial for ensuring that other road
users can respond appropriately to the ego vehicle’s maneuvers, thereby enhanc-
ing overall safety and coordination in mixed-traffic scenarios. These assumptions
help create a controlled environment to refine the autonomous vehicle’s decision-
making and maneuver execution.

Headway Initialization Suppose two vehicles, x and y, are driving in the
same direction along the same lane. Let x precede y at time t. The distance
between x and y at time t is denoted d(t), and the speed of y at time t is
denoted vy(t). We define the time headway of y relative to x at time t as
δ(t) = d(t)/vy(t). If δ(t) is no less than a given constant ∆∗ > 0, called the desired
time headway, we say the pair (x, y) is CTH-∆∗ safe at time t. In other words,
if d(t) ≥ vy(t)∆∗, then (x, y) is CTH-∆∗ safe at t [10].

Assumption 3 states that every pair of consecutive vehicles in a sequence of
n vehicles is CTH-∆∗ safe at the initial time t0.

To determine ∆∗, consider a lane with a minimum speed limit vmin and a
maximum speed limit vmax. We set ∆∗ = (∆∗∗ vmin + D0)/vmin, where ∆∗∗ is the
maximum time needed to stop a vehicle from any speed v ≤ vmax using emergency
braking (which differs from normal deceleration but should be consistent), and
D0 is the maximum vehicle length. This way, the CTH-∆∗ safety rule ensures
that y will never collide with x, even if x stops suddenly at any time [10].
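The definitions above translate directly into code. The following is a hedged Python sketch (the paper's own implementation is in MATLAB); function and argument names are ours, and delta_ss stands for ∆∗∗.

```python
def time_headway(d, v_y):
    """delta(t) = d(t) / v_y(t): time for following vehicle y to cover gap d."""
    return d / v_y

def is_cth_safe(d, v_y, delta_star):
    """Pair (x, y) is CTH-Delta* safe iff d(t) >= v_y(t) * Delta*."""
    return d >= v_y * delta_star

def desired_time_headway(delta_ss, d0, v_min):
    """Delta* = (Delta** * v_min + D0) / v_min, i.e. Delta** + D0 / v_min,
    where delta_ss is the maximum emergency-braking time and d0 the
    maximum vehicle length."""
    return (delta_ss * v_min + d0) / v_min
```

For instance, with a 2 s maximum braking time, a 5 m maximum vehicle length, and a 10 m/s minimum speed limit, the desired time headway ∆∗ comes out to 2.5 s, so a 10 m/s follower needs at least a 25 m gap to be CTH-∆∗ safe.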

3.6 Performing Maneuvers


Although we plan to incorporate a number of maneuvers, so far we have implemented
three: the lane change, overtake, and follow maneuvers.

Lane Change Maneuver Suppose two vehicles x and y are driving in the same
direction along the same lane with velocities vx(t) and vy(t) at a certain time
t. d(t) is the distance between the two vehicles, and DH is the required distance
headway at vx(t). If at some point vx(t) becomes greater than vy(t), d(t) becomes
equal to DH, and the time headway satisfies δ(t) ≥ ∆∗, where
∆∗ = (∆∗∗ vmin + D0)/vmin, then the 'Lane Change' maneuver is executed.

Fig. 4. Ego Vehicle performing lane change Maneuver in MATLAB simulation.

Conditions for lane change

– vx(t) > vy(t): The Ego vehicle is traveling faster than the Actor vehicle.
– d(t) = DH(vx): The distance between the vehicles equals the required
headway distance at the current speed of the Ego vehicle.
– δ(t) ≥ ∆∗: The time headway condition must be satisfied, where
∆∗ = (∆∗∗ vmin + D0)/vmin.


Follow Maneuver If vx(t) remains smaller than vy(t), the distance between
the vehicles d(t) remains greater than the required DH(vx), and the time headway
δ(t) remains greater than ∆∗, then the 'Follow' maneuver is executed.

Fig. 5. Ego Vehicle performing follow Maneuver in MATLAB simulation.

Conditions for following

– vx(t) < vy(t): The Ego vehicle is traveling slower than the Actor vehicle.
– d(t) > DH(vx): The distance between the vehicles is greater than the
required headway distance at the current speed of the Ego vehicle.
– δ(t) > ∆∗: There is a sufficient time gap between the vehicles to ensure
safety while following.

Overtake Maneuver If vx(t) is greater than vy(t), the distance d(t) equals
the required DH(vx) at the current speed of the ego vehicle, and the time headway
δ(t) remains greater than or equal to ∆∗, then the 'Overtake' maneuver is executed.

Fig. 6. Ego Vehicle performing overtake Maneuver in MATLAB simulation.

Conditions for Overtaking

– vx(t) > vy(t): The Ego vehicle is traveling faster than the Actor vehicle.
– d(t) = DH(vx): The distance between the vehicles equals the required
headway distance at the current speed of the Ego vehicle.
– δ(t) ≥ ∆∗: The current time gap between the vehicles meets or exceeds the
minimum required safety threshold, ensuring a safe maneuver.
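The three condition sets above can be sketched as a single decision function. This is a hedged Python illustration; the tolerance eps (standing in for "d(t) becomes equal to DH" in discrete time) is our assumption, and since the lane-change and overtake triggers coincide in the text, the choice between them is left to the active maneuver module.

```python
def choose_maneuver(v_x, v_y, d, dh, delta_star):
    """Sketch of the maneuver-selection conditions.
    v_x, v_y : speeds of ego (x) and lead actor (y), v_y > 0 assumed;
    d : current gap; dh : required distance headway at v_x;
    delta_star : desired time headway Delta*."""
    eps = 0.5                       # illustrative tolerance, not from the paper
    delta = d / v_y                 # current time headway delta(t)
    if v_x < v_y and d > dh and delta > delta_star:
        return "follow"
    if v_x > v_y and abs(d - dh) <= eps and delta >= delta_star:
        # lane change and overtake share this trigger; the active
        # maneuver module decides which of the two is executed
        return "lane_change_or_overtake"
    return "keep_state"
```

Keeping the decision logic in one pure function like this mirrors the state-machine view of Fig. 7 and makes each transition condition directly testable.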

Fig. 7. State Machines for Maneuvers

4 Results

The data obtained from sensors and human driving behavior show that if the
ego vehicle moves at a certain speed behind an actor vehicle with a certain
distance in between, its time and distance headway depend on its speed. The
results show that as the velocity of the ego vehicle increases, the time and
distance headway also increase; similarly, when the velocity decreases, the time
and distance headway decrease.

Fig. 8. Time Headway (s) and Distance Headway (m) vs Velocity (m/s)

Fig. 9. Time Headway vs Distance Headway

From this we also conclude that Time Headway and Distance Headway are
directly proportional to each other: when one increases, the other also increases,
and vice versa.
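The proportionality follows directly from the headway definition: since δ(t) = d(t)/vy(t), the distance headway realizing a given time headway at speed v is d = v·δ. A one-line Python check (illustrative only):

```python
def distance_headway(v, time_headway):
    """From delta(t) = d(t) / v_y(t): the gap d that realizes a given
    time headway at speed v is d = v * delta."""
    return v * time_headway
```

At a fixed speed, increasing the time headway increases the distance headway in direct proportion, which is the relationship seen in Fig. 9.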
The sensor data generator produced data for the driving scenario, providing
distinct outputs for objects, lanes, point clouds, and INS measurements. This
perception gave a comprehensive view of the challenging environment with multiple
road segments, barriers, and various actors. Successfully executing maneuvers
allowed us to test the sensors in tough environments. Moreover, we incorporated
human driving data to understand and model how human drivers perform ma-
neuvers in various situations. By combining this human driving data with our
vehicle model and the defined time headway parameters, we aimed to generate
realistic and dynamic vehicle behaviors.
Moreover, the use of INS supports accurate navigation in GPS- and
communication-limited or denied environments. There is also no danger of packet
loss, which helps accurate information reach the receiver.

5 Conclusion
In conclusion, our autonomous vehicle system successfully combined sensory
data, human driving behavior, and robust algorithmic processing to navigate
complex traffic environments with high accuracy and safety. The integration of
data from sensors, such as the INS sensor, and real-time human driving videos al-
lowed us to model realistic driving behaviors and scenarios. Our proposed model
can be integrated with multiple maneuvering modules, and it is completely
independent of GPS/communication, which enables AVs to operate in environments
without V2V, V2I, or radio communication.

References
1. Hanan Rizk, Ahmed Chaibet, and Ali Kribèche. Model-based control and model-
free control techniques for autonomous vehicles: A technical survey. Applied Sci-
ences, 13(11):6700, 2023.

2. Josue Ortega, Henrietta Lengyel, and Jairo Ortega. Design and analysis of the
trajectory of an overtaking maneuver performed by autonomous vehicles oper-
ating with advanced driver-assistance systems (adas) and driving on a highway.
Electronics, 12(1):51, 2022.
3. Jayabrata Chowdhury, Vishruth Veerendranath, Suresh Sundaram, and
Narasimhan Sundararajan. Predictive maneuver planning with deep rein-
forcement learning (pmp-drl) for comfortable and safe autonomous driving. arXiv
preprint arXiv:2306.09055, 2023.
4. Omar ElSamadisy, Tianyu Shi, Ilia Smirnov, and Baher Abdulhai. Safe, efficient,
and comfortable reinforcement-learning-based car-following for avs with an ana-
lytic safety guarantee and dynamic target speed. Transportation research record,
2678(1):643–661, 2024.
5. Liwen Wang, Shuo Yang, Kang Yuan, Yanjun Huang, and Hong Chen. A combined
reinforcement learning and model predictive control for car-following maneuver of
autonomous vehicles. Chinese Journal of Mechanical Engineering, 36(1):80, 2023.
6. Yuming Wu, Lei Zhang, Ren Lou, and Xinghua Li. Recognition of lane changing
maneuvers for vehicle driving safety. Electronics, 12(6):1456, 2023.
7. Felipe Jiménez, Miguel Clavijo, and Alejandro Cerrato. Perception, positioning and
decision-making algorithms adaptation for an autonomous valet parking system
based on infrastructure reference points using one single lidar. Sensors, 22(3):979,
2022.
8. Feihu Zhang, Zhiliang Wang, Yaohui Zhong, and Liyuan Chen. Localization er-
ror modeling for autonomous driving in gps denied environment. Electronics,
11(4):647, 2022.
9. Piotr Mirowski, Matt Grimes, Mateusz Malinowski, Karl Moritz Hermann, Keith
Anderson, Denis Teplyashin, Karen Simonyan, Andrew Zisserman, Raia Hadsell,
et al. Learning to navigate in cities without a map. Advances in neural information
processing systems, 31, 2018.
10. Xueli Fan, Qixin Wang, and Jie Liu. A reliable wireless protocol for highway
and metered-ramp cav collaborative merging with constant-time-headway safety
guarantee. ACM Transactions on Cyber-Physical Systems, 7(4):1–26, 2023.
