Explainable AI for Autonomous Vehicles: Concepts, Challenges, and
Applications is a comprehensive guide to developing and applying explainable
artificial intelligence (XAI) in the context of autonomous vehicles. It begins with
an introduction to XAI and its importance in developing autonomous vehicles.
It also provides an overview of the challenges and limitations of traditional
black-box AI models and how XAI can help address these challenges by
providing transparency and interpretability in the decision-making process of
autonomous vehicles. The book then covers the state-of-the-art techniques and
methods for XAI in autonomous vehicles, including model-agnostic approaches,
post-hoc explanations, and local and global interpretability techniques. It also
discusses the challenges and applications of XAI in autonomous vehicles, such
as enhancing safety and reliability, improving user trust and acceptance, and
enhancing overall system performance. Ethical and social considerations are
also addressed in the book, such as the impact of XAI on user privacy and
autonomy and the potential for bias and discrimination in XAI-based systems.
Furthermore, the book provides insights into future directions and emerging
trends in XAI for autonomous vehicles, such as integrating XAI with other
advanced technologies like machine learning and blockchain and the potential
for XAI to enable new applications and services in the autonomous vehicle
industry. Overall, the book aims to provide a comprehensive understanding
of XAI and its applications in autonomous vehicles to help readers develop
effective XAI solutions that can enhance autonomous vehicle systems’ safety,
reliability, and performance while improving user trust and acceptance.
Edited by
Kamal Malik, Moolchand Sharma,
Suman Deswal, Umesh Gupta,
Deevyankar Agarwal,
and Yahya Obaid Bakheet Al Shamsi
Front cover image: AlinStock/Shutterstock
First edition published 2025
by CRC Press
2385 NW Executive Center Drive, Suite 320, Boca Raton FL 33431
and by CRC Press
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
CRC Press is an imprint of Taylor & Francis Group, LLC
© 2025 selection and editorial matter, Kamal Malik, Moolchand Sharma,
Suman Deswal, Umesh Gupta, Deevyankar Agarwal and Yahya Obaid
Bakheet Al Shamsi; individual chapters, the contributors
Reasonable efforts have been made to publish reliable data and information,
but the author and publisher cannot assume responsibility for the validity
of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in
this publication and apologize to copyright holders if permission to publish
in this form has not been obtained. If any copyright material has not been
acknowledged please write and let us know so we may rectify in any future
reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be
reprinted, reproduced, transmitted, or utilized in any form by any electronic,
mechanical, or other means, now known or hereafter invented, including
photocopying, microfilming, and recording, or in any information storage or
retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access
www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC),
222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are
not available on CCC please contact [email protected]
Trademark notice: Product or corporate names may be trademarks or
registered trademarks and are used only for identification and explanation
without intent to infringe.
ISBN: 978-1-032-65501-7 (hbk)
ISBN: 978-1-032-81996-9 (pbk)
ISBN: 978-1-003-50243-2 (ebk)
DOI: 10.1201/9781003502432
Typeset in Sabon
by Apex CoVantage, LLC
Dedication
Dr. Kamal Malik would like to dedicate this book to her father,
Sh. Ashwani Malik, her mother, Smt. Shakuntla Malik and her brother,
Dr. Shiv Malik, for their constant support and motivation; she would
also like to give her special thanks to the publisher and her other co-
editors for believing in her abilities. Above all, a humble thanks to the
Almighty for this accomplishment.
Mr. Moolchand Sharma would like to dedicate this book to his father,
Sh. Naresh Kumar Sharma and his mother, Smt. Rambati Sharma for
their constant support and motivation, and his family members, including
his wife, Ms. Pratibha Sharma, and son, Dhairya Sharma. He also thanks
the publisher and his other co-editors for believing in his abilities.
Dr. Suman Deswal would like to dedicate this book to her father, Late
Sh. V D Deswal, and mother, Smt. Ratni Deswal, who taught her to
never give up, her husband, Mr. Vinod Gulia, and daughters, Laisha and
Kyna, for always loving and supporting her in every endeavor of life. She
also thanks the publisher and co-editors who believed in her capabilities.
Dr. Umesh Gupta would like to dedicate this book to his mother, Smt.
Prabha Gupta, his father, Sh. Mahesh Chandra Gupta, for their constant
support and motivation, and his family members, including his wife,
Ms. Umang Agarwal, and his son, Avaya Gupta. He also thanks the
publisher and his other co-editors for believing in his abilities. Before
beginning and after finishing his endeavor, he must appreciate the
Almighty God, who provides him with the means to succeed.
Dr. Deevyankar Agarwal would like to dedicate this book to his father,
Sh. Anil Kumar Agarwal, his mother, Smt. Sunita Agarwal, his wife,
Ms. Aparna Agarwal, and his son, Jai Agarwal, for their constant
support and motivation. He would also like to give his special thanks to
the publisher and his other co-editors for having faith in his abilities.
Dr. Yahya Obaid Bakheet Al Shamsi would like to dedicate this book to his
father and mother for their constant support, prayers, and motivation.
He would also like to give his special thanks to the publisher and his
co-editors for their collaboration and mutual support.
Contents
Preface xv
About the editors xvii
List of contributors xx
1 Autonomous vehicles 1
RASHMI KUMARI, SUBHRANIL DAS, ABHISHEK THAKUR,
ANKIT KUMAR AND RAGHWENDRA KISHORE SINGH
1.1 Introduction 1
1.2 Importance of artificial intelligence (AI) in autonomous
vehicles 3
1.3 AI-driven decision making 6
1.4 AI techniques and deep learning algorithms 9
1.5 Sensor fusion and data integration in autonomous
vehicles 12
1.6 Perception system in autonomous vehicles 15
1.7 Human-AI interaction in autonomous vehicles 16
1.8 Safety and reliability in AI-driven autonomous
vehicles 17
1.9 Conclusion 20
References 20
3.1 Introduction 50
3.2 Current state of XAI in autonomous vehicles 52
3.2.1 Autonomous vehicles 52
3.2.2 Explainable artificial intelligence (XAI) 53
3.2.3 Case studies of XAI techniques in autonomous
vehicles 54
3.3 Challenges and limitations of XAI in autonomous vehicles 61
3.4 Future trends in XAI for autonomous vehicles 64
3.5 Conclusion 65
References 66
4.1 Introduction 73
4.2 Background and review of related work 74
4.2.1 XAI method for convolutional neural networks in
self-driving cars 74
4.2.2 The internet of vehicles structure and need for
XAI-IDS 76
4.2.3 XAI frameworks 76
4.2.4 Practical implementation of XAI-based models 76
4.3 Internet of vehicles (IoV) network architecture 77
4.3.1 Autonomous vehicle components and design 79
4.3.2 Applications and services 84
4.3.3 Current issues 85
4.4 XAI methods and algorithms 86
4.4.1 XAI methods can be sub-divided into four categories 86
4.4.2 XAI algorithms in autonomous vehicles 88
4.5 XAI models to improve overall system performance 92
4.6 Discussion 94
4.7 Conclusion 95
References 95
Index 171
Preface
especially want to thank our dedicated peer reviewers who volunteered for
the arduous and tedious step of quality checking and critiquing the submit-
ted chapters.
Kamal Malik, Moolchand Sharma, Suman Deswal,
Umesh Gupta, Deevyankar Agarwal,
Yahya Obaid Bakheet Al Shamsi
About the editors
Chapter 1
Autonomous vehicles
Rashmi Kumari, Subhranil Das, Abhishek
Thakur, Ankit Kumar and Raghwendra
Kishore Singh
1.1 INTRODUCTION
DOI: 10.1201/9781003502432-1
Level 0: (No Automation) At Level 0, the human driver performs all driving tasks, including acceleration, braking, and steering. This level may include basic warning systems, such as lane departure warnings or forward collision warnings, but these are only intended to alert the driver and do not actively intervene in driving tasks.
Level 1: (Driver Assistance) At Level 1, vehicles have certain
driving assistance features that can control either the steering or the
acceleration/deceleration, but not both simultaneously. An example of
Level 1 automation is adaptive cruise control, which can automatically
adjust the vehicle’s speed to maintain a safe following distance from the
vehicle ahead.
Level 2: (Partial Automation) At Level 2, the vehicle can simultaneously con-
trol steering and acceleration/deceleration under certain conditions. The
system can handle tasks like lane-keeping and adaptive cruise control,
allowing the driver to take their hands off the steering wheel and feet off
the pedals, but the driver must remain engaged.
Level 3: (Conditional Automation) At Level 3, the vehicle can handle all
aspects of driving without constant human supervision, but the driver
must be ready to take over when the system requests, such as in com-
plex or unexpected situations. The transition from Level 3 to Level 4
automation is a critical boundary, as Level 4 no longer requires human
intervention in most circumstances.
Level 4: (High Automation) At Level 4, vehicles are highly automated and can
operate without human intervention in predefined operational design
domains (ODDs). Within the defined ODD, Level 4 vehicles can handle
all driving tasks, including navigating complex city traffic or adverse
weather conditions, without the need for human control or supervision.
Level 5: (Full Automation) Level 5 represents the highest level of automa-
tion, where the vehicle is fully autonomous and capable of perform-
ing all driving tasks under any conditions and without human input or
intervention. There is no requirement for a human driver in a Level 5
vehicle. Level 5 autonomy enables driverless mobility and is a significant
step toward fully realizing the potential of autonomous vehicles in revo-
lutionizing transportation.
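As a compact reference, the sketch below encodes these six levels in Python; the enum and helper functions are illustrative (they mirror the descriptions above, not any official SAE library):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels as described above."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def driver_must_supervise(level: SAELevel) -> bool:
    # Levels 0-2: the human driver must monitor the environment at all times.
    # Level 3 relaxes supervision but still needs a fallback-ready driver.
    # Levels 4-5: no human supervision required (within the ODD for Level 4).
    return level <= SAELevel.PARTIAL_AUTOMATION

def requires_takeover_readiness(level: SAELevel) -> bool:
    # Only Level 3 relies on the driver taking over when the system requests.
    return level == SAELevel.CONDITIONAL_AUTOMATION
```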
Table 1.1 Summary of related surveys (topic columns: Short Survey; In-Depth Survey; Deep Learning/Reinforcement Learning; Safety System Deployment; AI Hardware System; Sensor Fusion)

Mahadevan et al. [17] 2018 ACM √ √ √
Q. Rao and J. Frtunikj [18] 2018 ACM √ √ √
Haboucha et al. [19] 2017 IEEE √ √ √
A. M. Nascimento et al. [20] 2019 IEEE √ √ √
Thadeshwar et al. [21] 2020 IEEE √ √
Y. Ma et al. [22] 2020 IEEE √ √ √ √
S. Grigorescu et al. [23] 2020 Wiley √ √ √
S. Das and S. K. Mishra [24] 2020 Springer √ √ √
C. Medrano-Berumen et al. [25] 2021 Taylor & Francis √ √
X. Di and R. Shi [26] 2021 Elsevier √ √ √
S. Gupta et al. [27] 2021 Elsevier √ √ √ √
F. Ding et al. [28] 2021 IEEE √ √ √
M. Tammvee et al. [29] 2021 Springer √ √ √
Vartika et al. [30] 2021 Springer √ √ √
S. Das and R. Kumari [31] 2021 IOP √ √ √
M. Singh et al. [32] 2022 IEEE √ √
C. Kim et al. [33] 2022 Springer √ √ √
Table 1.2 Summary of Various DL Methods for Scene Understanding in AVs

Reference | Year | Publication | Dataset | Proposed Method | Learning | Hardware | Condition of Traffic
Sless et al. [37] | 2019 | IEEE | NuScenes Dataset | Occupancy grid mapping | Unsupervised | Titan X GPU | Semi-urban
Dai et al. [38] | 2020 | Springer | Foggy Zurich Database | Curriculum Model Adaptation (CMA) | Unsupervised | Nvidia EVGA GeForce GTX TITAN X GPU | Adverse traffic condition
Huang et al. [39] | 2021 | IEEE | CoRL2017 | End-to-End Deep Learning | Supervised | Titan X GPU | Urban
Kothandaraman et al. [40] | 2021 | IEEE | India Driving Dataset | Domain Adaptation (DA) | Unsupervised | Not reported | Urban
Papandreou et al. [41] | 2021 | IEEE | KITTI odometry | Multi-modal | Supervised | Nvidia GeForce GTX | Semi-urban
M. Singh et al. [32] | 2022 | IEEE | KITTI | Fuzzy Logic Controller | Supervised | Nvidia EVGA GeForce GTX TITAN X GPU | Semi-urban
Iftikhar et al. [42] | 2022 | Springer | KITTI | 3D CNN and Pedestrian Detection Algorithm | Supervised | Titan X GPU | Urban
Nousias et al. [43] | 2023 | IEEE | KITTI | Vector Quantization | Unsupervised | Not reported | Urban
Figure 1.4 Schematic diagram for the various categories of XAI in AV.
Sensor fusion and data integration are critical aspects of AVs that enable them to build a comprehensive understanding of their surroundings. Autonomous vehicles are equipped with various sensors, such as cameras, LiDAR, RADAR, ultrasonic sensors, and GPS, each providing unique data about the environment. Sensor fusion and data integration involve combining information from these different sensors to create a coherent representation of that environment.
Sensor fusion involves the process of merging data from multiple sensors
to create a more robust and reliable perception of the environment. Each
sensor has its strengths and limitations. For example, cameras are excellent
at recognizing objects and reading road signs, but they may struggle in low-
light conditions. LiDAR can provide precise 3D point cloud data, but it may
have difficulties in heavy rain or snow. RADAR is proficient in detecting
the velocity of objects but may lack detailed information about their shape.
Ultrasonic sensors are useful for short-range obstacle detection but have
limited coverage. By fusing the data from these sensors, AVs can compensate
for individual sensor shortcomings and obtain a more comprehensive and
accurate view of the environment.
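To make the fusion idea concrete, here is a minimal Python sketch of inverse-variance weighted fusion, one of the simplest building blocks behind Kalman-style sensor fusion; the sensor readings and noise variances are invented for illustration:

```python
import numpy as np

def fuse_measurements(estimates, variances):
    """Inverse-variance weighted fusion of independent sensor estimates.

    Sensors with lower noise (variance) get proportionally more weight,
    which is how fusion compensates for individual sensor weaknesses.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused = np.sum(weights * estimates) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)  # <= the smallest input variance
    return fused, fused_variance

# Hypothetical readings of the distance (m) to the vehicle ahead:
radar_range, radar_var = 24.8, 0.50   # RADAR: good range/velocity, more noise
lidar_range, lidar_var = 25.1, 0.05   # LiDAR: precise in clear weather

fused, var = fuse_measurements([radar_range, lidar_range],
                               [radar_var, lidar_var])
print(f"fused range = {fused:.2f} m, variance = {var:.3f}")
# The fused estimate sits close to the more reliable LiDAR reading, and its
# variance is lower than either sensor's alone.
```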
Data integration involves not only combining sensor data but also incor-
porating other relevant information, such as maps, traffic data, and histori-
cal driving patterns. For example, GPS data can help AVs with localization
and positioning, while maps provide information about road layouts and
speed limits. Traffic data can assist with real-time route planning, and his-
torical driving patterns can help predict and anticipate the behavior of other
road users. A summary of the sensor fusion and data integration methods is
provided in what follows.
Sriram et al. [49] present a novel approach for sensor fusion, integrat-
ing LiDAR and Camera sensors to create a robust drivable road detection
different sensors and integrating various data sources, sensor fusion, and
data integration play a pivotal role in making autonomous driving safer,
more reliable, and capable of navigating diverse and dynamic real-world
driving conditions.
[Figure: Sensor coverage of an autonomous vehicle, showing LiDAR and ultrasonic surround views, an infrared sensor, a rear collision view, a traffic signal view, wheel odometry, distance keeping, and detection of near and distant objects.]
Over the past ten years, the growing number of vehicles on the roads has led to a surge in traffic accidents, posing significant challenges for public safety and society. Human error, characterized by poor judgment, distraction, and fatigue, often contributes to these accidents, resulting in injuries and fatalities. In light of this pressing issue, AVs offer potential solutions for enhancing the safety of road transportation.
1.9 CONCLUSION
REFERENCES
[1] Ebert, Christof, Michael Weyrich, and Hannes Vietz. AI-Based Testing for
Autonomous Vehicles. No. 2023–01–1228. SAE Technical Paper, 2023.
[2] Karnati, Akshitha, and Devanshi Mehta. “Artificial intelligence in self driving
cars: Applications, implications and challenges.” Ushus Journal of Business
Management 21, no. 4 (2022).
[3] Guntrum, Laura Gianna, Sebastian Schwartz, and Christian Reuter. “Dual-
use technologies in the context of autonomous driving: An empirical case
study from Germany.” Zeitschrift für Außen-und Sicherheitspolitik 16, no. 1
(2023): 53–77.
[4] Ma, Yifang, Zhenyu Wang, Hong Yang, and Lin Yang. “Artificial intelligence
applications in the development of autonomous vehicles: A survey.” IEEE/
CAA Journal of Automatica Sinica 7, no. 2 (2020): 315–329.
[5] Das, Subhranil, Rashmi Kumari, and S. Deepak Kumar. “A review on appli-
cations of simultaneous localization and mapping method in autonomous
vehicles.” Advances in Interdisciplinary Engineering: Select Proceedings of
FLAME 2020 (2021): 367–375.
[6] Das, Subhranil, and Rashmi Kumari. “Application of horizontal projection
lines in detecting vehicle license plate.” In 2020 International Conference on
Smart Electronics and Communication (ICOSEC), IEEE, pp. 1–6, 2020.
[7] Das, Subhranil, P. Arvind, Sourav Chakraborty, Rashmi Kumari, and S.
Deepak Kumar. “IoT based solar smart tackle free AGVs for industry 4.0.”
In International Conference on Internet of Things and Connected Technolo-
gies, pp. 1–7. Springer International Publishing, 2020.
[8] Das, Subhranil, and Rashmi Kumari. “Application of extended Hough trans-
form technique for stationary images in vehicle license plate.” In 2021 6th
International Conference for Convergence in Technology (I2CT), pp. 1–4.
IEEE, 2021.
[9] Das, Subhranil, and Rashmi Kumari. “Online training of identifying charac-
ters present in vehicle license plate.” In 2021 4th Biennial International Con-
ference on Nascent Technologies in Engineering (ICNTE), pp. 1–6. IEEE,
2021.
[10] Li, L., K. Ota, and M. Dong. “Humanlike driving: Empirical decision-making
system for autonomous vehicles.” IEEE Transactions on Vehicular Technology
67, no. 8 (2018): 6814–6823. https://doi.org/10.1109/TVT.2018.2822762
[11] Gallardo, Nicolas, Nicholas Gamez, Paul Rad, and Mo Jamshidi. “Autono-
mous decision making for a driver-less car.” In 2017 12th System of Systems
Engineering Conference (SoSE), pp. 1–6. IEEE, 2017.
[12] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “Imagenet classifi-
cation with deep convolutional neural networks.” Advances in Neural Infor-
mation Processing Systems 25 (2012).
[13] Xie, Dong-Fan, Zhe-Zhe Fang, Bin Jia, and Zhengbing He. “A data-driven
lane-changing model based on deep learning.” Transportation Research Part
C: Emerging Technologies 106 (2019): 41–60.
[14] Li, Dong, Dongbin Zhao, Qichao Zhang, and Yaran Chen. “Reinforce-
ment learning and deep learning based lateral control for autonomous driv-
ing.” arXiv preprint arXiv:1810.12778 (2018).
[15] Strickland, Mark, Georgios Fainekos, and Heni Ben Amor. “Deep predictive
models for collision risk assessment in autonomous driving.” In 2018 IEEE Inter-
national Conference on Robotics and Automation (ICRA), pp. 4685–4692.
IEEE, 2018.
[16] Wang, Huanjie, Shihua Yuan, Mengyu Guo, Ching-Yao Chan, Xueyuan
Li, and Wei Lan. “Tactical driving decisions of unmanned ground vehi-
cles in complex highway environments: A deep reinforcement learning
approach.” Proceedings of the Institution of Mechanical Engineers, Part D:
Journal of Automobile Engineering 235, no. 4 (2021): 1113–1127.
[17] Mahadevan, Karthik, Sowmya Somanath, and Ehud Sharlin. “Can inter-
faces facilitate communication in autonomous vehicle-pedestrian interac-
tion?.” In Companion of the 2018 ACM/IEEE International Conference on
Human-Robot Interaction (HRI '18), pp. 309–310. Association for Comput-
ing Machinery, 2018. https://doi.org/10.1145/3173386.3176909
[18] Rao, Qing, and Jelena Frtunikj. “Deep learning for self-driving cars:
Chances and challenges.” In Proceedings of the 1st International Work-
shop on Software engineering for AI in Autonomous Systems (SEFAIS
'18), pp. 35–38. Association for Computing Machinery, 2018. https://doi.
org/10.1145/3194085.3194087
[19] Haboucha, Chana J., Robert Ishaq, and Yoram Shiftan. “User preferences
regarding autonomous vehicles.” Transportation Research Part C: Emerging
Technologies 78 (2017): 37–49.
[20] Nascimento, Alexandre Moreira, Lucio Flavio Vismari, Caroline Bianca San-
tos Tancredi Molina, Paulo Sergio Cugnasca, Joao Batista Camargo, Jorge
Rady de Almeida, Rafia Inam, Elena Fersman, Maria Valeria Marquezini, and
Alberto Yukinobu Hata. “A systematic literature review about the impact of
artificial intelligence on autonomous vehicle safety.” IEEE Transactions on
Intelligent Transportation Systems 21, no. 12 (2019): 4928–4946.
[21] Thadeshwar, Hiral, Vinit Shah, Mahek Jain, Rujata Chaudhari, and Vishal
Badgujar. “Artificial intelligence based self-driving car.” In 2020 4th Inter-
national Conference on Computer, Communication and Signal Processing
(ICCCSP), IEEE, pp. 1–5, 2020.
[22] Ma, Yifang, Zhenyu Wang, Hong Yang, and Lin Yang. “Artificial intelligence
applications in the development of autonomous vehicles: A survey.” IEEE/
CAA Journal of Automatica Sinica 7, no. 2 (2020): 315–329.
[23] Grigorescu, Sorin, Bogdan Trasnea, Tiberiu Cocias, and Gigel Macesanu.
“A survey of deep learning techniques for autonomous driving.” Journal of
Field Robotics 37, no. 3 (2020): 362–386.
[24] Das, Subhranil, and Sudhansu Kumar Mishra. “Collision avoidance and path
planning for mobile robots based on state estimation approach.” Journal of
Intelligent & Fuzzy Systems Preprint (2023): 1–12.
[25] Medrano-Berumen, Christopher, and Mustafa İlhan Akbaş. “Validation of
decision-making in artificial intelligence-based autonomous vehicles.” Jour-
nal of Information and Telecommunication 5, no. 1 (2021): 83–103.
[26] Di, Xuan, and Rongye Shi. “A survey on autonomous vehicle control in
the era of mixed-autonomy: From physics-based to AI-guided driving pol-
icy learning.” Transportation Research Part C: emerging Technologies 125
(2021): 103008.
[27] Gupta, Savyasachi, Dhananjai Chand, and Ilaiah Kavati. “Computer vision
based animal collision avoidance framework for autonomous vehicles.”
In Computer Vision and Image Processing: 5th International Conference,
CVIP 2020, Prayagraj, India, December 4–6, 2020, Revised Selected Papers,
Part III 5, Springer, pp. 237–248, 2021.
[28] Ding, Feng, Keping Yu, Zonghua Gu, Xiangjun Li, and Yunqing Shi. “Per-
ceptual enhancement for autonomous vehicles: Restoring visually degraded
images for context prediction via adversarial training.” IEEE Transactions
on Intelligent Transportation Systems 23, no. 7 (2021): 9430–9441.
[29] Tammvee, Martin, and Gholamreza Anbarjafari. “Human activity recognition-
based path planning for autonomous vehicles.” Signal, Image and Video Pro-
cessing 15, no. 4 (2021): 809–816.
[30] Vartika, Vagisha, Swati Singh, Subhranil Das, Sudhansu Kumar Mishra, and
Sitanshu Sekhar Sahu. “A review on intelligent PID controllers in autono-
mous vehicle.” Advances in Smart Grid Automation and Industry 4.0: Select
Proceedings of ICETSGAI4.0 (2021): 391–399.
[31] Das, Subhranil, and Rashmi Kumari. “Real time implementation of square
path tracing by autonomous mobile robot.” Journal of Physics: Conference
Series 1831, no. 1 (2021): 012011.
[32] Singh, Manjari, Subhranil Das, and Sudhansu Kumar Mishra. “Static obsta-
cles avoidance in autonomous ground vehicle using fuzzy logic control-
ler.” In 2020 International Conference for Emerging Technology (INCET),
pp. 1–6. IEEE, 2020.
[33] Kim, Cheol-jin, Myung-jae Lee, Kyu-hong Hwang, and Young-guk Ha.
“End-to-end deep learning-based autonomous driving control for high-speed
environment.” The Journal of Supercomputing 78, no. 2 (2022): 1961–1982.
[34] Wang, Huijuan, Yuan Yu, and Quanbo Yuan. “Application of Dijkstra algo-
rithm in robot path-planning.” In 2011 Second International Conference
on Mechanic Automation and Control Engineering, IEEE, pp. 1067–1069,
2011.
[35] Tang, Gang, Congqiang Tang, Christophe Claramunt, Xiong Hu, and Peipei
Zhou. “Geometric A-star algorithm: An improved A-star algorithm for AGV
path planning in a port environment.” IEEE Access 9 (2021): 59196–59210.
[36] Zhang, Chaoyong, Duanfeng Chu, Shidong Liu, Zejian Deng, Chaozhong
Wu, and Xiaocong Su. “Trajectory planning and tracking for autonomous
vehicle based on state lattice and model predictive control.” IEEE Intelligent
Transportation Systems Magazine 11, no. 2 (2019): 29–40.
[37] Sless, Liat, Bat El Shlomo, Gilad Cohen, and Shaul Oron. “Road scene under-
standing by occupancy grid learning from sparse radar clusters using seman-
tic segmentation.” In Proceedings of the IEEE/CVF International Conference
on Computer Vision Workshops (ICCVW), pp. 867–875. IEEE, 2019.
[38] Dai, Dengxin, Christos Sakaridis, Simon Hecker, and Luc Van Gool. “Cur-
riculum model adaptation with synthetic and real data for semantic foggy
scene understanding.” International Journal of Computer Vision 128 (2020):
1182–1204.
[39] Huang, Zhiyu, Chen Lv, Yang Xing, and Jingda Wu. “Multi-modal sensor
fusion-based deep neural network for end-to-end autonomous driving with
scene understanding.” IEEE Sensors Journal 21, no. 10 (2020): 11781–11790.
[40] Kothandaraman, Divya, Rohan Chandra, and Dinesh Manocha. “BoMu-
DANet: unsupervised adaptation for visual scene understanding in unstruc-
tured driving environments.” In 2021 IEEE/CVF International Conference
on Computer Vision Workshops (ICCVW), pp. 3966–3975. IEEE, 2021.
https://doi.org/10.1109/ICCVW54120.2021.00442
[41] Papandreou, Andreas, Andreas Kloukiniotis, Aris Lalos, and Konstantinos
Moustakas. “Deep multi-modal data analysis and fusion for robust scene
understanding in CAVs.” In 2021 IEEE 23rd International Workshop on
Multimedia Signal Processing (MMSP), IEEE, pp. 1–6, 2021.
[42] Iftikhar, Sundas, Muhammad Asim, Zuping Zhang, and Ahmed A. Abd El-
Latif. “Advance generalization technique through 3D CNN to overcome the
Chapter 2
Explainable artificial intelligence
Fundamentals, Approaches, Challenges, XAI Evaluation, and Validation
Manoj Kumar Mahto
DOI: 10.1201/9781003502432-2
2.2 INTRODUCTION TO EXPLAINABLE
ARTIFICIAL INTELLIGENCE
[Figure: Key concepts in explainability, comprising model transparency, interpretability vs. transparency, and trustworthiness.]
2.4.3 Trustworthiness
Trustworthiness is a fundamental concept in Explainable Artificial Intel-
ligence (XAI) that transcends mere transparency and interpretability. As
emphasized by Adadi and Berrada (2018), trustworthiness goes beyond
scenarios where the underlying data relationships are relatively simple and
can be adequately captured by a straightforward model. However, these
models may struggle when dealing with highly complex data and intricate
patterns. As pointed out by Lipton (2016), in use cases where data relation-
ships are nonlinear or involve a multitude of interacting factors, transparent
models may not offer the required predictive accuracy. In such situations,
more complex and non-transparent models, such as deep neural networks,
might be necessary. Therefore, the choice of model transparency must be
context-specific, balancing the need for interpretable decision-making with
the complexity of the data and the accuracy requirements of the application.
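A minimal sketch of this trade-off, using scikit-learn on synthetic data (the dataset and models are stand-ins, not from the text), contrasts a transparent model whose rules can be printed with a non-transparent one that usually fits complex data better:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a driving-related classification task.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Transparent model: a shallow decision tree whose full logic can be printed.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("tree accuracy:", tree.score(X_te, y_te))
print(export_text(tree))  # the complete, human-readable decision rules

# Non-transparent model: a small neural network, often more accurate on
# complex nonlinear data, but with no directly readable decision rules.
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print("MLP accuracy:", mlp.score(X_te, y_te))
```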
and trust the explanations (Lage et al., 2021). Furthermore, evaluation meth-
odologies should consider the context in which the AI system operates and
how explanations impact decision-making. The selection of appropriate
evaluation metrics should align with the specific goals and requirements of
autonomous vehicles to ensure that XAI systems enhance safety and user
trust effectively.
Simulation and testing are integral components of the evaluation and vali-
dation process for Explainable Artificial Intelligence (XAI) in autonomous
vehicles. Given the complex and dynamic nature of real-world driving sce-
narios, conducting extensive real-world testing can be costly and poten-
tially risky. Therefore, simulation environments, as emphasized by Sun et al.
(2017), offer a controlled and safe means of evaluating XAI systems under
a wide range of conditions. These simulations can replicate various driving
scenarios, including rare and hazardous events, allowing researchers to assess
how well XAI systems perform and explain their actions in challenging situ-
ations. Moreover, simulation environments facilitate systematic testing of
XAI responses to different inputs, helping to identify potential vulnerabilities
and areas for improvement. Combining simulation testing with real-world
testing, as advocated by Alarifi et al. (2018), provides a comprehensive vali-
dation approach that ensures XAI systems are robust, safe, and capable of
delivering reliable explanations to support autonomous vehicle operation.
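At a very high level, such simulation-based evaluation can be pictured as the loop below. Everything here is a hypothetical placeholder (the scenario names and the drive_policy and explain functions are invented for illustration, not an actual testing framework):

```python
import random

# Hypothetical stand-ins for the system under test.
def drive_policy(observation):
    # e.g., a trained driving model; here, brake whenever an obstacle is near.
    return "brake" if observation["obstacle_distance_m"] < 15 else "cruise"

def explain(observation, action):
    # e.g., a post-hoc explainer; here, a template-based explanation.
    return f"{action}: nearest obstacle at {observation['obstacle_distance_m']:.1f} m"

SCENARIOS = ["clear_highway", "pedestrian_crossing", "heavy_rain", "cut_in"]

def run_scenario(name, seed, steps=100):
    """Simulate one episode; flag decisions whose explanations would be
    inconsistent with the observation that produced them."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(steps):
        obs = {"scenario": name,
               "obstacle_distance_m": rng.uniform(1.0, 100.0)}
        action = drive_policy(obs)
        explanation = explain(obs, action)
        # Consistency check: a "brake" decision must cite an obstacle
        # inside the policy's braking threshold.
        if action == "brake" and obs["obstacle_distance_m"] >= 15:
            failures += 1
    return failures

for scenario in SCENARIOS:
    print(scenario, "inconsistent decisions:", run_scenario(scenario, seed=42))
```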
respond and explain their decisions. This approach helps uncover vulner-
abilities and areas for improvement in a safe and cost-effective manner. Fur-
thermore, simulated environments are instrumental in the development and
fine-tuning of XAI models before their deployment in actual vehicles. By
leveraging these environments, researchers can ensure that XAI systems are
not only technically sound but also capable of delivering reliable explana-
tions in the dynamic and unpredictable context of autonomous driving.
2.20 CONCLUSION
REFERENCES
Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on
explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160.
Alarifi, A., Alkahtani, M., Almazroi, A., Hu, J., & Alarifi, A. (2018). Autonomous
Vehicles: Trusted by whom? A comprehensive survey on autonomous vehicle
trust management systems. IEEE Transactions on Intelligent Transportation
Systems, 20(8), 2917–2926.
Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., . . . &
Zhang, X. (2016). End to end learning for self-driving cars. arXiv preprint
arXiv:1604.07316.
Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., Elhadad, N., . . . & Polito, G.
(2015). Intelligible models for healthcare: Predicting pneumonia risk and hos-
pital 30-day readmission. In Proceedings of the 21st ACM SIGKDD Interna-
tional Conference on Knowledge Discovery and Data Mining (pp. 1721–1730).
Carvalho, A. M., & Freitas, A. A. (2019). A comprehensive survey of interpretable
machine learning methods. The Journal of Machine Learning Research, 20(1),
1–77.
Chen, C., Seff, A., Kornhauser, A., & Xiao, J. (2015). DeepDriving: Learning
affordance for direct perception in autonomous driving. In Proceedings of the
IEEE International Conference on Computer Vision (ICCV) (pp. 2722–2730).
Chen, J., Song, L., Wainwright, M. J., & Jordan, M. I. (2018). Learning to explain:
An information-theoretic perspective on model interpretation. In Proceed-
ings of the 35th International Conference on Machine Learning (ICML '18)
(pp. 883–892).
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable
machine learning. arXiv preprint arXiv:1702.08608.
Friedman, J. H., Hastie, T., & Tibshirani, R. (2000). Additive logistic regression:
A statistical view of boosting (with discussion and a rejoinder by the authors).
The Annals of Statistics, 28(2), 337–407.
Gal, Y., Islam, R., & Ghahramani, Z. (2016). Deep Bayesian active learning with
image data. In Proceedings of the 33rd International Conference on International
Conference on Machine Learning—Volume 48 (ICML’16) (pp. 2153–2162).
Goodfellow, I., Bengio, Y., Courville, A., & Bengio, Y. (2016). Deep Learning
(Vol. 1).
Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causabil-
ity and explainability of artificial intelligence in medicine. Wiley Interdisciplin-
ary Reviews: Data Mining and Knowledge Discovery, 9(4), e1312.
Lage, I., Chen, E., He, J., Narayanan, M., Kim, B., Ribeiro, A., & Doshi-Velez,
F. (2021). An evaluation of the human-interpretability of explanation. arXiv
preprint arXiv:2104.14957.
Lipton, Z. C. (2016). The mythos of model interpretability. In Proceedings of
the 2016 ICML Workshop on Human Interpretability in Machine Learning
(pp. 1–5).
Litjens, G., Kooi, T., Bejnordi, B. E., Setio, A. A. A., Ciompi, F., Ghafoorian, M., . . . &
Sánchez, C. I. (2017). A survey on deep learning in medical image analysis.
Medical Image Analysis, 42, 60–88.
Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting
model predictions. In Advances in Neural Information Processing Systems
(pp. 4765–4774).
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”
Explaining the predictions of any classifier. In Proceedings of the 22nd ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining
(pp. 1135–1144).
Rudin, C. (2019). Stop explaining black box machine learning models for high
stakes decisions and use interpretable models instead. Nature Machine Intel-
ligence, 1(5), 206–215.
Sun, F., Wen, S., Li, S., & Li, J. (2017). Learning-based motion planning for auton-
omous vehicles: A review. IEEE Transactions on Intelligent Transportation Sys-
tems, 19(4), 1135–1145.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., . . . &
Kaiser, Ł. (2017). Attention is all you need. In Advances in Neural Information
Processing Systems (pp. 30–38).
Chapter 3
Explainable artificial intelligence in autonomous vehicles
3.1 INTRODUCTION
Autonomous driving (AD) has become one of the major research directions in the search for smart mobility solutions [1]. By providing safety and reliability in modern transportation systems, autonomous vehicles (AVs) are expected to improve transportation services, such as car parking, and to avoid accidents caused by human error [2]. Figure 3.1 shows a conceptual view of autonomous driving cars in smart cities [3]. Vehicle-to-vehicle (V2V) technology [4–7] describes the cooperation between different vehicles in the same area. The purpose of V2V communication is to improve vehicle traffic flow and pedestrian and passenger safety. On the other hand, Vehicle-to-Infrastructure (V2I) technology helps vehicles communicate with roadside units (RSUs) [8, 9].
V2I could be very beneficial for avoiding traffic accidents or traffic jams
or accessing services on the internet, as illustrated in Figure 3.1. The number
of connected vehicles is growing, as there were 237 million in 2021, accord-
ing to Statista [10]. This number is expected to reach 400 million by 2025.
The key challenges of such numbers of connected vehicles are related to the
telecommunication area and its security.
In a typical AD task, AVs generate massive amounts of multi-dimensional data from different components, such as cameras, RADAR, LiDAR, GPS, and ultrasonic sensors. These data must be processed for different purposes, such as monitoring, prediction, decision-making, and control [11, 12]. Artificial intelligence (AI) techniques have shown great success in dealing with the complexity of such AD tasks, aided by powerful hardware with the necessary capabilities. However, there are many open questions regarding how much humans can intervene during decision-making. These challenges limit the acceptance of such technologies, because their behavior cannot be explained to humans.
Explainable Artificial Intelligence (XAI) has emerged to address the ambiguity inside AI algorithms. For instance, it is very challenging to get insight into the internal mechanism of machine learning (ML) algorithms [13]. Figure 3.2
DOI: 10.1201/9781003502432-3
This chapter discusses the need for explanations in AVs and presents
regulations and standards related to explanations. It categorizes the differ-
ent stakeholders who interact with AVs and explores various dimensions of
explanations. The chapter also highlights the importance of in-vehicle user
interfaces and novel interaction technologies for efficient explanation provi-
sion. It addresses the limitations and biases in current research on explana-
tion models and provides recommendations for implementation. The chapter
concludes with challenges in the explainable AV landscape and suggestions
for future work.
This chapter contains four further sections. Section 2 will discuss the cur-
rent State of XAI in AVs. Section 3 will report on the challenges and limita-
tions of XAI in AVs, and Section 4 will give an idea of future trends in XAI
for AVs. Finally, Section 5 offers conclusions to the discussion.
3.2 CURRENT STATE OF XAI IN AUTONOMOUS VEHICLES

This section provides a comprehensive view of the status of XAI within the
AVs domain. In recent years, XAI has emerged as a crucial component in
addressing the challenges associated with AVs, particularly concerning trans-
parency and accountability. This section delves into the current advance-
ments and developments in XAI techniques specifically tailored for AVs. It
explores the ongoing research, emerging technologies, and practical applica-
tions that are shaping the landscape of XAI within this field. By examining
the current state of XAI in AVs, this section offers valuable insights into
how AI explainability is evolving to enhance the safety, trustworthiness, and
acceptance of AVs in modern transportation systems.
• Model-Specific Methods.
• Post Hoc Explanation Methods.
• Model-Agnostic Methods.
• Interpretable Neural Networks.
• Local vs. Global Interpretability.
• Explanations for Different Data Types.
• Human-Centric vs. Machine-Centric.
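To illustrate the model-agnostic, local end of the spectrum above, the following sketch implements the core idea behind perturbation-based local surrogates (as popularized by LIME): perturb an instance, query the black box, and fit a proximity-weighted linear model whose coefficients act as the explanation. It is a from-scratch sketch of the general technique, not any particular library's API:

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate_explanation(predict_fn, x, n_samples=500, scale=0.1, seed=0):
    """Explain predict_fn around instance x with a weighted linear surrogate.

    predict_fn: maps an (n, d) array to predicted scores (the black box).
    Returns one coefficient per feature; larger magnitude = more local influence.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # 1. Perturb the instance in its neighborhood.
    Z = x + rng.normal(0.0, scale, size=(n_samples, d))
    # 2. Query the black-box model on the perturbations.
    y = predict_fn(Z)
    # 3. Weight perturbations by proximity to x (closer = more important).
    weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / (2 * scale ** 2))
    # 4. Fit an interpretable linear surrogate on the local data.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    return surrogate.coef_

# Toy black box: feature 0 dominates; feature 2 has a nonlinear effect.
black_box = lambda Z: 3.0 * Z[:, 0] + 0.5 * Z[:, 2] ** 2
coefs = local_surrogate_explanation(black_box, np.array([1.0, 2.0, 3.0]))
print(np.round(coefs, 2))  # coefficient for feature 0 should be near 3.0
```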
AVs offer benefits like increased safety and reduced energy consumption, but they also raise security and privacy issues. Combining Blockchain and AI
could provide strong protection against malicious attacks [33]. AI can opti-
mize Blockchain construction, while Blockchain provides data immutabil-
ity and trust mechanisms. This paper explored the potential application of
Blockchain and AI solutions for AV security, identifying security and privacy
threats, reviewing recent literature, and highlighting limitations and chal-
lenges. This paper [34] addressed the challenges of implementing AI in sup-
port of Automated Guided Vehicles (AGV) within smart internal logistics
systems for agile manufacturing. The navigation system must be tailored
to industrial environments with high interaction with other systems and
human staff. AI’s challenges differ from those of other smart manufacturing
areas, and the paper systematizes these challenges and discusses promising
AI methods for AGV-based internal logistics systems.
Automated navigation technology plays a crucial role in intelligent trans-
portation and smart city systems. XAI aims to streamline decision-making
processes in AVs, enhancing trust in AI-based solutions. Madhav and Tyagi
[35] conducted a comparative analysis between XAI simulations and vehicu-
lar smart systems, emphasizing noteworthy advancements. The study intro-
duced visual explanatory techniques and an intrusion detection classifier,
yielding substantial improvements over existing research. Another study proposed a post-processing mathematical framework to improve the accuracy of stereo vision in generating high-density depth values for AD [36]. The
framework uses prior information from a deep learning-based U-Net model
to refine the depth achieved, making it explainable and applicable to depth
maps generated by any stereo vision algorithm. The framework’s results
show improvements in stereo depth generation accuracy.
Keneni et al. investigated the explanation of autonomous system decisions using an Unmanned Aerial Vehicle (UAV) [37]. They ran a simulation using 15 fuzzy rules that consider weather conditions and enemies, and created two ANFIS models, one for the weather zone and the other for the distance from the enemy. The best optimization generated six fuzzy rules for the Sugeno model, which are easily comprehensible and maintain acceptable RMSE values. The findings show promising results for integrating ANFIS models with upcoming UAV technologies to make UAVs more transparent, understandable, and trustworthy. AD has gained popularity because of its potential benefits, including increased safety, traffic efficiency, and fuel economy
tial benefits, including increased safety, traffic efficiency, and fuel economy
[26]. However, most studies focus on limited data sets or fully automated
environments. To address these issues, an XAI Federated Deep Reinforce-
ment Learning (RL) model improves trajectory decisions for newcomer AVs.
The model uses XAI to compute the contribution of each vehicle’s features,
ensuring trustworthiness. Experiments show this model outperforms bench-
mark solutions like Deep Q-Network and Random Selection [38].
AVs are increasingly using AI to reduce driver stress and improve learning [39].
XAI techniques help developers understand the intricate workings of deep
learning models, also known as black box models. The authors focused
on how AVs can detect and segment the road using deep learning models,
achieving high IoU scores [39]. Figure 3.4 illustrates the system model inte-
gration of XAI for model explanation, which includes semantic segmenta-
tion of an input frame.
Soares et al. proposed a new self-organizing neuro-fuzzy approach for self-
driving cars, combining explainable self-organizing architecture and density-
based feature selection methods [40]. The approach classifies action states
from different self-driving conditions, providing humans with understand-
able IF . . . THEN rules. The density-based feature selection method ranks
feature densities, creating individualized subsets per class. Experiments on a
Ford Motor Company dataset validated the approach’s accuracy and showed
it could surpass competitors. In Rjoub et al. [41], a trust-aware approach to
AV selection, incorporating the Local Interpretable Model-Agnostic Expla-
nations (LIME) method and One-Shot Federated Learning, was proposed.
This method outperforms the standard approach in terms of both accuracy
and reliability, demonstrating the effectiveness of trust metrics in AV selec-
tion in federated learning systems.
Bellotti et al. proposed a framework for understanding autonomous agent
decision-making in a highway simulation environment [42]. Data analysis
includes episode timeline, frame-by-frame, and aggregated statistical anal-
ysis. Motivators include longitudinal gap, lane position, and last-frame
patterns. Attention and SHAP values are differentiated, reflecting the neural
network’s architecture. Xu et al. [43] presented a multi-task formulation for
action prediction and explanations of AD. The authors proposed a CNN
architecture that combines reasoning about action-inducing objects and
global scene context. They also present a large dataset annotated for both
driving commands and explanations. The experiments show that the genera-
tion of explanations improves decision-making for actions, and both ben-
efit from a combination of object-centric and global scene reasoning [43].
Mankodiya et al. proposed an XAI system in VANETs (Vehicular Ad-Hoc
Networks) to enhance trust management in AVs [44]. Figure 3.5 illustrates
the system model employed for training and predicting malicious AVs. The
system uses a decision tree-based random forest, achieving 98.44% accuracy and F1 score on the VeRiMi dataset. The authors evaluated the proposed model using performance measures and metrics, making
it interpretable and serving the purpose of XAI. Overall, the paper aims to
increase transparency and trust in AI models used in AVs.
Cultrera et al. presented an attention-based model for AD in a simu-
lated environment [45]. The model uses visual attention to improve driv-
ing performance and provide explainable predictions. The model achieves
state-of-the-art results on the CARLA driving benchmark. The architecture
includes a multi-head design, with each head specialized for different driving
commands. The attention layer assigns weights to image regions based on
their relevance to the driving task. The model adopts a fixed grid to obtain
an attention map of the image. Nowak et al. discussed the importance of
explainable AI in law and the challenges faced in training neural networks
for intelligent vehicles [46]. It highlights the need for interpretability in deep
learning systems and the use of attention heat maps to improve training
and eliminate misdetections. The system’s interpretation helps eliminate mis-
detections by explaining which parts of the images triggered the decision,
enhancing the reliability of vehicle autonomy.
The work also emphasizes the importance of data adequacy and the limitations of acquiring more data in real-life conditions. Dong et al. [47]
3.3 CHALLENGES AND LIMITATIONS OF XAI IN AUTONOMOUS VEHICLES

Despite the intensive application of XAI in AD, there are still challenges that researchers need to address. Starting with the terminology used in XAI, the research community has yet to agree sufficiently on terms and definitions. Although such challenges are not limited to the applications of XAI in AD, a unified terminology around XAI would still help greatly. For example, as explained by Arrieta et al. [54], there is a lack of consensus on the definition of XAI by D. Gunning [55], where XAI is defined in terms of understanding and trust while missing other concepts such as causality, transferability, informativeness, fairness, and confidence [56–58]. Other examples are synonymous terms, such as feature importance and feature relevance, and subjectivity in explanation [59, 60]. It should be kept in mind that an explainable system always considers reasoning without human post-processing as a final step of the generative process [56], or combines reasoning with human-understandable features [61].
Another challenge is related to the accessibility of explanations to society and policymakers. A right to explanation of decisions was approved in 2016 in the European Union General Data Protection Regulation (GDPR), which was viewed as a promising way to enforce accountability and transparency. However, some uncertainties remain regarding its transparency and accountability [62].
The trade-off between accuracy, interpretability, and explainability poses
a great challenge in XAI. For instance, making a black box model explain-
able does not mean having good performance. In fact, explaining black box
models sometimes results in other problems. Instead, one should also add
some degree of interpretability. Rudin [63] stated that “higher complexity does not inherently mean higher accuracy, and this has been very true for such DL models.” Figure 3.6 shows why there is an urgent effort to focus
on improving both the performance and explainability of the model. As can
be seen, models with good performance tend to have lower explainability.
Hence, a reasonable trade-off is required.
The use of XAI raises several concerns regarding the lack of explanation
and discrimination due to unfair bias and predictive parity [64, 65]. Data
sets with private and sensitive data may disproportionately affect underrep-
resented groups [66]. Using these data sets leads to discriminatory, unethical,
and unfair outcomes [67]. To address bias, several approaches have been proposed. Kamiran et al. [68] proposed removing discrimination before a classifier is learned by reweighing the data sets. Others presented a technique
to optimize the representation of the data [69]. Moreover, other techniques
were presented, including bias detection, adversarial debiasing during data
processing, and equalized odds for post-processing [70–73].
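As a concrete example of such pre-processing, the sketch below implements the reweighing idea in the spirit of Kamiran et al. [68]: each instance is weighted so that the sensitive attribute and the class label look statistically independent to the learner. This is a minimal sketch of the technique, not the authors' original code:

```python
import numpy as np

def reweighing_weights(sensitive, labels):
    """Instance weights that remove dependence between a sensitive
    attribute and the class label (reweighing, in the spirit of [68]).

    weight(s, y) = P(S = s) * P(Y = y) / P(S = s, Y = y)
    """
    sensitive = np.asarray(sensitive)
    labels = np.asarray(labels)
    n = len(labels)
    weights = np.zeros(n, dtype=float)
    for s in np.unique(sensitive):
        for y in np.unique(labels):
            mask = (sensitive == s) & (labels == y)
            p_expected = np.mean(sensitive == s) * np.mean(labels == y)
            p_observed = np.count_nonzero(mask) / n
            if p_observed > 0:
                weights[mask] = p_expected / p_observed
    return weights

# Toy data: group 1 gets the positive label far more often than group 0.
s = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([0, 0, 0, 1, 1, 1, 1, 0])
w = reweighing_weights(s, y)
print(np.round(w, 2))  # under-represented (s, y) pairs receive weights > 1
# These weights can be passed as sample_weight to most classifiers' fit().
```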
Figure 3.6 Comparison of different ML techniques, explainability, and performance (as presented by DARPA) [64].
helps to elevate the security of XAI. Other attempts to enhance the security
of XAI are providing a taxonomy for XAI methods [81], Structured adver-
sarial attacks [82], and explaining explanations [83].
3.4 FUTURE TRENDS IN XAI FOR AUTONOMOUS VEHICLES

The study of future trends in XAI relies on current knowledge and challenges. In XAI for AD, any decision requires an explanation from the intelligent driving system. These explanations take visual or textual form or appear as feature importance scores [16]. Figure 3.9 shows how a textual explanation is used in an autonomous driving task: a natural language text summarizes a visual observation and is then used to predict an appropriate action [32]. The importance of an explainable vision could be achieved through post hoc explanations. As mentioned earlier, post hoc explanations are helpful in many aspects of XAI. Post hoc explanations can be a simple yet effective method when a regulatory investigation of a decision is required, as in the “Molly Problem,” where “A young girl called Molly is crossing the road alone and is hit by an unoccupied
3.5 CONCLUSION
REFERENCES
[13] A. Adadi and M. Berrada, “Peeking inside the black-box: A survey on explain-
able artificial intelligence (XAI),” IEEE Access, vol. 6, pp. 52138–52160,
2018.
[14] S. Atakishiyev, M. Salameh, H. Yao, and R. Goebel, “Towards safe, explain-
able, and regulated autonomous driving,” arXiv preprint arXiv:2111.10518,
2021.
[15] D. Omeiza, H. Webb, M. Jirotka, and L. Kunze, “Explanations in autono-
mous driving: A survey,” IEEE Transactions on Intelligent Transportation
Systems, vol. 23, no. 8, pp. 10142–10162, 2021.
[16] S. Atakishiyev, M. Salameh, H. Yao, and R. Goebel, “Explainable artificial
intelligence for autonomous driving: A comprehensive overview and field
guide for future research directions,” arXiv preprint arXiv:2112.11561,
2021.
[17] M. N. Ahangar, Q. Z. Ahmed, F. A. Khan, and M. Hafeez, “A survey of auton-
omous vehicles: Enabling communication technologies and challenges,” Sen-
sors, vol. 21, no. 3, p. 706, 2021.
[18] A. Biswas and H.-C. Wang, “Autonomous vehicles enabled by the integration
of IoT, edge intelligence, 5g, and blockchain,” Sensors, vol. 23, no. 4, p. 1963,
2023.
[19] P. P. Angelov, E. A. Soares, R. Jiang, N. I. Arnold, and P. M. Atkinson,
“Explainable artificial intelligence: an analytical review,” Wiley Interdis-
ciplinary Reviews: Data Mining and Knowledge Discovery, vol. 11, no. 5,
p. e1424, 2021.
[20] A. Chaddad, J. Peng, J. Xu, and A. Bouridane, “Survey of explainable AI
techniques in healthcare,” Sensors, vol. 23, no. 2, p. 634, 2023.
[21] V. Chamola, V. Hassija, A. R. Sulthana, D. Ghosh, D. Dhingra, and B. Sikdar,
“A review of trustworthy and explainable artificial intelligence (XAI),” IEEE
Access, p. 1, 2023. https://doi.org/10.1109/ACCESS.2023.3294569
[22] W. Saeed and C. Omlin, “Explainable Ai (XAI): A systematic meta-survey of
current challenges and future opportunities,” Knowledge-Based Systems, vol.
263, p. 110273, 2023.
[23] M. Nagahisarchoghaei, N. Nur, L. Cummins, N. Nur, M. M. Karimi,
S. Nandanwar, S. Bhattacharyya, and S. Rahimi, “An empirical survey on
explainable AI technologies: Recent trends, use-cases, and categories from
technical and application perspectives,” Electronics, vol. 12, no. 5, p. 1092,
2023.
[24] R. Deshpande and P. Ambatkar, “Interpretable deep learning models: Enhanc-
ing transparency and trustworthiness in explainable AI,” Proceedings of the International Conference on Science and Engineering, vol. 11, pp. 1352–1363, 2023.
[25] G. P. Reddy and Y. V. P. Kumar, “Explainable AI (XAI): Explained,” in
2023 IEEE Open Conference of Electrical, Electronic and Informa-
tion Sciences (eStream), pp. 1–6, IEEE, 2023. https://doi.org/10.1109/
eStream59056.2023.10134984
[26] S. Ali, T. Abuhmed, S. El-Sappagh, K. Muhammad, J. M. Alonso-Moral,
R. Confalonieri, R. Guidotti, J. Del Ser, N. Díaz-Rodríguez, and F. Herrera,
“Explainable artificial intelligence (XAI): What we know and what is left
to attain trustworthy artificial intelligence,” Information Fusion, vol. 99,
p. 101805, 2023.
Chapter 4
XAI applications in autonomous vehicles
Lina E. Alatabani and Rashid A. Saeed
4.1 INTRODUCTION
DOI: 10.1201/9781003502432-4
D(n, m) = Σ_{k=1}^{K} |V_0(k) − V_{n,m}(k)|        (1)

where V_0 is the vector of the last hidden layer for the original input image, V_{n,m} is the vector of the last hidden layer when inputting sub-image (n, m), and K is the number of elements in the vector [13].
The algorithm works by gray-scaling sub-regions of the input and then generating an explanation image. Four test images were used from a dataset containing two categories of images, “vehicle” and “non-vehicle”. A test image was input to the pre-trained CNN to obtain preliminary results. In the difference-calculation stage, the larger the difference, the better the resulting explanation. As a result, the methodology helped analyze the prediction capabilities of the CNN model to ensure its credibility, in addition to analyzing the reasons why CNN models might fail in detecting autonomous vehicle accidents [14].
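A minimal sketch of this occlusion-style procedure is shown below, assuming a generic model interface (a function returning the last-layer vector) and a patch size chosen for illustration; neither is taken from the original paper:

```python
import numpy as np

def occlusion_explanation(model_predict, image, patch=16):
    """Score each (n, m) patch by how much gray-scaling it changes the
    model's last-layer vector, in the spirit of Eq. (1).

    model_predict: function mapping an HxWxC image to a 1-D vector.
    """
    v0 = model_predict(image)                       # V_0 for the intact image
    h, w = image.shape[:2]
    scores = np.zeros((h // patch, w // patch))
    for n in range(scores.shape[0]):
        for m in range(scores.shape[1]):
            occluded = image.copy()
            ys, xs = n * patch, m * patch
            region = occluded[ys:ys + patch, xs:xs + patch]
            # Gray-scale sub-image (n, m) only: replace each pixel's channels
            # by their mean.
            occluded[ys:ys + patch, xs:xs + patch] = region.mean(
                axis=2, keepdims=True)
            v_nm = model_predict(occluded)          # V_{n,m}
            scores[n, m] = np.abs(v0 - v_nm).sum()  # the difference of Eq. (1)
    return scores

# Toy "model" whose output depends only on the red channel of the
# top-left image corner.
toy_model = lambda img: np.array([img[:32, :32, 0].sum()])
heatmap = occlusion_explanation(toy_model, np.random.rand(64, 64, 3))
print(np.round(heatmap, 1))  # only the four top-left patches score > 0
```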
76 Explainable artificial intelligence for autonomous vehicles
The IoV is a technology that allows vehicles to communicate with their sur-
roundings in a connected manner using a certain network topology. The
minimal IoV network architecture consists of three major elements: (1)
cloud, (2) connection, and (3) clients. The main target of IoV is to process
the collected data within the network securely to improve the quality of
autonomous behavior and road safety [27]. This is all incorporated with new and emerging technologies, such as the fifth generation (5G) of mobile networks, whose features, including Vehicular Cloud Computing (VCC), advanced signal processing, and Software Defined Networking (SDN), allow the requirements of Intelligent Transport Systems (ITS) to be fully met.
The cloud is responsible for the storage and computing of all data distrib-
uted on the network. The communications element is the backbone of the
architecture that encompasses wireless communication technology such as
5G. The client elements are the end users using the IoV system [28].
Figure 4.2 Autonomous vehicle components and their functions and the levels of autonomy.
I. Vehicle-to-Sensors (V2S)
Sensors and actuators are required for communication with other devices
and infrastructure using different technologies.
Vehicles within the IoV infrastructure send alerts and important messages
to drivers of vehicles and motorcycles. Interaction of vehicles with the envi-
ronment and motorcycles uses Bluetooth to send alerts to the motorcycle
driver’s audio helmet.
This interaction model states that a number of devices connect and com-
municate with one another through the use of IP networks or the Internet.
They use protocols such as Bluetooth, ZigBee, or Z-Wave to develop direct
D2D communication.
V. Vehicle-to-Pedestrian (V2P)
X. Vehicle-to-Everything (V2X)
V2X stands for the ability to communicate between the vehicle and other
elements within an IoV system, such as sensors and other sources of informa-
tion, to collect data through high-bandwidth, low-latency, and highly reli-
able links to reach a fully autonomous driving experience.
With the introduction of electric vehicles (EVs), the V2G system has intro-
duced benefits to power systems by stabilizing the fluctuations and instability
of energy demands using Plug-in electric vehicles to reduce energy costs [29].
These sensors measure the distance between a car and any surrounding
obstacles, deriving car speed, road conditions, and overall vehicle behavior.
LiDAR uses laser light to calculate the distance to both still and moving objects.
The CCU processes and analyzes the data collected from sensors and maps them to road conditions. AI mechanisms and methods are continuously applied in the CCU to improve the autonomy experience.
V. Video Cameras
Video cameras detect traffic lights and moving objects and help in parking and detecting road conditions. Modern cars have a 360° camera view [32].
The autonomy of vehicles is characterized by six levels, ranging from Level 0 up to Level 5. Level 0 means no automation, and Level 5 aims at full automation of the driving experience, eliminating driver-related errors, i.e., the human factor.
Autonomous car developers use modern design techniques, such as simulation and virtual modeling, for better optimization and integration of the components used in vehicle development, achieved with hardware and software from the automotive concept-design discipline [32].
4.3.1.1 AV technologies
Emerging technologies in autonomous vehicles aim to reduce the rate at which accidents occur; consequently, vehicle weight decreases as AV technologies are integrated into the majority of vehicles. Emerging technologies such as the Lane Change Assistant (LCA) will help achieve the objectives of AVs [33].
The park assist system informs the driver through display, audio, or optical signals when the driver is attempting to park the vehicle, and it helps the driver maneuver in tight spaces.
The lane departure warning system sends vibrations through the vehicle's seat or steering wheel when the driver is about to leave the lane accidentally.
Using radar sensors mounted on the front of the vehicle, Forward Collision Warning (FCW) issues a warning when the distance between the vehicle and the one in front of it becomes short.
Using a distance sensor and an adjacent lever, the Adaptive Cruise Control (ACC) component measures the distance to, and speed of, the vehicles ahead.
The lane keeping system detects lane markings and performs corrective actions to keep the vehicle in its lane and prevent it from deviating into other lanes.
[Figure 4.3 depicts the vehicle's sensor suite (long-range radar, short- and medium-range radar, camera, long-range LiDAR, short-range LiDAR) and the functions each supports, including park assist, surround view, adaptive cruise control, automatic collision avoidance, traffic sign recognition, rear cross-traffic alert, and collision warning.]
Figure 4.3 Autonomous vehicle components and their function and the levels of autonomy.
The ECU is responsible for adjusting the travel speed by managing the digital throttle of the vehicle's power engine. The BCU controls the counterforce of the system and activates the digital brake system when it is called by the ACC module.
III. The use of sensors (such as the Brake Pedal Sensor, Radar, and Accelerator Pedal Sensor) and actuators (such as the Brake Actuator [BA] and Throttle Actuator [TA]) is increasing rapidly as additional functions appear in autonomous vehicles. The main functionality of the Brake Assist Controller is to control vehicle speed by sending signals to the vehicle's throttle actuator (TAC), whose main function is to manage the throttle value based on the needs of the ACC system.
IV. The radar system contains a set of multiple radars installed at the front, rear, and sides of the vehicle to detect surrounding objects. It can detect vehicles and objects up to 120 meters away because the radar system uses three overlapping radar beams at a frequency of 76–77 GHz.
I. Public Transportation
With the aim of decreasing the public transportation load and improving the utilization of available resources, the first autonomous microbus was tested in Finland in 2018. It is mostly suitable for transporting people over short distances, such as shuttling employees.
This can be categorized into two types: post hoc, in which the explanation is provided in addition to the black-box model, and by design, in which explainability is built in during the model training phase. The complexity of ML models is inversely related to their interpretability; an extremely complex model provides less interpretability [40–42].
II. Model-Related
This is classified into model-specific methods, which can only be used with specified types of algorithms, and model-agnostic methods, which can be applied to any algorithm. Model-specific interpretation restricts the selection of models to certain types, which decreases the model's representational power and degrades accuracy. The opposite applies to model-agnostic interpretation [43].
III. Scope-Related
This has two classifications: global and local. Global means that the overall system should be understood, along with the relationships between its inputs and predictions, whereas local focuses on explaining individual predictions.
I. Explanation by Influence
V. Text Explanation
I. Explainability
II. Interpretability
• Simulatability
This is viewed at the model level, where transparency is defined by the ability of a person to understand the entire model. A user should be able to take the input data, place it side by side with the model parameters, and produce the predictions accordingly.
• Decomposability
• Algorithmic
4.4.2.1 Algorithm 1
Following equation one in the related work section, a proposed CNN-based algorithm first inputs an image at its original size, IO, into the CNN and generates the vector of the last hidden layer, VO. In the next step of the algorithm, the original image is subdivided into n × m sub-images, and the new sub-images I0,0, . . ., I(N−1),(M−1) are input into the CNN to generate the last-hidden-layer vector once again. Finally, the difference dif(VO, Vn,m) between VO and Vn,m is calculated using the Euclidean distance. All of these steps prepare for the explanation step, in which a copy of the original image IO, denoted IC, is made; the area of IC corresponding to In,m, n = 0, . . ., N−1, m = 0, . . ., M−1, is filled with purple, whose opacity is proportional to the value dif(VO, Vn,m): the larger this value, the more visible the purple in the filled area. With these insights, the copied image IC approximates the part of the image that impacted the final prediction of the model. The last-hidden-layer output is used instead of the final prediction because the last hidden layer contains more parameters and its output value is sensitive to differences between the inputs.
The proposed algorithm has three main functions: grayscale(img), predict(img, model, i), and fill(img, y0, y1, x0, x1, opacity, explan). Function grayscale(img) takes an input image img, converts it to grayscale, and returns the gray-scaled image. Function predict(img, model, i) takes an input image img, a convolutional neural network model, and a layer index i = 0, . . ., layers − 1, where layers equals the number of layers in the model; here iout < iin, where iout is the index of a layer whose output feeds the input of the next layer, iin. Function fill(img, y0, y1, x0, x1, opacity, explan) takes the original image img and integer values y0, y1, x0, x1; it copies the original image and fills the given area with black when explan = false or purple when explan = true, with an opacity ranging from 0.0 to 0.1 when explan = false and set to 0.75 when explan = true. Here x0 to x1 (x0 < x1) represents the horizontal range of the area, and y0 to y1 (y0 < y1) represents the vertical range [14].
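Below is a sketch of the fill helper as described, assuming RGB images stored as NumPy arrays; the purple value (128, 0, 128) and the linear blending rule are assumptions, since the chapter does not specify them.

```python
# Sketch of fill(img, y0, y1, x0, x1, opacity, explan).
import numpy as np

def fill(img, y0, y1, x0, x1, opacity, explan):
    """Return a copy of `img` with the rectangle [y0:y1, x0:x1] blended
    toward purple (explan=True) or black (explan=False) at `opacity`."""
    out = img.astype(float)                              # working copy
    color = np.array([128.0, 0.0, 128.0]) if explan else np.zeros(3)
    region = out[y0:y1, x0:x1]
    out[y0:y1, x0:x1] = (1.0 - opacity) * region + opacity * color
    return out.astype(img.dtype)
```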
4.4.2.2 Algorithm 2
The YOLOv4 algorithm was proposed for traffic sign recognition with the use of a suitable dataset. It is constructed with a three-element structure consisting of (1) backbone, (2) neck, and (3) head. The backbone uses a neural activation function named mish:

$\text{mish}(x) = x \tanh\big(\ln(1 + e^{x})\big)$

The detection head decodes each bounding box from the network outputs as

$b_x = \sigma(t_x) + c_x, \quad b_y = \sigma(t_y) + c_y, \quad b_w = p_w e^{t_w}, \quad b_h = p_h e^{t_h}$ (2)

where x, y are the box center coordinates, w, h are the width and height estimates, t is the network output, $c_x$ and $c_y$ are the top-left coordinates of the grid cell, and $p_w$ and $p_h$ are the anchor dimensions of the box.
The CNN also estimates the class probabilities $class = [P_1, P_2, P_3, P_4]^T$.
III. The confidence $IOU^{truth}_{pred}$ of each of the K bounding boxes is compared against the threshold $IOU_{thres}$: if $IOU^{truth}_{pred} > IOU_{thres}$, the bounding box contains the object; otherwise, it does not.
IV. The category with the largest probability value is chosen as the target type.
V. Non-Maximum Suppression (NMS) is used to perform a full local search and discard duplicated boxes before output. A compact sketch of these detection-head steps is given below.
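The following is a minimal sketch of the detection-head logic just described: the mish activation, bounding-box decoding, and NMS. It is illustrative only; the helper names, array shapes, and the 0.5 IOU threshold are assumptions rather than the chapter's implementation.

```python
# Sketch of mish, YOLO-style box decoding, and NMS.
import numpy as np

def mish(x):
    # mish(x) = x * tanh(ln(1 + e^x))
    return x * np.tanh(np.log1p(np.exp(x)))

def decode_box(t, cx, cy, pw, ph):
    """Decode one box from raw outputs t = (tx, ty, tw, th)."""
    tx, ty, tw, th = t
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    bx = sigmoid(tx) + cx          # center x, offset from the cell corner
    by = sigmoid(ty) + cy          # center y
    bw = pw * np.exp(tw)           # width, scaled anchor
    bh = ph * np.exp(th)           # height, scaled anchor
    return bx, by, bw, bh

def iou(a, b):
    """IOU of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms(boxes, scores, iou_thres=0.5):
    """Keep the highest-scoring box, drop overlapping duplicates, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(int(best))
        order = np.array([i for i in order[1:]
                          if iou(boxes[best], boxes[i]) <= iou_thres])
    return keep
```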
For XAI, three more steps are added to the detection model: mask generation, model inquiry, and saliency estimation.
I. Mask Generation
For an input image I and a bounding box b = (x1, y1, x2, y2) detected by a detection function f, a set of local masks LM = {lm1, lm2, . . ., lmN} is generated by masking various parts of b and its closest pixels. The local masking area MA is calculated from the coordinates of b extended by five surrounding pixels: MA = (x1 − 5, y1 − 5, x2 + 5, y2 + 5). The masking process starts by dividing the masking area in two in the horizontal and vertical directions; the subdivided areas are then divided in two again, and this continues until the sub-areas are at most 20 pixels across. Each sub-area is masked by replacing its pixel values with their average. Local masks aim at "estimating the importance of pixels within a detected object and its surroundings".
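The following sketch illustrates this local-mask generation, assuming NumPy images and boxes in pixel coordinates; the recursive halving stops once both sides of a sub-area are at most 20 pixels, matching the description above.

```python
# Sketch of local mask generation by recursive subdivision.
import numpy as np

def local_masks(image, box, margin=5, min_size=20):
    """Yield (masked_image, region) pairs for sub-areas of box=(x1,y1,x2,y2)."""
    x1, y1, x2, y2 = box
    x1, y1 = max(0, x1 - margin), max(0, y1 - margin)     # extend by 5 px
    x2 = min(image.shape[1], x2 + margin)
    y2 = min(image.shape[0], y2 + margin)

    def split(ax1, ay1, ax2, ay2):
        # Halve each dimension that is still larger than min_size.
        w, h = ax2 - ax1, ay2 - ay1
        if w <= min_size and h <= min_size:
            yield (ax1, ay1, ax2, ay2)
            return
        xs = [ax1, (ax1 + ax2) // 2, ax2] if w > min_size else [ax1, ax2]
        ys = [ay1, (ay1 + ay2) // 2, ay2] if h > min_size else [ay1, ay2]
        for i in range(len(xs) - 1):
            for j in range(len(ys) - 1):
                yield from split(xs[i], ys[j], xs[i + 1], ys[j + 1])

    for sx1, sy1, sx2, sy2 in split(x1, y1, x2, y2):
        masked = image.copy()
        # Replace the sub-area with the average of its pixel values.
        masked[sy1:sy2, sx1:sx2] = image[sy1:sy2, sx1:sx2].mean()
        yield masked, (sx1, sy1, sx2, sy2)
```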
Global masking is a process that aims to detect outliers or data irrelevant to the detected object. It works by masking parts of the input image and determining whether these regions affect the output of the model. Global masks usually come in two pixel sizes: 20 × 20 and 50 × 50. Global mask generation starts by masking a 20 × 20 or 50 × 50 square at the upper-left corner of the image. The square then moves in the vertical and horizontal directions, and the process continues until the masking reaches the lower-right corner of the image; pixel values are replaced in the same way as in the local masking process.
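A matching sketch of global mask generation follows, with the same mean-value masking; a stride equal to the window size is an assumption, since the chapter does not state the step length.

```python
# Sketch of global mask generation with a sliding square window.
import numpy as np

def global_masks(image, size=20):
    """Yield (masked_image, region) pairs for each size x size window."""
    h, w = image.shape[:2]
    for y in range(0, h, size):
        for x in range(0, w, size):
            masked = image.copy()
            patch = image[y:y + size, x:x + size]
            masked[y:y + size, x:x + size] = patch.mean()
            yield masked, (x, y, min(x + size, w), min(y + size, h))
```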
Model inquiry acts as a mediator between the object detection model and the explanation model. This module receives the masked images, submits them to the object detection model to generate new bounding boxes, and then sends the results to the saliency estimation module. The inquiry module is model-agnostic: it has no information about the AI technique used, whether the neural network, activation function, loss function, or hyperparameters of the ML model. In other words, it is a stand-alone method that can explain any object detection model, regardless of which ML model is used.
This module collects the original estimate generated by the object detector, along with the estimates generated for the local and global masks. It calculates the difference between the original and new predictions and, finally, develops a heat map that visualizes the importance of the difference between the original and new images for a particular detected object within an image.
This process can be represented mathematically with the following inputs: an input image I of size w × h; an original bounding box b = (x1, y1, x2, y2) produced by a detection function f; local masks LM = {lm1, lm2, . . ., lmN}; global masks GM = {gm1, gm2, . . ., gmJ}; and the set of new bounding boxes B′ = {b′1, b′2, . . ., b′K} detected by f on all the globally and locally masked copies of the input image I, where b′k = (x′1, y′1, x′2, y′2). The goal is to calculate a saliency map in the form of a matrix of size w × h.
The mathematical procedure is as follows. First, the similarity between the original and new bounding boxes is calculated as the maximum Intersection over Union (IOU) between b and every $b'_k \in B'$:

$\text{Similarity}(b, B') = \max_{b'_k \in B'} IOU(b, b'_k)$

This is done for all local masks $lm_n \in LM$. The similarity value falls in the range between 0 and 1 and is compared against the IOU threshold.
If the value is larger than the IOU threshold, then an object detected within the new masked image shares a large amount of area with the object in the original image; i.e., the original detected box b resembles the new bounding box $b'_k \in B'$ of the masked image. However, if the similarity is smaller than the IOU threshold, the difference between the original and new masked images is large. This difference can be calculated using the minimum distance between b and $b'_k \in B'$:
$\text{Distance}(b, b'_k) = \dfrac{1}{IOU(b, b'_k)}$

where the distance between two bounding boxes is measured as the inverse of the IOU between them. With the addition of the distance between b and B′, the saliency map can be calculated.
The same saliency process used for the local masks is applied to the global masks $gm_j \in GM$. In the final stage of the explanation method, the heat map is visualized by mapping the saliency map SM onto the input image with different colors that indicate the levels of importance [54].
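A minimal sketch of this saliency-estimation step is given below, assuming each mask is recorded with its masked region and the boxes re-detected on that masked copy. The additive weighting of the heat map is an assumption, since the chapter's exact combination formula is not reproduced here.

```python
# Sketch of saliency estimation from IOU similarity and inverse-IOU distance.
import numpy as np

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def saliency_map(shape, b, mask_results, iou_thres=0.5):
    """`mask_results` is a list of (region, new_boxes) pairs: the masked
    area (x1, y1, x2, y2) and the boxes detected on that masked copy."""
    h, w = shape
    sm = np.zeros((h, w))
    for (x1, y1, x2, y2), new_boxes in mask_results:
        if not new_boxes:
            continue
        sim = max(iou(b, bk) for bk in new_boxes)      # similarity in [0, 1]
        if sim > iou_thres:
            continue                                   # detection unchanged
        # Large change: weight the region by the minimum inverse-IOU distance.
        dist = min(1.0 / (iou(b, bk) + 1e-9) for bk in new_boxes)
        sm[y1:y2, x1:x2] += dist
    return sm
```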
4.6 DISCUSSION
4.7 CONCLUSION
REFERENCES
5.1 INTRODUCTION
In the era of the internet, transportation systems can be seen as a mix of tra-
ditional and technological systems where traditional consists of autonomous
vehicles and technological consists of connected vehicles [1–3]. Autonomous
vehicles focus on intravehicle communication, whereas connected vehicles
focus on intervehicle communication. The motivation behind developing a
transportation system always remains safety by overcoming several issues
and challenges in existing systems. The evolution of IoV started with the
invention of wheels and continues through self-driving cars ranging from
light to heavy vehicles [4]. In the initial years after the invention of the automobile, evolution was very slow, but in the last two decades it has grown exponentially following the merging of Information and Communication Technologies (ICT) with the transportation sector. ICT has significantly reduced issues like congestion and improved safety, security, and infotainment on the road, helping to avoid casualties. It is leading to smart transportation and
advanced traffic management systems aimed at smart cities [5–8]. When ICT
is merged into the transportation system, it is called Intelligent Transporta-
tion System (ITS). This term evolved in the 20th century. In the 21st century, significant innovation was observed in ITS, and an infrastructure-less network of vehicles was invented, termed C-ITS, where C stands for cooperative. Later, it was termed the Vehicular Ad hoc Network (VANET), a form of Mobile Ad hoc Network. It created the concept of intervehicle
communication. Then, in 2010, with the rise of IoT, a third layer, the internet cloud of vehicles, was added on top of the vehicular cloud, starting a new technology in the transportation system called the Internet of Vehicles (IoV). Figure 5.1 shows the changes in, and years of, technological inventions in the transportation system. The invention of the internet cloud in intervehicle communication is the basis of IoV and of its future form, cloud-based IoV (CIoV) [9]. At the same time, the internet cloud is the platform of the IoV and CIoV frameworks. In CIoV, all transportation solutions are
based upon machine learning and Artificial Intelligence [10].
• No static topology.
• Limitation on bandwidth (it is very low).
• Distributed control.
• No stable communication channel.
• Security threats.
• Limited storage and power.
• Shared communication channel causing information loss.
1. The perception layer gathers data from the various actuators in the
area, such as sensors inside the car (for example, speed, direction and
position of the car, as well as driver attitudes) and outside the car (for
example, traffic environments and weather conditions), and securely
transmits the information to the coordination layer [30–33]. This layer
consists of numerous gadgets, smartphones, roadside units (RSUs),
roadside actuators (RAs) and other components of the intelligent
infrastructure.
2. The coordination layer ensures that information gathered from the perception layer is transferred securely and in a unified structure (e.g., over 5G) for processing by artificial intelligence. The network layer is crucial to the Internet of Vehicles [34]. It comprises a module to manage the disparate networks connecting the various components of the intelligent infrastructure and guarantees the ability to exchange data autonomously. WAVE, 4G and 5G, Wi-Fi, WLAN, Bluetooth, and satellite networks are among the networks that smart automobiles employ for communication. These networks transfer data between the artificial intelligence layer and the perception layer.
3. The IoV ecosystem’s AI layer, represented by the virtualized cloud sub-
structure, is its brain. It is in charge of managing the data from the
coordinating layer and uses algorithms to make decisions to examine
this aggregating data. In the cloud environment, it oversees a variety of
services based on a careful examination of the information obtained. In
order to assess the collected data and determine what action is neces-
sary at any given time, it uses ML, AI and cloud computing. It consists
of specialized equipment, big data analysis tools and cloud computing
components like self-driven vehicles that use image processing to detect
objects on the road.
4. The application layer creates intelligent applications that inform end
users of what the AI layer has learned. The IoV technology is commer-
cialized by the application layer. The AI layer’s outputs are used to give
end users applications like remote car control, multimedia watching
services, driving aids and safe driving apps, among others. This layer
disseminates user data and sends it to the business layer for further
processing [35].
5. The business layer employs a business model that assesses the budget planning for operating the applications by analyzing the relevant statistics.
Habib et al. [38] reviewed popular and developing IoV frameworks and layered network structures. A novel, modern design of the Internet of Vehicles (IoV) model was proposed to assess the efficiency of the IoV ecosystem as a network structure, encompassing the processes of detecting, collecting, storing, and processing acquired data and, ultimately, utilizing the services. The client layer, communication layer, cloud layer, and cater service layer are the four levels that make up CIoV. The primary features of IoT application ecosystems, which can be implemented for various and diverse communication models, are taken into consideration when approaching CIoV. The CIoV protocol suite was presented for each layer of the design as a case study, taking operational and security dimensions into account.
Sharma et al. [39] presented an IoV communication system that consists of three zones: the car zone, the collaboration zone, and the smart city zone. To achieve this, the authors defined an in-car gateway design that permits communication and information sharing across these three zones and presented the gateway's hardware and software architecture. The novel architecture could support a number of cutting-edge services, including driverless driving; accordingly, the key information flows and functions for autonomous driving services were emphasized, and an autonomous lane-changing method was suggested to support the proposal. The authors plan to deploy the gateway in the targeted network to verify its effectiveness in future work.
Bhardwaj [40] presented a seven-layered architecture for IoV. This paper
first identified the key components of the IoV ecosystem, outlining their func-
tions and proposing a model for how they interact. It also introduced a net-
work paradigm made up of the environmental model and the intra-vehicular
model. This study developed a protocol stack that meets the standards
needed for interoperability across all parties and incorporates current proto-
cols into the IoV ecosystem. It concluded by presenting a situational analysis and a sequence diagram that illustrate how the suggested seven-layered architecture might be used in a practical IoV situation. It argued that the suggested architecture will help readers understand the interaction model, network model, and layered architecture, and that IoV will encourage the fusion of the automotive and IT industries.
In Kumar et al. [41], it was suggested that vehicles having IoV capabilities
might scan the passengers for any potential health hazards, such as a driver
who is intoxicated or patients who require medical attention, and then notify
the appropriate health care services or authorities. Ensuring that the driver is
in a state that is safe for driving will significantly reduce the rate of careless
driving and traffic accidents. Pressure sensors can adjust the wheels to the
appropriate working situations, good reflex frameworks can be integrated
and special actuators can improve the cabin’s interior environment. Addi-
tionally, it is possible to link the driver’s identity to the car so that only the
driver may operate it. If the car is stolen, all available information will be
relayed to the driver, including whether the car is in motion or parked. IoV
may also implement a smart template matching system that makes it simple
for traffic police to identify the vehicle, ensuring theft protection.
A machine learning-based Delimited Anti-Jamming approach was put
forth by Sunil Kumar et al. (2019) and was useful in scenarios involving
vehicular traffic. The discriminating signal of the jamming car is found
and filtered to determine the specific location of the affected vehicles. The
research in question specifically developed a model for a vehicular interfer-
ence system emphasizing the geolocation of cars in the defined jamming
circumstances. To assess the performance, the existing methods and the sug-
gested anti-jammer method were compared. The proposed approach proved
to perform better than existing approaches based on a variety of perfor-
mance factors.
Bhardwaj et al. [42] defined CIoV as a network-centric framework for
intelligent cars, wherein vehicles are primarily thought of as context-aware
agents under the vehicular networking notion. It is a sensory network made
up of road components like cars and other related infrastructure that allows
for little human engagement while receiving real-time information from the
road and covering the physical world, including people and the environ-
ment. It used integrated cognitive algorithms and in-vehicle applications to observe real-time data in live broadcasts and perform computations such as pattern matching. This evolving architecture facilitated and maintained data for vehicle-to-vehicle and thing-to-thing communication in each cloud; utilizing cutting-edge machine learning techniques like deep learning, artificial neural networks, and data mining, this heterogeneous database was used for knowledge discovery.
For large-scale data gathering, transfer and processing from smart items
within the smart city environment, the IoV can be employed as a practical solu-
tion to meet the needs of smart cities. The vehicle nodes inside the IoV do not
experience the resource limitations of limited battery power and information
processing as do traditional wireless sensor networks (WSNs). Vehicles serve
as mobile smart nodes or objects inside the sensing network within the IoV.
There are lots of challenges and issues with existing IoV technology. The
rapid evolution of the Internet of Vehicles is breaking down many barriers on
the path to building smart cities, but several significant challenges still remain.
Next-generation vehicular networks are assumed to give users the ability to feel, comprehend, learn, and think independently of the physical and social worlds. This assumption gave rise to the Cloud-based
Internet of Vehicles (CIoV) paradigm, which enables cognitive computing
capabilities for networks using technologies like software-defined network-
ing, artificial intelligence, machine learning and similar ones. Its primary goal
is to create a link between the social environment, which includes human
demand, awareness and social behavior, and the transportation system,
which includes vehicles and road infrastructure. In the context of smart cit-
ies, the evolved CIoV is working to integrate the vast real physical environ-
ment with digital control by sensing, comprehending and programming it.
As a result, information loss and loss of control over hosts or devices are possible outcomes. Any information security breach poses a risk to commuters' personal relationships as well as their lives on the road. Absolute security must be achieved to develop a transportation system in which network nodes, peers, and the cloud can share trustworthy and authentic information.
5.6 CONCLUSION
REFERENCES
6.1 INTRODUCTION
Previous studies have looked into issues with vehicle-to-everything (V2X) communication. Vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-roadside-unit (V2R), vehicle-to-pedestrian (V2P), vehicle-to-grid (V2G), vehicle-to-building (V2B), vehicle-to-device (V2D), and vehicle-to-cloud (V2C) communications are all part of V2X, as indicated in Figure 6.1. Efficient implementation of IoV applications will be possible once the obstacles to integrating these heterogeneous entities have been overcome. Vehicular networks are also defined by their inherent volatility and variability [6]. Since this changeability impacts data transmission in the network, maintaining consistent communication is challenging. Moreover, the growth of communication networks and Artificial Intelligence (AI) will lead to enhanced capabilities in traffic management systems. With this development comes the need for IoV-connected organizations to meet varying QoS criteria for things like real-time traffic monitoring, low-latency communication, and minimal packet loss.
The interaction of IoV entities presents a complex interaction framework that does not ensure frictionless interactions and real-time information sharing for services such as collision avoidance systems. In the related smart healthcare domain, devices transmit health data remotely. Doctors and clinical care systems receive the data gathered from a variety of smart healthcare devices and use it to take appropriate action.
that measure everything from blood pressure and glucose levels to heart
rate and temperature to how much you walk around throughout the day.
In addition, IoE is utilized in numerous smart home applications, including
window/door/room temperature/light/security alarm management. All the
aforementioned applications need to have a fast response time in order to send signals to the user or server. Efficient high-speed wireless networks are required to perceive and collect varied data from IoT devices [9]. These days, wireless
network technologies like 4G and 5G have undergone tremendous improve-
ments and advancements. Thanks to recent developments in wireless net-
working technology, physical devices may now detect signals and send them
to the client and server in a timely fashion. In addition, the 5G network is
utilized to fix problems with wireless connections and delays in data trans-
mission between the various Internet of Things devices and the user or server.
The 5G network is essential in advancing IoT to IoE technology. Everyday
life is being revolutionized by the implementation of cutting-edge technolo-
gies like the Internet of Things (IoT), the Internet of Everything (IoE), and
the Fifth Generation Mobile Network (5G cellular).
In this study, we contribute the following:
• We discuss all the parts of the IoV environment that are important for achieving interoperable and smooth integration.
• Based on a scan of the existing literature, we provide the interoperability criteria for IoV and evaluate potential research avenues moving forward.
• We propose five groups of interoperability issues in IoV that prevent us from achieving the goals of a unified IoV ecosystem.
• We summarize the interoperability approaches in the literature and the difficulties that remain in putting them into practice.
• We discuss the unresolved issues and highlight upcoming research directions in IoV cooperation.
Several works address interoperability issues in the smart home environment. Moon et al. [11] suggest building a service called ubiquitous middleware connection, and Park et al. [12] propose a method called the video production room bridge adapter. Difficulties with incompatibility between in-home and out-of-home gadgets are addressed by Park et al. using a paradigm that applies virtual network systems in the ubiquitous home. In the context of smart manufacturing, Zeid et al. [13] zero in on the difficulties of interoperability. The authors investigate the syntactic and semantic interconnection of factories and the cloud in production, along with the architectural model solutions provided by prominent platforms such as Industry 4.0 and the Industrial Internet Consortium (IIC). Relationship (network connectivity), interaction (data syntax), semantic (data understanding), dynamic (context changes), cognitive (action matching), and conceptual (modeling and abstraction) are the layers of interoperability that make up the adaptation framework presented by Pantsar-Syväniemi et al. [14] for situation-based and self-adaptive applications in a connected world. However, the aforementioned projects are likely to apply only to smart homes, smart factories, and smart environments rather than the IoV as a whole.
Noura et al. [15] provide a comprehensive overview of the difficulties
inherent in interconnecting heterogeneous devices and the solutions avail-
able across the various IoT platforms. Tools supplied by adapters/gateways
can be used, such as mediators; virtual networks can be formed on top of
the physical layer; networking technologies, open APIs, and service-oriented
architectures can all be found at the network layer [16]. According to Lee
et al. [17], worldwide standards that have been created and adopted by
official bodies are necessary for achieving interoperability and security in
the IoT.
As a result, Jain and Bhardwaj investigate and synthesize the worldwide
norms for IoT interoperability and security [18].
In order to tackle the issue of semantic interoperability in the Internet of
Things, Rahman et al. [19] categorize current solutions into ontology, mid-
dleware, and the Semantic Web. Additionally, outstanding research issues
are highlighted, and frameworks and tools for assessing and evaluating IoT
interoperability are described by Bhardwaj and Ahalawat [20].
Lack of resources, proprietary technology, network complexity, differing security requirements, and heterogeneous devices are all mentioned as potential obstacles to tackling IoT interoperability by Konduru and Bharamagoudray [21]. Google Weave, IoTivity, Alljoyn, and Apple Home Kit are
just a few of the open-source technologies, frameworks, and APIs that the
authors highlight as having been developed with the intention of resolving
interoperability difficulties in the IoT. However, proposed IoT solutions are
insufficient to meet interoperability concerns in IoV because of the unique
difficulties associated with device-to-device communication in this domain.
The ecosystem of the Internet of Vehicles and the various nodes involved in its communication are covered in this section. The authors also detail the IoV ecosystem, including its users, apps, and platform. Figure 6.2 shows the Internet of Vehicles layered architecture with a security layer and provides a more nuanced look at how the framework's layers can be used to structure the IoV ecosystem.
Because it provides a framework for integrating "things," intelligent vehicles, humans, and the encompassing infrastructure via channels such as the Internet, IoV makes it possible for vehicles to connect with other entities in their operational environment. Interactions between
cars (V2V), the grid (V2G), buildings (V2B), roadside units (V2R), and the
cloud (V2C) are all made possible by the IoV, which enables the develop-
ment of highly interconnected, heterogeneous systems capable of running
a wide range of applications and services [27]. Because of this, the various
components of the ecosystem are able to share knowledge and collaborate to
build an advanced infrastructure suitable for smart city services. To reduce
traffic and pollution, ease traffic congestion, free up parking spots, and cut
costs, IoV is expected to increase ride-sharing services. When compared to
ITS, IoV is distinguished by its capacity to turn every car on the road into a
self-contained computing, storage, and networking node that can exchange
data with other cars, pedestrians, and roadside infrastructure.
Several architectural levels for IoV have been proposed based on the
interaction of communication nodes. Layers of the architecture include per-
ception, the network, and the application. To enable communication and
interoperability, each layer requires collaboration from multiple levels. At
the perception layer of IoV, there is a plethora of heterogeneous data sources (devices). At the application layer, there is a suite of programs and
tools for analyzing the information gleaned from these devices [28]. Con-
nectivity, data extraction, transmission, and security between the two higher-
level layers are all made possible by the network layer, making it a crucial
component of IoV operations. Effective V2X communication is enabled by
the network layer’s central location, which allows it to cater to the needs of
all the other levels and devices [29].
The edge devices, such as sensors, actuators, and the related computational resources, are hosted by the perception layer, the lowest layer of the stack. This layer is
responsible for perceiving and collecting data by way of the vehicles’ in-built
sensors, which can record data about the surrounding environment, such
as highway and traffic conditions, obstacles, and driving habits. Due to the
volume and variety of information collected by the perception layer’s sen-
sors, collaboration amongst IoV entities is essential for making the most of
available resources. In addition, the layer facilitates the digitization of analog
data for subsequent processing, storage, and dissemination. Using wireless
and wired connections, smart objects process the gathered data locally, typi-
cally in real-time, and disseminate the results to other devices in the same
layer or network [30]. Wi-Fi, Bluetooth, ZigBee, and radio-frequency are all
utilized for wireless connectivity at this layer, while standard serial interfaces
like I2C, SPI, and Ethernet are used for cable connectivity. Because devices
in the perception layer employ a wide variety of communication technolo-
gies and protocols, achieving interoperability is essential to ensuring that
sensed data is transmitted quickly and effectively throughout the network.
Sensors and other IoV devices should be able to reliably interact with one
another and join the automotive network on the fly if there is commonality
at the perception layer [31]. Adopting standards would also guarantee that
equipment can be managed and connected regardless of their core platform,
specifications, or models.
This layer utilizes network technology to determine the paths sensor data
must take to reach the various services that make up the IoV. Communication
is facilitated at this level via portals, switches, hubs, and routing devices [32].
Technologies such as 4G/LTE, 5G/6G, Bluetooth, and Wi-Fi are used to send
information to programs in heterogeneous networks. Other technologies
include Wireless Access in Vehicular Environments (WAVE) and Worldwide Interoperability for Microwave Access (WiMAX). There are benefits and drawbacks to each of these technological options. For example, WAVE's ability to connect devices via dedicated short-range communications (DSRC) necessitates a mobility layer that can handle handoffs and ensures that traffic is managed in a way that does not degrade network quality of service.
There are two main categories of IoV applications: (1) ITS applications and (2) smart city applications. In this chapter, we go over the various uses of IoV and how they are categorized. ITS applications encompass a wide range of technologies that can be applied to the transportation system to improve its dependability, efficiency, safety, and sustainability without disrupting the existing infrastructure. There are five broad types of ITS-related IoV applications:
• Security
While connectivity makes it easier to handle data in the IoV infrastructure, it also makes the system more susceptible to cyber threats. Secure packet delivery in the network requires security methods, such as cryptography for application data transfer, and other approaches that do not affect the quality of service of the IoV infrastructure. Figure 6.3 shows the security requirements of the Internet of Vehicles.
• Scalability
Solutions for IoV must be easily scalable; otherwise, the argument for their adoption would collapse under the weight of the resources needed to construct or modify them. Because the network contains a wide variety of vehicular nodes and deployment environments, high scalability in the design technology is also required, i.e., architectures that can switch between centralized and decentralized control modes depending on network demands in order to process data and ensure sustainable service. Services need to be standardized to intelligently manage, distribute, and secure data from connected devices, all while keeping operational costs low, so that solutions can adapt to the unpredictable nature of IoV traffic [40]. Flexible and optimized search engine designs are necessary for managing requests.
• Trust Management
Trust between the communicating entities in the IoV is necessary for them to cooperate based on mutually understood context data. The storage and computation of trust-related information creates overhead: trust evaluation algorithms that aim to reduce the risk of untrusted nodes in the network require regular updates, which causes latencies in nodes with limited computing and storage capabilities.
Scalable and efficient trust management algorithms for information credibility, which formalize and optimize entity reputation and available resources, are needed to meet this challenge. Solutions that leverage the decentralized capabilities of blockchain are promising techniques that can provide a safe framework for efficient and scalable trust management. Moreover, the design of trust management models should take semantic interoperability into account.
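As a concrete illustration of how lightweight such an algorithm can be kept, here is a minimal, assumption-laden sketch of a trust update based on an exponential moving average; a real scheme would also weight evidence, decay stale scores, and exchange peer recommendations.

```python
# Sketch: constant-memory trust update per peer (weights are illustrative).
def update_trust(score, interaction_ok, alpha=0.1):
    """Blend the old trust score with the outcome of one interaction."""
    observation = 1.0 if interaction_ok else 0.0
    return (1.0 - alpha) * score + alpha * observation

# Example: a node starts neutral and sees one good, then one bad interaction.
s = 0.5
s = update_trust(s, True)    # -> 0.55
s = update_trust(s, False)   # -> 0.495
```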
• Artificial Intelligence
• Cross-Domain Integration
Low-latency technologies such as 5G, 6G, and mmWave offer the IoV superior connections with increased data transmission rates, decreased communication delay, powerful detection capabilities, and greater security. Integrating these technologies to serve vehicular applications requires an understanding of channel measurements in dynamic situations, the impact of signal obstructions, and the dynamics of antenna directionality. If these technologies were integrated with preexisting vehicular networks such as Cellular V2X, WiMAX, and DSRC, as well as architectures like NDN, SDN, and SIoV, the IoV's quality of service would be greatly enhanced.
6.6 CONCLUSION
REFERENCES
ii. Design Principles for Smart Car Interfaces (Phillips et al., 2020)
When designing user interfaces for smart cars, several principles should
be considered to ensure usability and effectiveness. These principles include:
a. Simplicity
Smart car interfaces should be simple and easy to understand. The information presented should be clear and concise, avoiding unnecessary complexity. The use of intuitive icons, symbols, and visual cues can help drivers quickly grasp the meaning of different controls and indicators.
b. Consistency
c. Contextual Awareness
e. Minimizing Distractions
i. Visualizing AI Outputs
When the vehicle uses AI for object detection, for example, the interface can display the detected objects and their confidence levels. This visualization helps drivers understand how the AI system perceives the environment and makes decisions.
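A minimal sketch of this idea follows, assuming the detector returns (label, confidence, box) tuples and using plain text in place of an in-car display; the threshold and data are hypothetical.

```python
# Sketch: surfacing detections with confidence levels to the driver.
def describe_detections(detections, min_conf=0.3):
    """Turn raw detections into driver-facing strings, ordered by confidence."""
    visible = [d for d in detections if d[1] >= min_conf]
    for label, conf, box in sorted(visible, key=lambda d: -d[1]):
        print(f"{label}: {conf:.0%} confidence at {box}")

describe_detections([
    ("pedestrian", 0.92, (120, 80, 160, 200)),
    ("cyclist", 0.41, (300, 90, 340, 190)),
    ("shadow", 0.12, (10, 10, 60, 60)),   # filtered out as low confidence
])
```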
Explainable AI can also assist in error handling and recovery in smart car
interfaces. If the AI system encounters an error or uncertainty, the interface
can provide explanations and alternative options to the driver. This transpar-
ency helps drivers understand the limitations of the AI system and enables
them to take appropriate actions.
Clear, well-designed explanations enable drivers to make informed decisions and interact effectively with the vehicle's AI systems. Usability testing and iterative design further ensure that the interface meets drivers' needs and enhances their overall experience.
i. Explainable Recommendations
v. Ethical Considerations
User feedback also helps identify potential safety concerns. Users may
encounter situations where the smart car’s behavior or response is unex-
pected or unsafe. By collecting feedback on these incidents, developers can
identify and address potential safety risks, ensuring that smart cars operate
reliably and securely.
Furthermore, user feedback can shed light on the effectiveness of explainable
AI techniques implemented in smart cars. Users can provide insights into the
clarity and comprehensibility of the explanations provided by the AI system.
This feedback is crucial in refining and optimizing the explainability features
of smart cars, making them more transparent and understandable to users.
Based on the analysis of user feedback, developers can start designing and
implementing changes to address the identified issues and improve the user
experience. This may involve modifying the user interface, enhancing specific
features, or optimizing the explainability of AI techniques.
The iterative design process is cyclical, and the steps mentioned previously
should be repeated multiple times. Each iteration builds upon the previous
one, incorporating user feedback and continuously improving the design.
This iterative approach ensures that smart cars evolve and adapt to meet the
changing needs and expectations of users.
The safety of smart cars ultimately depends on the complex AI systems that power these vehicles. Smart cars rely on a combination of sensors, cam-
eras, radar, and LiDAR to perceive the environment and make decisions in
real time. However, the complexity of these systems can introduce potential
vulnerabilities and safety risks. It is crucial to address these challenges to
ensure the safe operation of smart cars.
Because smart cars collect large amounts of personal data, it is crucial to ensure that this data is handled securely and in compli-
ance with privacy regulations. Explainable AI can contribute to privacy and
data protection by providing transparency in data handling and allowing
users to have control over their personal information.
Connected vehicles also face security challenges that must be addressed to ensure the integrity and reliability of smart car systems. In addition, smart cars must comply with regulations and standards to ensure their safe and ethical operation. XAI can provide the necessary transparency and accountability to meet these requirements.
Smart cars continuously collect and store data from various sensors and
systems. The challenge lies in determining what data should be collected,
how long it should be retained, and who has access to it. Striking the right
balance between data collection for improving AI algorithms and respecting
user privacy is crucial.
Smart cars often rely on cloud-based services and external data sources for various functionalities. This necessitates data sharing with third parties, raising concerns about data security and privacy. Establishing secure data-sharing mechanisms is therefore essential.
Users should have control over their data and be informed about how it
is collected, used, and shared. Obtaining informed consent from users for
data collection and processing is essential. However, designing user-friendly
interfaces that clearly communicate data practices and obtaining meaningful
consent can be challenging.
XAI techniques can provide insights into how AI algorithms process and
use data. By explaining the features and patterns that influence decision-
making, users can gain a better understanding of how their data is being uti-
lized. This transparency can help build trust and ensure that data processing
is in line with privacy regulations.
Explainable AI can help identify the specific data features crucial for
decision-making. By understanding which features are most influential,
developers can design AI systems that rely less on sensitive data, thereby
reducing privacy risks. This can be particularly important in cases where
personal information needs to be protected.
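As an illustration, permutation importance (here via scikit-learn, with a synthetic model and a hypothetical "location" feature standing in for sensitive data) can reveal how strongly a model leans on a sensitive input; the data, model, and feature names are all assumptions.

```python
# Sketch: flagging reliance on a sensitive feature via permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))             # columns: speed, braking, location
y = (X[:, 0] + 0.2 * X[:, 2] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["speed", "braking", "location"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
# If "location" scores high, the design could be revisited to reduce
# reliance on that sensitive signal.
```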
i. Complexity of AI Algorithms
Smart cars are also subject to regional data protection laws. Ensuring that XAI techniques and systems adhere to these regulations can be complex, especially considering the evolving nature of privacy laws.
Privacy and data protection are critical considerations in the develop-
ment and deployment of smart cars. Explainable AI can play a vital role in
addressing privacy challenges by providing transparent explanations for AI
decisions. By enhancing transparency, XAI can help build trust, empower
users, and ensure compliance with privacy regulations. However, several
challenges must be overcome to effectively implement XAI for privacy and
data protection in smart cars. Future research and collaboration between
industry, academia, and regulators are essential to address these challenges
and realize the full potential of XAI in ensuring privacy and data protection
in smart cars.
REFERENCES
8.1 INTRODUCTION
8.1.6 Counterfactuals
Counterfactual explanations (Colley et al., 2022) take local explanations a
step further by providing insights into what changes would need to be made
to an instance to achieve a different prediction. In other words, counterfac-
tuals answer the question, “What would need to be different for the model
to make a different decision?” Counterfactual explanations are particularly
useful in scenarios where the model’s decision is not aligned with the user’s
expectations or preferences. By generating counterfactuals, users can under-
stand the specific changes that would need to be made to the input features
in order to achieve the desired outcome.
For example, let’s say a smart car’s AI model predicts that a certain road
is safe to drive on, but the user disagrees and believes it is not safe. By gen-
erating counterfactual explanations, the user can understand what changes
would need to be made to the road conditions (such as reducing the speed
162 Explainable artificial intelligence for autonomous vehicles
limit or adding additional signage) for the model to predict that the road is
unsafe.
Counterfactual explanations can be generated using optimization tech-
niques that aim to find the minimal set of changes required to achieve a
different prediction. These techniques take into account the constraints and
boundaries of the input features to ensure that the counterfactuals are real-
istic and feasible.
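A minimal sketch of such an optimization follows, reduced to a brute-force scan for the smallest single-feature change that flips a toy model's prediction; the model, features, and bounds are hypothetical stand-ins for a real smart-car system.

```python
# Sketch: smallest single-feature change that flips the prediction.
import numpy as np

def counterfactual(predict, x, target, bounds, steps=50):
    """Return (feature_index, new_value, cost) for the smallest
    single-feature change making predict(...) == target, else None."""
    x = np.asarray(x, dtype=float)
    best = None
    for i, (lo, hi) in enumerate(bounds):
        for v in np.linspace(lo, hi, steps):          # stay within bounds
            cand = x.copy()
            cand[i] = v
            if predict(cand) == target:
                cost = abs(v - x[i])                  # size of the change
                if best is None or cost < best[2]:
                    best = (i, v, cost)
    return best

# Example with a toy "road safety" rule: unsafe if the speed limit > 60.
predict = lambda z: "unsafe" if z[0] > 60 else "safe"
x = [80.0, 1.0]                                       # speed limit, signage
print(counterfactual(predict, x, "safe", bounds=[(30, 120), (0, 3)]))
# -> lowering the speed limit to just under 60 is the minimal change
```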
Overall, local explanations and counterfactuals are powerful tools for under-
standing the inner workings of smart cars’ AI systems at a granular level.
They provide insights into the factors that influence the system’s decisions
and allow for customization, error detection, and regulatory compliance. By
incorporating these techniques, smart cars can become more transparent,
trustworthy, and user-centric.
The extracted rules may not capture all the nuances and complexities of the original model, leading to a potential loss of performance.
Furthermore, rule sets may not be able to handle situations that fall out-
side the predefined rules. In complex and unpredictable scenarios, the smart
car may encounter situations not covered by the existing rules. In such cases,
the smart car may need to rely on other decision-making mechanisms or
fallback strategies to ensure safe and reliable operation.
Despite these limitations, rule extraction and rule sets provide valuable
tools for enhancing the explainability and transparency of smart cars. By
enabling users to understand and validate the decision-making process, these
techniques contribute to developing trustworthy and user-centric AI systems
in the field of smart cars.
REFERENCES
consensus.app/papers/explain-explain-study-necessity-explanations-autonomous-shen/ccb979c3a9df5982a8976daa432e116f/ (Accessed: 13 December 2023).
Shin, D. (2021) 'The Effects of Explainability and Causability on Perception, Trust, and Acceptance: Implications for Explainable AI', International Journal of Human-Computer Studies, 146. Available at: https://doi.org/10.1016/j.ijhcs.2020.102551.
Tambwekar, P. and Gombolay, M. (2023) 'Towards Reconciling Usability and Usefulness of Explainable AI Methodologies', arXiv, abs/2301.05347. Available at: https://doi.org/10.48550/arXiv.2301.05347.