


xAI 2024: Valletta, Malta
- Luca Longo, Sebastian Lapuschkin, Christin Seifert: Explainable Artificial Intelligence - Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part IV. Communications in Computer and Information Science 2156, Springer 2024, ISBN 978-3-031-63802-2
Explainable AI in Healthcare and Computational Neuroscience
- Oleksandr Davydko, Vladimir Pavlov, Przemyslaw Biecek, Luca Longo: SRFAMap: A Method for Mapping Integrated Gradients of a CNN Trained with Statistical Radiomic Features to Medical Image Saliency Maps. 3-23
- Helard Becerra Martinez, Katryna Cisek, Alejandro García-Rudolph, John D. Kelleher, Andrew Hines: Transparently Predicting Therapy Compliance of Young Adults Following Ischemic Stroke. 24-41
- Martin A. Gorosito, Anis Yazidi, Børge Sivertsen, Hårek Haugerud: Precision Medicine for Student Health: Insights from Tsetlin Machines into Chronic Pain and Psychological Distress. 42-65
- Enrico Sciacca, Claudio Estatico, Damiano Verda, Enrico Ferrari: Evaluating Local Explainable AI Techniques for the Classification of Chest X-Ray Images. 66-83
- Jorn-Jan van de Beld, Shreyasi Pathak, Jeroen Geerdink, Johannes H. Hegeman, Christin Seifert: Feature Importance to Explain Multimodal Prediction Models. A Clinical Use Case. 84-101
- Charles A. Ellis, Martina Lapera Sancho, Robyn L. Miller, Vince D. Calhoun: Identifying EEG Biomarkers of Depression with Novel Explainable Deep Learning Architectures. 102-124
- Thies de Graaff, Michael Wild, Tino Werner, Eike Möhlmann, Stefan Seibt, Benjamin Ebrecht: Increasing Explainability in Time Series Classification by Functional Decomposition. 125-144
- Maciej Mozolewski, Szymon Bobek, Rita P. Ribeiro, Grzegorz J. Nalepa, João Gama: Towards Evaluation of Explainable Artificial Intelligence in Streaming Data. 145-168
- Helene Knof, Michell Boerger, Nikolay Tcholtchev: Quantitative Evaluation of xAI Methods for Multivariate Time Series - A Case Study for a CNN-Based MI Detection Model. 169-190
Explainable AI for Improved Human-Computer Interaction and Software Engineering for Explainability
- Agustin Picard, Lucas Hervier, Thomas Fel, David Vigouroux: Influenciæ: A Library for Tracing the Influence Back to the Data-Points. 193-204
- Maike Schwammberger, Raffaela Mirandola, Nils Wenninghoff: Explainability Engineering Challenges: Connecting Explainability Levels to Run-Time Explainability. 205-218
- Giulia Vilone, Francesco Sovrano, Michaël Lognoul: On the Explainability of Financial Robo-Advice Systems. 219-242
- Muhammad Rashid, Elvio G. Amparore, Enrico Ferrari, Damiano Verda: Can I Trust My Anomaly Detection System? A Case Study Based on Explainable AI. 243-254
- Federico Cabitza, Caterina Fregosi, Andrea Campagner, Chiara Natali: Explanations Considered Harmful: The Impact of Misleading Explanations on Accuracy in Hybrid Human-AI Decision Making. 255-269
- Kirsten Thommes, Olesja Lammert, Christian Schütze, Birte Richter, Britta Wrede: Human Emotions in AI Explanations. 270-293
- Tobias Labarta, Elizaveta Kulicheva, Ronja Froelian, Christian Geißler, Xenia Melman, Julian von Klitzing: Study on the Helpfulness of Explainable Artificial Intelligence. 294-312
Applications of Explainable Artificial Intelligence
- Adrian Byrne: Pricing Risk: An XAI Analysis of Irish Car Insurance Premiums. 315-330
- Björn Milcke, Pascal Dinglinger, Jonas Holtmann: Exploring the Role of Explainable AI in the Development and Qualification of Aircraft Quality Assurance Processes: A Case Study. 331-352
- Bruno Mota, Pedro Faria, Juan M. Corchado, Carlos Ramos: Explainable Artificial Intelligence Applied to Predictive Maintenance: Comparison of Post-Hoc Explainability Techniques. 353-364
- Bujar Raufi, Ciaran Finnegan, Luca Longo: A Comparative Analysis of SHAP, LIME, ANCHORS, and DICE for Interpreting a Dense Neural Network in Credit Card Fraud Detection. 365-383
- Javier Perera-Lago, Víctor Toscano-Durán, Eduardo Paluzo-Hidalgo, Sara Narteni, Matteo Rucco: Application of the Representative Measure Approach to Assess the Reliability of Decision Trees in Dealing with Unseen Vehicle Collision Data. 384-395
- Sara Narteni, Alberto Carlevaro, Jérôme Guzzi, Maurizio Mongelli: Ensuring Safe Social Navigation via Explainable Probabilistic and Conformal Safety Regions. 396-417
- Md Shajalal, Alexander Boden, Gunnar Stevens, Delong Du, Dean-Robin Kern: Explaining AI Decisions: Towards Achieving Human-Centered Explainability in Smart Home Environments. 418-440
- Valentina Zaccaria, David Dandolo, Chiara Masiero, Gian Antonio Susto: AcME-AD: Accelerated Model Explanations for Anomaly Detection. 441-463
