


xAI 2024: Valletta, Malta

Luca Longo, Sebastian Lapuschkin, Christin Seifert (eds.):
Explainable Artificial Intelligence - Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part I. Communications in Computer and Information Science 2153, Springer 2024, ISBN 978-3-031-63786-5
Intrinsically Interpretable XAI and Concept-Based Global Explainability
- Benjamin Leblanc, Pascal Germain: Seeking Interpretability and Explainability in Binary Activated Neural Networks. 3-20
- Shreyasi Pathak, Jörg Schlötterer, Jeroen Veltman, Jeroen Geerdink, Maurice van Keulen, Christin Seifert: Prototype-Based Interpretable Breast Cancer Prediction Models: Analysis and Challenges. 21-42
- Luisa Gallée, Catharina Silvia Lisson, Christoph Gerhard Lisson, Daniela Drees, Felix Weig, Daniel Vogele, Meinrad Beer, Michael Götz: Evaluating the Explainability of Attributes and Prototypes for a Medical Classification Model. 43-56
- Szymon Oplatek, Dawid Rymarczyk, Bartosz Zielinski: Revisiting FunnyBirds Evaluation Framework for Prototypical Parts Networks. 57-68
- Teodor Chiaburu, Frank Haußer, Felix Bießmann: CoProNN: Concept-Based Prototypical Nearest Neighbors for Explaining Vision Models. 69-91
- Georgii Mikriukov, Gesina Schwalbe, Franz Motzkus, Korinna Bade: Unveiling the Anatomy of Adversarial Attacks: Concept-Based XAI Dissection of CNNs. 92-116
- Jiayi Li, Sheetal Satheesh, Stefan Heindorf, Diego Moussallem, René Speck, Axel-Cyrille Ngonga Ngomo: AutoCL: AutoML for Concept Learning. 117-136
- Franz Motzkus, Georgii Mikriukov, Christian Hellert, Ute Schmid: Locally Testing Model Detections for Semantic Global Concepts. 137-159
- Lenka Tetková, Teresa Karen Scheidt, Maria Mandrup Fogh, Ellen Marie Gaunby Jørgensen, Finn Årup Nielsen, Lars Kai Hansen: Knowledge Graphs for Empirical Concept Retrieval. 160-183
- Jonas Teufel, Pascal Friederich: Global Concept Explanations for Graphs by Contrastive Learning. 184-208
Generative Explainable AI and Verifiability
- Alessandro Castelnovo, Roberto Depalmas, Fabio Mercorio, Nicolò Mombelli, Daniele Potertì, Antonio Serino, Andrea Seveso, Salvatore Sorrentino, Laura Viola: Augmenting XAI with LLMs: A Case Study in Banking Marketing Recommendation. 211-229
- Julian Tritscher, Philip Lissmann, Maximilian Wolf, Anna Krause, Andreas Hotho, Daniel Schlör: Generative Inpainting for Shapley-Value-Based Anomaly Explanation. 230-243
- Kenza Amara, Rita Sevastjanova, Mennatallah El-Assady: Challenges and Opportunities in Text Generation Explainability. 244-264
- Jane Arleth dela Cruz, Iris Hendrickx, Martha A. Larson: NoNE Found: Explaining the Output of Sequence-to-Sequence Models When No Named Entity Is Recognized. 265-284
Notion, Metrics, Evaluation and Benchmarking for XAI
- Jérôme Rutinowski, Simon Klüttermann, Jan Endendyk, Christopher Reining, Emmanuel Müller: Benchmarking Trust: A Metric for Trustworthy Machine Learning. 287-307
- Qi Huang, Emanuele Mezzi, Osman Mutlu, Miltiadis Kofinas, Vidya Prasad, Shadnan Azwad Khan, Elena Ranguelova, Niki van Stein: Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI. 308-331
- Helena Löfström, Tuwe Löfström: Conditional Calibrated Explanations: Finding a Path Between Bias and Uncertainty. 332-355
- Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà-Alcover: Meta-evaluating Stability Measures: MAX-Sensitivity and AVG-Sensitivity. 356-369
- Eric Arazo, Hristo Stoev, Cristian Bosch, Andrés L. Suárez-Cetrulo, Ricardo Simon Carbajo: Xpression: A Unifying Metric to Optimize Compression and Explainability Robustness of AI Models. 370-382
- Oscar Llorente, Rana Fawzy, Jared Keown, Michal Horemuz, Péter Vaderna, Sándor Laki, Roland Kotroczó, Rita Csoma, János Márk Szalai-Gindl: Evaluating Neighbor Explainability for Graph Neural Networks. 383-402
- Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne: A Fresh Look at Sanity Checks for Saliency Maps. 403-420
- Alan Perotti, Claudio Borile, Arianna Miola, Francesco Paolo Nerini, Paolo Baracco, André Panisson: Explainability, Quantified: Benchmarking XAI Techniques. 421-444
- Samuel Sithakoul, Sara Meftah, Clément Feutry: BEExAI: Benchmark to Evaluate Explainable AI. 445-468
- Fernando Aguilar-Canto, Omar García-Vázquez, Tania Alcántara, Alberto Espinosa-Juárez, Hiram Calvo: Associative Interpretability of Hidden Semantics with Contrastiveness Operators in Face Classification Tasks. 469-491

