Explainable Artificial Intelligence
A R T I C L E  I N F O

Keywords:
Explainable AI (XAI)
XAI effects
Trust
Transparency
Understandability
AI Adoption
AI Use

A B S T R A C T

The rapid growth and use of artificial intelligence (AI)-based systems have raised concerns regarding explainability. Recent studies have discussed the emerging demand for explainable AI (XAI); however, a systematic review of explainable artificial intelligence from an end user's perspective can provide a comprehensive understanding of the current situation and help close the research gap. The purpose of this study was to perform a systematic literature review of explainable AI from the end user's perspective and to synthesize the findings. To be precise, the objectives were to 1) identify the dimensions of end users' explanation needs; 2) investigate the effect of explanation on end users' perceptions; and 3) identify the research gaps and propose future research agendas for XAI, particularly from end users' perspectives based on current knowledge. The final search query for the Systematic Literature Review (SLR) was conducted in July 2022. Initially, we extracted 1707 journal and conference articles from the Scopus and Web of Science databases. Inclusion and exclusion criteria were then applied, and 58 articles were selected for the SLR. The findings show four dimensions that shape the AI explanation: format (explanation representation format), completeness (the explanation should contain all required information, including supplementary information), accuracy (information regarding the accuracy of the explanation), and currency (the explanation should contain recent information). Moreover, along with the automatic presentation of the explanation, users can request additional information if needed. We have also described five dimensions of XAI effects: trust, transparency, understandability, usability, and fairness. We investigated current knowledge from the selected articles to problematize future research agendas as research questions along with possible research paths. Consequently, a comprehensive framework of XAI and its possible effects on user behavior has been developed.
1. Introduction

Recently, the adoption and use of artificial intelligence (AI)-based applications by various business organizations have been increasing to aid decision-making. For example, the International Data Corporation (IDC) has estimated that the worldwide AI expenditure is supposed to increase to 110 billion US dollars by the end of 2024 (Adadi and Berrada, 2018; IDC, 2018). Because AI has become more prevalent, it has become routine to rely on it to make decisions in our daily lives (Stahl et al., 2021; Mahmud et al., 2022a,b). We use various intelligent systems every day, such as in content and product recommendation (Benbasat and Wang, 2005; Gruetzemacher et al., 2021; Wang et al., 2014; Choi et al., 2012), news websites, social media (Feng et al., 2020), healthcare (Haque et al., 2020), and other public services (Hengstler et al., 2016; Haque et al., 2021; Du and Xie, 2021); however, the working principle of AI systems is unclear as the machine-learning models used in different AI systems do not reveal enough information about the process through which the conclusion is derived (Castelvecchi, 2016). Furthermore, the deep neural network (DNN) models used in advanced AI systems are extraordinarily complex to explain. Only specific people who design the algorithms understand how the system works (Angelov and Soares, 2020). The opacity of AI systems can reduce end users' trust and reliance on using AI-based systems while making critical decisions (Hasan et al., 2021; Baum et al., 2011).
To address this problem, researchers and practitioners have called for the requirement to provide explainable Artificial Intelligence (XAI) that allows end users to perceive the underlying working principle of the decision-making procedure (Laato et al., 2022; Tiainen, 2021). Understanding the working principles of AI systems is crucial for end users to make effective decisions in different contexts (Scott et al., 1977). For example, in mission-critical use cases, such as healthcare, the decision-making procedure should be understandable for the users (doctors) to rely on the system (Lauritsen et al., 2020).

Furthermore, the General Data Protection Regulation (GDPR) has also emphasized the explainability of AI systems by introducing the "right to explanation" (Goodman and Flaxman, 2017). The regulation includes another policy related to "automated individual decision-making, including profiling," to prevent personal data from being used and processed by automated systems without permission (Malgieri, 2019). In addition, the High-level Expert Group on Artificial Intelligence of the European Commission has also outlined the importance of an explanation to achieve the transparency and reliability of AI in their "Ethics Guidelines for Trustworthy Artificial Intelligence (AI)".¹ Furthermore, governments worldwide are currently adopting automated decision making; for example, the Dutch immigration services are testing automated processes for asylum requests and resident permit applications (Janssen et al., 2020). Such sensitive decision making by government organizations should have explainability for the users as well as for those involved in decision making. Therefore, organizations involved in the government should also have XAI as a prerequisite for an automated decision-making system. AI-based decision-making systems can be used for scalable and larger ecosystems; however, the systems need to be incorporated with some principles related to ethics and rights (Fjeld et al., 2020).

Therefore, due to the wide applicability and demand, researchers have investigated XAI across various domains and perspectives. Previously published Systematic Literature Reviews (SLRs) on XAI have focused on the ethical perspective of AI's black box nature (Meske et al., 2022; Wells and Bednarz, 2021), human-centric design patterns for ML-based systems (Chromik and Butz, 2021), personalized explanations of XAI (Schneider and Handali, 2019), behavioral interactions of human and autonomous agents (Anjomshoae et al., 2019), XAI in the healthcare domain (Chakrobartty and El-Gayar, 2021; Antoniadi et al., 2021), and AI system communication, design recommendations, and tradeoffs of end user-centric AI (Laato et al., 2022), among others. Despite the plethora of these types of investigations, we have identified two major research gaps concerning end users' explanation needs. First, most prior SLRs (see Table 1) focused on a single domain (e.g., healthcare, transportation, etc.). This limits our understanding of how the end users' explanation needs might vary across different domains. For example, the healthcare professionals' explanation needs for making critical decisions would be significantly different than consumers' decisions regarding their next purchases. Second, most prior SLRs (see Table 1) have been conducted from a technical perspective. To understand the explanation needs of end users, an SLR that reviews studies from the human perspective is needed. However, very few SLR studies (Laato et al., 2022) have been done from the end users' perspectives. Therefore, an SLR conducted across different domains that includes the latest published articles can provide a comprehensive outline of how XAI has advanced in different application domains in recent times. Moreover, a comprehensive study of human-centered XAI can help researchers and practitioners understand how people perceive different types of explanations provided by AI-based systems. The analysis will also provide meticulous insight into the impact of XAI on humans. Hence, we conducted an SLR to critically analyze the previous research on AI users' explanation needs to fulfill the research objectives, which are (1) a synthesis of prior literature on XAI that contains (a) a critical analysis of extant literature to represent current knowledge on XAI in terms of explanation needs and XAI effects and (b) research domains, and (2) the development of thematically organized future research avenues.

To address the research objectives, 58 publications were selected by scanning the Scopus and Web of Science databases and using rigorous citation chaining techniques. Our SLR has three key findings. First, we found four dimensions of end users' explanation needs: format, completeness, accuracy, and currency. We then linked these dimensions with the five effects of XAI: trust, transparency, understandability, usability, and fairness, which have been discussed in prior literature (Laato et al., 2022), to develop a framework. Finally, we found 10 application domains where XAI research has been conducted. Based on these findings, our paper contributes to the existing XAI literature (Binns et al., 2018; Chazette and Schneider, 2020; Schneider et al., 2021; van der Waa et al., 2020; Laato et al., 2022) by 1) identifying the dimensions of end users' explanation needs and presenting them from an information systems research perspective; 2) identifying the outcomes of XAI from the end users' perspectives; 3) identifying research gaps and problematizing future research directions in XAI, particularly from end users' perspectives; and 4) building a framework for XAI research from the end users' perspective. Our findings also help practitioners design a more user-friendly and trustworthy XAI system by determining the explanation needs of the end users.

The remainder of the paper is structured as follows. Section 2 outlines the background of XAI and related works. Section 3 describes the SLR methodology and literature selection. Section 4 contains the research trend of XAI based on the selected articles, and Section 5 synthesizes previous studies on XAI. This section comprehensively represents the current knowledge of XAI aligned with information systems research. Section 6 critically analyzes the current knowledge to identify future research directions. Section 7 outlines the comprehensive framework of XAI research from the end user's perspective. Section 8 briefly describes the implications of this work, and Section 9 concludes the paper.

2. Background

2.1. Explainable AI and related concepts

Explainable AI, interpretable AI, transparent AI, understandable AI, and responsible AI terminologies are used interchangeably in the literature (Arrieta et al., 2020). XAI has emerged intending to present explanations purveyed to human understanding, trust, and transparency (Gerlings et al., 2021a, 2021b). The relational link connecting the input and the output of an artificial neural network is not observable. Therefore, it is necessary to put effort into the explainability and interpretability of the black-box nature of various AI models (Dağlarli, 2020). DARPA, one of the leading research organizations on XAI, explained XAI as an extension of an AI system whose models and decisions can be easily understandable and properly believable by end users (Gunning and Aha, 2019). The understandability and believability of machine-learning models contribute to the interpretability of a machine-learning model for the target audience (Lipton, 2018). Explainability usually indicates how strongly a particular phenomenon can be described so that the audience can effortlessly understand it. Therefore, in XAI, explainability means the AI should be capable of explaining predictions obtained from a model from a more profound methodological point of view to users (Antunes et al., 2012); however, explainable AI can also be defined as: "given an audience, an explainable Artificial Intelligence produces details or reasons to make its functioning clear or easy to understand" (Arrieta et al., 2020).

Interpretability (Lipton, 2018) specifies that the working procedure of the machine models should be made unambiguous and crystal clear to both technical and nontechnical users. Though interpretability and explainability are used interchangeably, there are some basic conceptual differences between them. Explainability means explaining the …

¹ High-Level Expert Group on Artificial Intelligence (AI HLEG). (2019). Ethics Guidelines for Trustworthy AI. European Commission. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
Table 1
Comparative analysis of prior SLRs on XAI.

Source | Study focus and contribution | Coverage | Literature selection
Anjomshoae et al., 2019 | Presents a goal-driven literature review of explainable robots and agents to enhance the understanding of the "black box." | All documents published between 2008 and 2018. | An initial collection of 303 papers was reduced to 62 final selections using seven inclusion criteria. The authors did not mention the types of the individual publications. These papers were collected from digital libraries such as IEEE Xplore, Science Direct, ACM, and Google Scholar.
Schneider and Handali, 2019 | This study provides a structured collection of information that conceptualizes "personalized explanation" and relates the idea to other domains that are intertwined with XAI. | The paper did not mention the publication time of these documents. | They collected research articles and conference papers from the IEEE Xplore, AIS, ACM, and Arxiv databases. Their study did not mention the total number of papers considered.
Antoniadi et al., 2021 | Highlighting the indispensability of interpretable AI systems in medical use cases, this study underscores ethical and fair decision making by AI systems in medical practices. This study claims to provide suggestions to aid future opportunities and tackle foreseeable challenges. | Unknown – 2020 (the authors did not specify their starting year as a search criterion). | Using the Google Scholar database, they identified 668 articles based on six combinations of search phrases. Through an intricate elimination and selection process, 33 papers were finally selected. The authors did not specify the publication type of these papers.
Chakrobartty and El-Gayar, 2021 | Raising concerns about the un-explainability of AI techniques, especially in the medical sector, this study highlights the methods and practices that emphasize XAI in the medical sector. | Covers documents published between 2008 and 2020. | Based on eight search keywords, they initially found 66 documents, which were reduced to 22 using several inclusion and exclusion criteria. The authors did not specify the type of publications.
Chromik et al., 2021 | To better comprehend the black box, this study argues that the interpretability of explanation user interfaces increases by employing explanation-generating models. This study provides insight into how designers can attune the explanation of AI systems in user interfaces. | Unknown – 2020 (the authors did not specify their starting year as a search criterion). | An initial collection of 146 documents was reduced to 91 documents that meticulously matched the research objective.
Gerlings et al., 2021b | This study presents a thoroughgoing discussion on how XAI addresses the black box problem in AI-based applications. By conducting a comprehensive study of recent publications, they attempted to find how XAI contributes to reducing the gap between stakeholders and the black box. | Covers documents from 2016 to 2020. | They collected data from the ArXiv, AIS, JSTOR, ACM Digital Library, IEEE Xplore, SAGE, and Science Direct digital libraries. From 221 initial documents, they finally picked 64 documents for their study.
Linardatos et al., 2021 | This study highlights the programming implementations in recent studies that contribute to increasing the interpretability of ML models from both theorist and practitioner perspectives. | Not specified. | Not specified.
Wells and Bednarz, 2021 | This study accentuates the societal and ethical implications of XAI in the area of reinforcement learning. The study showed limitations, such as lack of user studies, the prevalence of toy examples, and difficulties providing understandable explanations, in the case of reinforcement learning. Application domains covered: Defense/Military (1), Autonomous Vehicles (2), Networking (2), Robotics (4), Gridworld (5), Games (16). | Covers published documents between 2014 and 2020. | Conducting a Boolean search on digital libraries such as ACM, IEEE Xplore, Science Direct, and Springer Link, they gathered 520 papers, among which they justify choosing only 25 papers that matched their research interest.
Laato et al., 2022 | The authors identified the high-level objectives of AI communications with end users such as understandability, trustworthiness, transparency, controllability, and fairness. | Search conducted in October 2020. | The search was conducted on both Scopus and Web of Science on XAI from the HCI perspective; 808 unique articles were extracted after removing the duplicates. The final …
Practical relevance and research interest in XAI have significantly increased in recent times. We have been able to identify several prior SLRs which focus on various domains. Table 1 represents a comparative analysis of these identified studies. The black-box nature of AI poses ethical concerns and risks since no one can interpret what is going on inside and how the data is being processed (Meske et al., 2022). Therefore, the open development of AI should be closely observed and audited, as the compromises involved may lead to dire consequences (Meske et al., 2022). The explanatory design of the user interface can also contribute to understanding black box AI. Interaction factors, such as transmission, dialogue, control, experience, optimal behavior, tool use, and embodied action, are critical when designing such a system. Four human-centric design patterns for ML-based systems increase the understanding level of a human user through a set of explanation-generating methods (Chromik and Butz, 2021). Naturalness, responsiveness, flexibility, and sensitivity are the four recurring design patterns that are the most frequently used human-centric design patterns (Chromik and Butz, 2021). Personalized explanation enhances the interpretability and understanding of the explainees (Schneider and Handali, 2019); however, there is a substantial research gap regarding collecting personalized and explicit information from the explainees with arguable privacy concerns (Schneider and Handali, 2019).

Research on explainable AI has increased and has primarily focused on policy summarization, human collaboration, visualization, verification, etc. (Wells and Bednarz, 2021); however, research gaps exist in customized algorithms, user testing, and scalability (Wells and Bednarz, 2021). In one systematic review, the explainable nature of the behavioral interaction of agents and robots with human users is discussed (Anjomshoae et al., 2019). The work also summarizes the importance of the explainable nature of intelligent systems for non-expert users. Both technical and non-technical perspectives are important for the XAI domain. One of the seminal scholarly works (discussing the importance of unveiling the black box) comprehensively outlines the need, research challenges, and future research opportunities to provide explainability (Gerlings et al., 2021a, 2021b). Healthcare is a crucial domain for any type of technology use. Similarly, XAI research for the healthcare domain can help doctors make decisions (Chakrobartty and El-Gayar, 2021). XAI in healthcare includes various techniques and methods used for XAI (Chakrobartty and El-Gayar, 2021) and clinical decision making (Antoniadi et al., 2021). Other traditional review studies discuss explainable AI from a technical perspective, such as the interpretability methods of various machine-learning interpretability models (Linardatos et al., 2021). Recently, Laato et al. (2022) identified the high-level objectives of AI communications with end users such as understandability, trustworthiness, transparency, controllability, and fairness. Moreover, they provided design recommendations for explanations of AI systems along with future research directions.

…

3.1. Literature selection criteria

For the literature selection, we defined a set of well-defined inclusion and exclusion criteria based on the scope of this review work. The inclusion and exclusion criteria are outlined in Table 2.

3.1.1. Search result extraction and analysis

The search terms and the results extracted are provided in Table 3. From both databases, only conference and journal articles were selected, and the duplicates were removed, leaving 1707 articles. After reading the titles and abstracts, 1190 articles were removed from the list. Full texts of the remaining 517 articles were studied carefully to remove the articles that were not within the scope of our research theme. Furthermore, articles without empirical studies were excluded, which resulted in the final 58 articles. Fig. 1 depicts the screening and selection process.

4. Research trend

Of the 58 studies in our SLR, 13 were journal articles, and 45 were conference articles. Table 4 depicts the publications per year for the selected studies and the number of journal and conference articles. Here, we observed that the number of publications increased from 2018 onwards. This clarifies that explainable AI has been a topic of interest in recent years. Other bibliometric data of the selected articles, such as the number of publications by publishers (Table 5) and the top-five cited articles (according to Scopus), including their author affiliation (Table 6), are presented as well.

5. Synthesis of prior literature

This section provides a critical analysis of the selected research studies and an overview of their findings. This section is divided into (1) Current Knowledge Representation and (2) Research Domains. Table 7 represents the synthesis of prior literature.

5.1. Current knowledge representations

This section represents the current knowledge extracted from the selected articles. The section is divided into three subsections: (1) XAI representation, (2) Effects of Explainable AI, and (3) Explanation Presentation Time.

5.1.1. XAI representation

We have adopted the information quality dimensions proposed by Wixom and Todd (2005) to conceptualize XAI representation dimensions. Wixom and Todd (2005) proposed four information quality …
Table 2
Inclusion and exclusion criteria.

Inclusion criteria | Exclusion criteria
…
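For concreteness, the deduplication and staged screening described in Section 3.1.1 can be expressed as a small data-processing script. This is a sketch of the general procedure, not the authors' actual tooling; the file names, column labels, and screening flags are assumptions.

```python
# Sketch of an SLR screening pipeline (Section 3.1.1); illustrative only.
import pandas as pd

# Assumed CSV exports from the two databases; column names are hypothetical.
scopus = pd.read_csv("scopus_export.csv")
wos = pd.read_csv("wos_export.csv")

# Pool both databases, keep only journal and conference articles, and drop
# duplicates on a normalized title (DOI matching is a common alternative).
pooled = pd.concat([scopus, wos], ignore_index=True)
pooled = pooled[pooled["document_type"].isin(["Article", "Conference Paper"])]
pooled["title_key"] = pooled["title"].str.lower().str.strip()
unique = pooled.drop_duplicates(subset="title_key")                 # 1707 records in the study

# Title/abstract and full-text screening are manual judgments; they are
# represented here as reviewer-filled boolean columns.
after_abstract = unique[unique["passed_title_abstract_screen"]]     # 517 in the study
final = after_abstract[after_abstract["passed_full_text_screen"]]   # 58 in the study
print(len(final))
```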
Table 6
Top-five cited articles, including their authors and affiliations according to Google Scholar (as of the final submission of this article).

Title | Year | Source | Cited by | Authors with affiliations | Publisher | Type
Designing theory-driven user-centric explainable AI | 2019 | Conference on Human Factors in Computing Systems - Proceedings | 447 | Wang, D. (School of Computing, National University of Singapore, Singapore); Yang, Q. (Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, PA, United States); Abdul, A. (School of Computing, National University of Singapore, Singapore); Lim, B.Y. (School of Computing, National University of Singapore, Singapore) | ACM | Conference
"It's reducing a human being to a percentage": perceptions of justice in algorithmic decisions | 2018 | Conference on Human Factors in Computing Systems - Proceedings | 353 | Binns, R. (Dept. of Computer Science, University of Oxford, United Kingdom); Van Kleek, M. (Dept. of Computer Science, University of Oxford, United Kingdom); Veale, M. (Dept. of Science, Technology, Engineering and Public Policy, University College London, United Kingdom); Lyngs, U. (Dept. of Computer Science, University of Oxford, United Kingdom); Zhao, J. (Dept. of Computer Science, University of Oxford, United Kingdom); Shadbolt, N. (Dept. of Computer Science, University of Oxford, United Kingdom) | ACM | Conference
Why and why not explanations improve the intelligibility of context-aware intelligent systems | 2009 | Conference on Human Factors in Computing Systems - Proceedings | 568 | Lim, B.Y. (Carnegie Mellon University, United States); Dey, A.K. (Carnegie Mellon University, United States); Avrahami, D. (Intel Research Seattle, United States) | ACM | Conference
Assessing demand for intelligibility in context-aware applications | 2009 | ACM International Conference Proceeding Series | 243 | Lim, B.Y. (Carnegie Mellon University, Pittsburgh, United States); Dey, A.K. (Carnegie Mellon University, United States) | ACM | Conference
The effects of transparency on trust in and acceptance of a content-based art recommender | 2008 | User Modeling and User-Adapted Interaction | 435 | Cramer, H. (Human Computer Studies Lab., University of Amsterdam, Netherlands); Evers, V. (Human Computer Studies Lab., University of Amsterdam, Netherlands); Ramlal, S. (Human Computer Studies Lab., University of Amsterdam, Netherlands); Van Someren, M. (Human Computer Studies Lab., University of Amsterdam, Netherlands); Rutledge, L. (Telematica Institute, Enschede, Netherlands; CWI, Amsterdam, Netherlands); Stash, N. (Eindhoven University of Technology, Netherlands; VU University Amsterdam, Netherlands); Aroyo, L. (Eindhoven University of Technology, Netherlands; VU University Amsterdam, Netherlands); Wielinga, B. (Human Computer Studies Lab., University of Amsterdam, Netherlands) | Springer | Journal
5.1.1.1. Format. … Ramakrishna, 2021). The explanation format in virtual assistant systems includes voice-based interactions along with textual and visual explanations (Weitz et al., 2019, 2021; Gao et al., 2022). An interactive agent with hybrid (textual and audio-visual) explanations can increase the perception of trust in a system (Weitz et al., 2021). XAI in immigration systems needs both the textual and visual information format because the decision-making requires careful observation of personal details, travel itineraries, and photo matches with travelers (Janssen et al., 2020). In the human resource context, both textual and visual explanations are recommended (Bankins et al., 2022). For criminal justice use cases, the reasoning should include information related to both "why" and "why not" because the counterfactual details help clear any doubt or bias (Dodge et al., 2019). The hybrid explanation format is also required for other context-aware systems (Lim et al., 2009), general decision-making systems (Brennen, 2020; Schrills and Franke, 2020), travel guides (Lim and Dey, 2009), cooking recommendation systems (Broekens et al., 2010), and wearable systems (Danry et al., 2020).

5.1.1.2. Completeness. Completeness in XAI refers to providing the target user with all required information, including on-demand supplementary data. For the healthcare domain, the user needs to be presented with patients' demographic information, cardinal symptoms, previous test data, and initial evaluations (Wang et al., 2019; Xie et al., 2019). The visual explanation can include a vivid and concise representation of appropriate diagnosis images, indicators of different properties, bar charts, etc. (Bussone et al., 2015; Cai et al., 2019; Ehsan et al., 2019; Eiband et al., 2018). The textual explanation can include a detailed representation of the decision-making procedure and the algorithms' working principles (Branley-Bell et al., 2020; Bussone et al., 2015; Cai et al., 2019; Daudt et al., 2021; Eiband et al., 2019; Lee and Rich, 2021). In addition, providing users with contextual information and references about the prediction upon request increases users' trust and perception of reliability (Branley-Bell et al., 2020; Daudt et al., 2021; Lee and Rich, 2021; Bove et al., 2021). Contextual information refers to an explanation that is domain-specific or application-specific. Along with explaining the algorithms and machine-learning models, it is also important to include domain-specific contextual information regarding decision making. The contextual information varies across different domains. Therefore, it should be considered by the developers during the design phase of a system. Media and entertainment recommendation systems can explain decision-making by revealing the working procedure of the algorithm, the personal data being used, and a visual representation of the recommendation being made (Ehsan et al., 2019; Kouki et al., 2019; Schmidt et al., 2020). For example, the users of music and movie recommendation systems want to see what kind of data has been used for the prediction and the popularity rating of the decision (Kouki et al., 2019; Ngo et al., 2020). In addition, information regarding the movie name, previous ratings, genres, and confidence measurements can be provided as an explanation.

Another example is an online news recommendation system, where the visual explanation includes a two-dimensional partial dependence plot that describes how the output is influenced by the input properties (Szymanski et al., 2021). A textual explanation of XAI can also include product type, price, order details, and other different attributes and features (Ehsan et al., 2021; Eslami et al., 2018; Bankins et al., 2022).
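The two-dimensional partial dependence plot mentioned above for news recommendation (Szymanski et al., 2021) can be produced in a few lines with scikit-learn. The sketch below is illustrative: the model, the synthetic data, and the chosen feature pair are stand-ins rather than anything from the reviewed study.

```python
# Sketch: a two-way partial dependence plot of the kind described for news
# recommendation explanations; model, data, and feature pair are illustrative.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# features=[(0, 1)] requests one two-dimensional plot showing how the
# prediction surface depends on features 0 and 1 jointly.
PartialDependenceDisplay.from_estimator(model, X, features=[(0, 1)])
plt.show()
```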
Table 7
Synthesis of prior literature.

Source | XAI representation | Effects | Explanation presentation time | Research focus
Cramer et al., 2008 | Hybrid representation | Accuracy, Trust, Transparency | With the recommendation and after the user demands explanation | Media and Entertainment
Branley-Bell et al., 2020 | Hybrid representation | Trust, Understandability | With the recommendation and after the user demands explanation | Healthcare
Cheng et al., 2019 | Hybrid representation | Understandability | After the user demands explanation as supplementary information | Education
Daudt et al., 2021 | Hybrid representation | Trust, Understandability | Not mentioned explicitly, but analysis shows both with the recommendation and after the user demands explanation | Healthcare
Lee and Rich, 2021 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | Healthcare
Wang et al., 2019 | Hybrid representation | Trust, Understandability | Not mentioned explicitly | Healthcare
Xie et al., 2019 | Hybrid representation | Trust | Not mentioned explicitly | Healthcare
Rodriguez-Sampaio et al., 2022 | Hybrid representation | Trust, Understandability | With the recommendation and after the user demands explanation | Healthcare
Bussone et al., 2015 | Graphical representation | Trust | With the recommendation and after the user demands explanation | Healthcare
Cai et al., 2019 | Graphical representation | Transparency | With the recommendation and after the user demands explanation | Healthcare
Eiband et al., 2019 | Graphical representation | Transparency | With the recommendation and after the user demands explanation | Recommendation System
Hudon et al., 2021 | Hybrid representation | Trust, Understandability | Not explicitly mentioned | Media and Entertainment
Górski and Ramakrishna, 2021 | Hybrid representation | Understandability, Fairness | With the recommendation and after the user demands explanation | Law
Evans et al., 2022 | Hybrid representation | Understandability | With the recommendation and after the user demands explanation | Healthcare
Ehsan et al., 2019 | Hybrid representation | Understandability | With the recommendation | Media and Entertainment
Kouki et al., 2019 | Hybrid representation | Trust | With the recommendation | Media and Entertainment
Ngo et al., 2020 | Hybrid representation | Transparency | With the recommendation | Media and Entertainment
Oh et al., 2018 | Hybrid representation | Trust, Usability | With the recommendation and after the user demands explanation | Media and Entertainment
Schmidt et al., 2020 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | Media and Entertainment
Szymanski et al., 2021 | Hybrid representation | Understandability, Transparency | With the recommendation and after the user demands explanation | Media and Entertainment
Ehsan et al., 2021 | Hybrid representation | Trust, Understandability | With the recommendation and after the user demands explanation | E-commerce
Eslami et al., 2018 | Hybrid representation | Understandability | With the recommendation and after the user demands explanation | E-commerce
Conati et al., 2021 | Hybrid representation | Transparency | With the recommendation and after the user demands explanation | Education
Mucha et al., 2021 | Hybrid representation | Fairness | With the recommendation and after the user demands explanation | Education
Putnam and Conati, 2019 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | Education
Li et al., 2021 | Hybrid representation | Transparency | With the recommendation and after the user demands explanation | Human Resource Management
Khosravi et al., 2022 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | Education
Binns et al., 2018 | Hybrid representation | Understandability, Fairness | With the recommendation | Transportation, Finance
Chromik et al., 2021 | Hybrid representation | Trust, Understandability, Usability | With the recommendation and after the user demands explanation | Finance
Cirqueira et al., 2020 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | Finance
Liu et al., 2021a | Hybrid representation | Transparency, Understandability, Fairness | With the recommendation and after the user demands explanation | Legal
Liu et al., 2021b | Hybrid representation | Transparency | With the recommendation and after the user demands explanation | Social Networking
Górski and Ramakrishna, 2021 | Hybrid representation | Transparency, Understandability, Fairness | With the recommendation and after the user demands explanation | Legal
Lim and Dey, 2009 | Hybrid representation | Understandability | With the recommendation and after the user demands explanation | Social Networking
Yin et al., 2019 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | Social Networking
Weitz et al., 2019 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | Digital Assistant
Weitz et al., 2021 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | Digital Assistant
Janssen et al., 2020 | Hybrid representation | Transparency, Fairness | With the recommendation and after the user demands explanation | E-Governance
Bankins et al., 2022 | Hybrid representation | Trust | With the recommendation | Human Resource Management
Dodge et al., 2019 | Hybrid representation | Trust, Fairness | With the recommendation and after the user demands explanation | E-Governance
Lim et al., 2009 | Hybrid representation | Understandability | With the recommendation and after the user demands explanation | Recommendation System
Brennen, 2020 | Hybrid representation | Trust, Usability | With the recommendation and after the user demands explanation | Recommendation System
Schrills and Franke, 2020 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | Recommendation System
Broekens et al., 2010 | Hybrid representation | Understandability | With the recommendation and after the user demands explanation | Media and Entertainment
Danry et al., 2020 | Hybrid representation | Understandability | With the recommendation and after the user demands explanation | Healthcare
Eiband et al., 2018 | Graphical representation | Transparency | With the recommendation and after the user demands explanation | Healthcare
Bove et al., 2021 | Graphical representation | Trust, Understandability | With the recommendation and after the user demands explanation | E-commerce
Chazette and Schneider, 2020 | Hybrid representation | Understandability | With the recommendation | Transportation
Schneider et al., 2021 | Hybrid representation | Understandability | With the recommendation | Transportation
van der Waa et al., 2020 | Hybrid representation | Understandability, Transparency | With the recommendation | Transportation
Park et al., 2021 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | Human Resource Management
Hong et al., 2020 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | Social Networking
Liao et al., 2020 | Hybrid representation | Trust, Transparency | With the recommendation and after the user demands explanation | Social Networking
Wang and Moulden, 2021 | Hybrid representation | Transparency | With the recommendation and after the user demands explanation | Social Networking
Dhanorkar et al., 2021 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | AI Development
Evans et al., 2022 | Hybrid representation | Trust | With the recommendation and after the user demands explanation | Healthcare
Andres et al., 2020 | Hybrid representation | Trust | On-demand explanation | AI Development
Hind et al., 2020 | Hybrid representation | Understandability | With the recommendation and after the user demands explanation | Social Networking
The reasons for user agreement and disagreement related to predictions in intelligent tutoring systems must be explained to the users as well (Conati et al., 2021; Putnam and Conati, 2019). Therefore, users should be able to request information if the explanations provided do not meet their expectations. Furthermore, other systems, such as grade estimations and university admission decision making, are required to provide students with personal details, academic details, and other required attributes that contribute to decision making (Cheng et al., 2019; Mucha et al., 2021).

Loan application systems, fraud detection, and other banking software are sophisticated decision-making systems. Therefore, explanations for these types of systems should be more detailed and comprise personal details, previous credit history, employment history, and the algorithms' working procedures (Binns et al., 2018; Chromik et al., 2021; Cirqueira et al., 2020). Similarly, for transportation systems, decision making can be explainable using contextual information, confidence measurements, light indicators, and previous decisions in similar situations (Chazette and Schneider, 2020; Schneider et al., 2021; Bove et al., 2021). Flight re-routing systems can provide the reasoning behind choosing specific routes and other supplementary information on demand (Binns et al., 2018). Virtual assistant systems should provide an explanation using the appearance of a virtual agent, such as facial expressions, voice, and gestures. A harmonic combination of explainable AI methods as well as appropriate linguistic representations can make the system trustworthy (Weitz et al., 2019, 2021; Gao et al., 2022). XAI in a human resource management system should explain the working procedure and should display personal information and other attributes both in a textual and visual format (Park et al., 2021). A similar situation is observed in the case of immigration services and criminal justice use cases (Dodge et al., 2019; Janssen et al., 2020). To establish the completeness of the XAI system, the system developers and designers should keep a critical eye on the explanation types and user requirements. The users want "why," "why not," "how," "what if," and "what else" explanations from the systems along with an interactive user interface (Broekens et al., 2010; Conati et al., 2021; Schrills and Franke, 2020). Moreover, developers may consider using different color-coding indicators that can also enhance trust among users (Brennen, 2020). Therefore, they should design and develop an interactive system carefully considering all the requirements of users, device diversity, and regulatory issues to promote the completeness of XAI (Brennen, 2020; Danry et al., 2020; Hong et al., 2020).

5.1.1.3. Accuracy. Users' accuracy perceptions regarding information from XAI systems vary and depend on different factors. Explanations containing personalized prioritization matrices, counterfactual information about specific predictions (Liu et al., 2021a, 2021b), and supplementary information instigate the perception of accuracy and understandability among users (van der Waa et al., 2020; Wang et al., 2019; Xie et al., 2019).
Moreover, this information can help verify decision-making and motivate the user to adopt an AI-based system (Wang et al., 2019). The academic tutoring system shows the confidence value of a decision as an explanation, which helps users accept or ignore a decision (Putnam and Conati, 2019). In addition, for education-related AI tools, explanation accuracy can increase if comparisons are shown between previous and current recommendations, trust scores of different recommendations using different models, etc. (Li et al., 2021; Khosravi et al., 2022). Some explainable AI systems can increase user interaction by providing detailed user instructions (Oh et al., 2018), information about the mental model used (Cramer et al., 2008), collaborative filtering (Ngo et al., 2020), and contextual data (Eiband et al., 2018; Liao et al., 2020; Bove et al., 2021). User involvement in the design process reduces the knowledge gap and promotes accuracy perceptions (Eslami et al., 2018; Ngo et al., 2020; Oh et al., 2018). Users' accuracy perceptions of XAI information are based on an explanation that contains information related to the certainty level of prediction (Bussone et al., 2015; Eiband et al., 2019), algorithmic decision-making procedures (Eiband et al., 2019; Park et al., 2021), claims and evidence (Danry et al., 2020), and information regarding domain expert engagement in the development process (Mucha et al., 2021; Wang and Moulden, 2021).

XAI should produce a human-like explanation and should show the accuracy level of the system to make the system more interpretable and accurate (Janssen et al., 2020; Lim and Dey, 2009; Park et al., 2021). Users' perceptions of the accuracy of the data of the XAI system can be established if the explanation of the algorithmic working procedure is presented sequentially to the users. This sequential flow of actions and information will motivate the user to accept or deny the decision (Broekens et al., 2010; Conati et al., 2021). AI-based law-related decision-making systems can positively affect users' accuracy perceptions by including evidence-based-reasoning sentences, legal rule sentences, and citation sentences (Górski and Ramakrishna, 2021). Explanations including this information act as a reference to the accuracy of the decision made by the system (Górski and Ramakrishna, 2021).

5.1.1.4. Currency. Currency is defined as the user's perception of up-to-date information (Wixom and Todd, 2005); however, for XAI, currency unfolds differently. XAI explains the algorithmic working principle, counterfactual data, supplementary information, and contributing features (Binns et al., 2018; Chromik et al., 2021; Eiband et al., 2019). From the XAI perspective, though the users are presented with an automatic explanation, an on-demand explanation is also available. The on-demand explanation can include the most recent information about any decision (Bussone et al., 2015; Putnam and Conati, 2019; Schrills and Franke, 2020; Wang et al., 2019). Supplementary information regarding the contextual data and the latest and historical references can also be available in XAI systems (Binns et al., 2018; Branley-Bell et al., 2020; Cirqueira et al., 2020; Bove et al., 2021). When designers and developers include the users in the XAI development process, they can acquire up-to-date user requirements (Ngo et al., 2020; Oh et al., 2018). For fraud detection, loan approval contexts, and other mission-critical systems, it is essential to present the latest information (Binns et al., 2018; Chromik et al., 2021; Cirqueira et al., 2020). Recruitment systems should also use the candidates' up-to-date information for recruitment-related decision-making (Bankins et al., 2022; Li et al., 2021). Financial decision-making, such as loan or credit approval, should use the up-to-date financial history of the person (Chromik et al., 2021; Cirqueira et al., 2020). Another use case related to the flight re-routing system offers the latest flight data to users so that travel is flexible and comfortable (Binns et al., 2018). The same goes for media and entertainment recommendation systems, where the latest movies and music are recommended to the users as part of the process (Kouki et al., 2019). Similarly, instant messaging applications and tour guide systems present the latest explanation data to the user (Lim and Dey, 2009; Yin et al., 2019).

5.1.2. Effects of XAI

Our goal in this paper is to link the XAI representation dimensions with XAI effects. Towards this goal, we adopted the XAI objectives described by Laato et al. (2022) and categorized explainable AI effects into trust, transparency, usability, understandability, and fairness. The effects are briefly explained in the following subsections based on our literature review.

5.1.2.1. Trust. Based on our literature review, we have observed that users' trust is affected by both the stated and the observed accuracy of the machine-learning model. For example, users' trust in a machine-learning model increases or decreases based on the information about stated and observed accuracy (Yin et al., 2019). Prior research studies have shown that providing users with contextual information, historical data, and the proper reference behind decision making enhances trust in the system, particularly in the context of healthcare and finances (Cirqueira et al., 2020; Kouki et al., 2019; Wang et al., 2019; Xie et al., 2019; Dhanorkar et al., 2021; Bove et al., 2021). Furthermore, users' perceptions of bias are reduced if explanations include input value attributes, reference data related to the prediction, and contextual information (Cirqueira et al., 2020; Daudt et al., 2021; Hong et al., 2020; Lee and Rich, 2021; Evans et al., 2022). Moreover, a high confidence level for specific predictions helps users build trust in the system (Bussone et al., 2015; Ehsan et al., 2021).

Explanation styles have a significant impact on users' trust in a system. For example, visual explanations of the input data of the machine-learning model induce a higher level of visibility, understandability, observability, and trust in the system (Schrills and Franke, 2020; Hudon et al., 2021). User trust also varies based on whether they are informed that a human or AI made the decision; however, the variation is mostly observed when the decision is positive. Users tend to trust a system if the decision is positive irrespective of the decision maker (Bankins et al., 2022). The explanation should contain enough detail regarding the prediction and decision-making procedure so that users can feel confident and trust the system. Among various types of visual explanation formats, augmented reality-based explanations and product displays also enhance end users' trust in a system (Rodriguez-Sampaio et al., 2022). Too much information could create cognitive overload and decrease users' understanding and trust (Cramer et al., 2008; Schmidt et al., 2020; Hudon et al., 2021). The explanation should be stakeholder-oriented, such as by designing an interactive user interface to explain to non-technical stakeholders (Andres et al., 2020; Liao et al., 2020). Therefore, increased user interaction by providing adequate instructions and allowing the user to take initiatives would increase reliability and trustworthiness (Dodge et al., 2019; Oh et al., 2018; Putnam and Conati, 2019; Schrills and Franke, 2020).

If a system can simulate human-like expressions using lip sync and body language, it can increase trust (Weitz et al., 2021). For example, virtual assistants' voices, facial expressions, and gestures enhance users' trust. Therefore, a harmonic combination of a human-like facial expression along with an appropriate linguistic representation can have a significant impact on users' trust (Weitz et al., 2019; Gao et al., 2022). From the organizational point of view, employees' trust in any AI-based system is related to effectiveness, job efficiency, data protection, user understanding, and control. Weitz et al. (2019) also observed that though explanations should show the user relevant data along with the attributes, personal data need to be masked for privacy reasons (Wang and Moulden, 2021). Similarly, explanations that include comparisons among different attributes and previous and current recommendations can increase user (students, teachers, and educational researchers) trust in education-related XAI systems (Khosravi et al., 2022). Another study related to human resource management revealed that decreasing the knowledge gap between the user and the system can enhance trust (Chromik et al., 2021). Therefore, the authors also recommended reducing the knowledge gap by collaborating with users during the XAI …
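The "stated versus observed accuracy" signals discussed above (Yin et al., 2019), and the per-prediction confidence levels reported by several reviewed systems, map onto two standard model outputs. The sketch below shows one way such signals could be surfaced to an end user; it is an assumption-laden illustration with a synthetic model, not an implementation from any reviewed study.

```python
# Sketch: surfacing a stated accuracy and a per-decision confidence to a user.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
stated_accuracy = model.score(X_test, y_test)   # "stated" accuracy shown up front

proba = model.predict_proba(X_test[:1])[0]      # confidence for one decision
print(f"Stated model accuracy: {stated_accuracy:.0%}")
print(f"Confidence in this prediction: {proba.max():.0%}")
```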
Table 8
Research gaps, research questions, and possible research paths.

Theme: XAI Standardization
Issues/topics/research gaps: Lack of holistic guidelines for XAI development for researchers and practitioners. The XAI development process is opaque. Research from a regulatory and compliance perspective is not available. Communication methods and nature among the stakeholders are not defined.
Current knowledge: Our review shows no empirical study that provides holistic guidelines or standards for developing an XAI system. GDPR is newly introduced and one of the strictest guidelines. Hence, integrating it into XAI requires rigorous investigation.
RQ 1. How can XAI development guidelines be identified?
• Alignment of the current software development cycle with XAI development.
• Different stakeholders can be involved in the XAI software development lifecycle to determine the best practices/guidelines.
• Identify the stakeholders.
• Conduct qualitative and quantitative research to identify the stakeholder requirements.
• Identify different types of stakeholder engagement with the XAI development lifecycle through qualitative and quantitative research.
• Research across multiple domains can also help portray domain-specific guidelines as well as generic guidelines for XAI development.
RQ 2. How can we incorporate GDPR as a design requirement of XAI development?
• Understand the applicable GDPR articles.
• Codesign with industry practitioners, end users, and legal experts to investigate the GDPR requirements.
• Conduct data protection impact assessments to evaluate the GDPR compliance trends.
• Conduct design science research to identify common guidelines for GDPR compliance.
RQ 3. How do we integrate the proposed "Ethics Guidelines for Trustworthy Artificial Intelligence" by the High-Level Expert Group on Artificial Intelligence from the European Commission?
• Investigate and identify the feasibility of integrating ethical guidelines.
• Co-design with practitioners and experts to outline the suitable ethical guideline requirements.
RQ 4. How can the XAI stakeholders communicate with others for collaborative development?
• Stakeholders should be identified.
• Codesign with the stakeholders to establish a collaborative development environment.
• Use iterative evaluations of different types of communication techniques to find the suitable one.

Theme: XAI Visualization
Issues/topics/research gaps: Very few theory-guided studies have been conducted.
Current knowledge: Information (or explanation) quality dimensions are not properly aligned with … stakeholders.
• Investigate the information quality dimensions in prior literature and … explanations.
• Understand how the effect of XAI changes over time among different stakeholders.
• Behavioral theories can be used to propose suitable research models.

… (explainable), the design becomes complex, and the whole process becomes time-consuming as well (van der Waa et al., 2020). For movie recommendation systems, content-based collaborative filtering can be adopted to increase transparency. Therefore, item-based explanations … (… et al., 2019; Eslami et al., 2018; Lim et al., 2009; Bove et al., 2021). Moreover, presenting every interaction within the system in a sequential manner helps users understand the working procedure of the system (Broekens et al., 2010). Ehsan et al. (2021) found that social transparency is important in increasing an AI-based system's understandability; however, without background information and proper contextual information, the prediction accuracy (confidence measurement) is nothing but a number (Ehsan et al., 2021; Bove et al., 2021). Explanations with logical reasoning and counterfactual information improve the understandability of the system (Górski and Ramakrishna, 2021). In the case of expert-level end users, counterfactual explanations help to understand the generated explanations, decision making, and factors relevant to the algorithms (Evans et al., 2022). Case-based explanations can increase the understandability of decision making in criminal justice related use cases (Liu et al., 2021a, 2021b). The study by Liu et al. (2021a, 2021b) also showed that if users complete some training before using a system, this can increase the interactive nature of and the familiarity with the explanations (Liu et al., 2021a, 2021b).
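The counterfactual ("why not") explanations recurring in the passage above can be illustrated with a deliberately naive search: perturb one input feature until the predicted class flips, and report the change. Real XAI systems use dedicated counterfactual methods; this brute-force sketch with a synthetic model only conveys the idea.

```python
# Naive counterfactual sketch: find a single-feature change that flips the
# model's decision. Illustrative only; not from any reviewed study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0].copy()
original = model.predict([x])[0]

for feature in range(x.size):
    for delta in np.linspace(-3, 3, 61):
        candidate = x.copy()
        candidate[feature] += delta
        if model.predict([candidate])[0] != original:
            print(f"Changing feature {feature} by {delta:+.2f} flips the decision")
            break
    else:
        continue  # no flip found for this feature; try the next one
    break
```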
In autonomous driving, different feedback modalities can also increase the understandability of decision making (Schneider et al., 2021). Moreover, for wearable systems, auditory feedback of explanations increases a system's understandability (Danry et al., 2020).

In the case of non-technical stakeholders, the information should be clear, concise, and comprehensive so that there is no unnecessary information that might create a cognitive overload (Hudon et al., 2021). Users of AI-based hiring systems require numerical data of the assessment along with the explanation to increase understandability. The attributes should be properly labelled and explained, and the decision should be properly reasoned to increase user understandability (Li et al., 2021). Furthermore, the explanation provision can be on-demand to avoid the monotonous and time-consuming nature of a system (Chazette and Schneider, 2020). Another way to increase understandability is to use fact sheets and mental models for the variety of stakeholders involved in the AI development process (Chromik et al., 2021; Hind et al., 2020). The fact sheet contains all the attributes of data, the prediction mechanism, the working principle, the inherent structure of the model, the training data for machine-learning models, testing protocols, and testing models (Hind et al., 2020). In addition, the developers of XAI must understand the user's mental model before developing the system. The mental model is crucial for any interactive system design since it is based on users' beliefs and perceptions about the external world. Therefore, the developer team must collaborate with end users, domain experts, and other necessary stakeholders to establish dedicated communication (Chromik et al., 2021).

5.1.2.4. Usability. XAI systems can have a positive impact on a system's usability (Oh et al., 2018). According to Chazette and Schneider (2020), for navigation systems, users would like to feel in control of the system because it provides the user with the choice of accepting or rejecting a decision. Furthermore, feedback modalities/features in autonomous vehicles can significantly increase user experiences by making the system more usable and understandable (Chazette and Schneider, 2020). For the finance and human resource domain, following a particular explanation style is vital because presenting various explanations using a specific explanation style could help the user understand the role of various features and the reasoning behind the prediction, which can improve usability. Similarly, Szymanski et al. (2021) found that in the case of a news article recommendation system, explanations help users assess their own article writing skills and at the same time learn to improve their articles. Furthermore, to increase usability, accessible and interactive interfaces should be designed and developed for non-technical stakeholders (Andres et al., 2020; Brennen, 2020). Involving the stakeholders in the development lifecycle may also increase a system's usability (Chromik et al., 2021).

5.1.2.5. Fairness. The fairness of an intelligent system is dependent on various attributes and values as well as validity. Local explanation, which refers to an explanation of each prediction, enhances system fairness perceptions for the user. Case-based explanations have less impact on fairness criteria, but a global explanation can compensate for this and can enhance user trust (Dodge et al., 2019); however, for a criminal justice use case, case-based explanation, evidence-based reasoning, legal rule sentences, and citation sentences have impacts on users' fairness perceptions (Liu et al., 2021a, 2021b; Górski and Ramakrishna, 2021). Social media related health applications and services require explanations with all types of details, logical reasoning, demographic information, and supplementary information to increase fairness perceptions among users (Liu et al., 2021a, 2021b).

For the finance and human resource domains, explanation style is vital to fairness perceptions. As mentioned, the different explanations presented in similar explanation styles could help the user understand the role of various features and the reasoning behind a prediction (Binns et al., 2018). Therefore, the user's ability to differentiate among various reasons will increase, resulting in enhanced fairness perceptions (Binns et al., 2018; Janssen et al., 2020). Furthermore, fairness is perceived more favorably by users when the input influence explanation presented is understood (Binns et al., 2018; Mucha et al., 2021).

5.1.3. Explanation presentation time
Our critical observation of the selected literature shows that explanations are provided to users in two ways. In most cases, the explanation is shown to the user automatically while visualizing the decision itself. In this case, the user wants minimal and adequate information to be provided to avoid cognitive overload (Ehsan et al., 2019; Schmidt et al., 2020; Hudon et al., 2021; Dhanorkar et al., 2021). Therefore, to have more control over an explainable AI system, users also prefer on-demand supplementary and contextual information sharing. Hence, "when the AI should be explainable" revolves around these primary concepts. For different domains, the concept is presented in various ways because the nature of the interaction is not the same across all domains. Some of these scenarios are described in detail.

For medical personnel, both textual and visual explanations and related hints are displayed automatically to doctors after decision making (Branley-Bell et al., 2020; Eiband et al., 2018, 2019; Xie et al., 2019). In addition, supplementary information should be available upon user request for better diagnosis, understandability, and trust (Wang et al., 2019; Xie et al., 2019). The supplementary information can be a combination of a reference to the previous diagnosis or any historical data that might help make an accurate decision (Branley-Bell et al., 2020; Bussone et al., 2015; Daudt et al., 2021; Lee and Rich, 2021). Similarly, various media and entertainment systems require textual and visual explanations immediately with the prediction result (Kouki et al., 2019; Ngo et al., 2020). Music recommendation systems, art recommendation systems, arcade gaming, tour guides, cooking agents, and movie recommendation systems require automatic rationale generation along with personalized recommendations (Cramer et al., 2008; Ehsan et al., 2019; Lim and Dey, 2009; Ngo et al., 2020; Schmidt et al., 2020). Another requirement for automatic explanations is to reveal the filtering technique used as well as which data have been considered. For AI-based drawing tools, the user needs information on-demand rather than automatic explanations because users need to lead the task rather than receive suggestions from the system (Oh et al., 2018). On-demand explanation is also required for intelligent cooking agents. The users like to lead the task and later like to receive explanations from the system (Broekens et al., 2010).

In the case of the financial domain, users are automatically presented with an explanation regarding decision making. Though the decision making is automatic, the system should show the explanation in different styles to make it more understandable (Binns et al., 2018; Cirqueira et al., 2020). Similar to healthcare decision making, contextual information is also needed upon user request (Chromik et al., 2021; Rodriguez-Sampaio et al., 2022). Autonomous car users require automatic and prompt textual and visual explanations along with the prediction result for quick decision making (Chazette and Schneider, 2020; Schneider et al., 2021). Contextual information about different scenarios should be available upon user request. Moreover, the literature analysis revealed that users of human resource management, e-commerce, and other recommendation systems (Broekens et al., 2010; Conati et al., 2021; Ehsan et al., 2021; Park et al., 2021; Zimmermann et al., 2022) require both on-demand and automatic explanations. Therefore, it is clear that in most cases, users receive explanations automatically along with the prediction result; however, supplementary information is necessary and should be available on-demand in most cases. The supplementary information can be textual, visual, or hybrid because there is no clear information on this requirement in the literature. In addition, job recruiters or recruitment agencies sometimes need to backtrack the decisions they made to get help with the next recruitment. Backtracking helps to understand the decision-making process of the system. Therefore, human resource managers and recruiters require mostly on-demand explanations (Li et al., 2021; Daudt et al., 2021).
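The two presentation times described above can be summarized as a small interface sketch: a short rationale is rendered automatically with the decision, while richer supplementary material is fetched only when the user asks for it. All class and field names here are hypothetical and are not drawn from any reviewed system.

```python
from dataclasses import dataclass, field

@dataclass
class Explanation:
    """One decision plus its two-stage explanation (names are illustrative)."""
    decision: str                       # what the system decided
    rationale: str                      # short text shown automatically
    supplementary: dict = field(default_factory=dict)  # fetched only on request

    def present(self) -> str:
        """Automatic presentation: the decision and a minimal rationale only."""
        return f"{self.decision}: {self.rationale}"

    def details(self, topic: str) -> str:
        """On-demand presentation: richer context the user explicitly asks for."""
        return self.supplementary.get(topic, "no further detail recorded")

loan = Explanation(
    decision="Loan application declined",
    rationale="Debt-to-income ratio above the approval threshold.",
    supplementary={
        "history": "Two late repayments in the previous 12 months.",
        "what_if": "A ratio below 0.35 would have led to approval.",
    },
)
print(loan.present())            # shown automatically with the decision
print(loan.details("what_if"))   # shown only when the user requests it
```

Keeping the automatic part minimal addresses the cognitive-overload concern, while the on-demand accessor gives users the control the reviewed studies call for.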
5.2. Research domains

We have identified 10 domains in which XAI has been used: healthcare, media and entertainment, education, transportation, finance, e-commerce, human resource management, digital assistant, e-governance, and social networking. We discuss the use of XAI in these domains in more detail below.

5.2.1. Healthcare
Healthcare is one of the most explored research domains in XAI. This domain includes research on clinical decision making, disease diagnosis, and health-related recommendation systems (Branley-Bell et al., 2020; Bussone et al., 2015; Cai et al., 2019; Wang et al., 2019; Rodriguez-Sampaio et al., 2022). The users of AI-based systems in healthcare are primarily doctors with very little technical knowledge. Moreover, they tend to have their own opinion regarding disease detection and clinical decision making (Branley-Bell et al., 2020; Bussone et al., 2015; Lee and Rich, 2021). Therefore, the explanation required by doctors should include sufficient graphical and textual data along with appropriate contextual references (Branley-Bell et al., 2020; Daudt et al., 2021; Xie et al., 2019). In addition, healthcare-based applications, such as fitness apps and nutrition recommendations, should have a communicative and interactive user interface (Eiband et al., 2018, 2019).

5.2.4. Transportation
Users of autonomous vehicles give importance to explaining a certain decision in relation to use cases (Schneider et al., 2021). Moreover, the explanations (hints) provided to the user can be visual, textual, light indicators, or a hybrid mode (Schneider et al., 2021; van der Waa et al., 2020). For navigation systems, the user requirements are slightly different because the users require on-demand explanations as well as proper reasoning behind any decision being made. A similar scenario is observed in the case of flight re-routing systems. In both cases, the users wanted to control the flow of suggestions (explanations) provided by the system (Binns et al., 2018).

5.2.5. Finance
Financial use cases of XAI research include insurance, financial fraud detection, and loan applications (Binns et al., 2018; Chromik et al., 2021; Cirqueira et al., 2020). For banking activities, such as insurance claims and loan approvals, explanations regarding specific decision making should be made available to users (Chromik et al., 2021). The operator should be able to see the loan requestor's information, credit history, and other demographic information. Binns et al. (2018) and Cirqueira et al. (2020) also argued that when designing an explainable system, the developers must understand and connect with the user's mental model. An effective XAI system should be able to detect an incorrect mental model and calibrate it accordingly (Binns et al., 2018; Cirqueira et al., 2020).
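Because several of the reviewed finance studies stress that the same decision should be explainable in more than one style, the sketch below renders one invented loan decision in an input-influence style and a case-based style, loosely following the style comparisons of Binns et al. (2018). All data and function names are illustrative.

```python
# One automated decision rendered in two explanation styles; all values invented.
influences = {"income": -0.42, "credit_history_length": -0.18, "debt_ratio": -0.31}
similar_case = {"year": 2021, "income": "comparable", "outcome": "decline"}

def input_influence_style(influences: dict) -> str:
    """Rank features by how strongly each pushed the decision."""
    ranked = sorted(influences.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return "Main factors: " + ", ".join(f"{name} ({weight:+.2f})"
                                        for name, weight in ranked)

def case_based_style(case: dict) -> str:
    """Justify the decision by pointing to a similar historical case."""
    return (f"An applicant in {case['year']} with {case['income']} income "
            f"also received a {case['outcome']}.")

print(input_influence_style(influences))
print(case_based_style(similar_case))
```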
5.2.8. Digital assistant
Integrating virtual agents into XAI interaction design can make explanations more appealing to users. Moreover, end users require linguistic explanations from an XAI system. Hence, an interactive agent with a harmonious combination of explainable AI methods and an appropriate linguistic representation can make a system trustworthy and more user-centered (Gao et al., 2022; Weitz et al., 2021).

5.2.9. E-governance
Empirical analyses have been performed on a criminal justice use case to investigate people's perceptions of the fairness of machine-learning algorithms and to what extent these algorithms need explanations (Dodge et al., 2019; Janssen et al., 2020). To increase understandability, credibility, and trust, the system should explain the algorithm's working procedure, the attributes that contribute to decision making, and the availability of contextual data (Dodge et al., 2019). Similarly, investigations of immigration services use cases reveal that though algorithms can help in decision making, it is not necessary to make all decisions using algorithms (Janssen et al., 2020). One study also revealed that the white box approach (explainable AI approach) can lead to better decision making (Janssen et al., 2020). Therefore, e-governance requires human intervention for critical decision making.

5.2.10. Social networking
Research on the social networking domain has revealed that the participants require both "why" and "why not" explanations for specific system behaviors (Lim and Dey, 2009; Yin et al., 2019; Liu et al., 2021a, 2021b). Therefore, developers can provide user log information, mental model related information, and contextual information on-demand. Moreover, an effective explainable AI system requires human user intervention in the design process through a dedicated communication medium (Yin et al., 2019).

Apart from the application domains of XAI, several studies have discussed the realm of XAI development. Studies related to XAI have been conducted to develop practical guidelines for designers, developers, domain experts, and other related stakeholders (Hind et al., 2020; Hong et al., 2020; Liao et al., 2020; Wang and Moulden, 2021). Liao et al. (2020) designed a question bank as a standard guideline for collecting user requirements for user-centered AI. The guidelines provided in this study can be a vital component in designing a trustworthy, understandable, interactive, and user-centric XAI system (Liao et al., 2020). Developers should also explore the problem space and conceptualize primary and alternative strategies (Hong et al., 2020; Liao et al., 2020). In addition, XAI development requires the active participation of domain experts, product managers, data scientists, auditors, and end users (Wang and Moulden, 2021).

6. Critical analysis of future research agendas

This section focuses on questioning and problematizing future research directions (Alvesson and Sandberg, 2011, 2020). In contrast to the previous section's discussion of the XAI research trend, this section extensively focuses on establishing a critical standpoint on future research directions by analyzing "what" the current knowledge is and "how" it can be improved (Alvesson and Sandberg, 2011, 2020). Therefore, we have reconsidered the current understanding related to XAI's methodological, conceptual, and development issues and investigated the unexplored areas. We have divided the whole observation into three primary thematic categories. The first one considers the standardization practice, the second focuses on representing XAI, and the last considers the overall effect of XAI on humans. Furthermore, rather than simply pointing out the gaps that exist in the research findings, we have tried to articulate emerging research questions deduced from the unexplored research areas. We then constructed them in terms of their potential significance to identify specific and feasible research paths. Table 8 provides an overview of the future research directions based on current knowledge.

6.1. Theme 1: XAI standardization

Our analysis reveals that XAI has been used in various domains; however, there is a lack of studies that inform XAI standardization. One of the articles provides the guidelines for UI design for XAI, which both the designers and developers can use if needed (Eiband et al., 2018). Another article proposes a question bank that might be useful for requirement elicitation for explainable AI (Liao et al., 2020); however, these two articles do not offer comprehensive guidelines or standards for developing an explainable AI system. Therefore, the following research questions can be addressed for the XAI standardization theme.
6.1.1. RQ 1. How can XAI development guidelines be developed?
Extensive research on XAI design and development can facilitate determining standard guidelines and best practices for XAI development. Therefore, an important research direction would be identifying the best practices and guidelines for XAI development. Addressing this research question should include the involvement of all necessary stakeholders in the research. Furthermore, researchers across multiple domains can help create domain-specific guidelines for XAI development. Design science research can also be used to create XAI development guidelines (Hevner and Chatterjee, 2010).

6.1.2. RQ 2. How can we incorporate regulatory and ethical aspects as design requirements of XAI development?
Our observation reveals a lack of empirical studies from a regulatory and compliance perspective. Article 22 of the GDPR discusses "automated individual decision-making, including profiling," to safeguard the data subject's personal information from automatic processing (Malgieri, 2019; European Union, 2018). In addition, Articles 13–15 of the GDPR discuss the data subject's right to know the logic, that is, "meaningful information," regarding the processing of personal data. To be more precise, the data subject has the right to be informed about "meaningful information about the logic involved" if any decision related to the subject is "based solely on automated processing" (Malgieri, 2019).

To address this research question, researchers must identify the possible GDPR articles related to the XAI system. The requirements of GDPR compliance are mostly related to personal data collection, processing, retention strategy, and destruction. Therefore, co-design work that involves regulators, auditors, privacy officers, and other necessary stakeholders is a useful research direction. Another crucial step is to conduct a data protection impact assessment.

6.1.3. RQ 3. How can the stakeholders communicate with the developer team for XAI development?
We have observed from the review that communication between the developer team and other stakeholders is essential to XAI development; however, there are limited guidelines to initiate and conduct such communication (Meske et al., 2022). Therefore, to address this research question, researchers can organize co-design workshops with the stakeholders of the XAI ecosystem so that communication techniques can be identified and evaluated.

6.2. XAI visualization

6.2.1. RQ 4. How do we measure the explanation quality dimensions of XAI?
We have identified explanation quality dimensions in this paper. Previous studies did not empirically measure the explanation quality dimensions. In our review, we also observed that the explanation quality dimensions of AI systems unfold differently than the information quality dimensions. Therefore, researchers can search for the availability of existing measurement scales for format, completeness, accuracy, and currency. If such scales are available, researchers can adapt them to the XAI context. A major adaptation of these scales would be needed, and in fact, researchers may need to develop the scales from scratch by following the standard scale development procedure (Moore and Benbasat, 1991).

6.2.2. RQ 5. How does explanation representation differ in the case of relatively low-literate people?
The low-literacy group tends to have low AI literacy, which makes information representation more challenging. The selected articles used in this work revealed textual, visual, auditory, and hybrid modes of information representation. Different modes are used for different application domains; however, no article investigates either how to represent the explanations to low-literate people or how to measure their perceptions of the explanation. Therefore, addressing this research question can help present an explanation suitable for all users. Researchers should evaluate different XAI representations among low-literate people. It is vital to conceptualize the various XAI scenarios that can be presented to them.

6.3. XAI effects

6.3.1. RQ 6. How do we measure the trust, transparency, understandability, and usability of XAI? How do explanation quality dimensions affect trust, transparency, understandability, and usability?
Our observation in this review work revealed that a limited number of studies exist that measure the user perceptions of the transparency, understandability, and usability of an XAI system (Cramer et al., 2008; Daudt et al., 2021; Cheng et al., 2019). Thus, researchers should use existing measurement scales to measure these factors. The identified scales should be adapted to the XAI context. Theory-guided approaches can be used to construct models to investigate how explanation quality affects satisfaction, trust, transparency, understandability, and usability.

6.3.2. RQ 7. How do we measure the XAI impact on different stakeholders?
An AI ecosystem contains different stakeholders, such as designers, domain experts, developers, data scientists, UX engineers, and regulatory bodies (Meske et al., 2022; Laato et al., 2022). For example, domain experts can participate in the XAI development process to identify the feasibility of the explanations. Data scientists can assist the development process by designing more explainable machine-learning models. Similarly, other stakeholders can contribute to XAI development. To understand the impact on different stakeholders, a methodological approach similar to the one we suggested for RQ6 can be adopted.

6.3.3. RQ 8. What is the effect of XAI on low-literate people?
We did not find studies that targeted low-literate people. To address this research question, first, researchers need to identify the low-literate group of people. Experiments can be designed in which AI decision-making and explanations can be presented to collect responses on explanation quality and other important factors. This type of research can also validate the scales developed in RQ4 and RQ6 among low-literate user groups.

6.3.4. RQ 9. What is the longitudinal effect of XAI on various types of end users?
Human-centered XAI can benefit from longitudinal studies because they will help researchers understand the changes in user perceptions over time at the group level and individually. Most prior research studies on explainable AI are cross-sectional. Researchers can develop relevant research models and test them using a longitudinal research design.

7. Synthesized framework for XAI research from users' perspectives

The findings from the current SLR enabled us to construct a comprehensive framework for XAI research from end users' perspectives (Fig. 2). Building on the work of Wixom and Todd (2005), our proposed comprehensive framework suggests that object-based beliefs, such as the explanation quality dimensions (format, completeness, accuracy, and currency) as well as when to explain (automatic and on-demand), impact a number of behavioral beliefs, including trust, transparency, understandability, usability, and fairness. In turn, these behavioral beliefs impact behavioral intention (AI adoption, AI use).
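The framework's structure can also be written down directly, which is convenient when deriving testable hypotheses from it. The sketch below encodes the constructs named above and enumerates the hypothesized paths; the dictionary-free encoding and all variable names are our illustrative choices, not part of the framework itself.

```python
# Constructs from the synthesized framework; the encoding itself is illustrative.
OBJECT_BASED_BELIEFS = ["format", "completeness", "accuracy", "currency",
                        "when_to_explain"]   # automatic vs. on-demand
BEHAVIORAL_BELIEFS = ["trust", "transparency", "understandability",
                      "usability", "fairness"]
BEHAVIORAL_INTENTION = ["ai_adoption", "ai_use"]

# The framework proposes direct paths from object-based beliefs to behavioral
# beliefs, and from behavioral beliefs to behavioral intention.
paths = [(src, dst) for src in OBJECT_BASED_BELIEFS for dst in BEHAVIORAL_BELIEFS]
paths += [(src, dst) for src in BEHAVIORAL_BELIEFS for dst in BEHAVIORAL_INTENTION]

for src, dst in paths[:3]:                   # a few example hypotheses
    print(f"H: higher {src} -> higher {dst}")
print(f"{len(paths)} hypothesized paths in total")
```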
According to Wixom and Todd (2005), object-based beliefs are the characteristics of technology, whereas behavioral beliefs are the anticipated consequences of technology use. Wixom and Todd (2005) suggested that the impacts of object-based beliefs on behavioral beliefs are mediated through the object-based attitude (Eagly and Chaiken, 1993; Fazio and Olson, 2003); however, a recent empirical study (Islam et al., 2020) showed that object-based beliefs can have direct impacts on behavioral beliefs. Therefore, we have proposed direct relationships between object-based beliefs and behavioral beliefs in our framework. Fig. 2 shows the graphical representation of the framework.

8. Implications

8.1. Theoretical implications

Our SLR findings have five major theoretical contributions. First, from a broad perspective, our study is one of the few studies investigating AI end users' explanation needs. Therefore, our paper contributes to the previously conducted literature reviews (Wells and Bednarz, 2021; Anjomshoae et al., 2019; Gerlings et al., 2021a, 2021b; Laato et al., 2022), particularly by identifying the end users' explanation needs and the impacts.

Second, we adopted Wixom and Todd's (2005) conceptualization of information quality dimensions to conceptualize the explanation quality dimensions of AI systems. Our findings show that the explanation quality dimensions are format, completeness, accuracy, and currency. We have also observed that when to explain (automatic and on-demand) is another important factor of XAI. With our findings, we contribute to research conducted to design and govern responsible AI systems (Wearn et al., 2019; Maas, 2018; Peters et al., 2020; Rakova et al., 2021).

Third, we have described the five effects of XAI systems: trust, transparency, understandability, usability, and fairness. Our SLR findings position these factors as the most important effects of XAI. While these factors are described by Laato et al. (2022), our SLR links them with XAI representation dimensions.

Fourth, we have identified three major themes of future research: XAI standardization, XAI visualization, and XAI effects. We have proposed nine possible research questions that future IS researchers can investigate. We have also outlined the possible ways researchers can address these research questions.

Finally, we have proposed a comprehensive framework by connecting explanation-related factors and XAI effects. We further propose that the XAI effects can ultimately influence behaviors, such as AI adoption and use. This framework has implications for researchers. For example, many interesting research models can be developed and tested based on this framework. While the framework is developed using Wixom and Todd's (2005) work, which describes relationships among object-based beliefs, behavioral beliefs, and behavior, hypotheses can be developed from additional theories, such as the IS success model (DeLone and McLean, 1992, 2002), technology acceptance models (Davis, 1989; Davis et al., 1989; Chuttur, 2009), the theory of reasoned action (Ajzen and Fishbein, 1973; Ajzen and Fishbein, 1980; Fishbein, 1967; Fishbein and Ajzen, 1977; Hale et al., 2002), and the theory of planned behavior (Ajzen, 1985, 1991).

8.2. Practical implications

From a practical standpoint, this SLR can serve as a guideline for designing human-centric AI and measuring its consequences. Because AI is becoming more prevalent in all aspects of life, the findings of this study may drive researchers and enthusiasts to design digital services that are morally sustainable. For example, designers can ensure that their systems provide explanations related to the identified dimensions of explanation quality. Their design should also contain possibilities for both automatic and on-demand explanations. Our findings also outline a need to design XAI systems in various domains, not just for mission-critical systems. Since AI is now being used more than ever in various industrial and corporate decision-making, the findings of this SLR can help understand employees' behavioral intention to use those systems. As discussed in the literature, various state-of-the-art recruitment systems use data-driven decision-making. In cases like this, the explanation dimensions can help to understand the details of the decision-making process. Hence, the synthesized framework of this SLR can be adopted in various industries and corporate organizations to understand the likelihood of system adoption and use. Therefore, we suggest that system designers consider this need when they design AI-based systems. This also has implications for AI education. We suggest including topics such as explainable AI, responsible AI, and AI governance, among others, as important topics to train AI developers in addition to technical topics.

9. Conclusion

Recently, AI has gained significant momentum, which, if correctly managed, does have the potential to revolutionize various sectors; however, the AI community must overcome the challenge of explainability, an intrinsic hurdle that was not a part of AI-based ecosystems before. This work has comprehensively discussed XAI from the end user's perspective. We have identified the dimensions of explanation quality from existing empirical studies, and we found that the effects of XAI on end users can motivate users to adopt and use AI-based systems. Furthermore, by investigating the selected studies, we have identified crucial future research avenues. Possible directions to address these avenues and a comprehensive framework have also been identified and developed, respectively. Though the widespread application of XAI is yet to be implemented, based on our review, the growing need for XAI is vividly clear. The explanation quality dimensions of XAI outlined in this work are vital to XAI system development because the dimensions can have impacts on trust, understandability, fairness, and transparency.

Our study has three limitations. First, we have considered only the empirical studies on XAI for this review work. Future studies can also consider theoretical papers on XAI.

Second, we used Scopus and Web of Science for the database search. Hence, we might have missed important studies for our work. This limitation can be addressed in the future by conducting searches of other databases.

Third, we have used Wixom and Todd's (2005) information quality dimensions for conceptualizing the explanation quality of AI systems. There are other information quality dimensions proposed by other researchers (Wang and Strong, 1996). Therefore, future studies can use these dimensions to identify additional explanation quality dimensions for AI systems.

CRediT authorship contribution statement

AKM Bahalul Haque: Conceptualization, Methodology, Conducting Primary Search, Data Collection, Writing Original Draft, Analyzing and Addressing the Reviewer's Comments.
A.K.M. Najmul Islam: Conceptualization, Reviewing Draft, Editing, Reviewing the Search Result, Critically Analyzing Reviewers' Comments, Supervision.
Patrick Mikalef: Conceptualization, Reviewing Draft, Reviewing the Search Result and Data, Critically Analyzing Reviewers' Comments, Supervision.

Declaration of competing interest

None.

Acknowledgements

This work was supported by the Slovenian Research Agency (research core funding No. P5-0410).

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.techfore.2022.122120.
Factors in Computing Systems - Proceedings, 2018-April, 1–13. https://doi.org/10.1145/3173574.3174006.
European Union, 2018. Art. 22 GDPR - Automated individual decision-making, including profiling. General Data Protection Regulation. https://gdpr-info.eu/art-22-gdpr/.
Evans, T., Retzlaff, C.O., Geißler, C., Kargl, M., Plass, M., Müller, H., Holzinger, A., 2022. The explainability paradox: challenges for xAI in digital pathology. Futur. Gener. Comput. Syst. 133, 281–296.
Fazio, R.H., Olson, M.A., 2003. Attitudes: foundation, function and consequences. In: Hogg, M.A., Cooper, J. (Eds.), The Sage Handbook of Social Psychology. Sage, London, UK.
Feng, C., Khan, M., Rahman, A.U., Ahmad, A., 2020. News recommendation systems - accomplishments, challenges, future directions. IEEE Access 8, 16702–16725. https://doi.org/10.1109/ACCESS.2020.2967792.
Fishbein, M., 1967. Attitude and the prediction of behavior. In: Readings in Attitude Theory and Measurement.
Fishbein, M., Ajzen, I., 1977. Belief, attitude, intention, and behavior: an introduction to theory and research. Philos. Rhetor. 10 (2).
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., Srikumar, M., 2020. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI. Berkman Klein Center Research Publication (2020-1).
Gao, M., Liu, X., Xu, A., Akkiraju, R., 2022. In: Arai, K. (Ed.), Intelligent Systems and Applications. IntelliSys 2021. Lecture Notes in Networks and Systems, 296. Springer, Cham.
Gerlings, J., Jensen, M.S., Shollo, A., 2021. Explainable AI, but explainable to whom? http://arxiv.org/abs/2106.05568.
Gerlings, J., Shollo, A., Constantiou, I., 2021. Reviewing the need for explainable artificial intelligence (XAI). In: Proceedings of the Annual Hawaii International Conference on System Sciences, pp. 1284–1293. https://doi.org/10.24251/hicss.2021.156.
Ghallab, M., 2019. Responsible AI: requirements and challenges. AI Perspect. 1 (1), 1–7. https://doi.org/10.1186/s42467-019-0003-z.
Goodman, B., Flaxman, S., 2017. European Union regulations on algorithmic decision making and a "right to explanation". AI Mag. 38 (3), 50–57. https://doi.org/10.1609/aimag.v38i3.2741.
Górski, Ł., Ramakrishna, S., 2021. Explainable artificial intelligence, lawyer's perspective. In: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, pp. 60–68.
Gruetzemacher, R., Dorner, F.E., Bernaola-Alvarez, N., Giattino, C., Manheim, D., 2021. Forecasting AI progress: a research agenda. Technol. Forecast. Soc. Chang. 170, 120909.
Gunning, D., Aha, D.W., 2019. DARPA's explainable artificial intelligence program. AI Mag. 40 (2), 44–58. https://doi.org/10.1609/aimag.v40i2.2850.
Hale, J.L., Householder, B.J., Greene, K.L., 2002. The theory of reasoned action. In: The Persuasion Handbook: Developments in Theory and Practice, 14, pp. 259–286.
Haque, A.K.M.B., Hasan Pranto, T., All Noman, A., Mahmood, A., 2020. Insight about detection, prediction and weather impact of coronavirus (Covid-19) using neural network. Int. J. Artif. Intell. Appl. 11 (4), 67–81. https://doi.org/10.5121/ijaia.2020.11406.
Haque, A.K.M.B., Bhushan, B., Dhiman, G., 2021. Conceptualizing smart city applications: requirements, architecture, security issues, and emerging trends. Expert. Syst. https://doi.org/10.1111/exsy.12753.
Hasan, R., Shams, R., Rahman, M., 2021. Consumer trust and perceived risk for voice-controlled artificial intelligence: the case of Siri. J. Bus. Res. 131, 591–597.
Hengstler, M., Enkel, E., Duelli, S., 2016. Applied artificial intelligence and trust—the case of autonomous vehicles and medical assistance devices. Technol. Forecast. Soc. Chang. 105, 105–120.
Hevner, A., Chatterjee, S., 2010. Design science research in information systems. In: Design Research in Information Systems. Springer, Boston, MA, pp. 9–22.
Hind, M., Houde, S., Martino, J., Mojsilovic, A., Piorkowski, D., Richards, J., Varshney, K.R., 2020. Experiences with improving the transparency of AI models and services. In: Conference on Human Factors in Computing Systems - Proceedings, pp. 1–8. https://doi.org/10.1145/3334480.3383051.
Hong, S.R., Hullman, J., Bertini, E., 2020. Human factors in model interpretability: industry practices, challenges, and needs. In: Proceedings of the ACM on Human-Computer Interaction, 4. CSCW1, pp. 1–26. https://doi.org/10.1145/3392878.
Hudon, A., Demazure, T., Karran, A., Léger, P.M., Sénécal, S., 2021. Explainable artificial intelligence (XAI): how the visualization of AI predictions affects user cognitive load and confidence. In: NeuroIS Retreat. Springer, Cham, pp. 237–246.
IDC, 2018. Worldwide Artificial Intelligence Spending Guide. International Data Corporation. https://www.idc.com/getdoc.jsp?containerId=IDC_P33198.
Islam, A.N., Cenfetelli, R., Benbasat, I., 2020. Organizational buyers' assimilation of B2B platforms: effects of IT-enabled service functionality. J. Strateg. Inf. Syst. 29 (1), 101597.
Janssen, M., Hartog, M., Matheus, R., Yi Ding, A., Kuk, G., 2020. Will algorithms blind people? The effect of explainable AI and decision-makers' experience on AI-supported decision-making in government. Soc. Sci. Comput. Rev., 1–16. https://doi.org/10.1177/0894439320980118.
Khosravi, H., Shum, S.B., Chen, G., Conati, C., Tsai, Y.S., Kay, J., Gašević, D., 2022. Explainable artificial intelligence in education. Comput. Educ. Artif. Intell. 3, 100074.
Kitchenham, B.A., Charters, S., 2007. Guidelines for Performing Systematic Literature Reviews in Software Engineering. EBSE Technical Report, Ver. 2.3. EBSE, pp. 1–54.
Kouki, P., Schaffer, J., Pujara, J., O'Donovan, J., Getoor, L., 2019. Personalized explanations for hybrid recommender systems. In: International Conference on Intelligent User Interfaces, Proceedings IUI, Part F1476, pp. 379–390. https://doi.org/10.1145/3301275.3302306.
Laato, S., Tiainen, M., Islam, A.N., Mäntymäki, M., 2022. How to explain AI systems to end users: a systematic literature review and research agenda. Internet Res. 32 (7), 1–31.
Lauritsen, S.M., Kristensen, M., Olsen, M.V., Larsen, M.S., Lauritsen, K.M., Jørgensen, M.J., Lange, J., Thiesson, B., 2020. Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nat. Commun. 11 (1). https://doi.org/10.1038/s41467-020-17431-x.
Lee, M.K., Rich, K., 2021. Who is included in human perceptions of AI?: trust and perceived fairness around healthcare AI and cultural mistrust. In: Conference on Human Factors in Computing Systems - Proceedings. https://doi.org/10.1145/3411764.3445570.
Li, L., Lassiter, T., Oh, J., Lee, M.K., 2021. Algorithmic hiring in practice: recruiter and HR professional's perspectives on AI use in hiring. In: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 166–176.
Liao, Q.V., Gruen, D., Miller, S., 2020. Questioning the AI: informing design practices for explainable AI user experiences. In: Conference on Human Factors in Computing Systems - Proceedings, pp. 1–15. https://doi.org/10.1145/3313831.3376590.
Lim, B.Y., Dey, A.K., 2009. Assessing demand for intelligibility in context-aware applications. In: ACM International Conference Proceeding Series, pp. 195–204. https://doi.org/10.1145/1620545.1620576.
Lim, B.Y., Dey, A.K., Avrahami, D., 2009. Why and why not explanations improve the intelligibility of context-aware intelligent systems. In: Conference on Human Factors in Computing Systems - Proceedings, pp. 2119–2128. https://doi.org/10.1145/1518701.1519023.
Linardatos, P., Papastefanopoulos, V., Kotsiantis, S., 2021. Explainable AI: a review of machine learning interpretability methods. Entropy 23 (1), 1–45. https://doi.org/10.3390/e23010018.
Lipton, Z.C., 2018. The mythos of model interpretability: in machine learning, the concept of interpretability is both important and slippery. Queue 16 (3). https://doi.org/10.1145/3236386.3241340.
Liu, H., Lai, V., Tan, C., 2021. Understanding the effect of out-of-distribution examples and interactive explanations on human-AI decision making. In: Proceedings of the ACM on Human-Computer Interaction, 5. CSCW2, pp. 1–45.
Liu, R., Gupta, S., Patel, P., 2021. The application of the principles of responsible AI on social media marketing for digital health. Inf. Syst. Front., 1–25.
Maas, M.M., 2018. Regulating for 'normal AI accidents': operational lessons for the responsible governance of artificial intelligence deployment. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 223–228.
Mahmud, H., Islam, A.N., Ahmed, S.I., Smolander, K., 2022a. What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technol. Forecast. Soc. Chang. 175, 121390.
Mahmud, H., Islam, A.K.M.N., Mitra, R.K., Hasan, A.R., 2022b. The impact of functional and psychological barriers on algorithm aversion – an IRT perspective. In: Papagiannidis, S., Alamanos, E., Gupta, S., Dwivedi, Y.K., Mäntymäki, M., Pappas, I.O. (Eds.), The Role of Digital Technologies in Shaping the Post-Pandemic World. I3E 2022, Lecture Notes in Computer Science, 13454. Springer, Cham.
Malgieri, G., 2019. Automated decision-making in the EU member states: the right to explanation and other "suitable safeguards" in the national legislations. Comput. Law Secur. Rev. 35 (5), 105327. https://doi.org/10.1016/j.clsr.2019.05.002.
Meske, C., Bunde, E., Schneider, J., Gersch, M., 2022. Explainable artificial intelligence: objectives, stakeholders, and future research opportunities. Inf. Syst. Manag. 39 (1), 53–63.
Moore, G.C., Benbasat, I., 1991. Development of an instrument to measure the perceptions of adopting an information technology innovation. Inf. Syst. Res. 2 (3), 192–222.
Mucha, H., Robert, S., Breitschwerdt, R., Fellmann, M., 2021. Interfaces for explanations in human-AI interaction: proposing a design evaluation approach. In: Conference on Human Factors in Computing Systems - Proceedings. https://doi.org/10.1145/3411763.3451759.
Ngo, T., Kunkel, J., Ziegler, J., 2020. Exploring mental models for transparent and controllable recommender systems: a qualitative study. In: UMAP 2020 - Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization, pp. 183–191. https://doi.org/10.1145/3340631.3394841.
Oh, C., Song, J., Choi, J., Kim, S., Lee, S., Suh, B., 2018. I lead, you help but only with enough details: understanding the user experience of co-creation with artificial intelligence. In: Conference on Human Factors in Computing Systems - Proceedings, 2018-April, pp. 1–13. https://doi.org/10.1145/3173574.3174223.
Park, H., Ahn, D., Hosanagar, K., Lee, J., 2021. Human-AI interaction in human resource management: understanding why employees resist algorithmic evaluation at workplaces and how to mitigate burdens. In: Conference on Human Factors in Computing Systems - Proceedings. https://doi.org/10.1145/3411764.3445304.
Peters, D., Vold, K., Robinson, D., Calvo, R.A., 2020. Responsible AI—two frameworks for ethical design practice. IEEE Trans. Technol. Soc. 1 (1), 34–47.
Putnam, V., Conati, C., 2019. Exploring the need for explainable artificial intelligence (XAI) in intelligent tutoring systems (ITS). In: CEUR Workshop Proceedings, p. 2327.
Rakova, B., Yang, J., Cramer, H., Chowdhury, R., 2021. Where responsible AI meets reality: practitioner perspectives on enablers for shifting organizational practices. In: Proceedings of the ACM on Human-Computer Interaction, 5. CSCW1, pp. 1–23.
Rodriguez-Sampaio, M., Rincón, M., Valladares-Rodríguez, S., Bachiller-Mayoral, M., 2022. Explainable artificial intelligence to detect breast cancer: a qualitative case-based visual interpretability approach. In: International Work-Conference on the Interplay between Natural and Artificial Computation. Springer, Cham, pp. 557–566.
Schmidt, P., Biessmann, F., Teubner, T., 2020. Transparency and trust in artificial intelligence systems. J. Decis. Syst. 29 (4), 260–278. https://doi.org/10.1080/12460125.2020.1819094.
Schneider, J., Handali, J., 2019. Personalized explanation in machine learning: a conceptualization. In: European Conference on Information Systems.
Schneider, T., Ghellal, S., Love, S., Gerlicher, A.R.S., 2021. Increasing the user experience in autonomous driving through different feedback modalities. In: International Conference on Intelligent User Interfaces, Proceedings IUI, pp. 7–10. https://doi.org/10.1145/3397481.3450687.
Schrills, T., Franke, T., 2020. Color for characters - effects of visual explanations of AI on trust and observability. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12217 LNCS, pp. 121–135. https://doi.org/10.1007/978-3-030-50334-5_8.
Scott, A.C., Clancey, W.J., Davis, R., Shortliffe, E.H., 1977. Explanation capabilities of production-based consultation systems. American Journal of Computational Linguistics, 1–50. https://aclanthology.org/J77-1006.
Stahl, B.C., Andreou, A., Brey, P., Hatzakis, T., Kirichenko, A., Macnish, K., Wright, D., 2021. Artificial intelligence for human flourishing – beyond principles for machine learning. J. Bus. Res. 124, 374–388.
Szymanski, M., Millecamp, M., Verbert, K., 2021. Visual, textual or hybrid: the effect of user expertise on different explanations. In: International Conference on Intelligent User Interfaces, Proceedings IUI, pp. 109–119. https://doi.org/10.1145/3397481.3450662.
Tiainen, M., 2021. To Whom to Explain and What?: Systematic Literature Review on Empirical Studies on Explainable Artificial Intelligence (XAI). Master's Thesis. Accessed June 5, 2022. https://www.utupub.fi/handle/10024/151554.
van der Waa, J., Schoonderwoerd, T., van Diggelen, J., Neerincx, M., 2020. Interpretable confidence measures for decision support systems. Int. J. Hum. Comput. Stud. 144 (May), 102493. https://doi.org/10.1016/j.ijhcs.2020.102493.
Wachter, S., Mittelstadt, B., Floridi, L., 2017. Transparent, explainable, and accountable AI for robotics. Science Robotics 2 (6). https://doi.org/10.1126/scirobotics.aan6080.
Wang, D., Yang, Q., Abdul, A., Lim, B.Y., 2019. Designing theory-driven user-centric explainable AI. In: Conference on Human Factors in Computing Systems - Proceedings, pp. 1–15.
Wang, J., Moulden, A., 2021. AI trust score: a user-centered approach to building, designing, and measuring the success of intelligent workplace features. In: Conference on Human Factors in Computing Systems - Proceedings. https://doi.org/10.1145/3411763.3443452.
Wang, R.Y., Strong, D.M., 1996. Beyond accuracy: what data quality means to data consumers. J. Manag. Inf. Syst. 12 (4), 5–33.
Wang, Z., Yu, X., Feng, N., Wang, Z., 2014. An improved collaborative movie recommendation system using computational intelligence. J. Vis. Lang. Comput. 25 (6), 667–675. https://doi.org/10.1016/j.jvlc.2014.09.011.
Wearn, O.R., Freeman, R., Jacoby, D.M., 2019. Responsible AI for conservation. Nat. Mach. Intell. 1 (2), 72–73.
Weitz, K., Schiller, D., Schlagowski, R., Huber, T., André, E., 2019. "Do you trust me?": increasing user-trust by integrating virtual agents in explainable AI interaction design. In: IVA 2019 - Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, pp. 7–9. https://doi.org/10.1145/3308532.3329441.
Weitz, K., Schiller, D., Schlagowski, R., Huber, T., André, E., 2021. "Let me explain!": exploring the potential of virtual agents in explainable AI interaction design. J. Multimodal User Interfaces 15 (2), 87–98. https://doi.org/10.1007/s12193-020-00332-0.
Wells, L., Bednarz, T., 2021. Explainable AI and reinforcement learning—a systematic review of current approaches and trends. Front. Artif. Intell. 4, 550030. https://doi.org/10.3389/frai.2021.550030.
Wixom, B.H., Todd, P.A., 2005. A theoretical integration of user satisfaction and technology acceptance. Inf. Syst. Res. 16 (1), 85–102. https://doi.org/10.1287/isre.1050.0042.
Xie, Y., Chen, X.A., Gao, G., 2019. Outlining the design space of explainable intelligent systems for medical diagnosis. In: CEUR Workshop Proceedings, p. 2327.
Yin, M., Vaughan, J.W., Wallach, H., 2019. Understanding the effect of accuracy on trust in machine learning models. In: Conference on Human Factors in Computing Systems - Proceedings, pp. 1–12. https://doi.org/10.1145/3290605.3300509.
Zimmermann, R., Mora, D., Cirqueira, D., Helfert, M., Bezbradica, M., Werth, D., Weitzl, W.J., Riedl, R., Auinger, A., 2022. Enhancing brick-and-mortar store shopping experience with an augmented reality shopping assistant application using personalized recommendations and explainable artificial intelligence. J. Res. Interact. Mark., ahead-of-print.

AKM Bahalul Haque is a Junior Researcher at the Department of Software Engineering at LUT University. Earlier, he was a lecturer at the Department of Electrical and Computer Engineering, North South University. His works have been accepted and published in international conferences and peer-reviewed journals, including IEEE Access, Expert Systems, Cybernetics and Systems, various international conference proceedings, Taylor & Francis books, and Springer books. His research interests include explainable AI, blockchain, data privacy and protection, and human-computer interaction.

A.K.M. Najmul Islam received the Ph.D. degree in information systems from the University of Turku, Finland, and the M.Sc. (Eng.) degree from the Tampere University of Technology, Finland. He is currently an Adjunct Professor at Tampere University, Finland. He is also an Associate Professor at LUT University, Finland. His research has been published in top outlets, such as European Journal of Information Systems, Information Systems Journal, Journal of Strategic Information Systems, Technological Forecasting and Social Change, Computers in Human Behavior, Internet Research, Computers & Education, Journal of Medical Internet Research, Information Technology & People, Telematics & Informatics, Journal of Retailing and Consumer Research, Communications of the AIS, Journal of Information Systems Education, AIS Transactions on Human-Computer Interaction, and Behaviour & Information Technology.

Patrick Mikalef is a Professor in Data Science and Information Systems at the Department of Computer Science. He has been a Marie Skłodowska-Curie post-doctoral research fellow working on "Competitive Advantage for the Data-driven Enterprise" (CADENT). He received his B.Sc. in Informatics from the Ionian University, his M.Sc. in Business Informatics from Utrecht University, and his Ph.D. in IT Strategy from the Ionian University. His research interests focus on the strategic use of data science and information systems in turbulent environments. He has published work in international conferences and peer-reviewed journals, including the European Journal of Information Systems, British Journal of Management, Information and Management, and the European Journal of Operational Research.