
Explainable AI (XAI): Explained

G. Pradeep Reddy, Y. V. Pavan Kumar


School of Electronics Engineering, VIT-AP University, Amaravati 522237, Andhra Pradesh, INDIA
[email protected], [email protected]
2023 IEEE Open Conference of Electrical, Electronic and Information Sciences (eStream). DOI: 10.1109/ESTREAM59056.2023.10134984

Abstract—Artificial intelligence (AI) has become an integral part of our lives, from the recommendations we receive on social media to the diagnoses made by medical professionals. However, as AI continues to grow more complex, the "black box" nature of many AI models has become a cause for concern. The main objective of Explainable AI (XAI) research is to produce AI models that are easily interpretable and understandable by humans. In this view, this paper presents an overview of XAI and its techniques for creating interpretable models, specifically focusing on Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). Furthermore, this paper delves into the various applications of XAI in different domains, including healthcare, finance, and law. Additionally, the ethical and legal implications of using XAI are mentioned. Finally, the paper discusses various challenges and future research directions of XAI.

Keywords—Black box models, Explainable AI (XAI), Interpretability, Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Transparency, Trust in AI.

I. INTRODUCTION

The growing use of artificial intelligence (AI) has led to apprehensions regarding the opacity of decision-making processes within AI models in various applications [1]-[7]. This "black box" nature of these models has raised concerns about their reliability and trustworthiness. To address these concerns, Explainable AI (XAI) has emerged as an important area of research that aims to create transparent and interpretable AI systems, making it easier for humans to understand how a system arrived at its decision [8]. The concept of XAI was first introduced in 2016 by the Defense Advanced Research Projects Agency (DARPA) [9].

The history of XAI can be traced back to the early days of AI research, when researchers were interested in building systems that were both intelligent and transparent. However, with the rise of machine learning and deep learning, which are based on complex and opaque models, the issue of transparency and interpretability has become more critical. The lack of transparency in AI systems has raised concerns about accountability, trustworthiness, and ethical implications, which has resulted in a growing interest in XAI, attracting attention from both academia and industry [10]. Industry 4.0 involves the integration of advanced technologies such as AI, robotics, and IoT into the manufacturing process, resulting in higher efficiency, productivity, and cost savings. However, using such technologies also comes with challenges, especially when they are not transparent or explainable, which can hinder the proper functioning and security of IoT devices and networks [11].

XAI is a critical component of Industry 4.0, providing reliable and safe AI-based systems for various industries. Google Trends data presented in Fig. 1 shows a significant increase in research interest in the term "Explainable AI". The need for XAI has become increasingly important in high-stakes domains such as healthcare, finance, law, and autonomous vehicles. In these applications, the consequences of an incorrect decision or prediction can be severe, leading to potential harm or loss of life. Though a few review papers on XAI are available in the literature, most of them discuss high-level aspects of XAI that can be difficult for novice readers to follow. With this motivation, this paper aims to serve as a valuable resource for beginners interested in understanding and getting started with XAI. It provides a clear introduction to the key concepts and techniques of XAI, making it easier for readers new to the field to understand the topic. This is the key contribution of this paper.

Fig. 1. Google Trends statistics for Explainable AI over the last four years (interest over time, 2019-2023).

The paper is organized as follows. Section 2 discusses the various methods used in XAI. Section 3 covers the applications of XAI in various domains. Section 4 presents various challenges and future directions of XAI research. Finally, Section 5 summarizes the key points covered in the paper.

II. METHODS

Recently, there has been a rapid surge in the popularity of black-box models. However, the focus of researchers and practitioners has been primarily on enhancing accuracy and minimizing errors rather than prioritizing the explainability of AI models. Although explainability may currently be perceived as an optional feature, it is expected to become an essential requirement as AI decision-making systems continue to expand. Inevitably, explainability will be required in the future to guarantee transparency in every phase of black-box model implementations.

Fig. 2. Conventional AI and XAI processes [9] (conventional AI: dataset, learning process, learned function, and output, leaving the questions how, why, when, and where unanswered; XAI: dataset, learning process, explainable model, and explainable inference that the user can understand).

The difference between the conventional AI process and the XAI process is clearly shown in Fig. 2 [9]. The main challenge of creating AI models is the complexity of the models themselves. Fig. 3 illustrates the trade-off between explainability and performance. Many modern AI models, such as deep neural networks, are complex and difficult to understand. This makes it difficult to identify potential biases or errors in the model and to explain the model's decisions to humans.

Fig. 3. Explainability vs. performance trade-off across model families (rule-based learning, KNN, decision trees, SVM, XGBoost, CNN), with the XAI goal of achieving both high performance and high explainability.

The conventional process involves the input of data and an algorithm and the output of results, while the XAI process includes additional steps of explanation generation and user interaction to ensure transparency and accountability in the decision-making process. This section describes key techniques for implementing XAI: feature visualization, saliency mapping, and model interpretation. Researchers are actively working to improve their efficiency and usability. By employing these XAI methods, stakeholders can gain more insight into how AI models arrive at their outputs and can make more informed and reliable decisions based on AI-generated recommendations [12].

A. Feature Visualization

Feature visualization is a technique that involves generating visualizations of the features that an AI model has learned to recognize. This helps provide clear insights into exactly what the model is looking for in the input data. Besides, it can be used to identify potential biases or errors in the model [13].

For example, feature visualization can be used in tasks such as image recognition to generate images that activate a specific neuron in the model to its maximum extent. By visualizing the input that elicits the strongest response from the neuron, this technique helps identify which parts of the input image are critical for the model's decision and reveals the features to which the neuron is most sensitive.

Feature visualization is typically employed with deep neural networks to understand how they process information. Google introduced a popular method called DeepDream in 2015, which modifies the input image to maximize the activation of certain neurons using backpropagation [14]. These "dreamed up" visual patterns correspond to specific features, facilitating a better understanding of the network's inner workings.
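As a minimal illustration of this idea (an editorial sketch, not the authors' implementation), the following PyTorch snippet performs activation maximization: it optimizes an input image so that one chosen output unit of a pretrained classifier responds strongly. The model choice, class index, learning rate, and step count are illustrative assumptions, and a recent torchvision is assumed.

import torch
import torchvision.models as models

# Activation maximization: synthesize an input that strongly activates one chosen unit.
model = models.vgg16(weights="IMAGENET1K_V1").eval()   # any pretrained CNN would do
target_class = 130                                      # illustrative class index

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from random noise
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(img)[0, target_class]   # activation of the chosen output unit
    (-score).backward()                   # gradient ascent on the activation
    optimizer.step()

# img now approximates the pattern the unit is most sensitive to; in practice,
# regularizers such as jitter or blurring are added to obtain cleaner visualizations.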

B. Saliency Mapping

Saliency mapping is another technique that can be used to create more interpretable AI models. It involves producing a heat map of the input data that emphasizes the input regions that are crucial for the model's decision-making. The significance of saliency mapping lies in its ability to identify potential biases and errors in the model, which helps to improve the model's overall accuracy and fairness. Furthermore, saliency maps can provide a clear and concise explanation for a model's decision, making it more understandable to humans.

There are different methods for generating saliency maps. One popular approach is to compute the gradients of the network's output with respect to the input image and then use these gradients to weigh the importance of different pixels [13].
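A minimal sketch of this gradient-based approach (an illustration under assumptions, not the authors' code), assuming a pretrained torchvision classifier and an already preprocessed input tensor:

import torch
import torchvision.models as models

# Vanilla gradient saliency: gradient of the predicted-class score w.r.t. the input pixels.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a preprocessed image

logits = model(x)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()                      # d(score) / d(input)

# Collapse the channel dimension; large values mark pixels the prediction is most sensitive to.
saliency = x.grad.abs().max(dim=1).values            # shape: (1, 224, 224)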
Grad-CAM creates saliency maps through the computation of gradients between the network's output and the feature maps in a designated layer of the network [15].
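The following PyTorch sketch outlines the Grad-CAM computation under the same assumptions (pretrained ResNet, last convolutional block as the designated layer). It is a simplified illustration rather than the reference implementation of [15].

import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
target_layer = model.layer4                     # designated convolutional layer (illustrative)

# Capture the layer's feature maps and the gradients flowing back into them.
store = {}
target_layer.register_forward_hook(lambda m, i, o: store.update(act=o))
target_layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

x = torch.rand(1, 3, 224, 224)                  # stand-in for a preprocessed image
logits = model(x)
logits[0, logits.argmax()].backward()           # gradient of the predicted-class score

weights = store["grad"].mean(dim=(2, 3), keepdim=True)            # channel-wise importance
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))   # weighted feature-map sum
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear")       # heat map at input resolution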
C. Model Interpretation

In XAI, model interpretation is a crucial component that enhances the transparency and interpretability of AI models. Model-agnostic techniques are a class of model interpretation techniques that can be applied to any AI model, regardless of the specific algorithm or architecture used. These techniques help to analyze the relationship between input features and the model's output without relying on the internal workings of a specific model.

LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) are two popular model-agnostic techniques that generate local explanations for individual predictions or global feature importance analyses. LIME creates surrogate models that approximate the original model and generates locally faithful explanations of individual predictions, while SHAP decomposes the prediction output into contributions from each input feature to generate feature importance values.

These techniques are particularly useful when dealing with complex models (for example, deep neural networks) or when transparency is required in decision-making processes. They can be used together with feature visualization and saliency mapping to provide a more comprehensive understanding of how a model works and to identify potential biases or errors in the model's decision-making process. A detailed discussion of the LIME and SHAP techniques is given below.

1) LIME: LIME is an explainability technique that aims to provide local, interpretable explanations for individual predictions. It was introduced in 2016 [16]. LIME generates a set of interpretable features from the original input data and trains a simpler, interpretable model on the generated features. The simpler model is then used to explain the predictions of the more complex black-box model.

LIME generates interpretable features by introducing perturbations to the original input data and observing their impact on the output of the black-box model. For example, if the input is text, LIME might introduce perturbations by randomly replacing words in the text and assessing the resulting change in the model's output. Similarly, if the input is numerical data, LIME might introduce perturbations by adding or subtracting small amounts of noise to each data point and then evaluating the impact on the model's prediction.

By identifying the features most relevant to the model's output in a given local region, LIME can help create a simpler, interpretable model that explains the behavior of the original model in that region. Algorithm 1 outlines the procedure for implementing LIME.

Algorithm 1: LIME Implementation
Input: Instance to be explained.
Step 1. Initialize the LIME explainer with the specified parameters.
Step 2. Generate a set of samples around the instance to be explained using the specified distance metric (perturbations).
Step 3. Evaluate the model's prediction on the generated samples and record the results.
Step 4. Train a linear model on the generated samples and their corresponding machine-learning model predictions using the specified kernel width.
Step 5. Calculate the feature importance weights based on the coefficients of the trained linear model.
Step 6. Sort the feature importance weights in descending order and return them as the list of explanations.
Output: Feature explanations, where each explanation provides the name of a feature and its corresponding importance weight.
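For concreteness, the steps of Algorithm 1 map roughly onto the open-source lime package as follows; the classifier, data, and feature names here are illustrative assumptions added for this sketch, not part of the original paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Illustrative black-box model and data; any classifier exposing predict_proba works.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Step 1: initialize the explainer with the training data statistics.
explainer = LimeTabularExplainer(X_train,
                                 feature_names=["f0", "f1", "f2", "f3"],
                                 class_names=["negative", "positive"],
                                 mode="classification")

# Steps 2-6: perturb around one instance, fit a weighted linear surrogate, rank features.
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
print(explanation.as_list())   # [(feature condition, importance weight), ...]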
2) SHAP: SHAP is a technique for explaining the output of any AI model by computing the influence of each input feature on the final prediction [17]. This technique is based on cooperative game theory and the concept of Shapley values. The Shapley value was proposed by Lloyd Shapley and named in his honor. It is defined as the average of the marginal contributions of a player over all possible coalitions. SHAP uses a weighted linear regression model to approximate the model's behavior in the local neighborhood of a specific input instance. The weights of the linear model are determined by the Shapley values, which are computed using a recursive formula that considers all possible combinations of input features.
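For completeness (this is the standard game-theoretic form, added here as a reference rather than taken verbatim from the paper), the Shapley value of feature i over the full feature set F, with v(S) denoting the payoff obtained by a coalition S of features, can be written as:

\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|! \, (|F| - |S| - 1)!}{|F|!} \left[ v(S \cup \{i\}) - v(S) \right]

In SHAP, v(S) is instantiated as the expected model output when only the features in S are known, so that \phi_i measures feature i's average marginal contribution to the prediction.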
The SHAP technique provides a measure of the importance of each feature for a specific input instance, as well as a global feature importance ranking based on the average absolute value of the Shapley values across all instances. It thus offers a unified framework for explaining the output of an AI model and yields accurate and intuitive explanations for a wide range of applications. Algorithm 2 outlines the procedure for implementing SHAP.

Algorithm 2: SHAP Implementation
Input: Instance to be explained.
Step 1. Initialize the SHAP explainer with the trained AI model and the specified explainer type.
Step 2. Generate a set of background samples from the background dataset.
Step 3. Calculate the SHAP values for the instance to be explained using the generated background samples.
Step 4. Calculate the feature importance weights based on the absolute values of the SHAP values.
Step 5. Sort the feature importance weights in descending order and return them as the list of explanations.
Output: Feature explanations, where each explanation gives the name of a feature and its corresponding SHAP value.
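A minimal sketch of Algorithm 2 using the open-source shap package; the tree model and synthetic data are illustrative assumptions for this example only.

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Illustrative model and data; any model supported by a SHAP explainer can be substituted.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=500)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Steps 1-3: build the explainer and compute SHAP values for one instance;
# each value is that feature's contribution to this particular prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])             # shape: (1, n_features)

# Steps 4-5: rank features by the magnitude of their SHAP values.
ranking = sorted(zip(["f0", "f1", "f2", "f3"], shap_values[0]),
                 key=lambda pair: abs(pair[1]), reverse=True)
print(ranking)                                          # [(feature name, SHAP value), ...]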

III. APPLICATIONS

XAI finds applications across multiple domains, including healthcare, finance, and law, as shown in Fig. 4 [18], [19]. In healthcare, XAI can help interpret medical data, while in finance, it can detect fraud, evaluate creditworthiness, and assess risk more accurately. In law, XAI can analyze legal documents, identify relevant case law, and ensure transparent and objective decision-making. The main benefit of XAI lies in its ability to make the decision-making process of AI transparent and understandable, making it valuable in any application where trust and accountability are critical. Key applications in healthcare, finance, and law are discussed in detail as follows.

Fig. 4. Various applications of XAI (healthcare, finance, law, education, marketing, cybersecurity, transportation, and agriculture).

A. Healthcare

In healthcare, XAI finds a range of applications, from medical diagnosis to treatment recommendations. Some of the key use cases in healthcare are discussed in this section. For instance, [20] proposed an explainable deep-learning model for diagnosing skin diseases. The model generates saliency maps that highlight the regions of the image that are most important for the diagnosis. This can help dermatologists understand how the model arrived at its diagnosis and provide more accurate diagnoses.

In the context of the COVID-19 pandemic, AI has shown immense potential in developing solutions to address the challenges posed by the virus. However, the lack of transparency in AI models has been a major hindrance to their adoption in clinical practice, especially in the areas of diagnosis and disease staging. Through enhancing model performance, establishing user trust, and aiding in decision-making, XAI has the potential to tackle this problem [21].

Stroke is a leading cause of disability, but predicting upper limb (UL) functional recuperation after rehabilitation is a challenge due to the complexity of post-stroke impairment. XAI has the potential to address this challenge by enabling the development of predictive models that reveal the most important features contributing to predictions [22].

Several techniques for XAI have been developed in the context of medical image analysis, which can help to detect and diagnose diseases such as cancer [23]. For example, one approach is to generate visual representations, through feature visualization, of critical image areas that influenced the model's decision. This can help doctors and patients better understand the rationale behind the model's diagnosis and build trust in its accuracy. Additionally, saliency mapping can be used to highlight the regions of an image that had the most influence on the model's decision, making it easier to identify potential errors or biases in the model.

B. Finance

XAI has the potential to revolutionize the finance industry by providing a better understanding of how AI models make decisions. This understanding can help financial institutions make better-informed decisions and mitigate potential risks. Important applications of XAI in finance are given below.

1) Fraud detection: XAI can help in fraud detection by explaining the decisions made by AI models. By providing transparent and interpretable results, it can help in identifying the reasons behind fraudulent activities and preventing them from happening in the future.

2) Credit scoring: AI models are increasingly being used to evaluate creditworthiness. XAI can help in explaining how these models arrive at a credit score, making it easier for banks and financial institutions to understand why a particular score was assigned to a customer. This can help in making more accurate lending decisions and reducing the risk of default.

3) Investment management: XAI can help in portfolio management by providing insights into how AI models make investment decisions. By explaining the reasoning behind investment decisions, XAI can increase transparency and trust in the decision-making process. Ultimately, this can lead to better performance and more satisfied investors.

4) Compliance: XAI can explain how AI models make decisions, helping financial institutions ensure regulatory compliance and avoid legal risks. This could involve identifying and mitigating potential biases in AI systems that may lead to non-compliance. By providing transparency in decision-making, XAI can help institutions identify potential violations and make more informed decisions.

5) Customer service: XAI can help in improving customer service by providing explanations for AI-powered decisions. For example, if an AI system denies a customer's loan application, XAI can provide a rationale for the decision by identifying the particular factors that resulted in the rejection. When customers receive transparent and interpretable information about AI-powered decisions, it can improve their satisfaction levels and reduce the probability of negative customer experiences.

C. Law

XAI has various potential applications in the legal domain. The following are some of the ways in which XAI can be used in law.

1) Legal document analysis and contract review: XAI can assist in analyzing large volumes of legal documents and identifying relevant information, resulting in faster and more accurate analysis of legal contracts.

2) Legal decision-making: XAI can be used in legal decision-making processes such as predicting case outcomes or recommending plea bargains. Studies have shown that XAI models outperform traditional machine learning models in terms of accuracy, transparency, and interpretability.

3) Addressing challenges in the legal domain: XAI can help address challenges such as model interpretability and the limited availability of legal data. Researchers have proposed methods for designing and evaluating XAI models that prioritize fairness and transparency, and have suggested the use of synthetic legal data to train and test XAI models in the legal domain.

Besides the aforementioned key applications, XAI has a range of other applications, which are summarized in Table I.

TABLE I. SUMMARY OF DIVERSE APPLICATIONS OF XAI
Marketing: By leveraging XAI, personalized recommendations can be offered to customers, which can enhance their satisfaction levels and increase sales.
Education: The utilization of XAI can aid teachers and students in comprehending the learning process of students and offering customized feedback to enhance their learning results.
Cybersecurity: Through its ability to provide insights into the actions of attackers and defenders, XAI can play a crucial role in identifying and mitigating potential security threats.
Transportation: Transportation systems can be made safer and more efficient by utilizing XAI, which can analyze vast amounts of data from diverse sources and provide real-time recommendations.
Agriculture: XAI can help farmers make better decisions about crop management and irrigation by analyzing data from sensors.

As XAI can be used in various domains, it is important to consider the ethical and legal issues surrounding its use. One major concern is the potential for bias in the algorithms used for XAI. Bias can arise for various reasons, such as the data used to train the model, the features selected, or the assumptions made during model building. This can lead to unfair treatment of certain groups of people, particularly those who are already marginalized in society. To mitigate this issue, it is important to have diverse and representative datasets and to regularly audit the models for bias.

Another ethical concern is the impact of XAI on privacy. The use of personal data in XAI can potentially lead to violations of privacy and data protection laws. To address this issue, data protection laws should be adhered to and the use of personal data should be minimized whenever possible.

IV. CHALLENGES AND RESEARCH PERSPECTIVES

Despite the significant progress made in XAI in recent years, there are still various challenges and new research avenues that require attention in order to further enhance the field [24]-[27]. In this view, this section outlines and briefly discusses various critical challenges and potential research perspectives of XAI. These are depicted in Fig. 5 and described as follows.

Fig. 5. Various challenges and research directions in XAI: (1) explainability vs. performance, (2) human factors, (3) lack of a universal standard, (4) bias and fairness, and (5) evaluation.

A. Explainability versus Performance

Balancing explainability with performance in machine learning models can often present a trade-off. Models that prioritize explainability may sacrifice performance, while highly performant models may lack interpretability. To address this challenge, future XAI research should concentrate on developing more advanced techniques that strike a balance between these two objectives, thereby meeting the need for both high performance and explainability.

B. Human Factors

Another challenge in XAI is understanding how people interact with and interpret the explanations provided by XAI techniques. People may have different levels of technical expertise and may interpret explanations in different ways, which can affect their trust in the system. To overcome this challenge, research should focus on understanding the human factors involved in XAI and developing techniques that are tailored to different user groups.

C. Lack of a Universal Standard

A further challenge in XAI is the lack of a universal standard or framework for developing and evaluating XAI techniques. This makes it difficult to compare different approaches and limits the ability to apply XAI techniques across different domains and applications. Future research should focus on developing a standardized framework for XAI that can be widely adopted and applied in various contexts.

D. Bias and Fairness

Machine learning models can perpetuate and amplify existing biases in data, which can lead to unfair and discriminatory outcomes. XAI techniques should be designed to identify and mitigate biases in machine learning models to ensure fair and ethical decision-making.

E. Evaluation

Assessing the effectiveness of XAI techniques is essential to ensure that they provide accurate and practical explanations. Nevertheless, evaluating XAI techniques can pose a challenge, as there is no established consensus on what constitutes a good explanation. Therefore, future research should prioritize developing standardized evaluation metrics and benchmarks that can be used to evaluate the effectiveness of XAI techniques.

V. SUMMARY

Explainable AI (XAI) provides a promising avenue for increasing the transparency, accountability, and trustworthiness of AI systems. This paper explored the concept of XAI and its importance. Various methods, such as feature visualization, saliency mapping, and model interpretation, were detailed, with special attention given to the LIME and SHAP techniques. Further, XAI applications in diverse domains, such as healthcare, finance, and law, were discussed. The ethical and legal implications of XAI were also highlighted. Finally, the paper outlined challenges and prospects for future research.

REFERENCES

[1] K. Ramireddy, A. S. Hari, and Y. V. P. Kumar, "Artificial Intelligence Based Control Methods for Speed Control of Wind Turbine Energy System," in Intelligent Computing in Control and Communication, G. T. C. Sekhar, H. S. Behera, J. Nayak, B. Naik, and D. Pelusi, Eds., Lecture Notes in Electrical Engineering, vol. 702. Singapore: Springer, 2021, pp. 203-217, doi: https://doi.org/10.1007/978-981-15-8439-8_18.
[2] S. N. V. B. Rao et al., "Day-Ahead Load Demand Forecasting in Urban Community Cluster Microgrids Using Machine Learning Methods," Energies, vol. 15, no. 17, p. 6124, Aug. 2022, doi: https://doi.org/10.3390/en15176124.
[3] Y. V. P. Kumar and R. Bhimasingu, "Fuzzy logic based adaptive virtual inertia in droop control operation of the microgrid for improved transient response," in 2017 IEEE PES Asia-Pacific Power and Energy Engineering Conference (APPEEC), Bangalore: IEEE, Nov. 2017, pp. 1-6, doi: https://doi.org/10.1109/APPEEC.2017.8309006.
[4] B. Vasu Murthy, Y. V. Pavan Kumar, and U. V. Ratna Kumari, "Fuzzy logic intelligent controlling concepts in industrial furnace temperature process control," in 2012 IEEE International Conference on Advanced Communication Control and Computing Technologies (ICACCCT), Ramanathapuram, India: IEEE, Aug. 2012, pp. 353-358, doi: https://doi.org/10.1109/ICACCCT.2012.6320801.
[5] P. P. Kasaraneni, Y. Venkata Pavan Kumar, G. L. K. Moganti, and R. Kannan, "Machine Learning-Based Ensemble Classifiers for Anomaly Handling in Smart Home Energy Consumption Data," Sensors, vol. 22, no. 23, p. 9323, Nov. 2022, doi: https://doi.org/10.3390/s22239323.
[6] B. Prasanth et al., "Maximizing Regenerative Braking Energy Harnessing in Electric Vehicles Using Machine Learning Techniques," Electronics, vol. 12, no. 5, p. 1119, Feb. 2023, doi: https://doi.org/10.3390/electronics12051119.
[7] S. N. V. B. Rao et al., "Power Quality Improvement in Renewable-Energy-Based Microgrid Clusters Using Fuzzy Space Vector PWM Controlled Inverter," Sustainability, vol. 14, no. 8, p. 4663, Apr. 2022, doi: https://doi.org/10.3390/su14084663.
[8] A. Adadi and M. Berrada, "Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)," IEEE Access, vol. 6, pp. 52138-52160, 2018, doi: https://doi.org/10.1109/ACCESS.2018.2870052.
[9] M. Turek, "Explainable Artificial Intelligence," https://www.darpa.mil/program/explainable-artificial-intelligence, last accessed on 06 May 2023.
[10] G. Schwalbe and B. Finzel, "A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts," Data Min Knowl Disc, Jan. 2023, doi: https://doi.org/10.1007/s10618-022-00867-8.
[11] I. Ahmed, G. Jeon, and F. Piccialli, "From Artificial Intelligence to Explainable Artificial Intelligence in Industry 4.0: A Survey on What, How, and Where," IEEE Trans. Ind. Inf., vol. 18, no. 8, pp. 5031-5042, Aug. 2022, doi: https://doi.org/10.1109/TII.2022.3146552.
[12] C. Agarwal, O. Queen, H. Lakkaraju, and M. Zitnik, "Evaluating explainability for graph neural networks," Sci Data, vol. 10, no. 1, p. 144, Mar. 2023, doi: https://doi.org/10.1038/s41597-023-01974-x.
[13] K. Simonyan, A. Vedaldi, and A. Zisserman, "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps," 2013, doi: https://doi.org/10.48550/ARXIV.1312.6034.
[14] "Inceptionism: Going Deeper into Neural Networks," Jun. 2015, https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html, last accessed on 06 May 2023.
[15] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization," in 2017 IEEE International Conference on Computer Vision (ICCV), Venice: IEEE, Oct. 2017, pp. 618-626, doi: https://doi.org/10.1109/ICCV.2017.74.
[16] M. T. Ribeiro, S. Singh, and C. Guestrin, "'Why Should I Trust You?': Explaining the Predictions of Any Classifier," 2016, doi: https://doi.org/10.48550/ARXIV.1602.04938.
[17] S. Lundberg and S.-I. Lee, "A Unified Approach to Interpreting Model Predictions," 2017, doi: https://doi.org/10.48550/ARXIV.1705.07874.
[18] U. Pawar, D. O'Shea, S. Rea, and R. O'Reilly, "Explainable AI in Healthcare," in 2020 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), Dublin, Ireland: IEEE, Jun. 2020, pp. 1-2, doi: https://doi.org/10.1109/CyberSA49311.2020.9139655.
[19] B. M. Keneni et al., "Evolving Rule-Based Explainable Artificial Intelligence for Unmanned Aerial Vehicles," IEEE Access, vol. 7, pp. 17001-17016, 2019, doi: https://doi.org/10.1109/ACCESS.2019.2893141.
[20] C. Metta et al., "Explainable Deep Image Classifiers for Skin Lesion Diagnosis," 2021, doi: https://doi.org/10.48550/ARXIV.2111.11863.
[21] F. Giuste et al., "Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review," IEEE Rev. Biomed. Eng., vol. 16, pp. 5-21, 2023, doi: https://doi.org/10.1109/RBME.2022.3185953.
[22] M. Gandolfi et al., "eXplainable AI Allows Predicting Upper Limb Rehabilitation Outcomes in Sub-Acute Stroke Patients," IEEE J. Biomed. Health Inform., vol. 27, no. 1, pp. 263-273, Jan. 2023, doi: https://doi.org/10.1109/JBHI.2022.3220179.
[23] B. H. M. van der Velden, H. J. Kuijf, K. G. A. Gilhuijs, and M. A. Viergever, "Explainable artificial intelligence (XAI) in deep learning-based medical image analysis," Medical Image Analysis, vol. 79, p. 102470, Jul. 2022, doi: https://doi.org/10.1016/j.media.2022.102470.
[24] A. Rawal, J. McCoy, D. B. Rawat, B. M. Sadler, and R. St. Amant, "Recent Advances in Trustworthy Explainable Artificial Intelligence: Status, Challenges, and Perspectives," IEEE Trans. Artif. Intell., vol. 3, no. 6, pp. 852-866, Dec. 2022, doi: https://doi.org/10.1109/TAI.2021.3133846.
[25] W. Saeed and C. Omlin, "Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities," Knowledge-Based Systems, vol. 263, p. 110273, Mar. 2023, doi: https://doi.org/10.1016/j.knosys.2023.110273.
[26] L. Weber, S. Lapuschkin, A. Binder, and W. Samek, "Beyond explaining: Opportunities and challenges of XAI-based model improvement," Information Fusion, vol. 92, pp. 154-176, Apr. 2023, doi: https://doi.org/10.1016/j.inffus.2022.11.013.
[27] P. Gohel, P. Singh, and M. Mohanty, "Explainable AI: current status and future directions," 2021, doi: https://doi.org/10.48550/ARXIV.2107.07045.

