
Explainability of Brain Tumor Classification Based on Region

2024 International Conference on Emerging Technologies in Computer Science for Interdisciplinary Applications (ICETCS) | DOI: 10.1109/ICETCS61022.2024.10544289

Prashant Narayankar
School of Computer Science and Engineering
KLE Technological University
Hubballi, India
[email protected]

Dr. Vishwanath P. Baligar
School of Computer Science and Engineering
KLE Technological University
Hubballi, India
[email protected]

Abstract—Medical image analysis plays a crucial role in modern healthcare, aiding clinicians in diagnosing and treating various medical conditions. With the advent of Artificial Intelligence (AI) and Machine Learning (ML), there has been a surge in the development of AI-powered algorithms for medical image analysis. Explainable Artificial Intelligence (XAI) techniques aim to provide interpretable and transparent insights into the decision-making process of AI models, enhancing their usability and trustworthiness in healthcare applications. We review state-of-the-art XAI methods, including feature visualisation, attention mechanisms, and rule-based systems, and their application to medical image analysis. XAI techniques can pave the way for safer and more effective AI-driven medical solutions, ultimately benefiting healthcare providers and patients. As the healthcare industry continues to embrace AI, integrating XAI into medical image analysis is poised to change how diseases are detected, diagnosed, and treated. In our work, we use a deep learning model for classification and Explainable AI methods to explain the model's predictions for brain tumour disease using MRI images. We use a CNN for brain tumour classification: out of 7043 images, 5722 are used for training and 1321 for testing and validation, and the model achieves 80% accuracy. Explainable AI methods, namely LIME, SHAP, Integrated Gradients, and Grad-CAM, are used to interpret the model's predictions on regions of interest in an image.

Keywords—Explainable Artificial Intelligence (XAI), Artificial Intelligence (AI)

I. INTRODUCTION

The significant advances in deep learning techniques in recent years, particularly in the difficult field of medical image processing, have ushered in a transformative era. Deep neural networks have shown remarkable performance in applications such as image segmentation and disease detection. The more widely these deep learning models are used, the more transparency and interpretability are required in AI decision-making, especially in critical domains such as medical image analysis [1]. In response to this need, Explainable Artificial Intelligence (XAI) has emerged as a key strategy to improve human comprehension of intricate AI models such as deep neural networks.

For the healthcare sector, the convergence of Explainable Artificial Intelligence (XAI) [2] with deep learning-based medical image processing is critical. In this context, transparency and interpretability in AI-driven systems are essential, since each treatment and diagnostic decision has a large impact on the patient's quality of life. The stakes are high and there is little room for error; in this high-stakes environment, the need for transparency in AI decision-making is not just desirable but critical. Beyond healthcare, artificial intelligence (AI) and deep learning (DL) models are bridging gaps in the agriculture industry by enabling predictions of when to plant seeds, the effects of climate change, and when to harvest, and AI has proven to be a catalyst for improvements that are transforming conventional processes.

To meet this requirement, this in-depth analysis undertakes a multi-pronged investigation of the core ideas, wide-ranging applications, difficult obstacles, and promising future possibilities of XAI in the complex field of brain tumour classification from medical images. The main objective is to give medical practitioners the knowledge and resources they need to make better-informed, data-driven, and ultimately confident clinical judgements. Along the way, our goal is not only to meet but to surpass current standards of patient care, paving the way for better healthcare outcomes, and to fully utilize XAI's capacity to improve the accuracy and usefulness of deep learning-based brain tumour categorization. The paper aims to support a paradigm shift in healthcare towards more informed, transparent, and data-driven decision-making processes, ultimately serving the main objective of enhancing patient outcomes and care.

II. BACKGROUND STUDY

Convolutional neural networks (CNNs) were used for the classification task for the sake of simplicity.

A. Convolution Neural Networks

Within the field of machine learning, Convolutional Neural Networks (CNNs) are specialized for processing image data in tasks such as segmentation and image classification. CNNs, which consist of convolutional, pooling, and fully connected layers, are particularly good at identifying complex patterns in images while preserving spatial relationships. Their hierarchical feature extraction allows them to learn increasingly complex characteristics, which makes them especially useful for applications where spatial organization is essential. By reducing the number of trainable parameters, the weight-sharing approach promotes generalization and inhibits overfitting. The scalability of CNNs and their capacity to handle massive datasets effectively have rendered them invaluable in a wide range of applications, marking a noteworthy breakthrough in computer vision and artificial intelligence [3].
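A minimal sketch, assuming TensorFlow/Keras (the paper does not name a framework in this section), of how the layer types described above fit together for a four-class image classifier; the layer sizes here are illustrative and not the exact network used later in the paper.

```python
# Minimal CNN sketch (TensorFlow/Keras assumed): convolution + pooling for spatial
# feature extraction, followed by fully connected layers for classification.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_small_cnn(input_shape=(224, 224, 3), num_classes=4):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),    # weight sharing: one 3x3 kernel slides over the image
        layers.MaxPooling2D(pool_size=(2, 2)),           # downsampling keeps the strongest responses
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"), # class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_small_cnn().summary()
```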


B. Integrated Gradients

The Integrated Gradients approach greatly improves the interpretability of Convolutional Neural Networks (CNNs) in the context of image classification. In contrast to other approaches, Integrated Gradients evaluates the importance of neural network activations at every layer, offering a thorough understanding of the decision-making procedure. The method, which is especially important in the complex process of brain tumour classification, reveals the contribution of individual neurons to the final decision by carefully recording and evaluating activations. Its layer-by-layer analysis identifies the distinct neurons and properties that matter at each CNN layer, allowing a thorough investigation of the decision-making process [4].
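The sketch below is a minimal, from-scratch rendering of the standard Integrated Gradients computation for a TensorFlow/Keras classifier (the model, image, and class index are placeholder assumptions): gradients of the class score are averaged along the straight path from a baseline image to the input and scaled by (input - baseline).

```python
# Integrated Gradients sketch (TensorFlow assumed): approximate the path integral
# of gradients from a baseline image to the input image.
import numpy as np
import tensorflow as tf

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """image: float32 array of shape (H, W, C), preprocessed like the training data."""
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    if baseline is None:
        baseline = tf.zeros_like(image)              # black image as the reference point

    # Interpolate images along the straight path baseline -> input.
    alphas = tf.linspace(0.0, 1.0, steps + 1)[:, None, None, None]
    interpolated = baseline[None, ...] + alphas * (image - baseline)[None, ...]

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        probs = model(interpolated)[:, target_class]  # class score at every step

    grads = tape.gradient(probs, interpolated)

    # Trapezoidal approximation of the integral, then scale by (input - baseline).
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return ((image - baseline) * avg_grads).numpy()

# Usage (hypothetical trained model and preprocessed MRI slice):
# attributions = integrated_gradients(cnn_model, mri_image, target_class=0)
# importance = np.sum(np.abs(attributions), axis=-1)   # per-pixel importance map
```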
C. Local Interpretable Model-agnostic Explanations

The incorporation of LIME (Local Interpretable Model-agnostic Explanations) into CNN-based image classification is a significant step towards interpretable artificial intelligence. By perturbing individual images, LIME addresses the opacity of CNNs and produces a variety of related but distinct instances. A locally interpretable model is then trained on these instances to approximate the complex decision boundary of the original CNN. The resulting localized interpretation provides insight into the particular factors influencing a prediction and fosters confidence and understanding [19]. In addition to improving interpretability, LIME supports the need for openness in complex decision-making by helping with bias identification, model validation, and trust-building [5]. The notation used is:

x: original representation of the instance being explained
x′: binary vector giving a human-interpretable representation of x
n: number of interpretable features
h: mapping function that recovers the original representation from x′
z: perturbed version of the original image
D. SHapley Additive exPlanations

SHapley Additive exPlanations (SHAP) provides a thorough grasp of feature relevance and its influence on model results by linking individual features of an image to the predictions made by the CNN model. Important image regions can be identified with Shapley values, which estimate the average contribution of each feature. In image classification, SHAP simplifies the intricate links between model predictions and image attributes, making it easier to identify the image traits that matter, and this thorough explanation makes CNN models more reliable and useful. The architecture of SHAP highlights its methodical approach and flexibility, allowing smooth incorporation into existing CNN models without sacrificing effectiveness. In general, SHAP helps make CNN models interpretable, resilient, and dependable for accurate image categorization [6]. The notation used is:

h_x: mapping function that recovers the original input from the simplified representation
z′: simplified representation of the input, with entries in {0, 1}
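A hedged sketch of applying SHAP to the CNN, assuming the `shap` package's GradientExplainer (one reasonable choice for deep image models) and a small background sample drawn from the training set; the model and array names are placeholders.

```python
# SHAP sketch (shap package assumed): estimate per-pixel Shapley value approximations
# for a few MRI images and plot them next to the inputs.
import shap

def explain_with_shap(model, background_images, test_images):
    """background_images: small sample (e.g. 50) of training images, shape (N, H, W, 3).
    test_images: images to explain, shape (M, H, W, 3)."""
    explainer = shap.GradientExplainer(model, background_images)
    # Classically one array of SHAP values per output class, each shaped like test_images.
    shap_values = explainer.shap_values(test_images)
    shap.image_plot(shap_values, test_images)   # red pushes the class score up, blue pushes it down
    return shap_values

# Usage (hypothetical arrays):
# shap_values = explain_with_shap(cnn_model, x_train[:50], x_test[:3])
```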
E. Grad-CAM

For image classification, the foundational ideas behind Grad-CAM (Gradient-weighted Class Activation Mapping) for CNN-based models focus on guaranteeing contextual relevance. Grad-CAM places strong emphasis on identifying the key areas of an image that are crucial to the model's conclusions. One important idea is a multiscale, hierarchical visualization technique that records information at multiple resolutions to give thorough insight into the model's attention at distinct anatomical levels; this is essential for examining image features at various scales. By working with domain experts to link the visualized regions with recognized clinical signs, Grad-CAM explanations may be integrated with clinical expertise to improve the tool's informativeness and dependability [7]. An intuitive user interface is an essential prerequisite for seamlessly incorporating Grad-CAM into image classification workflows, guaranteeing that Grad-CAM visualizations can be read easily alongside the source images. The overall design aims to make Grad-CAM clearly interpretable, usable in real-world settings, and clinically useful.
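A minimal Grad-CAM sketch for a Keras-style CNN, assuming access to the name of the last convolutional layer (a placeholder below): the feature maps of that layer are weighted by the pooled gradients of the class score, passed through ReLU, and normalized into a heatmap.

```python
# Grad-CAM sketch (TensorFlow/Keras assumed): weight the last conv layer's feature maps
# by the pooled gradients of the class score, then ReLU and normalize to get a heatmap.
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, target_class=None):
    """image: float32 array of shape (H, W, C); returns a heatmap in [0, 1]."""
    grad_model = tf.keras.models.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(last_conv_layer_name).output, model.output])

    with tf.GradientTape() as tape:
        conv_out, predictions = grad_model(image[None, ...])
        if target_class is None:
            target_class = int(tf.argmax(predictions[0]))
        class_score = predictions[:, target_class]

    grads = tape.gradient(class_score, conv_out)            # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))          # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)      # weighted sum of feature maps
    cam = tf.nn.relu(cam)                                    # keep only positive influence
    cam = cam / (tf.reduce_max(cam) + 1e-8)                  # normalize to [0, 1]
    return cam.numpy()                                       # upsample to image size before overlaying

# Usage (hypothetical layer name):
# heatmap = grad_cam(cnn_model, mri_image, last_conv_layer_name="conv2d_3")
```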

III. RELATED WORKS

Zubaira Naz, Muhammad Usman Ghani Khan, et al. [8] developed a system that focuses on the use of Explainable Artificial Intelligence (XAI) in healthcare, specifically for the quick identification of respiratory disorders that affect the lungs. The article presents a transfer learning strategy based on Convolutional Neural Networks (CNNs) to explain diseases such as edema, tuberculosis, nodules, and pneumonia using chest radiographs. The dataset consists of COVID-19-positive and negative CT images, with a focus on COVID-19 classification using the COVID-CT dataset and COVIDNet. The proposed CNN model uses transfer learning to process chest X-ray (CXR) images with variables including learning rates, epochs, batch sizes, and class labels ("COVID" or "Non-COVID"). To improve model performance, pretrained designs such as ResNet50 are used, and experiments with alternative CNN architectures such as DenseNet169 and MobileNet are undertaken. According to the results, ResNet50 achieves the best accuracy, scoring 93% on the COVID-CT dataset and 97% on COVIDNet. The main goal of the paper is to improve interpretability by matching explanations to the areas identified by medical experts after a CT image is classified as COVID-19. This is accomplished using LIME, a methodical procedure that creates heat maps and highlights the elements that are important in the classification, boosting the reliability and interpretability of the AI system's outputs in the context of COVID-19 diagnosis. The model further improves its explanatory power by including an explainable module to extract activation maps [8].

Jai Vardhan, Taraka Satya Krishna Teja Malisetti, et al. [9], in their research, focus on enhancing breast cancer detection by synergistically combining thermal imaging, artificial intelligence (AI), and explainable AI (XAI) techniques to mitigate the challenges of false positives and false negatives in existing screening methods. The dataset encompasses 780 breast ultrasound images, categorized into normal, benign, and malignant classes, with each image standardized at 500x500 pixels. However, the dataset presents challenges, including low image quality and class imbalance, leading to the adoption of segmentation-based methods. Well-known deep learning models, U-Net and CANet, are employed for feature extraction, and the integration of hyperspectral imaging enhances automatic feature extraction for improved classification accuracy of breast cancer nest tissue. The segmentation model excels at detecting black round spots but faces challenges with irregular shapes. Despite limitations such as a small dataset and questions regarding real-world applicability, the model achieves high accuracy, reaching 99% during validation and 95% during training, highlighting the potential of integrated thermal imaging and AI in breast cancer detection.

By distinguishing between samples of normal lung opacity and samples caused by various diseases, such as the novel coronavirus disease (COVID-19), Pitroda, M. M. Fouda, et al. [10] created a framework for lung disease detection from chest X-ray images. The dataset is made up of 3,616 COVID-19-positive cases, 10,192 normal cases, and 6,012 lung-opacity X-rays. A particular CNN design, the U-Net neural network, is used for quick and accurate image segmentation. For comparative analysis, DenseNet-201, DenseNet-121, ResNet-50, and InceptionV3 models are utilized, with DenseNet-201 showing good prediction performance. The explainability of the Layer-wise Relevance Propagation (LRP) method they adopted has been assessed using a robust performance metric based on pixel flipping. The performance of LRP is compared with other explainable methods such as Deep Taylor Decomposition (DTD), Guided Back-propagation (GB), and Local Interpretable Model-agnostic Explanations (LIME). The best performance was demonstrated by the Deep Taylor Decomposition approach, while Layer-wise Relevance Propagation was shown to produce nearly comparable outcomes. One drawback is that irrelevant bone structure still appears in the generated heat maps; the model's performance can be enhanced, and a more accurate explanation provided, by applying a bone suppression technique during image preparation.

In order to make the system more interpretable and practical in a medical setting, Morteza Esmaeili, Riyas Vettukattil, et al. [11] created a system that uses explainable AI to provide reasons for its conclusions in addition to providing precise tumor localization. The dataset, obtained from The Cancer Imaging Archive, includes glioblastoma (WHO grade IV), the most aggressive type, and lower-grade gliomas. T2-weighted and FLAIR magnetic resonance (MR) images were obtained from 354 participants meeting the inclusion criterion, and every image was scaled to a resolution of 256 x 256 x 128 pixels. Brain images with and without tumor lesions were produced in 19,200 and 14,800 slices, respectively, for training. On the testing dataset, the mean cross-validated prediction accuracy for DenseNet-121, GoogLeNet, and MobileNet was 92.1%, 87.3%, and 88.9%, respectively. To visualize each model's performance on tumor lesion localization, the Grad-CAM algorithm is used; using saliency heatmaps, it produces visual descriptions of post-processed image-space gradients. Compared to GoogLeNet and MobileNet, DenseNet-121 offered a much higher mean localization accuracy, with hit = 81.1% and IoU = 79.1%. Grad-CAM estimates and gradient-based explanations are linked to this issue, particularly when several objects in an image are targeted.

A novel framework has been suggested by Fei Yan, Yunqing Chen, et al. [12] that performs end-to-end processing of MRI images for improved brain tumor identification and analysis. The framework makes use of a composite explainable network consisting of two models, one for segmentation and one for classification. The classification model handles both low-grade (LGG) and high-grade (HGG) gliomas. From the MRI input sequences, the framework generates segmentation, classification, and explainability outputs. The primary goal is a thorough study and a complementary interpretation of the model's segmentation and classification outcomes, as well as of the decision-making process, through explainability. The classification model is based on a re-parameterized VGG-like network: in contrast to previous multi-branch systems such as ResNet, Inception, and others, RepVGG serves as the backbone, offering clear speed benefits while lowering model complexity and resource usage. Many perturbation- or gradient-based techniques are used for explainability, and Grad-CAM++ was selected as the framework for enhanced disease visualization. The Brain Tumor Segmentation (BraTS) 2021 dataset, a collection of brain MRI images, is used; it contains 4 × 240 × 240 × 155 MRI sequences per case, with a pixel size of 240 × 240 for each image and 155 image slices per case. For the suggested framework, the segmentation F1 score was 95% and the classification accuracy was 98.5%.

Zeineldin, R.A., Karar, M.E., Elshaer, Z., et al. [13] study the explainability of deep neural networks for MRI analysis of brain tumours. The study introduces a fresh explainability framework dubbed NeuroXAI, designed to aid in understanding the behaviour of DL networks through advanced visualization techniques such as attention maps. NeuroXAI operates post hoc, allowing its application to diverse pre-trained deep neural models and thus offering insights into their functioning. The research also underscores the importance of integrating explainable artificial intelligence (XAI) methods into medical image analysis tasks, as evidenced by two case studies, and NeuroXAI facilitates CNN analysis by providing individual activation maps for each internal filter. The findings highlight the crucial role of XAI in medical imaging tasks, aiming to hasten the clinical acceptance of DL models among medical practitioners. In future work, the authors will focus on quantitatively assessing XAI methods to evaluate the quality of the generated sensitivity maps and on investigating their correlation with DL accuracy metrics, including further experiments on multi-modal MRI-guided neurosurgery; another primary objective is to explore extracting quantitative features, such as tumour volume and centroid, from explanation methods.

Eder, M., Moser, E., Holzinger, A., Jean-Quartier, C., and Jeanquartier, F. [14] present interpretable machine learning with brain image and survival data. The study examines the use of visual explanations to interpret network models predicting glioma patient survival rates. Three network variations were compared, achieving accuracies from 31% to 55.2%, with the "2D full size" model performing best, akin to BraTS2020 challenge winners. SHAP features aid in interpreting results, highlighting cases where high accuracy may not indicate robust training. Limited additional datasets pose a challenge, and the focus is on explainability over network optimization. SHAP analysis, alongside pre-processing, helps evaluate networks with scant training data. Future research aims to address dataset limitations and refine models by integrating diverse data sources.

Meske, C., Bunde, E., Schneider, J., and Gersch, M. [15] discuss explainable artificial intelligence: objectives, stakeholders, and future research opportunities. The research note addresses the risks associated with black-box AI systems and emphasizes the need for explainability. It discusses previous research on Explainable AI (XAI) within the realm of information systems. The origin of the term XAI is explored, along with its generalized objectives and the various stakeholder groups involved. Quality criteria for personalized explanations are also examined. The note concludes with a look ahead at future research directions in XAI.

Singh, A., Sengupta, S., and Lakshminarayanan, V. [16], in their review of explainable deep learning models in medical image analysis, focus on current applications of explainable deep learning in various medical imaging tasks. The review discusses different approaches, highlights challenges for clinical deployment, and identifies areas needing further research. The perspective is that of a deep learning researcher designing systems for clinical end users, providing practical insights into the implementation of explainable deep learning in medical imaging. Research has extended existing explainability techniques to better address the complexities of the medical imaging field. For instance, Expressive Gradients (EG) has been proposed as an enhancement of the commonly employed Integrated Gradients (IG), specifically tailored to improve coverage of retinal lesions while also extending concept vectors to encompass continuous attributes such as texture and shape. These efforts have propelled advances in explainability, offering customization capabilities without necessitating entirely new methodologies.

A framework for COVID-19 diagnosis using chest X-ray images and deep learning models was created by Rawan Ghnemat, Alodibat, S., et al. [17]. The system places a strong emphasis on explainability and interpretability. Deep learning algorithms have been applied to chest X-ray images in numerous studies to diagnose COVID-19; in particular, Convolutional Neural Networks (CNNs) such as VGG16 have demonstrated encouraging outcomes. The surveyed research emphasizes the importance of interpretability, temporal complexity, and model accuracy in practical medical contexts. A crucial component of current research is the incorporation of XAI techniques, such as Local Interpretable Model-agnostic Explanations (LIME). The reviewed literature emphasizes how XAI, especially in the medical field, can enhance the interpretability and transparency of deep learning models. The surveyed articles acknowledge that XAI explanations are subjective, especially in the case of LIME, and discuss the difficulty of evaluating these explanations objectively as well as the value of expert validation and human interpretation. It is stressed that further investigation is required to assess the performance of XAI models.

IV. EXPERIMENTATION

Training of the brain tumour classification model is performed on a laptop with an Intel(R) Core(TM) i5-10210U CPU @ 1.60 GHz and 8 GB of RAM (Random Access Memory), using VS Code on Microsoft Windows 11.

A. Dataset Description

The dataset used for training and testing the CNN model is taken from the Kaggle website. It is a combination of the following three datasets:

• Figshare
• SARTAJ
• Br35H

The dataset contains 7043 human brain MRI images classified into four classes: glioma, meningioma, no tumour, and pituitary tumour, with the no-tumour class images taken from the Br35H dataset. Fig. 1 shows sample examples.

TABLE I. DATASET DETAILS

CLASSES       TRAINING   TESTING
GLIOMA        1321       300
PITUITARY     1467       310
MENINGIOMA    1339       306
NO TUMOUR     1595       405

Fig. 1. Types of tumour included for classification.
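A hedged loading sketch, assuming the combined Kaggle dataset is unpacked into Training/ and Testing/ directories with one sub-folder per class (the usual layout of the public Brain Tumor MRI dataset); the folder names and paths are assumptions, not taken from the paper.

```python
# Dataset loading sketch (TensorFlow/Keras assumed): read the four tumour classes
# from class-named sub-folders and resize every MRI image to 224x224.
import tensorflow as tf

IMG_SIZE = (224, 224)
BATCH_SIZE = 32

train_ds = tf.keras.utils.image_dataset_from_directory(
    "brain_tumor_mri/Training",          # assumed layout: one sub-folder per class
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="categorical")            # one-hot labels for categorical_crossentropy

test_ds = tf.keras.utils.image_dataset_from_directory(
    "brain_tumor_mri/Testing",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="categorical")

# Scale pixel values to [0, 1].
normalize = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (normalize(x), y))
test_ds = test_ds.map(lambda x, y: (normalize(x), y))

print(train_ds.element_spec)             # ((None, 224, 224, 3), (None, 4))
```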
B. Proposed Model

Fig. 2. Architecture of the proposed model.

The pipeline in Fig. 2 employs a dataset sourced from Kaggle, amalgamating three distinct datasets: figshare, SARTAJ, and Br35H. Through data preprocessing, each image in the dataset is assigned a label corresponding to a specific brain tumour category: glioma, pituitary, meningioma, or the absence of a tumour. The pixel values of the images serve as features for the model, and to help the model handle the categorical nature of the target variable during training, the labels are encoded as categorical data. Subsequently, a Convolutional Neural Network (CNN) [16] model is constructed, incorporating convolutional layers (Conv2D), downsampling layers (MaxPooling), and fully connected layers (Dense). Post-training, the model is applied to categorize brain tumour images and predict disease categories. To enhance interpretability and transparency, Explainable Artificial Intelligence (XAI) methods, namely Integrated Gradients, Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Grad-CAM, are employed. The resulting interpretations from SHAP, LIME, Integrated Gradients, and Grad-CAM are systematically evaluated to discern the significance and relevance of various aspects, ultimately augmenting the interpretability of the CNN model's conclusions.
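To make the inference end of the Fig. 2 pipeline concrete, a small sketch follows (hypothetical model, file path, and class ordering): it preprocesses one MRI image the same way as during training, runs the CNN, and maps the softmax output back to a tumour class name.

```python
# Inference sketch (TensorFlow/Keras assumed): preprocess one MRI image, run the CNN,
# and map the predicted probability vector back to a tumour class name.
import numpy as np
import tensorflow as tf

CLASS_NAMES = ["glioma", "meningioma", "notumor", "pituitary"]   # assumed alphabetical folder order

def predict_tumour_class(model, image_path):
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img) / 255.0    # same [0, 1] scaling as training
    probs = model.predict(x[None, ...])[0]          # shape (4,): one probability per class
    idx = int(np.argmax(probs))
    return CLASS_NAMES[idx], float(probs[idx])

# Usage (hypothetical path):
# label, confidence = predict_tumour_class(cnn_model, "Testing/glioma/example.jpg")
```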

C. Training of the model

The Convolutional Neural Network (CNN) model designed for classification comprises two initial convolutional layers, each with 32 filters and a Rectified Linear Unit (ReLU) activation function to introduce non-linearity, followed by a MaxPooling layer with a 2x2 pool size and a stride of 2 for downsampling. The input shape is defined as [224, 224, 3]. Subsequently, a third convolutional layer with 32 filters and a 3x3 kernel is incorporated, followed by a fourth convolutional layer with 64 filters and a 3x3 kernel, and another MaxPooling layer with a 2x2 pool size and a stride of 2 provides further downsampling. The model is trained for 10 epochs, achieving an accuracy of 80.55%. The Local Interpretable Model-agnostic Explanations (LIME) approach is applied using the LimeImageExplainer to generate explanations for predicted brain tumour classes, highlighting the interpretability of specific pixels. SHAP values are computed to assign contributions to each feature, offering insight into the model's decision-making process. Integrated Gradients uses a trapezoidal rule to approximate the integral of gradients, providing an average rate of change in the model's prediction with respect to the input features. Grad-CAM enhances interpretability by assigning weights to the image regions crucial for the model's decision, aiding collaboration between AI models and medical professionals in brain tumour classification. This methodology facilitates trust and transparency by visually representing the importance of different regions of the input image for enhanced model interpretability.
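As a concrete reading of the description above, the following hedged sketch reproduces the stated layer sequence and the 10-epoch training call; the Dense head width, optimizer, and loss are assumptions not specified in the text, and `train_ds`/`test_ds` refer to the hypothetical datasets loaded in Section IV-A.

```python
# CNN training sketch (TensorFlow/Keras assumed), following the layer description in the text;
# the Dense head width and the Adam optimizer are assumptions not stated in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_brain_tumour_cnn(num_classes=4):
    return models.Sequential([
        layers.Input(shape=(224, 224, 3)),
        layers.Conv2D(32, (3, 3), activation="relu"),        # first 32-filter convolution
        layers.Conv2D(32, (3, 3), activation="relu"),        # second 32-filter convolution
        layers.MaxPooling2D(pool_size=(2, 2), strides=2),
        layers.Conv2D(32, (3, 3), activation="relu"),        # third convolution, 3x3 kernel
        layers.Conv2D(64, (3, 3), activation="relu"),        # fourth convolution, 64 filters
        layers.MaxPooling2D(pool_size=(2, 2), strides=2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),                # assumed head size
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_brain_tumour_cnn()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# history = model.fit(train_ds, validation_data=test_ds, epochs=10)   # 10 epochs as in the text
```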
V. RESULTS AND DISCUSSION

The proposed CNN model classifies thousands of MRI images of brain tumours into four categories: glioma, meningioma, no tumour, and pituitary. The XAI models, namely Integrated Gradients, LIME, SHAP, and Grad-CAM, are used with the CNN to classify the brain MRI images while explaining the regions where the tumours are most likely to occur. The model is tested on images from the dataset, achieving 80.55% accuracy over 10 epochs, as shown in Fig. 3.

Fig. 3. Linear graph of accuracy vs. epochs.

LIME is applied to the CNN predicting brain tumour classes and identifies a specific region of interpretability in pixels for a given predicted class. In Fig. 4, the green, yellow, and red regions indicate the specific regions of interpretability in pixels for the given predicted class. In the LIME (Local Interpretable Model-agnostic Explanations) method for brain tumour detection, red, yellow, and green typically represent the importance or contribution of different regions or features of the image: red areas often indicate the most influential regions in making a prediction, yellow areas represent moderately important regions, and green areas are usually considered less important or contributing minimally to the prediction. These colour-coded representations help interpret the model's decision-making process by highlighting the regions of the input image that have the most impact on the predicted outcome.

In the SHAP (SHapley Additive exPlanations) model applied to the CNN for brain tumour classification, the visualization shows SHAP values across different ranges. In Fig. 7, the coloured regions indicate the pixel-wise interpretability values, revealing the contribution of each pixel to the model's decision for a specific class; the highest SHAP value is 0.004. In Fig. 6, a positive SHAP value for a specific feature suggests that an increase in the value of that feature makes the model more likely to predict the presence of a brain tumour, whereas a negative SHAP value suggests that an increase in the value of that feature makes the model less likely to predict the presence of a brain tumour.

In Integrated Gradients, interpreting the predictions of a CNN model involves four components: the baseline image, the original image, the attribution mask, and the overlay. Fig. 7 shows the baseline image, which serves as a reference point for the contributions of different pixel features of an image towards the model's prediction; it is often set to a neutral or zero input, representing the absence of the feature of interest. The actual image is the input image for which we want to understand the model's prediction; in the case of brain tumour detection, it is an MRI image of the brain. The attribution mask represents the computed attributions, or importance values, assigned to each pixel of the original image and is the output of the Integrated Gradients method; it highlights which regions of the image had a significant impact on the model's decision. The overlay is the visual representation of the attribution mask laid over the actual image and is a way to visually interpret which parts of the image are most critical in influencing the model's decision; darker regions in the overlay typically indicate areas of higher importance for the model's prediction. In Integrated Gradients, blue and pink are often used to represent the contributions of different regions in the context of brain tumour detection: blue typically represents areas with a negative contribution, indicating regions that lower the probability or confidence of predicting the presence of a tumour, while pink represents areas with a positive contribution, indicating regions that raise the probability or confidence of predicting the presence of a tumour. These colour-coded representations help visualize the model's decision-making process by showing which regions of the input image contribute positively or negatively to the final prediction.
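The four components described above can be rendered side by side with a few lines of matplotlib; this sketch assumes an attribution array produced by the Integrated Gradients example given earlier, and the colour map is an arbitrary choice.

```python
# Overlay sketch (matplotlib assumed): show baseline, original image, attribution mask,
# and the mask overlaid on the MRI slice, mirroring the four components described above.
import matplotlib.pyplot as plt
import numpy as np

def show_attributions(image, attributions, baseline=None):
    """image: (H, W, 3) in [0, 1]; attributions: (H, W, 3) from Integrated Gradients."""
    if baseline is None:
        baseline = np.zeros_like(image)
    mask = np.sum(np.abs(attributions), axis=-1)     # per-pixel importance

    titles = ["Baseline", "Original image", "Attribution mask", "Overlay"]
    fig, axes = plt.subplots(1, 4, figsize=(16, 4))
    axes[0].imshow(baseline)
    axes[1].imshow(image)
    axes[2].imshow(mask, cmap="inferno")             # colour map choice is arbitrary
    axes[3].imshow(image)
    axes[3].imshow(mask, cmap="inferno", alpha=0.5)  # semi-transparent heatmap overlay
    for ax, title in zip(axes, titles):
        ax.set_title(title)
        ax.axis("off")
    plt.tight_layout()
    plt.show()

# Usage (hypothetical arrays): show_attributions(mri_image, attributions)
```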

Grad-CAM typically produces a heatmap in which the intensity of colour corresponds to the importance of a particular region of the input image. In Fig. 10, warmer colours such as red or yellow indicate higher importance, while cooler colours such as blue or green represent lower importance. This heatmap is then overlaid on the original image, providing a visual indication of the regions that influenced the neural network's decision.

Fig. 4. Glioma.   Fig. 5. Pituitary.

Fig. 6. Meningioma.   Fig. 7. No tumour.

VI. CONCLUSIONS

We built a Convolutional Neural Network (CNN) for classifying brain tumour disease into the four categories considered in this project, and integrating explainable AI methods such as Grad-CAM, LIME, SHAP, and Integrated Gradients enhances the interpretability of the model's predictions. Integration of XAI into brain tumour detection systems holds the promise of improving diagnostic accuracy, reducing false positives, and ultimately enhancing patient outcomes. By leveraging these techniques, we gain insight into the influential features and decision-making processes of the CNN, fostering a more transparent and understandable framework for analyzing brain tumour images. A comparison of general DNN techniques with the XAI framework is not included in the present study and can be considered as future scope. Furthermore, a diverse array of explainable AI models exists, providing a comprehensive toolkit for researchers and practitioners to explore the interpretability of complex neural network models in medical imaging and beyond.

REFERENCES

[1] Yan, Fei, Yunqing Chen, Yiwen Xia, Zhiliang Wang, and Ruoxiu Xiao. 2023. "An Explainable Brain Tumor Detection Framework for MRI Analysis." Applied Sciences 13, no. 6: 3438. https://doi.org/10.3390/app13063438
[2] Esmaeili, M., Vettukattil, R., Banitalebi, H., Krogh, N. R., Geitung, J. T. (2021). Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization. Journal of Personalized Medicine, 11(11), 1213. https://doi.org/10.3390/jpm11111213
[3] Naz, Z., Khan, M. U. G., Saba, T., Rehman, A., Nobanee, H., Bahaj, S. A. (2023). An Explainable AI-Enabled Framework for Interpreting Pulmonary Diseases from Chest Radiographs. Cancers, 15(1), 314. https://doi.org/10.3390/cancers15010314
[4] V. Pitroda, M. M. Fouda and Z. M. Fadlullah, "An Explainable AI Model for Interpretable Lung Disease Classification," 2021 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS), Bandung, Indonesia, 2021, pp. 98-103, doi: 10.1109/IoTaIS53735.2021.9628573.
[5] Vardhan, Jai, Krishna, Ghanta. (2023). Breast Cancer Segmentation using Attention-based Convolutional Network and Explainable AI.
[6] Bas H.M. van der Velden, Hugo J. Kuijf, Kenneth G.A. Gilhuijs, Max A. Viergever. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Medical Image Analysis, Volume 79, 2022.
[7] Tiwari, Rudra. (2023). Explainable AI (XAI) and its Applications in Building Trust and Understanding in AI Decision Making. International Journal of Scientific Research in Engineering and Management, 07. 10.55041/IJSREM17592.
[8] Rudresh Dwivedi, Devam Dave, Het Naik, Smiti Singhal, Rana Omer, Pankesh Patel, Bin Qian, Zhenyu Wen, Tejal Shah, Graham Morgan, and Rajiv Ranjan. 2023. Explainable AI (XAI): Core Ideas, Techniques, and Solutions. ACM Comput. Surv. 55, 9, Article 194 (September 2023), 33 pages. https://doi.org/10.1145/3561048
[9] X. Fan, B. Lang, Y. Zhou, and T. Zang, "Adding network bandwidth resource management to Hadoop YARN," in 2017 Seventh International Conference on Information Science and Technology (ICIST). IEEE, 2017, pp. 444-449.
[10] Došilović, F.K., Brčić, M., Hlupić, N.: Explainable artificial intelligence: A survey. In: 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 0210-0215 (May 2018). https://doi.org/10.23919/MIPRO.2018.8400040
[11] Sebastian Wallkötter, Silvia Tulli, Ginevra Castellano, Ana Paiva, and Mohamed Chetouani. 2021. Explainable Embodied Agents Through Social Cues: A Review. J. Hum.-Robot Interact. 10, 3, Article 27 (September 2021), 24 pages. https://doi.org/10.1145/3457188
[12] Li, H.; Chen, X.; Qian, X.; Chen, H.; Li, Z.; Bhattacharjee, S.; Zhang, H.; Huang, M.-C.; Xu, W. An explainable COVID-19 detection system based on human sounds. Smart Health 2022, 26, 100332.
[13] Zeineldin, R.A., Karar, M.E., Elshaer, Z. et al. Explainability of deep neural networks for MRI analysis of brain tumors. Int J CARS 17, 1673-1683 (2022).
[14] Eder, M.; Moser, E.; Holzinger, A.; Jean-Quartier, C.; Jeanquartier, F. Interpretable Machine Learning with Brain Image and Survival Data. BioMedInformatics 2022, 2, 492-510. https://doi.org/10.3390/biomedinformatics2030031
[15] Meske, C., Bunde, E., Schneider, J., & Gersch, M. (2022). Explainable artificial intelligence: objectives, stakeholders, and future research opportunities. Information Systems Management, 39(1), 53-63.
[16] Singh, A., Sengupta, S., & Lakshminarayanan, V. (2020). Explainable deep learning models in medical image analysis. Journal of Imaging, 6(6), 52.
[17] Wang, Y.; Jiang, C.; Wu, Y.; Lv, T.; Sun, H.; Liu, Y.; Li, L.; Pan, X. Semantic-Powered Explainable Model-Free Few-Shot Learning Scheme of Diagnosing COVID-19 on Chest X-ray. IEEE J. Biomed. Health Inform. 2022, 26, 5870-5882.
[18] Patil, P., Meena, S.M. (2021). Optimization in Artificial Intelligence-Based Devices and Algorithms for Efficient Training: A Survey. In: Kaiser, M.S., Xie, J., Rathore, V.S. (eds) Information and Communication Technology for Competitive Strategies (ICTCS 2020). Lecture Notes in Networks and Systems, vol 190. Springer, Singapore. https://doi.org/10.1007/978-981-16-0882-7_79

