Explainability of Brain Tumor Classification Based on Region
Prashant Narayankar
School of Computer Science and Engineering
KLE Technological University
Hubballi, India
[email protected]

Dr. Vishwanath P. Baligar
School of Computer Science and Engineering
KLE Technological University
Hubballi, India
[email protected]
Abstract—Medical image analysis plays a crucial role in modern healthcare, aiding clinicians in diagnosing and treating various medical conditions. With the advent of Artificial Intelligence (AI) and Machine Learning (ML), there has been a surge in the development of AI-powered algorithms for medical image analysis. Explainable Artificial Intelligence (XAI) techniques aim to provide interpretable and transparent insights into the decision-making process of AI models, enhancing their usability and trustworthiness in healthcare applications. We review state-of-the-art XAI methods, including feature visualisation, attention mechanisms, and rule-based systems, and their application to medical image analysis. XAI techniques can pave the way for safer and more effective AI-driven medical solutions, ultimately benefiting healthcare providers and patients. As the healthcare industry continues to embrace AI, integrating XAI into medical image analysis is poised to revolutionize how diseases are detected, diagnosed, and treated. In our work, we use a deep learning model for classification and Explainable AI methods to explain the model's predictions for brain tumour disease using MRI images. We have used a CNN for brain tumour disease classification. Out of 7043 images, 5722 were used for training and 1321 for testing and validation. The model achieves 80% accuracy. Explainable AI methods such as LIME, SHAP, Integrated Gradients, and Grad-CAM are used to interpret the model's predictions in terms of regions of interest in an image.

In this high-stakes environment, the need for transparency in AI decision-making is not just essential but critical. Artificial intelligence (AI) and deep learning (DL) models are also helping to bridge many gaps in other domains such as agriculture, for example by enabling predictions of when to plant seeds, the effects of climate change, and when to harvest. AI has been shown to be a catalyst for improvements that are revolutionizing conventional processes.

To meet this requirement, this in-depth analysis begins a multi-pronged investigation that explores the core ideas, wide-ranging applications, difficult obstacles, and promising future possibilities of XAI in the complex field of brain tumour classification from medical images. The main objective is to give medical practitioners the knowledge and resources they need to make better-educated, data-driven, and ultimately confident clinical judgments. As we go along this road, our goal is not only to meet but also to surpass current patient care standards, paving the way for better healthcare results. The aim is to fully utilize XAI's innate capacity to improve the accuracy and usefulness of deep learning-based brain tumour categorization. The paper aims to support a paradigm change in healthcare towards more informed, transparent, and data-driven decision-making processes, ultimately supporting the main objective of enhancing patient outcomes and care.
B. Integrated Gradients

neural network activations at every layer, offering a thorough comprehension of the decision-making procedure. This method, which is especially important in the complex process of brain tumour classification, reveals the contribution of individual neurons to the final conclusion by painstakingly recording and evaluating activations. Integrated Gradients' layer-by-layer analysis reveals the distinct neurons and properties that are important at each CNN layer, allowing for a thorough investigation of the decision-making process [4].
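To make the preceding description concrete, the following is a minimal sketch of how integrated gradients could be computed for one MRI slice; a Keras-style CNN, a black-image baseline, and the variable names (cnn_model, mri_slice) are illustrative assumptions rather than details taken from this paper.

import numpy as np
import tensorflow as tf

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Approximate Integrated Gradients for one image (H, W, C) -> (H, W, C) attributions."""
    if baseline is None:
        baseline = np.zeros_like(image)          # black image used as the reference point (assumption)
    # Interpolate between the baseline and the input in `steps` increments.
    alphas = np.linspace(0.0, 1.0, steps + 1).reshape(-1, 1, 1, 1)
    interpolated = baseline[None] + alphas * (image[None] - baseline[None])
    interpolated = tf.convert_to_tensor(interpolated, dtype=tf.float32)

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        predictions = model(interpolated)        # (steps + 1, num_classes)
        target = predictions[:, target_class]
    grads = tape.gradient(target, interpolated)  # gradient of the class score w.r.t. each interpolated image

    # Riemann (trapezoidal) approximation of the path integral, scaled by (input - baseline).
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (image - baseline) * avg_grads.numpy()

# Usage sketch (names are placeholders):
# attributions = integrated_gradients(cnn_model, mri_slice, target_class=predicted_label)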
C. Local Interpretable Model-agnostic Explanations

The incorporation of LIME, or Local Interpretable Model-agnostic Explanations, into CNNs for image categorization is a significant step towards interpretable artificial intelligence. By perturbing individual images, LIME addresses the opacity of CNNs and produces a variety of related but distinct instances. A locally interpretable model is then trained on these instances to approximate the complex decision boundary of the original CNN. The resulting localized interpretation fosters confidence and a better understanding, providing insightful information about the particular factors influencing predictions [19]. LIME supports the need for openness in complicated decision-making by helping with bias identification, model validation, and trust-building, in addition to improving interpretability [5]. The notation used is:

x: original representation of the instance being explained
x′: binary vector for the human-interpretable representation
n: number of features
h: function that allows x′ to be recovered from x
z: perturbed version of the original image
D. SHapley Additive exPlanations

SHapley Additive exPlanations (SHAP) provides a thorough grasp of feature relevance and its influence on model results by creating connections between particular features in an image and the predictions made by the CNN model. Complex image areas may be identified with the use of Shapley values, which estimate the average contribution of each feature. In the field of image classification, SHAP simplifies the intricate links between model predictions and image attributes, making it easier to identify important image traits. CNN models become more reliable and useful as a result of this thorough explanation. The architectural schematic of SHAP highlights its methodical approach and flexibility, emphasizing its smooth incorporation into existing CNN models without sacrificing effectiveness. In general, SHAP helps CNN models become interpretable, resilient, and trustworthy for reliable and accurate image categorization [6]. The notation used is:

h_x: mapping function
z′: simplified training-data representation, where each entry of z′ lies between 0 and 1
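This notation corresponds to SHAP's additive explanation model g(z′) = φ₀ + Σᵢ φᵢ z′ᵢ, where the φᵢ are the Shapley values of the simplified features. The following is a minimal sketch of how SHAP attribution maps could be obtained for a Keras CNN with the shap library's gradient-based explainer; the background-set size and the array names (train_images, test_images, cnn_model) are illustrative assumptions rather than the exact setup used in this paper.

import numpy as np
import shap

# A small background sample approximates the expectation over the training distribution.
background = train_images[np.random.choice(len(train_images), 50, replace=False)]  # assumed array of training images

explainer = shap.GradientExplainer(cnn_model, background)    # cnn_model is the trained Keras classifier (assumed)
shap_values = explainer.shap_values(test_images[:4])         # per-class attribution maps for a few test images

# Visualize pixel-level contributions: red pushes the prediction towards the class, blue away from it.
shap.image_plot(shap_values, test_images[:4])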
E. Grad-CAM

When it comes to image classification, the foundational ideas behind the design and application of Grad-CAM (Gradient-weighted Class Activation Mapping) for CNN-based models are focused on guaranteeing contextual relevance. Grad-CAM places a strong emphasis on identifying the key areas in images that are crucial to the model's conclusions. One important idea is to use a multiscale, hierarchical visualization technique, which records information in images at multiple resolutions to give a thorough insight into the model's attention at distinct anatomical levels. This method is essential for examining image features at various scales. By working with domain experts to link visualized regions with recognized clinical signals, Grad-CAM explanations may be integrated with clinical expertise to improve the tool's informativeness and dependability [7]. The development of an intuitive user interface is an essential prerequisite for the seamless incorporation of Grad-CAM into image classification workflows; this guarantees that Grad-CAM visualizations can be understood easily in conjunction with the source images. The general design ideas aim to make Grad-CAM clearly interpretable, usable in real-world settings, and clinically useful.
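For concreteness, a minimal sketch of the standard Grad-CAM computation for a Keras CNN is given below; the layer name 'last_conv' and the other identifiers are placeholders, since the exact architecture is not listed at this point in the paper.

import numpy as np
import tensorflow as tf

def grad_cam(model, image, target_class, last_conv_layer='last_conv'):
    """Return a (H_conv, W_conv) heatmap of class-discriminative importance in [0, 1]."""
    # Model that exposes both the last convolutional feature maps and the final predictions.
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(last_conv_layer).output, model.output])

    with tf.GradientTape() as tape:
        conv_maps, predictions = grad_model(image[None].astype('float32'))
        class_score = predictions[:, target_class]

    grads = tape.gradient(class_score, conv_maps)           # d(class score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))         # global-average-pooled gradients per channel
    cam = tf.reduce_sum(conv_maps[0] * weights, axis=-1)    # weighted combination of feature maps
    cam = tf.nn.relu(cam)                                   # keep only positive influence on the class
    cam = cam / (tf.reduce_max(cam) + 1e-8)                 # normalize to [0, 1]
    return cam.numpy()

# heatmap = grad_cam(cnn_model, mri_slice, target_class=predicted_label)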
III. RELATED WORKS

Zubaira Naz, Muhammad Usman Ghani Khan, et al. [8] developed a system that mainly focuses on the use of Explainable Artificial Intelligence (XAI) in healthcare, specifically for the quick identification of respiratory disorders that affect the lungs. The article presents a transfer learning strategy based on Convolutional Neural Networks (CNN) to explain diseases such as edema, TB, nodules, and pneumonia using chest radiographs. The dataset consists of COVID-19-positive and negative CT images, with a focus on COVID-19 classification using the COVID-CT dataset and COVIDNet. The proposed CNN model uses transfer learning to process chest X-ray (CXR) images, with variables including learning rates, epochs, batch sizes, and class labels ("COVID" or "Non-COVID"). To improve model performance, pretrained designs like ResNet50 are used, and tests with alternative CNN architectures like DenseNet169 and MobileNet are undertaken. ResNet50 appears to have the best accuracy, scoring 93% on the COVID-CT dataset and 97% on COVIDNet, according to the results. The main goal of the paper is to improve interpretability by matching explanations to areas identified by medical experts after a CT image is classified as COVID-19. This is accomplished using LIME, a methodical procedure that creates heat maps and highlights elements that are important to the classification, boosting the reliability and interpretability of the AI system's outputs in the context of COVID-19 diagnosis. The model further improves its explanatory power by including an explainable module to extract activation maps [8].

Jai Vardhan, Taraka Satya Krishna Teja Malisetti, et al. [9], in their research, focus on enhancing breast cancer detection by synergistically combining thermal imaging, artificial intelligence (AI), and explainable AI (XAI) techniques to mitigate the challenges of false positives and false negatives in existing screening methods. The dataset encompasses 780 breast ultrasound images, categorized into normal, benign, and malignant classes, with each image standardized at 500x500 pixels. However, this dataset presents challenges, including low image quality and class imbalance, leading to the adoption of segmentation-based methods. Renowned deep learning models, U-Net and CANet, are employed for feature extraction, and the integration of hyperspectral imaging enhances automatic feature extraction for improved classification accuracy of breast cancer nest tissue. The segmentation model excels in detecting black round spots but faces challenges with irregular shapes. Despite limitations such as a small dataset and questions regarding real-world applicability, the model achieves high accuracy, reaching 99% during validation and 95% during training, highlighting the
potential of integrated thermal imaging and AI in breast cancer detection.
By distinguishing between samples of normal lung opacity and samples caused by various diseases, such as the novel coronavirus illness (COVID-19), Pitroda, M. M. Fouda, et al. [10] created a framework for lung disease detection from chest X-ray pictures. The dataset is made up of 3,616 COVID-19-positive patient X-rays, 10,192 normal cases, and 6,012 lung-opacity X-rays. A particular CNN design, the U-Net neural network, is used for quick and accurate image segmentation. For comparative analysis, DenseNet-201, DenseNet-121, ResNet-50, and InceptionV3 models are utilized, and DenseNet-201 showed good prediction performance. The explainability of the Layer-wise Relevance Propagation (LRP) method they adopted has been assessed using a robust performance metric based on pixel flipping. The performance of LRP is compared with other explainable methods such as Deep Taylor Decomposition (DTD), Guided Back-propagation (GB), and Local Interpretable Model-agnostic Explanations (LIME). The best performance was demonstrated by the Deep Taylor Decomposition (DTD) approach, while Layer-wise Relevance Propagation (LRP) was shown to produce almost comparable outcomes. One drawback is that some irrelevant bone structure still appears in the heat maps that are created; the model's performance can be enhanced, and a more accurate explanation provided, by using a bone suppression technique during image preparation.
In order to make the system more interpretable and practical in a medical setting, Morteza Esmaeili, Riyas Vettukattil, et al. [11] created a system that uses explainable AI to provide reasons for its conclusions in addition to providing precise tumor localization. The dataset, obtained from The Cancer Imaging Archive, includes glioblastoma (WHO grade IV), the most aggressive type of cancer, and lower-grade gliomas. The images comprise T2-weighted and FLAIR magnetic resonance (MR) data from 354 participants, which was the inclusion criterion, and every image has been scaled to a resolution of 256 x 256 x 128 pixels. Brain images with and without tumor lesions were produced in 19,200 and 14,800 slices, respectively, for training. On the testing dataset, the mean cross-validated prediction accuracy for DenseNet-121, GoogLeNet, and MobileNet was 92.1, 87.3, and 88.9, respectively. To visualize each model's performance on tumor lesion localization, the Grad-CAM algorithm is used; using saliency heatmaps, it produces visual descriptions of post-processed image-space gradients. Compared to GoogLeNet and MobileNet, DenseNet-121 offered a much higher mean localization accuracy of hit = 81.1% and IoU = 79.1%. Grad-CAM estimations and gradient-based explanations are linked to this issue, particularly when aiming at several objects in an image.
A novel framework has been suggested by Fei Yan, Yunqing Chen, et al. [12] that performs end-to-end processing of MRI images for improved brain tumor identification and analysis. The framework makes use of a composite explainable network comprising two models, segmentation and classification. The classification model handles the classification of both low-grade (LGG) and high-grade (HGG) gliomas, and from the MRI input sequences the framework generates segmentation, classification, and explainability outputs. The primary goal is a thorough study and a complementary interpretation of the model's segmentation and classification outcomes through explainability, as well as of its decision-making process. The classification model uses a VGG-like network based on re-parameterization: in contrast to previous multi-branch systems such as ResNet, Inception, and others, RepVGG serves as the backbone, offering clear speed benefits while lowering model complexity and resource usage. Many techniques based on perturbation or gradients are used for explainability, and Grad-CAM++ was selected as the framework for enhanced disease visualization. The Brain Tumor Segmentation (BraTS) 2021 dataset, a collection of brain MRI images, is used; it contains 4 × 240 × 240 × 155 MRI sequences, with a pixel size of 240 × 240 for each image and 155 image slices per case. The segmentation F1 score was 95% and the classification accuracy was 98.5% for the suggested framework.

Zeineldin, R.A., Karar, M.E., Elshaer, Z., et al. [13] address the explainability of deep neural networks for MRI analysis of brain tumours. This study introduces a fresh explainability framework dubbed NeuroXAI, designed to aid in understanding the behaviour of DL networks through advanced visualization techniques like attention maps. NeuroXAI operates post hoc, allowing its application to diverse pre-trained deep neural models and thus offering insights into their functioning. The research also underscores the importance of integrating explainable artificial intelligence (XAI) methods into medical image analysis tasks, as evidenced by two case studies. NeuroXAI further facilitates CNN analysis by providing individual activation maps for each internal filter, and the findings highlight the crucial role of XAI in medical imaging tasks, aiming to hasten the clinical acceptance of DL models among medical practitioners. In future work, the authors plan to quantitatively assess XAI methods to evaluate the quality of generated sensitivity maps and to investigate their correlation with DL accuracy metrics, including further experiments on multi-modal MRI-guided neurosurgery. Another primary objective of their research will be to explore the potential for extracting quantitative features from explanation methods, such as tumour volume and centroid.

Eder, M., Moser, E., Holzinger, A., Jean-Quartier, C., and Jeanquartier, F. [14] study interpretable machine learning with brain image and survival data. Their work examines the use of visual explanations to interpret network models predicting glioma patient survival rates. Three network variations were compared, achieving accuracies from 31% to 55.2%, with the "2D full size" model performing best, akin to BraTS2020 challenge winners. SHAP features aid in interpreting the results, highlighting cases where high accuracy may not indicate robust training. Limited additional datasets pose a challenge, with the focus on explainability over network optimization. SHAP analysis, alongside pre-processing, helps evaluate networks with scant training data. Future research aims to address dataset limitations and refine models by integrating diverse data sources.

Meske, C., Bunde, E., Schneider, J., and Gersch, M. [15] discuss explainable artificial intelligence: its objectives, stakeholders, and future research opportunities. The research note addresses the risks associated with black-box AI systems and emphasizes the need for explainability. It discusses previous research on Explainable AI (XAI) within the realm of information systems. The origin of the term XAI is explored, along with its generalized objectives and the various
stakeholder groups involved. Quality criteria for personalized explanations are also examined. The note concludes with a look ahead at future research directions in XAI.

Singh, A., Sengupta, S., and Lakshminarayanan, V. [16], in their review of explainable deep learning models in medical image analysis, focus on the current applications of explainable deep learning in various medical imaging tasks.

This dataset contains 7043 human brain MRI images, classified into 4 classes: glioma, meningioma, no tumour, and pituitary tumour, with the no-tumour class images taken from the Br35H dataset. Fig. 1 shows sample examples.

TABLE I. DATASET DETAILS
CLASSES | TRAINING | TESTING
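As a sketch of how such a folder-organized MRI dataset could be loaded and split for training and evaluation, the snippet below uses Keras utilities; the directory layout, image size, and batch size are assumptions made for illustration and are not specified at this point in the paper.

import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH = 32

# Assumes one sub-folder per class: glioma/, meningioma/, notumor/, pituitary/
train_ds = tf.keras.utils.image_dataset_from_directory(
    'brain_mri/Training', label_mode='categorical',
    image_size=IMG_SIZE, batch_size=BATCH, shuffle=True, seed=42)

test_ds = tf.keras.utils.image_dataset_from_directory(
    'brain_mri/Testing', label_mode='categorical',
    image_size=IMG_SIZE, batch_size=BATCH, shuffle=False)

class_names = train_ds.class_names   # e.g. ['glioma', 'meningioma', 'notumor', 'pituitary']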
Integrated Gradients, Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Grad-CAM are employed. The resulting interpretations from SHAP, LIME, Integrated Gradients, and Grad-CAM are systematically evaluated to discern the significance and relevance of various aspects, ultimately augmenting the interpretability of the CNN model's conclusions.

Cooler colors correspond to regions considered less important or contributing minimally to the prediction. These color-coded representations help interpret the model's decision-making process by highlighting the regions of the input image that have the most impact on the predicted outcome.
Grad-CAM typically produces a heatmap where the intensity of color corresponds to the importance of a particular region in the input image. In Fig. 10, warmer colors like red or yellow indicate higher importance, while cooler colors like blue or green represent lower importance. This heatmap is then overlaid on the original image, providing a visual indication of the regions that influenced the neural network's decision.
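A minimal sketch of how such an overlay could be produced is shown below; the colormap, resizing step, and blending weight are conventional choices assumed for illustration, not values taken from the paper, and mri_slice, cnn_model, and grad_cam refer to the placeholders sketched earlier.

import matplotlib.pyplot as plt
import tensorflow as tf

def overlay_heatmap(image, heatmap, alpha=0.4):
    """Blend a low-resolution class-activation heatmap onto the original image."""
    # Resize the heatmap to the image resolution (heatmap values are assumed to be in [0, 1]).
    hm = tf.image.resize(heatmap[..., None], image.shape[:2]).numpy().squeeze()

    plt.imshow(image.astype('uint8'))          # image is assumed to be a 0-255 RGB array
    plt.imshow(hm, cmap='jet', alpha=alpha)    # warmer colors mark more influential regions
    plt.axis('off')
    plt.show()

# overlay_heatmap(mri_slice, grad_cam(cnn_model, mri_slice, predicted_label))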
Fig. 4. Glioma        Fig. 5. Pituitary
Fig. 6. Meningioma    Fig. 7. No tumour
VI. CONCLUSIONS

We built a Convolutional Neural Network (CNN) for classifying brain tumour diseases. The model classifies brain tumours into the 4 categories considered for this project, and integrating explainable AI methods such as Grad-CAM, LIME, SHAP, and Integrated Gradients enhances the interpretability of the model's predictions. Integration of XAI into brain tumor detection systems holds the promise of improving diagnostic accuracy, reducing false positives, and ultimately enhancing patient outcomes. By leveraging these techniques, we gain insights into the influential features and decision-making processes of the CNN, fostering a more transparent and understandable framework for analyzing brain tumor images. A comparison of general DNN techniques with the XAI framework is not part of the present study and can be considered as future scope. Furthermore, a diverse array of explainable AI models exists, providing a comprehensive toolkit for researchers and practitioners to delve into the interpretability of complex neural network models in medical imaging and beyond.
REFERENCES

[1] Yan, Fei, Yunqing Chen, Yiwen Xia, Zhiliang Wang, and Ruoxiu Xiao. 2023. "An Explainable Brain Tumor Detection Framework for MRI Analysis." Applied Sciences 13, no. 6: 3438. https://doi.org/10.3390/app13063438
[2] Esmaeili, M., Vettukattil, R., Banitalebi, H., Krogh, N. R., Geitung, J. T. (2021). Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization. Journal of Personalized Medicine, 11(11), 1213. https://doi.org/10.3390/jpm11111213
[3] Naz, Z., Khan, M. U. G., Saba, T., Rehman, A., Nobanee, H., Bahaj, S. A. (2023). An Explainable AI-Enabled Framework for Interpreting Pulmonary Diseases from Chest Radiographs. Cancers, 15(1), 314. https://doi.org/10.3390/cancers15010314
[4] V. Pitroda, M. M. Fouda and Z. M. Fadlullah, "An Explainable AI Model for Interpretable Lung Disease Classification," 2021 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS), Bandung, Indonesia, 2021, pp. 98-103, doi: 10.1109/IoTaIS53735.2021.9628573.
[5] Vardhan, Jai, Krishna, Ghanta. (2023). Breast Cancer Segmentation using Attention-based Convolutional Network and Explainable AI.
[6] Bas H.M. van der Velden, Hugo J. Kuijf, Kenneth G.A. Gilhuijs, Max A. Viergever. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Medical Image Analysis, Volume 79, 2022.
[7] Tiwari, Rudra. (2023). Explainable AI (XAI) and its Applications in Building Trust and Understanding in AI Decision Making. International Journal of Scientific Research in Engineering and Management, 07. 10.55041/IJSREM17592.
[8] Rudresh Dwivedi, Devam Dave, Het Naik, Smiti Singhal, Rana Omer, Pankesh Patel, Bin Qian, Zhenyu Wen, Tejal Shah, Graham Morgan, and Rajiv Ranjan. 2023. Explainable AI (XAI): Core Ideas, Techniques, and Solutions. ACM Comput. Surv. 55, 9, Article 194 (September 2023), 33 pages. https://doi.org/10.1145/3561048
[9] X. Fan, B. Lang, Y. Zhou, and T. Zang, "Adding network bandwidth resource management to Hadoop YARN," in 2017 Seventh International Conference on Information Science and Technology (ICIST). IEEE, 2017, pp. 444–449.
[10] Došilović, F.K., Brčić, M., Hlupić, N.: Explainable artificial intelligence: A survey. In: 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 0210–0215 (May 2018). https://doi.org/10.23919/MIPRO.2018.8400040
[11] Sebastian Wallkötter, Silvia Tulli, Ginevra Castellano, Ana Paiva, and Mohamed Chetouani. 2021. Explainable Embodied Agents Through Social Cues: A Review. J. Hum.-Robot Interact. 10, 3, Article 27 (September 2021), 24 pages. https://doi.org/10.1145/3457188
[12] Li, H.; Chen, X.; Qian, X.; Chen, H.; Li, Z.; Bhattacharjee, S.; Zhang, H.; Huang, M.-C.; Xu, W. An explainable COVID-19 detection system based on human sounds. Smart Health 2022, 26, 100332.
[13] Zeineldin, R.A., Karar, M.E., Elshaer, Z. et al. Explainability of deep neural networks for MRI analysis of brain tumors. Int J CARS 17, 1673–1683 (2022).
[14] Eder, M.; Moser, E.; Holzinger, A.; Jean-Quartier, C.; Jeanquartier, F. Interpretable Machine Learning with Brain Image and Survival Data. BioMedInformatics 2022, 2, 492-510. https://doi.org/10.3390/biomedinformatics2030031
[15] Meske, C., Bunde, E., Schneider, J., & Gersch, M. (2022). Explainable artificial intelligence: objectives, stakeholders, and future research opportunities. Information Systems Management, 39(1), 53-63.
[16] Singh, A., Sengupta, S., & Lakshminarayanan, V. (2020). Explainable deep learning models in medical image analysis. Journal of Imaging, 6(6), 52.
[17] Wang, Y.; Jiang, C.; Wu, Y.; Lv, T.; Sun, H.; Liu, Y.; Li, L.; Pan, X. Semantic-Powered Explainable Model-Free Few-Shot Learning Scheme of Diagnosing COVID-19 on Chest X-ray. IEEE J. Biomed. Health Inform. 2022, 26, 5870–5882.
[18] Patil, P., Meena, S.M. (2021). Optimization in Artificial Intelligence-Based Devices and Algorithms for Efficient Training: A Survey. In: Kaiser, M.S., Xie, J., Rathore, V.S. (eds) Information and Communication Technology for Competitive Strategies (ICTCS 2020). Lecture Notes in Networks and Systems, vol 190. Springer, Singapore. https://doi.org/10.1007/978-981-16-0882-7_79