
2023 Sixth International Conference of Women in Data Science at Prince Sultan University (WiDS PSU)

978-1-6654-7723-9/23/$31.00 ©2023 IEEE | DOI: 10.1109/WiDS-PSU57071.2023.00048

Classification of Gastrointestinal Cancer through Explainable AI and Ensemble Learning
Muhammad Muzzammil Auzine, Maleika Heenaye-Mamode Khan, Sunilduth Baichoo
Department of Software and Information Systems, University of Mauritius, Reduit, Mauritius

Nuzhah Gooda Sahib, Preeti Bissoonauth-Daiboo
Department of Software and Information Systems, University of Mauritius, Reduit, Mauritius

Xiaohong Gao
Department of Computer Science, Middlesex University, London, England

Abstract—Despite the effectiveness of AI-assisted cancer detection systems, acquiring authorisation for their deployment in clinical settings has proven challenging owing to the inadequate level of explainability of their underlying mechanisms. Due to the lack of transparency in AI-driven systems' decision-making, many medical practitioners are still reluctant to employ AI-assisted diagnoses. Explainable Artificial Intelligence (XAI) is a new topic in AI that has the potential to open the computational black boxes that AI systems pose, allowing for an explanation of a model's predictions. Consequently, in this research work, we have explored Shapley Additive Explanations (SHAP), a model prediction explanation approach, on our developed ensemble model, which is based on averaging the predictions of three convolutional neural networks (InceptionV3, Inception-ResNetV2, and VGG16). The models were trained on the pathological findings of the Kvasir-V2 dataset and achieved an accuracy of 93.17% and F1-scores of 97%. After training the ensemble model, images from the three classes were analysed using SHAP to provide explainability of their deterministic features. The results obtained indicate a favourable and optimistic improvement in the investigation of XAI approaches in healthcare, more specifically in the detection of gastrointestinal cancer.

Index Terms—Gastrointestinal cancer, Endoscopic images, Ensemble model, Explainable AI, SHAP

I. INTRODUCTION

Gastrointestinal cancer is a group of diseases that affects the organs of digestion and absorption in the gastrointestinal (GI) tract. Globally, GI cancers account for 4.8 million (26.3%) of cancer incidence cases and 3.4 million (35.4%) of cancer deaths [1]. Gastrointestinal cancer is a variety of cancer that starts in the digestive (gastrointestinal) tract. Fig. 1 depicts the GI tract, which is a 25-foot-long pathway that travels from the mouth to the anus. According to [2] and [3], the most common types of gastrointestinal cancers are as follows:
1) Gastric (stomach) cancer
2) Esophageal cancer
3) Colorectal cancer
4) Pancreatic cancer
5) Liver cancer

Fig. 1. GI tract [4]

Recent research reports that modifiable risk factors including cigarette smoking, alcohol consumption, infection, obesity and diet are the causes of over 50% of all gastrointestinal cancers [5], [6]. Additionally, according to [2], males are more likely than females to develop gastrointestinal cancers, and that risk rises with age. Given that most diagnoses are made at a late stage, the prognosis is typically poor [7] and hence, site-specific death rates remain consistent with incidence trends. If detected in the earliest stages, GI-related cancers are reported to have a high 5-year survival rate [8]. Nevertheless, a study by [9] revealed that significant diagnostic errors occur as a

result of cognitive and technological issues, notwithstanding the efficiency of traditional screening procedures.

According to the Global Cancer Observatory [1], by 2040, the worldwide rates of mortality and new cases from GI cancers are expected to rise by 73% (5.6 million) and 58% (7.5 million), respectively. It is therefore of utmost priority for researchers to develop reliable systems to assist medical institutions in the diagnosis of GI cancers.

Recent research reveals that artificial intelligence (AI) has the capacity to minimize the rate of misdiagnosis of conventional screening techniques, hence boosting the overall accuracy [10]. This is accomplished via the use of deep learning and machine learning algorithms. But a significant obstacle to such AI-assisted systems is that they are considered as computational "black boxes". Healthcare institutions are hesitant to employ AI-driven systems in diagnosis due to a lack of transparency in their decision-making process, despite the effectiveness of such AI models [11], [12]. In order to acquire the trust of healthcare professionals, it is imperative for AI researchers to provide human-comprehensible explanations while developing medical AI applications. An emerging area of AI, Explainable Artificial Intelligence (XAI) has the ability to solve the computational challenges that AI systems pose, thus enabling an explanation of a model prediction [13].

In this research work, we have therefore investigated Shapley Additive Explanations (SHAP), a model prediction explanation approach introduced by [14], using our developed ensemble model trained on the pathological findings from the publicly accessible Kvasir dataset.

II. LITERATURE REVIEW

Among the various screening techniques available for gastrointestinal cancer, endoscopy, compared to CT and MRI, is considered the most effective procedure since it enables the examination of the complete GI tract and the removal of polyps if required in a single session [15]. Endoscopy is a frequent procedure for diagnosing abnormalities in the upper gastrointestinal tract organs such as the stomach and esophagus. During endoscopic procedures, an endoscope (a tube consisting of a camera, light source and tool channel) is introduced through the mouth and down the esophagus to the duodenum, as illustrated in Fig. 2, allowing doctors to view the upper gastrointestinal tract.

Fig. 2. Endoscopy procedure [16]

Similarly, for the screening of the lower gastrointestinal tract organs consisting of the anus, rectum, colon, and cecum, colonoscopy, which is a type of endoscopy, is the preferred method. A colonoscope, similar to the endoscope, is inserted via the anus and directed through the colon until it reaches the cecum during colonoscopy operations, as shown in Fig. 3.

Fig. 3. Colonoscopy procedure [17]

Several studies have been conducted to establish automated gastrointestinal cancer detection models. According to [18], esophageal cancer diagnostic methods using deep learning and machine learning are growing in popularity. An automated computer-aided application for the early screening of esophageal cancer has been developed by [19]. The authors subsequently used random forest as an ensemble classifier for the classification of esophageal images. Deep learning models, on the other hand, are also being researched. In one study, [20] used a custom dataset of 787 collected images to identify 3 classes, namely normal, benign ulcer, and cancer, by employing deep convolutional neural network (CNN) transfer learning using VGG16, InceptionV3, and ResNet-50. The researchers found that ResNet-50 had the highest accuracy in all three categories.

To identify endoscopic images for early gastric/stomach cancer diagnosis, Liu et al. (2018) [19] employed deep CNN transfer learning (VGG16, InceptionV3, and InceptionResNetV2). InceptionV3 was shown to perform better than VGG16 and InceptionResNetV2, with accuracy, sensitivity, and specificity exceeding 98%. A model for classification of colorectal polyps was developed by [21]. Their approach outperformed endoscopists' visual inspection in terms of recall and accuracy while achieving a similar level of precision.

[22] used noncancerous lesion and gastric cancer (GC) images to assess CNN's diagnostic performance. The model was trained using 386 noncancerous lesion images and 1702 gastric cancer images, respectively. The findings showed that CNN was extremely sensitive, specific, and accurate in detecting early gastric cancer. Furthermore, no statistical difference was observed between the AI system and expert endoscopists, yet both were much higher than non-experts.

Shapley additive explanations for interpretable real-time deep neural networks were presented by [23]. Better real-time performance was demonstrated by the suggested technique. The experimental findings also demonstrated that the technique employed performs better than the current deep learning

approaches. Furthermore, along with satisfactory operational effectiveness and interpretable feedback, the author managed to address the requirements of the colorectal surgeon.

Following the analysis of the previous research in the field of artificial intelligence for gastrointestinal cancer detection, it was discovered that there is room for potential investigation in this research area. The majority of AI models were used to identify abnormalities in medical images. However, despite recent interest from researchers, only a few works have developed a human-comprehensible model to enable an explanation of a model prediction.

III. MATERIAL AND METHODS

This section describes the architecture of the proposed method as well as the XAI-based ensemble model.

A. Architecture of the XAI-based GI Cancer detection system

Fig. 4 illustrates the proposed architecture of the XAI-based GI cancer detection system. The pathological findings from the Kvasir-V2 dataset were employed in this study. An enhanced ensemble model was developed to train the system and finally, the deterministic features of each class were depicted using an XAI technique.

Fig. 4. Architecture of proposed model

B. Dataset

The progress of several computing domains, especially deep learning applications, depends on datasets. Since the datasets must be appropriately labelled, have adequate variety among the images, and have a sufficient number of images, the quality and availability of datasets are crucial. For the training and testing of their proposed models, several researchers and institutes have created a variety of medical imaging datasets. In this study, we utilized the Kvasir dataset, developed by [24] in 2017. The Kvasir dataset is made up of images validated and annotated by medical professionals, with a thousand images per class displaying the pathological findings or endoscopic procedures and anatomical landmarks in the GI tract. However, in this work, we have considered only the pathological findings, which consist of three classes, namely:

(a) Esophagitis - an esophageal mucosa break caused by an inflammation of the esophagus.
(b) Polyps - lesions that can be identified as mucosal outgrowths.
(c) Ulcerative Colitis - a long-term inflammatory condition that affects the colon and rectum.

Fig. 5. Example of pathological findings. (a): Esophagitis, (b): Polyps, (c): Ulcerative Colitis

After applying data augmentation techniques (rotation and zoom) to the original dataset, a new dataset consisting of 2000 images for each class was created to increase the variety in the available dataset.
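To make the augmentation step concrete, the following is a minimal NumPy sketch of rotation and zoom transforms. The image array, its size, and the zoom factor are illustrative assumptions; the paper does not specify its augmentation implementation.

```python
import numpy as np

def rotate90(img: np.ndarray, k: int = 1) -> np.ndarray:
    """Rotate an H x W x C image by k * 90 degrees (a simple stand-in
    for the rotation augmentation described in the text)."""
    return np.rot90(img, k=k, axes=(0, 1))

def center_zoom(img: np.ndarray, factor: float = 1.25) -> np.ndarray:
    """Zoom by cropping the central 1/factor region and resizing it back
    to the original size with nearest-neighbour sampling."""
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    # nearest-neighbour index maps back to the original resolution
    rows = (np.arange(h) * ch / h).astype(int)
    cols = (np.arange(w) * cw / w).astype(int)
    return crop[rows][:, cols]

# Dummy 64x64 RGB "endoscopic" image (hypothetical stand-in for a Kvasir frame)
img = (np.arange(64 * 64 * 3).reshape(64, 64, 3)) % 255
augmented = [rotate90(img, k) for k in (1, 2, 3)] + [center_zoom(img)]
```

Each augmented sample keeps the original spatial dimensions, so the transformed images can be added to the training set directly.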
C. Ensemble Model

Ensemble modeling is the process of combining multiple diverse models, built from a variety of training data sets or algorithms, to predict a result. In order to create a single and comprehensive prediction, the ensemble model aggregates the predictions of several base models. The goal of ensemble models is to decrease the test error of a prediction. This method is employed in a variety of fields, such as speech and image recognition [25]–[27].

In this work, an ensemble model based on the averaging technique was developed by combining the predictions of three pre-trained convolutional neural networks (InceptionV3, Inception-ResNetV2, and VGG16).

InceptionV3 was the first deep CNN employed in the ensemble model's development. It consists of 42 layers and was designed in 2015 by Google [28]. It is an upgraded version of the GoogleNet (InceptionV1) network and is commonly used for image classification. Second, InceptionResNetV2 was used. It is a deep CNN based on the Inception architecture; however, instead of the filter concatenation stage of the Inception architecture, it uses residual connections. It has 164 layers and was developed in 2016 by [29]. Finally, VGG16, developed by [30] in 2014, is the third model employed in the ensemble model's development. It has 16 layers and uses softmax as its classifier.

Furthermore, the three aforementioned models were individually applied on our augmented Kvasir-V2 dataset. The dataset was split into 80% training and 20% validation. Post training of the individual models, the ensemble model was developed by combining the predictions of the three models

using the average technique. The ensemble average approach, as its name suggests, uses an average of the predictions from all the trained CNNs to generate the final prediction. Fig. 6 illustrates the architecture of the ensemble model.

Fig. 6. Architecture of the Ensemble model.
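The averaging step can be illustrated with plain NumPy. The probability values below are hypothetical softmax outputs for a batch of two images over the three classes, not results from the paper:

```python
import numpy as np

# Hypothetical per-model softmax outputs over the 3 classes
# (esophagitis, polyps, ulcerative colitis) for 2 images.
p_inception_v3     = np.array([[0.70, 0.20, 0.10],
                               [0.10, 0.30, 0.60]])
p_inception_resnet = np.array([[0.60, 0.30, 0.10],
                               [0.20, 0.20, 0.60]])
p_vgg16            = np.array([[0.80, 0.10, 0.10],
                               [0.30, 0.30, 0.40]])

def ensemble_average(*prob_maps: np.ndarray) -> np.ndarray:
    """Average the class-probability outputs of the base models;
    the final class is the argmax of the averaged distribution."""
    return np.mean(prob_maps, axis=0)

avg = ensemble_average(p_inception_v3, p_inception_resnet, p_vgg16)
pred = avg.argmax(axis=1)   # final class index per image
```

Because each base model outputs a valid probability distribution, the average is itself a distribution, and the argmax over it yields the ensemble's class decision.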

D. Explainable AI (XAI) in medical imaging

Despite the significant advances in deep learning-based AI in the medical field, the lack of explainability, particularly in complex decision-making situations such as medical image analysis, has made it challenging to achieve acceptance for the use of such AI models in clinical settings [11], [31]. XAI has the potential of bringing human-comprehensible explanations when developing medical AI applications in order to earn the trust of healthcare professionals [12].

Among the five categories into which [13] have classified XAI in medicine and healthcare, in this study, as we are seeking to provide explainability on medical imaging, we have explored the XAI technique of explanation through feature relevance. Shapley Additive Explanations (SHAP) is a method of XAI via feature relevance.

SHAP is a model-agnostic approach developed by [14]. The output of any AI model may be explained using this approach of additive feature attribution. Each feature is given a Shapley value, which indicates how important it is for that specific prediction. The Shapley value measures how much a feature value contributes to the discrepancy between the mean prediction and the actual prediction. Fig. 7 illustrates the architecture of SHAP.

Fig. 7. Architecture of SHAP.

IV. RESULTS & DISCUSSIONS

The initial phase of the experiments was to develop the ensemble model. Firstly, the InceptionV3, Inception-ResNetV2, and VGG16 models were trained on the Kvasir-V2 dataset before aggregating them into the ensemble meta-model. A global average pooling layer followed by a dropout of 0.3 was applied to the original architecture of the three pre-trained CNNs. As a method of leveraging the models, Adam optimisation with sparse categorical crossentropy as the loss function was used. All of the chosen deep CNNs were then trained for 5 epochs with a batch size of 32. Subsequently, softmax is employed for classification.

Fig. 8 presents the classification outcomes achieved during the individual training of the CNN models as well as the developed ensemble model.

Fig. 8. Ensemble model compared to the individual models.

Based on the confusion matrix depicted in Fig. 9, the classification report comprising the F1-score, recall and precision for each class, namely esophagitis, polyps and ulcerative colitis, is shown in Fig. 10.

Fig. 9. Confusion matrix [ensemble model].

Fig. 10. Classification Report
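The per-model training configuration described above (global-average-pooling head, dropout of 0.3, softmax output, Adam with sparse categorical crossentropy, 5 epochs, batch size 32) might look as follows in Keras. This is a configuration sketch under stated assumptions, not the authors' code: the input resolution, ImageNet weights, and the `train_ds`/`val_ds` dataset objects are placeholders.

```python
import tensorflow as tf

NUM_CLASSES = 3  # esophagitis, polyps, ulcerative colitis

def build_classifier(base_fn) -> tf.keras.Model:
    # Pre-trained backbone without its original classification head;
    # the 224x224 input size and ImageNet weights are assumptions.
    base = base_fn(include_top=False, weights="imagenet",
                   input_shape=(224, 224, 3))
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dropout(0.3)(x)
    out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage with placeholder datasets:
# model = build_classifier(tf.keras.applications.InceptionV3)
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```

The same `build_classifier` helper (a name introduced here for illustration) would be called once per backbone before the three trained models are averaged into the ensemble.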

The findings demonstrate that the enhanced ensemble model has improved on the overall accuracy of the individual models and, based on the results from the above classification report, it can be deduced that the model returns a high precision and recall for all three classes, consequently resulting in a high F1-score. Since we are dealing with predictions related to GI cancers, our model requires good coverage of both actual positive and actual negative instances. The F1-score of 97% for each class and an overall accuracy of 93.17% show that our ensemble model has the potential of being further developed for clinical application in the diagnosis of gastrointestinal cancers.
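The relationship between a confusion matrix and the reported precision, recall and F1-scores can be reproduced with a small NumPy computation. The counts below are invented for illustration and do not reproduce Fig. 9:

```python
import numpy as np

# Hypothetical 3-class confusion matrix (rows: actual, cols: predicted),
# standing in for Fig. 9; the paper's exact counts are not shown here.
cm = np.array([[380,  15,   5],    # esophagitis
               [ 10, 385,   5],    # polyps
               [  8,   7, 385]])   # ulcerative colitis

tp = np.diag(cm).astype(float)           # correct predictions per class
precision = tp / cm.sum(axis=0)          # column sums = predicted counts
recall    = tp / cm.sum(axis=1)          # row sums = actual counts
f1        = 2 * precision * recall / (precision + recall)
accuracy  = tp.sum() / cm.sum()
```

High values on the diagonal relative to each row and column are exactly what drives the high per-class precision, recall, and hence F1, reported above.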

A. Model explanation using Shapley Additive Explanations

We used the SHAP partition explainer with a blurring-based masker to visualise which areas of the image contribute to the predictions, in order to explain the correct predictions generated by our ensemble model. Thus, we first used four images from each class that were properly predicted by our model and explained their deterministic properties. Fig. 11 depicts the contributing characteristics of each of the esophagitis, polyps, and ulcerative colitis classes.

As seen in Fig. 11, the chart depicts the real image with sections of it highlighted in red and blue. Shades of red indicate elements that contributed positively to the prediction of that category, whereas shades of blue indicate parts that contributed adversely. It also displays the categories in the order in which the model considers the image to belong. Taking as an example the fourth image from Fig. 11, we can observe that the red shades are mostly concentrated around the area having the esophagitis pathology in its corresponding esophagitis class. However, when we analyse the next two subsequent classes to which the model predicts the image belongs, we see that both images have mostly blue shades on the region having the esophagitis pathology. Similarly, the model predicts and outputs the deterministic features of each tested image.
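The masking idea behind this kind of image explanation (hide a region, measure the change in the predicted-class probability) can be conveyed with a simplified occlusion-style sketch. This is not the exact Shapley computation used by the SHAP partition explainer, and the toy model below is invented for illustration:

```python
import numpy as np

def occlusion_attribution(img, predict_fn, patch=8, fill=0.5):
    """Crude SHAP-like relevance map: mask one patch at a time and
    record the drop in the predicted-class probability. Positive
    values mark regions that support the prediction (red in SHAP
    plots); negative values oppose it (blue)."""
    h, w = img.shape[:2]
    base = predict_fn(img)
    cls = int(np.argmax(base))
    heat = np.zeros((h, w))
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            masked = img.copy()
            masked[top:top + patch, left:left + patch] = fill
            drop = base[cls] - predict_fn(masked)[cls]
            heat[top:top + patch, left:left + patch] = drop
    return heat

# Toy "model": probability of class 0 grows with the mean brightness of
# the top-left quadrant, so that region should get positive attribution.
def toy_predict(img):
    p = float(img[:16, :16].mean())
    return np.array([p, 1.0 - p])

img = np.full((32, 32), 0.2)
img[:16, :16] = 0.9                  # bright, "pathological" region
heat = occlusion_attribution(img, toy_predict)
```

In the resulting map, only the patches overlapping the bright quadrant receive positive attribution, mirroring how red SHAP regions concentrate on the pathology in Fig. 11.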

Fig. 11. Model's Prediction explanation using SHAP

V. CONCLUSION

Several AI-based approaches have been employed to analyse medical images in a variety of domains, with outstanding results in applications such as cancer classification and detection. However, the lack of explainability in their decision-making hinders their application in a clinical setting. Therefore, in this research, we explored SHAP, a model explanation technique, to extract the deterministic features of the pathological findings of gastrointestinal cancers. Firstly, we developed and trained an enhanced ensemble model using the averaging approach by combining three CNNs (InceptionV3, Inception-ResNetV2 and VGG16) trained on the pathological findings of the Kvasir-V2 dataset. Furthermore, the relevant features of each of the pathologies were revealed using the SHAP explainer algorithm. The findings demonstrate that the advancement of XAI models for cancer diagnosis is progressing optimistically and favorably.

ACKNOWLEDGMENT

This project has been funded by the Higher Education Commission (HEC) under grant number TB-2021-01. The funders had no role in the design, implementation, decision to publish or preparation of the manuscript.

REFERENCES

[1] International Agency for Research on Cancer (IARC), "Global Cancer Observatory," accessed December 05, 2022. [Online]. Available: https://gco.iarc.fr/
[2] Yale Medicine, "Gastrointestinal Cancers," https://www.yalemedicine.org/conditions/gastrointestinal-cancers, accessed December 05, 2022. [Online]. Available: https://www.cancer.gov/types/stomach/hp
[3] M. Arnold, C. C. Abnet, R. E. Neale, J. Vignat, E. L. Giovannucci, K. A. McGlynn, and F. Bray, "Global burden of 5 major types of gastrointestinal cancer," Gastroenterology, vol. 159, no. 1, pp. 335–349, 2020.
[4] National Cancer Institute, "Definition of gastrointestinal tract - NCI Dictionary of Cancer Terms," Feb. 2011. [Online]. Available: https://www.cancer.gov/publications/dictionaries/cancer-terms/def/gastrointestinal-tract
[5] F. Islami, A. Goding Sauer, K. D. Miller, R. L. Siegel, S. A. Fedewa, E. J. Jacobs, M. L. McCullough, A. V. Patel, J. Ma, I. Soerjomataram et al., "Proportion and number of cancer cases and deaths attributable to potentially modifiable risk factors in the United States," CA: A Cancer Journal for Clinicians, vol. 68, no. 1, pp. 31–54, 2018.
[6] P. A. van den Brandt and R. A. Goldbohm, "Nutrition in the prevention of gastrointestinal cancer," Best Practice & Research Clinical Gastroenterology, vol. 20, no. 3, pp. 589–603, 2006.
[7] C. Allemani, T. Matsuda, V. Di Carlo, R. Harewood, M. Matz, M. Nikšić, A. Bonaventure, M. Valkov, C. J. Johnson, J. Estève, O. J. Ogunbiyi, G. Azevedo E Silva, W.-Q. Chen, S. Eser, G. Engholm, C. A. Stiller, A. Monnereau, R. R. Woods, O. Visser, G. H. Lim, J. Aitken, H. K. Weir, M. P. Coleman, and CONCORD Working Group, "Global surveillance of trends in cancer survival 2000-14 (CONCORD-3): analysis of individual records for 37 513 025 patients diagnosed with one of 18 cancers from 322 population-based registries in 71 countries," Lancet, vol. 391, no. 10125, pp. 1023–1075, Mar. 2018.
[8] B. Moghimi-Dehkordi and A. Safaee, "An overview of colorectal cancer survival rates and prognosis in Asia," World J Gastrointest Oncol, vol. 4, no. 4, pp. 71–75, Apr. 2012. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3334382/
[9] C. T. Frenette and W. B. Strum, "Relative rates of missed diagnosis for colonoscopy, barium enema, and flexible sigmoidoscopy in 379 patients with colorectal cancer," Journal of Gastrointestinal Cancer, vol. 38, no. 2, pp. 148–153, 2007.
[10] Z. Ahmad, S. Rahim, M. Zubair, and J. Abdul-Ghafar, "Artificial intelligence (AI) in medicine, current applications and future role with special emphasis on its potential and promise in pathology: present and future impact, obstacles including costs and acceptance among pathologists, practical and philosophical considerations. A comprehensive review," Diagn Pathol, vol. 16, p. 24, Mar. 2021. [Online]. Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7971952/
[11] J. Amann, A. Blasimme, E. Vayena, D. Frey, V. I. Madai, and the Precise4Q consortium, "Explainability for artificial intelligence in healthcare: a multidisciplinary perspective," BMC Medical Informatics and Decision Making, vol. 20, no. 1, p. 310, Nov. 2020. [Online]. Available: https://doi.org/10.1186/s12911-020-01332-6
[12] Y. Zhang, Y. Weng, and J. Lund, "Applications of Explainable Artificial Intelligence in Diagnosis and Surgery," Diagnostics, vol. 12, no. 2, p. 237, Feb. 2022. [Online]. Available: https://www.mdpi.com/2075-4418/12/2/237
[13] G. Yang, Q. Ye, and J. Xia, "Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond," Information Fusion, vol. 77, pp. 29–52, Jan. 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1566253521001597
[14] S. Lundberg and S.-I. Lee, "A Unified Approach to Interpreting Model Predictions," Nov. 2017. [Online]. Available: http://arxiv.org/abs/1705.07874
[15] T. Takahashi, Y. Saikawa, and Y. Kitagawa, "Gastric cancer: current status of diagnosis and treatment," Cancers, vol. 5, no. 1, pp. 48–63, 2013.
[16] Johns Hopkins Medicine, "Upper GI Endoscopy," Dec. 2021. [Online]. Available: https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/upper-gi-endoscopy
[17] Mayo Clinic, "Colonoscopy." [Online]. Available: https://www.mayoclinic.org/tests-procedures/colonoscopy/about/pac-20393569
[18] C. S. Bang, J. J. Lee, and G. H. Baik, "Computer-aided diagnosis of esophageal cancer and neoplasms in endoscopic images: a systematic review and meta-analysis of diagnostic test accuracy," Gastrointest Endosc, vol. 93, no. 5, pp. 1006–1015.e13, May 2021.
[19] M. H. Janse, F. van der Sommen, S. Zinger, E. J. Schoon et al., "Early esophageal cancer detection using RF classifiers," in Medical Imaging 2016: Computer-Aided Diagnosis, vol. 9785. SPIE, 2016, pp. 344–351.
[20] J. H. Lee, Y. J. Kim, Y. W. Kim, S. Park, Y.-I. Choi, Y. J. Kim, D. K. Park, K. G. Kim, and J.-W. Chung, "Spotting malignancies from gastric endoscopic images using deep learning," Surg Endosc, vol. 33, no. 11, pp. 3790–3797, Nov. 2019.
[21] R. Zhang, Y. Zheng, T. W. C. Mak, R. Yu, S. H. Wong, J. Y. W. Lau, and C. C. Y. Poon, "Automatic Detection and Classification of Colorectal Polyps by Transferring Low-Level CNN Features From Nonmedical Domain," IEEE J Biomed Health Inform, vol. 21, no. 1, pp. 41–47, Jan. 2017.
[22] L. Li, Y. Chen, Z. Shen, X. Zhang, J. Sang, Y. Ding, X. Yang, J. Li, M. Chen, C. Jin, C. Chen, and C. Yu, "Convolutional neural network for the diagnosis of early gastric cancer based on magnifying narrow band imaging," Gastric Cancer, vol. 23, no. 1, pp. 126–132, Jan. 2020.
[23] S. Wang, Y. Yin, D. Wang, Z. Lv, Y. Wang, and Y. Jin, "An interpretable deep neural network for colorectal polyp diagnosis under colonoscopy," Knowledge-Based Systems, vol. 234, p. 107568, Dec. 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0950705121008303
[24] K. Pogorelov, K. R. Randel, C. Griwodz, S. L. Eskeland, T. de Lange, D. Johansen, C. Spampinato, D.-T. Dang-Nguyen, M. Lux, P. T. Schmidt, M. Riegler, and P. Halvorsen, "Kvasir: A multi-class image dataset for computer aided gastrointestinal disease detection," in Proceedings of the 8th ACM on Multimedia Systems Conference (MMSys'17). New York, NY, USA: ACM, 2017, pp. 164–169.
[25] R. Nisbet, G. Miner, and K. Yale, "Preface," in Handbook of Statistical Analysis and Data Mining Applications (Second Edition). Boston: Academic Press, Jan. 2018, pp. xvii–xix.
[26] V. Kotu and B. Deshpande, "Chapter 2 - Data Mining Process," in Predictive Analytics and Data Mining. Boston: Morgan Kaufmann, Jan. 2015, pp. 17–36.
[27] M. A. Ganaie, M. Hu, A. K. Malik, M. Tanveer, and P. N. Suganthan, "Ensemble deep learning: A review," Engineering Applications of Artificial Intelligence, vol. 115, p. 105151, Oct. 2022. [Online]. Available: http://arxiv.org/abs/2104.02395
[28] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception Architecture for Computer Vision," arXiv, 2015. [Online]. Available: https://arxiv.org/abs/1512.00567
[29] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning," arXiv, 2016. [Online]. Available: https://arxiv.org/abs/1602.07261
[30] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," arXiv, 2014. [Online]. Available: https://arxiv.org/abs/1409.1556
[31] B. H. M. van der Velden, H. J. Kuijf, K. G. A. Gilhuijs, and M. A. Viergever, "Explainable artificial intelligence (XAI) in deep learning-based medical image analysis," Medical Image Analysis, vol. 79, p. 102470, Jul. 2022. [Online]. Available: http://arxiv.org/abs/2107.10912
