Paper F
Diagnostic Report
Kaliappan Madasamy1, Vimal Shanmuganathan2, Nithish3*, Vishakan4, Vijayabhaskar5, Muthukumar6, Balamurali Ramakrishnan7, Ramnath M.8
1,2,8 Deep Learning Lab, Department of Artificial Intelligence and Data Science, Ramco Institute of Technology, Rajapalayam, Tamilnadu, India
3*,4 Graduate Student, Deep Learning Lab, Department of Artificial Intelligence and Data Science, Ramco Institute of Technology, Rajapalayam, Tamilnadu, India
2 Department of Software Engineering, Haliç University, Istanbul, Turkey
5 Hera Diagnostics, Rajapalayam, Tamilnadu, India
6 Department of Pathology, Kalasalingam Medical College and Hospital, Krishnankoil, Tamilnadu, India
7 Centre for Non Linear Systems, Chennai Institute of Technology, Chennai, Tamilnadu, India
*Corresponding author: [email protected]
{[email protected], [email protected], [email protected]*, [email protected], vijay@hera-
Keywords: YOLOv5, Residual Neural Network, histopathology image, Colon cancer, Breast cancer
1. Introduction
The origin of the concept to use deep learning techniques for identifying cancer images lies
at the intersection of several factors. One of the primary motivations for this research is the need to improve
cancer diagnosis and treatment, which is a pressing public health issue globally. Cancer is a complex disease
that requires accurate and early diagnosis in order to increase the chances of successful treatment. However,
traditional diagnostic methods for cancer, such as histopathology, can be time-consuming and require
extensive expertise to interpret accurately [1]. Deep learning techniques have emerged as a promising
approach for improving the accuracy and efficiency of cancer diagnosis. Deep learning algorithms can learn to
recognize patterns in large datasets and can be trained to identify cancerous cells and tissues from
histopathology images. This is particularly useful because histopathology images contain a wealth of
information that is not always visible to the human eye [2]. The use of deep learning for cancer diagnosis has
been a topic of research for several years. The first studies in this area focused on using convolutional neural networks (CNNs) to analyze histopathology images [19]. CNNs are a class of deep learning models that is particularly well-suited to image analysis tasks, as they can automatically learn to detect features at different levels of abstraction [15]. By training CNNs on large datasets of histopathology images, researchers have been
able to achieve high levels of accuracy in detecting cancerous cells and tissues.
However, while CNNs have been successful in many applications, they are not without
limitations. One of the main challenges of using CNNs for cancer diagnosis is the need for large amounts of
annotated data [22][23]. Annotated data is data that has been labeled by human experts to indicate the presence or absence of cancerous cells or tissues. This is a time-consuming and expensive process that requires extensive expertise in histopathology. Additionally, CNNs are susceptible to overfitting, which occurs when the model becomes too specialized to the training data and performs poorly on new data. ResNet
is a type of CNN that uses residual connections to allow the network to learn more efficiently from very deep
architectures. This has been shown to be particularly effective in applications where large amounts of data are
available. In addition to improving the accuracy of cancer diagnosis, deep learning techniques have also been
used to develop predictive models for cancer outcomes. By analyzing large datasets of patient data, deep
learning algorithms can learn to identify patterns and risk factors that are associated with different outcomes.
This can help to improve patient care by identifying patients who are at high risk for poor outcomes and
developing personalized treatment plans.
The genesis of the idea to use deep learning techniques for identifying cancer images can
also be traced to advances in technology [11]. The availability of large datasets of histopathology images, as
well as improvements in computing power and storage, have made it possible to train deep learning
algorithms on a scale that was previously not possible. Additionally, the development of open-source deep
learning frameworks, such as TensorFlow and PyTorch, has made it easier for researchers to develop and test
deep learning models for cancer diagnosis. The inception of the notion to use deep learning techniques for
identifying cancer images can be traced to a combination of factors, including the need to improve cancer
diagnosis and treatment, advances in technology, and the development of sophisticated deep learning
architectures[25]. The use of deep learning for cancer diagnosis has the potential to revolutionize the field by
improving the accuracy and efficiency of diagnosis and developing predictive models for patient outcomes.
However, there are still challenges that need to be addressed, including the need for large annotated datasets
and the risk of overfitting.
2. Literature survey
Cancer detection using histopathology images is a very active area of research in the field of deep
learning and medical image analysis. There have been numerous studies and research papers published in
recent years that have explored the use of convolutional neural networks (CNNs) and other deep learning
models for this task[9]. One of the most popular and successful deep learning architectures for image
classification is ResNet (short for "Residual Network"), which was introduced in 2015 by Microsoft Research.
ResNet has been used for a wide range of image classification tasks, including cancer detection in
histopathology images. In 2019, a team of researchers from the University of California, Berkeley, and UCSF
published a study in the journal Nature that used deep learning to analyze histopathology images for cancer
detection. The study used a ResNet model and achieved a classification accuracy of 92.5% for breast cancer
metastasis detection.
Another study published in the Journal of Pathology Informatics in 2020 used deep learning models, including
ResNet, to classify lung cancer histopathology images. The study achieved a classification accuracy of 91.4%
and demonstrated the potential of deep-learning models for accurate and efficient cancer detection.
Overall, deep learning-based cancer detection using histopathology images is a rapidly advancing field with
many promising developments[16][17]. While there are still challenges to overcome, such as the limited
availability of high-quality annotated datasets and the need for interpretability in model predictions, the
potential benefits of these models for improving cancer diagnosis and treatment make this an important area of
research. Cancer is a major public health concern in India, with an estimated 1.39 million new cancer cases
and 7.8 lakh cancer-related deaths occurring in the country in 2020, according to the National Cancer Registry
Programme. Histopathology plays a crucial role in the diagnosis and treatment of cancer, and the use of
artificial intelligence (AI) techniques such as deep learning is increasingly being explored to aid in the
detection and classification of cancer cells in histopathology images.
There have been several studies and initiatives in India exploring the use of deep learning
and other AI techniques[20] for cancer detection using histopathology images. One study published in the
journal Medical Image Analysis in 2020 used a deep learning model to classify breast cancer histopathology
images with an accuracy of 91.6%. Another study published in the journal Computer Methods and Programs
in Biomedicine in 2021 used a convolutional neural network (CNN) to classify gastric cancer histopathology
images with an accuracy of 93.1%.
However, despite these promising developments, there are still challenges to be addressed in
the application of deep learning and other AI techniques for cancer detection using histopathology images in
India. These include issues related to data quality, data privacy, and the need for standardization and
validation of AI-based tools. For example, a study published in the Journal of the American Medical
Association found that a deep learning algorithm was able to accurately diagnose skin cancer at a level
comparable to dermatologists. Another study published in Nature found that a deep learning algorithm was
able to accurately identify breast cancer on mammograms.
Despite these promising results, there is still much work to be done in developing and
validating deep learning algorithms for cancer diagnosis [9]. One challenge is the need for large datasets of
high-quality histopathology images to train these algorithms. Additionally, there is a need for rigorous
evaluation and validation of these algorithms to ensure that they are accurate, reliable, and safe for clinical
use. In the context of the current status, the proposed work on using deep learning for cancer detection using
histopathology images is of great importance. Early detection of cancer [3] is critical in improving survival
rates and reducing the economic burden of cancer treatment. In addition, the use of deep learning algorithms
for cancer diagnosis has the potential to improve the accuracy and efficiency of cancer diagnosis, leading to
better patient outcomes. The envisioned work on using deep learning for cancer detection using
histopathology images is of great importance in the context of the current status of cancer diagnosis and
treatment[4][8]. With the potential to improve the accuracy and speed of cancer diagnosis, deep learning has
the potential to revolutionize cancer care and improve patient outcomes. The World Health Organization
estimates that cancer is the second leading cause of death globally, accounting for about 10 million deaths in
2020. The use of deep learning algorithms for cancer diagnosis has the potential to reduce the number of
unnecessary biopsies and surgeries, which can lead to significant cost savings for patients and healthcare
systems. According to a study published in The Lancet, the global economic burden of cancer was estimated
to be $1.16 trillion in 2020. The accuracy of histopathology-based cancer diagnosis can vary widely
depending on the type of cancer, with reported diagnostic accuracy ranging from 70% to 95%. In a study
published in Nature, a deep learning algorithm was able to accurately identify lung cancer on CT scans, with a
sensitivity of 94% and a specificity of 93%. The use of deep learning algorithms for cancer diagnosis has the
potential to improve access to care for patients in underserved areas, as it can reduce the need for expert
pathologists to physically examine tissue samples.
3. Data Sources and Diagnostic Approaches in Cancer Detection:
3.1 Places to get reliable data from:
The paper focuses primarily on detecting cancer from histopathology images. There are several reliable websites where you can find histopathology image datasets for cancer detection using deep learning. Some of these are:
3.1.1 The Cancer Genome Atlas (TCGA):
TCGA provides a comprehensive collection of publicly available cancer genomic and histopathology data.
You can download whole slide images (WSI) from the TCGA website for different types of cancers, including
breast, lung, colon, and prostate cancer.
3.1.2. The Cancer Imaging Archive (TCIA):
TCIA is a public repository of cancer imaging data. You can find a large number of histopathology images of
different cancer types, including breast, lung, and prostate cancer, in the TCIA dataset.
3.1.3. The PatchCamelyon (PCam) dataset:
The PCam dataset contains 327,680 color images of lymph node sections with metastatic tissue. It is
specifically designed for training deep learning models for cancer detection.
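For readers who wish to experiment with this dataset, the short sketch below illustrates one way to load PCam patches through the TensorFlow Datasets catalogue; it assumes the publicly listed "patch_camelyon" builder and is an illustrative loading example rather than part of our pipeline.

# A minimal sketch of loading PCam patches with TensorFlow Datasets
# (assuming the "patch_camelyon" builder; split names follow TFDS defaults).
import tensorflow_datasets as tfds

train_ds = tfds.load("patch_camelyon", split="train", as_supervised=True)
for image, label in train_ds.take(1):
    # Each sample is a 96x96 RGB patch; label 1 indicates metastatic tissue
    # in the centre region of the patch.
    print(image.shape, int(label))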
Malignant tumors, on the other hand, are a mass of proliferating cells called neoplastic or tumor cells.
These cells grow very rapidly, invading and damaging the surrounding normal tissues. As these cells actively
divide and grow, they also starve the normal cells by competing for vital nutrients.
Characteristics of Malignant Tumors are mentioned below:
Uncontrolled Cell Growth: Cells within malignant tumors divide rapidly and uncontrollably, often
forming irregular masses.
Invasion: Malignant tumor cells have the ability to invade surrounding tissues and structures. They
can infiltrate neighboring tissues, blood vessels, and lymphatic vessels.
Metastasis: The most distinctive feature of malignant tumors is their potential to metastasize. Cells
can break away from the primary tumor and travel via the bloodstream or lymphatic system to distant
sites, where they establish secondary tumors.
All the above-mentioned characteristics are clearly visible and identifiable in the considered dataset below.
3.1.7. Diagnostic methods for cancer:
There are several methods for diagnosing cancer, and the specific method used will depend on the type of cancer and the individual's symptoms. Some of the most common methods of cancer diagnosis include:
Biopsy: This is the most reliable way to diagnose cancer. A small sample of tissue is removed from the
affected area and examined under a microscope for abnormal cells.
Imaging tests: These include X-rays, CT scans, MRI scans, and PET scans. These tests can help identify
tumors and determine the size and location of the cancer.
Endoscopy: This involves inserting a thin, flexible tube with a camera into the body to examine the inside of
organs or tissues.
Molecular testing: This is a type of testing that looks for changes in the DNA or other molecules that are
specific to certain types of cancer.
4. Methodology
4.1 Proposed System
Our model is specifically designed to deal with colon and breast cancer images only. This means that
it has been trained on a large dataset of images of these two types of cancer, and it has been optimized to
accurately classify new images as either colon or breast cancer. The model has been developed using
advanced machine learning techniques, including deep learning algorithms [12], which allow it to analyze
complex patterns and features in the images that are characteristic of each type of cancer. By focusing
exclusively on these two types of cancer, our model is able to achieve a high degree of accuracy in its
predictions and provide valuable insights into the diagnosis and treatment of colon and breast cancer. While
our model may not be suitable for detecting other types of cancer, its focused approach allows it to excel in its
specific domain, making it a valuable tool for researchers, clinicians, and patients alike.
4.2.1 ResNet approach:
ResNet152V2 is a convolutional neural network (CNN) model [14] that has been pre-trained on a
large dataset of images called ImageNet. It is a variant of the original ResNet model, which was introduced to
address the problem of vanishing gradients in very deep neural networks. ResNet152V2 is a very deep neural
network that contains 152 layers, and it has been shown to be effective in a wide range of image classification
tasks. The ResNet152V2 model is known for its ability to extract high-level features from images, which
makes it an excellent choice for transfer learning applications. Transfer learning involves using a pre-trained
model as a starting point for a new task, rather than training a new model from scratch. By utilizing the pre-
trained ResNet152V2 model, it is possible to achieve high accuracy on image classification tasks related to
colon and breast cancer with relatively few training examples. In our Model 2 we used ResNet with a transfer learning approach. In this ResNet model we do not split the data into separate train and test sets; instead, we rely on the deep transfer learning approach, which makes the model simpler to work with. The code defines and trains a ResNet-based model that can identify colon and breast cancer in images.
The pre-trained model part of the code imports a pre-trained model called ResNet152V2, which has
already learned how to identify many different objects in images. The code then adds a few layers to the
ResNet152V2 model to "fine-tune" it for the specific task of identifying colon and breast cancer. The compile
function of the model specifies the optimizer, loss function, and evaluation metric to be used during training.
The fit function[24] trains the model on the provided training data for a certain number of epochs (here, 10).
The model's performance on a separate validation dataset is evaluated during training, and the training history
is recorded. The goal of this code is to train a model that can accurately identify colon and breast cancer from images. By doing so, this model could potentially assist in the early detection and treatment of these types of cancers with relatively few training examples.
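To make the described pipeline concrete, the following minimal sketch shows one way such a fine-tuning setup could be written in TensorFlow/Keras. The directory paths, 224x224 input size, batch size, and two-class setup are illustrative assumptions rather than the exact configuration used in our experiments.

# Minimal transfer-learning sketch: frozen ResNet152V2 backbone plus a new head.
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input resolution
NUM_CLASSES = 2         # e.g. benign vs. adenocarcinoma (assumed)

# Hypothetical directories of histopathology patches, one sub-folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/histopathology/train", image_size=IMG_SIZE,
    batch_size=32, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/histopathology/val", image_size=IMG_SIZE,
    batch_size=32, label_mode="categorical")

# Pre-trained ResNet152V2 backbone, frozen so only the new head is trained.
base = tf.keras.applications.ResNet152V2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adamax",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(train_ds, validation_data=val_ds, epochs=10)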
After building the model, we were able to derive a few insights from the cancer cells, which are listed below:
1. Cells appear tightly packed in slides annotated as 'adenocarcinoma'.
2. Cells appear loosely packed in slides annotated as 'benign'.
The learning rate was decreased by a factor of 0.5. Only 10% of the images in the dataset were used for testing and validation, whereas 90% were used for training. In order to obtain the best outcomes, we applied a number of hyper-parameter techniques, such as regularization and optimization using the AdaMax and SGD optimizers with the categorical cross-entropy loss function. The optimal values for all of the tested model's hyper-parameters [27] are shown in Table 1.
Table 1: ResNet Parameter Values

Parameters        Value     Best value
No. of Epochs     16        16
Batch size        40/80     80
Activation        ReLU      ReLU
Dropout           0.8       0.7
Loss              10        10
After training and testing, we compare the training and validation loss with respect to the number of epochs, as shown in Figure 3. We observe that as the number of epochs increases, the loss is minimized and the accuracy increases.
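As an illustration of how this training configuration could be wired up, the sketch below (reusing the `model` built in the previous sketch) uses a 90/10 split and a callback that halves the learning rate when the validation loss plateaus; the SGD settings, seed, and patience value are assumptions for illustration only.

# Assumed wiring of the described setup: 90/10 split, SGD or AdaMax,
# categorical cross-entropy, and a 0.5 learning-rate reduction on plateau.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/histopathology", validation_split=0.1, subset="training",
    seed=42, image_size=(224, 224), batch_size=80, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/histopathology", validation_split=0.1, subset="validation",
    seed=42, image_size=(224, 224), batch_size=80, label_mode="categorical")

# Halve the learning rate whenever validation loss stops improving.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=2, verbose=1)

# `model` is the ResNet152V2-based model defined in the earlier sketch.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
              loss="categorical_crossentropy", metrics=["accuracy"])
history = model.fit(train_ds, validation_data=val_ds,
                    epochs=16, callbacks=[reduce_lr])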
In this approach we take only 10 parameters for the analysis. We then expand the parameter set to 30 by including the standard error and the worst value of each measurement. By doing so, we are able to increase the accuracy of the model to a higher extent. With this approach we obtain an accuracy of 92 percent on the training data set and 90 percent on the test data set.
The parameters that we have taken into account here are described below.
The goal of linear regression is to estimate the values of the coefficients that minimize the sum of squared errors (SSE) between the predicted values and the actual values of the output variable. This is typically done using a method called least squares, which involves finding the values of the coefficients that minimize the sum of the squared differences between the predicted and actual values of y.
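The following small example illustrates ordinary least squares on synthetic data; the features and coefficients are randomly generated for illustration and are not the parameters used in our analysis.

# Ordinary least squares on synthetic data: the coefficients are chosen
# to minimize the sum of squared errors (SSE) between predictions and y.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                 # 10 illustrative features
true_w = rng.normal(size=10)
y = X @ true_w + rng.normal(scale=0.1, size=100)

# Add an intercept column and solve the least-squares problem.
Xb = np.hstack([np.ones((100, 1)), X])
coef, _, _, _ = np.linalg.lstsq(Xb, y, rcond=None)

y_hat = Xb @ coef
sse = np.sum((y - y_hat) ** 2)                 # the quantity being minimized
print("intercept:", coef[0], "SSE:", sse)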
However, they also have limitations, such as their sensitivity to outliers and the assumption of
linearity between the input and output variables. As such, it is important to carefully evaluate the assumptions
and limitations of linear regression models before using them for predictive modelling [21].
4.2.3 The labelling approach using annotations on the histopathology images:
In this approach, we label the images using labelling tools and then use them in the model. We progressed to this approach because the unsupervised method was not able to explain why a cell is cancerous; here, we try to obtain the reason for a cell being classified as cancerous. The reliable parameters on which a cell is classified as cancerous or benign are listed below:
a. Cell polarity: In cancer cells, the nucleus polarity is always lost.
b. Cell size and shape: Cancer cells show huge variation in their cell shape and size.
Invasion into the stroma: The extra or surplus cancerous cells are also found to invade into the stroma.
c. Mitotic figures: Abnormal mitotic growths are observed in cancer cells.
d. Nucleus size and shape: Cancerous cells show a huge variation in the nucleus size and shape. The nucleus of cancer cells appears markedly distorted.
e. Chromatin content: Normal or hyperchromatic.
f. Increased nuclear-cytoplasmic ratio.
In this approach we trained our model based on the listed parameters and trained it to deliver the possible reason for cancer along with the output. Using these parameters we were also able to observe a few other points:
● Crowding of glands
● Confined to mucosa
● Loss of normal architecture
● Increased number of mitotic figures
● Severe variation in cell size and shape
The YOLOv5 model is a state-of-the-art object detection model that is designed to accurately and
efficiently detect objects in real-time images and videos. It is an upgrade to previous YOLO versions, with
better performance, speed, and accuracy. The YOLOv5 model is based on a deep neural network architecture,
which is trained on a large dataset of labeled images [18]. During training, the model learns to identify and
classify objects in images, as well as predict their location and size. This allows the model to accurately detect
objects of different shapes, sizes, and orientations, even when they are partially occluded or in cluttered
environments. The YOLOv5 model uses a single-stage object detection approach, which means that it directly
predicts the bounding boxes and class probabilities for all objects in an image in a single pass. This is in
contrast to two-stage approaches, which first generate region proposals and then classify them. The YOLOv5
model is designed to be fast and efficient, with inference times of just a few milliseconds per image on a
typical GPU. This makes it well-suited for real-time applications such as video surveillance, robotics, and
autonomous vehicles.
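For illustration, the sketch below shows one common way to load a YOLOv5 model through PyTorch Hub and run inference on a single image; the custom weights path and the image filename are hypothetical placeholders, not artifacts from our experiments.

# Sketch: load YOLOv5 through PyTorch Hub and run inference on one image.
# "runs/train/exp/weights/best.pt" is a hypothetical path to custom weights
# trained on the annotated histopathology dataset.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/exp/weights/best.pt")
model.conf = 0.25                      # confidence threshold for detections

results = model("sample_histopathology_patch.jpg")
results.print()                        # summary of detected cell classes
detections = results.pandas().xyxy[0]  # bounding boxes, confidences, labels
print(detections[["name", "confidence"]])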
For the cancer cell detection model, where we used an annotated dataset of cell types in the histopathology samples, we were able to train a YOLOv5 model with very good accuracy. When we ran the test inference, the model was able to produce the following results.
Precision-confidence curve (PRC) is a graphical plot of the precision and confidence of a binary classifier for
different thresholds. The precision is the fraction of positive predictions that are actually positive, while the
confidence is the probability that a positive prediction is correct. PRCs are used to evaluate the performance of
binary classifiers, especially when the classes are imbalanced.
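As a simple illustration, such a curve can be computed by sweeping the confidence threshold, for example with scikit-learn; the labels and confidences below are made-up values rather than our model's outputs.

# Illustrative computation of a precision-confidence curve: precision is
# evaluated at every possible confidence threshold of the classifier.
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])        # example labels
y_conf = np.array([0.9, 0.8, 0.75, 0.6, 0.55,             # example confidences
                   0.5, 0.4, 0.3, 0.2, 0.1])

precision, recall, thresholds = precision_recall_curve(y_true, y_conf)

# precision[i] is the precision when only predictions with confidence
# >= thresholds[i] are counted as positive.
for t, p in zip(thresholds, precision[:-1]):
    print(f"confidence >= {t:.2f}: precision = {p:.2f}")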
The final accuracy metrics from TensorBoard showed a precision above 86%, and the other important accuracy metrics are also displayed. Precision and recall are two important metrics used to
evaluate the performance of machine learning models in healthcare. Precision measures the fraction of
positive predictions that are actually positive, while recall measures the fraction of actual positives that are
predicted as positive. In healthcare, precision is often more important than recall. This is because false
positives can have serious consequences, such as unnecessary tests or treatments. Recall is also important in
healthcare, especially for diseases that are rare or difficult to diagnose. For example, a model that predicts that
a patient has a rare disease with high recall will be more likely to identify patients who actually have the disease, even if it also flags some patients who do not have the disease. The choice of which metric is more
important depends on the specific application. In general, precision is more important when false positives are
more serious than false negatives, and recall is more important when false negatives are more serious than
false positives.
Use of precision in our case (predicting cancer): a model that predicts that a patient has cancer with high precision will rarely flag patients who do not actually have cancer. This is important because false positives can have serious consequences, such as unnecessary tests or treatment.
Precision is the fraction of predicted bounding boxes that are correctly classified. Recall is the fraction of
ground-truth bounding boxes that are correctly classified. Mean Average Precision (mAP) is a measure of the
overall performance of an object detection model. The evaluation results underscore the efficacy of the
proposed approach, with precision achieving 87%, recall reaching 82%, and a mean average precision at 50%
IoU of 88%. These results serve as a testament to the model's robust performance in accurately identifying
both benign and malignant cancer cases.
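As a worked illustration of these definitions, the snippet below computes precision and recall from true-positive, false-positive, and false-negative counts; the counts are made up only to mirror the reported 87% precision and 82% recall and are not the actual evaluation counts.

# Worked example of the detection metrics quoted above, using illustrative
# counts of matched bounding boxes (not the paper's actual confusion counts).
def precision_recall(tp, fp, fn):
    # Precision = TP / (TP + FP); Recall = TP / (TP + FN)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# e.g. 87 correctly detected cells, 13 spurious boxes, 19 missed cells
p, r = precision_recall(tp=87, fp=13, fn=19)
print(f"precision = {p:.2f}, recall = {r:.2f}")   # ~0.87 and ~0.82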
5. Conclusion:
The proposition to employ deep learning methods for the identification of cancer images, with a
particular focus on colon and breast cancer, emerges from the urgent need to enhance cancer diagnosis and
treatment. By utilizing advanced artificial intelligence techniques, such as the CNN model, the proposed
mechanism aims to accurately detect and predict the presence of benign and malignant cancer. The research
also addresses the authentication of clinical reports, expedites the generation of pathologist reports using AI,
and enables histopathology image analysis for precise classification of cancer types. Comparing the proposed CNN model with other state-of-the-art models would involve the following considerations:
● Architectural Choices: Different CNN architectures might be chosen based on the complexity of the problem and the available data. State-of-the-art architectures like ResNet, Inception, and DenseNet have demonstrated improved performance by addressing issues like vanishing gradients and information bottlenecks.
● Transfer Learning: Many state-of-the-art models leverage networks pre-trained on large image datasets (ImageNet) and then fine-tune them for specific tasks like cancer detection. This transfer learning approach can significantly improve performance, especially when the dataset for the specific task is limited.
● Ensemble Methods: Combining predictions from multiple models or different versions of the same model can lead to improved accuracy and robustness.
● Computational Resources: Some state-of-the-art models may require more computational resources for training and inference, which can impact their practicality in real-world applications.
The ultimate goal is to empower pathologists to provide timely and accurate reports, leading to informed treatment decisions and a reduction in cancer morbidity and mortality. Future directions can be summarized as follows:
● Multi-Cancer Integration: Extend the research to diverse cancer types, creating a multi-cancer model, and fuse data from various sources, such as radiology and genomics, for comprehensive diagnostics.
● Explainable AI (XAI): Improve model transparency by developing methods that clarify the model's decision rationale, boosting trust among pathologists and clinicians and refining accuracy while enhancing understanding of AI-generated conclusions.
6. Acknowledgement:
We convey our sincere thanks to the Deep Learning Laboratory, Department of Artificial Intelligence and Data Science, Ramco Institute of Technology, and to the Department of Pathology, Kalasalingam Medical College and Hospital, Krishnankoil, Tamilnadu, India.
7. References:
[1] P. S. Roy and B. J. Saikia, Cancer and cure: a critical analysis, Indian Journal of Cancer, vol. 53, no. 3,
pp. 441-442, 2016.
[2] L. A. Torre, R. L. Siegel, E. M. Ward, and A. Jemal, Global cancer incidence and mortality rates and
trends--an update, Cancer Epidemiology Biomarkers & Prevention, vol. 25, no. 1, pp. 16–27, 2016.
[3] F. Chollet, Xception: deep learning with depthwise separable convolutions, in Proceedings of the 30th
IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pp. 1251–1258,
Honolulu, HI, USA, July 2017.
[4] M. T. Sadiq, H. Akbari, A. U. Rehman et al., Exploiting feature selection and neural network
techniques for identification of focal and nonfocal EEG signals in TQWT domain, Journal of
Healthcare Engineering, vol. 2021, Article ID 6283900, 24 pages, 2021.
[5] S. Ren, K. He, R. Girshick, and J. Sun, Faster R-CNN: towards real-time object detection with region
proposal networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6,
pp. 1137–1149, 2017.
[6] Z. Zhong, L. Sun, and Q. Huo, An anchor-free region proposal network for Faster R-CNN-based text
detection approaches, International Journal on Document Analysis and Recognition, vol. 22, no. 3, pp.
315–327, 2019.
[7] Vrinten, L. M. McGregor, M. Heinrich et al., What do people fear about cancer? A systematic review
and meta-synthesis of cancer fears in the general population, Psychooncology, vol. 26, no. 8, pp. 1070–
1079, 2017; M. Karamanou, E. Tzavellas, K. Laios, M. Koutsilieris, and G. Androutsos, Melancholy as
a risk factor for cancer: a historical overview, JBUON, vol. 21, no. 3, pp. 756–759, 2016.
[8] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, Learning transferable architectures for scalable image
recognition, 2018, https://arxiv.org/abs/1707.07012v4.
[9] F. Abbas-Aghababazadeh, Q. Mo, and B. L. Fridley, Statistical genomics in rare cancer, Seminars in
Cancer Biology, vol. 61, pp. 1–10, 2020.
[10] M. Asif, W. U. Khan, H. M. R. Afzal et al., Reduced-complexity LDPC decoding for next-generation
IoT networks, Wireless Communications and Mobile Computing, vol. 2021, Article ID 2029560, 10
pages, 2021.
[11] R. Junejo, M. K. A. Kaabar, and S. Mohamed, Future robust networks: current scenario and beyond
for 6G, IMCC Journal of Science, vol. 11, no. 1, pp. 67–81, 2021.
[12] K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in Proceedings of
the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 770–778,
Las Vegas, NV, USA, June 2016.
[13] S. Balajee and M. P. Hande, History and evolution of cytogenetic techniques: current and future
applications in basic and clinical research, Mutation Research/Genetic Toxicology and Environmental
Mutagenesis, vol. 836, Part A, pp. 3–12, 2018.
[14] S. Parida and D. Sharma, The microbiome and cancer: creating friendly neighborhoods and removing
the foes within, Cancer Research, vol. 81, no. 4, pp. 790–800, 2021.
[15] Marx, How to follow metabolic clues to find cancer's Achilles heel, Nature Methods, vol. 16, no. 3,
pp. 221–224, 2019.
[16] J. M. Baust, Y. Rabin, T. J. Polascik et al., Defeating cancers' adaptive defensive strategies using
thermal therapies: examining cancer's therapeutic resistance, Technology in Cancer Research &
Treatment, vol. 17, 2018.
[17] R. Seelige, S. Searles, and J. D. Bui, Innate sensing of cancer's non-immunologic hallmarks, Current
Opinion in Immunology, vol. 50, pp. 1–8, 2018.
[18] Dasgupta, M. Nomura, R. Shuck, and J. Yustein, Cancer's Achilles' heel: Apoptosis and necroptosis to
the rescue, International Journal of Molecular Sciences, vol. 18, no. 1, p. 23, 2017.
[19] T. Sepp, B. Ujvari, P. W. Ewald, F. Thomas, and M. Giraudeau, Urban environment and cancer in
wildlife: available evidence and future research avenues, Proceedings of the Royal Society B:
Biological Sciences, vol. 286, no. 1894, article 20182434, 2019.
[20] V. Lichtenstein, Genetic mosaicism and cancer: cause and effect, Cancer Research, vol. 78, no. 6, pp.
1375–1378, 2018.
[21] Y. Lecun, Y. Bengio, and G. Hinton, Deep learning, Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[22] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, Densely connected convolutional
networks, in Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition,
CVPR, 2017, pp. 4700–4708, Honolulu, HI, USA, July 2017.
[23] J. Donahue, Y. Jia, O. Vinyals et al., DeCAF: a deep convolutional activation feature for generic
visual recognition, in Proceedings of the 31st International Conference on Machine Learning, ICML
2014, pp. 647–655, Beijing, China, June 2014.
[24] M. Tan and Q. V. Le, EfficientNet: rethinking model scaling for convolutional neural networks,
2020, https://arxiv.org/abs/1905.11946.
[25] S. Ioffe and C. Szegedy, Batch normalization: accelerating deep network training by reducing internal
covariate shift, in Proceedings of the 32nd International Conference on Machine Learning, ICML
2015, pp. 448–456, Lille, France, 2015.
[26] P. Royston and D. G. Altman, External validation of a Cox prognostic model: principles and
methods, BMC medical research methodology, vol. 13, no. 1, p. 33, 2013.
[27] S. Hochreiter and J. Schmidhuber, Long short-term memory, Neural Computation, vol. 9, no. 8, pp.
1735–1780, 1997.
[28] Dataset link: https://www.cancerimagingarchive.net/histopathology-imaging-on-tcia/