A Deep Analysis of AI Impact On Oncology Study
Submitted in partial fulfilment of the requirements
for the award of the degree of
Bachelor of Technology
In
Information Technology
by
Harshit Shukla (2100970130043)
Juhi (2100970130048)
Jyoti Yadav (2100970130049)
Group No.: 24IT7020
DECLARATION
We hereby declare that the project work presented in this report, entitled “A Deep
Analysis of AI Impact On Oncology Study”, submitted in partial fulfillment of the requirement for the
award of the degree of Bachelor of Technology in Information Technology to
A.P.J. Abdul Kalam Technical University, Lucknow, is based on our own work carried out at the
Department of Information Technology, Galgotias College of Engineering and Technology,
Greater Noida. The work contained in this report is original and has not been submitted by us
for the award of any other degree or diploma.
Signature:
Name: Harshit Shukla
Roll No: 2100970130043
Signature:
Name: Juhi
Roll No: 2100970130048
Signature:
Name: Jyoti Yadav
Roll No: 2100970130049
Date:
Place: Greater Noida
GALGOTIAS COLLEGE OF ENGINEERING & TECHNOLOGY
GREATER NOIDA - 201306, UTTAR PRADESH, INDIA.
ACKNOWLEDGEMENT
We would like to express our heartfelt gratitude to all those who supported and guided us
throughout the completion of our B. Tech project.
First and foremost, we are deeply grateful to the Head of the Department and our project guide,
Prof. (Dr.) S. K. Singh, for his invaluable insights, encouragement, and guidance, which have
been instrumental in the successful completion of this work. His unwavering support
greatly aided our accomplishment.
A heartfelt thanks to our project co-guide, Dr. Pooja, whose guidance and supervision played a
crucial role in the completion of our project. Her provision of necessary information, patience, and
knowledge has inspired us throughout this journey.
We extend our sincere thanks to Prof. Javed Miya, for providing the necessary resources and a
conducive environment for carrying out this project.
We would also like to thank our PEC faculty member and lab staff for their constant support and
helpful suggestions during various phases of the project.
A special note of thanks to our teammates, whose collaboration and hard work made this project
a success.
We are profoundly thankful to our parents and friends for their unwavering support, motivation,
and encouragement, which kept us focused and determined.
Finally, we extend our gratitude to Galgotias College of Engineering & Technology for
providing us with this opportunity to undertake this project and gain invaluable knowledge and
experience.
Harshit Shukla
2100970130043
Juhi
2100970130048
Jyoti Yadav
2100970130049
GALGOTIAS COLLEGE OF ENGINEERING & TECHNOLOGY
GREATER NOIDA - 201306, UTTAR PRADESH, INDIA.
CERTIFICATE
This is to certify that the project report entitled “A Deep Analysis of AI Impact On
Oncology Study” submitted by Harshit Shukla (Roll No. 2100970130043), Juhi (Roll
No. 2100970130048), and Jyoti Yadav (Roll No. 2100970130049) to the A. P. J. Abdul
Kalam Technical University, Lucknow, Uttar Pradesh in partial fulfillment for the
award of Degree of Bachelor of Technology in Information Technology is a bonafide
record of the project work carried out by them under my supervision during the year
2024-2025.
ABSTRACT
The advent of AI has brought advanced diagnostic and cancer-classification tools in oncology to the
limelight. Challenges remain, however, ranging from overfitting and data dependency to integration
difficulties and the lack of robust validation frameworks for wide clinical implementation. To address
these, this research combines synthetic data augmentation, hyperparameter tuning, and explainability
techniques to build a robust and reliable ovarian cancer subtype classifier.
Synthetic data augmentation improved classification performance, particularly for underrepresented
subtypes such as Low-Grade Serous Carcinoma (LGSC) and Mucinous Ovarian Carcinoma (MUC).
Hyperparameter tuning using Bayesian optimization refined the learning rate, dropout rate, and batch
size to prevent overfitting and maximize generalization. The model was trained and tested on 13,024
histopathological images representing five ovarian cancer subtypes, achieving an accuracy of 99.01%.
The integration of explainability techniques helps bridge the gap between AI models and their
adoption in clinical practice by improving medical practitioners' trust in them.
This framework therefore not only improves diagnostic accuracy but also lays a basis for the use of
AI models in real-world clinical environments. Future research will focus on validating the framework
across multiple clinical settings and integrating it into existing healthcare workflows to make it far
more accessible.
LIST OF TABLES
LIST OF FIGURES
CONTENTS
DECLARATION ii
ACKNOWLEDGEMENTS iii
CERTIFICATE iv
ABSTRACT v
LIST OF TABLES vi
LIST OF FIGURES vi
CHAPTER 1: INTRODUCTION
1.1 Overview 1
1.2 Challenges in AI-Driven Oncology Tools 1
1.3 Synthetic Data Augmentation as a Solution 1
1.4 Role of Hyperparameter Tuning 2
1.5 Explainability in AI Models 2
1.6 OVA Net Framework 3
1.7 Significance of the Study 4
1.1 Overview
AI has emerged as a transformative tool in many industries, with healthcare, and oncology in
particular, among the most innovative. The use of AI in oncology significantly supports diagnosis,
prognosis, and treatment planning, since it can analyze complex medical information at a scale and
depth that often cannot be matched by humans. Of major interest in this field are histopathological
images, which are at the core of cancer diagnosis. By utilizing AI tools, medical practitioners can
classify cancer subtypes with remarkable accuracy, predict the course of disease progression, and
recommend personalized treatment plans.
Despite this progress, the use of AI in oncology remains limited by several challenges, including
data dependency, overfitting, integration barriers, and validation concerns in the development and
deployment of dependable AI-driven tools. This chapter gives an overview of these challenges and
of the solutions explored in this study: synthetic data augmentation, hyperparameter tuning, and
explainability techniques.
TABLE 1: Summary of Challenges, Impact and Proposed Solutions [3][12][15]
1.4 Role of Hyperparameter Tuning
Hyperparameter tuning is essential for optimizing model performance. Hyperparameters, such as
learning rate, batch size, and dropout rate, determine how a model learns during training. Poor
configurations can result in suboptimal results or even worsen overfitting.
Tuning Techniques Used (a minimal sketch follows this list):
• Bayesian Optimization: systematically explores hyperparameter combinations to identify optimal
settings efficiently.
• Early Stopping: monitors validation loss and halts training to prevent overfitting.
• Regularization: incorporates techniques like dropout to improve generalization.
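To make the tuning step concrete, the following is a minimal, illustrative sketch of Bayesian-style hyperparameter search using the Optuna library, whose default TPE sampler plays the role of the Bayesian optimizer. The library choice, the search ranges, and the train_and_validate() stub are assumptions made for illustration, not the exact implementation used in this study.

```python
# Illustrative sketch only: Optuna's TPE sampler stands in for Bayesian optimization.
# train_and_validate() is a hypothetical placeholder for training the subtype classifier
# with early stopping and returning its validation accuracy.
import random
import optuna

def train_and_validate(learning_rate, dropout_rate, batch_size):
    # Placeholder: in the real workflow this would train the CNN and report
    # validation accuracy; here a seeded random value keeps the sketch runnable.
    random.seed(hash((learning_rate, dropout_rate, batch_size)) % 2**32)
    return random.uniform(0.90, 0.99)

def objective(trial):
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)   # log-uniform range
    dropout = trial.suggest_float("dropout_rate", 0.1, 0.5)
    batch = trial.suggest_categorical("batch_size", [16, 32, 64])
    return train_and_validate(lr, dropout, batch)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print("Best hyperparameters found:", study.best_params)
```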
CHAPTER 2
LITERATURE SURVEY
2.1 Introduction
AI has transformed the face of medical research, particularly in oncology, to provide tools for
better diagnosis, treatment, and prognosis. However, the use of AI in clinical oncology has faced
several challenges, including overfitting, limited training data, and lack of interpretability of
models [16]. This chapter surveys existing research on AI applications in oncology, focusing on
diagnostic innovations, challenges, and emerging methodologies.
However, the deep learning models surveyed generally require large datasets, which are scarce in
the domain of oncology. Transfer learning, which fine-tunes pre-trained models, partially mitigates
this by allowing models to adapt to smaller, domain-specific datasets [6]. Even with these advances,
generalization remains an issue when applying these models to diverse patient populations.
To ensure clinical trust and transparency, OVA Net integrates Grad-CAM (Gradient-weighted Class
Activation Mapping). Grad-CAM visualizations highlight the image regions that contributed most to
a model's prediction, enabling pathologists to understand and validate the decisions made by the AI
[10]. A minimal sketch of the technique is given below.
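As an illustrative sketch of how Grad-CAM produces such visualizations, the code below computes a class-activation heatmap for a stand-in torchvision ResNet-18; the backbone, the choice of layer4, and the dummy input are assumptions for illustration and do not represent the actual OVA Net architecture.

```python
# Minimal Grad-CAM sketch (PyTorch). A ResNet-18 with untrained weights is used purely
# to demonstrate the mechanics; in practice the trained subtype classifier would be used.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18()   # stand-in backbone; load trained weights in real use
model.eval()

activations, gradients = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: activations.update(value=o.detach()))
model.layer4.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0].detach()))

def grad_cam(image_tensor, target_class=None):
    """Return a [0, 1] heatmap of shape (H, W) for a (1, 3, H, W) input tensor."""
    scores = model(image_tensor)
    cls = int(scores.argmax(dim=1)) if target_class is None else target_class
    model.zero_grad()
    scores[0, cls].backward()
    # Channel weights = global-average-pooled gradients of the chosen class score.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image_tensor.shape[2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8)).squeeze().numpy()

heatmap = grad_cam(torch.randn(1, 3, 224, 224))   # dummy histology tile for illustration
```

The resulting heatmap can be overlaid on the original histopathology tile so that a pathologist can verify that the model is attending to tumor regions rather than staining artifacts.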
CHAPTER 3
RESEARCH GAPS
Ethical challenges also arise in informed consent. Patients often do not understand how their
data is used to train AI models or how AI decisions might affect their treatment. AI tools should
therefore be accompanied by transparent documentation and patient-facing summaries
explaining their purpose, capabilities, and limitations.
Bias in data and algorithms can also lead to inequitable healthcare outcomes. Models trained
predominantly on data from one ethnic group may underperform for others, exacerbating
existing healthcare disparities. It is essential to not only audit models for fairness but also
include diverse populations in the training data. Moreover, interdisciplinary collaboration
involving ethicists, clinicians, patient advocates, and technologists is necessary to develop
comprehensive ethical frameworks for AI in oncology.
3.13. Computational Efficiency [15]
The proposed AI framework is computationally intensive, requiring high-end GPUs and long
training times. While this may be feasible in well-funded research labs or tertiary hospitals, it
poses a serious barrier to adoption in resource-constrained settings, including rural hospitals,
community clinics, or low-income regions.
To ensure global accessibility, AI tools must be optimized for computational efficiency without
compromising performance.
Techniques to achieve this include (see the sketch after this list):
• Model pruning: removing redundant neurons and connections to reduce model size.
• Quantization: representing weights and activations with lower-precision data types (e.g., 8-bit
instead of 32-bit).
• Knowledge distillation: training a smaller model (student) to mimic a larger, more accurate
one (teacher).
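The short sketch below illustrates two of these techniques, L1 magnitude pruning and dynamic INT8 quantization, applied to a toy PyTorch model; the model and the pruning ratio are assumptions chosen only to demonstrate the mechanics.

```python
# Illustrative sketch: pruning and dynamic quantization of a toy classifier head.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 5))  # stand-in model

# Pruning: zero the 30% of weights with the smallest magnitude in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the pruning permanent

# Dynamic quantization: store Linear weights as 8-bit integers for lighter CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    logits = quantized(torch.randn(1, 512))   # dummy feature vector
print(logits.shape)
```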
In addition, deployment strategies such as edge computing (running AI models on local devices)
and cloud-based inference services can reduce infrastructure costs. Cloud platforms also offer
scalability, making them ideal for regions with inconsistent hardware access.
The field should also invest in:
• Hardware-software co-design, optimizing models for specific chipsets like TPUs or edge AI
accelerators.
• Developing benchmark datasets for evaluating both accuracy and efficiency trade-offs.
• Encouraging open-source dissemination of lightweight model variants suitable for mobile or
rural deployment.
Only by addressing computational efficiency can AI in oncology achieve equitable global
deployment and maximize its impact across diverse healthcare systems.
CHAPTER 4
PROPOSED WORK
4.1. Standardization of AI Validation Protocols
Lack of standardized validation protocols leads to inconsistent assessments across healthcare
centers. Each institution often follows its own validation practices, making it hard to compare AI
tools or deploy them at scale. This creates regulatory challenges and slows adoption.
We propose a structured framework involving common evaluation metrics such as ROC curves,
precision-recall curves, sensitivity, specificity, and clinical relevance scoring. Data sources
should be diverse and include multi-institutional datasets. Validation should include
retrospective testing, prospective trials, and continuous learning models. Additionally, real-time
feedback loops must be incorporated to refine models post-deployment.
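As a minimal illustration of how such a shared metric suite could be computed in practice, the sketch below uses scikit-learn on synthetic labels and scores; the 0.5 decision threshold and the simulated data are assumptions, not study results.

```python
# Illustrative sketch: ROC-AUC, ROC/precision-recall curves, sensitivity and specificity
# computed with scikit-learn on synthetic predictions.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve, precision_recall_curve, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                                    # ground-truth labels
y_score = np.clip(y_true * 0.6 + rng.random(200) * 0.7, 0.0, 1.0)        # simulated probabilities

auc = roc_auc_score(y_true, y_score)
fpr, tpr, _ = roc_curve(y_true, y_score)
precision, recall, _ = precision_recall_curve(y_true, y_score)

tn, fp, fn, tp = confusion_matrix(y_true, (y_score >= 0.5).astype(int)).ravel()
sensitivity = tp / (tp + fn)    # true positive rate
specificity = tn / (tn + fp)    # true negative rate
print(f"AUC={auc:.3f}, sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```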
Proposed Methodology:
• Design standardized workflows and toolkits for implementation.
• Collaborate with international healthcare and AI regulatory bodies.
• Conduct comparative studies across multiple hospitals or institutions.
Expected Outcomes:
• Higher consistency and trust in AI predictions.
• Easier regulatory approvals.
• More collaborative research across borders.
Real-World Application:
• The FDA’s Good Machine Learning Practices initiative can serve as a model.
• Use in multicenter trials for early-stage cancer detection AI tools.
Healthcare datasets often reflect systemic inequities, which AI models may inadvertently learn. We
propose an ethical framework that addresses data governance, transparency, and model accountability.
Techniques like federated learning allow decentralized data processing, minimizing the movement of
patient data; a minimal sketch follows.
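As a minimal sketch of the federated-averaging idea, the code below lets three simulated sites train a simple logistic-regression model locally and share only weight vectors with a central server. The NumPy model and synthetic data are assumptions made for illustration; a real deployment would rely on a dedicated federated-learning framework.

```python
# Illustrative FedAvg sketch: only model weights leave each site, never patient records.
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """One site's update: logistic-regression gradient descent on its local data."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)
    return w

rng = np.random.default_rng(42)
sites = [(rng.normal(size=(100, 10)), rng.integers(0, 2, 100)) for _ in range(3)]  # 3 hospitals
global_w = np.zeros(10)

for _ in range(10):                                   # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)              # server averages the weight vectors
```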
Informed consent mechanisms must evolve to explain how AI is used in treatment. Furthermore,
audit trails and ethical review committees should monitor bias detection and mitigation
practices.
Proposed Methodology:
• Design standardized workflows and toolkits for implementation.
• Collaborate with international healthcare and AI regulatory bodies.
• Conduct comparative studies across multiple hospitals or institutions.
Expected Outcomes:
• Higher consistency and trust in AI predictions.
• Easier regulatory approvals.
• More collaborative research across borders.
Real-World Application:
• The FDA's Good Machine Learning Practices initiative can serve as a model.
• Use in multicenter trials for early-stage cancer detection AI tools.
CHAPTER 5
FINDINGS AND CONCLUSION
5.1 Introduction
This section presents a comprehensive analysis and interpretation of the outcomes derived from
the research conducted. The primary objective is to evaluate the model or methodology
developed in the context of the previously established research questions and objectives.
Through rigorous examination and comparison with current methods, the study aims to validate
the effectiveness, efficiency, and practical applicability of the proposed solution. The findings
discussed here not only assess the technical performance but also offer insights into its real-
world relevance and potential limitations.
The past two decades have witnessed remarkable improvements in cancer diagnostics, largely
driven by technological innovations in imaging and molecular biology. Techniques like Positron
Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and Computed Tomography
(CT) have evolved to provide higher resolution and functional imaging, allowing oncologists to
detect tumors at earlier and more treatable stages. These modalities are often combined with
contrast agents or radioactive tracers that highlight metabolic activity, aiding in the identification
of malignancies even before structural abnormalities occur.
Moreover, molecular diagnostics, including liquid biopsy, biomarker analysis, and circulating
tumor DNA (ctDNA), have opened new avenues for non-invasive cancer detection. These
technologies enable real-time monitoring of disease progression and recurrence. Early diagnosis
dramatically improves patient outcomes and survival rates, supporting the crucial role of
diagnostic accuracy in effective cancer management.
5.2 Analytical Study
The research employed a systematic and data-driven approach to analyze the collected datasets.
The data underwent thorough pre-processing and normalization to ensure consistency and
reliability. Key performance indicators such as Accuracy, Precision, Recall, and F1-Score were
employed to quantify the model's efficiency. These metrics enabled a balanced evaluation,
particularly in scenarios with class imbalance—a common challenge in oncology-related data.
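For illustration, the sketch below shows how such a metric suite can be reported under stratified k-fold cross-validation; the synthetic, imbalanced dataset and the logistic-regression stand-in model are assumptions, not the study's actual classifier or data.

```python
# Illustrative sketch: stratified 5-fold cross-validation with Accuracy, Precision,
# Recall and F1 on a synthetic, imbalanced dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.85, 0.15], random_state=0)
clf = LogisticRegression(max_iter=1000)
scores = {"accuracy": [], "precision": [], "recall": [], "f1": []}

for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores["accuracy"].append(accuracy_score(y[test_idx], pred))
    scores["precision"].append(precision_score(y[test_idx], pred))
    scores["recall"].append(recall_score(y[test_idx], pred))
    scores["f1"].append(f1_score(y[test_idx], pred))

for name, vals in scores.items():
    print(f"{name}: {np.mean(vals):.3f} +/- {np.std(vals):.3f}")
```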
To strengthen the findings, the results of the proposed system were benchmarked against
existing state-of-the-art models. Tabular comparisons highlighted numerical improvements,
while visual representations such as bar charts, ROC curves, and confusion matrices provided a
clearer understanding of model behavior. The inclusion of cross-validation techniques further
ensured robustness and minimized the risk of overfitting.
The sequencing of the human genome and advancements in next-generation sequencing (NGS)
technologies have revolutionized the understanding of cancer biology. Researchers can now map the
complete mutational landscape of tumors, uncovering critical genes involved in carcinogenesis such
as TP53, BRCA1/2, EGFR, and KRAS.
This genomic knowledge has laid the foundation for personalized medicine, where therapies are
tailored to a patient's unique genetic makeup rather than a one-size-fits-all approach. For
example, identifying a BRCA1 mutation in a breast cancer patient can guide the use of PARP
inhibitors, while EGFR mutations in lung cancer may suggest targeted tyrosine kinase inhibitors.
Personalized medicine not only increases treatment efficacy but also reduces unnecessary side
effects, leading to more effective and patient-friendly cancer care.
This economic disparity raises serious ethical concerns about access, fairness, and the right to
health. Many patients in low- and middle-income countries lack insurance coverage, forcing
them into financial toxicity, where treatment decisions are driven by cost rather than medical
need.
There is a growing call for policy interventions, including generic drug production, government
subsidies, and price negotiations, to make treatments more equitable. Furthermore, ethical issues
in clinical trials, such as informed consent, patient safety, and data privacy, must be addressed as
research moves toward more complex interventions.
From identifying genetic mutations responsible for tumor growth to formulating individualized
treatment plans based on molecular profiling, oncology has evolved into a highly specialized and
technologically advanced discipline. Therapies such as immunotherapy, targeted molecular
drugs, and precision radiation techniques have contributed to a noticeable improvement in
survival rates and quality of life for many cancer patients.
Yet, this progress is not without its challenges. The rise of drug resistance, especially in
aggressive and metastatic cancers, limits the long-term success of treatments. Similarly, the
adverse side effects of many therapies, including fatigue, immune suppression, and organ
damage, continue to affect patients’ well-being. The financial toxicity associated with modern
cancer treatments—often involving prolonged hospital stays, expensive medications, and follow-
up care—can lead to significant stress and economic burden, particularly in low- and middle-
income countries.
Moreover, inequitable access to healthcare creates a stark contrast in outcomes between patients
in high-resource and low-resource settings.
CHAPTER 6
FUTURE SCOPE
Cancer-preventive vaccines, such as those against HPV and hepatitis B, already demonstrate success
in preventing cervical and liver cancers. Expanding this approach to more cancer types could
transform preventive oncology and reduce global cancer burdens.
5. Nanomedicine in Oncology:
Nanotechnology offers innovative solutions to longstanding problems in cancer treatment.
Engineered nanoparticles can carry chemotherapeutic agents directly to tumor sites, improving
drug concentration where it’s needed and minimizing exposure to healthy tissues. This targeted
delivery reduces side effects, improves therapeutic outcomes, and may even allow the use of
drugs that were previously too toxic. Beyond drug delivery, nanoparticles are also being
developed for cancer imaging, thermal ablation, and as biosensors for early detection. As
regulatory pathways become clearer, more nanomedicine products are expected to reach clinical
use.
The exponential growth of Big Data across various sectors, including healthcare, finance,
government, and social media, has created a parallel surge in the need for stronger, more
adaptive, and intelligent security mechanisms. As organizations increasingly rely on data-driven
approaches, ensuring the confidentiality, integrity, and availability of massive data sets becomes
not just a technical requirement but a foundational pillar of trust and compliance. The future of
Big Data security lies in addressing the limitations of current methods, adopting new
technologies, and evolving to match the pace at which threats and data volumes are expanding.
One of the primary directions for future work involves the development of lightweight and
scalable encryption techniques that can be efficiently applied to real-time data streams. Current
cryptographic mechanisms, although robust, often fail to meet the low-latency demands of Big
Data analytics. Research into homomorphic encryption, which allows computations to be
performed on encrypted data without decryption, is promising but still impractical for large-scale
deployment due to high computational overhead. Future work can focus on reducing these
overheads and making such encryption schemes viable for real-time use.
Privacy concerns will continue to be at the forefront of Big Data research, particularly in the
context of user-generated content and personal information. Differential privacy has emerged as
a critical technique to ensure that the output of a data analysis does not compromise the privacy
of individuals. However, fine-tuning the balance between data utility and privacy guarantees
remains a challenge. Future efforts should aim at developing more intuitive frameworks for
implementing differential privacy in various industry-specific contexts. Additionally, future
systems will need to incorporate user-centric privacy controls, giving individuals greater control
over how their data is accessed and used.
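To make the idea concrete, the following is a minimal sketch of the Laplace mechanism, the basic building block behind differential privacy; the query, sensitivity, and epsilon values are assumptions chosen only for illustration.

```python
# Illustrative Laplace-mechanism sketch: noise scaled to sensitivity/epsilon is added
# to a query result so that no single individual's record can be inferred from it.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return a differentially private estimate of a numeric query result."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon          # larger epsilon -> less noise, weaker privacy
    return true_value + rng.laplace(loc=0.0, scale=scale)

ages = np.array([34, 51, 29, 62, 47])      # toy records
true_count = int(np.sum(ages > 40))        # counting query with sensitivity 1
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
print(true_count, round(private_count, 2))
```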
Another vital aspect of future research is the integration of artificial intelligence and machine
learning into Big Data security. Machine learning models can analyze large volumes of data to
detect patterns and anomalies indicative of security threats. However, these systems are
themselves vulnerable to adversarial attacks and data poisoning. The development of robust,
explainable, and secure AI models is essential for future security frameworks. Researchers must
work on algorithms that not only detect known attack signatures but also anticipate new,
evolving threats through behavioral analysis and unsupervised learning.
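As an illustrative sketch of such behavioral, unsupervised detection, the code below trains an Isolation Forest on synthetic access-log features; the feature set and contamination rate are assumptions made for the example.

```python
# Illustrative sketch: unsupervised anomaly detection over synthetic access-log features
# (request rate, bytes transferred, failed logins) with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(loc=[10, 500, 0.2], scale=[2, 100, 0.2], size=(500, 3))
attacks = rng.normal(loc=[80, 5000, 6.0], scale=[10, 800, 1.0], size=(10, 3))
X = np.vstack([normal, attacks])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = detector.predict(X)                # -1 marks suspected anomalies
print("flagged as anomalous:", int((flags == -1).sum()))
```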
The heterogeneity of data sources in Big Data environments, such as structured, semi-structured,
and unstructured data, introduces unique security challenges. Current security solutions often
struggle with adapting to such diverse formats and lack interoperability across different
platforms. Future architectures should aim for security frameworks that are adaptable, cross-
compatible, and modular. This also includes designing new data models and access control
policies tailored to the varied nature of Big Data environments. Role-based and attribute-based
access control systems may evolve into more dynamic models that adjust permissions based on
real-time risk assessments and user behavior analytics.
Cloud computing, being the most popular infrastructure for Big Data storage and processing,
presents both opportunities and security challenges. Data security in cloud environments is still
maturing, with concerns related to multi-tenancy, data sovereignty, and insider threats. Future
work in this domain may involve the implementation of decentralized and edge-based security
models that distribute security responsibilities and reduce the risks associated with centralized
cloud architectures. Blockchain technology could also play a pivotal role in securing distributed
data systems through immutable and transparent logging mechanisms. Integrating blockchain
with Big Data platforms may address issues related to data integrity and auditability, although
scalability remains a concern.
Big Data applications in sensitive sectors like healthcare and finance demand adherence to strict
regulatory standards such as HIPAA, GDPR, and others. The future of Big Data security must be
aligned with evolving regulatory landscapes to ensure legal compliance and avoid penalties. This
necessitates the development of automated compliance-checking tools that can continuously
monitor data practices and alert administrators to potential violations. Future security
frameworks must embed compliance protocols as a fundamental design feature rather than an
afterthought.
Moreover, as the Internet of Things (IoT) continues to expand, so does the surface area for
cyberattacks. The proliferation of smart devices connected to Big Data ecosystems brings new
vulnerabilities that traditional security systems are ill-equipped to handle. Future research must
focus on creating end-to-end security models that protect not only the core Big Data
infrastructure but also the edge devices and gateways that feed data into the system. Lightweight
encryption and anomaly detection at the edge will be crucial components of such models.
User awareness and training will also play a critical role in future security frameworks. As much
as technical solutions are important, human error remains one of the leading causes of security
breaches. Future systems must incorporate intelligent user interfaces that guide users through
secure data practices, as well as continuous education programs to reinforce best practices.
Gamification and adaptive learning techniques may prove effective in maintaining high levels of
user engagement and knowledge retention.
In terms of research methodology, the future will likely witness a rise in interdisciplinary
collaborations involving computer scientists, data analysts, legal experts, and ethicists. Security
in Big Data is not just a technical issue but also a social and ethical concern. Future studies
should explore ethical implications of surveillance, consent, and algorithmic bias in Big Data
environments. Establishing ethical guidelines and developing fair algorithms will be essential to
ensure responsible innovation.
With the advent of quantum computing, current encryption protocols could become obsolete, as
quantum algorithms are expected to break many classical cryptographic schemes. Thus, future
work must also focus on the development of quantum-resistant encryption algorithms. This
includes both theoretical research and practical implementation strategies for post-quantum
cryptography that can be integrated into Big Data systems without compromising performance.
Energy efficiency and environmental sustainability are emerging concerns in the context of Big
Data security. Security mechanisms that require intensive computation contribute significantly to
energy consumption and carbon emissions. Future research must address the need for “green”
security practices, optimizing both security and environmental impact. This can involve the use
of energy-aware algorithms, resource-efficient encryption, and sustainable hardware designs.
Furthermore, security metrics and benchmarking will be critical to measuring the effectiveness
of future security solutions. There is currently no standardized way to evaluate Big Data security
tools across different platforms and use cases. The development of universal benchmarks and
evaluation frameworks will enable organizations to assess the strength of their security measures
and make informed decisions.
Collaboration between academia, industry, and government agencies will become increasingly
important in tackling complex security challenges. Shared threat intelligence, cooperative
development of security standards, and public-private partnerships can accelerate the innovation
and deployment of advanced security solutions. Policy-level interventions, including subsidies
and incentives for adopting strong security measures, may also shape the future of Big Data
security.
In conclusion, the future scope of Big Data security is vast and multifaceted. From improving
encryption methods and enhancing privacy to leveraging AI and addressing regulatory and
ethical concerns, there is a wide spectrum of opportunities for innovation. As data continues to
drive the modern digital economy, ensuring its security will remain a dynamic and ongoing
challenge. Researchers, practitioners, and policymakers must work collaboratively to build
secure, resilient, and ethical Big Data systems capable of withstanding the evolving threat
landscape of tomorrow.
REFERENCES
[1] Hamamoto, R., Suvarna, K., Yamada, M., Kobayashi, K., Shinkai, N., Miyake, M., ... &
Kaneko, S. (2020). Application of artificial intelligence technology in oncology: Towards the
establishment of precision medicine. Cancers, 12(12), 3532.
[2] Bhinder, B., Gilvary, C., Madhukar, N. S., & Elemento, O. (2021). Artificial intelligence in
cancer research and precision medicine. Cancer Discovery, 11(4), 900-915.
[3] Kann, B. H., Hosny, A., & Aerts, H. J. (2021). Artificial intelligence for clinical
oncology. Cancer Cell, 39(7), 916-927.
[4] Zhen, L., & Chan, A. K. (2001). An artificial intelligent algorithm for tumor detection in
screening mammogram. IEEE Transactions on Medical Imaging, 20(7), 559-567.
[5] Schmauch, B., Romagnoni, A., Pronier, E., Saillard, C., Maillé, P., Calderaro, J., ... &
Wainrib, G. (2019). Transcriptomic learning for digital pathology. BioRxiv, 760173.
[6] Litjens, G., Sánchez, C. I., Timofeeva, N., Hermsen, M., Nagtegaal, I., Kovacs, I., ... & Van
Der Laak, J. (2016). Deep learning as a tool for increased accuracy and efficiency of
histopathological diagnosis. Scientific Reports, 6(1), 26286.
[7] Kaul, V., Enslin, S., & Gross, S. A. (2020). History of artificial intelligence in medicine.
Gastrointestinal Endoscopy, 92(4), 807-812.
[8] Huynh, E., Hosny, A., Guthier, C., Bitterman, D. S., Petit, S. F., Haas-Kogan, D. A., ... &
Mak, R. H. (2020). Artificial intelligence in radiation oncology. Nature Reviews Clinical
Oncology, 17(12), 771-781.
[9] Takeishi, S., & Nakayama, K. I. (2016). To wake up cancer stem cells, or to let them sleep,
that is the question. Cancer Science, 107(7), 875-881.
[10] Shimizu, H., Takeishi, S., Nakatsumi, H., & Nakayama, K. I. (2019). Prevention of cancer
dormancy by Fbxw7 ablation eradicates disseminated tumor cells. JCI Insight, 4(4).
[11] Datta, N. R., Samiei, M., & Bodis, S. (2014). Radiation therapy infrastructure and human
resources in low-and middle-income countries: present status and projections for
2020. International Journal of Radiation Oncology* Biology* Physics, 89(3), 448-457.
[12] Luchini, C., Lawlor, R. T., Milella, M., & Scarpa, A. (2020). Molecular tumor boards in
clinical practice. Trends in Cancer, 6(9), 738-744.
[13] Richards, S., Aziz, N., Bale, S., Bick, D., Das, S., Gastier-Foster, J., ... & Rehm, H. L.
(2015). Standards and guidelines for the interpretation of sequence variants: a joint consensus
recommendation of the American College of Medical Genetics and Genomics and the
Association for Molecular Pathology. Genetics in Medicine, 17(5), 405-423.
[14] Lubner, M. G., Smith, A. D., Sandrasegaran, K., Sahani, D. V., & Pickhardt, P. J. (2017).
CT texture analysis: definitions, applications, biologic correlates, and
challenges. Radiographics, 37(5), 1483-1503.
[15] Sung, H., Ferlay, J., Siegel, R. L., Laversanne, M., Soerjomataram, I., Jemal, A., & Bray, F.
(2021). Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality
worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 71(3), 209-249.
[16] Cordeiro, J. V. (2021). Digital technologies and data science as health enablers: an outline
of appealing promises and compelling ethical, legal, and social challenges. Frontiers in Medicine,
8, 647897.
[17] Wang, S., Liu, Z., Rong, Y., Zhou, B., Bai, Y., Wei, W., ... & Tian, J. (2019). Deep learning
provides a new computed tomography-based prognostic biomarker for recurrence prediction in
high-grade serous ovarian cancer. Radiotherapy and Oncology, 132, 171-177.
[18] Lipkova, J., Chen, R. J., Chen, B., Lu, M. Y., Barbieri, M., Shao, D., ... & Mahmood, F.
(2022). Artificial intelligence for multimodal data integration in oncology. Cancer Cell, 40(10),
1095-1110.
[19] Kann, B. H., Hosny, A., & Aerts, H. J. (2021). Artificial intelligence for clinical
oncology. Cancer Cell, 39(7), 916-927.
[20] Chua, I. S., Gaziel‐Yablowitz, M., Korach, Z. T., Kehl, K. L., Levitan, N. A., Arriaga, Y.
E., ... & Hassett, M. (2021). Artificial intelligence in oncology: Path to implementation. Cancer
Medicine, 10(12), 4138-4149.
[21] Siegel, R. L., Miller, K. D., & Jemal, A. (2018). Cancer statistics, 2018. CA: A Cancer
Journal for Clinicians, 68(1), 7-30.
[22] Esteva, A., Robicquet, A., Ramsundar, B., Kuleshov, V., DePristo, M., Chou, K., ... &
Dean, J. (2019). A guide to deep learning in healthcare. Nature Medicine, 25(1), 24-29.
[23] Kann, B. H., Thompson, R., Thomas Jr, C. R., Dicker, A., & Aneja, S. (2019). Artificial
intelligence in oncology: current applications and future directions. Oncology, 33(2), 46-53.