Review
Artificial Intelligence and the Medical Physicist: Welcome to
the Machine
Michele Avanzo 1, *, Annalisa Trianni 2 , Francesca Botta 3 , Cinzia Talamonti 4 , Michele Stasi 5 and Mauro Iori 6

1 Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
2 Medical Physics Unit, Ospedale Santa Chiara APSS, 38122 Trento, Italy; [email protected]
3 Medical Physics Unit, Istituto Europeo di Oncologia IRCCS, 20141 Milan, Italy; [email protected]
4 Department of Biomedical Experimental and Clinical Sciences “Mario Serio”, University of Florence,
50134 Florence, Italy; [email protected]
5 Medical Physics Unit, A.O. Ordine Mauriziano di Torino, 10128 Torino, Italy; [email protected]
6 Medical Physics Unit, Azienda USL-IRCCS di Reggio Emilia, 42122 Reggio Emilia, Italy; [email protected]
* Correspondence: [email protected]

Abstract: Artificial intelligence (AI) is a branch of computer science dedicated to giving machines or computers the ability to perform human-like cognitive functions, such as learning, problem-solving, and decision making. Since it shows performance superior to that of well-trained human beings in many areas, such as image classification, object detection, speech recognition, and decision-making, AI is expected to change profoundly every area of science, including healthcare and the clinical application of physics to healthcare, referred to as medical physics. As a result, the Italian Association of Medical Physics (AIFM) has created the “AI for Medical Physics” (AI4MP) group with the aims of coordinating efforts, facilitating communication, and sharing knowledge on AI among the medical physicists (MPs) in Italy. The purpose of this review is to summarize the main applications of AI in medical physics, describe the skills of MPs in research and clinical applications of AI, and define the major challenges of AI in healthcare.

Keywords: artificial intelligence; deep learning; medical physicist; machine learning; big data

1. Introduction

Artificial intelligence (AI) is a branch of computer science dedicated to giving machines or computers the ability to perform human-like cognitive functions, such as learning, problem-solving, and decision making [1,2]. AI-based systems have shown performance superior to experienced human beings in tasks such as image classification and analysis, speech recognition, and decision-making [3]. Consequently, AI is expected to change profoundly every area of science, including medical physics, the clinical application of the principles of physics to healthcare [4,5]. The knowledge and skills of medical physicists (MPs), which include aspects of mathematics, bioinformatics, statistics, safety, and ethics in the use of medical devices, are invaluable in the clinical and research applications of AI in medicine.
Moreover, analytical and computational techniques of physics, in particular those derived from the statistical physics of disordered systems, can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks [6,7].
Given the exponential growth witnessed over the past few years of applications of AI, such as machine learning (ML) and deep learning (DL), in all areas of medicine that use ionizing radiation, ultrasound, and magnetic fields for diagnostic and treatment purposes, the MPs’ workflow will be profoundly affected by the advent of AI. The areas affected will include quality controls of equipment, such as linear accelerators and imaging devices, and software, such as diagnostic support systems [4,8] and decision support systems.

The MPs will be increasingly involved in the use of new AI applications in medicine for patient diagnosis and treatment, with the primary aim of guaranteeing the quality of the whole process and environment [9].
The Italian Association of Medical Physics (AIFM) has created the AI for Medical Physics (AI4MP) task-group, with the aims of coordinating efforts, facilitating communication, and sharing knowledge on AI among MPs in Italy. The aim of the present review is to summarize the point of view of the coordinators of AI4MP on the role and the involvement of MPs in the new AI world by defining the challenges of AI in healthcare for the MPs and by describing the skills the MPs can offer in this field. This will be done with a question in mind: whether AI is welcomed by the MPs, or vice versa.

2. Artificial Intelligence in Healthcare


Machine learning (ML) is the discipline that builds mathematical models and computer algorithms to perform specific tasks by learning patterns and inferences directly from data, without being explicitly programmed to conduct these tasks [10].
ML algorithms can be either used for supervised learning, where the machine is provided
with output labels to be associated with a set of input variables, or unsupervised learning.
A popular supervised ML method is Support Vector Machines (SVM), which, by means of a
kernel function, projects the data into a higher-dimensional feature space and determines a
hyperplane in this feature space, which separates data points into categories [11]. Ensemble
ML (EML) methods, such as Random forests or AdaBoost, are other supervised methods,
which aggregate multiple learners, such as Decision Trees, into a single learner [12,13].
The Naïve Bayes (NB) classifier calculates the probability of each class by applying Bayes’ theorem under the assumption of conditionally independent features [14,15]. In unsupervised learning, the labels for given sets of input variables are not
known, and the algorithm aims at finding correlations, patterns, or structures in the input
variable space [16,17]. These include k-means clustering [18], principal component analysis
(PCA) [19], Stochastic Neighbor Embedding (SNE) [20], and Laplacian eigenmaps [21].
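As an illustration of the difference between the two learning paradigms, the following minimal Python sketch (scikit-learn, with a synthetic dataset standing in for clinical features) trains a supervised SVM with known labels and, on the same data, runs unsupervised k-means clustering without labels; all names and parameters are illustrative assumptions, not taken from the studies cited above.

# Minimal sketch: supervised SVM vs. unsupervised k-means on synthetic data
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

# Synthetic "patients": 200 samples, 10 features, 2 classes (e.g., benign/malignant)
X, y = make_classification(n_samples=200, n_features=10, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Supervised learning: labels are provided, an RBF-kernel SVM learns the decision boundary
svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print("SVM test accuracy:", accuracy_score(y_test, svm.predict(X_test)))

# Unsupervised learning: no labels, k-means looks for structure in the feature space
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Cluster sizes:", np.bincount(kmeans.labels_))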
Deep learning (DL) is a group of methods, which can be employed for supervised
or unsupervised learning on any type of data, image, or signal. DL employs models with
multiple stacks of neural layers to learn inherent patterns from input data and generate
comprehensive representations, in contrast to classical ML methods, which use hand-
crafted features manually extracted as input [2].
Nowadays, radiological and pathology images are stored, together with their reports,
in picture archiving and communication systems (PACS). In addition, with the introduction of electronic health records (EHRs), systematic collections of patient health information have been made available, which include qualitative data, such as documents and records of patient demographics, medical records, and laboratory and diagnostic tests [22].
ML and DL, if applied to this large and often unstructured digital content, can extract information useful for epidemiological, clinical, and research studies [23,24]. Natural
language processing (NLP) techniques, a combination of AI and linguistics, aimed at de-
veloping a computer’s ability to understand human language [25], can be used to extract
clinically relevant information from pathology and radiology reports [26], which can be
integrated with features extracted from digital radiologic and pathology images stored in
PACS [27].
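A minimal sketch of the idea follows, with invented report sentences and labels (the NLP pipelines cited above are far more sophisticated): a bag-of-words classifier that flags report text mentioning a suspicious finding.

# Minimal sketch: flagging radiology report sentences with a TF-IDF + logistic regression classifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "No focal lesion identified. Lungs are clear.",
    "Spiculated nodule in the right upper lobe, suspicious for malignancy.",
    "Stable post-surgical changes, no evidence of recurrence.",
    "New enhancing mass in the left breast, biopsy recommended.",
]
labels = [0, 1, 0, 1]  # 1 = suspicious finding mentioned (invented labels)

# TF-IDF features + logistic regression: a simple, interpretable NLP baseline
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reports, labels)

print(clf.predict(["Irregular nodule noted, malignancy cannot be excluded."]))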
The process used for these analyses is defined as “Data Mining”. Data mining is used
to find trends, patterns, correlations, anomalies, and features of interest in a database [28] in
a data-driven inductive approach, which generates hypotheses from data [29]. Ideally, data
mining necessitates the ‘4 V’s’ of ‘Big Data’—volume, variety, velocity, and veracity of data.
Instead of being used for prediction or diagnosis, in this case, ML is used to find clinically similar patients in the unstructured database, using all available multimodal clinical data, with the aim of discovering important groupings or defining features in the
data [28].
Once similar patients are identified, the diagnosis, treatment, and outcome extracted
from EHRs and other digital content can be ranked to give recommendations [17], e.g., by
computerized clinical decision support systems (CDSS), which aid in decision-making [30].
In this way, pipelines can be designed to continuously and automatically extract informa-
tion and improve the accuracy of patient outcome prediction [31].

3. Clinical Applications of Artificial Intelligence


3.1. Imaging
The main purpose of the use of AI and ML applications in imaging is to support the
specialist in the diagnosis of diseases. Computer-aided diagnosis (CAD) is among the
first applications of these new algorithms in the imaging area [32,33] and incorporates
ML classifiers trained to distinguish lesions from normal tissue [34]. In lung computed
tomography (CT), ML applied to combinations of CT textural features achieved high accuracy
in distinguishing malignant lesions [35] or invasive from minimally invasive lesions [36].
In the relatively recent radiomics approach, quantitative analysis of radiological
images (mainly CT [37–39], magnetic resonance imaging (MRI) [40–42], and positron
emission tomography (PET) [43] images, but also ultrasounds [44], mammograms [45],
and radiography) by extraction of a large number of image features (up to a few hundred
or thousands) can be combined with ML classifiers to produce prognostic and predictive
models [39].
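The following sketch illustrates the first step of such a pipeline, the extraction of hand-crafted features from a region of interest; only a few first-order statistics are computed here, on a synthetic image and mask, whereas dedicated radiomics packages extract hundreds of shape, intensity, and texture features.

# Minimal sketch: first-order radiomic features from a synthetic volume and tumour mask
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
image = rng.normal(50, 10, size=(64, 64, 32))   # stand-in for a CT/MRI/PET volume
mask = np.zeros_like(image, dtype=bool)
mask[20:40, 20:40, 10:20] = True                # stand-in for the tumour contour

voxels = image[mask]
features = {
    "mean": float(np.mean(voxels)),
    "std": float(np.std(voxels)),
    "skewness": float(stats.skew(voxels)),
    "kurtosis": float(stats.kurtosis(voxels)),
    "energy": float(np.sum(voxels ** 2)),
    "entropy": float(stats.entropy(np.histogram(voxels, bins=64)[0] + 1e-12)),
}
print(features)  # this feature vector would then feed an ML classifier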
In image processing, DL algorithms can directly learn the structure label of each image voxel (semantic segmentation) in order to contour lesions or organs [46]. U-net,
one of the most popular DL architectures for image segmentation, has proven to be capable
of automatically segmenting lung parenchyma [47] and lung tumor using PET-CT hybrid
imaging [48].
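A minimal PyTorch sketch of the U-net idea is given below, with only two resolution levels and illustrative channel counts (published architectures, including those of [47,48], are deeper): an encoder downsamples the image, a decoder upsamples it back, and a skip connection concatenates encoder features into the decoder to preserve spatial detail.

# Minimal sketch: a two-level U-Net-style encoder-decoder for semantic segmentation
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)          # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                       # skip-connection source
        e2 = self.enc2(self.pool(e1))           # bottleneck features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                    # per-pixel class logits

model = TinyUNet()
logits = model(torch.randn(1, 1, 64, 64))       # e.g., a 64x64 CT slice
print(logits.shape)                             # torch.Size([1, 2, 64, 64])

In practice, such a network would be trained on expert contours using a voxel-wise cross-entropy or Dice loss.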
A cornerstone of optimization of clinical imaging protocols is patients’ dose estima-
tion, which allows the dose to be balanced with image quality. Dose to the patient can be
automatically calculated by DL in CT [49], single-photon emission computed tomography
(SPECT) [50], and PET [51]. In interventional radiology, DL has been proposed for skin
dose estimation [52]. In chest CT, ML could be used to predict the volumetric computed tomography dose index (CTDIvol) based on scan and patient metrics (scanner, study description, protocol, patient age, sex, and water-equivalent diameter (DW)) and to identify exams that hold potential for dose reduction by tuning the acquisition parameters [53].
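A minimal sketch of this kind of dose-prediction model, with synthetic data and an assumed flagging threshold rather than the actual model of [53]: a random-forest regressor predicts CTDIvol from simple scan and patient metrics, and exams whose delivered dose exceeds the prediction by a margin are flagged for protocol review.

# Minimal sketch: CTDIvol prediction from scan/patient metrics (synthetic data)
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(20, 85, n)
water_eq_diameter = rng.normal(28, 4, n)            # cm
pitch = rng.choice([0.8, 1.0, 1.2], n)
# Synthetic ground truth: dose rises with patient size, falls with pitch
ctdi_vol = 2.0 + 0.4 * water_eq_diameter - 2.5 * pitch + rng.normal(0, 1.0, n)

X = np.column_stack([age, water_eq_diameter, pitch])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ctdi_vol)

predicted = model.predict(X)
flagged = np.where(ctdi_vol > predicted + 2.0)[0]   # exams with dose-reduction potential
print(f"{len(flagged)} exams flagged for review")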
Another pillar of patient dose optimization is image quality improvement, as it allows
dose reduction for the same image quality. The integration of AI algorithms within the
imaging technology allows image quality to be improved and, consequently, patient dose to be reduced. DL methods have been used for improving PET image quality, reducing noise [54], removing streak artifacts from CT [55], and developing novel techniques for tomographic image reconstruction based on a reduced amount of acquired data. Other promising applications are the generation of synthetic images, such as synthetic CT from MRI [56] and virtual contrast-enhanced images [57], rigid/deformable intramodal and multimodal image registration [58], and the extraction of the respiratory signal [21], which could be used for breathing motion compensation of images [59].
In interventional radiology, AI can predict tumor response to transarterial chemoem-
bolization based on image texture and patient characteristics [60,61]. In the future, real-time
registration DL algorithms could be used to superimpose high-resolution preoperative MR
imaging with intra-procedural fluoroscopy, guiding the physicians during the catheter’s
manipulation [62] for estimating ablation margins and helping minimize damages to
structures close to the treated area.
AI can also be useful in longitudinal studies during follow-up of treatments in order to detect subtle changes between images, thus identifying progression or recurrence at an earlier stage [63,64]. Ophthalmic imaging, e.g., digital fundus photography and optical coherence tomography, is another field where artificial intelligence can support the specialist in the diagnosis of ophthalmic disorders, such as diabetic retinopathy, age-related macular degeneration, and others [65]. Other areas include cardiology [66,67] and rheumatology, which have a long history of research in AI applications aimed at detecting
and assessing rheumatological manifestations, such as bone erosions and cartilage loss [68]. The
development of digital pathology, due to the introduction of whole-slide scanners, and
the progression of computer vision algorithms have significantly increased the use of AI
to perform tumor diagnosis, subtyping, grading, staging, and prognostic prediction. In
the big-data era, the pathological diagnosis of the future could merge proteomics and
genomics [69]. Spatial metabolomics is a new field aiming at measuring the distribution of
molecules, such as metabolites, lipids, and drugs, within body structures, using imaging techniques such as mass spectrometry imaging, where each pixel is represented by its mass spectrum [70].
Being characterized by a large amount of high dimensional data, including overlapping
and noisy molecular signals, this technique looks promising for the application of AI [71].
Other applications that could become a focus of AI in the near future are computer
vision [72], dealing with object detection and feature recognition in digital images, and
virtual assistants [73], employing speech recognition in neuroradiology [74], radiology, and
beyond. Through augmented reality, the operator’s perception of an operating room environment
could be enhanced with AI-generated information [75].

3.2. Therapy
ML can be useful to carry out many of the activities during the whole workflow of
radiotherapy, starting with the choice of the optimal radiation approach, e.g., the choice of protons vs. photons [76]. A convolutional neural network (CNN) can automatically segment
targets and organs at risk in radiotherapy [77]. ML-based auto-planning [78,79] mimics
the iterative plan design, evaluation, and adjustments made by experienced operators
with the goal of improving quality and efficiency and reducing inter-user variability [46].
Knowledge-based approaches leverage a large database of prior treatment plans (up to
thousands) to develop associations between geometric and dosimetric parameters from
a selection of previous plans in order to determine achievable dose constraints or dose
distributions that can be used for benchmarking the quality of plans [9,80]. ML-based auto
planning was also developed for brachytherapy [81].
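The knowledge-based idea can be illustrated with a minimal sketch on synthetic data: a regression model learns, from prior plans, the relation between a geometric feature (here, an assumed OAR-to-target overlap fraction) and the achieved organ-at-risk dose, and the prediction is then used to benchmark a new plan; feature, dose values, and thresholds are illustrative assumptions.

# Minimal sketch: knowledge-based benchmarking of an OAR dose from a geometric feature
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
overlap_fraction = rng.uniform(0.0, 0.5, 300)                         # geometric parameter from prior plans
mean_oar_dose = 10 + 60 * overlap_fraction + rng.normal(0, 2, 300)    # achieved dose (Gy)

model = LinearRegression().fit(overlap_fraction.reshape(-1, 1), mean_oar_dose)

# Benchmark a new plan: compare its achieved OAR dose with the model's prediction
new_overlap, new_plan_dose = 0.25, 32.0
achievable = model.predict([[new_overlap]])[0]
print(f"Predicted achievable mean OAR dose: {achievable:.1f} Gy; plan delivers {new_plan_dose} Gy")
if new_plan_dose > achievable + 3.0:
    print("Plan may be suboptimal: consider further optimization of this OAR.")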
The dose distribution from radiation therapy treatment can be predicted by DL in
order to speed up the optimization [82] or determine the best achievable dose distribution
from the patient image [83]. ML was applied to predict dose in brachytherapy [84] and
in vivo measured dose in intraoperative radiotherapy [85].
Recently, dosomics, the application of radiomics or DL to the analysis of the dose distribution, possibly converted into biologically effective dose to account for different fractionation schemes, was investigated for its ability to predict side effects of radiation therapy [86,87]. Radiomics can also be applied to cone-beam CT (CBCT) images acquired for image guidance of the radiotherapy treatment, making these images useful for data mining [88].
A major concern of radiotherapy is the change in the anatomy of the patient during
therapy, which could result in unwanted dose changes. In this case, re-planning of the
treatment is warranted. ML can identify significant changes in patient anatomy during
radiotherapy [19] and predict which patients would benefit from adaptive radiotherapy (ART) [89]. Furthermore, by using information extracted from radiomics voxel-based analyses, sensitive or resistant tumor sub-volumes requiring a higher (or lower) dose might be identified, thus enabling dose painting according to a “radiomic target volume” (RTV) [90].
In nuclear medicine, radiometabolic therapy with unsealed (radiopharmaceuticals) or
sealed sources (microspheres, etc.) is of growing importance. The application of AI in this area can improve dosimetry, by accounting for the patient’s anatomy, activity distribution, and tissue density, and treatment planning, in order to administer the highest dose to the target while sparing critical organs, as well as predict treatment response [91]. Methodological
studies have been performed to investigate the robustness of dosomic approaches [92].

3.3. Quality Assurance (QA)


According to the International Organization for Standardization, QA is a system that ensures quality for a given product, service, or process. Quality is the degree to which
the system fulfills requirements (a need or expectation that is stated, generally implied, or obligatory) [93], thus avoiding mistakes and defects. Quality controls (QC) are the tests performed to describe, measure, analyze, improve, and control a certain product or process.
In radiological sciences, QCs are applied to verify and monitor devices and procedures for
diagnosis and therapy, as well as the support systems used by clinicians. AI can be used to automatically perform QCs that, if carried out manually, would not be feasible routinely due to the large amount of time required. AI-based QC systems could learn and improve their accuracy over time and develop new tests without human intervention.
Quality assurance of radiotherapy (RT) is a significant part of the MP’s work, and it
is aimed at preventing radiological incidents and misadministration of radiation dose. A
number of ML-based approaches have been explored to predict errors in treatment plans
in order to automate the chart check of plans. A k-means clustering algorithm was employed to learn from prior plans and detect errors in prostate plans [18].
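A minimal sketch in the same spirit, with invented plan parameters and thresholds rather than the method of [18]: clusters are learned from prior, verified plans, and new plans lying far from every cluster centre are flagged as potential errors.

# Minimal sketch: clustering-based detection of outlier treatment-plan parameters
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Prior, verified plans: [dose per fraction (Gy), number of fractions, total MU]
prior_plans = np.column_stack([
    rng.normal(2.0, 0.05, 200), rng.normal(39, 1, 200), rng.normal(600, 50, 200)
])
scaler = StandardScaler().fit(prior_plans)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaler.transform(prior_plans))

# Tolerance: 99th percentile of the prior plans' distances to their nearest centre
prior_dist = np.min(kmeans.transform(scaler.transform(prior_plans)), axis=1)
threshold = np.percentile(prior_dist, 99)

# New plans to check; the second contains a transcription error in the dose per fraction
new_plans = np.array([[2.0, 39, 610], [20.0, 39, 610]])
new_dist = np.min(kmeans.transform(scaler.transform(new_plans)), axis=1)
print("Potential errors at indices:", np.where(new_dist > threshold)[0])   # expected: [1]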
Automated quality control of LINACs is another promising application of ML, which
can be used for predicting machine performance issues, such as deviation of dose out-
put [94], multileaf collimator (MLC) positions [95], and beam symmetry [96]. A method for
automated quality control of LINACs by ML applied to electronic portal imaging device (EPID) images was proposed, which could identify sag, deviations in the vertical direction, and field shifts [97]. Other AI applications aim at predicting the results of in-phantom patient-specific QA of intensity-modulated RT (IMRT) or volumetric modulated arc therapy (VMAT) [98,99].

4. Challenges and Pitfalls of AI


4.1. Data Size and Quality
ML and DL algorithms require a large number of training samples, which grows rapidly with the dimensionality of the data (the curse of dimensionality). An inappropriate data size will lead to a reduction in the certainty of the prediction, considering that many ML applications will always deliver a result, regardless of the size and quality of the data
set [100]. Unfortunately, a proper metric to evaluate sample size and power for ML and DL
is missing.
Frequently, datasets used for training AI have a small number of samples with respect to the dimensionality of the data and of the desired tasks [101], to the point that there are often more features per subject than subjects in the entire dataset [102]. Under these circumstances, overfitting, a condition where models are more sensitive to noise in the data than to its patterns, and instability occur, making the model poorly reproducible and generalizable, meaning that it will perform poorly on unseen datasets [103].
Feature selection algorithms, such as stepwise feature selection [104], the minimum
redundancy maximum relevance (mRMR) [105], and RELIEF (relevance in estimating
features) [106,107], can be applied to reduce overfitting by selecting a non-redundant
subset of variables best suited to predict the outcome.
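As a minimal illustration (mRMR and RELIEF are not available in scikit-learn, so a univariate mutual-information filter is used as a stand-in), the sketch below keeps the ten features most associated with the outcome in a synthetic dataset that has many more features than subjects.

# Minimal sketch: univariate feature selection to reduce dimensionality before modeling
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# 60 "patients" with 300 candidate radiomic features: more features than subjects
X, y = make_classification(n_samples=60, n_features=300, n_informative=10, random_state=0)

selector = SelectKBest(score_func=mutual_info_classif, k=10).fit(X, y)
X_reduced = selector.transform(X)
print("Selected feature indices:", selector.get_support(indices=True))
print("Reduced matrix shape:", X_reduced.shape)   # (60, 10)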
To reduce overfitting in DL, data augmentation (e.g., by the affine transformation of
the images) during training is commonly implemented [10], and specialized layers, such as dropout layers, are included in the networks to reduce overfitting [108]. On the other hand,
DL suffers from other sources of uncertainties (e.g., the presence of many local minima
in the loss function and the stochastic nature of training algorithms), so that repeating
model training multiple times does not necessarily produce the same model [2]. Besides,
the class imbalance problem, in which some classes have a significantly higher number of
samples, is detrimental for ML performance, if not properly accounted for [109,110]. For
overcoming class imbalance, under-sampling or over-sampling can be applied; the latter
has been proven to be more effective [110].
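A minimal sketch of the over-sampling idea, with assumed class sizes: the minority class is randomly resampled with replacement until the two classes are balanced (dedicated libraries such as imbalanced-learn provide more refined schemes, e.g., SMOTE).

# Minimal sketch: random over-sampling of the minority class
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(4)
X_major = rng.normal(0, 1, size=(180, 5))   # e.g., patients without the side effect
X_minor = rng.normal(1, 1, size=(20, 5))    # e.g., patients with the side effect

# Draw minority samples with replacement until both classes are the same size
X_minor_up = resample(X_minor, replace=True, n_samples=len(X_major), random_state=0)

X_balanced = np.vstack([X_major, X_minor_up])
y_balanced = np.concatenate([np.zeros(len(X_major)), np.ones(len(X_minor_up))])
print("Balanced class counts:", np.bincount(y_balanced.astype(int)))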
Other biases in the training datasets, e.g., in age, gender, and race, or in the diagnostic or therapeutic approach, e.g., the technologies used for imaging or radiotherapy, may result in biased models, which may lead to poor performance for minority groups who are
poorly represented in the training dataset. This could potentially aggravate healthcare
disparities [103].
Another source of unreliability stems from the constant evolution of the patterns of clinical practice over time, due to the introduction of new treatment approaches and technologies or to gradual changes in the patient population (e.g., the percentage of patients with a given histological subtype). This may result in the increasing unreliability of the AI system’s recommendations or predictions over time [30]. The “half-life” of the relevance of clinical data used for training is thought to be typically around 4 months [111].

4.2. Interpretability
Interpretability is the level of understanding of the information that the model extracts
from input data, why it is extracted, and how it arrives at its output [2]. ML models are
usually perceived as black boxes by the users and clinicians, meaning that they have a
low level of interpretability. This issue is exacerbated for deep neural networks, given the
complicated multi-layer structures and numerous numerical operations performed by each
layer, and hinders the application of AI in the clinic.
Graphical approaches can help to improve the interpretability of ML and DL methods. The activation maps extracted by a CNN, overlaid on the analyzed image, can show on which image regions the CNN focuses most strongly for its prediction [112]. For ML
classifiers, interpretation can be facilitated by identification of the most important variables
or features for prediction and comparing their values in illustrative cases, e.g., patients
with a poor and good prognosis, as done in many radiomics studies, e.g., [86,113,114]. In
unsupervised learning, some methods, like t-distributed stochastic neighbor embedding (t-SNE),
allow visualization of high-dimensional data by giving each data point a location in a two
or three-dimensional map [20].
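A minimal sketch of such a visualization, on synthetic high-dimensional data: t-SNE assigns each sample a 2D coordinate, which can then be plotted and coloured by class to inspect whether the feature space separates the groups.

# Minimal sketch: t-SNE embedding of high-dimensional features for visual inspection
import numpy as np
from sklearn.datasets import make_classification
from sklearn.manifold import TSNE

X, y = make_classification(n_samples=150, n_features=50, n_informative=8, random_state=0)

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(embedding.shape)   # (150, 2): one 2D point per sample

# e.g., plot with matplotlib: plt.scatter(embedding[:, 0], embedding[:, 1], c=y)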

4.3. Legal and Ethical Issues


Key ethical issues associated with AI systems automatically mining large patient databases include informed consent, privacy and data protection, ownership, objectivity, transparency of the obtained clinical or research model, and quality of training and validation data [115]. Automating tasks and decisions with the use of AI-based machines on a large scale could bring increased systemic risks of harm and systematic errors. These errors are categorized as errors of omission, when humans do not notice the failure of an AI tool, and of commission, when an action is performed following the AI’s decision even though there is evidence that the AI is wrong [115]. The responsibility to prevent these errors by anticipating incorrect
performance or misuses of AI before incidents occur falls to humans.
A model should be transparent, meaning that its formulas and code should be available and comprehensible so that it is possible to trace why an algorithm has failed and led to adverse clinical events [115]. Data “truthfulness” consists of understanding the type of information contained, its completeness and accuracy, its variance and bias, and whether it reflects the problem of interest. Because of the “black box” phenomenon, informing
the patient clearly could become more difficult for the doctor when a decision is influenced
by AI [116].
AI systems’ decisions are based on the data used for training, the algorithms that
are used, and what they have learned since their creation [117]. If some human biases,
such as variability in healthcare because of ethnic, social, environmental, or economic
factors, or clinically confounding factors, such as comorbidities, are present in the training
data, they could result in biased decisions of the AI systems [28,117]. Since AI does not
incorporate ethical concepts like equality, humans who use AI will hold the responsibility
for preventing these errors [115]. Finally, before integrating AI into medical practice, it is important to prevent “deskilling”, the loss of competence of humans who are no longer able to carry out a task they used to perform because it has been transferred to the AI [116].

5. Role of MP
5.1. Imaging
As already underlined in this paper, one of the major tasks in which the MP is deeply
involved in the imaging field is the optimization process, i.e., finding the balance between
dose and image quality.
The MP understands the components of the imaging device used and the basic physical mechanisms at the root of signal change and image contrast, and comprehends the technical and/or physiological artifacts limiting its performance [4,118]. Moreover, the MP understands the limitations and potential pitfalls of dose measurement, calculation, and prediction [90]. Thus, the MP has knowledge and skills that are of value for the development, implementation, and use of AI in imaging.
AI-based systems have been developed to estimate patient dose. The MP shall validate and periodically check these systems to avoid possible errors in the estimation. For example, the dose to each voxel in the calculated distribution depends on the dose calculation algorithm used, on the calculation voxel spacing, and on the uncertainty in dose measurement in the dataset used for ML training. In-phantom dose measurements can be planned by the MP to test the algorithms’ predictions.
The MP shall also assess image quality through routine testing [119]. Recently, image quality enhancers based on DL have been introduced into clinical practice. Consequently, image acquisition protocols could be updated
to achieve dose reduction, and the MP will be involved in the optimization to ensure the
minimum possible ionizing radiation dose to the patient [119,120].
It is also necessary to verify to what extent changes in the imaging parameters influence the quantitative image content and, consequently, the response of AI systems. To
this purpose, various physical phantoms have been developed. The Credence Cartridge
Radiomics (CCR) phantom for radiomics was created for CT [121] and CBCT [122] images.
More recently, anthropomorphic phantoms with heterogeneous objects were designed
in order to simulate the texture of lung nodules [123]. PET phantoms with 3D printed
inserts simulating heterogeneities in FDG uptake have been proposed [124], as well as MR
phantoms simulating relaxation times and texture of pelvic tissue and malignancies [125].
Using these kinds of phantoms, the sensitivity of radiomics-based ML classifications to image acquisition parameters has been investigated. In CT, the classification is affected by the device used [121], the method of image reconstruction [126], noise reduction algorithms, and slice thickness [127,128]. PET features depend on acquisition mode [129,130], reconstruc-
tion algorithm, image resolution, and discretization [131,132]. MRI features are sensitive
to the field of view, field strength, pulse sequence, reconstruction algorithm, and slice
thickness [133].
Physical and digital phantoms could also be used to periodically verify the performance of image-based ML algorithms. Digital phantoms are usually representative scans
of patients with known acquisition parameters. A dataset of CTs acquired twice on the
same patient 15 min apart allows “test-retest”, an assessment of the reproducibility of the
radiomics workflow under the same conditions [127].
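One simple way to quantify test-retest reproducibility is Lin’s concordance correlation coefficient (CCC) between the feature values from the two scans; the sketch below, with synthetic measurements and an assumed acceptance threshold, discards features whose CCC is low.

# Minimal sketch: test-retest reproducibility of a radiomic feature via the CCC
import numpy as np

def concordance_cc(x, y):
    # Lin's concordance correlation coefficient between two measurements
    mx, my = np.mean(x), np.mean(y)
    vx, vy = np.var(x), np.var(y)
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(5)
feature_scan1 = rng.normal(100, 15, 30)                  # feature over 30 patients, scan 1
feature_scan2 = feature_scan1 + rng.normal(0, 3, 30)     # same feature, repeat scan

ccc = concordance_cc(feature_scan1, feature_scan2)
print(f"CCC = {ccc:.3f}; keep feature" if ccc > 0.85 else f"CCC = {ccc:.3f}; discard feature")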
The accuracy of AI-generated segmentation, image reconstruction, and synthetic images (e.g., MRI) can be assessed using ground truth digital phantoms, for example of brain glioma patients [133], and image simulators capable of simulating MRI acquired with different pulse sequences or field strengths and reconstructed with different methods [133].
Specific tests allow assessing the accuracy of AI-based image registration [134].
In addition, the MP can ensure the correct extraction and quantitative analysis of imaging data. Thus, before performing quantitative analysis with AI algorithms, the accuracy and precision associated with the quantitative parameters within the images (e.g., tumors) should be assessed [29]. Moreover, the MP is responsible for the pre-processing of images necessary for correct AI application. This includes the conversion of PET and SPECT images into standardized uptake values (SUV), the standardization of the MR image intensity scale [135], as well as the assessment and correction of confounding factors in the images, such
as artifacts from metal implants in CT, magnetic field non-uniformity in MRI, and partial
volume effect (PVE) in nuclear medicine images. Multimodal images should be registered
using a proper method for rigid or deformable registration [136], a critical step that may
affect the accuracy of AI models analyzing hybrid image datasets voxel by voxel [137] in
order to combine metabolic, functional, and morphologic information.
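As an example of such pre-processing, the body-weight SUV of a PET voxel can be computed from the activity concentration, the injected activity decay-corrected to the scan time, and the patient weight; the sketch below uses illustrative numbers and the F-18 half-life.

# Minimal sketch: body-weight SUV conversion for a PET voxel value
import numpy as np

def suv_bw(activity_conc_bq_ml, injected_mbq, weight_kg, delay_min, half_life_min=109.77):
    # Body-weight SUV; half_life_min defaults to F-18 (approx. 109.77 min)
    injected_bq = injected_mbq * 1e6
    decayed_bq = injected_bq * np.exp(-np.log(2) * delay_min / half_life_min)
    weight_g = weight_kg * 1000.0
    # 1 mL of tissue is assumed to weigh about 1 g, making the SUV dimensionless
    return activity_conc_bq_ml / (decayed_bq / weight_g)

print(f"SUV = {suv_bw(activity_conc_bq_ml=12000, injected_mbq=300, weight_kg=75, delay_min=60):.2f}")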
In interventional radiology, MPs are involved in monitoring patients’ dose and managing patients’ radiation risks by reviewing interventional procedures [138]. The involvement of MPs will also extend to the safe implementation and QA of other AI systems for interventional therapies, such as robotic angiographs, neuro-navigators, and robots, and of platforms such as catheter navigation assistants, which analyze the relationships between catheter positions, therapeutic effect, and patient outcomes.
In other fields of medical imaging where AI is rapidly emerging, such as pathology
imaging, MPs can support the acceptance and validation of AI systems. Recently, the pathology Digital Imaging and Communications in Medicine (DICOM) file format has standardized the representation, storage, and communication of pathology images acquired with whole-slide scanners [139]. Common acquisition protocols could reduce the variability
in slide preparation and digitization procedures and scanner models among different
centers and improve the performance of AI detection systems.

5.2. Data Collection and Curation


Given their skills in numerical analysis and clinical integration, MPs can significantly
aid in the management of aggregate data [4], which will include clinical and image data
from multiple modalities, such as PET, CT, radiography, MRI, ultrasound, daily CBCT, hybrid imaging such as PET/CT and PET/MRI, 3D/4D images and image time series, and 3D/4D dose distributions from RT. The MP will be involved in the development of metrics to assess
the quality and completeness of data, methods to curate data, and QA programs of data
archives [140].
CAD systems and other AI-based decision systems using images as input will need
minimum quality specifications and acquisition protocols in order to ensure output accuracy.
The MP can ensure that the images are acquired according to the protocol required for
correct AI use, free from relevant imaging artifacts, and correctly preprocessed [141] and
harmonized [142] to reduce variability.
Moreover, the MP can ensure that image data, together with their acquisition parameters and the dosimetric data from imaging and therapy, are stored in commonly accepted standards, such as Digital Imaging and Communications in Medicine (DICOM) or a comparable format, and can help create new standards for storing raw acquisition data [143]. The MP will necessarily oversee the storage, security, and integrity of
the large, machine-readable data collections needed to build a model [103]. The QA of
datasets is a guarantee for the clinician, patient, and patient associations of the ethical and
unbiased use of patients’ health data by AI systems.

5.3. Commissioning and Validation of AI


Commissioning of AI tools is a series of tests to assess whether the system installed at the
local site operates correctly and is ready for clinical use. The commissioning tasks, tests,
schedule, and tolerances, with the required equipment and human resources, should be
planned before installation [30]. The test plan could consist, for example, of applying AI to
a set of well-known clinical cases, for which ground truth data are available. Comparison
of different ML methods on the same dataset is useful and can show which ML algorithms
have the best performance and which are more prone to overfitting data for the task at
hand [85,144]. A technique called adversarial ML, where attempts to deceive models are
carried out with a number of crafted configurations of data, e.g., by adding noise to images,
can be used for quality assessment of many classes of ML and DL algorithms [145,146].
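A minimal sketch of a robustness check in this spirit, with an assumed model, synthetic data, and arbitrary noise levels: Gaussian noise of increasing amplitude is added to the validation inputs, and the drop in accuracy is monitored.

# Minimal sketch: noise-robustness check of a trained classifier
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

rng = np.random.default_rng(6)
for sigma in [0.0, 0.1, 0.5, 1.0, 2.0]:
    X_noisy = X_test + rng.normal(0, sigma, X_test.shape)
    acc = accuracy_score(y_test, model.predict(X_noisy))
    print(f"noise sigma = {sigma:.1f} -> accuracy = {acc:.2f}")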
The lack of interpretability of AI systems—or ‘black-box’ problem—constitutes an
obstacle towards their adoption in the clinic [10]. Monitoring AI performance by proper
quality controls that test the models in well-known situations can improve the interpretability of the models, as can assessing the architectures of DL models and their outputs using activation and feature maps.
An initiative led by the US FDA, the Microarray/Sequencing Quality Control (MAQC/SEQC) [147], invites researchers to submit their models, the features selected as important, and performance estimates according to a specific data analysis plan (DAP), which includes ML and statistical crosschecks, before performing external validation [100].
Validation, e.g., using the criteria in the TRIPOD statement [148], is required because
many of the available AI models are trained using small datasets, and although augmen-
tation and resampling methods are frequently applied, they are affected by overfitting
and poor generalizability and reproducibility [112]. Large and possibly multi-institutional datasets, independent of the training datasets, with realistic variability and the lowest possible bias, are needed for validation. These can be achieved by increasing the level
of collaboration among institutions [112], and the MP can play a role in checking the
compliance with the required standards.

5.4. AI in Radiotherapy
MPs contributed to making radiotherapy into a frontier of personalized precision
medicine by developing CT-based dose calculation, treatment planning, and image-guided
radiation therapy (IGRT) [90]. Other traditional domains of MPs in radiotherapy include
quality assurance and radiation protection [90]. MPs have also been at the forefront of using AI in RT, leading to the implementation of knowledge-based treatment planning, where ML algorithms are trained on datasets comprising patient images, contours, clinical information, and treatment plans prepared by experienced MPs, in order to automatically develop high-quality plans and accelerate radiotherapy plan design [46].
As with any other ML-based procedure, auto-planning systems are only as good as
their human-generated training data, and their outcome will need to be tested and finally
approved. Oftentimes, the proposed plan will need to be customized and modified by
clinical MPs because of the unique anatomy of every patient. More importantly, when
potential issues are identified for a specific plan, MPs communicate with other team
members, such as physicians, therapists, and dosimetrists, to reach a clinically acceptable
solution [149].
MPs are involved in validation and quality assurance of dose predicted by DL [90],
which can be tested by properly designed in-phantom film/ion chamber measurements
according to dosimetry protocols and benchmarking against previously established dose
calculation algorithms. Another critical aspect is investigating how dose uncertainties affect prognostic or predictive dosomic models [90].
Given their familiarity with imaging devices and LINACs derived from managing QA
programs, MPs will have a critical role in the analysis of AI applied to the quality control of
LINACs. When an AI tool predicts a machine failure, MPs can help identify the cause of
the issue and corrective actions, such as calibrations [149].

5.5. Safety/Risk Management


One of the key activities of the MP is patient safety management, that is, the evaluation
of medical devices and procedures to guarantee the safety of patients. MPs are trained to
prevent and analyze accidents [149] by using risk assessment, which consists of the analysis
of events potentially involving accidental medical exposures or injury to a patient [150],
and failure modes and effects analysis (FMEA) [151].
ML has the potential to reduce imaging radiation exposure, which is a hazard for
patients and workers, without penalizing image quality [152].

5.6. Periodical Tests


QA should be applied to AI systems themselves, which, having an impact on patients’ health, should be considered medical devices [153]. Physicists are also responsible for
ensuring that clinically used AI algorithms continue to perform with the desired level of
accuracy by conducting an appropriate routine QA test program with clearly established
frequency, metrics, tolerance levels, and actions to be performed in case of test failure [103].
The frequency and nature of the series of tests will need frequent updating, given the rapid pace of evolution of AI.
This is especially important for those AI systems that, being constantly learning and
updating, will be subject to change in terms of their response and accuracy [94,119]. At the
same time, it is critical to assess the effect of the decay of the relevance of the training data
due to changes in practices (e.g., changes in prescribed dose and dose per fractions) [94].

5.7. Training of AI Users


According to a white paper of the Canadian Association of Radiologists [154], training should provide practitioners with an understanding of the value, the pitfalls, the weaknesses, and the potential errors that may occur in the use of AI products. The medical physics associ-
ations are launching initiatives to provide appropriate training and education programs in
the field of AI applied to imaging and therapy [90]. On the other hand, being skilled at the communication and dissemination of science, MPs are critical to establishing a common language with other professionals and patients [155]; MPs can take part in the education and training of other healthcare professionals in the use of AI, and be a part of the interdisciplinary team working for the effective, efficient, and safe delivery of AI in the clinic [3].

5.8. Research in AI
MPs are often active researchers and, having expertise also in statistics, mathematics,
and informatics, are suitable for research in AI. Extensive research is needed to understand
how to successfully introduce AI and define the use and characteristics of AI in clinical
practice [119].
Other active areas of research where MPs will be primarily involved include assessing
data veracity and validity, developing metrics for completeness, accuracy, correctness, and
consistency, and performing data cleaning activities [140]. Physicists should promote the integration of digital information from diagnostic and therapeutic procedures with genotyping and phenotyping data into large data sets acquired across all areas (clinical, dosimetric, imaging, molecular, pathological, etc.), requiring multi-institutional and multinational collaboration [24,90]. Examples of this are The Cancer Imaging Archive (TCIA) [156] and the Platform for Imaging in Precision Medicine (PRISM) [157].
The specific tasks for MPs in AI research include defining the problem to be solved and determining its category (e.g., classification, regression, pattern recognition) in the lexicon of AI, choosing proper models to be trained, determining a strategy for collecting data from the appropriate dataset, and validating the model [103]. MPs also need to investigate and report the possible pitfalls of the AI-based methods developed and how to overcome them. Another challenge is personalizing therapy according to AI output, e.g., dose painting in radiotherapy [90].
Privacy, security, secure access to health information, de-identification of sensitive
data, and obtaining informed consent, which are also of concern in research areas, become
more relevant in the era of big data. The MP involved in these research areas will be
required to apply the statements and recommendations released by governmental agencies,
scientists, healthcare providers, companies, and other interested parties and will have an
active role in formulating these statements [140].
Moreover, if MPs work on developing AI models or fine-tuning them on their data, they
have to carefully understand and address the limitations of the data used for training and of
the trained models [94]. Exploring multiple approaches, such as different feature selection
and ML methods and their combinations, can help in understanding these limitations.
The Findability, Accessibility, Interoperability, and Reusability (FAIR) principles are intended to guide researchers in data management and reporting [158]. The methodology of research studies should be detailed thoroughly, including the deep learning architectures
and optimization parameters, and the datasets used to train models should be clearly
described in order to increase reproducibility and facilitate meta-analysis. Moreover,
decision, automation, and prediction models relying on AI must be tested in independent
and sufficiently large datasets to compare their validity against established methods,
including conventional biomarkers (e.g., clinical, radiological, etc.). The codes and data
used for training and testing the models should be made publicly available, e.g., by The
Cancer Imaging Archive. More guidelines for improving the transparency and reproducibility of models can be found in the TRIPOD statement [148].

6. Conclusions
AI can extend the expertise area of MPs, extracting even more information to improve
patient care, and the MP is ready to welcome the AI revolution. On the other hand, the MPs’
knowledge and skills will be required and beneficial for safe and optimal implementation
of AI, especially in radiological sciences, and their involvement in the multidisciplinary AI
team is crucial.

Author Contributions: Writing—Original Draft preparation: M.A., M.I.; Writing—Review & Editing:
M.A., A.T., F.B., C.T., M.S., M.I. All authors have read and agreed to the published version of
the manuscript.
Funding: This research was funded by the Associazione Italiana di Fisica Medica e Sanitaria (AIFM).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Hashimoto, D.A.; Rosman, G.; Rus, D.; Meireles, O.R. Artificial Intelligence in Surgery: Promises and Perils. Ann. Surg. 2018, 268,
70–76. [CrossRef]
2. Shen, C.; Nguyen, D.; Zhou, Z.; Jiang, S.B.; Dong, B.; Jia, X. An introduction to deep learning in medical physics: Advantages, potential, and challenges. Phys. Med. Biol. 2020, 65, 05TR01. [CrossRef]
3. Xing, L.; Krupinski, E.A.; Cai, J. Artificial intelligence will soon change the landscape of medical physics research and practice.
Med. Phys. 2018, 45, 1791–1793. [CrossRef]
4. Samei, E.; Grist, T.M. Why physics in medicine? Phys. Med. 2019, 64, 319–322. [CrossRef]
5. Samei, E.; Pawlicki, T.; Bourland, D.; Chin, E.; Das, S.; Fox, M.; Freedman, D.J.; Hangiandreou, N.; Jordan, D.; Martin, M.; et al.
Redefining and reinvigorating the role of physics in clinical medicine: A Report from the AAPM Medical Physics 3.0 Ad Hoc
Committee. Med. Phys. 2018, 45, e783–e789. [CrossRef] [PubMed]
6. Biehl, M.; Caticha, N.; Opper, M.; Villmann, T. Statistical Physics of Learning and Inference. In Proceedings of the European Sym-
posium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 24–26 April 2019.
7. Ramezanpour, A.; Beam, A.L.; Chen, J.H.; Mashaghi, A. Statistical Physics for Medical Diagnostics: Learning, Inference, and
Optimization Algorithms. Diagnostics 2020, 10, 972. [CrossRef] [PubMed]
8. Tang, X.; Wang, B.; Rong, Y. Artificial intelligence will reduce the need for clinical medical physicists. J. Appl. Clin. Med. Phys.
2018, 19, 6–9. [CrossRef] [PubMed]
9. Thompson, R.F.; Valdes, G.; Fuller, C.D.; Carpenter, C.M.; Morin, O.; Aneja, S.; Lindsay, W.D.; Aerts, H.J.W.L.; Agrimson, B.C.;
Deville, C., Jr.; et al. Artificial intelligence in radiation oncology: A specialty-wide disruptive transformation? Radiother. Oncol.
2018, 129, 421–426. [CrossRef]
10. Avanzo, M.; Wei, L.; Stancanello, J.; Vallieres, M.; Rao, A.; Morin, O.; Mattonen, S.A.; El Naqa, I. Machine and deep learning
methods for radiomics. Med. Phys. 2020, 47, e185–e202. [CrossRef]
11. Chen, S.; Zhou, S.; Yin, F.F.; Marks, L.B.; Das, S.K. Investigation of the support vector machine algorithm to predict lung
radiation-induced pneumonitis. Med. Phys. 2007, 34, 3808–3814. [CrossRef]
12. Avanzo, M.; Stancanello, J.; El Naqa, I. Beyond imaging: The promise of radiomics. Phys. Med. 2017, 38, 122–139. [CrossRef]
13. Galar, M.; Fernandez, A.; Barrenechea, E.; Bustince, H.; Herrera, F. A Review on Ensembles for the Class Imbalance Problem:
Bagging-, Boosting-, and Hybrid-Based Approaches. IEEE Trans. Syst. Man, Cybern. Part C Applications Rev. 2012, 42, 463–484.
[CrossRef]
14. Ben-Bassat, M.; Klove, K.L.; Weil, M.H. Sensitivity Analysis in Bayesian Classification Models: Multiplicative Deviations. IEEE
Trans. Pattern Anal. Mach. Intell. 1980, PAMI-2, 261–266. [CrossRef] [PubMed]
15. Kukar, M.; Kononenko, I.; Silvester, T. Machine learning in prognosis of the femoral neck fracture recovery. Artif. Intell. Med.
1996, 8, 431–451. [CrossRef]
16. Tseng, H.; Wei, L.; Cui, S.; Luo, Y.; Haken, R.K.T.; El Naqa, I. Machine Learning and Imaging Informatics in Oncology. Oncology
2020, 98, 344–362. [CrossRef] [PubMed]
17. Syeda-Mahmood, T. Role of Big Data and Machine Learning in Diagnostic Decision Support in Radiology. J. Am. Coll. Radiol.
2018, 15, 569–576. [CrossRef]
18. Azmandian, F.; Kaeli, D.; Dy, J.G.; Hutchinson, E.; Ancukiewicz, M.; Niemierko, A.; Jiang, S.B. Towards the development of an
error checker for radiotherapy treatment plans: A preliminary study. Phys. Med. Biol. 2007, 52, 6511–6524. [CrossRef]
19. Chetvertkov, M.A.; Siddiqui, F.; Kim, J.; Chetty, I.; Kumarasiri, A.; Liu, C.; Gordon, J.J. Use of regularized principal component
analysis to model anatomical changes during head and neck radiation therapy for treatment adaptation and response assessment.
Med Phys. 2016, 43, 5307–5319. [CrossRef]
20. Maaten, L.v.d.; Hinton, G.E. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
21. Sanders, J.C.; Ritt, P.; Kuwert, T.; Vija, A.H.; Maier, A.K. Fully Automated Data-Driven Respiratory Signal Extraction From SPECT
Images Using Laplacian Eigenmaps. IEEE Trans. Med Imaging 2016, 35, 2425–2435. [CrossRef] [PubMed]
22. Groenhof, T.K.J.; Koers, L.R.; Blasse, E.; de Groot, M.; Grobbee, D.E.; Bots, M.L.; Asselbergs, F.W.; Lely, A.T.; Haitjema, S.; van
Solinge, W.; et al. Data mining information from electronic health records produced high yield and accuracy for current smoking
status. J. Clin. Epidemiol. 2020, 118, 100–106. [CrossRef]
23. Gultepe, E.; Green, J.P.; Nguyen, H.; Adams, J.; Albertson, T.; Tagkopoulos, I. From vital signs to clinical outcomes for patients
with sepsis: A machine learning basis for a clinical decision support system. J. Am. Med Inform. Assoc. 2014, 21, 315–325.
[CrossRef]
24. Chamunyonga, C.; Edwards, C.; Caldwell, P.; Rutledge, P.; Burbery, J. The Impact of Artificial Intelligence and Machine Learning
in Radiation Therapy: Considerations for Future Curriculum Enhancement. J. Med Imaging Radiat. Sci. 2020, 51, 214–220.
[CrossRef]
25. Pons, E.; Braun, L.M.; Hunink, M.G.; Kors, J.A. Natural Language Processing in Radiology: A Systematic Review. Radiology 2016,
279, 329–343. [CrossRef] [PubMed]
26. Kreimeyer, K.; Foster, M.; Pandey, A.; Arya, N.; Halford, G.; Jones, S.F.; Forshee, R.; Walderhaug, M.; Botsis, T. Natural language
processing systems for capturing and standardizing unstructured clinical information: A systematic review. J. Biomed. Inform.
2017, 73, 14–29. [CrossRef] [PubMed]
27. Burger, G.; Abu-Hanna, A.; de Keizer, N.; Cornet, R. Natural language processing in pathology: A scoping review. J. Clin. Pathol.
2016, 69, 949–955. [CrossRef] [PubMed]
28. Benke, K.; Benke, G. Artificial Intelligence and Big Data in Public Health. Int. J. Environ. Res. Public Health 2018, 15, 2796.
[CrossRef]
29. Castiglioni, I.; Gallivanone, F.; Soda, P.; Avanzo, M.; Stancanello, J.; Aiello, M.; Interlenghi, M.; Salvatore, M. AI-based applications
in hybrid imaging: How to build smart and truly multi-parametric decision models for radiomics. Eur. J. Nucl. Med. Mol. Imaging
2019, 46, 2673–2699. [CrossRef] [PubMed]
30. Mahadevaiah, G.; Rv, P.; Bermejo, I.; Jaffray, D.; Dekker, A.; Wee, L. Artificial intelligence-based clinical decision support in
modern medical physics: Selection, acceptance, commissioning, and quality assurance. Med Phys. 2020, 47, e228–e235. [CrossRef]
31. Welch, M.L.; McIntosh, C.; McNiven, A.; Huang, S.H.; Zhang, B.B.; Wee, L.; Traverso, A.; O’Sullivan, B.; Hoebers, F.; Dekker, A.;
et al. User-controlled pipelines for feature integration and head and neck radiation therapy outcome predictions. Phys. Medica
2020, 70, 145–152. [CrossRef]
32. El Naqa, I.; Li, R.; Murphy, M.J. Machine Learning in Radiation Oncology: Theory and Applications; Springer: Berlin, Germany, 2015.
33. Giger, M.L.; Karssemeijer, N.; Schnabel, J.A. Breast image analysis for risk assessment, detection, diagnosis, and treatment of
cancer. Annu. Rev. Biomed. Eng. 2013, 15, 327–357. [CrossRef] [PubMed]
34. Elter, M.; Horsch, A. CADx of mammographic masses and clustered microcalcifications: A review. Med. Phys. 2009, 36, 2052–2068.
[CrossRef]
35. Chen, C.H.; Chang, C.K.; Tu, C.Y.; Liao, W.C.; Wu, B.R.; Chou, K.T.; Chiou, Y.R.; Yang, S.N.; Zhang, G.; Huang, T.C. Radiomic
features analysis in computed tomography images of lung nodule classification. PLoS ONE 2018, 13, e0192002. [CrossRef]
36. Weng, Q.; Zhou, L.; Wang, H.; Hui, J.; Chen, M.; Pang, P.; Zheng, L.; Xu, M.; Wang, Z.; Ji, J. A radiomics model for determining
the invasiveness of solitary pulmonary nodules that manifest as part-solid nodules. Clin. Radiol. 2019, 74, 933–943. [CrossRef]
37. Botta, F.; Raimondi, S.; Rinaldi, L.; Bellerba, F.; Corso, F.; Bagnardi, V.; Origgi, D.; Minelli, R.; Pitoni, G.; Petrella, F.; et al.
Association of a CT-Based Clinical and Radiomics Score of Non-Small Cell Lung Cancer (NSCLC) with Lymph Node Status and
Overall Survival. Cancers 2020, 12, 1432. [CrossRef] [PubMed]
38. Cong, M.; Feng, H.; Ren, J.L.; Xu, Q.; Cong, L.; Hou, Z.; Wang, Y.Y.; Shi, G. Development of a predictive radiomics model for
lymph node metastases in pre-surgical CT-based stage IA non-small cell lung cancer. Lung Cancer 2020, 139, 73–79. [CrossRef]
39. Avanzo, M.; Stancanello, J.; Pirrone, G.; Sartor, G. Radiomics and deep learning in lung cancer. Strahlenther. Onkol. 2020, 196,
879–887. [CrossRef]
40. Stanzione, A.; Gambardella, M.; Cuocolo, R.; Ponsiglione, A.; Romeo, V.; Imbriaco, M. Prostate MRI radiomics: A systematic
review and radiomic quality score assessment. Eur. J. Radiol. 2020, 129, 109095. [CrossRef] [PubMed]
41. Algohary, A.; Viswanath, S.; Shiradkar, R.; Ghose, S.; Pahwa, S.; Moses, D.; Jambor, I.; Shnier, R.; Bohm, M.; Haynes, A.M.; et al.
Radiomic features on MRI enable risk categorization of prostate cancer patients on active surveillance: Preliminary findings. J.
Magn. Reson. Imaging 2018, 48, 818–828. [CrossRef]
42. Zhang, Z.; Yang, J.; Ho, A.; Jiang, W.; Logan, J.; Wang, X.; Brown, P.D.; McGovern, S.L.; Guha-Thakurta, N.; Ferguson, S.D.; et al.
A predictive model for distinguishing radiation necrosis from tumour progression after gamma knife radiosurgery based on
radiomic features from MR images. Eur. Radiol. 2018, 28, 2255–2263. [CrossRef]
43. Hatt, M.; Tixier, F.; Visvikis, D.; Le Rest, C.C. Radiomics in PET/CT: More Than Meets the Eye? J. Nucl. Med. 2016, 58, 365–366.
[CrossRef]
44. Lee, S.E.; Han, K.; Kwak, J.Y.; Lee, E.; Kim, E.K. Radiomics of US texture features in differential diagnosis between triple-negative
breast cancer and fibroadenoma. Sci. Rep. 2018, 8, 1–8. [CrossRef]
45. Sapate, S.G.; Mahajan, A.; Talbar, S.N.; Sable, N.; Desai, S.; Thakur, M. Radiomics based detection and characterization of
suspicious lesions on full field digital mammograms. Comput. Methods Progr. Biomed. 2018, 163, 1–20. [CrossRef]
46. Jarrett, D.; Stride, E.; Vallis, K.; Gooding, M.J. Applications and limitations of machine learning in radiation oncology. Br. J. Radiol.
2019, 92, 20190001. [CrossRef] [PubMed]
47. Skourt, B.A.; El Hassani, A.; Majda, A. Lung CT Image Segmentation Using Deep Neural Networks. Procedia Comput. Sci. 2018,
127, 109–113. [CrossRef]
48. Zhong, Z.; Kim, Y.; Zhou, L.; Plichta, K.; Allen, B.; Buatti, J.; Wu, X. 3D fully convolutional networks for co-segmentation of
tumors on PET-CT images. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018),
Washington, DC, USA, 4–7 April 2018; pp. 228–231.
49. Peng, Z.; Fang, X.; Yan, P.; Shan, H.; Liu, T.; Pei, X.; Wang, G.; Liu, B.; Kalra, M.K.; Xu, X.G. A method of rapid quantification of
patient-specific organ doses for CT using deep-learning-based multi-organ segmentation and GPU-accelerated Monte Carlo dose
computing. Med. Phys. 2020, 47, 2526–2536. [CrossRef]
50. Gotz, T.I.; Schmidkonz, C.; Chen, S.; Al-Baddai, S.; Kuwert, T.; Lang, E.W. A deep learning approach to radiation dose estimation.
Phys. Med. Biol. 2019, 65, 035007. [CrossRef]
51. Kaplan, S.; Zhu, Y.M. Full-Dose PET Image Estimation from Low-Dose PET Image Using Deep Learning: A Pilot Study. J. Digit.
Imaging 2019, 32, 773–778. [CrossRef] [PubMed]
52. Roser, P.; Zhong, X.; Birkhold, A.; Strobel, N.; Kowarschik, M.; Fahrig, R.; Maier, A. Physics-driven learning of x-ray skin dose
distribution in interventional procedures. Med. Phys. 2019, 46, 4654–4665. [CrossRef]
53. Meineke, A.; Rubbert, C.; Sawicki, L.M.; Thomas, C.; Klosterkemper, Y.; Appel, E.; Caspers, J.; Bethge, O.T.; Kropil, P.; Antoch,
G.; et al. Potential of a machine-learning model for dose optimization in CT quality assurance. Eur. Radiol. 2019, 29, 3705–3713.
[CrossRef]
54. Gong, K.; Guan, J.; Liu, C.; Qi, J. PET Image Denoising Using a Deep Neural Network Through Fine Tuning. IEEE Trans. Radiat.
Plasma Med. Sci. 2019, 3, 153–161. [CrossRef]
55. Xie, S.; Zheng, X.; Chen, Y.; Xie, L.; Liu, J.; Zhang, Y.; Yan, J.; Zhu, H.; Hu, Y. Artifact Removal using Improved GoogLeNet for
Sparse-view CT Reconstruction. Sci. Rep. 2018, 8, 1–9. [CrossRef]
56. Han, X. MR-based synthetic CT generation using a deep convolutional neural network method. Med. Phys. 2017, 44, 1408–1419.
[CrossRef]
57. Kleesiek, J.; Morshuis, J.N.; Isensee, F.; Deike-Hofmann, K.; Paech, D.; Kickingereder, P.; Köthe, U.; Rother, C.; Forsting, M.; Wick,
W.; et al. Can Virtual Contrast Enhancement in Brain MRI Replace Gadolinium: A Feasibility Study. Investig. Radiol. 2019, 54,
653–660. [CrossRef]
58. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sanchez,
C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [CrossRef] [PubMed]
59. Kesner, A.; Schmidtlein, C.R.; Kuntner, C. Real-time data-driven motion correction in PET. EJNMMI Phys. 2019, 6, 3. [CrossRef]
[PubMed]
60. Li, M.; Fu, S.; Zhu, Y.; Liu, Z.; Chen, S.; Lu, L.; Liang, C. Computed tomography texture analysis to facilitate therapeutic decision
making in hepatocellular carcinoma. Oncotarget 2016, 7, 13248–13259. [CrossRef]
61. Yu, J.Y.; Zhang, H.P.; Tang, Z.Y.; Zhou, J.; He, X.J.; Liu, Y.Y.; Liu, X.J.; Guo, D.J. Value of texture analysis based on enhanced MRI
for predicting an early therapeutic response to transcatheter arterial chemoembolisation combined with high-intensity focused
ultrasound treatment in hepatocellular carcinoma. Clin. Radiol. 2018, 73, 758.e9–758.e18. [CrossRef]
62. Iezzi, R.; Goldberg, S.N.; Merlino, B.; Posa, A.; Valentini, V.; Manfredi, R. Artificial Intelligence in Interventional Radiology: A
Literature Review and Future Perspectives. J. Oncol. 2019, 2019, 6153041. [CrossRef] [PubMed]
63. van Timmeren, J.E.; van Elmpt, W.; Leijenaar, R.T.H.; Reymen, B.; Monshouwer, R.; Bussink, J.; Paelinck, L.; Bogaert, E.; de Wagter,
C.; Elhaseen, E.; et al. Longitudinal radiomics of cone-beam CT images from non-small cell lung cancer patients: Evaluation of the
added prognostic value for overall survival and locoregional recurrence. Radiother. Oncol. 2019, 136, 78–85. [CrossRef] [PubMed]
64. Rahmim, A.; Huang, P.; Shenkov, N.; Fotouhi, S.; Davoodi-Bojd, E.; Lu, L.; Mari, Z.; Soltanian-Zadeh, H.; Sossi, V. Improved
prediction of outcome in Parkinson’s disease using radiomics analysis of longitudinal DAT SPECT images. Neuroimage Clin. 2017,
16, 539–544. [CrossRef] [PubMed]
65. Moraru, A.D.; Costin, D.; Moraru, R.L.; Branisteanu, D.C. Artificial intelligence and deep learning in ophthalmology—Present
and future (Review). Exp. Ther. Med. 2020, 20, 3469–3473. [CrossRef]
66. Ricciardi, C.; Cantoni, V.; Improta, G.; Iuppariello, L.; Latessa, I.; Cesarelli, M.; Triassi, M.; Cuocolo, A. Application of data mining
in a cohort of Italian subjects undergoing myocardial perfusion imaging at an academic medical center. Comput. Methods Progr.
Biomed. 2020, 189, 105343. [CrossRef]
67. Moccia, S.; Banali, R.; Martini, C.; Muscogiuri, G.; Pontone, G.; Pepi, M.; Caiani, E.G. Development and testing of a deep
learning-based strategy for scar segmentation on CMR-LGE images. MAGMA Magn. Reson. Mater. Phys. Biol. Med. 2018, 32,
187–195. [CrossRef]
68. Stoel, B. Use of artificial intelligence in imaging in rheumatology—Current status and future perspectives. RMD Open 2020, 6,
e001063. [CrossRef]
69. Bera, K.; Schalper, K.A.; Rimm, D.L.; Velcheti, V.; Madabhushi, A. Artificial intelligence in digital pathology—New tools for
diagnosis and precision oncology. Nat. Rev. Clin. Oncol. 2019, 16, 703–715. [CrossRef] [PubMed]
70. Piehowski, P.D.; Zhu, Y.; Bramer, L.M.; Stratton, K.G.; Zhao, R.; Orton, D.J.; Moore, R.J.; Yuan, J.; Mitchell, H.D.; Gao, Y.; et al.
Automated mass spectrometry imaging of over 2000 proteins from tissue sections at 100-µm spatial resolution. Nat. Commun.
2020, 11, 8. [CrossRef] [PubMed]
71. Alexandrov, T. Spatial Metabolomics and Imaging Mass Spectrometry in the Age of Artificial Intelligence. Annu. Rev. Biomed.
Data Sci. 2020, 3, 61–87. [CrossRef]
72. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput.
Intell. Neurosci. 2018, 2018, 7068349. [CrossRef] [PubMed]
73. Cai, T.; Giannopoulos, A.A.; Yu, S.; Kelil, T.; Ripley, B.; Kumamaru, K.K.; Rybicki, F.J.; Mitsouras, D. Natural Language Processing
Technologies in Radiology Research and Clinical Applications. Radiographics 2016, 36, 176–191. [CrossRef]
74. Zaharchuk, G.; Gong, E.; Wintermark, M.; Rubin, D.; Langlotz, C.P. Deep Learning in Neuroradiology. Am. J. Neuroradiol. 2018,
39, 1776–1784. [CrossRef]
75. Vávra, P.; Roman, J.; Zonča, P.; Ihnát, P.; Němec, M.; Kumar, J.; Habib, N.; El-Gendi, A. Recent Development of Augmented
Reality in Surgery: A Review. J. Health Eng. 2017, 2017, 4574172. [CrossRef]
76. Cheng, Q.; Roelofs, E.; Ramaekers, B.L.; Eekers, D.; van Soest, J.; Lustberg, T.; Hendriks, T.; Hoebers, F.; van der Laan, H.P.;
Korevaar, E.W.; et al. Development and evaluation of an online three-level proton vs photon decision support prototype for head
and neck cancer—Comparison of dose, toxicity and cost-effectiveness. Radiother. Oncol. 2016, 118, 281–285. [CrossRef]
77. Lustberg, T.; van Soest, J.; Gooding, M.; Peressutti, D.; Aljabar, P.; van der Stoep, J.; van Elmpt, W.; Dekker, A. Clinical evaluation
of atlas and deep learning based automatic contouring for lung cancer. Radiother. Oncol. 2018, 126, 312–317. [CrossRef] [PubMed]
78. Cagni, E.; Botti, A.; Micera, R.; Galeandro, M.; Sghedoni, R.; Orlandi, M.; Iotti, C.; Cozzi, L.; Iori, M. Knowledge-based treatment
planning: An inter-technique and inter-system feasibility study for prostate cancer. Phys. Med. 2017, 36, 38–45. [CrossRef]
79. Cagni, E.; Botti, A.; Wang, Y.; Iori, M.; Petit, S.F.; Heijmen, B.J.M. Pareto-optimal plans as ground truth for validation of a
commercial system for knowledge-based DVH-prediction. Phys. Med. 2018, 55, 98–106. [CrossRef] [PubMed]
80. Stanhope, C.; Wu, Q.J.; Yuan, L.; Liu, J.; Hood, R.; Yin, F.F.; Adamson, J. Utilizing knowledge from prior plans in the evaluation of
quality assurance. Phys. Med. Biol. 2015, 60, 4873–4891. [CrossRef]
81. Nicolae, A.; Semple, M.; Lu, L.; Smith, M.; Chung, H.; Loblaw, A.; Morton, G.; Mendez, L.C.; Tseng, C.L.; Davidson, M.;
et al. Conventional vs machine learning-based treatment planning in prostate brachytherapy: Results of a Phase I randomized
controlled trial. Brachytherapy 2020, 19, 470–476. [CrossRef] [PubMed]
82. Barragan-Montero, A.M.; Nguyen, D.; Lu, W.; Lin, M.H.; Norouzi-Kandalan, R.; Geets, X.; Sterpin, E.; Jiang, S. Three-dimensional
dose prediction for lung IMRT patients with deep neural networks: Robust learning from heterogeneous beam configurations.
Med. Phys. 2019, 46, 3679–3691. [CrossRef] [PubMed]
83. Nguyen, D.; Jia, X.; Sher, D.; Lin, M.; Iqbal, Z.; Liu, H.; Jiang, S. 3D radiotherapy dose prediction on head and neck cancer patients
with a hierarchically densely connected U-net deep learning architecture. Phys. Med. Biol. 2019, 64, 065020. [CrossRef] [PubMed]
84. Mao, X.; Pineau, J.; Keyes, R.; Enger, S.A. RapidBrachyDL: Rapid Radiation Dose Calculations in Brachytherapy via Deep
Learning. Int. J. Radiat. Oncol. 2020, 108, 802–812. [CrossRef]
85. Avanzo, M.; Pirrone, G.; Mileto, M.; Massarut, S.; Stancanello, J.; Baradaran-Ghahfarokhi, M.; Rink, A.; Barresi, L.; Vinante, L.;
Piccoli, E.; et al. Prediction of skin dose in low-kV intraoperative radiotherapy using machine learning models trained on results
of in vivo dosimetry. Med. Phys. 2019, 46, 1447–1454. [CrossRef]
86. Avanzo, M.; Pirrone, G.; Vinante, L.; Caroli, A.; Stancanello, J.; Drigo, A.; Massarut, S.; Mileto, M.; Urbani, M.; Trovo, M.;
et al. Electron Density and Biologically Effective Dose (BED) Radiomics-Based Machine Learning Models to Predict Late
Radiation-Induced Subcutaneous Fibrosis. Front. Oncol. 2020, 10, 490. [CrossRef]
87. Talamonti, C.; Piffer, S.; Greto, D.; Mangoni, M.; Ciccarone, A.; Dicarolo, P.; Fantacci, M.E.; Fusi, F.; Oliva, P.; Palumbo, L.; et al.
Radiomic and Dosiomic Profiling of Paediatric Medulloblastoma Tumours Treated with Intensity Modulated Radiation Therapy.
Commun. Comput. Inf. Sci. 2019, 56–64.
88. Shi, L.; Rong, Y.; Daly, M.; Dyer, B.A.; Benedict, S.; Qiu, J.; Yamamoto, T. Cone-beam computed tomography-based delta-radiomics
for early response assessment in radiotherapy for locally advanced lung cancer. Phys. Med. Biol. 2020, 65, 015009. [CrossRef]
89. Guidi, G.; Maffei, N.; Meduri, B.; D’Angelo, E.; Mistretta, G.M.; Ceroni, P.; Ciarmatori, A.; Bernabei, A.; Maggi, S.; Cardinali,
M.; et al. A machine learning tool for re-planning and adaptive RT: A multicenter cohort investigation. Phys. Med. 2016, 32,
1659–1666. [CrossRef] [PubMed]
90. Peeken, J.C.; Bernhofer, M.; Wiestler, B.; Goldberg, T.; Cremers, D.; Rost, B.; Wilkens, J.J.; Combs, S.E.; Nusslin, F. Radiomics in
radiooncology—Challenging the medical physicist. Phys. Med. 2018, 48, 27–36. [CrossRef] [PubMed]
91. Arabi, H.; Zaidi, H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur. J. Hybrid
Imaging 2020, 4, 17. [CrossRef]
92. Placidi, L.; Lenkowicz, J.; Cusumano, D.; Boldrini, L.; Dinapoli, N.; Valentini, V. Stability of dosomics features extraction on grid
resolution and algorithm for radiotherapy dose calculation. Phys. Med. 2020, 77, 30–35. [CrossRef]
93. Delis, H.; Christaki, K.; Healy, B.; Loreti, G.; Poli, G.L.; Toroi, P.; Meghzifene, A. Moving beyond quality control in diagnostic
radiology and the role of the clinically qualified medical physicist. Phys. Med. 2017, 41, 104–108. [CrossRef]
94. Kalet, A.M.; Luk, S.M.H.; Phillips, M.H. Radiation Therapy Quality Assurance Tasks and Tools: The Many Roles of Machine
Learning. Med. Phys. 2020, 47, e168–e177. [CrossRef]
95. Kimura, Y.; Kadoya, N.; Tomori, S.; Oku, Y.; Jingu, K. Error detection using a convolutional neural network with dose difference
maps in patient-specific quality assurance for volumetric modulated arc therapy. Phys. Med. 2020, 73, 57–64. [CrossRef] [PubMed]
96. Li, Q.; Chan, M.F. Predictive time-series modeling using artificial neural networks for Linac beam symmetry: An empirical study.
Ann. N. Y. Acad. Sci. 2017, 1387, 84–94. [CrossRef] [PubMed]
97. El Naqa, I.; Irrer, J.; Ritter, T.A.; DeMarco, J.; Al-Hallaq, H.; Booth, J.; Kim, G.; Alkhatib, A.; Popple, R.; Perez, M.; et al. Machine
learning for automated quality assurance in radiotherapy: A proof of principle using EPID data description. Med. Phys. 2019, 46,
1914–1921. [CrossRef]
98. Nyflot, M.J.; Thammasorn, P.; Wootton, L.S.; Ford, E.C.; Chaovalitwongse, W.A. Deep learning for patient-specific quality
assurance: Identifying errors in radiotherapy delivery by radiomic analysis of gamma images with convolutional neural
networks. Med. Phys. 2019, 46, 456–464. [CrossRef]
99. Valdes, G.; Chan, M.F.; Lim, S.B.; Scheuermann, R.; Deasy, J.O.; Solberg, T.D. IMRT QA using machine learning: A multi-
institutional validation. J. Appl. Clin. Med. Phys. 2017, 18, 279–284. [CrossRef] [PubMed]
100. Bizzego, A.; Bussola, N.; Chierici, M.; Maggio, V.; Francescatto, M.; Cima, L.; Cristoforetti, M.; Jurman, G.; Furlanello, C.
Evaluating reproducibility of AI algorithms in digital pathology with DAPPER. PLoS Comput. Biol. 2019, 15, e1006269. [CrossRef]
[PubMed]
101. Shaikhina, T.; Lowe, D.; Daga, S.; Briggs, D.; Higgins, R.; Khovanova, N. Machine Learning for Predictive Modelling based on
Small Data in Biomedical Engineering. IFAC-PapersOnLine 2015, 48, 469–474. [CrossRef]
102. Chatterjee, A.; Vallières, M.; Dohan, A.; Levesque, I.R.; Ueno, Y.; Bist, V.; Saif, S.; Reinhold, C.; Seuntjens, J. An Empirical Approach
for Avoiding False Discoveries When Applying High-Dimensional Radiomics to Small Datasets. IEEE Trans. Radiat. Plasma Med.
Sci. 2019, 3, 201–209. [CrossRef]
103. Cui, S.; Tseng, H.H.; Pakela, J.; Haken, R.K.T.; El Naqa, I. Introduction to machine and deep learning for medical physicists. Med.
Phys. 2020, 47, e127–e147. [CrossRef]
104. Anonymous. Stepwise Regression. In Wiley International Encyclopedia of Marketing; American Cancer Society: Atlanta, GA, USA, 2010.
105. Parmar, C.; Grossmann, P.; Rietveld, D.; Rietbergen, M.M.; Lambin, P.; Aerts, H.J. Radiomic Machine-Learning Classifiers for
Prognostic Biomarkers of Head and Neck Cancer. Front. Oncol. 2015, 5, 272. [CrossRef]
106. Lian, C.; Ruan, S.; Denoeux, T.; Jardin, F.; Vera, P. Selecting radiomic features from FDG-PET images for cancer treatment outcome
prediction. Med. Image Anal. 2016, 32, 257–268. [CrossRef]
107. Wu, W.; Parmar, C.; Grossmann, P.; Quackenbush, J.; Lambin, P.; Bussink, J.; Mak, R.; Aerts, H.J. Exploratory Study to Identify
Radiomics Classifiers for Lung Cancer Histology. Front. Oncol. 2016, 6, 71. [CrossRef]
108. Hinton, G.E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R.R. Improving neural networks by preventing
co-adaptation of feature detectors. arXiv 2012, arXiv:1207.0580.
109. Lemaitre, G.; Nogueira, F.; Aridas, C.K. Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in
Machine Learning. arXiv 2016, arXiv:1609.06570.
110. Buda, M.; Maki, A.; Mazurowski, M.A. A systematic study of the class imbalance problem in convolutional neural networks.
arXiv 2017, arXiv:1710.05381. [CrossRef] [PubMed]
111. Chen, J.H.; Alagappan, M.; Goldstein, M.K.; Asch, S.M.; Altman, R.B. Decaying relevance of clinical data towards future decisions
in data-driven inpatient clinical order sets. Int. J. Med. Inform. 2017, 102, 71–79. [CrossRef] [PubMed]
112. Nensa, F.; Demircioglu, A.; Rischpler, C. Artificial Intelligence in Nuclear Medicine. J. Nucl. Med. 2019, 60, 29S–37S. [CrossRef]
113. Li, H.; Zhu, Y.; Burnside, E.S.; Drukker, K.; Hoadley, K.A.; Fan, C.; Conzen, S.D.; Whitman, G.J.; Sutton, E.J.; Net, J.M.; et al. MR
Imaging Radiomics Signatures for Predicting the Risk of Breast Cancer Recurrence as Given by Research Versions of MammaPrint,
Oncotype DX, and PAM50 Gene Assays. Radiology 2016, 281, 382–391. [CrossRef]
114. Aerts, H.J.; Grossmann, P.; Tan, Y.; Oxnard, G.G.; Rizvi, N.; Schwartz, L.H.; Zhao, B. Defining a Radiomic Response Phenotype: A
Pilot Study using targeted therapy in NSCLC. Sci. Rep. 2016, 6, 33860. [CrossRef]
115. Geis, J.R.; Brady, A.P.; Wu, C.C.; Spencer, J.; Ranschaert, E.; Jaremko, J.L.; Langer, S.G.; Kitts, A.B.; Birch, J.; Shields, W.F. Ethics of
Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement. Can. Assoc.
Radiol. J. 2019, 70, 329–334. [CrossRef]
116. Lai, M.C.; Brian, M.; Mamzer, M.F. Perceptions of artificial intelligence in healthcare: Findings from a qualitative survey study
among actors in France. J. Transl. Med. 2020, 18, 1–13. [CrossRef]
117. Pesapane, F.; Codari, M.; Sardanelli, F. Artificial intelligence in medical imaging: Threat or opportunity? Radiologists again at the
forefront of innovation in medicine. Eur. Radiol. Exp. 2018, 2, 1–10. [CrossRef]
118. Townsend, D.; Cheng, Z.; Georg, D.; Drexler, W.; Moser, E. Grand challenges in biomedical physics. Front. Phys. 2013, 1, 1.
[CrossRef]
119. Sensakovic, W.F.; Mahesh, M. Role of the Medical Physicist in the Health Care Artificial Intelligence Revolution. J. Am. Coll.
Radiol. 2019, 16, 393–394. [CrossRef] [PubMed]
120. Cody, D.D.; Fisher, T.S.; Gress, D.A.; Layman, R.R., Jr.; McNitt-Gray, M.F.; Pizzutiello, R.J., Jr.; Fairobent, L.A. AAPM medical
physics practice guideline 1.a: CT protocol management and review practice guideline. J. Appl. Clin. Med. Phys. 2013, 14, 3–12.
[PubMed]
121. Mackin, D.; Fave, X.; Zhang, L.; Fried, D.; Yang, J.; Taylor, B.; Rodriguez-Rivera, E.; Dodge, C.; Jones, A.K.; Court, L. Measuring
Computed Tomography Scanner Variability of Radiomics Features. Investig. Radiol. 2015, 50, 757–765. [CrossRef] [PubMed]
122. Fave, X.; Cook, M.; Frederick, A.; Zhang, L.; Yang, J.; Fried, D.; Stingo, F.; Court, L. Preliminary investigation into sources of
uncertainty in quantitative imaging features. Comput. Med. Imaging Graph. 2015, 44, 54–61. [CrossRef] [PubMed]
123. Samei, E.; Hoye, J.; Zheng, Y.; Solomon, J.B.; Marin, D. Design and fabrication of heterogeneous lung nodule phantoms for
assessing the accuracy and variability of measured texture radiomics features in CT. J. Med. Imaging 2019, 6, 021606. [CrossRef]
124. Pfaehler, E.; Beukinga, R.J.; de Jong, J.R.; Slart, R.H.J.A.; Slump, C.H.; Dierckx, R.A.J.O.; Boellaard, R. Repeatability of (18)F-FDG
PET radiomic features: A phantom study to explore sensitivity to image reconstruction settings, noise, and delineation method.
Med. Phys. 2019, 46, 665–678. [CrossRef] [PubMed]
125. Bianchini, L.; Botta, F.; Origgi, D.; Rizzo, S.; Mariani, M.; Summers, P.; García-Polo, P.; Cremonesi, M.; Lascialfari, A. PETER
PHAN: An MRI phantom for the optimisation of radiomic studies of the female pelvis. Phys. Med. 2020, 71, 71–81. [CrossRef]
[PubMed]
126. Kim, H.; Park, C.M.; Lee, M.; Park, S.J.; Song, Y.S.; Lee, J.H.; Hwang, E.J.; Goo, J.M. Impact of Reconstruction Algorithms on CT
Radiomic Features of Pulmonary Tumors: Analysis of Intra- and Inter-Reader Variability and Inter-Reconstruction Algorithm
Variability. PLoS ONE. 2016, 11, e0164924. [CrossRef] [PubMed]
127. Leijenaar, R.T.; Carvalho, S.; Velazquez, E.R.; van Elmpt, W.J.; Parmar, C.; Hoekstra, O.S.; Hoekstra, C.J.; Boellaard, R.; Dekker,
A.L.; Gillies, R.J.; et al. Stability of FDG-PET Radiomics features: An integrated analysis of test-retest and inter-observer variability.
Acta Oncol. 2013, 52, 1391–1397. [CrossRef]
128. Zhao, B.; James, L.P.; Moskowitz, C.S.; Guo, P.; Ginsberg, M.S.; Lefkowitz, R.A.; Qin, Y.; Riely, G.J.; Kris, M.G.; Schwartz, L.H.
Evaluating Variability in Tumor Measurements from Same-day Repeat CT Scans of Patients with Non–Small Cell Lung Cancer.
Radiology 2009, 252, 263–272. [CrossRef] [PubMed]
129. Desseroit, M.C.; Tixier, F.; Weber, W.A.; Siegel, B.A.; le Rest, C.C.; Visvikis, D.; Hatt, M. Reliability of PET/CT shape and
heterogeneity features in functional and morphological components of Non-Small Cell Lung Cancer tumors: A repeatability
analysis in a prospective multi-center cohort. J. Nucl. Med. 2016, 58, 406–411. [CrossRef]
130. Galavis, P.E.; Hollensen, C.; Jallow, N.; Paliwal, B.; Jeraj, R. Variability of textural features in FDG PET images due to different
acquisition modes and reconstruction parameters. Acta Oncol. 2010, 49, 1012–1016. [CrossRef] [PubMed]
131. Lu, L.; Lv, W.; Jiang, J.; Ma, J.; Feng, Q.; Rahmim, A.; Chen, W. Robustness of Radiomic Features in [11C]Choline and [18F]FDG
PET/CT Imaging of Nasopharyngeal Carcinoma: Impact of Segmentation and Discretization. Mol. Imaging Biol. 2016, 18, 935–945.
[CrossRef] [PubMed]
132. Bailly, C.; Bodet-Milin, C.; Couespel, S.; Necib, H.; Kraeber-Bodéré, F.; Ansquer, C.; Carlier, T. Revisiting the robustness of
PET-based textural features in the context of multi-centric trials. PLoS ONE 2016, 11, e0159984. [CrossRef]
133. Yang, F.; Dogan, N.; Stoyanova, R.; Ford, J.C. Evaluation of radiomic texture feature error due to MRI acquisition and reconstruc-
tion: A simulation study utilizing ground truth. Phys. Med. 2018, 50, 26–36. [CrossRef]
134. Kaus, M.R.; Brock, K.K.; Pekar, V.; Dawson, L.A.; Nichol, A.M.; Jaffray, D.A. Assessment of a model-based deformable image
registration approach for radiation therapy planning. Int. J. Radiat. Oncol. 2007, 68, 572–580. [CrossRef]
135. Isaksson, L.J.; Raimondi, S.; Botta, F.; Pepa, M.; Gugliandolo, S.G.; de Angelis, S.P.; Marvaso, G.; Petralia, G.; de Cobelli, O.;
Gandini, S.; et al. Effects of MRI image normalization techniques in prostate cancer radiomics. Phys. Med. 2020, 71, 7–13.
[CrossRef]
136. Brock, K.K.; Deformable Registration Accuracy Consortium. Results of a multi-institution deformable registration accuracy study
(MIDRAS). Int. J. Radiat. Oncol. 2010, 76, 583–596. [CrossRef] [PubMed]
137. Avanzo, M.; Barbiero, S.; Trovo, M.; Bissonnette, J.P.; Jena, R.; Stancanello, J.; Pirrone, G.; Matrone, F.; Minatel, E.; Cappelletto, C.;
et al. Voxel-by-voxel correlation between radiologically radiation induced lung injury and dose after image-guided, intensity
modulated radiotherapy for lung tumors. Phys. Med. 2017, 42, 150–156. [CrossRef] [PubMed]
138. Mahesh, M. Essential Role of a Medical Physicist in the Radiology Department. Radiographics 2018, 38, 1665–1671. [CrossRef]
[PubMed]
139. Herrmann, M.D.; Clunie, D.A.; Fedorov, A.; Doyle, S.W.; Pieper, S.; Klepeis, V.; Le, L.P.; Mutter, G.L.; Milstone, D.S.; Schultz, T.J.;
et al. Implementing the DICOM Standard for Digital Pathology. J. Pathol. Inform. 2018, 9, 37. [PubMed]
140. Kortesniemi, M.; Tsapaki, V.; Trianni, A.; Russo, P.; Maas, A.; Kallman, H.E.; Brambilla, M.; Damilakis, J. The European Federation
of Organisations for Medical Physics (EFOMP) White Paper: Big data and deep learning in medical imaging and in relation to
medical physics profession. Phys. Med. 2018, 56, 90–93. [CrossRef] [PubMed]
141. Zwanenburg, A.; Leger, S.; Vallieres, M.; Lock, S. Image Biomarker Standardisation Initiative. arXiv 2016, arXiv:1612.07003.
142. Mahon, R.N.; Ghita, M.; Hugo, G.D.; Weiss, E. ComBat harmonization for radiomic features in independent phantom and lung
cancer patient computed tomography datasets. Phys. Med. Biol. 2019, 65, 015010. [CrossRef]
143. Kesner, A.; Laforest, R.; Otazo, R.; Jennifer, K.; Pan, T. Medical imaging data in the digital innovation age. Med. Phys. 2018, 45,
e40–e52. [CrossRef]
144. Parmar, C.; Grossmann, P.; Bussink, J.; Lambin, P.; Aerts, H.J. Machine Learning methods for Quantitative Radiomic Biomarkers.
Sci. Rep. 2015, 5, 13087. [CrossRef]
145. Barucci, A. Adversarial radiomics: The rising of potential risks in medical imaging from adversarial learning. Eur. J. Nucl. Med.
Mol. Imaging 2020, 47, 2941–2943. [CrossRef]
146. Li, S.; Chen, Y.; Peng, Y.; Bai, L. Learning More Robust Features with Adversarial Training. arXiv 2018, arXiv:1804.07757.
147. U.S. Food and Drug Administration: MicroArray/Sequencing Quality Control (MAQC/SEQC). 2021. Available online: https://
www.fda.gov/science-research/bioinformatics-tools/microarraysequencing-quality-control-maqcseqc (accessed on 12 February 2021).
148. Collins, G.S.; Reitsma, J.B.; Altman, D.G.; Moons, K.G. Transparent reporting of a multivariable prediction model for individual
prognosis or diagnosis (TRIPOD): The TRIPOD Statement. BMC Med. 2015, 13, 1–10. [CrossRef] [PubMed]
149. Wang, B.; White, G. The role of clinical medical physicists in the future: Quality, safety, technology implementation, and enhanced
direct patient care. J. Appl. Clin. Med. Phys. 2019, 20, 4–6. [CrossRef] [PubMed]
150. Caruana, C.J.; Tsapaki, V.; Damilakis, J.; Brambilla, M.; Martin, G.M.; Dimov, A.; Bosmans, H.; Egan, G.; Bacher, K.; Mc-
Clean, B. EFOMP policy statement 16: The role and competences of medical physicists and medical physics experts under
2013/59/EURATOM. Phys. Med. 2018, 48, 162–168. [CrossRef] [PubMed]
151. Okamoto, H.; Ota, S.; Kawamorita, R.; Sakamoto, M.; Nakamura, S.; Nishioka, S.; Kabuki, S.; Masai, N.; Mizuno, N.; Furuya,
T.; et al. Summary of the Report of Task Group 100 of the AAPM: Application of Risk Analysis Methods to Radiation Therapy
Quality Management. Igaku Butsuri 2020, 40, 28–34. [PubMed]
152. Bang, J.Y.; Hough, M.; Hawes, R.H.; Varadarajulu, S. Use of Artificial Intelligence to Reduce Radiation Exposure at Fluoroscopy-
Guided Endoscopic Procedures. Am. J. Gastroenterol. 2020, 115, 555–561. [CrossRef]
153. Liu, Y.; Ma, L.; Zhao, J. Secure Deep Learning Engineering: A Road Towards Quality Assurance of Intelligent Systems. In Lecture
Notes in Computer Science; Springer: Berlin, Germany, 2019; pp. 3–15.
154. Tang, A.; Tam, R.; Cadrin-Chênevert, A.; Guest, W.; Chong, J.; Barfett, J.; Chepelev, L.; Cairns, R.; Mitchell, J.R.; Cicero, M.D.; et al.
Canadian Association of Radiologists White Paper on Artificial Intelligence in Radiology. Can. Assoc. Radiol. J. 2018, 69, 120–135.
[CrossRef] [PubMed]
155. Currie, G.; Hawk, K.E.; Rohren, E.; Vial, A.; Klein, R. Machine Learning and Deep Learning in Medical Imaging: Intelligent
Imaging. J. Med. Imaging Radiat. Sci. 2019, 50, 477–487. [CrossRef] [PubMed]
156. Prior, F.W.; Clark, K.; Commean, P.; Freymann, J.; Jaffe, C.; Kirby, J.; Moore, S.; Smith, K.; Tarbox, L.; Vendt, B.; et al. TCIA: An
information resource to enable open science. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2013, 2013, 1282–1285.
157. Sharma, A.; Tarbox, L.; Kurc, T.; Bona, J.; Smith, K.; Kathiravelu, P.; Bremer, E.; Saltz, J.H.; Prior, F. PRISM: A Platform for Imaging
in Precision Medicine. JCO Clin. Cancer Inform. 2020, 4, 491–499. [CrossRef] [PubMed]
158. Wilkinson, M.D.; Dumontier, M.; Aalbersberg, I.J.; Appleton, G.; Axton, M.; Baak, A.; Blomberg, N.; Boiten, J.W.; Santos, L.B.D.
The FAIR Guiding Principles for scientific data management and stewardship. Sci. Data 2016, 3, 160018. [CrossRef] [PubMed]