
Reddy Implementation Science (2024) 19:27
https://doi.org/10.1186/s13012-024-01357-9

DEBATE Open Access

Generative AI in healthcare: an implementation science informed translational path on application, integration and governance
Sandeep Reddy1*   

Abstract
Background Artificial intelligence (AI), particularly generative AI, has emerged as a transformative tool in healthcare, with the potential to revolutionize clinical decision-making and improve health outcomes. Generative AI, capable of generating new data such as text and images, holds promise in enhancing patient care, revolutionizing disease diagnosis and expanding treatment options. However, the utility and impact of generative AI in healthcare remain poorly understood, with concerns around ethical and medico-legal implications, integration into healthcare service delivery and workforce utilisation. Also, there is not a clear pathway to implement and integrate generative AI in healthcare delivery.

Methods This article aims to provide a comprehensive overview of the use of generative AI in healthcare, focusing on the utility of the technology in healthcare and its translational application, highlighting the need for careful planning, execution and management of expectations in adopting generative AI in clinical medicine. Key considerations include factors such as data privacy, security and the irreplaceable role of clinicians' expertise. Frameworks like the technology acceptance model (TAM) and the Non-Adoption, Abandonment, Scale-up, Spread and Sustainability (NASSS) model are considered to promote responsible integration. These frameworks allow anticipating and proactively addressing barriers to adoption, facilitating stakeholder participation and responsibly transitioning care systems to harness generative AI's potential.

Results Generative AI has the potential to transform healthcare through automated systems, enhanced clinical decision-making and democratization of expertise, with diagnostic support tools providing timely, personalized suggestions. Generative AI applications across billing, diagnosis, treatment and research can also make healthcare delivery more efficient, equitable and effective. However, integration of generative AI necessitates meticulous change management and risk mitigation strategies. Technological capabilities alone cannot shift complex care ecosystems overnight; rather, structured adoption programs grounded in implementation science are imperative.

Conclusions It is strongly argued in this article that generative AI can usher in tremendous healthcare progress, if introduced responsibly. Strategic adoption based on implementation science, incremental deployment and balanced messaging around opportunities versus limitations helps promote safe, ethical generative AI integration. Extensive real-world piloting and iteration aligned to clinical priorities should drive development. With conscientious governance centred on human wellbeing over technological novelty, generative AI can enhance accessibility, affordability and quality of care. As these models continue advancing rapidly, ongoing reassessment and transparent communication around their strengths and weaknesses remain vital to restoring trust, realizing positive potential and, most importantly, improving patient outcomes.

Keywords Generative artificial intelligence, Healthcare, Implementation science, Translation pathway

*Correspondence:
Sandeep Reddy
[email protected]
Full list of author information is available at the end of the article

© The Author(s) 2024. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Contributions to the literature
• Generative AI has the potential to revolutionize clinical decision-making and improve health outcomes, but its utility and impact in healthcare remain poorly understood.
• The article outlines the vast capacity of generative AI to revolutionize healthcare system operations, scientific investigation and patient care, contending that if applied conscientiously, generative AI could enhance medical care quality, fairness and efficiency.
• Though generative AI holds promise, successfully integrating it into the intricate healthcare system requires carefully planned approaches. The article provides methodical integration plans informed by implementation science principles, which are vital for gradually and effectively transforming complex care environments with new technologies over time.

Background
Artificial intelligence (AI) has become an increasingly popular tool in a variety of fields, including healthcare, with the potential to transform clinical decision-making and improve health outcomes [1–3]. Generative AI is one area of AI that has gained attention recently for its ability to use machine learning algorithms to generate new data, such as text, images and music [4–7]. Generative AI is proving to be a change catalyst across various industries, and the healthcare sector is no exception [8]. With its remarkable ability to analyse extensive datasets and generate valuable insights, generative AI has emerged as a powerful tool in enhancing patient care [9], revolutionizing disease diagnosis [10] and expanding treatment options [11]. By harnessing the potential of this cutting-edge technology, healthcare professionals can now access unprecedented levels of accuracy, efficiency and innovation in their practices.

Despite the potential benefits, the utility and impact of generative AI in healthcare remain poorly understood [12, 13]. The application of generative AI in healthcare raises ethical and medico-legal concerns [14]. Moreover, it is unclear how generative AI applications can be integrated into healthcare service delivery and how the healthcare workforce can utilise them appropriately [15]. Furthermore, it is uncertain how far generative AI can improve patient outcomes and how this can be assessed. Finally, the value of generative AI beyond augmenting clinical and administrative tasks needs to be explored. Realizing generative AI's vast potential in healthcare requires translational approaches rooted in implementation science. Such approaches recognize technological progress alone will not revolutionize healthcare overnight [16, 17]. Real change requires carefully orchestrated sociotechnical transitions that put people first. Implementation science-based approaches provide generalizable roadmaps grounded in empirical evidence from prior health IT deployments [16]. As such, healthcare leaders pioneering generative AI integration would be well served in leveraging these models to reinforce patient safety, trust and impact [17]. To facilitate the appropriate incorporation and application of generative AI in healthcare, this article aims to provide an overview of the use of generative AI in healthcare followed by guidance on its translational application.

Generative AI
Generative AI is a class of machine learning technology that learns to generate new data from training data [18, 19]. Generative models generate data that is similar to the original data. This can be useful in a variety of applications such as image and speech synthesis. Another unique capability is that they can be used to perform unsupervised learning, which means that they can learn from data without explicit labels [8]. This can be useful in situations where labelled data is scarce or expensive to obtain. Furthermore, generative AI models can generate synthetic data by learning the underlying data distributions from real data and then generating new data that is statistically similar to the real data. Generative models differ from other types of machine learning models in that they aim to endow machines with the ability to synthesise new entities [8]. They are designed to learn the underlying structure of a dataset and generate new samples that are like the original data. This contrasts with discriminative models, which are designed to learn the boundary between different classes of data. These models focus on tasks such as classification, regression or reinforcement learning, where the goal is to make predictions or take actions based on existing data.

There are several categories of generative AI, as outlined in Table 1 [20–23].

While there are several generative AI models, this article will mainly focus on two models, which are popular in the healthcare context: generative adversarial networks and large language models.

Generative adversarial networks
Generative adversarial networks (GANs) differ from traditional generative modelling techniques in their approach to learning [24]. GANs use a game-theoretic framework with competing networks. GANs consist of two neural networks, a generator and a discriminator, that compete against each other. The generator creates fake data to pass to the discriminator. The discriminator then decides if the data it received is like the known, real data. Over time, the generator gets better at producing data that looks real, while the discriminator gets better at telling the difference between real and fake data. This adversarial training process allows GANs to learn representations in an unsupervised and semi-supervised fashion. In contrast, traditional generative modelling techniques often rely on explicit probabilistic models or variational inference methods.

Recent developments in GANs relating to representation learning include advancements in learning latent space representations [24]. These developments focus on improving the ability of GANs to transform vectors of generated noise into synthetic samples that resemble data from the training set. Some specific examples of recent developments in this area include GANs applied to image generation, semi-supervised learning, domain adaptation, generation controlled by attention and compression [5]. These advancements aim to enhance the representation learning capabilities of GANs and enable them to generate more realistic and diverse samples.

GANs have been used to generate realistic images [24]. These models can learn the underlying distribution of a dataset and generate new images that resemble the original data. This has applications in areas like computer graphics, art and entertainment. Moreover, GANs can be used to augment training data by generating synthetic samples. This can help in cases where the original dataset is small or imbalanced, improving the performance of machine learning models. Synthetic data, created by machine learning algorithms or neural networks, can retain the statistical relationships of real data while offering privacy protection. Synthetic data is also being considered for enhancing privacy.
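The adversarial dynamic described above can be made concrete with a minimal sketch (PyTorch assumed; the toy two-dimensional 'real' data, tiny architectures and hyperparameters are all illustrative rather than a production recipe):

```python
# Minimal sketch of the GAN game: a generator maps noise to fake samples,
# a discriminator scores real vs. fake, and the two train against each other.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):                      # stand-in for real training data
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(1000):
    # 1) Train discriminator: label real data 1, generated data 0.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) Train generator: try to make the discriminator label fakes as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Each iteration first rewards the discriminator for separating real from generated samples, then rewards the generator for fooling it, which is exactly the game-theoretic loop described in the text.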
determining the source or author of a piece of text. For
Large language models
Large language models (LLMs) are powerful AI models that have shown promise in various natural language processing (NLP) tasks [25]. In particular, the availability of OpenAI's GPT-4 [26], Anthropic's Claude [27] and Google's PaLM 2 [28] has significantly galvanised the progress of not just NLP but the field of AI in general, whereby commentators are discussing the achievement of human-level performance by AI [10, 29]. LLMs like OpenAI's GPT-4 are based on the autoregressive model. An autoregressive model is used to generate sequences, such as sentences in natural language, by predicting the next item based on previous ones [30]. The difference between LLMs and traditional language models lies in their capabilities and training methods [25]. LLMs, like GPT-4, utilise the Transformer architecture, which has proven to be effective for understanding the context of words in a sentence. A transformer uses a mechanism called 'attention' to weigh the importance of words when making predictions [31]. This mechanism allows the model to consider the entire history of a sentence, making it a powerful tool for sequence prediction tasks. LLMs are trained on a large corpus of text data. During training, the model learns to predict the next word in a sentence given the previous words. It does this by adjusting its internal parameters to minimise the difference between its predictions and the actual words that follow in the training data.

One of the key advantages of LLMs is their ability to perform many language processing tasks without the need for additional training data [32]. This is because they have already been trained on a vast corpus of text, allowing them to generate coherent and contextually relevant responses based on the input they receive. This makes them particularly useful as references or oracles for text summarization models. Text summarization is a complex task that involves understanding the main points of a piece of text and then condensing these points into a shorter form. LLMs can be used to generate summaries of text, which can then be used as a reference or 'gold standard' for other summarization models [25]. This can help to improve the performance of these models by providing them with high-quality summaries to learn from.

In addition to text summarization, LLMs have also been used in a variety of other applications [15]. In the realm of text classification, LLMs can be used to automatically categorise pieces of text into predefined categories. This can be useful in a variety of applications, from spam detection in email filters to sentiment analysis in customer reviews. Finally, LLMs have been used for the automatic evaluation of attribution. This involves determining the source or author of a piece of text. For example, an LLM could be used to determine whether a particular tweet was written by a specific person, or to identify the author of an anonymous piece of writing.
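The next-word prediction loop described above can be sketched in a few lines, assuming the Hugging Face transformers library, with GPT-2 standing in for larger commercial LLMs (the prompt and output length are illustrative):

```python
# Minimal illustration of autoregressive generation: the model repeatedly
# predicts the next token from all previous tokens, and the context grows
# by one token per step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The patient presented with", return_tensors="pt").input_ids
for _ in range(20):                      # generate 20 tokens, one at a time
    with torch.no_grad():
        logits = model(ids).logits       # scores for every vocabulary token
    next_id = logits[0, -1].argmax()     # greedy choice: most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

This also makes the stated limitation visible: because each token depends on all previous ones, generation is inherently sequential.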
Table 1 Generative AI models [20–23]

Generative adversarial networks (GANs)
Description: GANs consist of two neural networks, a generator and a discriminator, that compete against each other. GANs are often used in image synthesis, super-resolution, style transfer and more.
Applications: Image synthesis, style transfer, face ageing, data augmentation, 3D object creation.

Variational autoencoders (VAEs)
Description: VAEs are a type of autoencoder which adds additional constraints to the encoding process, causing the network to generate continuous, structured representations. This makes them useful for tasks such as generating new images or other data points.
Applications: Image generation, anomaly detection, image denoising, exploration of latent spaces, content generation in gaming.

Autoregressive models
Description: These models predict the next output in a sequence based on previous outputs. They have been used extensively in language modelling tasks (like text generation), as well as in generating music and even images.
Applications: Text generation (e.g., GPT models), music composition, image generation (e.g., PixelRNN), time-series forecasting.

Flow-based models
Description: These models leverage the change of variables formula to model complex distributions. They are characterised by their ability to both generate new samples and perform efficient inference.
Applications: High-quality image synthesis, speech and music modelling, density estimation, anomaly detection.

Energy-based models (EBMs)
Description: In EBMs, the aim is to learn an energy function that assigns low-energy values to data points from the data distribution and higher energies to other points. EBMs can be used for a wide range of applications, including image synthesis, denoising and inpainting.
Applications: Image synthesis and restoration, pattern recognition, unsupervised and semi-supervised learning, structured prediction.

Diffusion models
Description: These models gradually learn to construct data by reversing a diffusion process, which transforms data into a Gaussian distribution. They have shown remarkable results in generating high-quality, diverse samples.
Applications: High-fidelity image generation (e.g., DALL-E 2), audio synthesis, molecular structure generation.
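As a companion to the GAN sketch earlier, the VAE row of Table 1 can be illustrated by its characteristic objective: reconstruct the input while a KL term keeps the latent space continuous and structured. A minimal sketch (PyTorch assumed; dimensions and the KL weight are illustrative):

```python
# Sketch of the VAE objective: the encoder outputs a mean and log-variance,
# a sampled latent is decoded, and the loss combines reconstruction error
# with a KL penalty that regularises the latent space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, dim=784, latent=16):
        super().__init__()
        self.enc = nn.Linear(dim, 2 * latent)   # outputs [mu, log_var]
        self.dec = nn.Linear(latent, dim)
    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterise
        return self.dec(z), mu, log_var

x = torch.rand(32, 784)                          # stand-in batch of images
recon, mu, log_var = VAE()(x)
kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(dim=1).mean()
loss = F.mse_loss(recon, x) + 1e-3 * kl          # reconstruction + KL
```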

It is important to note that while LLMs are powerful, they have limitations [15]. Because they generate sequences one component at a time, they are inherently sequential and cannot be parallelised. Moreover, they are causal, meaning that they can only use information from the past, not the future, when making predictions [33, 34]. They can struggle to capture long-range dependencies because of the vanishing gradient problem, although architectures like Transformers help mitigate this issue.

Application of generative AI in healthcare
Generative AI models that facilitate the creation of text and images are seen as a promising tool in the healthcare context [26, 35, 36]. Generative AI can transform healthcare by enabling improvements in diagnosis, reducing the cost and time required to deliver healthcare and improving patient outcomes (Fig. 1).

Fig. 1 Use cases of generative AI in healthcare. Generative AI models like generative adversarial networks (GANs) and large language models (LLMs) are used to generate various data modalities including text and image data, which are then used for various scenarios including drug discovery, medical diagnosis, clinical documentation, patient education, personalized medicine, healthcare administration and medical education amongst other use cases

Synthetic data generation and data augmentation
Synthetic data, which is created using generative AI models like GANs, is an increasingly promising solution for balancing valuable data access and patient privacy protection [9]. By using generative AI models, realistic and anonymised patient data can be created for research and training purposes, while also enabling a wide range of versatile applications. Moreover, GANs can synthesise electronic health record (EHR) data by learning the underlying data distributions, which allows for excellent performance and addresses challenges such as data privacy concerns. This approach can be particularly useful in situations where there is a limited amount of real-world patient data available, or when access to such data is restricted due to privacy concerns. Additionally, the use of synthetic data can help to improve the accuracy and robustness of machine learning models, as it allows for a more diverse and representative range of data to be used in the training process. Furthermore, the ability to generate synthetic data with different characteristics and parameters can enable researchers and clinicians to investigate and test various hypotheses [5, 9, 37], leading to new insights and discoveries.
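A deliberately simplified sketch of the idea follows: fit distributions on real records, then sample statistically similar synthetic ones. A real deployment would use a GAN or similar model that also captures correlations between columns; the columns and values here are illustrative (pandas and NumPy assumed):

```python
# Simplified stand-in for GAN-based synthetic EHR generation: per-column
# distributions are fitted on real records and sampled to produce fake ones.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
real = pd.DataFrame({
    "age": rng.normal(62, 12, 500).round(),          # stand-in for real EHR data
    "systolic_bp": rng.normal(135, 18, 500).round(),
    "diabetic": rng.binomial(1, 0.3, 500),
})

def synthesise(df, n=1000):
    out = {}
    for col in df:
        if df[col].nunique() <= 2:                   # binary column: match prevalence
            out[col] = rng.binomial(1, df[col].mean(), n)
        else:                                        # continuous: match mean/spread
            out[col] = rng.normal(df[col].mean(), df[col].std(), n).round()
    return pd.DataFrame(out)

synthetic = synthesise(real)
print(synthetic.describe())        # marginal distributions mirror the real data
```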

Drug discovery
Generative AI models are also being used to generate novel small molecules, nucleic acid sequences and proteins with a desired structure or function, thus aiding in drug discovery [11]. By analysing the chemical structure of successful drugs and simulating variations, generative AI can produce potential drug candidates at a much faster rate than traditional drug discovery methods. This not only saves time and resources but can also help to identify drugs that may have gone unnoticed using traditional methods.

Moreover, the use of generative AI can also aid in predicting the efficacy and safety of new drugs, which is a crucial step in the drug development process. By analysing vast amounts of data, generative AI can help to identify potential issues that may arise during clinical trials, which can ultimately reduce the time and cost of drug development [11, 38]. In addition, generative AI, by identifying specific biological processes that play a role in disease, can help to pinpoint new targets for drug development, which can ultimately lead to the development of more effective treatments.

Medical diagnosis
Generative models can be trained on vast datasets of medical records and imagery (like MRIs and CT scans) to identify patterns related to diseases. For instance, GANs have been used for image reconstruction, synthesis, segmentation, registration and classification [5, 9, 37, 39]. Moreover, GANs can be used to generate synthetic medical images that can be used to train machine learning models for image-based diagnosis or augment medical datasets. LLMs can enhance the output of multiple CAD networks, such as diagnosis networks, lesion segmentation networks and report generation networks, by summarising and reorganizing the information presented in natural language text format. This can create a more user-friendly and understandable system for patients compared to conventional CAD systems.

EHRs and other patient records are rich repositories of data, and LLMs can be used to analyse these records in a sophisticated manner [40]. They can process and understand the information and terminology used in these records, which allows them to extract and interpret complex medical information. This capability extends beyond simple keyword matching, as LLMs can infer meaning from incomplete information, and even draw on a vast medical corpus to make sense of the data. Moreover, LLMs can integrate and analyse information from multiple sources within the EHR. They can correlate data from lab results, physician's notes and medical imaging reports to generate a more holistic view of the patient's health [10]. This can be particularly useful in complex cases where the patient has multiple conditions or symptoms that may be related.

LLMs, like GPT-4, have shown medical knowledge despite lacking medicine-specific training [10, 29]. One of the most impressive aspects of these models is their ability to apply this knowledge in decision-making tasks [10]. For example, when presented with a hypothetical patient scenario, an LLM can generate a list of potential diagnoses based on the symptoms described, suggest appropriate tests to confirm the diagnosis and even propose a treatment plan. In some studies, these models have shown near-passing performance on medical exams, demonstrating a level of understanding comparable to that of a medical student [29]. However, limits exist, and the models' outputs may carry certain risks and cannot fully substitute outpatient physicians' clinical judgement and decision-making abilities [14].
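As an illustration of how an LLM might be queried for this kind of diagnostic support, a hedged sketch assuming the openai Python client (the model name and case vignette are illustrative, and any output is a draft requiring clinician review, not a diagnosis):

```python
# Illustrative only: prompting an LLM for differential-diagnosis support.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
case = ("58-year-old male, 2 hours of crushing substernal chest pain "
        "radiating to the left arm, diaphoresis, history of hypertension.")
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a clinical decision support aid. "
         "List differential diagnoses with suggested confirmatory tests."},
        {"role": "user", "content": case},
    ],
)
print(response.choices[0].message.content)  # draft differential for clinician review
```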
Clinical documentation and healthcare administration
LLMs such as GPT-4 and PaLM 2 can be used to generate summaries of patient data [41]. This could be particularly useful in healthcare settings where large amounts of data are collected and need to be interpreted quickly and accurately. For instance, an EHR may contain patient data such as medical history, medications, allergies and laboratory results. A generative AI model could be trained to read through this data, understand the key points and generate a concise summary. This summary could highlight critical information such as diagnosis, prescribed medications and recommended treatments. It could also identify trends in the patient's health over time. By automating this process, healthcare providers could save time and ensure that nothing important is overlooked. Furthermore, these summaries could be used to improve communication between different healthcare providers and between providers and patients, as they provide a clear and concise overview of the patient's health status. The ability of LLMs to automate such processes can alleviate the current documentation burden and the consequent burnout many physicians across the world face [41]. Currently, many clinicians, due to organisational policies or health insurance requirements, are required to fill in lengthy documentation beyond what is required for routine clinical care. Studies have shown that many physicians spend over 1 h of time on electronic health record tasks for every hour of direct clinical face time [42]. Additionally, the cognitive load and frustration associated with documentation can reduce work satisfaction, contributing to their burnout [43]. Implementation of natural language processing tools to automate documentation could lessen this burden. An LLM embedded in the relevant information platform can undertake the documentation and provide draft versions for the clinician to approve [40, 41]. For example, hospitals can use LLMs to generate routine progress notes and discharge summaries [44].
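One plausible shape for such a workflow is a prompt template filled from structured EHR fields, with the model's draft routed to the clinician for approval. The field names, sample values and the commented-out model call below are hypothetical:

```python
# Sketch of LLM-drafted discharge documentation: structured EHR fields are
# slotted into a prompt and the model's draft goes to the clinician for approval.
TEMPLATE = """Draft a concise discharge summary for clinician review.
Admission diagnosis: {diagnosis}
Key results: {results}
Medications on discharge: {medications}
Follow-up: {follow_up}"""

record = {
    "diagnosis": "community-acquired pneumonia",
    "results": "CXR right lower lobe consolidation; CRP trending down",
    "medications": "amoxicillin 500 mg TDS for 3 more days",
    "follow_up": "GP review in 1 week",
}

prompt = TEMPLATE.format(**record)
# draft = llm_client.generate(prompt)   # hypothetical call to any hosted LLM
print(prompt)                           # the filled prompt sent for drafting
```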
Further to this, there is potential for these LLM-based applications to reduce medical errors and capture missed information by providing a layer of scrutiny when embedded in EHRs [45]. In addition to automating documentation, LLMs integrated into EHRs could help reduce medical errors and ensure important information is not missed. Studies have found that many hospital patients will experience a preventable medical error, often due to issues like misdiagnosis, prescription mistakes or examination findings that are not followed up correctly [46].

Also, LLMs have the potential to serve as a decision support tool by analysing patient charts and flagging discrepancies or gaps in care [45]. For example, an LLM could cross-reference current symptoms and diagnostics against past medical history to prompt physicians about conditions that require further investigation. Additionally, they could scan medication lists and warn of potential adverse interactions or contraindications.

Generative AI can also be used to automate routine tasks in healthcare, such as scheduling appointments, processing claims and managing patient records [47]. For example, AI models can be used to develop intelligent scheduling systems. These systems can interact with patients through chatbots or voice assistants to schedule, reschedule or cancel appointments. They can consider factors such as doctor's availability, patient's preferred time and urgency of the appointment to optimize the scheduling process. Generative AI can also automate the process of insurance claims. It can read and understand the claim documents, verify the information, check for any discrepancies and process the claim. This can significantly reduce the time taken to process claims and minimise errors. By automating these tasks, healthcare providers can save time and resources and improve the patient experience as they get faster responses and more efficient service.

Personalized medicine
Generative AI can analyse a patient's genetic makeup, lifestyle and medical history to predict how they might respond to different treatments [48]. This is achieved by training the AI on large datasets of patient information, allowing it to identify patterns and correlations that might not be immediately apparent to human doctors. For example, the AI might notice that patients with a certain genetic marker respond particularly well to a specific medication. This information can then be used to create a personalized treatment plan that is tailored to the individual patient's needs. This approach can lead to more effective treatment, as it considers the unique factors that might affect a patient's response to medication. It can also lead to improved patient outcomes, as treatments can be optimized based on the AI's predictions [48].
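The prediction step described here can be sketched with a standard supervised model rather than a generative one; the synthetic data and features (e.g., a genetic marker) are purely illustrative, and a real model would be trained and validated on curated clinical datasets:

```python
# Simplified sketch of treatment-response prediction from patient features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))      # e.g., age, biomarker level, genetic marker
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

patient = [[0.1, 1.2, 0.9]]        # one new patient's features
print("P(responds to drug):", model.predict_proba(patient)[0, 1])
print("Held-out accuracy:", model.score(X_test, y_test))
```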
Generative AI can also be utilised in the field of mental health, particularly in the creation of interactive tools for cognitive behavioural therapy (CBT) [49, 50]. CBT is a type of psychotherapy that helps patients manage their conditions by changing the way they think and behave. Generative AI can be used to create personalized scenarios and responses that are tailored to the individual patient's needs. For example, the AI might generate a scenario that triggers a patient's anxiety, and then guide the patient through a series of responses to help them manage their reaction. This can provide patients with a safe and controlled environment in which to practice their coping strategies, potentially leading to improved mental health outcomes.

Medical education and training
In the context of medical education and training, this technology can be used to generate a wide variety of virtual patient cases. These cases can be based on a diverse range of medical conditions, patient demographics and clinical scenarios, providing a comprehensive learning platform for medical students and healthcare professionals [51, 52]. One of the primary benefits of using generative AI in medical education is the ability to create a safe and controlled learning environment. Medical students can interact with these virtual patients, make diagnoses and propose treatment plans without any risk to real patients. This allows students to make mistakes and learn from them in a low-stakes setting. Generative AI can also create patient cases that are rare or complex, giving students the opportunity to gain experience and knowledge in areas they might not encounter frequently in their clinical practice. This can be particularly beneficial in preparing students for unexpected situations and enhancing their problem-solving skills. Furthermore, the use of AI in medical education can provide a more personalized learning experience. The AI can adapt to the learning pace and style of each individual, presenting cases that are more relevant to their learning needs. For example, if a student is struggling with a particular medical condition, the AI can generate more cases related to that condition for additional practice.

In addition to creating virtual patient cases, generative AI can also be used to simulate conversations between healthcare professionals and patients [51, 52]. This can help students improve their communication skills and learn how to deliver difficult news in a sensitive and empathetic manner. Moreover, the integration of AI in medical education can provide valuable data for educators. The AI can track the performance of students, identify areas of improvement and provide feedback, helping educators to refine their teaching strategies and curricula.

Patient education
Generative AI can be used for patient education in several ways [35, 41]. It can be used to create personalized educational content based on a patient's specific condition, symptoms or questions. For example, if a patient has diabetes, the AI can generate information about managing blood sugar levels, diet, exercise and medication.

Generative AI can also engage patients in interactive learning experiences. Patients can ask questions, and the AI can generate responses, creating a dialogue that helps the patient understand their condition better. This can be particularly useful for patients who may be shy or embarrassed to ask certain questions to their healthcare providers. Furthermore, generative AI can also create visual aids, such as diagrams or infographics, to help patients understand complex medical concepts. For example, it could generate a diagram showing how a particular drug works in the body.

Generative AI can be programmed to generate content at different reading levels, helping to improve health literacy amongst patients with varying levels of education and comprehension [53]. It can also be used to create follow-up educational content and reminders for patients. For example, it could generate a series of emails or text messages reminding a patient to take their medication, along with information about why it is important. In addition, generative AI can be used to provide mental health support, generating responses to patients' concerns or anxieties about their health conditions. This can help patients feel more supported and less alone in their health journey. Finally, generative AI can generate educational content in multiple languages, making healthcare information more accessible to patients who do not speak English as their first language.

Translational path
The translational path of generative AI in healthcare is a journey that involves the integration of this advanced technology into the clinical setting [54]. This process has the potential to revolutionize the way healthcare is delivered, by automating tasks and generating relevant information, thus enhancing the efficiency of healthcare delivery [26, 35]. Generative AI can automate routine tasks such as data entry, appointment scheduling and even some aspects of patient care like monitoring vital signs or administering medication. This automation can free up a significant amount of time for clinicians, allowing them to focus more on direct patient care. By reducing the administrative burden on healthcare providers, generative AI can help improve the quality of care and increase patient satisfaction [41, 53]. In addition to automating tasks, generative AI can also generate relevant information for clinicians. For example, it can analyse patient data to predict health outcomes, identify potential health risks and suggest personalized treatment plans. This ability to generate insights from data can help clinicians make more informed decisions about patient care, potentially leading to improved patient outcomes.

However, the accuracy and completeness of the information generated by AI are crucial. Inaccurate or incomplete information can lead to misdiagnosis or inappropriate treatment, which can harm patients [14, 55]. Therefore, it is essential to ensure that the AI systems are well designed and thoroughly tested to produce reliable results. Despite the potential benefits, adopting generative AI in clinical medicine is not a straightforward process. It requires careful planning and execution [56]. This includes understanding the needs of the healthcare providers and patients, selecting the right AI technology, integrating it into the existing healthcare systems and training the staff to use it effectively. Moreover, there are also legal and ethical considerations, such as data privacy and security, that need to be addressed. Furthermore, it is important to manage expectations about what generative AI can and cannot do. Clinicians' expertise and their ability to empathize with patients are still crucial in providing high-quality care.

The successful translation of generative AI into clinical practice hinges on thoughtful adoption strategies grounded in implementation science. Two models offer robust scaffolds: the technology acceptance model (TAM) at the individual user level [57] and the Non-Adoption, Abandonment, Scale-up, Spread and Sustainability (NASSS) framework for organisational integration [58]. Grounded in sociopsychological theory, TAM provides an evidence-based model for how end-user perceptions shape acceptance of new technologies like generative AI [59]. Its core tenets posit that perceived usefulness and perceived ease of use prove most determinative of uptake. TAM offers a quantifiable approach for predicting and influencing adoption that deployment efforts must consider. Segmenting staff and assessing beliefs allows tailored interventions addressing barriers like skills gaps, engagement, workflow integration and demonstrable benefits. Equally crucial, the NASSS framework delivers a holistic methodology assessing multi-level variables implicated in successfully embedding innovations. Its seven critical domains encompass technology design, value propositions, adopter priorities, organisational dynamics, wider contextual factors and their complex interplay [58]. Together, these lenses help introduce generative AI responsibly, monitor progress and recalibrate based on emerging feedback. Melding TAM and NASSS perspectives provides a powerful blueprint for thoughtfully ushering generative AI into twenty-first-century healthcare. They bring implementable strategies for the sociotechnical transition such innovations necessitate, promoting buy-in, facilitating integration, delivering sustained value and ultimately transforming care.

Based on these frameworks, the below content discusses the key components or steps a healthcare organisation or service should consider in integrating generative AI in their service delivery.

The description will show partners how to prepare their organisations and workforce to adopt and integrate generative AI to enable optimal care delivery. However, the description does not cover wider policy and legislative aspects that are required to facilitate the introduction of generative AI to healthcare. These characteristics are unique to various jurisdictions and continue to evolve rapidly, and are therefore considered beyond the scope of this article.

First component: acceptance and adoption
The successful implementation of AI in healthcare hinges on the understanding and acceptance of its applications by end users [54], including medical professionals and patients. This comprehension fosters trust in AI systems, enables their effective use and aids in navigating ethical and regulatory challenges. Moreover, a solid grasp of AI promotes continuous learning and adaptation to the evolving landscape of AI technology. Therefore, investment in improving awareness for all partners is crucial to ensure the effective adoption and utilisation of AI in healthcare.

Applying the TAM and NASSS frameworks to the implementation of generative AI in healthcare involves consideration of the following components:

▪ Perceived usefulness: This refers to the degree to which a person believes that using a particular system would enhance his or her job performance. In the context of generative AI in healthcare, this could be how the AI can help in diagnosing diseases, predicting patient outcomes, personalizing treatment plans and improving administrative efficiency. For instance, AI could generate predictive models for patient outcomes based on their medical history, current health status and a vast database of similar cases.
▪ Perceived ease of use: This refers to the degree to which a person believes that using a particular system would be free of effort. For generative AI in healthcare, this could mean how easy it is for healthcare professionals to understand and use the AI system. This includes the user interface, the clarity of the AI's outputs and the level of technical support available.
▪ Attitude towards using: The value proposition of generative AI in healthcare is compelling, offering benefits like cost-effectiveness, speed and personalized treatment options [5]. If healthcare professionals perceive the AI system as useful and easy to use, they are likely to develop a positive attitude towards using it. This positive attitude could be further enhanced by providing adequate training and support and by demonstrating the successful use of AI in similar healthcare settings.
▪ Behavioural intention to use: Once healthcare professionals have a positive attitude towards the AI system, they are more likely to intend to use it. This intention could be turned into actual use by providing opportunities to use the AI system in a safe and supportive environment and by integrating the AI system into existing workflows.
▪ Actual system use: The final step is the actual use of the AI system in daily healthcare practice. This could be encouraged by providing ongoing support and by continuously monitoring and improving the AI system based on user feedback and performance data.

In addition to these factors, the model also suggests that external factors like social influence and facilitating conditions can influence the acceptance and use of a new technology [57, 59]. In the case of generative AI in healthcare, these could include regulatory approval, ethical considerations, patient acceptance and the overall healthcare policy and economic environment.
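One way to operationalise TAM's quantifiable approach is a short staff survey whose Likert items are averaged into perceived-usefulness and perceived-ease-of-use scores per staff segment, so low-acceptance groups can be targeted with tailored training. A minimal sketch with illustrative column names and responses (pandas assumed):

```python
# Sketch of scoring TAM constructs from a staff survey.
import pandas as pd

survey = pd.DataFrame({
    "role":   ["nurse", "physician", "physician", "admin"],
    "pu_1":   [4, 2, 5, 3],   # "The tool would improve my job performance" (1-5)
    "pu_2":   [5, 2, 4, 3],
    "peou_1": [3, 2, 4, 5],   # "The tool would be easy to use" (1-5)
    "peou_2": [4, 1, 4, 5],
})
survey["PU"] = survey[["pu_1", "pu_2"]].mean(axis=1)
survey["PEOU"] = survey[["peou_1", "peou_2"]].mean(axis=1)

# Mean construct scores per role highlight where tailored interventions are needed.
print(survey.groupby("role")[["PU", "PEOU"]].mean())
```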
Second component: data and resources
Adopting generative AI involves preparing data and resources within an organisation to effectively utilise this technology. This is a complex process requiring a systematic and strategic approach that involves several key steps.

▪ Identifying use cases: Healthcare organisations need to begin by identifying the specific use cases where generative AI can bring value. Generative AI aims to address various medical conditions, from chronic diseases like diabetes to acute conditions like stroke [6, 38, 60]. The complexity of the medical condition often dictates the level of sophistication required from the AI model. For instance, using AI for diagnostic imaging in cancer is complex and requires high accuracy. Understanding the specific use cases will help guide the data preparation process.
▪ Data collection: Generative AI models learn from data [8], so the healthcare organisation needs to collect and prepare relevant data for training the models. This could involve gathering existing primary data from various sources within the organisation or collecting new data if necessary. The data then needs to be cleaned and preprocessed, which may involve tasks such as removing duplicates, handling missing values and normalizing data.
▪ Data cleaning and preprocessing: It is necessary to clean and preprocess the collected data to ensure its quality and consistency [61, 62].

This may involve removing duplicates, handling missing values, standardizing formats and addressing any other data quality issues. Preprocessing steps may also include data normalization, feature scaling and data augmentation techniques to enhance the training process. It is important to highlight the need for uniformity in the quality of the datasets to enable seamless cross-functional data integration. Also, data quality is crucial as generative AI algorithms learn from data. The quality of data can be affected by various factors such as noise, missing values, outliers, biased data, lack of balance in distribution, inconsistency, redundancy, heterogeneity, data duplication and integration (a minimal preprocessing sketch follows this component).
▪ Data annotation and labelling: Depending on the use case, the organisation may need to annotate and label the data to provide ground truth and clinical standard information for training the generative AI models, specifically for fine-tuning LLMs with local data [10]. This could involve tasks such as image segmentation, object detection, sentiment analysis or text categorization. Accurate and comprehensive annotations are essential for training models effectively.
▪ Data storage and management: There will be a requirement to establish or utilise a robust data storage and management system to handle the large volumes of data required for generative AI. This may involve setting up a data warehouse, cloud storage or utilising data management platforms, all the while ensuring that the data is organised, accessible and secure for efficient training and model deployment. Data federation is a technology that can be considered here as it enables the creation of a physically decentralized but functionally unified database. This technology is particularly useful in healthcare as it allows various sources of data to keep the data within their firewalls. However, this step may not be required in most instances of the use of LLMs, particularly if they are drawn upon through application programming interface (API) calls or cloud services.
▪ Computational resources: Generative AI models often require significant computational power and resources for training and inference, such as GPUs or cloud computing services [8, 15]. In-house development and training of LLMs requires significant computational resources, which organisations must carefully consider [63]. Commercial LLMs offered through cloud services or APIs spare organisations this infrastructure burden. However, for those intent on training proprietary models tuned to their specific data and use cases, securing sufficient computing capacity is critical.

Factors that impact computational requirements include model size, training data volume and speed of iteration desired. For example, a firm aiming to train a model with over a billion parameters on tens of billions of text examples would likely pursue a high-performance computing cluster or leverage cloud-based machine learning platforms. The precise hardware configuration (including GPUs/TPUs, CPUs, memory, storage and networking) scales with the model architecture and training plan [63].

Ongoing model development and fine-tuning also necessitates available compute. Organisations can choose between continuing to allocate internal resources or outsourcing cycles via cloud services [63]. Budgetary planning should account for these recurring compute demands if continually enhancing in-house LLMs is a priority. Overall, while leveraging external LLMs can minimise infrastructure investments, serious internal LLM initiatives can rival the computational scale of industrial research labs.
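The preprocessing sketch referenced in the data cleaning step above covers de-duplication, imputation of missing values and normalisation (pandas and scikit-learn assumed; the columns and values are illustrative):

```python
# Minimal sketch of routine cleaning steps on a toy clinical table.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "glucose":    [5.4, 5.4, None, 7.9],
    "weight_kg":  [82, 82, 95, None],
})

df = df.drop_duplicates()                          # remove duplicate records
df = df.fillna(df.median(numeric_only=True))       # impute missing values
num_cols = ["glucose", "weight_kg"]
df[num_cols] = MinMaxScaler().fit_transform(df[num_cols])  # normalise to [0, 1]
print(df)
```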
Third component: technical integration
Integrating generative AI into a healthcare information system or platform can bring numerous benefits, such as improved disease diagnosis, enhanced patient monitoring and more efficient healthcare delivery. However, generative AI technologies like GANs and LLMs are complex to understand and implement [8]. The technology's maturity, reliability and ease of integration into existing systems are crucial factors affecting its adoption [58]. Therefore, integrating generative AI into a hospital or healthcare information system involves several steps ranging from understanding the needs of the system to implementing and maintaining the AI solution. The first step in integrating generative AI into a healthcare system is to identify the focus area of implementation [62]. This could be anything from improving patient care, streamlining administrative tasks, enhancing diagnostic accuracy or predicting patient outcomes. Once the need is identified, the right AI model needs to be chosen. Generative AI models, such as GANs, can be used for tasks like synthesising medical images or generating patient data [6, 37]. LLMs can be used for EHR analysis and as a clinical decision support tool [40]. Once the model is chosen, it needs to be trained on the collected data. This involves feeding the data into the model and adjusting the model's parameters until it can accurately predict outcomes or generate useful outputs.

Once the AI model is trained and tested, it can be integrated into the healthcare information system [56, 62]. This involves developing an interface between the AI model and the existing system, ensuring that the model can access the data it needs and that its outputs can be used by the system.

Developing such an interface or API allows the generative AI models to be seamlessly integrated into the organisational or clinical workflow. After integration, the AI system needs to be extensively tested to ensure its functionality, usability and reliability. Regular maintenance is also necessary to update the model as new data becomes available and to retrain it if its performance drops [56, 62]. Furthermore, gathering regular/scheduled feedback from healthcare professionals will ensure the organisation can make necessary refinements to improve the system's performance.
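One common integration shape is a small internal web API that the healthcare information system calls for AI-generated drafts. A minimal sketch assuming FastAPI, with a placeholder where a deployed model would be invoked (the endpoint name and fields are illustrative):

```python
# Sketch of an integration interface: the EHR posts a note, the service
# returns an AI-generated draft flagged as pending clinician approval.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Clinical summarisation service")

class NoteRequest(BaseModel):
    patient_id: str
    note_text: str

@app.post("/v1/summarise")
def summarise(req: NoteRequest) -> dict:
    # draft = model.generate_summary(req.note_text)   # hypothetical model call
    draft = "DRAFT SUMMARY (pending clinician approval): " + req.note_text[:200]
    return {"patient_id": req.patient_id, "draft_summary": draft, "approved": False}

# Run with: uvicorn service:app --reload
```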
When leveraging external LLMs for healthcare applications, stringent data governance practices are imperative to safeguard sensitive patient information [64]. As text or speech data gets routed to third-party LLM services for analysis, the contents contain protected health information (PHI) and personally identifiable information (PII) that must remain confidential.

While LLMs themselves are static analysis models rather than continuously learning systems, the vendors hosting these models and powering predictions still physically or computationally access submitted data [65, 66]. Irrespective of the vendors' reassurances about privacy commitments, obligations and restrictions on ingesting customer content for model retraining, residual risks of data leakage or unintended retention persist. To mitigate these risks, comprehensive legal contracts between the healthcare organisation and LLM vendor are foundational to ensuring PHI/PII protection in accordance with health regulations. Business associate agreements, data usage agreements and master service provider contracts allow formally codifying allowable LLM data processing, storage, transmission and disposal protocols. Such contracts also establish liability and enforcement mechanisms in case of a breach attributed to the vendor, including notification, indemnification and restitution clauses. Strict access controls, encryption schemes, activity audit protocols and authorization procedures should complement these contractual protections. While LLMs themselves may not endlessly accumulate healthcare data like perpetually learning systems, due diligence around the long-term fate of data sent to LLM prediction services remains highly advisable for risk-averse legal and compliance teams [14]. Establishing robust data governance for emerging clinical LLM integration can prevent problematic regulatory, ethical and reputational exposure [64].

While beyond the scope of this article to discuss in detail, the organisation will additionally have a responsibility to ensure the AI system complies with relevant healthcare regulations and privacy laws [55], such as the Health Insurance Portability and Accountability Act (HIPAA) in the USA or the General Data Protection Regulation (GDPR) in the European Union.

Fourth component: governance
While generative AI has several potential applications in clinical medicine, there are also several challenges associated with its implementation. Some of the challenges include the following:

▪ Data availability: Generative AI requires large amounts of data to train models effectively [8]. However, in clinical medicine, data is often limited due to privacy concerns and regulations. This can make it difficult to train models effectively.
▪ Bias in training data: Generative AI models require large amounts of training data to learn patterns and generate new data. If the training data is biased, the generative AI model will also be biased [13]. For example, if the training data is skewed towards a particular demographic group, the generative AI model may produce biased results for that group.
▪ Transparency: While powerful LLMs like ChatGPT demonstrate impressive conversational ability, the opaque sourcing of their massive training corpora has rightly drawn scrutiny [64, 65]. Absent transparency around the origin, copyright status and consent policies of underlying data, legal and ethical blind spots remain. For commercially offered LLMs, details of training processes understandably remain proprietary intellectual property. However, the use of scraped web pages, private discussions, or copyrighted content without permission during model development can still create liability. Recent lawsuits alleging unauthorised scraping by LLM providers exemplify the growing backlash.
▪ Model interpretability: Generative AI models can be complex and difficult to interpret, making it challenging for clinicians to understand how the model arrived at its conclusions [13, 67]. This can make it difficult to trust the model's output and incorporate it into clinical decision-making.
▪ Inaccurate generation: While LLMs demonstrate impressive fluency and versatility in conversational applications, their reliability breaks down when applied to high-stakes domains like healthcare [14, 55]. Without the contextual grounding in factual knowledge and reasoning capacity needed for medical decision-making, LLMs pose substantial patient safety risks if overly trusted by clinicians [14]. Hallucination errors represent one demonstrated failure mode, where LLMs confidently generate plausible-sounding but entirely fabricated responses that lie outside their training distributions. For patient assessments, treatment plans or other clinical support functions, such creative falsehoods could readily culminate in patient harm if not rigorously validated [64]. Additionally, LLMs often ignore nuanced dependencies in multi-step reasoning that underlie sound medical judgments. Their capabilities centre on statistical associations rather than causal implications [68]. As such, they frequently oversimplify the complex decision chains requiring domain expertise that clinicians must weigh. Blindly accepting an LLM-generated diagnostic or therapeutic suggestion without scepticism can thus propagate errors.

dated [64]. Additionally, LLMs often ignore nuanced policies with diverse voices, emphasizing transparency,
dependencies in multi-step reasoning that underlie providing extensive user training resources, developing
sound medical judgments. Their capabilities centre protocols to assess AI quality and fairness, allowing user
on statistical associations rather than causal implica- customization of tools, and continually evaluating impact
tions [68]. As such, they frequently oversimplify the to enable appropriate adaptation over time. Drawing
complex decision chains requiring domain exper- from these models ensures responsible and ethical inte-
tise that clinicians must weigh. Blindly accepting an gration guided by end-user needs. The following are cor-
LLM-generated diagnostic or therapeutic suggestion responding steps:
without scepticism can thus propagate errors.
▪ Regulatory and ethical issues: The use of genera- ▪ Establish or utilise a governance committee: This
tive AI in clinical medicine raises several regulatory committee should be composed of experts in AI,
and ethical issues [14], including patient privacy, healthcare, ethics, law and patient advocacy. The
data ownership and accountability. Regulatory poli- committee’s responsibility is to supervise the creation
cies, ethical considerations and public opinion form and implementation of generative AI applications in
the wider context. Data privacy laws like GDPR in healthcare, making sure they adhere to the highest
Europe or HIPAA in the USA have implications moral, statutory and professional standards.
for AI in healthcare [65]. These aspects need to be ▪ Develop relevant policies and guidelines: Create
addressed to ensure that the use of generative AI is policies and guidelines that address issues like data
ethical and legal. protection and security, informed consent, openness,
▪ Validation: Generative AI models need to be vali- responsibility and fairness in relation to the usage of
dated to ensure that they are accurate and reliable generative AI in healthcare. The guidelines should
[62]. This requires large datasets and rigorous testing, also cover potential AI abuse and lay out precise
which can be time-consuming and expensive. reporting and resolution processes.
To minimise risks arising from the application of generative AI in healthcare, it is important to establish a governance and evaluation framework grounded in implementation science [64]. Frameworks such as the NASSS framework and the TAM should inform subsequent steps to promote responsible and ethical use of generative AI [58, 69]. This implementation science-informed approach includes several steps to ensure appropriate testing, monitoring and iteration of the technology. The NASSS framework provides a useful lens for assessing the complex adaptive systems into which generative AI solutions would be embedded [58]. This framework examines factors such as the condition, technology, value proposition, adopter system, organisation, wider context, and interaction and mutual adaptation over time. Analysing these elements can reveal barriers and enablers to adopting generative AI across healthcare organisations. Similarly, the TAM focuses specifically on the human and social factors influencing technology uptake [59]. By evaluating perceived usefulness and perceived ease of use of generative AI systems, TAM provides insights into how both patients and providers may respond to and interact with the technology, and it encourages stakeholder participation in system design to optimize user acceptance.
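TAM's two constructs are typically measured with short Likert-scale item batteries completed by pilot users. As a minimal sketch of how an adoption team might summarise such responses (the constructs are TAM's own; the scores below are invented for illustration, and validated questionnaire instruments should be used in practice):

from statistics import mean

# Invented 1-7 Likert responses from pilot clinicians, one list per
# TAM construct.
responses = {
    "perceived usefulness": [6, 5, 7, 4, 6],
    "perceived ease of use": [3, 4, 2, 4, 3],
}

for construct, scores in responses.items():
    print(f"{construct}: {mean(scores):.1f} / 7")

# High usefulness alongside low ease of use would steer the rollout
# team toward training and interface refinement before any scale-up.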
Both NASSS and TAM demand a thoughtful change management strategy for introducing new technologies like generative AI. This means conducting iterative testing and piloting of systems, co-developing governance policies with diverse voices, emphasizing transparency, providing extensive user training resources, developing protocols to assess AI quality and fairness, allowing user customization of tools, and continually evaluating impact to enable appropriate adaptation over time. Drawing from these models ensures responsible and ethical integration guided by end-user needs. The corresponding steps are as follows:

▪ Establish or utilise a governance committee: This committee should be composed of experts in AI, healthcare, ethics, law and patient advocacy. The committee's responsibility is to supervise the creation and implementation of generative AI applications in healthcare, making sure they adhere to the highest moral, statutory and professional standards.
▪ Develop relevant policies and guidelines: Create policies and guidelines that address issues like data protection and security, informed consent, openness, responsibility and fairness in relation to the usage of generative AI in healthcare. The guidelines should also cover potential AI abuse and lay out precise reporting and resolution processes.
▪ Implement robust data management practices: This includes ensuring data privacy and security, obtaining informed consent for data use, and ensuring data quality and integrity. It also involves using diverse and representative datasets to avoid bias in AI outputs.
▪ Mitigate inaccurate generated data: While LLMs have strengths in certain narrow applications, their limitations in recalling the latest findings, grounding advice in biomedical knowledge and deliberative analytical thinking pose risks in clinical roles [14]. Mitigating these requires both technological and process safeguards. At minimum, meticulous testing on massive, validated datasets, transparent uncertainty quantification, multi-modal human-AI collaboration and consistent expert oversight are essential before contemplating LLM adoption for patient-impacting functions (a minimal sketch of one such safeguard, self-consistency flagging, appears after this list). With careful governance, LLMs may aid clinicians but cannot replace them.
▪ Risk assessment: Prior to implementation, healthcare organisations must undertake structured risk assessments to inventory and quantify potential patient harms from generative AI adoption. Multidisciplinary teams including clinicians, IT security, legal/compliance, risk management and AI engineers should participate. A broad examination of use cases, data dependencies, performance assumptions, safeguards, governance and liability scenarios provides the foundation. Identified dangers span clinical inaccuracies, such as inappropriate treatment suggestions, to operational risks, such as biased outputs or diagnostics halted by technical outages. Other key considerations are malicious misuse, defects propagating as training data, and breaches of sensitive records compromising privacy or trust.

For each plausible risk, the assessment calibrates probability and severity estimates for variables like user types, information classes and mitigating controls (a minimal scoring sketch follows this paragraph). Continuous risk monitoring based on leading indicators and usage audits ensures the initial assessment adapts alongside inevitable model and application changes over time. Periodic probabilistic modelling using safety assurance methodologies further reinforces responsible governance. Overall, a nimble, quantified risk approach prepares organisations to responsibly pursue generative AI's benefits while protecting patients.
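As one common convention for the scoring sketch referred to above, ordinal probability and severity ratings can be multiplied into priority bands. The register entries and band thresholds below are invented for illustration and would need calibration to an organisation's own risk appetite.

# Invented risk register entries: (description, probability 1-5, severity 1-5).
risks = [
    ("Inappropriate treatment suggestion accepted unchecked", 2, 5),
    ("Biased output for an under-represented patient group", 3, 4),
    ("Technical outage halts AI-assisted documentation", 3, 2),
]

def priority(probability: int, severity: int) -> str:
    # Simple probability x severity banding.
    score = probability * severity
    return "high" if score >= 12 else "medium" if score >= 6 else "low"

for description, p, s in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"[{priority(p, s):>6}] p={p} s={s} {description}")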
▪ Ensuring transparency: Ensure transparency of generative AI models by providing clear documentation of the underlying algorithms, data sources and decision-making processes. This promotes trust and enables healthcare professionals to understand and interpret the generated outputs. For risk-averse healthcare organisations, partnering with LLM vendors who refuse reasonable data transparency raises concerns: if unsuitable, illegal or fraudulent data underpins model predictions, patient safety and organisational reputation may suffer [13, 68]. Furthermore, litigation alleging regulatory noncompliance, privacy violations or misrepresentation based on questionable LLM data sourcing could follow [14]. Nonetheless, for many clinical functions, externally developed LLMs can sufficiently assist physicians without full transparency into the underlying corpora. Simple conversational applications likely pose little concern; however, for more impactful care recommendations or patient-specific outputs, clinicians should validate suggestions rather than presume integrity [64]. Overall, the inaccessible nature of commercial LLM training data is an obstacle, but not a wholesale deal-breaker with careful governance around how predictions get utilised. Still, transparency remains an ongoing advocacy issue that healthcare providers should champion [64].
▪ Regulatory compliance: Ensure compliance with relevant regulatory frameworks, such as data protection laws and medical device regulations, and collaborate with regulatory authorities to establish guidelines specific to generative AI in healthcare.
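The self-consistency sketch promised under "Mitigate inaccurate generated data" follows. sample_model is a hypothetical stand-in for a stochastic LLM call; low agreement across repeated samples is treated as a cue to route the output to a clinician, not as proof of error, and high agreement still does not remove the need for clinician verification.

import random
from collections import Counter

def sample_model(prompt: str) -> str:
    # Hypothetical stand-in for repeated stochastic calls to an LLM.
    return random.choice(["amoxicillin", "amoxicillin", "doxycycline"])

def most_common_answer(prompt: str, n: int = 10) -> tuple[str, float]:
    # Sample n times; report the modal answer and its agreement rate.
    counts = Counter(sample_model(prompt) for _ in range(n))
    answer, freq = counts.most_common(1)[0]
    return answer, freq / n

answer, agreement = most_common_answer("First-line antibiotic for condition X?")
if agreement < 0.8:
    print(f"low agreement ({agreement:.0%}): route '{answer}' to a clinician")
else:
    print(f"high agreement ({agreement:.0%}): '{answer}' still needs clinician sign-off")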

Fig. 2 Translational path for generative AI in healthcare. Generative AI needs careful planning to incorporate it into healthcare delivery. Appropriate steps include ensuring there is acceptance amongst partners, followed by planning for data acquisition and computation resources. Thereafter, integration and utilisation of generative AI in healthcare information systems is governed by a risk mitigation framework.
Establishing procedures for ongoing monitoring and evaluation of the models is crucial in addition to the governance measures above [67, 70]. This involves collecting input from patients and healthcare experts as well as regular monitoring of performance, safety and ethical considerations. Healthcare organisations can reduce risks and guarantee the appropriate and ethical use of generative AI in healthcare by adhering to every step in this framework (Fig. 2). The governance framework harnesses the potential advantages of generative AI technology while promoting openness, responsibility and patient safety.
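As one illustration of what such ongoing monitoring could look like operationally, the sketch below runs a rolling accuracy check over expert-audited output samples and raises a governance alert when a threshold is breached; all figures are invented.

# Invented weekly accuracy figures from expert-audited output samples.
weekly_accuracy = [0.91, 0.90, 0.92, 0.88, 0.84, 0.81]
WINDOW, FLOOR = 3, 0.85  # rolling window size and alert threshold

for week in range(WINDOW, len(weekly_accuracy) + 1):
    rolling = sum(weekly_accuracy[week - WINDOW:week]) / WINDOW
    if rolling < FLOOR:
        print(f"week {week}: rolling accuracy {rolling:.2f} below {FLOOR}; "
              "trigger governance review")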
Conclusion
Healthcare systems worldwide face crises of affordability, access and inconsistent quality that now endanger public health [71]. Generative AI presents solutions to begin rectifying these systemic failures through responsible implementation guided by scientific best practices.

Validated frameworks like the TAM and the NASSS model provide actionable roadmaps for change management, stakeholder alignment and impact optimization [58, 59]. They allow anticipating adoption barriers related to perceived value, usability, risks and more, while delineating interventions to drive acceptance. With meticulous planning grounded in evidence, generative AI can transform productivity, insight and care enhancement. Use cases like workflow and documentation automation, personalized predictive analytics, and patient education chatbots confirm vast potential [26, 41, 45], provided the technology supports rather than supplants human expertise. Structured integrations emphasizing clinician control safeguard quality while unlocking efficiency. Thoughtful translation is essential, but implementation science provides proven guidance.

The time for debate has passed. Patients worldwide stand to benefit, and responsible leaders must act urgently. Strategic pilots, iterative scaling and governance emphasizing ethics alongside innovation will realize long-overdue progress. Generative AI cannot single-handedly fix broken systems, but carefully facilitated adoption can catalyse reform while upholding healthcare's humanitarian obligations. The approach, not just the technology, defines success. Guided by wisdom and compassion, generative AI may help restore the healthcare ideals so many now lack: quality, affordability and humane care for all.
Author's contributions
This is a single-author manuscript.

Availability of data and materials
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Declarations

Ethics approval and consent to participate
Not applicable.

Consent for publication
Not applicable.

Competing interests
The author declares no competing interests.

Author details
1 Deakin School of Medicine, Waurn Ponds, Geelong, VIC 3215, Australia.

Received: 17 August 2023 Accepted: 6 March 2024

References
1. Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J. 2021;8(2):e188–94.
2. Desai AN. Artificial intelligence: promise, pitfalls, and perspective. JAMA. 2020;323(24):2448–9.
3. Reddy S, Fox J, Purohit MP. Artificial intelligence-enabled healthcare delivery. J R Soc Med. 2019;112(1):22–8.
4. Kothari AN. ChatGPT, large language models, and generative AI as future augments of surgical cancer care. Ann Surg Oncol. 2023;30(6):3174–6.
5. Lan L, You L, Zhang Z, Fan Z, Zhao W, Zeng N, et al. Generative adversarial networks and its applications in biomedical informatics. Front Public Health. 2020;8:164.
6. Arora A, Arora A. Generative adversarial networks and synthetic patient data: current challenges and future perspectives. Future Healthc J. 2022;9:190–3.
7. Jadon A, Kumar S. Leveraging generative AI models for synthetic data generation in healthcare: balancing research and privacy. 2023. arXiv.org.
8. Brynjolfsson E, Li D, Raymond LR. Generative AI at work. NBER Working Papers 31161, National Bureau of Economic Research, Inc. 2023.
9. Suthar AC, Joshi V, Prajapati R. A review of generative adversarial-based networks of machine learning/artificial intelligence in healthcare. 2022.
10. Kanjee Z, Crowe B, Rodman A. Accuracy of a generative artificial intelligence model in a complex diagnostic challenge. JAMA. 2023;330:78–80.
11. Vert JP. How will generative AI disrupt data science in drug discovery? Nat Biotechnol. 2023;41(6):750–1.
12. Zhavoronkov A. Caution with AI-generated content in biomedicine. Nat Med. 2023;29(3):532.
13. Zohny H, McMillan J, King M. Ethics of generative AI. J Med Ethics. 2023;49(2):79–80.
14. Duffourc M, Gerke S. Generative AI in health care and liability risks for physicians and safety concerns for patients. JAMA. 2023;330:313–4.
15. Stokel-Walker C, Van Noorden R. What ChatGPT and generative AI mean for science. Nature. 2023;614(7947):214–6.
16. Payne TH, Corley S, Cullen TA, Gandhi TK, Harrington L, Kuperman GJ, et al. Report of the AMIA EHR-2020 Task Force on the status and future direction of EHRs. J Am Med Inform Assoc. 2015;22(5):1102–10.
17. Kass NE, Faden RR, Goodman SN, Pronovost P, Tunis S, Beauchamp TL. The research-treatment distinction: a problematic approach for determining which activities should have ethical oversight. Hastings Cent Rep. 2013;Spec No:S4–S15.
18. Epstein Z, Hertzmann A, Investigators of Human C, Akten M, Farid H, Fjeld J, et al. Art and the science of generative AI. Science. 2023;380(6650):1110–1.
19. Takefuji Y. A brief tutorial on generative AI. Br Dent J. 2023;234(12):845.
20. Gozalo-Brizuela R, Garrido-Merchan EC. ChatGPT is not all you need. A state of the art review of large generative AI models. 2023. arXiv preprint arXiv:2301.04655.
21. Kingma DP, Welling M. An introduction to variational autoencoders. Found Trends Mach Learn. 2019;12(4):307–92.
22. Kumar M, Babaeizadeh M, Erhan D, Finn C, Levine S, Dinh L, Kingma D. VideoFlow: a conditional flow-based model for stochastic video generation. 2019. arXiv preprint arXiv:1903.01434.
23. Du Y, Mordatch I. Implicit generation and modeling with energy based models. Adv Neural Inf Process Syst. 2019;32.
24. Creswell A, White T, Dumoulin V, Arulkumaran K, Sengupta B, Bharath AA. Generative adversarial networks: an overview. IEEE Signal Process Mag. 2018;35(1):53–65.
25. Brants T, Popat AC, Xu P, Och FJ, Dean J. Large language models in machine translation. 2007.
26. Uprety D, Zhu D, West HJ. ChatGPT-a promising generative AI tool and its implications for cancer care. Cancer. 2023;129(15):2284–9.
27. Saha S. Llama 2 vs GPT-4 vs Claude-2. Analytics India Magazine. 19 July 2023.
28. Vincent J. Google's AI PaLM 2 language model announced at I/O. The Verge. 2023. Available from: https://www.theverge.com/2023/5/10/23718046/google-ai-palm-2-language-model-bard-io.
29. Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, Chartash D. How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ. 2023;9:e45312.
30. Liu T, Jiang Y, Monath N, Cotterell R, Sachan M. Autoregressive structured prediction with language models. 2022. arXiv preprint arXiv:2210.14698.
31. Vaswani A, Shazeer NM, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I. Attention is all you need. Neural Information Processing Systems. 2017.
32. Agrawal M, Hegselmann S, Lang H, Kim Y, Sontag D. Large language models are zero-shot clinical information extractors. 2022. arXiv preprint arXiv:2205.12689.
33. y Arcas BA. Do large language models understand us? Daedalus. 2022;151(2):183–97.
34. Józefowicz R, Vinyals O, Schuster M, Shazeer NM, Wu Y. Exploring the limits of language modeling. 2016. arXiv: abs/1602.02410.
35. Haupt CE, Marks M. AI-generated medical advice-GPT and beyond. JAMA. 2023;329(16):1349–50.
36. Korngiebel DM, Mooney SD. Considering the possibilities and pitfalls of Generative Pre-trained Transformer 3 (GPT-3) in healthcare delivery. NPJ Digit Med. 2021;4:93.
37. Limeros SC, Majchrowska S, Zoubi MK, Rosén A, Suvilehto J, Sjöblom L, Kjellberg MJ. GAN-based generative modelling for dermatological applications - comparative study. 2022. arXiv.
38. Callaway E. How generative AI is building better antibodies. Nature. 2023. https://doi.org/10.1038/d41586-023-01516-w.
39. Gong C, Jing C, Chen X, Pun CM, Huang G, Saha A, et al. Generative AI for brain image computing and brain network computing: a review. Front Neurosci. 2023;17:1203104.
40. Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, et al. A large language model for electronic health records. NPJ Digit Med. 2022;5(1):194.
41. Patel SB, Lam K. ChatGPT: the future of discharge summaries? Lancet Digit Health. 2023;5:e107–8.
42. Tai-Seale M, Olson CW, Li J, Chan AS, Morikawa C, Durbin M, et al. Electronic health record logs indicate that physicians split time evenly between seeing patients and desktop medicine. Health Aff (Millwood). 2017;36(4):655–62.
43. Downing NL, Bates DW, Longhurst CA. Physician burnout in the electronic health record era: are we ignoring the real cause? Ann Intern Med. 2018;169(1):50–1.
44. Lin SY, Shanafelt TD, Asch SM. Reimagining clinical documentation with artificial intelligence. Mayo Clin Proc. 2018;93(5):563–5.
45. Clusmann J, Kolbinger FR, Muti HS, Carrero ZI, Eckardt J-N, Laleh NG, et al. The future landscape of large language models in medicine. Commun Med. 2023;3(1):141.
46. James JT. A new, evidence-based estimate of patient harms associated with hospital care. J Patient Saf. 2013;9(3):122–8.
47. Kocaballi AB, Ijaz K, Laranjo L, Quiroz JC, Rezazadegan D, Tong HL, et al. Envisioning an artificial intelligence documentation assistant for future primary care consultations: a co-design study with general practitioners. J Am Med Inform Assoc. 2020;27(11):1695–704.
48. Kline A, Wang H, Li Y, Dennis S, Hutch M, Xu Z, et al. Multimodal machine learning in precision health: a scoping review. NPJ Digit Med. 2022;5(1):171.
49. van Schalkwyk G. Artificial intelligence in pediatric behavioral health. Child Adolesc Psychiatry Ment Health. 2023;17(1):38. https://doi.org/10.1186/s13034-023-00586-y.
50. Yang K, Ji S, Zhang T, Xie Q, Ananiadou S. On the evaluations of ChatGPT and emotion-enhanced prompting for mental health analysis. 2023. arXiv preprint arXiv:2304.03347.
51. Khan RA, Jawaid M, Khan AR, Sajjad M. ChatGPT-reshaping medical education and clinical management. Pak J Med Sci. 2023;39(2):605.
52. Eysenbach G. The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers. JMIR Med Educ. 2023;9(1):e46885.
53. Gabrielson AT, Odisho AY, Canes D. Harnessing generative artificial intelligence to improve efficiency among urologists: welcome ChatGPT. Wolters Kluwer: Philadelphia, PA; 2023. p. 827–9.
54. Lambert SI, Madi M, Sopka S, Lenes A, Stange H, Buszello CP, Stephan A. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. NPJ Digit Med. 2023;6(1):111.
55. Gottlieb S, Silvis L. Regulators face novel challenges as artificial intelligence tools enter medical practice. JAMA Health Forum. 2023;4(6):e232300.
56. Novak LL, Russell RG, Garvey K, Patel M, Thomas Craig KJ, Snowdon J, Miller B. Clinical use of artificial intelligence requires AI-capable organizations. JAMIA Open. 2023;6(2):ooad028.
57. Holden RJ, Karsh B-T. The technology acceptance model: its past and its future in health care. J Biomed Inform. 2010;43(1):159–72.
58. Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A'Court C, et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res. 2017;19(11):e367.
59. Marangunić N, Granić A. Technology acceptance model: a literature review from 1986 to 2013. Univ Access Inf Soc. 2015;14:81–95.
60. Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. 2023;6:1169595.
61. Aristidou A, Jena R, Topol EJ. Bridging the chasm between AI and clinical implementation. Lancet. 2022;399(10325):620.
62. van de Sande D, Van Genderen ME, Smit JM, Huiskens J, Visser JJ, Veen RER, et al. Developing, implementing and governing artificial intelligence in medicine: a step-by-step approach to prevent an artificial intelligence winter. BMJ Health Care Inform. 2022;29(1):e100495.
63. Christiano P. Large language model training in 2023: a practical guide. ExpertBeacon. 2023. Available from: https://expertbeacon.com/large-language-model-training/.
64. Reddy S. Evaluating large language models for use in healthcare: a framework for translational value assessment. Inform Med Unlocked. 2023;41:101304. Available from: https://www.sciencedirect.com/science/article/pii/S2352914823001508.
65. Reddy S. Navigating the AI revolution: the case for precise regulation in health care. J Med Internet Res. 2023;25:e49989.
66. Li H, Moon JT, Purkayastha S, Celi LA, Trivedi H, Gichoya JW. Ethics of large language models in medicine and medical research. Lancet Digit Health. 2023;5(6):e333–5.
67. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inform Assoc. 2020;27(3):491–7.
68. Harrer S. Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine. EBioMedicine. 2023;90:104512.
69. Dyb K, Berntsen GR, Kvam L. Adopt, adapt, or abandon technology-supported person-centred care initiatives: healthcare providers' beliefs matter. BMC Health Serv Res. 2021;21(1):240.
70. Reddy S, Rogers W, Makinen VP, Coiera E, Brown P, Wenzel M, et al. Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health Care Inform. 2021;28(1):e100444.
71. Reddy S. Artificial intelligence and healthcare—why they need each other? J Hosp Manag Health Policy. 2020;5:9.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
