Benefits and risks of using artificial intelligence for pharmaceutical development and delivery
Some rights reserved. This work is available under the Creative Commons Attribution-NonCommercial-
ShareAlike 3.0 IGO licence (CC BY-NC-SA 3.0 IGO; https://creativecommons.org/licenses/by-nc-sa/3.0/igo).
Under the terms of this licence, you may copy, redistribute and adapt the work for non-commercial purposes,
provided the work is appropriately cited, as indicated below. In any use of this work, there should be no
suggestion that WHO endorses any specific organization, products or services. The use of the WHO logo is
not permitted. If you adapt the work, then you must license your work under the same or equivalent Creative
Commons licence. If you create a translation of this work, you should add the following disclaimer along with
the suggested citation: “This translation was not created by the World Health Organization (WHO). WHO is not
responsible for the content or accuracy of this translation. The original English edition shall be the binding and
authentic edition”.
Any mediation relating to disputes arising under the licence shall be conducted in accordance with the
mediation rules of the World Intellectual Property Organization (http://www.wipo.int/amc/en/mediation/rules/).
Suggested citation. Benefits and risks of using artificial intelligence for pharmaceutical development and
delivery. Geneva: World Health Organization; 2024. Licence: CC BY-NC-SA 3.0 IGO.
Third-party materials. If you wish to reuse material from this work that is attributed to a third party, such as
tables, figures or images, it is your responsibility to determine whether permission is needed for that reuse and
to obtain permission from the copyright holder. The risk of claims resulting from infringement of any third-
party-owned component in the work rests solely with the user.
General disclaimers. The designations employed and the presentation of the material in this publication do
not imply the expression of any opinion whatsoever on the part of WHO concerning the legal status of any
country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries.
Dotted and dashed lines on maps represent approximate border lines for which there may not yet be
full agreement.
The mention of specific companies or of certain manufacturers’ products does not imply that they are endorsed
or recommended by WHO in preference to others of a similar nature that are not mentioned. Errors and
omissions excepted, the names of proprietary products are distinguished by initial capital letters.
All reasonable precautions have been taken by WHO to verify the information contained in this publication.
However, the published material is being distributed without warranty of any kind, either expressed or implied.
The responsibility for the interpretation and use of the material lies with the reader. In no event shall WHO be
liable for damages arising from its use.
Contents
Acknowledgements.................................................................................................v
Abbreviations.........................................................................................................vii
Executive summary..............................................................................................viii
1 Introduction....................................................................................................1
6 Challenges in governance.............................................................................18
6.1 Governance of data.................................................................................. 18
6.2 Ownership and intellectual property...................................................... 19
6.3 Governance of the private sector............................................................ 20
6.4 Regulatory oversight and approval of AI-developed medicines and vaccines........................ 22
7 Next steps.....................................................................................................23
References.............................................................................................................24
Acknowledgements
This discussion paper was developed by Andreas Reis (co-lead, Health Ethics and Governance
Unit, Department of Research for Health) and Sameer Pujari (Technical Officer, Department
of Digital Health and Innovation) with direction from Deirdre Dimancesco (Technical Officer,
Department of Health Products Policy and Standards) and the overall guidance of John
Reeder (Director, Research for Health), Alain Labrique (Director, Digital Health and Innovation)
and Jeremy Farrar (Chief Scientist).
Rohit Malpani (consultant, France) was the lead writer. The co-chairs of the expert group
on ethics and governance of AI for health, Effy Vayena (ETH Zurich, Switzerland) and Partha
Majumder (Indian Statistical Institute and National Institute of Biomedical Genomics, India),
provided overall guidance on drafting of the report and leadership of the expert group.
WHO is grateful to the following individuals who contributed to development of the guidance.
Observers
David Gruson, Luminess, Paris, France; Lee Hibbard, Council of Europe, Strasbourg, France.
External reviewers
Yara Aboelwaffa (Dubai, United Arab Emirates); Andreas Bender (Cambridge University,
Cambridge, United Kingdom); Michelle Childs (Drugs for Neglected Diseases Initiative, Geneva,
Switzerland); Alexandrine Pirlot de Corbion (Privacy International, London, United Kingdom);
Rieke van der Graaf (Julius Centre, University Medical Centre, Utrecht, Netherlands (Kingdom
of the)); Tim Hubbard (King’s College, London, United Kingdom); Karen Jongsma (Julius
Centre, University Medical Centre Utrecht, Utrecht, Netherlands (Kingdom of the)); Charles
Mowbray (Drugs for Neglected Diseases Initiative, Geneva, Switzerland); Janneke van Oirschot
(Health Action International, Amsterdam, Netherlands (Kingdom of the)); Laura Piddock
(Global Antibiotic Research and Development Partnership, Birmingham, United Kingdom);
Mario Ravic (Ericsson Nikola Tesla, Zagreb, Croatia); Tim Reed (Health Action International,
Amsterdam, Netherlands (Kingdom of the)); Sarah Simms (Privacy International, London,
United Kingdom); and Tom West (Privacy International, London, United Kingdom).
All experts, observers and external reviewers declared their interests in line with WHO
policies. None of the interests declared were assessed to be significant.
WHO
Craig Burgess (Development Cooperation Specialist, Division of Data, Analytics and Delivery
for Impact); Steve Estevao Cordeiro (Technical Officer, Health Products Policy and Standards);
Jeremy Farrar (Chief Scientist); Christine Guillard (Knowledge Management Adviser, Regulation
and Prequalification); Luther Gwaza (Team Lead, Health Products Policy and Standards);
Mattias Helble (Scientist, Research for Health); and Vasee Sathiyamoorthy (Coordinator,
Research for Health).
Abbreviations
AI artificial intelligence
IP intellectual property
Executive summary
WHO recognizes that artificial intelligence (AI) holds great promise for pharmaceutical
development and delivery. However, AI also presents risks and ethical challenges that must be
addressed if societies, health systems and individuals are to fully reap its benefits.
This discussion paper examines the expanding application of AI to each step of development
and deployment of medicines and vaccines. AI is already used in most steps of pharmaceutical
development, and, in the future, it is likely that nearly all pharmaceutical products that come
to market will have been “touched” by AI at some point in their development, approval or
marketing. Although these uses of AI may have a commercial benefit, it is imperative that use
of AI also has public health benefit and appropriate governance.
This discussion paper also addresses the opportunities and ethical challenges of using AI
for pharmaceutical R&D and in certain marketing registration and post-approval activities.
This includes long-standing ethical challenges and risks associated with pharmaceutical
development that pre-date the emergence of AI, but which AI has the potential to exacerbate.
The paper also examines other risks with the use of AI, many of which were previously
discussed in WHO guidance published in June 2021 on the Ethics and governance of artificial
intelligence for health.
WHO is issuing this paper to inform a broad range of stakeholders, in particular policymakers,
researchers and civil society, on the scope of use of AI in pharmaceutical development and
delivery. More research and analysis and future guidance are required to keep pace with this
fast-moving field and to study use cases and the benefits and risks of such uses.
1 Introduction
Artificial intelligence (AI) refers to the ability of algorithms encoded in technology to learn
from data so that they can perform automated tasks without explicit programming of every
step by a human (1). WHO recognizes that AI holds great promise for the advancement of
human health and for attainment of universal health coverage; however, AI also presents risks
and ethical challenges that must be addressed if societies, health systems and individuals are
to fully reap its benefits. The development and adoption of appropriate principles, rules and
regulations have become more urgent with the speed of technological advances in use of AI
and its rapid adoption and uptake for diverse and occasionally unforeseeable uses.
A fast-growing use of AI has been in the lifecycle of discovery, clinical development and
delivery of pharmaceutical products (medicines and vaccines). This discussion paper provides
a brief overview of the ever-expanding application of AI to each step of development and
deployment. Although the document does not address the growing use of AI for medical
devices, including diagnostic technologies, many of the principles and challenges discussed
are relevant to that use. WHO has published guidance on training, validation and evaluation of
AI for cervical cancer screening (2).
Present-day use of AI in the pharmaceutical sector is not the first use of computational
approaches for this purpose. Computing has played a critical role for decades: computer-aided
drug design dates to the 1970s (3), and, in the early 1980s, the “next industrial revolution”
was proclaimed, with pharmaceuticals designed solely by computers (4). Computational
approaches are also routinely used, for example, for screening compound libraries (5).
The past decade has witnessed a dramatic increase in use of AI in pharmaceutical research
and development (R&D), beyond previous uses. In 2021, the US Food and Drug Administration
received more than 100 submissions for registration of medicines, including biological
therapeutics, with AI components (6). Table 1 provides a quantitative overview of the current
breadth of AI outputs for pharmaceutical development.
Although the figures in Table 1 indicate accelerated interest in, investment in and outputs
from use of AI in pharmaceutical development, there is concern that the increasing hope in
the potential of AI is yet another example of the AI “hype” cycle. While at least 73 AI-derived
compounds are in development, all but two are in phase I or phase II trials (8), and no novel
compound fully designed or identified with AI has been approved for use in humans (although
AI has played a role in the development of novel medicines and vaccines that have been
approved for human use).
This discussion paper addresses the opportunities and ethical challenges of using AI for
pharmaceutical R&D and in certain marketing registration and post-approval activities. WHO
recognizes that the uses of AI in the development and delivery of new medicines and vaccines
vary and that certain applications do not pose significant ethical risks. WHO also recognizes
that certain long-standing ethical challenges and risks associated with pharmaceutical
development and delivery pre-date the emergence of AI. Nevertheless, AI has the potential to
exacerbate these long-standing risks and challenges.
WHO is issuing this paper to inform a broad range of stakeholders, in particular policy-makers,
researchers and civil society, on the scope of use of AI in pharmaceutical R&D. More research
and analysis are required to keep pace with this fast-moving field and to study use cases and
the benefits and risks of such uses.
2 Uses of AI in pharmaceutical development and delivery
Pharmaceutical R&D consists of the discovery and testing of medicines or vaccines with
the aim of obtaining regulatory approval for their clinical use. Improvements in AI have
gradually extended the tasks that AI can execute on behalf of pharmaceutical scientists and
companies. It is now used in every stage of the pharmaceutical development cycle and for
marketing registration and delivery of medicines. The current or anticipated uses of AI in the
pharmaceutical development cycle are described below.
targets or new compounds (3). Companies are also using synthetic data, or “artificially-
generated data that mimic real-world patterns and characteristics” (20), to identify disease
targets for which there are few experimental data (such as for rare diseases) to train AI models
to identify targets that might otherwise be overlooked or to validate predictions made by AI
algorithms (20). The quantity, quality and translational relevance of all types of data to identify
appropriate treatments remain, however, a major challenge; synthetic data in particular have
significant limitations (20).
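The idea of training on synthetic data that mimic real-world patterns can be made concrete with a minimal sketch. The example below is illustrative only and is not drawn from the report or from any cited company: the use of scikit-learn, the toy feature table and the density-model approach are all assumptions made for illustration.

```python
# Illustrative sketch only: augmenting a scarce assay dataset with synthetic
# samples drawn from a density model fitted to the real data.
# Library choice (scikit-learn) and the toy data are assumptions, not from the report.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Pretend "real" data: 40 compounds x 5 measured features (e.g. assay read-outs).
real_features = rng.normal(size=(40, 5))

# Fit a simple density model to the real-world distribution...
density_model = GaussianMixture(n_components=3, random_state=0).fit(real_features)

# ...and sample synthetic records that mimic its patterns.
synthetic_features, _ = density_model.sample(n_samples=200)

# The augmented set could then be used to train a target-identification model,
# bearing in mind the limitations of synthetic data noted in the text.
augmented = np.vstack([real_features, synthetic_features])
print(augmented.shape)  # (240, 5)
```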
One example of such data use for pharmaceutical discovery is that of a genetic testing
company that collected genetic data on about 10 million people (as of 2020) and secured
permission from customers to use anonymized data for pharmaceutical research. The
company used the data to identify a candidate medicine (and conduct animal studies)
that was licensed to a pharmaceutical company, which is now completing clinical
development (21). The genetic testing company has also shared the genetic data with
pharmaceutical companies, including one multinational pharmaceutical company, which
purchased exclusive rights to use of the data for pharmaceutical development and acquired
a US$ 300 million stake in the company (22). There are, however, ethical concerns about the
company’s collection of consumer health data, including inadequate regulatory oversight (23),
their secondary use for commercial purposes, and potential violations of customers’ right
to privacy because of difficulty in full anonymization of the data and the possibility of
cybersecurity breaches or leaks (24). Concern about secondary use of consumer data in the
development of medicines is discussed below.
As has been the case for several decades (25), AI is also used to screen compounds, providing
researchers with a much larger chemical space than traditional processes (large-scale, high-
throughput screening), and facilitating identification of molecules with relevant biological
properties (3), including those for addressing diseases for which few or no treatments exist.
This use of AI could allow researchers to “fail faster” and therefore assess more candidates
before conducting additional research (8).
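A minimal sketch may make this kind of screening concrete. The example below is not any cited company's pipeline: it assumes RDKit and scikit-learn as tooling, uses toy SMILES strings, and simply trains a classifier on known actives and inactives to rank a (in practice far larger) virtual library, so that weak candidates can "fail faster" before any laboratory work.

```python
# Minimal virtual-screening sketch (not any specific company's pipeline).
# RDKit and scikit-learn are assumed tools; the SMILES strings are toy examples.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles: str) -> np.ndarray:
    """Morgan (circular) fingerprint as a fixed-length bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

# Tiny labelled training set: 1 = active against the target, 0 = inactive.
train_smiles = ["CCO", "CCN", "c1ccccc1O", "CC(=O)O", "c1ccncc1", "CCCC"]
train_labels = [0, 0, 1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit([fingerprint(s) for s in train_smiles], train_labels)

# Rank a virtual library by predicted probability of activity, so that weak
# candidates can be set aside before any laboratory work.
library = ["c1ccccc1N", "CCCl", "c1ccc2ncccc2c1"]
scores = model.predict_proba([fingerprint(s) for s in library])[:, 1]
for smiles, score in sorted(zip(library, scores), key=lambda x: -x[1]):
    print(f"{smiles}\t{score:.2f}")
```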
AI is also now used in the design of novel medicines and vaccines (29). For example, in the
development of therapeutic mRNA vaccines against cancer, AI is used to identify which
mutations on a tumour drive its growth and are likely to generate an immune response so that
an mRNA molecule can be synthesized and then translated into protein fragments identical
to those on the tumour cell to generate an immune response (30). AI could also be used to
design the RNA sequence used in mRNA vaccines by choosing a sequence that both encodes a
desired protein and optimizes its stability, so that it persists long enough to express sufficient
protein before degradation (31). One large pharmaceutical company formed a partnership
with an AI-focused drug-discovery firm to accelerate formulation of an antiviral combination
(nirmatrelvir plus ritonavir), which was used widely during the coronavirus disease 2019
(COVID-19) pandemic (32).
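The sequence-design step described above can be illustrated with a toy sketch. The example below is not the AI method of the cited work (31): it uses a small, illustrative subset of the codon table and treats GC content as a crude proxy for stability, greedily choosing among synonymous codons that all encode the same protein.

```python
# Toy sketch of the sequence-design idea: among synonymous codons that encode the
# same amino acid, prefer the GC-richer one as a crude proxy for mRNA stability.
# Real AI-based design (ref. 31) is far more sophisticated; the codon subset and
# the GC heuristic here are illustrative simplifications.
SYNONYMOUS_CODONS = {  # RNA codons for a few amino acids only (toy subset)
    "M": ["AUG"],
    "K": ["AAA", "AAG"],
    "F": ["UUU", "UUC"],
    "G": ["GGU", "GGC", "GGA", "GGG"],
    "A": ["GCU", "GCC", "GCA", "GCG"],
    "W": ["UGG"],
}

def gc_fraction(codon: str) -> float:
    return sum(base in "GC" for base in codon) / len(codon)

def design_mrna(protein: str) -> str:
    """Greedily pick, for each residue, the synonymous codon with the highest GC content."""
    return "".join(max(SYNONYMOUS_CODONS[aa], key=gc_fraction) for aa in protein)

print(design_mrna("MKGAF"))  # AUGAAGGGCGCCUUC
```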
AI has been used for de-novo drug design (33). Current approaches rely on generative AI,
the AI method used in large multi-modal models such as ChatGPT, to develop compounds
with specific properties (34). Researchers have now developed ESMFold, a protein language
model that can predict a full atomic-level protein structure from
a single sequence and is faster than DeepMind’s AlphaFold2, which requires multiple sequence
alignments (35). ESMFold can generate a database of more than 600 million structures of
metagenomic proteins, including more than 225 million proteins that are predicted with high
confidence (36). However, in a competition hosted by the Critical Assessment of Techniques
for Protein Structure Prediction in 2022 (37), ESMFold performed “significantly worse” than
the DeepMind AlphaFold with respect to protein structure prediction. The large technology
company that developed and maintains ESMFold may now abandon or deprioritize it
(see below).
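For readers who wish to see what single-sequence structure prediction looks like in practice, the sketch below is based on the publicly documented open-source fair-esm package; the package and the functions shown (esm.pretrained.esmfold_v1, infer_pdb) are assumptions drawn from its documentation rather than from this report, and may change, particularly given the maintenance uncertainty noted above.

```python
# Sketch of single-sequence structure prediction with ESMFold via the open-source
# fair-esm package. The exact API (esm.pretrained.esmfold_v1, infer_pdb) is an
# assumption based on its public documentation and should be checked against the
# current release, given the maintenance concerns noted in the text.
import torch
import esm  # pip install "fair-esm[esmfold]"

model = esm.pretrained.esmfold_v1()
model = model.eval()  # add .cuda() if a GPU is available

# Toy amino-acid sequence; no multiple sequence alignment is needed.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"

with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)  # atomic-level structure as PDB text

with open("predicted_structure.pdb", "w") as handle:
    handle.write(pdb_string)
```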
AI can also be used to repurpose existing medicines (repurposing is a strategy to identify new
indications or uses of an approved medicine). For example, during the COVID-19 pandemic,
one AI-based company added a few clues about how SARS-CoV-2 acts into its algorithm, which
then searched over 50 million medical journal articles to identify the biological pathways
that should be targeted to find an approved pharmaceutical that could be repurposed (38).
The company identified baricitinib in only 4 days, and the medicine was subsequently
recommended by WHO to treat patients with severe or critical COVID-19.
products, and to ensure their quality and consistency during manufacture. This would
include use in process design to reduce waste and development time, quality control, process
monitoring and fault detection (41). Machine learning could also be used, for example, for
single-step and multi-step retrosynthesis (42,43), although such uses of AI are considered to be
“far from a mature state” (44).
Thirdly, AI is used in clinical trials for collecting, managing and analysing the data that
accumulate in various digital health technologies during a specific trial (47). AI can also be
applied to the assessment of clinical end-points, such as safety signals (including in real time
during a trial) and outcomes, from sources of data that could not otherwise be analysed (47).
AI researchers are discussing the prospect of replacing clinical trials with virtual trials (46);
however, there are many unanswered questions about the feasibility of this proposal, and
discussion between AI researchers and regulators remains at an early stage.
Fourthly, AI could be used for analysing results, providing more informative insights for
drug developers, for automating inclusion of data into statistical analytical tools and for
producing the documents, tables, reports and labels required during clinical development of
a compound. One challenge for use of AI for analysis is that the algorithms require additional
development and validation (45). AI is used in preparing the reports required for regulatory
approval (47). It could also be used in registration by generating the standard language
used in package information, including summaries of product characteristics and other
information (53). If AI is used in producing any documentation during drug development,
there must be human oversight, review and quality assurance to avoid generation of false
information and “hallucinations”.
Pharmaceutical companies are increasingly using AI to manage the supply and distribution of
medicines (57), including monitoring the cold chain for transport of vaccines (58). AI can also
be used in forecasting demand, monitoring and identifying corruption in the supply chain, and
anticipating or detecting shortages and stock-outs (59).
A common expectation of AI is that it will allow the pharmaceutical industry to move drug
development towards precision medicine and personalized medicine. While most medical
treatments are designed for the “average patient”, in precision medicine, treatments
(and vaccines) are tailored to different genetic profiles, environments and lifestyles (63).
Personalized medicine is an “extreme” form of precision medicine in that a treatment is
tailored to the genetic characteristics of a single individual. For example, companies that
developed mRNA COVID-19 vaccines, even before the COVID-19 pandemic, were developing
therapeutic cancer vaccines to direct an individual’s immune system to attack his or her
disease by using the data on that person’s tumour to choose appropriate targets (48). One
company is working to find an appropriate antidepressant and its dosage for an individual
by using AI to analyse the patient’s medical history and genetic data and exposing brain cells
from the patient to several antidepressants to identify biomarkers (38). While such uses of AI
could provide better outcomes for individual patients, the benefits could be available and
accessible to only a select few, thus accentuating health inequality or disparity, for at least
three reasons.
skewed investment in the long-term, whereby R&D that results in new data or insights in one
area encourages companies to focus on that specific therapeutic area or population.
Secondly, tailoring of therapies to individuals could accentuate inequality (and the tendency
of pharmaceutical R&D to focus on profitable populations) by directing resources for drug
development to ever smaller, privileged cohorts of patients, ignoring a significant number of
unmet needs, such as medicines for infectious diseases that can affect children and babies in
low- and middle-income countries. Thirdly, while such uses of AI may result in better outcomes
for individual patients, the high price of personalized therapy may make it affordable only to a
privileged few (65).
Another expectation of the use of AI is that it could dramatically improve the speed of drug
development, from the slow, anaemic rate today to significantly shorter timelines and success
rates of 20–50% (19), mainly because scientists will be able to predict how investigational
compounds will behave in the human body and abandon those that might not be
successful (16). The better success rate is expected to save pharmaceutical companies billions
of US dollars in drug development costs (19). Use of AI to direct resources more efficiently
could, however, result in loss of serendipity, which has been a key factor in drug discovery (66).
Clinical trials could be improved and perhaps conducted more quickly and accurately, at least
for identifying patients or sub-populations that are particularly suited to an investigational
compound (67), improving adherence to medication and reducing attrition during clinical
trials (48). This could reduce costs significantly and improve the overall efficacy of a medicine.
As discussed above, however, such selection could accentuate and reinforce disparities and
biases in R&D, as developers might focus on populations and diseases for which companies
already have good-quality data. AI could nevertheless significantly accelerate drug
development (39).
Any savings made by use of AI could allow pharmaceutical companies to reduce the prices
of medicines and vaccines, as the high cost of R&D and the high rate of failure are the
main reasons given by companies for charging high prices for medicines (68). Studies have
demonstrated, however, that the prices that pharmaceutical companies charge for medicines,
whether on or off patent, are not related to the amount of money invested by those companies
into R&D (68). Insofar as companies can reduce R&D costs, any savings on a medicine that is
under patent are unlikely to be passed on to health systems and individuals.
Another concern is that the use of AI to accelerate the development of medicines may increase
a medicine’s effective patent life (effective patent life is the period of patent protection
remaining for a drug at the time of regulatory approval), and therefore the total time a
company can charge high prices and maximize revenues. This could accentuate the tendency
of companies to get to the market as quickly and as inexpensively as possible by addressing
the most profitable indications or subpopulations and ignoring indications or populations,
such as children or babies (due to the requirement for additional clinical development
after approval for use in adults), that may produce an overall greater public health benefit.
Furthermore, speed might be prioritized over the quality of R&D, including the reproducibility
of results (69). However, it has been noted that the quality of R&D decisions, including which
compounds to take forward and the conduct of clinical trials, can reduce the failure rate, and
therefore “has by far the most significant impact on project value overall, multiple times that
of a reduction of the cost of a particular phase or a decrease in the amount of time a particular
phase” (4).
As noted above, AI is used to refine and improve recruitment and adherence to the protocol
in clinical trials and to improve targeting of marketing and sales of new medicines to specific
patient groups with unmet needs. This gives pharmaceutical executives and scientists
tremendous power to refine the decisions that are made throughout drug development and
commercialization to focus on individuals identified by algorithms as best suited to maximize
both clinical trial results and commercial returns.
Use of AI, for example, to improve patient adherence or to identify patients who could benefit
from a product, either during clinical trials or once a medicine has been approved, could
have a public health benefit. These uses of AI could also serve pharmaceutical companies
to shorten the time for enrolment and recruitment into clinical trials, improve outcomes
by ensuring better adherence, and increase sales. The pursuit of these commercially
profitable goals could lead companies to collect and use data in ways that undermine patient
privacy and informed consent, whether consent for the use of their data or for informing
otherwise unaware patients that they could benefit from a therapy. It could also increase the use of surveillance
for commercial returns. Targeted marketing, even for public health, also raises concern about
micro-targeting, manipulative marketing and amplification of biases, especially in ways that
negatively affect minority populations (1).
The Drugs for Neglected Diseases Initiative (DNDi), a not-for-profit partnership for the development
of new medicines against neglected tropical diseases and other infectious diseases (such
as hepatitis C and COVID-19), has formed a partnership with DeepMind, wherein the team in
charge of AlphaFold2 (see above) can predict protein structures for a target disease. An initial
focus of the partnership has been a protein on Trypanosoma cruzi, the parasite that causes
Chagas disease, to determine whether an investigational compound being developed by DNDi
can bind to the protein and eliminate the parasite. This could also indicate other compounds
that could bind to the protein (72). The Global Antibiotic Research and Development
Partnership, another not-for-profit partnership that develops new treatments for drug-
resistant infections, has also formed a partnership with DeepMind to investigate unrealized
targets for antibacterial drug discovery.
The Coalition for Epidemic Preparedness Innovations, a global partnership to accelerate the
development of vaccines against epidemic and pandemic threats, has issued two grants for
application of AI in the development of novel vaccines. For example, the partnership has
provided funding to a vaccine research consortium to use AI in developing broadly protective
beta coronavirus mRNA vaccines (71).
While these are promising examples of application of AI in the development of new treatments
for drug-resistant infections, they will not come to fruition unless pharmaceutical companies
are willing to share or themselves apply AI technologies for such therapies. They will also
require “push funding” for not-for-profit organizations or pharmaceutical companies (whereby
governments provide direct funding for specific stages of R&D projects in the form of grants,
investments, tax credits or low-interest loans for which governments bear the development
risk) or “pull incentives” (whereby governments encourage private sector engagement by
rewarding successful development through creating viable market demand or ensuring future
revenue) to encourage companies to use AI-based technologies to meet these needs.
guidance on best practices for non-State actors in the design and conduct of
clinical trials and in strengthening the global clinical trial ecosystem to meet
the needs of major population groups that the intervention is intended to
benefit, with a particular focus on under-represented populations, developed in
consultation with Member States and relevant non-State actors.
Use of AI for recruitment into clinical trials introduces both risks and opportunities for
improving or undermining inclusivity. Use of AI could, for example, either accentuate or
mitigate racial bias or bias in relation to differences in sex and gender. AI-powered patient
matching algorithms could improve the diversity of trial cohorts if they are used to increase
outreach to identify a more diverse patient cohort (46). To do so, other investments will be
required, including measures to address the digital divide (uneven distribution of access
to, use of or effect of information and communication technologies in distinct groups) and
appropriate safeguards to respect patient privacy. Such investments, when accompanied
by other measures, can help to overcome entrenched barriers to equitable participation in
clinical trials, including injustice to racial and ethnic minorities in the name of science, lack of
access to care due to physical distance and unaffordable health care.
There is, however, broader concern that such use of AI, whether by pharmaceutical
companies, governments or international agencies, could undermine the right to privacy if the
data are not collected within a robust national and international regulatory environment. In
response, the Council for International Organizations of Medical Sciences formed a working
group in February 2022 to promote principles and guidance for use of AI in the field of
pharmacovigilance (78).
The inter-related challenges in supplying and distributing medicines and vaccines, especially
in low- and middle-income countries, include lack of transparency in the supply chain and
shortages and stock-outs of medicines. These undermine work to achieve universal health
coverage. Although pharmaceutical companies are increasingly using AI to manage the
supply and distribution of medicines (57), including monitoring the cold chain for transport of
vaccines (58), introduction of such uses of AI by low- and middle-income countries will require
significant investment to overcome the digital divide to ensure that appropriate data are
collected. Those parts of a country’s health system most susceptible to shortages and stock-
outs may also be less likely to have the appropriate connectivity to collect data.
5 Risks and challenges
AI technologies not only provide benefits for public health and drug development but also
pose challenges and risks. WHO guidance on the ethics and governance of AI for health (1)
identifies 10 specific concerns with use of AI for health. These challenges are relevant to use of
AI in drug development, as discussed below.
5.1 Bias
As noted above, development of medicines and vaccines is already affected by bias.
Clinical testing of investigational compounds often does not represent all potential patient
populations according to race, ethnicity, gender, age and other characteristics (46). This can
undermine achievement of universal health coverage. For example, the average delay in
access to paediatric versions of adult medicines for infectious diseases such as HIV/AIDS and
for antibiotics is 10 years (80).
Biases and discrimination are often replicated by AI technologies used in health care.
The three most common forms of bias are in the data sets used to train AI technologies,
those related to who develops AI technologies, and those in deployment of the technology
(contextual bias) (1).
Thus, data sets used to train algorithms for use in drug development may contain certain
biases, including under-sampling of people with irregular or limited access to health care.
These include ethnic minorities, women and socially disadvantaged groups and can be
expressed in electronic health records, genomic databases and biobanks (81). Training or
validating algorithms with these data can encode the biases in algorithms, making them
unrepresentative, such that the models are not sufficiently generalizable, resulting in
suboptimal outcomes or harm for disadvantaged groups (81). For example, if a researcher uses
machine learning with a biased dataset and finds a biomarker for predicting the response
to a therapy, there is no guarantee that the biomarker will be appropriate for a more diverse
population than that represented in the training data. If the biomarker is used to define the
approved indication for a medicine, the medicine could have different effects in different racial
groups (82).
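One way to surface this failure mode is a subgroup performance audit, sketched below with synthetic placeholder data: a response-prediction model is trained on a cohort dominated by one group and then evaluated separately in each group. The data, group labels and scikit-learn tooling are illustrative assumptions, not material from the report.

```python
# Illustrative bias audit (not from the report): check whether a response-prediction
# model trained mostly on one subgroup holds up in an under-represented subgroup.
# The data and group labels below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])   # group B is under-sampled
biomarker = rng.normal(size=(n, 3))

# Simulate a response whose relationship to the biomarker differs by group.
signal = biomarker[:, 0] + np.where(group == "B", -1.5 * biomarker[:, 1], 0.0)
response = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

train = rng.random(n) < 0.7
model = LogisticRegression().fit(biomarker[train], response[train])

# Overall performance can look acceptable while a subgroup is poorly served,
# which is the generalizability failure described in the text.
for g in ["A", "B"]:
    mask = (~train) & (group == g)
    auc = roc_auc_score(response[mask], model.predict_proba(biomarker[mask])[:, 1])
    print(f"group {g}: test AUC = {auc:.2f}, n = {mask.sum()}")
```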
Another form of bias is that electronic health records and other forms of health data encode
disparities in healthcare access and quality, as well as human biases and discriminatory
biases in clinical care. The discriminatory patterns in such data will infiltrate AI models trained
with them (81). Biases may also be introduced according to who funds and designs an AI
technology, as AI-based technologies have tended to be developed by people of one gender in
one demographic group. This can increase the likelihood of certain biases in the design (1).
5.2 Safety
Patient safety can be endangered if the algorithms used in drug development are not tested
for potential errors or for whether they provide, for example, false-positive or false-negative
recommendations. It has been shown that an algorithm can not only identify or design new,
medically beneficial compounds but could also be used to discover 40 000 toxic chemical
compounds that could be used as chemical weapons in less than 6 h. Thus, in the words of
a researcher, “let loose on the world of biology, AI could be dangerous” (38). While humans
could conduct such research without AI, the diffusion of such technology with the speed and
accuracy of AI heightens such concerns. WHO has identified dual use research as a major area
of bio-risk and has released guidance to govern such research (83). To prevent these types of
risks, governments will have to introduce laws and regulations.
Another concern is the degree to which developers are transparent, including sharing the
data used to train an algorithm, the source code and the performance of the AI. Ultimately,
regulators must determine whether use of AI technologies in the design, discovery,
development and post-marketing monitoring of new medicines can be trusted, and the
information that is required to validate use of specific AI technologies in drug development or
use of new medicines (76).
A second challenge is the problem of "many hands" or the "traceability" of harm, which
bedevils health-care decision-making systems and could bedevil a complex system
for development of a new medicine. Development of both AI and medicines involves
many different entities, which can make it difficult, both legally and morally, to assign
responsibility, as it is diffused among all the contributors. Furthermore, some entities
involved in the development or use of AI may not yet be within the scope of relevant legal or
regulatory mechanisms.
The potential benefits of health data and biomedical “big data” can be ethically important,
as they can be used to identify new drug targets, improve the accuracy and speed of clinical
trials and reduce the rate of attrition. There is, however, concern about use of health data in
AI-guided drug development, particularly for safeguarding everyone’s right to privacy. The
collection, use, analysis and sharing of health data have consistently raised broad concern
about individual privacy, because lack of privacy may either harm an individual (for example,
through discrimination, manipulation or exploitation of individuals or their families on the basis of
health status) or cause a wrong, such as affecting a person’s dignity, autonomy or safety if
sensitive health data are shared or broadcast. Sharing or transferring data can make people
vulnerable to cyber-theft, accidental disclosure, government exploitation, discriminatory
health insurance terms or exploitative marketing practices (1).
Use of health data in AI-based drug development presents unique concerns. For example,
in clinical trials, patients who are recruited through AI (by mining health records and other
information, such as social media) must give informed consent that is meaningful for such
uses of their data. They might have to be contacted proactively and additional measures used
to ensure that their informed consent is meaningful. Use of publicly available data (such as
from social media) or combining health-care and non-health-care datasets is inherently risky
unless high standards of protection for privacy and human rights are followed.
Another concern is that the investigators in clinical trials in which AI is used might request
third parties to make sense of the data or to apply proprietary algorithms. The participation
of third parties raises concern about the handling of sensitive health data, the commitment
of any third party to the business and professional standards of health-care companies,
subsequent uses of the data and to whom access is provided (50).
Pharmaceutical companies are working with health systems and hospitals to prepare and
use data sets that could provide unique insights for drug discovery and clinical development.
For example, the Mount Sinai Health System in the USA is building a vast database of genetic
information on patients that can be used by researchers. One large pharmaceutical company,
which is assuming the cost of sequencing many DNA samples, will in return have access to
the genetic sequences and anonymized medical records of each participant, comprising
diagnoses, laboratory reports and vital signs. The genetic datasets can be used to identify
mutations that are either associated with a disease or protect against it (86). While such
collaboration may provide ethically important medical benefits, including knowledge of
mutations and their association with illnesses, there is strong concern that if such datasets
are leaked, sold or stolen, individuals and their families could be discriminated against.
Furthermore, once such data have been used, they may not be carefully protected, which
could result in unauthorized access and disclosure (86).
A further issue is that entry of technology companies, including the world’s largest, into the
field of AI for drug development could introduce data practices that raise concern. Large
technology companies, many of which are now in partnership with the pharmaceutical
industry, operate in a sector in which data-sharing and protection of privacy are less
constrained than in the biomedical domain. Thus, large technology companies may not
observe many of the practices of companies that have provided health-care products and
services for decades, and technology companies may not be subject to relevant regulations
or laws, as they are not yet characterized as “health providers”. Given broader concern about
how technology companies use data, including exploitation, their business practices might
have implications for how they use medical and health data in AI to develop medicines (87).
6 Challenges in governance
Introduction and use of AI in the pharmaceutical value chain will pose new challenges to
governance of the pharmaceutical sector. Furthermore, although drug development is
regulated both individually and collectively by governments, the introduction of AI will
require revision of existing regulatory approaches and new standards to ensure quality,
safety and efficacy. Some challenges in governance that should be addressed collectively by
the international community and governments to keep up with this fast-changing sector are
discussed below.
Currently, pharmaceutical companies are collaborating to develop systems for combining data
to strengthen their work on drug discovery (88). While such data-sharing platforms may be rich
and can benefit the pharmaceutical industry (8), they may not be available to other entities,
including not-for-profit developers and smaller pharmaceutical firms that usually target
unmet needs that will generate public health benefits (29).
One means for wider distribution of the benefits of AI is to ensure open data or to make
datasets that are legally collected and used for or generated by R&D open and freely available,
at least for non-commercial use. This concept was supported by over 80 governments that
adopted the “Open Data Charter”, which commits signatory countries to develop policies
for making data accessible and freely available, while protecting the individual rights of
people and communities (29). A concern related to open data approaches is that the data
sets lack adequate “depth, dimensionality and scale” for application of AI to drug R&D (8).
Some governments have taken steps to improve the availability of open data for biomedical
research by supporting cohorts of volunteers who have consented to use of their data
in research. One of the largest is the UK Biobank in the United Kingdom, with 500 000
participants, which allows distribution of de-identified data to approved researchers who are
investigating common and life-threatening diseases (89). Its Ethics and Governance Council
applies a framework to govern ethical use of the data that are collected and shared (90).
Population-wide data, such as from an entire health-care system, would be preferable for
research, rather than smaller, potentially biased cohorts. Citizens’ concern about the privacy
of sensitive health data, even after anonymization, has led to development of models in which
data are not distributed, and can be accessed only by researchers in secure environments.
The United Kingdom, through Health Data Research UK, has been advocating for adoption of
“trusted research environments” (TREs), which are single locations from which researchers
can access datasets, encourage collaboration and assure those who contribute data that such
data are accessed securely and their privacy is protected (91). The TRE approach has been
adopted as policy by the English National Health Service, which is establishing a number
of regional “secure data environments” to support research. The approach creates specific
challenges for AI and for other intensive data analysis, as any software required by researchers
should be moved into a TRE, which must have access to sufficient computer resources to
run it.
Currently, it is envisaged that individual data will remain within national boundaries in TRE-
like infrastructure. This will require use of “data federation” technology to analyse data from
several countries, which will pose challenges for training AI models while preserving privacy.
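A toy sketch of the data-federation idea follows: each national TRE computes a model update on data that never leave its environment, and only model parameters cross borders (federated averaging). This is an illustrative simplification under stated assumptions, not a description of any existing TRE infrastructure; real deployments would add secure aggregation, differential privacy and governance controls.

```python
# Toy sketch of "data federation": each country-level TRE trains locally on data
# that never leave its environment, and only model parameters are pooled
# (federated averaging). Purely illustrative; datasets below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)

def local_gradient_step(weights, X, y, lr=0.1):
    """One logistic-regression gradient step on data held inside a single TRE."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

# Three national datasets that stay in place.
national_datasets = [
    (rng.normal(size=(200, 4)), rng.integers(0, 2, size=200)) for _ in range(3)
]

weights = np.zeros(4)
for _ in range(20):
    # Each TRE computes an update locally...
    local_updates = [local_gradient_step(weights, X, y) for X, y in national_datasets]
    # ...and only the averaged parameters are shared across borders.
    weights = np.mean(local_updates, axis=0)

print("federated model weights:", np.round(weights, 3))
```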
Introduction of broader standards that can be applied worldwide could improve data-
sharing and governance but would require strong cooperation among databanks, academic
institutions, funders and governments (8).
investments in R&D through the exclusive right to register and market a new medicine or
vaccine, such monopoly rights can lead to unaffordable prices and other practices, such as
distortion of markets and inadequate supplies, which limit access (94). Furthermore, IP rights
do not encourage companies to invest in R&D for commercially unattractive areas, such
as neglected tropical diseases, paediatric health, drug-resistant infections and pandemic
threats (95). Use of AI in drug development introduces new considerations for policy-makers,
such as finding and using alternative models to pay for development of and access to AI to be
used in the public interest.
First, mining of data can help to identify promising new compounds and vaccines. If the data
are collected from individuals for other purposes, however, it is questionable whether they
can be used for commercial purposes if the medicines developed from such data are not
available to those who provided the data and more broadly if the compounds and vaccines are
not available in the public interest, such as by setting affordable prices (96).
Another concern is the increased patenting of algorithms used in drug development and the
use of trade secrets to prevent use of algorithms except by their IP owner, which could lead
to IP “lock-up” (8). IP restrictions on algorithms will exclude most entities from using them
to improve drug development, especially academic researchers, not-for-profit organizations
and entities in low- and middle-income countries (8), all of whom are more likely to focus on
unmet needs ignored by the pharmaceutical industry (97). Exclusive control of algorithms
also impedes advancement of science and limits the potential broader social benefits of AI.
Exercise of IP rights encourages secrecy, as companies will not disclose source code or other
information about their proprietary algorithms. While companies may have a commercial
rationale for lack of transparency, this can undermine public trust and prevent regulators from
ascertaining whether medicines developed with AI meet regulatory standards. Patenting also
often involves protection of AI algorithms that were originally developed with public funding
or by publicly funded research (97).
Companies that develop AI algorithms may not themselves develop medicines and vaccines
but may issue licenses to the highest bidder, thereby limiting use of the technologies to
the world’s largest pharmaceutical and technology firms or for use in certain geographical
locations. Such licensing practices are ending, however, as companies that develop AI-based
platforms and technologies are developing new medicines rather than out-licensing their
software (8). Pharmaceutical companies are also spending significant sums of money to
acquire compounds developed by smaller companies that use AI for drug development (98),
thereby increasing the incentive for those companies to protect their proprietary algorithms
and to focus on diseases that generate profits for the pharmaceutical industry.
development. There is concern about the emergence of “walled gardens” in the private sector
with respect to use of AI (98). Companies already exert significant control over data, including
through industry-owned pre-competitive platforms. Furthermore, there is a growing number
of partnerships between large technology companies and large pharmaceutical companies,
which could consolidate power in the hands of a few. Such companies are likely to have not
only significant financial resources but also data, computing power, technology and most AI
programmers. The size of a company’s platform, including for drug development, may create
a monopoly. As AI, such as generative AI to design new medicines, becomes increasingly
sophisticated and requires more resources, few companies will be able to use it. Monopoly
power can concentrate decision-making in the hands of a few individuals and companies,
which can result in higher prices for goods and services, less consumer protection and less
innovation, and the choices, priorities and outputs of AI will be limited by the decisions
made by a few companies (99). These could include decisions to deprioritize or abandon
products and services that may be of significant public health importance and instead
prioritize services that can generate revenue. In 2023, one major technology company “axed”
the team that had developed the protein-folding model ESMFold (see above), and there is
concern about whether the company will “absorb the costs to keep the database running,
as well as another service that allows scientists to run the ESM algorithm on new protein
sequences” (29).
Competition and anti-trust laws, depending on the degree of concentration and exclusion,
may be necessary to sustain or promote equitable access to critical AI technologies, a healthy
innovation eco-system and affordable prices for end products, while avoiding ethical risks
such as discrimination and bias.
Companies often refer to use of their own ethics codes to guide use of AI, and pharmaceutical
companies are increasingly developing their own ethical standards for use of AI (100). While
consideration of ethics by a company is welcome, it can raise concern that the companies
are engaging in “ethics-washing” and that the measures are intended to forestall regulation
instead of adapting to oversight (88). Such codes are not an alternative to or a substitute for
legal rules and obligations set by governments. One means for governments to ensure that
companies select and use AI appropriately is impact assessments, which can address ethics,
human rights, safety and data protection throughout the life cycle of an AI system.
As AI increasingly automates or replaces some functions usually carried out by humans in the
development and delivery of medicines, regulatory authorities could act to preserve “human-
led governance” (8) of AI-based pharmaceutical R&D, particularly to uphold core ethical
principles, human rights obligations and legal and safety requirements. New approaches will
be needed to ensure that independent oversight and review of pharmaceuticals developed
with AI technology do not impede use of such technologies but also do not ignore risks. This
should be done individually and collectively, as in the ongoing efforts of the International
Coalition of Medicines Regulatory Authorities to reflect on the challenges of AI and to identify a
common response (47).
Regulators will have to address many challenges to assess both AI technologies that
companies wish to use in development and delivery of medicines and the medicines and
vaccines in which AI technologies have been used. First, as for all AI technologies, regulators
will have to overcome the lack of explainability of how algorithms arrive at decisions that may
guide pharmaceutical discovery and development, especially in highly complex models (76).
Secondly, there may be concern about the quality of the data used to train AI, including
bias (47). Thirdly, regulators will need to coordinate with other relevant government agencies,
such as data protection agencies, to ensure that data were collected lawfully, according to
national or international principles and standards for data protection. Fourthly, they must
overcome the reticence of companies to be fully transparent about the source code or data
sets used in AI technologies, which undermines the ability of regulators to assess the uses
of AI (102). Fifthly, different regulatory standards may emerge around the world, which could
pose a challenge to both regulators who assess AI technologies and developers who use
AI (76).
Irrespective of such risks and challenges for regulatory authorities, it is the responsibility
of developers who choose to use AI technologies to ensure that the tools and methods are
“fit-for-purpose” and adhere to all ethical, technical, scientific and regulatory standards
designated and enforced by regulatory agencies (76).
7 Next steps
The promise of AI requires the international community and governments to ensure that
use of the technology in pharmaceutical and vaccine development and delivery does not
exacerbate inequity but contributes to addressing the needs of neglected populations (such as
children and infants) and countries, and to finding new vaccines and medicines for unmet needs.
To do so, governments must establish an effective approach to governance, including defining
standards, rules, regulations and legal frameworks that prioritize public health and the public
interest. WHO will continue to examine and monitor how AI is affecting the development
and delivery of medicines and vaccines and identify ways in which WHO, Member States,
pharmaceutical companies, civil society and global health-oriented product development
partnerships and researchers can harness AI to improve pharmaceutical development
and access to address unmet health needs. WHO may also develop new ethics guidance
and address issues of governance for management of data, regulatory considerations and
legislation to address the many benefits and challenges associated with use of AI in the
development and delivery of medicines and vaccines.
References
1. Ethics and governance of artificial intelligence for health. Geneva: World Health
Organization; 2021 (https://www.who.int/publications/i/item/9789240029200, accessed
13 April 2023).
3. Savage N. News feature. Tapping into the drug discovery potential of AI. Nature, 27 May
2021 (https://www.nature.com/articles/d43747-021-00045-7, accessed 13 April 2023).
5. Artificial intelligence and machine learning for drug development. Silver Spring (MD):
Food and Drug Administration; 2023 (https://www.fda.gov/science-research/science-and-research-special-topics/artificial-intelligence-and-machine-learning-aiml-drug-development, accessed 26 June 2023).
8. Unlocking the potential of AI in drug discovery. London: Wellcome Trust; 2023 (https://
wellcome.org/reports/unlocking-potential-ai-drug-discovery, accessed 4 August 2023).
10. Artificial intelligence for drug discovery. Landscape overview Q3 2022. London: Deep
Pharma Intelligence; 2022 (https://www.deep-pharma.tech/ai-in-dd-q3-2022-subscribe,
accessed 13 April 2023).
11. Furlong A. Regulating the machine: Europe’s race to get to grips with AI drugs. Politico, 7
April 2023 (https://www.politico.eu/article/regulate-europe-race-artificial-intelligence-ai-drugs-medicines/, accessed 25 April 2023).
12. Mohs RC, Greig NH. Drug discovery and development: Role of basic biological research.
Alzheimers Dement (N Y). 2017;3(4). doi:10.1016/j.trci.2017.10.005.
13. Wang H, Fu T, Du Y, Gao W, Huang K, Liu Z et al. Scientific discovery in the age of artificial
intelligence. Nature. 2023;620(7972):47–60. doi:10.1038/s41586-023-06221-2.
14. Dill KA, Ozkan SB, Shell MS, Weikl TR. The protein folding problem. Annu Rev Biophys.
2008;37:289–316. doi:10.1146/annurev.biophys.37.092707.153558.
15. Metz C. London AI lab claims breakthrough that could accelerate drug discovery. The
New York Times, 30 November 2020 (https://www.nytimes.com/2020/11/30/technology/
deepmind-ai-protein-folding.html, accessed 23 April 2023).
16. Heaven WD. AI for protein folding. MIT Technology Review, 23 February
2022 (https://www.technologyreview.com/2022/02/23/1044957/ai-protein-folding-deepmind/, accessed 23 April 2022).
18. Nolan A. Artificial intelligence and the future of science. Paris: Organisation for Economic
Co-operation and Development; 2021 (https://oecd.ai/en/wonk/ai-future-of-science,
accessed 22 April 2023).
19. Kollewe J. Drug companies look to AI to end “hit and miss” research. The Guardian, 20
February 2021 (https://www.theguardian.com/business/2021/feb/20/drug-companies-
look-to-ai-to-end-hit-and-miss-research, accessed 22 April 2023).
20. Pun FW, Ozerov IV, Zhavoronkov A. AI-powered therapeutic target discovery. Trends
Pharmacol Sci. 2023;44(9):561–72. doi:10.1016/j.tips.2023.06.010.
21. Wetsman N. 23andMe sold the rights to a drug it developed from its genetic database.
The Verge, 10 January 2020 (https://www.theverge.com/2020/1/10/21060456/23andme-
licensed-drug-developed-genetic-database-autoimmune-psoriasis-almirall, accessed 22
April 2023).
22. GSK and 23andMe sign agreement to leverage genetic insights for the development of
novel medicines. London: GlaxoSmithKline; 2018 (https://www.gsk.com/en-gb/media/
press-releases/gsk-and-23andme-sign-agreement-to-leverage-genetic-insights-for-the-
development-of-novel-medicines/, accessed 22 April 2023).
23. Seife C. 23andMe is terrifying, but not for reasons the FDA thinks. Scientific American, 27
November 2013 (https://www.scientificamerican.com/article/23andme-is-terrifying-but-
not-for-the-reasons-the-fda-thinks/#, accessed 5 August 2023).
24. Paul K. Fears over DNA privacy and 23andMe plans to go public in deal with
Richard Branson. The Guardian, 9 February 2021 (https://www.theguardian.com/
technology/2021/feb/09/23andme-dna-privacy-richard-branson-genetics, accessed 5
August 2023).
25. Schneider G, Böhm HJ. Virtual screening and fast automated docking methods. Drug
Discovery Today, 1 January 2002 (https://pubmed.ncbi.nlm.nih.gov/11790605/, accessed
5 August 2023).
26. Liu G, Catacutan D, Rathod K, Swanson K, Jin W, Mohammed J et al. Deep learning-
guided discovery of an antibiotic targeting Acinetobacter baumannii. Nat Chem Biol.
2023. doi:10.1038/s41589-023-01349-8.
27. Yang M. Scientists use AI to discover new antibiotic to treat deadly superbug. The
Guardian, 25 May 2023 (https://www.theguardian.com/technology/2023/may/25/
artificial-intelligence-antibiotic-deadly-superbug-hospital, accessed 23 June 2023).
28. Using AI, MIT researchers identify a new class of antibiotic candidates. EurekAlert!.
Massachusetts Institute of Technology; 2023 (https://www.eurekalert.org/news-
releases/1029354, accessed 3 January 2024).
29. Artificial intelligence for public good drug discovery: recommendations for policy
development. Paris: Global Partnership on Artificial Intelligence; 2021 (https://gpai.
ai/projects/ai-and-pandemic-response/public-domain-drug-discovery/ai-for-public-
domain-drug-discovery.pdf, accessed 22 April 2023).
30. Geddes L. A silver lining: How Covid ushered in a vaccines golden era. The Guardian, 7
April 2023 (https://www.theguardian.com/science/2023/apr/07/covid-vaccines-golden-
era-pandemic-techology-diseases, accessed 22 April 2023).
31. Blakney AK. A tool for optimizing messenger RNA sequence. Nature. 2023;621(7978):262–
4. doi:10.1038/d41586-023-01745-z.
32. Matsuyama K. AI drug discovery is a $50 billion opportunity for Big Pharma. Bloomberg
Businessweek, 12 May 2023 (https://www.bloomberg.com/news/articles/2023-05-09/
pharmaceutical-companies-embrace-ai-to-develop-new-drugs, accessed 11 May 2023).
33. Schneider G. De novo molecular design. Hoboken (NJ): Wiley; 2013 (https://wiley.com/
en-us/De+novo+Molecular+Design-p-9783527677030).
34. Paul D, Sanap G, Shenoy S, Kalyane D, Kalia K, Tekade RK. Artificial intelligence in
drug discovery and development. Drug Discov Today. 2021;26(1):89–93. doi:10.1016/j.
drudis.2020.10.010.
35. Sphere L. How huge protein language models could disrupt structural biology. Towards
Data Science, 22 November 2022. (https://towardsdatascience.com/how-huge-protein-
language-models-could-disrupt-structural-biology-6b98193f880b, accessed 2 June
2023).
36. Lin Z, Akin H, Rao R, Hie B, Zhu Z, Lu W et al. Evolutionary-scale prediction of atomic-
level protein structure with a language model. Science. 2023;379(6637):1123–30.
doi:10.1126/science.ade2574.
37. Elofsson A. Progress at protein structure prediction, as seen in CASP15. Curr Opin
Struct Biol. 2023;80:102594. doi:10.1016/j.sbi.2023.102594.
38. Kuchler H. Will AI turbocharge the hunt for new drugs? Financial Times, 20 March 2022
(https://www.ft.com/content/3e57ad6c-493d-4874-a663-0cb200d3cdb5, accessed 23
April 2023).
39. Artificial intelligence in healthcare: Benefits and challenges of machine learning in drug
development. Washington (DC): Government Accountability Office; 2019 (https://www.
gao.gov/assets/gao-20-215sp.pdf, accessed 27 April 2023).
40. Commission acts to accelerate phasing out of animal testing in response to a European
Citizens’ Initiative. Brussels: European Commission; 2023 (https://ec.europa.eu/
commission/presscorner/detail/en/ip_23_3993, accessed 20 September 2023).
42. Jiang Y, Yu Y, Kong M, Mei Y, Yuan L, Huang Z et al. Artificial intelligence for retrosynthesis
prediction. Engineering. 2023;25(6). doi:10.1016/j.eng.2022.04.021.
43. Lowe D. The machines rise a bit more. Science, 20 October 2020 (www.science.org/
content/blog-post/machines-rise-bit-more, accessed 5 August 2023).
44. Zhong Z, Song J, Feng Z, Liu T, Jia L, Yao S et al. Recent advances of artificial intelligence
for retrosynthesis. arXiv:2302.05864v1. doi:10.48550/arXiv.2301.05864.
47. Using artificial intelligence & machine learning in the development of drug & biological
products. Silver Spring (MD): Food and Drug Administration; 2023 (https://www.fda.gov/
media/167973/download, accessed 26 June 2023).
48. Zhavoronkov A, Vanhaelen Q, Oprea TI. Will artificial intelligence for drug discovery
impact clinical pharmacology? Clin Pharmacol Ther. 2020;107(4):780–5. doi:10.1002/
cpt.1795.
49. Reed J. How AI-based technologies improve clinical trial design, site selection and
competitive intelligence. Drug Discovery and Development, 21 July 2022 (https://www.
drugdiscoverytrends.com/how-ai-based-technologies-improve-clinical-trial-design-site-
selection-and-competitive-intelligence/, accessed 5 August 2023).
50. Fultinaviciute U. AI benefits in patient identification and clinical trial recruitment has
challenges in sight. Clinical Trials Arena, 20 April 2022 (https://www.clinicaltrialsarena.
com/features/ai-clinical-trial-recruitment/, accessed 23 April 2023).
51. Tanaka I, Furukawa T, Morise M. The current issues and future perspective of artificial
intelligence for developing new treatment strategy in non-small cell lung cancer:
harmonization of molecular cancer biology and artificial intelligence. Cancer Cell Int.
2021;21:454. doi:10.1186/s12935-021-02165-7.
52. Wong CH, Siah KW, Lo AW. Estimation of clinical trial success rates and related
parameters. Biostatistics. 2019;20(2):273–86. doi:10.1093/biostatistics/kxx069.
54. Consultation on good practices for health products manufacture and inspection:
industry consultation. Geneva: World Health Organization; 2023 (https://www.who.int/
publications/m/item/consultation-on-good-practices-for-health-products-manufacture-
and-inspection--industry-consultation, accessed 5 October 2023).
55. Galata DL, Mészáros LA, Kállai-Szabó N, Szabó E, Pataki H, Marosi G et al. Applications
of machine vision in pharmaceutical technology: a review. Eur J Pharm Sci.
2021;159:105717. doi:10.1016/j.ejps.2021.105717.
56. Artificial intelligence in drug manufacturing. Silver Spring (MD): Center for Drug
Evaluation and Research, Food and Drug Administration; 2023 (https://www.fda.gov/
media/165743/download, accessed 26 June 2023).
57. Keeping an eye and ear on patients’ needs: Where AI meets supply chains. Darmstadt:
Merck Group; undated (https://www.merckgroup.com/en/research/science-space/
envisioning-tomorrow/precision-medicine/ai-in-supply-chain.html, accessed 25 April
2023).
58. Gao Y, Gao H, Xiao H, Yao F. Vaccine supply chain coordination using blockchain and
artificial intelligence technologies. Comput Ind Eng. 2023;175:108885. doi:10.1016/j.
cie.2022.108885.
59. Role of AI in pharmaceutical supply chain management. Bellevue (WA): Xenore; 2023
(https://www.xenore.com/role-of-ai-in-pharmaceutical-supply-chain-management/,
accessed 16 May 2023).
61. The future of patient support. Cairo: NAOS Solutions; 2023 (https://naos-solutions.com/
the-future-of-patient-support/, accessed 5 August 2023).
62. Bhalodia P, Edwards K. How pharma companies can enhance patient support programs
with generative AI. Chicago (IL): West Monroe; 2023 (https://www.westmonroe.com/
perspectives/in-brief/how-pharma-companies-can-enhance-patient-support-programs-
generative-ai, accessed 5 August 2023).
63. Precision medicine. Silver Spring (MD): Food and Drug Administration; 2018
(https://www.fda.gov/medical-devices/in-vitro-diagnostics/precision-
medicine#:~:text=Precision%20medicine%2C%20sometimes%20known%20
as,genes%2C%20environments%2C%20and%20lifestyles., accessed 24 April 2023).
64. Lisbona N. How artificial intelligence is matching drugs to patients. BBC News, 17 April
2023 (https://www.bbc.com/news/business-65260592, accessed 11 May 2023).
65. The promises and perils of personalized medicines. Philadelphia (PA): Knowledge at
Wharton, University of Pennsylvania; 2012 (https://knowledge.wharton.upenn.edu/
article/the-promise-and-perils-of-personalized-medicine/, accessed 23 April 2023).
66. Ban TA. The role of serendipity in drug discovery. Dialogues Clin Neurosci. 2006;8(3):335–
44. doi:10.31887/DCNS.2006.8.3/tban.
67. Artificial intelligence in healthcare: Applications, risks and ethical and societal impacts.
Strasbourg: European Parliament, Panel for the Future of Science and Technology;
2022 (https://www.europarl.europa.eu/RegData/etudes/STUD/2022/729512/EPRS_
STU(2022)729512_EN.pdf, accessed 24 April 2023).
68. Angelis A, Polyakov R, Wouters OJ, Torreele E, McKee M. High drug prices are not
justified by industry’s spending on research and development. BMJ. 2023;380:e071710.
doi:10.1136/bmj-2022-071710.
69. Sohn E. The reproducibility issues that haunt health-care AI. Nature. 2023;613(7943):402–
3. doi:10.1038/d41586-023-00023-2.
70. Harnessing AI and new technologies for pharmaceutical R&D. Geneva: Drugs for
Neglected Diseases Initiative; undated (https://dndi.org/advocacy/ai-and-new-
technologies-for-pharmaceutical-rd/, accessed 24 April 2023).
71. CEPI partners with Japan’s NEC Group to develop artificial intelligence-designed broadly
protective betacoronavirus vaccine. London: Coalition for Epidemic Preparedness
Innovations; 2022 (https://cepi.net/news_cepi/cepi-partners-with-japans-nec-group-to-
develop-artificial-intelligence-designed-broadly-protective-betacoronavirus-vaccine/,
accessed 24 April 2023).
72. Cox D. DeepMind wants to use its AI to cure neglected diseases. Wired, 23 June 2021
(https://www.wired.co.uk/article/deepmind-alphafold-protein-diseases, accessed 24
April 2023).
73. Gender equity in drug development. Geneva: Drugs for Neglected Diseases Initiative;
undated (https://dndi.org/advocacy/gender-equity-in-drug-development/, accessed 6
August 2023).
74. Baumann J. Diversity in clinical trials at FDA gets a boost from new law. Bloomberg Law,
19 January 2023 (https://news.bloomberglaw.com/pharma-and-life-sciences/diversity-
in-clinical-trials-at-fda-gets-a-boost-from-new-law, accessed 6 August 2023).
75. Strengthening clinical trials to provide high-quality evidence on health interventions and
to improve research quality and coordination (WHA 75.8), 27 May 2022. Geneva: World
Health Organization; 2022 (https://apps.who.int/gb/ebwha/pdf_files/WHA75/A75_R8-en.
pdf, accessed 6 August 2023).
77. Unmasking safety signals in an infodemic. Technical meeting report. Geneva: World
Health Organization; 2021 (https://cdn.who.int/media/docs/default-source/medicines/
pharmacovigilance/unmasking-safety-signals-in-an-infodemic_technical-report.
pdf?sfvrsn=5890874b_1&download=true, accessed 6 August 2023).
79. Reflection paper on the use of artificial intelligence in the medicinal product lifecycle.
Copenhagen: European Medicines Agency; 2023 (https://www.ema.europa.eu/en/
documents/scientific-guideline/draft-reflection-paper-use-artificial-intelligence-ai-
medicinal-product-lifecycle_en.pdf, accessed 10 August 2023).
80. Penazzato M, Lewis L, Watkins M, Prabhu V, Pascual F, Auton M et al. Shortening the
decade-long gap between adult and paediatric drug formulations: a new framework
based on the HIV experience in low- and middle-income countries. J Int AIDS Soc.
2018;21(12):e25049. doi:10.1002/jia2.25049.
81. Leslie D, Mazumder A, Peppin A, Wolters MK, Hagerty A. Does “AI” stand for augmenting
inequality in the era of covid-19 healthcare? BMJ. 2021;372:n304. doi:10.1136/bmj.n304.
82. Fisher CK. The FDA needs to set standards for using artificial intelligence in drug
development. STAT, 7 November 2019 (https://www.statnews.com/2019/11/07/artificial-
intelligence-drug-development-fda-standards-needed/, accessed 25 April 2023).
83. Global guidance framework for the responsible use of the life sciences: Mitigating
biorisks and governing dual-use research. Geneva: World Health Organization; 2022 (https://
www.who.int/publications-detail-redirect/9789240056107, accessed 10 May 2023).
84. Sand M, Durán JM, Jongsma KR. Responsibility beyond design: Physicians’ requirements
for ethical medical AI. Bioethics. 2022;36(2):162–9. doi:10.1111/bioe.12887.
85. Neville S. Drug companies hooked on data. Financial Times, 27 January 2020 (https://
www.ft.com/content/8f5de4e2-6847-11ea-a6ac-9122541af204, accessed 26 April 2023).
86. Goldstein J. Hospital and drugmaker move to build vast database of New Yorkers’ DNA.
The New York Times, 12 August 2022 (https://www.nytimes.com/2022/08/12/nyregion/
database-new-yorkers-dna.html, accessed 26 April 2023).
88. Our commitment to ethical and responsible AI. Basel: Novartis AG; 2023 (https://www.
novartis.com/about/strategy/data-and-digital/artificial-intelligence/our-commitment-
ethical-and-responsible-use-ai, accessed 26 April 2023).
91. Trusted research environments. London: Health Data Research UK; 2023 (https://www.
hdruk.ac.uk/access-to-health-data/trusted-research-environments/, accessed 11 August
2023).
92. Questions and answers – EU health: European Health Data Space (EHDS). Brussels:
European Commission; 2022 (https://ec.europa.eu/commission/presscorner/detail/en/
qanda_22_2712, accessed 10 August 2023).
94. Public health, innovation and intellectual property rights: Report of the Commission on
Intellectual Property Rights, Innovation and Public Health. Geneva: World Health Organization;
2006 (https://www.who.int/publications/i/item/9241563230, accessed 27 April 2023).
95. Research and development to meet health needs in developing countries: strengthening
global financing and coordination. Geneva: World Health Organization; 2012 (https://
www.who.int/publications/i/item/9789241503457, accessed 27 April 2023).
96. Hamzelou J. 23andMe sold the rights to develop a drug based on its users’ DNA. New
Scientist, 10 January 2020 (https://www.newscientist.com/article/2229828-23andme-
has-sold-the-rights-to-develop-a-drug-based-on-its-users-dna/, accessed 27 April 2023).
97. The Guardian view on DeepMind’s brain: the shape of things to come. The Guardian,
6 December 2020 (https://www.theguardian.com/commentisfree/2020/dec/06/the-
guardian-view-on-deepminds-brain-the-shape-of-things-to-come, accessed 27 April
2023).
98. Taylor NP. Lilly inks US$425 million biobuck drug discovery pact with Schrodinger.
Fierce BioTech, 6 October 2022 (https://www.fiercebiotech.com/biotech/lilly-inks-425m-
biobuck-drug-discovery-pact-schrodinger, accessed 27 April 2023).
99. Veale M. Privacy is not the problem with the Apple-Google contact tracing toolkit. The
Guardian, 1 July 2020 (https://www.theguardian.com/commentisfree/2020/jul/01/apple-
google-contact-tracing-app-tech-giant-digital-rights, accessed 27 April 2023).
100. Criddle C, Murphy H. Meta disbands protein-folding team in shift towards commercial AI.
Financial Times, 7 August 2023 (https://www.ft.com/content/919c05d2-b894-4812-aa1a-
dd2ab6de794a, accessed 18 September 2023).
102. You Y, Lai X, Pan Y, Zheng H, Vera J, Liu S et al. Artificial intelligence in cancer target
identification and drug discovery. Signal Transduct Target Ther. 2022;7(1):156.
doi:10.1038/s41392-022-00994-0.
World Health Organization
20 Avenue Appia
CH-1211 Geneva 27
Switzerland
Website: https://www.who.int