Artificial Intelligence in Health Care: Bachelor of Technology in Electronics and Communication Engineering


A Technical Seminar Report

on
“ARTIFICIAL INTELLIGENCE IN HEALTH CARE”
Submitted
in partial fulfillment of the academic requirements for the award of the Degree of
BACHELOR OF TECHNOLOGY
in
ELECTRONICS AND COMMUNICATION ENGINEERING

By

PAVARALA ANUSHA
19Q91A0445

DEPARTMENT OF ELECTRONICS AND COMMUNICATION


ENGINEERING
MALLA REDDY COLLEGE OF ENGINEERING
(Approved by AICTE- Permanently Affiliated to JNTU Hyderabad)

Accredited by NBA & NAAC, Recognized under Sections 2(f) & 12(B) of UGC, New Delhi: ISO
9001:2015 certified Institution

Maisammaguda, Dhulapally (Post via Kompally), Secunderabad- 500100

2022 - 2023

Mrce 1 19Q91A0445
MALLA REDDY COLLEGE OF ENGINEERING
(MALLA REDDY GROUP OF INSTITUTIONS)
(Approved by AICTE- Permanently Affiliated to JNTU Hyderabad)
Accredited by NBA & NAAC, Recognized under Sections 2(f) & 12(B) of UGC,
New Delhi: ISO 9001:2015 certified Institution.
Maisammaguda, Dhulapally (Post via Kompally), Secunderabad- 500100.

Department of Electronics and Communication Engineering

CERTIFICATE :

This is to certify that the seminar work entitled “ARTIFICIAL INTELLIGENCE IN HEALTH CARE”
that is being submitted by P. ANUSHA (19Q91A0445) in partial fulfillment for the award of
Bachelor of Technology in ELECTRONICS AND COMMUNICATION ENGINEERING from
MALLA REDDY COLLEGE OF ENGINEERING during the academic year 2022-2023.

Seminar Incharge Head of the Department


BALA SURESH REDDY SHIVA KUMAR
Department of ECE Department of ECE
MRCE MRCE

ACKNOWLEDGEMENT

First and foremost, we would like to express our immense gratitude towards our institution, Malla Reddy
College of Engineering, which helped us attain profound technical skills in the field of Electronics &
Communication Engineering, thereby fulfilling our most cherished goal.
We are pleased to thank Sri Ch. Malla Reddy, our Founder and Chairman, MRGI, and Sri Ch. Mahender
Reddy, Secretary, MRGI, for providing this opportunity and support throughout the course.

It gives us immense pleasure to acknowledge the perennial inspiration of Dr. M. Sreedhar Reddy, our
beloved Principal, for his kind co-operation and encouragement in bringing out this task. We would
like to thank Dr. T. V. Reddy, our Vice Principal, and Mr. M. Shiva Kumar, HOD, ECE Department, for
their inspiration, adroit guidance, and constructive criticism towards the successful completion of our degree.

We convey our gratitude to Mr. P. Venkatapathi, Associate Professor, our project coordinator, for
his valuable guidance. We would also like to thank Mr. P. Venkatapathi, Associate Professor, our
internal guide, for his valuable suggestions and guidance during the exhibition and completion of this
project. Finally, we avail this opportunity to express our deep gratitude to all staff who have contributed
their valuable assistance and support in making our project a success.

P. ANUSHA (19Q91A0445)

ABSTRACT
The use of artificial intelligence (AI) has increased across many sectors of healthcare. Healthcare
organizations of different sizes, types, and specialties are now more interested in how
artificial intelligence has evolved to support patient needs and care, reduce costs, and
increase efficiency.

This study explores the implications of AI for healthcare management, and the challenges involved in
using AI in healthcare, along with a review of several research papers that applied AI models in different
sectors of healthcare such as dermatology, radiology, and drug design.

The complexity and rise of data in healthcare mean that artificial intelligence (AI) will increasingly be
applied within the field. Several types of AI are already being employed by payers, providers of care,
and life sciences companies. The key categories of applications involve diagnosis and treatment
recommendations, patient engagement and adherence, and administrative activities. Although there are
many instances in which AI can perform healthcare tasks as well as or better than humans, implementation
factors will prevent large-scale automation of healthcare professional jobs for a considerable period.
Ethical issues in the application of AI to healthcare are also discussed.

TABLE OF CONTENTS
CHAPTER - 1
1. INTRODUCTION
 1.1. WHAT IS ARTIFICIAL INTELLIGENCE?
 1.2. HISTORY OF AI
 1.3. IMPORTANCE OF AI

CHAPTER - 2
2. AI IN HEALTH CARE
 2.1. HISTORY OF AI IN HEALTH CARE
 2.2. WHY ARTIFICIAL INTELLIGENCE IN HEALTH CARE?

CHAPTER - 3
3. APPLICATIONS OF AI IN HEALTH CARE

CHAPTER - 4
4. CHALLENGES OF AI IN HEALTH CARE
 4.1. ADVANTAGES OF AI IN HEALTH CARE
 4.2. LIMITS OF AI IN HEALTH CARE

CHAPTER - 5
5. AI TECHNOLOGIES IN HEALTH CARE
 5.1. HOW AI HELPS THE HEALTH CARE SECTOR

6. FUTURE SCOPE
7. CONCLUSION
REFERENCES

CHAPTER - 1

INTRODUCTION :

In computer science, Artificial Intelligence (AI), sometimes called “machine intelligence”, is
intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans
and animals.

The term “Artificial Intelligence” is used to describe machines that mimic “cognitive” functions
that humans associate with other human minds, such as “learning” and “problem solving”.
Artificial Intelligence has also enabled significant progress in speech recognition and natural language
processing.

1.1.WHAT IS AI?

Artificial Intelligence (AI) is a branch of science concerned with helping machines
find solutions to complex problems in a more human-like fashion. This generally involves borrowing
characteristics from human intelligence and applying them as algorithms in a computer-friendly way.
A more or less flexible or efficient approach can be taken depending on the requirements established,
which influences how artificial the intelligent behavior appears. Artificial intelligence can be viewed
from a variety of perspectives.

From the perspective of intelligence, artificial intelligence is making machines "intelligent" -- acting
as we would expect people to act. The inability to distinguish computer responses from human
responses is called the Turing test. Intelligence requires knowledge; expert problem solving involves
restricting the domain so that significant relevant knowledge can be included. From a business
perspective, AI is a set of very powerful tools, and methodologies for using those tools to solve
business problems.

From a programming perspective, AI includes the study of symbolic programming, problem solving,
and search.
 Typically AI programs focus on symbolic rather than numeric processing.

 Problem solving - achieving goals.

 Search - AI programs seldom access a solution directly; search may include a variety of techniques.

 AI programming languages include LISP, developed in the 1950s, the early programming language most strongly associated with AI.

 LISP is a functional programming language with procedural extensions. LISP
(LISt Processor) was specifically designed for processing heterogeneous lists -- typically lists of
symbols. Features of LISP are run-time type checking, higher-order functions (functions that take
other functions as parameters), automatic memory management (garbage collection), and an
interactive environment.

 The second language strongly associated with AI is PROLOG, developed in the 1970s.
PROLOG is based on first-order logic, is declarative in nature, and has facilities for explicitly
limiting the search space.

 Object-oriented languages are a class of languages more recently used for AI programming.
Important features of object-oriented languages include: the concepts of objects and messages;
objects bundling data and the methods for manipulating the data; the sender specifying what is to be
done while the receiver decides how to do it; and inheritance (an object hierarchy in which objects
inherit the attributes of a more general class of objects). Examples of object-oriented languages are
Smalltalk, Objective-C, and C++. Object-oriented extensions to LISP (CLOS - Common LISP
Object System) and PROLOG (L&O - Logic & Objects) are also used.

 Artificial Intelligence has been described as a new kind of electronic machine that stores large
amounts of information and processes it at very high speed.

 In the Turing test, a computer is interrogated by a human via a teletype; it passes if the human
cannot tell whether there is a computer or a human at the other end.

 The ability to solve problems.

 It is the science and engineering of making intelligent machines, especially intelligent
computer programs. It is related to the similar task of using computers to understand
human intelligence.
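The LISP ideas mentioned above -- symbolic list processing and higher-order functions -- can be sketched in Python (an illustrative analogy only; the helper names `flatten` and `apply_rule` are hypothetical, not part of LISP or of this report):

```python
# Symbolic list processing in the LISP style: data are nested lists of symbols (strings).

def flatten(expr):
    """Flatten a nested list of symbols into a flat list (like LISP list processing)."""
    if not isinstance(expr, list):
        return [expr]
    result = []
    for item in expr:
        result.extend(flatten(item))
    return result

def apply_rule(fn, expr):
    """Higher-order function: apply fn to every symbol in a nested expression."""
    if not isinstance(expr, list):
        return fn(expr)
    return [apply_rule(fn, item) for item in expr]

expr = ["plus", ["times", "x", "y"], "z"]
print(flatten(expr))                 # ['plus', 'times', 'x', 'y', 'z']
print(apply_rule(str.upper, expr))   # ['PLUS', ['TIMES', 'X', 'Y'], 'Z']
```

Passing `str.upper` as an argument mirrors the LISP feature of functions that take other functions as parameters.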

1.2.HISTORY OF AI:

“Artificial Intelligence (AI) is the part of computer science concerned with designing intelligent
computer systems, that is, systems that exhibit characteristics we associate with intelligence in human
behaviour -- understanding language, learning, reasoning, solving problems, and so on.”

 Scientific Goal: To determine which ideas about knowledge representation, learning, rule systems,
search, and so on, explain various sorts of real intelligence.

 Engineering Goal: To solve real-world problems using AI techniques such as knowledge
representation, learning, rule systems, search, and so on.

 Traditionally, computer scientists and engineers have been more interested in the engineering
goal, while psychologists, philosophers and cognitive scientists have been more interested in
the scientific goal.

 The Roots - Artificial Intelligence has identifiable roots in a number of older disciplines,
particularly:
 Philosophy
 Logic/Mathematics
 Computation
 Psychology/Cognitive Science
 Biology/Neuroscience
 Evolution

There is inevitably much overlap, e.g. between philosophy and logic, or between mathematics and
computation. By looking at each of these in turn, we can gain a better understanding of their role in AI,
and how these underlying disciplines have developed to play that role.

1.3. IMPORTANCE OF AI:
 Game playing :

You can buy machines that can play master-level chess for a few
hundred dollars. There is some AI in them, but they play well against people mainly through brute-force
computation -- looking at hundreds of thousands of positions. To beat a world champion by brute force
and known reliable heuristics requires being able to look at 200 million positions per second.

 Speech Recognition:
In the 1990s, computer speech recognition reached a practical level
for limited purposes. Thus United Airlines has replaced its keyboard tree for flight information by a
system using speech recognition of flight numbers and city names. It is quite convenient. On the
other hand, while it is possible to instruct some computers using speech, most users have gone back
to the keyboard and the mouse as still more convenient.

 Understanding Natural Language:

Just getting a sequence of words into a computer is not enough.
Parsing sentences is not enough either. The computer has to be provided with an understanding of
the domain the text is about, and this is presently possible only for very limited domains.

 Computer Vision:

The world is composed of three-dimensional objects, but the inputs to the human eye and
computers' TV cameras are two-dimensional. Some useful programs can work solely in two
dimensions, but full computer vision requires partial three-dimensional information that is not just a
set of two-dimensional views.

 Expert Systems:

A “knowledge engineer” interviews experts in a certain domain and
tries to embody their knowledge in a computer program for carrying out some task.
How well this works depends on whether the intellectual mechanisms required for the task
are within the present state of AI. When this turned out not to be so, there were many disappointing
results.
One of the first expert systems was MYCIN in 1974, which diagnosed bacterial
infections of the blood.
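A MYCIN-style expert system can be sketched as forward chaining over if-then rules with confidence factors (a minimal illustration; the rules and finding names below are hypothetical and not MYCIN's actual knowledge base):

```python
# Minimal forward-chaining rule engine.
# Each rule maps a set of required findings to a conclusion with a confidence factor.
RULES = [
    ({"fever", "positive_blood_culture"}, ("bacteremia", 0.7)),
    ({"bacteremia", "gram_negative_stain"}, ("gram_negative_bacteremia", 0.8)),
]

def infer(findings):
    """Repeatedly fire rules whose conditions are satisfied until nothing new is derived."""
    findings = set(findings)
    conclusions = {}
    changed = True
    while changed:
        changed = False
        for conditions, (conclusion, cf) in RULES:
            if conditions <= findings and conclusion not in findings:
                findings.add(conclusion)
                conclusions[conclusion] = cf
                changed = True
    return conclusions

print(infer({"fever", "positive_blood_culture", "gram_negative_stain"}))
```

Note that the second rule only fires after the first has added "bacteremia" to the working set, which is the chaining behavior a knowledge engineer relies on when encoding expert knowledge as rules.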

CHAPTER - 2
2. AI IN HEALTH CARE :

This study centers on how computer-based decision procedures, under the broad umbrella of artificial
intelligence (AI), can assist in improving health and health care. Although advanced statistics and
machine learning provide the foundation for AI, there are currently revolutionary advances underway in
the sub-field of neural networks.
This has created tremendous excitement in many fields of science, including in medicine and
public health. First demonstrations have already emerged showing that deep neural networks can
perform as well as the best human clinicians in well-defined diagnostic tasks.
In addition, AI-based tools are already appearing in health-oriented apps that can be employed on
handheld, networked devices such as smart phones.

Artificial intelligence in health care, where computers perform tasks that are usually
assumed to require human intelligence, is currently being discussed in nearly every domain of science
and engineering. Major scientific competitions like the ImageNet Large Scale Visual Recognition
Challenge are providing evidence that computers can achieve human-like competence in image
recognition. AI has also enabled significant progress in speech recognition and natural language
processing.
Two recent high-profile research papers have demonstrated that AI can perform clinical diagnostics on
medical images at levels equal to experienced clinicians, at least in very specific examples.

2.1.HISTORY OF AI IN HEALTH CARE

The U.S. Department of Health and Human Services (HHS), with support from the Robert Wood
Johnson Foundation, asked JASON to consider how AI will shape the future of public health,
community health, and health care delivery. We focused on technical capabilities, limitations, and
applications that can be realized within the next ten years.
Some questions raised by this study are: Is the recent level of interest in AI just another period of hype
within the cycles of excitement that have arisen around AI? Or would different circumstances this time
make people more receptive to embracing the promise of AI applications, particularly related to health?
AI has primarily excited computational science researchers throughout academia and industry;
previous advances in AI had no obvious influence on the lives of individuals. The potential
influence of AI on health, including health care delivery, may be affected by current societal factors
that could make the fate of AI hype different this time. Currently, there is great frustration with the cost
and quality of care delivered by the US health care system. To some degree, this has fundamentally
eroded patient confidence, opening people's minds to new paradigms, tools, and services. Dovetailing with
this, there is an explosion in new personal health monitoring technology through smart device platforms
and internet-based interactions.
This seemingly perfect storm defines the environment in which AI applications are now being
developed and has helped shape this study. For historical perspective: the first patent for the invention
of the telephone was granted in 1876, and AI was introduced at a much later stage. The field of AI
research was formally founded at a workshop held on the campus of Dartmouth College during the
summer of 1956.
At that time, it was predicted that a machine as intelligent as a human being
would exist in no more than a generation, and researchers were given millions of dollars to make this
vision come true.

AI has been around for decades and its promise to revolutionize our lives has been frequently
raised, with many of the promises remaining unfulfilled. Fueled by the growth of capabilities in
computational hardware and associated algorithm development, as well as some degree of hype, AI
research programs have ebbed and flowed.
2.2. WHY AI IN HEALTH CARE?

OPPORTUNITIES AND ISSUES FOR CLINICAL PRACTICE :

There have been significant demonstrations of the potential utility of artificial intelligence
approaches based on Deep Learning [10] for use in medical diagnostics. While continuing basic
research on these methods is likely to lead to further advances, we recommend parallel, focused work
on creating rigorous testing and validation approaches for the clinical use of AI algorithms. This is
needed to identify and ameliorate any problems in implementation as soon as possible, in order to
develop confidence within the medical community and to provide feedback to the basic research
community on areas where continued development is most needed.
We point out a key issue of balance in expectations, which is that AI algorithms, including Deep
Learning, should not be expected to perform at higher levels than their training sets. However, where
good training sets represent the highest levels of medical expertise, applications of Deep Learning
algorithms in clinical settings offer the potential of consistently delivering high-quality results. Thus,
one aspirational goal for such applications should be to make high-quality health care services available
to all.

CHAPTER - 3

3. APPLICATIONS OF AI IN HEALTH CARE :


Advances in AI Applications for Medical Imaging
In the following sections, we review examples in which applications of Deep Learning have been
demonstrated, with attention to quantitative understanding of characteristics of the data sets, the
problem definition, and the nature of the comparison standard used for labeling the sets. The two
examples described are based on medical imaging, specifically diabetic retinopathy and dermatology.

Detection of diabetic retinopathy in retinal fundus images


Many diseases of the eye can be diagnosed through non-invasive imaging of the retina through the
pupil. Early screening for diabetic retinopathy is important as early treatment can prevent vision loss
and blindness in the rapidly growing population of patients with diabetes. Such screening also provides
the opportunity to identify other eye diseases, as well as providing indicators of cardiovascular disease.
The increasing need for such screening, and the demands for expert analysis that it creates, motivate
the goal of low-cost, quantitative retinal image analysis. Routine imaging for screening uses the
specially designed optics of a ‘fundus camera,’ with several images taken at different orientations
(fields, see Figure 2) [20] and can be accomplished with (mydriatic) or without (non-mydriatic) dilation
of the pupil. Assessment of the image requires skilled readers, and may be performed by remote
specialists. With the advent of digital photography, digital recording of retinal images can be carried out
routinely through Picture Archiving and Communication Systems (PACS).

As a point of reference, the standards for screening [21] for diabetic retinopathy in the UK require at
least 80% sensitivity and 95% specificity to determine referral for further evaluation. Screening using
fundus photography, followed by manual image analysis, yields sensitivity and specificity rates cited as
96%/89% when two fields (angles of view) are included, and 92%/97% for three fields. (For a single
field, cited rates are 78%/86%).
Recently a transformational advance in automated retinal image analysis, using Deep Learning
algorithms, has been demonstrated [22]. The algorithm was trained against a data set of over 100,000
images [23], which were recorded with one field (macula-centered). Each image in the training set was
evaluated by 3-7 ophthalmologists, thus allowing training with significantly reduced image analysis
variability. The results from tests on two validation sets, also involving only one image per eye (fovea
centered), are striking. Selecting for high specificity (low false positives) yielded
sensitivities/specificities of 90.3%/98.1% and 87.0%/98.5%. Selecting for high sensitivity yielded
values of 97.5%/93.4% and 96.1%/93.9%. These results compare favorably with manual assessments
even where those are based on images from multiple fields as noted above. They also are a significant
advance over previous automated assessments, which consistently suffered from significantly lower
sensitivities [24].
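The sensitivity and specificity figures quoted above come from a standard confusion-matrix calculation, which can be sketched as follows (a generic illustration with hypothetical counts, not the study's data or code):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical screening counts: 1000 eyes, 100 of them with referable retinopathy.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=883, fp=17)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
# A UK-style screening threshold: at least 80% sensitivity and 95% specificity.
print("meets threshold:", sens >= 0.80 and spec >= 0.95)
```

Selecting an operating point for high specificity versus high sensitivity, as the study did, amounts to moving the classifier's decision threshold and trading one of these quantities against the other.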
The Deep Learning algorithm shows great promise to provide increased quality of outcomes with
increased accessibility. Continued work to establish its use as an approved clinical protocol (see Section
2.3) will be needed. Once validated, its use can be envisioned in a wide range of scenarios, including
decision support in existing practice, rapid and reduced-cost analysis in place of manual assessment, or
enabling diagnostics in non-traditional settings able to reach underserved populations. Greatly expanded
accessibility is likely to be aided by the deployment of low-cost fundus cameras, which are under rapid
development [25,26] and likely to be supported by apps as described.
Dermatological classification of skin cancer

Dermatology encompasses 757 disease classes and over 2,000 diseases. Skin cancer represents a
challenging diagnostic problem because only a small fraction (3-5% of the roughly 1.5 million annual
US skin cancer cases) are the most serious type, melanoma, which accounts for 75% of skin cancer deaths.
Identifying melanomas early is a critical health issue, and because diagnosis can be performed on
photographic images, there are already services that allow individuals to send their smart-phone photos
in for analysis by a dermatologist [27]. However, the detection of melanomas in screening exams is
limited -- sensitivity 40.2% and specificity 86.1% for primary care physicians, and 49.0%/97.6% for
dermatologists.
A recent demonstration of automated skin cancer evaluation using a convolutional neural network
(CNN) algorithm yielded striking results [29].
The authors drew on a training set of over 125,000 dermatologist-labeled images from 18 different
online repositories. Two thousand of the images were also labeled based on biopsies.

A further classification test was performed drawing only on images that were biopsy-proven to be in a
specific disease class. The algorithm then was run to answer only the question of whether the lesion in
the image was benign or malignant. The results for analysis of 130 images of melanocytic lesions are
shown in Figure 3b, compared with results from assessments by 22 different dermatologists. As with
the broader classification tests, the algorithm performs similarly or slightly better than individual
dermatologists. The performance for both algorithm and dermatologists is much better for this specific
task than for the classification, noted above, of images from a set representing all the different diseases.
As with the retinopathy example, these results indicate that AI algorithms can perform at levels
matching their training sets. The poor level of results for the broad screening tests is consistent with the
training set, which is based on dermatological characterization. It would be of great interest to
understand whether a training set based on a more accurate method of discrimination, for instance
biopsies, would allow the algorithm to perform significantly better. The much better results on the
narrower classification task, for both the algorithm and the dermatologists, suggest that the clinical
decisions that originally led to these cases being selected for biopsies may have removed many
less-easily classified images. Overall this is a very promising result but, as the authors note, more work
is needed for it to deliver value in a broad clinical setting.
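The convolutional neural network (CNN) at the core of this result builds on the 2D convolution operation, which can be sketched in pure Python (a toy illustration of the building block, not the study's network):

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A vertical-edge detector applied to a tiny "image" with a bright right half.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # responds only where intensity changes left-to-right
```

A CNN stacks many such learned filters with nonlinearities and pooling, so that the network can progress from edges to textures to lesion-level features during training.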
Data issues
Each of the examples above relied on access to large sets of medical images, some of which were
available in professionally maintained archives. In addition, however, each also required the
development of a labeled training set. In the retinopathy example, these were obtained via
labor-intensive independent professional assessments of the images. In the dermatology case, the training was
also based on clinician assessments of the images. It would be of significant interest to determine
whether training against a more rigorous assessment, such as the outcome of a biopsy, improves the
performance of the algorithm. The data sets available in the dermatology example included too small a
number of biopsy-proven images to serve as the sole training basis. One question to consider is whether
mixing images labeled based on a biopsy with those labeled by image reading would improve the
performance of the algorithm.
Findings:
AI algorithms based on high quality training sets have demonstrated performance for medical image
analysis at the levels of the medical capability that is captured in their training data.
AI algorithms cannot be expected to perform at a higher level than their training data, but should deliver
the same standard of performance consistently for images within the training space.

Moving Computational Advances into Clinical Practice


Computational methods are recognized as a special type of tool in medical practice and, where
they impact clinical practice, are subject to different forms of regulation. AI-based tools represent a new
set of opportunities, but lack the extensive basis of experience and validation that is needed for
acceptance into formal medical practice. In this section and Section 2.3, we address the question of how
AI-based computational approaches could become sufficiently trusted to modify protocols for diagnosis
and treatment, enabling improved outcomes at lowered costs.
We use the example of a new computational tool that requires large scale computation, but is based on
physical properties rather than a trained AI algorithm. The example illustrates the rigorous process of
review and development needed to validate new techniques for clinical practice. It also illustrates how
important the demonstration of the benefits of a new approach is to its eventual uptake as a clinical tool.

Development of new approaches – non-invasive diagnostics :


Coronary computed tomographic angiography (CCTA) has been established as a non-invasive
technique for screening. However, even at its best performance, CCTA -- like invasive coronary
angiography (ICA) -- has limited ability to discriminate which cases of stenosis are truly causing
impaired blood flow. Recent advances in computational fluid dynamics (CFD) and in physical
understanding of arterial behavior have enabled computational determination of the fractional flow
reserve (FFR), using CCTA as the input for the structure.

Figure: Three-dimensional pressure and velocity fields at one point in the cardiac cycle, computed from
FFR based on CTA imaging (FFRCT); the computation is repeated throughout the cardiac cycle.
Source: Taylor et al., 2013.
The process of establishing this approach for clinical use included early assessments (ClinicalTrials.gov
NCT01189331 and NCT01233518) that compared the performance of computational FFR (FFRCT) and
CCTA alone for diagnosing significantly reduced blood flow. The results showed that FFRCT
performed dramatically better on per-vessel assessments, and significantly better on per-patient
assessments. These positive results, with the potential for greatly reducing the number of invasive tests
while maintaining quality in diagnosis, created significant interest in evaluating the technique for use in
clinical care.

Development and validation for clinical applications :
A private company, HeartFlow, Inc., led development of the FFRCT technology for clinical use. In
2014, the company received FDA approval to market the technology, based on its demonstration of
substantial equivalence to predicate devices. In parallel, additional studies were underway to establish
whether FFRCT is a feasible alternative to invasive coronary angiography, and its potential use as a
direct diagnostic for CAD-induced reduced blood flow requiring revascularization. The test of
diagnostic performance yielded results showing favorable performance for FFRCT.
Figure: Per-patient (upper panel) and per-vessel (lower panel) diagnostic performance of CCTA, fractional
flow reserve derived from standard acquired coronary computed tomography angiography datasets, and
invasive coronary angiography. Ranges represent 95% confidence intervals. The threshold for stenosis was
a 50% or greater reduction in vessel diameter, and for FFR a reduction to 0.8 or less.
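The FFR criterion above is a simple ratio test: FFR is the mean distal coronary pressure divided by the mean aortic pressure, and values of 0.8 or less are treated as hemodynamically significant. A minimal sketch (illustrative only, with made-up pressures; real FFRCT derives these pressures from CFD simulation of the CCTA-derived geometry):

```python
def fractional_flow_reserve(p_distal, p_aortic):
    """FFR = mean distal coronary pressure / mean aortic pressure."""
    return p_distal / p_aortic

def is_significant(ffr, threshold=0.8):
    """Apply the clinical cut-off used in the trials: FFR <= 0.8 is significant."""
    return ffr <= threshold

ffr = fractional_flow_reserve(p_distal=68.0, p_aortic=95.0)  # pressures in mmHg
print(f"FFR = {ffr:.2f}, significant: {is_significant(ffr)}")
```

The value of the computational approach is precisely that these pressures can be estimated non-invasively instead of being measured with a pressure wire during ICA.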

These (and other) study results enabled the critical step of evaluation for actual clinical use. Such
assessment requires comparison with the established standards of care, along with quantification of the
impact on patient quality of life and economics [40, 41]. The results confirmed that using FFRCT can
correctly predict patients who should not be referred for an ICA, greatly reducing the amount of
invasive testing.
The reduction in invasive procedures and associated hospitalization and medications during a 90-day
follow-up period improved patient quality of life metrics and reduced costs by approximately 30%.
Being established as a standard of care is the final step of approval for new clinical practices.
The UK National Institute for Health and Care Excellence (NICE) guidance for FFRCT illustrates this
process.

The company presented the case for the FFRCT technology to NICE. NICE carried out an independent
literature review and an independent assessment of the cost reductions due to fewer inconclusive or
inaccurate diagnostic tests and avoidance of unnecessary staff and procedure costs.
The resulting recommendation is that FFRCT should be considered as an option as part of the NICE
pathway on chest pain, with a potential cost savings of over £200 per patient. Similar approval
processes are required on a country-by-country basis.
In the US, almost all health insurance payment and information systems require that a procedure has
been awarded an American Medical Association Current Procedural Terminology (CPT) code.
With approval in place, the challenge becomes the rate of uptake in practice. HeartFlow is continuing
with additional clinical trials to demonstrate additional benefits to the medical community.
Another pathway to uptake is relationships with equipment providers, which offers the potential for
the new diagnostic technique to be marketed as part of an equipment line. The potential to further
improve the technology using AI has been recognized and is in development.
Evolution of Standards for AI in Medical Applications :
In the US, adoption of AI applications for clinical practice will be regulated by a
combination of Food and Drug Administration (FDA) regulations and clinical business models. For
non-clinical personal smart device uses discussed in the following section, it will require public (user)
confidence and may or may not require regulation.
In the example above, the adoption of the FFRCT computational tool required clinical studies. This will
also be necessary for AI outcomes to be accepted as definitive decision factors in standards of care. Such
decisions may need FDA regulation, and the specifics of what those requirements might be are still
evolving.
FDA has been paying close attention to the international development of both software in medical
devices (SiMD) and software as a medical device (SaMD).
AI applications can fall into both of these categories. FDA has been participating in (and in fact
chairing) an international regulators' forum. The goal of the forum is to develop frameworks, risk
categorizations, vocabulary, considerations, and principles that could support regulation of software.
Their report on clinical evaluation [48] was put out in the US for public comment [49] by the FDA.
FDA plans to use that input as the basis to guide its new development of regulation for SiMD and
SaMD and to be responsive to new policies under the 21st Century Cures Act, noting:
“... the Act revised FDA’s governing statute to, among other things, make clear that certain digital
health technologies—such as clinical administrative support software and mobile apps that are intended
only for maintaining or encouraging a healthy lifestyle—generally fall outside the scope of FDA
regulation. Such technologies tend to pose low risk to patients but can provide great value to the health
care system.”
FDA’s new digital health unit will be using this input to formulate FDA’s role in the regulation of digital
health, including mobile health (mHealth), health information technology (IT), wearable devices,
telehealth and telemedicine, and personalized medicine.
Clinical trials and regulation are only part of the story for adoption of AI applications. In addition,
clinicians and other health professionals (e.g., physical trainers, doctors, or public health specialists)
must be willing to incorporate these applications into their workflows. Moreover, even to support the
development of clinical trials and to assure that AI applications are legitimate, including for
non-regulated applications, the technical soundness of the algorithms needs to be confirmed.

CHAPTER - 4

CHALLENGES OF AI IN HEALTH CARE :


Major scientific competitions like the ImageNet Large Scale Visual Recognition
Challenge are providing evidence that computers can achieve human-like competence in image
recognition. AI has also enabled significant progress in speech recognition and natural language
processing.

4.1.ADVANTAGES :

PROLIFERATION OF DEVICES AND APPS FOR DATA COLLECTION AND ANALYSIS


Smart phones and other smart technologies are already a primary platform for the
adoption of health and wellness through mHealth (mobile or digital health) apps and networked devices
[53]. These apps and devices support the full spectrum from healthy to sick, e.g., from episodic Fitbit
users to diabetics who use glucose monitors. Questions arise regarding the use and usefulness of these
devices by individuals and the willingness of the medical community to integrate mHealth into health
care.
There is active research on the design and usability testing of these apps and devices [54].
Additionally, many mHealth applications are being included in clinical trials. They are being
used as a mechanism to collect information for the trial, to evaluate the specific device or app, or to
evaluate its usefulness in combination with other health behaviors. For example, in 2016 it was
reported that Fitbit alone was being used in 21 clinical trials.
This growth in the development of more serious health devices that could be used to monitor and
communicate health status to a professional has attracted the attention of the American Medical
Association (AMA). AMA recently adopted a set of principles to promote safe, effective mHealth
applications. AMA is encouraging physicians and others to support and establish patient-physician
relationships around the use of apps and associated devices, trackers, and sensors.
An AMA survey released in 2016 reported that 31% of physicians see the potential for digital tools to
improve patient care, and about half are attracted to digital tools because they believe these will improve
current practices with respect to efficiency, patient safety, diagnostic ability, and physician-patient
relationships. The study goes on to point out that adoption in practice will require that these tools fit
within existing systems and practices, including coverage for liability, data privacy assurance, the ability
to link to electronic health records, and billing/reimbursement.
These are all promising directions for the development and application of mHealth tools. The focus here
is on the types of devices and apps that have the potential to benefit from AI applications, either in their
individual functions or when the data they generate can be integrated with other health
information to support wellness versus sickness.

Personal Networked Devices and Apps :


There are many impressive smart phone attachments and apps currently available for monitoring
personal health. These devices: 1) empower individuals to monitor and understand their own health;
2) generate large corpuses of data that can, in theory, be used for AI applications; and
3) capture health data that can be shared with clinicians and researchers. AI
algorithms drive the performance of many of these devices and, reciprocally, these devices are
capturing data that could be used to develop or improve AI
algorithms. Here we list some specific examples of modern health-monitoring tools available for use on
mobile devices.
Personal EKG. Kardia Mobile [58] has produced an FDA-cleared personal EKG recording device.
The platform uses a finger pad and a smart phone app to record an EKG over a 30-second window. The
device operates with no wires or gels. The platform claims to use AI-enabled detection of atrial
fibrillation.
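Kardia's detection algorithm is proprietary, but the underlying idea can be illustrated: atrial fibrillation produces an irregularly irregular rhythm, so one crude screen (purely illustrative, not the company's method, and not clinically validated) is the variability of the intervals between successive beats:

```python
import numpy as np

def possible_afib(rr_intervals_ms, cv_threshold=0.15):
    """Flag a possibly irregular rhythm from beat-to-beat (RR) intervals.

    Atrial fibrillation produces an irregularly irregular rhythm, so a
    simple screen is the coefficient of variation of the RR intervals.
    The 0.15 threshold here is illustrative, not clinically validated.
    """
    rr = np.asarray(rr_intervals_ms, dtype=float)
    cv = rr.std() / rr.mean()          # coefficient of variation
    return cv > cv_threshold

# Regular sinus rhythm: nearly constant ~800 ms intervals
regular = [810, 795, 805, 800, 798, 802]
# Irregular rhythm: large beat-to-beat variability
irregular = [620, 940, 710, 1050, 560, 880]
```

A production device would of course combine many such features with a trained classifier rather than a single threshold.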
Parkinson’s tremors. cloudUPDRS [59] is a smart phone app that can assess Parkinson’s disease
symptoms. The app uses the gyroscope found in many mobile devices to analyze and quantify tremors,
patterns in gait, and performance in a “finger tapping” test. An AI algorithm differentiates between
actual tremors and “bad data,” such as a dropped phone or the wrong action in response to the app’s
question. This tool enables Parkinson’s patients to perform in-home testing, providing valuable and
quantitative feedback on how their personal lifestyle factors and medications may affect their symptoms.
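The app's actual algorithm is not public, but one simple frequency-based screen (illustrative thresholds only) exploits the fact that Parkinsonian rest tremor typically oscillates at roughly 4-6 Hz, whereas slow voluntary movement or a dropped phone produces a different spectral signature:

```python
import numpy as np

def classify_trace(signal, fs=100.0, band=(3.0, 7.0)):
    """Label a gyroscope trace as 'tremor' or 'other'.

    Parkinsonian rest tremor oscillates at roughly 4-6 Hz, so a simple
    screen is whether the trace's dominant spectral peak falls in that
    band. Thresholds are illustrative; a deployed app would use a
    trained model and far richer features.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                          # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    peak = freqs[spectrum.argmax()]           # dominant frequency
    return "tremor" if band[0] <= peak <= band[1] else "other"

fs = 100.0                                    # 100 samples per second
t = np.arange(0, 5, 1 / fs)
tremor = np.sin(2 * np.pi * 5.0 * t)          # 5 Hz tremor-like signal
slow_drift = np.sin(2 * np.pi * 0.8 * t)      # slow voluntary movement
```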
Asthma tracking and control.
AsthmaMD offers a hand-held flow meter that gauges lung performance by assessing peak flow
during exhalation. The flow meter pairs with an app that logs data for people with asthma and other
respiratory diseases. Users can also record symptoms and medications. An interesting feature of this
app is that users may opt in to a program where their data are uploaded anonymously to a Google
database being assembled for research purposes. AsthmaMD states that “anonymous, aggregate data will
help correlate asthma with environmental factors, triggers and climate change.”
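The logic such peak-flow apps build on can be illustrated with the standard traffic-light asthma action plan, which classifies each reading against the user's personal best (the common 80%/50% thresholds are used here for illustration; individual action plans may differ):

```python
def peak_flow_zone(reading, personal_best):
    """Classify a peak-flow reading (litres/min) using the standard
    traffic-light asthma action-plan zones:
      green  - at least 80% of personal best (well controlled)
      yellow - 50-79% of personal best (caution, follow action plan)
      red    - below 50% of personal best (seek urgent care)
    """
    pct = reading / personal_best
    if pct >= 0.80:
        return "green"
    if pct >= 0.50:
        return "yellow"
    return "red"
```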

These sorts of technologies can collect information of clear and vital importance to patients and
clinicians, but we must again emphasize that each new data stream must be evaluated, collected, and
curated into formats consistent with clinical needs and AI applications.

New devices are sure to come online, and new ideas for such devices are being sponsored by
government agencies. The National Institutes of Health recently put out a request for proposals
called Mobile Monitoring of Cognitive Change:
“This Funding Opportunity Announcement (FOA) invites applications to design and implement
research infrastructure that will enable the monitoring of cognitive abilities and age, state, context, or
health condition-related changes in cognitive abilities on mobile devices. This effort will include the
development (or support for development) of apps on the Android and iOS platforms, the validation of
tests and items to be used on the two leading smart phone platforms in age groups ranging from 20 to
85, and the norming of successfully validated measures to nationally representative U.S. population
samples that will also receive gold standard measures, including the NIH Toolbox® for Assessment of
Behavioral and Neurological Function. A goal of this project is to also support data collection efforts
from participants enrolled in projects awarded through this FOA as well as other NIH-funded studies
through FY2022, and enable the widespread sharing of both the collected data and the test instruments.”
There are certain parts of the human body, however, that haven’t been successfully scrutinized by cell
phone-based attachments. The bloodstream is one example. There are many things that would be
desirable to measure in blood, both metrics of health (e.g., vitamins and minerals) and disease (e.g.,
viruses and cancer biomarkers). However, with the notable exception of glucose, such blood
measurements remain largely out of reach for phone-based devices.

Online plus AI :
There is a proliferation of companies developing apps that offer online doctors’ appointments. In the
U.S., this includes the new company PlushCare, but these services currently seem to be more prevalent
in the U.K. These apps allow nearly instantaneous access to a live doctor, over mobile devices, at any
time of day, every day of the week. One of them, Babylon, claims to use an AI algorithm, along with a
series of questions about symptoms, to automatically triage patients. Interestingly, the company is now
expanding to Rwanda, where there is a serious shortage of doctors yet a high penetration of smart
phones. Online doctors’ appointments are likely to appeal to many people who are already acclimated
to the use of apps to fulfill personal needs (e.g., Amazon, Uber, etc.). However, the potential dangers of
sharing personal health information over such networked connections are a concern.

LARGE SCALE HEALTH DATA :


An aspirational goal for health and health care is to amass large datasets (labeled and unlabeled) of
systematically curated health data so that novel disease correlations can be identified and people can be
matched to the best treatments based on their specific health, life experiences, and genetic profile. AI
holds the promise of integrating all of these data sources to develop medical breakthroughs and new
insights on individual and public health. However, major limiting factors will be the availability
and accessibility of high-quality data, and the ability of AI algorithms to function effectively and
reliably on these complex data streams.
It is estimated that 60% of premature deaths are accounted for by social circumstances, environmental
exposures, and behavioral patterns. These three areas reflect a combination of experiences throughout
our lives based on where we are born, live, learn, work, and play. Frequently termed the social
determinants of health, they include economic stability, neighborhood and physical environment,
education, food, community and social context, and the health care system.

Figure: Social Determinants of Health. Source: modified from the Kaiser Family Foundation [110].

Genetic information must also be included in this enterprise. However, it must be recognized that
genetic sequencing continues to fall short in explaining many health conditions. In some cases, human
diseases are easily traced to well-characterized mutations in very specific genes, but this seems to be
the exception rather than the rule. Sometimes human illnesses result from combinations of genetic
mutations, and in these cases it is much more difficult to track down the genetic underpinnings of
disease. In addition, the forces of chance are at play, and susceptibilities are altered by behavior (e.g.,
exercise, diet, smoking) and environmental exposures (e.g., environmental toxins, noise pollution,
industrial chemicals).

4.2.Limits of AI in Medicine :

1. NEEDS HUMAN SURVEILLANCE

Although AI has come a long way in the medical world, human surveillance is still essential.
For example, surgery robots operate logically, as opposed to empathetically. Health
practitioners may notice vital behavioral observations that can help diagnose or prevent
medical complications.

“AI has been around for a few decades and continues to mature. As this area advances, there is
more interaction between healthcare professionals and tech experts,” Yang explains. AI
requires human input and review to be leveraged effectively.

2. MAY OVERLOOK SOCIAL VARIABLES

Patient needs often extend beyond immediate physical conditions. Social, economic and
historical factors can play into appropriate recommendations for particular patients. For
instance, an AI system may be able to allocate a patient to a particular care center based on a
specific diagnosis. However, this system may not account for patient economic restrictions or
other personalized preferences.

Privacy also becomes an issue when incorporating an AI system. Brands like Amazon have free
rein when it comes to collecting and leveraging data. Hospitals, on the other hand, may face
setbacks when attempting to channel data from Apple mobile devices, for instance.

3. MAY LEAD TO UNEMPLOYMENT

Although AI may help cut costs and reduce clinician pressure, it may also render some jobs
redundant. This variable may result in displaced professionals who invested time and money
in healthcare education, presenting equity challenges.

A 2018 World Economic Forum report projected that AI would create a net gain of 58 million jobs
by 2022; the same study estimated that 75 million jobs would be displaced or destroyed by AI
over the same period. The major reason for this elimination of job opportunities is that, as AI is
integrated across different sectors, roles that entail repetitive tasks will become redundant.

4. INACCURACIES ARE STILL POSSIBLE

Medical AI depends heavily on diagnosis data available from millions of catalogued cases. In
cases where little data exists on particular illnesses, demographics, or environmental factors, a
misdiagnosis is entirely possible. This factor becomes especially important when prescribing
particular medicine.

Remarking on this data gap, Yang says, “No matter the system, there is always some portion of
missing data. In the case with prescriptions, some information regarding certain populations
and reactions to treatments may be absent. This occurrence can lead to issues with diagnosing
and treating patients belonging to certain demographics.”

5. SUSCEPTIBLE TO SECURITY RISKS

As AI is generally dependent on data networks, AI systems are susceptible to security risks.


With the onset of offensive AI, improved cybersecurity will be required to ensure the technology is
sustainable. According to Forrester Consulting, 88% of decision-makers in the security
industry are convinced offensive AI is an emerging threat.

CHAPTER - 5

5.1. AI TECHNOLOGIES IN HEALTH CARE :

Machine learning – neural networks and deep learning


Machine learning is a statistical technique for fitting models to data and ‘learning’ by training models
with data. It is one of the most common forms of AI; in a 2018 Deloitte survey of 1,100
US managers whose organisations were already pursuing AI, 63% of companies surveyed were
employing machine learning in their businesses.1 It is a broad technique at the core of many approaches
to AI, and there are many versions of it.

In healthcare, the most common application of traditional machine learning is precision medicine –
predicting what treatment protocols are likely to succeed on a patient based on various patient attributes
and the treatment context.2 The great majority of machine learning and precision medicine applications
require a training dataset for which the outcome variable (eg onset of disease) is known; this is called
supervised learning.
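A minimal sketch of supervised learning in this sense, using synthetic, purely illustrative patient data (the features, effect sizes, and outcome are all invented; this is not a clinical model):

```python
# Supervised learning sketch: fit a model on a training dataset where
# the outcome variable ("disease onset") is known, then predict risk
# for new patients. Data here is synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(30, 80, n)
bmi = rng.uniform(18, 40, n)
# Synthetic ground truth: risk rises with age and BMI
risk = 1 / (1 + np.exp(-(0.08 * (age - 55) + 0.15 * (bmi - 27))))
onset = (rng.random(n) < risk).astype(int)      # known outcome variable

X = np.column_stack([age, bmi])                 # patient attributes
model = LogisticRegression().fit(X, onset)      # supervised training
high = model.predict_proba([[75, 38]])[0, 1]    # older, high BMI
low = model.predict_proba([[35, 20]])[0, 1]     # younger, low BMI
```

Real precision-medicine models would use curated clinical features, far more data, and rigorous validation, but the train-on-labelled-outcomes pattern is the same.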

A more complex form of machine learning is the neural network – a technology available since the
1960s that has been well established in healthcare research for several decades3 and has been used
for categorisation applications such as determining whether a patient will acquire a particular disease. It
views problems in terms of inputs, outputs and the weights of variables or ‘features’ that associate inputs
with outputs. It has been likened to the way that neurons process signals, but the analogy to the brain's
function is relatively weak.
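The "inputs, outputs and weights" view can be made concrete with a tiny forward pass (the weights below are hand-set for illustration; training would learn them from data):

```python
import numpy as np

def sigmoid(z):
    """Squash any real number into (0, 1)."""
    return 1 / (1 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """One hidden-layer neural network: features -> hidden units -> output.

    The weights encode how input 'features' associate with the output
    (e.g., the probability that a patient acquires a particular disease).
    """
    h = sigmoid(W1 @ x + b1)      # hidden-layer activations
    return sigmoid(W2 @ h + b2)   # output probability in (0, 1)

# Illustrative hand-set weights; training would learn these from data
W1 = np.array([[1.5, -2.0], [0.5, 1.0]])
b1 = np.array([0.0, -0.5])
W2 = np.array([[2.0, -1.0]])
b2 = np.array([0.2])
x = np.array([0.7, 0.3])          # two input features for one patient
y = forward(x, W1, b1, W2, b2)
```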

Natural language processing

Making sense of human language has been a goal of AI researchers since the 1950s. This field, natural
language processing (NLP), includes applications such as speech recognition, text analysis, translation
and other goals related to language. There are two basic approaches: statistical and semantic NLP.
Statistical NLP is based on machine learning (deep learning neural networks in particular) and has
contributed to a recent increase in recognition accuracy. It requires a large ‘corpus’, or body of
language, from which to learn.

In healthcare, the dominant applications of NLP involve the creation, understanding and classification
of clinical documentation and published research. NLP systems can analyse unstructured clinical notes

on patients, prepare reports (eg on radiology examinations), transcribe patient interactions and conduct
conversational AI.
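A toy statistical-NLP sketch of note classification (the snippets and labels below are invented for illustration; real clinical NLP uses large corpora and typically deep learning, but the bag-of-words principle is the same):

```python
# Classify short clinical snippets by topic with a bag-of-words model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

notes = [
    "chest x-ray shows small left lower lobe opacity",
    "ct scan of abdomen without acute findings",
    "mri demonstrates no focal lesion",
    "ecg reveals atrial fibrillation with rapid rate",
    "echocardiogram shows reduced ejection fraction",
    "patient reports palpitations and chest tightness",
]
labels = ["radiology"] * 3 + ["cardiology"] * 3

# TF-IDF turns each note into word-frequency features; Naive Bayes
# learns which words predict which label.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(notes, labels)
pred = clf.predict(["follow-up x-ray of the chest"])[0]
```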

Rule-based expert systems

Expert systems based on collections of ‘if-then’ rules were the dominant technology for AI in the 1980s
and were widely used commercially in that and later periods. In healthcare, they were widely employed
for ‘clinical decision support’ purposes over the last couple of decades5 and are still in wide use today.
Many electronic health record (EHR) providers furnish a set of rules with their systems today.

Expert systems require human experts and knowledge engineers to construct a series of rules in a
particular knowledge domain. They work well up to a point and are easy to understand. However, when
the number of rules is large (usually over several thousand) and the rules begin to conflict with each
other, they tend to break down. Moreover, if the knowledge domain changes, changing the rules can be
difficult and time-consuming. They are slowly being replaced in healthcare by approaches based on
data and machine learning algorithms.

Physical robots

Physical robots are well known by this point, given that more than 200,000 industrial robots are
installed each year around the world. They perform pre-defined tasks like lifting, repositioning, welding
or assembling objects in places like factories and warehouses, and delivering supplies in hospitals.
More recently, robots have become more collaborative with humans and are more easily trained by
moving them through a desired task. They are also becoming more intelligent, as other AI capabilities
are being embedded in their ‘brains’ (really their operating systems). Over time, it seems likely that the
same improvements in intelligence that we've seen in other areas of AI will be incorporated into
physical robots.

Surgical robots, initially approved in the USA in 2000, provide ‘superpowers’ to surgeons, improving
their ability to see, create precise and minimally invasive incisions, stitch wounds and so
forth.6 Important decisions are still made by human surgeons, however. Common surgical procedures
using robotic surgery include gynaecologic surgery, prostate surgery and head and neck surgery.

Robotic process automation

This technology performs structured digital tasks for administrative purposes, ie those involving
information systems, as if they were a human user following a script or rules. Compared to other forms
of AI they are inexpensive, easy to program and transparent in their actions. Robotic process
automation (RPA) doesn't really involve robots – only computer programs on servers. It relies on a
combination of workflow, business rules and ‘presentation layer’ integration with information systems
to act like a semi-intelligent user of the systems. In healthcare, they are used for repetitive tasks like
prior authorisation, updating patient records or billing. When combined with other technologies like
image recognition, they can be used to extract data from, for example, faxed images in order to input it
into transactional systems.7
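The "semi-intelligent user following a script" idea can be sketched as a program that steps through structured records the way a human clerk would, applying business rules and routing each item (field names and rules below are invented for illustration, not from any real claims system):

```python
def process_claim(claim):
    """Apply clerk-style business rules to one claim record and return
    a (route, reason) decision, as an RPA bot would."""
    required = ("patient_id", "procedure_code", "amount")
    missing = [f for f in required if not claim.get(f)]
    if missing:
        return ("manual_review", "missing fields: " + ", ".join(missing))
    if claim["amount"] > 10_000:
        return ("manual_review", "high-value claim")
    return ("auto_submit", "passed checks")

# A work queue of structured records, as extracted from an information
# system (or, with image recognition, from faxed forms).
queue = [
    {"patient_id": "P1", "procedure_code": "93000", "amount": 120.0},
    {"patient_id": "P2", "procedure_code": "", "amount": 450.0},
    {"patient_id": "P3", "procedure_code": "27130", "amount": 42_000.0},
]
routed = [process_claim(c) for c in queue]
```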

We've described these technologies as individual ones, but increasingly they are being combined and
integrated; robots are getting AI-based ‘brains’, image recognition is being integrated with RPA.
Perhaps in the future these technologies will be so intermingled that composite solutions will be more
likely or feasible.

Diagnosis and treatment applications

Diagnosis and treatment of disease has been a focus of AI since at least the 1970s, when MYCIN was
developed at Stanford for diagnosing blood-borne bacterial infections.8 This and other early rule-based
systems showed promise for accurately diagnosing and treating disease, but were not adopted for
clinical practice. They were not substantially better than human diagnosticians, and they were poorly
integrated with clinician workflows and medical record systems.

More recently, IBM's Watson has received considerable attention in the media for its focus on precision
medicine, particularly cancer diagnosis and treatment. Watson employs a combination of machine
learning and NLP capabilities. However, early enthusiasm for this application of the technology has
faded as customers realised the difficulty of teaching Watson how to address particular types of
cancer9 and of integrating Watson into care processes and systems.10 Watson is not a single product
but a set of ‘cognitive services’ provided through application programming interfaces (APIs), including
speech and language, vision, and machine learning-based data-analysis programs. Most observers feel
that the Watson APIs are technically capable, but taking on cancer treatment was an overly ambitious
objective. Watson and other proprietary programs have also suffered from competition with free ‘open
source’ programs provided by some vendors, such as Google's TensorFlow.

Implementation issues with AI bedevil many healthcare organisations. Although rule-based systems
incorporated within EHR systems are widely used, including at the NHS,11 they lack the precision of
more algorithmic systems based on machine learning. These rule-based clinical decision support
systems are difficult to maintain as medical knowledge changes and are often not able to handle the
explosion of data and knowledge based on genomic, proteomic, metabolic and other ‘omic-based’
approaches to care.

This situation is beginning to change, but it is mostly present in research labs and in tech firms, rather
than in clinical practice. Scarcely a week goes by without a research lab claiming that it has developed
an approach to using AI or big data to diagnose and treat a disease with equal or greater accuracy than
human clinicians. Many of these findings are based on radiological image analysis,12 though some
involve other types of images such as retinal scanning13 or genomic-based precision medicine.14 Since
these types of findings are based on statistically-based machine learning models, they are ushering in an
era of evidence- and probability-based medicine, which is generally regarded as positive but brings with
it many challenges in medical ethics and patient/clinician relationships.15

Tech firms and startups are also working assiduously on the same issues. Google, for example, is
collaborating with health delivery networks to build prediction models from big data to warn clinicians
of high-risk conditions, such as sepsis and heart failure.16 Google, Enlitic and a variety of other startups
are developing AI-derived image interpretation algorithms. Jvion offers a ‘clinical success machine’
that identifies the patients most at risk as well as those most likely to respond to treatment protocols.
Each of these could provide decision support to clinicians seeking to find the best diagnosis and
treatment for patients.

There are also several firms that focus specifically on diagnosis and treatment recommendations for
certain cancers based on their genetic profiles. Since many cancers have a genetic basis, human
clinicians have found it increasingly complex to understand all genetic variants of cancer and their
response to new drugs and protocols. Firms like Foundation Medicine and Flatiron Health, both now
owned by Roche, specialise in this approach.

Both providers and payers for care are also using ‘population health’ machine learning models to
predict populations at risk of particular diseases17 or accidents18 or to predict hospital
readmission.19 These models can be effective at prediction, although they sometimes lack all the
relevant data that might add predictive capability, such as patient socio-economic status.
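Population-health models range from black-box machine learning to transparent weighted scores; a sketch of the latter for readmission risk (the factors are typical of such models, but the weights and cutoff are invented for illustration and are not a validated instrument):

```python
def readmission_risk(los_days, ed_visits_6mo, n_comorbidities,
                     acute_admission):
    """Toy weighted readmission-risk score over factors commonly used
    in population-health models. Weights and the cutoff are
    illustrative, not clinically validated."""
    score = min(los_days, 14) * 0.5          # length of stay (capped)
    score += min(ed_visits_6mo, 4) * 1.0     # recent ED utilisation
    score += min(n_comorbidities, 5) * 1.5   # comorbidity burden
    score += 3.0 if acute_admission else 0.0 # acute vs elective
    return "high" if score >= 10 else "low"
```

Note that, as the text observes, such a score has no term for socio-economic status; a factor the model never sees cannot contribute to its predictions.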

But whether rules-based or algorithmic in nature, AI-based diagnosis and treatment recommendations
are sometimes challenging to embed in clinical workflows and EHR systems. Such integration issues
have probably been a greater barrier to broad implementation of AI than any inability to provide
accurate and effective recommendations; and many AI-based capabilities for diagnosis and treatment
from tech firms are standalone in nature or address only a single aspect of care. Some EHR vendors
have begun to embed limited AI functions (beyond rule-based clinical decision support) into their
offerings,20 but these are in the early stages. Providers will either have to undertake substantial
integration projects themselves or wait until EHR vendors add more AI capabilities.

Patient engagement and adherence applications

Patient engagement and adherence has long been seen as the ‘last mile’ problem of healthcare – the
final barrier between ineffective and good health outcomes.

The more patients proactively participate in their own well-being and care, the better the outcomes –
utilisation, financial outcomes and member experience. These factors are increasingly being addressed
by big data and AI.

Providers and hospitals often use their clinical expertise to develop a plan of care that they know will
improve a chronic or acute patient's health.

However, that often doesn't matter if the patient fails to make the behavioural adjustment necessary, eg
losing weight, scheduling a follow-up visit, filling prescriptions or complying with a treatment plan.
Noncompliance – when a patient does not follow a course of treatment or take the prescribed drugs as
recommended – is a major problem.

In a survey of more than 300 clinical leaders and healthcare executives, more than 70% of the
respondents reported having less than 50% of their patients highly engaged and 42% of respondents
said less than 25% of their patients were highly engaged.21

If deeper involvement by patients results in better health outcomes, can AI-based capabilities be
effective in personalising and contextualising care? There is growing emphasis on using machine
learning and business rules engines to drive nuanced interventions along the care
continuum.22 Messaging alerts and relevant, targeted content that provoke action at moments that
matter are a promising field of research.

Another growing focus in healthcare is on effectively designing the ‘choice architecture’ to nudge
patient behaviour in a more anticipatory way based on real-world evidence. Through information
provided by provider EHR systems, biosensors, watches, smart phones, conversational interfaces and
other instrumentation, software can tailor recommendations by comparing patient data to other effective
treatment pathways for similar cohorts. The recommendations can be provided to providers, patients,
nurses, call-centre agents or care delivery coordinators.
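One simple form of "comparing patient data to effective treatment pathways for similar cohorts" is nearest-neighbour matching; a sketch with invented features and pathway names (the scaling, cohort data, and pathways are all illustrative):

```python
import numpy as np

# Prior patients (age, BMI, diabetic flag) and the pathway that worked
# for each of them. Data invented for illustration.
cohort_features = np.array([
    [55, 29.0, 1],
    [62, 31.5, 1],
    [34, 22.0, 0],
    [41, 24.5, 0],
], dtype=float)
cohort_pathway = ["intensive-lifestyle", "intensive-lifestyle",
                  "standard-followup", "standard-followup"]

def recommend(patient):
    """Recommend the pathway used by the most similar prior patient,
    after crude feature scaling so no feature dominates the distance."""
    scale = np.array([1 / 50.0, 1 / 10.0, 1.0])
    d = np.linalg.norm((cohort_features - patient) * scale, axis=1)
    return cohort_pathway[int(d.argmin())]
```

A real system would match against thousands of patients with richer features and validated outcomes, but the cohort-comparison logic is the same.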

Administrative applications

There are also a great many administrative applications in healthcare. The use of AI is somewhat less
potentially revolutionary in this domain as compared to patient care, but it can provide substantial
efficiencies.

These are needed in healthcare because, for example, the average US nurse spends 25% of work time
on regulatory and administrative activities. The technology that is most likely to be relevant to this
objective is RPA. It can be used for a variety of applications in healthcare, including claims processing,
clinical documentation, revenue cycle management and medical records management.

Implications for the healthcare workforce

There has been considerable attention to the concern that AI will lead to automation of jobs and
substantial displacement of the workforce. A Deloitte collaboration with the Oxford Martin
Institute26 suggested that 35% of UK jobs could be automated out of existence by AI over the next 10
to 20 years.

Other studies have suggested that while some automation of jobs is possible, a variety of external
factors other than technology could limit job loss, including the cost of automation technologies, labour
market growth and cost, benefits of automation beyond simple labour substitution, and regulatory and
social acceptance. These factors might restrict actual job loss to 5% or less.

Moreover, clinical processes for employing AI-based image analysis are a long way from being ready for
daily use. Different imaging technology vendors and deep learning algorithms have different foci: the
probability of a lesion, the probability of cancer, a nodule's feature or its location. These distinct foci
would make it very difficult to embed deep learning systems into current clinical practice.

Additionally, deep learning algorithms for image recognition require ‘labelled data’ – millions of images from
patients who have received a definitive diagnosis of cancer, a broken bone or other pathology. However,
there is no aggregated repository of radiology images, labelled or otherwise.

Finally, substantial changes will be required in medical regulation and health insurance for automated
image analysis to take off.

Ethical implications

Finally, there are also a variety of ethical implications around the use of AI in healthcare.
Healthcare decisions have been made almost exclusively by humans in the past, and the use of smart
machines to make or assist with them raises issues of accountability, transparency, permission and
privacy.

Perhaps the most difficult issue to address given today's technologies is transparency. Many AI
algorithms – particularly deep learning algorithms used for image analysis – are virtually impossible to
interpret or explain.

If a patient is informed that an image has led to a diagnosis of cancer, he or she will likely want to
know why. Deep learning algorithms, and even physicians who are generally familiar with their
operation, may be unable to provide an explanation.

5.2.How Does AI Help Healthcare?

1. PROVIDES REAL-TIME DATA

A critical component of diagnosing and addressing medical issues is acquiring accurate
information in a timely manner. With AI, doctors and other medical professionals can leverage
immediate and precise data to expedite and optimize critical clinical decision-making.
Generating more rapid and realistic results can lead to better preventative steps, cost
savings, and shorter patient wait times.

2. STREAMLINES TASKS

Artificial intelligence in medicine has already changed healthcare practices everywhere.

Innovations include appointment scheduling, translating clinical details and tracking patient
histories. AI is enabling healthcare facilities to streamline more tedious and meticulous tasks.
For example, intelligent radiology technology is able to identify significant visual markers,
saving hours of intense analysis. Automated systems also handle appointment scheduling,
patient tracking and care recommendations.

One specific task that is streamlined with AI is insurance review. AI is used to minimize
costs resulting from insurance claim denials. With AI, health providers can identify and
address mistaken claims before insurance companies deny payment for them. Not only does
this streamline the claims process, it also saves hospital staff the time needed to work through
the denial and resubmit the claim.

3. SAVES TIME AND RESOURCES

As more vital processes are automated, medical professionals have more time to assess
patients and diagnose illness and ailment. AI is accelerating operations to save medical
establishments precious productivity hours. In any sector, time equals money, so AI has the
potential to save hefty costs.

It’s estimated that around $200 billion is wasted in the healthcare industry annually. A good
portion of these unnecessary costs is attributed to administrative strains, such as filing,
reviewing and resolving accounts. Another area for improvement is medical necessity
determination.

4. ASSISTS RESEARCH

AI enables researchers to amass large swaths of data from various sources. The ability to draw
upon a rich and growing body of information allows for more effective analysis of deadly
diseases. As with real-time data, research can benefit from this wide body of information, as
long as it is easily translated.

Medical research bodies like the Childhood Cancer Data Lab are developing useful
software for medical practitioners to better navigate wide collections of data. AI has also been
used to assess and detect symptoms earlier in an illness’s progression.
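As a toy version of the data aggregation described above, the sketch below pools patient records from several sources and tallies symptom frequencies. All record structures and data here are made up for illustration; platforms like the one mentioned perform this kind of aggregation at far larger scale.

```python
# Sketch of pooling patient records from multiple sources and tallying
# symptom frequencies. All data and record keys are invented.
from collections import Counter

def aggregate_symptoms(*sources):
    """Merge record lists from several sources and count symptom mentions."""
    counts = Counter()
    for records in sources:
        for record in records:
            counts.update(record.get("symptoms", []))
    return counts

if __name__ == "__main__":
    hospital_a = [{"symptoms": ["fever", "cough"]}, {"symptoms": ["fever"]}]
    hospital_b = [{"symptoms": ["cough", "fatigue"]}]
    print(aggregate_symptoms(hospital_a, hospital_b).most_common())
```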

CHAPTER -6

FUTURE SCOPE :
6.1. THE FUTURE OF AI IN HEALTH CARE :
We believe that AI has an important role to play in the healthcare offerings of the future. In the form of
machine learning, it is the primary capability behind the development of precision medicine, widely
agreed to be a sorely needed advance in care.
Although early efforts at providing diagnosis and treatment recommendations have proven challenging,
we expect that AI will ultimately master that domain as well. Given the rapid advances in AI for
imaging analysis, it seems likely that most radiology and pathology images will be examined at some
point by a machine.
Speech and text recognition are already employed for tasks like patient communication and capture of
clinical notes, and their usage will increase.
The greatest challenge to AI in these healthcare domains is not whether the technologies will be capable
enough to be useful, but rather ensuring their adoption in daily clinical practice.
For widespread adoption to take place, AI systems must be approved by regulators, integrated with
EHR systems, standardised to a sufficient degree that similar products work in a similar fashion, taught
to clinicians, paid for by public or private payer organisations and updated over time in the field.
These challenges will ultimately be overcome, but they will take much longer to do so than it will take
for the technologies themselves to mature.
As a result, we expect to see limited use of AI in clinical practice within 5 years and more extensive
use within 10.
It also seems increasingly clear that AI systems will not replace human clinicians on a large scale, but
rather will augment their efforts to care for patients.
Over time, human clinicians may move toward tasks and job designs that draw on uniquely human
skills like empathy, persuasion and big-picture integration.
Perhaps the only healthcare providers who will lose their jobs over time may be those who refuse to
work alongside artificial intelligence.

CHAPTER - 7
CONCLUSION :

AI can undoubtedly bring new efficiencies and quality to healthcare outcomes in India. However,
gaps and challenges in the healthcare sector reflect deep-rooted issues around inadequate funding, weak
regulation, insufficient healthcare infrastructure, and deeply embedded socio-cultural practices. These
cannot be addressed by AI solutions alone.

Moreover, technological possibility cannot be equated to adoption. In India, poor digital infrastructure,
a large, diverse and unregulated private sector, and variable capacity among states and medical
professionals alike, mean that the adoption of AI is likely to be slow and deeply heterogeneous. The
same factors also make it quite likely that well-established private hospitals will be the main adopters.
This in turn would imply that much of the dominant narrative or rationale for the development of AI
in healthcare, in terms of improving equity and quality, is unlikely to be addressed through market
forces alone: these solutions are more likely to serve populations who already have access to high-
quality care, typically in cities with well-developed digital infrastructure. In many small hospitals and
single-provider practices in India, administrative systems have barely moved beyond rudimentary ICT
solutions such as invoicing and billing platforms.

REFERENCES :

1. https://www.image-net.org/challenges/LSVRC/2017/index.php

2. https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html

3. Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., et al. (2016).
Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy
in Retinal Fundus Photographs. JAMA, 316(22), 2402. https://doi.org/10.1001/jama.2016.17216

4. Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017).
Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639),
115–118. https://doi.org/10.1038/nature21056

5. https://medium.com/machine-intelligence-report/data-not-algorithms-is-key-to-machine-learning-
success-69c6c4b79f33

6. http://www.datasciencecentral.com/profiles/blogs/10-great-healthcare-data-sets

7. JASON 2013, A Robust Health Data Infrastructure. JSR-13-Task-007.

8. JASON 2014, Data for Individual Health. JSR-14-Task-007.

9. https://www.cbinsights.com/research/artificial-intelligence-startups-healthcare/

10. JASON 2017, Perspectives on Research in Artificial Intelligence and Artificial General Intelligence
Relevant to DoD. JSR-16-Task-003.

11. https://contently.com/strategist/2017/05/23/artificial-intelligence-hype-cycle-5-stats/

12. K. Davis, K. Stremikis, C. Schoen, and D. Squires, Mirror, Mirror on the Wall, 2014 Update: How
the U.S. Health Care System Compares Internationally, The Commonwealth Fund, June 2014.

13. http://www.pewresearch.org/fact-tank/2017/01/12/evolution-of-technology/

14. http://www.pewinternet.org/fact-sheet/mobile/

15. For instance DirectDerm: https://www.directderm.com/, and PlushCare:
https://techcrunch.com/2016/11/03/plushcare-nabs-8m-series-a-to-prove-telehealth-can-go-
mainstream/

16. Translating Artificial Intelligence into Clinical Care, Andrew L. Beam, Isaac S. Kohane, JAMA 316,
2368, 2016
17. Opportunities and Obstacles for Deep Learning in Biology and Medicine, CS Greene et al., bioRxiv
preprint first posted online May 28, 2017; doi: https://dx.doi.org/10.1101/142760.



