Ethical Issues in Artificial Intelligence in Healthcare
HAN20090214
British University Vietnam
Abstract
Artificial intelligence (AI) raises legal, ethical, and philosophical questions about privacy, surveillance, bias, and the role of human judgment. New digital technologies have also raised concerns about inaccuracy and data breaches, and failures in healthcare can harm patients, who encounter clinicians at their most vulnerable. AI in healthcare therefore raises legal and ethical difficulties for which there are as yet no clear rules. This review emphasizes algorithmic transparency, the privacy and security of all stakeholders, and the cybersecurity of linked vulnerabilities.
Introduction
Healthcare systems face growing medical demand, chronic illness, and resource constraints. As digital health technologies are adopted more widely, healthcare data is expanding; if exploited correctly, it could let health professionals concentrate on the causes of disease and monitor preventive actions and therapies. Decision-makers, including policymakers and legislators, should therefore be educated about these technologies. Computer scientists, statisticians, and clinical entrepreneurs agree that AI, and especially machine learning, will be critical to the success of healthcare reform (Marley, 2020). Computer programs that can reason and learn are called artificially intelligent; this includes adaptability, sensory comprehension, and social engagement. By extracting useful insights from the masses of digital data generated at every level of healthcare delivery, artificial intelligence (AI) could radically alter the industry (Drukker, 2020).
Generally, artificial intelligence is built from software and hardware systems. An artificial neural network (ANN) provides a theoretical basis for the development of AI algorithms: it is a simulation of the human brain, with weighted channels transferring information between individual neurons. AI programs can find complicated, non-linear associations in vast datasets, and by identifying and correcting algorithmic failures during training, they can improve the accuracy of the resulting predictive framework (Rong, 2020; Miller, 2018).
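A minimal sketch may make the ANN idea above concrete: each "neuron" combines weighted inputs and applies a non-linear function, which is what lets the network model non-linear associations in data. The layer sizes, feature count, and weights below are illustrative inventions, not taken from any cited system.

```python
import numpy as np

def forward(x, w1, b1, w2, b2):
    """One hidden-layer network: weighted sums plus a non-linear activation."""
    hidden = np.tanh(x @ w1 + b1)  # non-linearity lets the model capture non-linear associations
    return 1 / (1 + np.exp(-(hidden @ w2 + b2)))  # sigmoid output, e.g. a risk probability

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                     # four illustrative patient features
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # weighted channels between "neurons"
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
print(forward(x, w1, b1, w2, b2))               # untrained output; training adjusts the weights
```

Training consists of repeatedly adjusting the weights to reduce prediction error, which is the "identifying and correcting algorithmic failures" the paragraph above describes.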
New technologies may introduce inaccuracies and data breaches, and errors in high-risk healthcare may have severe repercussions for patients. This is crucial because patients encounter professionals at their most vulnerable. If harnessed properly, AI can deliver evidence-based guidance and decision support to clinicians, with applications in diagnostics, medication development, epidemiology, individualized treatment, and operational efficiency. Integrating AI solutions into medical practice requires a robust governance structure to safeguard people from harm, including unethical conduct (Lim, 2017).
A significant area of study in AI and healthcare is the use of data from electronic health records (EHRs). If the underlying database and IT architecture do not minimize the propagation of inconsistent or poor-quality data, it may be difficult to put the data that is gathered to good use.
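As a hedged illustration of that data-quality point, a pipeline might screen EHR extracts for missing values or implausible entries before they reach any model. The field names (age, hba1c) and thresholds below are hypothetical examples, not a standard.

```python
import pandas as pd

def screen_ehr(df: pd.DataFrame, max_missing: float = 0.2) -> list[str]:
    """Flag columns with too many missing values or impossible ranges (illustrative rules)."""
    issues = []
    for col, frac in df.isna().mean().items():
        if frac > max_missing:
            issues.append(f"{col}: {frac:.0%} missing")
    if "age" in df and ((df["age"] < 0) | (df["age"] > 120)).any():
        issues.append("age: values outside plausible range")
    return issues

records = pd.DataFrame({"age": [34, -1, 56], "hba1c": [5.6, None, None]})
print(screen_ehr(records))  # flags the heavily missing lab value and the impossible age
```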
Despite this, AI has the potential to advance clinical care, care quality, and research as a component of electronic health records. AI that has been properly built and trained on healthcare data may help identify clinically optimal procedures outside the customary channels of scientific publication, guideline development, and clinical support systems. Clinical practice patterns derived from electronic health data may also be studied by AI, which could help in the creation of new medical methodologies for health services (Char, 2020).
Economist Antonio Argandoña argues that proper moral standards should be enforced on information and data when emerging technologies depend on and are built from them (Hartman et al., 2021). He prescribes the following components:
- Truthfulness: those providing information must guarantee that it is true and reliable, at least to a fair degree.
- Privacy: when collecting or using someone else's data, the ethical boundaries that surround doing so must be respected.
- Respect for property and safety rights: areas of possible vulnerability, such as network security, sabotage, theft of information, and impersonation, must be strengthened and secured.
- Accountability: technology's increased anonymity and separation from its users increases the importance of each individual taking responsibility for their actions.
There are four major ethical problems that must be solved before the full potential of AI in healthcare can be realized: data privacy, algorithmic fairness, bias, and transparency (Gerke, 2020). There is also a political dimension to the question of whether AI systems may be considered legitimate (Rodrigues, 2020).
The goal is to provide policymakers with the tools they need to proactively address the ethically complex challenges raised by mandatory AI use in healthcare settings (Machado, 2020). Most of the legal discussion around AI has been motivated by worries about the lack of information about how algorithms work. As AI is increasingly deployed in potentially harmful settings, there is a rising need for responsible, ethical, and visible AI development and administration. Information availability and understanding are the two most crucial aspects of transparency, yet algorithm performance details are often hidden from public view (Albrecht, 2013).
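One partial, commonly discussed response to this transparency concern is to prefer models whose internal logic can be published and inspected. A minimal sketch using a linear model on synthetic data (the data and feature names are invented for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A logistic regression exposes one weight per input, so its "reasoning" can be
# published and audited, unlike many opaque models. Data here is synthetic.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

for i, coef in enumerate(model.coef_[0]):
    print(f"feature_{i}: weight {coef:+.2f}")  # sign and size show each input's influence
```

Interpretable baselines like this trade some predictive power for the "information availability and understanding" the paragraph above identifies as central to transparency.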
It has been suggested that machines that operate according to uncorrected principles and acquire new behavioural patterns may compromise the human capacity to identify the creator or operator responsible for a violation. This is troubling because it threatens the foundation of society's morals and the legal system's premise of responsibility: there may be no way to determine who is responsible for any damage done when AI is used. It is difficult to assess the seriousness of this threat, however, since the widespread adoption of such systems will drastically reduce conscious human involvement in the decisions they make (Tigard, 2020).
Using AI in a healthcare setting requires maintaining professionalism and integrity in the face of frequent interruptions and changing priorities (Mirbabaie, 2021). However, the capacity to evaluate a program and understand how it could fail is a basic and vital part of assessing the safety of any medical software. The software development process has similarities, for instance, to the way pharmaceuticals or mechanical systems are created, in terms of both their constituent parts and the processes involved.
Artificial intelligence systems are vulnerable to sudden and catastrophic failure when their environment or circumstances change: an AI may go from being very reliable to profoundly untrustworthy in a short time. Every AI technology has restrictions, even where there is little bias, and to make good choices humans need to be aware of and comfortable with those limitations. In addition, people sometimes use decision-support tools without questioning their validity. The court system is not immune to this sort of mistake; judges have revised their verdicts based on flawed risk assessments that resulted in wrongful outcomes (Mannes, 2020).
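The failure mode described above, reliable behaviour that degrades when the environment changes, can be partially guarded against by monitoring for distribution shift between the training data and live inputs. The sketch below uses a generic two-sample Kolmogorov-Smirnov test on synthetic data; it is one common technique, not a method from the cited sources.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted(train_feature, live_feature, alpha=0.01):
    """Two-sample KS test: flag a feature whose live distribution has shifted."""
    stat, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 5000)  # distribution the model was trained on
live = rng.normal(0.8, 1.0, 5000)   # shifted distribution seen in deployment
print(drifted(train, live))         # True: trigger human review before trusting predictions
```

A check like this does not fix the model; it only tells operators when the conditions under which the model was validated no longer hold.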
The use of AI without human involvement also prompts cybersecurity concerns. RAND Perspectives warns that "data diet" vulnerabilities might open up a new attack vector if AI is used for monitoring or network security in the national-security domain. The research also addresses domestic security problems, such as the increasing use of artificial agents by governments for citizen monitoring, which have been identified as possible threats to people's fundamental rights. These problems are serious because they endanger essential infrastructure, and with it people's lives, safety, and access to what they need. Since many cybersecurity flaws are difficult to spot until after the damage has been done, they pose a potentially serious risk (Rodrigues, 2020).
Bias in the datasets used to develop algorithms is a commonplace problem in artificial intelligence (AI) research and development. According to Buolamwini and Gebru, the datasets used in automated face recognition are biased, making such systems less effective at identifying people with darker skin tones, particularly women. Machine learning relies on large datasets to be effective, yet a great number of the datasets currently in use are drawn from clinical trial research on predetermined groups. For this reason, underserved and, by extension, underrepresented patient groups may fare worse under the resulting algorithms (Safdar, 2019).
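One concrete response to the finding above is to report performance per demographic subgroup rather than a single aggregate figure, so that gaps like the one Buolamwini and Gebru documented become visible. A toy sketch with invented labels and group names:

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Per-subgroup accuracy: large gaps suggest the model under-serves some groups."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

# Hypothetical predictions for two patient subgroups "A" and "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 0.75, 'B': 0.5}
```

An aggregate accuracy of 62.5% would hide the fact that group B fares markedly worse here, which is exactly the pattern the paragraph warns about.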
There are situations in which individuals fail to follow through on the ethical decisions they make, and in which responsible decision making goes horribly wrong; there are also, of course, situations in which individuals deliberately act dishonestly. Unethical decisions and actions are always a possibility. Before adopting robots and AI systems (AISs), they must therefore be developed, tested, evaluated, and analyzed logically and statistically for reliability, efficiency, stability, and ethical adherence. Verification and validation may help clinicians justify AIS use. Clinical ethics prohibit unaccountable behaviour, yet both physicians and AISs may be opaque, and an AIS cannot work in human care if it cannot be held accountable. Managers of organizations utilizing AISs should make it very clear to their medical staff that blaming the technology is not an acceptable means of escaping accountability (Smith, 2020).
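A minimal sketch of the statistical testing the paragraph above calls for is repeated cross-validation on held-out data, checking not just mean performance but its stability. The dataset is synthetic and the model and acceptance thresholds are placeholders, not clinical standards.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a clinical dataset (illustrative only).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Evaluate on ten disjoint folds to estimate reliability, not just one lucky split.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=10)
print(f"mean accuracy {scores.mean():.3f} +/- {scores.std():.3f}")

# A deployment gate might require both high and *stable* performance (thresholds invented).
assert scores.mean() > 0.8 and scores.std() < 0.05, "model fails reliability check"
```

Real clinical validation is far more demanding (prospective trials, subgroup analyses, regulatory review), but the principle of statistical acceptance criteria before adoption is the same.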
AI and Bias
AI systems have been shown to absorb and act on human and social biases, even at scale. The fault lies not in the method itself but in the data it employs. Models may be trained on a wide variety of data, including human assessments and data reflecting the downstream effects of social or historical injustices. There is also a possibility of bias in data collection and use, and user-generated data can function as a feedback loop that reinforces prejudice. We are unaware of any guidelines or standards for documenting and assessing these models; such standards should nonetheless be a foundation for future work by scientists and medical professionals (Nelson, 2019; Shah et al., 2020).
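Although the text notes the lack of agreed standards, one widely discussed documentation pattern is a structured "model card" recording a model's training data, intended use, and known limitations. The fields below are an illustrative subset and the example values are hypothetical, not a formal standard.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal documentation record for a clinical model (illustrative fields)."""
    name: str
    intended_use: str
    training_data: str
    known_biases: list[str] = field(default_factory=list)
    evaluation_subgroups: list[str] = field(default_factory=list)

card = ModelCard(
    name="sepsis-risk-v1",  # hypothetical model name
    intended_use="decision support only, not autonomous triage",
    training_data="single-site EHR extract, 2015-2019",
    known_biases=["under-represents patients under 18"],
    evaluation_subgroups=["sex", "age band", "ethnicity"],
)
print(card)
```

Even this minimal record forces the questions the section raises: whose data trained the model, who was left out, and where it has been shown to fail.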
As our dependence on AI systems grows, it becomes more important that the judgments they make are ethical and free of prejudice; what humans need is an open, understandable, and accountable AI. In several domains, AI algorithms already surpass humans at improving patient pathways and surgical results. Given that AI is expected to supplement, coexist with, or replace existing systems, entering the new era of healthcare without using AI would probably be both unscientific and immoral (Parikh, 2019).
Evaluation
The ethical theory we will discuss is utilitarianism, which has its origins in 18th and 19th century
social and political philosophy but whose central premise is just as important in the 21st
century (Hartman et al., 2021). The core concept of utilitarianism is that results matter, and that
we should make decisions based on how those results will affect the greater good.
Utilitarianism is a consequentialist theory of ethics and social policy because its proponents
argue that we should choose courses of action that have the greatest net benefit to society
(Hartman et al., 2021).
The utilitarian approach has made important contributions to sound ethical decision making, despite the criticisms levelled against it. When assessing the merits of utilitarian decision making, it helps to first consider some of the more general criticisms of the theory. One set of issues is that the effects of actions are hard to count, measure, compare, and quantify in the way utilitarian reasoning requires. To follow the utilitarian principle that judgments should be made by weighing the relative benefits and costs of several courses of action, we need some kind of comparative framework; in reality, however, certain comparisons and measurements can be challenging to make.
In an industry like healthcare, where human life matters most of all, decisions should be made to save as many lives as possible.
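A toy expected-value calculation can make the utilitarian arithmetic concrete, and also illustrate why it is contested, since it forces different outcomes onto a single numeric scale. The two policies and all numbers below are invented for illustration.

```python
# Toy utilitarian comparison: expected lives saved under two policies (numbers invented).
options = {
    "screen_everyone":    {"prob_success": 0.60, "lives_if_success": 100},
    "targeted_screening": {"prob_success": 0.90, "lives_if_success": 60},
}

for name, o in options.items():
    expected = o["prob_success"] * o["lives_if_success"]
    print(f"{name}: expected lives saved = {expected:.0f}")
# screen_everyone: 60, targeted_screening: 54 -> the utilitarian rule picks screen_everyone,
# but the single scalar hides the distributional questions the criticisms above point to.
```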
Conclusion
There is a growing need for morally sound AI in the medical field. Data bias may be reduced by training algorithms on objective, real-time information, and both the method and its implementation in a system need to be evaluated. Machine learning cannot take the place of doctors' experience, but it may help them make more informed choices; in low-resource settings where medical professionals are few, AI might be used for screening and evaluation. Since all AI decisions are made by algorithms, even the quickest are methodical compared with human decision-making. Therefore, even when actions have no legal consequences, it is not the technologies themselves but the minds behind them, and the people who use them, that must shoulder the burden of accountability. Even though there are ethical concerns associated with AI, it is expected to either integrate with or replace existing healthcare systems. Refusing to adopt AI to help humanity advance might be both unscientific and unethical, and further research should be undertaken from different perspectives.
(2163 words)
REFERENCE LIST