
A PROJECT REPORT

ON

“Artificial Intelligence
is more dangerous than
Nuclear Weapons”
Submitted in partial fulfillment of requirement for the award of the Degree of

BACHELOR OF COMMERCE
Submitted to

Osmania University
Hyderabad

By

1. T GAYATHRI 208620405061
2. G MAHALAKSHMI 208620405064
3. D POOJA 208620405065
4. SHREE PATHAK 208620405066

Under the guidance of


G. KAVITHA REDDY
Associate Professor

AVINASH COLLEGE OF COMMERCE


(Affiliated to Osmania University)
L.B. Nagar, Hyderabad
[2020 – 2023]
AVINASH COLLEGE OF COMMERCE
(Affiliated to Osmania University)
L.B. Nagar, Hyderabad
Department of Commerce

Certificate

This is to certify that the Project Report titled “Artificial Intelligence is more dangerous than Nuclear Weapons” is the bona fide work done by T GAYATHRI, G MAHALAKSHMI, D POOJA, and SHREE PATHAK, bearing the Hall Ticket Nos. 208620405061, 208620405064, 208620405065, and 208620405066, submitted in partial fulfillment of the requirements for the award of the degree of BACHELOR OF COMMERCE of Osmania University, Hyderabad, during the academic year 2020–2023.

Place: L.B. Nagar, Hyderabad
Date:

Internal Guide External Examiner

Principal

DECLARATION

We hereby declare that the Project work entitled “Artificial Intelligence is more dangerous than Nuclear Weapons”, submitted to the Department of Commerce, Avinash College of Commerce, L.B. Nagar, Hyderabad (affiliated to Osmania University), is a bona fide record of original work done by us under the guidance of G. KAVITHA REDDY, and that this project work is submitted in partial fulfillment of the requirements for the award of the degree of BACHELOR OF COMMERCE. This record has not been submitted to any other University or Institute for the award of any degree or diploma.

Place: L. B. Nagar

Date:

T GAYATHRI 208620405061

G MAHALAKSHMI 208620405064

D POOJA 208620405065

SHREE PATHAK 208620405066

ACKNOWLEDGEMENT

We would like to express our sense of gratitude to our Management and to Dr. Neetu Sachdeva, Principal, Avinash College of Commerce, L.B. Nagar, Hyderabad.

We thank our guide, Mrs. G. Kavitha Reddy, for her valuable guidance and support in completing the project. We are also thankful to all the faculty members of our college for their kind co-operation.

Lastly, we would like to express our love and affection to our beloved parents and our best wishes to our classmates for providing us moral support and encouragement.

Date:

Place: L. B. Nagar

T GAYATHRI
G MAHALAKSHMI
D POOJA
SHREE PATHAK

ABSTRACT

In recent years, with rapid technological advancement in both computing hardware and algorithms, Artificial Intelligence (AI) has demonstrated significant advantages over human beings in a wide range of fields, such as image recognition, education, autonomous vehicles, finance, and medical diagnosis. However, AI-based systems are generally vulnerable to various security threats throughout the whole process, ranging from the initial data collection and preparation to the training, inference, and final deployment. In an AI-based system, the data collection and pre-processing phases are vulnerable to sensor spoofing attacks and scaling attacks, respectively, while the training and inference phases of the model are subject to poisoning attacks and adversarial attacks, respectively. To address these severe security threats against AI-based systems, in this article we review the challenges and recent research advances for security issues in AI, so as to depict an overall blueprint for AI security. More specifically, we first take the lifecycle of an AI-based system as a guide to introduce the security threats that emerge at each stage, followed by a detailed summary of the corresponding countermeasures. Finally, some of the future challenges and opportunities for security issues in AI will also be discussed.
Table Of Contents

S.No.  Particulars                              Page No.

1.  Chapter 1   Introduction                    2 – 4
                Objectives                      5
                Need                            6
                Scope                           7
                Research Methodology            8 – 10

2.  Chapter 2   Research Literature             12 – 13

3.  Chapter 3   Industry Profile                15 – 28

4.  Chapter 4   Data Analysis                   30 – 56

5.  Chapter 5   Findings                        58 – 62
                Suggestions                     63 – 65
                Conclusion                      66 – 67

6.  Bibliography                                68
Chapter – 1

INTRODUCTION
Artificial intelligence (AI) is the field of computer science that develops systems able to perform tasks
commonly associated with intelligent human beings. Machine learning (ML) is a branch of AI that
develops programs and mathematical algorithms with the objective of providing computers the ability
to learn automatically without, or with less, human intervention. This subtype of AI is the most widely
used in radiology. However, there are other approaches of AI with potential applications in radiology,
such as use of natural language processing (NLP), applications for speech recognition, and even tools
for guiding patient planning or optimizing work lists.

At the dawn of AI, a wide variety of ML methods are being developed in various fields of radiology. ML serves to optimize data derived from almost every type of imaging technique. ML allows processing and analysis of radiologic data in different ways according to specific purposes and tasks. Radiology applications for image-based analysis, including computer-aided diagnosis (CAD) tools, radiologic image segmentation and registration, medical image classification, and lesion detection and classification, are based on ML. Currently, radiologists’ familiarity with the specific glossary of concepts and workflows related to ML applied to radiology is important because this language is becoming popular in the scientific and imaging communities.

Artificial Intelligence (AI) has become an integral part of our lives, and its use is only expected to
increase in the coming years. While AI offers numerous benefits, including improved efficiency and
productivity, it also poses significant threats that cannot be ignored. As AI becomes more advanced
and autonomous, the potential risks become greater, and there is a growing need for policymakers,
researchers, and industry leaders to address these threats and ensure that the benefits of AI are realized
in a safe and responsible way.
This study aims to provide an overview of the major threats posed by AI and their potential impact
on society. The study will examine the risks of AI in areas such as employment, bias, security,
autonomous weapons, privacy, control, social manipulation, and resource depletion. By identifying
and understanding these threats, policymakers, researchers, and industry leaders can work together to
develop effective strategies to address them.
It is important to note that the threats posed by AI are not purely hypothetical; many of these risks
have already materialized to some extent. For example, AI has been used in decision-making processes
in the criminal justice system, resulting in biased outcomes. AI-powered cyberattacks have also
become increasingly common, posing a significant threat to businesses and governments. As such, it
is imperative that we take these risks seriously and work towards mitigating them.
Overall, this study aims to raise awareness about the potential risks of AI and the importance of
developing strategies to address them. By doing so, we can ensure that the benefits of AI are realized
while minimizing its potential negative impact on society.
In recent years, applications making use of Artificial Intelligence (AI) have gained renewed popular interest. Expectations that AI might change the face of various life domains for the better are abundant. Be it medicine, mobility, scientific progress, the economy, or politics; hopes are that AI will increase the veracity of input, the effectiveness and efficiency of procedures, as well as the overall quality of outcomes. Irrespective of whether changes apply to the workplace, public management, industries producing goods and services, or private life, as usual with the diffusion of new technologies there is tremendous uncertainty as to how exactly developments will play out, what social consequences will manifest, and to what extent the respective expectations of stakeholders and societal groups will materialize. Oftentimes, some people will profit immensely from socio-technological innovations, while others are left behind and cannot cope with the unfolding of events. Thus, whenever new technologies bring about social change, the success or failure of their implementation depends upon the reaction of the affected people. People might happily accept new technology, they might not care about it or use it at all, or they may even show severe reactance towards it. There is initial empirical evidence suggesting that the general public itself shows considerable restraint when it comes to the broad societal diffusion of AI applications or robots, which might even border on actual fear of such technology. However, as fear and the respective threat perceptions are presuppositional theoretical constructs, they necessitate a more fine-grained approach that goes beyond broad claims of concern or even fear regarding autonomous systems.
Accordingly, in this paper, we argue for an improved assessment of the perceived threats of AI and
propose a survey scale to measure these threat perceptions. First, a broadly usable measurement would
need to address perceived threats of AI as a precondition to any actual fear experienced. This
conceptual difference is subsequently based on the literature on fear and fear appeals. Second, the
perceived threat of AI would need to take into account the context-dependency of respective fears as
most real-world applications of AI are highly domain-specific. AI that assists in the medical treatment of a person’s disease might be perceived vastly differently from an AI that takes over their job. Third, not only do perceptions hinge on the domain in which people encounter AI applications, it is also necessary to differentiate between the extent of an AI’s actual autonomy and its reach in inflicting consequences upon a person. Thus, it needs to be asked to what extent the AI is merely used for analysis of a given situation or, going even further, whether it is used to actively give suggestions or even to make autonomous decisions.
As the field of application is crucial for the mechanism and effects of threat perceptions concerning
AI, any standardized survey measure needs to be somewhat flexible and individually adaptable to
accommodate the necessities of a broad application that considers AI’s functions and the context of
implementation. That is why our scale construction opts for a design that can easily be adapted to
varying research interests of AI scholars.
Consequently, we developed a scale addressing threats of AI that takes into account such necessary
distinctions and subsequently tested the proposed measure for three domains (i.e. loan origination, job
recruitment and medical treatment that are subject to an AI application) in an online survey with
German citizens. In our proposed measure of perceived threats of AI, we aim to cover all aspects of
AI functionality and make it applicable to various societal fields, where AI applications are used.
Thereby, we highlight three contributions of our scale, which are addressed in the following:
(1) We underpin our scale development theoretically by connecting it with the psychological
literature on fear appeals.
(2) The construction of the scale differentiates between the discrete functionalities of AI that may
cause different emotional reactions.
(3) Moreover, we consider perceived threats of AI as dependent on the context of the AI’s
implementation. This means that any measure must pay respect to AI’s domain-specificity.
The collected data supports the factorial structure of the proposed TAI scale. Furthermore, results show that people differentiate between distinct AI functionalities, in that the extent of the functional reach and autonomy of an AI application evokes different degrees of threat perception irrespective of domain. Still, such distinct perceptions also differ between the domains tested. For instance, recognition and prediction with regard to a physical ailment, as well as the recommendation for a specific therapy made by an AI, do not evoke substantial threat perceptions. Contrarily, autonomous decision-making in which an AI unilaterally decides on the prescribed treatment was met with relatively greater apprehension. At the same time, the application of AI in medical treatment was generally perceived as less fearsome than situations where AI applications are used to screen applicants for a job or a financial loan.
Eventually, to measure construct validity, we assessed the effects of the Threats of Artificial
Intelligence (TAI) scale on emotional fear. Threat perceptions are a necessary, but not sufficient
prerequisite to fear. While most research directly focuses on fear, we will subsequently argue for the
benefits of addressing the preceding threat perceptions. Ultimately, the threat perceptions do in fact
trigger emotional fear. Lastly, we discuss the adoption and use of the TAI scale in survey
questionnaires and make suggestions for its application in empirical research as well as general
managerial recommendations with regard to public concerns of AI.
The scope of AI applications is quite wide and covers both familiar technologies and emerging areas that are still far from mass use; in other words, it spans the entire range of solutions, from vacuum cleaners to space stations. This diversity can be divided according to the key points of development. AI is not a monolithic domain. Moreover, some technological areas of AI are emerging as new sub-sectors of the economy and as separate entities, while simultaneously serving most areas of the economy. The growing use of AI leads to the adaptation of technologies in classical sectors of the economy throughout the value chain and transforms them, leading to the algorithmization of almost all functions, from logistics to company management.

OBJECTIVES

● To reduce the Job Losses caused by Artificial Intelligence automation.
● To overcome Social Manipulation caused by Artificial Intelligence algorithms.
● To eliminate Social Surveillance caused by Artificial Intelligence technology.
● To identify Biases caused by Artificial Intelligence.
● To know about the dangers of Autonomous Weapons powered by Artificial Intelligence.

Need to study the risks of Artificial Intelligence

Artificial intelligence (AI) has the potential to revolutionize our world, but it also carries certain risks that need to be studied and managed. Here are some key areas where the risks of AI should be explored:

1. Safety: As AI becomes more advanced, there is a risk that it could cause harm to humans or the environment.
2. Bias: AI systems are only as unbiased as the data they are trained on, and there is a risk that biased data could perpetuate and amplify existing inequalities and discrimination.
3. Privacy: AI has the potential to gather and analyze vast amounts of data about individuals, raising concerns about privacy and the potential for misuse of personal information.
4. Security: AI systems could be vulnerable to hacking, leading to cyberattacks or the theft of sensitive information.
5. Job displacement: The rise of automation through AI could lead to significant job displacement and economic disruption, particularly in industries where tasks can be easily automated.
6. Governance: As AI becomes more powerful, there is a risk that it could be used in ways that are not aligned with societal values or that are harmful to society as a whole.

It is important to study these risks and take steps to mitigate them as AI continues to advance and become more integrated into our lives. This may involve developing new technologies, policies, and regulations to manage the risks of AI and ensure that its benefits are maximized while its risks are minimized.

Scope of understanding the risks of Artificial Intelligence

Artificial intelligence (AI) has the potential to revolutionize our world, but it also carries certain risks that need to be studied and managed. Here are some key areas where the risks of AI should be explored:

1. Technical: There is a need to study the technical risks associated with AI systems, including issues related to reliability, safety, and security. Technical experts can explore these risks by examining the design and implementation of AI systems and identifying potential points of failure.

2. Social: The risks of AI extend beyond technical concerns, and social scientists can explore issues related to ethics, privacy, bias, and governance. They can examine how AI is impacting society and identify potential unintended consequences that could arise from its use.

3. Economic: The rise of AI has the potential to disrupt existing industries and lead to job displacement. Studying the economic risks associated with AI requires analyzing the potential impact on employment, income distribution, and economic growth.

4. Legal and regulatory: There is a need for legal and regulatory frameworks to manage the risks associated with AI. Studying the legal and regulatory risks of AI involves examining issues related to liability, intellectual property, and privacy, and identifying areas where new laws and regulations may be necessary.

5. International: As AI becomes more ubiquitous, there is a need for global cooperation to manage its risks. Studying the international risks associated with AI involves exploring issues related to governance, security, and diplomacy.

Overall, the scope of studying the risks of artificial intelligence is vast and requires interdisciplinary collaboration across technical, social, economic, legal, and international domains. By studying these risks, we can better understand the challenges associated with AI and develop strategies to mitigate its potential negative consequences.

RESEARCH METHODOLOGY
Methodology in research is defined as the systematic method to resolve a research problem through
data gathering using various techniques, providing an interpretation of data gathered and drawing
conclusions about the research data. Essentially, a research methodology is the blueprint of a research
or study. As such, the methodology in research proposal is of utmost importance.

Methodology vs. Methods


The confusion between “methodology” and “methods” in research is a common occurrence, especially with the terms sometimes being used interchangeably. Methods and methodology in the context of research refer to two related but different things: method is the technique used in gathering evidence; methodology, on the other hand, “is the underlying theory and analysis of how research does or should proceed”. Similarly, Birks and Mills define methodology as “a set of principles and ideas that inform the design of a research study,” while methods are the “practical procedures used to generate and analyze data.”
To summarize these definitions, methods cover the technical procedures or steps taken to do the
research, and methodology provides the underlying reasons why certain methods are used in the
process.

Methodological Approach or Methods Used in Research


The next step is to identify the different methods used in research. Traditionally, researchers approach research studies using the methodologies research institutions typically employ, which fall under two distinct paradigms, namely positivistic and phenomenological. Also sometimes called quantitative and qualitative respectively, the positivistic and phenomenological approaches play a significant role in determining your data-gathering process, especially the methods you are going to use in your research.

Research methods lay down the foundation of your research. According to Neil McInroy, the chief executive of the Centre for Local Economic Strategies, not using the appropriate research methods and design creates “a shaky foundation to any review, evaluation, or future strategy.” In any type of research, the data you will gather can come either in the form of numbers or descriptions, which means you will either be required to count or to converse with people. In research, there are two fundamental methods used for either approach: quantitative and qualitative research methods. Even if you take the path of a philosophy career, these are still methods that you may encounter and even use.

Quantitative Method

This approach is often used by researchers who follow the scientific paradigm. This method seeks to
quantify data and generalize results from a sample of a target population. It follows structured data
collection methods and processes with data output in the form of numbers. Quantitative research also
observes objective analysis using statistical means. Based on a report, quantitative research took the
biggest portion of the global market research spend in 2018.
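As a minimal illustration of the quantitative approach described above, the sketch below (in Python) computes descriptive statistics and an independent-samples t-test on hypothetical survey scores. The scenario, scores, and group labels are invented assumptions for illustration only; they are not data from this study.

```python
# Minimal sketch of a quantitative analysis on hypothetical survey data.
# All scores below are invented for illustration; they are not study results.
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert responses: perceived threat of AI
group_a = np.array([4, 5, 3, 4, 4, 5, 3, 4])  # e.g., shown an autonomous-decision scenario
group_b = np.array([2, 3, 2, 3, 2, 2, 3, 3])  # e.g., shown an analysis-only scenario

# Descriptive statistics summarize each sample numerically
print("Mean A:", group_a.mean(), "SD A:", group_a.std(ddof=1))
print("Mean B:", group_b.mean(), "SD B:", group_b.std(ddof=1))

# An independent-samples t-test checks whether the difference in means
# is larger than would be expected by chance
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

In practice, the choice of statistical test would depend on the design of the survey and the measurement level of the data; the t-test here simply stands in for the "objective analysis using statistical means" that characterizes the quantitative method.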

Qualitative Method

Unlike the quantitative approach that aims to count things in order to explain what is observed, the
qualitative research framework is geared toward creating a complete and detailed description of your
observation as a researcher. Rather than providing predictions and/or causal explanations, the
qualitative method offers contextualization and interpretation of the data gathered. This research
method is subjective and requires a smaller number of carefully chosen respondents.

Mixed Methods

Mixed methods are a contemporary approach that sprang from the combination of traditional quantitative and qualitative approaches. According to Brannen and Moss (2012), the mixed methods approach stemmed from its potential to help researchers view social relations and their intricacies more clearly by fusing together the quantitative and qualitative methods of research while recognizing the limitations of both at the same time.

Mixed methods are also known for the concept of triangulation in social research. According to Haq,
triangulation provides researchers with the opportunity to present multiple findings about a single
phenomenon by deploying various elements of quantitative and qualitative approaches in one research.
This is the kind of method that one may use when studying sleep and academic performance.

Hypothesis Of The Study
The potential threats posed by artificial intelligence have been a topic of much discussion and debate
in recent years. One hypothesis is that AI could become superintelligent and surpass human-level
intelligence, leading to the technological singularity, which could potentially pose a serious threat to
humanity. Another concern is the potential for widespread unemployment and economic instability as
AI continues to automate many jobs that are currently performed by humans. AI algorithms that are
trained on biased data can perpetuate and amplify that bias, leading to discriminatory outcomes.
Additionally, as AI systems become more widespread and powerful, they could be used for cyber
attacks, surveillance, and other malicious activities. Finally, there is growing concern around the
development of autonomous weapons that could make decisions and take actions without human
intervention, potentially leading to unintended consequences and catastrophic outcomes. It is important
for society to continue researching and developing AI in a responsible and ethical manner, taking into
account the potential risks and benefits of this rapidly advancing technology.

There are several potential threats posed by artificial intelligence (AI) that have been hypothesized by
experts in the field. Here are a few:

1. Unintended consequences: As AI systems become more complex, it becomes increasingly
difficult to predict their behavior. This means that even well-intentioned AI systems could have
unintended consequences that could be harmful.
2. Job displacement: As AI technology becomes more advanced, it has the potential to automate
many jobs that are currently performed by humans. This could lead to significant job
displacement, particularly in industries such as transportation and manufacturing.
3. Bias and discrimination: AI systems are only as unbiased as the data they are trained on. If
the data used to train an AI system is biased, then the system will also be biased. This could
lead to discrimination against certain groups of people.
4. Autonomous weapons: There is concern that AI could be used to develop autonomous
weapons that could make decisions about when and where to use lethal force without human
intervention. This could potentially lead to devastating consequences.
5. Lack of transparency and accountability: As AI systems become more complex, it becomes
more difficult to understand how they make decisions. This lack of transparency could make it
difficult to hold AI systems accountable for their actions.

These are just a few of the potential threats posed by AI. As AI technology continues to advance, it is
important to consider these potential threats and take steps to mitigate them.

Chapter – 2

RESEARCH LITERATURE

Sharma Academy ( October 22, 2022 ) - AI brings impressive applications to all of us with
remarkable benefits; But there are remarkably unanswered questions with social, political or ethical aspects. We
have to be aware of them. – “Opportunities & Challenges Of Artificial Intelligence in India”

Yashi Chowdhary ( March 4, 2022 ) - AI and machine learning are now becoming essential
to information security, as these technologies are capable of swiftly analyzing millions of data sets and
tracking down a wide variety of cyber threats from malware menaces to shady behavior that might
result in a phishing attack – “Artificial Intelligence And Laws In India”

Venu Madhav Govindu ( August 17, 2021 ) - Apart from enhancing the powers of
surveillance & control by the state, AI is being deployed by powerful corporations in an era of
unprecedented influence of capital & stratospheric levels of economic inequality. – “Artificial
Intelligence and its Discontents”

Kaushiki Sanyal and Rajesh Chakrabarti ( December 28, 2021 ) - Today’s AI suffers
from a number of novel unresolved vulnerabilities. They demonstrate that while AI systems can exceed
human performance in many ways, they can also fail in ways that a human never would. – “Artificial
Intelligence: Promise, Challenges and Threats for India”

Ram M ( July 2, 2021 ) - The artificial intelligence and machine learning driver is expected to
lead to negative employment outcomes in job families such as Education and Training, Legal and
Business and Financial Operations. – “Risks and Benefits of Artificial Intelligence for India's
Employment and Agrarian Economy”

Dr Urvashi Aneja ( February 18, 2019 ) - In the absence of thinking about both technical
feasibility and social viability, there is a strong risk that AI-based technology gains are likely to benefit
only a select few Indians. – “The Problem With India’s ‘AI for All’ Strategy”

Vidushi Marda ( December 15, 2018 ) - Artificial intelligence (AI) is an emerging focus area
of policy development in India. The country's regional influence, burgeoning AI industry and
ambitious governmental initiatives around AI make it an important jurisdiction to consider.
– “Artificial intelligence policy in India: a framework for engaging the limits of data-driven
decision-making”

Kul Bhushan ( July 19, 2018) - A wide implementation of a high-end technology like AI in
India is not going to be without challenges. – “Artificial Intelligence in Indian banking: Challenges
and opportunities”

Shashi Shekhar Vempati ( August 2016) - Though India does not suffer from a brain drain
of top-quality AI talent from university research labs to the industry, it must be wary to avoid this
concentration of intellectual energy. – “India and the Artificial Intelligence Revolution”

Amani Kaadoor ( 2016 ) - The development of emergent technologies carries with it ethical
issues and risks. Depending on how AI's development is managed, it may have beneficial and/or
deleterious effects. – “Managing the ethical and risk implications of rapid advances in artificial
intelligence”

Chapter – 3

INDUSTRY PROFILE
The Global Artificial Intelligence market size was valued at USD 136.55 billion in 2022 and is
projected to expand at a compound annual growth rate (CAGR) of 37.3% from 2023 to 2030. The
continuous research and innovation directed by tech giants are driving the adoption of advanced
technologies in industry verticals, such as automotive, healthcare, retail, finance, and manufacturing.
For instance, in November 2020, Intel Corporation acquired Cnvrg.io, an Israeli company that
develops and operates a platform for data scientists to build and run machine learning models, to boost
its artificial intelligence business. Technology has always been an essential element for these
industries, but artificial intelligence (AI) has brought technology to the center of organizations. For
instance, from self-driving vehicles to crucial life-saving medical gear, AI is being infused virtually
into every apparatus and program.

AI is proven to be a significant revolutionary element of the upcoming digital era. Tech giants like
Amazon.com, Inc.; Google LLC; Apple Inc.; Facebook; International Business Machines Corporation;
and Microsoft are investing significantly in the research and development of AI. These companies are
working to make AI more accessible for enterprise use cases. Moreover, various companies adopt AI
technology to provide a better customer experience. For instance, in March 2020, McDonald’s made
its most significant tech investment of USD 300 million to acquire an AI start-up in Tel Aviv to provide
a personalized customer experience using artificial intelligence.
The essential fact accelerating the rate of innovation in AI is accessibility to historical datasets. Since data storage and recovery have become more economical, healthcare institutions and government agencies are making unstructured data accessible to the research domain. Researchers are getting access to rich datasets, from historic rain trends to clinical imaging. Next-generation computing architectures, with access to rich datasets, are encouraging information scientists and researchers to innovate faster.

Furthermore, progress in deep learning and ANNs (Artificial Neural Networks) has also fueled the adoption of AI in several industries, such as aerospace, healthcare, manufacturing, and automotive. ANNs work by recognizing similar patterns and help in providing tailored solutions. Services such as Google Maps have been adopting ANNs to improve their routing and to act on the feedback received. ANNs are substituting conventional machine learning systems to produce more precise and accurate models. For instance, recent advancements in computer vision technology, such as GANs (Generative Adversarial Networks) and the SSD (Single Shot MultiBox Detector), have advanced digital image processing techniques. For instance, images and videos taken in low light or at low resolution can be transformed into HD quality by employing these techniques. The continuous research in computer vision has built the foundation for digital image processing in security & surveillance, healthcare, and transportation, among other sectors. Such emerging methods in machine learning are anticipated to alter the manner in which AI models are trained and deployed.

The WHO (World Health Organization) declared the novel coronavirus (COVID-19) outbreak a
pandemic in 2020, causing a massive impact on businesses and humankind. This pandemic has
emerged as an opportunity for AI-enabled computer systems to fight against the outbreak, as several
tech giants and start-ups started working on preventing, mitigating, and containing the virus. For
instance, the Chinese tech giant Alibaba's research institute Damo Academy has developed a
diagnostic algorithm to detect new coronavirus cases with the chest CT (Computed Tomography) scan.
The AI model used in the system has been trained with sample data from over 5,000 positive
coronavirus cases. In June 2020, Lunit developed an AI solution for the X-ray analysis of the chest for
simpler management of COVID-19 cases and offered assistance in interpreting, monitoring, and
patient trials.
The COVID-19 outbreak is expected to stimulate the market growth of next-generation tech domains,
including artificial intelligence, owing to the mandated WFH (work-from-home) policy due to the
pandemic. For instance, LogMeIn, Inc., a U.S.-based company that provides SaaS (Software-as-a-
Service) and cloud-based customer engagement and remote connectivity & collaboration services, has

experienced a significant increase in new sign-ups across its product portfolios amid the pandemic.
Also, tech companies are expanding their product offerings and services to widen availability across
the globe. For instance, in April 2020, Google LLC launched an AI-enabled chatbot called Rapid
Response Virtual Agent for call centers. This chatbot is built to respond to issues customers might be
experiencing due to the coronavirus (COVID-19) outbreak over voice, chat, and other social channels.

Solution Insights
Software solutions led the market and accounted for more than 36.7% of the global revenue in 2022.
This high percentage can be attributed to prudent advances in information storage capacity, high
computing power, and parallel processing capabilities to deliver high-end services. Furthermore, the
ability to extract data, provide real-time insight, and aid decision-making, has positioned this segment
to capture the most significant portion of the market. Artificial intelligence software solutions include
libraries for designing and deploying artificial intelligence applications, such as primitives, linear
algebra, inference, sparse matrices, video analytics, and multiple hardware communication
capabilities. The need for enterprises to understand and analyze visual content to gain meaningful
insights is expected to spur the adoption of artificial intelligence software over the forecast period.

Companies adopt AI services to reduce their overall operational costs, yielding more profit. Artificial Intelligence as a Service, or AIaaS, is being used by companies to obtain a competitive advantage via the cloud. Artificial intelligence services include installation, integration, maintenance, and support undertakings. The segment is projected to grow significantly over the forecast period. AI hardware includes chipsets such as GPUs (Graphics Processing Units), CPUs, application-specific integrated circuits (ASICs), and field-programmable gate arrays (FPGAs). GPUs and CPUs currently dominate the artificial intelligence hardware market due to the high computing capabilities required for AI frameworks. For instance, in September 2020, Atomwise partnered with GC Pharma to offer AI-based services to the latter and help develop more effective novel hemophilia therapies.

Technology Insights
Owing to its growing prominence in complicated data-driven applications, including text/content and speech recognition, the deep learning segment led the market and accounted for around 36.4% of global revenue in 2022. Deep learning offers lucrative investment opportunities as
it helps overcome the challenges of high data volumes. For instance, in July 2020, Zebra Medical
Vision collaborated with TELUS Ventures to enhance the availability of the former’s deep learning
solutions in North America and expand AI solutions to clinical care settings and new modalities.

Machine learning and deep learning cover significant investments in AI. They include both AI
platforms and cognitive applications, including tagging, clustering, categorization, hypothesis
generation, alerting, filtering, navigation, and visualization, which facilitate the development of
advisory, intelligent, and cognitively enabled solutions. The growing deployment of cloud-based
computing platforms and on-premises hardware equipment for the safe and secure restoration of large
volumes of data has paved the way for the expansion of the analytics platform. Rising investments in
research and development by leading players will also play a crucial role in increasing the uptake of
artificial intelligence technologies. During the forecast period, the NLP segment is expected to gain
momentum. NLP is becoming increasingly widely used in various businesses to understand client
preferences, evolving trends, purchasing behavior, decision-making processes, and more, in a better
manner.

End-use Insights
The advertising & media segment led the market and accounted for more than 19.5% of the global
revenue share in 2022. This high share is attributable to the growing AI marketing applications with
significant traction. For instance, in January 2022, Cadbury started an initiative to let small business owners create their own ad for free using the face and voice of a celebrity, with the help of an AI tool.
However, the healthcare sector is anticipated to gain a leading share by 2030. The healthcare segment
has been segregated based on use cases such as robot-assisted surgery, dosage error reduction, virtual
nursing assistants, clinical trial participant identifier, hospital workflow management, preliminary
diagnosis, and automated image diagnosis. The BFSI segment includes financial analysis, risk
assessment, and investment/portfolio management solicitations.
Artificial intelligence has witnessed a significant share in the BFSI sector due to the high demand for
risk & compliance applications along with regulatory and supervisory technologies (SupTech). By
using AI-based insights in Suptech tools in financial markets, the authorities are increasingly
examining FinTech-based apps used for regulatory, supervisory, and oversight purposes for any
potential benefits. In a similar vein, regulated institutions are creating and implementing FinTech
applications for reporting and regulatory and compliance obligations. Financial institutions are using
AI applications for risk management and internal controls as well. The combination of AI technology
with behavioral sciences enables large financial organizations to prevent wrongdoing, moving the
emphasis from ex-post resolution to proactive prevention.
Other verticals for artificial intelligence systems include retail, law, automotive & transportation,
agriculture, and others. The conversational AI platform is one of the most used applications in every
vertical. For instance, in April 2020, Google LLC launched a Rapid Response Virtual Agent for call
centers. This new chatbot is built to respond to issues customers might be experiencing due to the
coronavirus (COVID-19) outbreak over voice, chat, and other social channels. The retail segment is
anticipated to witness a substantial rise owing to the increasing focus on providing an enhanced
shopping experience. An increasing amount of digital data in text, sound, and images from different
social media sources is driving the need for data mining and analytics. In the entertainment and
advertising industry, AI has been creating a positive impact, and companies are using AI techniques
to promote their products and connect to the customer base.

Regional Insights
North America dominated the market and accounted for over 36.8% share of global revenue in 2022.
This high share is attributable to favorable government initiatives to encourage the adoption of
artificial intelligence (AI) across various industries. For instance, in February 2019, U.S. President
Donald J. Trump launched the American AI Initiative as the nation’s strategy for promoting leadership
in artificial intelligence. As part of this initiative, Federal agencies have fostered public trust in AI-
based systems by establishing guidelines for their development and real-life implementation across
different industrial sectors.

The regional market in Asia Pacific is anticipated to witness significant growth in the artificial
intelligence market. This growth owes to the significantly increasing investments in artificial
intelligence. For instance, in April 2018, Baidu, Inc., a China-based tech giant, announced that it had
entered into definitive agreements with investors concerning the divestiture of its financial services
group (FSG), providing wealth management, consumer credit, and other business services. The
investors are led by Carlyle Investment Management LLC and Tarrant Capital IP, LLC, with
participation from ABC International and Taikanglife, among others. Also, a growing number of start-
ups in the region are boosting the adoption of AI to improve operational efficiency and enable process
automation.

Key Companies & Market Share Insights


Vendors in the market are focusing on increasing the customer base to gain a competitive edge in the
industry. Therefore, key players are taking several strategic initiatives, such as mergers and
acquisitions, and partnerships with other major companies. For instance, in April 2020, Advanced
Micro Devices announced a strategic alliance with Oxide Interactive LLC, a video game developer
company, to develop graphics technologies for the cloud gaming market space. Both companies have
planned to create a set of tools & techniques to address the real-time demands of cloud-based gaming.
Also, in December 2019, Intel Corporation completed the acquisition of Habana Labs Ltd., an Israel-
based deep learning company. This acquisition is anticipated to strengthen Intel Corporation’s AI
portfolio and boost its efforts in the AI silicon market.

In September 2019, IBM Watson Health signed an agreement with Guerbet, a France-based medical
imaging company, to develop an AI software solution for cancer diagnostics and monitoring. This
partnership has marked an extension of their earlier collaboration on liver cancer diagnostics and
monitoring. Furthermore, in January 2019, Intel Corporation announced its partnership with Alibaba
Group Holding Limited (China) to co-develop an AI-powered tracking technology to be deployed at
the Olympic Games 2020. This technology is based on Alibaba’s cloud computing technology and
Intel’s hardware to power a deep learning application to extract athletes’ 3D forms in competition or
training. Some prominent players in the global artificial intelligence market include:

• Advanced Micro Devices


• AiCure
• Arm Limited
• Atomwise, Inc.
• Ayasdi AI LLC
• Clarifai, Inc
• Cyrcadia Health
• Enlitic, Inc.
• Google LLC
• H2O.ai.
• HyperVerge, Inc.
• International Business Machines Corporation
• IBM Watson Health
• Intel Corporation
• Iris.ai AS.
• Lifegraph
• Microsoft
• NVIDIA Corporation
• Sensely, Inc.
• Zebra Medical Vision, Inc

Global Artificial Intelligence Market Segmentation
This report forecasts revenue growth at global, regional, and country levels and provides an analysis
of the latest industry trends in each of the sub-segments from 2017 to 2030. For this study, Grand View
Research has segmented the global artificial intelligence market report based on solution, technology,
end-use, and region:

North America controlled the market and accounted for a massive revenue share in the global market. This high share is attributable to favorable government initiatives that encourage the adoption of AI across various industries. For instance, in February 2019, US President Donald J. Trump launched the American AI Initiative as the country's strategy for encouraging leadership in AI. As part of this initiative, federal agencies have promoted public trust in AI-based systems by instituting guidelines for their development and real-life application across varied industrial sectors.

In the Asia Pacific, the market is expected to observe a notable CAGR over the forecast period. This development owes to the remarkably escalating investments in artificial intelligence.

Artificial intelligence in drug discovery and development


The use of artificial intelligence (AI) has been increasing in various sectors of society, particularly the
pharmaceutical industry. In this review, we highlight the use of AI in diverse sectors of the
pharmaceutical industry, including drug discovery and development, drug repurposing, improving
pharmaceutical productivity, and clinical trials, among others; such use reduces the human workload
as well as achieving targets in a short period of time. We also discuss crosstalk between the tools and
techniques utilized in AI, ongoing challenges, and ways to overcome them, along with the future of
AI in the pharmaceutical industry.

Artificial intelligence: things to know
Over the past few years, there has been a drastic increase in data digitalization in the pharmaceutical
sector. However, this digitalization comes with the challenge of acquiring, scrutinizing, and applying
that knowledge to solve complex clinical problems. This motivates the use of AI, because it can handle
large volumes of data with enhanced automation. AI is a technology-based system involving various
advanced tools and networks that can mimic human intelligence. At the same time, it does not threaten
to replace human physical presence completely. AI utilizes systems and software that can interpret and
learn from the input data to make independent decisions for accomplishing specific objectives. Its
applications are continuously being extended in the pharmaceutical field, as described in this review.
According to the McKinsey Global Institute, rapid advances in AI-guided automation are likely to completely change the work culture of society.

AI: networks and tools


AI involves several method domains, such as reasoning, knowledge representation, and solution search, and, among them, the fundamental paradigm of machine learning (ML). ML uses algorithms that can recognize patterns within a set of data that has been further classified. A subfield of ML is deep learning (DL), which engages artificial neural networks (ANNs). These comprise a set of interconnected sophisticated computing elements involving ‘perceptrons’ analogous to human biological neurons, mimicking the transmission of electrical impulses in the human brain. ANNs constitute a set of nodes, each receiving a separate input and ultimately converting it to output, either singly or multi-linked, using algorithms to solve problems. ANNs include various types, such as multilayer perceptron (MLP) networks, recurrent neural networks (RNNs), and convolutional neural networks (CNNs), which utilize either supervised or unsupervised training procedures.

MLP networks have applications in pattern recognition, optimization aids, process identification, and control; they are usually trained by supervised training procedures, operate in a single (forward) direction only, and can be used as universal pattern classifiers. RNNs are networks with closed loops, having the capability to memorize and store information; examples include Boltzmann machines and Hopfield networks. CNNs are a series of dynamic systems with local connections, characterized by their topology, and have uses in image and video processing, biological system modeling, processing of complex brain functions, pattern recognition, and sophisticated signal processing. More complex forms include Kohonen networks, RBF networks, LVQ networks, counter-propagation networks, and ADALINE networks.
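To make the MLP idea above concrete, the sketch below trains a small supervised multilayer perceptron on a synthetic pattern-recognition task using scikit-learn. The data set, layer size, and other parameters are assumptions for illustration only and do not come from the reviewed literature.

```python
# Minimal sketch of a supervised multilayer perceptron (MLP) classifier.
# Data and hyperparameters are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic pattern-recognition task: 200 samples, 10 input features, 2 classes
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of interconnected perceptron-style units,
# trained with a supervised procedure (backpropagation)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)

print("Test accuracy:", mlp.score(X_test, y_test))
```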
Several tools have been developed based on the networks that form the core architecture of AI systems. One such tool developed using AI technology is the International Business Machines (IBM) Watson supercomputer (IBM, New York, USA). It was designed to assist in the analysis of a patient’s medical information and its correlation with a vast database, resulting in suggested treatment strategies for cancer. This system can also be used for the rapid detection of diseases, as demonstrated by its ability to detect breast cancer in only 60 seconds.

AI in the lifecycle of pharmaceutical products


Involvement of AI in the development of a pharmaceutical product from the bench to the bedside can
be imagined given that it can aid rational drug design; assist in decision making; determine the right
therapy for a patient, including personalized medicines; and manage the clinical data generated and
use it for future drug development. E-VAI is an analytical and decision-making AI platform developed by Eularis, which uses ML algorithms along with an easy-to-use user interface to create analytical roadmaps based on competitors, key stakeholders, and currently held market share to predict key drivers of pharmaceutical sales, thus helping marketing executives allocate resources for maximum market-share gain, reverse poor sales, and anticipate where to make investments. Different applications of AI in drug discovery and development are summarized in the figure.

AI in drug discovery
The vast chemical space, comprising more than 10^60 molecules, fosters the development of a large number of drug molecules. However, the lack of advanced technologies limits the drug development process, making it a time-consuming and expensive task, which can be addressed by using AI. AI can recognize hit and lead compounds, and provide quicker validation of the drug target and optimization of the drug structure design. Different applications of AI in drug discovery are depicted in the figure.
Despite its advantages, AI faces some significant data challenges, such as the scale, growth, diversity, and uncertainty of the data. The data sets available for drug development in pharmaceutical companies can involve millions of compounds, and traditional ML tools might not be able to deal with these types of data. Quantitative structure-activity relationship (QSAR)-based computational models can quickly predict large numbers of compounds or simple physicochemical parameters, such as log P or log D. However, these models are still some way from predicting complex biological properties, such as the efficacy and adverse effects of compounds. In addition, QSAR-based models also face problems such as small training sets, experimental data errors in training sets, and a lack of experimental validation. To overcome these challenges, recently developed AI approaches, such as DL and relevant modeling studies, can be implemented for safety and efficacy evaluations of drug molecules based on big-data modeling and analysis. In 2012, Merck supported a QSAR ML challenge to observe the advantages of DL in the drug discovery process in the pharmaceutical industry. DL models showed significantly better predictivity compared with traditional ML approaches for 15 absorption, distribution, metabolism, excretion, and toxicity (ADMET) data sets of drug candidates.
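The following sketch illustrates the QSAR idea described above: predicting a simple physicochemical property from numeric molecular descriptors with a regression model. The descriptor matrix, target values, and the choice of a random-forest regressor are all assumptions for illustration; this is not Merck's challenge model or any published QSAR model.

```python
# Minimal QSAR-style sketch: predict a physicochemical property (e.g., log P)
# from numeric molecular descriptors. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical descriptor matrix: 500 compounds x 8 descriptors
# (e.g., molecular weight, ring count, polar surface area, ...)
X = rng.normal(size=(500, 8))
# Hypothetical target: a property depending on a few descriptors plus noise
y = 1.5 * X[:, 0] - 0.8 * X[:, 3] + rng.normal(scale=0.3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("R^2 on held-out compounds:", r2_score(y_test, model.predict(X_test)))
```

Real QSAR work would replace the synthetic matrix with descriptors computed from actual molecular structures and would validate the model against experimental measurements, which is exactly where the small-training-set and data-error problems mentioned above appear.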

Application of Machine Learning and Artificial Intelligence in the oil and gas industry
The petroleum industry involves systems for oil field exploration, reservoir engineering, and drilling and production engineering. Oil and gas are also the source of other chemicals, including pharmaceutical drugs, solvents, fertilizers, pesticides, and plastics. If prices of fossil fuels continue to rise, fossil fuel companies will need to develop new technology and strengthen operations to increase efficiency and build on their existing capabilities. However, many oil fields are now mature and are producing more water than oil because of water front arrival, channelling, coning, or water breakthrough, which makes it uneconomical to produce petroleum from the formation. Moreover, because the price of oil has not been stable, fairly costly engineering or equipment is of little interest to oil and gas firms. By using either Inflow Control Devices (ICD) or Inflow Control Valves (ICV) together with downhole sensor systems, the simplest way to preserve efficiency and productivity is to maximize cumulative extraction through effective and smart technologies. Improved control of major oilfields needs fast decision-making while taking ongoing challenges into account. The Smart Oilfield achieves this by developing a comprehensive oilfield technology infrastructure, digitizing instrumentation systems, and creating network-based knowledge exchange in order to optimize the production process.

It is crystal clear that digital technology has a tremendous influence on business and society. Over time, digital transformation has come to be regarded as the "fourth industrial revolution", characterized by the convergence of technologies that blur the boundaries between the physical, digital and biological realms, such as artificial intelligence, robotics and autonomous vehicles. Artificial Intelligence (AI) technologies are gaining considerable attention because of their rapid response speeds and robust capacity for generalization. Machine learning demonstrates good potential for assisting and enhancing traditional approaches across a wide range of reservoir engineering issues. Various studies employ advanced machine-learning algorithms such as Fuzzy Logic (FL), Artificial Neural Networks (ANN), Support Vector Machines (SVM), and Response Surface Models (RSM) as tools for classification and regression problems. Several of the machine-learning algorithms used in reservoir engineering fall under the supervised learning category. Most reservoir engineering implementations also use evolutionary optimization techniques, such as Genetic Algorithms (GA) and Particle Swarm Optimization (PSO).
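As a hedged illustration of the supervised regression workflows mentioned above, the sketch below fits a support vector machine to a synthetic reservoir-style data set. The input features (porosity, water saturation, net-to-gross), the target, and the kernel settings are assumptions for illustration, not field data or a published workflow.

```python
# Minimal sketch of a supervised regression workflow of the kind used in
# reservoir engineering studies (here a support vector machine).
# Features and targets are synthetic placeholders, not field data.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical inputs: porosity, water saturation, net-to-gross (300 samples)
X = rng.uniform(low=[0.05, 0.1, 0.3], high=[0.35, 0.9, 1.0], size=(300, 3))
# Hypothetical target: a production indicator that rises with porosity and
# net-to-gross and falls with water saturation, plus measurement noise
y = 100 * X[:, 0] * X[:, 2] * (1 - X[:, 1]) + rng.normal(scale=0.5, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Standardizing inputs before an RBF-kernel SVM is common practice
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
svr.fit(X_train, y_train)

print("Held-out R^2:", svr.score(X_test, y_test))
```

The same pipeline shape (scale the inputs, fit a supervised model, score on held-out samples) applies whether the estimator is an SVM, an ANN, or a fuzzy-logic system; only the estimator and its tuning change.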

Study of blockchain technology throughout the oil and gas sector assesses the possibilities, difficulties, threats and developments in this sector. Blockchain technology can offer several benefits to the entire oil and gas sector, including reduced payment costs and increased accountability and performance. Advances in blockchain technology in the oil and gas sector would then migrate to modified blockchain networks, cross-chain solutions, and modified smart contracts, along with additional multidisciplinary expertise. Technical changes accompanying the implementation of blockchain methods in this sector are showcased in casing drilling technology; modern innovations in enhanced oil recovery; synthetic, thermal, physical and chemical techniques; Microbial Enhanced Oil Recovery (MEOR); and water alternating gas (WAG) processes.
This paper narrates state-of-the-art research related to the application of Machine Learning and AI techniques in the upstream oil and gas industry. The major objective of this paper is to unfold the merits of AI and machine learning techniques in various upstream sectors. Based on a systematic understanding of this industry, the paper presents workflows that utilise machine learning and AI for effective computation and decision-making. The paper also reviews how a handshake between the petroleum industry and numerical simulators with intelligent systems eases the work and improves productivity.

Artificial neural network (ANN)

Deep learning is a subset of machine learning in which a structure called an Artificial Neural
Network learns representations directly from data. Neural networks are one family of algorithms used in
ML for modelling data. In the oil and gas industry, deep learning algorithms help process huge
volumes of data and tend to achieve their best performance when large amounts of data are available;
features are picked out without human intervention, and deep learning can handle complex operations
that simpler machine learning algorithms cannot. Inputs are run through layers of neurons, which makes
the ANN an effective machine learning method for solving complicated problems. In the oil and gas industry,
ANNs are most widely used for nonlinear, complex problems that cannot be captured by a linear
relationship. A Feed-Forward ANN (FF-ANN) passes information in one direction through one or more
hidden layers of neurons. Areas of the petroleum industry where neural networks can be applied include
seismic pattern recognition, drill bit diagnosis, improvement of gas well production, identification of
sandstone lithofacies, and prediction and optimization of well performance. ANN models also help predict
pipeline conditions, enabling operators to assess and forecast the state of pipelines; predictions of
pipe failure rate and mechanical reliability using ANNs and other methods are discussed in the literature.
Machine learning models can further be used to estimate the sand fraction in a reservoir, with seismic
impedance, instantaneous amplitude and frequency as inputs; such a model predicted sand fraction with
shorter run times and enhanced visualization. An ANN combined with Generalized Autoregressive
Conditional Heteroscedasticity (ANN-GARCH) has also been used to predict oil price
volatility.
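To make the feed-forward idea concrete, the sketch below trains a tiny one-hidden-layer network with plain gradient descent on synthetic data. It is illustrative only; the data, layer sizes and learning rate are arbitrary assumptions rather than values from any oilfield or pipeline study mentioned above.

# A minimal feed-forward neural network (one hidden layer) trained with
# gradient descent on synthetic data. Illustrative only: the data, layer
# sizes and learning rate are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression problem: predict y from two input features.
X = rng.normal(size=(200, 2))
y = (np.sin(X[:, 0]) + 0.5 * X[:, 1]).reshape(-1, 1)

# Network parameters: 2 inputs -> 8 hidden neurons -> 1 output.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros((1, 1))

lr = 0.05
for epoch in range(2000):
    # Forward pass: information flows in one direction (feed-forward).
    h = np.tanh(X @ W1 + b1)          # hidden layer activations
    y_hat = h @ W2 + b2               # network prediction
    loss = np.mean((y_hat - y) ** 2)  # mean squared error

    # Backward pass: gradients of the loss with respect to each weight.
    grad_y = 2 * (y_hat - y) / len(X)
    grad_W2 = h.T @ grad_y
    grad_b2 = grad_y.sum(axis=0, keepdims=True)
    grad_h = grad_y @ W2.T * (1 - h ** 2)   # tanh derivative
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)

    # Gradient-descent update.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(f"final training MSE: {loss:.4f}")

Production-grade networks would normally be built with a framework such as TensorFlow or PyTorch, but the forward pass, loss and gradient update are conceptually the same.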

Artificial Intelligence in production


Artificial intelligence (AI) technologies experience an ever-growing interest both in research and
industry. Though they offer high potential for manufacturing, recent studies among practitioners reveal
that there is a lack of knowledge for implementing AI in production environments and especially for
leading an implementation successfully. Therefore, in this paper, a competence profile for shop floor
managers has been developed. Shop floor managers are seen as suitable levers in companies for
implementing AI technologies. This profile focuses on their practical requirements and encompasses
relevant production-oriented use cases, social factors for engaging employees and a deeper understanding
of the models to interpret the results. The profile has been put into practice by means of learning
content and has been tested by shop floor managers. The feedback is promising: About 78% of the
testers stated that the content is helpful for them in understanding the benefits, challenges, tasks, and
risks when implementing AI based projects. The results serve as a baseline for future development of
learning materials with corresponding exercises to be taught in learning factories targeting the hands-
on AI implementation.

Artificial Intelligence and Data Science in the Automotive Industry


Data science and machine learning are the key technologies when it comes to the processes and
products with automatic learning and optimization to be used in the automotive industry of the future.
This article defines the terms "data science" (also referred to as "data analytics") and "machine
learning" and how they are related. In addition, it defines the term "optimizing analytics" and illustrates
the role of automatic optimization as a key technology in combination with data analytics. It also uses
examples to explain the way that these technologies are currently being used in the automotive industry
on the basis of the major subprocesses in the automotive value chain (development, procurement,
logistics, production, marketing, sales and after-sales, connected customer). Since the industry is just
starting to explore the broad range of potential uses for these technologies, visionary application
examples are used to illustrate the revolutionary possibilities that they offer. Finally, the article
demonstrates how these technologies can make the automotive industry more efficient and enhance its
customer focus throughout all its operations and activities, extending from the product and its
development process to the customers and their connection to the product.

Chapter – 4

DATA ANALYSIS

What is Artificial Intelligence (AI)?


Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building
smart machines capable of performing tasks that typically require human intelligence. While AI is an
interdisciplinary science with multiple approaches, advancements in machine learning and deep
learning, in particular, are creating a paradigm shift in virtually every sector of the tech industry.

Artificial intelligence allows machines to model, or even improve upon, the capabilities of the human
mind. And from the development of self-driving cars to the proliferation of smart assistants like Siri
and Alexa, AI is increasingly becoming part of everyday life — and an area companies across every
industry are investing in.

The concept of what defines AI has changed over time, but at the core there has always been the idea
of building machines which are capable of thinking like humans.

After all, human beings have proven uniquely capable of interpreting the world around us and using
the information we pick up to effect change. If we want to build machines to help us do this more
efficiently, then it makes sense to use ourselves as a blueprint. AI, then, can be thought of as simulating
the capacity for abstract, creative, deductive thought – and particularly the ability to learn – using the
digital, binary logic of computers.

Research and development work in AI is split between two branches. One is labelled “applied AI”
which uses these principles of simulating human thought to carry out one specific task. The other is
known as “generalised AI” – which seeks to develop machine intelligences that can turn their hands
to any task, much like a person.

Research into applied, specialised AI is already providing breakthroughs in fields of study ranging from
quantum physics, where it is used to model and predict the behaviour of systems composed of billions
of subatomic particles, to medicine, where it is being used to diagnose patients based on genomic data.

In industry, it is employed in the financial world for uses ranging from fraud detection to improving
customer service by predicting what services customers will need. In manufacturing it is used to
manage workforces and production processes as well as for predicting faults before they occur,
therefore enabling predictive maintenance.

In the consumer world more and more of the technology we are adopting into our everyday lives is
becoming powered by AI – from smartphone assistants like Apple’s Siri and Google’s Google
Assistant, to self-driving and autonomous cars which many are predicting will outnumber manually
driven cars within our lifetimes.

Generalised AI is a bit further off – to carry out a complete simulation of the human brain would
require both a more complete understanding of the organ than we currently have, and more computing
power than is commonly available to researchers. But that may not be the case for long, given the
speed with which computer technology is evolving. A new generation of computer chips known as
neuromorphic processors is being designed to run brain-simulation code more efficiently.
And systems such as IBM’s Watson cognitive computing platform use high-level simulations of
human neurological processes to carry out an ever-growing range of tasks without being specifically
taught how to do them.

How does AI work?

As the hype around AI has accelerated, vendors have been scrambling to promote how their products
and services use it. Often, what they refer to as AI is simply a component of the technology, such as
machine learning. AI requires a foundation of specialized hardware and software for writing and
training machine learning algorithms. No single programming language is synonymous with AI, but
Python, R, Java, C++ and Julia have features popular with AI developers.
In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for
correlations and patterns, and using these patterns to make predictions about future states. In this way,
a chatbot that is fed examples of text can learn to generate lifelike exchanges with people, or an image
recognition tool can learn to identify and describe objects in images by reviewing millions of examples.
New, rapidly improving generative AI techniques can create realistic text, images, music and other
media.
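As a minimal sketch of this "ingest labelled data, learn patterns, predict" loop, the following Python example uses scikit-learn on a synthetic dataset; the dataset and the choice of logistic regression are illustrative assumptions, not methods taken from any source cited in this report.

# Minimal sketch of the supervised-learning loop described above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 1. Labelled training data: feature vectors X with known labels y.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 2. Learning: the model analyses the data for correlations between features and labels.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 3. Prediction: the learned patterns are applied to unseen examples.
predictions = model.predict(X_test)
print(f"accuracy on unseen data: {accuracy_score(y_test, predictions):.2f}")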
AI programming focuses on cognitive skills that include the following:
• Learning : This aspect of AI programming focuses on acquiring data and creating rules for
how to turn it into actionable information. The rules, which are called algorithms, provide
computing devices with step-by-step instructions for how to complete a specific task.
• Reasoning : This aspect of AI programming focuses on choosing the right algorithm to reach
a desired outcome.
• Self-Correction : This aspect of AI programming is designed to continually fine-tune
algorithms and ensure they provide the most accurate results possible.
• Creativity : This aspect of AI uses neural networks, rules-based systems, statistical methods
and other AI techniques to generate new images, new text, new music and new ideas.

What are the key developments in AI?


All of these advances have been made possible due to the focus on imitating human thought processes.
The field of research which has been most fruitful in recent years is what has become known as
“machine learning”. In fact, it’s become so integral to contemporary AI that the terms “artificial
intelligence” and “machine learning” are sometimes used interchangeably.
However, this is an imprecise use of language, and the best way to think of it is that machine learning
represents the current state-of-the-art in the wider field of AI. The foundation of machine learning is
that rather than have to be taught to do everything step by step, machines, if they can be programmed
to think like us, can learn to work by observing, classifying and learning from their mistakes, just like
we do.

The application of neuroscience to IT system architecture has led to the development of artificial neural
networks, and although work in this field has evolved over the last half century, it is only recently that
computers with adequate power have been available to make the task a day-to-day reality for anyone
except those with access to the most expensive, specialised tools. Perhaps the single biggest enabling
factor has been the explosion of data which has been unleashed since mainstream society merged itself
with the digital world. This availability of data – from things we share on social media to machine data
generated by connected industrial machinery – means computers now have a universe of information
available to them, to help them learn more efficiently and make better decisions.

What is the future of AI?
Real fears have been voiced that the development of intelligence which equals or exceeds our own, but can
work at far higher speeds, could have negative implications for the future of humanity, and not just in
apocalyptic sci-fi such as The Matrix or The Terminator, but also by respected scientists like Stephen Hawking.
Even if robots don’t eradicate us or turn us into living batteries, a less dramatic but still nightmarish
scenario is that automation of labour (mental as well as physical) will lead to profound societal change
– perhaps for the better, or perhaps for the worse.
This understandable concern led, in 2016, to the foundation of the Partnership on AI by a number of tech
giants including Google, IBM, Microsoft, Facebook and Amazon. This group researches and advocates for
ethical implementations of AI, and works to set guidelines for future research and deployment of robots and AI.

Negative Impacts Of Artificial Intelligence (AI)
Artificial intelligence (AI) is doing a lot of good and will continue to provide many benefits for our
modern world, but along with the good, there will inevitably be negative consequences. The sooner
we begin to contemplate what those might be, the better equipped we will be to mitigate and manage
the dangers.
Legendary physicist Stephen Hawking shared this ominous warning: “Success in creating effective AI
could be the biggest event in the history of our civilisation. Or the worst. So we cannot know if we
will be infinitely helped by AI or ignored by it and sidelined, or conceivably destroyed by it.”
The first step in being able to prepare for the negative impacts of artificial intelligence is to consider
what some of those negative impacts might be. Here are some key ones:

1. AI Bias
Since AI algorithms are built by humans, they can carry built-in bias introduced, intentionally or
inadvertently, by their creators. If AI algorithms are built with a bias, or the data in the training sets
they learn from is biased, they will produce biased results. This can lead to unintended consequences
like the ones we have seen with discriminatory recruiting algorithms and Microsoft’s Twitter chatbot that
became racist. As companies build AI algorithms, they need to develop and train them responsibly.

2. Loss of Certain Jobs


While artificial intelligence will create many jobs, and many people predict a net increase, or at least
expect that as many jobs will be created as are lost to AI technology, there are jobs people do today that
machines will take over. This will require changes to training and education programmes to prepare our
future workforce, as well as help for current workers transitioning to new positions that utilise their
unique human capabilities.

3. A shift in Human Experience


If AI takes over menial tasks and allows humans to significantly reduce the amount of time they need
to spend at a job, the extra freedom might seem like a utopia at first glance. However, in order to feel
their life has a purpose, humans will need to channel their newfound freedom into new activities that
give them the same social and mental benefits that their job used to provide. This might be easier for
some people and communities than others. There will likely be economic considerations as well when
machines take over responsibilities that humans used to get paid to do. The economic benefits of
increased efficiencies are pretty clear on the profit-loss statements of businesses, but the overall
benefits to society and the human condition are a bit more opaque.

4. Global Regulations
While technology has made our world a much smaller place than ever before, it also means that the new laws
and regulations AI technology requires will need to be agreed among various governments to allow safe and
effective global interactions. Since we are no longer isolated from one another, actions and decisions
regarding artificial intelligence in one country can easily affect others adversely. We are already seeing
this play out: Europe has adopted a robust regulatory approach to ensure consent and transparency, while
the US and particularly China allow their companies to apply AI much more liberally.

5. Accelerated Hacking
Artificial intelligence increases the speed at which things can be accomplished, and in many cases it
exceeds our ability as humans to follow along. With automation, nefarious acts such as phishing, delivering
viruses through software and exploiting AI systems because of the way they see the world may be difficult
for humans to uncover until there is a real quagmire to deal with.

6. AI Terrorism
Similarly, there may be new AI-enabled forms of terrorism to deal with, from the expansion of
autonomous drones and the introduction of robotic swarms to remote attacks or the delivery of disease
through nanorobots. Our law enforcement and defence organisations will need to adjust to the potential
threats these present.
It will take time and extensive human reasoning to determine how best to prepare for a future with
even more artificial intelligence applications, so that the potential for adverse impacts from its further
adoption is minimised as much as possible.

Is Artificial Intelligence (AI) A Threat to Humans?

Are artificial intelligence (AI) and superintelligent machines the best or worst thing that could ever
happen to humankind? This question has existed since the 1940s, when computer scientist Alan Turing
began to believe that there would be a time when machines could have an unlimited impact on humanity
through a process that mimicked evolution.
There are all kinds of exciting AI tools and applications that are beginning to affect the economy in
many ways. These shouldn’t be overshadowed by the overhype on the hypothetical future point where
you get AIs with the same general learning and planning abilities that humans have as well as
superintelligent machines. These are two different contexts that require attention.

Should we be scared of Artificial Intelligence (AI)?


Some notable individuals such as legendary physicist Stephen Hawking and Tesla and SpaceX leader
and innovator Elon Musk suggest AI could potentially be very dangerous; Musk at one point was
comparing AI to the dangers of the dictator of North Korea. Microsoft co-founder Bill Gates also
believes there’s reason to be cautious, but that the good can outweigh the bad if managed properly.
Since recent developments have made super-intelligent machines possible much sooner than initially
thought, the time is now to determine what dangers artificial intelligence poses.

How can Artificial Intelligence (AI) be Dangerous?
Artificial intelligence (AI) has the potential to be dangerous in a number of ways. One of the major
concerns is bias and discrimination. AI systems can perpetuate existing biases and discriminatory
practices if they are trained on biased data sets. For example, if an AI system is trained on data that is
biased against a particular race, gender, or religion, the system may end up making biased decisions
that reinforce these prejudices. Additionally, there is a risk that AI systems could malfunction or be
deliberately manipulated to cause harm. For example, a self-driving car that is controlled by an AI
system could cause accidents if the system malfunctions or is hacked. Another concern is that AI
systems could be used to create sophisticated fake news or deepfakes, which could be used to
manipulate public opinion or spread disinformation. Finally, there is the possibility that AI could
become so advanced that it poses an existential threat to humanity. Some experts have warned that the
development of superintelligent AI could lead to an "intelligence explosion" that could be beyond
human control, with potentially disastrous consequences.

While we haven’t achieved super-intelligent machines yet, the legal, political, societal, financial and
regulatory issues are so complex and wide-reaching that it’s necessary to take a look at them now so
we are prepared to safely operate among them when the time comes. Outside of preparing for a future
with super-intelligent machines now, artificial intelligence can already pose dangers in its current
form.

Threats Of Artificial Intelligence
1. JOB LOSSES DUE TO AI AUTOMATION
Job automation is generally viewed as the most immediate concern. It’s no longer a matter of if AI
will replace certain types of jobs, but to what degree. In many industries, particularly but not exclusively
those whose workers perform predictable and repetitive tasks, disruption is well underway. According
to a 2019 Brookings Institution study, 36 million people work in jobs with “high exposure” to
automation, meaning that before long at least 70 percent of their tasks, ranging from retail sales and
market analysis to hospitality and warehouse labour, will be done using AI.
AI-powered job automation is a pressing concern as the technology is adopted in industries like
marketing, manufacturing and healthcare. Eighty-five million jobs are expected to be lost to
automation between 2020 and 2025, with Black and Latino employees left especially vulnerable.
“The reason we have a low unemployment rate, which doesn’t actually capture people that aren’t
looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by
this economy,” futurist Martin Ford told Built In. “I don’t think that’s going to continue.”
As AI robots become smarter and more dexterous, the same tasks will require fewer humans. And
while it’s true that AI will create 97 million new jobs by 2025, many employees won’t have the skills
needed for these technical roles and could get left behind if companies don’t upskill their workforces.

Even professions that require graduate degrees and additional post-college training aren’t immune to
AI displacement. The potential for AI automation to lead to job losses is a significant concern for many
workers and industries. While some jobs may be created as a result of AI development and deployment,
others may be eliminated or significantly changed.
One of the primary areas where job losses due to AI automation are expected is in manufacturing.
Many manufacturing tasks are already highly automated, and as AI technologies continue to advance,
they may be able to take over even more tasks, leading to further job losses in this industry.
Transportation is another area where job losses may occur. Self-driving vehicles, for example, have
the potential to eliminate the need for human drivers in a range of industries, from taxi and delivery
services to long-haul trucking.

2. SOCIAL MANIPULATION THROUGH AI ALGORITHMS
Artificial intelligence (AI) algorithms can be used to manipulate social and political behavior in a
variety of ways. This can occur through the use of targeted advertising and messaging, as well as
through the manipulation of social media algorithms and the use of bots and fake accounts.
One way that social manipulation can occur is through the use of microtargeting, where AI algorithms
are used to identify and target specific groups of individuals with tailored messages and content. This
can be used to influence political opinions and voting behavior, as well as to manipulate consumer
behavior and purchasing decisions.

Social media algorithms can also be manipulated to amplify certain types of content or to suppress
others, which can have a significant impact on public opinion and discourse. For example, algorithms
can be used to promote conspiracy theories and disinformation or to suppress dissenting voices and
opinions.
Finally, the use of bots and fake accounts can be used to create the illusion of widespread support or
opposition to a particular cause or idea. These accounts can be used to spread false information and
propaganda, as well as to manipulate public opinion and behavior.
Online media and news have become even murkier in light of deepfakes infiltrating political and social
spheres. The technology makes it easy to replace the image of one figure with another in a picture or
video. As a result, bad actors have another avenue for sharing misinformation and war propaganda,
creating a nightmare scenario where it can be nearly impossible to distinguish between credible and
faulty news.

3. SOCIAL SURVEILLANCE WITH AI TECHNOLOGY
AI technology has the potential to enable unprecedented levels of social surveillance, with the ability
to track and monitor individuals in a variety of ways. This can include the use of facial recognition
technology, biometric data, and other forms of surveillance that can be used to track an individual's
movements, behaviors, and activities.
One of the primary concerns with social surveillance through AI technology is the potential for abuse
by governments and other powerful entities. This can include the use of AI technology to target and
oppress certain groups of individuals, such as political dissidents, ethnic or religious minorities, and
other marginalized communities.
Private companies and organizations may also use AI technology for social surveillance. For example,
retailers may use facial recognition technology to track and analyze customer behavior in their stores,
while social media platforms may use AI algorithms to monitor user behavior and target advertising.
The “malicious use of AI” can put our privacy at risk. Nowadays everything is exposed to the internet, and
people with expertise in this field have started misusing the power of artificial intelligence, so AI can
adversely affect privacy and security.

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy
and security. A prime example is China’s use of facial recognition technology in offices, schools and
other venues. Besides tracking a person’s movements, the Chinese government may be able to gather
enough data to monitor a person’s activities, relationships and political views.

4. BIASES DUE TO ARTIFICIAL INTELLIGENCE
Artificial intelligence (AI) systems can introduce various biases in decision-making processes, data
analysis, and other applications. These biases may stem from a range of factors, such as the data used
to train the AI models, the design of the algorithms themselves, and the values and assumptions of the
developers and users.
One major source of bias in AI is selection bias, where AI models may be trained on biased datasets,
leading to skewed outcomes. For example, if an AI model is trained on historical hiring data that favors
a particular gender or race, it may learn to favor candidates from those groups in the future. This can
perpetuate existing societal biases and discrimination.
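A small, entirely synthetic sketch can make this concrete: a model trained on historical hiring decisions that disadvantaged one group learns to reproduce that disadvantage, even when the protected attribute itself is withheld, because it leaks in through a correlated proxy feature. All variable names, numbers and the proxy feature here are hypothetical.

# Illustrative sketch of selection bias on synthetic data (not from any study cited here).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
skill = rng.normal(size=n)                    # genuinely job-relevant feature
proxy = group + rng.normal(scale=0.3, size=n) # e.g. a postcode-like feature correlated with group

# Historical (biased) labels: equally skilled members of group B were hired less often.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, proxy])           # the protected attribute itself is excluded
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"predicted hiring rate for group {g}: {rate:.2f}")
# The gap between the two rates shows the historical bias being perpetuated.

Comparing predicted selection rates across groups in this way is one simple check teams can run before deploying such a model.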
Another type of bias is confirmation bias, where AI systems may reinforce pre-existing biases by
providing results that align with the user's expectations or assumptions. This can lead to the
perpetuation of stereotypes and discrimination. Algorithmic bias is another source of bias, where the
design of algorithms used in AI systems can introduce biases. For instance, if an algorithm relies
heavily on certain features or inputs, it may not be able to accurately predict outcomes for groups with
different characteristics.

Data sparsity bias can also be a problem, where AI models can be biased by a lack of data. For instance,
if there is not enough data on a particular group, an AI model may not be able to make accurate
predictions or decisions for that group. Finally, AI systems can be influenced by the values and
assumptions of the people who design and use them. It's important to recognize and address biases in
AI to ensure that these systems are fair and just for everyone.

5. WIDENING SOCIOECONOMIC INEQUALITY AS A RESULT OF AI
Artificial intelligence (AI) has the potential to widen socioeconomic inequality in a number of ways.
One of the primary ways this can occur is through the displacement of jobs that are vulnerable to
automation. As AI and other technologies continue to develop, many jobs that were once performed
by humans may become automated, leading to unemployment or underemployment for those workers.
This can have a disproportionate impact on workers in low-skill or routine jobs, who are often already
at a socioeconomic disadvantage.

Another way that AI can contribute to inequality is through biased decision-making. As we discussed
earlier, AI systems can be biased due to the data they are trained on, which can perpetuate existing
social biases and discrimination. This can lead to unequal treatment in areas such as hiring, lending,
and criminal justice, which can have long-term implications for individuals and communities.

In addition, the development and deployment of AI systems is often driven by companies and
organizations with significant resources and financial incentives. This can lead to a concentration of
power and influence in the hands of a small number of actors, exacerbating existing inequalities and
creating new ones. For example, companies with access to large amounts of data may be able to
develop AI systems that give them a competitive advantage over smaller companies or individual
entrepreneurs.
To address these issues, it's important to prioritize equity and inclusion in the development and
deployment of AI systems. This can involve measures such as ensuring diverse representation in AI
development teams, designing algorithms that are transparent and auditable, and investing in education
and training programs to help workers transition to new types of jobs. By taking a proactive and
inclusive approach to AI development, we can work to minimize the potential for AI to exacerbate
socioeconomic inequality.

6. WEAKENING ETHICS AND GOODWILL BECAUSE OF AI
AI has the potential to weaken ethics and goodwill in various ways. One major concern is that AI
systems can perpetuate biases and discrimination due to their reliance on biased data. This can lead to
unfair treatment and a breakdown of trust between different groups in society. Additionally, the
automation of certain tasks by AI can lead to job displacement and economic inequality, which can
further erode the principles of ethics and goodwill.

The lack of accountability of AI systems is also a concern, as it can make it difficult to hold them
responsible for their actions. This can lead to a lack of transparency and trust, which can further
undermine the values of ethics and goodwill. Finally, the dehumanization of society due to the
increased use of AI can lead to a weakening of empathy and a loss of the personal connections that are
fundamental to ethical behavior and goodwill. As such, it is important to approach the development
and use of AI with a critical eye towards ethical considerations and to ensure that it is used in a way
that promotes fairness, accountability, and empathy.
The rapid rise of the conversational AI tool ChatGPT gives these concerns more substance. Many users
have applied the technology to get out of writing assignments, threatening academic integrity and
creativity. And even in attempts to make the tool less toxic, OpenAI exploited underpaid Kenyan
laborers to perform the work.
Some fear that, no matter how many powerful figures point out the dangers of artificial intelligence,
we’re going to keep pushing the envelope with it if there’s money to be made.
If mankind’s so-called technological progress were to become an enemy of the common good, this would lead
to an unfortunate regression to a form of barbarism dictated by the law of the strongest.

7. AUTONOMOUS WEAPONS POWERED BY AI
Autonomous weapons powered by artificial intelligence are weapons systems that can identify and
engage targets without human intervention. These systems are designed to operate independently and
make decisions based on data and algorithms. While some proponents argue that these weapons can
reduce the risks to human soldiers and civilians, others have raised concerns about the ethical and
practical implications of such weapons.
One major concern is the lack of human oversight and accountability. Without human intervention, it
can be difficult to ensure that these weapons are used in a way that is consistent with ethical and legal
principles. Additionally, the use of autonomous weapons could lead to a proliferation of such weapons,
which could potentially destabilize global security and increase the risk of conflict.

Another concern is the potential for errors and unintended consequences. Autonomous weapons rely
on algorithms and data to make decisions, which can lead to unintended outcomes if the data is biased
or the algorithm is flawed. This could lead to the targeting of innocent civilians or the escalation of
conflicts.
As is too often the case, technological advancements have been harnessed for the purpose of warfare.
When it comes to AI, some are keen to do something about it before it’s too late: In a 2016 open letter,
over 30,000 individuals, including AI and robotics researchers, pushed back against the investment in
AI-fueled autonomous weapons.
“The key question for humanity today is whether to start a global AI arms race or to prevent it from
starting,” they wrote. “If any major military power pushes ahead with AI weapon development, a
global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious:
autonomous weapons will become the Kalashnikovs of tomorrow.”
This prediction has come to fruition in the form of Lethal Autonomous Weapon Systems, which locate
and destroy targets on their own while abiding by few regulations. Because of the proliferation of
potent and complex weapons, some of the world’s most powerful nations have given in to anxieties
and contributed to a tech cold war.

Many of these new weapons pose major risks to civilians on the ground, but the danger becomes
amplified when autonomous weapons fall into the wrong hands. Hackers have mastered various types
of cyber attacks, so it’s not hard to imagine a malicious actor infiltrating autonomous weapons and
instigating absolute armageddon. If political rivalries and warmongering tendencies are not kept in
check, artificial intelligence could end up being applied with the worst intentions.

8. FINANCIAL CRISES BROUGHT ABOUT BY AI ALGORITHMS
Financial crises brought about by AI algorithms are a concern for the financial industry. While AI
algorithms have the potential to improve decision-making and increase efficiency, there is also the risk
that they can amplify the impact of financial crises.
One concern is the potential for algorithmic trading to contribute to market volatility. AI algorithms
can execute trades at high speeds, which can lead to sudden fluctuations in stock prices. If these
algorithms are not properly calibrated or fail to account for market conditions, they can exacerbate
market instability.
Another concern is the potential for AI algorithms to perpetuate systemic biases. If these algorithms
are trained on biased data, they may perpetuate those biases and contribute to economic inequality.
This can lead to a breakdown of trust in financial markets and undermine the principles of ethical
behavior.
The financial industry has become more receptive to AI technology’s involvement in everyday finance
and trading processes. As a result, algorithmic trading could be responsible for our next major financial
crisis in the markets.
While AI algorithms aren’t clouded by human judgment or emotions, they also don’t take into account
contexts, the interconnectedness of markets and factors like human trust and fear. These algorithms
then make thousands of trades at a blistering pace with the goal of selling a few seconds later for small
profits. Selling off thousands of trades could scare investors into doing the same thing, leading to
sudden crashes and extreme market volatility.
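The feedback loop described above can be illustrated with a deliberately simplistic toy simulation, in which many trading algorithms share similar stop-loss rules and a small dip cascades into a crash. The thresholds and price dynamics are arbitrary assumptions made for the sketch, not a model of any real market.

# Toy illustration (entirely synthetic) of algorithmic selling amplifying a small shock.
import random

random.seed(0)
price = 100.0
traders = [{"threshold": random.uniform(0.5, 3.0), "sold": False} for _ in range(500)]

history = [price]
price *= 0.99                      # a small initial 1% dip
history.append(price)

for step in range(20):
    drop = (history[0] - price) / history[0] * 100   # % fall from the start
    # Each algorithm sells once its stop-loss threshold is breached ...
    sellers = [t for t in traders if not t["sold"] and drop >= t["threshold"]]
    for t in sellers:
        t["sold"] = True
    # ... and the selling pressure pushes the price down further.
    price *= 1 - 0.001 * len(sellers)
    history.append(price)

print(f"{sum(t['sold'] for t in traders)} of {len(traders)} algorithms sold; "
      f"price fell from 100.0 to {price:.1f}")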

This isn’t to say that AI has nothing to offer to the finance world. In fact, AI algorithms can help
investors make smarter and more informed decisions on the market. But finance organizations need to
make sure they understand their AI algorithms and how those algorithms make decisions. Companies
should consider whether AI raises or lowers their confidence before introducing the technology to
avoid stoking fears among investors and creating financial chaos.
Finally, the use of AI algorithms in credit scoring and lending decisions can also pose a risk. If these
algorithms are not properly calibrated or fail to account for relevant factors, they may make lending
decisions that are discriminatory or lead to higher default rates. This can lead to a destabilization of
financial markets and increase the risk of a financial crisis.

Artificial intelligence, bias and clinical safety
In medicine, artificial intelligence (AI) research is becoming increasingly focused on applying machine
learning (ML) techniques to complex problems, and so allowing computers to make predictions from
large amounts of patient data, by learning their own associations. Estimates of the impact of AI on the
wider economy globally vary wildly, with a recent report suggesting a 14% effect on global gross
domestic product by 2030, half of which would come from productivity improvements. These predictions
create political appetite for the rapid development of the AI industry, and healthcare is a priority area
where this technology has yet to be exploited. The digital health revolution described by Duggal et
al is already in full swing with the potential to ‘disrupt’ healthcare. Health AI research has
demonstrated some impressive results, but its clinical value has not yet been realised, hindered partly
by a lack of a clear understanding of how to quantify benefit or ensure patient safety, and increasing
concerns about the ethical and medico-legal impact.
This analysis is written with the dual aim of helping clinical safety professionals to critically appraise
current medical AI research from a quality and safety perspective, and supporting research and
development in AI by highlighting some of the clinical safety questions that must be considered if
medical application of these exciting technologies is to be successful.

Trends in ML research
Clinical decision support systems (DSS) are in widespread use in medicine and have had most impact
providing guidance on the safe prescription of medicines, guideline adherence, simple risk
screening or prognostic scoring. These systems use predefined rules, which have predictable behaviour
and are usually shown to reduce clinical error, although sometimes inadvertently introduce safety
issues themselves. Rules-based systems have also been developed to address diagnostic
uncertainty but have struggled to deal with the breadth and variety of information involved in the
typical diagnostic process, a problem for which ML systems are potentially better suited.
As a result of this gap, the bulk of research into medical applications of ML has focused on diagnostic
decision support, often in a specific clinical domain such as radiology, using algorithms that learn to
classify from training examples (supervised learning). Some of this research is beginning to be applied
to clinical practice, and from these experiences lessons can be learnt about both quality and safety.
Notable examples of this include the diagnosis of malignancy from photographs of skin lesions,
prediction of sight-threatening eye disease from optical coherence tomography (OCT) scans and
prediction of impending sepsis from a set of clinical observations and test results.
Outside of diagnostic support ML systems are being developed to provide other kinds of decision
support, such as providing risk predictions based on a multitude of complex factors, or tailoring
specific types of therapy to individuals. Systems are now entering clinical practice that can analyse CT
scans of a patient with cancer and by combining this data with learning from previous patients, provide
a radiation treatment recommendation, tailored to that patient which aims to minimise damage to
nearby organs.
Other earlier stage research in this area uses algorithms that learn strategies to maximise a ‘reward’
(reinforcement learning). These have been used to test approaches to other personalised treatment
problems such as optimising a heparin loading regime to maximise time spent within the therapeutic
range or targeting blood glucose control in septic patients to minimise mortality.
Looking further ahead AI systems may develop that go beyond recommendation of clinical action.
Such systems may, for example, autonomously triage patients or prioritise individual’s access to
clinical services by screening referrals. Such systems could entail significant ethical issues by
perpetuating inequality, analogous to those seen in the automation of job applicant screening, of which
it is said that ‘blind confidence in automated e-recruitment systems could have a high societal cost,
jeopardizing the right of individuals to equal opportunities in the job market’. This is a complex
discussion and beyond the remit of this article.

Outside of medicine, the cutting edge of AI research is focused on systems that behave autonomously
and continuously evolve strategies to achieve their goal (active learning), for example, mastering the
game of Go, trading in financial markets, controlling data centre cooling systems or autonomous
driving. The safety issues of such actively learning autonomous systems have been discussed
theoretically by Amodei et al, and from this work we can identify potential issues in medical
applications. Autonomous systems are a long way off practical implementation in medicine, but one can
imagine a future where ‘closed loop’ applications, such as subcutaneous insulin pumps driven by
information from wearable sensors, or automated ventilator control driven by physiological
monitoring data in intensive care, are directly controlled by AI algorithms.
These various applications of ML require different algorithms, of which there are a great many. Their
performance is often very dependent on the precise composition of their training data and other
parameters selected during training. Even controlling for these factors some algorithms will not
produce identical decisions when trained in identical circumstances. This makes it difficult to
reproduce research findings and will make it difficult to implement ‘off the shelf’ ML systems. It is
notable in ML literature that there is not yet an agreed way to report findings or even compare the
accuracy of ML systems.
Expected trends in ML research in medicine over the short, medium and longer terms see the focus
evolving from reactive systems, trained to classify patients from gold-standard cases with a measurable
degree of accuracy, to proactive autonomous systems which continuously learn from experience and whose
performance is judged on outcomes. Translation of ML research into clinical practice requires a robust
demonstration that the systems function safely, and with this evolution different quality and safety
issues present themselves.
Quality and safety in ML systems
In an early AI experiment, the US army used ML to try to distinguish between images of armoured
vehicles hidden in trees versus empty forests.1 After initial success on one set of images, the system
performed no better than chance on a second set. It was subsequently found that the positive training
images had all been taken on a sunny day, whereas it had been cloudy in the control photographs—the
machine had learnt to discriminate between images of sunny and cloudy days, rather than to find the
vehicles. This is an example of an unwittingly introduced bias in the training set. The subsequent
application of the resulting system to unbiased cases is one cause of a phenomenon called
‘distributional shift’.
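The same failure mode can be reproduced in a few lines on synthetic data: a classifier trained where a spurious feature (the "sunny day") is almost perfectly correlated with the label performs well on similar data and then collapses once that correlation disappears. The data and parameters below are illustrative assumptions only.

# Sketch of distributional shift on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    label = rng.integers(0, 2, n)
    true_signal = label + rng.normal(scale=1.5, size=n)       # weak genuine signal
    # Spurious feature follows the label with probability spurious_corr.
    spurious = np.where(rng.random(n) < spurious_corr, label, 1 - label)
    X = np.column_stack([true_signal, spurious + rng.normal(scale=0.1, size=n)])
    return X, label

X_train, y_train = make_data(2000, spurious_corr=0.98)     # "all tanks photographed on sunny days"
X_shifted, y_shifted = make_data(2000, spurious_corr=0.5)  # the correlation disappears

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on data like the training set:",
      round(model.score(*make_data(2000, 0.98)), 2))
print("accuracy after distributional shift:   ",
      round(model.score(X_shifted, y_shifted), 2))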

Developing AI in health through the application of ML is a fertile area of research, but the rapid pace
of change, diversity of different techniques and multiplicity of tuning parameters make it difficult to
get a clear picture of how accurate these systems might be in clinical practice or how reproducible they
are in different clinical contexts. This is compounded by a lack of consensus about how ML studies
should report potential bias, for which the authors believe the Standards for Reporting of Diagnostic
Accuracy initiative52 could be a useful starting point. Researchers need also to consider how ML
models, like scientific data sets, can be licensed and distributed to facilitate reproduction of research
results in different settings.

Public Perceptions of Recent Developments in Artificial Intelligence
In recent years there has been a renewed interest in applications of AI, based on developments in
computer technology that allow the use of extensive processing power and the analysis of vast amounts of
so-called Big Data through applications of Machine Learning, Deep Learning and Neural Networks. Such
applications gather under the label of AI, which is ascribed a huge impact on society as a whole. AI has
especially seen widespread use in business and public
management. As a consequence, the public discourse regarding AI is mainly driven by companies that
provide AI technology looking for customers and markets for their products. Meanwhile, empirical
evidence from survey research supports the assumption that AI is not per se perceived as entirely
positive by the public. A cross-national survey by Kelley et al. shows that AI is connected with positive
expectations in the field of medicine, but reservations are prevalent concerning data privacy and job
loss. Another concern is raised by Araujo et al., who state that citizens perceive high risks regarding
decision-making AI. Moreover, a representative opinion poll by Zhang and Dafoe illustrates that
Americans as well as citizens from the European Union (EU) believe that robots and AI could have
harmful consequences for societies and should be carefully managed. Additionally, Gnambs and Appel
show that attitudes towards robots have recently changed for the worse in the EU. Especially, when it
comes to the influence of robots in the economy and the substitution of workforce, people express fear.
On a broader level, a recent study by Liang and Lee inquiring about the fear of AI even found that a
considerable share of Americans reported fears when it comes to autonomous robots and AI.

Measuring the Fear of Autonomous Robots and Artificial Intelligence


Using data from the Chapman Survey of American Fears, Liang and Lee set out to investigate the
prevalence of fear of autonomous robots and artificial intelligence (FARAI). They come to the
conclusion that roughly a quarter of the US population experienced a heightened level of FARAI. In
the respective study, participants were confronted with the question “How afraid are you of the
following?”. The FARAI scale was then built out of these four items: (1) “Robots that can make
their own decisions and take their own actions”, (2) “Robots replacing people in the work-force”, (3)
“Artificial intelligence” and (4) “People trusting artificial intelligence to do work”. All items were
rated on a four-point Likert scale, with answers ranging from not afraid (1) and slightly afraid (2) to
afraid (3) and very afraid (4).
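Liang and Lee's exact scoring procedure is not reproduced here, but as an illustration of how such Likert items are commonly aggregated, the short sketch below averages the four responses into a single score; the equal weighting and the example answers are assumptions made for the illustration.

# Illustrative aggregation of four Likert items into a single scale score.
# Generic mean-score sketch, assuming equal weighting of the four FARAI items;
# not the exact scoring procedure used by Liang and Lee.
import statistics

responses = {
    "autonomous_robots": 2,        # 1 = not afraid ... 4 = very afraid
    "robots_replacing_workforce": 3,
    "artificial_intelligence": 2,
    "trusting_ai_to_do_work": 4,
}

farai_score = statistics.mean(responses.values())
print(f"FARAI score: {farai_score:.2f} (range 1-4, higher = more fearful)")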
While the authors shed some first light in addressing threat perceptions of artificial intelligence and
generate valuable insights into various associations of FARAI with demographic and personal
characteristics, there is also need for a potential enhancement of the existing measurement of FARAI.
As the FARAI scale was developed out of a broad questionnaire concerning many possible fears people
in the US might have, the measurement was not specifically developed for measuring the distinctive
fear of robots and AI, respectively. The FARAI scale also varies in its scope. While item 3 broadly
queries the fear of AI in general, item 2 inquires specifically about its impacts on the economic
sector. Items 1 and 4 query a specific functionality of AI, with item 4 focusing on the human-machine
connection. Thus, the items are mixed in their expressiveness and aim at different aspects of AI’s
impact. Accordingly, the scale does not allow for distinct assessments of AI and necessary
specifications concerning its domain of application and the employed functions.

Besides, the public understanding of robots and AI might be influenced by popular imaginations from
pop-culture, science-fiction, and the media, as is also already implied by Liang and Lee and Laakasuo
et al. Due to the popularity of vastly different types of autonomous robots and AI in literature, film
and comics, it is hard to pin down what exactly comes to a person’s mind when asked about either term.
Delineating boundaries may not be possible when it comes to the public imagination. As a survey
research in the UK by Cave et al. suggests, a quarter of the British population conflates the term AI
with robots. Accordingly, a conceptual clarification concerning the distinction between the
terms robot and artificial intelligence is required to begin with. In the FARAI measure by Liang and
Lee, there is a mixture between both terms as two question items focus on each term, respectively.
This terminological distinction is often conflated in empirical research. We believe that the mixture of
the terms might lead to avoidable ambiguity and maybe even confusion, since people may think of two
distinct and even completely different phenomena or might not be able to distinguish between the two
constructs at all. According to the Oxford English Dictionary a robot is “an intelligent artificial being
typically made of metal and resembling in some way a human or other animal” or “a machine capable
of automatically carrying out a complex series of movements, esp. one which is programmable” while
AI is “the capacity of computers or other machines to exhibit or simulate intelligent behaviour”. There
is certainly some conceptual overlap by definition, especially with regard to the capacity of intelligent
behavior demonstrated by an artificial construct, hence, something that does not exist naturally, but is
human-made. It also cannot be ruled out that appraisal of robots may be strongly associated with AI,
especially when such robots are depicted as autonomous.

Recently, the term AI in particular has received renewed widespread attention; it describes
techniques from computer science that gather many different concepts, like machine learning, deep
learning or neural networks, which are the basis of autonomous functionality and pervasive
implementation. As a consequence, we decided to focus our measurement solely on AI as it depicts
the core issue of the nascent technology, i.e. autonomous intelligent behavior, which applies to many
use cases that do not necessarily include a physical machine in motion.

Threat Perceptions as Precondition of Fear


There is plenty of literature on the subject of fear, especially from the field of human psychology.
Altogether, fear is defined as a negative emotion that is aroused in response to a perceived threat. When
it comes to the origins of emotion, many studies rely on the appraisal theory of emotion: “The appraisal
task for the person is to evaluate perceived circumstances in terms of a relatively small number of
categories of adaptional significance, corresponding to different types of benefit or harm, each with
different implications for coping”. Accordingly, the authors define relational themes for different
emotions. According to Smith and Lazarus, anxiety, or fear, is evoked when people perceive
an ambiguous danger or threat which is motivationally relevant as well as incongruent with their goals.
Thereby, a threat is seen as an “environmental characteristic that represents something that portends
negative consequences for the individuum”. Furthermore, people perceive low or uncertain coping
potential. In other words, fear is the result of a person’s appraisal process in which a situation or an
object is perceived as threatening and relevant and no possibility of avoidance can be seen. If these
appraisals are processed, people react with fear and try to avoid the threat, for example by turning away
from the object.
Many scholars build on appraisal theory to develop more specified theories on the mechanisms of fear.
Especially in health communication much work on so called fear appeal literature has been done. In a
nutshell, most fear appeal theories state that a specific object, event or situation (e.g., a disease)
threatens the well-being of a person. With the development of the Extended Parallel Process Model
(EPPM), Witte theorizes that this threat is at first processed cognitively. In this process, the severity
and susceptibility of the threat as well as the coping potential (self and general), i.e. the perceived
efficacy and control, are rated. Depending on the weights of these cognitive appraisals, people react
differently. Fear emerges when the threat perception is high while the coping perception is low. As a
result, message denial arises that is mostly characterized by avoiding the threat. On the other hand,
when threat as well as coping potential are perceived as high, message acceptance results. If this
happens, people actively engage with the threat, for example in gathering information about the threat
or actively combating potential harms. In this case, fear does not emerge. Whereas the empirical
examination of the EPPM found no clear proof and many suggestions for extending the model have
been made, scholars agree upon the central persuasion effects of threat and coping perceptions.
Moreover, the EPPM commonly serves as a framework for further research.
Transferred to the subject of perceived threats of AI, we believe that AI is best described as an
environmental factor that might cause fear. However, AI should not be treated as a specific fear itself.
In our view, fear may be a result of a cognitive appraisal process, where AI depicts a potential threat
origin. Thus, we explicitly focus on threats of AI, not fear of AI. This idea becomes more prevalent in
thinking about an actual situation. For example, a person is confronted with an AI system that decides
on the approval of a credit application. This person most likely will not be afraid of the computer system itself, but
will rather evaluate cognitively the threat that such a system might pose to its well-being. The person
then rates the probability of the system causing harm (e.g., if it denies the credit). If the outcome of
this process ends in a negative evaluation for the person, fear will be evoked. However, this fear is
based on the threat the AI system poses and not on the AI system itself. This is crucial for our
understanding of threats of AI.

Context Specificity of Threat Perceptions


It is also important to address the social situation, in which a threat is perceived. Smith and Lazarus
already stated that an “appraisal can, of course, change (1) as the person-environment relationship
changes; (2) in consequence of self-protective coping activity (e.g. emotion-focused coping); (3) in
consequence of changing social structures and culturally based values and meanings; or (4) when
personality changes, as when goals or beliefs are abandoned as unservicable”. Furthermore, Tudor
proposed a sociological approach for the understanding of fear. He developed a concept, in which he
distinguishes parameters of fear including environments, cultures as well as social structures.
Thereby, contexts can vary in manifold ways. A rather simple example for what Tudor refers to as an
environmental context is the case of the wild animal: for instance, a tiger could face a human being;
however, arguably there is a huge difference in fear reaction if one is confronted with the tiger in a zoo
or in its natural habitat. Thus, the environmental factor “cage” does have a huge impact on the
incitement of fear. Additionally, cultural backgrounds can affect the way threats are perceived: “If our
cultures repeatedly warn us that this kind of activity is dangerous, or that sort of situation is likely to
lead to trouble, then this provides the soil in which fearfulness may grow”. Lastly, social structures
describe how the societal role of an individual might influence threat perceptions; for instance, this
could be the job position of an employee or simply membership of a specific societal group.
Furthermore, different social actors are able to influence the social construction of public fears, i.e. if
and how environmental stimuli are treated as threats. According to Dehne, the creation of fear, among
other factors, is dependent upon the transmission of information in a society. In this process, scientific,
economic, political and media actors in particular affect the social construction of threats. However, depending on
the actors that take the highest share in the public discourse, different threat perceptions might emerge.
For instance, given an AI application in medicine, we assume that science as well as media lead the
debate. On the other hand, an AI application in the field of recruiting will probably be led by economic
actors. It is plausible that there are specific context dependencies (who informs the public about a
specific AI application) that have an influence on (threat) perceptions.
In summary, there are many (social) factors that shape the way emotions are elicited, leading to the conclusion that threat perceptions rely heavily on the context in which an individual encounters AI. Of course, we are not able to cover all possible contexts of AI-related threats. However, we distinguish two context groups that are important for the understanding of TAI: AI functionality and distinct domains of AI application.

Distinct Dimensions of AI Functionality


What an AI is capable of, or supposed to do, may have a decisive effect on the appraisal of AI applications. However, AI is a generic term that unites many different functionalities. In the scientific community, there are manifold definitions of the term AI and of what can and cannot be counted as an AI system. While there is no single definition, most scholars agree on central functionalities that AI systems can perform. Nevertheless, there is no consensus on how to group these functionalities. For example, Hofmann et al. identify perceiving, identification, reasoning, predicting, decision-making, generating and acting as AI functions. We, however, base our approach on the periodic system of AI and group AI functionalities into four categories, which undoubtedly overlap: recognition, prediction, recommendation and decision-making.
Notably, our approach is quite similar to that of Hofmann et al.; however, we subsumed generating and acting under the category of decision-making, as we focus there on AI that acts autonomously. Additionally, we merged perceiving and identification into one category. This decision was based on the results of a pre-test of the scale, which we conducted with 304 participants; the results show that participants could not differentiate between the perceiving and the identification function. In the following, we elaborate on our proposed AI function classes:

Recognition
Recognition describes the task of analyzing input data in various forms (e.g., images, audio) and recognizing specific patterns in the data. Depending on the application, these patterns can vary hugely. In health applications, AI recognition is used to detect and identify breast cancer. In the economic sector, AI systems promise to detect (personal) characteristics and abilities of potential employees via their voices and/or faces.

Prediction
In prediction tasks, AI applications forecast future conditions on the basis of the analyzed data. Prediction differs from recognition in that it forecasts the development of specific states, whereas recognition mostly classifies the given data. In the medical sector, AI applications are able to calculate the further development of diseases on the basis of medical diagnoses and (statistical) reports.

Recommendation
Recommendation describes a task in the field of human-computer interaction. Here, AI systems directly engage with humans, mostly decision makers, by recommending specific actions. These actions are again highly dependent on the actual application field. For the medical example, this could mean that the AI application, taking into account all given data, proposes a medical treatment to the doctor. Notably, the decision to accept or decline this suggestion is still made by the physician or the patient, respectively.

Decision-Making
Ultimately, the functionality decision-making refers to AI systems that operate autonomously. Oftentimes, these applications are also called algorithmic decision-making (ADM) systems. Here, AI systems learn and act autonomously after being carefully trained by developers. The most prominent application is without doubt autonomous driving. However, decision-making tasks can also be found in other domains of application. For example, in medicine AI systems could directly decide on the medical treatment of patients; in the higher education sector, ADM could decide on the admission of students’ applications to university. In terms of human-computer interaction, an ADM system substitutes the human task completely.

Two points are worth noting. First, the functionalities can depend on each other and are thus not completely separable. Second, AI applications in specific fields do not necessarily fulfill all AI functionalities; often, AI systems perform just one task and not the others. We stress that threat perceptions of AI should not, in technical terms, be treated as a second-order factor. Rather, our scale provides a toolbox that can be used to cover threat perceptions of the different functionalities. However, we expect that there are significant correlations between the functionalities.

Distinct Domains of AI Application


Social science research regularly addresses the social change induced by technological phenomena and artifacts in various domains of public and private life. Depending on the domain of application, AI may be wholeheartedly welcomed or seen as a severe threat. For instance, imagining handing over certain decisions to an AI may appear rather innocuous for lifestyle choices such as buying a product or taking a faster route to a destination, but may lead to reactance when the perceived individual stakes are high, e.g. when AI interferes in life-altering decisions that affect one’s health or career. As applications of AI are expected to be implemented in manifold life domains, research will need to address the respective perceptions of the people affected. The domain specificity of effects is already an established approach in social science research; for instance, Acquisti et al. as well as Bol et al. found that distinct application domains do matter in terms of online privacy behavior. Additionally, Araujo et al. analyzed perceptions of automated decision-making AI in three distinct domains. Thus, we believe that a measurement of threat perceptions also needs to be adaptable to a multi-faceted universe of AI-related phenomena, some of which might not even be known to date. In conclusion, we propose a measurement that is adaptable to every AI domain. In the following, the proposed TAI scale is tested in three different domains, namely loan origination, job recruitment, and medical treatment.

Loan Origination: Assessing Creditworthiness


AI technologies are already applied in the finance sector, e.g. in credit approval. As credit approval is a more or less mathematical problem, it is reasonable that AI-based algorithms are applied for this purpose. The algorithms analyze customer data and calculate potential payment defaults - and can finally decide whether a credit is approved. As individual goals greatly depend on such decisions, this may pose a threat to individuals who believe that their input data might be deficient or who assume that the processing is biased.

Job Recruitment: Assessing the Qualification and Aptitude of Applicants
Recently, AI applications have been introduced into the field of human resource management, e.g. recruiting. More specifically, AI can be used to analyze and predict the performance of employees. Furthermore, AI-based systems are able to recommend or select potential job candidates. However, there are several potential risks in the use of AI systems in human resource management.

Health: Medical Treatment of Diseases
One of the most important fields of AI development and implementation is without doubt health care/medicine. Especially in fields where imaging techniques are applied (e.g., radiology), AI applications are frequently used. Recent works show that AI applications are especially well suited to detecting and classifying specific diseases, for example breast or skin cancer, in X-ray images [48, 49].

Moreover, another AI application can identify gene-related diseases in facial images of patients. Generally, people tend to have optimistic perceptions of the use of AI in medicine.
Summing up, it may be assumed that distinct domains of AI application cause different threat perceptions. As mentioned earlier, a possible explanatory approach is that the public discourse, through which individuals are mostly confronted with AI, is led by different actor groups. Another reason to believe that domains vary is the actual tasks AI systems perform and the severity individuals ascribe to them. Presumably, personal relevance appraisals also play a major role in the level of threat individuals ascribe to distinct domains. An individual who does not plan to apply for a credit will probably rate the use of an AI system for credit approval as less threatening than a person who is in dire need of a loan. Naturally, we can only focus on a small sample of potential AI domains.

Threat perceptions of AI induce fear among respondents.


As threat perceptions of AI functionalities and domains may differ vastly from each other, we are interested in whether the amount of perceived fear (if any) that is explained by our proposed measure also differs by context. Arguably, not all threat perceptions necessarily cause the same fear reactions. For instance, if subjects perceive high levels of efficacy in dealing with the potential threat, a far weaker emotional reaction is likely to occur. This becomes particularly obvious when comparing the recommendation and the decision-making functionality. Decision-making AI takes control away from the individual, whereas with recommendation at least a human still has control over the process.
Accordingly, we set out to develop a measurement scale for application in survey research on AI that addresses the threat perceptions of people who are confronted with various forms of AI implementation. Here, we explicitly emphasize that the proposed scale addresses the perceptions of individuals. Hence, it is not of much concern what an AI system actually does on a technical level, but how different ideal functionalities are seen in the eyes of respondents, who usually do not have much knowledge about AI technology. The scale must be applicable with regard to the respondents and their individual imaginations of AI in order to show validity when it comes to threat perceptions. Again, this need not correspond to the “technical”/“mathematical” level of actual AI systems. Rather, respondents only need to differentiate between the observable functions AI systems perform.
Thus, the aim is to reliably and validly assess the extent to which respondents perceive autonomous systems as a threat to themselves. Moreover, the scale needs to be standardized, allowing for comparisons between samples from various populations, but flexible enough to allow application in distinct domains of AI research.

Threat Perceptions of Artificial Intelligence
We propose a measurement of threat perceptions concerning AI based on the specific functionalities that AI systems can perform. We identified ‘recognition’, ‘prediction’, ‘recommendation’ and ‘decision-making’ as the core functions of current AI systems from a user’s perspective. The phenomenon of AI was first explained to the participants with a short text, which also contained the information that AI currently draws widespread public attention. Furthermore, a broad definition of AI systems and functionality was given in a neutral tone, as well as an explanation of how AI systems could be used in the specific context presented to the respondents.
The public perception of Artificial Intelligence will become increasingly important as applications that make use of AI technologies proliferate further in various societal domains. A populace that perceives AI as threatening and in consequence fears its proliferation may prove as detrimental as blind trust in the benevolence of the actors that implement AI systems or a general overestimation of the veracity of assertions and decisions made by AI. Consequently, surveying threat perceptions of various AI systems is of great research interest. In this paper, we proposed and constructed a measurement of threat perceptions regarding AI that is able to capture the various functions performed by AI systems and that is adaptable to any context of application that is of interest. The developed TAI scale showed satisfactory results in that it reliably captured threat perceptions regarding the distinct functions of recognition, prediction, recommendation, and decision-making by AI. The results also suggest that the developed scale is able to elucidate differences in these threat perceptions between distinct domains of AI applications.
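To illustrate how such a scale can be scored and its reliability checked, the following minimal Python sketch computes per-functionality threat scores and Cronbach's alpha for hypothetical Likert-type responses. The item labels, the three-items-per-dimension layout and the randomly generated answers are our own illustrative assumptions, not the actual TAI items or data; with real survey data, alpha values of roughly .70 or higher would support the claim of reliable measurement.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents, columns = items)."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: three invented items per TAI functionality.
rng = np.random.default_rng(42)
n = 304  # size of the pre-test sample mentioned in the text
dimensions = ["recognition", "prediction", "recommendation", "decision_making"]
responses = pd.DataFrame({
    f"{dim}_item{i}": rng.integers(1, 6, size=n) for dim in dimensions for i in (1, 2, 3)
})

for dim in dimensions:
    items = responses.filter(like=dim)
    score = items.mean(axis=1)      # per-respondent threat score for this functionality
    alpha = cronbach_alpha(items)   # internal consistency of the sub-scale
    print(f"{dim:15s} mean threat = {score.mean():.2f}  alpha = {alpha:.2f}")
```

Because the responses here are random placeholders, the printed alpha values are meaningless; the sketch only shows the mechanics of scoring each functionality separately, as the toolbox character of the scale requires.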

Limitations
As this study attempts to develop a more fine-grained approach to measuring threat perceptions regarding AI, its focus lay on scale construction and on testing the application of the scale in an online survey. The results are thus limited to German online users from a non-representative online access panel. Further research should extend the scope of the domains of AI application as well as address further groups of stakeholders and, especially, the behavioural consequences of perceived threats of AI. Furthermore, a translation of the scale into other languages appears to be another promising avenue. As Gnambs and Appel showed on the basis of longitudinal Eurobarometer data, attitudes towards robots and autonomous systems vary between countries and might be subject to cultural influences that warrant research illuminating divergent perceptions and their antecedents.
Finally, we point out that, although we refer to the periodic system of AI and the study of Hofmann and colleagues, the functional classes may be considered somewhat arbitrary. As AI is a very broad term, there might be other possibilities for the dimensional structure of a scale focusing on public threat perceptions. However, our results support the dimensional structure we proposed.

Chapter – 5

FINDINGS, SUGGESTIONS AND CONCLUSION

I. Findings / Understanding the Risks Caused by AI

Automation is thus posing a great risk to educated professionals and their job markets. People may say that technologies have always created new jobs while taking away old ones; however, AI is a unique kind of technology, as it seeks to mimic human intelligence and thereby genuinely threatens the worth of human intelligence. While historically technologies replaced only a very limited aspect of human action, AI seeks to replace the human mind itself. AI will create global technological inequality, making the already powerful nations more powerful and the less developed nations more exploitable. Also, the military-industrial complex will become even bolder and more audacious, because AI warfare will save national lives, so there will be less public discontent against declarations of war. Tyrannical nations will become more oppressive. Imagine the dominion of Skynet, except controlled by the elites. This militarization of AI poses a great threat to humanity. Militarizing AI objectives by training machines to do harm and/or preserve a certain political and strategic policy seems like a sci-fi movie becoming reality.
Faulty, inadequately trained or poorly understood algorithms, data poisoning and incorrect statistical approximation can produce erroneous results, which may have a wide-scale impact on people’s lives. Take the example of Tesla self-driving cars, which have failed and caused fatalities. Such faults may also have a national and/or global impact if the military-industrial complex automates a distributed armed surveillance system. Depending on the design of such a system, which may self-learn, identify and execute its own objectives, and depending on the severity and scope of its arming, the danger level to human society may vary. The danger of AI also rises from the increasing development of quantum computing, the mass and distributed application of digital components and the extensive digitization process made possible by affordable and cost-effective technologies, partly due to cheap labor in some of the cruelest nations and regions on earth. These factors increase the use of AI and hence the probability of AI harms.
Lastly, AI may bring about a revolution in perfecting human apathy. Today we are already socially and emotionally isolated by being glued to digital devices, sometimes totally unaware of human suffering around the world. With the advent of emotionally acting robots, human beings will become more distant, and this will increase human apathy across societies and regions, causing a decline and slowdown in human-centric policies.
Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists,
academicians, journalists, and venture capitalists alike. As with many phrases that cross over from
technical academic fields into general circulation, there is significant misunderstanding accompanying
use of the phrase. However, this is not the classical case of the public not understanding the scientists
—here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the
emergence of an intelligence in silicon that rivals our own entertains all of us, enthralling us and
frightening us in equal measure. And, unfortunately, it distracts us.
Whether or not we come to understand ‘intelligence’ any time soon, we do have a major challenge on
our hands in bringing together computers and humans in ways that enhance human life. While some
view this challenge as subservient to the creation of artificial intelligence, another more prosaic, but
no less reverent, viewpoint is that it is the creation of a new branch of engineering. Much like civil
engineering and chemical engineering in decades past, this new discipline aims to corral the power of
a few key ideas, bringing new resources and capabilities to people, and to do so safely. Whereas civil
engineering and chemical engineering built upon physics and chemistry, this new engineering
discipline will build on ideas that the preceding century gave substance to, such as information,
algorithm, data, uncertainty, computing, inference, and optimization. Moreover, since much of the
focus of the new discipline will be on data from and about humans, its development will require
perspectives from the social sciences and humanities. While the building blocks are in place, the
principles for putting these blocks together are not, and so the blocks are currently being put together
in ad-hoc ways. Thus, just as humans built buildings and bridges before there was civil engineering,
humans are proceeding with the building of societal-scale, inference-and-decision-making systems
that involve machines, humans, and the environment. Just as early buildings and bridges sometimes
fell to the ground—in unforeseen ways and with tragic consequences—many of our early societal-
scale inference-and-decision-making systems are already exposing serious conceptual flaws.
Unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What
we’re missing is an engineering discipline with principles of analysis and design. The current public
dialog about these issues too often uses the term AI as an intellectual wildcard, one that makes it
difficult to reason about the scope and consequences of emerging technology.

Let us consider more carefully what AI has been used to refer to, both recently and historically.
Most of what is labeled AI today, particularly in the public sphere, is actually machine learning (ML),
a term in use for the past several decades. ML is an algorithmic field that blends ideas from statistics,
computer science and many other disciplines (see below) to design algorithms that process data, make
predictions, and help make decisions. In terms of impact on the real world, ML is the real thing, and
not just recently. Indeed, that ML would grow into massive industrial relevance was already clear in
the early 1990s, and by the turn of the century forward-looking companies such as Amazon were
already using ML throughout their business, solving mission-critical, back-end problems in fraud
detection and supply-chain prediction, and building innovative consumer-facing services such as
recommendation systems. As datasets and computing resources grew rapidly over the ensuing two
decades, it became clear that ML would soon power not only Amazon but essentially any company in
which decisions could be tied to large-scale data. New business models would emerge. The phrase
‘data science’ emerged to refer to this phenomenon, reflecting both the need of ML algorithms experts
to partner with database and distributed-systems experts to build scalable, robust ML systems, as well
as reflecting the larger social and environmental scope of the resulting systems. This confluence of ideas and technology trends has been rebranded as ‘AI’ over the past few years. This rebranding
deserves some scrutiny. Historically, the phrase “artificial intelligence” was coined in the late 1950s
to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level
intelligence. I will use the phrase “human-imitative AI” to refer to this aspiration, emphasizing the
notion that the artificially-intelligent entity should seem to be one of us, if not physically then at least
mentally (whatever that might mean). This was largely an academic enterprise. While related academic
fields such as operations research, statistics, pattern recognition, information theory, and control theory
already existed, and often took inspiration from human or animal behavior, these fields were arguably
focused on low-level signals and decisions. The ability of, say, a squirrel to perceive the three
dimensional structure of the forest it lives in, and to leap among its branches, was inspirational to these
fields. AI was meant to focus on something different: the high-level or cognitive capability of humans
to reason and to think. Sixty years later, however, high-level reasoning and thought remain elusive.
The developments now being called AI arose mostly in the engineering fields associated with low-
level pattern recognition and movement control, as well as in the field of statistics, the discipline
focused on finding patterns in data and on making well-founded predictions, tests of hypotheses, and
decisions. Indeed, the famous backpropagation algorithm that David Rumelhart rediscovered in the
early 1980s, and which is now considered at the core of the so-called “AI revolution,” first arose in the
field of control theory in the 1950s and 1960s. One of its early applications was to optimize the thrusts
of the Apollo spaceships as they headed towards the moon. Since the 1960s, much progress has been
made, but it has arguably not come about from the pursuit of human-imitative AI. Rather, as in the
case of the Apollo spaceships, these ideas have often hidden behind the scenes, the handiwork of
researchers focused on specific engineering challenges. Although not visible to the general public,
research and systems-building in areas such as document retrieval, text classification, fraud detection,
recommendation systems, personalized search, social network analysis, planning, diagnostics, and A/B
testing have been a major success—these advances have powered companies such as Google, Netflix,
Facebook, and Amazon. One could simply refer to all of this as AI, and indeed that is what appears to
have happened. Such labeling may come as a surprise to optimization or statistics researchers, who
find themselves suddenly called AI researchers, but labels aside, the bigger problem is that the use of
this single, ill-defined acronym prevents a clear understanding of the range of intellectual and
commercial issues at play. The past two decades have seen major progress—in industry and
academia—in a complementary aspiration to human-imitative AI that is often referred to as
“Intelligence Augmentation” (IA). Here computation and data are used to create services that augment
human intelligence and creativity. A search engine can be viewed as an example of IA, as it augments
human memory and factual knowledge, as can natural language translation, which augments the ability
of a human to communicate. Computer-based generation of sounds and images serves as a palette and
creativity enhancer for artists. While services of this kind could conceivably involve high-level
reasoning and thought, currently they don’t; they mostly perform various kinds of string-matching and
numerical operations that capture patterns that humans can make use of.
A related aspiration is “Intelligent Infrastructure” (II), whereby a web of computation, data, and physical entities exists that
makes human environments more supportive, interesting, and safe. Such infrastructure is beginning to
make its appearance in domains such as transportation, medicine, commerce, and finance, with
implications for individual humans and societies. This emergence sometimes arises in conversations
about an Internet of Things, but that effort generally refers to the mere problem of getting ‘things’ onto
the Internet, not to the far grander set of challenges associated with building systems that analyze those
data streams to discover facts about the world and permit ‘things’ to interact with humans at a far
higher level of abstraction than mere bits.

We now come to a critical issue: is working on classical human-imitative AI the best or only way to
focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact
been in areas associated with human-imitative AI—areas such as computer vision, speech recognition,
game-playing, and robotics. Perhaps we should simply await further progress in domains such as these.
There are two points to make here. First, although one would not know it from reading the newspapers,
success in human-imitative AI has in fact been limited; we are very far from realizing human-imitative
AI aspirations. The thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that are not present in other areas of engineering.
Second, and more importantly, success in these domains is neither sufficient nor necessary to solve
important IA and II problems. On the sufficiency side, consider self-driving cars. For such technology
to be realized, a range of engineering problems will need to be solved that may have little relationship
to human competencies (or human lack-of-competencies). The overall transportation system (an II
system) will likely more closely resemble the current air-traffic control system than the current
collection of loosely-coupled, forward-facing, inattentive human drivers. It will be vastly more
complex than the current air-traffic control system, specifically in its use of massive amounts of data
and adaptive statistical modeling to inform fine-grained decisions. Those challenges need to be in the
forefront versus a potentially-distracting focus on human-imitative AI.

IA will also remain quite essential, because for the foreseeable future, computers will not be able to
match humans in their ability to reason abstractly about real-world situations. We will need
well-thought-out interactions of humans and computers to solve our most pressing problems. And we
will want computers to trigger new levels of human creativity, not replace human creativity.
Beyond the historical perspectives of McCarthy and Wiener, we need to realize that the current public
dialog on AI—which focuses on narrow subsets of both industry and of academia—risks blinding us
to the challenges and opportunities that are presented by the full scope of AI, IA, and II. This scope is
less about the realization of science-fiction dreams or superhuman nightmares, and more about the
need for humans to understand and shape technology as it becomes ever more present and influential
in their daily lives. Moreover, in this understanding and shaping, there is a need for a diverse set of
voices from all walks of life, not merely a dialog among the technologically attuned. Focusing
narrowly on human-imitative AI prevents an appropriately wide range of voices from being heard.
While industry will drive many developments, academia will also play an essential role, not only in
providing some of the most innovative technical ideas, but also in bringing researchers from the
computational and statistical disciplines together with researchers from other disciplines whose
contributions and perspectives are sorely needed—notably the social sciences, the cognitive sciences,
and the humanities. On the other hand, while the humanities and the sciences are essential as we go
forward, we should also not pretend that we are talking about something other than an engineering
effort of unprecedented scale and scope; society is aiming to build new kinds of artifacts. These
artifacts should be built to work as claimed. We do not want to build systems that help us with medical
treatments, transportation options, and commercial opportunities only to find out after the fact that
these systems don’t really work, that they make errors that take their toll in terms of human lives and
happiness. In this regard, as I have emphasized, there is an engineering discipline yet to emerge for the
data- and learning-focused fields. As exciting as these latter fields appear to be, they cannot yet be
viewed as constituting an engineering discipline. We should embrace the fact that we are witnessing
the creation of a new branch of engineering. The term engineering has connotations—in academia and
beyond—of cold, affectless machinery, and of loss of control for humans, but an engineering discipline
can be what we want it to be. In the current era, we have a real opportunity to conceive of something
historically new: a human-centric engineering discipline. I will resist giving this emerging discipline
a name, but if the acronym AI continues to serve as placeholder nomenclature going forward, let’s be
aware of the very real limitations of this placeholder. Let’s broaden our scope, tone down the hype,
and recognize the serious challenges ahead.

II. Suggestions / Countermeasures

What risks need to be tackled?


The prerequisite to any adequate, implementable and effective risk-reduction measure is a clear
understanding of the problem that is to be tackled. In that regard, one of the challenges of discussing
the strategic and nuclear risks posed by AI is that the risk picture is multilayered. Risks and challenges
can be analysed on three levels.

First, and most broadly, there are the risks inherent to the nature and limitations of AI technology. These include broad and general challenges such as the unpredictability of machine learning systems, the lack of reliability of autonomous systems, and their vulnerability to adversarial attacks such as cyberattacks, data poisoning and spoofing.
Second, there are the risks posed by the use of AI for military applications. These include the challenge of a state signalling its own capabilities and intentions and understanding those of its opponent. This is particularly the case when AI-powered military technologies are used to deal with the acceleration of the speed of warfare. A related risk in this regard is the potential erosion of human control over the use of force. A further key concern is the acquisition of military AI by non-state actors, which is facilitated by the dual-use nature of AI technology.

Third, there are the specific risks posed by the use of AI in connection with nuclear weapon systems.
These include AI undermining the confidence of nuclear-armed states in their second-strike
capabilities. AI may also be employed to weaken the cybersecurity of nuclear force-related systems.
AI also has the potential to provide new tools for influence operations on nuclear decision makers. It
can increase the risk of accidental or inadvertent escalation, due to system failure or misuse of
technology.

How can these risks be mitigated?


Risk-reduction measures can take many forms. These can be unilaterally implemented technical,
organizational or policy measures that directly attempt to mitigate the risk. They can be bilaterally or
multilaterally agreed confidence-building measures (CBMs) to build trust among parties, support crisis
prevention and facilitate crisis mitigation. Or they can be internationally agreed regulatory frameworks
in the form of hard or soft laws. Hard law regulation could take the form of a legally binding
international treaty banning or restricting the development or use of a certain technology or capability.
Soft law could take the form of a political declaration or international code of conduct that would
identify best practices or provide guidance to states, academia or industry.
The type of measure that would be most appropriate will depend on a number of factors: relevance,
efficiency and, most importantly, feasibility. When choosing a measure, it is important to be pragmatic
and to weigh the advantages and disadvantages of each.
International law, whether hard or soft, has the greatest normative power, but it may take a long time, years or even decades, to be negotiated. Years of multilateral negotiations may also lead to
measures that are limited in the scope of their application. Moreover, given that states might end up
agreeing on the lowest common denominator, such measures may have limited relevance or
effectiveness in reducing risk. In addition, unless they are widely supported by the community of states,
their actual effect on the way that states conduct themselves may eventually be negligible.

Some Suggestions to Overcome Some of the Problems Created by AI


We may be able to deal with some of the problems created by automation:
1. Create a vibrant and rigorous welfare system for the unemployed, depending on their education and skill levels.
2. Tax robots and robot-employing businesses more aggressively.
3. Hold the owners, managers and developers of AI responsible, always giving the benefit of the doubt to the people.
4. Always prepare a kill switch to shut down malfunctioning AI or reboot it to a clean state.
5. Log AI computations and decisions to trace and understand any foreseeable vulnerabilities and threats (a minimal logging sketch follows this list).
6. Globalize AI technologies by banning all forms of AI patenting.
7. Ban aggressive AI use in military and warfare.
8. Implement an international monitoring system of AI accountability.
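Suggestion 5 can be made concrete with a small sketch. The snippet below is a hypothetical illustration rather than a reference implementation: it writes every automated decision to a structured audit log so that it can later be traced and reviewed. The model name, input fields and decision shown are invented.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def log_decision(model_id: str, inputs: dict, output, explanation: str) -> None:
    """Record one automated decision as a JSON line for later tracing and review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,
    }
    audit_log.info(json.dumps(record))

# Hypothetical credit-approval decision being recorded.
log_decision(
    model_id="credit-scoring-v3",
    inputs={"income": 42000, "existing_loans": 1},
    output="rejected",
    explanation="debt-to-income ratio above threshold",
)
```

In practice such records would be shipped to tamper-evident storage and retained according to regulatory requirements; the console logger here only keeps the example self-contained.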

In order to combat these cyberthreats, AI solutions can utilize machine learning and recurrent neural networks. Interconnected artificial neurons activate together when they detect patterns in the data that typically represent phishing websites. Benign and phishing URLs are collected to create a dataset and to identify content-based features. Together with supervised machine learning, the probability of a website being legitimate or malicious is then determined.
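As a rough illustration of this pipeline, the sketch below extracts a handful of content-based features from URLs and trains a simple supervised classifier to output the probability that a URL is malicious. A plain logistic regression stands in for the recurrent neural networks mentioned above, and the URLs, labels and feature choices are invented for demonstration; production systems would use far larger datasets and feature sets.

```python
import re
from urllib.parse import urlparse

import numpy as np
from sklearn.linear_model import LogisticRegression

SUSPICIOUS_TOKENS = ("login", "verify", "secure", "account", "update")

def url_features(url: str) -> list:
    """A few simple content-based features; real systems extract thousands."""
    host = urlparse(url).netloc
    return [
        len(url),                                   # overall URL length
        host.count("."),                            # number of dots in the host name
        float(bool(re.search(r"\d", host))),        # digits in the host name
        float("@" in url or "-" in host),           # common obfuscation characters
        sum(tok in url.lower() for tok in SUSPICIOUS_TOKENS),  # phishing-style keywords
    ]

# Tiny, made-up training set: 0 = benign, 1 = phishing.
urls = [
    "https://www.example.com/about",
    "https://docs.python.org/3/library/",
    "http://secure-login.example-bank.verify-account.xyz/update",
    "http://192.168.0.1.phish.example.top/login?user=@",
]
labels = [0, 0, 1, 1]

X = np.array([url_features(u) for u in urls])
clf = LogisticRegression().fit(X, labels)

test_url = "http://account-verify.example.support/login"
prob = clf.predict_proba([url_features(test_url)])[0][1]
print(f"P(phishing) = {prob:.2f}")
```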
All companies are at risk of being attacked by cyber actors. Lookalike, name spoofing and phishing
attacks can target any industry, including public administration, healthcare, pharmaceuticals,
insurance, research and retail. When it comes to lookalike and name spoofing, AI solutions
continuously check the domain and display names landing in the organization to find hidden patterns
indicating the company may be undergoing spoofing attacks.
In the case of phishing URL detection, for example, the algorithm can be trained on millions of
phishing samples. As a result, it detects phishing URLs based on thousands of features extracted from
a single URL in high-dimensional space. It is hard for humans to imagine four- or five-dimensional space, since the world appears three-dimensional to the human eye, but AI can work in a thousand-dimensional space and draw conclusions from it.
Despite the benefits, implementing functional AI solutions with high accuracy is a challenge for most
companies. In order to do so, companies should consider these best practices.
1. The AI model must be trained on real-world data from production. Companies should start the data
collection long before the development of the AI solution.
2. Companies should monitor how the character of data changes over time. A pandemic or climate
change can be a change worth tracking.
3. Companies should develop and use explainable AI techniques. Only explainable AI is capable not only of detecting phishing attacks but also of explaining the basis of its decisions (a minimal illustration follows this list).
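To make the third point concrete, the following sketch shows one very simple form of explainability for a linear phishing classifier: because the model is linear, the product of each coefficient and the corresponding feature value gives that feature's additive contribution to the decision score. The feature names and numbers are invented, and real deployments would typically standardize features and use richer explanation methods (for example, SHAP values).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["url_length", "num_dots", "digits_in_host", "obfuscation", "keyword_hits"]

# Hypothetical, already-extracted feature vectors; labels: 0 = benign, 1 = phishing.
X = np.array([
    [24.0, 1, 0, 0, 0],
    [35.0, 2, 0, 0, 0],
    [61.0, 4, 1, 1, 2],
    [48.0, 3, 1, 1, 1],
])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)

# For a linear model, coefficient * feature value is that feature's additive
# contribution to the log-odds of the "phishing" class.
sample = np.array([55.0, 3, 1, 0, 2])
contributions = clf.coef_[0] * sample
for name, value in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:15s} {value:+.3f}")
print(f"P(phishing) = {clf.predict_proba([sample])[0][1]:.2f}")
```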
The cyberattack space is getting massive, and it keeps growing. Analyzing organizational threats is
beyond mere human intervention. Companies need emerging technologies to support security teams.
AI in cybersecurity is still new, but the capacity to learn new things, make informed decisions and
improve models is unmatched, as it can analyze a vast amount of information and provide the data that
security professionals need to enhance security and protect against cyberattacks.

III. Conclusion
In this paper we introduced a scale to measure threat perceptions of artificial intelligence. The scale can be used to assess citizens’ concerns regarding the use of AI systems. As AI technologies are increasingly introduced into everyday life, it is crucial to understand under what circumstances citizens might feel threatened. Threats can be understood as a precondition of fear. According to the fear appeal literature, being frightened can lead to denial and avoidance of the threatening object. Thus, if people perceive AI as a serious threat, this could cause non-adoption of the technology.
However, AI is an umbrella term for a huge variety of different applications. AI applications can fulfill various functions and are used in almost every societal field. Arguably, there are huge variances in threat perceptions of different functions and domains of application. With the TAI scale we propose a measurement that accounts for this context specificity.
First, the results suggest that threat perceptions of distinct AI functions can be reliably differentiated by respondents. Recognition, prediction, recommendation and decision-making are indeed perceived as different functions of AI systems. However, depending on the context evaluated, the measure showed diverging factorial validity. In one case the indicator items had significant shared variance with more than one dimension. This impairment of discriminatory power indicates that thorough pre-testing of the adapted measures and data quality control are of utmost importance when devising the survey instrument in subsequent study designs. In doing so, researchers need to make sure that respondents fully comprehend the item wording and that the object of potential threat is clearly recognizable. This becomes especially important when respondents are confronted with new and technically sophisticated AI systems, for which there is not yet enough direct personal experience.
Second, threat perceptions are shown to vary between the different domains in which AI systems are deployed. This suggests that the notion of a general fear of AI needs to be refined in favor of a broader conception not only of what actions AI is able to perform, but also of what exactly is at stake in a given situation. In cases where AI systems seem useful and the consequences of their application appear insubstantial, the introduction of AI in another domain might evoke entirely opposite reactions. Thus, while general predispositions concerning digital technology certainly do play a role when it comes to the evaluation of innovative AI systems, a more fine-grained approach is necessary and appears to be fruitful with the developed measurement. Respondents’ threat perceptions in this study varied considerably between domains. In particular, the use of AI in medical treatment was perceived as only slightly threatening, whereas threat perceptions were considerably higher in the domains of job recruitment and loan origination. Regarding the levels of threat perceptions concerning the functionalities, it is evident that the decision-making function is perceived as the most threatening within all three domains. Arguably, this might be based on the loss of human autonomy. As this is only a hypothesis at this point, further studies should elaborate on these findings. Future applications of the TAI scale will yield further insights concerning the items’ and scales’ sensitivity with regard to different domains of AI applications.
Third, threat perceptions are reliable predictors of self-reported fear. As the measurement of actual fear is rather complicated via a survey instrument, the inquiry into threat perceptions appears preferable; it is also reasonably well connected to individual self-reports of experienced fear. Further studies should elaborate on these findings and focus on the behavioral impact of AI-related threat perceptions. As the fear appeal literature suggests, one might expect that high levels of AI-induced fear lead to rejection of the technology or even to protest behavior.
Highlighting the good fit of our scale, we encourage researchers to implement the TAI scale in research focusing on public perceptions of AI systems. As outlined earlier, the TAI scale can be seen as a toolbox. Hence, it is possible to integrate into a survey only those functional dimensions that actually fit the AI system under research. Nevertheless, we also advise researchers to be mindful when using the scale. In practice, there is a lot of confusion about the terminology of AI - even within the scientific community. Researchers using the scale have to make sure that the AI system under research actually performs AI tasks. Given that non-experts usually serve as respondents, scholars have the responsibility to inform them properly about what the specific AI system under consideration is and what it is able to do. Otherwise, researchers would make claims about threat perceptions of AI without actually examining AI.
Future studies should test the TAI scale in surveys employing representative sampling in order to make statements about the actual level of threat perceptions regarding the different functionalities of AI systems in various domains. With that, it would be possible to grasp the public threat perception of AI systems and draw conclusions for the further implementation of AI in society. However, such research needs to be thoroughly theorized, as the mere information on levels of threat perceptions of AI among the public is of little academic value on its own.
Another promising future direction of research could focus on the role of knowledge in attitude building. Knowledge about AI technology could influence the way individuals feel threatened by AI. Additionally, with this extension of research one may test whether the perceptions of what an AI system is capable of do in fact match the technical level. It may well be that the imagination of individuals and the real performance of AI systems do not correspond.
Both AI and nuclear weapons have the potential to cause significant harm and damage if they are
misused or fall into the wrong hands. However, the nature and potential consequences of the threats
posed by these technologies are different. Nuclear weapons are designed to cause widespread
destruction and death on a massive scale, and the use of these weapons would have catastrophic
consequences for humanity and the planet. AI, on the other hand, is a tool that can be used for a wide
range of applications, both beneficial and potentially harmful. While there are concerns about the
potential threats posed by AI, it is important to recognize that these threats can be mitigated through
careful development and use of the technology, as well as through effective regulation and oversight.
The development of ethical frameworks and guidelines for the development and use of AI can help to
ensure that this technology is used in a way that benefits society and minimizes potential risks. Overall,
it is important to recognize that both nuclear weapons and AI have the potential to cause harm, and
efforts should be made to ensure that both are developed and used in a way that prioritizes the safety
and well-being of humanity.

BIBLIOGRAPHY

1. https://www.precedenceresearch.com/artificial-intelligence-market
2. https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market
3. https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-market-74851580.html
4. https://www.mordorintelligence.com/industry-reports/global-artificial-intelligence-market
5. https://www.polarismarketresearch.com/industry-analysis/artificial-intelligence-market
6. https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
7. https://www.mckinsey.com/capabilities/quantumblack/our-insights/confronting-the-risks-of-artificial-intelligence
8. https://link.springer.com/article/10.1007/s12369-020-00734-w
9. https://bernardmarr.com/what-are-the-negative-impacts-of-artificial-intelligence-ai/
10. https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence
11. https://builtin.com/artificial-intelligence
12. https://www.makeuseof.com/is-ai-dangerous-5-immediate-risks-of-artificial-intelligence/
13. https://bernardmarr.com/what-is-ai/
14. https://bernardmarr.com/is-artificial-intelligence-ai-a-threat-to-humans/
15. https://www.ai-bees.io/post/how-artificial-intelligence-impacts-the-future-of-work
16. https://www.analyticsinsight.net/top-10-artificial-intelligence-risks-that-humans-will-face-in-2075/
17. https://www.analyticsinsight.net/superintelligent-ai-can-we-control-it-before-its-a-threat-to-humanity/
18. https://learn.g2.com/ai-ethics
19. https://chat.openai.com/chat