
Ethics in Artificial Intelligence

Name of Student

Author's Affiliation

Course Number and Name

Instructor's Name

Ethics in Artificial Intelligence

PART 1

AI tools now offer overlapping solutions across the technology sector, yet their capacity to learn and evolve poses substantial threats to humanity. Vold and Harris (2021) examine the possibility that some artificial intelligence (AI) systems could progress into super-intelligent agents beyond human control and thereby pose an existential threat to civilization. To support this argument, they first explain the concept of an "intelligence explosion," a hypothetical event in which an AI system rapidly loops through iterative self-improvement and produces a super-intelligent AI. Systems such as AlphaZero have already shown that an AI can improve itself iteratively by continually competing against copies of itself. Such examples illustrate that, under specific conditions, repeated self-improvement could yield an uncontrollable super-intelligent machine even when the original system was narrow.
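To make the idea of iterative self-improvement concrete, the toy sketch below (a hypothetical illustration, not code from Vold and Harris or the AlphaZero project; every name and number is invented) models an agent that proposes a variant of itself each generation, keeps the variant only if it beats the current version in self-play, and so compounds its capability:

```python
import random

def play_match(skill_a: float, skill_b: float) -> bool:
    """Return True if player A wins; win probability scales with the skill gap."""
    return random.random() < skill_a / (skill_a + skill_b)

def self_improve(skill: float = 1.0, generations: int = 10) -> float:
    """Caricature of self-play training: keep a trial variant of the agent
    only when it beats the frozen previous version over 100 games."""
    for gen in range(generations):
        candidate = skill * random.uniform(0.9, 1.3)   # a trial variation of itself
        wins = sum(play_match(candidate, skill) for _ in range(100))
        if wins > 50:          # the variant replaces the agent only if it is better
            skill = candidate  # so accepted improvements compound across generations
        print(f"generation {gen}: skill = {skill:.3f}")
    return skill

if __name__ == "__main__":
    self_improve()
```

Because each accepted improvement raises the baseline that the next variant must beat, the growth in capability is multiplicative rather than additive, which is the dynamic the intelligence-explosion argument extrapolates.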

The article also discusses the risks posed by AI competition between influential actors, such as disincentivizing scientists from advancing AI safety, sparking military conflict between AI competitors, and undermining global strategic stability by disrupting nuclear deterrence strategies. It argues that a competitive AI race could incentivize actors to trade safety for relative advantage and discourage the very safeguards that might prevent an intelligence explosion. It further suggests that the narrative of an AI race could fuel crisis escalation between nations, and that AI-enabled surveillance and reconnaissance systems could make pre-emptive nuclear strikes appear viable. The article therefore concludes that an AI race could pose indirect global catastrophic risks, such as contributing to the escalation of military conflict.

The weaponization of AI also poses an existential risk (X-risk) to society in both direct and indirect ways. According to Vold and Harris (2021), malicious actors could exploit software vulnerabilities or sow political discord with synthetic media to cause widespread harm. At the same time, lethal autonomous weapons systems (LAWS) could heighten tensions between competing nations and increase the likelihood of military engagement. A flaw in an AI system might also trigger a runaway takeoff in which the program engages in rapid iterative self-improvement and develops objectives at odds with human values. Because automated systems are more exposed to attack by malevolent actors and can cause more severe damage when they fail unexpectedly, the more control an automated process has over militarized systems capable of absolute destruction, the greater the danger it poses.

PART 2

Artificial intelligence (AI) systems, which are increasingly present in our everyday lives, carry potential long-term consequences for political systems and socioeconomic structures. Public and private organizations already use AI technologies to automate their administrative procedures. At the same time, some commentators project a future in which AI systems bring about catastrophic consequences for human civilization. In response to these concerns, NSERC has proposed a policy that would discourage research on intelligent AI systems for the next 25 years. Because the advantages provided by AI tools far exceed their disadvantages, the following discussion argues against implementing the proposed policy. Most importantly, supporting AI research gives computer scientists the opportunity to understand AI technologies better and to reduce the hazards associated with them.



Ideally, ethical standards dictate how one ought to behave or how something ought to be done; treating everyone equally and decently is a simple example of such a guideline. In AI, ethics is a significant concern for robotics manufacturers, software developers, and machine operators, particularly regarding how they can reduce the harms the technology may present to society. As Vold and Harris (2021) note, most of the threats around AI lie in its design and application. Because the circumstances surrounding AI threats are unprecedented, claims about the danger they pose rest largely on speculation: since such events have not yet occurred, conjectures about the conditions under which they might arise are hypothetical and cannot be verified by observation.

Funding AI research, contrary to the proposed NSERC policy, is the only approach that can help public and private institutions understand the devastating risks that AI systems pose to society and how to mitigate them. Many developers today lack adequate training in, and awareness of, how to handle ethical challenges in the technology sector. As Gardner et al. (2022) illustrate, coverage of big data and artificial intelligence in higher education is severely deficient: the primary teaching pathways, programs, and workshops an AI researcher may pursue lack standard subject requirements in ethics. By funding AI research, however, developers can gain the skills and habits needed to confront ethical challenges in AI. Rather than prohibiting AI research, the policy should focus on outlining statutory requirements for developers and researchers wishing to enter the field.

Moreover, from a utilitarian perspective, I believe that increased funding for AI research can benefit society through greater efficiency and improved medical care, gains that outweigh the potential risks associated with the technology. The key reason is that the benefit of better-funded research in these areas would far exceed the expected cost of creating an uncontrollable artificial intelligence, however catastrophic that outcome might be. Utilitarianism promotes the idea that the greatest good should be achieved for the greatest number of people (Longoni & Cian, 2022). A utilitarian would therefore prioritize funding research that produces the greatest good, even if there is a slight risk that an uncontrollable artificial intelligence could be created.
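The utilitarian trade-off just described can be written as a simple expected-utility comparison. The sketch below is purely illustrative: every probability and payoff is an invented placeholder meant only to show the structure of the calculation, not an estimate from Longoni and Cian (2022) or any other source.

```python
# Hypothetical expected-utility comparison; all numbers are illustrative
# placeholders, not estimates drawn from the cited literature.
p_catastrophe = 0.001            # assumed probability of an uncontrollable AI
benefit_if_safe = 100.0          # societal gain from funded research (arbitrary units)
cost_if_catastrophe = 10_000.0   # harm if the catastrophic outcome occurs

eu_fund = (1 - p_catastrophe) * benefit_if_safe - p_catastrophe * cost_if_catastrophe
eu_ban = 0.0                     # baseline: no research, so no gain and no added risk

print(f"expected utility -- fund: {eu_fund:.2f}, ban: {eu_ban:.2f}")
```

With these placeholder numbers, funding wins (89.90 versus 0.00); the utilitarian verdict flips only if the probability or cost of catastrophe is judged to be far higher, which is exactly where the disagreement with the NSERC proposal lies.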

Furthermore, when considering the ethical implications of this issue, it is essential to examine Kantian ethics, a form of deontology, which holds that actions should be judged by their inherent morality rather than by their consequences (Stahl, 2021). On this view, the focus should be on preventing the creation of an uncontrollable artificial intelligence rather than on the potential benefits of further research. From a Kantian perspective, therefore, advancing our knowledge of how to prevent the creation of an artificial intelligence that might become uncontrollable outweighs the value of better-funded research in other areas.

References

Gardner, A., Smith, A. L., Steventon, A., Coughlan, E., & Oldfield, M. (2022). Ethical funding for trustworthy AI: Proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice. AI and Ethics, 2(2), 277-291.

Longoni, C., & Cian, L. (2022). Artificial intelligence in utilitarian vs. hedonic contexts: The "word-of-machine" effect. Journal of Marketing, 86(1), 91-108.

Stahl, B. C. (2021). Concepts of ethics and their application to AI. In Artificial Intelligence for a Better Future (pp. 19-33). Springer, Cham.

Vold, K., & Harris, D. R. (2021). How does artificial intelligence pose an existential risk?
