This document deals with deontology in the development of artificial intelligence. It presents the types of artificial intelligence, its main applications and how the European Parliament and the Asilomar conference address deontology. Finally, it analyzes possible consequences of the lack of deontological principles such as the development of autonomous weapons and social manipulation.

SALESIAN POLYTECHNIC UNIVERSITY

Electronic Engineering
Deontology
Name: Bryan Yépez

Group 2

Date: 2019-10-29

Teacher: Manolo Acosta

Topic: Deontology in the development of Artificial Intelligence

Theoretical framework
Introduction

A profound change is coming that will affect all of society, and the people who bear some responsibility in this transition have both a great duty and the opportunity to give it the best possible shape. Artificial intelligence (AI) is a branch of computer science with strong roots in other areas such as logic and the cognitive sciences. There are many definitions of artificial intelligence, but they broadly agree that it is the combination of algorithms designed to create machines with capabilities resembling those of human beings. Our daily lives are increasingly influenced by decisions produced by AI, from things as simple as search suggestions in a web browser to tasks as complex as piloting an airplane, where hundreds of human lives depend on the correct design and functioning of the AI involved.

Types of Artificial Intelligence

There are several types of artificial intelligence (Russell & Norvig, 2010):

 Systems that think like humans: they automate activities such as decision
making, problem solving and learning. An example is artificial neural networks.

 Systems that act like humans: These are computers that perform tasks in a
similar way to how people do. This is the case of robots.
 Systems that think rationally: they try to emulate the rational logical thinking of
humans, that is, they investigate how to ensure that machines can perceive,
reason and act accordingly. Expert systems fall into this group.

 Systems that act rationally: ideally, systems that try to imitate rational human behavior, such as intelligent agents.
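As a concrete illustration of the first category, the artificial neural networks mentioned above are built from simple trainable units. The following minimal sketch (a hypothetical example, not taken from the sources cited) trains a single perceptron, the simplest such unit, on the logical OR function:

```python
# A single perceptron: one weight per input plus a bias, updated by the
# classic perceptron learning rule. Illustrative sketch only.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and bias from (input, target) pairs."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, target in samples:
            # step activation: fire if the weighted sum exceeds zero
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - y
            # nudge weights and bias toward the correct answer
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Truth table for logical OR
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 1, 1, 1]
```

Because OR is linearly separable, the perceptron converges to a correct set of weights within a few epochs; real neural networks chain many such units and train them jointly.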

Main applications of Artificial Intelligence


AI processes are applied in real systems in a wide variety of branches and problems
(Gevarter, 1987):
• Management and control: intelligent analysis, goal setting.
• Manufacturing: design, planning, programming, monitoring, control, project
management, simplified robotics and computer vision.
• Education: practical training, exams and diagnosis.
• Engineering: design, control and analysis.
• Equipment: design, diagnosis, training, maintenance, configuration, monitoring and sales.
• Cartography: interpretation of photographs, design, resolution of cartographic
problems.
• Professions: law, medicine, accounting, geology, chemistry.
• Software: teaching, specification, design, verification, maintenance.
• Weapon systems: electronic warfare, target identification, adaptive control, image
processing, signal processing.
• Data processing: education, natural language interface, intelligent access to data and
database managers, intelligent data analysis.
• Finance: planning, analysis, consulting.

But AI also has numerous commercial applications in today's world (Negrete, 1992).


• Configuration: selection of distribution of the components of a computing system.
• Diagnosis: computer hardware, computer networks, mechanical equipment, medical
problems, telephone breakdowns, electronic instrumentation, electronic circuits,
automobile breakdowns.
• Interpretation and analysis: geological data for oil prospecting, chemical compounds,
signal analysis, complex mathematical problems, military threat assessment, electronic
circuit analysis, biological data (coronary, cerebral and respiratory), radar, sonar and
infrared information.
• Monitoring: equipment, process monitoring, manufacturing and management of scientific processes, military threats, vital functions of hospitalized patients, financial data on teleprinter punched paper strips, industrial and government reports.
• Planning: asset and liability management, portfolio management, credit and loan
analysis, contracts, shop floor scheduling, project management, experiment planning,
printed circuit board production.
• Intelligent interfaces: instrumentation (fiscal) hardware, computer programs, multiple
databases, control panels.
• Natural language systems: interfaces with databases in natural language, tax
management (accounting aids), consulting on legal issues, estate planning, banking
systems consulting.
• Design systems: very high-scale integration of microcircuits, synthesis of electronic
circuits, chemical plants, buildings, bridges and dams, transportation systems.
• Computer vision systems: selection of parts and components, assembly, quality
control.
• Software development: automatic programming.

Deontology in AI development according to the European Parliament

Reflection on deontological frameworks offers the following common parameters and ideas, which can serve as a shared basis for conceptualizing and, in due course, regulating the practices and consequences of AI, according to the European Parliament (Palmer, 2016).

 AI should be developed for the good of humanity and benefit the greatest number. The risk of exclusion must be reduced.

 The standards regarding AI must be very high with regard to human safety: this requires ethical and goal-oriented oversight of research, transparency, and cooperation in the development of artificial intelligence.

 Researchers and designers have a crucial responsibility: all AI research and development must be characterized by transparency, reversibility and traceability of processes.
 Need for human control: at all times it must be humans who decide what robotic or AI-based systems can or cannot do.

 Manage risk: The more serious the potential risk, the stricter the risk control and
management systems must be.

 No development of AI for the purpose of creating weapons of destruction.

 Uncertainty: it is recognized that advances in these fields are uncertain, in areas and scopes that in some cases are unimaginable. Regulations and frameworks must therefore be rethought in the medium term, once further advances have become a reality.

Deontology in AI development according to Asilomar

In February 1975, a group of geneticists met in the small California town of Asilomar to decide whether their work could destroy the world. It was the beginning of genetic engineering and DNA manipulation, and from that meeting emerged a series of principles and a strict ethical framework for biotechnology. Four decades later, organized by the Future of Life Institute, another group of scientists met in the same place with the same kind of problem, this time to analyze the possible consequences of artificial intelligence. The underlying idea was clear and shared: a profound change is coming that will affect all of society, and the people who bear some responsibility in this transition have both a great duty and the opportunity to give it the best possible shape.

1. Protecting humans from harm caused by robots: human dignity.

2. Respect the refusal to be cared for by a robot.

3. Protect human freedom from robots.

4. Protect privacy and data use: especially when autonomous cars, drones, personal
assistants or security robots advance.

5. Protection of humanity from the risk of manipulation by robots: especially for certain groups (the elderly, children, dependents) in which robots can generate artificial empathy.

6. Avoid the dissolution of social ties by having robots monopolize, in a certain sense, the relationships of certain groups.
7. Equal access to progress in robotics: like the digital divide, the robotics divide can become a critical issue.

8. Restricting access to enhancement technologies, by regulating the idea of transhumanism and the search for physical and/or mental improvements.

Possible consequences of the lack of Deontology in the development of AI

The lack of application of deontological principles in the development of artificial intelligence can entail risks that make it dangerous, as numerous influential figures have warned, including Stephen Hawking and Elon Musk, CEO of Tesla and SpaceX (TecBeat Magazine, 2018). Some possible consequences are:

1. Autonomous weapons: weapons programmed to kill pose a serious risk to the future of AI. It is possible that nuclear weapons will be replaced by autonomous weapons. They are dangerous not only because they can become completely autonomous and act without supervision, but also because of the people into whose hands they can fall.

2. Manipulating society: social networks can be a great source of information about anyone, and beyond being used for marketing and serving targeted ads, they can be exploited in many other ways. The Cambridge Analytica scandal surrounding the US presidential elections and Brexit in the United Kingdom showed the enormous power that data can give to manipulate people; an AI able to exploit algorithms and personal data can therefore be extremely dangerous.

3. Invasion of privacy leading to social oppression: it is possible to follow a user's tracks on the Internet and exploit a great deal of information, invading their privacy. In China, for example, information such as footage from facial recognition cameras and records of how people behave, whether they smoke, or whether they play a lot of video games will feed the social credit system. This invasion of privacy can therefore turn into social oppression.

4. Divergence between our objectives and those of the AI machine: if our objectives do not coincide with those of the machine, the actions we ask it to carry out can end in disaster. For example, we might order an AI to take us somewhere as quickly as possible without specifying that it must respect traffic rules so as not to endanger human lives.
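The objective-divergence problem described above can be sketched in a few lines. The example below (hypothetical, for illustration only) shows how a planner that literally minimizes travel time picks a dangerous plan when the speed-limit constraint is left out of the objective:

```python
# Objective misspecification: "as fast as possible" without the
# implicit safety constraint. Illustrative toy example.

SPEED_LIMIT = 50  # km/h, the constraint the human forgot to state

# candidate plans: (name, speed in km/h) over a 10 km trip
plans = [("cautious", 40), ("legal_max", 50), ("reckless", 120)]

def travel_time(speed, distance=10):
    return distance / speed  # hours

# Objective as literally stated: minimize travel time
best_literal = min(plans, key=lambda p: travel_time(p[1]))

# Objective with the implicit human constraint made explicit
legal = [p for p in plans if p[1] <= SPEED_LIMIT]
best_safe = min(legal, key=lambda p: travel_time(p[1]))

print(best_literal[0])  # "reckless" (the machine's optimum)
print(best_safe[0])     # "legal_max" (what the human meant)
```

The two objectives differ by a single constraint, yet the plans they select diverge completely; this is the gap that deontological principles on human control and risk management are meant to close.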

5. Discrimination: because AI machines can collect, analyze and track information, they can also use this information against you. For example, an insurance company may deny you coverage because of how many times cameras have captured images of you talking on the phone, or a job applicant may lose an opportunity because of a weak network of contacts on the Internet.

Examples of failures in the development of Artificial Intelligence

Below are examples of how the lack of deontology in the development of artificial intelligence has affected the behavior of some artificial agents.

1) Tay was an artificial intelligence launched by Microsoft in 2016 to interact on Twitter and learn to behave the way ordinary people did on social networks. The project was supposed to be able to converse with people in a fun, informal way and to learn from the conversations it had. The problem arose when Tay learned too much and began making racist comments, even sharing thoughts that sympathized with Hitler and the Nazis. The project was a total failure and was shut down 16 hours after launch.
2) The Promobot IR77 was a Russian robot designed to have face-to-face interactions with humans and learn from them. Apparently it learned too much: it escaped on its own from the facility where it was being tested, traveling 45 meters before running out of battery.
BIBLIOGRAPHY

- Russell, S. J., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River: Prentice Hall.

- Gevarter, M. (1987). Intelligent Machines. Madrid: Díaz de Santos, S.A.

- Negrete, J. (1992). From Philosophy to Artificial Intelligence. Mexico: Grupo Noriega Editores.

- Moor, J. (2006). The Nature, Importance and Difficulty of Machine Ethics. IEEE Intelligent Systems, 21(4), 18-21.

- Palmer, J. (2009, August 3). Call for Debate on Killer Robots. BBC News.

- TecBeat Magazine (2018). AI vs Ethics. Volume 93 (1st edition). Retrieved from: https://www.itmagaxine.com/2018/11/20/5-riesgos-de-la-inteligencia-artificial-que-trabajon-hacerla-peligrosa/
