Theoretical Framework
Electronic Engineering
Deontology
Name: Bryan Yépez
Group 2
Date: 2019-10-29
Theoretical framework
Introduction
A profound change is coming that will affect society as a whole, and the people who carry some kind of responsibility in this transition have both a great duty and the opportunity to give it the best possible shape. Artificial intelligence (AI) is one of the branches of computer science, with strong roots in other areas such as logic and the cognitive sciences. There are many definitions of artificial intelligence, but all of them agree that it is a combination of algorithms designed to create machines with capabilities comparable to those of human beings. Our daily lives are gradually influenced by decisions resulting from artificial intelligence, from things as simple as search suggestions in our web browser to tasks as complex as piloting an airplane, where hundreds of human lives depend on the correct development and functioning of the AI involved.
AI systems are commonly grouped into four categories:
• Systems that think like humans: they automate activities such as decision making, problem solving, and learning. Artificial neural networks are an example (a minimal code sketch follows this list).
• Systems that act like humans: computers that perform tasks in a way similar to how people do them. This is the case of robots.
• Systems that think rationally: they try to emulate the rational, logical thinking of humans; that is, they investigate how to make machines perceive, reason, and act accordingly. Expert systems fall into this group.
• Systems that act rationally: ideally, those that try to imitate human behavior rationally, such as intelligent agents.
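To make the first category concrete, the following is a minimal sketch, not taken from any cited source, of a single perceptron (the simplest artificial neural network) learning the logical AND function; the function names, learning rate, and number of epochs are assumptions chosen for the example.

```python
# Illustrative only: a single perceptron learning the logical AND function.
# All names and parameters here are assumptions made for this sketch.

def train_perceptron(samples, labels, learning_rate=0.1, epochs=20):
    """Train a two-input perceptron with a step activation function."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            # Step activation: output 1 if the weighted sum is positive.
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            # Perceptron learning rule: adjust weights by the prediction error.
            error = target - output
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error
    return weights, bias

if __name__ == "__main__":
    samples = [(0, 0), (0, 1), (1, 0), (1, 1)]   # truth table of AND
    labels = [0, 0, 0, 1]
    weights, bias = train_perceptron(samples, labels)
    for (x1, x2), target in zip(samples, labels):
        prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        print(f"AND({x1}, {x2}) -> predicted {prediction}, expected {target}")
```

Modern networks chain many such units and use gradient-based training, but the error-driven weight update shown here is the basic learning step that the first category refers to.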
Among the areas in which these systems have been applied are the following:
• Interpretation: financial data on teleprinter punched paper strips, industrial and government reports.
• Planning: asset and liability management, portfolio management, credit and loan analysis, contracts, shop-floor scheduling, project management, experiment planning, printed circuit board production (a sketch of rule-based loan screening follows this list).
• Intelligent interfaces: instrumentation (fiscal) hardware, computer programs, multiple
databases, control panels.
• Natural language systems: interfaces with databases in natural language, tax
management (accounting aids), consulting on legal issues, estate planning, banking
systems consulting.
• Design systems: very large-scale integration (VLSI) of microcircuits, synthesis of electronic circuits, chemical plants, buildings, bridges and dams, transportation systems.
• Computer vision systems: selection of parts and components, assembly, quality
control.
• Software development: automatic programming.
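As an illustration of the expert systems mentioned in the taxonomy above and of applications such as credit and loan analysis, here is a minimal sketch, not drawn from any cited source, of a forward-chaining rule engine; the rule names, facts, and thresholds are invented purely for the example.

```python
# Illustrative only: a tiny forward-chaining rule engine of the kind used
# in classic expert systems (e.g., simple loan screening). The rules and
# thresholds below are hypothetical, not taken from any real system.

RULES = [
    # (rule name, condition over the facts, conclusion added when it fires)
    ("stable_income", lambda f: f.get("years_employed", 0) >= 2, "income_ok"),
    ("low_debt", lambda f: f.get("debt_ratio", 1.0) < 0.4, "debt_ok"),
    ("approve", lambda f: {"income_ok", "debt_ok"} <= f["derived"], "loan_approved"),
]

def infer(facts):
    """Fire rules repeatedly until no new conclusion can be derived."""
    facts = dict(facts, derived=set())
    changed = True
    while changed:
        changed = False
        for name, condition, conclusion in RULES:
            if conclusion not in facts["derived"] and condition(facts):
                facts["derived"].add(conclusion)
                changed = True
    return facts["derived"]

if __name__ == "__main__":
    applicant = {"years_employed": 3, "debt_ratio": 0.25}
    # Expected conclusions: income_ok, debt_ok and loan_approved.
    print(infer(applicant))
```

Keeping the knowledge (the rule list) separate from the inference mechanism (the loop in infer) is the design idea that distinguishes expert systems from ordinary hard-coded programs.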
Reflection on and proposals for deontological frameworks offer the following common parameters and ideas, which can serve as a shared basis for conceptualizing and regulating, in due course, the practices and consequences of AI, according to the European Parliament (Palmer, 2016).
1. AI should be developed for the good of humanity and should benefit the greatest possible number of people; the risk of exclusion must be reduced.
2. Standards for AI must be very high where human safety is concerned: this requires ethical, purpose-oriented oversight of research, together with transparency and cooperation in the development of artificial intelligence.
3. Manage risk: the more serious the potential risk, the stricter the risk control and management systems must be.
4. Protect privacy and data use: especially as autonomous cars, drones, personal assistants, and security robots advance.
Below are examples of how a lack of deontology in the development of artificial intelligence has affected the behavior of some artificial agents.
- Moor, J. (2006). The Nature, Importance and Difficulty of Machine Ethics. IEEE Intelligent Systems, 21(4), 18-21.
- Palmer, J. (2009, August 3). Call for debate on killer robots. BBC News.