Chapter 1: Artificial Intelligence and Agents
Intelligence
• What does the word ‘intelligence’ mean?
– someone’s intelligence is their ability to understand and learn
things.
– Intelligence is the ability to think and understand instead of
doing things by instinct or automatically.
• (Essential English Dictionary, Collins, London, 1990)
• According to the first definition, intelligence is a quality
possessed by humans.
• But the second one does not specify whether it is
someone or something that has the ability to think and
understand.
What is Artificial Intelligence (AI)?
• Artificial Intelligence or AI is the field that studies the
synthesis and analysis of computational agents that
act intelligently.
• An agent is something that acts in an environment; it does
something.
– Agents include worms, dogs, thermostats, airplanes, robots,
humans, companies, and countries.
What is Artificial Intelligence (AI)?
• An agent acts intelligently if:
• its actions are appropriate for its goals and circumstances
• it is flexible to changing environments and goals
• it learns from experience
• it makes appropriate choices given perceptual and
computational limitations
• A computational agent is an agent whose decisions
about its actions can be explained in terms of
computation.
– The decision can be broken down into primitive operations that
can be implemented in a physical device.
What is Artificial Intelligence (AI)?
• Agents are limited:
– No agents are omniscient or omnipotent.
– Agents can observe everything about “the world” only in very
specialized domains, where the world is very constrained.
– Agents have finite memory.
– Agents in the real world do not have unlimited time to act.
What is Artificial Intelligence (AI)?
• The central scientific goal of AI is to understand the
principles that make intelligent behavior possible in natural
or artificial systems. This is done by
– analyzing natural and artificial agents,
– formulating and testing hypotheses about what it takes to
construct intelligent agents, and
– designing, building, and experimenting with computational
systems that perform tasks that require intelligence.
• The central engineering goal of AI is the design and
synthesis of useful, intelligent artifacts/agents.
Artificial and Natural Intelligence
• AI is the established name for the field, but the term
“artificial intelligence” may be interpreted as the opposite
of real or natural intelligence.
– Natural means occurring in nature and artificial means made by
people.
• If an agent behaves intelligently, it is intelligent.
– It is only the external behavior that defines intelligence; acting
intelligently is being intelligent.
– Thus, artificial intelligence, if and when it is achieved, will be real
intelligence created artificially.
Turing Imitation Game
• One of the earliest and most significant papers on
machine intelligence, ‘Computing Machinery and
Intelligence’, was written by the British mathematician
Alan Turing (Turing, 1950).
• Turing proposed a game, the Turing imitation game, which
became a famous test for machine intelligence:
– The computer is interrogated by a human via a teletype, and the
computer passes the test if the human interrogator cannot tell
whether there is a computer or a human at the other end
(Figure 1.1 and Figure 1.2).
Asst.Prof.Abdulle Hassan 8
Turing Imitation Game
[Figures 1.1 and 1.2: the interrogator questions the hidden respondents via remote terminals.]
Turing Imitation Game: Phase1
• The imitation game proposed by Turing included two
phases:
– In the first phase (Figure 1.1) there are three participants: the
interrogator, a man, and a woman, each placed in a separate
room and able to communicate only via a remote terminal.
– The interrogator’s objective is to work out who is the man and
who is the woman by questioning them.
– The rules of the game are that the man should attempt to
deceive the interrogator into believing he is the woman, while the
woman tries to convince the interrogator that she is the woman.
Turing Imitation Game: Phase2
– In the second phase (in Figure 1.2), the man is replaced by a
computer programmed to deceive the interrogator as the man
did.
– It would even be programmed to make mistakes and provide
fuzzy answers in the way a human would.
– If the computer can fool the human interrogator as often as
the man did, we may say this computer has passed the
intelligent behavior test!
• The obvious naturally intelligent agent is the human
being.
Intelligent Agents: Organizations
• One class of intelligent agents is the class of
organizations:
– Ant colonies are a prototypical example of an organization.
• An individual ant may not be very intelligent, but an ant colony can act
more intelligently than any individual ant.
– Similarly, corporations can be more intelligent than individual
people.
• Modern computers, developed from low-level hardware to high-level
software, are more complicated than any single human can understand,
yet they are manufactured by organizations of humans.
• Human society viewed as an agent is the most intelligent
agent known.
A Brief History of AI
• Each new technology has, in its turn, been exploited to
build intelligent agents or models of mind.
– In the past, clockwork, hydraulics, telephone switching systems,
holograms, analog computers, and digital computers have all
been proposed both as technological metaphors for intelligence
and as mechanisms for modeling mind.
– About 400 years ago people started to write about the nature of
thought and reason.
– Hobbes (1588–1679), who has been described by Haugeland
[1985] as the “Grandfather of AI,” espoused the position that
thinking was symbolic reasoning.
A Brief History of AI
– The idea of symbolic reasoning was further developed by
Descartes (1596–1650), Pascal (1623–1662), Spinoza (1632–
1677), Leibniz (1646–1716) and others.
– The idea of symbolic operations became more concrete with
the development of computers.
– The first general-purpose (non-electronic) computer designed
was the Analytical Engine by Babbage (1792–1871).
– In the early part of the 20th century several models of
computation were proposed:
• Turing machine by Alan Turing (1912–1954), a theoretical machine that
writes symbols on an infinitely long tape, and
• the lambda calculus of Church (1903–1995), which is a mathematical
formalism for rewriting formulas.
A Brief History of AI
– Once real computers were built, some of their first applications
were AI programs.
• For example, Samuel [1959] built a checkers program in 1952 and
implemented a program that learns to play checkers in the late 1950s.
• Newell and Simon [1956] built a program, Logic Theorist, that discovers
proofs in propositional logic.
– In addition to work on high-level symbolic reasoning, there was
also much work inspired by how neurons work.
– McCulloch and Pitts [1943] showed how a simple “formal
neuron” could be the basis for a Turing-complete machine:
• The first learning algorithm for artificial neural networks (ANNs) was
described by Minsky [1952].
• One of the early significant works was the Perceptron of Rosenblatt
[1958].
A Brief History of AI
– One of the founders of AI was John von Neumann, a Hungarian-born
mathematician. With his encouragement, Marvin Minsky, then a
graduate student in the Princeton mathematics department, built the
first neural-network-based computer in 1951.
– Claude Shannon, an MIT graduate, demonstrated the need to
use heuristics in search in order to reduce time complexity
(1950).
– In 1956, John McCarthy (then at Dartmouth College), Marvin Minsky,
and Claude Shannon brought together researchers interested in
the study of machine intelligence, ANNs, and automata theory,
under the sponsorship of IBM.
– This workshop gave birth to a new field of science
called Artificial Intelligence (AI).
A Brief History of AI
– John McCarthy, the inventor of the term ‘Artificial Intelligence’,
defined a high-level language, LISP (List Processing), one of
the oldest programming languages (1958).
– McCarthy also proposed the first complete knowledge-based system,
called the Advice Taker, in 1958.
– Frank Rosenblatt proved the perceptron convergence
theorem for the perceptron learning process.
• The perceptron is a single neuron.
– Allen Newell and Herbert Simon from Carnegie Mellon University
developed a general-purpose program, the General Problem
Solver (GPS), to simulate human problem-solving methods.
AI in late 1960s-early 1970s
• To solve problems, programs began to apply a search strategy,
trying out different combinations of small steps until the right one
was found.
– For large problems, this approach proved inadequate.
• Easy or tractable problems can be solved in polynomial time
– For a problem of size n, the time or number of steps needed to find the
solution is a polynomial function of n
• On the other hand, hard or intractable problems require times that
are exponential functions of the problem size
– An exponential function (y = e^x) models a relationship in which a
constant change in the independent variable gives the same proportional
change in the dependent variable (e ≈ 2.7183).
AI in late 1960s-early 1970s
• The exponential function e^x can be characterized by its power
series:
e^x = Σ_{n=0}^{∞} x^n/n! = 1 + x + x²/2! + x³/3! + ⋯
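• A quick numerical check of this series (an illustrative Python sketch, not from the slides): the partial sums converge to math.exp(x).

    import math

    def exp_series(x, terms=20):
        # Partial sum of e^x = sum over n of x^n / n!
        return sum(x ** n / math.factorial(n) for n in range(terms))

    print(exp_series(1.0))  # ~2.718281828459045
    print(math.exp(1.0))    # matches the partial sum closely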
AI in late 1960s-early 1970s
• An algorithm is said to be of polynomial time if its running
time is upper bounded (indicated by ‘O’) by a polynomial
expression in the size of the input for the algorithm:
– i.e., T(n) = O(n^k) for some constant k (k = 1, 2, …),
• where T(n) is the maximum amount of time the algorithm takes on any
input of size n, and O denotes an asymptotic upper bound on time complexity.
– Problems for which a polynomial time algorithm exists belong
to the class P, which is central in the field of computational
complexity theory.
– Cobham's thesis states that polynomial time is a synonym for
“tractable”, “feasible”, “efficient”, or “fast” solution(s).
AI in late 1960s-early 1970s
• Some examples of polynomial time algorithms:
– The quicksort sorting algorithm (https://en.wikipedia.org/wiki/Quicksort)
on n integers performs at most A·n² operations for some constant A.
Thus it runs in time O(n²) and is a polynomial-time algorithm
(see the counting sketch below).
– All the basic arithmetic operations (addition, subtraction,
multiplication, division, and comparison) can be done in
polynomial time.
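• To illustrate how an O(n²) bound shows up in practice, here is a hedged Python sketch (not from the slides) that counts element-vs-pivot comparisons in a first-element-pivot quicksort; already-sorted input is the worst case, and the count grows as n(n−1)/2, i.e., O(n²).

    def quicksort(seq, counter):
        # First-element-pivot quicksort; counter[0] accumulates
        # element-vs-pivot comparisons (one per element in 'rest').
        if len(seq) <= 1:
            return seq
        pivot, rest = seq[0], seq[1:]
        counter[0] += len(rest)
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return quicksort(left, counter) + [pivot] + quicksort(right, counter)

    for n in (100, 200, 400):
        counter = [0]
        quicksort(list(range(n)), counter)  # sorted input triggers the worst case
        print(n, counter[0])                # 4950, 19900, 79800 = n(n-1)/2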
AI in late 1960s-early 1970s
• While a polynomial-time algorithm is considered efficient, an
exponential-time algorithm is inefficient because its execution
time grows exponentially with the problem size.
• The theory of NP-completeness (Cook, 1971; Karp, 1972),
developed in the early 1970s, showed the existence of a large class
of non-deterministic polynomial-time problems (NP problems) that
are in the NP-complete category.
– A problem is called NP if a proposed solution (if one exists) can be guessed
and verified in polynomial time;
– non-deterministic means that no particular algorithm is followed to make
the guess accurate (towards the solution).
• The hardest problems in this class are NP-complete; even with
faster computers and larger memories, these problems remain hard
to solve.
Asymptotic Analysis
• Consider a program to compute the sum of a sequence of numbers:
integer function SUMMATION(int sequence[])
{
    sum ← 0
    for i ← 1 to LENGTH(sequence)
        sum ← sum + sequence[i]
    return sum
}
• In this analysis, the first step is to abstract over the input in order to
find some parameter(s) that characterize the size of the input
Asymptotic Analysis
• In the given program, the input can be characterized by
the length of the sequence, call it n.
• The second step is to abstract over the implementation,
to find some measure that reflects the running time of the
algorithm
– For the given SUMMATION function, the running time could be
the number of lines of code executed, or the number of
additions, assignments, array references, and branches
executed by the algorithm.
• The result is a characterization of the total number of steps taken
by the algorithm as a function of the input size n.
– This characterization is written T(n), the maximum amount of
time taken by the algorithm on an input of size n.
Asymptotic Analysis
• If we count lines executed by the given SUMMATION function, we
have T(n) = 2n + 2: the loop contributes n steps and the
additions another n (so n + n = 2n), and one step each is
needed to initialize and to return the sum (the final + 2).
– T(n) denotes the worst-case time complexity of the algorithm.
• From this, we can say that the asymptotic time
complexity of SUMMATION is O(n), meaning that its order
of time complexity is n, which is polynomial.
– An algorithm with T(n) = O(n) is called a linear-time algorithm.
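• A runnable Python version of SUMMATION, instrumented to count primitive steps (a sketch; the exact counting convention is one of several reasonable choices), reproduces T(n) = 2n + 2:

    def summation(sequence):
        steps = 0
        total = 0               # 1 step: initialization
        steps += 1
        for x in sequence:      # n steps: one loop iteration per element
            steps += 1
            total += x          # n steps: one addition per element
            steps += 1
        steps += 1              # 1 step: return
        return total, steps

    total, steps = summation(list(range(10)))
    print(total, steps)         # 45 and 2*10 + 2 = 22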
Asymptotic Analysis
• The O() notation gives us what is called an asymptotic
analysis: as n asymptotically approaches infinity, an O(n)
algorithm is better than an O(n²) algorithm.
– An asymptote of a curve is a line such that the distance
between the curve and the line approaches zero as one or both
tend to infinity [https://en.wikipedia.org/wiki/Asymptote].
• Asymptotic analysis is the most widely used tool for
analyzing algorithms
• The O() notation is a good compromise between
precision and ease of analysis
Asymptotic Analysis
• The analysis of algorithms and the O() notation allow us
to talk about the efficiency of a particular algorithm
• The field of complexity analysis analyzes problems
rather than algorithms
– The gross division is between problems that can be solved in
polynomial time and problems that cannot be solved in
polynomial time, no matter what algorithm is used
• The class of polynomial problems – those which can be
solved in time T(n) = O(n^k) for some constant k – is called
the class of polynomial-time (P) problems.
– These problems are called “easy” problems.
P, NP and Hard Problems
• Another important class of problems is NP, the class of
non-deterministic polynomial-time problems
– A problem is in this class if there is some algorithm that can
guess a solution and then verify whether the guess is correct in
polynomial time
• One of the biggest open questions in computer science is
whether the class NP is equivalent to the class P when one
does not have guessing.
• Most computer scientists are convinced that P ≠ NP,
meaning that NP problems are inherently hard and have no
polynomial-time algorithms.
– But this has never been proven!
NP-complete
• NP-complete problems form a subclass of NP.
– The word “complete” is used here in the sense of “most
extreme”, and thus refers to the hardest problems in the class
NP.
• Many important problems are NP-complete.
– An example is the satisfiability problem: given a sentence of
propositional logic, is there an assignment of truth values to the
proposition symbols of the sentence that makes it true?
• Many real-world problems are in the NP category, and AI is
especially interested in solving such problems with heuristics
or approximation techniques.
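• The “guess and verify” structure of NP can be made concrete with a small Python sketch (the formula and variable names are illustrative): verifying a candidate assignment is polynomial-time, while the naive search below tries up to 2^n guesses.

    from itertools import product

    # A CNF formula: (a or not b) and (b or c) and (not a or not c).
    # Each literal is (variable, is_positive).
    formula = [[("a", True), ("b", False)],
               [("b", True), ("c", True)],
               [("a", False), ("c", False)]]

    def verify(assignment, formula):
        # Polynomial-time check that every clause has a true literal.
        return all(any(assignment[v] == pos for v, pos in clause)
                   for clause in formula)

    variables = sorted({v for clause in formula for v, _ in clause})
    for values in product([True, False], repeat=len(variables)):  # 2^n guesses
        assignment = dict(zip(variables, values))
        if verify(assignment, formula):
            print("satisfying assignment:", assignment)
            break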
Make a Machine Learn: Rebirth of ANN
• In the mid-1980s, AI researchers decided to take a new look at Artificial
Neural Networks (ANNs).
– The major reason for the delay was that there had been no powerful PCs or
workstations on which to model and experiment with ANNs.
– In the 1980s, the need for brain-like processing elements in
computer technology was recognized.
– Grossberg established a new principle of self-organization for a class
of neural networks.
– In 1982, Hopfield introduced feedback neural networks, called Hopfield
networks.
– In 1986, Rumelhart and McClelland reinvented the back-propagation
learning algorithm.
– In 1988, Broomhead and Lowe found a procedure to design layered
feedforward neural networks. Building on such work, the concept of the
Convolutional Neural Network (CNN) was formulated for image/pattern-recognition applications.
Evolutionary Computation – Learning by Doing
• Evolutionary computation is an approach to AI based
on computational models of natural selection and
genetics (see the sketch after the list below).
– Evolutionary computation comprises machine learning,
optimization, and classification paradigms roughly based
on mechanisms of evolution such as biological genetics and
natural selection.
• The evolutionary computation field includes genetic
algorithms, evolutionary programming, genetic
programming, evolution strategies, and particle
swarm optimization.
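• As a flavor of the approach, a minimal genetic-algorithm sketch in Python (the population size, mutation rate, and all-ones target are illustrative assumptions): fitter bit strings are selected, recombined by crossover, and mutated.

    import random

    LENGTH, POP, GENS = 20, 30, 50

    def fitness(bits):
        return sum(bits)                  # count of 1s: higher is fitter

    def crossover(a, b):
        cut = random.randrange(1, LENGTH) # single-point crossover
        return a[:cut] + b[cut:]

    def mutate(bits, rate=0.02):
        return [b ^ 1 if random.random() < rate else b for b in bits]

    population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP // 2]   # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    print(max(fitness(p) for p in population), "ones out of", LENGTH)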
The New Era of Knowledge Engineering
• ANN technology offers more natural interaction with the
real world than do systems based on symbolic reasoning
– ANNs can learn and adapt to changes in a problem’s
environment and establish patterns in situations where rules are
not known.
• However, ANNs lack explanation facilities and usually act as a black box
• Classic expert systems are especially good for closed-
system applications with precise inputs and logical outputs
– They use a domain expert’s knowledge in the form of rules.
– A major drawback is that human experts cannot always express
their knowledge in terms of rules
Relationship of AI to Other Disciplines
• AI is a very young discipline:
– The science of AI could be described as “synthetic psychology,”
“experimental philosophy,” or “computational epistemology”–
epistemology is the study of knowledge.
– Instead of being able to observe only the external behavior of intelligent
systems, as in philosophy, psychology, economics, and sociology, AI
researchers experiment with executable models of intelligent behavior.
– AI researchers are interested in testing general hypotheses about the
nature of intelligence by building machines that are intelligent and that
do not necessarily mimic humans or organizations.
– AI is intimately linked with the discipline of computer science because
it is essential to understand algorithms, data structures, and
combinatorial complexity to build intelligent machines.
Agents Situated in Environments
• AI is about practical reasoning: reasoning in order to do
something.
• A coupling of perception, reasoning, and acting comprises
an agent, which acts in an environment.
– An agent’s environment may include other agents. An agent
together with its environment is called a world.
• For example, a coupling of a computational engine with physical sensors
and actuators is called a robot, where the environment is a physical
setting.
• It could be an advice-giving computer (expert system).
• It could be a program that acts in a purely computational environment
(software agent).
Agents Situated in Environments
• At any time, what an agent does depends on its (Figure 1.3):
– prior knowledge about the agent and the environment;
– history of interaction with the environment, which is composed of:
• stimuli (input) from the current environment, which include
observations about the environment as well as
actions that the environment imposes on the agent
• past experiences of previous actions and observations, or
other data, from which it can learn;
– goals that it must try to achieve, or preferences over states of the
world;
– abilities, the primitive actions it is capable of carrying out.
Agents Situated in Environments
[Figure 1.3: the agent as a black box, with prior knowledge, past experiences, goals/preferences, and stimuli as inputs and actions as outputs.]
Agents Situated in Environments
• Inside the black box (in Figure 1.3), the agent has some
internal belief state
– beliefs about its current environment, what it has learned, what it
is trying to do, and what it intends to do.
– An agent updates this internal state based on stimuli.
– It uses the belief state and stimuli to decide on its actions.
– Purposive agents have preferences or goals.
• They act to try to achieve the states they prefer most.
• The preferences of an agent are often the preferences of its
designer.
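• This belief-state loop can be sketched in Python (the thermostat domain and all names here are illustrative, not from the text): the agent updates its belief from each stimulus, then chooses an action from its belief state and goal.

    class ThermostatAgent:
        # Toy purposive agent: its preference is a target temperature.
        def __init__(self, target):
            self.target = target            # goal / preference
            self.belief = {"temp": None}    # internal belief state

        def update_belief(self, stimulus):
            self.belief["temp"] = stimulus["temp"]   # update from observation

        def act(self):
            if self.belief["temp"] is None:
                return "wait"
            return "heat" if self.belief["temp"] < self.target else "off"

    agent = ThermostatAgent(target=20)
    for reading in (18, 19, 21):            # stream of stimuli from the environment
        agent.update_belief({"temp": reading})
        print(reading, "->", agent.act())   # heat, heat, off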
Designing AI Agents
• AI agents are designed for particular tasks.
• Deciding what an agent will do involves three aspects of
computation:
1. The computation that goes into the design of the agent.
2. The computation that the agent can do before it observes
the world.
3. The computation that is done by the agent as it is acting.
Designing Agents
• Design time computation:
– It is the computation that is carried out during the design time by
the designer.
• Offline computation:
– It is the computation done by the agent before it has to act.
• Offline, an agent can take background knowledge and data and
compile them into a knowledge base (KB).
• Online computation:
– It is the computation done by the agent between observing the
environment and acting in the environment.
• An agent typically uses its KB, beliefs, and observations to determine
what to do next.
Designing Agents
• Two broad strategies have been pursued in building
agents:
1. The first one is to simplify environments and build complex
reasoning systems for these simple environments.
• Adv: Much of the complexity of the task can be reduced by simplifying the
environment.
2. The second is to build simple agents in natural environments
• Researchers then make agents with more reasoning abilities as their tasks
become more complicated
– This is inspired by seeing how insects can survive in complex
environments even though they have limited reasoning abilities.
Tasks
• Most AI representations typically specify “what needs to be
computed” rather than “how it is to be computed”.
– In this way, AI differs from conventional computer programs.
• Much AI reasoning involves searching through the space of
possibilities to determine how to complete a given task.
• The general problem solving framework is given in Figure 1.4.
• To solve a task, the designer of a system must:
– determine what constitutes a solution;
– represent the task in a way a computer can reason about;
– compute an output (an answer presented to a user, or a sequence of
actions to be carried out in the environment); and
– interpret the output as a solution to the problem.
Tasks
[Figure 1.4: the general problem-solving framework.]
Tasks
• Knowledge is the information about a domain that can
be used to solve problems in that domain.
– As part of designing a program to solve problems, we must define
how the knowledge will be represented
– A knowledge representation scheme is the form of the
knowledge that is used in an agent.
– A knowledge base (KB) is the representation of all of the
knowledge/information that is stored by an agent.
• A good knowledge representation should be:
– rich enough to express the knowledge needed to solve the
problem.
Defining a Solution
• There are four common classes of solutions:
– Optimal solution: An optimal solution to a problem is the best
solution according to some measure of solution quality.
• For example, we may want a trash-collecting robot to take out as much
of the trash as possible while minimizing the distance traveled, explicitly
specifying a trade-off between the effort required and the proportion of
the trash taken out.
– Satisficing solution: A satisficing solution is one that is good
enough according to some description of which solutions are
adequate.
• For example, we may tell the robot that it must take all of the trash out,
or tell it to take out three items of trash.
Defining a Solution
– Approximately optimal solution: An approximately optimal
solution is one whose measure of quality is close to the best.
• For example, the robot may not need to travel the optimal distance to take
out the trash.
• For other problems, it is (asymptotically) just as difficult to guarantee finding
an approximately optimal solution as it is to guarantee finding an optimal one.
– Probable solution: A probable solution is one that may not actually be a
solution to the problem but is likely to be one.
• For example, the delivery robot could drop the trash or fail to pick it up
when it attempts to; the application may need only 80% of the trash
(at least) to be collected.
Defining a Solution -Summary
• There are four common classes of solutions:
– An optimal solution to a problem is one that is the best
solution according to some measure of solution quality.
– A satisficing solution is one that is good enough according to
some description of which solutions are adequate.
– An approximately optimal solution is one whose measure of
quality is close to the best that could theoretically be obtained.
• Typically agents do not need optimal solutions to problems; they only
must get close enough.
– A probable solution is one that, even though it may not actually
be a solution to the problem, is likely to be a solution.
• This is one way to approximate, in a precise manner, a satisficing
solution.
Knowledge Representation
• Computers and human minds are examples of physical
symbol systems.
• A symbol is a meaningful pattern of data that can be
manipulated by a computational device
• Examples of symbols are written words, sentences, gestures,
marks on paper, images, and sequences of bits.
– The symbol system creates, copies, modifies, and destroys
symbols.
– An agent can be seen as manipulating symbols to produce action
• An agent can use physical symbol systems to model the world.
• A model of a world is a representation of the specifics of what is true in the
world
Knowledge Representation
• Physical symbol system hypothesis:
– “A physical symbol system has the necessary and sufficient
means for general intelligent action”.
• All models are abstractions;
– represent only part of the world and leave out many of the
details.
– an agent can have a very simplistic model of the world, or it can
have a very detailed model of the world.
• The level of abstraction provides a partial ordering of
abstraction
• A lower-level abstraction includes more details than a
higher-level abstraction.
Knowledge Representation
• Choosing an appropriate level of abstraction is difficult
because:
– a high-level description (abstraction) is easier for a human to
specify and understand;
– however, a high-level description often abstracts away details that
may be important for actually solving the problem;
– a low-level description can be more accurate and more
predictive.
• But the lower the level, the more difficult it is to reason with.
– This is because a solution at a lower level involves more steps and
many more possible courses of computation.
– It is often a good idea to model an environment at multiple
levels of abstraction.
Knowledge Representation
• Two levels of abstraction seem to be common among
biological and computational entities:
– The knowledge level is about the world external to the agent.
• The knowledge level is in terms of what an agent knows and what its
goals are.
– For example, the delivery agent’s behavior can be described in terms
of whether it knows that a parcel has arrived or not and whether it
knows where a particular person is or not.
– The symbol level is about what symbols an agent uses to
implement the knowledge level.
• The symbol level is a level of description of an agent in terms of what
reasoning it is doing.
– To produce answers, an agent manipulates its symbols (acquired
from its environment) together with its knowledge-level entities (its KB).
Knowledge and Symbol Levels
• The knowledge level is in terms of what an agent knows
and what its goals are.
– The knowledge level is about the world external to the agent.
• The symbol level is a level of description of an agent in
terms of what reasoning it is doing.
– The symbol level is about what symbols an agent uses to
implement the knowledge level.
• To implement the knowledge level, an agent manipulates symbols to
produce answers.
Knowledge Representation
• Knowledge is the information about a domain that
can be used to solve problems in that domain:
– As part of designing a program to solve problems, we must
define how the knowledge will be represented.
– A knowledge representation scheme is the form of the
knowledge that is used to represent knowledge in an agent.
– A knowledge base(KB) is the representation of all of the
knowledge that is stored by an agent.
Reasoning and Acting
• Reasoning is the computation required to determine what
an agent should do.
• In AI, reasoning involves searching through the space
of possibilities to determine how to complete a task:
– Design-time reasoning is the reasoning that is carried out to
design the agent.
– Offline computation is the computation done by the agent
before it has to act based on its KB.
– Online computation is the computation done by the agent
between observing the environment and acting in the
environment.
• A piece of information obtained online is called an observation.
Agent Design Space
• A number of dimensions of complexity exist in the
design of intelligent agents.
– There are ten dimensions of complexity in the design of intelligent
agents:
• modularity, representation scheme, planning, horizon, sensing
uncertainty, effect uncertainty, preference, number of agents,
learning, and computational limits.
– These dimensions define a design space of AI; different points in
this space can be obtained by varying the values of the
dimensions.
Modularity
• Modularity is the extent to which a system can be
decomposed into interacting modules that can be
understood separately.
– Modularity is important for reducing complexity.
– Modularity is typically expressed in terms of a hierarchical
decomposition.
• For example, a human’s visual cortex and eye constitute a module that
takes in light and outputs some simplified description of a scene.
– Modularity is hierarchical if the modules are themselves organized
into smaller modules.
• Object-oriented programming is designed to enable simplification of a
system by exploiting modularity and abstraction.
Modularity
• In the modularity dimension, an agent’s structure is one
of the following:
– flat: there is no organizational structure (model at one level of
abstraction);
– modular: the system is decomposed into interacting modules
that can be understood on their own (separately); or
– hierarchical: the system is modular, and the modules
themselves are decomposed into simple interacting modules
(may be recursively).
• In a hierarchical structure the agent reasons at multiple levels of
abstraction.
• In a flat or modular structure the agent typically reasons at a single level
of abstraction.
Representation Scheme
• The representation scheme dimension concerns how
the world is described.
– The world can be described in terms of states.
– The state of the world can be factored into the agent’s internal
state (its belief state) and the environment state.
• At the simplest level, an agent can reason explicitly in
terms of individually identified states.
– A state may be described in terms of features, where a feature
has a value in each state.
• A proposition is a Boolean feature: its value is either true or false. Thirty
propositions can encode 2^30 = 1,073,741,824 states (checked in the snippet below).
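• A quick check of this arithmetic in Python (the feature names are made up): n Boolean features generate 2^n states.

    from itertools import product

    features = ["door_open", "light_on", "robot_carrying"]
    states = list(product([False, True], repeat=len(features)))
    print(len(states))   # 2**3 = 8 states for 3 features
    print(2 ** 30)       # 1073741824 states for 30 propositions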
Representation Scheme
• When describing a complex world, the features can depend on
relations and individuals
– An individual could also be a thing, an object, or an entity.
– A relation on a single individual is a property. There is a feature for each
possible relationship among the individuals.
• An agent can reason in terms of:
– Explicit states: a state is one way the world could be.
– Features or propositions:
• States can be described using features.
• 30 binary features can represent 2^30 states.
– Individuals and relations:
• There is a feature (attribute) for each possible relationship among the
individuals. Often an agent can reason without knowing the individuals,
or when there are infinitely many individuals.
Planning horizon
• How far the agent looks into the future when deciding what to do
is called the planning horizon.
• In the planning horizon dimension, an agent is one of the
following:
– Static (non-planning) stage: the world does not change;
• the agent does not consider the future when it decides
what to do.
– Finite horizon (stage) planner: agent reasons about a fixed
finite number of time steps (or stages).
• A doctor may have to treat a patient but may have time for a test and so
there may be two stages to plan for: a testing stage and a treatment
stage.
Planning horizon
– Indefinite horizon (stage) planner: agent reasons about a
finite, but not pre-determined, number of time steps (stages)
• For example, an agent that must get to some location may not know a
priori how many steps it will take to get there,
• but when planning, it does not consider what it will do after it gets to the
location.
– Infinite horizon (stage) planner: the agent plans for going on
forever (called a process)
• For example, the stabilization module of a legged robot should go on
forever;
• it cannot stop when it has achieved stability, because the robot has to
keep from falling over.
Uncertainty
• An agent could assume there is no uncertainty, or it
could take uncertainty in the domain into consideration:
– For example, given a patient’s symptoms, a medical doctor may
not actually know which disease a patient may have and may
have only a probability distribution over the diseases.
• Uncertainty is divided into two dimensions:
– Uncertainty from sensing and
– Uncertainty about the effect of actions.
Uncertainty
• Uncertainty is divided into two dimensions:
– uncertainty from sensing and uncertainty about the effect of
actions.
• The sensing uncertainty dimension concerns whether an
agent can determine the state from the stimuli:
– Fully observable means the agent can observe the state of the
world.
– Partially observable means the agent does not directly observe the
state of the world.
• This occurs when many possible states can result in the same stimuli or when
stimuli are misleading
Uncertainty
• In some cases an agent knows the effects of its action.
• In many cases, it is difficult to predict the effects of an
action and the best an agent can do is to have a
probability distribution over the effects.
– For example, a teacher may not know the effects of explaining a concept,
even if the state of the students is known.
• In the effect uncertainty dimension the dynamics can
be:
– deterministic: the resulting state is determined by the action and
the prior state;
– stochastic: there is uncertainty about the resulting state.
Why is Probability in Uncertainty?
• Probabilistic reasoning is relevant when agents need to act even
if they are uncertain. Predictions are needed to decide what to do,
and they can take several forms:
– definitive predictions: you will be run over tomorrow.
– disjunctions: be careful or you will be run over.
– point probabilities: probability you will be run over tomorrow is
0.002 if you are careful and 0.05 if you are not careful.
– probability ranges: you will be run over with probability in range
[0.001,0.34].
• Probabilities can be learned from data and prior knowledge.
• By probability distribution, it is possible to extract results from
uncertain situations.
Why is Probability in Uncertainty?
• A probability distribution gives values for all possible assignments:
• For example, weather is one of <sunny, rain, cloudy, snow>
• P(weather) = <0.72, 0.1, 0.08, 0.1> (normalized, i.e., sums to 1).
• Joint probability distribution for a set of variables gives values for
each possible assignment to all the variables
• What is the probability of having either a cavity or a toothache?
– P(Cavity ∨ Toothache)?
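• A small worked example in Python (the joint-distribution numbers are invented for illustration): P(Cavity ∨ Toothache) is the sum of the probabilities of all worlds in which either proposition holds.

    # Hypothetical joint distribution P(Cavity, Toothache); entries sum to 1.
    joint = {(True, True): 0.04,
             (True, False): 0.06,
             (False, True): 0.01,
             (False, False): 0.89}

    p_either = sum(p for (cavity, toothache), p in joint.items()
                   if cavity or toothache)
    print(p_either)   # 0.04 + 0.06 + 0.01 ≈ 0.11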
Number of Agents
• The number of agents dimension considers whether
the agent does:
– Single-agent reasoning means the agent assumes that there
are no other agents in the environment.
– Multiple-agent (multiagent) reasoning means the agent takes
the reasoning of other agents into account.
• Each agent in a system can have its own goals:
– goals may be cooperative, competitive, or independent of
each other.
Interaction
• The interaction dimension considers:
– Offline reasoning where the agent determines what
to do before interacting with the environment
– Online reasoning where the agent must determine
what action to do while interacting in the
environment, and needs to make timely decisions.
Learning
• In a real world application, an agent should use data from
its past experiences and other sources to help it decide
what to do.
• The learning dimension determines whether
– knowledge is given or
– knowledge is learned (from data or past experience).
• Learning typically means finding the best model that fits
the data, but there are many issues beyond fitting data,
including:
– how to incorporate background knowledge, what data to collect, how to
represent the data and the resulting representations, what learning biases are
appropriate, etc.
Computational Limits
• Often there are computational limits that prevent an
agent from carrying out the best action.
– the agent may not be able to find the best action quickly enough
within its memory limitations.
• The computational limits dimension determines whether
an agent has:
– perfect rationality, where an agent reasons about the best
action without taking into account its limited resources; or
– bounded rationality, where an agent decides on the best action
that it can find given its computational limitations.
• Figure 1.6 summarizes the dimensions of complexity.
Computational Limits
[Figure 1.6: summary of the dimensions of complexity.]
Prototypical Applications
• Agent applications are widespread and diverse and
include:
– medical diagnosis,
– scheduling factory processes,
– robots for hazardous environments,
– game playing,
– autonomous vehicles in space,
– natural language translation systems,
– tutoring systems, and many more.
Examples of Agents
• Organizations: Microsoft, UN, IT or CS Dept,….
• People: teachers, physicians, stock traders, engineers,
researchers, travel agents, farmers, waiters,...
• Computers/devices: thermostats, user interfaces,
airplane controllers, network controllers, games, advising
systems, tutoring systems, diagnostic assistants, robots,
Google car, Mars rover,...
• Animals: dogs, mice, birds, insects, worms, bacteria,...
Input to an Agent
• Abilities - the set of things it can do
• Goals/Preferences - what it wants, its desires, its
values,...
• Prior Knowledge - what it comes into being knowing,
what it doesn't get from experience,...
• History of observations (percepts, stimuli) of the
environment:
– (current) observations - what it observes now
– past experiences - what it has observed in the past
Example Agent: Robot
• abilities: movement, grippers, speech, facial
expressions,. . .
• goals: deliver food, rescue people, score goals,
explore,. . .
• prior knowledge: which features are important, categories
of objects, what a sensor tells us,. . .
• observations: vision, sonar, sound, speech
recognition, gesture recognition,. . .
• past experiences: effect of steering, slipperiness, how
people move,. . .
Example Agent: Teacher
• abilities: present new concept, drill, give test, explain
concept,. . .
• goals: particular knowledge, skills, inquisitiveness
(curiosity for knowing), social skills,. . .
• prior knowledge: subject material, teaching strategies,. .
• observations: test results, facial expressions, errors,
focus,. . .
• past experiences: prior test results, effects of teaching
strategies, feedback results, . . .
Trading Agent
• Abilities: acquire information, make recommendations,
purchase items.
• Prior knowledge: the ontology of what things are
available, where to purchase items, how to decompose
a complex item.
• Past experience: how long special items last, how long
items take to sell out, who has good deals, what your
competitors do.
Example Agent: Medical Doctor
• Abilities: ??
• Goals: ??
• Prior Knowledge: ??
• Observations: ??
• Past Experiences: ??
Example Agent: Autonomous Car
• Abilities: ??
• Goals: ??
• Prior Knowledge: ??
• Observations: ??
• Past Experiences: ??
• https://www.theverge.com/autonomous-cars
Exercises – Page 42
Exercises
1.1 (a, b, c),
1.2, and
1.3 (a, b, c, d, e, f, g)