Chapter 2
Rationality
Environment types
Agent types
Agents
An agent is anything that can be viewed as
perceiving its environment through sensors, and
acting upon that environment through actuators
Examples of agents
Human agent
Sensors: eyes, ears, and other sensory organs
Actuators: hands, legs, vocal tract, and other movable or changeable
body parts
Robotic agent
Sensors: cameras and infrared range finders
Actuators: various motors
Software agents
Sensors: keystrokes, file contents, received network packets
Actuators: displays on the screen, files, sent network packets
Agents & environments
The agent function maps any given percept sequence to an action: f : P* → A
A vacuum-cleaner agent
Tabulation of the agent function
Percept Sequence                            Action
[A, Clean]                                  Right
[A, Dirty]                                  Suck
[B, Clean]                                  Left
[B, Dirty]                                  Suck
[A, Clean], [A, Clean]                      Right
[A, Clean], [A, Dirty]                      Suck
…                                           …
[A, Clean], [A, Clean], [A, Clean]          Right
[A, Clean], [A, Clean], [A, Dirty]          Suck
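The same agent function can also be written as a few lines of code rather than a table. A minimal Python sketch (the function name is illustrative; locations A and B are as in the table above):

```python
def reflex_vacuum_agent(percept):
    """Agent program for the two-square vacuum world.

    percept is a (location, status) pair, e.g. ('A', 'Dirty').
    """
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

# reflex_vacuum_agent(('A', 'Dirty')) -> 'Suck'
# reflex_vacuum_agent(('B', 'Clean')) -> 'Left'
```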
Rational agents
A rational agent "does the right thing" based on its percept history
and the actions it can perform.
Performance measure
Evaluates any given sequence of environment states
Vacuum-cleaner agent: example performance measures
Amount of dirt cleaned up
One point awarded for each clean square at each time step
Penalty for electricity consumed & noise generated
Which is preferable: a mediocre job all the time, or alternating periods of high and low activity?
Rational agents (vacuum cleaner example)
Is this agent rational: if dirty then suck, otherwise move to the
other square?
It depends on
Performance measure, e.g., is there a penalty for energy consumption?
Environment, e.g., can new dirt appear?
Actuators, e.g., is there a No-op action?
Sensors, e.g., can it only sense dirt at its own location?
Rationality vs. Omniscience
Rationality is distinct from omniscience (being all-knowing, with
infinite knowledge, which is impossible in reality)
Autonomy
An agent is autonomous if its behavior is determined by
its own experience (with ability to learn and adapt)
It does not rely only on the prior knowledge of its designer
Learns to compensate for partial or incorrect prior knowledge
Benefit: can cope with a changing environment
Starts by acting randomly or based on designer knowledge, and then
learns from experience
Rational agent should be autonomous
Example: vacuum-cleaner agent
If dirty then suck, otherwise move to the other square
Does this rule alone yield an autonomous agent? No: autonomy would
also require, e.g., learning to foresee where dirt will appear
Task Environment (PEAS)
Performance measure
Environment
Actuators
Sensors
PEAS Examples…
Agent: Automated taxi driver
Performance measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering, accelerator, brake, signal, horn, display
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
PEAS Examples…
Agent: Medical diagnosis system
Performance measure: Healthy patient, minimized costs
Environment: Patient, hospital, staff
Actuators: Display of questions, tests, diagnoses, treatments
Sensors: Keyboard entry of symptoms, findings, patient's answers
PEAS Examples…
Agent: Satellite image analysis system
Performance measure: Correct image categorization
Environment: Downlink from orbiting satellite
Actuators: Display of scene categorization
Sensors: Color pixel arrays
PEAS Examples…
Agent: Part-picking robot
Performance measure: Percentage of parts in correct bins
Environment: Conveyor belt with parts, bins
Actuators: Jointed arm and hand
Sensors: Camera, joint angle sensors
PEAS Examples…
Agent: Interactive English tutor
Performance measure: Maximize student's score on test
Environment: Set of students, testing agency
Actuators: Display of exercises, suggestions, corrections
Sensors: Keyboard entry
PEAS Examples…
Agent: Pacman
Performance measure: Score, lives
Environment: Maze containing white dots, four ghosts, power
pills, occasionally appearing fruit
Actuators: Arrow keys
Sensors: Game screen
Environment types
Fully observable (vs. partially observable): Sensors give access
to the complete state of the environment at each point in time
Sensors detect all aspects relevant to the choice of action
Convenient (no internal state is needed)
Noisy or inaccurate sensors, or state that the sensors miss
entirely, make an environment partially observable
Environment types
Deterministic (vs. stochastic): The next state is completely
determined by the current state and the executed action
If the environment is deterministic except for the actions of other
agents, then the environment is strategic (we ignore this uncertainty)
A partially observable environment can appear to be stochastic
An environment is uncertain if it is not fully observable or not deterministic
Environment types
Episodic (vs. sequential): The agent's experience is divided into
atomic "episodes", and the choice of action in each episode
depends only on the episode itself
E.g., spotting defective parts on an assembly line (each decision is independent)
In sequential environments, short-term actions can have long-term
consequences
Episodic environments can be much simpler
Environment types
Known (vs. unknown): the outcomes (or outcome probabilities)
for all actions are given
It is not strictly a property of the environment
Related to the agent's or designer's state of knowledge about the
"laws of physics" of the environment
Pacman game
Fully observable?
Single-agent?
Deterministic?
Discrete?
Episodic?
Static?
Known?
Structure of agents
An agent is completely specified by the agent function (that
maps percept sequences to actions)
In principle, one agent function (or a small equivalence class of
functions) is rational
The agent program implements the agent function (the focus of our
course)
The agent program takes just the current percept as input
If the agent needs the whole percept sequence, it has to remember
the percepts (internal state)
Agent Program Types
Look-up table
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
Look Up Table Agents
Benefits:
Easy to implement
Drawbacks:
Huge space: ∑_{t=1}^{T} |P|^t table entries (P: set of possible percepts, T: lifetime)
For chess, at least 10^150 entries, while the observable universe
contains fewer than 10^80 atoms
The designer does not have time to create the table
No agent could ever learn all the right table entries from its experience
Even if it could be stored: how would the table entries be filled in?
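For contrast with the program above, a table-driven agent can be sketched in a few lines of Python; here the table is assumed to be a dict keyed by percept sequences (all names are illustrative):

```python
def make_table_driven_agent(table):
    """Return an agent program backed by a look-up table.

    table maps tuples of percepts (the whole percept sequence so far)
    to actions; percepts must be hashable.
    """
    percepts = []  # percept sequence observed so far

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # None if the sequence is absent

    return program

# Using the vacuum-world table from earlier:
agent = make_table_driven_agent({
    (('A', 'Clean'),): 'Right',
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'), ('A', 'Dirty')): 'Suck',
})
assert agent(('A', 'Clean')) == 'Right'
assert agent(('A', 'Dirty')) == 'Suck'
```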
Agent program
The mapping need not be implemented as a table
AI aims to find programs that produce rational behavior (to the
extent possible) from a smallish program instead of a vast table
Can AI do for general intelligent behavior what Newton did for
square roots?
Simple Reflex Agents
Select actions on the basis of the current percept
ignoring the rest of the percept history
Simple Reflex Agents
Simple, but of very limited intelligence
Works only if the correct decision can be made on the
basis of the current percept alone (full observability)
Infinite loops are often unavoidable in partially observable environments
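A minimal sketch of the general simple reflex structure in Python, with condition–action rules checked against the current percept only (the rule representation is an assumption for illustration):

```python
def make_simple_reflex_agent(rules):
    """rules is a list of (condition, action) pairs; the first
    condition matching the current percept decides the action."""
    def program(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return None  # no rule matched
    return program

# Vacuum-world rules expressed as condition-action pairs:
agent = make_simple_reflex_agent([
    (lambda p: p[1] == 'Dirty', 'Suck'),
    (lambda p: p[0] == 'A', 'Right'),
    (lambda p: p[0] == 'B', 'Left'),
])
assert agent(('B', 'Dirty')) == 'Suck'
```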
Model-based reflex agents
Handles partial observability
Internal state (based on the percept history)
reflects some of the unobserved aspects of the current state
Updating the internal state requires two kinds of knowledge
Information about how the world evolves (independent of the agent)
Information about how the agent's own actions affect the world
The model only determines a best guess for the current state of a
partially observable environment
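A minimal Python sketch of this structure; the update_state function is assumed to encode both kinds of knowledge (how the world evolves and how actions affect it), and all names are illustrative:

```python
def make_model_based_reflex_agent(rules, update_state, initial_state):
    """Keep an internal state as the best guess about the world."""
    state = initial_state
    last_action = None

    def program(percept):
        nonlocal state, last_action
        # Fold the model's knowledge and the new percept into the state.
        state = update_state(state, last_action, percept)
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
        last_action = None
        return None

    return program
```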
Goal-based agents
Knowing about the current state is not always enough to
decide what to do
Desirable situations must be specified (the goal)
Usually requires search and planning
to find action sequences that achieve the goal
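A goal-based agent can use a standard search such as breadth-first search to find such an action sequence; a minimal sketch (successors and is_goal are assumed problem-specific functions):

```python
from collections import deque

def plan_to_goal(start, is_goal, successors):
    """Breadth-first search for an action sequence that reaches a goal.

    successors(state) yields (action, next_state) pairs.
    Returns a list of actions, or None if no plan exists.
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None
```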
Goal-based agents vs. reflex-based agents
Consideration of the future
Goal-based agents may be less efficient but are more
flexible
Knowledge is represented explicitly and can be changed easily
Example: going to a new destination
Goal-based agent: just specify the new destination as the goal
Reflex agent: the rules for when to turn and when to go
straight must all be rewritten
Utility-based agents
A more general performance measure than goals
How happy would each world state make the agent?
The utility function is an internalization of the performance measure
Advantages
Like goal-based agents, they show flexibility and learning advantages
Can trade off conflicting goals (e.g., speed and safety)
Where none of several goals can be achieved with certainty, the
likelihood of success can be weighted by the importance of the goals
A rational utility-based agent chooses the action that
maximizes the expected utility of the action's outcomes
Many chapters of the AI book are about this:
Handling uncertainty in partially observable and stochastic
environments
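A minimal sketch of this choice rule in Python, assuming a stochastic outcome model outcomes(state, action) that yields (probability, next_state) pairs (all names are illustrative):

```python
def expected_utility(state, action, outcomes, utility):
    """E[U(outcome)] = sum over outcomes of probability * utility."""
    return sum(p * utility(s) for p, s in outcomes(state, action))

def choose_action(state, actions, outcomes, utility):
    """Pick the action with maximum expected utility."""
    return max(actions(state),
               key=lambda a: expected_utility(state, a, outcomes, utility))
```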
Learning Agents
Create state-of-the-art systems in many areas of AI
Four conceptual components
Performance element: selects actions based on percepts
(what we previously considered to be the entire agent)
Learning element: makes improvements by modifying the
"knowledge" (the performance element) based on the critic's feedback
Critic: gives feedback on how the agent is doing
Problem generator: suggests actions leading to new and
informative experiences
Performance standard
Fixed, external to the agent
Percepts themselves provide no indication of success
It distinguishes part of the incoming percept as a reward or penalty
Learning Agents
The design of the learning element depends on the design of the
performance element (i.e., the agent design)
The learning element can make changes to any of the
"knowledge" components of the previous agent diagrams
to bring the components into closer agreement with the feedback
(yielding better overall performance), as sketched below
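A sketch of how the four components might be wired together in Python; the component interfaces (select_action, update, maybe_explore) are assumptions for illustration, not a fixed API:

```python
def make_learning_agent(performance, learning, critic, problem_generator):
    """Wire the four conceptual components into one agent program."""
    def program(percept):
        # Critic judges progress against the fixed performance standard.
        feedback = critic(percept)
        # Learning element adjusts the performance element's "knowledge".
        learning.update(performance, feedback)
        action = performance.select_action(percept)
        # Problem generator may substitute an exploratory action.
        return problem_generator.maybe_explore(action)
    return program
```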