Module 2: Intelligent Agents
Intelligent Agents
Artificial Intelligence
Review of Intelligent Agents
Motivation
Objectives
Introduction
Agents and Environments
Rationality
Agent Structure
Agent Types
Simple reflex agent
Model-based reflex agent
Goal-based agent
Utility-based agent
Learning agent
Introduction to Intelligent Agents
Motivation
Agents are used to provide a consistent viewpoint on various topics in AI
Agents require essential skills to perform tasks that require intelligence
Intelligent agents use methods and techniques from the field of AI
What is an agent?
In general, an entity that interacts with its environment
Perception through sensors
Actions through effectors or actuators
Agent and its environment
An agent perceives its environment through sensors
The complete set of inputs at a given time is called a percept
The current percept, or a sequence of percepts, may influence the actions of an agent
It can change the environment through actuators
An operation involving an actuator is called an action
Actions can be grouped into action sequences
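The percept–action cycle described above can be sketched as a minimal simulation loop. This is an illustrative sketch, not part of the slides: the environment, the agent, and all names here (CounterEnvironment, CounterAgent, run) are assumptions chosen for the example.

```python
# Minimal sketch of the agent-environment loop: the environment yields a
# percept, the agent program maps it to an action, and the action changes
# the environment through its actuators.

def run(agent, environment, steps):
    """Run the perceive-act cycle for a fixed number of steps."""
    for _ in range(steps):
        percept = environment.percept()   # sensors
        action = agent.act(percept)       # agent program
        environment.apply(action)         # actuators

class CounterEnvironment:
    """Toy environment: a single integer the agent tries to drive to zero."""
    def __init__(self, value):
        self.value = value
    def percept(self):
        return self.value
    def apply(self, action):
        if action == "decrement":
            self.value -= 1

class CounterAgent:
    """Toy agent: decrements while the perceived value is positive."""
    def act(self, percept):
        return "decrement" if percept > 0 else "wait"

env = CounterEnvironment(3)
run(CounterAgent(), env, steps=5)
print(env.value)  # 0
```

Note that the loop itself knows nothing about either party; the same `run` function works for any agent and environment exposing these three methods.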
Example of Agents
Human agent
Eyes, ears, skin, taste buds, etc. for sensors
Hands, fingers, legs, mouth, etc. for actuators
Powered by muscles
Robot
Camera, infrared, bumper, etc. for sensors
Grippers, wheels, lights, speakers, etc. for actuators
Often powered by motors
Software agent
Functions as sensors
Information provided as input to functions in the form of encoded bit strings or symbols
Functions as actuators
Results delivered as output
Vacuum-Cleaner World
Agents and Their Environment
A rational agent does “the right thing”
The action that leads to the best outcome under the given circumstances
Problem: what is “the right thing”?
The Concept of Rationality
Rational ≠ omniscient (knowing everything)
Percepts may not supply all relevant information
Rational ≠ clairvoyant (perceiving beyond normal sensory contact)
Action outcomes may not be as expected
Rational ≠ successful
Rationality vs Perfection
Rationality maximizes expected performance
Perfection maximizes actual performance
Performance Measure
Performance of Agents
Criteria for measuring the outcome and the expenses of the agent
Often subjective in practice, but should be defined objectively
Task dependent
Time may be important
Performance measure example
Vacuum agent
Performance Measure: number of tiles cleaned during a certain period
Based on the agent’s report, or validated by an objective authority
Doesn’t consider expenses of the agent, side effects
Energy, noise, loss of useful objects, damaged furniture, scratched floor
Might lead to unwanted activities
Agent re-cleans clean tiles, covers only part of the room, drops dirt on tiles to have more tiles to clean, etc.
Alternative Performance Measure:
One point per square cleaned up in time T?
One point per clean square per time step, minus one per move
Penalize for > k dirty squares?
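The alternative measure "one point per clean square per time step, minus one per move" can be made concrete with a small scoring function. The two-square world and the recorded run below are illustrative assumptions, not data from the slides.

```python
# Scoring a vacuum agent run under the measure "one point per clean square
# per time step, minus one point per move".

def score(history, moves):
    """history: one tuple of square states ('clean'/'dirty') per time step.
    moves: number of movement actions taken during the run."""
    clean_points = sum(state.count("clean") for state in history)
    return clean_points - moves

# A three-step run in a two-square world: square B starts dirty, gets cleaned.
history = [("clean", "dirty"), ("clean", "dirty"), ("clean", "clean")]
print(score(history, moves=1))  # 4 clean-square points - 1 move = 3
```

Unlike "number of tiles cleaned", this measure rewards keeping squares clean over time and penalizes wasted motion, which discourages the unwanted behaviors listed above.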
Rational Agents
Rationality: selects the action that is expected to maximize its performance
Based on a performance measure
Depends on the percept sequence, background knowledge, and feasible actions
Performance measure for the successful completion of a task
Complete perceptual history (percept sequence)
Background knowledge
Especially about the environment
Dimensions, structure, basic “laws”
Task, user, other agents
Feasible actions
Capabilities of the agent
Definition of a Rational Agent
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
The Nature of Environments
Environment
Determines to a large degree the interaction between the “outside world” and the agent
The “outside world” is not necessarily the “real world” as we perceive it
In many cases, environments are implemented within computers
They may or may not have a close correspondence to the “real world”
Task environment: the nature of the task environment directly affects the appropriate design for the agent program.
Specifying the task environment: the PEAS description.
Specifying the Task Environment (I)
PEAS Description
Performance Measures
Used to evaluate how well an agent solves the task at hand
Environment
Surroundings beyond the control of the agent
Actuators
Determine the actions the agent can perform
Sensors
Provide information about the current state of the environment
VacBot PEAS Description
SearchBot PEAS Description
Performance Measures
Number of “hits” (relevant retrieved items)
Recall (hits / all relevant items)
Precision (hits / all retrieved items)
Quality of hits
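The recall and precision measures above can be computed directly from sets of retrieved and relevant items. The document IDs below are illustrative placeholders.

```python
# Recall and precision for the SearchBot performance measures, computed from
# sets of retrieved and relevant item IDs.

def recall(retrieved, relevant):
    """Hits / all relevant items."""
    return len(retrieved & relevant) / len(relevant)

def precision(retrieved, relevant):
    """Hits / all retrieved items."""
    return len(retrieved & relevant) / len(retrieved)

retrieved = {"doc1", "doc2", "doc3", "doc4"}
relevant = {"doc2", "doc4", "doc5"}
print(recall(retrieved, relevant))     # 2/3 (two of three relevant items found)
print(precision(retrieved, relevant))  # 0.5 (two of four retrieved items relevant)
```

The two measures pull in opposite directions: retrieving everything maximizes recall but ruins precision, which is why both appear in the PEAS description.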
Chess Player PEAS Description
Performance Measures
Winning the game
Time spent in the game
Environment
Chessboard, positions of every piece
Specifying the Environment (II)
PAGE Description:
Used for high-level characterization of agents
Percepts
Information acquired through the agent’s sensory system
Actions
Operations performed by the agent on the environment through its actuators
Goals
Desired outcome of the task with a measurable performance
Environment
Surroundings beyond the control of the agent
VacBot PAGE Description
Goals
Keep the floor clean
StudentBot PAGE Description
Environment
Classroom
Environment Properties
Fully observable vs. partially observable
Sensors capture all relevant information from the environment
Deterministic vs. stochastic (non-deterministic)
Changes in the environment are predictable
Episodic vs. sequential (non-episodic)
Independent perceiving-acting episodes
Static vs. dynamic
No changes while the agent is “thinking”
Discrete vs. continuous
Limited number of distinct percepts/actions
Single vs. multiple agents
Interaction and collaboration among agents
Competitive, cooperative
Structure of Agents
Agent Programs
Note the difference between the agent program, which takes the current percept as input, and the agent function, which may depend on the entire percept history. If the agent’s actions need to depend on the entire percept sequence, the agent will have to remember the percepts.
For example, Figure 2.7 shows a rather trivial agent program that keeps track of the percept sequence and then uses it to index into a table of actions to decide what to do. The table (an example of which is given for the vacuum world in Figure 2.3) explicitly represents the agent function that the agent program embodies.
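The table-driven agent program of Figure 2.7 can be sketched as follows. The percept vocabulary and table entries are illustrative, in the spirit of the vacuum-world table of Figure 2.3; only the structure (remember percepts, look up the whole sequence) comes from the text.

```python
# Sketch of a table-driven agent program: it appends each new percept to the
# remembered percept sequence and looks the whole sequence up in a table that
# explicitly encodes the agent function.

table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "right",
    (("A", "clean"), ("B", "dirty")): "suck",
}

percept_sequence = []

def table_driven_agent(percept):
    percept_sequence.append(percept)
    return table.get(tuple(percept_sequence), "noop")

print(table_driven_agent(("A", "clean")))  # right
print(table_driven_agent(("B", "dirty")))  # suck
```

The sketch also makes the drawback obvious: the table needs one entry per possible percept *sequence*, so its size grows exponentially with the lifetime of the agent.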
Agent Program Types
Simple Reflex Agents
These agents select actions based on the current percept, ignoring the rest of the percept history. For example, the vacuum agent whose agent function is tabulated in Figure 2.3 is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt.
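As a sketch, the vacuum agent function of Figure 2.3 reduces to a few lines once the percept history is ignored, assuming the usual two-square world with locations A and B:

```python
# The vacuum agent as a simple reflex agent: the action depends only on the
# current percept (location, dirt status), never on the percept history.

def reflex_vacuum_agent(location, status):
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

print(reflex_vacuum_agent("A", "dirty"))  # suck
print(reflex_vacuum_agent("A", "clean"))  # right
print(reflex_vacuum_agent("B", "clean"))  # left
```

Compare this with the table-driven program: by exploiting the fact that only the current percept matters, the same agent function shrinks from a table over percept sequences to three rules.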
Condition–Action Rules
Imagine yourself as the driver of the automated taxi. If the car in front brakes and its brake lights come on, then you should notice this and initiate braking. Such a connection can be written as a condition–action rule:
if car-in-front-is-braking then initiate-braking
Simple Reflex Agent
Model-based Reflex Agents
The agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. Updating this internal state as time goes by requires two kinds of knowledge to be encoded in the agent program in some form.
First, knowledge about “how the world works” (whether implemented in simple Boolean circuits or in complete scientific theories) is called a transition model of the world.
Second, the agent needs some information about how the state of the world is reflected in its percepts. This kind of knowledge is called a sensor model.
Together, the transition model and sensor model allow an agent to keep track of the state of the world, to the extent possible given the limitations of the agent’s sensors. An agent that uses such models is called a model-based agent.
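The transition model and sensor model can be sketched in a small example. The domain (a taxi that cannot directly see behind itself) and every name below are illustrative assumptions, not the book's algorithm:

```python
# Sketch of a model-based reflex agent: internal state is updated from the
# last action and the new percept using a transition model and a sensor
# model, then a condition-action rule fires on the resulting state.

class ModelBasedAgent:
    def __init__(self):
        self.state = {"car_behind": False}  # unobserved aspect tracked in state
        self.last_action = None

    def update_state(self, percept):
        # Transition model: having overtaken implies a car is now behind us.
        if self.last_action == "overtake":
            self.state["car_behind"] = True
        # Sensor model: a mirror percept, when available, overrides the guess.
        if "mirror" in percept:
            self.state["car_behind"] = percept["mirror"] == "car"

    def act(self, percept):
        self.update_state(percept)
        # The rule conditions on the internal state, not on the raw percept.
        action = "stay_in_lane" if self.state["car_behind"] else "change_lane"
        self.last_action = action
        return action

agent = ModelBasedAgent()
print(agent.act({}))          # change_lane (no car tracked yet)
agent.last_action = "overtake"
print(agent.act({}))          # stay_in_lane (inferred from the transition model)
```

The second call illustrates the point of the internal state: the agent acts on an unobserved aspect of the world (the car behind it) that it inferred from its own previous action rather than from the current percept.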
Model-based Reflex Agents
Goal-based Agent
The agent tries to reach a desirable state, the goal
May be provided from the outside (user, designer, environment), or inherent to the agent itself
Results of possible actions are considered with respect to the goal
Easy when the results can be related to the goal after each action
In general, it can be difficult to attribute goal satisfaction results to individual actions
May require consideration of the future
What-if scenarios
Search, reasoning, or planning
Very flexible, but not very efficient
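The "what-if" consideration of the future can be sketched as a search: the agent simulates the results of possible actions against a transition model and picks a first step on a path to the goal. The grid world, bounds, and names here are illustrative assumptions.

```python
# Sketch of a goal-based agent: breadth-first what-if search over a
# transition model of a 5x5 grid, returning the first action of a
# shortest path to the goal state.

from collections import deque

ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def result(state, action):
    """Transition model: where an action leads from a state."""
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)

def first_action_toward(start, goal):
    frontier = deque([(start, [])])   # (state, plan that reaches it)
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan[0] if plan else None
        for action in ACTIONS:
            nxt = result(state, action)
            if nxt not in visited and all(0 <= c <= 4 for c in nxt):
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

print(first_action_toward((0, 0), (2, 0)))  # right
```

This illustrates both sides of the slide's last point: the same code works for any reachable goal (flexible), but it re-searches the state space on every call (not very efficient).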
Goal-based Agent
A Utility-based Agent
More sophisticated distinction between different world states
A utility function maps each state onto a real number
May be interpreted as “degree of happiness”
Permits rational actions for more complex tasks
Resolution of conflicts between goals (tradeoffs)
Multiple goals (likelihood of success, importance)
A utility function is necessary for rational behavior, but sometimes it is not made explicit
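Choosing by utility rather than by a single goal can be sketched as maximizing expected utility over uncertain outcomes. The routes, probabilities, and utility values below are illustrative assumptions that encode a tradeoff between speed and safety.

```python
# Sketch of utility-based action selection: a utility function maps states
# to real numbers, and the agent picks the action with the highest expected
# utility over its possible outcome states.

# Each action leads to outcome states with some probability.
outcomes = {
    "safe_route": [("arrive_late", 0.9), ("arrive_on_time", 0.1)],
    "fast_route": [("arrive_on_time", 0.8), ("crash", 0.2)],
}

utility = {"arrive_on_time": 10.0, "arrive_late": 5.0, "crash": -100.0}

def expected_utility(action):
    return sum(p * utility[state] for state, p in outcomes[action])

best = max(outcomes, key=expected_utility)
print(best)  # safe_route
```

A goal-based agent with the single goal "arrive on time" would prefer the fast route; the utility function resolves the conflict between the speed and safety goals by weighing likelihood against importance.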
A Utility-based Agent
Learning Agent
Performance element
Selects actions based on percepts, internal state, background knowledge
Can be one of the previously described agents
Learning element
Identifies improvements
Critic
Provides feedback about the performance of the agent
Can be external; sometimes part of the environment
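The interplay of performance element, critic, and learning element can be sketched in a toy domain. The thermostat setting, the comfort threshold, and all names below are illustrative assumptions, not a standard algorithm.

```python
# Sketch of a learning agent: the performance element is a reflex rule with a
# tunable threshold, the critic scores comfort, and the learning element
# adjusts the threshold when the critic reports poor performance.

class LearningThermostat:
    def __init__(self, threshold=15.0):
        self.threshold = threshold  # parameter the learning element tunes

    def act(self, temperature):
        """Performance element: simple reflex rule."""
        return "heat" if temperature < self.threshold else "off"

    def critic(self, temperature):
        """Critic: negative feedback when the room is uncomfortably cold."""
        return -1.0 if temperature < 18.0 else 0.0

    def learn(self, temperature):
        """Learning element: raise the threshold after negative feedback."""
        if self.critic(temperature) < 0 and self.act(temperature) == "off":
            self.threshold += 1.0

agent = LearningThermostat()
for temp in [16.0, 16.0, 16.0]:  # repeatedly too cold, yet the heater stays off
    agent.learn(temp)
print(agent.act(16.0))  # heat (the threshold has been raised past 16)
```

Note the separation of concerns: the performance element alone would repeat its mistake forever; it is the critic's feedback, routed through the learning element, that improves it.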
A Model of a Learning Agent
Summary
Agents perceive and act in an environment
Definition of Percept
Description of the environment (PEAS)
Ideal agents maximize their performance measure
Rationality
Performance Measure
Rational Agent
Basic agent types
Simple reflex
Reflex with state
Goal-based
Utility-based
Learning
Some environments may make life harder for agents
Partially observable (inaccessible), stochastic (non-deterministic), sequential (non-episodic), dynamic, continuous
What I want you to do
Review Chapter 2
Review Class Notes