
Chapter 2

Intelligent Agents
Outline
Agent and Environment
Rationality vs. Omniscience
Task Environment and its properties
Structure of Intelligent Agents
Agent Types
• Simple Reflex Agent
• Model-Based Reflex Agent
• Goal-Based Agent
• Utility-Based Agent
• Learning Agent
Introduction

What is an agent?
• An agent is anything that perceives its environment through sensors and acts upon that environment through actuators.
• Examples:
  • A human is an agent.
  • A robot is also an agent, with cameras and motors.
  • A thermostat detecting room temperature.
• An environment is whatever the agent interacts with.
• A rational agent needs to be designed keeping in mind the type of environment it will be used in.



Diagram of an agent

[Figure: the agent perceives the environment through sensors and acts through actuators; the "?" box between them, what AI should fill, is the agent program.]
Simple Terms
Percept
• The agent's perceptual inputs at any given instant.
Percept sequence
• The complete history of everything the agent has ever perceived.



Agent function & program
The agent's behavior is mathematically described by
• the agent function:
  • a function mapping any given percept sequence to an action.
Practically, it is described by
• an agent program:
  • the real implementation.
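To make this distinction concrete, here is a minimal Python sketch, assuming the two-square vacuum world used later in this chapter (all names are illustrative):

# Hedged sketch: the agent *function* is an abstract mapping from a whole
# percept sequence to an action; the agent *program* is the code that
# actually runs, receiving one percept per invocation.

def agent_function(percept_sequence):
    """Abstract mapping: entire percept history -> action."""
    location, status = percept_sequence[-1]  # here only the latest percept matters
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

class AgentProgram:
    """Concrete implementation: remembers the history, applies the function."""
    def __init__(self):
        self.history = []

    def __call__(self, percept):
        self.history.append(percept)
        return agent_function(self.history)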



Rational agents
• A rational agent can be anything that makes decisions, such as a person, a machine, or software.
• It carries out the action with the best outcome after considering past and current percepts (the agent's perceptual inputs at a given instant).
• A rational agent always performs the right action, where the right action is the one that causes the agent to be most successful given the percept sequence (the evidence provided by what it has perceived and whatever built-in knowledge it has).


Concept of Rationality
Rational agent
• One that does the right thing.
• That is, every entry in the table for the agent function is correct (rational).
What is correct?
• The actions that cause the agent to be most successful.
• So we need ways to measure success.



Performance measure
• An objective function that determines how successfully the agent performs (e.g., 90% or 30%?).
• An agent, based on its percepts, produces an action sequence; if the sequence is desirable, the agent is said to be performing well.
• There is no universal performance measure for all agents.
Performance measure
A general rule:
• Design performance measures according to what one actually wants in the environment,
• rather than according to how one thinks the agent should behave.
E.g., in the vacuum-cleaner world:
• We want the floor clean, no matter how the agent behaves.
• We don't restrict how the agent behaves.


Acting of Intelligent Agents (Rationality)

• What is rational at any given time depends on four things:
  o P: Percepts – the inputs to our system
  o A: Actions – the outputs of our system
  o G: Goals – what the agent is expected to achieve
  o E: Environment – what the agent is interacting with
• Another popular alternative (PEAS) involves the following:
  o P: Performance – how we measure the system's achievements
  o E: Environment – what the agent is interacting with
  o A: Actuators – what produces the outputs of the system
  o S: Sensors – what provides the inputs to the system
• Rationality is concerned with expected success given what has been perceived.
Rational agent
For each possible percept sequence,
• a rational agent should select an action expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
E.g., an exam:
• Maximize marks, based on the questions on the paper and your knowledge.


Example of a rational agent
Performance measure
• Awards one point for each clean square at each time step, over 10,000 time steps.
Prior knowledge about the environment
• The geography of the environment: only two squares.
• The effects of the actions.


Example of a rational agent
Actions the agent can perform
• Left, Right, Suck, and NoOp.
Percept sequence
• Where is the agent?
• Does the location contain dirt?
Under these circumstances, the agent is rational.
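As a concrete illustration, here is a minimal Python sketch of this vacuum agent; naming the two squares "A" and "B" is an assumption for illustration:

# Each percept is a (location, status) pair, e.g. ("A", "Dirty").
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"                                  # clean the current square first
    return "Right" if location == "A" else "Left"      # otherwise move to the other square

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck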



Omniscience
An omniscient agent
• Knows the actual outcome of its actions, and can act accordingly.
• There are no other possible outcomes.
• However, omniscience is impossible in the real world.
• E.g., someone crossing a street is killed by a cargo door falling from a plane at 33,000 ft. Was crossing the street irrational?


Omniscience
Based on the circumstances, crossing was rational.
Rationality maximizes
• expected performance.
Perfection maximizes
• actual performance.
Hence rational agents are not omniscient.



Task environments
Task environments are the problems,
• while rational agents are the solutions.
Specifying the task environment:
• give a PEAS description, as fully as possible:
  P - Performance   A - Actuators
  E - Environment   S - Sensors
Therefore, in designing an intelligent agent, one has to remember the PEAS (Performance, Environment, Actuators, Sensors) framework.
We use an automated taxi driver as an example.



Task environments
Environment
• A taxi must deal with a variety of roads.
• Traffic lights, other vehicles, pedestrians, stray animals, road works, police cars, etc.
• It must also interact with the customer.



Task environments
Actuators (for outputs)
• Control over the accelerator, steering, gear shifting, and braking.
• A display to communicate with the customers.
Sensors (for inputs)
• Detect other vehicles and road situations.
• GPS (Global Positioning System) to know where the taxi is.
• Many more devices are necessary.



Task environments

[Figure: a sketch of the automated taxi driver.]


Properties of Task Environments
o Fully observable vs. partially observable: If an agent's sensors give it access to the complete state of the environment at each point in time, the environment is fully observable; otherwise it is not. Example: chess is a fully observable environment, while driving is only partially observable.
An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data. Example:
• A local dirt sensor on the vacuum cleaner cannot tell whether other squares are clean or not.



Properties of Task Environments
o Deterministic vs. non-deterministic: In a deterministic environment, the next state is completely determined by the current state and the action executed by the agent. A non-deterministic environment is random in nature and cannot be completely determined. Example: the Tic-Tac-Toe game (deterministic) vs. a robot on Mars (non-deterministic).
o Static vs. dynamic: A static environment does not change while the agent is deliberating. A dynamic environment, on the other hand, does change. Example: speech analysis (static) vs. a vision AI system in drones (dynamic).
o Discrete vs. continuous: A limited number of distinct, clearly defined percepts and actions constitutes a discrete environment. E.g., chess (discrete) vs. driving (continuous).
o Single-agent vs. multi-agent: An agent operating just by itself is in a single-agent environment. However, if other agents are involved, it is a multi-agent environment. Self-driving cars operate in a multi-agent environment.



Properties of Task Environments
o Accessible vs. inaccessible: If the agent's sensory apparatus can access the complete state of the environment, the environment is accessible to that agent; if not, it is inaccessible.
o Episodic vs. non-episodic: In an episodic environment, each episode consists of the agent perceiving and then acting.
• The quality of its action depends just on the episode itself.
• Subsequent episodes do not depend on the actions taken in previous episodes.
o Episodic environments are much simpler because the agent does not need to think ahead.
o Example: a mail-sorting system (episodic) vs. a chess game (non-episodic).



Properties of task environments
Known vs. unknown
This distinction refers not to the environment itself but to the agent's (or designer's) state of knowledge about the environment.
• In a known environment, the outcomes for all actions are given (example: solitaire card games).
• If the environment is unknown, the agent will have to learn how it works in order to make good decisions (example: a new video game).



Examples of task environments

[Table: example task environments classified by the properties above.]


Structure of agents
Agent = architecture + program
• Architecture = some sort of computing device (sensors + actuators).
• (Agent) program = some function that implements the agent mapping (the "?").
• Agent program = the job of AI.



Agent programs

Input for the agent program
• Only the current percept.
Input for the agent function
• The entire percept sequence.
• The agent must remember all of it.
One way to implement the agent program:
• a lookup table (encoding the agent function directly).



Agent programs
Skeleton design of an agent program
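The skeleton appears as a figure in the original slide. As a stand-in, here is a hedged Python sketch of a table-driven agent program, consistent with the lookup-table discussion on the next slides; the table entries are illustrative assumptions:

# Table-driven agent: the table maps entire percept sequences to actions.
percepts = []  # the percept sequence observed so far

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
    # ... one entry for every possible percept sequence
}

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")  # default for sequences not in the table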



Agent Programs

• Let P be the set of possible percepts.
• Let T be the lifetime of the agent (the total number of percepts it receives).
• The size of the lookup table is then

  $\sum_{t=1}^{T} |P|^t$

• Consider playing chess:
  • |P| = 10, T = 150.
  • This will require a table of at least 10^150 entries.
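A quick sanity check of this growth in Python:

# Size of the lookup table: sum of |P|**t for t = 1..T.
def table_size(num_percepts, lifetime):
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

print(table_size(10, 150))  # roughly 10**150, a number of about 150 digits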



Agent programs
Despite its huge size, a lookup table does what we want.
The key challenge of AI:
• Find out how to write programs that, to the extent possible, produce rational behavior
  • from a small amount of code,
  • rather than from a large number of table entries.
• E.g., a five-line program implementing Newton's method,
  • vs. huge tables of square roots, sines, and cosines.
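For instance, a short Python sketch of Newton's method for square roots (the starting guess and tolerance are arbitrary illustrative choices):

# Newton's method for sqrt(x), x > 0: repeat guess <- (guess + x/guess) / 2.
def newton_sqrt(x, tolerance=1e-12):
    guess = x if x >= 1 else 1.0  # illustrative starting guess
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2.0
    return guess

print(newton_sqrt(2.0))  # ~1.4142135623...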

Types of agents

Four types:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents



Simple reflex agents
• Works by finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule.
Condition-action rule
• E.g., if the car in front brakes and its brake lights come on, then the driver should notice this and initiate braking.
• Some processing is done on the visual input to establish the condition.
• If "the car in front is braking", this triggers an established connection in the agent program to the action "initiate braking". We call such a connection a condition-action rule, written as: if car-in-front-is-braking then initiate-braking.
• Humans also have many such condition-action rules. Some are learned responses; some are innate (inborn) responses, such as blinking when something approaches the eye.
Program Skeleton of Agent
function SKELETON-AGENT(percept) returns action
  static: knowledge, the agent's memory of the world
  knowledge ← UPDATE-KNOWLEDGE(knowledge, percept)
  action ← SELECT-BEST-ACTION(knowledge)
  knowledge ← UPDATE-KNOWLEDGE(knowledge, action)
  return action

• On each invocation, the agent's knowledge base is updated to reflect the new percept, the best action is chosen, and the fact that the action was taken is also stored in the knowledge base.
• The knowledge base persists from one invocation to the next.

NOTE: the performance measure is not part of the agent.


Structure of a simple reflex agent

[Figure: a simple reflex agent. Sensors report "what the world is like now"; condition-action rules determine "what action I should do now"; effectors carry the action out in the environment.]

function SIMPLE-REFLEX-AGENT(percept) returns action
  static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
Structure of a simple reflex agent
• Rectangles denote the current internal state of the agent's decision process, and
• ovals represent the background information used in the process.
• The INTERPRET-INPUT function generates an abstracted description of the current state from the percept.
• The RULE-MATCH function returns the first rule in the set of rules that matches the given state description.
• Note that the description in terms of "rules" and "matching" is purely conceptual; actual implementations can be as simple as a collection of logic gates implementing a Boolean circuit.
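A hedged Python rendering of this pseudocode; the rule representation, the INTERPRET-INPUT abstraction, and the braking rule are illustrative assumptions:

# Rules are (condition, action) pairs; conditions are predicates over the
# abstracted state description, tried in order (RULE-MATCH).
rules = [
    (lambda state: state["car_in_front_is_braking"], "initiate-braking"),
    (lambda state: True, "no-op"),  # default rule so some rule always matches
]

def interpret_input(percept):
    """Abstract the raw percept into a state description (illustrative)."""
    return {"car_in_front_is_braking": percept.get("brake_lights_on", False)}

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    for condition, action in rules:  # first matching rule wins
        if condition(state):
            return action

print(simple_reflex_agent({"brake_lights_on": True}))  # -> initiate-braking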
Model-Based Reflex Agent
• This is a reflex agent with internal state.
  • It keeps track of the parts of the world it cannot see now.
• It works by finding a rule whose condition matches the current situation (as defined by the percept and the stored internal state).
  • If the car in front is a recent model, it has a centrally mounted brake light. Older models have no centrally mounted brake light, so what if the agent gets confused? Is it a parking light? Is it a brake light? Is it a turn-signal light?
  • Some sort of internal state is needed in order to choose an action.
  • The camera should detect whether two red lights at the edges of the vehicle go on or off simultaneously.
• The driver looks in the rear-view mirror to check on the location of nearby vehicles. In order to decide on a lane change, the driver needs to know whether or not they are there. The driver sees, combines this with the stored information, and then does the action associated with the matching rule.
Structure of a model-based reflex agent

[Figure: a model-based reflex agent. The internal state, together with knowledge of "how the world evolves" and "what my actions do", combines with sensor input to estimate "what the world is like now"; condition-action rules then decide "what action I should do now", carried out by the effectors.]

function REFLEX-AGENT-WITH-STATE(percept) returns action
  static: state, a description of the current world state
          rules, a set of condition-action rules
  state ← UPDATE-STATE(state, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  state ← UPDATE-STATE(state, action)
  return action
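A minimal Python sketch of the same loop, assuming a dictionary as the world model and (condition, action) rules as in the earlier sketch:

# Model-based reflex agent: the internal state is updated from both the
# new percept and the agent's own action (UPDATE-STATE in the pseudocode).
class ModelBasedReflexAgent:
    def __init__(self, rules):
        self.state = {}    # the agent's model of the world
        self.rules = rules

    def update_state(self, percept=None, action=None):
        if percept is not None:
            self.state.update(percept)          # fold new evidence into the model
        if action is not None:
            self.state["last_action"] = action  # remember what we just did

    def __call__(self, percept):
        self.update_state(percept=percept)
        for condition, action in self.rules:    # first matching rule wins
            if condition(self.state):
                self.update_state(action=action)
                return action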
Model-Based Reflex Agent
• A model-based reflex agent keeps track of the current state of the world using an internal model. It then chooses an action in the same way as the simple reflex agent.
• Extended with goals, such an agent keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will (eventually) lead to the achievement of its goals.
Goal-based agents
• Choose actions that achieve the goal (an agent with explicit goals).
• This involves consideration of the future:
• Knowing about the current state of the environment is not always enough to decide what to do. For example, at a road junction the taxi can turn left, turn right, or go straight on.
• The right decision depends on where the taxi is trying to get to. As well as a current state description, the agent needs some sort of goal information, which describes situations that are desirable, e.g., being at the passenger's destination.
• The agent may need to consider long sequences of twists and turns to find a way to achieve the goal.
• Sometimes goal-based action selection is straightforward, for example when goal satisfaction results immediately from a single action.
Structure of a goal-based agent

[Figure: a goal-based agent. The internal state, "how the world evolves", and "what my actions do" yield "what the world is like now" and "what it will be like if I do action A"; the goals then determine "what action I should do now", carried out by the effectors.]

function GOAL-BASED-AGENT(percept) returns action
  state ← UPDATE-STATE(state, percept)
  action ← SELECT-ACTION[state, goal]
  state ← UPDATE-STATE(state, action)
  return action
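A hedged Python sketch of goal-based selection by one-step lookahead; the transition model, actions, and goal test are illustrative assumptions (a real agent would search over longer action sequences):

# Simulate each action with a transition model; pick one that reaches the goal.
def goal_based_agent(state, actions, transition_model, goal_test):
    for action in actions:
        predicted = transition_model(state, action)  # "what it will be like if I do A"
        if goal_test(predicted):
            return action
    return "no-op"  # no single action suffices; a planner would search deeper

# Illustrative use: a taxi at a junction whose destination lies to the right.
actions = ["turn-left", "turn-right", "go-straight"]
model = lambda state, action: {"at_destination": action == "turn-right"}
print(goal_based_agent({}, actions, model, lambda s: s["at_destination"]))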
Goal-based agent
• Although the goal-based agent appears less efficient, it is more flexible, because the knowledge that supports its decisions is represented explicitly and can be modified.
• The goal-based agent's behavior can easily be changed to go to a different destination, simply by specifying that destination as the goal.
Utility-based agents
• Goals alone are not really enough to generate high-quality behavior. For example, there are many action sequences that will get the taxi to its destination, thereby achieving the goal, but some are quicker, safer, more reliable, or cheaper than others. We need to consider speed and safety.
• There may also be several goals the agent can aim for, none of which can be achieved with certainty.
• Utility provides a way in which the likelihood of success can be weighed against the importance of the goals.
• An agent that possesses an explicit utility function can make rational decisions.
Structure of a utility-based agent

[Figure: a utility-based agent. The internal state, "how the world evolves", and "what my actions do" yield "what the world is like now" and "what it will be like if I do action A"; the utility function estimates "how happy I will be in such a state", determining "what action I should do now", carried out by the effectors.]

function UTILITY-BASED-AGENT(percept) returns action
  state ← UPDATE-STATE(state, percept)
  action ← SELECT-OPTIMAL-ACTION[state, goal]
  state ← UPDATE-STATE(state, action)
  return action
Utility-based agent
• An agent's utility function is essentially an internalization of the performance measure.
• If the internal utility function and the external performance measure are in agreement, then an agent that chooses actions to maximize its utility will be rational according to the external performance measure.
• The agent then chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of each outcome.
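A minimal Python sketch of this expected-utility computation; the outcome probabilities and utility values are illustrative assumptions:

# EU(action) = sum over outcome states s' of P(s' | state, action) * U(s').
def expected_utility(outcomes, utility):
    """outcomes: a list of (probability, outcome_state) pairs for one action."""
    return sum(p * utility(s) for p, s in outcomes)

def utility_based_agent(state, actions, outcome_model, utility):
    return max(actions,
               key=lambda a: expected_utility(outcome_model(state, a), utility))

# Illustrative use: two routes to the destination with different risks.
utility = lambda s: {"arrived-fast": 10, "arrived-slow": 5, "crash": -100}[s]
outcome_model = lambda state, action: {
    "highway":  [(0.90, "arrived-fast"), (0.10, "crash")],
    "backroad": [(1.00, "arrived-slow")],
}[action]
print(utility_based_agent({}, ["highway", "backroad"], outcome_model, utility))
# -> backroad: the safer route wins once the crash risk is weighed in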
Learning Agents
After an agent is programmed, can it work immediately?
• No, it still needs teaching.
In AI,
• once an agent is built,
• we teach it by giving it one set of examples,
• and test it using another set of examples.
We then say the agent learns:
• it is a learning agent.



Learning Agents
Four conceptual components
• Learning element
  • Makes improvements.
• Performance element
  • Selects external actions.
• Critic
  • Tells the learning element how well the agent is doing with respect to a fixed performance standard. (Feedback from the user or from examples: good or not?)
• Problem generator
  • Suggests actions that will lead to new and informative experiences.
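A hedged Python sketch of how these four components might interact on one step; all component interfaces here are illustrative assumptions:

# One step of a learning agent: the critic scores behavior against a fixed
# performance standard, the learning element improves the performance
# element, and the problem generator occasionally proposes exploration.
import random

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic,
                 problem_generator, explore_rate=0.1):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # makes improvements
        self.critic = critic                            # scores vs. the standard
        self.problem_generator = problem_generator      # suggests new experiences
        self.explore_rate = explore_rate

    def step(self, percept):
        feedback = self.critic(percept)                 # how well are we doing?
        self.learning_element(self.performance_element, feedback)
        if random.random() < self.explore_rate:         # sometimes try something new
            return self.problem_generator(percept)
        return self.performance_element(percept)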



Learning Agents

[Figure: learning agent architecture, showing the critic, learning element, performance element, and problem generator.]

