
Intelligent Agents

Chapter 2


Reminders
Assignment 0 (Lisp refresher) due 1/28
Lisp/Emacs/AIMA tutorial: 11-1 today and Monday, 271 Soda


Outline
Agents and environments
Rationality
PEAS (Performance measure, Environment, Actuators, Sensors)
Environment types
Agent types


Agents and environments


[Diagram: the agent receives percepts from the environment through its sensors and acts on the environment through its actuators; the "?" box is the agent program.]

Agents include humans, robots, softbots, thermostats, etc.


The agent function maps from percept histories to actions:
f : P* → A
The agent program runs on the physical architecture to produce f
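The distinction between the agent function f and the agent program can be sketched in Python (a minimal sketch; the helper name and table fragment are illustrative, not from the slides). A table-driven program implements f literally, by looking up the entire percept history:

```python
def make_table_driven_agent(table):
    """Implements the agent function f : P* -> A literally:
    the table maps every percept sequence to an action."""
    percepts = []  # the percept history to date

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # look up the whole history

    return program

# A fragment of the vacuum-world table:
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(table)
```

Such a table grows exponentially with the length of the percept sequence, which is why compact agent programs that realize the same function matter.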


Vacuum-cleaner world

Percepts: location and contents, e.g., [A, Dirty]


Actions: Left, Right, Suck, NoOp


A vacuum-cleaner agent
Percept sequence                Action
[A, Clean]                      Right
[A, Dirty]                      Suck
[B, Clean]                      Left
[B, Dirty]                      Suck
[A, Clean], [A, Clean]          Right
[A, Clean], [A, Dirty]          Suck
...                             ...

function Reflex-Vacuum-Agent( [location,status]) returns an action


if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left

What is the right function?


Can it be implemented in a small agent program?
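The program is small indeed. A direct Python transcription (a sketch, with string constants standing in for the symbolic percepts and actions):

```python
def reflex_vacuum_agent(percept):
    """Chooses an action from the current percept alone."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"
```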

Rationality
Fixed performance measure evaluates the environment sequence
one point per square cleaned up in time T ?
one point per clean square per time step, minus one per move?
penalize for > k dirty squares?
A rational agent chooses whichever action maximizes the expected value of
the performance measure given the percept sequence to date
Rational ≠ omniscient
percepts may not supply all relevant information
Rational ≠ clairvoyant
action outcomes may not be as expected
Hence, rational ≠ successful
Rational ⇒ exploration, learning, autonomy
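Maximizing the expected value of the performance measure can be sketched as follows (the outcome model, probabilities, and function names are illustrative assumptions, not from the slides):

```python
def rational_action(actions, outcomes, performance):
    """Return the action with the highest expected performance, where
    outcomes[a] is a list of (probability, resulting_state) pairs."""
    def expected(a):
        return sum(p * performance(s) for p, s in outcomes[a])
    return max(actions, key=expected)

# Illustrative vacuum example: sucking usually cleans the square.
outcomes = {
    "Suck":  [(0.9, "Clean"), (0.1, "Dirty")],
    "Right": [(1.0, "Dirty")],
}
performance = lambda state: 1 if state == "Clean" else 0
```

Note the agent maximizes *expected* performance given its percepts so far; an unlucky outcome does not make the choice irrational.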



PEAS
To design a rational agent, we must specify the task environment
Consider, e.g., the task of designing an automated taxi:
Performance measure?? safety, destination, profits, legality, comfort, . . .
Environment?? US streets/freeways, traffic, pedestrians, weather, . . .
Actuators?? steering, accelerator, brake, horn, speaker/display, . . .
Sensors?? video, accelerometers, gauges, engine sensors, keyboard, GPS, . . .



Internet shopping agent


Performance measure?? price, quality, appropriateness, efficiency
Environment?? current and future WWW sites, vendors, shippers
Actuators?? display to user, follow URL, fill in form
Sensors?? HTML pages (text, graphics, scripts)


Environment types

                   Solitaire   Backgammon   Internet shopping       Taxi
Observable??       Yes         Yes          No                      No
Deterministic??    Yes         No           Partly                  No
Episodic??         No          No           No                      No
Static??           Yes         Semi         Semi                    No
Discrete??         Yes         Yes          Yes                     No
Single-agent??     Yes         No           Yes (except auctions)   No

The environment type largely determines the agent design

The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent

Agent types
Four basic types in order of increasing generality:
simple reflex agents
reflex agents with state
goal-based agents
utility-based agents
All these can be turned into learning agents


Simple reflex agents

[Diagram: sensors report "what the world is like now"; condition-action rules map that directly to "what action I should do now"; actuators carry out the action in the environment.]

Example
function Reflex-Vacuum-Agent( [location,status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left

(setq joe (make-agent :name 'joe :body (make-agent-body)
                      :program (make-reflex-vacuum-agent-program)))

(defun make-reflex-vacuum-agent-program ()
  #'(lambda (percept)
      (let ((location (first percept)) (status (second percept)))
        (cond ((eq status 'dirty) 'Suck)
              ((eq location 'A) 'Right)
              ((eq location 'B) 'Left)))))


Reflex agents with state

[Diagram: the agent keeps internal state, updated from the percept using knowledge of "how the world evolves" and "what my actions do", to estimate "what the world is like now"; condition-action rules then select "what action I should do now", which actuators carry out in the environment.]

Example
function Reflex-Vacuum-Agent( [location,status]) returns an action
static: last_A, last_B, numbers, initially ∞
if status = Dirty then . . .

(defun make-reflex-vacuum-agent-with-state-program ()
  (let ((last-A infinity) (last-B infinity))
    #'(lambda (percept)
        (let ((location (first percept)) (status (second percept)))
          (incf last-A) (incf last-B)
          (cond
            ((eq status 'dirty)
             (if (eq location 'A) (setq last-A 0) (setq last-B 0))
             'Suck)
            ((eq location 'A) (if (> last-B 3) 'Right 'NoOp))
            ((eq location 'B) (if (> last-A 3) 'Left 'NoOp)))))))
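A Python transcription of the same idea, as a sketch (the threshold 3 mirrors the Lisp version; names are illustrative): a closure keeps a count of steps since each square was last seen dirty, and the agent idles rather than revisit a recently cleaned square.

```python
def make_reflex_vacuum_agent_with_state_program():
    """Internal state: steps since each square was last known dirty."""
    last = {"A": float("inf"), "B": float("inf")}

    def program(percept):
        location, status = percept
        last["A"] += 1
        last["B"] += 1
        if status == "Dirty":
            last[location] = 0   # just observed dirt here
            return "Suck"
        other = "B" if location == "A" else "A"
        if last[other] > 3:      # other square may be dirty again
            return "Right" if location == "A" else "Left"
        return "NoOp"

    return program
```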

Goal-based agents

[Diagram: the agent maintains state using "how the world evolves" and "what my actions do", predicts "what it will be like if I do action A", and compares the predicted states against its goals to decide "what action I should do now".]

Utility-based agents

[Diagram: as in the goal-based agent, the agent predicts "what it will be like if I do action A"; a utility function ("how happy I will be in such a state") then ranks the predicted states to decide "what action I should do now".]
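The utility-based selection step can be sketched as follows (the one-dimensional transition model and utility function are illustrative assumptions, not from the slides):

```python
def utility_based_choice(state, actions, transition, utility):
    """Predict the successor state of each action and pick the action
    whose predicted state has the highest utility."""
    return max(actions, key=lambda a: utility(transition(state, a)))

# Illustrative one-dimensional world: move toward position 5.
transition = lambda s, a: s + a   # actions are displacements
utility = lambda s: -abs(s - 5)   # happier the closer to 5
```

Unlike a goal-based agent, which only distinguishes goal states from non-goal states, the utility function imposes a ranking over all predicted states.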

Learning agents

[Diagram: the performance element selects actions as before; a critic compares behaviour against a performance standard and sends feedback to the learning element, which makes changes to the performance element's knowledge; a problem generator proposes exploratory actions in pursuit of learning goals.]

Summary
Agents interact with environments through actuators and sensors
The agent function describes what the agent does in all circumstances
The performance measure evaluates the environment sequence
A perfectly rational agent maximizes expected performance
Agent programs implement (some) agent functions
PEAS descriptions define task environments
Environments are categorized along several dimensions:
observable? deterministic? episodic? static? discrete? single-agent?
Several basic agent architectures exist:
reflex, reflex with state, goal-based, utility-based

