Fundamentals of Artificial Intelligence

Intelligent Agents

Chapter 2 Objectives

•Agents and environments
•Rationality
•PEAS (Performance measure, Environment, Actuators, Sensors)
•Environment types
•Agent types

Agents

•An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
•Human agents: eyes, ears, and other organs for sensors; legs, mouth, and other body parts for actuators.
•Robotic agents: cameras and infrared range finders for sensors; various motors for actuators.
•Agents of all kinds have some functions as sensors and some functions as actuators.

Glossary

•Percepts: the agent's perceptual inputs.
•Percept sequence: the history of everything the agent has perceived.
•Agent function: describes the agent's behaviour; maps any percept sequence to an action.
•Agent program: implements the agent function.

Agents and environments

•The agent function maps from percept histories to actions: f: P* → A
•The agent program runs on the physical architecture to produce f
•agent = architecture + program

Example: Vacuum-cleaner Agent

•Environment: squares A and B
•Percepts: [location, contents], e.g. [A, Dirty]
•Actions: move left, move right, suck up dirt, and no-operation
•A simple agent function may be: "if the current square is dirty, then suck up the dirt; otherwise move to the other square."

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
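The pseudocode above translates directly into a short program. Here is a minimal Python sketch; the dict-based two-square environment and the driver loop are illustrative assumptions, not part of the original slides:

    # Minimal sketch of the reflex vacuum agent from the slide above.
    # The two-square world and the loop are illustrative assumptions.

    def reflex_vacuum_agent(percept):
        location, status = percept          # e.g. ("A", "Dirty")
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        else:                               # location == "B"
            return "Left"

    # Tiny hand-rolled environment to exercise the agent.
    world = {"A": "Dirty", "B": "Dirty"}
    location = "A"
    for _ in range(4):
        action = reflex_vacuum_agent((location, world[location]))
        print(location, world[location], "->", action)
        if action == "Suck":
            world[location] = "Clean"
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"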
Rational Agent

•An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform.
•What is the right thing? Whatever causes the agent to be most successful.
•How do we evaluate an agent's success? With a performance measure: a criterion for measuring an agent's behaviour.
  e.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the time taken, the electricity consumed, the noise generated, etc.
•Design the performance measure according to what is wanted in the environment, rather than according to how the agent should behave.

Rational Agent (contd.)

•What is rational depends on four things:
  – the performance measure
  – the agent's prior knowledge of the environment
  – the actions the agent can perform
  – the agent's percept sequence to date
•Definition of a rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Rational Agent (contd.)

•Rationality is distinct from omniscience (all-knowing with infinite knowledge).
•Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration: an important part of rationality).
•An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt); a rational agent should be autonomous!
•Rational ⇒ exploration, learning, autonomy
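The definition above is essentially an argmax over actions. A minimal sketch, assuming we can enumerate the actions and estimate the expected performance of each; both callables here are hypothetical placeholders, not anything defined in the slides:

    # Sketch of rational action selection: pick the action whose *expected*
    # performance measure is highest, given the percept sequence.
    # `expected_performance` is a hypothetical stand-in for whatever
    # estimate the agent can actually compute from its knowledge.

    def rational_choice(percept_sequence, actions, expected_performance):
        return max(actions, key=lambda a: expected_performance(percept_sequence, a))

    # Toy estimate: prefer Suck when the latest percept reports dirt.
    actions = ["Suck", "Right", "Left"]
    estimate = lambda ps, a: 1.0 if (a == "Suck" and ps[-1][1] == "Dirty") else 0.5
    print(rational_choice([("A", "Dirty")], actions, estimate))  # -> Suck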

Building Rational Agents

•To design a rational agent we need to specify a task environment.
•PEAS is used to specify a task environment:
  – P: Performance measure
  – E: Environment
  – A: Actuators
  – S: Sensors

PEAS: Specifying an automated taxi driver

•Performance measure: safe, fast, legal, comfortable, maximize profits
•Environment: roads, other traffic, pedestrians, customers
•Actuators: steering, accelerator, brake, signal, horn
•Sensors: cameras, sonar, speedometer, GPS
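Since PEAS is just a four-part specification, it can be captured in a tiny data structure. A minimal sketch; the PEAS class and its field names are my own framing, not from the slides:

    # A PEAS task-environment specification as a small record type.
    # The class itself is an illustrative convenience, not a standard API.
    from dataclasses import dataclass

    @dataclass
    class PEAS:
        performance: list[str]
        environment: list[str]
        actuators: list[str]
        sensors: list[str]

    taxi = PEAS(
        performance=["safe", "fast", "legal", "comfortable", "maximize profits"],
        environment=["roads", "other traffic", "pedestrians", "customers"],
        actuators=["steering", "accelerator", "brake", "signal", "horn"],
        sensors=["cameras", "sonar", "speedometer", "GPS"],
    )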
PEAS: Specifying a part-picking robot

•Performance measure: percentage of parts in correct bins
•Environment: conveyor belt with parts, bins
•Actuators: jointed arm and hand
•Sensors: camera, joint angle sensors

PEAS: Specifying an interactive English tutor

•Performance measure: maximize student's score on test
•Environment: set of students
•Actuators: screen display (exercises, suggestions, corrections)
•Sensors: keyboard

Environment types

•Fully observable (vs. partially observable)
  – fully: the sensors give access to all information relevant to the decision
  – partially: noise, inaccurate sensors, hidden information, ...
•Deterministic (vs. stochastic)
  – deterministic: the next state depends only on the current state and the next action
  – stochastic: probabilistic; other factors are involved
•Episodic (vs. sequential)
  – episodic: each episode is one self-contained, independent situation
  – sequential: the current decision affects future ones

Environment types (contd.)

•Static (vs. dynamic)
  – static: the environment is fixed during decision making
  – dynamic: the environment changes while the agent deliberates
•Discrete (vs. continuous)
  – discrete: a finite number of states (measurements, values, ...)
  – continuous: smooth, infinite scale
•Single agent (vs. multi-agent)
  – single: one agent involved
  – multi: more than one agent (adversarial or cooperative)
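As an illustration of these six dimensions, the automated-taxi task from the PEAS example is commonly classified as partially observable, stochastic, sequential, dynamic, continuous, and multi-agent. A sketch of how one might record such a classification; the dict layout is my own, not from the slides:

    # Recording an environment-type classification along the six
    # dimensions above. The dict layout is illustrative only.
    taxi_environment = {
        "observable":    "partially",   # sensors cannot see everything relevant
        "deterministic": "stochastic",  # other drivers, road conditions, ...
        "episodic":      "sequential",  # each maneuver affects later ones
        "static":        "dynamic",     # the world changes while deliberating
        "discrete":      "continuous",  # steering angles, speeds, positions
        "agents":        "multi",       # other drivers and pedestrians
    }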

Agent types

Four basic types, in order of increasing generality:
•Simple reflex agents
•Model-based reflex agents
•Goal-based agents
•Utility-based agents

Simple reflex agents

•The simplest kind of agent; keeps no percept history.
•These agents select actions on the basis of the current percept alone.
•Condition-action rules:
  – if car-in-front-is-braking then brake
  – if light-becomes-green then move-forward
  – if intersection-has-stop-sign then stop
  – if dirty then suck
Simple reflex agents (contd.)

•Characteristics:
  – efficient
  – no internal representation for reasoning or inference
  – no strategic planning, no learning
  – not good for multiple, opposing goals
•Such agents have limited intelligence: they work only if the correct decision can be made on the basis of the current percept alone.
•They will only work reliably if the environment is fully observable; otherwise infinite loops may occur.

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
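A minimal Python rendering of this pseudocode, assuming rules are stored as (condition, action) pairs; the rule representation and the interpret_input stand-in are my own illustrative choices:

    # Sketch of SIMPLE-REFLEX-AGENT: rules are (condition, action) pairs,
    # where a condition is a predicate over the interpreted state.
    # The rule representation is an illustrative assumption.

    def interpret_input(percept):
        # Stand-in for INTERPRET-INPUT: here the percept *is* the state.
        return percept

    def simple_reflex_agent(percept, rules):
        state = interpret_input(percept)
        for condition, action in rules:         # RULE-MATCH
            if condition(state):
                return action                   # RULE-ACTION
        return "NoOp"                           # no rule matched

    rules = [
        (lambda s: s.get("status") == "Dirty", "Suck"),
        (lambda s: s.get("location") == "A",   "Right"),
        (lambda s: s.get("location") == "B",   "Left"),
    ]
    print(simple_reflex_agent({"location": "A", "status": "Dirty"}, rules))  # Suck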

Simple Reflex Agent

[Figure: the agent perceives the environment through a "see" channel and acts back on it through an "action" channel.]

Model based reflex agents

•Designed to tackle partially observable environments.
•To update its internal state the agent needs two kinds of knowledge:
  – how the world evolves independently of the agent
    (e.g., an overtaking car gets closer with time);
  – how the world is affected by the agent's actions
    (e.g., if I turn left, what was to my right is now behind me).

function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: rules, a set of condition-action rules
          state, a description of the current world state
          action, the most recent action
  state ← UPDATE-STATE(state, action, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
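The same pseudocode in Python, keeping the state and the most recent action between calls. The update_state logic shown is a toy placeholder; a real model would encode the two kinds of knowledge listed above:

    # Sketch of REFLEX-AGENT-WITH-STATE. The agent keeps an internal model
    # of the world between percepts. update_state here is a toy placeholder.

    class ModelBasedReflexAgent:
        def __init__(self, rules):
            self.rules = rules      # (condition, action) pairs, as before
            self.state = {}         # description of the current world state
            self.action = None      # the most recent action

        def update_state(self, percept):
            # Fold the new percept into the model. A real model would also
            # use self.action and knowledge of how the world evolves.
            self.state.update(percept)

        def __call__(self, percept):
            self.update_state(percept)
            for condition, action in self.rules:
                if condition(self.state):
                    self.action = action
                    return action
            self.action = "NoOp"
            return self.action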

Model-based reflex agents (contd.)

[Figure: the agent's "see" channel feeds a "predict" component that maintains the internal state used to choose an action.]

Goal based agents

•The current state of the environment is not always enough.
  – e.g., at a road junction the taxi can turn left, turn right, or go straight;
  – the correct decision in such cases depends on where the taxi is trying to get to.
•Major difference: the future is taken into account.
•By combining goal information with the knowledge of its actions, the agent can choose those actions that will achieve the goal.
Goal based agents (contd.)

•Goal-based agents are much more flexible in responding to a changing environment and can accept different goals.
•Such agents work as follows (see the sketch after this list):
  – information comes in from the sensors as percepts;
  – the percepts update the agent's current state of the world;
  – based on the state of the world, its knowledge (memory), and its goals/intentions, the agent chooses actions and performs them through its effectors.

[Figure: sensors feed "see", which updates the predicted state; the state and the goals feed a "decision" component that produces the action.]
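A compact sketch of that loop in Python. The one-step lookahead through a transition model is my own minimal way of making "the future is taken into account" concrete; real goal-based agents typically search or plan over longer horizons:

    # Sketch of a goal-based agent: choose any action whose *predicted*
    # outcome satisfies the goal. The transition model and goal test are
    # illustrative placeholders; real agents plan over many steps.

    def goal_based_agent(state, actions, transition, goal_test):
        for action in actions:
            if goal_test(transition(state, action)):
                return action
        return "NoOp"   # no single action reaches the goal from here

    # Toy junction example: the goal is to end up heading toward "airport".
    transition = lambda s, a: {"heading": {"left": "city", "right": "airport",
                                           "straight": "suburbs"}[a]}
    goal_test = lambda s: s["heading"] == "airport"
    print(goal_based_agent({}, ["left", "right", "straight"], transition, goal_test))
    # -> right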

Utility based agents

•Goals alone are not always enough to generate high-quality behaviour.
  – e.g., different action sequences can all take the taxi agent to the destination (thereby achieving the "goal"), but some may be quicker, safer, or more economical.
•A more general performance measure is required to compare different world states.
•A utility function maps a state (or a sequence of states) onto a real number. It lets the agent take rational decisions and specify trade-offs when:
  – goals conflict (like speed and safety);
  – there are several goals, none of which can be achieved with certainty.
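In code, the only change from the goal-based sketch is that the agent ranks predicted outcomes by a real-valued utility instead of testing a boolean goal. The weighted speed/safety trade-off below is an illustrative assumption:

    # Sketch of utility-based action selection: rank predicted outcomes by
    # a real-valued utility. The speed/safety weighting is illustrative only.

    def utility_based_agent(state, actions, transition, utility):
        return max(actions, key=lambda a: utility(transition(state, a)))

    # Toy trade-off between conflicting goals (speed vs. safety).
    outcomes = {
        "highway":   {"speed": 0.9, "safety": 0.6},
        "back_road": {"speed": 0.5, "safety": 0.9},
    }
    transition = lambda s, a: outcomes[a]
    utility = lambda o: 0.4 * o["speed"] + 0.6 * o["safety"]
    print(utility_based_agent({}, ["highway", "back_road"], transition, utility))
    # -> back_road (safety is weighted more heavily here)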

Learning agents

A learning agent can be divided into four conceptual components:

•Learning element
  – responsible for making improvements to the performance element;
  – uses feedback from the critic to determine how the performance element should be modified to do better.
•Performance element
  – responsible for taking external actions;
  – selects actions based on percepts.
•Critic
  – tells the learning element how well the agent is doing with respect to a fixed performance standard.
•Problem generator
  – responsible for suggesting actions that will lead to new and informative experiences.
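A skeletal Python outline of how the four components fit together. The plug-in callables and the feedback flow are my own minimal framing of the slide's description, not a standard implementation:

    # Skeleton of a learning agent wired from the four components described
    # above. The concrete plug-in callables are illustrative placeholders.

    class LearningAgent:
        def __init__(self, performance_element, critic, learning_element,
                     problem_generator):
            self.perform = performance_element   # percept -> action
            self.critic = critic                 # percept -> feedback score
            self.learn = learning_element        # feedback -> updates perform
            self.explore = problem_generator     # suggests informative actions

        def step(self, percept):
            feedback = self.critic(percept)      # judge vs. a fixed standard
            self.learn(feedback)                 # improve the performance element
            suggestion = self.explore()          # optionally try something new
            return suggestion or self.perform(percept)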
