
CHAPTER TWO

INTELLIGENT AGENTS

CoSC 412
Kibrom G.
Intelligent Agents
• An agent is one that:
  • Perceives its environment via sensors (observes its environment) – a camera, or any other input
  • Takes action in its environment via its actuators
• The environment is that which is outside of the agent but affects the agent and is also affected by it
  • A limited part of the entire universe (for practical purposes)
• Examples:
  • Humans can be thought of as agents
    • Inputs/sensors – eyes, ears
    • Actuators/effectors – hands, legs
  • Robotic agent
    • Inputs/sensors – cameras, infrared sensors, …
    • Actuators – motors
  • Software agent
    • What do you think should come here?
Intelligent Agents – Some terminologies
• Percepts:
  • The content an agent's sensors are currently perceiving
• Percept sequence:
  • The complete history of everything the agent has ever perceived (up to the present)
• In a given situation/scenario, an agent makes a decision based on its:
  • Built-in knowledge (what it started with)
  • Percept sequence (learned over time)
• The agent function, which maps any given percept sequence to an action, describes an agent's behavior
  • Externally it's a table of actions (a list)
  • Internal to the agent, the process is implemented by an agent program
Example of Agent Types

Agent type | Percepts (input) | Actions (output) | Goals | Environment
Medical diagnosis system | Symptoms, findings, patient's answers | Questions, tests, treatments | Healthy patients, minimize costs | Patient, hospital
Satellite image analysis system | Pixels of varying intensity, color | Print a categorization of scene | Correct categorization | Images from orbiting satellite
Part-picking robot | Pixels of varying intensity | Pick up parts and sort into bins | Place parts in correct bins | Conveyor belts with parts
Refinery controller | Temperature, pressure readings | Open, close valves; adjust temperature | Maximize purity, yield, safety | Refinery
Interactive English tutor | Typed words | Print exercises, suggestions, corrections | Maximize student's score on test | Set of students
How should an agent act?
• Rational agent
  • One that does the right thing
  • i.e., the system achieves the objective assigned to it
• Which actions are correct? (Rationalistic action approach)
  • The actions that cause the agent to be most successful
  • Therefore, the success of an agent should be measured in some way
• What is rational at any given time depends on four things:
  • The performance measure defining the criterion of success
  • The agent's prior knowledge of the environment
  • The actions that the agent can perform
  • The agent's percept sequence up to now
How should an agent act? – Continued
• For each possible percept sequence, a rational agent should select an action expected to maximize its performance measure,
  • given the evidence provided by the percept sequence and whatever built-in knowledge the agent has
• For example, during an exam the objective is to maximize marks, based on the questions on the paper & your knowledge
Learning and Autonomy
• Learning
  • Does a rational agent depend on only the current percept?
  • No, the past percept sequence should also be used; this is called learning
  • After experiencing an episode (event), the agent should adjust its behavior to perform better at the same job next time
• Autonomy
  • If an agent just relies on the prior knowledge of its designer rather than its own percepts, then the agent lacks autonomy
  • A rational agent should be autonomous – it should learn what it can to compensate for partial or incorrect prior knowledge
  • E.g. a clock:
    • No input (percepts)
    • Runs only on its own algorithm (prior knowledge)
    • No learning, no experience, etc.
Software Agents
• Sometimes, the environment may not be the real world
  • E.g. a flight simulator, video games, the Internet
  • These are all artificial but very complex environments
• Agents working in these environments are called software agents (softbots), because all parts of the agent are software
Task Environments
• In designing an agent, the first step must always be to specify the task environment as fully as possible
• Specify the task environment with a PEAS description, as fully as possible:
  • Performance measure
  • Environment
  • Actuators
  • Sensors
Task Environments – Continued
• Performance measure
  • How can we judge the automated driver? Which factors are considered?
    • Getting to the correct destination
    • Minimizing fuel consumption
    • Minimizing the trip time and/or cost
    • Minimizing violations of traffic laws
    • Maximizing safety and comfort, etc.
• Environment
  • A taxi must deal with a variety of roads
  • Traffic lights, other vehicles, pedestrians, stray animals, road works, police cars, etc.
  • Interacting with the customer
• Actuators
  • Control over the accelerator, steering, gear shifting and braking
  • A display to communicate with the customers
• Sensors
  • Camera, …
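
Purely as an illustration of writing a PEAS description down as data, here is a hypothetical Python sketch for the automated taxi; the field names are assumptions, and the sensor list is left as short as the slide's (which is truncated):

from dataclasses import dataclass, field

# Hypothetical PEAS record for the automated taxi driver; the values
# simply mirror the bullets above (illustrative, not authoritative).

@dataclass
class PEAS:
    performance_measure: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

taxi_peas = PEAS(
    performance_measure=["correct destination", "minimize fuel consumption",
                         "minimize trip time/cost", "minimize traffic violations",
                         "maximize safety and comfort"],
    environment=["roads", "traffic lights", "other vehicles", "pedestrians",
                 "stray animals", "road works", "police cars", "customers"],
    actuators=["accelerator", "steering", "gear shifting", "braking", "display"],
    sensors=["camera"],  # the slide's sensor list is truncated ("camera, ...")
)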
Properties of task environments
• Fully observable vs. partially observable
  • Fully observable
    • If an agent's sensors give it access to the complete state of the environment at each point in time, then the environment is effectively fully observable
  • Partially observable
    • If an agent's sensors don't give it access to the complete state of the environment
    • Because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data
    • E.g. a local dirt sensor of the cleaner cannot tell whether other squares are clean or not
• Deterministic vs. stochastic
  • If the next state of the environment is completely determined by the current state and the actions executed by the agent, then the environment is deterministic
  • If the environment is not completely determinable/erratic, then it is stochastic
  • E.g. the cleaner and the taxi driver are stochastic because of some unobservable aspects: noise or unknown factors
Properties of task environments – Continued
• Episodic vs. sequential
  • An episode = the agent's single pair of perception & action
  • The quality of the agent's action does not depend on other episodes
  • Every episode is independent of the others
  • An episodic environment is simpler: the agent does not need to think ahead
  • Sequential: the current action may affect all future decisions
    • E.g. taxi driving and chess
• Static vs. dynamic
  • A dynamic environment is always changing over time
    • E.g. the number of people in the street
  • Static environment
    • E.g. the destination
  • Semi-dynamic
    • The environment does not change over time, but the agent's performance score does
Properties of task environments – Continued
• Discrete vs. continuous
  • If there are a limited number of distinct states, and clearly defined percepts and actions, the environment is discrete
    • E.g. a chess game
  • Continuous environment
    • E.g. taxi driving
• Known vs. unknown
  • In a known environment, the outcomes for all actions are given
    • Example: solitaire card games
  • If the environment is unknown, the agent will have to learn how it works in order to make good decisions
    • Example: a new video game


Properties of task environments – Examples
Structure of AI Agents
• An intelligent agent is composed of:
  • Architecture – the underlying machinery on which the AI agent executes/performs its function
  • Agent function – maps a percept/environmental input into an action
    • f: P* → A
  • And then there's the agent program, which implements the agent function by running on the agent architecture to produce f
• Input for the agent program – only the current percept
• Input for the agent function – the entire percept sequence
  • The agent must remember all of them
• Implement the agent program as a lookup table: an agent based on a prespecified lookup table
  • It keeps track of the percept sequence and just looks up the best action (see the sketch below)
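
A minimal sketch of such a table-driven agent in Python; the percept format and the table entries are made-up examples, not from the slides:

# Table-driven agent (illustrative sketch).
# The full percept sequence is the lookup key, so the table grows
# exponentially with the length of the sequence.

def make_table_driven_agent(table):
    percepts = []  # the agent must remember the entire percept sequence

    def agent_program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # look up the best action

    return agent_program

# Hypothetical table for a two-square vacuum world:
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))  # -> Suck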
Types of agent programs
Based on degree of perceived intelligence (from our perspective) and capability:
• Simple reflex agents: act only upon the current percept (don't consider the percept sequence)
  • Here the agent function is based on condition-action rules (a rule maps a state to an action)
    • If the condition is true, then the action is taken; otherwise not
  • The agent function works only in a fully observable environment
  • E.g. a vacuum agent (has a dirt sensor and a location sensor); a sketch follows below
  • How should the vacuum behave if its location sensor fails? (It doesn't know where its head is)
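
A minimal sketch of the simple reflex vacuum agent, assuming a percept of the form (location, status); the action names are illustrative:

# Simple reflex vacuum agent (illustrative sketch).
# Acts on the current percept only; no percept history is kept.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":      # condition-action rules
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck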
Types of agent programs – Continued
• The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now
• Model-based reflex agents have:
  • Internal state: depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state
  • Transition model: knowledge about how the world works
    • Effects of the agent's actions
    • How the world evolves independently of the agent
  • Sensor model: how the agent's percepts reflect the state of the world
• A model-based agent uses the transition and sensor models together to keep track of the state of the world
Types of agent programs – Continued
• Model-based reflex agent – continued
  • Transition model: for example, when the agent turns the steering wheel clockwise, the car turns to the right; when it's raining, the car's cameras can get wet
  • Sensor model: for example, when the car in front initiates braking, one or more illuminated red regions appear in the forward-facing camera image; and when the camera gets wet, droplet-shaped objects appear in the image, partially obscuring the road
  • Internal state: for example, the internal state need not be too extensive: just the previous frame from the camera
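
A rough sketch of a model-based reflex agent, loosely following the usual pseudocode; update_state and the rule format are assumptions:

# Model-based reflex agent (illustrative sketch).
# update_state(state, last_action, percept, model) combines the
# previous internal state, the last action, the new percept, and the
# transition/sensor models; all names here are assumptions.

def make_model_based_reflex_agent(update_state, rules, model):
    state, last_action = None, None

    def agent_program(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept, model)
        for condition, action in rules:   # first matching rule wins
            if condition(state):
                last_action = action
                return action

    return agent_program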
Types of agent programs – Continued
• Goal-based agents: as well as a current state description (model-based), the agent needs some sort of goal information that describes situations that are desirable
  • Combines model and goal to achieve its objective (reach its goal)
  • Selecting an action based on goals:
    • From a single action (simple)
    • A combination of several actions may be required (search and planning involved)
  • Decision making of this kind is fundamentally different from condition-action rules:
    • It considers both "What will happen if I do such-and-such?" and "Will that achieve my goal?" (a sketch follows below)
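
A hypothetical sketch of goal-based action selection: simulate each available action with the model and keep one that reaches the goal. predict and goal_test are assumed helper functions, and a real agent would search over whole action sequences:

# Goal-based action selection (illustrative sketch).
# predict(state, action) is a hypothetical transition-model function;
# goal_test(state) returns True for desirable states.

def goal_based_action(state, actions, predict, goal_test):
    for action in actions:
        # "What will happen if I do this action?"
        if goal_test(predict(state, action)):
            return action
    return None  # no single action suffices; search/planning needed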
Types of agent programs – Continued
• Utility-based agents: here we're concerned not only with goals but also with the quality of the action taken
  • A more refined action requires a performance metric (a utility function)
  • Uses a world model along with a utility function that influences its preferences among the states of that world
  • It chooses the action that leads to the best expected utility (a sketch follows below)
  • Advantages:
    • When there are conflicting goals, and only some of the goals (but not all) can be achieved, utility provides a trade-off
    • When there are several goals, none of which is achieved with certainty, utility provides a way for the decision-making
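
A minimal sketch of choosing by expected utility; outcomes and utility are assumed helper functions, not from the slides:

# Utility-based action selection (illustrative sketch).
# outcomes(state, action) -> list of (probability, next_state) pairs;
# utility(state) scores how desirable a state is.

def expected_utility(state, action, outcomes, utility):
    return sum(p * utility(s) for p, s in outcomes(state, action))

def utility_based_action(state, actions, outcomes, utility):
    # Choose the action with the highest expected utility.
    return max(actions,
               key=lambda a: expected_utility(state, a, outcomes, utility))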
Types of agent programs – Continued
• Learning agents
  • Learning allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow
  • A learning agent augments the performance element (which determines actions from percept sequences) with the following, for four conceptual components in all:
    • Learning element: makes improvements to the agent's knowledge
    • Critic: gives feedback to the learning element based on an external performance standard (from the user or examples: good or not?)
    • Problem generator: suggests actions that lead to new and informative experiences
  • A sketch of this architecture follows below
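
An illustrative sketch of how the four components might be wired together; all interfaces here are assumptions:

# Learning-agent architecture (illustrative sketch).
# Each component is passed in as a plain function for simplicity.

class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # picks actions
        self.learning_element = learning_element        # improves knowledge
        self.critic = critic                            # external standard
        self.problem_generator = problem_generator      # suggests experiments

    def step(self, percept):
        feedback = self.critic(percept)         # how well are we doing?
        self.learning_element(feedback)         # update the knowledge
        exploratory = self.problem_generator()  # maybe try something new
        return exploratory or self.performance_element(percept)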
