
AGENTS AND ENVIRONMENT

Instructor : Dr. Tahir Mujtaba (53598)


Assistant Professor,
SCOPE, VIT Chennai
[email protected]
Cabin 5, Annexure Ground Floor AB IIIrd

AGENTS AND ENVIRONMENT
What is an Agent?
An agent is anything that can perceive its environment through sensors and act upon that
environment through actuators (effectors).
An agent can be:
• Human agent: has eyes, ears, and other organs as sensors, and hands, legs, and the vocal
tract as actuators.
• Robotic agent: can have cameras and infrared range finders as sensors and various
motors as actuators.
• Software agent: can receive keystrokes and file contents as sensory input, act on those
inputs, and display output on the screen.

What are Sensors, Actuators, and Effectors?

• Sensor: a device that detects changes in the environment and passes them on to other
electronic devices. An agent observes its environment through sensors.
• Actuators: components of a machine that convert energy into motion and are
responsible for moving and controlling a system.
Examples: electric motors, gears, rails.
• Effectors: devices that affect the environment. Examples: legs, wheels, arms,
fingers, wings, fins, and a display screen.
AGENTS AND ENVIRONMENT

Agent Terminology
Performance measure of an agent: the criterion that determines how successful
an agent is.
Behavior of an agent: the action that the agent performs after any given sequence of
percepts.
Percept: the agent's perceptual input at a given instant.
Percept sequence: the history of everything the agent has perceived to date.
Agent function: a map from the percept sequence to an action.
EXAMPLE: VACUUM CLEANER

Percept Sequence    Action

[A, Clean]          Right
[A, Dirty]          Suck
[B, Clean]          Left
[B, Dirty]          Suck
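As a minimal sketch, the tabulated agent function can be written as a direct lookup in Python (the location labels A and B and the status strings follow the table above):

```python
# Table-driven agent function for the two-location vacuum world:
# maps the current percept (location, status) directly to an action.
agent_table = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def agent_function(location, status):
    # Look up the action prescribed for the current percept.
    return agent_table[(location, status)]

print(agent_function("A", "Dirty"))  # Suck
```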
STRUCTURE OF AI AGENT
Designing an agent program that implements the agent function is the central AI task.

An agent's structure is made up of its architecture and its agent program:

Agent = Architecture + Agent Program

Architecture: the hardware or software platform on which the agent runs.
It provides the underlying computational resources, such as sensors and actuators.

Agent Program: an implementation of the agent function.

Example: A Vacuum Cleaner Agent


Architecture:
A robotic vacuum cleaner with sensors (for detecting dirt, obstacles, and the layout of the
room) and actuators (for moving and vacuuming).

Agent Program:
The software that processes sensor inputs and determines actions based on a set of rules or
algorithms.
STRUCTURE OF AI AGENT
Pseudocode for the agent program:

function VacuumAgent(percept):
    state <- INTERPRET_INPUT(percept)
    rule <- RULE_MATCH(state, rules)
    action <- RULE_ACTION(rule)
    return action

function INTERPRET_INPUT(percept):
    if percept == "dirt":
        return "dirt detected"
    elif percept == "bump":
        return "obstacle detected"
    else:
        return "no dirt detected"

rules = [
    {condition: "dirt detected", action: "vacuum"},
    {condition: "obstacle detected", action: "turn right"},
    {condition: "no dirt detected", action: "move forward"}
]

function RULE_MATCH(state, rules):
    for rule in rules:
        if rule.condition == state:
            return rule
    return None

function RULE_ACTION(rule):
    if rule:
        return rule.action
    return None
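The pseudocode above translates into a minimal runnable Python sketch (the percept strings "dirt" and "bump" are taken from the pseudocode; any other percept is treated as a clean, clear cell):

```python
# Rule base: condition-action pairs, as in the pseudocode above.
rules = [
    {"condition": "dirt detected", "action": "vacuum"},
    {"condition": "obstacle detected", "action": "turn right"},
    {"condition": "no dirt detected", "action": "move forward"},
]

def interpret_input(percept):
    # Map the raw percept onto an internal state description.
    if percept == "dirt":
        return "dirt detected"
    elif percept == "bump":
        return "obstacle detected"
    return "no dirt detected"

def rule_match(state, rules):
    # Return the first rule whose condition matches the state, or None.
    for rule in rules:
        if rule["condition"] == state:
            return rule
    return None

def vacuum_agent(percept):
    # The agent program: percept -> state -> rule -> action.
    state = interpret_input(percept)
    rule = rule_match(state, rules)
    return rule["action"] if rule else None

print(vacuum_agent("dirt"))   # vacuum
print(vacuum_agent("bump"))   # turn right
```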
2 MAIN TYPES OF AI AGENTS
INTELLIGENT AGENTS:
- An autonomous entity that acts upon an environment using sensors and actuators
to achieve goals.
- It may learn from the environment in order to achieve its goals.
- A thermostat is an example of an intelligent agent.

RATIONAL AGENTS:
- An agent that has clear preferences, models uncertainty, and acts so as to
maximize its performance measure over all possible actions.

- A rational agent is said to "do the right thing". AI is about creating rational agents,
drawing on game theory and decision theory for various real-world scenarios.

- Rational action is central to reinforcement learning: the agent receives a positive
reward for each good action and a negative reward for each wrong action.

- A self-driving car is an example of a rational agent.
TYPES OF AI AGENTS

 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
 Learning agents
SIMPLE REFLEX AGENTS

• These agents select actions based on the current percept, ignoring the rest of the
percept history. They operate with condition-action rules (if-then rules).

• Examples:
• A thermostat that turns on the heater if the temperature is below a certain
threshold.
• The vacuum agent is a simple reflex agent because its decision is based only on the
current location and on whether that location contains dirt.
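The thermostat example reduces to a single condition-action rule; as a minimal sketch (the 20-degree threshold is an illustrative assumption):

```python
# Simple reflex agent: the action depends only on the current percept
# (the temperature), via one condition-action rule.
THRESHOLD = 20  # illustrative threshold, degrees

def thermostat_agent(temperature):
    if temperature < THRESHOLD:   # condition
        return "heater on"        # action
    return "heater off"

print(thermostat_agent(15))  # heater on
```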
MODEL-BASED AGENTS
• These agents maintain an internal state that depends on the percept history.
The internal state is used to handle partial observability and keep track of the world.

• Example 1: A robot vacuum that keeps track of which areas of the floor have
already been cleaned.

• Example 2: A self-driving car that keeps track of the positions of nearby vehicles
and pedestrians in order to navigate safely.

• These agents have a model, which is knowledge of how the world works, and they
perform actions based on that model.
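Example 1 can be sketched minimally in Python; the set of locations still to be verified clean is the agent's internal model, and the location names and percept strings are illustrative assumptions:

```python
# Model-based reflex agent: a robot vacuum that remembers which
# locations it has already observed to be clean (internal state).
class ModelBasedVacuum:
    def __init__(self, locations):
        # Model of the world: locations not yet known to be clean.
        self.to_clean = set(locations)

    def act(self, location, status):
        if status == "Dirty":
            return "Suck"
        self.to_clean.discard(location)   # update the model from the percept
        if not self.to_clean:
            return "Stop"                 # model says everything is clean
        return "Move"                     # some location may still be dirty

agent = ModelBasedVacuum(["A", "B"])
print(agent.act("A", "Dirty"))  # Suck
```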
GOAL-BASED AGENTS
• These agents act to achieve specific goals. They use goal information to guide
their actions and to plan sequences of actions that achieve those goals.

• Example 1: A GPS navigation system that plans a route to reach a specified
destination.

• Example 2: A delivery robot that plans a route to deliver a package to a specific
location.

• These expand the capabilities of the model-based agent by adding "goal"
information and choosing actions so that the goal can be achieved.
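Planning a route toward a goal, as in the GPS example, can be sketched with a breadth-first search over a road map; the map below is an illustrative assumption, not from the slides:

```python
from collections import deque

# Goal-based agent sketch: plan a sequence of actions (a route)
# that reaches the goal, using breadth-first search.
def plan_route(roads, start, goal):
    frontier = deque([[start]])   # paths still to expand
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path           # first path found is a shortest one
        for nxt in roads.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                   # goal unreachable

roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(plan_route(roads, "A", "D"))  # ['A', 'B', 'D']
```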
UTILITY-BASED AGENTS

• These agents are similar to goal-based agents but add an extra component, a
utility measure, which distinguishes them by quantifying how desirable a
given state is.

• They act based not only on goals but also on the best way to achieve the goal.

• They are useful when there are multiple possible alternatives and the agent has to
choose the one that performs best.

• Example: A financial investment system that selects a portfolio of assets to
maximize the expected return while minimizing risk.
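The investment example reduces to scoring each alternative with a utility function and picking the maximum; the candidate portfolios, their numbers, and the risk-aversion weight are illustrative assumptions:

```python
# Utility-based agent sketch: assign each alternative a utility
# (return minus a risk penalty) and choose the best one.
def utility(expected_return, risk, risk_aversion=0.5):
    # Higher return raises utility; risk is penalized.
    return expected_return - risk_aversion * risk

def choose_best(candidates):
    return max(candidates, key=lambda c: utility(c["return"], c["risk"]))

portfolios = [
    {"name": "bonds",  "return": 3.0, "risk": 1.0},
    {"name": "stocks", "return": 8.0, "risk": 9.0},
    {"name": "mixed",  "return": 6.0, "risk": 4.0},
]
print(choose_best(portfolios)["name"])  # mixed
```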
LEARNING AGENTS
• A learning agent in AI is an agent that can learn from its past experiences;
it has learning capabilities.
• It starts acting with basic knowledge and then adapts automatically through
learning.
• Example: A recommendation system that improves its suggestions based on
user feedback and preferences.

A learning agent has four main conceptual components:

• Learning element: responsible for making improvements by learning from the
environment.
• Critic: gives the learning element feedback describing how well the agent is
doing with respect to a fixed performance standard.
• Performance element: responsible for selecting external actions.
• Problem generator: responsible for suggesting actions that will lead to new
and informative experiences.

Hence, learning agents are able to learn, analyze their performance, and look for
new ways to improve it.
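The four components can be loosely illustrated with an epsilon-greedy action learner: the reward plays the critic, the value update is the learning element, action selection is the performance element, and random exploration stands in for the problem generator. The reward scheme and epsilon value are illustrative assumptions:

```python
import random

# Learning agent sketch: learns average rewards per action and
# mostly picks the best-known action, occasionally exploring.
class LearningAgent:
    def __init__(self, actions, epsilon=0.1):
        self.values = {a: 0.0 for a in actions}   # learned estimates
        self.counts = {a: 0 for a in actions}
        self.epsilon = epsilon

    def select_action(self):                      # performance element
        if random.random() < self.epsilon:        # problem generator: explore
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):              # learning element + critic
        self.counts[action] += 1
        n = self.counts[action]
        # Incremental running average of observed rewards.
        self.values[action] += (reward - self.values[action]) / n
```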
AGENT ENVIRONMENT

An environment is everything in the world that surrounds the agent but is not part of
the agent itself. An environment can be described as the situation in which the agent
is present.

The environment is where the agent lives and operates, and it provides the agent with
something to sense and act upon. An environment is often non-deterministic.

Types of Environment:
Fully observable vs Partially observable
Static vs Dynamic
Discrete vs Continuous
Deterministic vs Stochastic
Single-agent vs Multi-agent
Episodic vs Sequential
Known vs Unknown
Accessible vs Inaccessible
TYPES OF AGENT ENVIRONMENT
1. Fully observable vs Partially observable:
If the agent's sensors can access the complete state of the environment at each
point in time, the environment is fully observable; otherwise it is partially
observable.
A fully observable environment is easier to handle because there is no need to maintain
internal state to keep track of the history of the world.
If an agent has no sensors at all, the environment is called unobservable.

2. Deterministic vs Stochastic:
If the agent's current state and selected action completely determine the next state of
the environment, the environment is deterministic.
A stochastic environment is random in nature and cannot be completely determined by
the agent.
In a deterministic, fully observable environment, the agent does not need to worry about
uncertainty.
AGENT ENVIRONMENT

3. Episodic vs Sequential:
In an episodic environment, there is a series of one-shot actions, and only the current
percept is required to act.
In a sequential environment, however, the agent requires memory of past actions to
determine the next best action.

4. Static vs Dynamic:
If the environment can change while the agent is deliberating, it is dynamic;
otherwise it is static.
Static environments are easier to deal with because the agent does not need to keep
observing the world while deciding on an action.
In a dynamic environment, however, the agent must keep looking at the world before
each action.
Taxi driving is an example of a dynamic environment, whereas a crossword puzzle is an
example of a static environment.
AGENT ENVIRONMENT

5. Single-agent vs Multi-agent:

• If only one agent is involved in an environment and operates by itself, it is a
single-agent environment.
• If multiple agents operate in the environment, it is a multi-agent environment.
• Agent design problems in a multi-agent environment differ from those in a
single-agent environment.

6. Discrete vs Continuous:
• If the environment offers a finite number of percepts and actions, it is a discrete
environment; otherwise it is a continuous environment.
• A chess game is a discrete environment because there is a finite number of moves
that can be performed.
• A self-driving car operates in a continuous environment.
AGENT ENVIRONMENT

7. Known vs Unknown:
Known and unknown are not actually features of the environment but of the agent's
state of knowledge about it.
In a known environment, the results of all actions are known to the agent; in an
unknown environment, the agent must learn how it works in order to act.
It is quite possible for a known environment to be partially observable and for an
unknown environment to be fully observable.

8. Accessible vs Inaccessible:
If the agent can obtain complete and accurate information about the environment's
state, the environment is accessible; otherwise it is inaccessible.
An empty room whose state can be defined by its temperature is an example of an
accessible environment.
Information about an event elsewhere on Earth is an example of an inaccessible
environment.
PEAS REPRESENTATION
PEAS is a model under which an AI agent is described. When we define an AI agent or
rational agent, we can group its properties under the PEAS representation model. It
stands for four terms: Performance measure (P), Environment (E), Actuators (A), and
Sensors (S).

Agent: Medical Diagnosis
• Performance measure: healthy patient, minimized cost
• Environment: patient, hospital, staff
• Actuators: tests, treatments
• Sensors: keyboard (entry of symptoms)

Agent: Vacuum Cleaner
• Performance measure: cleanness, efficiency, battery life, security
• Environment: room, table, wood floor, carpet, various obstacles
• Actuators: wheels, brushes, vacuum extractor
• Sensors: camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor

Agent: Part-picking Robot
• Performance measure: percentage of parts in correct bins
• Environment: conveyor belt with parts, bins
• Actuators: jointed arms, hand
• Sensors: camera, joint angle sensors
