Types of Agents


Agents in Artificial Intelligence can be categorized into different types based on how an agent’s actions affect its perceived intelligence and capabilities, such as:
 Simple reflex agents
 Model-based agents
 Goal-based agents
 Utility-based agents
Simple Reflex Agent
A simple reflex agent is an AI system that follows pre-defined rules to make decisions. It
only responds to the current situation without considering the past or future
ramifications.
A simple reflex agent is suitable for environments with stable rules and straightforward
actions, as its behavior is purely reactive and responsive to immediate environmental
changes.
How does it work?
A simple reflex agent executes its functions by following the condition-action rule, which
specifies what action to take in a certain condition.
Example
Consider a rule-based system developed to support automated customer support interactions. If a customer’s message contains keywords indicating a password reset, the system can automatically generate a predefined response containing instructions on resetting the password.
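The condition-action rule in this example can be captured in a few lines of code. The Python sketch below is illustrative only; the keyword list and canned responses are assumptions, not part of any particular support system.

PASSWORD_RESET_KEYWORDS = {"forgot password", "reset my password", "password reset"}

def respond(message: str) -> str:
    # Condition-action rule: react only to the current message,
    # with no memory of past interactions or future consequences.
    text = message.lower()
    if any(keyword in text for keyword in PASSWORD_RESET_KEYWORDS):
        # Action: return the predefined password-reset instructions.
        return "To reset your password, follow the steps under Settings > Security."
    # Default action when no condition matches.
    return "Your message has been forwarded to a support representative."

print(respond("I forgot password and cannot log in"))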
Model-based Reflex Agent
A model-based reflex agent performs actions based on the current percept and an internal state representing the unobserved parts of the world.
It updates its internal state based on two factors:
 How the world evolves independently of the agent
 How the agent’s actions affect the world
A cautionary model-based reflex agent is a variant of a model-based reflex agent that
also considers the possible consequences of its actions before executing them.

A model-based reflex agent follows the condition-action rule, which specifies the appropriate action to take in a given situation. But unlike a simple reflex agent, a model-based agent also employs its internal state to assess the condition during the decision and action process.
The model-based reflex agent operates in four stages:
1. Sense: It perceives the current state of the world with its sensors.
2. Model: It constructs an internal model of the world from what it sees.
3. Reason: It uses its model of the world to decide how to act, based on a set of predefined rules or heuristics.
4. Act: The agent carries out the action that it has chosen.
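As a concrete illustration, these four stages can be sketched as a small Python class. The thermostat-style world, temperature thresholds, and rule set below are assumptions chosen for brevity, not a prescribed design.

class ModelBasedReflexAgent:
    def __init__(self):
        # Internal state: the agent's estimate of unobserved aspects of the
        # world, including the effect of its own previous action.
        self.state = {"heater_on": False, "temperature": None}

    def sense_and_model(self, percept):
        # Sense + Model: fold the new percept into the internal state.
        self.state["temperature"] = percept["temperature"]

    def reason(self):
        # Reason: condition-action rules evaluated against the internal state.
        if self.state["temperature"] < 18 and not self.state["heater_on"]:
            return "turn_heater_on"
        if self.state["temperature"] > 22 and self.state["heater_on"]:
            return "turn_heater_off"
        return "do_nothing"

    def act(self, percept):
        self.sense_and_model(percept)
        action = self.reason()
        # Act: record how the chosen action changes the world.
        if action == "turn_heater_on":
            self.state["heater_on"] = True
        elif action == "turn_heater_off":
            self.state["heater_on"] = False
        return action

agent = ModelBasedReflexAgent()
print(agent.act({"temperature": 16}))  # turn_heater_on
print(agent.act({"temperature": 23}))  # turn_heater_off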
Goal-based Agents
Goal-based agents are AI agents that use information from their environment to achieve
specific goals. They employ search algorithms to find the most efficient path towards
their objectives within a given environment.

These agents are also known as rule-based agents, as they follow predefined rules to
accomplish their goals and take specific actions based on certain conditions.
Goal-based agents are easy to design and can handle complex tasks. They can be used in
various applications like robotics, computer vision, and natural language processing.

Unlike basic models, a goal-based agent can determine the optimal course of its decision-making and action-taking processes depending on its desired outcome or goal.
Given a plan, a goal-based agent attempts to choose the best strategy to achieve its goals. It then uses search algorithms and heuristics to find an efficient path to the goal.

The working pattern of a goal-based agent can be divided into five steps:
1. Perception
2. Reasoning
3. Action
4. Evaluation
5. Goal Completion
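To make the search step concrete, here is a minimal Python sketch of a goal-based agent that plans a route on a small grid using breadth-first search. The grid size, walls, start, and goal positions are illustrative assumptions.

from collections import deque

def plan_path(start, goal, walls, width=5, height=5):
    # Reasoning with search: find the shortest action sequence to the goal.
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        position, actions = frontier.popleft()
        if position == goal:  # Goal completion
            return actions
        for name, (dx, dy) in moves.items():
            nxt = (position[0] + dx, position[1] + dy)
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls and nxt not in visited):
                visited.add(nxt)
                frontier.append((nxt, actions + [name]))
    return None  # No path to the goal exists

# Perception (the grid and walls), reasoning (search), then action (the plan).
print(plan_path(start=(0, 0), goal=(3, 2), walls={(1, 0), (1, 1)}))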
Utility-based Agents
Utility-based agents are AI agents that make decisions based on
maximizing a utility function or value. They choose the action with the
highest expected utility, which measures how good the outcome is.

This helps them deal with complex and uncertain situations more flexibly
and adaptively. Utility-based agents are often used in applications where
they have to compare and select among multiple options, such as resource
allocation, scheduling, and game-playing.
A utility-based agent aims to choose actions that lead to a high utility
state. To achieve this, it needs to model its environment, which can be
simple or complex.

Then, it evaluates the expected utility of each possible outcome based on the probability distribution and the utility function.

Finally, it selects the action with the highest expected utility and repeats
this process at each time step.
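This selection step can be written out as a short Python sketch. The actions, outcome probabilities, and utility values below are made-up numbers used only to show how expected utility drives the choice.

def expected_utility(outcomes):
    # Sum of probability * utility over an action's possible outcomes.
    return sum(probability * utility for probability, utility in outcomes)

def choose_action(action_models):
    # Pick the action with the highest expected utility.
    return max(action_models, key=lambda action: expected_utility(action_models[action]))

# Each action maps to (probability, utility) pairs for its possible outcomes.
action_models = {
    "ship_by_air": [(0.9, 80), (0.1, -20)],  # fast, but costly if delayed
    "ship_by_sea": [(0.7, 60), (0.3, 10)],   # cheaper, but slower
}

best = choose_action(action_models)
print(best, expected_utility(action_models[best]))  # ship_by_air 70.0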
