AI Module 1 - 5

The document discusses various types of agents in artificial intelligence, including Simple Reflex Agents, Model-based Reflex Agents, Goal-based Agents, Utility-based Agents, and Learning Agents. Each agent type is characterized by its decision-making process and capability to adapt to its environment, with Learning Agents being highlighted for their ability to improve through experience. The document also emphasizes the importance of goals and utility in guiding agent actions and the challenges associated with implementing these agents.


Faculty Name: Santhosh K

Academic Year: 2023-2024


Subject: Artificial Intelligence
Sub Code: BCS515B
Class & Sec: V C
The Table-Driven Agent
The table represents the agent function explicitly. Example: the simple vacuum-cleaner agent.
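
Below is a minimal sketch of a table-driven agent for the two-square vacuum world; the square names A/B, the percept format, and the Suck/Right/Left actions are assumptions used only for illustration.

percepts = []          # the agent's complete percept history

# Partial lookup table from percept sequences to actions; a full table
# would need one row for every possible percept sequence.
table = {
    (('A', 'Clean'),): 'Right',
    (('A', 'Dirty'),): 'Suck',
    (('B', 'Clean'),): 'Left',
    (('B', 'Dirty'),): 'Suck',
    (('A', 'Dirty'), ('A', 'Clean')): 'Right',
    (('A', 'Dirty'), ('B', 'Dirty')): 'Suck',
    # ... and so on for every longer percept sequence
}

def table_driven_agent(percept):
    """Append the percept to the history and look the action up in the table."""
    percepts.append(percept)
    return table.get(tuple(percepts))

print(table_driven_agent(('A', 'Dirty')))   # -> Suck
print(table_driven_agent(('A', 'Clean')))   # -> Right

Because the table needs one row for every possible percept sequence, it grows explosively; this is why the table-driven design is only workable for tiny environments such as the vacuum world.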
• Agents can be grouped into five classes based on their degree of perceived
intelligence and capability. All of these agents can improve their
performance and generate better actions over time.
These are given below:
➢Simple Reflex Agent

➢Model-based reflex agent


➢Goal-based agents

➢Utility-based agent

➢Learning agent
Simple reflex agents
• Simple reflex agents are the simplest agents. They make decisions on the
basis of the current percept alone and ignore the rest of the percept history
(a short sketch of one appears after the list of problems below).

• These agents succeed only in fully observable environments.

• Problems with the simple reflex agent design approach:

• They have very limited intelligence.

• They do not have knowledge of the non-perceptual parts of the current state.

• The set of condition-action rules is mostly too big to generate and to store.

• They are not adaptive to changes in the environment.

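A minimal sketch of a simple reflex agent for the same vacuum world as above; it conditions its action only on the current percept (the names are the same illustrative assumptions as before).

def simple_reflex_vacuum_agent(percept):
    """Decide using only the current percept: (location, status)."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:                       # location == 'B'
        return 'Left'

print(simple_reflex_vacuum_agent(('A', 'Dirty')))   # -> Suck
print(simple_reflex_vacuum_agent(('B', 'Clean')))   # -> Left

Note that each rule fires on the current percept alone; the agent cannot know, for example, whether the other square is already clean.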

Model-based reflex agent:
• The model-based agent can work in a partially observable environment and
track the current situation.
• A model-based agent has two important components:
• Model: knowledge about "how things happen in the world"; this is why it is
called a model-based agent.
• Internal state: a representation of the current state based on the percept
history.
Updating the agent's state requires information about:
• How the world evolves
• How the agent's actions affect the world.
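
A minimal sketch of a model-based reflex agent for the vacuum world; the internal state and the simple update rule used here are illustrative assumptions.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: the agent's best guess about each square,
        # built up from the percept history.
        self.state = {'A': 'Unknown', 'B': 'Unknown'}

    def update_state(self, percept):
        """Record what the current percept reveals; the other square
        keeps its last known status."""
        location, status = percept
        self.state[location] = status
        return location

    def program(self, percept):
        location = self.update_state(percept)
        if self.state[location] == 'Dirty':
            return 'Suck'
        if self.state == {'A': 'Clean', 'B': 'Clean'}:
            return 'NoOp'       # internal state says the whole world is clean
        return 'Right' if location == 'A' else 'Left'

agent = ModelBasedVacuumAgent()
print(agent.program(('A', 'Dirty')))   # -> Suck
print(agent.program(('A', 'Clean')))   # -> Right
print(agent.program(('B', 'Clean')))   # -> NoOp (both squares known clean)

Unlike the simple reflex agent, it can stop (NoOp) once its internal state says both squares are clean, even though no single percept ever shows both squares at once.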
Goal-based agents:
• Knowledge of the current state of the environment is not always sufficient
for an agent to decide what to do.
• The agent needs to know its goal which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
• They choose an action, so that they can achieve the goal.
• Sometimes goal-based action selection is straightforward: for example when goal
satisfaction results immediately from a single action.
• Sometimes it will be trickier: for example, when the agent has to consider long
sequences of twists and turns to find a way to achieve the goal.
• Search and planning are the subfields of AI devoted to finding action sequences
that achieve the agent’s goals.
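
As a sketch of the idea, the goal-based agent below plans with breadth-first search on a small grid world; the grid, the actions, and the goal state are illustrative assumptions standing in for the search/planning machinery mentioned above.

from collections import deque

ACTIONS = {'Up': (0, -1), 'Down': (0, 1), 'Left': (-1, 0), 'Right': (1, 0)}

def successors(state, size=4):
    """Yield (action, next_state) pairs that stay on a size x size grid."""
    x, y = state
    for action, (dx, dy) in ACTIONS.items():
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size:
            yield action, (nx, ny)

def plan(start, goal):
    """Breadth-first search for an action sequence that reaches the goal."""
    frontier = deque([(start, [])])
    explored = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:                    # goal test: the desirable situation
            return path
        for action, nxt in successors(state):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                              # no action sequence reaches the goal

print(plan((0, 0), (2, 1)))   # -> ['Down', 'Right', 'Right'] (a shortest plan)

The returned action sequence is the agent's plan; executing it step by step achieves the goal.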
Utility-based agents:

• These agents are similar to goal-based agents but add an extra component, a
utility measure, which provides a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way
to achieve them.
• Utility-based agents are useful when there are multiple possible
alternatives and the agent has to choose the best action (see the sketch
after the list below).
Utility-based agents, advantages and drawbacks w.r.t. goal-based agents:
• With conflicting goals, utility specifies an appropriate tradeoff.
• With several goals, none of which can be achieved with certainty, utility
selects a proper tradeoff between the importance of the goals and the
likelihood of success.
• They require sophisticated perception, reasoning, and learning.
• They may require expensive computation.
• They are complicated to implement.
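
A minimal sketch of utility-based action selection, assuming a hand-made outcome model and utility function; the agent picks the action with the highest expected utility.

# Illustrative outcome model: action -> list of (probability, outcome state).
outcomes = {
    'fast_route': [(0.7, 'arrive_early'), (0.3, 'stuck_in_traffic')],
    'safe_route': [(1.0, 'arrive_on_time')],
}

# Illustrative utility function scoring how desirable each outcome state is.
utility = {'arrive_early': 10, 'arrive_on_time': 8, 'stuck_in_traffic': 1}

def expected_utility(action):
    """Probability-weighted utility of the action's possible outcomes."""
    return sum(p * utility[state] for p, state in outcomes[action])

def choose_action(actions):
    """Pick the action with the highest expected utility."""
    return max(actions, key=expected_utility)

print(expected_utility('fast_route'))                 # 0.7*10 + 0.3*1 = 7.3
print(expected_utility('safe_route'))                 # 1.0*8 = 8.0
print(choose_action(['fast_route', 'safe_route']))    # -> safe_route

Here the safer route wins because the chance of getting stuck drags down the expected utility of the faster one; a goal-based agent with the single goal "arrive" could not express this tradeoff.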
Learning Agents:

• Previous agent programs describe methods for selecting actions

• How are these agent programs programmed?

• Programming them by hand is inefficient and ineffective!

• Solution: build learning machines and then teach them (rather than instruct
them)

• Advantage: robustness of the agent program toward initially unknown
environments
• Performance element: selects actions based on percepts; corresponds to the
previous agent programs.
• Learning element: introduces improvements; uses feedback from the critic on
how the agent is doing to determine improvements for the performance element.
• Critic: tells how the agent is doing w.r.t. the performance standard.
• Problem generator: suggests actions that will lead to new and informative
experiences; forces exploration of new, stimulating scenarios.
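
A skeleton wiring the four components together; the condition-action rules, the feedback signal, and the learning step used here are illustrative assumptions, not a specific algorithm.

import random

class LearningAgent:
    def __init__(self, actions):
        self.actions = actions
        self.rules = {}          # the performance element's condition-action rules

    def performance_element(self, percept):
        """Select an action for the current percept using the learned rules."""
        return self.rules.get(percept, random.choice(self.actions))

    def critic(self, outcome_reward):
        """Judge the outcome w.r.t. the performance standard (here: a reward)."""
        return outcome_reward

    def learning_element(self, percept, action, feedback):
        """Improve the performance element using the critic's feedback."""
        if feedback > 0:
            self.rules[percept] = action             # keep a rule that worked
        elif self.rules.get(percept) == action:
            del self.rules[percept]                  # drop a rule that did not

    def problem_generator(self):
        """Suggest an exploratory action to gain new, informative experience."""
        return random.choice(self.actions)

agent = LearningAgent(['brake_gently', 'brake_hard', 'accelerate'])
feedback = agent.critic(+1)                          # the outcome was judged good
agent.learning_element('wet_road', 'brake_gently', feedback)
print(agent.performance_element('wet_road'))         # -> brake_gently

In the taxi example below, the critic's reaction to the left turn would produce negative feedback, and the learning element would drop or revise the corresponding rule.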
Example: Taxi Driving
• After the taxi makes a quick left turn across three lanes, the critic observes the
shocking language used by other drivers.

• From this experience, the learning element formulates a rule saying this was a bad
action.

• The performance element is modified by adding the new rule.

• The problem generator might identify certain areas of behavior in need of
improvement, and suggest trying out the brakes on different road surfaces
under different conditions.
