
TYPES OF AI AGENTS

1. Simple Reflex Agents


Definition:
• Simple reflex agents are the simplest type of agent. They make decisions based on
the current percept alone and ignore the rest of the percept history.
Working:
• A simple reflex agent works on condition-action rules, which map the current
state directly to an action. For example, a room-cleaner agent acts only when there is
dirt in the room (a minimal code sketch follows the examples below).
Real-Life Examples:
• Elevator Control: In small buildings or low-traffic areas, simple reflex agents manage
elevator systems by responding to button presses and sensor inputs.
• Automatic Doors: Reflex agents in automatic doors detect a person in front and open,
staying closed if no one is present.
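
As a minimal, hypothetical sketch (the function and percept format are assumptions
for illustration, not from the source), a simple reflex agent reduces to a direct
mapping from the current percept to an action:

```python
# Minimal sketch of a simple reflex agent (hypothetical names).
# Condition-action rules map the current percept directly to an
# action; no percept history is kept.

def simple_reflex_vacuum(percept):
    """percept is a tuple (location, status), e.g. ('A', 'dirty')."""
    location, status = percept
    if status == "dirty":   # rule: if dirty, suck
        return "suck"
    elif location == "A":   # otherwise move to the other square
        return "move_right"
    else:
        return "move_left"

print(simple_reflex_vacuum(("A", "dirty")))  # -> suck
print(simple_reflex_vacuum(("B", "clean")))  # -> move_left
```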

Limitations in Dynamic Environments:

• No memory or learning: they do not learn from or adapt to past events.
• Limited adaptability: they handle only circumstances their fixed rules anticipate.
• Not suitable for complex goals: they cannot plan or reason ahead.

2. Model-Based Reflex Agents

Definition and Working:

• Model-based reflex agents are a type of intelligent agent in artificial intelligence that
operate on the basis of a simplified model of the world.

How Is It Different from the Simple Reflex Agent in Decision-Making?

• Simple reflex agents make decisions based solely on what they can currently sense
in their environment. This is limiting because they do not remember past
information or anticipate future changes. To handle situations where not all
information is immediately available, model-based agents are used; they keep track
of the parts of the world they cannot currently see.

World Model and Internal State:

• World Model: knowledge about "how things happen in the world"; this is why the
agent is called model-based.
• Internal State: a representation of the current state of the world, built from the
percept history.
Why So Effective?

• The agent becomes more flexible and capable in changing surroundings because it
can predict changes and remembers previous states.

Practical Scenario:

• A Robot Vacuum Cleaner: a simple reflex agent just moves left when it hits an obstacle,
whereas a model-based reflex agent remembers where obstacles are and plans its path to
clean more efficiently, as in the sketch below.
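
As a minimal sketch under assumed names (the class, percept keys, and grid are
illustrative, not from the source), the vacuum example might look like this:

```python
# Minimal sketch of a model-based reflex agent (hypothetical names).
# The agent maintains an internal state (obstacle positions learned
# from the percept history) and consults it before choosing a move.

class ModelBasedVacuum:
    def __init__(self):
        self.known_obstacles = set()  # internal state built from percepts
        self.position = (0, 0)

    def update_state(self, percept):
        """Fold the latest percept into the internal state."""
        if percept.get("bumped"):
            self.known_obstacles.add(percept["obstacle_at"])

    def act(self, percept):
        self.update_state(percept)
        if percept.get("dirty"):
            return "suck"
        x, y = self.position
        # Prefer a move the world model says is not blocked.
        for move, nxt in [("right", (x + 1, y)), ("left", (x - 1, y))]:
            if nxt not in self.known_obstacles:
                self.position = nxt
                return f"move_{move}"
        return "wait"

agent = ModelBasedVacuum()
print(agent.act({"dirty": False, "bumped": True, "obstacle_at": (1, 0)}))
# -> move_left: the agent remembers the obstacle at (1, 0) and avoids it.
```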

3. Goal-Based Agents

Definition and Working:

• Goal-based agents expand the capabilities of model-based agents by adding "goal"
information.
• They choose actions so as to achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding
whether the goal can be achieved. Such consideration of different scenarios is called
searching and planning, and it makes the agent proactive.

Comparison:

• Simple Reflex Agents: they do not have any goals; they simply react to the situation at
hand.
• Model-Based Reflex Agents: they are concerned only with the current state and maintain
an internal model of the world.
• Goal-Based Agents: they can relate short-term actions to long-term goals, making them
much more flexible in difficult situations. Goals are important because they guide the
agent toward achievement.
Real-Life Example:
• A group of friends planning a road trip is an example of the steps taken by a
goal-based agent. They have learned from past experience that cars are more
comfortable and suitable for longer distances. They then search for the shortest route,
and this search is carried out with the destination (the goal) in mind.
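
As a minimal sketch of the road-trip search (the toy map and function names are
assumptions for illustration), the agent can use breadth-first search to find a route
to the goal:

```python
# Minimal sketch of a goal-based agent's planning step (hypothetical
# map). The agent searches for a sequence of actions that reaches the
# goal instead of reacting to the current percept alone.

from collections import deque

ROADS = {  # toy road map: place -> reachable neighboring places
    "Home": ["TownA", "TownB"],
    "TownA": ["Beach"],
    "TownB": ["TownA", "Beach"],
}

def plan_route(start, goal):
    """Breadth-first search: returns the shortest route to the goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:  # the goal test drives the search
            return path
        for nxt in ROADS.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route to the goal exists

print(plan_route("Home", "Beach"))  # -> ['Home', 'TownA', 'Beach']
```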
4. Utility-Based Agents
Definition and Working:

• These agents are similar to goal-based agents but add an extra component, a utility
measure, which provides a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve
them.
• Utility-based agents are useful when there are multiple possible alternatives and the
agent has to choose the best action among them.
• The utility function maps each state to a real number that indicates how efficiently
each action achieves the goals (see the sketch below).
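
As a minimal sketch (the utility weights, transition model, and action names are
hypothetical), the agent evaluates each candidate action by the utility of the state
it is predicted to produce:

```python
# Minimal sketch of a utility-based agent (hypothetical numbers).
# The utility function maps each state to a real number; the agent
# picks the action whose predicted resulting state scores highest.

def utility(state):
    """Map a state to a real number: reward progress, penalize energy."""
    return 10.0 * state["progress"] - 1.0 * state["energy_used"]

def predict(state, action):
    """Toy transition model for two candidate routes."""
    if action == "short_steep_path":
        return {"progress": 1.0, "energy_used": 8.0}
    else:  # "long_flat_path"
        return {"progress": 1.0, "energy_used": 3.0}

def choose_action(state, actions):
    """Select the action that maximizes expected utility."""
    return max(actions, key=lambda a: utility(predict(state, a)))

print(choose_action({}, ["short_steep_path", "long_flat_path"]))
# -> long_flat_path (utility 7.0 beats 2.0)
```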

Application and Use Cases:

• Robotics: finding pathways with minimum energy consumption.
• Finance: trading algorithms decide which strategies yield the most profit at a
controlled level of risk.
• Game AI: selecting the actions with the maximum chance of winning and choosing
the best strategy.

5. Learning Agents

Definition and Working:

• A learning agent in AI is a type of agent that can learn from its past experiences;
that is, it has learning capabilities.
• It starts acting with basic knowledge and is then able to act and adapt automatically
through learning.
Components:
A learning agent has four main conceptual components, sketched in code below:
• Learning Element: modifies behavior based on feedback.
• Performance Element: selects actions based on present knowledge.
• Critic: provides feedback on the agent's performance.
• Problem Generator: suggests new exploratory actions that create learning opportunities.
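
As a minimal sketch under assumed names (the bandit-style reward loop and all
identifiers are illustrative, not from the source), the four components can be wired
together like this:

```python
# Minimal sketch of a learning agent's four components (hypothetical
# names). The critic scores each action, the learning element updates
# the knowledge the performance element uses, and the problem
# generator occasionally proposes something new to try.

import random

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # learned action values

    def performance_element(self):
        """Choose the best-known action from present knowledge."""
        return max(self.values, key=self.values.get)

    def critic(self, reward):
        """Turn raw environment feedback into a performance score."""
        return reward

    def learning_element(self, action, score, rate=0.1):
        """Modify behavior: nudge the action's value toward the score."""
        self.values[action] += rate * (score - self.values[action])

    def problem_generator(self):
        """Suggest an exploratory action to learn something new."""
        return random.choice(list(self.values))

agent = LearningAgent(["recommend_drama", "recommend_comedy"])
for _ in range(50):
    explore = random.random() < 0.2
    action = agent.problem_generator() if explore else agent.performance_element()
    reward = 1.0 if action == "recommend_comedy" else 0.0  # toy feedback
    agent.learning_element(action, agent.critic(reward))

print(agent.performance_element())  # usually -> recommend_comedy
```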
Examples and Benefits of Learning Agents:
Example:
• Recommendation Systems: Netflix and Amazon use learning agents to recommend items to
their customers according to the user's activity.
Benefits:
• Flexibility: the model adapts to changing situations.
• Better Decision-Making: actions are optimized over time.
• Autonomy: self-running models that learn continuously.
Learning agents are necessary in complex scenarios because they can learn and adapt.
