AI 1
2. Model-Based Reflex Agents
• Model-based reflex agents are a type of intelligent agent in artificial intelligence that
operate on the basis of a simplified internal model of the world.
• Simple reflex agents make decisions based solely on what they can currently sense from
their environment. This is limiting because they neither remember past information nor
anticipate future changes. To handle situations where not all information is immediately
available, model-based agents are used: they keep track of the parts of the world they
cannot see at the moment.
• World Model: knowledge about "how things happen in the world"; this model is what gives
the agent its name.
• Internal State: a representation of the current state, built from the percept history.
Why So Effective?
• It is more flexible and capable because it can predict changes and remembers previous
states, which helps it operate in changing environments.
Practical Scenario:
• A Robot Vacuum Cleaner: a simple reflex agent just turns away when it hits an obstacle,
but a model-based reflex agent remembers where obstacles are and plans its path to clean
more efficiently.
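The vacuum scenario above can be sketched in Python. The grid world, percept format, and movement rules below are illustrative assumptions, not part of the original example; the point is that the agent keeps an internal state (remembered obstacles) and consults it before acting:

```python
class ModelBasedVacuum:
    def __init__(self):
        self.known_obstacles = set()  # internal state: obstacles remembered so far
        self.position = (0, 0)

    def update_model(self, percept):
        """World-model update: record any obstacle the bump sensor reports."""
        if percept.get("bumped"):
            self.known_obstacles.add(percept["obstacle_at"])

    def act(self, percept):
        self.update_model(percept)
        x, y = self.position
        # Try candidate moves, skipping cells already known to hold obstacles.
        for move in [(x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1)]:
            if move not in self.known_obstacles:
                self.position = move
                return move
        return self.position  # boxed in: stay put

agent = ModelBasedVacuum()
agent.act({"bumped": True, "obstacle_at": (1, 0)})  # remembers the obstacle
step = agent.act({"bumped": False})                 # later moves avoid (1, 0)
```

A simple reflex agent would repeat the same bump-and-turn reaction forever; here the remembered obstacle changes every future decision.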
3. Goal-Based Agents
• Goal-based agents extend the capabilities of model-based agents by adding "goal"
information.
• They choose actions that move them toward achieving the goal.
• These agents may have to consider a long sequence of possible actions before deciding
whether the goal can be achieved. Such consideration of different scenarios is called
searching and planning, and it is what makes the agent proactive.
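The "consider a long sequence of possible actions" step is exactly what a search algorithm does. Below is a minimal sketch using breadth-first search on a small grid; the grid size, obstacle set, and coordinates are illustrative assumptions:

```python
from collections import deque

def plan_route(start, goal, obstacles, size=5):
    """Search: explore sequences of moves until one reaches the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path  # shortest action sequence found
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in obstacles and nxt not in visited):
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # no route to the goal exists

# The agent plans the whole route before taking a single step.
route = plan_route((0, 0), (2, 2), obstacles={(1, 0), (1, 1)})
```

The agent commits to an action only after the search confirms the sequence actually reaches the goal, which is the proactive behaviour described above.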
Comparison:
• Simple Reflex Agents: They do not have any goals. They simply react to the situation at
hand.
• Model-Based Reflex Agents: These maintain an internal model and are concerned with the
current state.
• Goal-Based Agents: These agents can relate short-term goals to long-term ones, making
them much more flexible in difficult situations. Goals are important because they guide the
agent toward achievement.
Real Life Example:
• A group of friends planning a road trip is an example of the steps taken by a goal-based
agent. They have learned from past experience that a car is more comfortable and suitable
for longer distances. They search for the shortest route, and this search is carried out with
the destination (the goal) in mind.
4. Utility-Based Agents
Definition and Working:
• These agents are similar to goal-based agents but add an extra component, a utility
measurement, which distinguishes them by providing a measure of how desirable a given
state is.
• A utility-based agent acts based not only on goals but also on the best way to achieve them.
• Utility-based agents are useful when there are multiple possible alternatives and the agent
has to choose the best action among them.
• The utility function maps each state to a real number, which measures how well each action
achieves the goals.
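Continuing the road-trip example, a utility function can be sketched as a mapping from each outcome state to a real number. The route names, time/cost figures, and the weighting in the function are illustrative assumptions:

```python
def utility(outcome):
    """Map an outcome state to a real number: lower time and cost are better."""
    return -(outcome["time"] + 2 * outcome["cost"])

# Three alternative routes that all achieve the goal of arriving.
outcomes = {
    "highway":  {"time": 30, "cost": 5},
    "backroad": {"time": 45, "cost": 0},
    "toll":     {"time": 20, "cost": 12},
}

# A goal-based agent would accept any of the three; a utility-based agent
# picks the alternative whose outcome scores highest.
best_action = max(outcomes, key=lambda a: utility(outcomes[a]))
```

This is the extra component the notes describe: when several actions all satisfy the goal, the real-valued utility breaks the tie.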
5. Learning Agents
• A learning agent in AI is a type of agent that can learn from its past experiences; it
has learning capabilities.
• It starts acting with basic knowledge and then adapts automatically through
learning.
Components:
A learning agent has mainly four conceptual components, which are:
• Learning Element: Behavior modification based on feedback.
• Performance Element: Actions depending on present knowledge.
• Critic: Provides performance feedback.
• Problem Generator: Suggests new learning environments.
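The four components can be sketched together on a toy two-action task. The class layout, reward values, and running-average learning rule are illustrative assumptions chosen to keep the example short:

```python
import random

class LearningAgent:
    def __init__(self, actions):
        self.estimates = {a: 0.0 for a in actions}  # present knowledge
        self.counts = {a: 0 for a in actions}

    def performance_element(self):
        """Act on present knowledge: pick the action with the best estimate."""
        return max(self.estimates, key=self.estimates.get)

    def critic(self, action, reward):
        """Provide performance feedback for the learning element."""
        return action, reward

    def learning_element(self, feedback):
        """Modify behaviour from feedback: running average of observed rewards."""
        action, reward = feedback
        self.counts[action] += 1
        n = self.counts[action]
        self.estimates[action] += (reward - self.estimates[action]) / n

    def problem_generator(self):
        """Suggest exploratory actions the agent would not otherwise try."""
        return random.choice(list(self.estimates))

random.seed(0)  # seeded so the sketch is reproducible
agent = LearningAgent(["a", "b"])
for _ in range(50):
    action = agent.problem_generator()      # explore
    reward = 1.0 if action == "b" else 0.2  # hidden environment payoff
    agent.learning_element(agent.critic(action, reward))
```

After the loop, the performance element's choice reflects what the critic and learning element discovered, rather than the blank knowledge the agent started with.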
Examples and Benefits of Learning Agents:
Example:
• Recommendation Systems: Netflix and Amazon use learning agents to recommend items to
their customers based on each user's activity.
Benefits:
• Flexibility: The model adapts to changing situations.
• Better Decision Making: Actions are optimized over time.
• Autonomy: Self-running models that learn continuously.
Learning agents are essential in complex scenarios because they can learn and adapt.