
MASTER OF COMPUTER APPLICATION
SEMESTER 2

DCA6301: ARTIFICIAL INTELLIGENCE

Unit 2
Introduction to Intelligent Agents and Problem-Solving

TABLE OF CONTENTS

1 Introduction
  1.1 Learning Objectives
2 Types of Intelligent Agents
  2.1 Simple Reflex Agents
  2.2 Model-Based Reflex Agents
  2.3 Goal-Based Agents
  2.4 Utility-Based Agents
  2.5 Learning Agents
3 Types of Learning Agents
4 Agent Architecture
  4.1 Perceptual System
  4.2 Types of Sensors
  4.3 Internal Model
  4.4 Decision-Making System
  4.5 Action Execution System
  4.6 Learning Element
5 Components of a Problem-Solving Agent
  5.1 Initial State
  5.2 Actions
  5.3 Transition Model
  5.4 Goal Test
  5.5 Path Cost
6 Search Algorithms
  6.1 Types of Search Algorithms
  6.2 Uninformed Search Strategies
    6.2.1 Depth-First Search (DFS)
    6.2.2 Breadth-First Search (BFS)
    6.2.3 Depth-Limited Search (DLS)
    6.2.4 Iterative Deepening Search (IDS)
  6.3 Informed Search Strategies (Heuristic Search)
    6.3.1 Greedy Search
    6.3.2 Minimax Algorithm
    6.3.3 A* Search: Balancing Efficiency and Optimality
    6.3.4 AO* Algorithm
    6.3.5 Case Study: AO*
    6.3.6 Alpha-Beta Pruning
7 Space vs. Time Complexity
8 Summary
9 Glossary
10 Self-Assessment Questions
11 Terminal Questions
12 Answers
  12.1 Self-Assessment Questions
  12.2 Terminal Questions
13 References

1. INTRODUCTION
The world of artificial intelligence (AI) revolves around creating intelligent agents. These are systems
that can reason, learn, and act autonomously in their environment. Imagine a robot vacuum cleaner
navigating your house, a spam filter sorting your emails, or a chess-playing program making strategic
moves. These are all examples of intelligent agents in action.

This journey into intelligent agents starts by understanding their different types. We'll explore how
some agents react directly to stimuli (simple reflex agents), while others have a mental map of their
surroundings (model-based reflex agents). We'll also delve into agents with specific goals (goal-based
agents) and those that make decisions based on value (utility-based agents). And of course, we can't
forget about learning agents, which can improve their performance over time.

But how do these agents function? We'll peek inside their architecture, examining the components
that enable them to perceive, reason, and act. This includes the perceptual system that gathers
information, the internal model that represents the environment, and the decision-making system
that chooses the best course of action. We'll also explore how actions are carried out and how some
agents can even learn and adapt.

Next, we'll shift gears to problem-solving agents, a specific type designed to tackle challenges. We'll
break down the essential elements of these agents, including the initial state, available actions, the
consequences of those actions, and how to determine if a goal has been achieved.

Finally, we'll enter the fascinating world of search algorithms. These are the methods problem-solving
agents use to find a sequence of actions leading to a goal. We'll explore uninformed search strategies
that systematically explore all possibilities and informed search strategies that use knowledge
(heuristics) to guide the search more efficiently. We'll also learn how to choose the most suitable
search algorithm for a given problem, considering factors like space and time complexity as well as
the specific characteristics of the problem itself.

So, buckle up and get ready to dive deep into the world of intelligent agents and problem-solving!


1.1. Learning Outcomes


By the end of this session, you will be able to:
• Identify and differentiate various types of
intelligent agents
• Explain the components of an intelligent agent
architecture
• Analyze the core concepts of problem-solving
agents
• Classify and evaluate different search algorithms


2. TYPES OF INTELLIGENT AGENTS


Intelligent agents come in various flavors, each with its own strengths and limitations. The sections
below give an insight into the most common types you'll encounter in AI:

2.1. Simple Reflex Agents


Imagine a robot vacuum cleaner zipping around your room. How does it decide what to do? Simple
reflex agents are the basic building blocks of AI that make these kinds of quick, reactive decisions.
Let's dive into their world and see how they work!

What are Simple Reflex Agents?


Think of a simple reflex agent as a machine that follows a set of "if-then" rules. It perceives its
environment through sensors (like the vacuum's bump sensors) and reacts based on those inputs.
There's no fancy learning or complex calculations involved, just a direct response to the current
situation. The basic functionalities are:

Figure 2.1: Example of a simple reflex agent

• Sensors: These are the agent's eyes and ears, gathering information about the environment (e.g.,
dirt detected by the vacuum).
• Condition-Action Rules: These are the pre-programmed "if-then" instructions (e.g., if dirt
detected, then move towards it).
• Actuators: These are the agent's hands and feet, allowing it to take action in the world (e.g.,
vacuum cleaner wheels move forward).


Examples in Action:
• Thermostat: Senses the room temperature (sensor) and, if the room is below the set temperature
(condition-action rule), turns on the heater (action).
• Traffic Light: Detects the presence of cars (sensor) and changes light colors (action) based on
pre-defined rules (condition-action rule).
• Automatic Door: Senses someone approaching (sensor) and triggers the door to open (action)
based on proximity (condition-action rule).
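To make the condition-action idea concrete, here is a minimal Python sketch of a thermostat-style reflex agent. The 20-degree set point and the percept format are illustrative assumptions, not taken from any particular system.

# A minimal sketch of a simple reflex agent (thermostat-style).
# The set point and the percept dictionary format are illustrative assumptions.

def simple_reflex_agent(percept):
    """Map the current percept directly to an action via a condition-action rule."""
    temperature = percept["temperature"]
    if temperature < 20:          # condition
        return "turn_heater_on"   # action
    return "turn_heater_off"

# Example run: the agent reacts only to the current percept, with no memory.
print(simple_reflex_agent({"temperature": 17}))  # -> turn_heater_on
print(simple_reflex_agent({"temperature": 23}))  # -> turn_heater_off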

Strengths and Limitations:


Simple reflex agents are great for well-defined environments with clear sensor inputs and predictable
outcomes. They are fast, efficient, and easy to design. However, their limitations are clear:
• Limited Intelligence: They can't handle complex situations or adapt to changes. Imagine the
vacuum encountering an obstacle it hasn't been programmed for.
• No Memory: They only react to the current situation and don't learn from past experiences. The
vacuum wouldn't know it already cleaned a spot unless it has some memory of its path.
• Fully Observable Environments: They work best in environments where everything can be
sensed. If the vacuum can't see a hidden dirt pile, it won't clean it.

Simple reflex agents are the starting point for many intelligent systems. While they might not be the
most sophisticated AI, they provide a foundation for understanding how agents perceive, react, and
interact with the world. As we move towards more complex environments, we'll need to explore
agents with learning capabilities and internal models, but for now, these basic robots are doing a
pretty good job keeping our rooms clean and our traffic lights flowing!

2.2. Model-Based Reflex Agents


Let's take the example of playing a game of chess. You analyze the board (perceive the environment),
consider your opponent's past moves (internal model), and choose the best move based on that
information (action). This kind of strategic thinking goes beyond simple reflex agents and introduces
the concept of model-based reflex agents.

Building on Simple Reflex Agents:


Simple reflex agents, like a thermostat reacting to temperature, excel in controlled environments with
clear sensor inputs. However, the real world is more complex. Model-based reflex agents address
some of these limitations:


• Internal Model: They maintain a mental map or model of the environment, keeping track of
past observations and the world's state. This allows them to consider not just the current input,
but also the potential consequences of actions.
• Partially Observable Environments: They can function even if they can't sense everything. By
using the internal model, they can make educated guesses about the unseen parts of the
environment.

How it Works:
Here's a breakdown of the model-based reflex agent's decision process:
1. Perceive: The agent gathers information about the environment through sensors (e.g., seeing
the chessboard).
2. Update Internal Model: Based on the new information, the agent updates its internal
representation of the environment (e.g., understanding the opponent's strategy).
3. Simulate Actions: Using the internal model, the agent simulates different actions and their
potential outcomes (e.g., predicting the opponent's response to different chess moves).
4. Select Best Action: The agent chooses the action that leads to the most desirable outcome based
on its goals (e.g., selecting the chess move that gives you the best advantage).
5. Take Action: The agent executes the chosen action in the real world (e.g., making the chess
move).
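The five steps above can be sketched as a single decision loop. In this illustrative Python sketch, the model object and its update(), simulate(), and evaluate() methods are hypothetical placeholders for whatever internal representation the agent maintains.

# A schematic sketch of the model-based reflex agent loop described above.
# The model object and its methods are hypothetical placeholders.

def model_based_agent_step(percept, model, possible_actions):
    model.update(percept)                      # steps 1-2: perceive and update the internal model
    best_action, best_value = None, float("-inf")
    for action in possible_actions:            # step 3: simulate each action on the model
        predicted_state = model.simulate(action)
        value = model.evaluate(predicted_state)
        if value > best_value:                 # step 4: keep the most desirable outcome
            best_action, best_value = action, value
    return best_action                         # step 5: the action to execute in the world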

Examples in Action:
Model-based reflex agents are used in various scenarios:
• Self-Driving Cars: These cars perceive their surroundings (sensors), maintain a map of the
environment (internal model), predict traffic flow (simulate actions), and choose the safest path
(select best action).
• Robot Arm in a Factory: The robot arm senses object positions (sensors), keeps track of its
own movements (internal model), simulates different pick-and-place actions, and chooses the
most efficient movement (select best action) to avoid collisions.

Advantages and Limitations:


Model-based reflex agents offer significant advantages:
• Handle Complexity: They can deal with partially observable environments and plan actions
based on predictions.
• More Flexible: They can adapt to changes in the environment by updating their internal model.


However, they also have limitations:


• Model Accuracy: The agent's performance depends on the accuracy of its internal model. A
flawed model can lead to poor decisions.
• Computational Cost: Simulating actions can be computationally expensive, especially for
complex environments.

Model-based reflex agents are a step up from simple reflex agents, providing a more sophisticated
way to interact with the world. By considering past experiences and potential consequences, they can
make informed decisions in dynamic environments. As AI continues to evolve, these agents will play
a crucial role in tasks requiring strategic thinking and adaptation in complex situations.

2.3. Goal-Based Agents


Visualize a robot tasked with cleaning your entire house. Unlike a simple reflex agent that just reacts
to dirt, a goal-based reflex agent has a clear objective: achieving a specific goal (a clean house). Let's
explore how these goal-oriented agents operate!

Beyond Reactions: The Goal-Based Approach


Simple reflex agents excel in predefined situations, but what if the task requires a bigger picture?
Goal-based reflex agents address this by introducing the concept of a goal, a desired state the agent
strives to achieve. Here's what sets them apart:
• Explicit Goals: These agents are programmed with specific objectives, like cleaning the entire
house. This goal guides their decision-making process.
• Planning and Search: They don't just react; they plan a sequence of actions to achieve the goal.
This often involves searching for the most efficient path to success.

Components of a Goal-Based Reflex Agent:


1. Perceptors: These are the agent's sensors that gather information about the environment (e.g.,
sensors in the cleaning robot detecting dirty areas).
2. Effectors: These are the agent's actuators that allow it to interact with the environment (e.g.,
the robot's wheels and cleaning brushes).
3. Internal Model: Similar to model-based reflex agents, some goal-based agents maintain an
internal representation of the environment to track progress towards the goal.
4. Goal Description: This outlines the desired state the agent aims to achieve (e.g., "all rooms
clean").


5. Plan Generator: This component analyzes the current state, the goal, and the internal model (if
present) to generate a sequence of actions (e.g., a cleaning plan for the robot).
6. Action Selection: This component chooses the most suitable action from the generated plan to
take the next step towards the goal.
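Putting these components together, the control loop of a goal-based agent can be sketched as below. The perceive, goal_test, plan, and execute callables are assumptions standing in for the perceptors, goal description, plan generator, and effectors; the plan() routine could be any of the search algorithms discussed in Section 6.

# A compact sketch of the goal-based control loop: perceive, plan towards the goal,
# then execute the plan one action at a time. All four callables are assumed inputs.

def goal_based_agent(perceive, goal_test, plan, execute):
    state = perceive()
    if goal_test(state):
        return "goal already satisfied"
    for action in plan(state, goal_test):  # plan generator produces a sequence of actions
        execute(action)                    # action selection and execution
        state = perceive()
        if goal_test(state):
            return "goal achieved"
    return "no plan found"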

Examples in Action:
Goal-based reflex agents find applications in various domains:
• Game Playing AI: An AI playing chess has the goal of checkmating the opponent. It perceives
the board (perceptors), considers possible moves (plan generation), and selects the best move
based on its goal (action selection).
• Search and Rescue Robots: These robots have the goal of finding survivors. They use sensors
to navigate the environment (perceptors), keep track of explored areas (internal model), and
plan search paths to maximize the chance of finding survivors (plan generation and action
selection).

Strengths and Limitations:


Goal-based reflex agents offer distinct advantages:
• Goal-Directed Behavior: They are focused on achieving a specific objective, leading to more
efficient and purposeful actions.
• Adaptability: They can adjust their plans based on changes in the environment or new
information from the sensors.

However, they also have limitations:


• Planning Complexity: Generating optimal plans can be computationally expensive, especially
for complex tasks or environments.
• Goal Specificity: They are designed for specific goals and might struggle to adapt to completely
new objectives.

Goal-based reflex agents represent a significant advancement in agent design. By incorporating goals
and planning, they can tackle more complex tasks and navigate dynamic environments. As AI
technology progresses, these agents will play a vital role in achieving specific objectives in various
applications, from game playing to real-world robotics.


2.4. Utility-Based Agents


Envisage a robot assistant helping you plan your day. It considers various errands, appointments, and
weather conditions to create a schedule that optimizes your time and minimizes stress. This is the
realm of utility-based agents, where decisions are based on maximizing a measure of "goodness" or
utility.

Beyond Goals: The Utility-Based Approach


Goal-based agents excel at achieving specific objectives. But what if there are multiple options, each
with different benefits and drawbacks? Utility-based agents introduce the concept of utility, a
numerical value assigned to each possible outcome. The agent then strives to choose the action that
leads to the outcome with the highest utility.

Components of a Utility-Based Agent:


1. Perceptors: Sensors gather information about the environment (e.g., traffic sensors for a
driving agent).
2. Effectors: Actuators allow the agent to interact with the environment (e.g., steering wheel and
brakes of the car).
3. Internal Model (Optional): Some agents maintain an internal representation of the
environment to predict outcomes of actions.
4. Utility Function: This function assigns a numerical value (utility) to each possible outcome (e.g.,
reaching your destination quickly and safely has a higher utility than getting stuck in traffic).

Decision-Making Process:
1. Perceive: The agent gathers information about the environment.
2. Generate Options: Based on the perceived state, it identifies possible actions.
3. Predict Outcomes: Using the internal model (if present) or simulations, the agent predicts the
potential outcomes of each action.
4. Evaluate Utility: The utility function assigns a numerical value to each predicted outcome.
5. Select Best Action: The agent chooses the action that leads to the outcome with the highest
utility.
6. Take Action: The chosen action is executed in the real world.
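The decision-making process above reduces to "pick the action whose predicted outcome has the highest utility." The sketch below illustrates this for a hypothetical route-planning agent; the predict_outcome() helper and the weights in the utility function are made-up assumptions, not part of any real system.

# A sketch of utility-based action selection.

def utility(outcome):
    # Weighted trade-off between travel time and risk; the weights are illustrative only.
    return -1.0 * outcome["travel_time"] - 5.0 * outcome["risk"]

def select_best_action(state, actions, predict_outcome):
    # Steps 2-5: generate options, predict outcomes, evaluate utility, pick the maximum.
    return max(actions, key=lambda a: utility(predict_outcome(state, a)))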

Examples in Action:
Utility-based agents are used in various applications:


• Personal Assistant Robots: These robots consider various factors like weather, traffic, and user
preferences (perceive) to suggest personalized schedules or routes (predict outcomes, evaluate
utility, select best action).
• Self-Driving Cars: They navigate by considering factors like safety, efficiency, and traffic laws
(perceive), predict possible routes based on traffic data (predict outcomes), and choose the route
with the highest utility (evaluate utility, select best action).

Strengths and Limitations:


Utility-based agents offer distinct advantages:
• Flexibility: They can handle multiple objectives and trade-offs by assigning different weights to
various factors in the utility function.
• Decision-Making in Complex Environments: By evaluating the utility of different outcomes,
they can make informed decisions in dynamic situations.

However, they also have limitations:


• Designing the Utility Function: Defining the utility function can be complex and requires
careful consideration of all relevant factors and their relative importance.
• Computational Cost: Predicting and evaluating the utility of all possible outcomes can be
computationally expensive for complex environments.

Utility-based agents introduce a sophisticated approach to making decisions in the real world. By
considering the "goodness" of potential outcomes, they can navigate complex environments and
achieve a balance between various objectives. As AI continues to evolve, these agents hold promise
for applications requiring flexible decision-making and optimal action selection in dynamic scenarios.

2.5. Learning Agents


The most sophisticated type of intelligent agent is the learning agent. These agents can improve their
performance over time by learning from their experiences. There are various learning algorithms,
allowing them to adapt to new situations and refine their strategies.

Imagine a robot playing a game against you. At first, it might make random moves. But with each game, it
starts to learn your strategies and adapt its own. This is the world of learning agents, where the
ability to improve performance over time sets them apart.


Beyond Fixed Rules: The Learning Approach


Simple reflex agents and their more advanced counterparts rely on pre-programmed rules. However,
the real world is dynamic, demanding agents that can continuously learn and adapt. Learning agents
address this challenge by:
• Learning Mechanism: They have an internal mechanism that allows them to learn from
experience. This could involve storing past observations, identifying patterns, or adjusting
internal models based on new information.
• Performance Improvement: Their goal is not just to react or achieve a specific goal, but to
improve their performance over time. This could involve achieving better scores in a game,
making more efficient decisions, or adapting to changes in the environment.


3. TYPES OF LEARNING AGENTS


There are various approaches to learning, each with its strengths and applications:
• Supervised Learning: The agent learns from labelled data where the desired outcome is
provided. For example, a robot learning to identify objects in images might be shown pictures
labelled "cat,
• Unsupervised Learning: The agent learns from unlabelled data, identifying patterns and
structures on its own. For example, a robot arm learning to grasp objects might analyze videos
of successful grasps to understand how to manipulate objects effectively.
• Reinforcement Learning: The agent learns through trial and error, receiving rewards for
desired actions and penalties for undesirable ones. A robot playing a game might receive
rewards for winning and penalties for losing, leading it to learn strategies through exploration
and experimentation.

We will explore these in detail in the upcoming sessions.
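As a small taste of the reinforcement learning idea, the sketch below nudges the agent's estimate of an action's value towards the reward it actually received. The learning rate of 0.1 and the table layout are illustrative assumptions.

# A one-step illustration of learning from reward.

q_values = {("state_a", "move_left"): 0.0}   # estimated value of taking an action in a state

def reinforcement_update(state, action, reward, alpha=0.1):
    key = (state, action)
    q_values[key] = q_values[key] + alpha * (reward - q_values[key])

reinforcement_update("state_a", "move_left", reward=1.0)   # a rewarded trial raises the estimate
print(q_values[("state_a", "move_left")])                  # -> 0.1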

Examples in Action:
Learning agents are making waves in various fields:
• Recommendation Systems: These systems analyze past user behavior to recommend movies,
music, or products (supervised learning).
• Spam Filtering: Email filters use supervised learning to identify and block spam emails based
on past examples.
• Game Playing AI: AI players in games like Chess or Go learn through reinforcement learning,
continuously improving their strategies based on past encounters.

Strengths and Limitations:


Learning agents offer significant advantages:
• Adaptability: They can adjust to changing environments and improve their performance over
time.
• Handling Complex Situations: They can learn from large amounts of data and make decisions
in complex scenarios.


However, they also have limitations:


• Learning Time: It can take time for learning agents to improve, especially with complex tasks.
• Data Dependence: Their performance heavily depends on the quality and quantity of data they
learn from.
• Ethical Considerations: Designing fair and unbiased learning algorithms is crucial to avoid
discriminatory outcomes.

Learning agents represent the cutting edge of artificial intelligence. Their ability to learn from
experience opens up possibilities for intelligent systems that can continuously improve and adapt to
the real world. As research in learning algorithms progresses, these agents will play a vital role in
solving complex problems and automating tasks across various domains.


4. AGENT ARCHITECTURE

Intelligent agents are not magical boxes. They achieve their feats through a well-defined architecture
consisting of several key components working together. Let's explore these components and
understand their roles:

4.1. Perceptual System


This is the agent's sensory apparatus; it gathers information about the environment.

The perceptual system is the sensory gateway for an intelligent agent. It's responsible for gathering
information about the surrounding environment, allowing the agent to perceive the world around it
through various sensors like cameras, microphones, LiDAR (Light Detection and Ranging), or
temperature sensors. For example, a self-driving car uses a multitude of sensors, including cameras
to perceive traffic lights and pedestrians, LiDAR to create a 3D map of the surroundings, and radar to
detect distant objects.

The specific type of sensor depends on the agent's purpose and the environment it operates in.

This information serves as the foundation for all the agent's subsequent actions and decision-making
processes.

Examples of Perceptual Systems in Action:


• Self-Driving Cars: Use cameras, LiDAR, and radar to perceive traffic lights, pedestrians, other
vehicles, and the overall road layout.
• Robot Vacuum Cleaner: Uses various sensors (bumpers, cliff sensors, dirt detectors) to
perceive obstacles, cleanable areas, and avoid falling down stairs.
• Virtual Assistant: Employs microphones to capture voice commands and interprets them using
speech recognition algorithms.

Diagram of Perceptual System:


Here's a simplified diagram illustrating the components of a perceptual system:


Figure 2.6: Perceptual System Diagram

1. Sensors: Capture raw data from the environment (visual, auditory, tactile, etc.).
2. Signal Processing: Prepares the raw data for further analysis.
3. Feature Extraction: Identifies key characteristics of the data.
4. Pattern Recognition: Matches data to existing patterns or models.
5. Perceptual Output: Processed information sent to the agent's decision-making
system.

By effectively utilizing the perceptual system, intelligent agents can gain a vital understanding of their
surroundings, enabling them to react, plan, and make informed decisions in the real world.

4.2. Types of Sensors


The specific sensors employed by the perceptual system depend on the agent's purpose and the
environment it operates in. Some common types are:
• Visual Sensors (Cameras): Capture visual data like images and videos. Used in self-driving
cars, robots, and face recognition systems.


Figure 2.2: A car with a camera mounted on the roof

• Auditory Sensors (Microphones): Capture sound and audio information. Used in speech
recognition systems, virtual assistants, and robots that interact with humans.

Figure 2.3: Smart speaker with a microphone array on top

• Tactile Sensors: Detect physical contact and pressure. Used in robotic grippers, prosthetics, and
robots for object manipulation.


Figure 2.4: Robotic gripper with tactile sensors on its fingertips

• Range Sensors (LiDAR, Radar): Measure distance and create 3D maps of the
environment. Used in self-driving cars, autonomous drones, and robots navigating complex
spaces.

Figure 2.5: Self-driving car with a LiDAR sensor mounted on its roof

Information Processing:
The raw data captured by the sensors needs to be processed and interpreted before it becomes
meaningful information for the agent. This processing might involve tasks like:
• Signal Processing: Removing noise and enhancing relevant signals.
• Feature Extraction: Identifying key characteristics of the data (e.g., edges in an image, voice
pitch in audio).


• Pattern Recognition: Matching sensory data to pre-existing patterns or models stored in the
agent's memory.

4.3. Internal Model


The internal model is a mental representation of the environment the agent is situated in. It's built
based on the information received from the perceptual system and the agent's prior knowledge or
experience. This model can be as simple as a map or as complex as a dynamic simulation of the
environment.

The internal model plays a crucial role in decision-making. It allows the agent to consider the potential
consequences of actions before taking them and plan effectively to achieve its goals.

4.4. Decision-Making System


This is the brain of the agent, responsible for analyzing the information from the perceptual system
and the internal model to determine the best course of action. It uses algorithms and reasoning
techniques to choose an action that will move the agent closer to its goals.

The decision-making system can be rule-based, relying on pre-programmed if-then statements. More
sophisticated agents might employ machine learning techniques to make data-driven decisions or
even utilize probabilistic reasoning to handle uncertainty in the environment.

4.5. Action Execution System


Once a decision is made, the action execution system takes over. It translates the chosen action into a
set of commands for the agent's actuators (components that allow it to interact with the
environment). These actuators could be motors for movement, robotic arms for manipulation, or
simply a display interface for providing information.

The action execution system needs to ensure smooth and accurate execution of the chosen action.
Factors like motor control, signal processing, and safety considerations come into play here.

4.6. Learning Element


While not all intelligent agents have this capability, the learning element adds another layer of
sophistication. It allows the agent to improve its performance over time by learning from its
experiences. This can involve various learning algorithms, such as reinforcement learning, supervised
learning, or unsupervised learning.


The learning element can modify the agent's internal model, refine its decision-making strategies, or
even adjust its perception based on new data encountered in the environment.

The following diagram illustrates the Agent Architecture

Figure 2.7: Agent Architecture Diagram


5. COMPONENTS OF A PROBLEM-SOLVING AGENT


Problem-solving agents are a specific type of intelligent agent designed to tackle well-defined
challenges. They achieve this by following a structured approach that involves defining the problem,
formulating a solution strategy, and executing the chosen course of action. Let's delve deeper into the
essential components that make problem-solving agents tick.

5.1. Initial State


This represents the starting point for the agent in the problem space. It defines the current situation
or configuration from which the agent needs to find a solution. The initial state can be described in
various ways depending on the problem.

For example:
• In a maze navigation problem, the initial state could be the agent's location (coordinates) within
the maze.
• In a game of chess, the initial state would be the starting position of all the pieces on the board.
• In a route-finding problem, the initial state could be the agent's current location and the
destination it needs to reach.

5.2. Actions
These are the set of possible operations or maneuvers that the agent can perform to alter its state in
the problem space. The available actions depend on the specific problem and the capabilities of the
agent.

Some examples are:


• In a maze navigation problem, actions could be moving up, down, left, or right within the maze.
• In a game of chess, actions could be moving any piece to a valid square on the board according
to the rules of the game.
• In a route-finding problem, actions could be moving to a neighboring city or town on the map.

5.3. Transition Model


This component defines the relationship between states and actions. It essentially describes how the
state of the problem space changes when the agent takes a particular action. The transition model can
be represented mathematically as a function that takes the current state and an action as input and
returns the resulting new state.

5.4. Goal Test


The goal test is a crucial component that determines if the agent has reached a successful solution
state. It's a boolean function that evaluates a given state and returns true if it's a goal state, and false
otherwise.

A few examples of goal tests are:


• In a maze navigation problem, the goal test would be true if the agent's location is the designated
exit of the maze.
• In a game of chess, the goal test would be true if the agent's opponent's king is in checkmate.
• In a route-finding problem, the goal test would be true if the agent's location is the destination
city.

5.5. Path Cost


The path cost is a function that assigns a numerical value to a sequence of actions. It essentially
measures the effort or cost associated with getting from the initial state to a particular state. This cost
could be measured in various ways, such as time, distance traveled, resources consumed, or a
combination of factors.

For instance:
• In a maze navigation problem, the path cost could be the number of moves it takes to reach a
specific location.
• In a game of chess, the path cost might be an evaluation of the material lost (pieces captured)
during the game.
• In a route-finding problem, the path cost could be the total distance traveled or the travel time
based on the mode of transportation.

Visualizing the Components with a Maze Example:


Imagine a maze problem-solving agent. The initial state is the agent's starting position within the
maze. The available actions are moving up, down, left, or right. The transition model dictates how
these actions change the agent's location in the maze. The goal test is true if the agent reaches the exit
of the maze. Finally, the path cost could be the number of moves it takes to reach the exit.
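The five components for this maze example can be written down directly, as in the sketch below. The grid coordinates, the wall set, and the exit location are illustrative assumptions chosen only to show the shape of a problem formulation.

# A sketch of the five problem components for a small grid maze.

MAZE_EXIT = (3, 3)
WALLS = {(1, 1), (2, 1)}

initial_state = (0, 0)                                    # initial state: the start cell
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def transition(state, action):
    """Transition model: the cell reached by applying an action (walls block movement)."""
    x, y = state
    dx, dy = ACTIONS[action]
    new_state = (x + dx, y + dy)
    return state if new_state in WALLS else new_state

def goal_test(state):
    return state == MAZE_EXIT                             # goal test

def path_cost(path):
    return len(path)                                      # path cost: number of moves taken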


By effectively utilizing these core components, problem-solving agents can systematically navigate
through the problem space, evaluate potential solutions, and ultimately find a sequence of actions
leading to a successful goal state.


6. SEARCH ALGORITHMS
Search algorithms are the workhorses of problem-solving agents. They systematically explore the
problem space, a collection of all possible states an agent can be in, to find a sequence of actions
leading to a goal state. If you're lost in a maze, how do you find the exit? This is where search
algorithms come in! They are a fundamental concept in computer science, used to systematically
explore a set of possibilities and find the best solution to a problem.

Think of it like this:


• You (the agent) are trying to navigate a complex problem space (the maze).
• The search algorithm acts as your guide, exploring different paths (possible solutions).
• Each path has a cost associated with it (time, distance, etc.).
• The goal is to find the path with the lowest cost (the exit!).

6.1. Types of Search Algorithms


There are two main categories of search algorithms:
1. Uninformed Search: These algorithms explore the search space blindly, without any
knowledge about which paths might be better. They rely on basic techniques like systematically
checking every option or prioritizing unexplored paths.

o Example: Breadth-First Search explores all options at the same level (like checking every
hallway on one floor before moving to the next).

2. Informed Search: These algorithms leverage additional information, often a heuristic function,
to guide their search towards the goal more efficiently. The heuristic estimates how "close" a
particular state is to the goal, allowing the algorithm to prioritize promising paths.

o Example: A* Search uses a heuristic like the Manhattan distance to estimate the remaining
distance to the exit in the maze.

Applications of Search Algorithms:


Search algorithms have a wide range of applications, including:
• Navigation: GPS systems use search algorithms to find the shortest route between two
locations.


• Game Playing: AI players in games like chess or checkers use search algorithms to evaluate
possible moves and choose the best one.
• Problem Solving: Search algorithms are used in various problem-solving domains, from
scheduling tasks to optimizing resource allocation.

Search algorithms are powerful tools for navigating complex problem spaces. By exploring different
possibilities and evaluating their costs, they help us find the best solutions to a wide range of
challenges. As computer science continues to evolve, search algorithms will continue to play a vital
role in developing intelligent systems that can tackle increasingly complex tasks.

6.2. Uninformed Search Strategies


Uninformed search strategies, also known as blind search, navigate the problem space without any
knowledge or guidance about the goal's location. They rely on brute force exploration, often leading
to inefficiencies for complex problems. Here, we'll delve into two fundamental uninformed search
strategies: Depth-First Search (DFS) and Breadth-First Search (BFS).

6.2.1. Depth-First Search (DFS)


Imagine exploring a maze. DFS is like navigating a maze by always going as far down one path as
possible before backtracking and trying another path. Here's a breakdown of DFS:
• Core Idea: DFS explores one path at a time until it reaches a goal state or hits a dead end. If it
reaches a dead end, it backtracks and explores another path that originates from the most recent
branching point (visited state) in the exploration.

• Data Structure: DFS typically uses a Last-In-First-Out (LIFO) data structure like a stack to keep
track of the paths being explored. New states are pushed onto the stack, and backtracking
involves popping states off the stack to revisit previous branching points.

Example: Maze Navigation with DFS


Consider the maze in the image below. We want to find a path from the green "Start" state to the red
"Goal" state.
• DFS starts at the "Start" state and pushes it onto the stack.
• It then explores the path going down (D), pushing that state onto the stack.
• It continues exploring down (D) and pushes that state as well.


• However, this path reaches a dead end. Since there are no further options down this branch, DFS
backtracks.
• It pops the most recent state (D) off the stack and explores the right path (R) from the previous
state.
• This process continues, with DFS pushing states onto the stack as it explores down paths and
popping them off when it needs to backtrack.
• Eventually, DFS explores the path sequence "Start" -> "R" -> "U" -> "R" -> "Goal," successfully
reaching the goal state.
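The walkthrough above translates almost directly into code. The sketch below is a minimal iterative DFS over an explicit graph; the neighbours() function and the graph representation are assumed inputs supplied by the problem formulation.

# A minimal iterative depth-first search using a stack (LIFO).

def depth_first_search(start, goal_test, neighbours):
    stack = [(start, [start])]          # each entry: (state, path taken to reach it)
    visited = set()
    while stack:
        state, path = stack.pop()       # LIFO: always extend the most recently found path
        if goal_test(state):
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in neighbours(state):
            if nxt not in visited:
                stack.append((nxt, path + [nxt]))
    return None                         # no path found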

Advantages of DFS:
• Simple to implement: The LIFO stack structure makes DFS easy to understand and code.
• Space efficient: DFS only needs to store the current path being explored, leading to lower space
complexity compared to BFS (explained later).

Disadvantages of DFS:
• Can get stuck in deep dead ends: DFS might prioritize long, winding paths and get stuck
exploring a deep dead end before finding a shorter solution.
• Not optimal for all problems: DFS doesn't guarantee finding the shortest path to the
goal, especially in problems with branching paths.

6.2.2. Breadth-First Search (BFS)


BFS takes a different approach to navigating the problem space. Here's what sets it apart:
• Core Idea: BFS explores all the states at a given depth before moving on to the next depth level. It
systematically expands outward from the initial state, ensuring all possibilities at a specific level
are considered before diving deeper.
• Data Structure: BFS typically uses a First-In-First-Out (FIFO) data structure like a queue to
manage the exploration process. States are added to the back of the queue, and exploration
happens by removing states from the front.

Example: Maze Navigation with BFS


Let's use the same maze to illustrate BFS.
• BFS starts at the "Start" state and adds it to the back of the queue.
• Then, it explores all the neighboring states of "Start" (Up, Right, Down) and adds them to the
back of the queue in the order they were discovered.


• Next, BFS removes the first state from the queue ("Start," which has now been fully expanded) and
moves on to the next state in the queue, exploring its unvisited neighbors in the same way. This
process continues, systematically adding all neighboring states of the current state to the back of
the queue.
• As BFS expands outward, it eventually encounters the "Goal" state, which is a neighbor of a
previously explored state. Since the goal is found, the search terminates.
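In code, BFS is the same skeleton as the DFS sketch with the stack replaced by a FIFO queue. The neighbours() function is assumed as before.

# Breadth-first search using a FIFO queue (collections.deque).

from collections import deque

def breadth_first_search(start, goal_test, neighbours):
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        state, path = queue.popleft()   # FIFO: expand states in the order they were discovered
        if goal_test(state):
            return path                 # the first goal reached uses the fewest steps
        for nxt in neighbours(state):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None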

Advantages of BFS:
• Guaranteed to find a shortest path: If a solution exists, BFS is guaranteed to find a path with
the minimum number of steps (assuming every action has the same cost).

Uninformed Search Strategies: Beyond Depth-First and Breadth-First Search

While Depth-First Search (DFS) and Breadth-First Search (BFS) provide fundamental uninformed
search approaches, they have limitations. DFS can get stuck in deep dead ends, and BFS can be
inefficient for problems with vast shallow spaces. Here, we explore two advanced uninformed search
strategies that address these shortcomings: Depth-Limited Search and Iterative Deepening Search.

6.2.3. Depth-Limited Search (DLS)


DLS aims to control the depth of exploration that DFS ventures into. It sets a pre-defined limit on the
maximum depth a path can reach before forcing backtracking. This prevents DFS from getting stuck
in infinitely deep but irrelevant parts of the search space.

Core Idea:
• DLS functions similarly to DFS, using a stack to keep track of the explored path.
• However, DLS introduces a depth limit (L) that restricts how far down a path the search can go.
• When a state at depth L is encountered, DLS backtracks even if there are unexplored branches
at that state.
• The search continues exploring alternative paths, adhering to the depth limit, until a goal state
is found or all possibilities are exhausted.

Example: Maze Navigation with DLS


Consider the maze below. We'll set a depth limit (L) of 2 for DLS.
• DLS starts at "Start" and pushes it onto the stack.
• It explores the path going down (D), pushing that state onto the stack (depth 1).


• Since the depth limit is 2, DLS can explore one more level. It goes down again (D), pushing that
state as well (depth 2).
• However, there are no further options at this point. Since the depth limit is reached, DLS
backtracks, even though there's an unexplored path to the right.
• It pops the state at depth 2 off the stack and tries the right path (R) from the previous state
(depth 1).
• DLS can now explore the path "Start" -> "R" -> "U" -> "R," successfully reaching the goal state
within the depth limit.
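A recursive sketch of depth-limited search is shown below: it is ordinary DFS except that recursion stops once the limit is reached. The neighbours() function is an assumed input, as in the earlier sketches.

# Depth-limited search: DFS that backtracks when the depth limit is exhausted.

def depth_limited_search(state, goal_test, neighbours, limit, path=None):
    path = path or [state]
    if goal_test(state):
        return path
    if limit == 0:
        return None                      # depth limit reached: force backtracking
    for nxt in neighbours(state):
        if nxt not in path:              # avoid trivial cycles along the current path
            result = depth_limited_search(nxt, goal_test, neighbours, limit - 1, path + [nxt])
            if result is not None:
                return result
    return None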

Advantages of DLS:
• Avoids getting stuck in deep dead ends: The depth limit prevents DLS from endlessly
exploring irrelevant deep paths.
• More focused exploration than DFS: DLS prioritizes exploring paths closer to the surface,
potentially finding solutions faster than unrestricted DFS.

Disadvantages of DLS:
• May require multiple executions: Finding the optimal depth limit can be tricky. A low limit
might miss the goal, while a high limit might behave similarly to DFS. Sometimes, running DLS
with progressively increasing depth limits might be necessary.
• Not guaranteed to find the shortest path: Like DFS, DLS doesn't guarantee finding the most
efficient solution, especially if the goal is located deeper than the chosen depth limit.

6.2.4. Iterative Deepening Search (IDS)


Iterative Deepening Search (IDS) combines the strengths of DFS and BFS to overcome their
limitations. It leverages the efficiency of BFS for exploring shallow levels and the completeness of DFS
for guaranteeing a solution (if it exists).

Core Idea:
• IDS performs multiple DFS searches iteratively, with each iteration increasing the depth limit by
one.
• In the first iteration, it acts like DLS with a depth limit of 1, exploring all states at depth 1.
• If the goal isn't found, the second iteration increases the limit to 2, essentially running a DLS with
a limit of 2.


• This process continues, systematically increasing the depth limit in each iteration until the goal
is found or all possibilities are exhausted at the current depth.

Example: Maze Navigation with IDS


Using the same maze from the previous example, let's see how IDS works.
• Iteration 1 (Depth Limit 1): IDS behaves like DLS with a limit of 1. It explores all states
reachable from "Start" within one step (Up, Right, Down) but doesn't find the goal.
• Iteration 2 (Depth Limit 2): The depth limit is increased to 2. This allows IDS to explore paths
like "Start" -> "R" -> "U" -> "R," reaching the goal state. Since a solution is found, the search
terminates.
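Iterative deepening is then only a few lines: run depth-limited search with limits 0, 1, 2, and so on until a solution appears. The sketch reuses the depth_limited_search() function from the previous sketch; the max_depth cap is an illustrative safeguard.

# Iterative deepening search: repeated depth-limited search with growing limits.

def iterative_deepening_search(start, goal_test, neighbours, max_depth=50):
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal_test, neighbours, limit)
        if result is not None:
            return result                # the shallowest solution is found first
    return None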

Advantages:
• Efficiency for Shallow Solutions: IDS inherits the benefits of BFS for exploring shallow levels
of the search space. By iteratively increasing the depth limit, it prioritizes finding solutions
closer to the initial state, potentially leading to faster results in problems where the goal is likely
to be found at a shallow depth.
• Guaranteed to Find a Solution (if it exists): Similar to DFS, IDS guarantees finding a solution
to the problem if one exists within the defined search space. By systematically increasing the
depth limit in each iteration, it ensures that all possible paths are eventually explored, leading
to the goal state if it's reachable.
• Space Efficient: Unlike BFS, which can require storing a vast number of states in the queue for
deeper problems, IDS maintains a space complexity similar to DFS. It only needs to keep track
of the current path being explored in each iteration, making it memory-efficient for problems
with large branching factors.

Disadvantages:
• Redundant Exploration: In some cases, IDS might perform redundant state explorations.
During each iteration with an increased depth limit, it might revisit states that were already
explored in previous iterations. While this doesn't affect the correctness of the solution, it can
lead to a slight increase in overall search time compared to an optimal uninformed search
strategy.
• Potential for Multiple Iterations: Depending on the depth of the goal state, IDS might require
multiple iterations to find the solution. This can be slower than scenarios where BFS or a well-
informed search strategy can find the solution in a single pass.


• Tuning Depth Limit: While IDS guarantees finding a solution eventually, the initial choice of
depth limit can impact efficiency. If the limit is set too low, it might require many iterations.
Conversely, a very high limit might lead to unnecessary exploration similar to DFS.

IDS offers a valuable alternative to uninformed search algorithms like DFS and BFS. It provides a
balance between efficiency and completeness, making it suitable for various problem-solving
scenarios. While it might involve some redundant exploration and requires tuning the depth limit,
IDS is a powerful tool for navigating complex search spaces where finding a solution within a
reasonable timeframe is crucial.

6.3. Informed Search Strategies (Heuristic Search)


Uninformed search algorithms explored the problem space blindly, considering all possibilities with
equal weight. Now, we delve into informed search strategies, also known as heuristic search. These
methods incorporate knowledge about the problem domain to guide the search towards the goal
more efficiently. They utilize a heuristic function, an estimate of the cost (distance, time, etc.) of
reaching the goal from a particular state.

6.3.1. Greedy Search


Imagine navigating a maze with helpful directional signs pointing towards the exit. Greedy search
operates on a similar principle. It uses a heuristic function (h(n)) to estimate the cost of reaching the
goal from any given state (n) in the problem space. At each step, it chooses the neighbor of the current
state that appears to be closest to the goal based on the heuristic estimate.

Core Idea:
• Greedy search maintains a list of explored states and a set of frontier states (unexplored
neighbors of the current state).
• It iteratively selects the state from the frontier with the most promising heuristic value
(h(n)), indicating its perceived closeness to the goal.
• The chosen state is added to the explored list, and its unexplored neighbors are added to the
frontier.
• This process continues until the goal state is reached or a dead end is encountered (no neighbors
with a promising heuristic value).


Example: Maze Navigation with Greedy Search


Consider the maze below. Let h(n) represent the estimated straight-line distance to the goal from any
state (n).

Figure 2.8: Maze for Greedy Search Example

• Greedy search starts at "Start" and calculates the h(n) for its neighbors (Up, Right, Down).
• Since "Right" has the lowest h(n) (closest estimated distance to the goal), it's chosen as the next
state.
• The process repeats, with Greedy search always moving towards the neighbor with the most
promising h(n) value.
• In this example, Greedy search successfully reaches the goal state by following the path "Start" -
> "Right" -> "Up" -> "Right."

Advantages of Greedy Search:


• Simpler to implement: Greedy search uses a straightforward approach, making it easier to
code compared to some informed search algorithms.
• Can be efficient for certain problems: If the heuristic function is accurate and the goal is
located in a relatively straight-line path, Greedy search can find solutions quickly.

Disadvantages of Greedy Search:


• Not guaranteed to find optimal solutions: Greedy search prioritizes immediate progress
based on the heuristic, which might not always reflect the true cost to the goal. It can get stuck
in suboptimal paths, especially if the heuristic is misleading.
• Can be susceptible to local optima: In some problem spaces, Greedy search might reach a state
that appears close to the goal based on the heuristic but is actually a dead end. Since it only
considers the most promising neighbor, it might not explore alternative paths that could lead to
the actual goal.

Problem with Greedy Search Solution:


Consider a different maze layout where the heuristic function (h(n)) is not perfect. Imagine the maze
below with the same h(n) values from the previous example.

Figure 2.9: Maze for Greedy Search Limitation Example

• Greedy search would follow the same initial path ("Start" -> "Right" -> "Up" -> "Right").
• However, at this point, it would mistakenly choose "Down" as the next state because it has the
lowest h(n) among its neighbors.
• This leads to a dead end, and Greedy search wouldn't be able to find the actual goal state located
to the left.

6.3.2. Minimax Algorithm


The minimax algorithm is a fundamental technique used in artificial intelligence (AI) for decision-
making in two-player zero-sum games like chess, checkers, or Go. It explores all possible game states
(represented as a tree) to find the optimal move for a player. Here's a breakdown of minimax with a
tree structure example:

The Game Tree:


Imagine a game like tic-tac-toe. Each possible move on the board creates a new branch in a tree
structure. The root node represents the starting position, and subsequent levels represent the
possible moves for each player (usually alternating between maximizing and minimizing players).
Terminal nodes represent the end states of the game (win, lose, or draw).

Scoring and Players:


• Maximizing Player: This player aims to achieve the best possible outcome, usually assigned a
high score (e.g., +1 for winning tic-tac-toe).
• Minimizing Player: This player tries to achieve the worst outcome for the maximizing player,
often assigned a low score (e.g., -1 for losing tic-tac-toe).

Minimax in Action:
The minimax algorithm works recursively, evaluating each branch of the tree from the perspective of
the player whose turn it is at that level:
1. Maximizing Player's Turn: The algorithm examines all possible moves (child nodes) available
from the current node.
2. Recursive Evaluation: For each move (child node), minimax is called recursively, simulating
the opponent's (the minimizing player's) best response. This involves evaluating the scores of all
terminal states reachable from that child node.
3. Score Propagation: The minimax algorithm propagates the score back up the tree. For the
maximizing player, it chooses the child node with the highest score, essentially maximizing its
potential outcome.

Example with Tree Structure:


Consider a simplified tic-tac-toe scenario where X is the maximizing player and O is the minimizing
player:
                 (Root)
               /    |    \
             X1     X2     X3
            /  \   /  \   /  \
          O1   O2 O3  O4 O5   O6
          ...  ... ... ... ... ...   (Terminal States)

• X1, X2, X3: Represent possible opening moves for X (maximizing player).

• O1, O2, ..., O6: Represent possible responses from O (minimizing player) for each of X's moves.

• "...": Indicate terminal states (win, lose, or draw) reached after a sequence of moves.


Walkthrough:
1. Start at the root node. This is X's turn (maximizing player).
2. Explore X1: Simulate O's (minimizing player) best response for each child node (O1, O2).
Evaluate the score of each terminal state reachable from those nodes (win/loss for X).
3. Choose the best move for X from X1: This would be the move leading to the highest score (most
wins for X) considering O's potential responses.
4. Repeat for X2 and X3: Evaluate each move (X2 and X3) with minimax, considering O's best
replies. Choose the move for X that leads to the best outcome based on the propagated scores.

The minimax algorithm, by recursively evaluating all possible game states, identifies the move for the
maximizing player (X) that leads to the best outcome (most wins) considering all potential responses
from the minimizing player (O).
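The recursive rule itself is compact, as the sketch below shows. The game object and its is_terminal(), score(), result(), and moves() helpers are hypothetical placeholders for any concrete game representation; scores are taken from the maximizing player's point of view.

# A direct sketch of the minimax rule for a two-player zero-sum game.

def minimax(game, state, maximizing):
    if game.is_terminal(state):
        return game.score(state)                     # e.g., +1 win, -1 loss, 0 draw
    values = (minimax(game, game.result(state, m), not maximizing)
              for m in game.moves(state))
    return max(values) if maximizing else min(values)

def best_move(game, state):
    # The maximizing player picks the move whose subtree has the highest minimax value.
    return max(game.moves(state),
               key=lambda m: minimax(game, game.result(state, m), maximizing=False))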

Limitations:
• Exponential Complexity: The number of possible game states grows exponentially with game
depth. Minimax can become computationally expensive for complex games with many moves.
• Static Evaluation: It relies on a static evaluation function to score terminal states. This function
might not perfectly capture the dynamic nature of a game.

Future Enhancements:
• Alpha-Beta Pruning: This optimization technique significantly reduces the number of states
explored by minimax, making it more efficient for complex games.
• Heuristic Evaluation: Using a more sophisticated heuristic function to estimate scores can
improve decision-making, especially for games with imperfect information.

By systematically exploring the game tree and considering all possible moves and responses, the
minimax algorithm provides a powerful approach for finding optimal moves in two-player zero-sum
games.

6.3.3. A* Search: Balancing Efficiency and Optimality


A* Search stands out as a powerful informed search strategy in the realm of artificial intelligence. It
leverages the strengths of both uninformed and informed search to achieve a remarkable feat: finding
optimal solutions (shortest paths) while maintaining efficiency in exploring the problem space. Here's
a comprehensive breakdown of A* Search, along with an illustrative problem and solution.


Core Idea:
A* Search operates by maintaining two crucial functions:
• h(n): The heuristic function, inherited from Greedy Search. It estimates the cost
(distance, time, etc.) of reaching the goal state from a particular state (n) in the problem space.

This function guides the search towards promising paths.

• f(n): The cost function, which combines the actual cost (g(n)) of reaching state (n) from the start
state and the heuristic estimate (h(n)). It represents the total estimated cost to reach the goal
from state (n). f(n) = g(n) + h(n).

A* Search employs a priority queue (frontier) to manage the exploration process. Unlike Greedy
Search, which solely prioritizes the heuristic value, A* Search prioritizes states based on their f(n)
values. States with lower f(n) are considered more promising as they suggest a potentially lower
overall cost to reach the goal.

The search proceeds iteratively, following these steps:

1. Initialization: Start with the initial state and calculate its f(n) value. Add it to the empty frontier.

2. Loop: While the frontier is not empty and the goal state hasn't been reached:
o Remove the state with the lowest f(n) value from the frontier (considered the most
promising state).
o If the removed state is the goal state, terminate the search - a solution (optimal path) has
been found.
o Expand the current state by generating its successor states (possible next moves).
o For each successor state:

▪ Calculate its g(n) value (actual cost to reach it from the start state).
▪ Calculate its h(n) value (estimated cost to reach the goal from it).
▪ Calculate its f(n) value (g(n) + h(n)).
▪ If the successor state is not already in the explored list or the frontier, add it to the
frontier with its calculated f(n) value.
▪ If the successor state is already in the frontier, but the newly calculated f(n) is lower
than the existing one, update the state's f(n) value in the frontier (this ensures the
search prioritizes the most promising path).


o Add the expanded state (the one removed from the frontier in step 2.1) to the explored list.

A* Search in Action: Maze Navigation Example

Consider the maze below. Let h(n) represent the estimated straight-line distance to the goal from any
state (n).

Figure 2.10 Maze for A* Search Example

Solution:
1. Start: We begin at "Start" and calculate its f(n). Since g(n) for the start state is 0 (no cost to reach
itself), f(n) will solely depend on h(n) (estimated distance to the goal).
2. Iteration 1: The frontier initially contains only "Start" with its f(n) value. We expand "Start,"
generating its neighbors ("Up," "Right," and "Down"). We calculate the g(n) for each neighbor
(cost of moving one step) and add h(n) to get their f(n) values. Since "Right" has the lowest
f(n), it becomes the next state to explore.
3. Iteration 2: We expand "Right," generating its neighbors ("Up," "Right," and "Down"). We
calculate their g(n) and f(n) values. This time, "Up" has the lowest f(n), so it's chosen for
exploration.
4. Iteration 3: Expanding "Up" leads to "Goal" as a successor state. Since "Goal" is the target, the
search terminates.

By following this process, A* Search efficiently navigates the maze, considering both the actual cost of moves (g(n)) and the estimated cost to the goal (h(n)) to find the shortest path: "Start" -> "Right" -> "Up" -> "Goal".


A step-by-step demonstration of A* Search navigating a maze follows:

Maze Layout:
Imagine a maze like the one below, where "S" represents the starting point and "G" represents the
goal.
+---+---+---+
| S |   |   |
+---+---+---+
|   |   |   |
+---+---+---+
|   |   | G |
+---+---+---+

Heuristic Function (h(n))


For this example, let's assume the heuristic function (h(n)) estimates the straight-line distance from
any state (n) in the maze to the goal (G).
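
As a small illustration, this straight-line estimate can be written in a few lines of Python; the (row, column) coordinate convention used here is only an assumption made for this example.

import math

def straight_line_h(cell, goal):
    # h(n): Euclidean ("as the crow flies") distance from a cell to the goal,
    # with each maze cell addressed as (row, column) coordinates.
    return math.sqrt((cell[0] - goal[0]) ** 2 + (cell[1] - goal[1]) ** 2)

# Distance from the top-left start (0, 0) to the bottom-right goal (2, 2):
print(straight_line_h((0, 0), (2, 2)))   # about 2.83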

A* Search Process:
1. Initialization:
o We start at the "S" state.
o Since there's no cost to reach the starting state itself, its actual cost (g(n)) is 0.
o We calculate the heuristic value (h(n)) for the starting state, which is the straight-line
distance to the goal (typically calculated using a distance formula).
o The starting state's f(n) value is calculated as g(n) + h(n). f(n) represents the total
estimated cost to reach the goal from the current state.
o We create an empty explored list to keep track of visited states and a priority queue
(frontier) to manage unexplored states.
o The starting state with its f(n) value is added to the frontier.

2. Looping through the Search:


The search continues iteratively until the goal state is found or the frontier becomes empty
(indicating no solution exists):
o We remove the state with the lowest f(n) value from the frontier (considered the most
promising state to explore next).


o If the removed state is the goal state (G), the search terminates successfully - we've found
the optimal path!
o If not, we expand the current state by generating its successor states (all possible moves
from that state - Up, Down, Left, Right in this maze example).

3. Evaluating Successor States:

For each successor state:
o We calculate its actual cost (g(n)) to reach it from the current state (typically the cost of moving one step in the maze, which is 1 in this case).
o We calculate its heuristic value (h(n)) based on its estimated straight-line distance to the goal.
o We calculate its f(n) value (g(n) + h(n)).

4. Updating the Frontier and Explored List:
o We check if the successor state is already in the explored list. If it is, we skip it as we've
already explored that path.

o If the successor state is not in the explored list:

▪ We add the successor state with its f(n) value to the frontier. This allows the priority
queue to prioritize states with lower estimated total costs.

o We also add the expanded state (the one removed from the frontier in step 2) to the
explored list to prevent revisiting the same state.

5. Repeating the Process:


We repeat steps 2-4 until the goal state is found or the frontier becomes empty.

Following these steps, A* Search will efficiently explore the maze, prioritizing paths that appear closer to the goal based on the heuristic estimate. In this example, the optimal path (shortest route) from "S" to "G" will be discovered.

In real-world scenarios, the heuristic function might be more complex, and the calculations might
involve additional factors depending on the problem domain.
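
The steps above can also be condensed into a short Python sketch for the 3x3 maze. This is a simplified illustration rather than a general implementation: the grid is assumed to have no internal walls, every move costs 1, the heuristic is the Manhattan distance (an admissible alternative to the straight-line estimate when movement is limited to the four grid directions), and duplicate frontier entries are simply skipped when popped rather than updated in place as described in step 4.

import heapq

def manhattan(cell, goal):
    # h(n): estimated remaining cost from a cell to the goal.
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

def a_star(start, goal, rows=3, cols=3):
    # Frontier entries are (f, g, cell, path); heapq always pops the lowest f(n) first.
    frontier = [(manhattan(start, goal), 0, start, [start])]
    explored = set()
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path                      # goal reached: return the path found
        if cell in explored:
            continue                         # skip states that were already expanded
        explored.add(cell)
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in explored:
                new_g = g + 1                # actual cost of one more step
                new_f = new_g + manhattan((nr, nc), goal)
                heapq.heappush(frontier, (new_f, new_g, (nr, nc), path + [(nr, nc)]))
    return None                              # frontier exhausted: no route to the goal

# "S" in the top-left corner, "G" in the bottom-right corner of the 3x3 maze:
print(a_star((0, 0), (2, 2)))   # e.g. [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]

Because the Manhattan distance never overestimates the true number of remaining moves on this grid, the path returned is guaranteed to be a shortest one.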

Advantages:
• Optimality: A* Search is guaranteed to find the optimal solution (shortest path) if a consistent
heuristic is used. A consistent heuristic never overestimates the actual cost to reach the goal (h(n) <= actual cost to goal). This ensures that A* Search explores only the most promising paths
and eventually reaches the goal with the minimum cost.
• Efficiency: Compared to uninformed search algorithms like BFS, A* Search prioritizes states
with lower f(n) values. This f(n) function combines the actual cost traveled (g(n)) and the
estimated cost to the goal (h(n)). By focusing on states with a potentially lower overall cost, A*
Search avoids exploring irrelevant parts of the problem space, leading to faster solution times in
many scenarios.
• Heuristic Flexibility: A* Search doesn't require a perfect heuristic. Even with an imperfect but
reasonable estimate, A* Search can often find near-optimal solutions much faster than
uninformed search methods. This flexibility allows for adapting the heuristic function to specific
problem domains, potentially improving efficiency.

Disadvantages:
• Reliance on Heuristic Quality: The effectiveness of A* Search hinges on the quality of the
chosen heuristic function. A poor heuristic that significantly overestimates the cost to the goal
(h(n) >> actual cost) can lead A* Search down misleading paths, hindering its efficiency and
potentially preventing it from finding the optimal solution.
• Computational Overhead: Compared to simpler uninformed search algorithms, A* Search
involves maintaining a priority queue (frontier) and calculating f(n) values for states. This can
introduce some additional computational overhead, especially for problems with large
branching factors or complex heuristic functions.
• Suboptimality with Inadmissible Heuristics: The optimality guarantee holds only when the heuristic never overestimates the true cost. If the heuristic overestimates the cost along some path, A* Search may commit to other paths first and return a solution that is not actually the cheapest one.

A* Search offers a powerful approach for navigating problem spaces, particularly when an accurate
or reasonable heuristic function is available. Its ability to find optimal solutions while maintaining
efficiency makes it a popular choice for various AI applications. However, the quality of the heuristic
and the potential for computational overhead are important considerations when choosing A* Search
for a specific problem domain.


6.3.4. AO* Algorithm


The AO* algorithm is a type of search algorithm designed for problems represented by AND-OR
graphs. Unlike A*, which excels at finding a single best path, AO* tackles scenarios where there are
multiple possible solutions or achieving sub-goals is crucial.

Here's a breakdown of AO* and its key features:

What it's Used For:


• Planning in Uncertain Environments: Imagine a search and rescue mission in a disaster zone.
Buildings might be collapsed, roads blocked, and there could be multiple survivors scattered
around. AO* can efficiently explore this complex search space, considering different routes and
adapting to dead ends.
• Multi-Option Decision Making: Robotics is another field where AO* shines. Robots navigating
an environment might encounter obstacles or have various paths to reach a goal. AO* allows
them to explore these options simultaneously and choose the most efficient course of action.
• Resource Allocation and Scheduling: In tasks like project management or delivery routing,
there might be multiple ways to allocate resources or sequence deliveries. AO* can factor in
these options and dependencies to find the optimal plan.

How it Works:
1. Modeling the Problem: The environment or scenario is represented as an AND-OR graph.
Nodes represent locations or states, and edges connect them. Edges are labeled as "OR"
(different options to reach the next node) or "AND" (all options need to be explored before
moving on).
2. Cost and Heuristic Functions: Similar to A*, AO* uses a cost function (g) to represent the effort
of traversing an edge and a heuristic function (h) to estimate the remaining effort to reach the
goal.
3. Open and Closed Lists: The algorithm maintains two lists: "OPEN" for unexplored nodes and
"CLOSED" for explored ones. Unlike A*, AO* can revisit nodes in the CLOSED list if new
information arises during the search.
4. Iterative Exploration: It starts at the starting node and explores the most promising node in
the OPEN list based on a combination of cost (g) and estimated effort (h).


5. Handling OR and AND Nodes:


o OR Nodes: When encountering an OR node, the algorithm explores all outgoing edges
(paths) simultaneously, adding them to the OPEN list for further evaluation. This allows for
parallel exploration of different possibilities.
o AND Nodes: For AND nodes, all child nodes need to be confirmed safe or reachable before
moving on. If any child leads to a dead end, the parent AND node is marked unreachable.

6. Finding the Solution: The search continues until the goal is reached (safe zone in a rescue
mission) in an explored path. The reconstructed paths from various starting points represent
the optimal solution, considering all options and adapting to discovered dead ends.

Advantages of AO*:

• AND-OR Graph Efficiency: It effectively explores complex search spaces with multiple options
and dependencies between sub-goals.

• Adaptability to Change: AO* can dynamically adjust its search based on new information
discovered during the process.

Considerations:
• Computation Cost: Exploring multiple paths simultaneously can be computationally expensive
compared to A*.
• Heuristic Importance: A good heuristic function is crucial for guiding the search towards
promising areas and avoiding unnecessary exploration of dead ends.

By efficiently navigating the exploration of various possibilities, AO* solves problems where the
optimal solution requires considering diverse options and adapting to changing information.
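
A complete AO* implementation, with its OPEN and CLOSED lists and cost revision, is too long to reproduce here, but the core AND/OR distinction can be sketched with a short recursive cost evaluation. The Python sketch below is a simplified illustration under the assumption of an acyclic AND-OR graph stored as a plain dictionary: it returns the cheapest solution cost by taking the minimum over OR branches and the sum over AND branches. The node names and costs are invented solely for this example and do not come from any particular system.

def solve_cost(node, graph, leaf_costs):
    # graph[node] -> ("OR",  [(edge_cost, child), ...])  choose ONE child
    #              | ("AND", [(edge_cost, child), ...])  must solve ALL children
    # Leaves (terminal sub-goals) appear only in leaf_costs.
    if node in leaf_costs:
        return leaf_costs[node]
    kind, branches = graph[node]
    child_totals = [cost + solve_cost(child, graph, leaf_costs)
                    for cost, child in branches]
    if kind == "OR":
        return min(child_totals)             # OR: pick the cheapest option
    return sum(child_totals)                 # AND: every sub-goal must be solved

# Hypothetical example: reach safety either via the stairs (one option)
# or by clearing BOTH a corridor and a fire door (two sub-goals).
graph = {
    "start":    ("OR",  [(2, "stairs"), (1, "combined")]),
    "combined": ("AND", [(3, "corridor"), (4, "fire_door")]),
}
leaf_costs = {"stairs": 5, "corridor": 0, "fire_door": 0}
print(solve_cost("start", graph, leaf_costs))   # min(2 + 5, 1 + (3 + 0) + (4 + 0)) = 7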

6.3.5. Case Study-AO*


The A* algorithm excels at finding the single best path, but what if there are multiple viable options?
That's where AO* comes in. AO* tackles search problems represented by AND-OR graphs, where
reaching the goal might involve multiple solutions or achieving sub-goals. Let's explore its application
in a critical situation:


Scenario: A fire engulfs a high-rise office building. Firefighters need to find the fastest way to
evacuate all occupants safely. The building has multiple staircases, exits, and some floors might be
inaccessible due to the fire.

AO* to the Rescue: Here's how AO* can be used for efficient evacuation:
1. Modeling the Building: The building layout is transformed into an AND-OR graph. Nodes
represent rooms, hallways, and exits. Edges connect them, with some marked as "OR" (different
escape routes from a floor) and others as "AND" (all occupants must use that passage, like a
single staircase). Blocked hallways due to fire are dead-end nodes.
2. Cost Function (g): Similar to A*, AO* uses a cost function (g) to represent the time or effort
required to traverse an edge. This could include factors like distance, number of people to
evacuate, and potential hazards like smoke density.
3. Heuristic Function (h): The heuristic (h) estimates the remaining effort needed to reach safe
exits from any point. This might consider proximity to known exits, smoke levels, or accessibility
for people with mobility limitations.
4. Open and Closed Lists: AO* maintains two lists: "OPEN" for unexplored nodes and "CLOSED"
for explored ones. However, unlike A*, AO* can revisit nodes in the CLOSED list if new
information arises, like discovering a blocked passage.
5. Search Process: The algorithm starts at the location where the fire alarm originated and adds it
to the open list. It then iteratively explores the most promising node (lowest f(n) - same as A*)
based on cost (g) and estimated effort (h).
6. AND vs. OR Nodes:
o OR Nodes: When encountering an OR node (e.g., hallway with two exits), the algorithm
explores all outgoing edges (paths) simultaneously, adding them to the open list. This allows
for parallel exploration of different escape routes.
o AND Nodes: For AND nodes (e.g., single staircase), all child nodes (people on that floor) need
to be explored and confirmed evacuated (or dead-end if the path is blocked) before moving
on. If any child node leads to a dead-end (blocked passage), the parent AND node is marked
as unreachable.

7. Solution Found: The search continues until all occupants are found to be on a path leading to a
safe exit (marked as a goal node) in an explored path. The reconstructed paths from different
points within the building represent the most efficient evacuation plan, considering all available
escape routes and potential obstacles.


This evacuation scenario highlights AO*'s ability to manage complex decision-making in emergencies
with multiple options and uncertainties. It's a valuable tool for applications like:
• Urban Search and Rescue: Optimizing search patterns for finding missing people in disaster
zones with collapsed buildings or blocked roads.
• Cybersecurity: Simulating attack scenarios and identifying the most effective ways to contain a
security breach within a computer network.
• Logistics and Delivery: Planning efficient delivery routes with multiple stops, considering
traffic conditions and potential delays.

By effectively navigating the exploration of various possibilities, AO* tackles problems where the
optimal solution requires considering diverse options and adapting to changing information.

6.3.6. Alpha-Beta Pruning


Alpha-beta pruning is an optimization technique used to significantly speed up the minimax
algorithm, commonly employed in two-player zero-sum games like chess, checkers, or Go. Here's a
detailed explanation with a tree structure example:

Minimax - The Foundation:


The minimax algorithm explores all possible game states (represented as a tree) for a specific game
and assigns a score (usually win/loss or positive/negative value) to each terminal state (end of the
game). For the maximizing player (e.g., white in chess), the goal is to choose a move that leads to the
highest score, while the minimizing player (black) aims for the lowest score. Minimax recursively
evaluates each branch of the tree, considering all possible opponent responses, to arrive at the best
move for the maximizing player.

The Bottleneck: Explosion of States


The problem with minimax is the exponential growth of possible game states as the game progresses.
Evaluating every single branch in a deep tree can be computationally expensive, especially for
complex games like chess. This is where alpha-beta pruning comes in.

Alpha-Beta Pruning in Action:


Alpha-beta pruning introduces two thresholds, alpha and beta:
• Alpha: Represents the highest guaranteed score the maximizing player can achieve from a
particular branch of the tree. Initially set to negative infinity (-∞).


• Beta: Represents the lowest guaranteed score the minimizing player can achieve from a branch.
Initially set to positive infinity (+∞).

As the algorithm explores the tree:


1. Maximizing Player's Turn: It evaluates a node and gets a score. This score is compared to
alpha.

o If the score is greater than alpha, alpha is updated to the new value. This signifies a better
score for the maximizing player.

2. Minimizing Player's Turn: The algorithm evaluates a node and gets a score. This score is
compared to beta.

o If the score is less than beta, beta is updated to the new value. This signifies a better (lower) outcome for the minimizing player, which is correspondingly worse for the maximizing player.

Pruning the Unnecessary:


The key insight behind alpha-beta pruning is that once a guaranteed score is established for a branch,
we can stop exploring further branches below it. Here's how:
• For the Maximizing Player (alpha cutoff): If, while one of the minimizing player's nodes is being evaluated, a value less than or equal to the current alpha is found, this branch can never be better for the maximizing player than an option it has already secured elsewhere. We can prune (stop exploring) the rest of the branch, because the maximizing player would never steer the game into it.

• For the Minimizing Player (beta cutoff): Similarly, if, while one of the maximizing player's nodes is being evaluated, a value greater than or equal to the current beta is found, this branch can never be better for the minimizing player than an option it already has. We can prune the rest of the branch, because the minimizing player would never allow the game to reach it.

Example with Tree Structure:


Imagine a simplified game tree (maximizing player starts):
             (Max)
            /     \
        A(4)       B(?)
        /  \       /  \
    C(4)   D(8)  E(?)  F(?)

• A (4): Represents a node with a score of 4 for the maximizing player.

• "?": Represents unexpanded nodes (not yet evaluated).

Let's walk through the exploration with alpha-beta pruning:


1. Start at the root (Max). Alpha is -∞, beta is +∞.
2. Explore A (a minimizing node): its children C (4) and D (8) are evaluated, so A's value is min(4, 8) = 4. Back at the root, alpha is updated to 4, meaning the maximizing player is now guaranteed at least 4.
3. Move to B (a minimizing node). Alpha is still 4, beta is +∞.
4. Explore E: Let's say E evaluates to 3. B's value can now be at most 3, which is less than or equal to alpha (4). The maximizing player would never prefer B over A, so the remaining branch of B (F and below) is pruned.

Result:
By pruning unnecessary branches, the algorithm focuses on the most promising areas of the tree,
significantly reducing the number of nodes evaluated. This allows for faster decision-making,
especially in complex games with many possible moves.
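
For comparison with the plain minimax sketch given earlier, here is a minimal Python version with alpha-beta pruning added. As before, is_terminal, score, and get_moves are assumed, game-supplied helpers rather than calls to any real library; the two break statements implement the cutoffs just described.

def alphabeta(state, is_maximizing, alpha, beta, is_terminal, score, get_moves):
    # Returns the minimax value of state while skipping branches that cannot matter.
    if is_terminal(state):
        return score(state)
    if is_maximizing:
        value = float("-inf")
        for child in get_moves(state):
            value = max(value, alphabeta(child, False, alpha, beta,
                                         is_terminal, score, get_moves))
            alpha = max(alpha, value)
            if value >= beta:      # beta cutoff: Min already has a better option elsewhere
                break
        return value
    else:
        value = float("inf")
        for child in get_moves(state):
            value = min(value, alphabeta(child, True, alpha, beta,
                                         is_terminal, score, get_moves))
            beta = min(beta, value)
            if value <= alpha:     # alpha cutoff: Max already has a better option elsewhere
                break
        return value

# Initial call from the root, with the maximizing player to move:
# alphabeta(root_state, True, float("-inf"), float("inf"), is_terminal, score, get_moves)

With good move ordering (strong moves examined first), pruning can reduce the number of nodes examined to roughly the square root of what plain minimax would visit, which is why move ordering is so important in practice.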

Key Points:
• Alpha-beta pruning significantly improves the efficiency of minimax without affecting the final
outcome (optimal move).
• The effectiveness of pruning depends on the ordering of moves explored and the quality of the
heuristic evaluation function used to estimate scores.

Selecting a Search Algorithm: Choosing the Right Tool for the Job
In the realm of artificial intelligence, selecting the appropriate search algorithm is crucial for
efficiently solving problems. Different algorithms excel in handling various problem characteristics.
Here, we'll delve into two key factors influencing search algorithm selection: Space vs. Time
Complexity and Problem Characteristics.


7. SPACE VS. TIME COMPLEXITY

Understanding the complexity of a search algorithm is essential. Complexity refers to the amount of
resources (memory and time) an algorithm requires as the problem size (number of states) increases.
Here's a breakdown of the two key complexities to consider:
• Time Complexity: This measures the execution time of an algorithm in relation to the problem
size. Common notations include O(log n) (logarithmic time), O(n) (linear time), and O(n^2)
(quadratic time). Lower time complexity indicates faster execution for larger problems.

• Space Complexity: This measures the amount of memory an algorithm needs to store
information during its execution. It's also denoted using notations like O(1) (constant space) or
O(n) (linear space). Lower space complexity is preferred for problems with limited memory
resources.

Problem with Solution:


Consider the problem of searching for a specific book in a library catalog.
• Uninformed Search (BFS): This could be a viable option. BFS systematically explores all
possibilities at a given depth, ensuring the target book is found if it exists in the catalog. Its time
complexity is O(b), where b is the number of books (problem size). Space complexity is also O(b)
as it needs to store the queue of explored and unexplored books in memory.
• Informed Search (A* Search): If the library catalog has a search function that estimates the
"relevance" of a book to the search query, A* Search could be more efficient. By prioritizing
books with higher estimated relevance (heuristic), it can find the target book faster. The time
complexity depends on the quality of the heuristic, but it's generally lower than BFS for well-
defined relevance estimates. Space complexity remains O(b) due to the frontier for unexplored
books.

Problem Characteristics
The effectiveness of a search algorithm depends on the specific characteristics of the problem it's
tackling. Here are some key factors to consider:
• State Space Size: Is the problem space vast (e.g., chess game) or relatively small (e.g., navigating
a maze with few paths)? Algorithms with lower space complexity might be crucial for problems
with limited memory.


• Branching Factor: How many possible next states can be reached from any given state? Higher
branching factors can lead to exponential growth in the search space. Algorithms that prioritize
promising paths (informed search) are beneficial in such scenarios.
• Guaranteed Solution: Does the problem guarantee a solution exists, or is it possible no solution
is available (e.g., finding a path in a maze with a dead end)? If a solution might not exist,
algorithms with completeness guarantees (like DFS) are preferred.
• Optimality: Is finding the absolute shortest path crucial, or is a "good enough" solution
acceptable? If optimality is essential, A* Search with a consistent heuristic is ideal.

Problem with Solution:


Let's explore a scenario where you need to find a route between two cities on a map with numerous
towns and roads connecting them.
• Uninformed Search (DFS): This might not be optimal. DFS can get stuck exploring long,
winding paths and miss shorter routes. However, its space complexity (O(d), where d is the
depth of the search) can be advantageous if memory is limited.
• Informed Search (A* Search): Assuming the map provides estimated travel times between towns,
A* Search becomes a compelling choice. With this heuristic, it can prioritize routes with lower
estimated travel times, leading to faster path discovery. The time complexity depends on the
heuristic quality, but it's generally efficient for well-defined road networks. Space complexity is typically higher than that of DFS, because the frontier must hold every generated but not yet explored route.

Selecting the right search algorithm requires careful consideration of the problem characteristics and
the desired outcome. Analyzing the space and time complexities of different algorithms in relation to
the specific problem helps in making an informed decision. By understanding these factors, you can
equip your AI systems with the most efficient search strategies for tackling various problem domains.


8. SUMMARY

Intelligent agents are categorized into five types based on their complexity and functionality. Simple
reflex agents operate using predefined rules to respond to immediate percepts, making them suitable
for simple and fully observable environments. Model-based reflex agents, however, maintain an
internal state that accounts for unobservable aspects of the environment, enabling them to handle
more complex and dynamic scenarios. Goal-based agents take this further by acting to achieve specific
objectives, which requires evaluating potential actions against their goals. Utility-based agents
introduce the concept of utility, aiming to maximize their overall satisfaction by considering the trade-
offs between different outcomes. Finally, learning agents possess the ability to improve their
performance over time through experience, making them adaptable to new and evolving situations.

The architecture of an agent consists of several critical components. The perceptual system gathers
and processes information from the environment, forming the basis for decision-making. The internal
model maintains a representation of the world, allowing the agent to make informed decisions even
when not all information is directly observable. The decision-making system evaluates possible
actions based on percepts, internal states, goals, and utilities. The action execution system then
carries out these decisions using actuators. Additionally, the learning element enables the agent to
adapt and improve by updating its internal model and decision-making processes based on past
experiences.

Problem-solving agents are defined by specific components that facilitate their search for solutions.
These include the initial state, representing the starting point of the problem, and actions, which are
the possible steps the agent can take. The transition model describes how actions affect the state,
while the goal test determines whether the current state meets the desired objectives. Path cost
evaluates the efficiency of different action sequences. Search algorithms are employed to navigate
through possible states, with uninformed strategies like depth-first and breadth-first search
exploring without additional information, and informed strategies like greedy search and A* search
using heuristics to guide the search more effectively. Selecting an appropriate search algorithm
depends on the balance between space and time complexity and the specific characteristics of the
problem at hand.


9. GLOSSARY

Simple Reflex Agents - Agents that make decisions based solely on current percepts using condition-action rules, suitable for simple and fully observable environments.

Model-Based Reflex Agents - Agents that maintain an internal state to keep track of unobservable aspects of the environment, allowing them to handle more complex and dynamic scenarios.

Goal-Based Agents - Agents that operate by setting specific goals and choosing actions that bring them closer to these goals, requiring planning and evaluation of future states.

Utility-Based Agents - Agents that use a utility function to evaluate the desirability of different states, aiming to maximize overall satisfaction by balancing trade-offs between conflicting goals.

Learning Agents - Agents capable of improving their performance over time by learning from past experiences and adapting their behavior accordingly.

Perceptual System - The component responsible for gathering and processing information from the environment to form percepts.

Internal Model - An internal representation of the world that helps the agent make decisions even when not all information is directly observable.

Decision-Making System - The system that evaluates percepts, internal models, goals, and utilities to choose the best course of action.

Action Execution System - The part of the agent that carries out the chosen actions using actuators.

Learning Element - The component that enables the agent to adapt and improve its performance by learning from experiences and updating its internal model and decision-making processes.

Initial State - The state of the environment at the beginning of the problem-solving process.

Actions - The possible steps that an agent can take to transition from one state to another.

Transition Model - Describes how actions affect the state, defining the rules for state changes.

Goal Test - A criterion to determine if the current state meets the desired objectives of the agent.

Path Cost - A measure of the efficiency or cost of a sequence of actions taken by the agent to reach a goal.

Uninformed Search Strategies - Search methods that explore the state space without additional information about the goal's location, including:

Depth-First Search - Explores as far as possible along each branch before backtracking.

Breadth-First Search - Explores all nodes at the present depth before moving on to nodes at the next depth level.

Depth-Limited Search - A variant of depth-first search with a predetermined limit on the depth to avoid infinite paths.

Iterative Deepening Search - Combines the benefits of depth-first and breadth-first searches by incrementally increasing the depth limit.

Informed Search Strategies (Heuristic Search) - Search methods that use heuristics to guide exploration, including:

Greedy Search - Selects the path that appears to lead most directly to the goal based on a heuristic.

A* Search - Uses a heuristic to combine the cost to reach a node and the estimated cost from that node to the goal to find the most cost-effective path.

Space vs. Time Complexity - Consideration of the trade-offs between the memory required (space complexity) and the time taken (time complexity) by a search algorithm.

Problem Characteristics - The specific attributes of the problem, such as the size of the state space and the nature of the goal, which influence the choice of an appropriate search algorithm.


10. SELF-ASSESSMENT QUESTIONS

SELF-ASSESSMENT QUESTIONS – 1
Fill in the blanks:
1 Simple Reflex Agents operate using __________ rules to respond to immediate percepts.
2 Model-Based Reflex Agents maintain an __________ state to account for unobservable aspects of
the environment.
3 Goal-Based Agents evaluate actions based on how well they achieve specific __________.
4 Utility-Based Agents use a __________ function to determine the desirability of different states.
5 Learning Agents improve their performance over time by __________ from past experiences.
6 The Perceptual System gathers and processes information from the environment to form
__________
7 An Internal Model helps an agent make decisions by maintaining a representation of the
__________.
8 The Decision-Making System evaluates percepts, internal models, goals, and utilities to choose
the best course of __________.
9 The Action Execution System carries out the chosen actions using __________.
10 The Learning Element allows the agent to adapt and improve its performance by updating its
__________ and decision-making processes based on experiences.


11. TERMINAL QUESTIONS

Short Answer Questions


1. What is the primary difference between simple reflex agents and model-based reflex agents?

2. How do goal-based agents determine the actions they should take?

3. What role does the perceptual system play in an agent's architecture?

4. Why are utility-based agents considered more sophisticated than goal-based agents?

5. What is the purpose of the learning element in an agent's architecture?

Long Answer Questions


1. Compare and contrast the different types of intelligent agents discussed, highlighting their
strengths and weaknesses in various environments.

2. Explain the components of an agent's architecture and discuss how each component contributes
to the agent's overall functionality and effectiveness.

3. Describe the components of a problem-solving agent and explain how each component is used
to formulate and solve problems.

4. Discuss the differences between uninformed and informed search strategies, providing
examples of each and explaining how heuristics influence the performance of informed searches.

5. Evaluate the factors that need to be considered when selecting a search algorithm for a given
problem, including the trade-offs between space and time complexity and the specific
characteristics of the problem.


12. ANSWERS

12.1. Self Assessment Questions


1. condition-action
2. internal
3. goals
4. utility
5. learning
6. percepts
7. world
8. action
9. actuators
10. internal model

12.2. Terminal Questions


Short Answer Questions
Answer 1 : Topic Reference: Simple Reflex Agents (2.1.1) and Model-Based Reflex Agents (2.1.2)
Answer 2 : Topic Reference: Goal-Based Agents (2.1.3)
Answer 3 : Topic Reference: Perceptual System (2.2.1)
Answer 4 : Topic Reference: Utility-Based Agents (2.1.4) and Goal-Based Agents (2.1.3)
Answer 5 : Topic Reference: Learning Element (2.2.5)

Long Answer Questions


Answer 1 : Topic References: Simple Reflex Agents (2.1.1), Model-Based Reflex Agents (2.1.2), Goal-
Based Agents (2.1.3), Utility-Based Agents (2.1.4), Learning Agents (2.1.5)
Answer 2 : Topic References: Perceptual System (2.2.1), Internal Model (2.2.2), Decision-Making
System (2.2.3), Action Execution System (2.2.4), Learning Element (2.2.5)
Answer 3 : Topic References: Initial State (2.3.1.1), Actions (2.3.1.2), Transition Model (2.3.1.3), Goal
Test (2.3.1.4), Path Cost (2.3.1.5)
Answer 4 : Topic References: Uninformed Search Strategies (2.4.1), Depth-First Search (2.4.1.1),
Breadth-First Search (2.4.1.2), Depth-Limited Search (2.4.1.3), Iterative Deepening Search (2.4.1.4),
Informed Search Strategies (2.4.2), Greedy Search (2.4.2.1), A* Search (2.4.2.2)


Answer 5 : Topic References: Selecting a Search Algorithm (2.3), Space vs. Time Complexity (2.3.1),
Problem Characteristics (2.3.2)


13. REFERENCES

1. "Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig


2. "Artificial Intelligence: Foundations of Computational Agents" by David L. Poole and Alan K.
Mackworth
