Module 2

This document introduces problem-solving agents in artificial intelligence, detailing their goal formulation, problem formulation, state information, and search execution. It outlines the components of well-defined problems, including initial state, actions, transition model, goal test, and path cost, and discusses the importance of abstraction in simplifying complex problems. Additionally, it contrasts toy problems with real-world problems, providing examples and explaining the search process and infrastructure for search algorithms.

Uploaded by CHANDAN C V
© All Rights Reserved


MODULE 02

3.1 PROBLEM-SOLVING AGENTS

This section introduces the concept of problem-solving agents. Here's a breakdown:

1. Goal Formulation

 An intelligent agent aims to maximize its performance measure, which helps guide
its behaviour.

 The agent adopts goals to simplify its decision-making process. By focusing on a goal,
the agent can narrow down the actions it needs to consider.

 For instance, an agent in Romania might set a goal to reach Bucharest from Arad.
This reduces the complexity of the problem, since now only paths leading to
Bucharest are relevant.

2. Problem Formulation

 Once the goal is established, the agent needs to decide which actions and states to
consider.

 If the agent were to consider actions in minute detail (like moving a foot forward by
an inch), the problem would become overwhelming. Instead, the agent considers
actions at a higher level, like traveling between major cities.

 For example, the agent in Romania is trying to choose which city to go to next from
Arad. The agent needs information about which actions lead to Bucharest, so it might
use a map.

3. State Information and Search

 The agent assumes it has information about the states and actions. If the
environment is observable and deterministic, the agent knows the current state and
can predict the effects of its actions with certainty. This is true for a map of Romania.

 The agent starts by choosing a path from Arad to Bucharest. It may follow a sequence
of cities, depending on what roads are available. If the agent faces uncertainty (like
taking the wrong road), it might need a contingency plan.

4. Search and Execution

 The search process looks for a sequence of actions that achieves the goal.
The agent uses a search algorithm to find the best path to its goal.

 Once a solution is found, the agent moves into the execution phase, where it follows
the action sequence it has determined.

 The agent executes the actions without relying on real-time percepts, since it
knows exactly what will happen (an open-loop system).

3.1.1 Well-defined problems and solutions

A well-defined problem in artificial intelligence is formally defined by five components:

1. Initial State:

o This is the starting point of the agent in the problem. For example, the initial
state in the Romania navigation problem might be represented as In(Arad).

2. Actions:

o The set of possible actions that the agent can perform in any given state. In
the Romania example, from In(Arad), the agent can take actions such as
{Go(Sibiu), Go(Timisoara), Go(Zerind)}.

3. Transition Model:

o This model describes what happens when an action is taken in a given
state. The transition model is a function RESULT(s, a) that returns the state
resulting from performing action a in state s. For example, RESULT(In(Arad),
Go(Zerind)) = In(Zerind).

4. Goal Test:

o A goal test determines whether a given state is a goal state. For instance,
the goal of the agent might be In(Bucharest), so the goal test checks if the
current state matches In(Bucharest).

5. Path Cost:

o This is a numeric cost assigned to each path that the agent follows. For
example, if the goal is to minimize travel time, the cost function could be
based on the distances between cities, such as the kilometers between Arad
and Bucharest.

Solution to a Problem

A solution to a problem is an action sequence that leads from the initial state to a goal state.
The quality of a solution is measured by the path cost, which is the total cost of the
sequence of actions. An optimal solution is the one with the lowest path cost among all
possible solutions.
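The five components above can be gathered into one problem definition. Below is a minimal Python sketch for the Romania route-finding problem; the `RomaniaProblem` class, the road-map excerpt, and the method names are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of a well-defined problem, assuming a small
# excerpt of the Romania road map (distances in kilometers).
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
    "Timisoara": {"Arad": 118},
    "Zerind": {"Arad": 75},
}

class RomaniaProblem:
    def __init__(self, initial="Arad", goal="Bucharest"):
        self.initial = initial        # initial state, e.g. In(Arad)
        self.goal = goal

    def actions(self, state):         # ACTIONS(s): cities reachable by road
        return list(ROADS[state])

    def result(self, state, action):  # RESULT(s, a): transition model
        return action                 # Go(city) lands the agent in that city

    def goal_test(self, state):       # goal test
        return state == self.goal

    def step_cost(self, state, action):
        return ROADS[state][action]   # path cost accumulates road distances

problem = RomaniaProblem()
print(problem.result("Arad", "Zerind"))   # Zerind
```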

3.1.2 Formulating problems

When formulating a problem, we must abstract certain details to simplify the problem
without losing essential information.

 Abstraction:
o In the Romania navigation problem, we focus only on the cities and the roads
between them, ignoring many real-world details such as the condition of the
road, the weather, or the traveling companions. This makes the problem
solvable in a more manageable way.

 Action Abstraction:

o The actions themselves are abstracted. Instead of considering every minor
detail (e.g., turning the steering wheel by a small degree), we consider high-level
actions like "drive from one city to another."

 Level of Abstraction:

o The level of abstraction is defined such that the abstract actions are simpler
to carry out than the detailed ones. An abstract solution, like the path from
Arad to Bucharest, corresponds to a larger set of more detailed actions. If the
abstraction is valid, any solution in the abstract world can be expanded into a
solution in the real world without invalidating the results.

 Practicality of Abstraction:

o The choice of abstraction should remove unnecessary details but still
maintain the validity of the solution. In AI, abstractions help the agent handle
complex environments without becoming overwhelmed by unnecessary
information.

Example

In the problem of navigating in Romania, an agent can abstract the problem by focusing on
the cities and ignoring non-essential factors. The agent's task is to find a path from Arad to
Bucharest, and once the abstract solution is found (e.g., Arad → Sibiu → Rimnicu Vilcea →
Pitesti → Bucharest), it can be expanded into detailed actions (e.g., deciding when to
take a rest or turn on the radio). This abstraction helps simplify the problem-solving
process.

3.2 EXAMPLE PROBLEMS


The problem-solving approach in AI is applied across various task environments. These
environments are often categorized into toy problems and real-world problems.

3.2.1 Toy problems

Toy problems are simplified versions of real-world problems designed for testing and
comparing algorithms. They are well-defined, making them ideal for research and
illustration. Some examples include:

A. Vacuum World

This problem simulates a simple vacuum cleaner operating in a discrete environment.

 States: Defined by the agent's location (one of two squares) and whether dirt is present
at each location. Total possible states: 2 × 2² = 8.
For larger environments with n locations, there are n · 2ⁿ states.

 Initial State: Any configuration of agent location and dirt.

 Actions:

o Move Left.

o Move Right.

o Suck dirt (clean the current square).

 Transition Model:
Actions behave as expected:

o Moving Left from the leftmost square or Right from the rightmost square has
no effect.

o Sucking when the square is already clean also does nothing.

 Goal Test: All squares must be clean.

 Path Cost: Each action costs 1 step.
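The formulation above can be sketched directly; the state encoding `(agent_location, dirt_left, dirt_right)` and the function names are illustrative assumptions of this sketch.

```python
from itertools import product

# Two-square vacuum world; a state is (agent_location, dirt_left, dirt_right).
STATES = list(product(["Left", "Right"], [True, False], [True, False]))

def result(state, action):
    """Transition model: RESULT(s, a) for the two-square world."""
    loc, dirt_left, dirt_right = state
    if action == "Left":                 # no effect if already leftmost
        return ("Left", dirt_left, dirt_right)
    if action == "Right":                # no effect if already rightmost
        return ("Right", dirt_left, dirt_right)
    if action == "Suck":                 # cleans the current square only
        if loc == "Left":
            return (loc, False, dirt_right)
        return (loc, dirt_left, False)
    raise ValueError(action)

def goal_test(state):
    return not state[1] and not state[2]  # all squares clean

print(len(STATES))   # 2 * 2**2 = 8
```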

B. 8-Puzzle

A classic sliding puzzle consisting of eight numbered tiles and one blank space.

 States: Each state represents an arrangement of the tiles and the blank space. Total:
9!/2 = 181,440 reachable states.

 Initial State: Any arrangement of the tiles and the blank.

 Actions: Slide the blank Left, Right, Up, or Down. The available actions depend on the
blank's position.

 Transition Model: Moving the blank swaps its position with the adjacent tile.

 Goal Test: The tiles match a predefined goal configuration.

 Path Cost: Each move costs 1.

The 8-puzzle is part of a family of sliding-block puzzles, known to be NP-complete.


C. 8-Queens Problem

The goal is to place 8 queens on a chessboard such that no queen attacks another.

 States:

o Incremental formulation: States represent configurations with 0–8 queens,
adding queens one at a time.

o Complete-state formulation: All 8 queens are already placed and are then moved around.

 Initial State: An empty board.

 Actions:

o Incremental: Add a queen to an empty square.

o Complete-state: Move a queen to a different position.

 Transition Model: Updates the board based on the action taken.

 Goal Test: All 8 queens are placed with no queen attacking another.

 State-Space Reduction:

o Without constraints: 64 · 63 · … · 57 ≈ 1.8 × 10¹⁴ possible sequences.

o With constraints (place queens only on squares not attacked by those already
placed): reduced to 2,057 states.


D. Knuth’s Problem

This involves reaching any positive integer starting from the number 4, using the factorial,
square root, and floor operations.

 States: Positive numbers.

 Initial State: 4.

 Actions:

o Compute the factorial (for integers only).

o Take the square root.

o Apply the floor function.

 Transition Model: Follows the mathematical definitions of the operations.

 Goal Test: Check whether the state matches the target number.
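One concrete instance (Knuth's original example) reaches 5 from 4 by applying factorial twice, then square root five times, then floor. A quick check in Python, sketched with floating-point square roots:

```python
import math

# floor(sqrt(sqrt(sqrt(sqrt(sqrt((4!)!)))))) = 5
x = math.factorial(math.factorial(4))   # (4!)! = 24!
for _ in range(5):
    x = math.sqrt(x)                    # five successive square roots
print(math.floor(x))                    # 5
```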

3.2.2 Real-world problems

This section discusses various real-world problems, contrasting them with toy problems by
focusing on their practical relevance and complexity. Each problem is formulated with key
components like states, actions, goal tests, and path costs.
Route-Finding Problem

 Description: The goal is to find a path between specified locations while optimizing
factors like distance, time, or cost.

 States: Include the current location and time, and may track historical data affecting
fares and travel conditions.

 Actions: Involve taking flights or other transitions between locations.

 Challenges: Complexities arise from factors like fare structures, scheduling
constraints, and contingencies like flight cancellations.

Touring Problems

 Description: Extends route-finding by requiring visits to multiple locations, such as
"Visit every city on the map at least once."

 States: Include both the current location and the set of visited locations.

 Traveling Salesperson Problem (TSP): A special case where each city must be visited
exactly once, with the shortest route as the goal. TSP is NP-hard, making it
computationally intensive but widely applicable in logistics and manufacturing.

VLSI Layout Problem

 Description: Involves arranging millions of components on an integrated circuit (IC)
to optimize space, connectivity, and performance.

 Components:

o Cell Layout: Grouping components into non-overlapping cells.

o Channel Routing: Finding efficient paths for the wires connecting the cells.

 Challenges: Requires solving highly complex search problems efficiently.

Robot Navigation

 Description: A robot moves through continuous space, requiring navigation in two or
more dimensions.

 States: Include the robot's position and configuration (e.g., arm/wheel positions).

 Challenges: Involves errors in sensors and controls, requiring advanced algorithms to
handle continuous and high-dimensional state spaces.

Automatic Assembly Sequencing

 Description: Finding an optimal sequence to assemble complex objects without
undoing previous steps.

 Applications:

o Electric Motors: Demonstrated by FREDDY, a robot capable of intricate
assembly.

o Protein Design: Arranging amino acids to create functional proteins for
medical applications.

 Challenges: Generating feasible actions is computationally expensive due to
geometric and physical constraints.

3.3 SEARCHING FOR SOLUTIONS

In the process of solving problems with search algorithms, we begin by formulating a
problem and then systematically exploring possible solutions. The search algorithm
does this by considering sequences of actions that lead to different states in the problem's
state space.

Search Tree and Nodes

 Search Tree: A tree-like structure is formed, with each node representing a state in
the problem. The root node is the initial state, and each branch represents an action
that transitions from one state to another.

 Node: A node in the search tree corresponds to a particular state in the problem. It
contains information such as the current state, the action that led to that state, and
the parent node.

 Expanding: Expanding a node means generating new states (child nodes) by applying
legal actions to the current state. For instance, from a city like Arad in a route-finding
problem, we generate new states by moving to neighboring cities like Sibiu,
Timisoara, etc.

 Parent Node: A node that generates other nodes (child nodes). In the context of
searching for a path from Arad to Bucharest, Arad would be the parent node of Sibiu,
Timisoara, and Zerind.

 Child Node: A node generated by applying an action to a parent node, representing a
new state in the state space.

 Leaf Node: A node that has not generated any child nodes, meaning it has no
successors in the tree.

 Frontier (Open List): The frontier consists of the nodes that are available for expansion
at any given point in the search. These are the nodes the algorithm is actively
considering.
Search Process

The algorithm explores the state space by:

1. Expanding the initial state to generate child states.

2. Choosing one of the generated states to expand further.

3. Repeating this process of expansion until a goal state is reached or no further states
can be explored.

Redundant Paths and Loops

 Loopy Path: In some cases, a search algorithm may revisit a state that was already
explored, forming a loop. This can make the search inefficient or even infinite,
because the same state is revisited multiple times.

 Redundant Path: A path that reaches the same state through a different sequence of
actions. These paths don't offer any new information and can be discarded to avoid
unnecessary exploration.

Graph Search and Avoiding Redundant Paths


 Explored Set (Closed List): A data structure that keeps track of all the states
that have already been expanded. When a new state is generated, the algorithm
checks whether it is in the explored set. If it is, the state is discarded, preventing
redundant exploration.

 Graph-Search Algorithm: A refined version of the basic search algorithm that
uses the explored set to avoid revisiting previously explored states. This approach
guarantees that each state is expanded only once, making the search more efficient.

Properties of Graph Search

 State-Space Graph: When using graph search, the search tree grows directly on the
state-space graph, meaning there is no duplication of states.

 Frontier as Separator: The frontier separates the explored and unexplored
regions of the state-space graph. Every path from the initial state to an unexplored
state must pass through a state in the frontier, ensuring that the search progresses
in an organized manner.
3.3.1 Infrastructure for Search Algorithms

Search algorithms require a certain infrastructure to function efficiently, including
several components for each node in the search tree. A node in the search tree contains the
following parts:

1. STATE: The current state in the state space, which is a specific
configuration of the problem being solved.

2. PARENT: A reference to the node from which the current node was generated.
It connects the nodes in the search tree and helps trace the path back to the root
when a solution is found.

3. ACTION: The action that was applied to the parent node to generate the
current node.

4. PATH-COST: The cost of the path from the initial state to the current
node, typically denoted g(n). It helps in evaluating the quality of a path, such as its
distance or time cost.

Function for Generating a Child Node

 RESULT: The function that applies the action to the parent's state and returns
the resulting state.

 STEP-COST: The function that calculates the cost of taking the action from the parent
node's state.
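Combining the four node fields with RESULT and STEP-COST, a child-node generator can be sketched as follows; the `Node` dataclass and helper names are illustrative, not a fixed API.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # STATE: configuration in the state space
    parent: Optional["Node"] = None  # PARENT: node this one was generated from
    action: Any = None               # ACTION: action applied to the parent
    path_cost: float = 0.0           # PATH-COST: g(n)

def child_node(problem, parent, action):
    """Generate the child of `parent` reached by `action`."""
    state = problem.result(parent.state, action)                        # RESULT
    cost = parent.path_cost + problem.step_cost(parent.state, action)   # STEP-COST
    return Node(state, parent, action, cost)

def solution(node):
    """Trace PARENT pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```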

Queue for Storing Nodes

The frontier (or open list) stores the nodes that are waiting to be expanded. The frontier should be
organized in a data structure that allows the search algorithm to efficiently select the next
node to expand. A queue is used for this purpose, with the following operations:

 EMPTY?: Checks whether the queue is empty (no more nodes to explore).

 POP: Removes and returns the first element of the queue.

 INSERT: Inserts a new element into the queue.

There are different types of queues, distinguished by which node they select next:

 FIFO Queue (First-In, First-Out): Pops the oldest element (i.e., the first inserted
node).

 LIFO Queue (Last-In, First-Out, or Stack): Pops the newest element (i.e., the last
inserted node).

 Priority Queue: Pops the element with the highest priority according to a defined
ordering function (usually based on path cost or another criterion).

Implementing the Explored Set

To avoid revisiting the same state repeatedly, an explored set (or closed list) is used. It is
often implemented as a hash table, which allows efficient checking of whether a state
has already been explored. The hash table ensures that each state is processed only once,
improving the algorithm's efficiency. For example, in problems like the Traveling Salesperson
Problem, the hash table can be used to determine whether a given set of cities has already
been visited, ensuring that the same state is not explored multiple times.

To manage equivalence between different states, the states may need to be kept in canonical
form. This means that logically equivalent states are represented in a uniform manner so
that they are treated as the same state. For instance, the set of cities {Bucharest, Urziceni, Vaslui}
should be considered the same as {Urziceni, Vaslui, Bucharest}, which can be ensured by
using a sorted list or a bit vector.
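A small sketch of the canonical-form idea: hashing a `frozenset` (or, equivalently, a sorted tuple) makes both orderings of the same city set collide in the explored set, as intended.

```python
# Canonical form for a "set of visited cities" state: order must not matter.
a = ("Bucharest", "Urziceni", "Vaslui")
b = ("Urziceni", "Vaslui", "Bucharest")

explored = set()                   # explored set implemented as a hash table
explored.add(frozenset(a))         # canonical form: frozenset ignores order

print(frozenset(b) in explored)            # True: both orderings are one state
print(tuple(sorted(a)) == tuple(sorted(b)))  # a sorted tuple works as well
```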

3.3.2 Measuring Problem-Solving Performance

When evaluating a search algorithm, several criteria are considered:

1. Completeness: This property ensures that the algorithm will always find a solution if
one exists. If an algorithm is not complete, it might fail to find a solution even when
one is available.

2. Optimality: This property guarantees that the algorithm will find the best solution,
where "best" is defined in terms of the problem's cost function (e.g., the shortest
path or least expensive solution).

3. Time Complexity: This measures the time it takes to find a solution. It is typically
expressed in terms of the number of nodes the algorithm generates during the
search. The time complexity depends on the size of the state space, which is
characterized by the branching factor (b) and the depth (d) of the goal.

4. Space Complexity: This refers to the amount of memory needed by the search
algorithm. Space complexity is determined by the number of nodes that must be
stored in memory at any given time during the search.

The key parameters for time and space complexity are:

 b: The branching factor, or the maximum number of successors of any node.

 d: The depth of the shallowest goal node, i.e., the number of steps from the
initial state to the goal.

 m: The maximum length of any path in the state space.

Time complexity is often measured by the number of nodes generated, while space
complexity measures how many nodes are stored in memory at any point in time.

Measuring Search Cost and Total Cost


 Search Cost: This typically refers to the me complexity, or the computa onal
resources used by the search algorithm. It depends on the number of nodes
generated during the search.

 Total Cost: This includes both the search cost and the cost of the solu on (path cost).
The total cost combines me and space complexity with the actual cost of the
solu on found.

For example, in a route-finding problem, the search cost could be the me taken by the
algorithm, while the solu on cost could be the total length of the route (e.g., in kilometers).
The total cost would be a combina on of both, and a trade-off might be made based on the
agent’s priori es (e.g., choosing a faster path that may not be the shortest).

3.4 UNINFORMED SEARCH STRATEGIES

Uninformed search strategies (also called blind search) have no additional
information about states beyond what is provided in the problem definition. These strategies
can only generate successors and distinguish goal states from non-goal states.

Types of Search Strategies

Uninformed search strategies differ in the order in which they expand nodes in the search
tree. The distinction between these strategies lies in how nodes are chosen for expansion.

3.4.1 Breadth-First Search

Breadth-First Search (BFS) is a simple and common search strategy that expands nodes level
by level, starting from the root. Here's how it works:

 First, the root node is expanded.

 Second, all successors of the root node are expanded, then their successors, and so
on.

 The shallowest nodes are expanded first, meaning the nodes at the current depth of
the search tree are expanded before any deeper nodes.

How BFS Works:


 BFS uses a FIFO queue (First In, First Out) for the frontier (i.e., the list of nodes to be
expanded). This ensures that nodes are expanded in the order they were generated,
expanding all nodes at a given depth before moving on to the next level.

 The goal test is applied to each newly generated node rather than when a node is
selected for expansion.
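The mechanism above can be sketched as a graph-search BFS; the adjacency-dict graph format and the function name are illustrative assumptions of this sketch.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Graph-search BFS: FIFO frontier, goal test applied at generation time."""
    if start == goal:
        return [start]
    frontier = deque([[start]])        # FIFO queue of paths
    explored = {start}                 # explored set avoids repeated states
    while frontier:
        path = frontier.popleft()      # shallowest path first
        for child in graph.get(path[-1], []):
            if child in explored:
                continue
            if child == goal:          # test when generated, not when expanded
                return path + [child]
            explored.add(child)
            frontier.append(path + [child])
    return None                        # frontier exhausted: no solution

graph = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Fagaras", "Rimnicu Vilcea"],
         "Fagaras": ["Bucharest"]}
print(breadth_first_search(graph, "Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```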

Characteristics of BFS:

1. Completeness: BFS is guaranteed to find a solution if one exists, provided the
branching factor is finite. If the shallowest goal node is at some depth d, BFS will
eventually reach it after generating all nodes at shallower depths.

2. Optimality: BFS is optimal if the path cost is a non-decreasing function of node
depth. This is the case, for example, when all actions have the same cost, so that the
shallowest goal is also the cheapest one.

3. Time Complexity: The time complexity of BFS is O(b^d), where b is the branching
factor and d is the depth of the shallowest goal. This arises because, in the worst
case, BFS generates all nodes at each level until it reaches the goal.

o For a uniform tree with branching factor b, the number of nodes generated at
depth d is b^d.

o If the goal is at depth d, the total number of nodes generated is the sum
b + b² + … + b^d, which gives O(b^d).

o The worst-case time complexity is exponential, due to the exponential growth
in the number of nodes as the depth increases.

4. Space Complexity: The space complexity is also O(b^d), since BFS stores
all generated nodes in memory:

o The frontier holds O(b^d) nodes.

o The explored set (nodes that have already been expanded) holds O(b^(d−1))
nodes.

The space requirement is therefore dominated by the frontier, making the space complexity
also exponential.

Example:

Let's take an example where the branching factor is b = 10 and we are looking for a solution
at depth d = 6.

 At depth 0, we have 1 node (the root).

 At depth 1, there are 10 nodes.

 At depth 2, there are 100 nodes.

 At depth 3, there are 1,000 nodes, and so on.

The total number of nodes generated by the time we reach depth d = 6 is O(10^6).
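The geometric sum can be checked directly; the count below includes the root, so it is 1 + 10 + … + 10⁶:

```python
b, d = 10, 6
total = sum(b**k for k in range(d + 1))  # nodes at depths 0..d of a uniform tree
print(total)                             # 1111111, about 1.1 million = O(10^6)
```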

BFS and its Limitations:

The major issue with BFS is its exponential complexity, in both time and space:

 Time Complexity: For a goal at depth 12, BFS would take about 13 days to find a
solution (based on the textbook's assumptions about node-generation rate and
memory per node).

 Space Complexity: For large search spaces, the memory requirements of BFS become
impractical. For instance, under the same assumptions, searching to depth 12
requires 1 petabyte of memory.

Figure 3.13 in the text illustrates the time and memory requirements for BFS when the
branching factor is b = 10.

Thus, as the depth grows, the time and memory required to perform BFS become
impractical. For large-depth search problems, BFS is infeasible.

3.4.2 Uniform-Cost Search

Uniform-cost search (UCS) is a search strategy that works well when step costs vary. It is
an extension of breadth-first search (BFS) that modifies the expansion order by focusing on
path cost rather than depth.

Key Differences from Breadth-First Search:

1. Node Selection by Path Cost: Instead of expanding the shallowest node, UCS
expands the node with the lowest path cost (denoted g(n)). This means UCS
always chooses the node with the least total cost to reach from the start, rather
than the one at the shallowest depth in the tree.

2. Goal-Test Timing: The goal test is applied when a node is selected for expansion,
not when it is first generated. This is because a goal node may first be generated on
a suboptimal path, and UCS must ensure that the goal node it expands lies on
the optimal path.

3. Path Improvement: Whenever a new path to a node is generated, UCS checks
whether it is better than the one already known. If the new path has a lower cost,
the algorithm discards the previous path and updates the node's cost, so that only
the best known path to each node is kept in the frontier.
Example: Path from Sibiu to Bucharest

Let's walk through an example where the objective is to get from Sibiu to Bucharest, given
the costs of the roads between cities.

 Sibiu has two successors: Rimnicu Vilcea with a cost of 80 and Fagaras with a cost of
99.

 UCS expands Rimnicu Vilcea first (as it has the lower cost) and adds Pitesti with a
total cost of 177.

 Next, UCS expands Fagaras, adding Bucharest with a cost of 310.

 At this point Bucharest has been generated as a goal, but UCS does not stop.

 UCS then expands Pitesti, and a second path to Bucharest is generated with a total
cost of 278.

 UCS checks whether the new path is cheaper and, finding that it is, discards the
previous, more expensive path and updates Bucharest's cost to 278.

 Finally, UCS selects Bucharest for expansion with the optimal path cost and returns
the solution.
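The walkthrough above can be sketched with a priority queue; the weighted adjacency dict below is the same map fragment, and the (cost, path) return convention is an assumption of this sketch.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS: always expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]   # (g, state, path), ordered by g
    best_g = {start: 0}                # cheapest known cost to each state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                   # stale entry: a cheaper path was found later
        if state == goal:              # goal test on expansion, not generation
            return g, path
        for child, cost in graph.get(state, {}).items():
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g  # better path found: keep only this one
                heapq.heappush(frontier, (new_g, child, path + [child]))
    return None

graph = {"Sibiu": {"Rimnicu Vilcea": 80, "Fagaras": 99},
         "Rimnicu Vilcea": {"Pitesti": 97},
         "Fagaras": {"Bucharest": 211},
         "Pitesti": {"Bucharest": 101}}
print(uniform_cost_search(graph, "Sibiu", "Bucharest"))
# (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```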

Optimality of Uniform-Cost Search:

 UCS guarantees optimality because it expands nodes in order of their optimal path
cost. Whenever a node n is selected for expansion, the optimal path to n has already
been found.

 If there were a cheaper path to n discoverable by expanding a different node, the
algorithm would have selected that node first, due to its lower g(n) value.

 Since step costs are non-negative, paths never get cheaper as more actions are
added, ensuring that the path to the goal is optimal.

Completeness of Uniform-Cost Search:

 UCS is complete if every step cost is positive. As long as each step has a non-zero
cost, UCS will eventually find the goal without getting stuck in loops.

 However, if there are zero-cost actions (like a series of "NoOp" actions), UCS might
get stuck in an infinite loop and fail to reach a solution. Completeness is guaranteed
only when every step cost is at least some small positive constant ε.

Time and Space Complexity:

 Unlike BFS, which can be analyzed using depth d and branching factor b, UCS's
complexity depends on path cost rather than depth. Let C* be the cost of the
optimal solution.

 In the worst case, UCS's time and space complexity is O(b^(1 + ⌊C*/ε⌋)), where ε is
the smallest positive step cost.

 This can be much greater than the time and space complexity of BFS, which is
O(b^d), because UCS may explore large trees of small-cost steps before examining
paths involving large, useful steps.

UCS vs. BFS when Step Costs are Equal:

 When all step costs are equal, UCS behaves much like BFS, because the path cost is
simply proportional to the number of steps. In this case UCS expands all nodes at
one depth before moving to the next, just as BFS does.

 The key difference is that UCS still examines all the nodes at the goal's depth to see
whether one has a lower cost, so it does strictly more work than BFS, which stops as
soon as the goal is generated.

3.4.3 Depth-First Search (DFS)

Depth-first search (DFS) explores as deeply as possible along each branch before backtracking
to explore other branches. It uses a LIFO (Last-In, First-Out) queue, meaning the most recently
generated node is expanded first. This distinguishes it from breadth-first search (BFS),
which uses a FIFO (First-In, First-Out) queue.

Key Features of Depth-First Search:

 Tree Search vs Graph Search:

o Graph search avoids revisiting states and following redundant paths. It is complete in
finite state spaces because it will eventually expand every node, ensuring that a
solution, if one exists, is found.

o Tree search can suffer from infinite loops, as it may revisit the same states
repeatedly. For example, DFS might get stuck in a loop (like "Arad → Sibiu →
Arad → Sibiu") and never reach the goal.

o In infinite state spaces, DFS also fails if it follows an infinite non-goal path
(e.g., applying the factorial operator endlessly).

 Recursive Implementation: DFS is often implemented recursively, where the
algorithm calls itself for each child node, moving down the search tree until no more
successors are available, then backtracking to previous nodes.
DFS Algorithm Process:

1. DFS starts at the root and explores the deepest nodes first.

2. Once a node with no successors is reached, it is removed from the frontier, and the
algorithm backtracks to explore the next deepest node.

3. This process continues until the goal is found or all possibilities are exhausted.

Example:

In a binary tree, DFS might start from the root and go down the leftmost branch to a leaf
node. Once it hits a dead end, it backtracks and explores the next deepest node. This is
repeated until a solution is found.
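The process can be sketched recursively; the `visited` set below makes this a graph search so the example terminates (pure tree search would omit it):

```python
def depth_first_search(graph, state, goal, visited=None):
    """Recursive DFS: follow one branch to its end, then backtrack."""
    if visited is None:
        visited = set()
    if state == goal:
        return [state]
    visited.add(state)
    for child in graph.get(state, []):
        if child in visited:
            continue                     # avoid loops such as Arad <-> Sibiu
        path = depth_first_search(graph, child, goal, visited)
        if path is not None:
            return [state] + path        # propagate the found path upward
    return None                          # dead end: backtrack

graph = {"Arad": ["Sibiu", "Zerind"],
         "Sibiu": ["Arad", "Fagaras"],
         "Fagaras": ["Bucharest"]}
print(depth_first_search(graph, "Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```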

Properties of Depth-First Search:

 Non-optimal: DFS does not always find the shortest path to the goal. For example, in
a binary tree, DFS might explore the left subtree fully before finding a goal node,
even if a goal node exists on the right side that could have been reached sooner.

 Non-complete (in tree search): DFS may fail to find a solution if there are infinite
loops or cycles. If DFS revisits nodes along a cyclic path, it can get stuck, and no
solution will be found.

Time and Space Complexity:

 Graph Search: The time complexity depends on the size of the state space; in the
worst case, the algorithm expands every node in the space.

 Tree Search: The time complexity is O(b^m), where b is the branching factor and m is
the maximum depth of the search tree. This can be much larger than the size of the
state space, because DFS may generate many redundant nodes in the tree.

 Space Complexity:

o Graph Search: The space complexity is proportional to the number of nodes in
the state space, since the explored set must be stored.

o Tree Search: DFS uses significantly less memory than BFS. It only needs to
store a single path from the root to a leaf node, along with the unexpanded
siblings of the nodes on that path. This gives a space complexity of O(bm),
where m is the maximum depth of the tree. This is much less than BFS, which
may need to store all the nodes in the frontier.

Memory Efficiency of DFS:

 DFS is often preferred in problems where memory usage is critical, as it stores only
the path from the root to the current node, together with the siblings of the nodes
on that path.

 For example, in a state space with branching factor b and maximum depth m, DFS
requires O(bm) memory, which is far less than the memory required by BFS (which
may need to store all nodes at a particular depth).

Backtracking Search:

 Backtracking is a variant of DFS that uses even less memory, by generating one
successor at a time and keeping track of which successor to generate next. This
reduces the memory requirement to O(m) rather than O(bm).

 In backtracking, the current state can be modified directly, and the modification can
be undone when the search backtracks. This is particularly useful in problems like
robotic assembly and other domains with large state descriptions.

3.4.4 Depth-limited Search

Depth-limited search is an adaptation of depth-first search (DFS) that imposes a limit ℓ on
the depth of the search. Nodes at the depth limit are treated as if they have no successors,
solving the problem of infinite state spaces where DFS would otherwise continue
indefinitely. However, there are trade-offs:

 If the limit ℓ is smaller than the depth of the shallowest goal, the algorithm is
incomplete, meaning it may not find a solution.

 If ℓ is larger than the depth of the shallowest goal, the search is non-optimal because
it may explore unnecessarily deep nodes.

The time complexity of depth-limited search is O(b^ℓ) and its space complexity is O(bℓ),
where b is the branching factor and ℓ is the depth limit. Depth-first search can be viewed as
the special case with no depth limit (ℓ = ∞).

In some cases, we can choose a depth limit based on knowledge of the problem. For example,
on a map of Romania with 20 cities, if a solution exists, it cannot require more than 19 steps.
The diameter of the state space can give an even tighter depth limit.

Depth-limited search can be implemented as a modification of the general tree- or
graph-search algorithm, or as a simple recursive function.
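A recursive sketch, following the common convention of returning a distinct "cutoff" marker when the depth limit is hit, so that cutoff can be distinguished from definite failure:

```python
CUTOFF = "cutoff"   # the limit was reached somewhere: a deeper search might succeed

def depth_limited_search(graph, state, goal, limit):
    """DFS that treats nodes at depth `limit` as having no successors."""
    if state == goal:
        return [state]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for child in graph.get(state, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return CUTOFF if cutoff_occurred else None   # None means definite failure

graph = {"A": ["B"], "B": ["C"], "C": ["G"]}
print(depth_limited_search(graph, "A", "G", 2))  # cutoff: the goal lies at depth 3
print(depth_limited_search(graph, "A", "G", 3))  # ['A', 'B', 'C', 'G']
```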

3.4.5 Iterative Deepening Depth-First Search (IDDFS)

Iterative deepening depth-first search (IDDFS) is a strategy that combines the advantages of
depth-first and breadth-first search. It repeats depth-limited search with increasing limits,
starting from 0 and increasing until a solution is found. This approach has several benefits:

 Memory Efficiency: IDDFS uses O(bd) memory, like depth-first search, which is
significantly more memory-efficient than breadth-first search.

 Completeness: IDDFS is complete when the branching factor is finite.

 Optimality: It is optimal when the path cost is a non-decreasing function of depth.

While it may seem wasteful because nodes are generated multiple times, the overhead is
not significant in practice. The cost is modest because the bottom level of the search tree is
generated only once, the next-to-bottom level twice, and so on. Thus, the total number of
nodes generated is asymptotically the same as for breadth-first search, O(b^d).

IDDFS is particularly useful when the solution depth is unknown and the search space is
large.
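The iteration can be sketched as follows, reusing a recursive depth-limited search as the inner step; the names and return conventions are illustrative:

```python
from itertools import count

CUTOFF = "cutoff"

def dls(graph, state, goal, limit):
    """Inner depth-limited search used by iterative deepening."""
    if state == goal:
        return [state]
    if limit == 0:
        return CUTOFF
    cutoff = False
    for child in graph.get(state, []):
        r = dls(graph, child, goal, limit - 1)
        if r == CUTOFF:
            cutoff = True
        elif r is not None:
            return [state] + r
    return CUTOFF if cutoff else None

def iterative_deepening_search(graph, start, goal):
    for limit in count():                  # limits 0, 1, 2, ...
        result = dls(graph, start, goal, limit)
        if result != CUTOFF:               # a solution, or definite failure (None)
            return result

graph = {"A": ["B", "C"], "B": ["D"], "D": ["G"]}
print(iterative_deepening_search(graph, "A", "G"))  # ['A', 'B', 'D', 'G']
```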
3.4.6 Bidirectional Search

Bidirectional search aims to speed up search by running two simultaneous searches: one
forward from the initial state and one backward from the goal. When the two searches
meet, a solution has been found.

 Efficiency: The time complexity is reduced to O(b^(d/2)), where d is the depth of the
solution, because each of the two search fronts only needs to cover about half the
distance.

 Space Complexity: The space complexity is also O(b^(d/2)), as at least one of the two
frontiers must be kept in memory to check for intersection.

However, implementing bidirectional search requires computing predecessors for the
backward search, which may be difficult if actions are not easily reversible.

Bidirectional search is most effective when there is a single, explicitly known goal state. It is
challenging to apply when there are multiple goal states or the goal is only described
abstractly, as in the n-queens problem.

3.4.7 Comparing Uninformed Search Strategies
