2201AI49 AI Termpaper
1 Introduction
Search is a universal problem-solving mechanism in artificial intelligence (AI). In AI problems, the
sequence of steps required for the solution of a problem is not known a priori, but often must be
determined by a trial-and-error exploration of alternatives. The problems that have been addressed by
AI search algorithms fall into three general classes: single-agent pathfinding problems, game playing,
and constraint-satisfaction problems. Classic examples in the AI literature of single-agent pathfinding
problems are the sliding-tile puzzles, including the 3 × 3 eight puzzle (see Figure 1) and its larger
relatives the 4 × 4 fifteen puzzle and 5 × 5 twenty-four puzzle. The eight puzzle consists of a 3 ×
3 square frame containing eight numbered square tiles and an empty position called the blank. The
legal operators are to slide any tile that is horizontally or vertically adjacent to the blank into the
blank position. The problem is to rearrange the tiles from some random initial configuration into
a particular desired goal configuration. The sliding-tile puzzles are common testbeds for research in
AI search algorithms because they are very simple to represent and manipulate, yet finding optimal
solutions to the N × N generalization of the sliding-tile puzzles is NP-complete.
A classic example of a constraint-satisfaction problem is Sudoku: fill each
cell in a 9 × 9 matrix with a digit from one through nine, such that each row, column, and each
of the nine 3 × 3 submatrices contains all the digits one through nine. Real-world examples of constraint-satisfaction
problems are ubiquitous, including boolean satisfiability, planning, and scheduling applications.
We begin by describing the problem-space model on which search algorithms are based. Brute-force
searches are then considered, including breadth-first, uniform-cost, depth-first, depth-first iterative-
deepening, bidirectional, frontier, and disk-based search algorithms.
3 Search Algorithms
The most general search algorithms are brute-force searches, since they do not require any domain-
specific knowledge. All that is required for a brute-force search is a state description, a set of legal
operators, an initial state, and a description of the goal state. The most important brute-force tech-
niques are breadth-first, uniform-cost, depth-first, depth-first iterative-deepening, bidirectional, and
frontier search. In the descriptions of the algorithms in the following text, to generate a node means
to create the data structure corresponding to that node, whereas to expand a node means to generate
all the children of that node.
3.1 Breadth-First Search
Breadth-first search expands nodes in order of their depth from the root, generating one level of the
tree at a time until a solution is found (see Figure 2). It is most easily implemented by maintaining
a first-in first-out queue of nodes, initially containing just the root, and always removing the node at
the head of the queue, expanding it, and adding its children to the tail of the queue. Since it never
generates a node in the tree until all the nodes at shallower levels have been generated, breadth-first
search always finds a shortest path to a goal. Since each node can be generated in constant time,
the amount of time used by breadth-first search is proportional to the number of nodes generated,
which is a function of the branching factor b and the solution depth d. Since the number of nodes in
a uniform tree at level d is b^d, the total number of nodes generated in the worst case is b + b^2 + b^3 +
· · · + b^d, which is O(b^d), the asymptotic time complexity of breadth-first search. The main drawback
of breadth-first search is its memory requirement. Since each level of the tree must be stored in order
to generate the next level, and the amount of memory is proportional to the number of nodes stored,
the space complexity of breadth-first search is also O(b^d). As a result, breadth-first search is severely
space-bound in practice, and will exhaust the memory available on typical computers in a matter of
minutes.
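The queue-based procedure above can be sketched as follows. This is a minimal illustration, not from the paper itself: the `successors` and `is_goal` callables and the toy example are assumptions made for the sketch.

```python
from collections import deque

def breadth_first_search(start, successors, is_goal):
    """Return a shortest path from start to a goal, or None.

    `successors` maps a state to its child states; `is_goal` tests a state.
    (Both names are illustrative, not from any particular library.)
    """
    if is_goal(start):
        return [start]
    queue = deque([start])          # FIFO queue: the head is expanded next
    parent = {start: None}          # also serves as the set of generated nodes
    while queue:
        node = queue.popleft()      # remove the node at the head of the queue
        for child in successors(node):
            if child in parent:     # duplicate detection across the whole level
                continue
            parent[child] = node
            if is_goal(child):      # reconstruct the path by walking parents
                path = [child]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            queue.append(child)     # add children to the tail of the queue
    return None
```

For instance, with the toy successor function n → {n + 1, 2n}, the shortest path from 1 to 10 is [1, 2, 4, 5, 10], found after generating every shallower node first.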
3.2 Uniform-Cost Search
If all edges do not have the same cost, then breadth-first search generalizes to uniform-cost search.
Instead of expanding nodes in order of their depth from the root, uniform-cost search expands nodes in
order of their cost from the root. At each step, the next node n to be expanded is one whose cost g(n)
is lowest, where g(n) is the sum of the edge costs from the root to node n. The nodes are stored in a
priority queue. This algorithm is similar to Dijkstra’s single-source shortest-path algorithm. The main
difference is that uniform-cost search runs until a goal node is chosen for expansion, while Dijkstra’s
algorithm runs until every node in a finite graph is chosen for expansion. Whenever a node is chosen
for expansion by uniform-cost search, a lowest-cost path to that node has been found. The worst-case
time complexity of uniform-cost search is O(b^(c/e)), where c is the cost of an optimal solution, and e
is the minimum edge cost. Unfortunately, it also suffers the same memory limitation as breadth-first
search.
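A minimal sketch of uniform-cost search, using a binary-heap priority queue ordered by g(n) and applying the goal test when a node is chosen for expansion, as described above. The function name and the weighted-graph encoding are assumptions made for this illustration.

```python
import heapq

def uniform_cost_search(start, successors, is_goal):
    """Return the cost of a lowest-cost path from start to a goal, or None.

    `successors(state)` yields (child, edge_cost) pairs with positive costs.
    """
    frontier = [(0, start)]            # priority queue keyed on g(n)
    best_g = {start: 0}                # cheapest cost found so far per state
    while frontier:
        g, node = heapq.heappop(frontier)
        if g > best_g.get(node, float('inf')):
            continue                   # stale queue entry; a cheaper path exists
        if is_goal(node):              # goal test on expansion, not generation,
            return g                   # so the returned cost is optimal
        for child, cost in successors(node):
            new_g = g + cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, child))
    return None
```

Testing on expansion rather than generation matters: a goal may first be generated via an expensive edge while a cheaper path to it is still in the queue.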
3.3 Depth-First Search
Depth-first search avoids the memory limitation of breadth-first search by always expanding next a
deepest node, storing only the current path from the root, so its space complexity is only linear in
the maximum search depth. The time complexity of
depth-first search to depth d is O(b^d), since it generates the same set of nodes as breadth-first search,
but simply in a different order. Thus, as a practical matter, depth-first search is time-limited rather
than space-limited. The primary disadvantage of depth-first search is that it may not terminate on an
infinite tree, but simply go down the left-most path forever. For example, even though there are a finite
number of states of the eight puzzle, the tree fragment shown in Figure 1 can be infinitely extended
down any path, generating an infinite number of duplicate nodes representing the same states. The
usual solution to this problem is to impose a cutoff depth on the search. Although the ideal cutoff
is the solution depth d, this value is rarely known in advance of actually solving the problem. If the
chosen cutoff depth is less than d, the algorithm will fail to find a solution, whereas if the cutoff depth
is greater than d, a large price is paid in execution time, and the first solution found may not be an
optimal one.
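A depth-first search with a cutoff depth, as described above, might be sketched recursively as follows. The names are illustrative, and pruning of duplicate states along the current path is deliberately omitted to keep the sketch short.

```python
def depth_limited_search(node, successors, is_goal, cutoff):
    """Return a path to a goal reachable within `cutoff` moves of `node`,
    or None. The first solution found is returned, so with a cutoff
    greater than the solution depth it need not be an optimal one."""
    if is_goal(node):
        return [node]
    if cutoff == 0:
        return None                    # cutoff depth reached; backtrack
    for child in successors(node):
        path = depth_limited_search(child, successors, is_goal, cutoff - 1)
        if path is not None:
            return [node] + path
    return None
```

With the toy successor function n → {n + 1, 2n}, a cutoff of 4 suffices to reach 10 from 1, while a cutoff of 3 fails, illustrating the consequence of choosing a cutoff below the solution depth.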
3.4 Depth-First Iterative-Deepening
Depth-first iterative-deepening (DFID) combines the best features of breadth-first and depth-first
search. DFID first performs a depth-first search to depth one, then starts over, executing a complete
depth-first search to depth two, and continues to run depth-first searches to successively greater depths,
until a solution is found (see Figure 7). Since it never generates a node until all shallower nodes have
been generated, the first solution found by DFID is guaranteed to be via a shortest path. Furthermore,
since at any given point it is executing a depth-first search, saving only a stack of nodes, and the
algorithm terminates when it finds a solution at depth d, the space complexity of DFID is only O(d).
Although DFID spends extra time in the iterations before the one that finds a solution, this extra
work is usually insignificant. To see this, note that the number of nodes at depth d is b^d, and each of
these nodes is generated once, during the final iteration. The number of nodes at depth d-1 is b^(d-1),
and each of these is generated twice, once during the final iteration, and once during the penultimate
iteration. In general, the number of nodes generated by DFID is b^d + 2b^(d-1) + 3b^(d-2) + · · · + db.
This is asymptotically O(b^d) if b is greater than one, since for large values of d the lower order terms are
insignificant. In other words, most of the work goes into the final iteration, and the cost of the previous
iterations is relatively small. The ratio of the number of nodes generated by DFID to those generated
by breadth-first search on a tree is approximately b/(b-1). In fact, DFID is asymptotically optimal in
terms of time and space among all brute-force shortest-path algorithms on a tree. If the edge costs
differ from one another, then one can run an iterative deepening version of uniform-cost search, where
the depth cutoff is replaced by a cutoff on the g(n) cost of a node. At the end of each iteration, the
threshold for the next iteration is set to the minimum cost of all nodes generated but not expanded on
the previous iteration. On a graph with multiple paths to the same node, however, breadth-first search
may be much more efficient than depth-first or depth-first iterative-deepening search. The reason is
that a breadth-first search can easily detect all duplicate nodes, whereas a depth-first search can only
check for duplicates along the current search path. Thus, the complexity of breadth-first search grows
only as the number of states at a given depth, while the complexity of depth-first search depends on
the number of paths of a given length. For example, in a square grid, the number of nodes within a
radius r of the origin is O(r^2), whereas the number of paths of length r is O(3^r), since there are three
children of every node, not counting its parent. Thus, in a graph with a large number of very short
cycles, breadth-first search is preferable to depth-first search, if sufficient memory is available. There
are two approaches in the literature to the problem of pruning duplicate nodes in depth-first search.
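The iterative-deepening scheme can be sketched as follows. This is an illustration only: the `max_depth` parameter is a safety net for the demo, not part of the algorithm, and the nested depth-limited search mirrors the cutoff search described earlier.

```python
def dfid(start, successors, is_goal, max_depth=50):
    """Depth-first iterative-deepening: run complete depth-first searches
    to depths 1, 2, 3, ... until a solution is found. Space is linear in
    the current cutoff, and the first solution found is via a shortest path."""
    def dls(node, depth):              # depth-limited depth-first search
        if is_goal(node):
            return [node]
        if depth == 0:
            return None
        for child in successors(node):
            path = dls(child, depth - 1)
            if path is not None:
                return [node] + path
        return None

    for cutoff in range(max_depth + 1):
        path = dls(start, cutoff)
        if path is not None:
            return path                # no shallower solution exists
    return None
```

Because every iteration before the final one explores only shallower levels, the first solution returned is guaranteed to be a shortest one, matching the O(d) space and O(b^d) time analysis above.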
3.5 Bidirectional Search
Bidirectional search proceeds simultaneously forward from the initial state and backward from the
goal state, until the two search frontiers meet at a common state; the solution is then the forward
path concatenated with the reverse of the backward path. For
example, the time complexity of bidirectional search is O(b^(d/2)), since each search need only proceed to
half the solution depth. Since at least one of the searches must be breadth-first in order to find a
common state, the space complexity of bidirectional search is also O(b^(d/2)). As a result, bidirectional
search is space-bound in practice.
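Under the assumption of invertible operators, so that the same successor function can be used in both directions, bidirectional breadth-first search might be sketched as follows. It expands one full level of the smaller frontier at a time and returns the length of a shortest path once the frontiers meet; the names are illustrative.

```python
from collections import deque

def bidirectional_search(start, goal, successors):
    """Return the number of moves on a shortest path from start to goal,
    or None. Assumes an undirected graph (invertible operators)."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}       # distances of generated states
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        # expand the smaller frontier, keeping the two searches balanced
        if len(frontier_f) <= len(frontier_b):
            frontier, dist, other = frontier_f, dist_f, dist_b
        else:
            frontier, dist, other = frontier_b, dist_b, dist_f
        meet = None
        for _ in range(len(frontier)):           # one whole level at a time
            node = frontier.popleft()
            for child in successors(node):
                if child in other:               # the frontiers meet at `child`
                    cand = dist[node] + 1 + other[child]
                    meet = cand if meet is None else min(meet, cand)
                if child not in dist:
                    dist[child] = dist[node] + 1
                    frontier.append(child)
        if meet is not None:                     # finish the level, then take the
            return meet                          # cheapest meeting found in it
    return None
```

Finishing the whole level before returning matters: the first meeting discovered within a level need not lie on a shortest path, but the minimum over the level does.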
4 Conclusion
Search algorithms are fundamental tools in artificial intelligence, addressing a diverse range of problems
from single-agent pathfinding to constraint-satisfaction problems. These algorithms operate within
problem spaces, composed of states and operators, representing the environment in which the search
takes place. Brute-force search algorithms, such as breadth-first, uniform-cost, depth-first, depth-
first iterative-deepening, bidirectional, frontier, and disk-based search, provide general approaches to
solving problems without domain-specific knowledge.
Breadth-first search explores nodes level by level, ensuring it finds the shortest path to a goal but
suffers from high memory requirements. Uniform-cost search extends this approach to handle variable
edge costs but faces similar memory constraints. Depth-first search alleviates these memory limitations
by exploring deeper nodes first, yet may get trapped in infinite paths. Depth-first iterative-deepening
combines the benefits of breadth-first and depth-first search, providing an optimal solution with linear
space complexity.
Bidirectional search and frontier search offer alternative strategies to manage memory usage, aiming
to reduce space requirements by efficiently storing and managing node information. Disk-based search
algorithms further extend the scope of search by leveraging disk storage to handle large problem spaces.
However, these approaches still grapple with the challenge of combinatorial explosion, where the time
complexities grow exponentially with problem size, limiting the feasibility of brute-force methods for
large-scale problems.
Despite these challenges, search algorithms remain indispensable tools in AI, providing founda-
tional methods for problem-solving and exploration in diverse domains. As AI continues to evolve,
research efforts focused on optimizing search techniques and developing innovative approaches will be
crucial for addressing increasingly complex real-world problems.