
Module 3: Problem Solving

• This chapter describes one kind of goal-based agent called a problem-solving agent.
• Problem-solving agents use atomic representations, as described in Section 2.4.7 (previous chapter); that is, states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.
Problem Solving Agent
• In artificial intelligence (AI), a problem-solving agent is a type of goal-
based agent that uses search strategies to find solutions to complex
problems.
• It has a well-defined problem, an environment, and a set of possible
actions to move towards a solution.
Goal Formulation
• Imagine an agent in the city of Arad, Romania, enjoying a touring holiday
• The agent’s performance measure contains many factors: it wants to improve its
suntan, improve its Romanian, take in the sights, enjoy the nightlife (such as it is),
avoid hangovers, and so on
• The decision problem is a complex one involving many tradeoffs and careful
reading of guidebooks.
• Now, suppose the agent has a nonrefundable ticket to fly out of Bucharest the
following day. In that case, it makes sense for the agent to adopt the goal of
getting to Bucharest.
• Courses of action that don’t reach Bucharest on time can be rejected without
further consideration and the agent’s decision problem is greatly simplified.
• Goals help organize behavior by limiting the objectives that the agent is trying to
achieve and hence the actions it needs to consider.
• Goal formulation, based on the current situation and the agent’s performance
measure, is the first step in problem solving.
• Problem formulation is the process of deciding what actions and
states to consider, given a goal.
• Our agent has now adopted the goal of driving to Bucharest and is
considering where to go from Arad. Three roads lead out of Arad, one
toward Sibiu, one to Timisoara, and one to Zerind.
• None of these achieves the goal, so unless the agent is familiar with
the geography of Romania, it will not know which road to follow
• In other words, the agent will not know which of its possible actions is
best, because it does not yet know enough about the state that
results from taking each action.
• But suppose the agent has a map of Romania. The point of a map is to
provide the agent with information about the states it might get itself
into and the actions it can take
• The agent can use this information to consider subsequent stages of a
hypothetical journey via each of the three towns, trying to find a
journey that eventually gets to Bucharest
• Once it has found a path on the map from Arad to Bucharest, it can
achieve its goal by carrying out the driving actions that correspond to
the legs of the journey
• "In general, an agent with several immediate options of unknown
value can decide what to do by first examining future actions that
eventually lead to states of known value.“
• In simple terms, when an agent has several choices and doesn't know which
one is best, it can make a better decision by thinking ahead. Instead of
randomly choosing, it looks at future actions and their outcomes,
eventually finding the option that leads to a result it knows is good. This
way, even if the current choices are unclear, the agent can rely on future
knowledge to guide its decision-making now.
• Imagine you're playing a board game and can move your piece in several
directions, but you're unsure which move is best right now. Instead of just
picking one randomly, you think ahead a few moves and see that one of the
options eventually leads to a winning spot. So, even though you don't know
which move is best right now, by looking at the future moves, you choose
the one that will get you to the winning position in the end
• The process of looking for a sequence of actions that reaches the goal
is called search
• A search algorithm takes a problem as input and returns a solution in
the form of an action sequence
• Once a solution is found, the actions it recommends can be carried out. This is called the execution phase.
3.1.1 Well-defined problems and solutions
• A problem can be defined formally by five components
• Initial state
• Actions
• Transition Model
• Goal test
• Path Cost

• Let's discuss these points using the Romania map problem.


• The initial state that the agent starts in. For example, the initial state
for our agent in Romania might be described as In(Arad).
• A description of the possible actions available to the agent. Given a
particular state s, ACTIONS(s) returns the set of actions that can be
executed in s. We say that each of these actions is applicable in s. For
example, from the state In(Arad), the applicable actions are
{Go(Sibiu), Go(Timisoara), Go(Zerind)}.
• A description of what each action does; the formal name for this is
the transition model, specified by a function RESULT(s, a) that returns
the state that results from doing action a in state s. We also use the
term successor to refer to any state reachable from a given state by a
single action. For example, we have RESULT(In(Arad), Go(Zerind)) = In(Zerind).
• Together, the initial state, actions, and transition model implicitly
define the state space of the problem—the set of all states reachable
from the initial state by any sequence of actions. The state space
forms a directed network or graph in which the nodes are states and
the links between nodes are actions
• The goal test, which determines whether a given state is a goal state.
Sometimes there is an explicit set of possible goal states, and the test
simply checks whether the given state is one of them. The agent’s
goal in Romania is the singleton set {In(Bucharest )}.
• A path cost function that assigns a numeric cost to each path. The
problem-solving agent chooses a cost function that reflects its own
performance measure. For the agent trying to get to Bucharest, time
is of the essence, so the cost of a path might be its length in
kilometers.
• In this chapter, we assume that the cost of a path can be described as the sum of the costs (step costs) of the individual actions along the path.
• The step cost of taking action a in state s to reach state s' is denoted by c(s, a, s').
• The step costs for Romania are shown in Figure 3.2 as route distances.
We assume that step costs are nonnegative
• A solution to a problem is an action sequence that leads from the
initial state to a goal state. Solution quality is measured by the path
cost function, and an optimal solution has the lowest path cost
among all solutions
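To make these five components concrete, here is a minimal Python sketch of the Romania problem (a rough illustration, not the book's code; the road-map excerpt and distances follow Figure 3.2, and all function names are my own):

# An excerpt of the Romania road map (route distances in km, per Figure 3.2).
ROADS = {
    'Arad':           {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu':          {'Arad': 140, 'Fagaras': 99, 'Rimnicu Vilcea': 80, 'Oradea': 151},
    'Fagaras':        {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
    'Pitesti':        {'Rimnicu Vilcea': 97, 'Craiova': 138, 'Bucharest': 101},
    'Bucharest':      {'Fagaras': 211, 'Pitesti': 101},
}

INITIAL_STATE = 'Arad'                        # In(Arad)

def actions(state):
    """ACTIONS(s): the actions applicable in s, here Go(<neighboring city>)."""
    return sorted(ROADS[state])

def result(state, action):
    """RESULT(s, a): the transition model; Go(city) puts the agent In(city)."""
    return action

def goal_test(state):
    """The goal is the singleton set {In(Bucharest)}."""
    return state == 'Bucharest'

def step_cost(state, action, next_state):
    """c(s, a, s'): the route distance in kilometers."""
    return ROADS[state][next_state]

For example, the solution [Go(Sibiu), Go(Rimnicu Vilcea), Go(Pitesti), Go(Bucharest)] has path cost 140 + 80 + 97 + 101 = 418, which is in fact optimal for this map.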
3.2 EXAMPLE PROBLEMS
• A toy problem is intended to illustrate or exercise various problem-
solving methods. It can be given a concise, exact description and
hence is usable by different researchers to compare the performance
of algorithms.

• A real-world problem is one whose solutions people actually care about. Such problems tend not to have a single agreed-upon description, but we can give the general flavor of their formulations.
3.2.1 Toy problems – Vacuum world example
States: The state is determined by both the agent location and the dirt
locations. The agent is in one of two locations, each of which might or might
not contain dirt. Thus, there are 2 × 2² = 8 possible world states (2 agent locations × 2² dirt configurations). A larger environment with n locations has n · 2ⁿ states.
Initial state: Any state can be designated as the initial state.
Actions: In this simple environment, each state has just three actions: Left,
Right, and Suck. Larger environments might also include Up and Down.
Transition model: The actions have their expected effects, except that
moving Left in the leftmost square, moving Right in the rightmost square, and
Sucking in a clean square have no effect. The complete state space is shown
in Figure 3.3.
Goal test: This checks whether all the squares are clean.
Path cost: Each step costs 1, so the path cost is the number of steps in the path. Compared with the real world, this toy problem has discrete locations, discrete dirt, reliable cleaning, and it never gets any dirtier.
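As a rough sketch (my own naming, not from the text), the same formulation can be written in Python with a state of the form (agent_location, frozenset_of_dirty_squares):

from itertools import chain, combinations

LOCATIONS = ('A', 'B')                        # the two squares

def actions(state):
    return ('Left', 'Right', 'Suck')          # the same three actions in every state

def result(state, action):
    loc, dirt = state
    if action == 'Left':
        return ('A', dirt)                    # moving Left in the leftmost square: no effect
    if action == 'Right':
        return ('B', dirt)                    # moving Right in the rightmost square: no effect
    return (loc, dirt - {loc})                # Suck: no effect if the square is already clean

def goal_test(state):
    return not state[1]                       # goal: no dirty squares remain

# Enumerate all 2 x 2^2 = 8 world states.
dirt_subsets = chain.from_iterable(combinations(LOCATIONS, r) for r in range(3))
states = [(loc, frozenset(d)) for d in dirt_subsets for loc in LOCATIONS]
assert len(states) == 8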
The 8-puzzle, an instance of which is shown in Figure 3.4,
consists of a 3×3 board with eight numbered tiles and a
blank space. A tile adjacent to the blank space can slide
into the space. The object is to reach a specified goal state,
such as the one shown on the right of the figure. The
standard formulation is as follows:
• States: A state description specifies the location of each of the eight tiles and the
blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given
goal can be reached from exactly half of the possible initial states (Exercise 3.4).
• Actions: The simplest formulation defines the actions as movements of the blank
space Left, Right, Up, or Down. Different subsets of these are possible depending
on where the blank is.
• Transition model: Given a state and action, this returns the resulting state; for
example, if we apply Left to the start state in Figure 3.4, the resulting state has the
5 and the blank switched.
• Goal test: This checks whether the state matches the goal configuration shown in
Figure 3.4. (Other goal configurations are possible.)
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
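A rough Python sketch of this formulation (my own representation: a 9-tuple listing the tiles row by row, with 0 for the blank):

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)            # one possible goal configuration

def actions(state):
    """Moves of the blank; the available subset depends on where the blank is."""
    row, col = divmod(state.index(0), 3)
    moves = []
    if col > 0: moves.append('Left')
    if col < 2: moves.append('Right')
    if row > 0: moves.append('Up')
    if row < 2: moves.append('Down')
    return moves

def result(state, action):
    """Slide the adjacent tile into the blank (i.e., swap blank and tile)."""
    i = state.index(0)
    j = i + {'Left': -1, 'Right': +1, 'Up': -3, 'Down': +3}[action]
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

def goal_test(state):
    return state == GOAL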
• The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test problems for new search algorithms in AI.
• This family is known to be NP-complete. (NP, nondeterministic polynomial time, is the class of computational problems that can be solved in polynomial time by a nondeterministic machine and verified in polynomial time by a deterministic machine.) So one does not expect to find methods significantly better in the worst case than the search algorithms described in this chapter and the next.
• The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved.
• The 15-puzzle (on a 4×4 board) has around 1.3 trillion states, and random
instances can be solved optimally in a few milliseconds by the best search
algorithms.
• The 24-puzzle (on a 5 × 5 board) has around 10^25 states, and random instances take several hours to solve optimally.
The goal of the 8-queens problem is to place eight queens on a
chessboard such that
no queen attacks any other. (A queen attacks any piece in the
same row, column or diagonal.) Figure 3.5 shows an attempted
solution that fails: the queen in the rightmost column is
attacked by the queen at the top left.
The 8 Queens Problem is a classic puzzle where the goal is to place 8
queens on an 8x8 chessboard such that no two queens threaten each
other. This means:
• No two queens can be in the same row.
• No two queens can be in the same column.
• No two queens can be on the same diagonal.
Give a possible solution for this
• Although efficient special-purpose algorithms exist for this problem
and for the whole n-queens family, it remains a useful test problem
for search algorithms.
• There are two main kinds of formulation.
• An incremental formulation involves operators that augment the
state description, starting with an empty state; for the 8-queens
problem, this means that each action adds a queen to the state.
• A complete-state formulation starts with all 8 queens on the board
and moves them around. In either case, the path cost is of no interest
because only the final state counts.
The first incremental formulation one might try is the
following:
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the
specified square.
• Goal test: 8 queens are on the board, none attacked.
In this formulation, we have 64 · 63 · · · 57 ≈ 1.8×10^14 possible sequences to investigate.
A better formulation would prohibit placing a queen in any square that is already attacked:

• States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n columns, with no queen attacking another.
• Actions: Add a queen to any square in the leftmost empty column such that it is not attacked by any other queen.

This formulation reduces the 8-queens state space from 1.8×10^14 to just 2,057, and solutions are easy to find.
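A rough backtracking sketch of this improved formulation (my own naming); a state is a tuple of queen rows, one per filled column, and enumerating all consistent states confirms the counts above:

def consistent_states(rows=()):
    """Yield every arrangement of non-attacking queens in the leftmost columns."""
    yield rows
    col = len(rows)
    if col == 8:
        return
    for row in range(8):
        # Is a new queen at (row, col) attacked on a row or diagonal?
        if not any(r == row or abs(r - row) == col - c for c, r in enumerate(rows)):
            yield from consistent_states(rows + (row,))

states = list(consistent_states())
print(len(states))                            # 2057 reachable states
print(sum(1 for s in states if len(s) == 8))  # 92 complete solutions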
Problem Setup:
• You are given the digits 1 to 9.
• You can use arithmetic operators (+, −, ×, ÷) between the digits.
• The objective is to combine these numbers and operators in such a
way that the result equals 100.
• You are not allowed to change the order of the digits, i.e., they must
remain in the sequence 1, 2, 3, ..., 9.
• You can group digits together (e.g., 12, 34), use parentheses, and
apply any valid mathematical operations
1+2+3−4+5+6+78+9=100
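A brute-force search finds this solution mechanically. The sketch below is illustrative and deliberately simplified: it only tries '+', '-', and digit concatenation (not '×', '÷', or parentheses), enumerating all 3^8 = 6,561 operator insertions:

from itertools import product

def century_solutions():
    """Insert '+', '-', or nothing between the digits 1..9 and test each expression."""
    hits = []
    for ops in product(('', '+', '-'), repeat=8):
        expr = '1' + ''.join(op + digit for op, digit in zip(ops, '23456789'))
        if eval(expr) == 100:
            hits.append(expr)
    return hits

print(century_solutions())                    # includes '1+2+3-4+5+6+78+9'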
This problem, devised by the renowned computer scientist Donald Knuth (1964), is sometimes called the "Ten-Puzzle". It is a well-known challenge in computer science that Knuth used to illustrate various problem-solving techniques, particularly backtracking algorithms, and it shows how infinite state spaces can arise.
Knuth conjectured that, starting with the number 4, a sequence of factorial, square root, and floor operations can reach any desired positive integer. For example, we can reach 5 from 4 as follows:
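The expression itself (as given in AIMA) is

floor(sqrt(sqrt(sqrt(sqrt(sqrt((4!)!)))))) = 5

that is, 4! = 24, then 24!, then five successive square roots, then the floor. A quick Python check:

import math

n = math.factorial(math.factorial(4))         # (4!)! = 24!
for _ in range(5):
    n = math.sqrt(n)                          # five successive square roots
print(math.floor(n))                          # 5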
3.2.2 Real-world problems – Route-finding problems
• We have already seen how the route-finding problem is defined in
terms of specified locations and transitions along links between them.
• Route-finding algorithms are used in a variety of applications. Some,
such as Web sites and in-car systems that provide driving directions,
are relatively straightforward extensions of the Romania example.
• Others, such as routing video streams in computer networks, military
operations planning, and airline travel-planning systems, involve
much more complex specifications
Consider the airline travel problems that
must be solved by a travel-planning Web site:

• States: Each state obviously includes a location (e.g., an airport) and the current time.
Furthermore, because the cost of an action (a flight segment) may depend on previous
segments, their fare bases, and their status as domestic or international, the state must
record extra information about these “historical” aspects.
• Initial state: This is specified by the user’s query.
• Actions: Take any flight from the current location, in any seat class, leaving after the
current time, leaving enough time for within-airport transfer if needed.
• Transition model: The state resulting from taking a flight will have the flight’s destination
as the current location and the flight’s arrival time as the current time.
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration
procedures, seat quality, time of day, type of airplane, frequent-flyer mileage
awards, and so on.
Touring Problem
• Touring problems are closely related to route-finding problems, but
with an important difference
• Consider, for example, the problem “Visit every city in Figure 3.2 at
least once, starting and ending in Bucharest.”
• As with route finding, the actions correspond to trips between
adjacent cities. The state space, however, is quite different.
• Each state must include not just the current location but also the set
of cities the agent has visited.
• So the initial state would be In(Bucharest), Visited({Bucharest}); a typical intermediate state would be In(Vaslui), Visited({Bucharest, Urziceni, Vaslui}); and the goal test would check whether the agent is in Bucharest and all 20 cities have been visited.
Traveling salesperson problem (TSP)
• The Travelling Salesperson Problem (TSP) is a well-known
optimization problem where a salesperson must visit a set of cities,
each exactly once, and return to the starting city, while minimizing the
total distance traveled (or cost incurred). The objective is to find the
shortest possible route that visits each city only once.
• The aim is to find the shortest tour
• The problem is known to be NP-hard, but an enormous amount of
effort has been expended to improve the capabilities of TSP
algorithms.
• In addition to planning trips for traveling salespersons, these
algorithms have been used for tasks such as planning movements of
automatic circuit-board drills and of stocking machines on shop floors.
• Refer to the document for further details; a tiny brute-force sketch follows below.
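For a flavor of the problem, here is a rough brute-force sketch (the 4-city distance matrix is made up for illustration); it works only for tiny instances, since the number of tours grows as (n-1)!:

from itertools import permutations

# A hypothetical symmetric distance matrix for 4 cities, numbered 0..3.
DIST = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]

def shortest_tour(dist):
    """Brute force: try every ordering of the other cities, starting and ending at city 0."""
    n = len(dist)
    best = None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or cost < best[0]:
            best = (cost, tour)
    return best

print(shortest_tour(DIST))                    # (80, (0, 1, 3, 2, 0))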
VLSI layout
• A VLSI layout problem requires positioning millions of components and
connections on a chip to minimize area, minimize circuit delays, minimize stray
capacitances, and maximize manufacturing yield.
• The layout problem comes after the logical design phase and is usually split into
two parts: cell layout and channel routing.
• In cell layout, the primitive components of the circuit are grouped into cells, each
of which performs some recognized function.
• Each cell has a fixed footprint (size and shape) and requires a certain number of
connections to each of the other cells. The aim is to place the cells on the chip so
that they do not overlap and so that there is room for the connecting wires to be
placed between the cells.
• Channel routing finds a specific route for each wire through the gaps between
the cells. These search problems are extremely complex, but definitely worth
solving.
• Similarly, robot navigation, automatic assembly sequencing, and protein design are further examples of real-world problems.
UNINFORMED SEARCH STRATEGIES
• Uninformed search strategies, also known as blind search strategies, are a category
of search algorithms that operate without any additional information about the
problem domain, other than the definition of the problem itself (such as the initial
state, goal state, and possible actions).
• These algorithms do not use any domain-specific knowledge or heuristics to guide
the search. They explore the search space systematically to find a solution but have
no insight into which path might be more promising than others.
• Key Characteristics of Uninformed Search:
• No Heuristic Knowledge: They do not have any information that helps estimate how
far the current state is from the goal.
• Systematic Exploration: They explore the search space by expanding nodes based on
the search method’s rules (e.g., depth-first, breadth-first).
• Solution Finding: The goal is to find a solution (often the shortest path), but these algorithms do not guarantee an optimal solution unless explicitly designed to do so (like breadth-first search with equal step costs).
We can evaluate an algorithm's performance in four ways:

• Completeness: Is the algorithm guaranteed to find a solution when there is one?
• Optimality: Does the strategy find the optimal solution?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the search?
3.4.1 Breadth-first search
• Breadth-first search is a simple strategy in which the root node is
expanded first, then all the successors of the root node are expanded
next, then their successors, and so on.
• BFS explores all nodes at the current depth level (or layer) before
moving on to the next depth level. It starts at the root node and
explores all of its neighbors first, then their neighbors, and so on.
Pseudocode of BFS
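The slide's pseudocode figure is not reproduced here; in its place, a minimal Python sketch of graph-search BFS on an adjacency-dict graph (illustrative names, not the book's code):

from collections import deque

def breadth_first_search(graph, start, goal):
    """Expand shallowest nodes first; returns a path with the fewest edges, or None."""
    if start == goal:
        return [start]
    frontier = deque([start])                 # FIFO queue
    parent = {start: None}                    # doubles as the explored/reached set
    while frontier:
        node = frontier.popleft()
        for child in graph[node]:
            if child not in parent:
                parent[child] = node
                if child == goal:             # goal test when the node is generated
                    path = [child]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(child)
    return None

graph = {'Arad': ['Sibiu', 'Timisoara', 'Zerind'], 'Sibiu': ['Arad', 'Fagaras'],
         'Fagaras': ['Sibiu', 'Bucharest'], 'Timisoara': ['Arad'],
         'Zerind': ['Arad'], 'Bucharest': []}
print(breadth_first_search(graph, 'Arad', 'Bucharest'))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']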
Performance measures of BFS
• Completeness: Yes (if the branching factor b is finite)
• Optimality: Yes (if all step costs are equal)
• Time complexity: O(b^d)
• Space complexity: O(b^d)
b is the branching factor (the average number of successors per node) and d is the depth of the shallowest goal node.
Real time applications of BFS
1. Web Crawling: Search engines like Google use BFS to index web
pages. The process begins with a list of initial URLs and then follows
links from those pages, moving outward level by level, to index as
many connected pages as possible
2. Social Networking Sites: BFS is used in social networks like Facebook
or LinkedIn to find the shortest connection path between users. For
example, finding the smallest number of mutual friends between two
people involves searching through user connections level by level
3. Network Broadcasting: When broadcasting messages in a computer
network (such as sending data packets), BFS is often used to make sure
every node in a network receives the message. The transmission
spreads level by level from the source
4. GPS Navigation Systems: BFS can be applied in GPS systems for
route planning. Given a city map modeled as a graph, BFS helps find the
shortest path (in terms of number of intersections) from one location
to another by exploring intersections level by level.
5. Solving Puzzles (like the 8-puzzle or Rubik's cube): BFS is used to
explore all possible moves level by level to solve puzzles that can be
represented as a graph. It ensures finding the solution with the least
number of moves if such a solution exists.
6. Friend Suggestion Algorithms: BFS helps in algorithms for friend
recommendations in social media. It explores friends of friends (2nd-
degree connections) by searching through users at each "level" of
connection to suggest possible friends
7. Robot Navigation: BFS can be used in robotics for pathfinding, such
as a robot needing to navigate through a grid or maze to reach a goal.
The robot checks the nearest locations (nodes) first and expands
outwards level by level.
8. File Search Systems: In file systems, when searching for a file, BFS is
used to explore directories level by level to ensure the closest files are
found first
Uniform Cost Search (UCS)
• When all step costs are equal, breadth-first search is optimal because
it always expands the shallowest unexpanded node.
• By a simple extension, we can find an algorithm that is optimal with
any step-cost function. Instead of expanding the shallowest node,
uniform-cost search expands the node n with the lowest path cost
g(n).
• This is done by storing the frontier as a priority queue ordered by g.
The algorithm is shown in Figure 3.14.
• In addition to the ordering of the queue by path cost, there are two
other significant differences from breadth-first search
• The first is that the goal test is applied to a node when it is selected
for expansion rather than when it is first generated. The reason is that
the first goal node that is generated may be on a suboptimal path.
• The second difference is that a test is added in case a better path is
found to a node currently on the frontier
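A minimal Python sketch of UCS using heapq as the priority queue (illustrative names; the "stale entry" check is a standard lazy way to implement the replace-the-frontier-node test mentioned above):

import heapq

def uniform_cost_search(graph, start, goal):
    """graph[u] is a dict {v: step_cost}; returns (cost, path) or None."""
    frontier = [(0, start, [start])]          # priority queue ordered by g(n)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                      # goal test on expansion, not generation
            return g, path
        if g > best_g.get(node, float('inf')):
            continue                          # stale entry: a cheaper path was found later
        for child, cost in graph[node].items():
            new_g = g + cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g         # better path to this node: record it
                heapq.heappush(frontier, (new_g, child, path + [child]))
    return None

romania = {'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
           'Sibiu': {'Fagaras': 99, 'Rimnicu Vilcea': 80},
           'Fagaras': {'Bucharest': 211},
           'Rimnicu Vilcea': {'Pitesti': 97},
           'Pitesti': {'Bucharest': 101},
           'Timisoara': {}, 'Zerind': {}, 'Bucharest': {}}
print(uniform_cost_search(romania, 'Arad', 'Bucharest'))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])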
Justify: "Uniform-cost search expands nodes in order of their optimal path cost."
• It is easy to see that uniform-cost search is optimal in general. First,
we observe that whenever uniform-cost search selects a node n for
expansion, the optimal path to that node has been found
• If there were a better (lower-cost) path, the algorithm would have
found and selected another node with a lower cost than the current
one. This ensures that the algorithm is always expanding the least
costly option, which is why it finds the optimal path
• Then, because step costs are nonnegative, paths never get shorter as
nodes are added.
3.4.3 Depth-first search
• Depth-first search always expands the deepest node in the current
frontier of the search tree
• The search proceeds immediately to the deepest level of the search
tree, where the nodes have no successors. As those nodes are
expanded, they are dropped from the frontier, so then the search
“backs up” to the next deepest node that still has unexplored
successors.
• Depth-first search uses a LIFO queue (a stack). A LIFO queue means that the
most recently generated node is chosen for expansion. This must be
the deepest unexpanded node because it is one deeper than its
parent—which, in turn, was the deepest unexpanded node when it
was selected.
• The properties of depth-first search depend strongly on whether the
graph-search or tree-search version is used. The graph-search version,
which avoids repeated states and redundant paths, is complete in
finite state spaces because it will eventually expand every node.
• Depth First Search (DFS) is a graph traversal algorithm that explores
as far as possible along each branch before backtracking. It goes deep
into the graph first, and only when it hits a dead end does it backtrack
to explore other paths.
• It uses a stack (either explicitly or via recursion) to keep track of the
nodes.
• DFS is good for exploring all possible paths, though it doesn't guarantee finding the shortest path; hence it is not optimal.
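A minimal graph-search DFS sketch with an explicit stack (illustrative names):

def depth_first_search(graph, start, goal):
    """Always expand the most recently generated (deepest) node first."""
    stack = [(start, [start])]                # LIFO stack
    explored = set()
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path                       # a solution, not necessarily the shortest
        if node in explored:
            continue
        explored.add(node)
        for child in graph[node]:
            if child not in explored:
                stack.append((child, path + [child]))
    return None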
3.4.4 Depth-limited search
• Depth-Limited Search (DLS) is a variation of Depth First Search (DFS)
that imposes a limit on how deep the search can go. This is useful in
situations where you want to avoid going too deep in the search tree,
which can help prevent excessive use of memory and resources or
infinite loops in cycles.
• It searches depth-first but stops once a predefined depth limit is
reached
• If the goal is not found within that limit, the search fails or returns a
specific result indicating that the goal could not be found at that depth.
• The embarrassing failure of depth-first search in infinite state spaces can be alleviated by supplying depth-first search with a predetermined depth limit l.
• That is, nodes at depth l are treated as if they have no successors.
• The depth limit solves the infinite-path problem.
• Unfortunately, it also introduces an additional source of
incompleteness if we choose l < d, that is, the shallowest goal is
beyond the depth limit.
• Depth-limited search will also be nonoptimal if we choose l > d. Its time complexity is O(b^l) and its space complexity is O(b·l).
• Depth-first search can be viewed as a special case of depth-limited
search with l =∞.
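A rough recursive sketch (my own naming) that follows the convention of returning 'cutoff' when the depth limit was hit, so the caller can distinguish "no solution within l" from "no solution at all":

def depth_limited_search(graph, node, goal, limit, path=None):
    """Returns a path, the string 'cutoff', or None for outright failure."""
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:
        return 'cutoff'                       # treat nodes at depth l as if childless
    cutoff_occurred = False
    for child in graph[node]:
        if child in path:                     # avoid cycles along the current path
            continue
        outcome = depth_limited_search(graph, child, goal, limit - 1, path + [child])
        if outcome == 'cutoff':
            cutoff_occurred = True
        elif outcome is not None:
            return outcome
    return 'cutoff' if cutoff_occurred else None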
3.4.5 Iterative deepening depth-first search

• Iterative Deepening Depth-First Search (IDDFS) is a graph traversal algorithm that combines the space efficiency of Depth-First Search (DFS) with the completeness of Breadth-First Search (BFS). It repeatedly performs depth-limited searches with increasing depth limits until it finds the goal or reaches a maximum depth.
• Iterative deepening search (or iterative deepening depth-first search)
is a general strategy, often used in combination with depth-first tree
search, that finds the best depth limit
• It does this by gradually increasing the limit—first 0, then 1, then 2,
and so on—until a goal is found.
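A rough self-contained sketch (illustrative names; the inner function is the depth-limited search from the previous section):

from itertools import count

def iterative_deepening_search(graph, start, goal):
    """Run depth-limited searches with limits 0, 1, 2, ... until one succeeds."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return 'cutoff'
        cutoff = False
        for child in graph[node]:
            if child in path:                 # avoid cycles along the current path
                continue
            outcome = dls(child, limit - 1, path + [child])
            if outcome == 'cutoff':
                cutoff = True
            elif outcome is not None:
                return outcome
        return 'cutoff' if cutoff else None

    for limit in count():                     # limits 0, 1, 2, ...
        outcome = dls(start, limit, [start])
        if outcome != 'cutoff':
            return outcome                    # a path, or None if the space is exhausted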
• Iterative Deepening Search generates the same states (or nodes) multiple
times as it searches through the graph or tree. At first glance, this might
seem wasteful or inefficient because it seems like the algorithm is doing
extra work
• However, this repetition isn't as costly as it seems. The key reason is that
in many trees, especially those where each level has a similar number of
branches (which is called the branching factor), most of the nodes are
found near the bottom levels of the tree
• In Iterative Deepening Search, here's how often nodes are generated:
• Nodes at the deepest level (level d) are generated only once. This is where the
solutions are typically found.
• Nodes at the next-to-bottom level (level d-1) are generated twice (once during the
search with depth d-1 and again in the search with depth d).
• As you move up, nodes at higher levels (like level 1 and the root level) are
generated more frequently. The nodes immediately under the root (level 1) would
be generated d times if the maximum depth of the search is d.
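Putting numbers on this: the total number of nodes generated by iterative deepening to depth d with branching factor b is N(IDS) = d·b + (d-1)·b^2 + ··· + 1·b^d, versus N(BFS) = b + b^2 + ··· + b^d for a single breadth-first pass. For b = 10 and d = 5, N(IDS) = 123,450 while N(BFS) = 111,110, so the repetition costs only about 11% extra.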
• Conclusion: Although it seems like the algorithm is repeating itself,
because most of the important work (finding solutions) is happening
at the lower levels of the tree, the extra work at the upper levels
doesn't have a big impact on overall efficiency.
• Although Iterative Deepening Search revisits nodes, it remains efficient because most of the work happens at the deeper levels of the tree, which are generated the fewest times.
• You can combine BFS and iterative deepening to avoid excessive
repeated work. Overall, iterative deepening is a good choice for large
search areas when the solution's depth is unknown.
• Iterative Deepening Search (IDS) is similar to Breadth-First Search
(BFS) because it examines all nodes at a specific layer during each
iteration before moving to the next layer.
• It could be beneficial to create a version of Uniform-Cost Search
(UCS) that operates iteratively, maintaining the optimality of UCS
while using less memory. This would involve setting increasing limits
based on path costs rather than depth limits.
3.4.6 Bidirectional search
• Bidirectional search is a search algorithm that works by exploring the search space from two
directions simultaneously. Instead of just starting from the beginning (like in a regular search), it
starts from both the starting point and the goal. These two searches work towards meeting in the
middle, reducing the overall time and effort needed
• Bidirectional search is implemented by replacing the goal test with a check to see whether the frontiers of the two searches intersect; if they do, a solution has been found.
• Here's a simple analogy: imagine you're walking through a maze. Normally, you'd start at the entrance and walk toward the exit. But in bidirectional search, you also send someone from the exit toward the entrance. Both of you explore the maze, and the moment you meet, you've found the solution. This way, you cover less ground overall compared to if only one of you was searching.
• Since each search only covers half the distance, bidirectional search is often faster in situations
where you have a well-defined start and goal.
• The idea behind bidirectional search is to run two simultaneous searches—one forward from the
initial state and the other backward from the goal—hoping that the two searches meet in the
middle
The motivation is that b^(d/2) + b^(d/2) is much less than b^d; in the figure, the area of the two small circles is less than the area of one big circle centered on the start and reaching to the goal.
• In bidirectional search, instead of just checking if you’ve reached the goal, you check
if the two searches (one from the start, one from the goal) meet. If they do, you’ve
found a solution. However, the first solution found may not be the shortest, even if
you're using breadth-first search, so you might need to search a bit more to confirm.
• To check if the searches meet, you compare nodes as they're generated or expanded.
Using a hash table makes this check very fast. For example, if the solution is six steps
deep, each search needs to go only halfway (three steps), drastically reducing the
number of nodes to explore. If the branching factor (number of choices at each step)
is 10, you'd explore 2,220 nodes compared to over a million in a regular search
• The time complexity of bidirectional search is proportional to the square root of the size of the search space, O(b^(d/2)), which makes it faster than regular breadth-first search.
However, it requires a lot of memory, as one search frontier needs to be stored to
check for intersections. This memory requirement is the main drawback. You can
reduce memory usage by using a method called "iterative deepening" for one of the
searches, but you still need to keep track of at least one search frontier.
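A rough sketch of bidirectional breadth-first search (illustrative names; assumes an undirected graph so the backward search can follow the same edges, and uses dict lookups as the fast hash-table intersection check described above):

from collections import deque

def bidirectional_search(graph, start, goal):
    """Alternate BFS levels from both ends; stop when the frontiers meet."""
    if start == goal:
        return [start]
    pf, pb = {start: None}, {goal: None}      # parent maps, forward and backward
    qf, qb = deque([start]), deque([goal])

    def join_paths(meet):
        forward = []
        n = meet
        while n is not None:                  # walk back to the start
            forward.append(n)
            n = pf[n]
        forward.reverse()
        n = pb[meet]
        while n is not None:                  # walk forward to the goal
            forward.append(n)
            n = pb[n]
        return forward

    while qf and qb:
        # Expand one full level of the smaller frontier.
        q, p, other = (qf, pf, pb) if len(qf) <= len(qb) else (qb, pb, pf)
        for _ in range(len(q)):
            node = q.popleft()
            for child in graph[node]:
                if child not in p:
                    p[child] = node
                    if child in other:        # the two frontiers intersect
                        return join_paths(child)
                    q.append(child)
    return None

As the slide notes, the first meeting point is not guaranteed to lie on a shortest path; a careful implementation would keep searching briefly to confirm optimality.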
3.4.7 Comparing uninformed search strategies
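The comparison table itself did not survive on the slide; a summary along the lines of the standard one (AIMA Figure 3.21, tree-search versions; b = branching factor, d = depth of the shallowest solution, m = maximum depth, l = depth limit, C* = optimal solution cost, ε = minimum step cost):

Criterion   BFS       Uniform-cost            DFS      Depth-limited   Iterative deepening   Bidirectional
Complete?   Yes[a]    Yes[a,b]                No       No              Yes[a]                Yes[a,d]
Optimal?    Yes[c]    Yes                     No       No              Yes[c]                Yes[c,d]
Time        O(b^d)    O(b^(1+floor(C*/ε)))    O(b^m)   O(b^l)          O(b^d)                O(b^(d/2))
Space       O(b^d)    O(b^(1+floor(C*/ε)))    O(b·m)   O(b·l)          O(b·d)                O(b^(d/2))

[a] complete if b is finite; [b] complete if step costs are at least some ε > 0; [c] optimal if all step costs are identical; [d] if both directions use breadth-first search.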
