Uninformed Search

Kalev Kask

Read Beforehand: R&N 3.1-3.4

Based on slides by Profs. Dechter, Lathrop, Ihler


Uninformed search strategies
• Uninformed (blind):
– You have no information about whether one non-goal state is better
(closer to the goal) than any other, and no information about which of two
available actions is better (takes you closer to the goal). Your search is blind:
you don’t know whether your current exploration is likely to be fruitful.
• Various uninformed strategies:
– Breadth-first search
– Uniform-cost search
– Depth-first search
– Iterative deepening search (generally preferred)
– Bidirectional search (preferred if applicable; big if)
Basic graph/tree search scheme
• We have 3 kinds of nodes (states):
– [only for graph search: reached (past states; = explored, closed)]
– frontier (current nodes; = open, fringe) [nodes now on the queue]
– unexplored (future nodes) [implicitly given]
• Frontier separates Explored from Unexplored!!!
• Initially, frontier = NODE(start state)
• Loop until solution is found or state space is exhausted
– pick/remove first node from frontier (open) using search strategy
• priority queue – FIFO (BFS), LIFO (DFS), g (UCS), f (A*), etc.
• this choice defines the search algorithm
– if node is a goal then return node
– [only for graph search: add node to reached (explored/closed)]
– expand this node, add children to frontier only if not already in frontier
• [only for graph search: add children only if their state is not in reached
(explored/closed)]
• Question:
– what if a better path is found to a node already in frontier or on reached set?
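
A minimal Python sketch of this generic scheme (and one possible answer to the question above: keep the cheaper path). The Problem interface — initial_state, is_goal(state), successors(state) yielding (action, child_state, step_cost) — is a hypothetical assumption, not part of the slides; the priority function is what "defines the search algorithm".

```python
import heapq
import itertools

class Node:
    """Search-tree node: a state plus bookkeeping for path reconstruction."""
    def __init__(self, state, parent=None, action=None, g=0.0):
        self.state, self.parent, self.action, self.g = state, parent, action, g
        self.depth = 0 if parent is None else parent.depth + 1

def best_first_search(problem, priority, graph_search=True):
    """Generic scheme: priority(node) defines the algorithm, e.g.
    lambda n: n.depth -> BFS,  lambda n: -n.depth -> DFS,
    lambda n: n.g     -> UCS,  lambda n: n.g + h(n) -> A*."""
    counter = itertools.count()                      # FIFO tie-breaking
    root = Node(problem.initial_state)
    frontier = [(priority(root), next(counter), root)]
    reached = {root.state: root} if graph_search else None
    while frontier:
        _, _, node = heapq.heappop(frontier)         # pick node per strategy
        if problem.is_goal(node.state):
            return node                              # goal test when popped
        for action, child_state, cost in problem.successors(node.state):
            child = Node(child_state, node, action, node.g + cost)
            if graph_search:
                old = reached.get(child_state)
                if old is not None and old.g <= child.g:
                    continue                         # keep the cheaper known path
                reached[child_state] = child
            heapq.heappush(frontier, (priority(child), next(counter), child))
    return None                                      # state space exhausted
```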
Search strategy evaluation
• A search strategy is defined by the order of node expansion

• Strategies are evaluated along the following dimensions:


– completeness: does it always find a solution if one exists?
– optimality: does it always find a least-cost path solution?
– time complexity: number of nodes generated
– space complexity: maximum number of nodes in memory

• Time and space complexity are measured in terms of


– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)
– (for UCS: C*: true cost to optimal goal; ε > 0: minimum step cost)
Uninformed search design choices
• Data Structure (Queue) for Frontier:
– FIFO? LIFO? Priority?

• Goal-Test:
– Do goal-test when node inserted into Frontier?
– Do goal-test when node removed?

• Tree Search, or Graph Search:


– Forget Expanded nodes?
– Remember them?
Queue for Frontier
• FIFO (First In, First Out)
– Results in Breadth-First Search

• LIFO (Last In, First Out)


– Results in Depth-First Search

• Priority Queue sorted by path cost so far


– Results in Uniform Cost Search

• Iterative Deepening Search uses Depth-First Search

• Bidirectional Search can use either Breadth-First Search or Uniform Cost Search
When to do goal test?
• General Rule : do Goal-Test when node is popped from queue
IF you care about finding the optimal path
AND your search space may have both short expensive and long
cheap paths to a goal.
– Guard against a short expensive goal.
– E.g., Uniform Cost search with variable step costs.
• Sometimes can (should) do Goal-Test when node is inserted.
– E.g., Breadth-first Search, Depth-first Search, or Uniform Cost search when
cost is a non-decreasing function of depth only (which is equivalent to
Breadth-first Search).
• REASON ABOUT your search space & problem.
– How could I possibly find a non-optimal goal?
When to do Goal-Test? (Summary)
• For BreadthFirstSearch (BFS), the goal test is done when the child node is generated.
– Not an optimal search in the general case.
• For DepthLimitedSearch (DLS) and IterativeDeepeningSearch (IDS) as in Fig. 3.12, the goal
test is done when a node is popped from the queue.
– A more efficient search would goal-test children as they are generated; we follow the textbook.
• DepthFirstSearch (DFS) is the same as DLS with depth limit l = ∞.
• For UCS and A*(next lecture), goal test when node is popped from queue.
– This avoids finding a short expensive path before a long cheap path.
• Bidirectional search can use either BFS or UCS.
– Goal-test is search frontier intersection, see additional complications below
• For GBFS (next lecture) the behavior is the same either way
– h(goal)=0 so any goal will be at the front of the queue anyway.
Basic search scheme (Fig 3.7 p.73)
Graph Search version; simplification possible
Tree Search vs. Graph Search
• Graph Search = DO remember all states visited
• Tree Search = DO NOT remember all states visited
• Graph Search is potentially exponentially faster than
Tree Search, but at the expense of potentially
exponentially more memory
• reached in Fig. 3.7 Best-First Search algorithm is a set of
all states ever seen by the search algorithm
– This results in Graph Search
– To get Tree Search: remove node.STATE from reached when POP(frontier)
– With this, reached is the set of current frontier states

Tree-Search vs. Graph-Search
❑ Example : Assemble 5 objects {a, b, c, d, e}
❑ A state is a bit-vector (length 5), 1=object in assembly, 0= not in assembly
• 11010 = a=1, b=1, c=0, d=1, e=0
• ⇒ a, b, d in assembly; c, e not in assembly
❑ State space:
• Number of states = 2^5 = 32
• Number of undirected edges = 2^5 ∙ 5 ∙ ½ = 80
❑ Tree search space:
• Number of nodes = number of paths = 5! = 120
• States can be reached in multiple ways
➢ 11010 can be reached by a+b+d or by a+d+b or by … etc.
• Often requires much more time, but much less space, than graph search
❑ Graph search space:
• Number of nodes = choose(5,0) + choose(5,1) + choose(5,2) + choose(5,3) +
choose(5,4) + choose(5,5) = 1 + 5 + 10 + 10 + 5 + 1 = 32
• States are reached in only one way, redundant paths are pruned
➢ Question: What if a better path is found to a state that already has been explored?
• Often requires much more space, but much less time, than tree search
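
A quick check of these counts (a sketch; it counts tree-search nodes as complete assembly orderings, as the slide does):

```python
from math import comb, factorial

n = 5  # objects {a, b, c, d, e}

# Graph search space: one node per distinct state (bit-vector of length n)
graph_nodes = sum(comb(n, k) for k in range(n + 1))   # = 2**n = 32

# Undirected edges: each state has n neighbors (flip one bit), counted once
graph_edges = 2**n * n // 2                           # = 80

# Tree search space: one node per ordering of the full assembly
tree_paths = factorial(n)                             # = 120

print(graph_nodes, graph_edges, tree_paths)           # 32 80 120
```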
Checking for identical nodes (1)
Check if a node is already in frontier
• It is “easy” to check if a node is already in the frontier (recall
frontier = fringe = open = queue)
o Keep a hash table holding all frontier nodes
• Hash size is same O(.) as priority queue, so hash does not increase overall space O(.)
• Hash time is O(1), so hash does not increase overall time O(.)
o When a node is expanded, remove it from hash table (it is no
longer in the frontier)
o For each resulting child of the expanded node:
• If child is not in hash table, add it to queue (frontier) and hash table
• Else if an old lower- or equal-cost node is in hash, discard the new higher-
or equal-cost child
• Else remove and discard the old higher-cost node from queue (frontier) and
hash, and add the new lower-cost child to queue (frontier) and hash
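A sketch of such a frontier in Python, assuming nodes carry a state and a path cost g. The "remove the old higher-cost node" step is done lazily here (the stale heap entry is simply skipped at pop time), a common alternative to deleting from the middle of the heap:

```python
import heapq
import itertools

class Frontier:
    """Priority-queue frontier plus a hash table of frontier states (a sketch).
    Duplicates are handled lazily: the cheaper entry wins; stale heap entries
    are skipped when popped."""
    def __init__(self):
        self.heap = []                 # (g, tie-break, state, node)
        self.best_g = {}               # hash table: state -> cheapest g on frontier
        self.counter = itertools.count()

    def add(self, state, g, node):
        old_g = self.best_g.get(state)
        if old_g is not None and old_g <= g:
            return                     # discard new higher- or equal-cost child
        self.best_g[state] = g         # new lower-cost child supersedes old entry
        heapq.heappush(self.heap, (g, next(self.counter), state, node))

    def pop(self):
        while self.heap:
            g, _, state, node = heapq.heappop(self.heap)
            if self.best_g.get(state) == g:   # skip stale (superseded) entries
                del self.best_g[state]        # node leaves the frontier
                return node
        return None
```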
Checking for identical nodes (2)
Check if a node is in reached/explored/expanded
• It is memory-intensive [ O(b^d) or O(b^m) ] to check if a node is
in reached/explored/expanded
o Keep a hash table holding all explored/expanded nodes (hash
table may be HUGE!!)
• When a node is reached, add it to hash (reached)
• For each resulting child of the expanded node:
o If child is not in hash table or in frontier, then add it to the
queue (frontier) and process normally (BFS normal processing
differs from UCS normal processing, but the ideas behind
checking a node for being in reached/explored/expanded are
the same).
o Else discard any redundant node.
Checking for identical nodes (3)
Quick check for search being in a loop
• It is “moderately easy” to check for the search being
in a loop
o When a node is expanded, for each child:
• Trace back through parent pointers from child to root
• If an ancestor state is identical to the child, search is looping
➢ Discard child and fail on that branch
• Time complexity of child loop check is O( depth(child) )
• Memory consumption is zero
➢ Assuming good garbage collection
• Does NOT solve the general problem of repeated
nodes – only the specific problem of looping
• For quizzes and exams, we will follow your textbook
and NOT perform this loop check
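For reference only (as noted above, quizzes and exams follow the textbook and skip this check), a sketch of the loop check, assuming Node objects with state and parent fields as in the earlier generic sketch:

```python
def is_looping(child_state, parent_node):
    """Trace parent pointers from the child's parent back to the root;
    return True if an ancestor holds an identical state.
    Time O(depth(child)); no extra memory is kept."""
    node = parent_node
    while node is not None:
        if node.state == child_state:
            return True
        node = node.parent
    return False
```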
Breadth First Search
R&N 3.4.1
Breadth-first graph search
Breadth-First Search
• Expand shallowest unexpanded node
• Frontier: nodes waiting in a queue to be explored
– also called Fringe or OPEN
• Implementation:
– Frontier is a first-in-first-out (FIFO) queue, i.e., new successors go
at end of the queue.
– Goal test when inserted
(Figure legend: Future = green dotted circles; Frontier = white nodes;
Expanded/active = gray nodes; Forgotten/reclaimed = black nodes)

Initial state = A
Is A a goal state?

Put A at end of queue:


Frontier = [A]
Breadth-First Search
• Expand shallowest unexpanded node
• Frontier: nodes waiting in a queue to be explored
– also called Fringe or OPEN
• Implementation:
– Frontier is a first-in-first-out (FIFO) queue, i.e., new successors go
at end of the queue.
– Goal test when inserted

Expand A to B,C
Is B or C a goal state?

Put B,C at end of queue:


Frontier = [B,C]
Breadth-First Search
• Expand shallowest unexpanded node
• Frontier: nodes waiting in a queue to be explored
– also called Fringe or OPEN
• Implementation:
– Frontier is a first-in-first-out (FIFO) queue, i.e., new successors go
at end of the queue.
– Goal test when inserted

Expand B to D,E
Is D or E a goal state?

Put D,E at end of queue:


Frontier = [C,D,E]
Breadth-First Search
• Expand shallowest unexpanded node
• Frontier: nodes waiting in a queue to be explored
– also called Fringe or OPEN
• Implementation:
– Frontier is a first-in-first-out (FIFO) queue, i.e., new successors go
at end of the queue.
– Goal test when inserted

Expand C to F, G
Is F or G a goal state?

Put F,G at end of queue:


Frontier = [D,E,F,G]
Breadth-first search
• Expand shallowest unexpanded node
• Frontier: nodes waiting in a queue to be explored
– also called Fringe, or OPEN
• Implementation:
– Frontier is a first-in-first-out (FIFO) queue (new successors go at end)
– Goal test when inserted
Expand D; no children
Forget D

Frontier = [E,F,G]
Breadth-first search
• Expand shallowest unexpanded node
• Frontier: nodes waiting in a queue to be explored
– also called Fringe, or OPEN
• Implementation:
– Frontier is a first-in-first-out (FIFO) queue (new successors go at end)
– Goal test when inserted
Expand E; no children
Forget E; B

Frontier = [F,G]
Example: BFS for 8-puzzle
Breadth-First Search
Properties of breadth-first search
• Complete? Yes, it always reaches a goal (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d = O(b^d)
(this is the number of nodes we generate)
• Space? O(b^d)
(keeps every node in memory, either in frontier or on a path to frontier).
• Optimal? No, for general cost functions.
Yes, if cost is a non-decreasing function only of depth.
– With f(d) ≥ f(d-1), e.g., step-cost = constant:
• All optimal goal nodes occur on the same level
• Optimal goals are always shallower than non-optimal goals
• An optimal goal will be found before any non-optimal goal

• Usually Space is the bigger problem (more than time)
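
A breadth-first graph-search sketch consistent with these slides (FIFO frontier, goal test when a child is generated, reached states remembered). The problem interface (initial_state, is_goal, successors) is the same hypothetical one assumed earlier:

```python
from collections import deque

def breadth_first_search(problem):
    """BFS sketch: FIFO frontier, goal test when a child is generated;
    `parent` doubles as the reached set and as a map for path reconstruction."""
    start = problem.initial_state
    if problem.is_goal(start):
        return [start]
    frontier = deque([start])          # FIFO queue of states
    parent = {start: None}             # reached states -> predecessor
    while frontier:
        state = frontier.popleft()     # expand shallowest unexpanded node
        for action, child, cost in problem.successors(state):
            if child in parent:
                continue               # already reached (graph search)
            parent[child] = state
            if problem.is_goal(child):          # goal test when inserted
                path = [child]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return list(reversed(path))     # solution path of states
            frontier.append(child)
    return None
```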


BFS: Time & Memory Costs

Depth of solution   Nodes expanded   Time               Memory
0                   1                5 microseconds     100 bytes
2                   111              0.5 milliseconds   11 kbytes
4                   11,111           0.05 seconds       1 megabyte
8                   10^8             9.25 minutes       11 gigabytes
12                  10^12            64 days            111 terabytes

Assuming b=10; 200k nodes/sec; 100 bytes/node
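
The table entries can be reproduced directly from its stated assumptions; a small sketch:

```python
# Reproduce the table's assumptions: b = 10, 200,000 nodes/sec, 100 bytes/node.
b, nodes_per_sec, bytes_per_node = 10, 200_000, 100

for d in (0, 2, 4, 8, 12):
    nodes = sum(b**i for i in range(d + 1))        # 1 + b + ... + b^d
    seconds = nodes / nodes_per_sec
    memory_bytes = nodes * bytes_per_node
    print(f"d={d:2d}  nodes={nodes:>15,}  time={seconds:14.4f} s  "
          f"mem={memory_bytes/1e9:12.4f} GB")
```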


Uniform Cost Search
R&N 3.4.2
Uniform-cost search
Breadth-first is only optimal if path cost is a non-decreasing function
of depth, i.e., f(d) ≥ f(d-1); e.g., constant step cost, as in the 8-
puzzle.
Can we guarantee optimality for variable positive step costs?
(Why positive? To avoid infinite paths w/ step costs 1, ½, ¼, …)

Uniform-cost Search:
Expand node with smallest path cost g(n).
• Frontier is a priority queue, i.e., new successors are merged into
the queue sorted by g(n).
– Can remove successors already on queue w/higher g(n).
• Saves memory, costs time; another space-time trade-off.
• Goal-Test when node is popped off queue.
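A uniform-cost graph-search sketch (priority queue ordered by g, goal test when popped), again assuming the hypothetical problem interface used in the earlier sketches:

```python
import heapq
import itertools

def uniform_cost_search(problem):
    """UCS sketch: frontier is a priority queue ordered by path cost g(n);
    goal test when a node is popped off the queue."""
    start = problem.initial_state
    counter = itertools.count()                # tie-breaker so states need not be comparable
    frontier = [(0.0, next(counter), start)]
    best_g = {start: 0.0}                      # cheapest known cost per state
    parent = {start: None}
    while frontier:
        g, _, state = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                           # stale entry superseded by a cheaper path
        if problem.is_goal(state):             # goal test when popped
            path = [state]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path)), g     # optimal path and its cost
        for action, child, cost in problem.successors(state):
            child_g = g + cost
            if child_g < best_g.get(child, float("inf")):
                best_g[child] = child_g
                parent[child] = state
                heapq.heappush(frontier, (child_g, next(counter), child))
    return None, float("inf")
```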
Uniform Cost Search

UCS: sort by g (PATH-COST)


GBFS: identical, use h
A*: identical, but use f = g+h
Uniform-cost search
Proof of Completeness:
Assume (1) finite max branching factor = b; (2) min step cost
ε > 0; (3) cost to optimal goal = C*. Then a node at depth
1+C*/ε must have a path cost > C*. There are O( b^(1+C*/ε) )
such nodes, so a goal will be found.

Proof of Optimality (given completeness):


Suppose that UCS is not optimal. Then there must be an
(optimal) goal state with path cost smaller than the found
(suboptimal) goal state (invoking completeness).
However, this is impossible because UCS would have
expanded that node first, by definition.
Contradiction.
Ex: Uniform-cost search (Search tree version)

[Figure: route-finding problem, steps labeled w/cost.
Edges: S→A (1), A→G (10), S→B (5), B→G (5), S→C (15), C→G (5)]

Order of node expansion:
Path found:          Cost of path found:

Search tree so far: S (g=0)
Ex: Uniform-cost search (Search tree version)

[Same route-finding problem as above]

Order of node expansion: S
Path found:          Cost of path found:

Search tree so far: S (g=0), expanded to A (g=1), B (g=5), C (g=15)
Ex: Uniform-cost search (Search tree version)

[Same route-finding problem as above]

Order of node expansion: S A
Path found:          Cost of path found:

Search tree so far: S (g=0); children A (g=1), B (g=5), C (g=15);
A expanded to G (g=11)

This early expensive goal node (G, g=11) will go back onto the
queue until after the later cheaper goal is found.
Ex: Uniform-cost search (Search tree version)

[Same route-finding problem as above]

Order of node expansion: S A B
Path found:          Cost of path found:

Search tree so far: S (g=0); children A (g=1), B (g=5), C (g=15);
A expanded to G (g=11); B expanded to G (g=10)

If we were doing graph search we would remove the higher-cost of the
identical nodes and save memory. However, UCS is optimal even with
tree search, since lower-cost nodes sort to the front.
Ex: Uniform-cost search (Search tree version)

[Same route-finding problem as above]

Order of node expansion: S A B G
Path found: S B G          Cost of path found: 10

Search tree: S (g=0); children A (g=1), B (g=5), C (g=15);
A expanded to G (g=11); B expanded to G (g=10)

Technically, the goal node is not really expanded,
because we do not generate the children of a goal
node. It is listed in “Order of node expansion” only for
your convenience, to see explicitly where it was found.
Ex: Uniform-cost search (Virtual queue version)

[Same route-finding problem as above, steps labeled w/cost]

Order of node expansion:
Path found:          Cost of path found:

Expanded:
Next:
Children:
Queue: S/g=0
Ex: Uniform-cost search (Virtual queue version)

[Same route-finding problem as above]

Order of node expansion: S
Path found:          Cost of path found:

Expanded: S/g=0
Next: S/g=0
Children: A/g=1, B/g=5, C/g=15
Queue: S/g=0, A/g=1, B/g=5, C/g=15
Ex: Uniform-cost search (Virtual queue version)

[Same route-finding problem as above]

Order of node expansion: S A
Path found:          Cost of path found:

Expanded: S/g=0, A/g=1


Next: A/g=1
Children: G/g=11
Queue: S/g=0, A/g=1, B/g=5, C/g=15, G/g=11

Note that in a proper priority queue in a computer system, this queue


would be sorted by g(n). For hand-simulated search it is more
convenient to write children as they occur, and then scan the current
queue to pick the highest-priority node on the queue.
Ex: Uniform-cost search (Virtual queue version)

[Same route-finding problem as above]

Order of node expansion: S A B
Path found:          Cost of path found:

Expanded: S/g=0, A/g=1, B/g=5


Next: B/g=5
Children: G/g=10
Queue: S/g=0, A/g=1, B/g=5, C/g=15, G/g=11, G/g=10
Ex: Uniform-cost search (Virtual queue version)

[Same route-finding problem as above]

Order of node expansion: S A B G
Path found: S B G          Cost of path found: 10

The same “Order of node expansion”, “Path found”, and “Cost of path
found” are obtained by both methods. They are formally equivalent to
each other in all ways.

Expanded: S/g=0, A/g=1, B/g=5, G/g=10


Next: G/g=10
Children: none
Queue: S/g=0, A/g=1, B/g=5, C/g=15, G/g=11, G/g=10

Technically, the goal node is not really expanded,


because we do not generate the children of a goal
node. It is listed in “Order of node expansion” only for
your convenience, to see explicitly where it was found.
Uniform-cost search
Implementation: Frontier = queue ordered by path cost.
Equivalent to breadth-first if all step costs are equal.
•Complete? Yes, if b is finite and step cost ≥ ε > 0.
(otherwise it can get stuck in infinite loops)
•Time? # of nodes with path cost ≤ cost of optimal solution.
O(b^(1+C*/ε)) ≈ O(b^(d+1))
•Space? # of nodes with path cost ≤ cost of optimal solution.
O(b^(1+C*/ε)) ≈ O(b^(d+1)).
•Optimal? Yes, for any step cost ≥ ε > 0.

•Expands in the order of optimal path cost


–When a node is expanded, shortest path to it has been found
–Nodes are expanded in the order of increasing path costs
Uniform Cost Search
Uniform cost search
• Why require step cost ≥ ε > 0?
– Otherwise, an infinite regress is possible.
– Recall:

S is the start node; G is the only goal node in the search space.
cost(S,G) = 1, so g(G) = 1
cost(S,A) = 1/2, so g(A) = 1/2
cost(A,B) = 1/4, so g(B) = 3/4
cost(B,C) = 1/8, so g(C) = 7/8
cost(C,D) = 1/16, so g(D) = 15/16
...
No return from this branch: the path costs along S, A, B, C, D, … stay
below 1, so G (g = 1) will never be popped.
Depth First Search
R&N 3.4.3
DLS Fig 3.12 p.81
• Depth-First Search is a special case of Depth-
Limited Search, when depth limit l=∞
Depth-first search (LIFO version)
• Expand deepest unexpanded node
• Frontier = Last In First Out (LIFO) queue (stack), i.e., new nodes go
at the front of the queue.
• Goal-Test when popped.
(Figure legend: Future = green dotted circles; Frontier = white nodes;
Expanded/active = gray nodes; Forgotten/reclaimed = black nodes)

Initial state = A
PUSH(A)
frontier = [A]

POP(frontier)->A
frontier=[]
Is A goal? (NO)

Depth-first search
• Expand deepest unexpanded node
– Frontier = LIFO queue, i.e., put successors at front
Expand A to B, C – generate and PUSH
frontier = [B,C]

POP(frontier)->B
frontier=[C]
Is B goal? (NO)

Note : we assume we push children right-to-left

Note: Can save a space factor of b by generating successors one at a time.


See backtracking search in your book, p. 87 and Chapter 6.
Depth-first search
• Expand deepest unexpanded node
– Frontier = LIFO queue, i.e., put successors at front

Expand B to D, E – generate and PUSH
frontier = [D,E,C]

POP(frontier)->D
frontier=[E,C]
Is D goal? (NO)
Depth-first search
• Expand deepest unexpanded node
– Frontier = LIFO queue, i.e., put successors at front

Expand D to H, I – generate and PUSH
frontier = [H,I,E,C]

POP(frontier)->H
frontier=[I,E,C]
Is H goal? (NO)
Depth-first search
• Expand deepest unexpanded node
– Frontier = LIFO queue, i.e., put successors at front

Expand H to no children
frontier = [I,E,C]
Forget H.

POP(frontier)->I
frontier=[E,C]
Is I goal? (NO)
Depth-first search
• Expand deepest unexpanded node
– Frontier = LIFO queue, i.e., put successors at front

Expand I to no children
frontier = [E,C]
Forget I, D.
POP(frontier)->E
frontier=[C]
Is E goal? (NO)
Depth-first search
• Expand deepest unexpanded node
– Frontier = LIFO queue, i.e., put successors at front

Expand E to J, K – generate and PUSH
frontier = [J,K,C]

POP(frontier)->J
frontier=[K,C]
Is J goal? (NO)
Depth-first search
• Expand deepest unexpanded node
– Frontier = LIFO queue, i.e., put successors at front

Expand J to no children
frontier = [K,C]
Forget J.

POP(frontier)->K
frontier=[C]
Is K goal? (NO)
Depth-first search
• Expand deepest unexpanded node
– Frontier = LIFO queue, i.e., put successors at front

Expand K to no children
frontier = [C]
Forget K, E, B.

POP(frontier)->C
frontier=[]
Is C goal? (NO)
Depth-first search
• Expand deepest unexpanded node
– Frontier = LIFO queue, i.e., put successors at front

Expand C to F, G – generate and PUSH
frontier = [F,G]

POP(frontier)->F
frontier=[G]
Is F goal? (NO)
Properties of depth-first search
• Complete? No: fails in infinite-depth spaces
– IS-CYCLE(node) check avoids loops/repeated states along path

• check if the current node’s state occurred before on the path to the root


– Can use graph search (remember all nodes ever seen)
• problem with graph search: space is exponential, not linear
– Still fails in infinite-depth spaces (may miss goal entirely)

• Time? O(b^m) with m = maximum depth of space


– Terrible if m is much larger than d
– If solutions are dense, may be much faster than BFS

• Space? O(bm), i.e., linear space!


– Remembers a single path plus the generated but unexpanded sibling nodes along it

• Optimal? No: It may find a non-optimal goal first


Comparing DFS and BFS
• BFS is optimal if path cost is non-decreasing function of depth, DFS is not
• Worst-case Time Complexity: BFS = O(b^d), DFS = O(b^m); m may be infinite
– In the worst-case, BFS is always better than DFS
• Sometimes, on the average, DFS is better if:
– Many goals, no loops, and no long or infinite paths
– Thus, DFS may luckily blunder into an early goal
• BFS is much worse memory-wise
– BFS may store the whole search space
• DFS can be linear space
– Stores only the nodes on the path from the current leaf to the root
• In general:
– BFS is better if shallow goals, many long paths, many loops, small search space
– DFS is better if many goals, not many loops (easy to check), few long or infinite paths
(hard to check), huge search space
– DFS is always much better in terms of memory
DFS vs BFS & graph-search vs tree-search

• BFS -> typically paired with Graph-search (its frontier is already exponential in size, so remembering reached states costs little extra)

• DFS -> typically paired with Tree-search (to keep its linear-space advantage)


Iterative Deepening Search
R&N 3.4.4
Iterative Deepening Search
• To avoid the infinite depth problem of DFS:
– Only search until depth L
– i.e, don’t expand nodes beyond depth L
– Depth-Limited Search

• What if solution is deeper than L?


– Increase depth iteratively
– Iterative Deepening Search

• IDS – GENERALLY THE PREFERRED UNINFORMED SEARCH


– Inherits the memory advantage of depth-first search
– Has the completeness property of breadth-first search
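A sketch of depth-limited search and the iterative-deepening wrapper around it, using the same hypothetical problem interface as the earlier sketches (goal test when a node is visited, i.e., the "popped" convention the goal-test slide notes for DLS/IDS):

```python
def depth_limited_search(problem, limit):
    """DLS sketch: recursive depth-first search that does not expand nodes
    beyond depth `limit`; returns (solution path or None, cutoff occurred?)."""
    def recurse(state, depth):
        if problem.is_goal(state):
            return [state], False
        if depth == limit:
            return None, True                          # depth cutoff reached
        cutoff = False
        for action, child, cost in problem.successors(state):
            result, hit = recurse(child, depth + 1)
            if result is not None:
                return [state] + result, False         # prepend state on the way back up
            cutoff = cutoff or hit
        return None, cutoff
    return recurse(problem.initial_state, 0)

def iterative_deepening_search(problem, max_limit=50):
    """IDS sketch: run DLS with limit L = 0, 1, 2, ... until a goal is found,
    or until a search completes with no cutoff (space exhausted)."""
    for limit in range(max_limit + 1):
        result, cutoff = depth_limited_search(problem, limit)
        if result is not None:
            return result
        if not cutoff:
            return None                                # whole space searched, no goal
    return None
```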
Depth-limited search & IDS (R&N Fig. 3.12)
Iterative Deepening Search, L=0
Iterative Deepening Search, L=1
Iterative Deepening Search, L=2
Iterative deepening search
Iterative Deepening Search
• Number of nodes generated in a depth-limited search to depth d with
branching factor b:

N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d

• Number of nodes generated in an iterative deepening search to
depth d with branching factor b:

N_IDS = (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + 3·b^(d-2) + 2·b^(d-1) + 1·b^d
= O(b^d)

Ratio N_IDS / N_DLS ≈ b/(b-1)
• For b = 10, d = 5,
– N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
– N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
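
These counts can be verified directly from the two formulas above; a quick sketch:

```python
def n_dls(b, d):
    """Nodes generated by one depth-limited search to depth d."""
    return sum(b**i for i in range(d + 1))

def n_ids(b, d):
    """Nodes generated by iterative deepening: level i is generated (d+1-i) times."""
    return sum((d + 1 - i) * b**i for i in range(d + 1))

b, d = 10, 5
print(n_dls(b, d))                  # 111111
print(n_ids(b, d))                  # 123456
print(n_ids(b, d) / n_dls(b, d))    # ~1.11, i.e., roughly b/(b-1)
```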
Properties of iterative deepening search
• Complete? Yes

• Time? O(b^d)

• Space? O(bd)

• Optimal? No, for general cost functions.


Yes, if cost is a non-decreasing function only of depth.

Generally the preferred uninformed search strategy, combining


– guarantee of finding an optimal solution if one exists (as in BFS)
– space efficiency, O(bd) of DFS
– But still has problems with loops like DFS (but can fix easily)
Bi-Directional Search
R&N 3.4.5
Bidirectional Search
• Idea
– simultaneously search forward from S and backwards from G
– stop when both “meet in the middle”
– need to keep track of the intersection of 2 open sets of nodes

• What does searching backwards from G mean


– need a way to specify the predecessors of G
• this can be difficult,
• e.g., predecessors of checkmate in chess?
– what if there are multiple goal states?
– what if there is only a goal test, no explicit list?

• Complexity
– time complexity is best: O(2·b^(d/2)) = O(b^(d/2))
– memory complexity is the same
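
A minimal bidirectional breadth-first sketch over an undirected graph given as a hypothetical neighbors(state) function; it grows the two frontiers layer by layer and stops at the first intersection (production versions add a small post-check to guarantee the very shortest path):

```python
from collections import deque

def bidirectional_bfs(start, goal, neighbors):
    """Grow a forward frontier from `start` and a backward frontier from `goal`;
    return (meeting state, total path length in edges) once they intersect."""
    if start == goal:
        return start, 0
    frontiers = [deque([start]), deque([goal])]
    dist = [{start: 0}, {goal: 0}]                 # states reached from each side
    while frontiers[0] and frontiers[1]:
        # expand one full layer of the smaller frontier to keep the sides balanced
        side = 0 if len(frontiers[0]) <= len(frontiers[1]) else 1
        for _ in range(len(frontiers[side])):
            state = frontiers[side].popleft()
            for nxt in neighbors(state):
                if nxt in dist[side]:
                    continue
                dist[side][nxt] = dist[side][state] + 1
                if nxt in dist[1 - side]:          # the two frontiers meet
                    return nxt, dist[0][nxt] + dist[1][nxt]
                frontiers[side].append(nxt)
    return None, float("inf")
```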
Bi-Directional Search
Summary of algorithms
Criterion    Breadth-   Uniform-         Depth-   Depth-    Iterative   Bidirectional
             First      Cost             First    Limited   Deepening   (if applicable)
                                                  (DLS)
Complete?    Yes[a]     Yes[a,b]         No       No        Yes[a]      Yes[a,d]
Time         O(b^d)     O(b^(1+C*/ε))    O(b^m)   O(b^l)    O(b^d)      O(b^(d/2))
Space        O(b^d)     O(b^(1+C*/ε))    O(bm)    O(bl)     O(bd)       O(b^(d/2))
Optimal?     Yes[c]     Yes              No       No        Yes[c]      Yes[c,d]

There are a number of footnotes, caveats, and assumptions.
See Fig. 3.21, p. 91.
[a] complete if b is finite
[b] complete if step costs ≥ ε > 0
[c] optimal if step costs are all identical
(also if path cost is a non-decreasing function of depth only)
[d] if both directions use breadth-first search
(also if both directions use uniform-cost search with step costs ≥ ε > 0)

Iterative deepening is generally the preferred uninformed search strategy.

Note that d ≤ 1+C*/ε


You should know…
• Overview of uninformed search methods
• Search strategy evaluation
– Complete? Time? Space? Optimal?
– Max branching (b), Solution depth (d), Max depth (m)
– (for UCS: C*: true cost to optimal goal; ε > 0: minimum step cost)

• Search Strategy Components and Considerations


– Queue? Goal Test when? Tree search vs. Graph search?

• Various blind strategies:


– Breadth-first search
– Uniform-cost search
– Depth-first search
– Iterative deepening search (generally preferred)
– Bidirectional search (preferred if applicable)
Summary
• Problem formulation usually requires abstracting away real-world
details to define a state space that can feasibly be explored

• Variety of uninformed search strategies

• Iterative deepening search uses only linear space and not much
more time than other uninformed algorithms

http://www.cs.rmit.edu.au/AI-Search/Product/
http://aima.cs.berkeley.edu/demos.html (for more demos)
