Heuristic Search
Bidirectional Search
• Alternate searching from the start state toward the goal and from
the goal state toward the start.
• Stop when the frontiers intersect.
• Works well only when there are unique start and goal states.
• Requires the ability to generate “predecessor” states.
• Can (sometimes) lead to finding a solution more quickly.
Comparing Search Strategies
Informed search algorithms
Outline
• Best-first search
• Greedy best-first search
• A* search
• Heuristics
• Local search algorithms
• Hill-climbing search
• Simulated annealing search
• Local beam search
• Genetic algorithms
Best-first search
• Idea: use an evaluation function f(n) for each node
– an estimate of "desirability"
– expand the most desirable unexpanded node
• Implementation:
Order the nodes in the fringe so the most desirable (lowest f(n)) comes first
• Special cases:
– greedy best-first search
– A* search
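The idea above can be sketched with a priority queue ordered by f(n); the tiny graph, goal test, and f below are hypothetical placeholders, not from the slides:

```python
import heapq
import math

def best_first_search(start, is_goal, successors, f):
    """Generic best-first search: repeatedly expand the frontier node
    with the lowest f-score. successors(s) yields (child, step_cost)."""
    frontier = [(f(0, start), 0, start, [start])]  # (score, g, state, path)
    explored = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for child, cost in successors(state):
            if child not in explored:
                g2 = g + cost
                heapq.heappush(frontier, (f(g2, child), g2, child, path + [child]))
    return None, math.inf

# The special cases differ only in the choice of f:
#   uniform-cost:       f = lambda g, s: g
#   greedy best-first:  f = lambda g, s: h[s]
#   A*:                 f = lambda g, s: g + h[s]
```

Note that "most desirable first" means popping the smallest f-score, since f estimates cost.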
Romania with step costs in km
Greedy best-first search
• Evaluation function f(n) = h(n) (heuristic)
– estimate of cost from n to the goal
– e.g., hSLD(n) = straight-line distance from n to Bucharest
• Greedy best-first search expands the node that
appears to be closest to goal
Greedy best-first search example
Properties of greedy best-first search
• Complete: No – can get stuck in loops, e.g., Iasi -> Neamt -> Iasi -> Neamt -> …
• Time: O(b^m), but a good heuristic can give dramatic
improvement
• Space: O(b^m) -- keeps all nodes in memory
• Optimal: No
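A sketch of greedy best-first search on a fragment of the Romania map (straight-line distances from the standard textbook example) illustrates "Optimal: No" concretely:

```python
import heapq

# Straight-line distances to Bucharest and a fragment of the Romania map
# (values from the standard textbook example).
H = {'Arad': 366, 'Bucharest': 0, 'Fagaras': 176, 'Oradea': 380,
     'Pitesti': 100, 'Rimnicu Vilcea': 193, 'Sibiu': 253,
     'Timisoara': 329, 'Zerind': 374}
G = {'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
     'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Oradea', 151),
               ('Rimnicu Vilcea', 80)],
     'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
     'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
     'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)]}

def greedy_best_first(graph, h, start, goal):
    """Expand the frontier node with the smallest h-value, ignoring path cost."""
    frontier = [(h[start], start, [start], 0)]  # (h, state, path, g)
    explored = set()
    while frontier:
        _, state, path, g = heapq.heappop(frontier)
        if state == goal:
            return path, g
        explored.add(state)
        for child, cost in graph.get(state, []):
            if child not in explored:
                heapq.heappush(frontier, (h[child], child, path + [child], g + cost))
    return None, None
```

From Arad this expands Sibiu (h = 253), then Fagaras (h = 176), and reaches Bucharest with cost 140 + 99 + 211 = 450, whereas the optimal route through Rimnicu Vilcea and Pitesti costs 418.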
A* search
• Idea: avoid expanding paths that are already
expensive
• Evaluation function f(n) = g(n) + h(n)
– g(n) = cost so far to reach n; h(n) = estimated cost from n to the goal
• If h is consistent, i.e., h(n) ≤ c(n,a,n') + h(n'), we have
f(n') = g(n') + h(n')
      = g(n) + c(n,a,n') + h(n')
      ≥ g(n) + h(n)
      = f(n)
• i.e., f(n) is non-decreasing along any path.
• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is
optimal
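A minimal A* sketch on a fragment of the Romania map (straight-line distances from the standard textbook example); with a consistent h, the first time the goal is popped its g-value is optimal:

```python
import heapq
import math

# Straight-line distances to Bucharest and a fragment of the Romania map
# (values from the standard textbook example).
H = {'Arad': 366, 'Bucharest': 0, 'Fagaras': 176, 'Oradea': 380,
     'Pitesti': 100, 'Rimnicu Vilcea': 193, 'Sibiu': 253,
     'Timisoara': 329, 'Zerind': 374}
G = {'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
     'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Oradea', 151),
               ('Rimnicu Vilcea', 80)],
     'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
     'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
     'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)]}

def a_star(graph, h, start, goal):
    """Expand the frontier node with the smallest f = g + h."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if g > best_g.get(state, math.inf):
            continue  # stale queue entry; a cheaper path was found later
        for child, cost in graph.get(state, []):
            g2 = g + cost
            if g2 < best_g.get(child, math.inf):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h[child], g2, child, path + [child]))
    return None, math.inf
```

Here A* returns Arad–Sibiu–Rimnicu Vilcea–Pitesti–Bucharest with cost 418, avoiding the cheaper-looking but more expensive Fagaras route (450).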
Optimality of A*
• Time: exponential in the worst case
• Optimal: Yes, with an admissible (tree search) or consistent (graph search) heuristic
Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., sum over tiles of the number of squares each tile is from its desired location)
• h1(S) = 8
• h2(S) = 3+1+2+2+2+3+3+2 = 18
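Both heuristics can be sketched directly; the start state S below is assumed to be the standard textbook 8-puzzle example (0 = blank; the goal places the blank first, then tiles 1–8 in reading order):

```python
# Assumed start state S (standard textbook example) and goal, row by row.
START = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state, goal=GOAL):
    """Number of tiles (blank excluded) not on their goal square."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum over tiles of Manhattan distance |dr| + |dc| to the goal square."""
    goal_pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = i // 3, i % 3
            gr, gc = goal_pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total
```

For this state h1(S) = 8 (every tile is misplaced) and h2(S) = 18, matching the sums given above.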
Iterative-Deepening A*
• For large problems, A* consumes considerable memory
– Keeps all generated nodes in memory (as for all Graph-Search variants)
– Usually a bigger problem than exponential run-time
– Consider memory-bounded heuristic search
• IDA*—Iterative Deepening A*
– Adapts iterative deepening to heuristic search
– Instead of a depth cutoff, use the evaluation function f(n)
– At each iteration, the cutoff value is the smallest f(n) of any node that
exceeded the cutoff in the previous iteration
• Strengths
– Practical for many unit-step-cost problems
– Avoids overhead of large priority queue
• Weaknesses
– Some of the same problems as Uniform-Cost Search
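A minimal IDA* sketch on a fragment of the Romania map (straight-line distances from the standard textbook example); each iteration is a depth-first search bounded by an f-cutoff:

```python
import math

# Straight-line distances to Bucharest and a fragment of the Romania map
# (values from the standard textbook example).
H = {'Arad': 366, 'Bucharest': 0, 'Fagaras': 176, 'Oradea': 380,
     'Pitesti': 100, 'Rimnicu Vilcea': 193, 'Sibiu': 253,
     'Timisoara': 329, 'Zerind': 374}
G = {'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
     'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Oradea', 151),
               ('Rimnicu Vilcea', 80)],
     'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
     'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
     'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)]}

def ida_star(graph, h, start, goal):
    def search(path, g, bound):
        node = path[-1]
        f = g + h[node]
        if f > bound:
            return f                 # report the f-value that exceeded the cutoff
        if node == goal:
            return path
        smallest = math.inf
        for child, cost in graph.get(node, []):
            if child not in path:    # avoid cycles along the current path
                result = search(path + [child], g + cost, bound)
                if isinstance(result, list):
                    return result
                smallest = min(smallest, result)
        return smallest

    bound = h[start]                 # first cutoff: f of the root
    while True:
        result = search([start], 0, bound)
        if isinstance(result, list):
            return result, bound
        if result == math.inf:
            return None, bound       # no solution
        bound = result               # next cutoff: smallest exceeded f-value
```

Only the current path is stored, so memory is linear in the solution depth; the cost is that shallow parts of the tree are re-searched on every iteration.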
RBFS: Recursive Best-First Search
• Characteristics
– Recursive algorithm; mimics standard best-first search
– Uses only linear space
• Behavior
– Tracks the f-value of the best alternative path available from any
ancestor of the current node
– If current node exceeds this bound, recursion unwinds to
alternative path
– While unwinding, replace the f-value of each node along the
path with the best f-value of its children
• Remember the f-value of the best leaf in the abandoned subtree
• Possible to return to it later if necessary
RBFS Search Example
• RBFS(B, 447)
• Goal-Test succeeds
– Recursion unwinds completely
– Returns B with f-value 418
• Optimal solution path is A—S—R—P—B (cost 418)
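The behavior above can be sketched as follows (a fragment of the Romania map and straight-line distances assumed from the standard textbook example); the recursion reaches Bucharest under the limit 447 and backs up the value 418:

```python
import math

# Straight-line distances to Bucharest and a fragment of the Romania map
# (values from the standard textbook example).
H = {'Arad': 366, 'Bucharest': 0, 'Fagaras': 176, 'Oradea': 380,
     'Pitesti': 100, 'Rimnicu Vilcea': 193, 'Sibiu': 253,
     'Timisoara': 329, 'Zerind': 374}
G = {'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
     'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Oradea', 151),
               ('Rimnicu Vilcea', 80)],
     'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
     'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
     'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)]}

def rbfs(graph, h, node, goal, g, f_limit):
    """Recursive best-first search; returns (path or None, backed-up f-value).
    Simplified sketch: backed-up values are recomputed on re-expansion."""
    if node == goal:
        return [node], g + h[node]
    successors = []
    for child, cost in graph.get(node, []):
        # a child's f-value is at least its parent's (consistent h)
        f = max(g + cost + h[child], g + h[node])
        successors.append([f, child, cost])
    if not successors:
        return None, math.inf
    while True:
        successors.sort(key=lambda s: s[0])
        best = successors[0]
        if best[0] > f_limit:
            return None, best[0]       # unwind; report best f of this subtree
        alternative = successors[1][0] if len(successors) > 1 else math.inf
        result, best[0] = rbfs(graph, h, best[1], goal,
                               g + best[2], min(f_limit, alternative))
        if result is not None:
            return [node] + result, best[0]
```

Tracing this from Arad, the call on Bucharest is made with limit min(447, …) = 447, the goal test succeeds, and 418 propagates back up the A—S—R—P—B path, matching the example.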
RBFS Properties
• Can regenerate nodes many times
– Closer to the goal, the f-value often increases
• Heuristic function h(n) becomes "less optimistic"
• Path cost g(n) begins to dominate
• Local-Beam-Search Algorithm
– Begin with k random states
– At each step, generate all successors of all k states
– If a goal is found, return the solution
– Otherwise, retain the best k states and repeat
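The loop above can be sketched on a toy problem; climbing to x = 42 on the integers is an assumed example, not from the slides:

```python
import random

def local_beam_search(k, initial, successors, score, is_goal, max_steps=1000):
    """Keep only the k best states; each step expands all of them at once."""
    states = [initial() for _ in range(k)]      # begin with k random states
    for _ in range(max_steps):
        pool = []
        for s in states:                        # generate all successors
            pool.extend(successors(s))
        for s in pool:                          # goal test
            if is_goal(s):
                return s
        pool.sort(key=score, reverse=True)      # retain the best k states
        states = pool[:k]
    return None

random.seed(0)
best = local_beam_search(
    k=3,
    initial=lambda: random.randint(0, 100),
    successors=lambda x: [x - 1, x + 1],
    score=lambda x: -abs(x - 42),   # higher is better; peak at x = 42
    is_goal=lambda x: x == 42,
)
```

Unlike k independent hill climbs, the beams share one pool, so successors of the most promising states crowd out the rest.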