Heuristic Search

This document discusses and compares several informed search strategies, including bi-directional search, best-first search, greedy best-first search, A* search, heuristics, local search algorithms like hill-climbing search and simulated annealing search, and genetic algorithms. It provides examples and properties of each type of search strategy.

Uploaded by

sahusandipan
Copyright
© Attribution Non-Commercial (BY-NC)

Bi-directional search

• Alternate searching from the start state toward the goal and from
the goal state toward the start.
• Stop when the frontiers intersect.
• Works well only when there are unique start and goal states.
• Requires the ability to generate “predecessor” states.
• Can (sometimes) lead to finding a solution more quickly.
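The idea above can be sketched in a few lines, assuming an undirected graph given as an adjacency dict (so generating "predecessor" states is just a neighbor lookup). The two frontiers advance one layer at a time and the search stops as soon as they intersect; the function names are illustrative, not from the slides:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Alternate BFS from start and from goal; stop when frontiers meet.
    `graph` maps each state to its neighbors (assumed symmetric)."""
    if start == goal:
        return [start]
    parents = ({start: None}, {goal: None})   # parent maps for path recovery
    frontiers = (deque([start]), deque([goal]))
    while frontiers[0] and frontiers[1]:
        for side in (0, 1):                   # alternate the two searches
            for _ in range(len(frontiers[side])):   # expand one layer
                node = frontiers[side].popleft()
                for nbr in graph[node]:
                    if nbr in parents[side]:
                        continue
                    parents[side][nbr] = node
                    if nbr in parents[1 - side]:    # frontiers intersect
                        return _join(parents, nbr)
                    frontiers[side].append(nbr)
    return None

def _join(parents, meet):
    """Splice the two half-paths together at the meeting node."""
    fwd, bwd = [], []
    n = meet
    while n is not None:                      # walk back to start
        fwd.append(n); n = parents[0][n]
    n = parents[1][meet]
    while n is not None:                      # walk forward to goal
        bwd.append(n); n = parents[1][n]
    return fwd[::-1] + bwd
```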
Comparing Search Strategies
Informed search algorithms
Outline
• Best-first search
• Greedy best-first search
• A* search
• Heuristics
• Local search algorithms
• Hill-climbing search
• Simulated annealing search
• Local beam search
• Genetic algorithms
Best-first search
• Idea: use an evaluation function f(n) for each node
  – estimate of "desirability"
  – expand the most desirable unexpanded node first

• Implementation:
  Order the nodes in the fringe by desirability, so that the most
  desirable node is expanded first (i.e., a priority queue)

• Special cases:
– greedy best-first search
– A* search
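A generic sketch of best-first search, assuming `successors(state)` yields `(next_state, step_cost)` pairs; the fringe is a priority queue keyed on f, so the most desirable node is popped first. The two special cases fall out as choices of f: greedy best-first uses f = h, A* uses f = g + h:

```python
import heapq

def best_first_search(start, successors, is_goal, f):
    """Generic best-first search: always expand the unexpanded node with
    the best (lowest) f-value. Greedy best-first: f(n, g) = h(n);
    A*: f(n, g) = g + h(n)."""
    fringe = [(f(start, 0), start, 0, [start])]      # (f, state, g, path)
    best_g = {start: 0}
    while fringe:
        _, state, g, path = heapq.heappop(fringe)
        if is_goal(state):
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):   # keep cheapest route
                best_g[nxt] = g2
                heapq.heappush(fringe, (f(nxt, g2), nxt, g2, path + [nxt]))
    return None, float('inf')
```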
Romania with step costs in km
Greedy best-first search
• Evaluation function f(n) = h(n) (heuristic)
• estimate of cost from n to goal
• e.g., h_SLD(n) = straight-line distance from n to Bucharest
• Greedy best-first search expands the node that
appears to be closest to goal
Greedy best-first search example
Properties of greedy best-first search
• Complete: No – can get stuck in loops, e.g.,
  Iasi -> Neamt -> Iasi -> Neamt -> …
• Time: O(b^m), where m is the maximum depth of the search space, but a
  good heuristic can give dramatic improvement
• Space: O(b^m) – keeps all nodes in memory
• Optimal: No
A* search
• Idea: avoid expanding paths that are already
expensive

• Evaluation function f(n) = g(n) + h(n)
• g(n) = cost so far to reach n
• h(n) = estimated cost from n to goal
• f(n) = estimated total cost of path through n to goal
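Sketched in code, on a directed fragment (for brevity) of the Romania map from the slides; road costs and straight-line distances h_SLD are the standard AIMA values, and A* finds the cost-418 route via Rimnicu Vilcea and Pitesti:

```python
import heapq

# Fragment of the Romania map: road costs and straight-line
# distances to Bucharest (h_SLD), values from the AIMA text.
ROADS = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
    'Sibiu': [('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Fagaras': [('Bucharest', 211)],
    'Rimnicu Vilcea': [('Pitesti', 97)],
    'Pitesti': [('Bucharest', 101)],
    'Timisoara': [], 'Zerind': [], 'Bucharest': [],
}
H_SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Rimnicu Vilcea': 193, 'Pitesti': 100,
         'Bucharest': 0}

def astar(start, goal):
    """A*: order the fringe by f(n) = g(n) + h(n)."""
    fringe = [(H_SLD[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return path, g
        for nxt, cost in ROADS[state]:
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(fringe,
                               (g2 + H_SLD[nxt], g2, nxt, path + [nxt]))
    return None, float('inf')
```

Note how the cheaper route through Pitesti (f = 418) overtakes the Fagaras route (f = 450) before Bucharest is expanded, exactly as in the slides' worked example.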
A* search example
Admissible heuristics
• A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach
the goal state from n.
• An admissible heuristic never overestimates the
cost to reach the goal, i.e., it is optimistic

• Example: h_SLD(n) never overestimates the actual road distance

• Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal
Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and
is in the fringe. Let n be an unexpanded node in the fringe
such that n is on a shortest path to an optimal goal G.

• f(G2) = g(G2)    since h(G2) = 0
• g(G2) > g(G)     since G2 is suboptimal
• f(G) = g(G)      since h(G) = 0
• f(G2) > f(G)     from above
Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and
is in the fringe. Let n be an unexpanded node in the fringe
such that n is on a shortest path to an optimal goal G.

• f(G2) > f(G)     from above
• h(n) ≤ h*(n)     since h is admissible
• g(n) + h(n) ≤ g(n) + h*(n) = g(G) = f(G)   since n is on an optimal path to G
• Hence f(n) ≤ f(G) < f(G2), and A* will never select G2 for expansion
Consistent heuristics
• A heuristic is consistent if for every node n, every successor
n' of n generated by any action a,

h(n) ≤ c(n,a,n') + h(n')

• If h is consistent, we have
f(n') = g(n') + h(n')
= g(n) + c(n,a,n') + h(n')
≥ g(n) + h(n)
= f(n)
• i.e., f(n) is non-decreasing along any path.
• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is
optimal
Optimality of A*

• A* expands nodes in order of increasing f value


• Gradually adds "f-contours" of nodes
• Contour i contains all nodes with f = f_i, where f_i < f_{i+1}
Properties of A*
• Complete: Yes, unless there are infinitely many nodes with f ≤ f(G)
• Time: exponential in the worst case
• Space: keeps all nodes in memory
• Optimal: Yes
Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)

• h1(S) = 8
• h2(S) = 3+1+2+2+2+3+3+2 = 18
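Both heuristics are easy to compute directly. The start state below is an assumption: it is the AIMA 8-puzzle example (blank written as 0), which reproduces the slide's values h1(S) = 8 and h2(S) = 18:

```python
# 8-puzzle heuristics, evaluated on the AIMA example state (blank = 0).
START = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not a tile)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of each tile from its goal square."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(i, 3)
        gr, gc = pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total
```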
Iterative-Deepening A*
• For large problems, A* consumes considerable memory
– Keeps all generated nodes in memory (as for all Graph-Search variants)
– Usually a bigger problem than exponential run-time
– Consider memory-bounded heuristic search
• IDA*—Iterative Deepening A*
– Adapts iterative deepening to heuristic search
– Instead of a depth cutoff, use the evaluation function f(n)
– At each iteration, the cutoff value is the smallest f(n) of any node
  that exceeded the cutoff in the previous iteration
• Strengths
– Practical for many unit-step-cost problems
– Avoids overhead of large priority queue
• Weaknesses
– Some of the same problems as Uniform-Cost Search
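The iteration just described can be sketched compactly: a depth-first search bounded by f = g + h, raising the bound each round to the smallest f-value that exceeded it. The toy graph and heuristic below are assumptions for illustration (the heuristic is admissible):

```python
def ida_star(start, successors, h, is_goal):
    """IDA*: depth-first search with an f = g + h cutoff; each iteration
    raises the cutoff to the smallest f-value that exceeded it."""
    bound = h(start)
    path = [start]
    def dfs(node, g, bound):
        f = g + h(node)
        if f > bound:
            return f                 # report the f-value that broke the cutoff
        if is_goal(node):
            return True
        minimum = float('inf')
        for nxt, cost in successors(node):
            if nxt in path:          # avoid cycles along the current path
                continue
            path.append(nxt)
            t = dfs(nxt, g + cost, bound)
            if t is True:
                return True
            minimum = min(minimum, t)
            path.pop()
        return minimum
    while True:
        t = dfs(start, 0, bound)
        if t is True:
            return path
        if t == float('inf'):
            return None              # no solution
        bound = t                    # next iteration's cutoff

# Toy example graph with an admissible heuristic.
GRAPH = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 2)], 'D': []}
H = {'A': 3, 'B': 2, 'C': 2, 'D': 0}
```

Note that only the current path is stored, so memory use is linear in the search depth, at the price of re-expanding nodes in each iteration.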
RBFS: Recursive Best-First Search
• Characteristics
– Recursive algorithm; mimics standard best-first search
– Uses only linear space
• Behavior
– Tracks the f-value of the best alternative path available from any
ancestor of the current node
– If current node exceeds this bound, recursion unwinds to
alternative path
– While unwinding, replace f -value of each node along the
path with best f -value of its children
• Remember f -value of best leaf in abandoned subtree
• Possible to return to it later if necessary
RBFS Search Example
• RBFS(B, 447)
• Goal-Test succeeds
  – Recursion unwinds completely
  – Returns B with f = 418
• Optimal solution path is A–S–R–P–B
  (Arad–Sibiu–Rimnicu Vilcea–Pitesti–Bucharest, cost 418)
Properties of RBFS
• Can regenerate nodes many times
  – Closer to the goal, the f-value often increases
    • The heuristic h(n) becomes "less optimistic"
    • The path cost g(n) begins to dominate
  – Another path may then appear better
  – Search may "oscillate" among two or more paths, regenerating nodes
• Complete?
• Optimal? Yes, if h(n) is admissible
• Time complexity: hard to say
  – Depends on
    • accuracy of the heuristic function
    • how often the best path changes
  – Potentially exponential, as "forgotten" paths are repeatedly re-explored
• Space complexity: O(m)
  – Uses too little memory by forgetting too much
Memory-Bounded A*
• SMA*—Simplified, Memory-Bounded A*
– As in A*, expand best node until memory limit reached
– Drop the worst leaf node (having the highest f -value)
– As in RBFS, keep the f -value of the dropped node in its parent
– Can regenerate forgotten subtree, but only if everything else looks
worse

• Complete? Yes, if there is a solution path that fits in memory


• Optimal? Yes, if any optimal solution does so
• Subject to thrashing – switching constantly among a small set of
  candidate solutions
  – Limited memory can make the problem computationally intractable
Local Search
• Local Search
– New class of algorithms
– Don’t care about paths at all
– Operate using a single current state
– Usually move only to neighbors of the current state
• Advantages
– Low memory consumption (usually constant)
– Find “reasonable” solutions in large state spaces
• Can also solve optimization problems
– Find best state according to objective function
– Such problems don’t fit the ordinary search model
Hill Climbing Search
• Basic Algorithm
– Simple loop that moves in direction of increasing value (“uphill”)
– Choose randomly if multiple successors have same “best” value
– Terminates when no neighbor of higher value
• Characteristics
– No search tree
– Current node tracks only
• Corresponding state
• Value of objective function
• Hill Climbing is greedy
– Always picks best neighboring state without further consideration
• Problematic landscape features for greedy local search
  – Local maximum – trapped at a peak lower than the global maximum
  – Ridge – a sequence of local maxima that is difficult for greedy
    algorithms to navigate
  – Plateau – flat area of the landscape; can't make progress
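The basic loop can be sketched generically; `neighbors` and `value` are assumptions standing in for the problem's successor function and objective:

```python
import random

def hill_climb(state, neighbors, value, rng=random):
    """Steepest-ascent hill climbing: move to the best neighbor, breaking
    ties at random; stop when no neighbor improves the current state."""
    while True:
        nbrs = neighbors(state)
        if not nbrs:
            return state
        best = max(value(n) for n in nbrs)
        if best <= value(state):
            return state           # local maximum (or edge of a plateau)
        state = rng.choice([n for n in nbrs if value(n) == best])

# Toy landscape: maximize -(x - 3)^2 over the integers, stepping by 1.
peak = hill_climb(0,
                  neighbors=lambda x: [x - 1, x + 1],
                  value=lambda x: -(x - 3) ** 2)
```

On this single-peak toy landscape the climb from 0 ends at the global maximum x = 3; on a landscape with the features listed above it would stop at whatever local maximum it reaches first.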
State Space Landscape
Variants of Hill Climbing
• Stochastic Hill Climbing
– Choose at random from among uphill moves
– Generally converges more slowly; may find better
solutions
• First-Choice Hill Climbing
– Stochastic hill climbing
– Successors generated randomly until one better than
current state found
• Random-Restart Hill Climbing
  – Conduct multiple searches from random initial states
  – The previous variants are incomplete – they get stuck on local maxima
  – Random-restart is complete with probability approaching 1
Simulated Annealing Search
• Hill Climbing
– Never makes a “downhill” move
– Efficient, but incomplete (gets stuck on local maxima)
• Random Walk
– Choose successor randomly
– Complete, but inefficient
• Simulated Annealing
– Combine Hill Climbing (efficient) and Random Walk (complete)
– Anneal—(metallurgy) “free from internal stress by heating and
gradually cooling”
• Analogy
– Ball rolling on state space landscape
– Agitate the landscape to get ball to global minimum
• At first, vigorously (high temperature) to escape local minima
• Later, gently (low temperature) to avoid leaving global minimum
Simulated Annealing Algorithm
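The algorithm on this slide (AIMA's Simulated-Annealing) can be sketched as below. The landscape, cooling schedule, and seeds are toy assumptions, chosen so that plain hill climbing from x = 0 would stop at the local maximum at x = 2 while annealing can cross the valley to the global maximum at x = 10:

```python
import math
import random

def simulated_annealing(state, neighbors, value, schedule, rng):
    """AIMA-style simulated annealing: always accept uphill moves; accept
    a downhill move of size delta with probability exp(delta / T)."""
    current = state
    t = 0
    while True:
        t += 1
        T = schedule(t)
        if T < 1e-3:                       # "frozen": stop
            return current
        nxt = rng.choice(neighbors(current))
        delta = value(nxt) - value(current)
        if delta > 0 or rng.random() < math.exp(delta / T):
            current = nxt

# Toy landscape: local maximum at x = 2, global maximum at x = 10.
VALS = [0, 2, 3, 2, 1, 0, 2, 5, 8, 9, 10, 9, 8]

def anneal_from_zero(seed):
    return simulated_annealing(
        0,
        neighbors=lambda x: [max(0, x - 1), min(12, x + 1)],
        value=lambda x: VALS[x],
        schedule=lambda t: 10 * 0.99 ** t,   # geometric cooling
        rng=random.Random(seed))
```

Early on (high T) downhill moves are accepted almost freely, like a random walk; as T falls, the behavior approaches pure hill climbing.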
Local Beam Search
• Local Beam Search keeps k states instead of one

• Local-Beam-Search Algorithm
1 Begin with k random states;
2 for each step do
3 Generate all successors of all k states;
4 if Goal found then return solution;
5 Retain best k states;

• Better than multiple random restarts
  – Useful information passed among searches
  – Less promising states discarded
  – Search moves to where best progress made
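The five-step algorithm above might look like this in code; the toy problem (walk integers toward a target value) is an assumption for illustration:

```python
import random

def local_beam_search(k, initial, successors, value, is_goal, steps=100):
    """Local beam search: generate all successors of all k states,
    return a goal if one appears, otherwise retain the best k."""
    states = list(initial)
    for _ in range(steps):
        candidates = [s2 for s in states for s2 in successors(s)]
        for s2 in candidates:
            if is_goal(s2):
                return s2
        if not candidates:
            break
        states = sorted(candidates, key=value, reverse=True)[:k]
    return max(states, key=value)

# Toy run: three random starts on 0..20, climbing toward x = 10.
rng = random.Random(1)
found = local_beam_search(
    3, [rng.randrange(21) for _ in range(3)],
    successors=lambda x: [max(0, x - 1), min(20, x + 1)],
    value=lambda x: -abs(x - 10),
    is_goal=lambda x: x == 10)
```

Unlike k independent restarts, the pruning step concentrates all k slots on the most promising states found so far.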
Stochastic Beam Search

• Local beam search can end up concentrating on a small


region of the state space

• Stochastic Beam Search


– Choose k successors at random
– Probability of choosing a successor an increasing
function of its value
Genetic algorithms
• GAs are a subclass of evolutionary computing
• A genetic algorithm generates successor states by combining two
  parent states
• The population begins with k randomly generated states
• Populations are composed of individuals
  – Represented as a string of bits or numbers that captures the state
• Generating new states
  – Each state is rated by an evaluation function (fitness function)
    • Better states have higher values
  – States are chosen to reproduce with probability proportional to
    fitness
  – Randomly choose a crossover point and merge the parent states
  – Randomly and rarely, apply mutation
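A minimal sketch of the scheme just described: fitness-proportional selection, single-point crossover, and rare mutation. The OneMax fitness (count the 1-bits) is an assumption used as a stand-in objective:

```python
import random

def onemax(bits):
    """Toy fitness function: number of 1-bits in the individual."""
    return sum(bits)

def genetic_algorithm(fitness, n_bits, pop_size=20, generations=300,
                      p_mutation=0.05, rng=None):
    """GA sketch: selection proportional to fitness, single-point
    crossover at a random cut, and rare single-bit mutation."""
    rng = rng or random.Random(0)
    pop = [[rng.randrange(2) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Selection weights; +1 keeps all-zero individuals selectable.
        weights = [fitness(ind) + 1 for ind in pop]
        new_pop = []
        for _ in range(pop_size):
            mom, dad = rng.choices(pop, weights=weights, k=2)
            cut = rng.randrange(1, n_bits)        # crossover point
            child = mom[:cut] + dad[cut:]
            if rng.random() < p_mutation:         # rare mutation
                i = rng.randrange(n_bits)
                child[i] = 1 - child[i]
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness)     # track best ever seen
    return best

best = genetic_algorithm(onemax, n_bits=12)
```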