
CS 331: Artificial Intelligence
Informed Search

Informed Search

• How can we make search smarter?
• Use problem-specific knowledge beyond the definition of the problem itself
• Specifically, incorporate knowledge of how good a non-goal state is

Best-First Search

• Node selected for expansion based on an evaluation function f(n), i.e. expand the node that appears to be the best
• Node with lowest evaluation is selected for expansion
• Uses a priority queue
• We'll talk about Greedy Best-First Search and A* Search

Heuristic Function

• h(n) = estimated cost of the cheapest path from node n to a goal node
• h(goal node) = 0
• Contains additional knowledge of the problem
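As a concrete illustration of the priority-queue idea, here is a minimal best-first search skeleton in Python (a hedged sketch, not the course's code; the successors and f callables are assumptions supplied by the caller). Greedy Best-First Search and A* differ only in the evaluation function f they plug in.

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: repeatedly pop the node with the lowest
    f-value from a priority queue and expand it.
    successors(state) yields (next_state, step_cost) pairs;
    f(state, path_cost) is the evaluation function."""
    frontier = [(f(start, 0), 0, start, [start])]      # (f, g, state, path)
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            heapq.heappush(frontier, (f(nxt, g2), g2, nxt, path + [nxt]))
    return None, float("inf")
```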

Greedy Best-First Search

• Expands the node that is closest to the goal
• f(n) = h(n)

Greedy Best-First Search Example

[Map figure: Oregon road map with the "actual" driving distance in miles on each road. Initial state: Corvallis; goal state: Wilsonville. Road segments used in the example: Corvallis–Albany 11, Albany–Salem 26, Salem–Wilsonville 30, Corvallis–McMinnville 46, McMinnville–Wilsonville 28, McMinnville–Portland 38, Portland–Wilsonville 18.]

Straight-line distance (as the crow flies) to Wilsonville, in miles:

Corvallis     56
Albany        49
Salem         28
Portland      17
McMinnville   18

Greedy Best-First Search Example (continued)

[Search tree: Corvallis is expanded first, generating McMinnville (h = 18) and Albany (h = 49). McMinnville has the lowest h, so it is expanded next.]

[Search tree: expanding McMinnville generates Portland (h = 17), Corvallis (h = 56), and Wilsonville (h = 0). Wilsonville is the goal, so Greedy Best-First Search returns the route Corvallis → McMinnville → Wilsonville = 74 miles.]

But the route Corvallis → Albany → Salem → Wilsonville = 67 miles is much shorter than the route found by Greedy Best-First Search!
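This trace can be reproduced with a short Python sketch. The road graph below is reconstructed from the slides' map and expansion figures (roads that never appear in the trace may be missing), so treat the edge list as an assumption rather than the official map data.

```python
import heapq

# Reconstructed road map (driving miles) and straight-line distances to Wilsonville.
ROADS = {
    "Corvallis":   [("Albany", 11), ("McMinnville", 46)],
    "Albany":      [("Corvallis", 11), ("Salem", 26)],
    "Salem":       [("Albany", 26), ("Wilsonville", 30)],
    "McMinnville": [("Corvallis", 46), ("Wilsonville", 28), ("Portland", 38)],
    "Portland":    [("McMinnville", 38), ("Wilsonville", 18)],
    "Wilsonville": [("Salem", 30), ("McMinnville", 28), ("Portland", 18)],
}
H = {"Corvallis": 56, "Albany": 49, "Salem": 28,
     "Portland": 17, "McMinnville": 18, "Wilsonville": 0}

def greedy_best_first(start, goal):
    """Greedy Best-First Search: order the frontier by f(n) = h(n) only."""
    frontier = [(H[start], start, [start], 0)]   # (h, city, path, miles so far)
    while frontier:
        _, city, path, miles = heapq.heappop(frontier)
        if city == goal:
            return path, miles
        for nxt, step in ROADS[city]:
            heapq.heappush(frontier, (H[nxt], nxt, path + [nxt], miles + step))

print(greedy_best_first("Corvallis", "Wilsonville"))
# (['Corvallis', 'McMinnville', 'Wilsonville'], 74)
```

Because f(n) = h(n) ignores the miles already driven, the search commits to McMinnville (h = 18) and misses the shorter route through Albany and Salem.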

Evaluating Greedy Best-First Search

Complete?          No (could start down an infinite path)
Optimal?           No
Time Complexity    O(b^m), where m is the maximum depth of the search space
Space Complexity   O(b^m)

Greedy Best-First Search results in lots of dead ends, which leads to unnecessary nodes being expanded.

A* Search

• A much better alternative to greedy best-first search
• Evaluation function for A* is: f(n) = g(n) + h(n), where g(n) = path cost from the start node to n
• If h(n) satisfies certain conditions, A* search is optimal and complete!

Admissible Heuristics

• A* is optimal if h(n) is an admissible heuristic
• An admissible heuristic is one that never overestimates the cost to reach the goal
• Admissible heuristic = optimistic
• Straight-line distance was an admissible heuristic

A* Search Example

[Same Oregon map as before: "actual" driving distances on the roads, initial state Corvallis, goal state Wilsonville, and straight-line distance to Wilsonville as the heuristic. Now f(n) = g(n) + h(n).]

[Search tree: the root Corvallis has f = 0 + 56 = 56.]

[Search tree: expanding Corvallis generates McMinnville (f = 46 + 18 = 64) and Albany (f = 11 + 49 = 60). Albany has the lowest f, so it is expanded next, generating Salem (f = 37 + 28 = 65) and Corvallis (f = 22 + 56 = 78).]

[Search tree: McMinnville (f = 64) is now the cheapest node and is expanded, generating Portland (f = 84 + 17 = 101), Corvallis (f = 92 + 56 = 148), and Wilsonville (f = 74 + 0 = 74). Next, Salem (f = 65) is expanded, generating Wilsonville (f = 67 + 0 = 67) and Albany (f = 63 + 49 = 112). Wilsonville with f = 67 is then popped, so A* returns Corvallis → Albany → Salem → Wilsonville = 67 miles.]

Note: Don't stop when you put a goal state on the priority queue (otherwise you get a suboptimal solution).

Proper termination: Stop when you pop a goal state from the priority queue.
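The full trace can be reproduced with a hedged A* sketch (tree-search version, my own illustration rather than the course's code), reusing the ROADS and H dictionaries from the greedy sketch above. Note that it stops only when a goal state is popped from the priority queue.

```python
import heapq

def a_star(start, goal):
    """A* search: order the frontier by f(n) = g(n) + h(n)."""
    frontier = [(H[start], 0, start, [start])]   # (f, g, city, path)
    while frontier:
        f, g, city, path = heapq.heappop(frontier)
        if city == goal:                         # proper termination: goal popped
            return path, g
        for nxt, step in ROADS[city]:
            g2 = g + step
            heapq.heappush(frontier, (g2 + H[nxt], g2, nxt, path + [nxt]))

print(a_star("Corvallis", "Wilsonville"))
# (['Corvallis', 'Albany', 'Salem', 'Wilsonville'], 67)
```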

Proof that A* using TREE-SEARCH is optimal if h(n) is admissible

• Suppose A* returns a suboptimal goal node G2.
• G2 must be the least-cost node in the fringe when it is selected. Let the cost of the optimal solution be C*.
• Because G2 is suboptimal, and h(G2) = 0 because it is a goal node:
  f(G2) = g(G2) + h(G2) = g(G2) > C*
• Now consider a fringe node n on an optimal solution path to the goal G.
• If h(n) is admissible then:
  f(n) = g(n) + h(n) ≤ C*
• We have shown that f(n) ≤ C* < f(G2), so G2 will not get expanded before n. Hence A* must return an optimal solution.

What about search graphs (more than one path to a node)?

• What if we expand a state we've already seen?
• Suppose we use the GRAPH-SEARCH solution and do not expand repeated nodes
• We could then discard the optimal path if it's not the first one generated
• One simple solution: ensure the optimal path to any repeated state is always the first one followed (as in Uniform-cost search)
• This requires an extra requirement on h(n) called consistency (or monotonicity)

Consistency

• A heuristic is consistent if, for every node n and every successor n' of n generated by any action a:
  h(n) ≤ c(n,a,n') + h(n')
  where c(n,a,n') is the step cost of going from n to n' by doing action a
• This is a form of the triangle inequality: each side of the triangle cannot be longer than the sum of the other two sides

[Figure: triangle with corners n, n', and the goal G; the sides are c(n,a,n'), h(n'), and h(n). Consistent example: h(n') = 1, c(n,a,n') = 2, h(n) = 2, since 2 ≤ 2 + 1. Inconsistent example: h(n') = 1, c(n,a,n') = 2, h(n) = 4, since 4 > 2 + 1.]

• Every consistent heuristic is also admissible
• A* using GRAPH-SEARCH is optimal if h(n) is consistent
• Most admissible heuristics are also consistent
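As an illustration (my sketch, not from the slides), the consistency condition can be checked mechanically over a graph; here it reuses the ROADS and H dictionaries from the road-map sketches above.

```python
def is_consistent(roads, h):
    """Check h(n) <= c(n, a, n') + h(n') for every edge (n, n') in the graph."""
    return all(h[n] <= cost + h[n2]
               for n, edges in roads.items()
               for n2, cost in edges)

print(is_consistent(ROADS, H))   # True: straight-line distance is consistent here
```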

Consistency (continued)

• Claim: If h(n) is consistent, then the values of f(n) along any path are nondecreasing
• Proof: Suppose n' is a successor of n. We want to show f(n') ≥ f(n).
  Then g(n') = g(n) + c(n,a,n') for some action a, and
  f(n') = g(n') + h(n')
        = g(n) + c(n,a,n') + h(n')
        ≥ g(n) + h(n)            (from the definition of consistency: c(n,a,n') + h(n') ≥ h(n))
        = f(n)
• Thus, the sequence of nodes expanded by A* is in nondecreasing order of f(n)
• The first goal selected for expansion must therefore be an optimal solution, since all later nodes will be at least as expensive

A* is Optimally Efficient

• Among optimal algorithms that expand search paths from the root, A* is optimally efficient for any given heuristic function
• Optimally efficient: no other optimal algorithm is guaranteed to expand fewer nodes than A*
  – Fine print: except that A* might possibly expand more nodes with f(n) = C*, where C* is the cost of the optimal path (tie-breaking issues)
• Any algorithm that does not expand all nodes with f(n) < C* runs the risk of missing the optimal solution

Evaluating A* Search

With a consistent heuristic, A* is complete, optimal, and optimally efficient. Could this be the answer to our searching problems?

The Dark Side of A*…

• Time complexity is exponential (although it can be reduced significantly with a good heuristic)
• The really bad news: space complexity is exponential (we usually need to store all generated states). A* typically runs out of space on large-scale problems.

Summary of A* Search

Complete?          Yes, if h(n) is consistent, b is finite, and all step costs exceed some finite ε (1)
Optimal?           Yes, if h(n) is consistent and admissible
Time Complexity    O(b^d) in the worst case, but a good heuristic can reduce this significantly
Space Complexity   O(b^d) – needs O(number of states); will run out of memory for large search spaces

(1) Since f(n) is nondecreasing, we must eventually hit an f(n) equal to the cost of the path to a goal state.

Iterative Deepening A*

• Use the iterative deepening trick to reduce the memory requirements of A*
• In each iteration, do a "cost-limited" depth-first search
  – The cutoff is based on the f-cost (g + h) rather than the depth
• After each iteration, the new cutoff is the smallest f-cost that exceeded the cutoff in the previous iteration
• Complete and optimal, but more costly than A*, and it can take a while to run with real-valued costs
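A hedged sketch of the cost-limited iteration (my illustration under the same assumptions as the road-map sketches above, reusing their ROADS and H dictionaries; not the course's implementation):

```python
def ida_star(start, goal, roads, h):
    """Iterative Deepening A*: repeated cost-limited depth-first searches.
    The cutoff is on f = g + h; after each iteration the cutoff becomes the
    smallest f-value that exceeded the previous cutoff."""
    def dfs(path, g, bound):
        node = path[-1]
        f = g + h[node]
        if f > bound:
            return f, None                     # over the cutoff: report this f
        if node == goal:
            return f, list(path)
        smallest = float("inf")
        for nxt, cost in roads[node]:
            if nxt in path:                    # avoid trivial cycles
                continue
            t, found = dfs(path + [nxt], g + cost, bound)
            if found is not None:
                return t, found
            smallest = min(smallest, t)
        return smallest, None

    bound = h[start]
    while True:
        t, found = dfs([start], 0, bound)
        if found is not None:
            return found
        if t == float("inf"):                  # nothing left to explore
            return None
        bound = t                              # new cutoff for the next iteration

print(ida_star("Corvallis", "Wilsonville", ROADS, H))
# ['Corvallis', 'Albany', 'Salem', 'Wilsonville']
```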

Examples of heuristic functions

The 8-puzzle:

Start State:     End State:
7 2 4            _ 1 2
5 _ 6            3 4 5
8 3 1            6 7 8

Heuristic #1: h1 = number of misplaced tiles, e.g. the start state has 8 misplaced tiles. This is an admissible heuristic.

Heuristic #2: h2 = total Manhattan distance (sum of horizontal and vertical moves, no diagonal moves). The start state is 3+1+2+2+3+2+2+3 = 18 moves away from the end state. This is also an admissible heuristic.

Which heuristic is better?

• h2 dominates h1. That is, for any node n, h2(n) ≥ h1(n).
• A* using h2 never expands more nodes than A* using h1 (except possibly for some nodes with f(n) = C*)

Proof: Let h1 and h2 be admissible heuristics. Every node with f(n) < C* will surely be expanded, since A* is optimal with an admissible heuristic. Since f(n) = g(n) + h(n), every node with h(n) < C* − g(n) will surely be expanded, for either heuristic. Since h2 is at least as big as h1 for all nodes, every node expanded by A* using h2 will also be expanded by A* using h1, but A* using h1 might expand other nodes as well. In other words, whenever a node satisfies h2(n) < C* − g(n) it also satisfies h1(n) ≤ h2(n) < C* − g(n).

• It is better to use h2, provided it doesn't overestimate (i.e., it is also admissible) and its computation time isn't too expensive.
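A small hedged sketch computing both heuristics for the start state shown above (the board encoding and helper names are my own):

```python
# 0 denotes the blank; boards are read row by row.
START = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
GOAL  = (0, 1, 2,
         3, 4, 5,
         6, 7, 8)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of each tile from its goal cell."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

print(h1(START), h2(START))   # 8 18
```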

Which heuristic is better?

Number of nodes expanded (from Russell and Norvig, Figure 4.8; results averaged over 100 instances of the 8-puzzle for depths 2–24):

Depth   IDS        A*(h1)   A*(h2)
2       10         6        6
4       112        13       12
6       680        20       18
8       6384       39       25
10      47127      93       39
12      3644035    227      73
14      -          539      113
16      -          1301     211
18      -          3056     363
20      -          7276     676
22      -          18094    1219
24      -          39135    1641

Inventing Admissible Heuristics

• Relaxed problem: a problem with fewer restrictions on the actions
• The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem
• If we relax the rules so that a square can move anywhere, we get heuristic h1
• If we relax the rules to allow a square to move to any adjacent square, we get heuristic h2

What you should know

• Be able to run A* by hand on a simple example
• Why it is important for a heuristic to be admissible and consistent
• Pros and cons of A*
• How to come up with heuristics
• What it means for one heuristic function to dominate another
