Week-5 - Informed Search Strategies

Dr. Ahmet Esad TOP
[email protected]
o Informed Search
o Heuristics
o Greedy Search
o A* Search

o Graph Search
o Search problem:
o States (configurations of the world)
o Actions and costs
o Successor function (world dynamics)
o Start state (initial) and goal test

o Search tree:
o Nodes: represent plans for reaching states
o Plans have costs (sum of action costs)

o Search algorithm:
o Systematically builds a search tree
o Chooses an ordering of the fringe (unexplored nodes)
o Optimal: finds least-cost plans
Cost: number of pancakes flipped
[Figure: state space graph with flip costs (2, 3, 4) as edge weights; only part of the full 24-state graph is shown.
 Example actions: "flip top two" (cost 2), "flip all four" (cost 4).
 Example path to reach the goal: flip four, then flip three, for a total cost of 7.]
o Remember: UCS's strategy is to expand the node with the lowest path cost
o It explores increasing-cost contours of nodes: c1, then c2, then c3, ...

o The good: UCS is complete and optimal!

o The bad:
o Explores options in every "direction" (expensive)
o No information about goal location
[Figure: UCS's contours spread evenly outward from Start, with no bias toward Goal.]

o We're fixing that now!
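As a refresher, UCS can be sketched with a priority queue ordered by g(n). This is a minimal sketch; the toy graph, function names, and costs are illustrative assumptions, not from the slides:

```python
import heapq

def uniform_cost_search(start, successors, is_goal, ):
    """UCS: always expand the fringe node with the lowest path cost g(n).

    `successors(state)` yields (next_state, step_cost) pairs."""
    fringe = [(0, start, [start])]            # (g, state, path)
    expanded = set()
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if is_goal(state):
            return g, path
        if state in expanded:                 # already expanded via a cheaper path
            continue
        expanded.add(state)
        for nxt, cost in successors(state):
            if nxt not in expanded:
                heapq.heappush(fringe, (g + cost, nxt, path + [nxt]))
    return None

# Hypothetical toy graph: S->a->G costs 6, S->b->G costs 5
graph = {'S': [('a', 1), ('b', 4)], 'a': [('G', 5)], 'b': [('G', 1)], 'G': []}
cost, path = uniform_cost_search('S', lambda s: graph[s], lambda s: s == 'G')
```

Note that UCS finds the cheaper route through b even though a is reached first.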


o Intuition: a goal test only answers "nope" or "goal"; a heuristic adds graded feedback: cold vs. hot, far vs. close
o A heuristic is:
o A function that estimates how close a state is to a goal
o Designed for a particular search problem
o Examples: Manhattan distance, Euclidean distance for pathing
[Figure: example heuristic values on a path-finding grid, e.g. Manhattan distances 10 and 5, Euclidean distance 11.2.]
o Informed methods add domain-specific information
o They select the most promising path to continue searching
o h(n): estimates the goodness of node n
o h(n) = estimated cost (or distance) of a minimal-cost path from n to a goal state
o Estimates how close a state n is to a goal using domain-specific information
Heuristic h(x): the ID (size) of the largest pancake that is still out of place
[Figure: part of the pancake state space annotated with h(x) values such as 0, 2, 3, and 4.]
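This pancake heuristic is short to write down. A minimal sketch; representing a state as a tuple of pancake sizes from top to bottom is an assumption:

```python
def pancake_heuristic(state):
    """h(x) = ID (size) of the largest pancake still out of place.

    `state` lists pancake sizes from top to bottom; the goal is
    smallest-on-top, i.e. the sorted order."""
    goal = tuple(sorted(state))
    out_of_place = [p for p, g in zip(state, goal) if p != g]
    return max(out_of_place, default=0)   # 0 when the state is the goal
```

For example, in the state (3, 4, 1, 2) every pancake is out of place, so h = 4; in the sorted goal state h = 0.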
o Expand the node that seems closest...

o What can go wrong?

o Strategy: expand the node that you think is closest to a goal state
o Heuristic: estimate of the distance to the nearest goal for each state

o A common case:
o Best-first takes you straight to the (wrong) goal

o Worst case: like a badly-guided DFS

o With a poor heuristic, this can happen
[Demo: expansion contours of UCS, Greedy, and A*.]
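Greedy best-first search differs from UCS only in the fringe ordering: it uses h(n) alone and ignores path costs. A minimal sketch; the toy graph and heuristic values are illustrative assumptions, chosen so that greedy is led to a suboptimal path:

```python
import heapq

def greedy_search(start, successors, is_goal, h):
    """Greedy best-first: always expand the fringe node with smallest h(n).
    Fast, but not optimal."""
    fringe = [(h(start), start, [start])]
    expanded = set()
    while fringe:
        _, state, path = heapq.heappop(fringe)
        if is_goal(state):
            return path
        if state in expanded:
            continue
        expanded.add(state)
        for nxt, _cost in successors(state):   # greedy ignores step costs
            if nxt not in expanded:
                heapq.heappush(fringe, (h(nxt), nxt, path + [nxt]))
    return None

# Misleading heuristic: a looks closer (h=1) but leads to a cost-11 path;
# the cheaper route S->b->G (cost 3) is never taken.
graph = {'S': [('a', 1), ('b', 2)], 'a': [('G', 10)], 'b': [('G', 1)], 'G': []}
h = lambda s: {'S': 3, 'a': 1, 'b': 2, 'G': 0}[s]
path = greedy_search('S', lambda s: graph[s], lambda s: s == 'G', h)
```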
o Uniform-cost orders by path cost, or backward cost g(n)
o Greedy orders by goal proximity, or forward cost h(n)
[Figure: example graph with each node annotated with its backward cost g and heuristic value h, contrasting the nodes UCS and Greedy expand.]
o A* Search orders by the sum: f(n) = g(n) + h(n)
o Should we stop when we enqueue a goal?
[Figure: S reaches A with cost 2 and A reaches G with cost 2; S reaches B with cost 2 and B reaches G with cost 3.
 Heuristic values: h(S) = 3, h(A) = 2, h(B) = 1, h(G) = 0.
 The goal is first enqueued via B with g+h = 5, while the cheaper path via A (g+h = 4) is still on the fringe.]

o No: only stop when we dequeue a goal
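The dequeue-time goal test shows up naturally in a minimal A* tree-search sketch. The small graph mirrors this slide's example (reaching G via A costs 4, via B costs 5); the function names are illustrative:

```python
import heapq

def a_star(start, successors, is_goal, h):
    """A* tree search: order the fringe by f(n) = g(n) + h(n).

    The goal test is applied when a node is dequeued, not when it is
    enqueued: a cheaper path to the goal may still be on the fringe."""
    fringe = [(h(start), 0, start, [start])]      # (f, g, state, path)
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if is_goal(state):                        # safe: lowest f was just dequeued
            return g, path
        for nxt, cost in successors(state):
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# The slide's example: G is enqueued via B first (f = 5), but dequeuing
# waits until the cheaper route via A (f = 4) surfaces.
graph = {'S': [('A', 2), ('B', 2)], 'A': [('G', 2)], 'B': [('G', 3)], 'G': []}
h = lambda s: {'S': 3, 'A': 2, 'B': 1, 'G': 0}[s]
cost, path = a_star('S', lambda s: graph[s], lambda s: s == 'G', h)
```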


[Figure: S reaches A with cost 1 and A reaches G with cost 3, while S reaches G directly with cost 5.
 Heuristic values: h(S) = 7, h(A) = 6, h(G) = 0. The path through A truly costs 4,
 but f(A) = 1 + 6 = 7, so A* dequeues the direct goal (f = 5) first.]

o What went wrong?
o The actual cost of the bad goal (5) was lower than the estimated cost of the good path (7)
o Not good
o We need estimates to be less than actual costs! (an admissible heuristic)
Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the fringe.
Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs.
o A heuristic h is admissible (optimistic) if:

0 ≤ h(n) ≤ h*(n)

where h*(n) is the true optimal cost from n to a nearest goal (the exact cheapest way to reach the goal from that state)

o Examples:
[Figure: Manhattan distance on a grid (values 4 and 15 shown); the ID of the largest out-of-place pancake.]

o Coming up with admissible heuristics is most of what's involved in using A* in practice.
o Assume:
o A is an optimal goal node
o B is a suboptimal goal node
o h is admissible

o Claim:
o A will exit the fringe before B
[Figure: search tree with A (optimal goal node) and B (suboptimal goal node) on different branches.]

Proof:
o Imagine B is on the fringe
o Some ancestor n of A is on the fringe, too (maybe A itself).
  If none of A's ancestors were on the fringe, A would already have been expanded.
o Claim: n will be expanded before B
o 1. f(n) is less than or equal to f(A)
  (by the definition of f-cost, the admissibility of h, and the fact that h = 0 at a goal,
   the only admissible value at a goal node)
o 2. f(A) is less than f(B)
  (B is suboptimal, and h = 0 at a goal)
o 3. Therefore n will be expanded before B
o All ancestors of A expand before B
o So A expands before B
o A* (tree) search is optimal with admissible heuristics
o1. f(n) is less than or equal to f(A) …
o Definition of f-cost says:
f(n) = g(n) + h(n) = (path cost to n) + (est. cost of n to A)
f(A) = g(A) + h(A) = (path cost to A) + (est. cost of A to A)
o The admissible heuristic must underestimate the true cost
h(A) = (est. cost of A to A) = 0
o So now, we have to compare:
f(n) = g(n) + h(n) = (path cost to n) + (est. cost of n to A)
f(A) = g(A) = (path cost to A)
o h(n) must be an underestimate of the true cost from n to A
(path cost to n) + (est. cost of n to A) ≤ (path cost to A)
g(n) + h(n) ≤ g(A)
f(n) ≤ f(A)
o2. f(A) is less than f(B) …
o We know that:
f(A) = g(A) + h(A) = (path cost to A) + (est. cost of A to A)
f(B) = g(B) + h(B) = (path cost to B) + (est. cost of B to B)
o The heuristic must underestimate the true cost:
h(A) = h(B) = 0
o So now, we have to compare:
f(A) = g(A) = (path cost to A)
f(B) = g(B) = (path cost to B)
o We assumed that B is suboptimal! So
(path cost to A) < (path cost to B)
g(A) < g(B)
f(A) < f(B)
Uniform-Cost vs. A*
o Uniform-cost expands equally in all "directions"
o A* expands mainly toward the goal, but does hedge its bets to ensure optimality
[Figure: expansion contours around Start and Goal for Greedy, Uniform Cost, and A*.]
oVideo games
oPathing / routing problems
oResource planning problems
oRobot motion planning
oLanguage analysis
oMachine translation
oSpeech recognition
o…
o Most of the work in solving hard search problems optimally is in coming up with admissible heuristics

o Often, admissible heuristics are solutions to relaxed problems, where new actions are available
[Figure: two relaxed-problem heuristics: straight-line distance (value 366 shown) and Manhattan distance (value 15 shown).]
oInadmissible heuristics are often useful too


[Figure: 8-puzzle start state, actions, and goal state.]

o What are the states?
o How many states?
o What are the actions?
o How many successors from the start state?
o What should the costs be?

o Heuristic: number of misplaced tiles
o Why is it admissible?
o h(start) = 8
o This is a relaxed-problem heuristic
[Figure: start state and goal state used for this count.]

Average nodes expanded when the optimal path has...

          ...4 steps   ...8 steps   ...12 steps
UCS       112          6,300        3.6 x 10^6
TILES     13           39           227
o What if we had an easier 8-puzzle where any tile could slide in any direction at any time, ignoring other tiles?
[Figure: start state and goal state.]

o Heuristic: total Manhattan distance
o Why is it admissible?
o h(start) = 3 + 1 + 2 + ... = 18

Average nodes expanded when the optimal path has...

              ...4 steps   ...8 steps   ...12 steps
TILES         13           39           227
MANHATTAN     12           25           73
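Both 8-puzzle heuristics are short to implement. A minimal sketch; the flattened board representation and the particular start/goal layout (an assumed board chosen so that the slide's values h = 8 and h = 18 come out) are illustrative:

```python
def misplaced_tiles(state, goal):
    """Count tiles that are out of place (the blank, 0, is not a tile)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def total_manhattan(state, goal, width=3):
    """Sum over tiles of row distance plus column distance to the goal cell."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:                      # skip the blank
            continue
        gidx = goal.index(tile)
        total += (abs(idx // width - gidx // width)
                  + abs(idx % width - gidx % width))
    return total

# Assumed boards, flattened row by row:
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
goal = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)
```

With these boards, misplaced_tiles gives 8 and total_manhattan gives 18, matching the slide.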
o How about using the actual cost as a heuristic?
o Would it be admissible?
o Would we save on nodes expanded?
o What's wrong with it?

o With A*: a trade-off between quality of estimate and work per node
o As heuristics get closer to the true cost, you expand fewer nodes but usually do more work per node to compute the heuristic itself
o Dominance: ha dominates hc if, for every node n, ha(n) ≥ hc(n)

o Heuristics form a semi-lattice:
o The max of admissible heuristics is admissible, and it is more informative than either alone

o Trivial heuristics
o The bottom of the lattice is the zero heuristic
o The top of the lattice is the exact heuristic h*
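Taking the pointwise max is a one-liner. A minimal sketch with two toy constant heuristics, purely illustrative:

```python
def max_heuristic(*heuristics):
    """Combine admissible heuristics: their pointwise max is still admissible
    and dominates each individual heuristic."""
    return lambda state: max(h(state) for h in heuristics)

# Illustrative toy heuristics; any state-dependent functions work the same way.
h = max_heuristic(lambda s: 2, lambda s: 5)
```

In practice one would combine, say, misplaced-tiles and Manhattan-distance heuristics this way and pay only the extra evaluation cost per node.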
o Failure to detect repeated states can cause exponentially more work.
[Figure: a small state graph and the exponentially larger search tree it unrolls into.]

o In BFS, for example, we shouldn't bother expanding the circled nodes (why?)
[Figure: BFS search tree rooted at S; states such as a, h, r, p, q, and c reappear on multiple branches, and their repeated occurrences are circled as redundant.]
o Idea: never expand a state twice
o How to implement:
o Tree search + a set of expanded states (the "closed set")
o Expand the search tree node by node, but...
o Before expanding a node, check that its state has never been expanded before
o If the state is not new, skip the node; if it is new, add it to the closed set
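The closed-set recipe above translates directly into code. A minimal sketch; the toy graph (which contains a cycle back to S) and the zero heuristic are illustrative assumptions:

```python
import heapq

def a_star_graph_search(start, successors, is_goal, h):
    """A* with a closed set: never expand the same state twice.
    Optimal when h is consistent."""
    fringe = [(h(start), 0, start, [start])]      # (f, g, state, path)
    closed = set()
    while fringe:
        _f, g, state, path = heapq.heappop(fringe)
        if is_goal(state):
            return g, path
        if state in closed:                       # state already expanded: skip
            continue
        closed.add(state)
        for nxt, cost in successors(state):
            g2 = g + cost
            heapq.heappush(fringe, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Toy graph with a cycle S <-> A; tree search would loop forever on h = 0
# without the closed set, graph search terminates and stays optimal.
graph = {'S': [('A', 1), ('B', 1)], 'A': [('S', 1), ('G', 3)],
         'B': [('A', 1)], 'G': []}
zero = lambda s: 0     # h = 0 is consistent, so graph search remains optimal (UCS)
cost, path = a_star_graph_search('S', lambda s: graph[s], lambda s: s == 'G', zero)
```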

oCan graph search wreck completeness? Why/why not?


oHow about optimality?
State space graph and search tree
[Figure: graph with arcs S to A (cost 1), S to B (cost 1), A to C (cost 1), B to C (cost 2), C to G (cost 3);
 heuristic values h(S) = 2, h(A) = 4, h(B) = 1, h(C) = 1, h(G) = 0.
 In the search tree, A* graph search expands B (f = 1+1) before A (f = 1+4) and reaches C via B first (f = 3+1),
 closing C; the cheaper path S, A, C (f = 2+1) is then discarded,
 so the suboptimal goal G (6+0) is returned instead of the optimal one (5+0).]
o Main idea: estimated heuristic costs ≤ actual costs
o Admissibility: heuristic cost ≤ actual cost to the goal
  h(A) ≤ actual cost from A to G
o Consistency: heuristic "arc" cost ≤ actual cost for each arc
  h(A) - h(C) ≤ cost(A to C)

o Consequences of consistency:
o The f value along a path never decreases:
  h(A) ≤ cost(A to C) + h(C)
  f(A) = g(A) + h(A) ≤ g(A) + cost(A to C) + h(C) = f(C)
o A* graph search is optimal
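The arc condition is easy to check mechanically. A minimal sketch using the arc costs and heuristic values from the failure example in this section; h_ok is a hypothetical repaired heuristic obtained by lowering h(A):

```python
def is_consistent(h, arcs):
    """Check h(A) - h(C) <= cost(A, C) for every arc (A, C, cost)."""
    return all(h[a] - h[c] <= cost for a, c, cost in arcs)

# Arcs of the example graph: S-A(1), S-B(1), A-C(1), B-C(2), C-G(3)
arcs = [('S', 'A', 1), ('S', 'B', 1), ('A', 'C', 1), ('B', 'C', 2), ('C', 'G', 3)]

h_bad = {'S': 2, 'A': 4, 'B': 1, 'C': 1, 'G': 0}   # admissible, but h(A)-h(C)=3 > 1
h_ok = {'S': 2, 'A': 2, 'B': 1, 'C': 1, 'G': 0}    # hypothetical fix: lower h(A)
```

The failure example's heuristic is admissible yet inconsistent, which is exactly why A* graph search returned a suboptimal goal there.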
o Tree search:
o A* is optimal if the heuristic is admissible
o UCS is a special case (h = 0)

o Graph search:
o A* is optimal if the heuristic is consistent
o UCS is optimal (h = 0 is consistent)

o Consistency implies admissibility

o In general, most natural admissible heuristics tend to be consistent, especially if derived from relaxed problems
o A* uses both backward costs and (estimates of) forward costs

o A* is optimal with admissible / consistent heuristics

o Heuristic design is key: often use relaxed problems

Thanks for your attention!
