04 Informed Search


Informed Search

Burr H. Settles
CS-540, UW-Madison
www.cs.wisc.edu/~cs540-1
Summer 2003

Announcements

If you are not enrolled in the class, and still would like to be, come see me later
– There's space for up to 35 students in the class
– That means 6 more seats, from the waiting list

Don't be scared by Homework #1

Last Time

Several of you were confused about why we only find 12 of the 20 possible states in yesterday's water jug problem:

[Figure: the breadth-first and depth-first search trees for the water jug problem, built from the operators fill(A), fill(B), dump(A), pour(A,B), and pour(B,A). Starting from state (0,0), only 12 distinct states are ever generated; the BFS solution ends at (2,4) and the DFS solution at (3,2).]

Last Time

Someone asked about why we might want to choose the BFS strategy over IDS
– They are both complete, optimal, and O(b^d) for time, but IDS has only O(bd) for space
– It turns out, IDS has O(2×b^d) time complexity
– Recall that in "big-O" notation, we ignore constant factors, so in the limit IDS is O(2×b^d) → O(b^d)

Someone else asked why DFS has O(bm) space complexity, when it could expand more than b×m states
– This is an average case
– Also has to do with open/closed list management

Last Time

Yet someone else asked what search algorithms have to do with AI… my answer: AI is all search!!

Even someone else pointed out that UCS appears to be Dijkstra's Algorithm, a famous example of dynamic programming (DP)… and it is!!
– DP is a family of algorithms that are guaranteed to find optimal solutions for problems
– They work by breaking the problem up into sub-problems and solving the simplest sub-problems first
– Other examples of DP are the Viterbi Algorithm, which is used in speech recognition software, and the CYK Algorithm, for finding the most probable "parse tree" for natural language sentences

Last Time

We talked about building goal-based agents and utility-based agents using search strategies

But the strategies we discussed were uninformed, because the agent is given nothing more than the problem definition

Today we'll present informed search strategies

Searching in Large Problems

The search space of a problem is generally described in terms of the number of possible states in the search:

Water Jug         12 states
Tic-Tac-Toe       3^9 states
Rubik's Cube      10^19 states
100-variable SAT  10^30 states
Chess             10^120 states

Searching in Large Problems

Some problems' search spaces are too large to search efficiently using uninformed methods

Sometimes we have additional domain knowledge about the problem that we can use to inform the agent that's searching

To do this, we use heuristics (informed guesses)
– Heuristic means "serving to aid discovery"

Heuristic Searching

We define a heuristic function h(n):
– Is computed from the state at node n
– Uses domain-specific information in some way

Heuristics can estimate the "goodness" of a particular node (or state) n:
– How close is n to a goal node?
– What might be the minimal cost path from n to a goal node?

Heuristic Searching

We will formalize a heuristic h(n) as follows:

h(n) ≥ 0 for all nodes n

h(n) = 0 implies that n is a goal node

h(n) = ∞ implies that n is a "dead end" from which a goal cannot be reached

Best-First Search

Best-first search is a generic informed search strategy that uses an evaluation function f(n), incorporating some kind of domain knowledge

f(n) is used to sort states in the open list using a priority queue (like UCS)

Greedy Search

Greedy search is the best-first search strategy with a simple evaluation function of f(n) = h(n)

It relies only on the heuristic to select what is currently believed to be closest to the goal state

Last time someone asked if UCS was the same as Greedy search… are they?

Greedy Search Example

f(n) = h(n)
# of states tested: 3, expanded: 2

[Figure: search graph with start S (h=8). S connects to A (h=8), B (h=4), and C (h=3) with edge costs 1, 5, and 8; A connects to D (h=∞), E (h=∞), and G with costs 3, 7, and 9; B reaches the goal G (h=0) with cost 4, and C reaches G with cost 5.]

State  Open            Closed
--     S:8             --
S      C:3, B:4, A:8   S
C      G:0, B:4, A:8   S, C
G      Goal!!

Path: S-C-G
Cost: 13

Greedy Search Issues

Greedy Search is generally faster than the uninformed methods
– It has more knowledge about the problem domain!

Resembles DFS in that it tends to follow a path that is initially good, and thus:
– Not complete (could chase an infinite path or get caught in cycles if no open/closed lists)
– Not optimal (a better solution could exist through an expensive node)
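The greedy run above can be sketched as a generic best-first loop whose open list is a priority queue ordered by f(n) = h(n). The edge costs below are reconstructed from the slides' trace tables and should be treated as an assumption; this is a minimal sketch, not the course's reference implementation.

```python
import heapq

# Example graph from the slides; edge costs reconstructed from the
# f-values in the trace tables (an assumption).
graph = {
    'S': [('A', 1), ('B', 5), ('C', 8)],
    'A': [('D', 3), ('E', 7), ('G', 9)],
    'B': [('G', 4)],
    'C': [('G', 5)],
    'D': [], 'E': [], 'G': [],
}
h = {'S': 8, 'A': 8, 'B': 4, 'C': 3,
     'D': float('inf'), 'E': float('inf'), 'G': 0}

def best_first(start, goal, f):
    # Open list: priority queue sorted by the evaluation function f(node, g).
    open_list = [(f(start, 0), 0, start, [start])]
    closed = set()
    while open_list:
        _, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in graph[node]:
            if succ not in closed:
                heapq.heappush(
                    open_list,
                    (f(succ, g + cost), g + cost, succ, path + [succ]))
    return None, float('inf')

# Greedy search: f(n) = h(n), so the path cost g is ignored entirely.
path, cost = best_first('S', 'G', lambda n, g: h[n])
print(path, cost)  # ['S', 'C', 'G'] 13 — found quickly, but not the cheapest
```

Because `best_first` is parameterized by f, the same loop implements any best-first strategy; greedy search is just the choice f(n) = h(n).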

Greedy Search Issues

[Figure: a graph where greedy search returns a sub-optimal goal. From S (h=5), edges of cost 2 lead to A (h=3) and B (h=4), which lead on to C (h=3) and D (h=1), then to E, F (h=∞), and the two goals. Following the heuristic reaches goal G1 only via an edge of cost 99; G1 is the solution found with greedy search, while G2 is the optimal solution on the cheaper path. E and F will never be expanded since h=∞.]

Algorithm A Search

To try and solve the problems of greedy search, we can conduct an A search by defining our evaluation function f(n) = g(n) + h(n)

g(n) is the minimal cost path from the initial node to the current node

This adds a UCS-like component to the search:
– g(n) is the cost to reach n
– h(n) is the estimated cost from n to a goal

Algorithm A Search Issues

A Search is an informed strategy that incorporates both the real costs and the heuristic functions to find better solutions
– They can be expensive to compute
– They can make errors estimating the costs
– Still not complete
– Still not optimal

Admissible Heuristics

Heuristic functions are good for helping us find good solutions quickly

But it's hard to design accurate heuristics!

However, if our heuristic makes certain errors (e.g. overestimating along the path to a goal), these problems keep informed searches like the A search from being complete and optimal

Admissible Heuristics

There is hope! We can add a constraint on our heuristic function that, for all nodes n in the search space, h(n) ≤ h*(n)
– Where h*(n) is the true minimal cost from n to a goal

When h(n) ≤ h*(n), we say that h(n) is admissible

Admissible heuristics are inherently optimistic (i.e. they never overestimate the cost to a goal)

A* Search

When we conduct an A search using an admissible heuristic, it is called A* search

A* search is complete if
– The branching factor is finite
– Every action has a fixed cost

A* search is optimal

A* Search Example

f(n) = g(n) + h(n)
# of states tested: 4, expanded: 3

[Figure: the same search graph as in the greedy example, with start S (h=8), A (h=8), B (h=4), C (h=3), D (h=∞), E (h=∞), and goal G (h=0).]

State  Open                       Closed
--     S:8                        --
S      A:9, B:9, C:11             S
A      B:9, G:10, C:11, D:∞, E:∞  S, A
B      G:9, C:11, D:∞, E:∞        S, A, B
G      Goal!!

Path: S-B-G
Cost: 9

Proof of A* Optimality

Let:
– G1 be the optimal goal
– G2 be some other goal
– f* be the cost of the optimal path (to G1)
– n be some node in the optimal path, but not to G2

Assume that G2 is found using A* search where f(n) = g(n) + h(n), and h(n) is admissible
– i.e. A* finds a sub-optimal path, which it shouldn't
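The A* run above can be reproduced with the same open/closed-list loop, now ordering the open list by f(n) = g(n) + h(n). As before, the edge costs are reconstructed from the slides' f-values and should be treated as an assumption.

```python
import heapq

# Example graph from the slides; edge costs reconstructed from the
# f-values in the trace table (an assumption).
graph = {
    'S': [('A', 1), ('B', 5), ('C', 8)],
    'A': [('D', 3), ('E', 7), ('G', 9)],
    'B': [('G', 4)],
    'C': [('G', 5)],
    'D': [], 'E': [], 'G': [],
}
h = {'S': 8, 'A': 8, 'B': 4, 'C': 3,
     'D': float('inf'), 'E': float('inf'), 'G': 0}

def a_star(start, goal):
    # Open list ordered by f(n) = g(n) + h(n); ties broken by smaller g.
    open_list = [(h[start], 0, start, [start])]
    closed = set()
    while open_list:
        _, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in graph[node]:
            if succ not in closed:
                g2 = g + cost
                heapq.heappush(
                    open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float('inf')

path, cost = a_star('S', 'G')
print(path, cost)  # ['S', 'B', 'G'] 9 — the optimal path that greedy missed
```

Tracing the heap pops reproduces the slide's table: S, then A (f=9), then B (f=9, larger g), and finally G at f=9, while the dead ends D and E (h=∞) are never expanded.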

Proof of A* Optimality

g(G2) > f*
– by definition, G2 is sub-optimal

f(n) ≤ f*
– by admissibility: since f(n) never overestimates the cost to the goal, it must be ≤ the cost of the optimal path

f(G2) ≤ f(n)
– G2 must be chosen over n, by our assumption

f(G2) ≤ f*
– by transitivity of the ≤ operator

Proof of A* Optimality

f(G2) ≤ f* (from previous slide)

g(G2) + h(G2) ≤ f*
– by substituting the definition of f(n)

g(G2) ≤ f*
– since G2 is a goal node, h(G2) = 0

This contradicts the assumption that G2 is sub-optimal (g(G2) > f*), thus A* is optimal in terms of path cost

A* never finds a sub-optimal goal

4
Devising Heuristics

Often done by relaxing the problem
– See AI: A Modern Approach for more details

The goal of admissible heuristics is to get as close as possible to the actual cost without going over

Trade-off:
– A really good h(n) might be expensive to compute
– Could we find the solution faster with a simpler one?

Devising Heuristics

If h(n) = h*(n) for all n:
– Only nodes on the optimal solution are searched
– No unnecessary work
– We know the actual cost from n to goal

If h(n) = 0 for all n:
– Heuristic is still admissible
– A* is identical to UCS

The closer h is to h*, the fewer nodes will need to be expanded, and the more accurate the search will be
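The h(n) = 0 case can be checked directly: running an A* loop with a zero heuristic orders the open list purely by g(n), which is exactly uniform-cost search. A sketch, again using the reconstructed example graph as an assumption:

```python
import heapq

# Sketch: A* with h(n) = 0 for all n degenerates to uniform-cost search.
# Graph reconstructed from the lecture's running example (an assumption).
graph = {
    'S': [('A', 1), ('B', 5), ('C', 8)],
    'A': [('D', 3), ('E', 7), ('G', 9)],
    'B': [('G', 4)],
    'C': [('G', 5)],
    'D': [], 'E': [], 'G': [],
}

def a_star(start, goal, h):
    open_list = [(h(start), 0, start, [start])]
    closed = set()
    expanded = []                          # record the expansion order
    while open_list:
        _, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g, expanded
        if node in closed:
            continue
        closed.add(node)
        expanded.append(node)
        for succ, cost in graph[node]:
            if succ not in closed:
                heapq.heappush(
                    open_list,
                    (g + cost + h(succ), g + cost, succ, path + [succ]))
    return None, float('inf'), expanded

# Zero heuristic: still admissible, so the answer is still optimal, but
# every node UCS would expand gets expanded (6 here, versus the 3
# expansions reported on the slide for the informed heuristic).
path, cost, expanded = a_star('S', 'G', lambda n: 0)
print(path, cost, len(expanded))  # ['S', 'B', 'G'] 9 6
```

This illustrates the slide's point: admissibility guarantees optimality regardless of h, but the closer h is to h*, the fewer nodes need to be expanded.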

Devising Heuristics

If h1(n) ≤ h2(n) ≤ h*(n) for each non-goal node n:
– We say h2 dominates h1
– h2 is closer to the actual cost, but is still admissible
– A* using h1 (i.e. A1*) expands at least as many, if not more, nodes than A2*
– A2* is said to be better informed than A1*

Devising Heuristics

Let's revisit our Madison-to-El-Paso example from yesterday, but instead of optimizing dollar cost, we're trying to reduce travel distance

[Figure: road map with distances between Madison, Denver, Kansas City, St. Louis, Nashville, Phoenix, Baton Rouge, Houston, and El Paso.]

What's a good heuristic to use for this problem?

Is it admissible?
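Dominance can be made concrete with a standard example that is not from these slides: the 8-puzzle's misplaced-tiles heuristic h1 and Manhattan-distance heuristic h2 satisfy h1(n) ≤ h2(n) ≤ h*(n) for every state, so h2 dominates h1. A sketch:

```python
# Two classic admissible 8-puzzle heuristics (a standard illustration of
# dominance; the 8-puzzle is not the slides' example). 0 marks the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced(state):
    # h1: count of tiles out of place (the blank doesn't count).
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def manhattan(state):
    # h2: sum over tiles of horizontal + vertical distance to the goal cell
    # on the 3x3 board. Each tile's true cost is at least this, so h2 is
    # admissible; it is also never smaller than h1.
    total = 0
    for i, t in enumerate(state):
        if t != 0:
            gi = GOAL.index(t)
            total += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return total

state = (8, 2, 3, 4, 5, 6, 7, 1, 0)    # tiles 1 and 8 swapped
print(misplaced(state), manhattan(state))  # 2 6 — h1 ≤ h2, so h2 dominates
```

Both heuristics are admissible, but A* guided by `manhattan` will expand no more nodes than A* guided by `misplaced`: it is better informed.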

Optimal Searching

Optimality isn't always required (i.e. you want some solution, not the best solution)
– h(n) need not be admissible (necessarily)
– Greedy search will often suffice
– This is all problem-dependent, of course

Can result in fewer nodes being expanded, and a solution can be found faster

Partial Searching

So far we've discussed algorithms that try to find a path from the initial state to a goal state

These are called partial search strategies because they build up partial solutions, which could enumerate the entire search space to find solutions
– This is OK for small "toy world" problems
– This is not OK for NP-Complete problems or those with exponential search spaces

Next Time

We will discuss complete search strategies, in which each state/node represents a complete, possible solution to the problem

We will also discuss optimization search, where we try to find such complete solutions that are optimal (the best)
