Lec 07 Informed Search I
Heuristic Search
Recap: Search
• Search problem:
• States (configurations of the world)
• Actions and costs
• Successor function (world dynamics)
• Start state and goal test
• Search tree:
• Nodes: represent plans for reaching states
• Plans have costs (sum of action costs)
• Search algorithm:
• Systematically builds a search tree
• Chooses an ordering of the fringe (unexplored nodes)
• Optimal: finds least-cost plans
Video of Demo Maze Water DFS/BFS
(part 1)
Video of Demo Maze Water DFS/BFS
(part 2)
Video of Demo Contours UCS Empty
Video of Demo Contours UCS Pacman
Small Maze
Today
• Informed Search
• Heuristics
• Greedy Search
• A* Search
Breadth-first search of the 8-puzzle, showing order in which states were
removed from open.
Depth-first search of the 8-puzzle with a depth bound of 5.
Using problem-specific knowledge to
aid searching
• With knowledge, one can search the state space as if given
“hints” when exploring a maze.
• Heuristic information in search = hints
• Leads to dramatic speed-ups in efficiency.
[Tree diagram: nodes B–O under the root; with heuristic hints, “Search only in this subtree!!”]
What are Heuristics?
• Heuristics are know-how obtained through extensive experience.
• Heuristics often enable us to make decisions quickly, without reasoning
deeply about why.
• The more experience we have, the better our heuristics become.
When are Heuristics used?
AI problem solvers employ heuristics in two basic situations:
1. A problem may not have an exact solution because of inherent
ambiguities in the problem statement or available data.
• Medical diagnosis: Given symptoms may have several possible causes;
• Doctors use heuristics to choose the most likely diagnosis and formulate a
treatment plan.
2. A problem may have an exact solution, but the computational cost of
finding it may be prohibitive.
• Chess: state space growth is combinatorially explosive, with the number of possible
states increasing with the depth of the search.
• Heuristics attack this complexity by guiding the search along the most “promising”
path through the space.
Why do heuristic functions work?
• Uninformed search: at most b choices at each node and a depth of d at the goal node
• In the worst case, the search explores O(b^d) nodes before finding a solution (exponential time complexity).
• Special cases, by evaluation function f(n):
• Uniform Cost Search: f(n) = g(n), the path cost so far
• Greedy (best-first) Search: f(n) = h(n), the estimated cost to the goal
• A* Search: f(n) = g(n) + h(n)
How does it work?
• Idea: use an evaluation function f(n) for each node
• that estimates desirability
• Expand the most desirable unexpanded node
• Implementation:
• fringe is a priority queue sorted in order of desirability
• Special cases:
• Uniform Cost Search (uninformed)
• Greedy (best-first) Search (informed)
• A* Search (informed)
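The fringe-as-priority-queue idea above can be sketched in a few lines of Python. This is a minimal illustration, not the lecture's code; the function and parameter names are my own. Plugging in a different evaluation function f gives each special case: f = g is Uniform Cost Search, f = h is greedy search, f = g + h is A*.

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search.

    f(state, g) scores desirability (lower = better) given the
    cost-so-far g; successors(state) yields (step_cost, next_state).
    Returns (path, cost) or (None, inf) if no goal is reachable.
    """
    # Fringe entries: (score, cost-so-far, state, path) so the heap
    # always pops the most desirable unexpanded node first.
    frontier = [(f(start, 0.0), 0.0, start, [start])]
    explored = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for cost, nxt in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (f(nxt, g + cost), g + cost, nxt, path + [nxt]))
    return None, float("inf")
```

On a toy graph where A reaches C directly (cost 4) or via B (cost 1 + 2), UCS (f = g) finds the cheaper A-B-C route, while a greedy f = h that ranks C best jumps straight to A-C.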
Heuristic Search
• A heuristic is:
• A function that estimates how close a state is to a goal
• Designed for a particular search problem
• Examples: Manhattan distance, Euclidean distance for pathing
[Figure: grid pathing example with offsets 10 and 5; Euclidean distance ≈ 11.2]
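The two example heuristics named above are one-liners. A quick sketch (my own helper names), using the figure's offsets of 10 and 5, for which the straight-line distance is √(10² + 5²) ≈ 11.2:

```python
import math

def manhattan(p, q):
    # Grid-movement estimate: |x1 - x2| + |y1 - y2|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    # Straight-line estimate: sqrt(dx^2 + dy^2)
    return math.hypot(p[0] - q[0], p[1] - q[1])
```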
Example: Heuristic Function
h(x)
First three levels of the tic-tac-toe state space reduced by symmetry
The “most wins” heuristic applied to the first children in tic-tac-toe.
Heuristically reduced state space for tic-tac-toe.
8-Puzzle: Heuristics?
• Notation: h(n) = number of misplaced tiles in state n
• h(current state) = 1
• Because this state has one
misplaced tile.
[Figure: example states labeled h=100 and h=0]
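The misplaced-tiles count described above can be sketched directly; the tuple encoding (9 entries read row by row, 0 for the blank) is my own choice, not from the lecture:

```python
def misplaced_tiles(state, goal):
    """Heuristic: count tiles out of their goal position.

    The blank (encoded as 0) is not counted as a tile."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)
```

For a board one move from the goal, e.g. `(1, 2, 3, 4, 5, 6, 7, 0, 8)` against goal `(1, 2, 3, 4, 5, 6, 7, 8, 0)`, this gives h = 1, matching the slide.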
Greedy Search
• Strategy: expand the node that you think is
closest to a goal state
• Heuristic: estimate of distance to the nearest goal for
each state
• A common case:
• Best-first takes you straight to the (wrong) goal
[Graph: route-finding from Start A to Goal I; edge costs shown include 75, 118, 140, 111, 80, 99, 97, 211, 101]

State  h(n)
A      366
B      374
C      329
D      244
E      253
F      178
G      193
H      98
I      0

f(n) = h(n) = straight-line distance heuristic
Greedy Search: Tree Search
[Figure sequence: expanding by lowest h(n) from Start A]
1. Expand A: frontier includes E [253]
2. Expand E: children G [193], F [178], A [366]
3. Expand F (lowest h): children I [0], E [253]
4. I [0] is the Goal: path A-E-F-I found.

Sum of h-values along A-E-F-I = 253 + 178 + 0 = 431
Actual path cost: dist(A-E-F-I) = 140 + 99 + 211 = 450
Greedy Search: Optimal?

[Graph: Start A to Goal I, as before]

State  h(n)
A      366
B      374
C      329
D      244
E      253
F      178
G      193
H      98
I      0

f(n) = h(n) = straight-line distance heuristic
Not optimal! dist(A-E-G-H-I) = 140 + 80 + 97 + 101 = 418 < 450
Greedy Search: Complete?

[Graph: Start A to Goal I as before, but with h(C) changed to 250 (marked **)]

State  h(n)
A      366
B      374
C      250 **
D      244
E      253
F      178
G      193
H      98
I      0

f(n) = h(n) = straight-line distance heuristic
Greedy Search: Tree Search
[Figure sequence: expanding by lowest h(n) with h(C) = 250]
1. Expand A: frontier includes E [253]
2. Expand E: child D [244]
3. Expand D: child C [250]
4. Expand C: child D [244], then C [250], then D [244], …
Infinite branch!

Greedy search can be incomplete even in a finite state
space, much like depth-first search.
(It is complete in finite space with
repeated-state checking.)
Greedy Search: Time and Space
Complexity?
• Greedy search is not optimal.
• Greedy search is incomplete.
• In the worst case, the time and
space complexity of greedy
search are both O(b^m)
• Where b is the branching factor and m
the maximum path length
   Frontier (Greedy)                             Expand       Explored
1  (A,40)                                        A            Empty
2  (A-B,32) (A-C,25) (A-D,35)                    C            A
3  (A-B,32) (A-D,35) (A-C-E,19) (A-C-F,17)       F            A, C
4  (A-B,32) (A-D,35) (A-C-E,19) (A-C-F-G,0)      G            A, C, F
5  (A-C-F-G,0)                                   Goal found

   Frontier (Greedy)                             Expand       Explored
1  (A,40)                                        A            Empty
2  (A-C,25) (A-B,32) (A-D,35)                    C            A
3  (A-B,32) (A-D,35) (A-C-F,17) (A-C-E,19)       F            A, C
4  (A-B,32) (A-D,35) (A-C-F-G,0) (A-C-E,19)      G            A, C, F
5  (A-C-F-G)                                     Goal found   A, C, F, G
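The trace above can be reproduced with a short greedy-search sketch. The graph and h-values below are reverse-engineered from the table (A branches to B, C, D; C to E, F; F to the goal G), so treat them as an illustration rather than the lecture's actual example:

```python
import heapq

# Hypothetical graph and heuristic values inferred from the trace table
graph = {'A': ['B', 'C', 'D'], 'B': [], 'C': ['E', 'F'],
         'D': [], 'E': [], 'F': ['G'], 'G': []}
h = {'A': 40, 'B': 32, 'C': 25, 'D': 35, 'E': 19, 'F': 17, 'G': 0}

def greedy(start, goal):
    """Greedy search: pop the frontier entry with the lowest h only.

    Returns the path found and the list of expanded states, in order."""
    frontier = [(h[start], [start])]  # entries: (h of last node, path)
    explored = []
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, explored
        explored.append(node)
        for nxt in graph[node]:
            if nxt not in explored:
                heapq.heappush(frontier, (h[nxt], path + [nxt]))
    return None, explored
```

Running `greedy('A', 'G')` expands A, C, F in that order and then pops the goal, returning the path A-C-F-G, matching rows 1-5 of the table.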