Lec 07 Informed Search I

Informed Search

Heuristic Search
Recap: Search
• Search problem:
• States (configurations of the world)
• Actions and costs
• Successor function (world dynamics)
• Start state and goal test

• Search tree:
• Nodes: represent plans for reaching states
• Plans have costs (sum of action costs)

• Search algorithm:
• Systematically builds a search tree
• Chooses an ordering of the fringe (unexplored nodes)
• Optimal: finds least-cost plans
Video of Demo: Maze Water DFS/BFS (part 1)
Video of Demo: Maze Water DFS/BFS (part 2)
Video of Demo: Contours UCS Empty
Video of Demo: Contours UCS Pacman Small Maze
Today
• Informed Search
• Heuristics
• Greedy Search
• A* Search
Breadth-first search of the 8-puzzle, showing the order in which states were removed from the open list.
Depth-first search of the 8-puzzle with a depth bound of 5.
Using problem-specific knowledge to aid searching
• With knowledge, one can search the state space as if given "hints" while exploring a maze.
• Heuristic information in search = hints
• This leads to a dramatic speed-up in efficiency.

[Figure: a search tree with nodes B-O; the hint restricts the search to a single subtree ("Search only in this subtree!").]
What are Heuristics?
• Heuristics are rules of thumb acquired through experience.
• Heuristics often enable us to make decisions quickly, without reasoning deeply about why.
• The more experience we have, the better our heuristics become.
When are Heuristics used?
AI problem solvers employ heuristics in two basic situations:
1. A problem may not have an exact solution because of inherent
ambiguities in the problem statement or available data.
• Medical diagnosis: a given set of symptoms may have several possible causes;
• Doctors use heuristics to choose the most likely diagnosis and formulate a
treatment plan.
2. A problem may have an exact solution, but the computational cost of
finding it may be prohibitive.
• Chess: state-space growth is combinatorially explosive, with the number of possible
states increasing exponentially with the depth of the search.
• Heuristics attack this complexity by guiding the search along the most “promising”
path through the space.
Why do heuristic functions work?
• Uninformed search: at most b choices at each node and a depth of d to the goal node.
• In the worst case, the search examines on the order of b^d nodes before finding a solution (exponential time complexity).

• Heuristics improve the efficiency of search algorithms by reducing the effective branching factor from b to (ideally) a lower constant b* such that b* < b (see the relation below).

• They guide the search towards the goal instead of all over the place.

[Figure: informed search explores a narrow corridor from Start to Goal; uninformed search fans out in every direction.]
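For reference (this defining relation is not on the slide, but is the standard textbook definition), the effective branching factor b* of a search that expanded N nodes to find a solution at depth d is the value satisfying

\[ N + 1 = 1 + b^* + (b^*)^2 + \cdots + (b^*)^d \]

so a good heuristic keeps b* close to 1.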
Evaluation Function
• An evaluation function gives an estimate of the "cost" of a node in a tree/graph,
• so that the node with the least cost among all possible choices can be selected for expansion first.
• Three approaches to defining the evaluation / heuristic function:

• Evaluation function: f(n) = g(n) + h(n)

• g(n) = exact cost incurred so far to reach n
• h(n) = estimated cost from n to the goal
• f(n) = estimated total cost of the cheapest path through n to the goal

• Special cases:
• Uniform Cost Search: f(n) = g(n)
• Greedy (best-first) Search: f(n) = h(n)
• A* Search: f(n) = g(n) + h(n)
How does it work?
• Idea: use an evaluation function for each node
• that estimates its desirability.
• Expand the most desirable unexpanded node.
• Implementation:
• the fringe is a priority queue, sorted in order of desirability (see the sketch after this list).
• Special cases:
• Uniform Cost Search (uninformed)
• Greedy (best-first) Search (informed)
• A* Search (informed)
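As an illustration of this implementation (a minimal sketch, not from the slides; the graph representation and function names are assumptions), a single best-first search routine can realize all three special cases by swapping the priority function f:

```python
import heapq

def best_first_search(start, goal, neighbors, f):
    """Generic best-first search.
    neighbors(state) -> iterable of (next_state, step_cost)
    f(g, state)      -> priority of a node reached with path cost g
    Returns (path, path_cost), or (None, inf) if no solution is found."""
    frontier = [(f(0, start), 0, start, [start])]    # the fringe: a priority queue
    explored = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)  # pop the most desirable node
        if state == goal:
            return path, g
        if state in explored:                        # repeated-state checking
            continue
        explored.add(state)
        for nxt, cost in neighbors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (f(g + cost, nxt), g + cost, nxt, path + [nxt]))
    return None, float("inf")

# Special cases (h(s) is some heuristic function assumed to be defined):
#   Uniform Cost Search:  f = lambda g, s: g
#   Greedy Search:        f = lambda g, s: h(s)
#   A* Search:            f = lambda g, s: g + h(s)
```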
Heuristic Search
• A heuristic is:
• A function that estimates how close a state is to a goal
• Designed for a particular search problem
• Examples: Manhattan distance, Euclidean distance for pathing

[Figure: a Pacman maze with example distance estimates to the goal: 10, 5, 11.2.]
Example: Heuristic Function

[Figure: example states with their heuristic values h(x).]
First three levels of the tic-tac-toe state space reduced by symmetry
The “most wins” heuristic applied to the first children in tic-tac-toe.
Heuristically reduced state space for tic-tac-toe.
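A small illustrative sketch (not from the slides) of how the "most wins" heuristic can be computed: count the winning lines still usable by the player. The board encoding and function name are assumptions.

```python
# The 8 winning lines of a 3x3 tic-tac-toe board (cells indexed 0..8, row-major).
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def most_wins(board, player):
    """Heuristic value of `board` for `player`: the number of winning lines
    containing at least one of the player's marks and none of the opponent's.
    board: list/tuple of 9 cells, each 'X', 'O', or ' ' (empty)."""
    opponent = 'O' if player == 'X' else 'X'
    return sum(
        any(board[i] == player for i in line)
        and all(board[i] != opponent for i in line)
        for line in WIN_LINES
    )

# First-move example: X in the centre keeps 4 winning lines open,
# a corner keeps 3, and a side square only 2.
centre = [' '] * 9; centre[4] = 'X'
corner = [' '] * 9; corner[0] = 'X'
side   = [' '] * 9; side[1] = 'X'
print(most_wins(centre, 'X'), most_wins(corner, 'X'), most_wins(side, 'X'))  # 4 3 2
```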
8–Puzzle: Heuristics?

• h1: the number of misplaced tiles (counting only numbered tiles, not the blank).

• h2: the sum of the distances of the tiles from their goal positions (Manhattan distance).
h1: The number of misplaced tiles (not including the blank)

• Notation: h(current state) = 1
• because this state has one misplaced tile.

• Only tile "8" is misplaced,

• so the heuristic function evaluates to 1.
• The heuristic is telling us that it thinks a solution might be available in just one more move.
h2: The sum of the distances of tiles from goal positions

• Notation: h(current state) = 8
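A minimal Python sketch of both heuristics (not from the slides; the goal layout below is an assumption, with the board stored row-major and 0 for the blank):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout; 0 denotes the blank

def h1_misplaced(state, goal=GOAL):
    """h1: number of numbered tiles not in their goal position (blank excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2_manhattan(state, goal=GOAL):
    """h2: sum of Manhattan distances of the numbered tiles from their goal positions."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total
```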


Greedy Best-First Search
eval-fn: f(n) = h(n)

Greedy (Best-First) Search
• Idea: expand the node with the smallest estimated cost to reach the goal.
• Use the heuristic as the evaluation function: f(n) = h(n).
• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly.

• Choose the path with the lowest heuristic value.


Greedy Search
• Expand the node that seems closest…

• What can go wrong?


Greedy Search
• Expand the node that seems closest…(order frontier by h)
• What can possibly go wrong?
[Figure: on the Romania map, greedy search at Sibiu (h = 253) prefers Fagaras (h = 176) over Rimnicu Vilcea (h = 193), yet
 Sibiu-Fagaras-Bucharest = 99 + 211 = 310, while
 Sibiu-Rimnicu Vilcea-Pitesti-Bucharest = 80 + 97 + 101 = 278.]
[Figure: a second example with an edge of cost 1,000,000 and heuristic values h = 100 and h = 0.]
Greedy Search
• Strategy: expand the node that you think is closest to a goal state.
• Heuristic: an estimate of the distance to the nearest goal for each state.

• A common case:
• best-first takes you straight to the (wrong) goal.

• Worst case: like a badly-guided DFS.

[Figure: two search trees with branching factor b, illustrating the common case and the worst case.]
Best-First Search
Greedy Search

[Figure: search graph from Start (A) to Goal (I), with edge costs
 A-B 75, A-C 118, A-E 140, C-D 111, E-G 80, E-F 99, G-H 97, F-I 211, H-I 101.]

State heuristic values:
State   h(n)
A       366
B       374
C       329
D       244
E       253
F       178
G       193
H        98
I         0

f(n) = h(n) = straight-line distance heuristic
Greedy Search: Tree Search
Step 1: start at A.

Greedy Search: Tree Search
Step 2: expand A; its children are C [329], B [374], and E [253]. E has the lowest h, so expand E next.

Greedy Search: Tree Search
Step 3: expand E; its children are G [193], F [178], and A [366]. F has the lowest h, so expand F next.

Greedy Search: Tree Search
Step 4: expand F; its children are I [0] and E [253]. I has the lowest h and is the Goal.

Greedy Search: Tree Search
Step 5: the path returned is A-E-F-I.
Path cost(A-E-F-I) = 253 + 178 + 0 = 431
dist(A-E-F-I) = 140 + 99 + 211 = 450
Greedy Search: Optimal?

[Figure: the same search graph and straight-line-distance heuristic table as above; f(n) = h(n).]

Not optimal! dist(A-E-G-H-I) = 140 + 80 + 97 + 101 = 418
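A runnable sketch of this example (assuming the edge costs and heuristic values from the figure above; the function name greedy_search is illustrative):

```python
import heapq

# Edge costs and straight-line-distance heuristic taken from the example graph.
EDGES = {
    'A': [('B', 75), ('C', 118), ('E', 140)],
    'B': [('A', 75)],
    'C': [('A', 118), ('D', 111)],
    'D': [('C', 111)],
    'E': [('A', 140), ('G', 80), ('F', 99)],
    'F': [('E', 99), ('I', 211)],
    'G': [('E', 80), ('H', 97)],
    'H': [('G', 97), ('I', 101)],
    'I': [('F', 211), ('H', 101)],
}
H = {'A': 366, 'B': 374, 'C': 329, 'D': 244, 'E': 253,
     'F': 178, 'G': 193, 'H': 98, 'I': 0}

def greedy_search(start, goal):
    """Greedy best-first graph search: order the frontier by h(n) only."""
    frontier = [(H[start], 0, start, [start])]       # (h, g, state, path)
    explored = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if state in explored:                        # repeated-state checking
            continue
        explored.add(state)
        for nxt, cost in EDGES[state]:
            if nxt not in explored:
                heapq.heappush(frontier, (H[nxt], g + cost, nxt, path + [nxt]))
    return None, float("inf")

print(greedy_search('A', 'I'))
# (['A', 'E', 'F', 'I'], 450) -- greedy misses the optimal path A-E-G-H-I, which costs 418
```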
Greedy Search: Complete?

[Figure: the same search graph and heuristic table, except that h(C) is changed to 250 (marked **).
 f(n) = h(n) = straight-line distance heuristic.]
Greedy Search: Tree Search
Step 1: start at A.

Greedy Search: Tree Search
Step 2: expand A; its children are C [250], B [374], and E [253]. C now has the lowest h, so expand C next.

Greedy Search: Tree Search
Step 3: expand C; its child is D [244]. Expand D next.

Greedy Search: Tree Search
Step 4: expand D; its child is C [250] again. Infinite branch!

Greedy Search: Tree Search
Step 5: the search keeps alternating between C [250] and D [244] forever.

Greedy search is therefore incomplete even in a finite state space, much like depth-first search.
(It is complete in a finite space with repeated-state checking.)
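For illustration (a hedged continuation of the assumed greedy_search sketch above, not from the slides): that sketch already performs repeated-state checking via its explored set, so with the modified heuristic h(C) = 250 it does not fall into the C-D loop:

```python
H['C'] = 250                      # the heuristic value from the "Complete?" slide
print(greedy_search('A', 'I'))
# (['A', 'E', 'F', 'I'], 450) -- D's only successor C is already explored,
# so the search backs off to E instead of oscillating between C and D
```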
Greedy Search: Time and Space Complexity?

[Figure: the same search graph from Start (A) to Goal (I).]

• Greedy search is not optimal.
• Greedy search is incomplete.
• In the worst case, the time and space complexity of greedy search are both O(b^m),
• where b is the branching factor and m is the maximum path length.
Step  Frontier (Greedy)                                Expand  Explored
1     (A,40)                                           A       Empty
2     (A-B,32)(A-C,25)(A-D,35)                         C       A
3     (A-B,32)(A-D,35)(A-C-E,19)(A-C-F,17)             F       A,C
4     (A-B,32)(A-D,35)(A-C-E,19)(A-C-F-G,0)            G       A,C,F
5     (A-C-F-G,0)                                      Goal Found

Step  Frontier (Greedy)                                Expand  Explored
1     (A,40)                                           A       Empty
2     (A-C,25)(A-B,32)(A-D,35)                         C       A
3     (A-B,32)(A-D,35)(A-C-F,17)(A-C-E,19)             F       A,C
4     (A-B,32)(A-D,35)(A-C-F-G,0)(A-C-E,19)            G       A,C,F
5     (A-C-F-G,0)                                      Goal Found   A,C,F,G

Step  Frontier (UCS)                                            Expand  Explored
1     (A,0)                                                     A
2     (A-B,11)(A-C,14)(A-D,7)                                   D       A
3     (A-B,11)(A-C,14)(A-D-F,32)                                B       A,D
4     (A-B-E,26)(A-C,14)(A-D-F,32)                              C       A,D,B
5     (A-B-E,26)(A-C-F,24)(A-C-E,22)(A-D-F,32)                  E       A,D,B,C
6     (A-B-E,26)(A-C-F,24)(A-C-E-H,31)(A-D-F,32)                F       A,D,B,C,E
7     (A-B-E,26)(A-C-F-G,44)(A-C-E-H,31)(A-D-F,32)              E       A,D,B,C,E,F
8     (A-B-E,26)(A-C-F-G,44)(A-C-E-H,31)(A-D-F,32)              F
9     (A-B-E,26)(A-C-F-G,44)(A-C-E-H,31)(A-D-F,32)              H
10    (A-C-E-H-G,41)                                            Goal Found
Heuristics
• Problem:
• A heuristic is only an informed guess of the next step to be taken in solving a
problem.
• It is often based on experience or intuition.
• Heuristics use limited information, such as knowledge of the present situation
or descriptions of states currently on the open list.
• They are not always able to predict the exact behavior of the state space
farther along in the search.
• A heuristic can lead a search algorithm to a suboptimal solution or fail to find
any solution at all. This is an inherent limitation of heuristic search.
