Unit 2d ExtraReading (Compatibility Mode)
1
Why Search?
To achieve goals or to maximize our utility, we need
to predict what the future results of our actions
will be.
2
Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest.
Formulate goal:
be in Bucharest
Formulate problem:
states: various cities
actions: drive between cities or choose next city
Find solution:
sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
3
Example: Romania
4
Task Environment
Static / Dynamic
Previous problem was static: no attention to changes in the environment
Observable / Partially Observable / Unobservable
Previous problem was observable: the agent knew its state at all times
Deterministic / Stochastic
Previous problem was deterministic: no new percepts
were necessary; we can predict the future perfectly given our actions
Discrete / Continuous
Previous problem was discrete: we can enumerate all possibilities
Single Agent
No other agents interacting with your cost function
Sequential
Decisions depend on past decisions
5
Problem Formulation
A problem is defined by four items:
initial state, e.g., "in Arad"
actions or successor function: the states reachable from each state, with their actions
goal test, e.g., x = "in Bucharest"
path cost (additive), e.g., sum of step costs
For guaranteed realizability, any real state "in Arad" must get to some
real state "in Zerind".
An (abstract) solution is a set of real paths that are solutions in the real world.
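The four-item problem definition above can be sketched as a small Python class; the class and method names below are illustrative, not from any particular library, and the partial road map uses distances from the standard AIMA Romania example.

```python
class RouteProblem:
    """Minimal sketch of the four-item problem definition."""

    def __init__(self, initial, goal, road_map):
        self.initial = initial          # 1. initial state, e.g. "Arad"
        self.goal = goal                # used by the goal test
        self.road_map = road_map        # {city: {neighbor: step_cost}}

    def actions(self, state):
        """2. Successor function: cities reachable by one drive."""
        return list(self.road_map[state])

    def result(self, state, action):
        """Driving to a neighboring city puts us in that city."""
        return action

    def goal_test(self, state):
        """3. Goal test, e.g. state == "Bucharest"."""
        return state == self.goal

    def step_cost(self, state, action):
        """4. Step cost; path cost is the sum of these."""
        return self.road_map[state][action]

# A few road distances (km) from the AIMA Romania map.
romania = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
}
problem = RouteProblem("Arad", "Bucharest", romania)
```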
12
Tree search example
13
14
15
Repeated states
Failure to detect repeated states can turn a
linear problem into an exponential one!
16
Solutions to Repeated States
[Figure: a small state space over states S, B, C (left), and an example of a search tree for it (right), in which the same states S, B, C reappear along different branches]
17
Graph Search vs Tree Search
21
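The difference can be sketched in code: graph search keeps an explored set so each state is expanded at most once, which is what prevents repeated states from blowing up the search. The helper name and the toy "line" state space below are illustrative.

```python
from collections import deque

def graph_search_bfs(start, successors, goal_test):
    """BFS with an explored set (graph search): each state expanded once."""
    frontier = deque([start])
    explored = {start}          # remember every state ever generated
    expansions = 0
    while frontier:
        state = frontier.popleft()
        expansions += 1
        if goal_test(state):
            return expansions   # number of nodes expanded to reach the goal
        for s in successors(state):
            if s not in explored:   # the repeated-state check
                explored.add(s)
                frontier.append(s)
    return None

# A line of states 0..10, each connected to both neighbors. Tree search
# would re-generate states by stepping back and forth; graph search
# expands each of the 11 states exactly once.
line = lambda i: [j for j in (i - 1, i + 1) if 0 <= j <= 10]
```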
Complexity Recap
• We often want to characterize algorithms independent of their
implementation.
• Better is a statement like:
"This algorithm takes O(n log n) time to run and O(n) space to store,"
because it abstracts away from irrelevant implementation details.
23
Breadth-first search
Expand shallowest unexpanded node
Fringe: nodes waiting in a queue to be explored
Implementation:
fringe is a first-in-first-out (FIFO) queue, i.e.,
new successors go at end of the queue.
Is A a goal state?
24
Breadth-first search
Expand shallowest unexpanded node
Implementation:
fringe is a FIFO queue, i.e., new successors go
at end
Expand:
fringe = [B,C]
Is B a goal state?
25
Breadth-first search
Expand shallowest unexpanded node
Implementation:
fringe is a FIFO queue, i.e., new successors go
at end
Expand:
fringe=[C,D,E]
Is C a goal state?
26
Breadth-first search
Expand shallowest unexpanded node
Implementation:
fringe is a FIFO queue, i.e., new successors go
at end
Expand:
fringe=[D,E,F,G]
Is D a goal state?
27
Example
BFS
28
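The FIFO-fringe breadth-first search stepped through above can be sketched as follows; the example tree (A expanding to B and C, and so on) is reconstructed from the fringe states shown on the slides.

```python
from collections import deque

# Tree assumed from the slides' fringe states: A -> B, C; B -> D, E; C -> F, G.
tree = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F", "G"],
}

def bfs_order(root):
    """Return the order in which BFS expands nodes."""
    fringe = deque([root])          # FIFO queue
    order = []
    while fringe:
        node = fringe.popleft()     # expand shallowest unexpanded node
        order.append(node)
        fringe.extend(tree.get(node, []))  # new successors go at the end
    return order
```

Running `bfs_order("A")` reproduces the expansion order from the slides: A, B, C, D, E, F, G.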
Properties of breadth-first search
Complete? Yes, it always reaches the goal (if b is finite)
Time? 1 + b + b^2 + b^3 + … + b^d + (b^(d+1) − b) = O(b^(d+1))
(this is the number of nodes we generate)
Space? O(b^(d+1)) (keeps every node in memory,
either in the fringe or on a path to the fringe)
Optimal? Yes (if we guarantee that deeper
solutions are never better, e.g., step cost = 1)
Note: in the new edition, space and time complexity are O(b^d), because
the goal test is applied when a node is generated rather than when it is expanded.
29
Uniform-cost search
Breadth-first search is only optimal if the step cost is non-decreasing
with depth (e.g., constant). Can we guarantee optimality for arbitrary
step costs?
Uniform-cost search: expand the node with the
smallest path cost g(n).
Completeness: guaranteed, provided every step cost is at least some small positive constant ε.
30
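A minimal uniform-cost search sketch: the fringe is a priority queue ordered by path cost g(n). The weighted graph below is hypothetical, chosen so that the cheapest path is not the one with the fewest edges.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the smallest path cost g(n)."""
    frontier = [(0, start, [start])]        # heap of (g, state, path)
    best_g = {start: 0}                     # cheapest known cost per state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                   # goal test on expansion => optimal
            return g, path
        for nxt, cost in graph.get(state, {}).items():
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g, nxt, path + [nxt]))
    return None

# Hypothetical step costs: the direct edge S -> G (12) is beaten by
# the longer path S -> A -> B -> G (1 + 3 + 2 = 6).
graph = {
    "S": {"A": 1, "G": 12},
    "A": {"B": 3},
    "B": {"G": 2},
}
```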
Uniform-cost search
31
Exercise
[Figure: a weighted graph with step costs on its edges, on paths leading from the start S through intermediate states A–F to the goal G]
The graph above shows the step costs for different paths going from the start (S) to
the goal (G).
Use uniform-cost search to find the optimal path to the goal.
32
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = Last In First Out (LIFO) queue (i.e., a stack):
put successors at the front
Is A a goal state?
33
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[B,C]
Is B a goal state?
34
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[D,E,C]
Is D = goal state?
35
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[H,I,E,C]
Is H = goal state?
36
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[I,E,C]
Is I = goal state?
37
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[E,C]
Is E = goal state?
38
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[J,K,C]
Is J = goal state?
39
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[K,C]
Is K = goal state?
40
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[C]
Is C = goal state?
41
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[F,G]
Is F = goal state?
42
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[L,M,G]
Is L = goal state?
43
Depth-first search
Expand deepest unexpanded node
Implementation:
fringe = LIFO queue, i.e., put successors at front
queue=[M,G]
Is M = goal state?
44
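The LIFO-fringe depth-first search walked through above can be sketched as follows; the binary tree is reconstructed from the successive queue states shown on the slides.

```python
# Tree assumed from the slides' queue states.
tree = {
    "A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
    "D": ["H", "I"], "E": ["J", "K"], "F": ["L", "M"],
}

def dfs_order(root):
    """Return the order in which DFS expands nodes."""
    fringe = [root]                 # LIFO: a plain list used as a stack
    order = []
    while fringe:
        node = fringe.pop()         # expand deepest unexpanded node
        order.append(node)
        # push children in reverse so the left child ends up on top
        fringe.extend(reversed(tree.get(node, [])))
    return order
```

Running `dfs_order("A")` reproduces the slides' expansion order: A, B, D, H, I, E, J, K, C, F, L, M, G.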
Properties of depth-first search
Complete? No: fails in infinite-depth spaces or spaces with loops
(complete in finite spaces with repeated-state checking)
Time? O(b^m): terrible if the maximum depth m is much larger than d
Space? O(bm), i.e., linear space
Optimal? No
45
Iterative deepening search
46
Iterative deepening search L=0
47
Iterative deepening search L=1
48
Iterative deepening search L=2
49
Iterative Deepening Search L=3
50
Iterative deepening search
Number of nodes generated in a depth-limited search to
depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d
Complete? Yes
Time? O(b^d)
Space? O(bd)
Optimal? Yes, if step cost = 1 or an increasing
function of depth.
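Iterative deepening can be sketched as a depth-limited DFS repeated with limits L = 0, 1, 2, …; the small example tree below is hypothetical.

```python
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}

def depth_limited(node, goal, limit):
    """DFS that refuses to descend below the given depth limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                 # cutoff reached
    for child in tree.get(node, []):
        path = depth_limited(child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening(root, goal, max_depth=10):
    """Re-run depth-limited search with L = 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        path = depth_limited(root, goal, limit)
        if path is not None:
            return limit, path      # goal found at the shallowest depth
    return None
```

Because each pass revisits the shallow levels, the total work is still O(b^d), while space stays linear like DFS.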
52
Bidirectional Search
Idea
simultaneously search forward from S and backwards
from G
stop when both “meet in the middle”
need to keep track of the intersection of 2 open sets of
nodes
What does searching backwards from G mean?
need a way to specify the predecessors of G
this can be difficult,
e.g., what are the predecessors of checkmate in chess?
which goal to take if there are multiple goal states?
where to start if there is only a goal test, no explicit list?
53
Bi-Directional Search
Complexity: time and space complexity are:
O(b^(d/2))
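Bidirectional search can be sketched as two interleaved breadth-first searches over an undirected graph, stopping when the frontiers meet; the alternation scheme, helper names, and example graph below are illustrative.

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Search forward from start and backward from goal until the frontiers meet."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        # Alternate: one expansion forward, one backward.
        for frontier, parents, other in (
            (frontier_f, parents_f, parents_b),
            (frontier_b, parents_b, parents_f),
        ):
            node = frontier.popleft()
            for nbr in graph[node]:
                if nbr not in parents:
                    parents[nbr] = node
                    frontier.append(nbr)
                if nbr in other:            # the two searches met here
                    return _join(nbr, parents_f, parents_b)
    return None

def _join(meet, parents_f, parents_b):
    """Stitch the forward and backward parent chains into one path."""
    left, node = [], meet
    while node is not None:
        left.append(node)
        node = parents_f[node]
    right, node = [], parents_b[meet]
    while node is not None:
        right.append(node)
        node = parents_b[node]
    return list(reversed(left)) + right

# A simple chain S - A - B - C - G: each search covers about half of it.
graph = {
    "S": ["A"], "A": ["S", "B"], "B": ["A", "C"],
    "C": ["B", "G"], "G": ["C"],
}
```

Each search only reaches depth about d/2, which is where the O(b^(d/2)) bound comes from.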
54
Summary of algorithms
55
Summary
Problem formulation usually requires abstracting away real-world
details to define a state space that can feasibly be
explored
http://www.cs.rmit.edu.au/AI-Search/Product/
http://aima.cs.berkeley.edu/demos.html (for more demos)
56