Lecture 2 - Uninformed Search
Search
Credit to
University of California, Berkeley
[These slides adapted from Dan Klein and Pieter Abbeel]
Today
o Agents that Plan Ahead
o Search Problems
Agents that Plan Ahead
o Reflex agents:
o Choose action based on current percept (and maybe memory)
o May have memory or a model of the world’s current state
o Do not consider the future consequences of their actions
o Consider how the world IS
o Planning agents:
o Ask “what if”
o Decisions based on (hypothesized) consequences of actions
o Must have a model of how the world evolves in response to actions
o Must formulate a goal (test)
o Consider how the world WOULD BE
Search Problems
o A search problem consists of:
o A state space
o A successor function (with actions, costs)
o A start state and a goal test
Example: Traveling in Romania
o State space:
o Cities
o Successor function:
o Roads: Go to adjacent city with cost = distance
o Start state:
o Arad
o Goal test:
o Is state == Bucharest?
o Solution? A sequence of roads, e.g. Arad – Sibiu – Fagaras – Bucharest
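The Romania problem above can be written down as plain data. This is a sketch, not code from the lecture: the road distances follow the classic AIMA map, only a handful of cities are included, and all names (`ROADS`, `is_goal`, `successors`) are illustrative.

```python
# A sketch of the Romania search problem as data (subset of cities;
# distances from the classic AIMA map, shown here for illustration).
ROADS = {
    "Arad":           [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Sibiu":          [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Fagaras":        [("Sibiu", 99), ("Bucharest", 211)],
    "Pitesti":        [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Bucharest":      [],
}

START = "Arad"                       # start state

def is_goal(state):
    """Goal test: is state == Bucharest?"""
    return state == "Bucharest"

def successors(state):
    """Successor function: adjacent cities, cost = road distance."""
    return ROADS.get(state, [])
```

A solution is then any sequence of roads leading from `START` to a state passing the goal test.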
What’s in a State Space?
The world state includes every last detail of the environment
A search state keeps only the details needed for planning (abstraction)
o A search tree:
o A “what if” tree of plans and their outcomes
o The start state is the root node
o Children correspond to successors
o Nodes show states, but correspond to PLANS that achieve those states
o For most problems, we can never actually build the whole tree
State Space Graphs vs. Search Trees
Each NODE in the search tree is an entire PATH in the state space graph.
We construct both on demand – and we construct as little as possible.
[Figure: the example state space graph alongside its search tree, rooted at S]
State Space Graphs vs. Search Trees
Consider this 4-state graph: how big is its search tree (from S)?
[Figure: a 4-state graph (S, a, b, G) with a cycle between a and b; the search tree alternates a and b forever, so it is infinite]
Important: lots of repeated structure in the search tree!
o Search:
o Expand out potential plans (tree nodes)
o Maintain a fringe of partial plans under consideration
o Try to expand as few tree nodes as possible
General Tree Search
o Important ideas:
o Fringe
o Expansion
o Exploration strategy
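The ideas above fit in a single function. This is a minimal sketch, not the lecture's pseudocode: the fringe's pop order is the exploration strategy, and everything else is shared by all the search algorithms that follow. The names (`tree_search`, `successors`, the toy graph) are made up for illustration.

```python
# A minimal sketch of general tree search. Nodes are (state, plan) pairs,
# so each node really encodes an entire path from the start state.
def tree_search(start, is_goal, successors, pop):
    fringe = [(start, [])]                    # fringe of partial plans
    while fringe:
        state, plan = pop(fringe)             # exploration strategy
        if is_goal(state):
            return plan                       # return the plan, not the state
        for action, nxt in successors(state):
            fringe.append((nxt, plan + [action]))   # expansion
    return None                               # no solution

# Toy example: S --go-A--> A --go-G--> G
succ = {"S": [("go-A", "A")], "A": [("go-G", "G")], "G": []}
plan = tree_search("S", lambda s: s == "G",
                   lambda s: succ[s], lambda f: f.pop())   # LIFO pop
```

Swapping the `pop` argument (LIFO, FIFO, cheapest-first) yields DFS, BFS, and UCS respectively.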
[Figure: tree search walkthrough on the example graph – the fringe holds partial plans, expanded in the order s; sd, se, sp; sdb, sdc, sde; sdeh, sder; sderf; sderfc, sderfG]
Depth-First Search
Strategy: expand the deepest node first
Implementation: the fringe is a LIFO stack
[Figure: DFS expansion order on the example graph, ending at G]
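A LIFO-stack sketch of DFS, assuming my reading of the lecture's example graph as the adjacency list below (the graph itself is not spelled out in the text, so treat it as an approximation).

```python
# A DFS sketch: the fringe is a LIFO stack (a plain Python list).
def dfs(start, goal, neighbors):
    stack = [(start, [start])]            # (state, path to that state)
    while stack:
        state, path = stack.pop()         # LIFO: deepest node first
        if state == goal:
            return path
        # Push children in reverse so the leftmost child is expanded
        # first, matching the left-to-right order on the slide.
        for nxt in reversed(neighbors.get(state, [])):
            stack.append((nxt, path + [nxt]))
    return None                           # no solution found

# My reading of the example graph from the slides (assumption).
GRAPH = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "b": ["a"],
         "c": ["a"], "e": ["h", "r"], "h": ["p", "q"], "r": ["f"],
         "p": ["q"], "q": [], "f": ["c", "G"], "a": [], "G": []}
```

On this graph `dfs("S", "G", GRAPH)` returns the path S-d-e-r-f-G, matching the plan sderfG from the earlier walkthrough.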
Search Algorithm Properties
o Complete: Guaranteed to find a solution if one exists?
o Optimal: Guaranteed to find the least cost path?
o Time complexity?
o Space complexity?
[Cartoon: a search tree with branching factor b – 1 node at the root, b nodes at the next tier, and so on down to depth m]
Depth-First Search (DFS) Properties
o What nodes does DFS expand?
o Some left prefix of the tree; with maximum depth m, could take time O(b^m)
o How much space does the fringe take?
o Only siblings on the path to the root, so O(bm)
o Is it complete?
o m could be infinite, so only if we prevent cycles (more later)
o Is it optimal?
o No, it finds the “leftmost” solution, regardless of depth or cost
Breadth-First Search
Strategy: expand the shallowest node first
Implementation: the fringe is a FIFO queue
[Figure: BFS expansion order on the example graph, tier by tier (“search tiers”)]
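BFS is the same sketch as DFS with the stack swapped for a FIFO queue (`collections.deque`), so the shallowest node is expanded first. The adjacency list is again my reading of the example graph, included so the block is self-contained.

```python
from collections import deque

# A BFS sketch: identical to DFS except the fringe is a FIFO queue.
def bfs(start, goal, neighbors):
    fringe = deque([(start, [start])])    # (state, path to that state)
    while fringe:
        state, path = fringe.popleft()    # FIFO: shallowest node first
        if state == goal:
            return path
        for nxt in neighbors.get(state, []):
            fringe.append((nxt, path + [nxt]))
    return None                           # no solution found

# My reading of the example graph from the slides (assumption).
GRAPH = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "b": ["a"],
         "c": ["a"], "e": ["h", "r"], "h": ["p", "q"], "r": ["f"],
         "p": ["q"], "q": [], "f": ["c", "G"], "a": [], "G": []}
```

Here `bfs("S", "G", GRAPH)` finds S-e-r-f-G, one edge shorter than the DFS answer: BFS returns a shallowest solution.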
Breadth-First Search (BFS) Properties
o What nodes does BFS expand?
o Processes all nodes above the shallowest solution
o Let the depth of the shallowest solution be s
o Search takes time O(b^s)
o How much space does the fringe take?
o Has roughly the last tier, so O(b^s)
o Is it complete?
o s must be finite if a solution exists, so yes!
o Is it optimal?
o Only if costs are all 1 (more on costs later)
[Cartoon: 1 node at the root, b nodes, b^2 nodes, …, b^s nodes at the solution tier, b^m nodes at the bottom]
Quiz: DFS vs BFS
Uniform Cost Search
Strategy: expand the cheapest node first
Implementation: the fringe is a priority queue (priority: cumulative cost)
[Figure: the example graph with edge costs and each node labeled with its cumulative cost from S (S 0, p 1, d 3, e 9, …); cost contours ring the start]
Uniform Cost Search (UCS) Properties
o What nodes does UCS expand?
o Processes all nodes with cost less than the cheapest solution!
o If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε
o Takes time O(b^(C*/ε)) (exponential in effective depth)
o How much space does the fringe take?
o Has roughly the last tier, so O(b^(C*/ε))
o Is it complete?
o Assuming the best solution has a finite cost and the minimum arc cost is positive, yes!
o Is it optimal?
o Yes! (Proof next lecture via A*)
[Cartoon: cost contours at c ≤ 1, c ≤ 2, c ≤ 3 – roughly C*/ε “tiers”]
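A priority-queue sketch of UCS using `heapq`, keyed on cumulative path cost so the cheapest partial plan is expanded first. The toy graph and its step costs below are made up for illustration.

```python
import heapq

# A UCS sketch: the fringe is a priority queue ordered by cumulative cost.
def ucs(start, goal, neighbors):
    """neighbors maps state -> list of (next_state, step_cost) pairs."""
    fringe = [(0, start, [start])]        # (cost so far, state, path)
    while fringe:
        cost, state, path = heapq.heappop(fringe)   # cheapest node first
        if state == goal:
            return cost, path             # first goal popped is cheapest
        for nxt, step in neighbors.get(state, []):
            heapq.heappush(fringe, (cost + step, nxt, path + [nxt]))
    return None                           # no solution found

# Made-up graph with step costs (assumption, not from the slides).
COSTS = {"S": [("A", 1), ("B", 4)], "A": [("B", 1), ("G", 6)],
         "B": [("G", 2)], "G": []}
```

On this graph `ucs("S", "G", COSTS)` returns cost 4 via S-A-B-G, even though the direct edges S-B and A-G look tempting – UCS pays only for the cheapest total path.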
Uniform Cost Issues
o Remember: UCS explores increasing cost contours
[Cartoon: nested contours c1 ≤ c2 ≤ c3 expanding outward from the start]
o The good: UCS is complete and optimal!
o The bad:
o Explores options in every “direction”
o No information about goal location
[Figure: contours expanding uniformly from Start in all directions before reaching Goal]
Search and Models
o Search operates over models of the world
o The agent doesn’t actually try all the plans out in the real world!
o Planning is all “in simulation”
o Your search is only as good as your models…