Lecture 2 - Uninformed Search

The document discusses various types of agents in artificial intelligence, focusing on reflex agents and planning agents, and introduces search problems and methods. It covers uninformed search techniques such as Depth-First Search, Breadth-First Search, and Uniform-Cost Search, detailing their strategies, properties, and applications. Additionally, it explains the concepts of state spaces, search trees, and the importance of modeling in search algorithms.

CSC 2114: Artificial Intelligence

Search

Credit to
University of California, Berkeley
[These slides adapted from Dan Klein and Pieter Abbeel]
Today
o Agents that Plan Ahead
o Search Problems
o Uninformed Search Methods
  o Depth-First Search
  o Breadth-First Search
  o Uniform-Cost Search
Agents that Plan
Reflex Agents

o Reflex agents:
  o Choose action based on current percept (and maybe memory)
  o May have memory or a model of the world’s current state
  o Do not consider the future consequences of their actions
  o Consider how the world IS

o Can a reflex agent be rational?
Planning Agents

o Planning agents:
  o Ask “what if”
  o Decisions based on (hypothesized) consequences of actions
  o Must have a model of how the world evolves in response to actions
  o Must formulate a goal (test)
  o Consider how the world WOULD BE

o Optimal vs. complete planning

o Planning vs. replanning
Search Problems

o A search problem consists of:
  o A state space
  o A successor function (with actions, costs), e.g. “N”, 1.0 or “E”, 1.0
  o A start state and a goal test

o A solution is a sequence of actions (a plan) which transforms the start state to a goal state
Search Problems Are Models
Example: Traveling in Romania

o State space:
  o Cities
o Successor function:
  o Roads: Go to adjacent city with cost = distance
o Start state:
  o Arad
o Goal test:
  o Is state == Bucharest?

o Solution?
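The components above can be sketched as a minimal Python interface. This is an illustrative assumption, not the lecture's notation; the class and method names are invented, and the road fragment uses distances from the standard AIMA Romania map.

```python
class SearchProblem:
    """A search problem: state space, successor function, start state, goal test."""

    def __init__(self, start, goal, roads):
        # roads maps a city to a list of (neighbor, distance) pairs
        self.start = start
        self.goal = goal
        self.roads = roads

    def get_start_state(self):
        return self.start

    def is_goal(self, state):
        return state == self.goal

    def successors(self, state):
        # each successor is (action, next_state, cost)
        return [("go-" + city, city, dist) for city, dist in self.roads.get(state, [])]

# A small fragment of the Romania map (hypothetical subset for illustration)
romania = SearchProblem(
    start="Arad",
    goal="Bucharest",
    roads={"Arad": [("Sibiu", 140), ("Timisoara", 118)],
           "Sibiu": [("Fagaras", 99)],
           "Fagaras": [("Bucharest", 211)]},
)
print(romania.is_goal("Bucharest"))  # True
```

Any of the search algorithms later in the lecture can then be written against just these three methods, which is exactly the abstraction the slides rely on.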
What’s in a State Space?
The world state includes every last detail of the environment

A search state keeps only the details needed for planning (abstraction)

o Problem: Pathing
  o States: (x,y) location
  o Actions: NSEW
  o Successor: update location only
  o Goal test: is (x,y)=END

o Problem: Eat-All-Dots
  o States: {(x,y), dot booleans}
  o Actions: NSEW
  o Successor: update location and possibly a dot boolean
  o Goal test: dots all false
Safe Passage

o Problem: eat all dots while keeping the ghosts perma-scared
o What does the state space have to specify?
  o (agent position, dot booleans, power pellet booleans, remaining scared time)
State Space Graphs and Search Trees
State Space Graphs

o State space graph: A mathematical representation of a search problem
  o Nodes are (abstracted) world configurations
  o Arcs represent successors (action results)
  o The goal test is a set of goal nodes (maybe only one)

o In a state space graph, each state occurs only once!

o We can rarely build this full graph in memory (it’s too big), but it’s a useful idea
State Space Graphs

[Figure: tiny search graph for a tiny search problem, with states S, a, b, c, d, e, f, h, p, q, r and goal G]

o In a search graph, each state occurs only once!
Search Trees

[Figure: the current state (“this is now”) at the root, branching via actions “N”, 1.0 and “E”, 1.0 into possible futures]

o A search tree:
  o A “what if” tree of plans and their outcomes
  o The start state is the root node
  o Children correspond to successors
  o Nodes show states, but correspond to PLANS that achieve those states
  o For most problems, we can never actually build the whole tree
State Space Graphs vs. Search Trees

Each NODE in in
State Space Graph the search tree is Search Tree
an entire PATH in
the state space S

a G graph. e p
d
b c
b c e h r q
e
d f a a h r p q f
S h We construct both
on demand – and p q f q c G
p q r
we construct as q c G a
little as possible.
a
State Space Graphs vs. Search Trees

Consider this 4-state How big is its search tree (from


graph: S)?
a

S G

b
State Space Graphs vs. Search Trees

Consider this 4-state graph: How big is its search tree (from
S)?
a s
a b
S G
b G a G
b a G b G

… …

Important: Lots of repeated structure in the search tree!


Search Example: Romania
Searching with a Search Tree

o Search:
  o Expand out potential plans (tree nodes)
  o Maintain a fringe of partial plans under consideration
  o Try to expand as few tree nodes as possible
General Tree Search

o Important ideas:
  o Fringe
  o Expansion
  o Exploration strategy

o Main question: which fringe nodes to explore?
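The general loop can be sketched in Python. The fringe object and the problem interface (get_start_state, is_goal, successors) are illustrative assumptions, not the lecture's notation; the demo problem is the 4-state example graph.

```python
class Stack:
    """A LIFO fringe; swapping in a queue or priority queue changes the strategy."""
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop()
    def is_empty(self):
        return not self.items

def tree_search(problem, fringe):
    """General tree search: repeatedly pick a fringe node (a partial plan) and expand it."""
    fringe.push([problem.get_start_state()])       # each fringe entry is a path
    while not fringe.is_empty():
        path = fringe.pop()                        # the exploration strategy lives here
        state = path[-1]
        if problem.is_goal(state):
            return path                            # a plan reaching the goal
        for action, next_state, cost in problem.successors(state):
            fringe.push(path + [next_state])       # expansion
    return None                                    # fringe exhausted: no solution

class TinyProblem:
    """The 4-state example graph: S -> {a, b}, a -> {b, G}, b -> {a, G}."""
    edges = {"S": ["a", "b"], "a": ["b", "G"], "b": ["a", "G"]}
    def get_start_state(self):
        return "S"
    def is_goal(self, state):
        return state == "G"
    def successors(self, state):
        return [("go-" + s, s, 1) for s in self.edges.get(state, [])]

print(tree_search(TinyProblem(), Stack()))  # -> ['S', 'b', 'G']
```

Only the fringe changes from algorithm to algorithm; the loop itself is shared by DFS, BFS, and UCS.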
Example: Tree Search

[Figure: the example graph (states S, a, b, c, d, e, f, h, p, q, r, goal G) and its search tree rooted at S]

Plans considered, in order: s, sd, se, sp, sdb, sdc, sde, sdeh, sder, sderf, sderfc, sderfG
Depth-First Search
Strategy: expand a deepest node first

Implementation: Fringe is a LIFO stack

[Figure: DFS on the example graph, diving down the leftmost branch of the search tree before backtracking]
Search Algorithm Properties
o Complete: Guaranteed to find a solution if one exists?
o Optimal: Guaranteed to find the least cost path?
o Time complexity?
o Space complexity?

o Cartoon of search tree:
  o b is the branching factor
  o m is the maximum depth
  o solutions at various depths
  o 1 node at the root, b nodes at depth 1, b^2 nodes at depth 2, ..., b^m nodes at depth m (m tiers)

o Number of nodes in entire tree?
  o 1 + b + b^2 + ... + b^m = O(b^m)
Depth-First Search (DFS) Properties
o What nodes does DFS expand?
  o Some left prefix of the tree
  o Could process the whole tree!
  o If m is finite, takes time O(b^m)

o How much space does the fringe take?
  o Only has siblings on path to root, so O(b·m)

o Is it complete?
  o m could be infinite, so only if we prevent cycles (more later)

o Is it optimal?
  o No, it finds the “leftmost” solution, regardless of depth or cost
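A minimal DFS sketch, with a LIFO stack of paths and a visited set to prevent cycles (the fix hinted at above). The successor dictionary is an assumed toy graph in the style of the lecture's example, not its exact edges.

```python
def depth_first_search(start, goal, successors):
    """Tree-style DFS: LIFO stack of paths; returns the first plan found."""
    stack = [[start]]                      # fringe: stack of partial paths
    visited = set()
    while stack:
        path = stack.pop()                 # LIFO: deepest node first
        state = path[-1]
        if state == goal:
            return path
        if state in visited:
            continue                       # cycle prevention
        visited.add(state)
        for child in successors.get(state, []):
            stack.append(path + [child])
    return None                            # no solution

# Assumed toy graph: letters for states, G for the goal
graph = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
         "p": ["q"], "b": ["a"], "c": ["a"], "h": ["p", "q"],
         "r": ["f"], "f": ["c", "G"]}
print(depth_first_search("S", "G", graph))  # -> ['S', 'e', 'r', 'f', 'G']
```

Note the O(b·m) space claim: the stack holds at most the siblings along one root-to-leaf path, which is exactly what this implementation does.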
Breadth-First Search
Strategy: expand a shallowest node first

Implementation: Fringe is a FIFO queue

[Figure: BFS on the example graph, expanding the search tree one tier at a time]
Breadth-First Search (BFS) Properties
o What nodes does BFS expand?
  o Processes all nodes above shallowest solution
  o Let depth of shallowest solution be s
  o Search takes time O(b^s)

o How much space does the fringe take?
  o Has roughly the last tier, so O(b^s)

o Is it complete?
  o s must be finite if a solution exists, so yes!

o Is it optimal?
  o Only if costs are all 1 (more on costs later)
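The same loop with a FIFO queue gives BFS. A sketch, using collections.deque as the queue and an assumed toy graph (not the lecture's exact edges):

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Tree-style BFS: FIFO queue of paths; finds a shallowest solution."""
    queue = deque([[start]])               # fringe: queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()             # FIFO: shallowest node first
        state = path[-1]
        if state == goal:
            return path
        for child in successors.get(state, []):
            if child not in visited:       # avoid re-queueing repeated states
                visited.add(child)
                queue.append(path + [child])
    return None

# Assumed toy graph, same style as the DFS example
graph = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
         "p": ["q"], "b": ["a"], "c": ["a"], "h": ["p", "q"],
         "r": ["f"], "f": ["c", "G"]}
print(breadth_first_search("S", "G", graph))  # -> ['S', 'e', 'r', 'f', 'G']
```

The only structural difference from the DFS sketch is popleft() versus pop(), which is the "fringe strategy is the algorithm" point made later in the lecture.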
Quiz: DFS vs BFS

o When will BFS outperform DFS?

o When will DFS outperform BFS?
Iterative Deepening
o Idea: get DFS’s space advantage with BFS’s time / shallow-solution advantages

o Run a DFS with depth limit 1. If no solution…
o Run a DFS with depth limit 2. If no solution…
o Run a DFS with depth limit 3. …

o Isn’t that wastefully redundant?
  o Generally most work happens in the lowest level searched, so not so bad!
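The steps above can be sketched as depth-limited DFS rerun with growing limits. The max_depth cap and the demo graph are assumptions for illustration.

```python
def depth_limited_search(start, goal, successors, limit):
    """DFS that never extends a path beyond `limit` edges."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        state = path[-1]
        if state == goal:
            return path
        if len(path) - 1 < limit:          # only expand above the depth limit
            for child in successors.get(state, []):
                stack.append(path + [child])
    return None

def iterative_deepening_search(start, goal, successors, max_depth=50):
    """Run depth-limited DFS with limits 1, 2, 3, ... until a solution appears."""
    for limit in range(1, max_depth + 1):
        result = depth_limited_search(start, goal, successors, limit)
        if result is not None:
            return result                  # first hit is at the shallowest depth
    return None

# Assumed toy graph, same style as the DFS/BFS examples
graph = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
         "p": ["q"], "b": ["a"], "c": ["a"], "h": ["p", "q"],
         "r": ["f"], "f": ["c", "G"]}
print(iterative_deepening_search("S", "G", graph))  # -> ['S', 'e', 'r', 'f', 'G']
```

Each pass uses only DFS's O(b·m) fringe, yet the first limit that succeeds yields a shallowest solution, which is the BFS-like guarantee the slide describes.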
Cost-Sensitive Search

[Figure: the example graph from START to GOAL, with each edge annotated with a step cost between 1 and 15]

BFS finds the shortest path in terms of number of actions. It does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path.
Uniform Cost Search
Strategy: expand a cheapest node first

Implementation: Fringe is a priority queue (priority: cumulative cost)

[Figure: UCS on the cost-annotated example graph; search-tree nodes are labeled with cumulative costs (S 0, p 1, d 3, b 4, e 5, a 6, r 7, f 8, G 10, ...), and expansion proceeds along increasing cost contours]
Uniform Cost Search (UCS) Properties
o What nodes does UCS expand?
  o Processes all nodes with cost less than cheapest solution!
  o If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε
  o Takes time O(b^(C*/ε)) (exponential in effective depth)

o How much space does the fringe take?
  o Has roughly the last tier, so O(b^(C*/ε))

o Is it complete?
  o Assuming best solution has a finite cost and minimum arc cost is positive, yes!

o Is it optimal?
  o Yes! (Proof next lecture via A*)
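A UCS sketch with heapq as the priority queue. The edge costs below are assumptions chosen to reproduce cumulative costs like those in the figure (d 3, e 5, r 7, f 8, G 10), not guaranteed to be the slide's exact weights.

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """UCS: priority queue keyed on cumulative path cost; pops cheapest first.

    `successors` maps a state to (next_state, step_cost) pairs — an
    illustrative interface, not the lecture's notation.
    """
    fringe = [(0, [start])]                # entries are (cumulative cost, path)
    best_cost = {}                         # cheapest cost at which a state was expanded
    while fringe:
        cost, path = heapq.heappop(fringe) # cheapest node first
        state = path[-1]
        if state == goal:
            return cost, path              # goal test on pop => least-cost path
        if state in best_cost and best_cost[state] <= cost:
            continue                       # already expanded more cheaply
        best_cost[state] = cost
        for child, step in successors.get(state, []):
            heapq.heappush(fringe, (cost + step, path + [child]))
    return None

# Assumed weighted toy graph
graph = {"S": [("d", 3), ("e", 9), ("p", 1)], "d": [("b", 1), ("c", 8), ("e", 2)],
         "e": [("h", 8), ("r", 2)], "p": [("q", 15)], "r": [("f", 1)],
         "f": [("c", 3), ("G", 2)]}
print(uniform_cost_search("S", "G", graph))  # -> (10, ['S', 'd', 'e', 'r', 'f', 'G'])
```

Testing the goal only when a node is popped (not when it is generated) is what makes the returned path least-cost: a cheaper route still on the fringe would have been popped first.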
Uniform Cost Issues
o Remember: UCS explores increasing cost contours

[Figure: concentric cost contours spreading out from Start, with Goal off to one side]

o The good: UCS is complete and optimal!

o The bad:
  o Explores options in every “direction”
  o No information about goal location

o We’ll fix that soon!


The One Queue
o All these search algorithms
are the same except for
fringe strategies
o Conceptually, all fringes are
priority queues (i.e. collections
of nodes with attached
priorities)
o Practically, for DFS and BFS,
you can avoid the log(n)
overhead from an actual
priority queue, by using stacks
and queues
o Can even code one
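That single implementation can be sketched as one priority-queue loop whose behavior is set by a pluggable priority function. The function names, the insertion-counter trick, and the toy graph are illustrative assumptions.

```python
import heapq
import itertools

def generic_search(start, goal, successors, priority):
    """One search loop; the `priority(path, n)` function selects the strategy.

    `n` is an insertion counter, available to the priority function and
    also used as a stable tie-breaker in the heap.
    """
    counter = itertools.count()
    n = next(counter)
    fringe = [(priority([start], n), n, [start])]
    while fringe:
        _, _, path = heapq.heappop(fringe)
        state = path[-1]
        if state == goal:
            return path
        for child in successors.get(state, []):
            new_path = path + [child]
            n = next(counter)
            heapq.heappush(fringe, (priority(new_path, n), n, new_path))
    return None

# BFS: shallowest path first; DFS: most recently generated first (LIFO via -n)
bfs_priority = lambda path, n: len(path)
dfs_priority = lambda path, n: -n

# Assumed toy graph, same style as the earlier examples
graph = {"S": ["d", "e", "p"], "d": ["b", "c", "e"], "e": ["h", "r"],
         "p": ["q"], "b": ["a"], "c": ["a"], "h": ["p", "q"],
         "r": ["f"], "f": ["c", "G"]}
print(generic_search("S", "G", graph, bfs_priority))  # -> ['S', 'e', 'r', 'f', 'G']
```

Adding a cumulative-cost priority would turn the same loop into UCS; this is the sense in which all three algorithms share "the one queue".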
Search and Models

o Search operates over models of the world
  o The agent doesn’t actually try all the plans out in the real world!
  o Planning is all “in simulation”
  o Your search is only as good as your models!