Lecture 2 - Uninformed Search

The document discusses different types of search agents and search methods for solving problems, including reflex agents that react to current percepts, planning agents that consider future consequences of actions, and uninformed search methods like depth-first, breadth-first, and uniform-cost search that are used to solve search problems modeled as state spaces with start and goal states.

Uploaded by

Mamunur Rashid

Artificial Intelligence

Search
Today
▪ Agents that Plan Ahead

▪ Search Problems

▪ Uninformed Search Methods


▪ Depth-First Search
▪ Breadth-First Search
▪ Uniform-Cost Search
Agents that Plan
Reflex Agents

▪ Reflex agents:
▪ Choose action based on current percept (and maybe memory)
▪ May have memory or a model of the world’s current state
▪ Do not consider the future consequences of their actions
▪ Consider how the world IS

▪ Can a reflex agent be rational?


Demo Reflex Optimal
Demo Reflex Odd
Planning Agents

▪ Planning agents:
▪ Ask “what if”
▪ Decisions based on (hypothesized) consequences of actions
▪ Must have a model of how the world evolves in response to actions
▪ Must formulate a goal (test)
▪ Consider how the world WOULD BE

▪ Optimal vs. complete planning

▪ Planning vs. replanning


Demo Replanning
Search Problems
Search Problems
▪ A search problem consists of:

▪ A state space

▪ A successor function (with actions, costs), e.g. “N”, 1.0 and “E”, 1.0

▪ A start state and a goal test

▪ A solution is a sequence of actions (a plan) which transforms the start state to a goal state
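These components map directly onto a small interface. The following is a minimal sketch in Python (the class and method names are my own illustration, not code from the lecture), using the slide’s “N”, 1.0 and “E”, 1.0 actions as a toy 2×2 grid problem:

```python
class SearchProblem:
    """Abstract search problem: a start state, a goal test, and a
    successor function that yields (action, next_state, cost) triples."""

    def get_start_state(self):
        raise NotImplementedError

    def is_goal(self, state):
        raise NotImplementedError

    def get_successors(self, state):
        raise NotImplementedError


class TinyGridProblem(SearchProblem):
    """Toy instance: move North or East on a 2x2 grid, each step cost 1.0."""

    def get_start_state(self):
        return (0, 0)                      # start in the lower-left corner

    def is_goal(self, state):
        return state == (1, 1)             # goal is the upper-right corner

    def get_successors(self, state):
        x, y = state
        if y < 1:
            yield ("N", (x, y + 1), 1.0)   # "N", 1.0 from the slide
        if x < 1:
            yield ("E", (x + 1, y), 1.0)   # "E", 1.0 from the slide
```

A solution to this instance is the plan [“N”, “E”] (or [“E”, “N”]), which transforms the start state into a goal state.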
Search Problems Are Models
Example: Traveling in Romania

▪ State space:
▪ Cities
▪ Successor function:
▪ Roads: Go to adjacent city with
cost = distance
▪ Start state:
▪ Arad
▪ Goal test:
▪ Is state == Bucharest?

▪ Solution?
What’s in a State Space?
The world state includes every last detail of the environment

A search state keeps only the details needed for planning (abstraction)

▪ Problem: Pathing
▪ States: (x,y) location
▪ Actions: NSEW
▪ Successor: update location only
▪ Goal test: is (x,y)=END

▪ Problem: Eat-All-Dots
▪ States: {(x,y), dot booleans}
▪ Actions: NSEW
▪ Successor: update location and possibly a dot boolean
▪ Goal test: dots all false
State Space Sizes?

▪ World state:
▪ Agent positions: 120
▪ Food count: 30
▪ Ghost positions: 12
▪ Agent facing: NSEW

▪ How many
▪ World states?
120 × 2^30 × 12^2 × 4
▪ States for pathing?
120
▪ States for eat-all-dots?
120 × 2^30
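The counts above are just products of independent choices; a few lines of Python check the arithmetic (a sketch of the slide’s reasoning, nothing more):

```python
# Reproduce the slide's state-space counts.
positions = 120            # possible agent (x, y) positions
food_count = 30            # dots, each either present or eaten (2 states each)
ghost_positions = 12       # positions available to each of the 2 ghosts
facings = 4                # N, S, E, W

# Full world state: position x dot configurations x ghost pair x facing
world_states = positions * 2**food_count * ghost_positions**2 * facings

# Pathing only needs the agent's location
pathing_states = positions

# Eat-all-dots needs location plus one boolean per dot
eat_all_dots_states = positions * 2**food_count
```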
Quiz: Safe Passage

▪ Problem: eat all dots while keeping the ghosts perma-scared


▪ What does the state space have to specify?
▪ (agent position, dot booleans, power pellet booleans, remaining scared time)
State Space Graphs and Search Trees
State Space Graphs

▪ State space graph: A mathematical representation of a search problem
▪ Nodes are (abstracted) world configurations
▪ Arcs represent successors (action results)
▪ The goal test is a set of goal nodes (maybe only one)

▪ In a state space graph, each state occurs only once!

▪ We can rarely build this full graph in memory (it’s too big), but it’s a useful idea
State Space Graphs

(Figure: the same points illustrated on a tiny state space graph for a tiny search problem, with states S, a, b, c, d, e, f, h, p, q, r and goal G.)
Search Trees
(Figure: the current state at the root — “This is now / start” — with actions “N”, 1.0 and “E”, 1.0 branching into possible futures.)

▪ A search tree:
▪ A “what if” tree of plans and their outcomes
▪ The start state is the root node
▪ Children correspond to successors
▪ Nodes show states, but correspond to PLANS that achieve those states
▪ For most problems, we can never actually build the whole tree
State Space Graphs vs. Search Trees

Each NODE in the search tree is an entire PATH in the state space graph. We construct both on demand – and we construct as little as possible.

(Figure: the tiny state space graph on the left and its search tree, rooted at S, on the right.)
Quiz: State Space Graphs vs. Search Trees

Consider this 4-state graph: How big is its search tree (from S)?

(Figure: a 4-state graph with states S, a, b and goal G.)
Quiz: State Space Graphs vs. Search Trees

Consider this 4-state graph: How big is its search tree (from S)?

(Figure: the 4-state graph next to its search tree from s, whose branches through a and b keep repeating — the tree is infinite.)

Important: Lots of repeated structure in the search tree!


Tree Search
Search Example: Romania
Searching with a Search Tree

▪ Search:
▪ Expand out potential plans (tree nodes)
▪ Maintain a fringe of partial plans under consideration
▪ Try to expand as few tree nodes as possible
General Tree Search

▪ Important ideas:
▪ Fringe
▪ Expansion
▪ Exploration strategy

▪ Main question: which fringe nodes to explore?


Example: Tree Search

(Figure: the tiny state space graph with states S, a, b, c, d, e, f, h, p, q, r, G.)

Example: Tree Search

(Figure: the same graph beside its search tree, with the partial plans constructed in order: s; s d; s e; s p; s d b; s d c; s d e; s d e h; s d e r; s d e r f; s d e r f c; s d e r f G.)
Depth-First Search
Depth-First Search
Strategy: expand a deepest node first

Implementation: Fringe is a LIFO stack

(Figure: the tiny state space graph and its search tree, expanded in depth-first order.)
Search Algorithm Properties
Search Algorithm Properties
▪ Complete: Guaranteed to find a solution if one exists?
▪ Optimal: Guaranteed to find the least cost path?
▪ Time complexity?
▪ Space complexity?

▪ Cartoon of search tree:
▪ b is the branching factor
▪ m is the maximum depth
▪ solutions at various depths
▪ (tiers: 1 node, b nodes, b^2 nodes, …, b^m nodes over m tiers)

▪ Number of nodes in entire tree?
▪ 1 + b + b^2 + … + b^m = O(b^m)
Depth-First Search (DFS) Properties
▪ What nodes does DFS expand?
▪ Some left prefix of the tree.
▪ Could process the whole tree!
▪ If m is finite, takes time O(b^m)

▪ How much space does the fringe take?
▪ Only has siblings on path to root, so O(bm)

▪ Is it complete?
▪ m could be infinite, so only if we prevent cycles (more later)

▪ Is it optimal?
▪ No, it finds the “leftmost” solution, regardless of depth or cost
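As a concrete sketch, here is a tree-search DFS in Python on the lecture’s tiny example graph. The adjacency below is reconstructed from the slide figures, so treat it as illustrative rather than authoritative:

```python
# Tiny example graph from the slides (adjacency reconstructed from the figures).
GRAPH = {
    "S": ["d", "e", "p"],
    "d": ["b", "c", "e"],
    "e": ["h", "r"],
    "b": ["a"],
    "c": ["a"],
    "h": ["p", "q"],
    "p": ["q"],
    "r": ["f"],
    "f": ["c", "G"],
    "a": [], "q": [], "G": [],
}

def dfs(graph, start, goal):
    """Expand a deepest node first: the fringe is a LIFO stack of plans."""
    fringe = [[start]]                       # each entry is a full path (plan)
    while fringe:
        path = fringe.pop()                  # LIFO: most recently added plan
        state = path[-1]
        if state == goal:
            return path
        # reversed() so the leftmost successor is expanded first
        for nxt in reversed(graph[state]):
            fringe.append(path + [nxt])
    return None
```

On this graph, dfs(GRAPH, "S", "G") returns the leftmost solution S–d–e–r–f–G, regardless of its depth or cost.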
Breadth-First Search
Breadth-First Search
Strategy: expand a shallowest node first

Implementation: Fringe is a FIFO queue

(Figure: the tiny state space graph and its search tree, expanded tier by tier — the “search tiers”.)
Breadth-First Search (BFS) Properties
▪ What nodes does BFS expand?
▪ Processes all nodes above shallowest solution
▪ Let depth of shallowest solution be s
▪ Search takes time O(b^s)

▪ How much space does the fringe take?
▪ Has roughly the last tier, so O(b^s)

▪ Is it complete?
▪ s must be finite if a solution exists, so yes!

▪ Is it optimal?
▪ Only if costs are all 1 (more on costs later)
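BFS differs from a DFS sketch only in the fringe: a FIFO queue instead of a stack. A minimal Python sketch on the same example graph (adjacency reconstructed from the slide figures, so illustrative only):

```python
from collections import deque

# Tiny example graph from the slides (adjacency reconstructed from the figures).
GRAPH = {
    "S": ["d", "e", "p"],
    "d": ["b", "c", "e"],
    "e": ["h", "r"],
    "b": ["a"],
    "c": ["a"],
    "h": ["p", "q"],
    "p": ["q"],
    "r": ["f"],
    "f": ["c", "G"],
    "a": [], "q": [], "G": [],
}

def bfs(graph, start, goal):
    """Expand a shallowest node first: the fringe is a FIFO queue of plans."""
    fringe = deque([[start]])
    while fringe:
        path = fringe.popleft()              # FIFO: oldest (shallowest) plan
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph[state]:
            fringe.append(path + [nxt])
    return None
```

Here bfs(GRAPH, "S", "G") finds S–e–r–f–G, a shallowest solution (fewest actions), where the leftmost DFS solution S–d–e–r–f–G is one action longer.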
Quiz: DFS vs BFS
Video of Demo Maze Water DFS/BFS (part 1)
Video of Demo Maze Water DFS/BFS (part 2)
Quiz: DFS vs BFS

▪ When will BFS outperform DFS?

▪ When will DFS outperform BFS?


Iterative Deepening
▪ Idea: get DFS’s space advantage with BFS’s time / shallow-solution advantages
▪ Run a DFS with depth limit 1. If no solution…
▪ Run a DFS with depth limit 2. If no solution…
▪ Run a DFS with depth limit 3. …

▪ Isn’t that wastefully redundant?
▪ Generally most work happens in the lowest level searched, so not so bad!
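The loop of depth-limited DFS runs can be sketched as follows (again on the example graph reconstructed from the slides; the max_depth cap is my own illustrative safety limit):

```python
# Tiny example graph from the slides (adjacency reconstructed from the figures).
GRAPH = {
    "S": ["d", "e", "p"],
    "d": ["b", "c", "e"],
    "e": ["h", "r"],
    "b": ["a"],
    "c": ["a"],
    "h": ["p", "q"],
    "p": ["q"],
    "r": ["f"],
    "f": ["c", "G"],
    "a": [], "q": [], "G": [],
}

def depth_limited_dfs(graph, path, goal, limit):
    """DFS that refuses to extend any plan past `limit` more actions."""
    state = path[-1]
    if state == goal:
        return path
    if limit == 0:
        return None
    for nxt in graph[state]:
        found = depth_limited_dfs(graph, path + [nxt], goal, limit - 1)
        if found is not None:
            return found
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    """Run DFS with depth limit 1, then 2, then 3, ... until a solution."""
    for limit in range(1, max_depth + 1):
        found = depth_limited_dfs(graph, [start], goal, limit)
        if found is not None:
            return found
    return None
```

Like BFS, this returns a shallowest solution (S–e–r–f–G here), but the fringe at any moment is just a single DFS path.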
Cost-Sensitive Search

(Figure: the tiny state space graph from START (S) to GOAL (G), now with a cost on each edge.)

BFS finds the shortest path in terms of number of actions. It does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path.
Uniform Cost Search
Uniform Cost Search

Strategy: expand a cheapest node first

Implementation: Fringe is a priority queue (priority: cumulative cost)

(Figure: the weighted graph and its search tree annotated with cumulative costs — S 0; d 3, e 9, p 1; b 4, e 5, h 17, r 11, q 16; a 6, h 13, r 7; f 8; q 11, G 10 — with cost contours drawn over the tree.)
Uniform Cost Search (UCS) Properties
▪ What nodes does UCS expand?
▪ Processes all nodes with cost less than cheapest solution!
▪ If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε
▪ Takes time O(b^(C*/ε)) (exponential in effective depth)

▪ How much space does the fringe take?
▪ Has roughly the last tier, so O(b^(C*/ε))

▪ Is it complete?
▪ Assuming best solution has a finite cost and minimum arc cost is positive, yes!

▪ Is it optimal?
▪ Yes! (Proof next lecture via A*)
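A UCS sketch in Python, with a priority queue keyed on cumulative cost. The edge costs below are read off the slide figure where legible (e.g. S→d costs 3, S→p costs 1), so the graph should be taken as illustrative:

```python
import heapq

# Weighted example graph; costs partly reconstructed from the slide figure.
GRAPH = {
    "S": [("d", 3), ("e", 9), ("p", 1)],
    "d": [("b", 1), ("c", 8), ("e", 2)],
    "e": [("h", 8), ("r", 2)],
    "p": [("q", 15)],
    "r": [("f", 1)],
    "f": [("G", 2)],
    "b": [], "c": [], "h": [], "q": [], "G": [],
}

def ucs(graph, start, goal):
    """Expand a cheapest node first: the fringe is a priority queue
    whose priority is the cumulative cost of the plan so far."""
    fringe = [(0, [start])]                  # (cumulative cost, plan)
    while fringe:
        cost, path = heapq.heappop(fringe)   # cheapest plan first
        state = path[-1]
        if state == goal:
            return cost, path
        for nxt, step in graph[state]:
            heapq.heappush(fringe, (cost + step, path + [nxt]))
    return None
```

On this graph ucs(GRAPH, "S", "G") returns cost 10 via S–d–e–r–f–G, matching the G 10 contour on the slide, even though the shorter plan S–e–r–f–G costs 14.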
Uniform Cost Issues
▪ Remember: UCS explores increasing cost contours (c ≤ 1, c ≤ 2, c ≤ 3, …)

▪ The good: UCS is complete and optimal!

▪ The bad:
▪ Explores options in every “direction”
▪ No information about goal location

▪ We’ll fix that soon!


The One Queue
▪ All these search algorithms are the
same except for fringe strategies
▪ Conceptually, all fringes are priority
queues (i.e. collections of nodes with
attached priorities)
▪ Practically, for DFS and BFS, you can
avoid the log(n) overhead from an
actual priority queue, by using stacks
and queues
▪ Can even code one implementation
that takes a variable queuing object
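That idea can be sketched as one tree-search function taking a variable queuing object. The fringe classes and names below are my own illustration (the lecture gives no code), with the weighted example graph partly reconstructed from the slides:

```python
import heapq
from collections import deque

class Stack:                                  # DFS fringe: LIFO
    def __init__(self): self.items = []
    def push(self, item, priority): self.items.append(item)
    def pop(self): return self.items.pop()
    def __bool__(self): return bool(self.items)

class FifoQueue:                              # BFS fringe: FIFO
    def __init__(self): self.items = deque()
    def push(self, item, priority): self.items.append(item)
    def pop(self): return self.items.popleft()
    def __bool__(self): return bool(self.items)

class PriorityQueue:                          # UCS fringe: cheapest first
    def __init__(self): self.heap = []
    def push(self, item, priority): heapq.heappush(self.heap, (priority, item))
    def pop(self): return heapq.heappop(self.heap)[1]
    def __bool__(self): return bool(self.heap)

def tree_search(graph, start, goal, fringe):
    """Generic tree search; only the fringe's queuing discipline varies."""
    fringe.push((0, [start]), 0)              # (cumulative cost, plan)
    while fringe:
        cost, path = fringe.pop()
        state = path[-1]
        if state == goal:
            return path
        for nxt, step in graph[state]:
            fringe.push((cost + step, path + [nxt]), cost + step)
    return None

# Illustrative weighted graph (costs partly reconstructed from the slides).
GRAPH = {
    "S": [("d", 3), ("e", 9), ("p", 1)],
    "d": [("b", 1), ("c", 8), ("e", 2)],
    "e": [("h", 8), ("r", 2)],
    "p": [("q", 15)],
    "r": [("f", 1)],
    "f": [("G", 2)],
    "b": [], "c": [], "h": [], "q": [], "G": [],
}
```

Swapping the fringe object reproduces the three algorithms: a Stack gives DFS behavior, a FifoQueue gives BFS, and a PriorityQueue gives UCS — exactly the “one queue” view of the slide.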
Search Gone Wrong?
Search and Models

▪ Search operates over models of the world
▪ The agent doesn’t actually try all the plans out in the real world!
▪ Planning is all “in simulation”
▪ Your search is only as good as your models…
