
Announcements

• Course TA: Danny Kumar (ndkumar@cs.unc.edu)
• Guest lectures next week
• Assignment 1 out, due Tuesday, September 14
• Course discussion board
– You should be able to find the discussion thread "AI in the news/on the Web" on Blackboard
– For participation points: at least two posts during the semester
Solving problems by searching
Chapter 3
Search
• We will consider the problem of designing goal‐based agents in observable, deterministic, discrete, known environments
• Example: (figure showing a start state and a goal state)
Search
• We will consider the problem of designing goal‐based agents in observable, deterministic, discrete, known environments
– The solution is a fixed sequence of actions
– Search is the process of looking for the sequence of actions that reaches the goal
– Once the agent begins executing the search solution, it can ignore its percepts (open‐loop system)
Search problem components
• Initial state
• Actions
• Transition model
– What is the result of performing a given action in a given state?
• Goal state
• Path cost
– Assume that it is a sum of nonnegative step costs

• The optimal solution is the sequence of actions that gives the lowest path cost for reaching the goal (one way to package these components as code is sketched below)
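To make these components concrete, here is a minimal sketch of how they could be packaged as a class. The interface below (class name, method names, unit default step cost) is an illustrative assumption, not notation from the slides:

class SearchProblem:
    """Minimal sketch of a search problem interface (names are illustrative)."""
    def __init__(self, initial_state):
        self.initial_state = initial_state

    def actions(self, state):
        """Actions applicable in the given state."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state that results from doing action in state."""
        raise NotImplementedError

    def is_goal(self, state):
        """Goal test."""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """Nonnegative cost of one step; the path cost is the sum of step costs."""
        return 1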
Example: Romania
• On vacation in Romania; currently in Arad
• Flight leaves tomorrow from Bucharest

• Initial state
– Arad
• Actions
– Go from one city to another
• Transition model
– If you go from city A to city B, you end up in city B
• Goal state
– Bucharest
• Path cost
– Sum of edge costs
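As a hedged example, the Romania problem could be encoded on top of the SearchProblem sketch above. The dictionary holds only a fragment of the textbook's road map (edge costs are road distances; only edges inside this fragment are listed):

ROMANIA = {
    "Arad":           {"Zerind": 75, "Timisoara": 118, "Sibiu": 140},
    "Zerind":         {"Arad": 75},
    "Timisoara":      {"Arad": 118},
    "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
}

class RomaniaProblem(SearchProblem):
    def actions(self, state):
        return list(ROMANIA[state])           # neighboring cities

    def result(self, state, action):
        return action                         # going to city B puts you in city B

    def is_goal(self, state):
        return state == "Bucharest"

    def step_cost(self, state, action, next_state):
        return ROMANIA[state][next_state]     # road distance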
State space
• The initial state, actions, and transition model define the state space of the problem
– The set of all states reachable from the initial state by any sequence of actions
– Can be represented as a directed graph where the nodes are states and links between nodes are actions
• What is the state space for the Romania problem? (one way to enumerate it is sketched below)
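One concrete way to answer this is to enumerate every state reachable from the initial state. The helper below is a sketch that assumes the RomaniaProblem fragment defined above:

from collections import deque

def reachable_states(problem):
    """Enumerate the state space: all states reachable from the initial state."""
    seen = {problem.initial_state}
    frontier = deque([problem.initial_state])
    while frontier:
        state = frontier.popleft()
        for action in problem.actions(state):
            next_state = problem.result(state, action)
            if next_state not in seen:
                seen.add(next_state)
                frontier.append(next_state)
    return seen

# For the road-map fragment above, this returns the set of cities reachable from Arad.
print(reachable_states(RomaniaProblem("Arad")))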
Example: Vacuum world

• States
– Agent location and dirt location
– How many possible states?
– What if there are n possible locations?
• Actions
– Left, right, suck
• Transition model (a sketch follows below)
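A rough sketch of the two‑square world follows; the state encoding (agent position plus a frozenset of dirty squares) is an assumed convention, not the lecture's. With 2 locations there are 2 x 2^2 = 8 states, and with n locations there are n * 2^n:

from itertools import chain, combinations

# State = (agent_pos, dirty), where dirty is a frozenset of squares containing dirt.
SQUARES = ("A", "B")

def vacuum_result(state, action):
    """Transition model: Left/Right move the agent, Suck cleans the current square."""
    pos, dirty = state
    if action == "Left":
        return ("A", dirty)
    if action == "Right":
        return ("B", dirty)
    if action == "Suck":
        return (pos, dirty - {pos})
    raise ValueError(f"unknown action: {action}")

# Counting states: the agent is in one of n squares and each square is dirty or clean,
# giving n * 2**n states; here 2 * 2**2 = 8.
all_dirt = [frozenset(c) for c in
            chain.from_iterable(combinations(SQUARES, k) for k in range(len(SQUARES) + 1))]
states = [(pos, dirt) for pos in SQUARES for dirt in all_dirt]
assert len(states) == 8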
Vacuum world state space graph
Example: The 8‐puzzle
• States
– Locations of tiles
• 8‐puzzle: 181,440 states
• 15‐puzzle: 1.3 trillion states
• 24‐puzzle: 10^25 states
• Actions
– Move blank left, right, up, down (a sketch of the transition model follows below)
• Path cost
– 1 per move

• Finding an optimal solution of the n‐puzzle is NP‐hard
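As a hedged sketch, the 8‑puzzle's actions and transition model might look as follows, assuming a state is a tuple of 9 entries in row‑major order with 0 for the blank (an illustrative encoding):

MOVES = {"Up": -3, "Down": +3, "Left": -1, "Right": +1}

def puzzle_actions(state):
    """Actions = directions in which the blank can move."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    acts = []
    if row > 0: acts.append("Up")
    if row < 2: acts.append("Down")
    if col > 0: acts.append("Left")
    if col < 2: acts.append("Right")
    return acts

def puzzle_result(state, action):
    """Transition model: swap the blank with the tile it moves onto."""
    blank = state.index(0)
    target = blank + MOVES[action]
    new = list(state)
    new[blank], new[target] = new[target], new[blank]
    return tuple(new)

# Example: one move away from a common goal configuration (blank in the last square).
goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
print(puzzle_result(goal, "Up"))   # blank moves up: (1, 2, 3, 4, 5, 0, 7, 8, 6)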
Example: Robot motion planning

• States
– Real‐valued coordinates of robot joint angles
• Actions
– Continuous motions of robot joints
• Goal state
– Desired final configuration (e.g., object is grasped)
• Path cost
– Time to execute, smoothness of path, etc.
Other Real‐World Examples
• Routing
• Touring
• VLSI layout
• Assembly sequencing
• Protein design
Search
• Given:
– Initial state
– Actions
– Transition model
– Goal state
– Path cost

• How do we find the optimal solution?
– How about building the state space and then using Dijkstra's shortest path algorithm?
• The state space is huge!
• The complexity of Dijkstra's algorithm is O(E + V log V), where V is the size of the state space and E is the number of edges (for the 24‐puzzle, V is about 10^25, so even constructing the state space explicitly is infeasible)
Tree Search
• Let's begin at the start node and expand it by making a list of all possible successor states
• Maintain a fringe, or a list of unexpanded states
• At each step, pick a state from the fringe to expand
• Keep going until you reach the goal state
• Try to expand as few states as possible
Search tree
(Figure: tree diagram with the starting state at the root, actions leading to successor states, and a goal state at one of the leaves)
• "What if" tree of possible actions and outcomes
• The root node corresponds to the starting state
• The children of a node correspond to the successor states of that node's state
• A path through the tree corresponds to a sequence of actions
– A solution is a path ending in the goal state
• Nodes vs. states
– A state is a representation of a physical configuration, while a node is a data structure that is part of the search tree
Tree Search Algorithm Outline
• Initialize the fringe using the starting state
• While the fringe is not empty
– Choose a fringe node to expand according to the search strategy
– If the node contains the goal state, return the solution
– Else expand the node and add its children to the fringe
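This outline translates almost line for line into code. The sketch below builds on the SearchProblem interface sketched earlier and uses a FIFO fringe purely for concreteness; the choice of fringe discipline is exactly what the following slides call the search strategy:

from collections import deque

class Node:
    """A search-tree node: a state plus the path that reached it."""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost

    def solution(self):
        """Sequence of actions from the root down to this node."""
        actions, node = [], self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

def tree_search(problem):
    # Initialize the fringe using the starting state.
    fringe = deque([Node(problem.initial_state)])
    while fringe:
        # Choose a fringe node to expand (FIFO here; this choice is the search strategy).
        node = fringe.popleft()
        # If the node contains the goal state, return the solution.
        if problem.is_goal(node.state):
            return node.solution()
        # Else expand the node and add its children to the fringe.
        for action in problem.actions(node.state):
            child = problem.result(node.state, action)
            cost = node.path_cost + problem.step_cost(node.state, action, child)
            fringe.append(Node(child, node, action, cost))
    return None  # fringe exhausted without reaching the goal

# Usage with the Romania sketch above (illustrative):
# print(tree_search(RomaniaProblem("Arad")))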
Tree search example
(Figures: a sequence of slides showing successive snapshots of the search tree as nodes are expanded, with the current fringe labeled)
Search strategies
• A search strategy is defined by picking the order of node expansion
• Strategies are evaluated along the following dimensions:
– Completeness: does it always find a solution if one exists?
– Optimality: does it always find a least‐cost solution?
– Time complexity: number of nodes generated
– Space complexity: maximum number of nodes in memory
• Time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the least‐cost solution
– m: maximum length of any path in the state space (may be infinite)
Uninformed search strategies
• Uninformed search strategies use only the information available in the problem definition

• Breadth‐first search
• Uniform‐cost search
• Depth‐first search
• Iterative deepening search
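These strategies differ mainly in how the fringe from the tree search outline is ordered. The mapping below is the standard one; the variable names are illustrative, and the tree_search sketch above would simply swap in the corresponding data structure:

import heapq
from collections import deque

# Each uninformed strategy corresponds to a different fringe discipline:
#   Breadth-first search  -> FIFO queue (expand the shallowest node first)
#   Depth-first search    -> LIFO stack (expand the deepest node first)
#   Uniform-cost search   -> priority queue ordered by path cost g(n)
#   Iterative deepening   -> repeated depth-limited DFS with limits 0, 1, 2, ...

fifo_fringe = deque()            # BFS: popleft() removes the oldest node
lifo_fringe = []                 # DFS: pop() removes the newest node
ucs_fringe = []                  # UCS: heapq keeps the cheapest node at the front
heapq.heappush(ucs_fringe, (0, "Arad"))   # entries are (path_cost, state)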