
Problem Solving and Searching: CS 171/271 (Chapter 3)

This document discusses problem solving using search algorithms. It defines the components of a problem as an initial state, successor function, goal test, and path cost function. Various uninformed search strategies are presented, including breadth-first search, uniform-cost search, depth-first search, depth-limited search, and iterative deepening search. Their time and space complexities are analyzed. Bi-directional search is also introduced to potentially speed up the search. Repeated states can cause problems and are addressed using node histories.


Problem Solving and Searching
CS 171/271 (Chapter 3)

Some text and images in these slides were drawn from
Russell & Norvig's published material
Problem Solving
Agent Function (diagram not reproduced)
Problem Solving Agent
- Agent finds an action sequence to achieve a goal
- Requires problem formulation
  - Determine goal
  - Formulate problem based on goal
- Searches for an action sequence that solves the problem
- Actions are then carried out, ignoring percepts during that period
Problem
- Initial state
- Possible actions / successor function
- Goal test
- Path cost function

* The state space can be derived from the initial state and the successor function
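The four components above can be collected into a small abstract class. This is an illustrative sketch, not code from the slides; the method names are assumptions.

```python
class Problem:
    """A search problem: initial state, successor function, goal test, path cost."""

    def __init__(self, initial):
        self.initial = initial                       # the initial state

    def successors(self, state):
        """Yield (action, next_state) pairs reachable from state."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if state satisfies the goal."""
        raise NotImplementedError

    def step_cost(self, state, action, next_state):
        """Cost of one step; a path's cost is the sum of its step costs."""
        return 1
```

Note that the state space never needs to be stored explicitly: it is exactly the set of states reachable from `initial` through repeated calls to `successors`.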
Example: Vacuum World
- Environment consists of two squares, A (left) and B (right)
- Each square may or may not be dirty
- An agent may be in A or B
- An agent can perceive whether a square is dirty or not
- An agent may move left, move right, suck dirt (or do nothing)
- Question: is this a complete PEAS description?
PEAS revisited
- Performance Measure: captures the agent's aspirations
- Environment: context, restrictions
- Actuators: indicate what the agent can carry out
- Sensors: indicate what the agent can perceive
Vacuum World Problem
- Initial state: configuration describing
  - location of agent
  - dirt status of A and B
- Successor function
  - R, L, or S causes a different configuration
- Goal test
  - Check whether A and B are both not dirty
- Path cost
  - Number of actions
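The formulation above can be written out concretely. The state encoding `(location, A_is_dirty, B_is_dirty)` is an assumed representation, not one fixed by the slides.

```python
class VacuumProblem:
    """Vacuum world: a state is (location, A_is_dirty, B_is_dirty)."""

    def __init__(self, initial):
        self.initial = initial

    def successors(self, state):
        """Yield (action, next_state) for the actions L, R, and S."""
        loc, dirt_a, dirt_b = state
        yield ('L', ('A', dirt_a, dirt_b))           # move left (no-op when in A)
        yield ('R', ('B', dirt_a, dirt_b))           # move right (no-op when in B)
        if loc == 'A':
            yield ('S', ('A', False, dirt_b))        # suck dirt in A
        else:
            yield ('S', ('B', dirt_a, False))        # suck dirt in B

    def goal_test(self, state):
        _, dirt_a, dirt_b = state
        return not dirt_a and not dirt_b             # both squares clean

    def step_cost(self, state, action, next_state):
        return 1                                     # path cost = number of actions
```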
State Space
- 2 possible locations x (2 x 2 dirt combinations: A clean/dirty, B clean/dirty) = 8 states
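The count can be checked by brute-force enumeration (a throwaway sketch):

```python
from itertools import product

# 2 locations x 2 dirt states for A x 2 dirt states for B
states = list(product(['A', 'B'], [True, False], [True, False]))
print(len(states))  # 8
```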
Sample Problem and Solution
- Initial State: 2
- Action Sequence: Suck, Left, Suck (brings us to which state?)
States and Successors
(state-space diagram not reproduced)
Example: 8-Puzzle
- Initial state: as shown
- Actions? Successor function?
- Goal test?
- Path cost?
Example: 8-Queens Problem
- Position 8 queens on a chessboard so that no queen attacks any other queen
- Initial state?
- Successor function?
- Goal test?
- Path cost?
Example: Route-finding
- Given a set of locations, links (with values) between locations, an initial location, and a destination, find the best route
- Initial state?
- Successor function?
- Goal test?
- Path cost?
Some Considerations
- Environment ought to be static, deterministic, and observable
  - Why?
- If some of the above properties are relaxed, what happens?
- Toy problems versus real-world problems
Searching for Solutions
- Searching through the state space
- Search tree rooted at the initial state
  - A node in the tree is expanded by applying the successor function for each valid action
  - Child nodes are generated with different path costs and depths
- Return the solution once a node with a goal state is reached
Tree-Search Algorithm
(algorithm listing not reproduced)
- fringe: initially-empty container
- What is returned?
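The listing itself did not survive; a minimal sketch of the general algorithm follows, where the `deque`-based fringe and the `(state, actions)` node representation are assumptions rather than the slides' original pseudocode. It also suggests an answer to the question: the search returns an action sequence on success, or failure (`None`) when the fringe is exhausted.

```python
from collections import deque

def tree_search(problem, fifo=True):
    """Generic tree search: the fringe discipline determines the strategy.

    fifo=True pops the oldest node first (BFS order);
    fifo=False pops the newest node first (DFS order).
    Returns a list of actions reaching a goal state, or None on failure.
    """
    fringe = deque([(problem.initial, [])])      # node = (state, actions so far)
    while fringe:
        state, path = fringe.popleft() if fifo else fringe.pop()
        if problem.goal_test(state):
            return path                          # solution: an action sequence
        for action, next_state in problem.successors(state):
            fringe.append((next_state, path + [action]))
    return None                                  # fringe exhausted: failure
```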
Search Strategy
- Strategy: specifies the order of node expansion
- Uninformed search strategies: no additional information beyond states and successors
- Informed or heuristic search: expands "more promising" states
Evaluating Strategies
- Completeness: does it always find a solution if one exists?
- Time complexity: number of nodes generated
- Space complexity: maximum number of nodes in memory
- Optimality: does it always find a least-cost solution?
Time and Space Complexity
Expressed in terms of:
- b: branching factor (maximum number of successors of a node; depends on the possible actions)
- d: depth of the shallowest goal node
- m: maximum path length in the state space
Uninformed Search Strategies
- Breadth-First Search
- Uniform-Cost Search
- Depth-First Search
- Depth-Limited Search
- Iterative Deepening Search
Breadth-First Search
- fringe is a regular first-in-first-out (FIFO) queue
- Start with the initial state; then process the successors of the initial state, followed by their successors, and so on
- Shallow nodes first before deeper nodes
- Complete
- Optimal (if path cost = node depth)
- Time complexity: O(b + b^2 + b^3 + ... + b^(d+1))
- Space complexity: same, O(b^(d+1))
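A concrete BFS sketch on an explicit graph; the graph and its node names are illustrative assumptions, not an example from the slides.

```python
from collections import deque

# Explicit graph as a dict of successor lists (illustrative).
graph = {'S': ['A', 'B'], 'A': ['C', 'G'], 'B': ['G'], 'C': [], 'G': []}

def bfs(graph, start, goal):
    """Return a shallowest path from start to goal, or None."""
    fringe = deque([[start]])                 # FIFO queue of paths
    while fringe:
        path = fringe.popleft()               # oldest (shallowest) path first
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            fringe.append(path + [nxt])
    return None

print(bfs(graph, 'S', 'G'))  # ['S', 'A', 'G'] -- a shallowest goal path
```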
Uniform-Cost Search
- Prioritize nodes that have least path cost (fringe is a priority queue)
- If path cost = number of steps, this degenerates to BFS
- Complete and optimal
  - As long as zero step costs are handled properly
- The route-finding problem, for example, has varying step costs
- Dijkstra's shortest-path algorithm <-> UCS
- Time and space complexity?
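A uniform-cost search sketch using a binary heap as the priority queue; the weighted graph below is an illustrative assumption.

```python
import heapq

# Weighted graph: state -> list of (next_state, step_cost) pairs (illustrative).
graph = {'S': [('A', 1), ('B', 5)], 'A': [('B', 1), ('G', 10)],
         'B': [('G', 2)], 'G': []}

def ucs(graph, start, goal):
    """Expand the fringe node with least path cost (Dijkstra-style)."""
    fringe = [(0, start, [start])]            # priority queue keyed on path cost
    while fringe:
        cost, state, path = heapq.heappop(fringe)
        if state == goal:
            return cost, path
        for nxt, step in graph[state]:
            heapq.heappush(fringe, (cost + step, nxt, path + [nxt]))
    return None

print(ucs(graph, 'S', 'G'))  # (4, ['S', 'A', 'B', 'G'])
```

Note that the direct edge A-G (cost 10) is never taken: the cheaper detour through B wins, which BFS would have missed.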
Depth-First Search
- fringe is a stack (last-in-first-out container)
- Go as deep as possible and then backtrack
- Often implemented using recursion
- Not complete and might not terminate
- Time complexity: O(b^m)
- Space complexity: O(bm)
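A recursive DFS sketch on a tree-shaped graph (no cycles). The graph is an illustrative assumption; note that, per the slide's caveat, this version may not terminate on graphs with cycles.

```python
# Tree-shaped graph (illustrative): no cycles, so recursion terminates.
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['G'], 'C': [], 'G': []}

def dfs(graph, state, goal, path=None):
    """Go as deep as possible; backtrack when a branch is exhausted."""
    path = (path or []) + [state]
    if state == goal:
        return path
    for nxt in graph[state]:                  # descend into each child in turn
        found = dfs(graph, nxt, goal, path)
        if found:
            return found
    return None                               # dead end: backtrack

print(dfs(graph, 'S', 'G'))  # ['S', 'B', 'G']
```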
Depth-Limited Search
- DFS with a pre-determined depth limit l
- Guaranteed to terminate
- Still incomplete
  - Worse, we might choose l < d (the depth of the shallowest goal node)
- Depth limit l can be based on the problem definition
  - e.g., the graph diameter in the route-finding problem
- Time and space complexity depend on l
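A depth-limited variant of the recursive DFS; the cyclic graph below is an illustrative assumption chosen to show that the limit guarantees termination even with reversible actions.

```python
def dls(graph, state, goal, limit, path=None):
    """DFS that refuses to expand nodes beyond the depth limit."""
    path = (path or []) + [state]
    if state == goal:
        return path
    if limit == 0:
        return None                           # cut off: do not expand deeper
    for nxt in graph[state]:
        found = dls(graph, nxt, goal, limit - 1, path)
        if found:
            return found
    return None

# Cycles no longer cause nontermination: 'A' points back to 'S'.
graph = {'S': ['A'], 'A': ['S', 'G'], 'G': []}
print(dls(graph, 'S', 'G', 2))  # ['S', 'A', 'G']
print(dls(graph, 'S', 'G', 1))  # None -- the limit is below the goal's depth
```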
Iterative Deepening Search
- Depth-limited search for l = 0, 1, 2, ...
- Stops when a goal is found (when l becomes d)
- Complete and optimal (if path cost = node depth)
- Time and space complexity?
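Iterative deepening can be sketched by wrapping a depth-limited DFS (redefined here so the snippet is self-contained); the graph, the `max_limit` guard, and the node names are illustrative assumptions.

```python
def dls(graph, state, goal, limit, path=None):
    """Depth-limited DFS, as on the previous slide."""
    path = (path or []) + [state]
    if state == goal:
        return path
    if limit == 0:
        return None
    for nxt in graph[state]:
        found = dls(graph, nxt, goal, limit - 1, path)
        if found:
            return found
    return None

def ids(graph, start, goal, max_limit=50):
    """Run depth-limited search for l = 0, 1, 2, ... until a goal is found."""
    for limit in range(max_limit + 1):
        found = dls(graph, start, goal, limit)
        if found:
            return found                      # first success: l has reached d
    return None

graph = {'S': ['A', 'B'], 'A': [], 'B': ['G'], 'G': []}
print(ids(graph, 'S', 'G'))  # ['S', 'B', 'G']
```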
Comparing Search Strategies
(comparison table not reproduced)
Bi-directional Search
- Run two simultaneous searches
  - One forward from the initial state
  - One backward from the goal state
- Stops when a node in one search is in the fringe of the other search
- Rationale: two "half-searches" are quicker than a full search
- Caveat: not always easy to search backwards
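A sketch of bidirectional search on an undirected graph, where every edge can be followed backwards, sidestepping the slide's caveat about reversing actions; the graph is an illustrative assumption, and only the meeting node is returned for brevity.

```python
from collections import deque

# Undirected graph: each edge appears in both adjacency lists (illustrative).
graph = {'S': ['A', 'B'], 'A': ['S', 'C'], 'B': ['S', 'C'],
         'C': ['A', 'B', 'G'], 'G': ['C']}

def bidirectional(graph, start, goal):
    """BFS from both ends; stop when one search reaches a node seen by the other."""
    if start == goal:
        return start
    seen_f, seen_b = {start}, {goal}
    fringe_f, fringe_b = deque([start]), deque([goal])
    while fringe_f and fringe_b:
        # Alternate: one expansion forward, then one backward.
        for fringe, seen, other in ((fringe_f, seen_f, seen_b),
                                    (fringe_b, seen_b, seen_f)):
            state = fringe.popleft()
            for nxt in graph[state]:
                if nxt in other:
                    return nxt                # the two frontiers meet here
                if nxt not in seen:
                    seen.add(nxt)
                    fringe.append(nxt)
    return None

print(bidirectional(graph, 'S', 'G'))  # 'C' -- the node where the searches meet
```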
Caution: Repeated States
- Can occur especially in environments where actions are reversible
- Introduces the possibility of infinite search trees
  - Time complexity blows up even with fixed depth limits
- Solution: detect repeated states by storing node history
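One way to store node history is an explored set, turning tree search into graph search; the sketch below applies it to BFS on a graph whose actions are all reversible (an illustrative assumption), which would loop forever without the check.

```python
from collections import deque

# Every action is reversible here: tree search would revisit S and A forever.
graph = {'S': ['A'], 'A': ['S', 'G'], 'G': ['A']}

def graph_bfs(graph, start, goal):
    """BFS that skips states already generated, so repeated states are pruned."""
    fringe = deque([[start]])
    explored = {start}                        # node history: states seen so far
    while fringe:
        path = fringe.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in explored:           # detect and skip repeated states
                explored.add(nxt)
                fringe.append(path + [nxt])
    return None

print(graph_bfs(graph, 'S', 'G'))  # ['S', 'A', 'G']
```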
Summary
- A problem is defined by its initial state, a successor function, a goal test, and a path cost function
- Problem's environment <-> state space
- Different strategies drive different tree-search algorithms that return a solution (action sequence) to the problem
- Coming up: informed search strategies
