CSSE 502 Human Computer Interaction (Intro. To AI - Lec. 3 - Problem Solving As Search & State Space Search) Spring 2024
Lecture 3
Problem Solving as Search
& State Space Search
3.1 Solving Problems by Searching
3.1 Solving Problems by Searching
▪ Types of Agents (Reflex vs. Planning) - State Space Search -
Example: Tic-Tac-Toe Game - Example: Mechanical Fault
Diagnosing - How Human Beings Think? - Heuristic Search
3.2 State Space Search Graph & Strategies
▪ Search Problem Components - State Space & State Space
Graph - Examples: Vacuum World, The 8-Puzzle, Romania,
etc. - State Space Search Strategies - Selecting a Search Strategy
3.3 Basic Idea of Search & the Backtracking Search Algorithm
▪ Search: Basic Idea - Search Tree - Tree Search Algorithm -
Outline & Example - Handling Repeated States - Backtracking
Search Algorithm & Data Structures
Lecture 3: Solving Problems by Searching,
.. & .. State Space Search Strategies and Structures
3.1 Solving Problems by Searching
▪ Solving Problems by Searching
▪ Types of Agents (Reflex vs. Planning)
▪ State Space Search
▪ Example: Tic-Tac-Toe Game
▪ Example: Mechanical Fault Diagnosing
▪ How Human Beings Think?
▪ Heuristic Search
3.2 State Space Search Graph & Strategies
▪ Search Problem Components
▪ Example: Romania
▪ State Space & State Space Graph
▪ Example: Vacuum World
▪ Example: The 8-Puzzle
▪ Example: Robot Motion Planning
▪ Example: Tic-Tac-Toe
▪ Example: Traveling Salesperson
▪ State Space Search Strategies
▪ Selecting a Search Strategy
3.3 Basic Idea of Search & the Backtracking Search Algorithm
▪ Search: Basic Idea
▪ Search Tree
▪ Tree Search Algorithm Outline
▪ Tree Search Example
▪ Handling Repeated States
▪ Backtracking Search
▪ Backtracking Algorithm Data Structures
▪ Backtracking Algorithm
3.4 Practice Exercises (Solved Problems)
Resources for this lecture
• This lecture covers the following chapters:
• Chapter 3 (Solving Problems by Searching; only sections 3.1,
3.2, & 3.3) from Stuart J. Russell and Peter Norvig, "Artificial
Intelligence: A Modern Approach," Third Edition (2010), by
Pearson Education Inc.
.. AND ..
• Part II (pages 41 to 45) and Chapter 3 (Structures and
Strategies for State Space Search; only sections 3.0, 3.1, and
3.2) from George F. Luger, "Artificial Intelligence: Structures
and Strategies for Complex Problem Solving," Fifth Edition
(2005), Pearson Education Limited.
Solving Problems by SEARCHING
Types of Agents
a Reflex Agent vs. a Planning Agent
[Figure: tic-tac-toe boards contrasting the two agent types: a reflex
agent chooses its move from the current board alone, while a planning
agent considers the consequences of candidate moves before acting.]
Example: Mechanical Fault Diagnosing
[Figure: a diagnostic decision tree. The start node asks "What is the
problem?" (engine trouble, transmission, brakes, ...). For engine
trouble, ask "Does the car start?"; if it won't start, ask "Will the
engine turn over?"; if it won't turn over, ask "Do the lights come on?";
if not, the diagnosis is a dead battery; otherwise the battery is ok
and questioning continues.]
How Human Beings Think ..?
• Human beings do not search the entire state space
(exhaustive search).
• Only alternatives that experience has shown to be effective
are explored.
• Human problem solving is based on judgmental rules that
limit the exploration of search space to those portions of
state space that seem somehow promising.
• These judgmental rules are known as “heuristics”.
Heuristic Search
• A heuristic is a strategy for selectively exploring the search
space.
• It guides the search along lines that have a high probability of
success.
• It employs knowledge about the nature of a problem to find a
solution.
• It does not guarantee an optimal solution to the problem but
can come close most of the time.
• Human beings use many heuristics in problem solving.
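For instance, a common heuristic for the 8-puzzle (which appears later in this lecture) counts the misplaced tiles. A minimal sketch; the flat 9-tuple state encoding (0 for the blank) is my own choice for illustration:

```python
# Misplaced-tiles heuristic for the 8-puzzle: a classic example of a
# judgmental rule that guides search without guaranteeing optimality.
def misplaced_tiles(state, goal):
    """Count tiles that are not in their goal position (blank excluded)."""
    return sum(1 for s, g in zip(state, goal)
               if s != 0 and s != g)

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
print(misplaced_tiles(state, goal))  # 2 (tiles 5 and 8 are misplaced)
```

A search that expands low-heuristic states first explores only the promising portion of the state space, exactly as described above.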
Exercises
What have we learned?
▪ Define the following terms:
▪ State Space Graph,
▪ Exhaustive Search, and ..
▪ Heuristics.
Search Problem Components (Example: Romania)
Initial State
o Arad
Actions
o Go from one city to another
Transition Model
o If you go from city A to
city B, you end up in city B
Goal State
o Bucharest
Path Cost
o Sum of edge costs (total distance traveled)
State Space
The initial state, actions, and transition model define the state
space of the problem:
• The set of all states reachable from initial state by any
sequence of actions.
• Can be represented as a directed graph where the
nodes are states and links between nodes are actions.
[Figure: a river with two banks (Riverbank 1, Riverbank 2) and two
islands (Island 1, Island 2) connected by bridges, drawn as a graph
of nodes and arcs (the bridges of Königsberg layout used by Luger).]
State Space
A state space is represented by a four-tuple [N, A, S, GD]:
o N is the set of nodes or states of the graph. These
correspond to the states in the problem-solving process.
o A is the set of arcs between nodes. These correspond to
the steps in a problem-solving process.
o S, a nonempty subset of N, contains the start state(s) of
the problem.
o GD, a nonempty subset of N, contains the goal state(s) of
the problem. The states in GD are described using either:
o A measurable property of the states encountered in
the search.
o A property of the solution path developed in the
search (a solution path is a path through this graph
from a node in S to a node in GD).
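The four-tuple can be encoded directly as Python data; the tiny graph below is illustrative, not from the lecture:

```python
# A minimal encoding of the [N, A, S, GD] four-tuple.
N = {"a", "b", "c", "d"}                  # nodes/states
A = {("a", "b"), ("a", "c"), ("b", "d")}  # directed arcs (problem-solving steps)
S = {"a"}                                 # start state(s)
GD = {"d"}                                # goal state(s)

def successors(state):
    """States reachable in one step from `state` via the arcs in A."""
    return {m for (n, m) in A if n == state}

print(successors("a"))  # {'b', 'c'} (set print order may vary)
```

A solution path is then any arc-by-arc route from a node in S to a node in GD.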
Example: Vacuum World
o States:
o Agent location and dirt location
o How many possible states?
o What if there are n possible locations?
o The size of the state space grows exponentially
with the “size” of the world!
o Actions:
o Left, right, suck.
o Transition Model .. ?
Example: Vacuum World State Space Graph
o Transition Model:
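One possible encoding of the two-location transition model in Python; the state tuple layout (agent location, dirt at A, dirt at B) is an assumption for illustration:

```python
# Transition model for the two-location vacuum world.
# State: (agent_location, dirt_at_A, dirt_at_B); locations are "A"/"B".
def transition(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":                      # remove dirt at current location
        if loc == "A":
            return (loc, False, dirt_b)
        return (loc, dirt_a, False)
    raise ValueError(f"unknown action: {action}")

s = ("A", True, True)        # agent at A, both squares dirty
s = transition(s, "Suck")    # ("A", False, True)
s = transition(s, "Right")   # ("B", False, True)
s = transition(s, "Suck")    # ("B", False, False): goal reached
```

With 2 locations and 2 dirt flags there are 2 × 2 × 2 = 8 states; with n locations the count grows exponentially, as noted above.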
Example: the 8-Puzzle
o States
o Locations of tiles
o 8-puzzle: 181,440 states (9!/2)
o 15-puzzle: ~10 trillion states
o 24-puzzle: ~10^25 states
o Actions
o Move blank left, right, up, down
o Path Cost
o 1 per move
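The blank-moving actions can be sketched as a successor function; the flat 9-tuple board encoding (0 for the blank, read row by row) is my own choice:

```python
# Successor function for the 8-puzzle.
def successors(state):
    """Yield states reachable by sliding a tile into the blank."""
    i = state.index(0)                # blank position
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append(i - 1)   # blank moves left
    if col < 2: moves.append(i + 1)   # blank moves right
    if row > 0: moves.append(i - 3)   # blank moves up
    if row < 2: moves.append(i + 3)   # blank moves down
    for j in moves:
        s = list(state)
        s[i], s[j] = s[j], s[i]       # swap blank with the adjacent tile
        yield tuple(s)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)
print(len(list(successors(start))))  # 4 (blank in the centre)
```

Each move has cost 1, so the path cost is simply the number of moves.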
Example: Robot Motion Planning
o States
o Real-valued joint parameters (angles, displacements).
o Actions
o Continuous motions of robot joints.
o Goal State
o Configuration in which object is grasped.
o Path Cost
o Time to execute, smoothness of path, etc.
Example: Tic-Tac-Toe
• Nodes (N): all the different configurations of Xs and Os that
the game can have.
• Arcs (A): generated by legal moves, i.e., placing an X or an O
in an unused location.
• Start state (S): an empty board.
• Goal states (GD): a board state having three Xs in a row,
column, or diagonal.
• The arcs are directed and there are no cycles in the state
space, so it is a directed acyclic graph (DAG).
• Complexity: 9! different paths can be generated.
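Arc generation (the legal moves) can be sketched as follows; the flat 9-tuple board encoding, with "" for an unused cell, is an assumption for illustration:

```python
# Arc generation for tic-tac-toe: each legal move places the current
# player's mark in an empty cell, producing a child state.
def legal_moves(board, player):
    """Yield successor boards for `player` ("X" or "O")."""
    for i, cell in enumerate(board):
        if cell == "":
            yield board[:i] + (player,) + board[i + 1:]

empty = ("",) * 9                            # start state S: an empty board
print(len(list(legal_moves(empty, "X"))))    # 9 possible opening moves
```

From the empty board there are 9 arcs, then 8 from each child, and so on, giving the 9! paths mentioned above.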
Example: Traveling Salesperson
A salesperson has five cities to visit and then must return home.
• Nodes (N): represent the 5 cities.
• Arcs (A): labeled with a weight indicating the cost of traveling
between connected cities.
• Start state (S): the home city.
• Goal states (GD): an entire path that forms a complete circuit
with minimum cost.
• Complexity: (n-1)! different cost-weighted paths can be
generated.
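Exhaustive search over the (n-1)! candidate tours can be sketched as below; the 5-city distance values are made up purely for illustration:

```python
# Brute-force TSP: enumerate every ordering of the non-home cities.
from itertools import permutations

# Symmetric distances for a small made-up instance.
dist = {
    ("home", "a"): 2, ("home", "b"): 9, ("home", "c"): 10, ("home", "d"): 7,
    ("a", "b"): 6, ("a", "c"): 4, ("a", "d"): 3,
    ("b", "c"): 8, ("b", "d"): 5, ("c", "d"): 1,
}

def d(x, y):
    return dist.get((x, y)) or dist[(y, x)]

def best_tour(cities, home="home"):
    """Try every ordering of the other cities: (n-1)! = 4! = 24 tours here."""
    best = None
    for perm in permutations(cities):
        tour = (home,) + perm + (home,)
        cost = sum(d(a, b) for a, b in zip(tour, tour[1:]))
        if best is None or cost < best[0]:
            best = (cost, tour)
    return best

cost, tour = best_tour(("a", "b", "c", "d"))
print(cost, tour)
```

The factorial growth is why exhaustive search quickly becomes infeasible and heuristics are needed.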
State Space Search Strategies
There are two distinct ways of searching a state space graph:
• Data-driven search (forward chaining): begin with the given
facts/initial state and apply legal moves or rules to generate
new states until a goal is reached.
• Goal-driven search (backward chaining): begin with the goal and
determine which moves or rules could produce it, working back
toward the initial state.
Search: Basic idea
Search Tree (the "What-if" Tree)
[Figure: a search tree rooted at the starting state; the set of
generated but not-yet-expanded leaf nodes is called the frontier.]
Tree Search Example
Start: Arad
Goal: Bucharest
Handling Repeated States
- Initialize the frontier using the starting state.
- While the frontier is not empty:
- Choose a frontier node according to the search strategy
and take it off the frontier.
- If the node contains the goal state, return the solution;
else expand the node and add its children to the
frontier, keeping track of already-expanded states so that
a repeated state is not expanded a second time.
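The loop above can be transcribed almost directly into Python (a minimal sketch without the repeated-state bookkeeping; the tiny graph is illustrative, not from the lecture):

```python
# Generic tree search. `strategy_pop` selects the search strategy:
# pop(0) removes the oldest path (breadth-first), pop() the newest
# (depth-first).
def tree_search(start, goal, successors, strategy_pop):
    frontier = [[start]]                  # frontier holds whole paths
    while frontier:
        path = strategy_pop(frontier)     # choose a node per the strategy
        node = path[-1]
        if node == goal:
            return path                   # solution path found
        for child in successors(node):    # expand: add children
            frontier.append(path + [child])
    return None                           # frontier exhausted: failure

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
path = tree_search("a", "d", graph.__getitem__,
                   lambda f: f.pop(0))    # breadth-first
print(path)  # ['a', 'b', 'd']
```

Swapping the `strategy_pop` argument changes the exploration order without touching the loop, which is exactly why the strategy is a separate design choice.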
Backtracking Algorithm Data Structures
o State List (SL): lists the states in the current path being
tried. If the goal is found, it contains the solution path.
o New State List (NSL): contains nodes awaiting evaluation,
i.e., nodes whose descendants have not yet been generated.
o Dead Ends (DE): lists states whose descendants have
failed to contain a goal node.
o Current State (CS): the state currently under consideration.
the Backtracking Algorithm
Function Backtrack;
Begin
SL := [Start]; NSL := [Start]; DE := [ ]; CS := Start;
while NSL ≠ [ ] do
begin
if CS = goal (or meets goal description) then return (SL);
if CS has no children (excluding nodes already on DE, SL, and NSL)
then begin
while SL is not empty and CS = the first element of SL do
begin
add CS to DE;
remove first element from SL;
remove first element from NSL;
CS := first element of NSL;
end;
add CS to SL;
end
else begin
place children of CS on NSL; (except nodes on DE, SL, or NSL)
CS := first element of NSL;
add CS to SL;
end;
end;
return FAIL;
end;
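The function can be rendered in Python and run on the graph implied by the trace that follows (A has children B, C, D; B has E, F; E has H, I; F has J; C has G; and G is the goal):

```python
# A Python rendering of the Backtrack function.
def backtrack(start, goal, children):
    SL, NSL, DE = [start], [start], []       # state list, new states, dead ends
    CS = start                                # current state
    while NSL:
        if CS == goal:
            return SL                         # SL is the solution path
        kids = [c for c in children[CS] if c not in DE + SL + NSL]
        if not kids:                          # dead end: backtrack
            while SL and CS == SL[0]:
                DE.append(CS)
                SL.pop(0)
                NSL.pop(0)
                CS = NSL[0] if NSL else None  # guard for total failure
            if CS is not None:
                SL.insert(0, CS)
        else:                                 # descend to the first child
            NSL = kids + NSL
            CS = NSL[0]
            SL.insert(0, CS)
    return None                               # FAIL

graph = {"A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G"], "D": [],
         "E": ["H", "I"], "F": ["J"], "G": [], "H": [], "I": [], "J": []}
print(backtrack("A", "G", graph))  # ['G', 'C', 'A']
```

The returned solution path is listed goal-first, matching the final trace state SL = [G C A] below; the intermediate values of SL, NSL, and DE match the trace as well (DE accumulates in the reverse order shown, since this version appends rather than prepends).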
the Backtracking Algorithm
SL = [A]; NSL = [A];
DE = [ ]; CS = A;
…
…
else
place children of CS on NSL; (except nodes on
DE, SL, or NSL)
CS := first element of NSL;
add CS to SL;
end;
…..
the Backtracking Algorithm
SL = [B A]; NSL = [B C D A];
DE = [ ]; CS = B;
…
…
else
place children of CS on NSL; (except nodes on
DE, SL, or NSL)
CS := first element of NSL;
add CS to SL;
end;
…..
the Backtracking Algorithm
SL = [E B A]; NSL = [E F B C D A];
DE = [ ]; CS = E;
…
…
else
place children of CS on NSL; (except nodes on
DE, SL, or NSL)
CS := first element of NSL;
add CS to SL;
end;
…..
the Backtracking Algorithm
SL = [H E B A]; NSL = [H I E F B C D A];
DE = [ ]; CS = H;
….
if CS has no children (excluding nodes
already on DE, SL, and NSL)
then …
while SL is not empty and CS =
the first element of SL do
begin
add CS to DE;
remove first element from SL;
remove first element from NSL;
CS := first element of NSL;
end;
add CS to SL;
else …
the Backtracking Algorithm
SL = [I E B A]; NSL = [I E F B C D A];
DE = [H]; CS = I;
….
if CS has no children (excluding nodes
already on DE, SL, and NSL)
then …
while SL is not empty and CS =
the first element of SL do
begin
add CS to DE;
remove first element from SL;
remove first element from NSL;
CS := first element of NSL;
end;
add CS to SL;
else …
the Backtracking Algorithm
SL = [E B A]; NSL = [E F B C D A];
DE = [I H]; CS = E;
….
if CS has no children (excluding nodes
already on DE, SL, and NSL)
then …
while SL is not empty and CS =
the first element of SL do
begin
add CS to DE;
remove first element from SL;
remove first element from NSL;
CS := first element of NSL;
end;
add CS to SL;
else …
the Backtracking Algorithm
SL = [F B A]; NSL = [F B C D A];
DE = [E I H]; CS = F;
the Backtracking Algorithm
SL = [J F B A]; NSL = [J F B C D A];
DE = [E I H]; CS = J;
the Backtracking Algorithm
SL = [C A]; NSL = [C D A];
DE = [B F J E I H]; CS = C;
the Backtracking Algorithm
SL = [G C A]; NSL = [G C D A];
DE = [B F J E I H]; CS = G;
Exercises
What have we learned?
▪ The following are two problems that can be solved
using state-space search techniques. You should:
▪ Suggest a suitable representation for the problem
state.
▪ State what the initial and final states are in this
representation.
▪ State the available operators/rules for getting from
one state to the next, giving any conditions on
when they may be applied.
▪ Draw the first two levels of the directed state-
space graph for the given problem.
Exercises
What have we learned?
▪ [Problem # 1] The Cannibals and Missionaries problem: "Three
cannibals and three missionaries come to a crocodile-infested
river. There is a boat on their side that can be used by either
one or two persons. If the cannibals ever outnumber the
missionaries on either bank, the cannibals eat the missionaries.
How can they use the boat to cross the river so that all
missionaries survive?" Formalize the problem in terms of
state-space search.
▪ [Problem # 2] A farmer with his dog, rabbit and lettuce come to
the east side of a river they wish to cross. There is a boat at the
river’s edge, but of course only the farmer can row. The boat
can only hold two things (including the rower) at any one time.
If the dog is ever left alone with the rabbit, the dog will eat it.
Similarly, if the rabbit is ever left alone with the lettuce, the
rabbit will eat it. How can the farmer get across the river so that
all four characters arrive safely on the other side?
Up Next ..
Additional Section 3.4
(video lesson)
Resources
▪ Backtracking (Solving a Maze – C++):
https://www.youtube.com/watch?v=gBC_Fd8EE8A
▪ N Queen Problem | Backtracking:
https://www.youtube.com/watch?v=0DeznFqrgAI
▪ Sudoku (Visualisation) | Backtracking:
https://www.youtube.com/watch?v=_vWRZiDUGHU
Thank you!