Problem Solving

Russell-Norvig

Module No: 3
Problem-Solving Agent

[Diagram: the agent perceives the environment through sensors and acts on it through actuators.]

• Formulate Goal
• Formulate Problem
– States
– Actions
• Find Solution
PROBLEM-SOLVING AGENTS

Intelligent agents are supposed to act in such a way that the environment goes through a sequence of states that maximizes the performance measure.
What is a problem-solving agent?
• It is a kind of goal-based agent
(it finds action sequences that achieve the agent's goals)
• 4 general steps in problem-solving:
– Goal Formulation

– Problem Formulation

– Search

– Execute
E.g. Driving from Arad to Bucharest ...
Holiday Planning
• On holiday in Romania; Currently in Arad. Flight
leaves tomorrow from Bucharest.
• Formulate Goal:
Be in Bucharest
• Formulate Problem:
States: various cities
Actions: drive between cities
• Find solution:
Sequence of cities: Arad, Sibiu, Fagaras, Bucharest
Problem Solving

[Diagram: from a Start state, Actions move between States until the Goal is reached, yielding a Solution.]
Problem-solving agent

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
  static: seq, an action sequence
          state, some description of the current world state
          goal, a goal
          problem, a problem formulation

  state ← UPDATE-STATE(state, percept)
  if seq is empty then
    goal ← FORMULATE-GOAL(state)
    problem ← FORMULATE-PROBLEM(state, goal)
    seq ← SEARCH(problem)
  action ← FIRST(seq)
  seq ← REST(seq)
  return action
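The pseudocode above maps directly onto a small Python sketch. The helper functions passed in (`update_state`, `formulate_goal`, `formulate_problem`, `search`) are illustrative stand-ins supplied by the caller, not code from the textbook:

```python
def make_agent(update_state, formulate_goal, formulate_problem, search):
    """Return an agent function mapping a percept to an action."""
    state = None
    seq = []          # the remaining action sequence ("static" in the pseudocode)

    def agent(percept):
        nonlocal state, seq
        state = update_state(state, percept)
        if not seq:                       # no plan yet: formulate and search
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = search(problem)
        action, seq = seq[0], seq[1:]     # FIRST(seq), REST(seq)
        return action

    return agent
```

On each call the agent consumes one step of its current plan, replanning only when the plan runs out, exactly as in the pseudocode.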
Problem formulation
• A problem is defined by:
– An initial state, e.g. Arad
– A successor function S(x) = set of <action, successor-state> pairs
• e.g. S(Arad) = {<Arad → Zerind, Zerind>, …}
initial state + successor function = state space
– A goal test, which can be
• Explicit, e.g. x = 'at Bucharest'
• Implicit, e.g. Checkmate(x)
– A path cost (additive)
• e.g. sum of distances, number of actions executed, …
• c(x, a, y) is the step cost, assumed to be ≥ 0

A solution is a sequence of actions from the initial state to a goal state.

An optimal solution has the lowest path cost.
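The four components can be written down concretely for a small fragment of the Romania map (the distances are the standard AIMA road distances; the fragment is deliberately incomplete):

```python
# Successor function as a dict: state -> {successor city: step cost}.
ROADS = {
    'Arad':    {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Sibiu':   {'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras': {'Bucharest': 211},
}

def successors(state):
    """S(x): <action, successor> pairs; driving to city y is the action Go(y)."""
    return [(f'Go({y})', y) for y in ROADS.get(state, {})]

def goal_test(state):
    return state == 'Bucharest'

def path_cost(path):
    """Additive path cost: sum of step costs c(x, a, y) along the path."""
    return sum(ROADS[x][y] for x, y in zip(path, path[1:]))
```

The solution from the holiday-planning example, Arad → Sibiu → Fagaras → Bucharest, has path cost 140 + 99 + 211 = 450.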
• Subclass of goal-based agents
– Goal formulation
– Problem formulation
– Example problems
-- Toy problems
-- Real-world problems
– Search
-- Search strategies
-- Constraint satisfaction
– Solution
• Goal Formulation
– Goal formulation, based on the current situation, is the first step in problem solving. As well as formulating a goal, the agent may wish to decide on some other factors that affect the desirability of different ways of achieving the goal.
– Let us assume that the agent will consider actions at the level of driving from one major town to another. The states it will consider therefore correspond to being in a particular town.
• Declaring the goal: goal information is given to the agent, i.e. start from Arad and reach Bucharest.

• Ignoring some actions: the agent has to ignore actions that will not lead it to the desired goal.
– i.e. there are three roads out of Arad: one toward Sibiu, one to Timisoara, and one to Zerind. None of these achieves the goal, so unless the agent is very familiar with the geography of Romania, it will not know which road to follow. In other words, the agent will not know which of its possible actions is best, because it does not know enough about the state that results from taking each action.

• Limiting the objective the agent is trying to achieve: the agent will decide its action once it has some added knowledge about the map, i.e. a map of Romania is given to the agent.

• Goal defined as a set of world states: the agent can use this information to consider subsequent stages of a hypothetical journey through each of the three towns, to try to find a journey that eventually gets to Bucharest, i.e. once it has found a path on the map from Arad to Bucharest, it can achieve its goal by carrying out the driving actions.
• Problem Formulation

• Problem formulation is the process of deciding what actions and states to consider, given a goal.

• The process of looking for an action sequence (the series of actions the agent carries out to reach the goal) is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out. This is called the execution phase. Thus, we have a simple "formulate, search, execute" design for the agent.
• Well-defined problems and solutions.
– A problem is defined by four items:
• Initial state: the state that the agent starts in, e.g. the initial state for our agent in Romania might be described as In(Arad).
• Successor function S(x): a description of the possible actions available to the agent. The most common formulation uses a successor function: given a particular state x, SUCCESSOR-FN(x) returns a set of <action, successor> ordered pairs, where each action is one of the legal actions in state x and each successor is a state that can be reached from x by applying the action. E.g., from state In(Arad), the successor function for the Romania problem would return {<Go(Zerind), In(Zerind)>, <Go(Sibiu), In(Sibiu)>, <Go(Timisoara), In(Timisoara)>}.
• Goal test: determines whether a given state is a goal state.

• Path cost (additive): a function that assigns a numeric cost to each path, e.g. sum of distances, number of actions executed, etc. Usually given as c(x, a, y), the step cost from x to y by action a, assumed to be ≥ 0.

"A solution is a sequence of actions leading from the initial state to a goal state."
• Example:

• The 8-puzzle consists of a 3×3 board with 8 numbered tiles and a blank space. A tile adjacent to the blank space can slide into the space.

The standard formulation is as follows:
Example: 8-puzzle

• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??
Example: 8-puzzle

• States?? Integer locations of the tiles
• Initial state?? Any state can be initial
• Actions?? {Left, Right, Up, Down}
• Goal test?? Check whether the goal configuration is reached
• Path cost?? Number of actions to reach the goal
Example: 8-puzzle

  Initial state    Goal state
  8 2 _            1 2 3
  3 4 7            4 5 6
  5 1 6            7 8 _

Example: 8-puzzle

[Diagram: from the initial state (8 2 _ / 3 4 7 / 5 1 6), sliding an adjacent tile into the blank yields the successor states.]
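The successor function can be sketched with a state as a 9-tuple read row by row, 0 standing for the blank; naming each action after the direction the blank moves is one common convention:

```python
def successors(state):
    """Return (action, next_state) pairs for sliding the blank Left/Right/Up/Down."""
    i = state.index(0)                    # position of the blank
    row, col = divmod(i, 3)
    moves = {'Left': (0, -1), 'Right': (0, 1), 'Up': (-1, 0), 'Down': (1, 0)}
    result = []
    for action, (dr, dc) in moves.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:     # stay on the board
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]       # slide the adjacent tile into the blank
            result.append((action, tuple(s)))
    return result
```

For the initial state above, the blank sits in the top-right corner, so only two moves apply (Left and Down), matching the two successors in the diagram.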
Example: vacuum world

• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??
Example: vacuum world

• States?? Agent in one of two locations, each with or without dirt: 2 × 2² = 8 states.
• Initial state?? Any state can be initial
• Actions?? {Left, Right, Suck}
• Goal test?? Check whether all squares are clean.
• Path cost?? Number of actions to reach the goal.
Toy problems
• Example: vacuum world
– Number of states: 8
– Initial state: any
– Number of actions: 4 (Left, Right, Suck, NoOp)
– Goal: clean up all dirt; goal states: {7, 8}
– Path cost: each step costs 1
Example: 8-queens

Place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal.

[Board diagrams: a solution and a non-solution.]
Example: 8-queens problem

Incremental formulation vs. complete-state formulation


• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??
Example: 8-queens

Formulation #1:
• States: any arrangement of 0 to 8 queens on the board
• Initial state: 0 queens on the board
• Actions: add a queen to any square
• Goal test: 8 queens on the board, none attacked
• Path cost: none

→ 64^8 states with 8 queens
Example: 8-queens
Formulation #2:
• States: any arrangement of k = 0 to 8 queens in the k leftmost columns with none attacked
• Initial state: 0 queens on the board
• Successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen
• Goal test: 8 queens on the board

→ 2,057 states
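The state count under formulation #2 can be checked by direct enumeration: count every arrangement of k = 0…8 mutually non-attacking queens occupying the k leftmost columns, one per column. A short sketch (counting each partial arrangement, including the empty board, as a state):

```python
def count_states(rows=()):
    """Count non-attacked arrangements of queens in the leftmost columns.

    rows[c] is the row of the queen already placed in column c; the current
    (possibly partial) arrangement itself counts as one state.
    """
    k = len(rows)                  # next column to fill
    total = 1                      # this arrangement
    if k == 8:
        return total
    for r in range(8):
        # A queen in (column k, row r) must share no row and no diagonal
        # with any earlier queen (columns are distinct by construction).
        if all(r != r2 and abs(r - r2) != k - c2 for c2, r2 in enumerate(rows)):
            total += count_states(rows + (r,))
    return total
```

Running this gives 2,057 states, the figure quoted in Russell-Norvig for the incremental non-attacking formulation.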
Real-world Problems
• Route finding
• Touring problems
• VLSI layout
• Robot Navigation
• Automatic assembly sequencing
• Drug design
• Internet searching
• …
Example: robot assembly

• States??
• Initial state??
• Actions??
• Goal test??
• Path cost??
Example: robot assembly

• States?? Real-valued coordinates of robot joint angles; parts of the object to be assembled.
• Initial state?? Any arm position and object configuration.
• Actions?? Continuous motions of robot joints.
• Goal test?? Complete assembly (without the robot).
• Path cost?? Time to execute.
Example: River Crossing

• Items: Man, Wolf, Corn, Chicken.

• The man wants to cross the river with all items.
– The wolf will eat the chicken.
– The chicken will eat the corn.
– The boat will take a maximum of two.
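One way to formulate this as a search problem (a convenient choice, not the only one): a state is the set of things, including the man, still on the starting bank; a breadth-first search then finds the classic 7-crossing plan:

```python
from collections import deque

ITEMS = {'wolf', 'chicken', 'corn'}

def safe(bank):
    """A bank without the man must not pair wolf+chicken or chicken+corn."""
    if 'man' in bank:
        return True
    return not ({'wolf', 'chicken'} <= bank or {'chicken', 'corn'} <= bank)

def moves(left):
    """Yield successor states: the man crosses alone or with one item."""
    here = left if 'man' in left else (ITEMS | {'man'}) - left
    for cargo in [set()] + [{x} for x in ITEMS & here]:
        crossing = {'man'} | cargo                      # boat holds at most two
        new_left = left - crossing if 'man' in left else left | crossing
        if safe(new_left) and safe((ITEMS | {'man'}) - new_left):
            yield frozenset(new_left)

def solve():
    """BFS from everything-on-the-left to everything-on-the-right."""
    start = frozenset({'man', 'wolf', 'chicken', 'corn'})
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if not state:                                   # left bank empty: done
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in moves(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
```

Because BFS explores states in order of crossings used, the returned plan is a shortest one: take the chicken over, return, take the wolf (or corn), bring the chicken back, take the corn (or wolf), return, take the chicken.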
• Two types of search strategies are used in path finding:

– Uninformed search strategies.
– Informed search strategies.
Uninformed Search:

– Uninformed search means the strategies have no additional information about states beyond that provided in the problem definition. All they can do is generate successors and distinguish a goal state from a non-goal state.
– Uninformed strategies use only the information available in the problem definition:
-- Breadth-first search
-- Uniform-cost search
-- Depth-first search
-- Depth-limited search
-- Iterative deepening search
Informed Search techniques:
– A strategy that uses problem-specific knowledge beyond the definition of the problem itself.

– Also known as "heuristic search," informed search strategies use information about the domain to try to head in the general direction of the goal node(s).

-- Informed search methods: hill climbing, best-first, greedy search, A, A*.
Breadth First Search

Example graph: source s with neighbours 2, 3 and 5; node 2 leads to 4 and 5; node 3 leads to 6; node 4 leads to 8; node 6 leads to 7 and 9.

[Animation frames: each node is Undiscovered, then Discovered (on the queue), then Finished; the number next to a node is its shortest distance from s.]

Queue evolution (front at the left):
  s → s 2 → s 2 3 → 2 3 5 → 2 3 5 4 (5 already discovered: don't enqueue)
  → 3 5 4 → 3 5 4 6 → 5 4 6 → 4 6 → 4 6 8 → 6 8 → 6 8 7 → 6 8 7 9
  → 8 7 9 → 7 9 → 9 → (empty)

Resulting level graph (shortest distance from s):
  level 0: s; level 1: 2, 3, 5; level 2: 4, 6; level 3: 7, 8, 9.
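The walkthrough can be reproduced with a queue-based BFS. The adjacency list below is reconstructed from the animation frames (an assumption, since only the queue snapshots survive), with 's' as the source:

```python
from collections import deque

def bfs_levels(graph, start):
    """Return {node: shortest distance from start}, enqueuing each node once."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()              # take the top of the queue
        for v in graph[u]:
            if v not in dist:            # "already discovered: don't enqueue"
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Adjacency reconstructed from the frames above.
G = {
    's': [2, 3, 5],
    2: [4, 5], 3: [6], 5: [],
    4: [8], 6: [7, 9],
    7: [], 8: [], 9: [],
}
```

`bfs_levels(G, 's')` reproduces the level graph: distance 1 for nodes 2, 3, 5; distance 2 for 4, 6; distance 3 for 7, 8, 9.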
Depth-First Graph Traversal Algorithm

Search vs Traversal

• Search: look for a given node
– stop when the node is found, even if not all nodes were visited
• Traversal: always visit all nodes
Depth-first Search

• Similar to depth-first traversal of a binary tree
• Choose a starting vertex
• Do a depth-first search on each adjacent vertex

Pseudo-Code for Depth-First Search

depth-first-search(vertex):
  mark vertex as visited
  for each adjacent vertex:
    if unvisited:
      do a depth-first search on the adjacent vertex
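The pseudocode translates directly to Python. The adjacency list here is an assumed reconstruction of the example graph from the slides (vertices A-G); with it, the visit order matches the animation, A B D E F C G:

```python
def dfs(graph, vertex, visited=None, order=None):
    """Mark the vertex visited, then recurse on each unvisited adjacent vertex."""
    if visited is None:
        visited, order = set(), []
    visited.add(vertex)
    order.append(vertex)                      # record the visit order
    for neighbour in graph[vertex]:
        if neighbour not in visited:
            dfs(graph, neighbour, visited, order)
    return order

# Assumed adjacency for the A-G example.
G = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['G'],
     'D': [], 'E': ['F'], 'F': [], 'G': []}
```

Starting from A, the search dives B → D, backtracks, dives E → F, then backtracks all the way to A before taking C → G.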
Depth-First Search

Example graph: vertices A-G, with A adjacent to B and C; B adjacent to D and E; E adjacent to F; C adjacent to G.

[Animation frames: each vertex is marked v as it is visited, and the running visit order is listed below the graph.]

Visit order starting from A: A, B, D, E, F, C, G.
Time and Space Complexity for Depth-First Search

• Time complexity
– Adjacency lists
• Each node is marked visited once
• Each node is checked for each incoming edge
• O(V + E)
– Adjacency matrix
• Have to check all entries in the matrix: O(n²)

• Space complexity
– Stack to handle nodes as they are explored
• Worst case: all nodes put on the stack (if the graph is linear)
• O(n)
Bidirectional search

• Search progresses from both start and goal
• Standard search techniques can be used on the re-defined state space
• Problem situations are defined as pairs of the form: StartNode - GoalNode

Re-defining the state space for bidirectional search
Original space:  S → S1 → … → E1 → E

new_s( S - E, S1 - E1) :-
    s( S, S1),      % One step forward
    s( E1, E).      % One step backward

new_goal( S - S).   % Both ends coincide

new_goal( S - E) :-
    s( S, E).       % Ends sufficiently close
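The same idea in Python: run two breadth-first frontiers, one from each end, and stop when they meet ("both ends coincide" in the Prolog above). This is a sketch: it declares a meeting as soon as one frontier generates a node the other has seen, which is fine for the simple unit-cost cases here, though a production implementation would keep searching until the meeting point is proven optimal:

```python
from collections import deque

def bidirectional_distance(graph, start, goal):
    """Shortest-path length in an undirected adjacency dict, searching from both ends."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}
    q_f, q_b = deque([start]), deque([goal])
    while q_f and q_b:
        # Expand one node from each end in turn.
        for q, dist, other in ((q_f, dist_f, dist_b), (q_b, dist_b, dist_f)):
            if not q:
                return None                   # one side exhausted: no path
            u = q.popleft()
            for v in graph[u]:
                if v in other:                # the two frontiers meet
                    return dist[u] + 1 + other[v]
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
    return None
```

Each frontier only needs to reach depth ~d/2, which is where the b^(d/2) + b^(d/2) saving below comes from.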
Complexity of bidirectional search
Consider the case where forward and backward branching factors are both b, uniform, and the two searches meet in the middle, each reaching depth d/2:

Time ~ b^(d/2) + b^(d/2) ≪ b^d
Searching graphs
Do our techniques work on graphs, not just trees?

A graph unfolds into a tree; parts of the graph may repeat many times.
The techniques work, but may become very inefficient.
Better: add a check for repeated nodes.
Uniform Cost Search
• Let g(n) be the sum of the edge costs from the root to node n. If g(n) is our overall cost function, then best-first search becomes Uniform Cost Search, also known as Dijkstra's single-source shortest-path algorithm.
• Initially the root node is placed in Open with a cost of zero. At each step, the next node n to be expanded is an Open node whose cost g(n) is lowest among all Open nodes.

Uniform-cost search:
Expand the node with the smallest path cost g(n).
Example of Uniform Cost Search
• Assume an example tree with different edge costs: root a has children b (cost 2) and c (cost 1); b has children f (cost 1) and g (cost 2); c has children d (cost 1) and e (cost 2).

[Animation frames: generated nodes sit on the Open list with their g-values; at each step the Open node with the lowest g is expanded and moved to the Closed list.]

  Closed list: —               Open list: a:0
  Closed list: a               Open list: b:2, c:1
  Closed list: a c             Open list: b:2, d:2, e:3
  Closed list: a c b           Open list: d:2, e:3, f:3, g:4
  Closed list: a c b d         Open list: e:3, f:3, g:4
  Closed list: a c b d e       Open list: f:3, g:4
  Closed list: a c b d e f     Open list: g:4
  Closed list: a c b d e f g   Open list: —
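A heapq-based sketch of uniform cost search over a weighted graph given as u → {v: step cost}. Since Python tuples compare lexicographically, ties in g break alphabetically on the node name, which happens to reproduce the expansion order in the trace above (a, c, b, d, e, f, g):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the Open node with the smallest g(n); return (g(goal), closed list)."""
    open_list = [(0, start)]          # priority queue of (g, node)
    g = {start: 0}                    # best cost found so far per node
    closed = []
    while open_list:
        cost, u = heapq.heappop(open_list)
        if cost > g[u]:
            continue                  # stale entry: node was re-found cheaper
        closed.append(u)
        if u == goal:
            return cost, closed
        for v, step in graph.get(u, {}).items():
            new_cost = cost + step
            if new_cost < g.get(v, float('inf')):
                g[v] = new_cost
                heapq.heappush(open_list, (new_cost, v))
    return None, closed
```

On the example tree, searching for goal g returns cost 4 after expanding all seven nodes in the traced order.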
Uniform Cost Search

• We consider Uniform Cost Search to be brute-force search, because it doesn't use a heuristic function.

• Questions to ask:
– Does uniform cost search always terminate?
– Is it guaranteed to find a goal state?
Uniform Cost Search - Termination
• The algorithm will find a goal node or report that there is no goal node under the following conditions:
– the problem space is finite
– there must exist a path to a goal with finite length and finite cost
– there must not be any infinitely long paths of finite cost
• We will assume that all edges have a minimum non-zero edge cost e, to avoid the problem of infinite chains of nodes with zero-cost edges. Then UCS will eventually reach a goal of finite cost if one exists in the graph.
[Graph: nodes S, A, B, C, D, E, F, G with labelled step costs on the edges.]

The graph above shows the step costs for different paths going from the start (S) to the goal (G).

Use uniform cost search to find the optimal path to the goal.

Exercise for at home
Depth-Limited Search
• Same as depth-first search with depth limit l.

• Implementation:
– Nodes at depth l have no successors.
Iterative Deepening Search
Repeated depth-limited search with increasing depth

function ITERATIVE-DEEPENING-SEARCH(problem)
    returns a solution or failure
  for depth ← 0 to MAX-DEPTH do
    result ← DEPTH-LIMITED-SEARCH(problem, depth)
    if result then return result
  end
  return FAILURE
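The pseudocode pair becomes a short Python sketch: a depth-limited DFS plus the iterative-deepening driver. A problem here is just a (successors, goal_test) pair over a hand-typed fragment of the Romania map (city names from the text; the adjacency below is illustrative and incomplete):

```python
def depth_limited_search(problem, node, limit):
    """DFS that treats nodes at depth `limit` as having no successors."""
    successors, goal_test = problem
    if goal_test(node):
        return [node]
    if limit == 0:
        return None                      # cutoff: do not expand further
    for child in successors(node):
        result = depth_limited_search(problem, child, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening_search(problem, start, max_depth=50):
    """Repeated depth-limited search with increasing depth."""
    for depth in range(max_depth + 1):
        result = depth_limited_search(problem, start, depth)
        if result is not None:
            return result
    return None                          # FAILURE

# Illustrative Romania fragment (adjacency only; not the full map).
MAP = {
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Zerind': ['Arad', 'Oradea'],
    'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Timisoara': ['Arad', 'Lugoj'],
    'Oradea': ['Zerind', 'Sibiu'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
    'Lugoj': ['Timisoara', 'Mehadia'],
}
romania = (lambda s: MAP.get(s, []), lambda s: s == 'Bucharest')
```

Depths 0-2 fail (Bucharest is three steps from Arad), so the depth=3 pass finds Arad → Sibiu → Fagaras → Bucharest, mirroring the depth=0/1/2 slides that follow. Note the depth limit also tames the cycles (e.g. Arad ↔ Sibiu) without an explicit visited set.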
Iterative Deepening Search: depth=0

Arad
Iterative Deepening Search: depth=1

Arad

Zerind Sibiu Timisoara


Iterative Deepening Search: depth=2

Arad

Zerind Sibiu Timisoara

(Arad, Oradea) (Arad, Oradea, Fagaras, Rimnicu Vilcea) (Arad, Lugoj)
Informed search techniques

• Best-first search can be realized in two ways:
-- Greedy best-first search
-- A* search

• Next lecture
Search spaces are usually graphs

Explicit graphs:
• e.g. road maps
• Finite number of states

Implicit graphs:
• e.g. problem solving
• The graph is usually constructed on the fly, starting from the initial state and applying operators to find connected states
• Possibly infinite number of states
Romania with Edge Costs
