Chapter 3 - Problem Solving by Searching(I)
Objectives
Type of agent that solves problems by searching
Such an agent is not a reflex or model-based reflex agent, because it
needs to achieve some target (goal)
It can be a goal-based, utility-based or learning agent
An intelligent agent knows that, to achieve a certain goal, the state of
the environment will change sequentially, and the change should
be towards the goal
Steps to undertake during searching
Problem formulation:
Involves:
Abstracting the real environment configuration into state information using a
preferred data structure
Describing the initial state in terms of that data structure
The set of actions possible in a given state at a specific point in the process
The cost of each action in each state
Deciding what actions and states to consider, given a goal
Agent Program
Example: Road map of Ethiopia
[Figure: road map showing the cities Aksum, Mekele, Gondar, Lalibela, Bahr Dar, Dessie and Awasa, with road distances (100, 200, 180, 80, 110, 250, 150, 170 km) labeling the edges]
Example: Road map of Ethiopia
Current position of the agent: Awasa.
Needs to arrive at: Gondar
Formulate goal:
be in Gondar
Formulate problem:
states: various cities
actions: drive between cities
Find solution:
sequence of cities, e.g., Awasa, Nazarez, Addis Ababa, Dessie, Gondar
Types of Problems: Knowledge of States and Actions
What happens when knowledge of the states or actions is incomplete?
Four types of problems exist in real situations:
1. Single-state problem
The environment is deterministic, fully observable, static and discrete
Out of the possible state space, agent knows exactly which state it will be
in; solution is a sequence
Complete world state knowledge and action knowledge
2. Sensorless problem (conformant problem)
The environment is non-observable, deterministic, static and discrete
It is also called multi-state problem
Incomplete world state knowledge and action knowledge
Agent may have no idea where it is; solution is a sequence
If the agent has no sensors at all, then (as far as it knows) it could be in one of
several possible initial states, and each action might therefore lead to one of
several possible successor states
Types of Problems
3. Contingency problem
The environment is nondeterministic and/or partially observable
It is not possible to know the effect of the agent's actions
Percepts provide new information about the current state
It is impossible to define a complete sequence of actions that
constitute a solution in advance because information about the
intermediary states is unknown
4. Exploration problem
The environment is partially observable
It is also called unknown state space
Agent must learn the effect of actions.
When the states and actions of the environment are unknown,
the agent must act to discover them.
It can be viewed as an extreme case of contingency problems
State space and effects of actions unknown
Example: vacuum world
Single-state
Starting state is known, say #5.
What is the Solution?
Example: vacuum world
The vacuum cleaner always knows where it is and where
the dirt is. The solution then is reduced to searching for a
path from the initial state to the goal state
Single-state, start in #5.
Solution? [Right, Suck]
Example: vacuum world
Sensorless,
Suppose that the vacuum agent knows
all the effects of its actions, but has no
sensors.
It doesn’t know what the current
state is
It doesn’t know where it or
the dirt is
So the current start state is any one of
the following: {1,2,3,4,5,6,7,8}
Example: vacuum world
Sensorless Solution
Right goes to {2,4,6,8}
Solution?
[Right,Suck,Left,Suck]
Sensorless example: vacuum world
Example: vacuum world
Contingency
Nondeterministic: Suck may
dirty a clean carpet
Partially observable:
Hence we have partial
information
Let’s assume the current percept
is: [L, Clean]
i.e. start in #5 or #7
Example: vacuum world
Contingency Solution
[Right, if dirt then Suck]
Suppose that the "vacuum" (Suck) action sometimes deposits
dirt on the carpet – but only if the carpet is already clean.
The agent must have sensors, and it must combine decision
making, sensing and execution.
[Right, Suck, Left, Suck] is not a correct plan, because one
room might be clean originally, then become dirty.
No fixed plan always works.
Exploration
Example 1:
Assume the agent is somewhere outside the blocks and wants to
clean the blocks. How does it get into the blocks? There is no clear
information about their location
What will be the solution?
Solution is exploration
Example 2:
The agent is at some point in the world and wants to reach a city
called CITY, which is unknown to the agent.
The agent doesn’t have any map
What will be the solution?
Solution is exploration
Some more problems that can be solved by searching
We have seen two such problems: The road map problem and
the vacuum cleaner world problem
3 cat and 3 mice puzzle
Three cats and three mice come to a crocodile-infested river. There is a
boat on their side that can be used by one or two "persons". If the cats
outnumber the mice at any time, the cats eat the mice. How can they use
the boat to cross the river so that all mice survive?
State description
[#of cats to the left side,
#of mice to the left side,
boat location,
#of cats to the right side,
#of mice to the right side]
Initial state
[3,3,Left,0,0]
Goal
[0,0,Right,3,3]
3 cat and 3 mice puzzle
Action
A legal action is a move that carries up to two "persons" at a time
in the boat from the boat's location to the other side, provided the
action doesn't violate the safety constraint (the cats must never
outnumber the mice on a bank where mice are present)
We can represent the action as
Move_Ncats_M_mice_lr if boat is at the left side or
Move_Ncats_M_mice_rl if boat is at the right side.
The set of all possible actions, excluding those ruled out by the constraint, is:
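The legal-move generation described above can be sketched in Python. The state layout follows the slide; the `MOVES` table and helper names are illustrative, not from the slides:

```python
# Sketch of the legal-move generator for the cats-and-mice puzzle.
# State = (cats_left, mice_left, boat, cats_right, mice_right), as in
# the slide; MOVES and the helper names are illustrative.
MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # (cats, mice) in the boat

def safe(cats, mice):
    # Cats must never outnumber mice on a bank where mice are present.
    return mice == 0 or cats <= mice

def successors(state):
    cl, ml, boat, cr, mr = state
    result = []
    for c, m in MOVES:
        if boat == "Left":
            nxt = (cl - c, ml - m, "Right", cr + c, mr + m)
        else:
            nxt = (cl + c, ml + m, "Left", cr - c, mr - m)
        ncl, nml, _, ncr, nmr = nxt
        if min(ncl, nml, ncr, nmr) < 0:
            continue  # cannot move more animals than are on the bank
        if safe(ncl, nml) and safe(ncr, nmr):
            result.append(nxt)
    return result

print(successors((3, 3, "Left", 0, 0)))
```

From the initial state, only three moves are safe: one cat, two cats, or one cat plus one mouse.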
3 cat and 3 mice puzzle
Question
Draw the state space of the problem
Provide one possible solution
3 cannibal and 3 missionaries problem
Three missionaries and three cannibals come to the bank of a river
they wish to cross.
There is a boat that will hold only two and any of the group is able to
row.
If there are ever more missionaries than cannibals on either side of the
river, the cannibals will get converted.
How can they use the boat to cross the river without any conversion?
State description
[#of cannibals to the left side,
#of missionaries to the left side,
boat location,
#of cannibals to the right side,
#of missionaries to the right side]
Initial state
[3,3,Left,0,0]
Goal
[0,0,Right,3,3]
3 cannibal and 3 missionaries problem
Action
A legal action is a move that carries up to two persons at a time
in the boat from the boat's location to the other side, provided the
action doesn't violate the constraint (the missionaries must never
outnumber the cannibals on a bank where cannibals are present)
We can represent the action as
Move_Ncannibal_M_missionaries_lr if boat is at the left side or
Move_Ncannibal_M_missionaries_rl if boat is at the right side.
The set of all possible actions, excluding those ruled out by the constraint, is:
3 cannibal and 3 missionaries problem
Question
Draw the state space of the problem
Provide one possible solution
Water Jug problem
We have one 3-liter jug, one 5-liter jug and an unlimited supply of
water. The goal is to get exactly one liter of water in either of the
jugs. Either jug can be emptied, filled or poured into the other.
State description
[Amount of water in the 5-liter jug,
Amount of water in the 3-liter jug]
Initial state
[0,0]
Goal
[1,ANY] or [ANY, 1]
Water Jug problem
Action
Fill the 3-liter jug with water (F3)
Fill the 5-liter jug with water (F5)
Empty the 5-liter jug (E5)
Empty the 3-liter jug (E3)
Pour all the water from the 3-liter jug into the 5-liter jug (P35)
Pour all the water from the 5-liter jug into the 3-liter jug (P53)
Pour water from the 3-liter jug into the 5-liter jug until the 5-liter
jug is completely full (P_part35)
Pour water from the 5-liter jug into the 3-liter jug until the 3-liter
jug is completely full (P_part53)
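The state description and action set above are enough to solve the puzzle mechanically. A minimal sketch using breadth-first search (introduced later in this chapter); for brevity, the full and partial pour actions are collapsed into one capacity-capped pour per direction:

```python
from collections import deque

# Minimal sketch: breadth-first search over the water-jug state space.
# State = (water in 5-liter jug, water in 3-liter jug). The full and
# partial pours are collapsed into one capacity-capped pour per
# direction (covering P35/P_part35 and P53/P_part53).
def successors(state):
    five, three = state
    yield "F5", (5, three)           # fill the 5-liter jug
    yield "F3", (five, 3)            # fill the 3-liter jug
    yield "E5", (0, three)           # empty the 5-liter jug
    yield "E3", (five, 0)            # empty the 3-liter jug
    pour = min(three, 5 - five)      # 3-liter into 5-liter
    yield "P35", (five + pour, three - pour)
    pour = min(five, 3 - three)      # 5-liter into 3-liter
    yield "P53", (five - pour, three + pour)

def solve(start=(0, 0)):
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if 1 in state:               # goal: one liter in either jug
            return plan, state
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))

plan, goal_state = solve()
print(plan, goal_state)
```

A shortest plan has four actions: fill the 3-liter jug, pour it into the 5-liter jug, fill it again, and pour until the 5-liter jug is full, leaving one liter in the 3-liter jug's pour, i.e. state (5, 1).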
Water Jug problem
Question
Draw the complete state space diagram
Find one possible solution as action and state sequence
The colored block world problem
Problem Description
The colored block world problem
Data structure
The data structure used to describe a state is as follows:
Well-defined problems and solutions (single state)
A well-defined problem is a problem in which
the start state of the problem,
its goal state,
the possible actions (operators that can be applied to
move from state to state), and
the constraints upon the possible actions to avoid invalid
moves (these define legal and illegal moves)
are known in advance
Well-defined problems and solutions (single state)
A problem which is not well defined is called ill-defined
An ill-defined problem presents a dilemma in planning to reach
the goal
The goal may not be precisely formulated
Examples
Cooking dinner
Writing term paper
All the problems we consider in this course are well
defined
Well-defined problems and solutions
A problem is defined by five items:
1. initial state that the agent starts in, e.g., "at Awasa"
2. actions or successor function S(x) = set of action–state pairs
e.g., S(Awasa) = {<Awasa → Addis Ababa, Addis Ababa>, <Awasa → Nazarez,
Nazarez>, … }
Note: <A → B, B> indicates the action is A → B and the next state is B
Selecting a state space
A real-world problem cannot be directly represented in the agent
architecture, since it is absurdly complex
state space must be abstracted for problem solving
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions
e.g., "Awasa → Addis Ababa" represents a complex set of
possible routes, detours, rest stops, etc.
(Abstract) solution = set of real paths that are solutions in the real
world
Each abstract action should be "easier" than the original problem
Vacuum world state space graph
Initial states?
states?
actions?
goal test?
path cost?
Example: The 8-puzzle
8 queens problem
Implementation issue: states vs nodes
A state is a (representation of) a physical configuration
A node is a data structure constituting part of a search tree.
Nodes correspond to states in the state space of the problem
It includes:
state,
parent node,
action,
depth and
one or more costs [like path cost g(x), heuristic cost h(x), evaluation
function cost f(x)]
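As a sketch, the node structure listed above might look like this in Python (class, field and city names are illustrative, as is the path cost used in the example):

```python
# Sketch of the node data structure (names are illustrative; further
# costs like h(x) and f(x) could be added as extra fields).
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # the world configuration represented
        self.parent = parent        # arrow points from child to parent
        self.action = action        # action that generated this node
        self.path_cost = path_cost  # g(x): cost of the path from the root
        self.depth = 0 if parent is None else parent.depth + 1

    def path(self):
        # Follow PARENT pointers back to the root, then reverse.
        node, states = self, []
        while node is not None:
            states.append(node.state)
            node = node.parent
        return list(reversed(states))

root = Node("Awasa")
child = Node("Addis Ababa", parent=root, action="drive", path_cost=275)  # cost illustrative
print(child.depth, child.path())
```

Following the parent pointers is how a solution path is reconstructed once a goal node is found.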
Searching For Solution (Tree search algorithms)
Given a state space, and a network of states connected via actions.
The network structure is usually a graph
A tree is a network in which there is exactly one path from
the root to any node
Given a state S and the valid actions available at S,
the set of next states generated by executing each action is called the successors
of S
Searching for solution is a simulated exploration of state space
by generating successors of already-explored states
Implementation issue: states vs. nodes
Nodes are the data structures from which the search tree is
constructed.
Each has a parent, a state, and various bookkeeping fields.
Arrows point from child to parent
A state corresponds to a configuration of the world
Thus, nodes are on particular paths, as defined by
PARENT pointers, whereas states are not.
Example:
A queue is the appropriate data structure for storing the
fringe (frontier).
The search algorithm can expand the next node according to
some preferred strategy.
The operations on a queue are:
EMPTY?(queue) returns true only if there are no more elements in the
queue.
POP(queue) removes the first element of the queue and returns it.
INSERT(element, queue) inserts an element and returns the resulting
queue.
Queues are characterized by the order in which they store the
inserted nodes: FIFO queue, LIFO queue and priority queue
Note: The set of all leaf nodes available for expansion at
any given point is called the fringe (also frontier or open list).
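The three queue disciplines can be sketched with Python's standard library: a `deque` gives FIFO order, a plain list gives LIFO (stack) order, and `heapq` gives a priority queue. The node names are illustrative:

```python
from collections import deque
import heapq

# Sketch of the three queue disciplines used to store the fringe.
fifo = deque()                      # FIFO queue: used by breadth-first search
fifo.append("A")
fifo.append("B")
assert fifo.popleft() == "A"        # oldest insertion comes out first

lifo = []                           # LIFO queue (stack): used by depth-first search
lifo.append("A")
lifo.append("B")
assert lifo.pop() == "B"            # newest insertion comes out first

pq = []                             # priority queue: used by uniform-cost search
heapq.heappush(pq, (15, "C"))
heapq.heappush(pq, (1, "A"))
heapq.heappush(pq, (5, "B"))
assert heapq.heappop(pq) == (1, "A")  # lowest cost comes out first
```

The choice among these three disciplines is exactly what distinguishes the search strategies discussed below.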
Searching For Solution (Tree search algorithms)
• The Successor-Fn generates all the successor states and the actions
that move the current state into each successor state
• The Expand function creates new nodes, filling in the various fields
of the node using the information given by the Successor-Fn and
the input parameters
Tree search example
[Figure: partial search tree rooted at Awasa, with successors Bahr Dar and Addis Ababa; deeper levels include Lalibela, Addis Ababa, Gondar and Debre M.]
Implementation: general tree search
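A minimal sketch of this tree-search scheme, assuming a `successor_fn` that returns (action, result) pairs and a `goal_test` predicate (both names are illustrative). Note that it performs no repeated-state detection; that refinement is graph search:

```python
# Minimal sketch of TREE-SEARCH. The fringe is managed FIFO here;
# changing the queue discipline changes the search strategy.
def tree_search(initial_state, successor_fn, goal_test):
    fringe = [(initial_state, [initial_state])]
    while fringe:
        state, path = fringe.pop(0)
        if goal_test(state):
            return path
        for action, result in successor_fn(state):  # Expand via Successor-Fn
            fringe.append((result, path + [result]))
    return None

# Tiny illustrative graph: S -> A -> G, S -> B
graph = {"S": ["A", "B"], "A": ["G"], "B": [], "G": []}
succ = lambda s: [("go-" + n, n) for n in graph[s]]
print(tree_search("S", succ, lambda s: s == "G"))
```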
Search strategies
A search strategy is defined by picking the order of node expansion
Strategies are evaluated along the following dimensions:
completeness: does it always find a solution if one exists?
time complexity: How long does it take to find a solution?
number of nodes generated
space complexity: maximum number of nodes in memory
optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of
b: maximum branching factor of the search tree(i.e., max. number of
successor of any node)
d: depth of the least-cost solution((i.e., the number of steps along the path
from the root)
m: maximum depth of the state space (may be ∞)
Search cost is used to assess the effectiveness of a search algorithm. It
typically depends on the time complexity, but can also include memory usage.
Alternatively, we can use the total cost, which combines the search cost and the path
cost of the solution found.
Search strategies cont’d …
E.g., For the problem of finding a route from Awasa to
Gonder, the search cost is the amount of time taken by the
search and the solution cost is the total length of the path in
kilometers . To compute the total cost, we have to add
milliseconds and kilometers
There is no “official exchange rate” between the two, but it
might be reasonable in this case to convert kilometers into
milliseconds by using an estimate of the car’s average speed
(because time is what the agent cares about)
Generally, searching strategies can be classified into
two groups: uninformed and informed search strategies
Uninformed search (blind search) strategies
Uninformed search strategies use only the information available in the
problem definition
They have no information about the number of steps or the path cost
from the current state to the goal
They can distinguish the goal state from other states
They are still important because there are problems with no additional
information.
Six kinds of such search strategies will be discussed; each depends
on the order in which successor nodes are expanded.
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
6. Bidirectional search
Breadth-first search (1)
Breadth-first search is a simple strategy in which the root node is expanded first,
then all the successors of the root node are expanded next, then their
successors, and so on.
Expand shallowest unexpanded node
Finds the shallowest goal state
Breadth-first search always has the shallowest path to every node on the fringe
Implementation:
Fringe (open list) is a FIFO queue, i.e., new successors go at the end
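A sketch of the FIFO-queue implementation on an explicit graph. The road connections below are illustrative, not the exact map from the slides:

```python
from collections import deque

# Sketch of breadth-first search with a FIFO fringe of paths.
def breadth_first_search(start, goal, neighbors):
    frontier = deque([[start]])     # new successors go at the end
    seen = {start}
    while frontier:
        path = frontier.popleft()   # shallowest unexpanded node first
        if path[-1] == goal:
            return path
        for nxt in neighbors[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

roads = {"Awasa": ["Addis Ababa"], "Addis Ababa": ["Dessie", "Bahr Dar"],
         "Dessie": ["Lalibela"], "Bahr Dar": ["Gondar"],
         "Lalibela": ["Gondar"], "Gondar": []}
print(breadth_first_search("Awasa", "Gondar", roads))
```

Because nodes are expanded level by level, the first path returned uses the fewest steps.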
Breadth-first search (2):Properties
Time Complexity: Let b be the maximal branching factor and d
the depth of a solution path. Then the maximal number of
nodes generated is b + b^2 + b^3 + … + b^d = O(b^d)
(Note: If the algorithm were to apply the goal test to nodes
when selected for expansion rather than when generated, the
whole layer of nodes at depth d would be expanded before the
goal was detected and the time complexity would be O(b^(d+1)))
Space Complexity: Every node generated is kept in memory.
Therefore the space needed for the fringe is O(b^d) and for the
explored set O(b^(d-1)).
This exponential memory requirement is a major problem for real problems
Complete? Yes (if b is finite)
Optimal? Yes (if cost = constant (k) per step)
Uniform-cost search (1)
Expands the node n with the lowest path cost g(n)
Equivalent to breadth-first if step costs all equal
Implementation:
This is done by storing the fringe as a priority queue ordered by
g
It differs from breadth-first search in that:
Ordering of the queue by path cost
Goal test is applied to a node when it is selected for expansion
rather than when it is first generated
A test is added in case a better path is found to a node currently
on the fringe
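The behaviour above can be sketched with a priority queue ordered by g. The small graph used here is illustrative (S connects to A, B and C, with G reachable via A or B); helper names are assumptions:

```python
import heapq

# Sketch of uniform-cost search: the fringe is a priority queue
# ordered by path cost g, and the goal test is applied when a node is
# selected for expansion, not when it is generated.
def uniform_cost_search(start, goal, edges):
    frontier = [(0, start, [start])]
    best = {start: 0}                   # cheapest g found so far per state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:               # goal test on expansion
            return g, path
        for nxt, step in edges.get(state, []):
            if nxt not in best or g + step < best[nxt]:
                best[nxt] = g + step    # a better path to a fringe node
                heapq.heappush(frontier, (g + step, nxt, path + [nxt]))
    return None

edges = {"S": [("A", 1), ("B", 5), ("C", 15)], "A": [("G", 10)], "B": [("G", 5)]}
print(uniform_cost_search("S", "G", edges))
```

Although G is generated first via A with cost 11, the cheaper path via B (cost 10) is found before G is selected for expansion.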
Uniform-cost search (2)
Uniform-cost search does not care about the number of steps a
path has, but only about their total cost
When all step costs are the same, uniform-cost search is similar
to breadth-first search, except that the latter stops as soon as it
generates a goal, whereas uniform-cost search examines all the
nodes at the goal’s depth to see if one has a lower cost; thus
uniform-cost search does strictly more work by expanding nodes
at depth d unnecessarily
Consider the problem that moves from node S to G
[Figure: step-by-step expansion for the graph with edges S→A (1), S→B (5), S→C (15), A→G (10), B→G (5). The fringe evolves from {A(1), B(5), C(15)} to {G(11), B(5), C(15)} to {G(10), C(15)}; uniform-cost search returns the path S, B, G with cost 10]
Uniform-cost search (3): Properties
It finds the cheapest solution if the cost of a path never decreases as
we go along the path
i.e. g(successor(n)) ≥ g(n) for every node n.
Complete? Yes (if step costs are positive)
Time? # of nodes with g ≤ cost of optimal solution
let ε be the minimum step cost in the search tree, C* the total cost of the
optimal solution and b the branching factor.
In the worst case, every step has cost ε, so the optimal solution lies at
depth ⌊C*/ε⌋ of the search tree.
Hence the total number of nodes is b^(1+⌊C*/ε⌋), and the time complexity becomes
O(b^(1+⌊C*/ε⌋))
Space? # of nodes with g ≤ cost of optimal solution, O(b^(1+⌊C*/ε⌋))
Optimal? Yes – nodes are expanded in increasing order of g(n)
Depth-first search (1)
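Depth-first search expands the deepest unexpanded node first, which corresponds to a LIFO stack for the fringe. A sketch on an illustrative graph; repeated states along the current path are skipped to avoid looping:

```python
# Sketch of depth-first search with an explicit LIFO stack of paths.
def depth_first_search(start, goal, neighbors):
    stack = [[start]]
    while stack:
        path = stack.pop()          # LIFO: deepest node expanded next
        if path[-1] == goal:
            return path
        for nxt in neighbors.get(path[-1], []):
            if nxt not in path:     # avoid repeated states along this path
                stack.append(path + [nxt])
    return None

graph = {"S": ["A", "B"], "A": ["S", "G"], "B": ["G"]}
print(depth_first_search("S", "G", graph))
```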
Depth-first search (2): Properties
Complete? No: fails in infinite-depth spaces, spaces with loops
Modify to avoid repeated states along path (see graph search)
Yes: in finite spaces
Time? O(b^m): terrible if m is much larger than d (m is the maximum length of a path in the
state space)
but if solutions are dense, may be much faster than breadth-first
A depth-first search with a predetermined depth limit is called depth-limited
search, and its improved version is called iterative deepening search.
These two strategies will be explored in the following sections
Depth-limited search (2)
The problem of depth-first search in infinite state spaces can be alleviated by
supplying depth-first search with a predetermined depth limit l
That is, nodes at depth l are treated as if they have no successors
The depth limit solves the infinite-path problem
Depth-first search with depth limit l will truncate all nodes having depth
greater than l from the search space and apply depth-first search on the rest of the
structure
It returns a solution if one exists; if there is no solution,
it returns cutoff if l < m, and failure otherwise
Recursive implementation:
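A sketch of the recursive implementation (function and variable names are illustrative): it returns a solution path, the special value "cutoff" (no solution within the limit), or None (standard failure, no solution at all):

```python
# Sketch of recursive depth-limited search.
def depth_limited_search(state, goal, neighbors, limit):
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"                    # limit reached without a goal
    cutoff_occurred = False
    for nxt in neighbors.get(state, []):
        result = depth_limited_search(nxt, goal, neighbors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff_occurred else None

g = {"S": ["A"], "A": ["G"]}
print(depth_limited_search("S", "G", g, 1))   # limit too small
print(depth_limited_search("S", "G", g, 2))
```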
Depth-limited search (3)
Complete? No (if l < d, that is, the shallowest goal is beyond the depth limit)
Time? O(b^l)
Space? O(bl)
Optimal? No (if l > d)
Introduces an additional source of incompleteness if we choose l < d, that is,
the shallowest goal is beyond the depth limit. (This is likely when d is
unknown.)
Depth-first search can be viewed as a special case of depth-limited search
with l =∞
Sometimes, depth limits can be based on knowledge of the problem
Notice that depth-limited search can terminate with two kinds of failure: the
standard failure value indicates no solution; the cutoff value indicates no
solution within the depth limit.
Iterative deepening search
Depth-limited search never returns a solution if all solution nodes
exist at depth > l. This limitation
can be avoided by applying the iterative deepening search strategy
Is a general strategy, often used in combination with depth-
first tree search, that finds the best depth limit
It does this by gradually increasing the limit—first 0, then
1, then 2, and so on—until a goal is found
This will occur when the depth limit reaches d, the depth of
the shallowest goal node
Iterative deepening combines the benefits of depth-first
and breadth-first search.
Like depth-first search, its memory requirements are modest:
O(bd), to be precise.
Like breadth-first search, it is complete when the branching
factor is finite and optimal when the path cost is a nondecreasing
function of the depth of the node.
Iterative deepening search is analogous to
breadth-first search in that it explores a complete
layer of new nodes at each iteration before going
on to the next layer
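The limit-increasing loop can be sketched as follows. To keep the block self-contained, depth-limited search is re-implemented as the helper `dls` (names are illustrative):

```python
# Sketch of iterative deepening: run depth-limited search with limits
# 0, 1, 2, ... until the result is no longer a cutoff.
def dls(state, goal, neighbors, limit):
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"
    cutoff = False
    for nxt in neighbors.get(state, []):
        result = dls(nxt, goal, neighbors, limit - 1)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff else None

def iterative_deepening_search(start, goal, neighbors):
    limit = 0
    while True:
        result = dls(start, goal, neighbors, limit)
        if result != "cutoff":
            return result       # either a solution path or None (failure)
        limit += 1

g = {"S": ["A", "B"], "A": ["G"]}
print(iterative_deepening_search("S", "G", g))
```

In a finite space with no solution, `dls` eventually returns None rather than "cutoff", so the loop terminates with failure.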
Iterative deepening search l =0
Iterative deepening search l =3
Properties of iterative deepening search
Complete? Yes
Time? (d+1)b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
Space? O(bd)
Optimal? Yes, if step cost = 1
Bidirectional search
• Simultaneously search forward from the initial state and
backward from the goal state, and terminate when the two
searches meet in the middle
• Reconstruct the solution by backward tracking towards the root
and forward tracking towards the goal from the point of
intersection
• This algorithm is efficient if there are only one or a few
goal nodes in the search space
• Space and time complexity: O(b^(d/2))
• The algorithm is complete and optimal (for uniform step costs) if
both searches are breadth-first; other combinations may sacrifice
completeness, optimality, or both.
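A sketch of the idea using two breadth-first frontiers expanded alternately on an undirected graph. This simplified version reports only the meeting node and the resulting path length; names and the example chain are illustrative:

```python
from collections import deque

# Sketch of bidirectional breadth-first search: one BFS grows from the
# start, another from the goal, one node per side per round, stopping
# as soon as a node appears in both distance maps.
def bidirectional_search(start, goal, neighbors):
    dist_f, dist_b = {start: 0}, {goal: 0}
    q_f, q_b = deque([start]), deque([goal])
    while q_f and q_b:
        for q, dist, other in ((q_f, dist_f, dist_b), (q_b, dist_b, dist_f)):
            if not q:
                continue
            node = q.popleft()
            if node in other:               # the two frontiers meet here
                return node, dist[node] + other[node]
            for nxt in neighbors.get(node, []):
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    q.append(nxt)
    return None

chain = {"S": ["A"], "A": ["S", "B"], "B": ["A", "G"], "G": ["B"]}
print(bidirectional_search("S", "G", chain))
```

Each side only needs to explore to about half the solution depth, which is the source of the O(b^(d/2)) bound.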
Summary of algorithms
Repeated states
Failure to detect repeated states can turn a linear problem into an
exponential one!
Graph search