Chapter 2 Problem Solving
Prepared by
Mrs. Megha V Gupta
New Horizon Institute of Technology and Management
Steps in building a system to solve a particular
problem
• A ‘problem space’
The set of all possible configurations of the problem is called the problem space. It is the environment in which the search is performed.
■ A ‘state space’ of the problem is the set of all states reachable from the initial state.
Initial state:    Goal state:
4 1 3             1 2 3
2 6 _             4 5 6
7 5 8             7 8 _
Water-Jug Problem
“You are given two jugs, a 4-gallon one and a 3-gallon one, a
pump which has unlimited water which you can use to fill the
jug, and the ground on which water may be poured. Neither
jug has any measuring markings on it. How can you get
exactly 2 gallons of water in the 4-gallon jug?”
A state space search
■ Represent each state as an ordered pair (x, y):
x = gallons of water in the 4-gallon jug, x ∈ {0, 1, 2, 3, 4}
y = gallons of water in the 3-gallon jug, y ∈ {0, 1, 2, 3}
■ Start state: (0, 0)
■ Goal state: (2, n), where n is any value
One solution path (x, y, and the production rule applied):
(0, 0)
(0, 3)   rule 2 (fill the 3-gallon jug)
(3, 0)   rule 9 (pour all of the 3-gallon jug into the 4-gallon jug)
(3, 3)   rule 2 (fill the 3-gallon jug)
(4, 2)   rule 7 (pour from the 3-gallon jug until the 4-gallon jug is full)
(0, 2)   rule 5 or 12 (empty the 4-gallon jug)
(2, 0)   rule 9 or 11 (pour the 2 gallons into the 4-gallon jug)
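The state space above can be searched mechanically. A minimal Python sketch (function and variable names are illustrative) that runs breadth-first search over (x, y) states:

```python
from collections import deque

def water_jug(goal_x=2, cap4=4, cap3=3):
    """Breadth-first search over (x, y) states:
    x = gallons in the 4-gallon jug, y = gallons in the 3-gallon jug."""
    start = (0, 0)
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        x, y = frontier.popleft()
        if x == goal_x:                      # goal state (2, n), any n
            path, s = [], (x, y)
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        pour34 = min(x + y, cap4) - x        # amount pourable 3-gal -> 4-gal
        pour43 = min(x + y, cap3) - y        # amount pourable 4-gal -> 3-gal
        for s in [(cap4, y), (x, cap3),      # fill a jug from the pump
                  (0, y), (x, 0),            # empty a jug onto the ground
                  (x + pour34, y - pour34),  # pour 3-gal into 4-gal
                  (x - pour43, y + pour43)]: # pour 4-gal into 3-gal
            if s not in parent:
                parent[s] = (x, y)
                frontier.append(s)
    return None
```

Because BFS expands states level by level, the returned path is a shortest solution: six pours, matching the trace above.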
https://www.youtube.com/watch?v=go294ZR4Rdg
State Space Representation
■ The agent can examine different possible sequences of actions and choose the best one.
■ This process of looking for the best sequence is called search.
A problem-solving agent first formulates a goal and a problem, searches for a sequence of actions
that would solve the problem, and then executes the actions one at a time. When this is complete, it
formulates another goal and starts over.
Example: Romania
■ On holiday in Romania; currently in Arad.
■ Flight leaves tomorrow from Bucharest
■ Formulate goal:
■ be in Bucharest
■ Formulate problem:
■ states: various cities
■ actions: drive between cities
■ Find solution:
■ sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Example: Romania
[Figure: map of Romania showing cities and connecting roads]
Well-defined problems and solutions
A problem is defined by 5 components:
■ Initial state
■ Actions
■ Transition model or (Successor
functions)
■ Goal Test.
■ Path Cost.
Well-defined problems and solutions
1. The initial state that the agent starts in
2. Actions:
A description of the possible actions available to the agent.
3. Transition model (successor function): a description of what state results from performing an action in a given state.
4. Goal test: determines whether a given state is a goal state.
5. Path cost: a function that assigns a numeric cost to each path, denoted by g. Usually the path cost is the sum of the step costs of the individual actions along the path.
The solution of a problem is then
■ a path from the initial state to a state satisfying the goal test
Optimal solution
■ the solution with lowest path cost among all solutions
Vacuum world state space graph
■ states? The state is determined by both the agent location and the dirt locations.
■ Initial state: any
■ actions? Left, Right, Suck
■ Transition model: The actions have their expected effects, except that moving Left in the leftmost square,
moving Right in the rightmost square, and Sucking in a clean square have no effect.
■ goal test? no dirt at all locations
■ path cost? Each step costs 1, so the path cost is the number of steps in the path.
Example: The 8-puzzle

Example: Touring problem
■ States: cities, with pairs of cities connected by a road
■ Goal test: the trip visits each city exactly once and starts and ends at A
■ Path cost: traveling time
Example: Map coloring
Using only four colors, you have to color a planar map so that no two adjacent regions have the same color.
Goal Test: All regions of the map are colored and no two
adjacent regions have the same color.
■ The Expand function creates new nodes, filling in the various fields and using
the SuccessorFn of the problem to create the corresponding states.
Search strategies
■ A search strategy is defined by picking the order of node
expansion
■ Strategies are evaluated along the following dimensions:
■ Completeness (guarantee to find a solution if there is one): does it
always find a solution if one exists?
■ Time complexity (how long does it take to find a solution): number of nodes generated during the search
■ Space complexity (how much memory is needed to perform the search): maximum number of nodes stored in memory
■ Optimality (does it give highest quality solution when there are
several different solutions): does it always find a least-cost solution?
Measuring problem-solving performance
(If the algorithm were to apply the goal test to nodes when selected for expansion, rather than when generated, the whole layer of nodes at depth d would be expanded before the goal was detected, and the time complexity would be O(b^(d+1)).)
Breadth-first search
[Figure: breadth-first search tree rooted at S, expanded level by level — S; then A, D; then B, D, A, E; then C, E, E, B, B, F; and so on, with path costs on the deeper nodes]
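A minimal breadth-first search sketch in Python (the example graph below is made up for illustration):

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """Expand the shallowest unexpanded node first (FIFO frontier)."""
    frontier = deque([[start]])          # queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in successors.get(node, []):
            if child not in explored:
                explored.add(child)
                frontier.append(path + [child])
    return None

# Hypothetical graph for illustration only.
graph = {'S': ['A', 'D'], 'A': ['B'], 'D': ['E'],
         'B': ['C'], 'E': ['F'], 'F': ['G']}
```

For this graph, `breadth_first_search('S', 'G', graph)` returns the shallowest path `['S', 'D', 'E', 'F', 'G']`.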
Uniform Cost Search
■ Expand the node with the lowest path cost g(n) first, using a priority queue ordered by g.
■ The goal test is applied when a node is selected for expansion (e.g., “Is B a goal state?” is asked only when B reaches the front of the queue), not when it is generated.
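A minimal uniform-cost search sketch in Python using a priority queue ordered by g(n); the weighted graph is a made-up example where the direct edge to the goal side is not optimal:

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """Expand the frontier node with the lowest path cost g(n) first."""
    frontier = [(0, start, [start])]             # (g, node, path)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                         # goal test on expansion
            return g, path
        for child, step_cost in edges.get(node, []):
            g2 = g + step_cost
            if g2 < best_g.get(child, float('inf')):
                best_g[child] = g2
                heapq.heappush(frontier, (g2, child, path + [child]))
    return None

# Hypothetical weighted graph: the direct edge S->B (cost 5) is not optimal.
edges = {'S': [('A', 1), ('B', 5)], 'A': [('B', 2)], 'B': [('G', 1)]}
```

Here the search returns `(4, ['S', 'A', 'B', 'G'])`, preferring the cheaper route through A over the direct S→B edge.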
Depth-first search
■ Expand the deepest unexpanded node first. Frontier contents after each expansion, with the goal test applied to the node at the front:
queue=[D,E,C]     Is D = goal state?
queue=[H,I,E,C]   Is H = goal state?
queue=[I,E,C]     Is I = goal state?
queue=[E,C]       Is E = goal state?
queue=[J,K,C]     Is J = goal state?
queue=[K,C]       Is K = goal state?
queue=[C]         Is C = goal state?
queue=[F,G]       Is F = goal state?
queue=[L,M,G]     Is L = goal state?
queue=[M,G]       Is M = goal state?
[Figure: the same search tree as before, explored depth-first]
Properties of depth-first search
■ Complete? No: fails on infinite paths or in spaces with loops; complete in finite spaces.
A depth-first tree search needs to store only a single path from the root
to a leaf node, along with the remaining unexpanded sibling nodes for each node on
the path. Once a node has been expanded, it can be removed from memory as soon
as all its descendants have been fully explored.
For a state space with branching factor b and maximum depth m, depth-first search
requires storage of only O(bm) nodes.
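The stack-based behaviour described above can be sketched as follows. The optional `limit` parameter anticipates depth-limited search; the graph and names are illustrative:

```python
def depth_first_search(start, goal, successors, limit=None):
    """LIFO frontier: always expand the deepest node first.
    With `limit` set, nodes at that depth get no successors
    (this is depth-limited search)."""
    frontier = [[start]]                          # stack of paths
    while frontier:
        path = frontier.pop()
        node = path[-1]
        if node == goal:
            return path
        if limit is not None and len(path) > limit:
            continue                              # treat as having no successors
        for child in reversed(successors.get(node, [])):
            if child not in path:                 # avoid loops on the current path
                frontier.append(path + [child])
    return None

# Hypothetical graph for illustration only.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'E': ['G']}
```

Only the current path and the unexpanded siblings along it are stored, which is why the memory cost is O(bm).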
Depth-Limited Search
■ Depth-first search is clearly dangerous
• if the tree is very deep, we risk finding a suboptimal solution;
• if the tree is infinite, we risk an infinite loop.
■ The embarrassing failure of depth-first search in infinite state spaces
can be alleviated by supplying depth-first search with a predetermined
depth limit l. That is, nodes at depth l are treated as if they have no
successors. This approach is called depth-limited search.
■ Three possible outcomes:
■ Solution
■ Failure (no solution exists)
■ Cutoff (no solution within the depth limit l)
Iterative deepening search
■ Usually we do not know a reasonable depth limit in advance.
■ Iterative deepening search repeatedly runs depth-limited search for
increasing depth limits 0, 1, 2, . . .
■ this essentially combines the advantages of depth-first and breadth
first search;
■ the procedure is complete and optimal;
■ the memory requirement is similar to that of depth-first search;
Iterative deepening search
The iterative deepening search algorithm, which repeatedly applies depth limited
search with increasing limits. It terminates when a solution is found or if the depth limited
search returns failure, meaning that no solution exists.
Iterative deepening search l =0
Iterative deepening search l =1
Iterative deepening search l =2
Iterative deepening search l =3
■ Note: We visit top level nodes multiple times. The last (or max depth) level is
visited once, second last level is visited twice, and so on. It may seem expensive,
but it turns out to be not so costly, since in a tree most of the nodes are in the
bottom level. So it does not matter much if the upper levels are visited multiple
times.
■ For b = 2, d = 3,
■ N_BFS = b^1 + b^2 + … + b^d + (b^(d+1) − b) = 2 + 4 + 8 + (16 − 2) = 28
■ N_IDS = (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + 1·b^d = 4·1 + 3·2 + 2·4 + 1·8 = 4 + 6 + 8 + 8 = 26
■ iterative deepening is the preferred uninformed search method when the
search space is large and the depth of the solution is not known.
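A minimal sketch of iterative deepening in Python, built on a recursive depth-limited search (function names and the example graph are illustrative):

```python
def depth_limited(node, goal, successors, limit, path=()):
    """Recursive DFS that treats nodes at depth `limit` as having no successors."""
    path = list(path) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in successors.get(node, []):
        result = depth_limited(child, goal, successors, limit - 1, path)
        if result is not None:
            return result
    return None

def iterative_deepening(start, goal, successors, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, successors, limit)
        if result is not None:
            return result
    return None

# Hypothetical graph for illustration only.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E']}
```

Because the limit grows one level at a time, the first solution found is at the shallowest possible depth, just as with breadth-first search, while memory use stays that of depth-first search.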
Properties of iterative deepening search
■ Complete? Yes
■ Time? (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
■ Space? O(bd)
■ Optimal? Yes, if step cost = 1
Iterative deepening search
■ Suppose we have a tree with branching factor b (number of children of each node) and depth d, i.e., on the order of b^d nodes.
■ In an iterative deepening search, the nodes on the bottom level are expanded once, those on the next-to-bottom level are expanded twice, and so on, up to the root of the search tree, which is expanded d+1 times.
■ IDDFS has the same asymptotic time complexity as DFS and BFS, but in practice it is somewhat slower than both, since repeated re-expansion gives it a higher constant factor.
■ IDDFS is best suited when the search tree is large (possibly infinite) and the depth of the solution is not known.
Example IDS
[Figure: 8-puzzle successor states generated by sliding the blank (e.g., right), with heuristic values h = 2, h = 4, h = 3 for the three successors]
heuristics
E.g., for the 8-puzzle:
■ h1(n) = number of misplaced tiles
■ h2(n) = total Manhattan distance
(i.e., sum of the distances of the tiles
from the goal position)
■ h1(S) = 8
■ h2(S) = 3+1+2+2+2+3+3+2 = 18
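Both heuristics can be computed directly from a flat board representation. The encoding below (a tuple of 9 entries, 0 for the blank) is an assumption for illustration:

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 denotes the blank

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of the tiles from their goal positions
    on a 3x3 board (row distance + column distance per tile)."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Note that h2 dominates h1: every misplaced tile is at Manhattan distance at least 1, so h1(n) ≤ h2(n) for every state.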
Best-first search
■ Idea: use an evaluation function f(n) for each node
■ f(n) provides an estimate for the total cost.
🡪 Expand the node n with smallest f(n).
■ Implementation:
Order the nodes in fringe increasing order of cost.
■ Special cases:
■ greedy best-first search
■ A* search
Best-First Search
■ Use an evaluation function f(n).
■ Always choose the node from fringe that has the lowest f
value.
[Figure: best-first search tree; the fringe nodes have f-values 3, 5, 1, and expanding the node with f = 1 yields children with f-values 4 and 6]
Greedy best-first search
[Figure a, Figure b: stages of greedy best-first search]
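Greedy best-first search orders the fringe by h(n) alone. A minimal Python sketch (the graph and heuristic are supplied by the caller; the example values in the test are made up):

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    """Best-first search with f(n) = h(n): always expand the node that
    looks closest to the goal according to the heuristic alone."""
    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for child in successors.get(node, []):
            if child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (h(child), child, path + [child]))
    return None
```

Because g(n) is ignored, the path found is not guaranteed to be the cheapest one.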
A* Shortest Path Example
[Figure: A* applied to the 8-puzzle; each node is labeled with f = g + h (e.g., f = 5 + 1 = 6), and the open node with the lowest f-value is expanded next]
A*: admissibility
■ If h(n) is admissible, then A* search will find an optimal solution.
■ A search algorithm is admissible if, for any graph, it terminates with an optimal path from the start state to a goal state whenever such a path exists.
■ A heuristic function h is admissible (guarantees termination with an optimal path) if it never overestimates: h(n) ≤ h*(n) for every node n, where h*(n) is the true cost of the cheapest path from n to a goal.
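A minimal A* sketch in Python, ordering the frontier by f(n) = g(n) + h(n). The weighted graph and heuristic values below are made up, but the heuristic is admissible for them:

```python
import heapq

def a_star(start, goal, edges, h):
    """A* search: order the frontier by f(n) = g(n) + h(n).
    Returns (cost, path); optimal when h never overestimates."""
    frontier = [(h(start), 0, start, [start])]    # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step_cost in edges.get(node, []):
            g2 = g + step_cost
            if g2 < best_g.get(child, float('inf')):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None

# Made-up weighted graph with a heuristic that is admissible for it.
edges = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)], 'B': [('G', 5)]}
hvals = {'S': 7, 'A': 6, 'B': 2, 'G': 0}
```

For this graph the search returns `(8, ['S', 'A', 'B', 'G'])`, rejecting both the direct A→G edge (g = 13) and the S→B shortcut (g = 9).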
Local search algorithms
■ Advantages:
■ Use very little memory
■ Can often find reasonable solutions in large or infinite
(continuous) state spaces.
• Ridge: a region which is higher than its neighbours but which itself has a
slope. It is a special kind of local maximum.
Ways Out
■ Backtrack to some earlier node and try going in a different
direction.
■ Make a big jump to try to get in a new section.
■ Move in several directions at once.
Steepest-Ascent Hill Climbing (Gradient Search)
As the value of the evaluation function increases, we move nearer to the goal state.
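Steepest-ascent hill climbing in a minimal Python sketch (the `value` and `neighbours` callbacks are problem-specific and supplied by the caller):

```python
def steepest_ascent_hill_climbing(state, value, neighbours):
    """Move to the best neighbour; stop at a state no neighbour improves on
    (which may be only a local maximum, plateau, or ridge)."""
    while True:
        candidates = neighbours(state)
        if not candidates:
            return state
        best = max(candidates, key=value)
        if value(best) <= value(state):
            return state              # no uphill move exists
        state = best
```

On a single-peaked function this climbs straight to the optimum; on a multi-peaked one it stops at whichever peak is uphill from the start.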
GLOBAL APPROACH
Simulated Annealing
• A variation of hill climbing in which, at the beginning of the process,
some downhill moves may be made.
■ A Physical Analogy:
Imagine the task of getting a ping-pong ball into the deepest crevice in a
bumpy surface. If we just let the ball roll, it will come to rest at a local
minimum. If we shake the surface, we can bounce the ball out of the local
minimum. The trick is to shake just hard enough to bounce the ball out of
local minima but not hard enough to dislodge it from the global minimum.
The simulated-annealing solution is to start by shaking hard (i.e., at a high
temperature) and then gradually reduce the intensity of the shaking (i.e.,
lower the temperature).
Simulated annealing
• Main idea: occasionally taking steps in random (even downhill) directions does not
decrease, but actually increases, the chance of finding a global optimum.
• Annealing: when the amplitude of the random step becomes too small to let the
search leave the hill under consideration, the result of the algorithm is said
to be annealed.
Simulated annealing
• If the schedule lowers T slowly enough, the algorithm will find a global
optimum with probability approaching 1.
Terminology from the physical problem is often used. Downhill moves are accepted readily early in the annealing schedule and then less
often as time goes on. The schedule input determines the value of the temperature T as a function of time.
Probability calculation
■ A move that worsens the value by ΔE (ΔE < 0) is accepted with probability e^(ΔE/T), so bad moves are more likely to be accepted while the temperature T is high.
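A minimal simulated-annealing sketch in Python using the standard acceptance probability e^(ΔE/T); the cooling `schedule` and `neighbour` functions are problem-specific assumptions supplied by the caller:

```python
import math, random

def simulated_annealing(state, value, neighbour, schedule, max_steps=100000):
    """Hill climbing that also accepts a worsening move (delta < 0)
    with probability e^(delta / T); T falls according to the schedule."""
    for t in range(1, max_steps + 1):
        T = schedule(t)
        if T <= 0:
            return state              # fully cooled: stop
        candidate = neighbour(state)
        delta = value(candidate) - value(state)
        if delta > 0 or random.random() < math.exp(delta / T):
            state = candidate
    return state
```

Early on (T large) e^(ΔE/T) is close to 1 and downhill moves are accepted readily; as T shrinks, the exponential goes to 0 and the search behaves like plain hill climbing.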
Stochastic beam search tends to allow more diversity in the k individuals than
does plain beam search.
Stochastic Beam Search: Genetic Algorithms (GA)
Genetic algorithms
■ A genetic algorithm (GA) is a variant of stochastic beam search, in
which two parent states are combined.
■ Inspired by the process of natural selection:
■ Living beings adapt to the environment thanks to the characteristics
inherited from their parents.
■ The possibility of survival and reproduction are proportional to the
goodness of these characteristics.
■ The combination of “good” individuals can produce better adapted
individuals.
Genetic algorithms
■ To solve a problem via GAs requires:
■ The size of the initial population:
■ GAs start with a set of k states randomly generated
■ A strategy to combine individuals
■ The representation of the states (individuals):
■ A function, which measure the fitness of the states
■ Operators, which combine states to obtain new states
■ Cross-over and mutation operators
Genetic algorithms: algorithm
■ Steps of the basic GA algorithm:
1. N individuals from current population are
selected to form the intermediate population
(according to some predefined criteria).
2. Individuals are paired and for each pair:
a) The crossover operator is applied and two new
individuals are obtained.
b) New individuals are mutated
■ The resulting individuals form the new
population.
■ The process is iterated until the population
converges or a specified number of iterations has
passed.
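The steps above can be sketched for bit-string individuals, using fitness-proportional selection, one-point crossover, and bit-flip mutation (all parameter values are illustrative, and fitness is assumed non-negative):

```python
import random

def genetic_algorithm(fitness, length, pop_size=20, generations=200, p_mut=0.05):
    """Fitness-proportional selection, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection weight proportional to fitness (tiny epsilon keeps
        # weights positive; assumes fitness values are non-negative).
        weights = [fitness(ind) + 1e-9 for ind in pop]
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = random.choices(pop, weights=weights, k=2)   # selection
            cut = random.randrange(1, length)                    # crossover point
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if random.random() < p_mut else b     # mutation
                     for b in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

Run on the toy "maximize the number of 1 bits" problem, the population converges toward the all-ones string within a few dozen generations.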
Genetic Algorithms
[Figure: population, selection, and mutation stages on 8-queens states]
■ Selection probability is proportional to fitness, e.g. with fitness values 24, 23, 20, 11:
■ 24/(24+23+20+11) = 31%
■ 23/(24+23+20+11) = 29%, etc.
Genetic algorithms: 8-queens problem
■ Fitness function: number of non-attacking pairs of queens (min = 0, max = (8 × 7)/2 = 28)