
Chapter 2 Problem Solving

Prepared by
Mrs. Megha V Gupta
New Horizon Institute of Technology and Management
Steps in building a system to solve a particular problem

1. Define the problem precisely – find the input situations as well as the final situations that constitute an acceptable solution.
2. Analyze the problem – find a few important features that may have an impact on the appropriateness of various possible techniques for solving the problem.
3. Isolate and represent the task knowledge necessary to solve the problem.
4. Choose the best problem-solving technique(s) and apply it to the particular problem.
PROBLEMS, PROBLEM SPACES AND SEARCH

Problem solving is a process of generating solutions from observed data.

• A ‘problem space’ is the set of all possible configurations of the problem; it is the environment in which the search is performed.

■ A ‘state space’ of the problem is the set of all states reachable from the initial state.

• A ‘search’ refers to the search for a solution in a problem space.

State Space Search

■ A state space represents a problem in terms of states and operators that change states.
■ A state space consists of:
▪ A representation of the states the system can be in.
▪ A set of operators that can change one state into another state. Often the operators are represented as programs that change a state representation to represent the new state.
▪ An initial state.
▪ A set of final states; some of these may be desirable, others undesirable. This set is often represented implicitly by a program that detects terminal states.
Toy Problems
8-puzzle
Water Jug
Missionaries and Cannibals
8-puzzle problem

“The puzzle consists of a 3x3 board with 9 spaces, of which 8 hold tiles numbered 1 to 8 and one is left blank. A tile adjacent to the blank space can slide into it. We have to arrange the tiles in a given sequence.”

The start state is any arrangement of the tiles, and the goal state is the tiles arranged in a specific sequence.
Solution: the sequence of tile movements needed to reach the goal state.
The transition function (the direction in which the blank space effectively moves: left, right, up or down) generates the legal states.
Example

Initial state:
4 1 3
2 6
7 5 8

Goal state:
1 2 3
4 5 6
7 8
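To make the transition function concrete, here is a minimal Python sketch (not part of the original slides) of the 8-puzzle state space, with a state stored as a 9-tuple read row by row and 0 standing for the blank:

```python
# A state is a 9-tuple read row by row; 0 stands for the blank space.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def successors(state):
    """Yield (action, new_state) for every legal move of the blank space."""
    i = state.index(0)                  # position of the blank
    row, col = divmod(i, 3)
    for action, delta in (('up', -3), ('down', 3), ('left', -1), ('right', 1)):
        # Reject moves that would slide the blank off the board.
        if action == 'up' and row == 0: continue
        if action == 'down' and row == 2: continue
        if action == 'left' and col == 0: continue
        if action == 'right' and col == 2: continue
        s = list(state)
        j = i + delta
        s[i], s[j] = s[j], s[i]         # the adjacent tile slides into the blank
        yield action, tuple(s)
```

Each call generates the legal states, mirroring the transition function described above.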
Water-Jug Problem

“You are given two jugs, a 4-gallon one and a 3-gallon one, a pump with an unlimited supply of water which you can use to fill the jugs, and the ground on which water may be poured. Neither jug has any measuring markings on it. How can you get exactly 2 gallons of water in the 4-gallon jug?”
A state space search

(x, y) : ordered pair
x : gallons of water in the 4-gallon jug, x = 0, 1, 2, 3, 4
y : gallons of water in the 3-gallon jug, y = 0, 1, 2, 3
start state : (0, 0)
goal state : (2, n) where n = any value

Rules:
1. Fill the 4-gallon jug: (4, –)
2. Fill the 3-gallon jug: (–, 3)
3. Empty the 4-gallon jug: (0, –)
4. Empty the 3-gallon jug: (–, 0)
Water jug rules
[Figure: the full table of production rules for the water-jug problem; the remaining rules cover pouring water from one jug into the other until one of them is full or empty.]
A water jug solution

4-Gallon Jug | 3-Gallon Jug | Rule Applied
      0      |      0       |
      0      |      3       |      2
      3      |      0       |      9
      3      |      3       |      2
      4      |      2       |      7
      0      |      2       |   5 or 12
      2      |      0       |   9 or 11
Solution: a path / plan through the state space.

Missionaries and Cannibals

Three missionaries and three cannibals wish to cross the river. They have a small boat that will carry up to two people. Everyone can navigate the boat. If at any time the cannibals outnumber the missionaries on either bank of the river, they will eat the missionaries. Find the smallest number of crossings that will allow everyone to cross the river safely.

https://www.youtube.com/watch?v=W9NEWxabGmg
Production Rules
Farmer, Wolf, Goat and the Cabbage

https://www.youtube.com/watch?v=go294ZR4Rdg
State Space Representation



Problem-solving agent

■ A problem-solving agent is a kind of goal-based agent.
■ It solves problems by finding sequences of actions that lead to desirable states (goals).
■ To solve a problem, the first step is goal formulation, based on the current situation.
■ The algorithms here are uninformed:
■ no extra information about the problem other than its definition
■ no heuristics (rules of thumb)
Goal formulation

■ The goal is formulated as a set of world states in which the goal is satisfied.
■ Reaching the goal state from the initial state requires actions.
■ Actions are the operators causing transitions between world states.
■ Actions should be abstract to a certain degree, rather than very detailed.
■ E.g., “turn left” vs. “turn left 30 degrees”, etc.
Problem formulation

■ The process of deciding what actions and states to consider, given a goal.
■ E.g., driving Amman -> Zarqa
■ in-between states and actions defined
■ States: some places in Amman & Zarqa
■ Actions: turn left, turn right, go straight, accelerate & brake, etc.
■ Because there are many ways to achieve the same goal, those ways are together expressed as a tree.
■ With multiple options of unknown value at a point, the agent can examine different possible sequences of actions and choose the best.
■ This process of looking for the best sequence is called search.
■ A search algorithm takes a problem as input and returns a solution (the best sequence) in the form of an action sequence.
“formulate, search, execute”

Once a solution is found, the actions it recommends can be carried out; this is called the execution phase. Thus, we have a simple “formulate, search, execute” design for the agent.
Problem-solving agents

A problem-solving agent first formulates a goal and a problem, searches for a sequence of actions
that would solve the problem, and then executes the actions one at a time. When this is complete, it
formulates another goal and starts over.
Example: Romania
■ On holiday in Romania; currently in Arad.
■ Flight leaves tomorrow from Bucharest
■ Formulate goal:
■ be in Bucharest
■ Formulate problem:
■ states: various cities
■ actions: drive between cities
■ Find solution:
■ sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Well-defined problems and solutions

A problem is defined by 5 components:
■ Initial state
■ Actions
■ Transition model (successor function)
■ Goal test
■ Path cost
Well-defined problems and solutions

1. The initial state that the agent starts in.

2. Actions: a description of the possible actions available to the agent.

3. Transition model: a description of what each action does. A successor is any state reachable from a given state by a single action.

Together, the initial state, actions and transition model define the state space:
■ the set of all states reachable from the initial state by any sequence of actions.
A path in the state space:
■ a sequence of states connected by a sequence of actions.
Well-defined problems and solutions

4. The goal test, which determines whether a given state is a goal state.
■ Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them.
■ Sometimes the goal is described by an abstract property rather than an explicitly enumerated set of states.
E.g., in chess, the goal is to reach a state called “checkmate,” where the opponent’s king is under attack and can’t escape.
Well-defined problems and solutions

5. A path cost function,
■ assigns a numeric cost to each path
■ serves as the performance measure
■ denoted by g
■ used to distinguish the best path from others
Usually the path cost is the sum of the step costs of the individual actions (in the action list).
The solution of a problem is then
■ a path from the initial state to a state satisfying the goal test.
Optimal solution:
■ the solution with the lowest path cost among all solutions.
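These five components map naturally onto a small class skeleton. The following Python sketch is illustrative only; the names (Problem, actions, result, and so on) are our own choices, not a fixed API:

```python
class Problem:
    """Skeleton for a well-defined problem with its five components."""

    def __init__(self, initial):
        self.initial = initial                   # 1. the initial state

    def actions(self, state):
        """2. The actions available to the agent in this state."""
        raise NotImplementedError

    def result(self, state, action):
        """3. Transition model: the successor state an action produces."""
        raise NotImplementedError

    def goal_test(self, state):
        """4. Whether this state is a goal state."""
        raise NotImplementedError

    def step_cost(self, state, action, result):
        """Step cost; summing these along a path gives 5. the path cost g."""
        return 1                                 # default: each step costs 1
```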
Vacuum world state space graph

■ states? The state is determined by both the agent location and the dirt locations.
■ Initial state: any
■ actions? Left, Right, Suck
■ Transition model: The actions have their expected effects, except that moving Left in the leftmost square,
moving Right in the rightmost square, and Sucking in a clean square have no effect.
■ goal test? no dirt at all locations
■ path cost? Each step costs 1, so the path cost is the number of steps in the path.
Example: The 8-puzzle

■ states? locations of tiles


■ Initial state: Any state can be designated as the initial state.
■ actions? move blank left, right, up, down
■ Transition model: Given a state and action, this returns the resulting state; for
example, if we apply Left to the start state in Figure above, the resulting state
has the 5 and the blank switched.
■ goal test? Does the state match the given goal state?
■ path cost? Each step costs 1, so the path cost is the number of steps in the path.
Example: robotic assembly

■ states?: real-valued coordinates of robot joint angles and of the parts of the object to be assembled
■ actions?: continuous motions of robot joints
■ goal test?: complete assembly
■ path cost?: time to execute
Traveling Salesman Problem (TSP)

■ States: cities
■ Initial state: A
■ Successor function: travel from one city to another connected by a road
■ Goal test: the trip visits each city exactly once, starting and ending at A
■ Path cost: traveling time
Map Coloring Problem
Using only four colors, you have to color a planar map so that no two adjacent regions have the same color.

Initial state: planar map with no regions colored.

Goal test: all regions of the map are colored and no two adjacent regions have the same color.

Successor function: choose an uncolored region and color it with a color that is different from all adjacent regions.

Cost function: could be 1 for each color used.

Airline Travel problems
■ States: Each state obviously includes a location (e.g., an airport) and the current time.
Furthermore, because the cost of an action (a flight segment) may depend on previous
segments, their fare bases, and their status as domestic or international, the state must
record extra information about these “historical” aspects.
■ Initial state: This is specified by the user’s query.
■ Actions: Take any flight from the current location, in any seat class, leaving after the current
time, leaving enough time for within-airport transfer if needed.
■ Transition model: The state resulting from taking a flight will have the flight’s destination as
the current location and the flight’s arrival time as the current time.
■ Goal test: Are we at the final destination specified by the user?
■ Path cost: This depends on monetary cost, waiting time, flight time, customs and immigration
procedures, seat quality, time of day, type of airplane, frequent-flyer mileage
awards, and so on.
Search tree

■ Initial state
■ The root of the search tree is a search node.
■ Expanding
■ applying the successor function to the current state, thereby generating a new set of states.
■ Leaf nodes
■ the states having no successors.
Fringe: the set of search nodes that have not been expanded yet.
Tree search example
Search tree

■ The essence of searching:
■ in case the first choice is not correct
■ choose one option and keep the others for later inspection
■ Hence we have the search strategy,
■ which determines the choice of which state to expand
■ a good choice -> less work -> faster search
■ Important:
■ state space ≠ search tree
Search tree

■ A node has five components:
■ STATE: which state it is in the state space
■ PARENT-NODE: from which node it was generated
■ ACTION: which action was applied to its parent node to generate it
■ PATH-COST: the cost, g(n), from the initial state to the node n itself
■ DEPTH: the number of steps along the path from the initial state
Implementation: states vs. nodes

■ A state is a (representation of) a physical configuration.
■ A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), depth.

■ The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
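A sketch of such a node data structure in Python, using the hypothetical Problem interface from the earlier sketch; expand fills in state, parent, action, path cost and depth exactly as listed above:

```python
class Node:
    """A search-tree node: STATE, PARENT-NODE, ACTION, PATH-COST g(n), DEPTH."""

    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost
        self.depth = 0 if parent is None else parent.depth + 1

    def expand(self, problem):
        """Create the child nodes reachable from this node in one action."""
        children = []
        for action in problem.actions(self.state):
            s = problem.result(self.state, action)
            g = self.path_cost + problem.step_cost(self.state, action, s)
            children.append(Node(s, self, action, g))
        return children

    def solution(self):
        """Recover the action sequence by following parent links to the root."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
```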
Search strategies
■ A search strategy is defined by picking the order of node
expansion
■ Strategies are evaluated along the following dimensions:
■ Completeness (guarantee to find a solution if there is one): does it
always find a solution if one exists?
■ time complexity (how long does it take to find a solution): number
of nodes generated during the search
■ space complexity (how much memory is needed to perform the
search): maximum number of nodes stored in memory
■ Optimality (does it give highest quality solution when there are
several different solutions): does it always find a least-cost solution?
Measuring problem-solving performance

■ Time and space complexity are measured in terms of:
■ b: branching factor of the search tree (max. number of successors of any node)
■ d: depth of the least-cost solution (shallowest goal node)
■ m: the maximum length of any path in the state space (maximum depth of the state space)


Search strategies
■ Uninformed search or blind search
■ no information about the number of steps
■ or the path cost from the current state to the goal
■ is applicable when we only distinguish goal states from
non-goal states.
■ search the state space blindly
■ Informed search, or heuristic search
■ a cleverer strategy that searches toward the goal,
■ based on the information from the current state so far
■ is applied if we have some knowledge of the path cost
or the number of steps between the current state and a
goal.
Uninformed search methods

Strategies that use only the information available in the problem definition. While searching, you have no clue whether one non-goal state is better than any other. Your search is blind.
■ Breadth-first search
■ Uniform cost search
■ Depth-first search
■ Depth-limited search
■ Iterative deepening search
■ Bidirectional search
Breadth-first search
■ Expand shallowest unexpanded node
Implementation:
■ fringe is a FIFO queue, i.e., new successors go at end of queue
[Figures: BFS expands the shallowest node first. Is A a goal state? After expanding, fringe = [C, D, E]: is C a goal state? Then fringe = [D, E, F, G]: is D a goal state? ...]

Example: BFS
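A minimal sketch of breadth-first search in Python, built on the hypothetical Problem and Node classes sketched earlier; as in the analysis below, the goal test is applied when a node is generated:

```python
from collections import deque

def breadth_first_search(problem):
    """Expand the shallowest unexpanded node; the fringe is a FIFO queue."""
    node = Node(problem.initial)
    if problem.goal_test(node.state):
        return node
    fringe = deque([node])                     # FIFO: new successors go at the end
    explored = {node.state}
    while fringe:
        node = fringe.popleft()                # shallowest unexpanded node
        for child in node.expand(problem):
            if child.state not in explored:
                if problem.goal_test(child.state):   # test on generation
                    return child
                explored.add(child.state)
                fringe.append(child)
    return None                                # failure: no solution exists
```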


Properties of breadth-first search

■ Complete? Yes (if b is finite), provided the shallowest goal node is at some finite depth d.
■ Time complexity? b + b^2 + b^3 + … + b^d + (b^(d+1) − b) = O(b^(d+1))
■ Space complexity? O(b^(d+1)) (keeps every node in memory)
■ Optimal? Not in general; yes if the cost is 1 per step, or more generally if the path cost is a non-decreasing function of the depth of the node.

Space is the bigger problem (more than time).
Breadth First Search

Imagine searching a uniform tree where every state has b successors. The root of the search tree generates b nodes at the first level, each of which generates b more nodes, for a total of b^2 at the second level. Each of these generates b more nodes, yielding b^3 nodes at the third level, and so on. Now suppose that the solution is at depth d. In the worst case, it is the last node generated at that level. Then the total number of nodes generated is

b + b^2 + b^3 + … + b^d = O(b^d).

(If the algorithm were to apply the goal test to nodes when selected for expansion, rather than when generated, the whole layer of nodes at depth d would be expanded before the goal was detected, and the time complexity would be O(b^(d+1)).)
Breadth-first search

[Figure: a search tree rooted at S, with children A and D, grandchildren B, D, A, E, and costs on the deeper nodes; breadth-first search expands it level by level.]
Uniform-cost search

Implementation: the fringe is a queue ordered by path cost g(n).

Equivalent to breadth-first search if all step costs are equal.

Complete? Yes, if every step cost exceeds some small positive constant.

Time? The number of nodes with path cost less than that of the optimal solution.

Space? The number of nodes on paths with path cost less than that of the optimal solution.

Optimal? Yes: nodes are expanded in increasing order of path cost.
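A sketch of uniform-cost search with a priority queue ordered by g(n), again using the hypothetical Problem and Node classes from earlier; the counter is only a tie-breaker so the heap never compares Node objects:

```python
import heapq, itertools

def uniform_cost_search(problem):
    """Expand the unexpanded node with the lowest path cost g(n)."""
    tie = itertools.count()
    node = Node(problem.initial)
    fringe = [(node.path_cost, next(tie), node)]   # ordered by g(n)
    best_g = {node.state: 0}
    while fringe:
        g, _, node = heapq.heappop(fringe)
        if problem.goal_test(node.state):          # test on expansion
            return node
        if g > best_g.get(node.state, float('inf')):
            continue                               # stale entry: a cheaper path exists
        for child in node.expand(problem):
            if child.path_cost < best_g.get(child.state, float('inf')):
                best_g[child.state] = child.path_cost
                heapq.heappush(fringe, (child.path_cost, next(tie), child))
    return None
```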


Depth-first search

■ Always expands one of the nodes at the deepest level of the tree.
■ Only when the search hits a dead end does it go back and expand nodes at shallower levels.
■ Dead end -> leaf nodes that are not the goal.
■ Backtracking search:
■ only one successor is generated on expansion, rather than all successors
■ uses even less memory
Depth-first search

■ Expand the deepest unexpanded node.
■ Implementation:
■ the fringe is a LIFO stack, i.e., successors are put at the front.

Trace (figures): Is A a goal state? fringe = [B, C]. Is B a goal state? fringe = [D, E, C]. Is D a goal state? fringe = [H, I, E, C]. Is H a goal state? fringe = [I, E, C]. Is I a goal state? fringe = [E, C]. Is E a goal state? fringe = [J, K, C]. Is J a goal state? fringe = [K, C]. Is K a goal state? fringe = [C]. Is C a goal state? fringe = [F, G]. Is F a goal state? fringe = [L, M, G]. Is L a goal state? fringe = [M, G]. Is M a goal state? ...

Example: DFS
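A matching sketch of depth-first graph search; compared with the BFS sketch, the fringe is now a LIFO stack:

```python
def depth_first_search(problem):
    """Expand the deepest unexpanded node; the fringe is a LIFO stack."""
    fringe = [Node(problem.initial)]
    explored = set()
    while fringe:
        node = fringe.pop()                    # deepest node: the last one pushed
        if problem.goal_test(node.state):
            return node
        if node.state in explored:
            continue
        explored.add(node.state)
        fringe.extend(node.expand(problem))    # children go on top of the stack
    return None
```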


Depth-first search

[Figure: the same search tree rooted at S, explored depth-first one branch at a time.]
Properties of depth-first search

■ Complete? No: fails on infinite paths or loops; complete in finite spaces.
■ Time? O(b^m). Terrible if m (the maximum depth of the state space) is much larger than d (the depth of the shallowest solution), and infinite if the tree is unbounded. May be much faster than breadth-first search if solutions are dense.
■ Space? O(bm), a linear space requirement: branching factor (b) × maximum depth (m).
■ Optimal? No. It may find a non-optimal goal first, and cannot guarantee the shallowest solution.
Depth First Search

A depth-first tree search may generate all of the O(b^m) nodes in the search tree, where m is the maximum depth of any node; this can be much greater than the size of the state space.

A depth-first tree search needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path. Once a node has been expanded, it can be removed from memory as soon as all its descendants have been fully explored. For a state space with branching factor b and maximum depth m, depth-first search requires storage of only O(bm) nodes.
Depth-Limited Search

■ Depth-first search is clearly dangerous:
• if the tree is very deep, we risk finding a suboptimal solution;
• if the tree is infinite, we risk an infinite loop.
■ The embarrassing failure of depth-first search in infinite state spaces can be alleviated by supplying depth-first search with a predetermined depth limit l. That is, nodes at depth l are treated as if they have no successors. This approach is called depth-limited search.
■ Three possible outcomes:
■ Solution
■ Failure (no solution)
■ Cutoff (no solution within the cutoff)
Depth-limited search

■ However, it is usually not easy to define a suitable maximum depth:
■ too small -> no solution can be found
■ too large -> the same problems as depth-first search
■ The search is complete (provided the limit l ≥ d) but still not optimal.
Depth-limited search

[Figure: the same search tree rooted at S, searched with depth limit l = 3; nodes below the limit are never generated.]
Iterative deepening search
■ Usually we do not know a reasonable depth limit in advance.
■ Iterative deepening search repeatedly runs depth-limited search for
increasing depth limits 0, 1, 2, . . .
■ this essentially combines the advantages of depth-first and breadth
first search;
■ the procedure is complete and optimal;
■ the memory requirement is similar to that of depth-first search;
Iterative deepening search

The iterative deepening search algorithm, which repeatedly applies depth limited
search with increasing limits. It terminates when a solution is found or if the depth limited
search returns failure, meaning that no solution exists.
Iterative deepening search l =0
Iterative deepening search l =1
Iterative deepening search l =2
Iterative deepening search l =3
■ Note: we visit top-level nodes multiple times. The last (max depth) level is visited once, the second-to-last level twice, and so on. It may seem expensive, but it turns out not to be so costly, since in a tree most of the nodes are in the bottom level; so it does not matter much if the upper levels are visited multiple times.

■ Number of nodes generated in an iterative deepening search to depth d with branching factor b:
N_IDS = (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + 1·b^d
Iterative deepening search

■ For b = 2, d = 3:
■ N_BFS = b + b^2 + b^3 + (b^4 − b) = 2 + 4 + 8 + (16 − 2) = 28
■ N_IDS = (d+1)·b^0 + d·b^1 + (d−1)·b^2 + 1·b^3 = 4·1 + 3·2 + 2·4 + 1·8 = 4 + 6 + 8 + 8 = 26
■ Iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is not known.
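A sketch of both procedures, reusing the earlier hypothetical Problem and Node classes; depth-limited search returns a distinct 'cutoff' marker so that iterative deepening can tell "no solution within this limit" apart from "no solution at all":

```python
import itertools

def depth_limited_search(problem, limit):
    """DFS that treats nodes at the depth limit as if they had no successors."""
    def recurse(node, limit):
        if problem.goal_test(node.state):
            return node
        if limit == 0:
            return 'cutoff'                    # no solution within this limit
        cutoff_occurred = False
        for child in node.expand(problem):
            result = recurse(child, limit - 1)
            if result == 'cutoff':
                cutoff_occurred = True
            elif result is not None:
                return result                  # a solution node
        return 'cutoff' if cutoff_occurred else None
    return recurse(Node(problem.initial), limit)

def iterative_deepening_search(problem):
    """Run depth-limited search with limits 0, 1, 2, ... until a solution appears."""
    for limit in itertools.count():
        result = depth_limited_search(problem, limit)
        if result != 'cutoff':
            return result                      # a solution node, or None (failure)
```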
Properties of iterative deepening search

■ Complete? Yes.
■ Time? (d+1)·b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
■ Space? O(bd), linear, like depth-first search.
■ Optimal? Yes, if step cost = 1.
Iterative deepening search

■ Suppose we have a tree with branching factor b (the number of children of each node) and depth d, i.e., there are on the order of b^d nodes.

■ In an iterative deepening search, the nodes on the bottom level are expanded once, those on the next-to-bottom level are expanded twice, and so on, up to the root of the search tree, which is expanded d+1 times.

■ IDDFS has the same asymptotic time complexity as DFS and BFS, but it is indeed slower than both, as it has a higher constant factor in its time complexity expression. IDDFS is best suited for a large, possibly infinite, tree.

Example IDS



Bidirectional search

■ Run two simultaneous searches:
■ one forward from the initial state, another backward from the goal
■ stop when the two searches meet.
■ However, computing backward is difficult:
■ there may be a huge number of goal states
■ at the goal state, which actions were used to reach it?
■ are the actions reversible, so that predecessors can be computed?
Comparing search strategies
Informed Search Methods
■ How can we make use of other knowledge about the
problem to improve searching strategy?
■ Map example:
■ Heuristic: Expand those nodes closest in “straight-line” distance to goal
■ 8-puzzle:
■ Heuristic: Expand those nodes with the most tiles in place
Heuristic

■ Heuristics (Greek heuriskein = find, discover): “the study of the methods and rules of discovery and invention.”

■ Heuristic: a “rule of thumb” used to help guide search; often something learned experientially and recalled when needed.

■ Heuristic function: a function applied to a state in a search space to indicate the likelihood of success if that state is selected.
Heuristic function

■ A heuristic function at a node n is an estimate of the optimum cost from the current node to a goal. It is denoted by h(n).
■ h(n) = estimated cost of the cheapest path from node n to a goal node.

Example: we want a path from Kolkata to Guwahati. A heuristic for Guwahati may be the straight-line distance between Kolkata and Guwahati:
h(Kolkata) = Euclidean distance(Kolkata, Guwahati)

Heuristics can also help speed up exhaustive, blind search, such as depth-first and breadth-first search.
A Simple 8-puzzle heuristic

[Figure: the goal configuration, a current configuration, and the three configurations reachable by sliding a tile (e.g., moving the blank right). Which move is best?]
Another approach

■ Number of tiles in the incorrect position.
■ This can also be considered a lower bound on the number of moves from a solution!
■ The “best” move is the one with the lowest number returned by the heuristic.

[Figure: the three successor states score h = 2, h = 4 and h = 3 misplaced tiles, so the first (h = 2) is the best move.]
Heuristics

E.g., for the 8-puzzle:
■ h1(n) = number of misplaced tiles
■ h2(n) = total Manhattan distance (i.e., the sum of the distances of the tiles from their goal positions)

For the start state S:
■ h1(S) = 8
■ h2(S) = 3+1+2+2+2+3+3+2 = 18
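Both heuristics are a few lines of Python for the tuple representation used earlier (a sketch; the goal layout below is an assumption):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)     # assumed goal layout; 0 is the blank

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def h2(state):
    """Total Manhattan distance of the tiles from their goal squares."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)            # where this tile belongs
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total
```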
Best-first search

■ Idea: use an evaluation function f(n) for each node.
■ f(n) provides an estimate of the total cost.
🡪 Expand the node n with the smallest f(n).

■ Implementation:
order the nodes in the fringe in increasing order of cost.

■ Special cases:
■ greedy best-first search
■ A* search

Best-First Search
■ Use an evaluation function f(n).
■ Always choose the node from the fringe that has the lowest f value.

[Figure: a fringe with f values 3, 5 and 1; the node with f = 1 is chosen, and its children with f values 4 and 6 join the fringe.]
Greedy best-first search

■ f(n) = h(n), the estimated cost from n to the goal
■ e.g., f(n) = straight-line distance from n to Bucharest
■ Greedy best-first search expands the node that appears to be closest to the goal.
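A sketch of greedy best-first search, using the hypothetical Problem and Node classes from earlier; it is ordinary best-first search with f(n) = h(n):

```python
import heapq, itertools

def greedy_best_first_search(problem, h):
    """Expand the fringe node whose heuristic estimate h(n) is smallest."""
    tie = itertools.count()
    fringe = [(h(problem.initial), next(tie), Node(problem.initial))]
    explored = set()
    while fringe:
        _, _, node = heapq.heappop(fringe)     # node that looks closest to a goal
        if problem.goal_test(node.state):
            return node
        if node.state in explored:
            continue
        explored.add(node.state)
        for child in node.expand(problem):
            if child.state not in explored:
                heapq.heappush(fringe, (h(child.state), next(tie), child))
    return None
```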
Romania with straight-line dist.
Greedy best-first search example
Greedy best-first search example
Properties of greedy best-first search

■ Complete? No: it can get stuck in loops.
■ Time? O(b^m), but a good heuristic can give dramatic improvement.
■ Space? O(b^m): keeps all nodes in memory.
■ Optimal? No.
e.g., Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest is shorter!
E.g., route-finding problem
S is the starting state, G is the goal state. Let us run the greedy search algorithm for the graph given in Figure a. The straight-line distance heuristic estimates for the nodes are shown in Figure b.

[Figures a and b: the search graph and the table of heuristic estimates.]
A* search

■ Hart, Nilsson & Raphael, 1968: best-first search with f(n) = g(n) + h(n).
■ Idea: avoid expanding paths that are already expensive.
■ Evaluation function f(n) = g(n) + h(n)
g(n) = cost so far to reach n
h(n) = estimated cost from n to the goal
f(n) = estimated total cost of the path through n to the goal
A* Shortest Path Example

[Figures: four steps of A* on a shortest-path graph, expanding the node with the lowest f = g + h at each step.]


A* algorithm

Insert the root node into the queue
While the queue is not empty:
    Dequeue the element with the highest priority
    (if priorities are the same, the alphabetically smaller path is chosen)
    If the path ends in the goal state:
        print the path and exit
    Else:
        Insert all the children of the dequeued element, with f(n) as the priority
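A Python sketch of the same idea, using the hypothetical Problem and Node classes from earlier; the priority queue is ordered by f(n) = g(n) + h(n):

```python
import heapq, itertools

def a_star_search(problem, h):
    """Expand the fringe node with the lowest f(n) = g(n) + h(n)."""
    tie = itertools.count()
    node = Node(problem.initial)
    fringe = [(node.path_cost + h(node.state), next(tie), node)]
    best_g = {node.state: 0}
    while fringe:
        _, _, node = heapq.heappop(fringe)
        if problem.goal_test(node.state):
            return node
        for child in node.expand(problem):
            # Keep a child only if this is the cheapest path to it so far.
            if child.path_cost < best_g.get(child.state, float('inf')):
                best_g[child.state] = child.path_cost
                f = child.path_cost + h(child.state)
                heapq.heappush(fringe, (f, next(tie), child))
    return None
```

For the 8-puzzle, h could be the h1 or h2 function sketched earlier.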
A* Example
Consider the search problem below with start state S and goal state G. The transition costs are next to the edges, and the heuristic values are next to the states. What is the final cost using A* search?


8 Puzzle Example
■ f(n) = g(n) + h(n)
■ What is the usual g(n)? The number of moves made so far.
■ Two well-known h(n)’s:
■ h1 = the number of misplaced tiles
■ h2 = the sum of the distances of the tiles from their goal positions, using city-block (Manhattan) distance, i.e., the sum of the horizontal and vertical distances
8 Puzzle Using Number of Misplaced Tiles

[Figure: an A* search tree for the 8-puzzle using the misplaced-tiles heuristic. The start state has f = 0 + 4 = 4; each successor is labelled with its f(n) = g(n) + h(n) value, and the tree grows toward the goal configuration.]
A*: admissibility

■ If h(n) is admissible, then the search will find an optimal solution.
■ A search algorithm is admissible if, for any graph, it terminates with an optimal path from the start state to a goal state whenever such a path exists.
■ A heuristic function is admissible (terminates with an optimal path) if it satisfies the following property:

h’(n) ≤ h*(n)   (the heuristic function underestimates the true cost)

■ h’(n) has to be an optimistic estimator; it must never overestimate h*(n).
■ h*(n) = the cost of the cheapest solution path from n to a goal node.
Memory Bounded Heuristic Search: Recursive BFS

■ How can we solve the memory problem for A* search?
■ Idea: try something like depth-first search, but let’s not forget everything about the branches we have partially explored.
■ We remember the best f-value we have found so far in the branch we are deleting.

RBFS keeps track of the best alternative f-value over fringe nodes that are not children of the current node, i.e., it asks: do I want to back up?

■ RBFS changes its mind very often in practice.
■ This is because f = g + h becomes more accurate (less optimistic) as we approach the goal; hence, higher-level nodes have smaller f-values and will be explored first.
■ Problem: we should keep in memory whatever we can.


Recursive best-first
■ A recursive implementation of best-first search, with linear space cost.
■ It forgets a branch when its cost exceeds the best alternative.
■ The cost of the forgotten branch is stored in the parent node as its new cost.
■ The forgotten branch is re-expanded if its cost becomes the best one again.
Local search algorithms and optimization problems

■ Local search algorithms operate using a single current state and generally move only to neighbors of that state.

■ In addition to finding goals, these algorithms are useful for solving optimization problems, in which the aim is to find the best state according to an objective function.

■ In local search, there is a function to evaluate the quality of the states, but this is not necessarily related to a cost.
Local search and optimization
■ Local search
■ Keep track of single current state
■ Move only to neighboring states
■ Ignore paths

■ Advantages:
■ Use very little memory
■ Can often find reasonable solutions in large or infinite
(continuous) state spaces.

■ “Pure optimization” problems


■ All states have an objective function
■ Goal is to find state with max (or min) objective value
■ Does not quite fit into path-cost/goal-state formulation
■ Local search can do quite well on these problems.
Local search algorithms

■ These algorithms do not systematically explore the whole state space.

■ The heuristic (or evaluation) function is used to reduce the search space (states that are not worth exploring are not considered).

■ The algorithms do not usually keep track of the path traveled. The memory cost is minimal.
Hill Climbing (Greedy Local Search)

■ Searching for a goal state = climbing to the top of a hill.
■ A heuristic function estimates how close a given state is to a goal state.
■ Children are considered only if their evaluation is better than that of the parent (a reduction of the search space).
Simple Hill Climbing
Algorithm
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to be applied:
   − Select and apply a new operator.
   − Evaluate the new state:
     goal → quit
     better than the current state → new current state
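A sketch of this loop in Python; value is an objective function to maximize, and the Problem interface is the hypothetical one sketched earlier:

```python
def simple_hill_climbing(problem, value):
    """Accept the first operator that yields a better state; stop when none does."""
    current = problem.initial
    improved = True
    while improved:
        improved = False
        for action in problem.actions(current):
            candidate = problem.result(current, action)
            if problem.goal_test(candidate):
                return candidate               # goal -> quit
            if value(candidate) > value(current):
                current = candidate            # better -> new current state
                improved = True
                break                          # restart from the new current state
    return current                             # no operator improves: a peak
```

Steepest-ascent hill climbing, discussed below, differs only in examining all moves from the current state and selecting the best one rather than the first improving one.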
Different regions in the State Space

• Local maximum: a state which is better than its neighboring states, but for which there exists a state that is better still (the global maximum). This state is better because the value of the objective function here is higher than at its neighbors.

• Global maximum: the best possible state in the state space diagram, because at this state the objective function has its highest value.

• Plateau / flat local maximum: a flat region of the state space where neighboring states have the same value.

• Ridge: a region which is higher than its neighbours but itself has a slope. It is a special kind of local maximum.

• Current state: the region of the state space diagram where we are currently present during the search.

• Shoulder: a plateau that has an uphill edge.
Hill Climbing: Disadvantages
Local maximum
A state that is better than all of its neighbours, but not better than the global maximum.
Hill Climbing: Disadvantages
Plateau
A flat area of the search space in which all neighboring states have the same value.
Hill Climbing: Disadvantages
Ridges (result in a sequence of local maxima)
The orientation of the high region, compared to the set of available moves, makes it impossible to climb up. However, two moves executed serially may increase the height.
Hill Climbing: Disadvantages

Ways Out
■ Backtrack to some earlier node and try going in a different direction.
■ Make a big jump to try to get into a new section.
■ Move in several directions at once.
Steepest-Ascent Hill Climbing (Gradient Search)

• Standard hill-climbing search algorithm:
– a simple loop that searches for and selects any operation that improves the current state.
• Steepest-ascent hill climbing, or gradient search:
– a loop that continuously moves in the direction of increasing value (it terminates when a peak is reached);
– the best move (not just any improving one) is selected.
■ Considers all the moves from the current state.
■ Selects the best one as the next state.
Steepest-Ascent Hill Climbing (Gradient Search)
Blocks World

In this problem, an initial arrangement of eight blocks is provided. We have to reach the GOAL arrangement by moving blocks in a systematic order. States are evaluated using a heuristic, so that we can get the next best node by applying the steepest-ascent hill climbing technique. Two heuristics are considered: (i) LOCAL and (ii) GLOBAL. Both functions try to maximize the score/cost of each state.

LOCAL

The cost/score of the goal state is 8 (using the local heuristic), because all the blocks are in their correct positions.

[Figure: the initial state I and its score under the local heuristic.]


Now J is the current state, with score 6 > the cost of I (4). So, in step 2, three moves are possible from the best state J.

All the neighbors of node J (states K, L and M) have a lower score than J, so J is a local maximum, and no improving move is possible from K, L or M. The search falls into a TRAP situation. To overcome this problem with the local function, we can apply the GLOBAL heuristic.

As the value of a structure increases, we get nearer to the goal state.
GLOBAL APPROACH

Now the goal state will have a score/cost of +28 and the initial state will have a cost of −28. Again, the best node in the next move will be the one with the maximum score/cost.

Further, from state M we can make the following moves:
(i) PUSH block G on block A
(ii) PUSH block G on block H
(iii) PUSH block H on block A
(iv) PUSH block H on block G
(v) PUSH block A on block H
(vi) PUSH block A on block G
(vii) PUSH block G on TABLE
…and so on; we select the best node until we get the structure with a score of +28.
Simulated Annealing

• A variation of hill climbing in which, at the beginning of the process, some downhill moves may be made.

• This lowers the chances of getting caught at a local maximum, a plateau, or a ridge.

• It is inspired by the physical process of controlled cooling (crystallization, metal annealing):
■ A metal is heated up to a high temperature and then is progressively cooled in a controlled way until some solid state is reached.
■ If the cooling is adequate, the minimum-energy structure (a global minimum) is obtained.
■ Annealing schedule: if the temperature is lowered sufficiently slowly, then the goal will be attained.
Simulated Annealing
■ It is a stochastic hill-climbing algorithm (stochastic local
search, SLS):
■ A successor is selected among all possible successors according to a
probability distribution.
■ The successor can be worse than the current state.

■ A Physical Analogy:
Imagine the task of getting a ping-pong ball into the deepest crevice in a
bumpy surface. If we just let the ball roll, it will come to rest at a local
minimum. If we shake the surface, we can bounce the ball out of the local
minimum. The trick is to shake just hard enough to bounce the ball out of
local minima but not hard enough to dislodge it from the global minimum.
The simulated-annealing solution is to start by shaking hard (i.e., at a high
temperature) and then gradually reduce the intensity of the shaking (i.e.,
lower the temperature).
Simulated annealing

• Main idea: steps taken in random directions do not decrease (but actually increase) the ability to find a global optimum.

• Disadvantage: the structure of the algorithm increases the execution time.

• Advantage: the random steps possibly allow the search to avoid small “hills”.

• Temperature: determines (through a probability function) the amplitude of the steps, long at the beginning, and then shorter and shorter.

• Annealing: when the amplitude of the random step is sufficiently small not to allow descending the hill under consideration, the result of the algorithm is said to be annealed.
Simulated annealing

• If the move improves the situation, it is always accepted. Otherwise, the algorithm accepts the move with some probability less than 1.

• The probability decreases exponentially with the “badness” of the move, i.e., with the amount ΔE by which the evaluation is worsened.

• If the schedule lowers T slowly enough, the algorithm will find a global optimum with probability approaching 1.


Simulated annealing

function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem; schedule, a mapping from time to temperature
  local variables: current, a node; next, a node;
                   T, a “temperature” controlling the probability of downward steps
  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ∆E ← VALUE[next] − VALUE[current]
    if ∆E > 0 then current ← next
    else current ← next only with probability e^(∆E/T)

Terminology from the physical problem is often used. Downhill moves are accepted readily early in the annealing schedule and then less often as time goes on. The schedule input determines the value of the temperature T as a function of time.
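A direct Python transcription of this pseudocode (a sketch; the Problem interface and the example schedule below are assumptions):

```python
import itertools, math, random

def simulated_annealing(problem, value, schedule):
    """Sketch of SIMULATED-ANNEALING; schedule(t) gives the temperature T at time t."""
    current = problem.initial
    for t in itertools.count(1):
        T = schedule(t)
        if T == 0:
            return current                        # fully cooled: stop
        successors = [problem.result(current, a) for a in problem.actions(current)]
        if not successors:
            return current
        nxt = random.choice(successors)           # a randomly selected successor
        delta_e = value(nxt) - value(current)     # ΔE = VALUE[next] - VALUE[current]
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt                         # downhill moves accepted with prob e^(ΔE/T)

# One possible schedule (an assumption): geometric cooling, cut off at t = 10000.
schedule = lambda t: 100 * (0.99 ** t) if t <= 10_000 else 0
```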
Probability calculation

• The probability also decreases as the “temperature” T goes down: “bad” moves are more likely to be allowed at the start, when T is high, and they become more unlikely as T decreases.


Simulated annealing
• Aim: to avoid local optima, which represent a problem in hill climbing.
• Solution: to take, occasionally, steps in a different direction from the one in
which the increase (or decrease) of energy is maximum.
Simulated annealing: conclusions
■ It is suitable for problems in which the global optimum is
surrounded by many local optima.
■ It is suitable for problems in which it is difficult to find a
good heuristic function.
■ Determining the values of the parameters can be a problem
and requires experimentation.
Local Beam Search

In stochastic beam search, instead of choosing the best k individuals, we select k of the individuals at random; the individuals with a better evaluation are more likely to be chosen.

This is done by making the probability of being chosen a function of the evaluation function.

Stochastic beam search tends to allow more diversity in the k individuals than does plain beam search.
Stochastic Beam Search: Genetic Algorithms (GA)
Genetic algorithms
■ A genetic algorithm (GA) is a variant of stochastic beam search, in
which two parent states are combined.
■ Inspired by the process of natural selection:
■ Living beings adapt to the environment thanks to the characteristics
inherited from their parents.
■ The possibility of survival and reproduction are proportional to the
goodness of these characteristics.
■ The combination of “good” individuals can produce better adapted
individuals.
Genetic algorithms

■ Solving a problem via GAs requires:
■ The size of the initial population:
■ GAs start with a set of k states randomly generated.
■ The representation of the states (individuals).
■ A function which measures the fitness of the states.
■ A strategy to combine individuals:
■ operators which combine states to obtain new states
■ cross-over and mutation operators
Genetic algorithms: algorithm

■ Steps of the basic GA algorithm:
1. N individuals from the current population are selected to form the intermediate population (according to some predefined criteria).
2. Individuals are paired, and for each pair:
   a) the crossover operator is applied and two new individuals are obtained;
   b) the new individuals are mutated.
■ The resulting individuals form the new population.
■ The process is iterated until the population converges or a specific number of iterations has passed.
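A compact Python sketch of these steps for the 8-queens problem (the representation and fitness function are the ones described on the slides below; all function names and parameter values here are our own assumptions):

```python
import random

def fitness(state):
    """Non-attacking pairs of queens; 28 (= 8*7/2) means a solution."""
    attacking = sum(1 for i in range(8) for j in range(i + 1, 8)
                    if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return 28 - attacking

def reproduce(x, y):
    """Single-point crossover: a prefix of one parent, a suffix of the other."""
    c = random.randrange(1, 8)
    return x[:c] + y[c:]

def genetic_algorithm(pop_size=100, generations=1000, p_mutate=0.1):
    """A state is a tuple of 8 digits (row of the queen in each column)."""
    population = [tuple(random.randint(1, 8) for _ in range(8))
                  for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(s) for s in population]   # fitness-proportional selection
        new_population = []
        for _ in range(pop_size):
            x, y = random.choices(population, weights=weights, k=2)
            child = reproduce(x, y)
            if random.random() < p_mutate:           # mutate one random position
                i = random.randrange(8)
                child = child[:i] + (random.randint(1, 8),) + child[i + 1:]
            if fitness(child) == 28:
                return child                         # a solution: no attacking pairs
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)              # best individual found
```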
Genetic Algorithms
Population

Fitness

Selection

Crossover

Mutation

8-Queens Problem


Solving the 8-queens problem using Genetic algorithms

■ An 8-queens state must specify the positions of 8 queens, each in a column of 8 squares, so a state can be described by 8 digits, each in the range from 1 to 8.
■ Each state is rated by the evaluation function, or fitness function.
■ A fitness function should return higher values for better states; so, for the 8-queens problem, the number of non-attacking pairs of queens is used (8×7/2 = 28 for a solution).
Solving the 8-queens problem using Genetic algorithms

Representing individuals

Generating an initial population

Fitness calculation

Apply a Fitness function
■ 24/(24+23+20+11) = 31%
■ 23/(24+23+20+11) = 29%, etc.

Selection

Stochastic Universal Sampling


Genetic algorithms

[Figure: (a) four states for the 8-queens problem; (b) two pairs of two states randomly selected based on fitness, with random crossover points; (c) new states after crossover; (d) random mutation applied.]

■ Fitness function: number of non-attacking pairs of queens (min = 0, max = (8 × 7)/2 = 28)
Genetic algorithms: 8-queens problem

■ The initial population in (a)
■ is ranked by the fitness function in (b),
■ resulting in pairs for mating in (c).
■ They produce offspring in (d),
■ which are subject to mutation in (e).
Summary: Genetic Algorithm



Genetic algorithms

The crossover operation has the effect of “jumping” to a completely different new part of the search space (quite non-local).
Genetic algorithms: applications

■ In practice, GAs have had a widespread impact on optimization problems, such as:
■ circuit layout
■ scheduling
