
Unit II

Problem Solving Using Search Techniques

Prepared by
Dr.R.Sivakami
ASP/ CSE, SONA
Unit 2: Course Topics
Key concepts in search
• Set of states that we can be in
– Including an initial state…
– … and goal states (equivalently, a goal test)
• For every state, a set of actions that we can take
– Each action results in a new state
– Typically defined by successor function
• Given a state, produces all states that can be reached from it
• Cost function that determines the cost of each
action (or path = sequence of actions)
• Solution: path from initial state to a goal state
– Optimal solution: solution with minimal cost
Searching for Solutions
 A solution is an action sequence, and search algorithms consider various possible action sequences.
 The possible action sequences starting at the initial state form a search tree with the initial state at the root.
 The branches are actions and the nodes correspond to states in the state space of the problem.
 Expanding the current state means applying each legal action to the current state, generating a new set of states.
 The current state is the parent node; the newly generated states are child nodes.
 A leaf node is a node with no children in the tree.
 The set of all leaf nodes available for expansion at any given point is called the frontier.
 Search algorithms all share this basic structure; they vary primarily according to how they choose which state to expand next: the search strategy.
Partial Search Trees for Travelling in Romania
Initial State
Partial Search Trees for Travelling in Romania
After expanding Arad
Partial Search Trees for Travelling in Romania
After expanding Sibiu
Generic search algorithm
 Fringe = set of nodes generated but not expanded

 fringe := {initial state}

 loop:
 if fringe empty, declare failure
 choose and remove a node v from fringe
 check if v's state s is a goal state; if so, declare success
 if not, expand v, insert resulting nodes into fringe

 Key question in search: Which of the generated nodes do we expand next?
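A minimal Python sketch of this loop (the function and parameter names are illustrative, not from the slides); the choose function removes and returns one node from the fringe, and picking it is exactly the search strategy:

def generic_search(initial_state, is_goal, successors, choose):
    # Generic tree search: `choose` decides which fringe node to expand next.
    fringe = [initial_state]
    while fringe:
        state = choose(fringe)          # remove and return one node; this IS the strategy
        if is_goal(state):
            return state                # success
        fringe.extend(successors(state))
    return None                         # fringe empty: failure

# For example, choose = lambda fringe: fringe.pop(0) gives FIFO (breadth-first) order,
# while choose = lambda fringe: fringe.pop() gives LIFO (depth-first) order.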
Informal Description of Graph Search Algorithms
function GRAPH-SEARCH(problem) returns a solution, or failure
• initialize the frontier using the initial state of problem
• initialize the explored set to be empty
• loop do
• if the frontier is empty then return failure
• choose a leaf node from the frontier and remove it
• if the node contains a goal state then return the corresponding solution
• add the node to the explored set
• expand the node, adding the resulting nodes to the frontier only if they are not already in the frontier or the explored set
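A Python sketch of this GRAPH-SEARCH loop under the same illustrative assumptions (a FIFO choice is used here, but any strategy may replace it):

def graph_search(initial_state, is_goal, successors):
    # GRAPH-SEARCH: like tree search, but never revisits states in the explored set.
    frontier = [initial_state]
    explored = set()
    while frontier:
        state = frontier.pop(0)         # choose a leaf node and remove it from the frontier
        if is_goal(state):
            return state                # corresponding solution found
        explored.add(state)
        for child in successors(state):
            if child not in explored and child not in frontier:
                frontier.append(child)
    return None                         # frontier is empty: failure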
Infrastructure for Search Algorithms
• A state is a (representation of) a physical
configuration
• A node is a data structure constituting part of a search tree; it includes a parent, children, depth, and path cost g(n)
• States do not have parents, children, depth, or path cost!
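A sketch of such a node structure in Python (the field and helper names are illustrative assumptions):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # (representation of) a physical configuration
    parent: Optional["Node"] = None   # node that generated this one
    action: Any = None                # action applied to the parent
    path_cost: float = 0.0            # g(n): cost of the path from the root
    depth: int = 0                    # number of actions from the root

def solution_path(node):
    # Follow parent pointers back to the root to recover the state sequence.
    path = []
    while node is not None:
        path.append(node.state)
        node = node.parent
    return list(reversed(path))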
Measuring Problem-Solving Performance
A strategy is defined by picking the order of node expansion.
A strategy's / search algorithm's performance is evaluated along the following dimensions:
 Completeness: Is the algorithm guaranteed to find a solution when
there is one?
 Optimality: Does the strategy find the optimal solution?
 Time complexity: How long does it take to find a solution?
 Space complexity: How much memory is needed to perform the
search?
Time and space complexity are measured in terms of
– b : maximum branching factor of the search tree
– d : depth of the least-cost solution
– m : maximum depth of the state space (may be ∞)
Types of search algorithms
Search algorithms can be classified into uninformed search (blind search) and informed search (heuristic search) algorithms.
Types of Search Algorithms
Uninformed Search
Uninformed/Blind Search:
 does not use any domain knowledge, such as closeness to or the location of the goal.
 It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes.
 Uninformed search explores the search tree without any information about the search space beyond the initial state, the operators, and the goal test, so it is also called blind search. It examines nodes of the tree until it reaches a goal node.
Uninformed search
 Given a state, we only know whether it is a goal state or not
 Cannot say one non-goal state looks better than another non-goal state
 Can only traverse the state space blindly in hope of somehow hitting a goal state at some point
 Also called blind search
 Blind does not imply unsystematic!
Types of Uninformed Search
It can be divided into six main types:
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
1.Breadth-first Search
 Breadth-first search is the most common search strategy for traversing a tree or graph.
 Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.
 All nodes at a given depth in the search tree are expanded before any nodes at the next level are expanded.
 Breadth-first search uses a FIFO queue for the frontier.
1.Breadth-first Search
Advantages:
 BFS will provide a solution if any solution exists.
 If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e. the one requiring the fewest steps.
Disadvantages:
 It requires lots of memory since each level of the tree
must be saved into memory to expand the next level.
 BFS needs lots of time if the solution is far away from
the root node.
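A minimal BFS sketch in Python with a FIFO frontier; the adjacency-list graph format and the function name are assumptions for illustration, not part of the slides:

from collections import deque

def breadth_first_search(graph, start, goal):
    # BFS: the frontier is a FIFO queue of paths, so the shallowest path is expanded first.
    frontier = deque([[start]])
    explored = set()
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path                 # shallowest (fewest-step) solution path
        if state in explored:
            continue
        explored.add(state)
        for child in graph.get(state, []):
            frontier.append(path + [child])
    return None                         # no solution

# e.g. breadth_first_search({"S": ["A"], "A": ["B", "C"]}, "S", "C") returns ['S', 'A', 'C'].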
1.Breadth-first Search
• S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
1.Breadth-first Search
 Time Complexity: The time complexity of BFS can be obtained from the number of nodes traversed by BFS until the shallowest goal node, where d = depth of the shallowest solution and b = branching factor (number of successors at every state).

T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)

 Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).

 Completeness: BFS is complete, which means that if the shallowest goal node is at some finite depth, then BFS will find a solution.

 Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
1.Breadth-first search
Example
1.Breadth-first search
Example
1.Breadth-first search
Example
1.Breadth-first search
Example
1.Breadth-first search
Properties of Breadth-first search
 Time Complexity: The time complexity of BFS can be obtained from the number of nodes traversed by BFS until the shallowest goal node, where d = depth of the shallowest solution and b = branching factor (number of successors at every state).

T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)

 Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).

 Completeness: BFS is complete, which means that if the shallowest goal node is at some finite depth, then BFS will find a solution.

 Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
Breadth-first search
2.Uniform-Cost Search
 Uniform-cost search is a searching algorithm used for traversing a weighted tree or
graph.
 This algorithm comes into play when a different cost is available for each edge. The
primary goal of the uniform-cost search is to find a path to the goal node which has the
lowest cumulative cost.
 Uniform-cost search expands nodes according to their path costs from the root node.
 It can be used to solve any graph/tree where an optimal-cost solution is required.
 The uniform-cost search algorithm is implemented using a priority queue.
 It gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.
Advantages:
• Uniform cost search is optimal because at every state the path with the least cost is chosen.
Disadvantages:
• It does not care about the number of steps involved in the search, only about path cost, and as a result the algorithm may get stuck in an infinite loop.
2.Uniform-Cost Search
 When all step costs are equal, breadth-first search is optimal because it always expands the shallowest unexpanded node.
– In general, it is not optimal when step costs are different.
– By a simple extension, we can find an algorithm that is optimal with any step-cost function.
 Uniform-cost search expands the node n with the lowest path cost g(n).
 This is done by storing the frontier as a priority queue ordered by g.
 Uniform-cost search is equivalent to breadth-first search if step costs are all equal.
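A UCS sketch in Python in which the frontier is a priority queue ordered by g(n). Here graph[u] is assumed to map each successor v to the step cost of u -> v (an illustrative format, not from the slides):

import heapq

def uniform_cost_search(graph, start, goal):
    # UCS: always expand the frontier node with the lowest cumulative path cost g(n).
    frontier = [(0, start, [start])]          # (g, state, path)
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g                    # lowest-cost path and its cost
        if state in explored:
            continue
        explored.add(state)
        for child, step_cost in graph.get(state, {}).items():
            if child not in explored:
                heapq.heappush(frontier, (g + step_cost, child, path + [child]))
    return None, float("inf")                 # no solution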


2.Uniform Cost Search
Example
2.Uniform Cost Search
3. Depth-First Search
 Depth-first search always expands the deepest unexpanded node in the frontier of the search tree.
• As nodes are expanded, they are dropped from the frontier, so the search "backs up" to the next deepest node that still has unexplored successors.
 Depth-first search uses a LIFO queue (stack).
• A LIFO queue means that the most recently generated node is chosen for expansion.
• This must be the deepest unexpanded node because it is one deeper than its parent.
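A DFS sketch in Python using a stack (LIFO) frontier, under the same illustrative adjacency-list assumptions:

def depth_first_search(graph, start, goal):
    # DFS: the frontier is a stack, so the most recently generated (deepest) path is expanded first.
    frontier = [[start]]
    explored = set()
    while frontier:
        path = frontier.pop()
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for child in graph.get(state, []):
            if child not in explored:
                frontier.append(path + [child])
    return None                               # no solution (may not terminate on infinite state spaces)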
3. Depth-First Search
Example
3. Depth-First Search
Example
3. Depth-First Search
Example
3. Depth-First Search
Example
3. Depth-First Search
Example
3. Depth-First Search
Example
3. Depth-First Search
Example
3. Depth-First Search
Example
3. Depth-First Search
Example
3. Depth-First Search
Example
3. Depth-First Search
Example
3. Depth-First Search
Example
Properties of Depth-First Search
 Completeness: The DFS algorithm is complete within a finite state space, as it will expand every node within a finite search tree.
 Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:
T(b) = 1 + b + b^2 + b^3 + ... + b^m = O(b^m)
where m = maximum depth of any node; this can be much larger than d (the shallowest solution depth).
 Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
 Optimal: The DFS algorithm is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.
4.Depth-Limited Search
A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit ℓ. Depth-limited search overcomes the drawback of infinite paths in depth-first search. In this algorithm, the node at the depth limit is treated as if it has no further successor nodes.
Depth-limited search can terminate with two kinds of failure:
– Standard failure value: indicates that the problem does not have any solution.
– Cutoff failure value: indicates that there is no solution for the problem within the given depth limit.
Advantages:
– Depth-limited search is memory efficient.
Disadvantages:
– Depth-limited search also has the disadvantage of incompleteness.
– It may not be optimal if the problem has more than one solution.
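A recursive DLS sketch in Python that distinguishes the two failure values described above (the graph format and names are illustrative assumptions):

CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited_search(graph, start, goal, limit):
    # DLS: plain depth-first search, but nodes at depth `limit` are treated as having no successors.
    def recurse(state, path, depth):
        if state == goal:
            return path
        if depth == limit:
            return CUTOFF                     # cutoff failure: the limit was reached
        cutoff_occurred = False
        for child in graph.get(state, []):
            if child in path:                 # avoid cycles along the current path
                continue
            result = recurse(child, path + [child], depth + 1)
            if result == CUTOFF:
                cutoff_occurred = True
            elif result != FAILURE:
                return result
        return CUTOFF if cutoff_occurred else FAILURE
    return recurse(start, [start], 0)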
4.Depth-Limited Search
Example
4.Depth-Limited Search
 Completeness: The DLS algorithm is complete if a solution lies within the depth limit.
 Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).
 Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).
 Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if ℓ > d.
5.Iterative Deepening Depth First Search
 The iterative deepening algorithm is a combination of DFS and BFS
algorithms. This search algorithm finds out the best depth limit and does it by
gradually increasing the limit until a goal is found.
 This algorithm performs depth-first search up to a certain "depth limit", and it
keeps increasing the depth limit after each iteration until the goal node is
found.
 This Search algorithm combines the benefits of Breadth-first search's fast
search and depth-first search's memory efficiency.
 The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
Advantages:
• It combines the benefits of BFS and DFS search algorithm in terms of fast search and
memory efficiency.
Disadvantages:
• The main drawback of IDDFS is that it repeats all the work of the previous phase.
5.Iterative Deepening Depth First Search
IDDFS-Example

1st Iteration ----> A
2nd Iteration ----> A, B, C
3rd Iteration ----> A, B, D, E, C, F, G
4th Iteration ----> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
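IDDFS can be sketched on top of the depth-limited search shown earlier (this snippet assumes that depth_limited_search and CUTOFF are defined as in that sketch):

import itertools

def iterative_deepening_search(graph, start, goal):
    # IDDFS: run DLS with limit 0, 1, 2, ... until the result is no longer a cutoff.
    for limit in itertools.count():
        result = depth_limited_search(graph, start, goal, limit)
        if result != CUTOFF:
            return result                     # either a solution path or FAILURE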
5.Iterative Deepening Depth-First Search
5.Iterative Deepening Depth-First Search
5.Iterative Deepening Depth-First Search
5.Iterative Deepening Depth-First Search
5.Iterative Deepening Depth-First Search
5.Iterative Deepening Depth-First Search

 Completeness:
• This algorithm is complete if the branching factor is finite.
 Time Complexity:
• Let us suppose b is the branching factor and d is the depth of the shallowest solution; then the worst-case time complexity is O(b^d).
 Space Complexity:
• The space complexity of IDDFS is O(bd).
 Optimal:
• The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.
6.Bidirectional Search Algorithm
 The bidirectional search algorithm runs two simultaneous searches, one from the initial state (forward search) and the other from the goal node (backward search), to find the goal node.
 Bidirectional search replaces one single search graph with two small subgraphs: one starts the search from the initial vertex and the other starts from the goal vertex.
 The search stops when these two graphs intersect each other.
 Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
 Advantages:
• Bidirectional search is fast.
• Bidirectional search requires less memory
 Disadvantages:
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one should know the goal state in advance.
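A bidirectional BFS sketch in Python; it assumes an undirected graph given as adjacency lists so the backward search can run over the same structure (all names are illustrative):

from collections import deque

def bidirectional_search(graph, start, goal):
    # Two BFS frontiers grow from the start and the goal; the search stops when they intersect.
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        state = frontier.popleft()
        for child in graph.get(state, []):
            if child not in parents:
                parents[child] = state
                if child in other_parents:
                    return child              # the two searches meet here
                frontier.append(child)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand(frontier_b, parents_b, parents_f)
        if meet is not None:
            path = []                         # stitch the forward half ...
            node = meet
            while node is not None:
                path.append(node)
                node = parents_f[node]
            path.reverse()
            node = parents_b[meet]            # ... to the reversed backward half
            while node is not None:
                path.append(node)
                node = parents_b[node]
            return path
    return None                               # the two searches never intersect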
6.Bidirectional search
• Even better: search from both the start and the goal, in parallel!

image from cs-alb-pc3.massey.ac.nz/notes/59302/fig03.17.gif

• If the shallowest solution has depth d and the branching factor is b on both sides, only O(b^(d/2)) nodes need to be explored!
6.Bidirectional search
Example
6.Bidirectional search
Complexity: the time and space complexity are O(b^(d/2)).
6.Bidirectional search
 Completeness: Bidirectional search is complete if we use BFS in both searches.
 Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).
 Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).
 Optimal: Bidirectional search is optimal.
Comparing uninformed search strategies

 b is the branching factor; d is the depth of the shallowest solution;
 m is the maximum depth of the search tree; ℓ is the depth limit.
Superscripts:
 a: complete if b is finite;
 b: complete if step costs ≥ ε for some positive ε;
 c: optimal if step costs are all identical.
Uninformed Search Graph Example
• https://algorithmicthoughts.wordpress.com/2012/12/15/artificial-intelligence-uniform-cost-searchucs/
BREADTH-FIRST SEARCH (BFS)
Initialization: { [ S ] }
Iteration 1: { [ S->A ] }
Iteration 2: { [ S->A->B ], [ S->A->C ] }
Iteration 3: { [ S->A->B->D ], [ S->A->C->D ], [ S->A->C->G ] }
Iteration 4 gives the final output as S->A->C->G.
DEPTH-FIRST SEARCH (DFS)
Initialization: { [ S , 1 ] }
Iteration 1: { [ S->A , 2 ] , [ S->G , 2 ] }
Iteration 2: { [ S->A->B , 3 ] , [ S->A->C , 3 ] , [ S->G , 2 ] }
Iteration 3: { [ S->A->B->D , 4 ] , [ S->A->C , 3 ] , [ S->G , 2 ] }
Iteration 4: { [ S->A->B->D->G , 5 ] , [ S->A->C , 3 ] , [ S->G , 2 ] }
Iteration 5 gives the final output as S->A->B->D->G.
UNIFORM-COST SEARCH (UCS)
Initialization: { [ S , 0 ] }
Iteration 1: { [ S->A , 1 ] , [ S->G , 12 ] }
Iteration 2: { [ S->A->C , 2 ] , [ S->A->B , 4 ] , [ S->G , 12 ] }
Iteration 3: { [ S->A->C->D , 3 ] , [ S->A->B , 4 ] , [ S->A->C->G , 4 ] , [ S->G , 12 ] }
Iteration 4: { [ S->A->B , 4 ] , [ S->A->C->G , 4 ] , [ S->A->C->D->G , 6 ] , [ S->G , 12 ] }
Iteration 5: { [ S->A->C->G , 4 ] , [ S->A->C->D->G , 6 ] , [ S->A->B->D , 7 ] , [ S->G , 12 ] }
Iteration 6 gives the final output as S->A->C->G.
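The same graph can be written down in Python with edge costs inferred from the UCS trace above (S->A = 1, S->G = 12, A->B = 3, A->C = 1, B->D = 3, C->D = 1, C->G = 2, D->G = 3); running the uniform_cost_search sketch from earlier on it reproduces the result:

graph = {
    "S": {"A": 1, "G": 12},
    "A": {"B": 3, "C": 1},
    "B": {"D": 3},
    "C": {"D": 1, "G": 2},
    "D": {"G": 3},
}

path, cost = uniform_cost_search(graph, "S", "G")
print(path, cost)        # expected: ['S', 'A', 'C', 'G'] 4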
Repeated states
Failure to detect repeated states can turn a
linear problem into an exponential one!
Avoiding Repeated States
 Do not return to the parent state (e.g., in the 8-puzzle problem, do not allow an Up move right after a Down move)

 Do not create solution paths with cycles

 Do not generate any repeated states (this requires storing and checking a potentially large number of states)

 This is done by keeping a list of "expanded states", i.e., states whose successors have already been put on the enqueued list. This entails removing states from the "enqueued list" and placing them on an "expanded list".
(In the standard algorithm literature, the list of expanded states is called the "closed list"; thus, we would move states from the open list to the closed list.)
Informed search algorithms
 An informed search algorithm uses additional knowledge, such as how far we are from the goal, path cost, how to reach the goal node, etc. This knowledge helps agents explore less of the search space and find the goal node more efficiently.

 Informed search algorithms are more useful for large search spaces. Informed search uses the idea of a heuristic, so it is also called heuristic search.

 Types
 Greedy Search
 A* Search
Heuristics function
 A heuristic is a function used in informed search; it finds the most promising path.

 It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal.

 The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time.

 A heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between the pair of states.

 The value of the heuristic function is always non-negative.

 Admissibility of the heuristic function is given as:
h(n) <= h*(n)
Here h(n) is the heuristic cost, and h*(n) is the actual (optimal) cost.
Hence the heuristic cost should be less than or equal to the actual cost.
Pure Heuristic Search
 Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, an OPEN and a CLOSED list. In the CLOSED list, it places those nodes which have already been expanded, and in the OPEN list, it places nodes which have not yet been expanded.

 On each iteration, the node n with the lowest heuristic value is expanded, all its successors are generated, and n is placed on the CLOSED list. The algorithm continues until a goal state is found.

 In informed search we will discuss two main algorithms, which are given below:

1. Best-First Search Algorithm (Greedy search)

2. A* Search Algorithm
1.Best-first Search (Greedy Search)
 The greedy best-first search algorithm always selects the path which appears best at that moment.
 It is a combination of depth-first search and breadth-first search algorithms. It uses the heuristic function to guide the search.
 Best-first search allows us to take the advantages of both algorithms. With the help of best-first search, at each step, we can choose the most promising node.
 In the greedy best-first search algorithm, we expand the node which is closest to the goal node, where closeness is estimated by the heuristic function, i.e.
f(n) = h(n)
where h(n) = estimated cost from node n to the goal.
 The greedy best-first algorithm is implemented using a priority queue.
1.Best-first Search (Greedy Search)
Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, stop and return failure.
Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.
Step 4: Expand node n, and generate the successors of node n.
Step 5: Check each successor of node n, and find whether any node is a goal node or not. If any successor node is a goal node, then return success and terminate the search; else proceed to Step 6.
Step 6: For each successor node, the algorithm checks its evaluation function f(n), and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, then add it to the OPEN list.
Step 7: Return to Step 2.
Advantages:
• Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
• This algorithm is more efficient than the BFS and DFS algorithms.
Disadvantages:
• It can behave as an unguided depth-first search in the worst-case scenario.
• It can get stuck in a loop, like DFS.
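A greedy best-first sketch in Python in which the OPEN list is a priority queue ordered by h(n). The heuristic is assumed to be a dict of h-values and the graph an adjacency list (illustrative assumptions, not from the slides):

import heapq

def greedy_best_first_search(graph, h, start, goal):
    # Greedy best-first: always expand the open node with the lowest heuristic value h(n).
    open_list = [(h[start], start, [start])]   # (h, state, path)
    closed = set()
    while open_list:
        _, state, path = heapq.heappop(open_list)
        if state == goal:
            return path
        closed.add(state)
        for child in graph.get(state, []):
            if child not in closed and all(child != s for _, s, _ in open_list):
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None                                # OPEN list empty: failure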
1.Best-first Search (Greedy Search) -Example
Consider the search problem below; we will traverse it using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), which is given in the table below.
1.Best-first Search (Greedy Search) -Example
• In this search example, we are using two lists, the OPEN and CLOSED lists. Following are the iterations for traversing the above example.

Expand the nodes of S and put them in the CLOSED list.
Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
           : Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]
           : Open [I, E, A], Closed [S, B, F, G]
Hence the final solution path will be: S ----> B ----> F ----> G
1.Best-first Search -Greedy Search
 Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

 Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.

 Complete: Greedy best-first search is also incomplete, even if the given state space is finite.

 Optimal: The greedy best-first search algorithm is not optimal.
1.Best-first Search -Greedy Search
Greedy Search (Romania Example)
Greedy Search (Romania Example)
Greedy Search (Romania Example)
Greedy Search (Romania Example)
Greedy Search
Property Analysis
2. A* Search
 A* search is the most commonly known form of best-first search.
 It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n).
 It combines features of UCS and greedy best-first search, by which it solves the problem efficiently.
 The A* search algorithm finds the shortest path through the search space using the heuristic function.
 This search algorithm expands a smaller search tree and provides an optimal result faster.
 The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).
2. A* Search
• In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows, and this sum is called the fitness number:
f(n) = g(n) + h(n)
• At each point in the search space, only the node with the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.
2. A* Search
Step 1: Place the starting node in the OPEN list.

Step 2: Check if the OPEN list is empty or not; if the list is empty, then return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise:

Step 4: Expand node n and generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, then attach it to the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.
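An A* sketch in Python in which the OPEN list is a priority queue ordered by f(n) = g(n) + h(n). As before, graph[u] is assumed to map successors to step costs and h is a dict of heuristic values (illustrative assumptions):

import heapq

def a_star_search(graph, h, start, goal):
    # A*: expand the open node with the lowest f(n) = g(n) + h(n).
    open_list = [(h[start], 0, start, [start])]   # (f, g, state, path)
    closed = set()
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if state == goal:
            return path, g                        # optimal when h is consistent (graph search)
        if state in closed:
            continue
        closed.add(state)
        for child, step_cost in graph.get(state, {}).items():
            if child not in closed:
                g2 = g + step_cost
                heapq.heappush(open_list, (g2 + h[child], g2, child, path + [child]))
    return None, float("inf")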


2. A* Search
2. A* Search
 Advantages:
• The A* search algorithm performs better than other search algorithms.
• The A* search algorithm is optimal and complete.
• This algorithm can solve very complex problems.
 Disadvantages:
• It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
• The A* search algorithm has some complexity issues.
• The main drawback of A* is its memory requirement, as it keeps all generated nodes in memory, so it is not practical for various large-scale problems.
2. A* Search Example
• In this example, we will traverse the given graph using the A* algorithm.
• The heuristic value of each state is given in the table below, so we will calculate f(n) for each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach a node from the start state.
• Here we will use OPEN and CLOSED list.
2. A* Search Example
Initialization: {(S, 5)}
Iteration 1: {(S-->A, 4), (S-->G, 10)}
Iteration 2: {(S-->A-->C, 4), (S-->A-->B, 7), (S-->G, 10)}
Iteration 3: {(S-->A-->C-->G, 6), (S-->A-->C-->D, 11), (S-->A-->B, 7), (S-->G, 10)}
Iteration 4 will give the final result, as S--->A--->C--->G; it provides the optimal path with cost 6.
 Points to remember:
• The A* algorithm returns the path which occurred first, and it does not search for all remaining paths.
• The efficiency of the A* algorithm depends on the quality of the heuristic.
• The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.
2. A* Search – Property Analysis
 Complete: The A* algorithm is complete as long as:
• the branching factor is finite, and
• every action has a fixed, positive cost.
 Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:
 Admissible: the first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
 Consistency: the second condition, consistency, is required only for A* graph search.
• If the heuristic function is admissible, then A* tree search will always find the least-cost path.
 Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function; the number of nodes expanded is exponential in the depth of the solution d. So the time complexity is O(b^d), where b is the branching factor.
 Space Complexity: The space complexity of the A* search algorithm is O(b^d).
2. A* Search – Property Analysis
2. A* Search (Romania Example)
2. A* Search (Romania Example)
2. A* Search (Romania Example)
2. A* Search (Romania Example)
2. A* Search (Romania Example)
2. A* Search (Romania Example)
2. A* Search (Romania Example)
Optimality of A*
Optimality of A*
3. Memory Bounded Heuristic Search
3.1 Iterative-Deepening A* (IDA*)

• The simplest way to reduce memory requirements for A* is to adapt the idea of iterative deepening to the heuristic search context, resulting in the iterative-deepening A* (IDA*) algorithm.
• The main difference between IDA* and standard iterative deepening is that the cutoff used is the f-cost (g + h) rather than the depth.
• At each iteration, the cutoff value is the smallest f-cost of any node that exceeded the cutoff on the previous iteration.
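An IDA* sketch in Python on the same illustrative weighted-graph/heuristic-dict format, with the f-cost cutoff updated exactly as described above:

import math

def ida_star(graph, h, start, goal):
    # IDA*: depth-first contours bounded by an f-cost cutoff instead of a depth limit.
    def search(path, g, bound):
        state = path[-1]
        f = g + h[state]
        if f > bound:
            return f                      # smallest f that exceeded the cutoff (so far)
        if state == goal:
            return path
        minimum = math.inf
        for child, cost in graph.get(state, {}).items():
            if child in path:             # avoid cycles on the current path
                continue
            result = search(path + [child], g + cost, bound)
            if isinstance(result, list):
                return result
            minimum = min(minimum, result)
        return minimum

    bound = h[start]
    while True:
        result = search([start], 0, bound)
        if isinstance(result, list):
            return result
        if result == math.inf:
            return None                   # no solution
        bound = result                    # next cutoff: smallest f that exceeded the old one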
Memory Bounded Heuristic Search
3.2 Recursive Best-First Search (RBFS)
 RBFS is a simple recursive algorithm that mimics the operation of standard best-first search, but using only linear space.
 Its structure is similar to that of a recursive depth-first search, but rather than continuing indefinitely down the current path, it keeps track of the f-value of the best alternative path available from any ancestor of the current node.
 If the f-value of the current node exceeds this limit, the recursion unwinds back to the alternative path.
 As the recursion unwinds, RBFS replaces the f-value of each node along the path with the backed-up value: the best f-value of its children.
 In this way, RBFS remembers the f-value of the best leaf in the forgotten subtree and can therefore decide whether it is worth re-expanding the subtree at some later time.
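A recursive RBFS sketch in Python, again on the illustrative weighted-graph/heuristic-dict format; each call keeps track of the best alternative f-value and backs up f-values as the recursion unwinds:

import math

def rbfs_search(graph, h, start, goal):
    # RBFS: best-first behaviour in linear space; returns (path, f) or (None, new f-limit).
    def rbfs(state, path, g, f_node, f_limit):
        if state == goal:
            return path, f_node
        succs = []
        for child, step_cost in graph.get(state, {}).items():
            if child in path:
                continue
            g2 = g + step_cost
            succs.append([max(g2 + h[child], f_node), g2, child])   # child f is at least parent f
        if not succs:
            return None, math.inf
        while True:
            succs.sort(key=lambda s: s[0])
            best = succs[0]
            if best[0] > f_limit:
                return None, best[0]      # unwind; report backed-up best f of this subtree
            alternative = succs[1][0] if len(succs) > 1 else math.inf
            result, best[0] = rbfs(best[2], path + [best[2]], best[1],
                                   best[0], min(f_limit, alternative))
            if result is not None:
                return result, best[0]

    return rbfs(start, [start], 0, h[start], math.inf)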
RBFS -Example
RBFS -Example
Efficient Utilization of Memory
3.3 MA*,3.4 SMA*
Disadvantages of IDA* and RBFS:
• IDA* and RBFS suffer from using too little memory.
Between iterations, IDA* retains only a single
number: the current f -cost limit. RBFS retains more
information in memory, but it uses only linear
space: even if more memory were available, RBFS
has no way to make use of it.
• Two algorithms that do this are
– MA* (memory-bounded A*)
– SMA* (simplified MA*)
SMA*
 SMA* proceeds just like A*, expanding the best leaf until memory is full.
 When memory is full, it cannot add a new node to the search tree without dropping an old one.
 SMA* always drops the worst leaf node, the one with the highest f-value.
 Like RBFS, SMA* then backs up the value of the forgotten node to its parent.
 In this way, the ancestor of a forgotten subtree knows the quality of the best path in that subtree.
 With this information, SMA* regenerates the subtree only when all other paths have been shown to look worse than the path it has forgotten.
 (i.e., if all the descendants of a node n are forgotten, then we will not know which way to go from n, but we will still have an idea of how worthwhile it is to go anywhere from n.)
Heuristic Functions
Consistency- Graph Search
Admissible Heuristics
 Manhattan distance (city block distance): the sum of the distances of the tiles from their goal positions (sum of horizontal and vertical distances).
 h2 = 4 (for tile 1) + 0 (for tile 2) + 3 (for tile 3) + ...
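A sketch of this heuristic for the 8-puzzle in Python, assuming a state is a 3x3 tuple of tuples with 0 for the blank (an illustrative encoding, not from the slides):

def manhattan_distance(state, goal):
    # h2: sum of horizontal + vertical distances of each tile from its goal position (blank excluded).
    goal_pos = {tile: (r, c) for r, row in enumerate(goal) for c, tile in enumerate(row)}
    total = 0
    for r, row in enumerate(state):
        for c, tile in enumerate(row):
            if tile != 0:
                gr, gc = goal_pos[tile]
                total += abs(r - gr) + abs(c - gc)
    return total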
Admissible Heuristics
Dominance
Effect of Heuristic Accuracy on
Performance
Generating Admissible Heuristics from Relaxed Problems
 Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem. A problem with fewer restrictions on the actions is called a relaxed problem.
 The state space of the relaxed problem is a supergraph of the original state space (the relaxation adds edges to the original state space), so the optimal solution of the original problem is also a solution to the relaxed problem.
 But the relaxed problem may have better solutions if the added edges provide shortcuts.
 The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
 Because the derived heuristic is an exact cost for the relaxed problem, it must obey the triangle inequality and is therefore consistent.
 If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the length of the shortest solution.
 If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the length of the shortest solution.
Generating Admissible Heuristics from Subproblems
Pattern Database: An admissible heuristic can be obtained from the solution cost of a subproblem of a given problem (computed by dynamic programming).
Summary of Informed Search
 Best-first search is just GRAPH-SEARCH in which the minimum-cost unexpanded nodes (according to some measure) are selected for expansion. Best-first algorithms typically use a heuristic function h(n) that estimates the cost of a solution from n.
 Greedy best-first search expands nodes with minimal h(n). It is not optimal, but it is often efficient.
 A* search expands nodes with minimal f(n) = g(n) + h(n). A* is complete and optimal, provided that we guarantee that h(n) is admissible (for TREE-SEARCH) or consistent (for GRAPH-SEARCH). The space complexity of A* is high; it keeps many nodes in memory.
 The performance of heuristic search algorithms depends on the quality of the heuristic function.
 Good heuristics can sometimes be constructed by relaxing the problem definition, by precomputing solution costs for subproblems in a pattern database, or by learning from experience with the problem class.
 RBFS and SMA* are robust, optimal search algorithms that use limited amounts of memory; given enough time, they can solve problems that A* cannot solve because it runs out of memory.
