
UNIT I

Chapter 4: Heuristic Search Strategies - Local Search and Optimization


Problems
Searching Techniques

Introduction to Informed Search

• In Chapter 3 we saw the concept of uninformed, or blind, search strategies. Now we will see a more efficient search strategy: informed search. An informed search strategy uses problem-specific knowledge beyond the definition of the problem itself.

• The general method followed in informed search is best first search. Best first search is similar to a graph search or tree search algorithm wherein node expansion is done based on certain criteria.

Generate-and-Test

• Generate-and-Test is a search algorithm which uses the depth first search technique. It finds a solution in a systematic way: it generates a complete candidate solution and then tests it. A heuristic is needed so that the search is improved.

Following is the algorithm for Generate-and-Test :

1) Generate a possible solution which can either be a point in the problem space or a path from the
initial state.

2) Test to see if this possible solution is a real (actual) solution by comparing the state reached with
the set of goal states.

3) If it is a real solution then return the solution, otherwise repeat from step 1.

Following diagram illustrates the algorithm steps:

• Generate-and-Test is acceptable for simple problems, whereas it is inefficient for problems with large search spaces.
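As an illustration, here is a minimal Python sketch of Generate-and-Test. The `generate_candidates` and `is_goal` callables (and the toy usage) are assumptions standing in for a concrete problem's generator and goal test:

```python
def generate_and_test(generate_candidates, is_goal):
    """Enumerate candidate solutions; return the first that passes the test."""
    for candidate in generate_candidates():   # step 1: generate a possible solution
        if is_goal(candidate):                # step 2: test against the goal
            return candidate                  # step 3: real solution found
    return None                               # candidates exhausted: failure

# Toy usage: find the first two-digit number whose digits sum to 17.
print(generate_and_test(lambda: range(10, 100),
                        lambda n: n // 10 + n % 10 == 17))  # 89
```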

Best First Search Technique (BFS)

1) Search will start at root node.

2) The node to be expanded next is selected on the basis of an evaluation function, f(n).
3) The node having the lowest value of f(n) is selected first. A low f(n) indicates that the goal is nearest from this node (that is, f(n) estimates the distance from the current node to the goal node).

Implementation: BFS can be implemented using a priority queue in which the fringe is stored. The nodes in the fringe are kept in the priority queue in increasing (ascending) order of f(n), so the highest priority is given to the node with the lowest f(n) value.

Since the name says "best first", we might expect to always obtain an optimal solution. But in general BFS only chooses the node that appears to be best according to the evaluation function; hence optimality depends on the 'best-ness' of the evaluation function.

Algorithm for best first search

1. Use two ordered lists OPEN and CLOSED.

2. Start with the initial node 'n' and put it on the ordered list OPEN.

3. Create a list CLOSED. This is initially an empty list.

4. If OPEN is empty then exit with failure.

5. Select first node on OPEN. Remove it from OPEN and put it on CLOSED. Call this node n.

6. If 'n' is the goal node, exit. The solution is obtained by tracing a path backward along the arcs in the tree from 'n' to the start node.

7. Expand node 'n'. This will generate successors. Let the set of successors generated, be S. Create
arcs from 'n' to each member of S.

8. Reorder the list OPEN, according to the heuristic and go back to step 4.
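The algorithm above can be sketched in Python using a priority queue (heapq) for OPEN and a set for CLOSED. The `successors` and `h` functions, and the toy graph in the usage example, are illustrative assumptions, not part of the text:

```python
import heapq

def best_first_search(start, goal, successors, h):
    """Best-first search: OPEN is a priority queue ordered by f(n) = h(n),
    CLOSED is the set of already-expanded states."""
    open_list = [(h(start), start, [start])]      # entries are (f, state, path)
    closed = set()
    while open_list:                              # step 4: fail if OPEN empties
        f, n, path = heapq.heappop(open_list)     # step 5: lowest-f node first
        if n == goal:                             # step 6: goal reached
            return path
        if n in closed:
            continue
        closed.add(n)
        for s in successors(n):                   # step 7: expand n
            if s not in closed:
                heapq.heappush(open_list, (h(s), s, path + [s]))  # step 8
    return None

# Toy usage on a four-node graph.
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
hvals = {'S': 5, 'A': 1, 'B': 2, 'G': 0}
print(best_first_search('S', 'G', graph.__getitem__, hvals.__getitem__))
```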

Consider the following 8-puzzle problem-

Here the heuristic used could be "number of tiles not in correct position" (i.e. number of tiles
misplaced). In this solution the convention used is, that smaller value of the heuristic function 'f'
leads earlier to the goal state.

Step 1

This is the initial state where four tiles (1, 2, 6, 8) are misplaced so value of heuristic function at this
node is 4.

Step 2

This will be the next step of the tree.


Heuristic function gives the values as 3, 5 and 5 respectively.

Step 3

Next, the node with the lowest f(n) value 3 is the one to be further expanded which generates 3
nodes.

Step 4

Here there is a tie for the f(n) value. We pick one of the tied nodes and continue to expand.

Step 5

Step 6

A depth (path cost) factor g can also be added to h in this algorithm; this leads to A* search, discussed below.

Heuristic Function
There are many different algorithms which employ the concept of best first search. The main difference between these algorithms is that they use different evaluation functions. The central component of all of them is the heuristic function h(n), defined as: h(n) = estimated cost of the cheapest path from node 'n' to a goal node.

Note

1) The heuristic function is the key component of best first search. It is denoted by h(n): the estimated cost of the cheapest path from node 'n' to a goal node.

2) One can give additional knowledge about the problem to the heuristic function. For example, in our Pune-Chennai route problem we can give the distances between cities to the heuristic function.

3) A heuristic function guides the search in an efficient way.

4) A heuristic function h(n) takes a node as input but relies only on the state at that node.

5) h(n) = 0 if n is a goal state.

Greedy Best First Search (GBFS)

• Greedy best first search expands the node that is closest to the goal, expecting to reach a solution quickly.

• It evaluates nodes by using the heuristic function alone: f(n) = h(n).

• It is termed greedy (asking for more) because at each step it tries to get as close to the goal as it can.

• GBFS resembles DFS in the way that it prefers to follow a single path all the way to the goal. It backtracks when it comes to a dead end, that is, a node from which the goal state cannot be reached.

• Choosing the minimum h(n) can lead to a bad start, since it does not always lead toward a solution. When such exploration does not lead to a solution, unwanted nodes get expanded.

• In GBFS, if repeated states are not detected then a solution may never be found.

Performance measurement

1) Completeness: It is incomplete, as it can start down an infinite path and never return to try other possibilities which could give a solution.

2) Optimal: It is not optimal, as it may initially select a node with a low h(n) value even though some node with a greater value in the current fringe leads to a better solution. Greedy strategies in general suffer from this: chasing the current best, they lose the future best, and in turn the best solution!

3) Time and space complexity: The worst case time and space complexity is O(b^m), where 'm' is the maximum depth of the search space.

The complexity can be reduced by devising a good heuristic function.

A* Search

The A* algorithm

1) A* is the most popular form of best first search.


2) A* evaluates nodes based on two functions, namely

1) g(n) - The cost to reach the node 'n'.

2) h(n) - The cost to reach the goal node from node 'n'.

These two costs are combined into one to evaluate a node. The new function f(n) is devised as,

f(n)=g(n) + h(n) that is,

f(n) = Estimated cost of the cheapest solution through 'n'.

Working of A*

1) The algorithm maintains two sets.

a) OPEN list: The OPEN list keeps track of those nodes that need to be examined.

b) CLOSED list: The CLOSED list keeps track of nodes that have already been examined.

2) Initially, the OPEN list contains just the initial node, and the CLOSED list is empty. Each node n maintains g(n), h(n) and f(n) as described above.

3) Each node also maintains a pointer to its parent, so that later the best solution, if found, can be retrieved. A* has a main loop that repeatedly gets the node, call it 'n', with the lowest f(n) value from the OPEN list. If 'n' is the goal node, then we are done, and the solution is given by backtracking from 'n'. Otherwise, 'n' is removed from the OPEN list and added to the CLOSED list. Next, all the possible successor nodes of 'n' are generated.

4) For each successor node n', if it is already in the CLOSED list and the copy there has an equal or lower f estimate, then we can safely discard the newly generated n' and move on. Similarly, if n' is already in the OPEN list and the copy there has an equal or lower f estimate, we can discard the newly generated n' and move on.

5) If no better version of n' exists on either the CLOSED or OPEN list, we remove the inferior copies from the two lists and set n as the parent of n'. We also have to calculate the cost estimates for n' as follows:

Set g(n') = g(n) plus the cost of getting from n to n';

Set h(n') = the heuristic estimate of getting from n' to the goal node;

Set f(n') = g(n') + h(n').

6) Lastly, add n' to the OPEN list and return to the beginning of the main loop.
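A compact Python sketch of the loop just described. It keeps the best known g value per state instead of explicitly removing inferior copies (a common implementation shortcut); the `successors` function, yielding (state, step cost) pairs, is an assumed problem interface:

```python
import heapq

def a_star(start, goal, successors, h):
    """A* search with f(n) = g(n) + h(n)."""
    open_list = [(h(start), 0, start, [start])]    # (f, g, state, path)
    best_g = {start: 0}                            # cheapest g found per state
    while open_list:
        f, g, n, path = heapq.heappop(open_list)
        if n == goal:
            return path, g
        if g > best_g.get(n, float('inf')):        # an inferior copy: discard
            continue
        for s, cost in successors(n):              # generate successors of n
            g2 = g + cost                          # g(n') = g(n) + c(n, n')
            if g2 < best_g.get(s, float('inf')):   # no better copy known
                best_g[s] = g2
                heapq.heappush(open_list,
                               (g2 + h(s), g2, s, path + [s]))  # f = g + h
    return None, float('inf')
```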

Performance measurement for A*

1) Completeness: A* is complete and guarantees a solution.

2) Optimality: A* is optimal if h(n) is an admissible heuristic. An admissible heuristic means h(n) never overestimates the cost to reach the goal. The tree-search version of A* gives an optimal solution if h(n) is admissible.

If we put an extra requirement on h(n), namely consistency (also called monotonicity), then we can also expect the optimal solution from graph search. A heuristic function h(n) is said to be consistent if, for every node 'n' and every successor 'ns' of 'n' generated by action 'a', the estimated cost of reaching the goal from 'n' is not greater than the step cost of getting to 'ns' plus the estimated cost of reaching the goal from 'ns':

h(n) ≤ cost (n, a, ns) + h (ns)

A* using graph-search is optimal if h(n) is consistent.

Note: The sequence of nodes expanded by A* using graph-search is in increasing order of f(n). Therefore, the first goal node selected for expansion must be an optimal solution, because all later nodes will either have the same value of f(n) (same expense) or a greater value of f(n) (more expensive) than the currently expanded node.

A* never expands nodes with f(n) > C* (where C* is the cost of optimal solution).

The A* algorithm also does pruning of certain nodes.

Pruning means ignoring a node completely, without any examination of that node, and thereby ignoring all the possibilities arising from that node.

3) Time and space complexity: If the number of nodes on the way to the goal grows exponentially, then the time taken by A* grows accordingly.

The main problem area of A* is memory, because A* keeps all generated nodes in memory.

A* usually runs out of space long before it runs out of time.

A* optimality proof

• Assume A* selects a suboptimal goal G2 for expansion, while the optimal goal is G.

• Since h is admissible, and both G and G2 are goal states,

h(G2) = h(G) = 0, so f(G2) = g(G2) and f(G) = g(G)

• Since G2 is not optimal, g(G2) > g(G), hence

f(G2) > f(G)

• At this point in the search, some node 'n' on the optimal path to G is in the fringe, not yet expanded. Since h never overestimates, we know

f(n) = g(n) + h(n) ≤ f(G)

• Therefore f(n) ≤ f(G) < f(G2), so A* must expand n before selecting G2, which is a contradiction.

Memory Bounded Heuristic Search

If we use the idea of iterative deepening search then we can reduce the memory requirements of A*. From this concept we devise a new algorithm.

Iterative deepening A*

• Just as IDDFS uses depth as the cutoff value, IDA* uses the f-cost (g + h) as the cutoff rather than the depth.

• It reduces the memory requirements incurred in A*, thereby putting a bound on memory; hence it is called a memory bounded algorithm.

• IDA* has difficulty with real-valued costs: each new iteration may grow the cutoff by only a tiny amount.

• We will discuss two memory bounded algorithms:

1) Recursive best first search (RBFS).

2) MA* (Memory bounded A*).

1. Recursive Best First Search (RBFS)

• It works like best first search but using only linear space.

• Its structure is similar to recursive DFS, but instead of continuing indefinitely down the current path it keeps track of the f-value of the best alternative path available from any ancestor of the current node.

• The recursion unwinds back to the alternative path if the f-value of the current node exceeds this limit.

• The important property of RBFS is that it remembers the f-value of the best leaf in the forgotten (previously unexpanded) subtree. It can therefore decide whether re-expanding that subtree is worthwhile.

• It is more reliable and cost effective than IDA*, but its critical problem is excessive node regeneration.

Performance measure

1) Completeness: It is complete.

2) Optimality: Recursive best-first search is optimal if h(n) is admissible.

3) Time and space complexity: Time complexity of RBFS depends on two factors –

a) Accuracy of heuristic function.

b) How often (frequently) the best path changes as nodes are expanded.

RBFS suffers from the problem of expanding repeated states, as the algorithm fails to detect them.

Its space complexity is O(bd).

It also suffers from using too little memory: RBFS uses only O(bd) memory between iterations, and even if more memory is available, RBFS cannot utilize it.
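A sketch of RBFS in Python, following the behaviour described above: recurse on the best successor with the best alternative's f-value as the limit, and back the best leaf's f-value up on unwinding. The problem interface (`successors` yielding (child, step cost) pairs, and `h`) is an assumption, as before:

```python
import math

def rbfs(start, goal, successors, h):
    """Recursive best-first search using only linear space."""
    def search(node, g, f_node, f_limit, path):
        if node == goal:
            return path, f_node
        succ = []
        for s, cost in successors(node):
            if s in path:                          # skip trivial cycles
                continue
            # a child's f is at least the f of its parent
            succ.append([max(g + cost + h(s), f_node), g + cost, s])
        if not succ:
            return None, math.inf
        while True:
            succ.sort()                            # lowest f first
            best = succ[0]
            if best[0] > f_limit:                  # limit crossed: unwind,
                return None, best[0]               # remembering this f-value
            alternative = succ[1][0] if len(succ) > 1 else math.inf
            result, best[0] = search(best[2], best[1], best[0],
                                     min(f_limit, alternative),
                                     path + [best[2]])
            if result is not None:
                return result, best[0]

    return search(start, 0, h(start), math.inf, [start])[0]
```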

2. MA*

RBFS under-utilizes memory. To overcome this problem MA* was devised.

Its more simplified version, called simplified MA* (SMA*), proceeds as follows -

1) It expands the best leaf until memory is full.

2) At this point it cannot add a new node to the search tree without dropping an old one.

3) It always drops the node with the highest f-value.

4) If the goal is not reached then it backtracks and goes to the alternative path. In this way the ancestor of a forgotten subtree knows the quality of the best path in that subtree.

5) While selecting the node for expansion it may happen that two nodes have the same f-value; similarly, when a node must be discarded there can be multiple choices, as many leaves can have the same f-value. SMA* resolves such ties by expanding the newest best node and deleting the oldest worst node.

Performance measurement

1) Completeness: SMA* is complete and guarantees a solution, provided the shallowest goal is reachable within the available memory.

2) Optimality: If an optimal solution is reachable within the memory limit, it gives the optimal solution. Otherwise it returns the best reachable solution.

3) Time and space complexity: There are memory limitations on SMA*. Therefore it has to switch back and forth continually between a set of candidate solution paths, only a small subset of which can fit in memory. This problem is called thrashing, and because of it the algorithm takes more time.

The extra time required for repeated regeneration of the same nodes means that a problem that would be practically solvable by A* (with unlimited memory) can become intractable for SMA*.

That is, memory restrictions can make a problem intractable from the point of view of computation time.

AO* Search and Problem Reduction Using AO*

When a problem can be divided into a set of subproblems, where each subproblem can be solved separately and a combination of the subsolutions forms a solution of the whole, AND-OR graphs or AND-OR trees are used for representing the solution.

AND-OR graphs

1) The decomposition of the problem, or problem reduction, generates AND arcs.

2) One AND arc may point to any number of successor nodes, all of which must be solved for the arc to be considered solved.

3) A node may also give rise to several arcs (OR arcs), indicating several possible alternative solutions. Hence the graph is known as AND-OR instead of just AND.

4) AO* is a best-first algorithm for solving problems represented as acyclic AND-OR graphs.

5) An algorithm to find a solution in an AND-OR graph must handle AND arcs appropriately.

The comparative study of A* and AO*

1) Unlike the A* algorithm, which uses two lists OPEN and CLOSED, the AO* algorithm uses a single structure G.

2) G represents the part of the search graph generated so far.


3) Each node in G points down to its immediate successors and up to its immediate predecessors, and also carries the value of h, the cost of a path from itself to a set of solution nodes.

4) The cost of getting from the start node to the current node, g, is not stored as in the A* algorithm. This is because it is not possible to compute a single such value, since there may be many paths to the same state.

5) In AO*, h alone serves as the estimate of goodness of a node.

6) A* algorithm cannot search AND-OR graphs efficiently.

7) AO* will always find a minimum cost solution.

• The algorithm for performing a heuristic search of an AND-OR graph is given below.

AO* algorithm

1) Initialize the graph to the start node.

2) Traverse the graph following the current best path, accumulating the nodes that have not yet been expanded or solved.

3) Pick one of these nodes and expand it. If it has no successors, assign FUTILITY as its value; otherwise calculate f' for each of its successors.

4) If f' is 0 for a node then mark the node as SOLVED.

5) Change the value of f' for the newly created nodes to reflect their successors, by back propagation.

6) Wherever possible use the most promising routes, and if a node is marked as SOLVED then mark the parent node as SOLVED.

7) If the starting node is SOLVED or its value is greater than FUTILITY, stop; else repeat from step 2.

How to Search Better?

We have seen many searching strategies so far, but as we can see, none of them is really perfect. How can we make our AI agent search better?

We can make use of a concept called the metalevel state space. Each state in a metalevel state space captures the internal (computational) state of a program that is searching in an object-level state space, such as in the Indian Traveller Problem.

Consider A* algorithm

1) The A* algorithm maintains an internal state which consists of the current search tree.

2) Each action in the metalevel state space is a computation step that alters the internal state.

For example - Each computation step in A* expands a leaf node and adds its successors to the tree.

3) As the search proceeds, a sequence of larger and larger search trees is generated.

In harder problems there can be mis-steps taken by the algorithm; that is, the algorithm may explore unpromising subtrees. A metalevel learning algorithm overcomes these problems.

The main goal of metalevel learning is to minimize the total cost of problem solving.
Heuristic Functions and their Nature

1. Accuracy of Heuristic Function

• The more accurate the heuristic function, the better the performance.

• The quality of heuristic function can be measured by the effective branching factor b*.

• Suppose A* generates N nodes and finds a solution at depth 'd'. Then b* is the branching factor that a uniform tree of depth 'd' would have in order to contain N + 1 nodes.

Thus,

N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d

If A* finds a solution at depth 5 using 52 nodes, then the effective branching factor is 1.92.

A well designed heuristic function will have a value of b* close to 1, allowing fairly large problems to
be solved.
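Since the polynomial above has no closed-form solution for b*, it can be solved numerically. A small sketch using bisection; the 52-node, depth-5 figures are the ones quoted above:

```python
def effective_branching_factor(N, d, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""
    def tree_size(b):                  # nodes in a uniform tree of depth d
        return sum(b ** i for i in range(d + 1))
    lo, hi = 1.0, float(N)             # b* must lie between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tree_size(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(effective_branching_factor(52, 5), 2))   # ≈ 1.92
```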

2. Designing Heuristic Functions for Various Problems

Example 4.1.1 8-puzzle problem.

Solution: The objective of the puzzle is to slide the tiles horizontally or vertically into the empty
space until the configuration matches the goal configuration.

1) The average solution cost for a randomly generated 8-puzzle instance is about 22 steps.

2) The branching factor is about 3 (when the empty tile is in the middle there are four possible moves, when it is in a corner there are two, and when it is along an edge there are three).

3) An exhaustive search to depth 22 would look at about 3^22 ≈ 3.1 × 10^10 states.

4) If we keep track of repeated states, we could cut this down by a factor of about 1,70,000, because
there are only 9!/2 = 1,81,440 distinct states that are reachable.

5) If we use A*, we need a heuristic function that never overestimates the number of steps to the
goal.

6) We can use following heuristic function,

- h1 = the number of misplaced tiles. If all eight tiles are out of position, the start state has h1 = 8. h1 is an admissible heuristic, because it is clear that any tile that is out of place must be moved at least once.

- h2 = the sum of the distances of the tiles from their goal positions. Because tiles cannot move along diagonals, the distance we count is the sum of the horizontal and vertical distances (the Manhattan distance).
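Both heuristics are straightforward to compute. A sketch in Python, representing a state as a 9-tuple read row by row with 0 for the blank; the sample start state is made up for illustration:

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan (horizontal + vertical) distances of the tiles
    from their goal positions."""
    dist = 0
    for tile in range(1, 9):                         # skip the blank
        i, j = state.index(tile), goal.index(tile)
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(h1(start, goal), h2(start, goal))              # 6 14
```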

Example 4.1.2 Blocks world problem.


Solution:

Heuristic for Blocks World

- A heuristic is a function that estimates the cost of getting from one place to another (from the current state to the goal state).

- It is used in a decision process to try to make the best choice from a list of possibilities (to choose the move more likely to lead to the goal state).

- The best move is the one with the least cost.

- It provides the "intelligence" of a search process and helps in finding a more optimal solution.

h1(s)= Number of places with incorrect block immediately on top of it.

A more informed heuristic

Looks at each bottom position, but takes higher positions into account. Can be broken into 4 different
cases for each bottom position.

1. Current state has a blank and goal state has a block.

Two blocks must be moved onto q. So increment the heuristic value by the number of blocks needed
to be moved into a place.
2. Current state has a block and goal state has a blank.

One block must be removed from P. So increment the heuristic value by the number of blocks
needed to be moved out of a place.

3. Both states have a block, but they are not the same.

On place q, the blocks don't match. The incorrect block must be moved out and the correct blocks
must be moved in. Increment the heuristic value by the number of these blocks.

4. Both states have a block, and they are the same.

B is in the correct position. However, the position above it is incorrect. Increment the heuristic value
by the number of incorrect positions above the correct block.

Example

We know the second state leads to the optimal solution, but it is not guaranteed to be chosen first. Can the heuristic be modified to decrease its value? Yes: the block on p can be moved out and put on q in the same move, so we need to compensate for this overcounting of moves.

For a given (nonempty) place, find the top-most block and look for its position in the goal state. If it goes on the bottom row and that place is currently empty, the block can be moved there, so decrement the heuristic by one.

Now the second state's heuristic will be 3, and it is guaranteed to be chosen before the other one. This is a little like a one-move look-ahead.

If the top-most block doesn't go on the bottom row, but all blocks below it in the goal state are currently in their correct places, then the block can be moved there, so decrement the heuristic by one.

Example

p: case 4: +1

q: case 1: +2

r: case 2: +3

A can be put on B: -1

so h = 5

the optimal number of moves is 4

Example 4.1.3 Travelling salesman problem.

Solution: The heuristic function h(n) is defined as,

h(n) = estimated cost of the cheapest path from node 'n' to a goal state.

Heuristic function for Travelling salesman problem:

• Travelling salesman problem (TSP) is a routing problem in which each city must be visited exactly
once. The aim is to find the shortest tour.

• The goal test is: all the locations have been visited and the agent is back at the initial location.

• The path cost is distance between locations.

• As a heuristic function for TSP, we can consider the sum of the distances travelled so far. The distance directly affects the cost; therefore it should be taken into the calculation of the heuristic function.


Example 4.1.4 Tic Tac Toe problem.

Solution: Heuristic for Tic Tac Toe problem

This problem can be solved all the way through by the minimax algorithm, but a simple heuristic/evaluation function can help save that computation. This is a static evaluation function which assigns a utility value to each board position by assigning weights to each of the 8 possible ways to win in a game of tic-tac-toe and then summing up those weights.

One can evaluate values for all 9 fields, and then the program would choose the field with the highest value.

For example,

Every field lies on several WinPaths of the grid. The middle one lies on 4 (horizontal, vertical and two diagonals), corners lie on 3 each (horizontal, vertical and one diagonal), and sides lie on only 2 each (horizontal and vertical). The value of each field equals the sum of its WinPath values, and a WinPath value depends on its contents:

• Empty [ | | ] - 1 point

• One symbol: [X | | ] - 10 points // can be any symbol in any place

• Two different symbols: [X | O | ] - 0 points // in any arrangement

• Two identical opponent symbols: [X | X | ] - 100 points // arranged in any of three ways

• Two identical "my" symbols: [O | O | ] - 1000 points // arranged in any of three ways

• In this way, for example, every field of the opening position can be assigned a value.
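In place of the missing figure, here is a sketch of the scoring scheme in Python. The board encoding (a list of 9 cells indexed 0-8, ' ' for empty) and the sample position are assumptions for illustration:

```python
# The 8 win paths (3 rows, 3 columns, 2 diagonals), by cell index 0..8.
WIN_PATHS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def path_value(cells, me, opp):
    """Score one win path according to the point scheme above."""
    mine, theirs = cells.count(me), cells.count(opp)
    if mine == 0 and theirs == 0:
        return 1                       # empty path
    if mine == 2 and theirs == 0:
        return 1000                    # two of my symbols
    if theirs == 2 and mine == 0:
        return 100                     # two opponent symbols
    if mine + theirs == 1:
        return 10                      # exactly one symbol, either side
    return 0                           # mixed path: no longer winnable

def field_value(board, pos, me='O', opp='X'):
    """Value of an empty field = sum of values of win paths through it."""
    if board[pos] != ' ':
        return 0
    return sum(path_value([board[i] for i in p], me, opp)
               for p in WIN_PATHS if pos in p)

board = ['X', ' ', ' ', ' ', 'O', ' ', ' ', ' ', ' ']
print(max(range(9), key=lambda pos: field_value(board, pos)))  # best field
```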

3. Domination of Heuristic Function

• If we can design multiple heuristic functions for the same problem, we can ask which one is better.

• Consider two heuristic functions h1 and h2. If, for every node n, h2(n) ≥ h1(n), then we say that h2 dominates h1.

• Domination translates directly into efficiency: A* using h2 will never expand more nodes than A* using h1 (except possibly for some nodes having f(n) = C*).

4. Admissible Heuristic Functions


1) A problem with fewer restrictions on actions is called a relaxed problem.

2) The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem. The heuristic is admissible because the optimal solution in the original problem is also a solution in the relaxed problem, and therefore must be at least as expensive as the optimal solution in the relaxed problem.

3) As the derived heuristic is an exact cost for the relaxed problem, it is consistent.

4) If problem definitions are written in formal languages then it is possible to construct relaxed problems automatically.

5) An admissible heuristic function can also be derived from the solution cost of a subproblem of the given problem. The cost of the optimal solution of this subproblem is a lower bound on the cost of the complete problem.

6) We can store the exact solution costs for every possible subproblem instance in a database. Such a database is called a pattern database.

The pattern database is constructed by searching backwards from the goal state and recording the
cost of each new pattern encountered.

We can compute an admissible heuristic function 'h' for each complete state encountered during a
search simply by looking up the corresponding subproblem configuration in the database.

7) The cost of the entire problem is always at least the sum of the costs of disjoint subproblems. Hence it is better to form disjoint subproblems and sum up their solution costs, yielding a stronger, still admissible, heuristic.

Learning Heuristic from Experience

1) A heuristic function h(n) is supposed to estimate the cost of a solution beginning from the state at node n. Therefore it is really difficult to design h(n).

2) One solution is to devise relaxed problems for which an optimal solution can be found.

3) Another solution is to make agent program that can learn from experience.

- "Learn from experience" means solving similar problem again and again (i.e. practicing).

- Each optimal solution provides an example from which the design of h(n) can be learned.

- One can get experience of which h(n) was better.

- Each example consists of a state from the solution path and the actual cost of the solution from that
point.

- From such solved examples an inductive learning algorithm can be used to construct a function h(n) that can predict solution costs for other states that arise during search.

For developing inductive algorithms we can make use of techniques like neural nets, decision trees, etc.

If the inductive learning method has knowledge about the features of a state that are relevant to its evaluation, then it gives the best output.
Local Search and Optimization Problems

• The search algorithms we have seen so far mostly concentrate on the path through which the goal is reached. But if the problem does not demand the path of the solution and expects only the final configuration of the solution, then we have a different type of problem to solve.

• The following are problems where only the solution state configuration is important, and not the path by which the solution is reached.

1) 8-queen (where only solution state configuration is expected).

2) Integrated circuit design.

3) Factory-floor layout.

4) Job-shop scheduling.

5) Automatic programming.

6) Telecommunication network optimization.

7) Vehicle routing.

8) Portfolio management.

For such problems we can use another class of algorithms, which do not worry about paths at all: local search algorithms.

Local Search Algorithms

- They operate using a single current state (rather than multiple paths).

- They generally move to neighbours of the current state.

- There is no such requirement of maintaining paths in memory.

- They are not "systematic" search procedures.

- The main advantages of local search algorithm are

1) They use very little, and a constant amount of, memory.

2) They have the ability to find reasonable solutions even in large or infinite state spaces (for which systematic algorithms are unsuitable).

Local search algorithms are useful for solving pure optimization problems, in which the main aim is to find the best state according to a required objective function.

Local search algorithms make use of a concept called the state space landscape. This landscape has two dimensions -

1) Location (defined by the state)

2) Elevation (defined by the value of the heuristic cost function or objective function).

• If elevation corresponds to the cost, then the aim is to find the lowest valley - (a global minimum).

• If elevation corresponds to the objective function then aim is to find the highest peak - (a global
maximum).
• Local search algorithms explore the landscape.

Performance measurement

- A local search algorithm is complete if it surely finds a goal whenever one exists.

- A local search algorithm is optimal if it always finds a global minimum or maximum.

Local search representation

Hill Climbing Search

• This algorithm continually moves in the direction of increasing value, that is, uphill. It breaks its "moving up" loop when it reaches a "peak" where no neighbour has a higher value.

• It does not maintain a search tree. It stores only the current node data structure, which records the state and its objective function value. The algorithm only looks at the immediate neighbours of the current state.

• It is similar to greedy local search in the sense that it grabs a good neighbour state without thinking ahead.

• This greedy approach often works very well, because it is usually easy to improve a bad state in hill climbing.

1. Algorithm for Hill Climbing

The algorithm for hill climbing is as follows:

1) Evaluate the initial state. If it is a goal state, quit; otherwise make the initial state the current state.

2) Select a new operator that can be applied to this state and generate a new state.

3) Evaluate the new state. If this new state is closer to the goal state than the current state, make the new state the current state. If it is not better, ignore this state and proceed with the current state.

4) If the current state is a goal state or no new operators are available, quit. Otherwise repeat from step 2.
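A minimal Python sketch of this loop (written as steepest ascent over all neighbours for brevity; the `successors` and `value` functions are assumed problem-supplied, with higher values better):

```python
def hill_climbing(initial, successors, value):
    """Keep moving to the best neighbour while it improves the current state."""
    current = initial
    while True:
        neighbours = successors(current)
        if not neighbours:
            return current                    # no operators left: quit
        best = max(neighbours, key=value)     # best immediate neighbour
        if value(best) <= value(current):
            return current                    # a peak: no neighbour is higher
        current = best                        # otherwise climb to it
```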

2. Problems with Hill Climbing

1) Local maxima - can't see the higher peak.

2) Plateau (shoulder) - can't see the way out.

3) Ridges - can't climb in single steps.

Local maxima: It is a state where we have climbed to the top of a hill and missed a better solution. It is a state that is better than all of its neighbours, but not better than some other states further away. [Shown in Fig. 4.2.3]

Plateau: It is a state where everything around is about as good as where we are currently. In other
words a flat area of the search space in which all neighbouring states have the same value. [Shown in
Fig. 4.2.4]

Ridges: In this state we are on a ridge leading up, but we can't directly apply an operator to improve
the situation, so we have to apply more than one operator to get there. [Shown in Fig. 4.2.5]
Illustration of ridges: The grid of states (dark circles) is superimposed on a ridge rising from left to right, creating a sequence of local maxima that are not directly connected to each other. From each local maximum all the available actions point downhill.

3. Solving Problems Associated with Hill Climbing

• All the problems discussed above can be addressed using methods like backtracking, making big jumps (to handle plateaus or poor local maxima) and applying multiple rules before testing (which helps with ridges). Hill climbing is best suited to problems where the heuristic gradually improves the closer it gets to the solution; it works poorly where there are sharp drop-offs. It assumes that local improvement will lead to global improvement.

4. Example for Local Search

Consider the 8-queens problem:

A complete-state formulation is used for local search algorithms. In the 8-queens problem, each state has 8 queens on the board, one per column. There are two functions related to 8-queens.

1) The successor function: It returns all possible states generated by moving a single queen to another cell in the same column, so each state has 8 × 7 = 56 successors.

2) The heuristic cost function: It is a function 'h' which counts the number of pairs of queens attacking each other, either directly or indirectly. Its value is zero only for perfect solutions, which are the global minima of the function.
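The heuristic cost function can be sketched as follows, representing a state as a tuple where entry c is the row of the queen in column c (the sample state happens to be a solution, so h = 0):

```python
from itertools import combinations

def attacking_pairs(state):
    """h for 8-queens: the number of pairs of queens attacking each other."""
    pairs = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):   # same row or diagonal
            pairs += 1
    return pairs

print(attacking_pairs((1, 5, 8, 6, 3, 7, 2, 4)))       # 0: a solution
```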

5. Advantages of Hill Climbing

- Hill climbing is an optimization technique for solving computationally hard problems.

- It is best used in problems with the property that "the state description itself contains all the information needed for a solution".

- The algorithm is memory efficient since it does not maintain a search tree. It looks only at the
current state and immediate future states.
- In contrast with other iterative improvement algorithms, hill climbing only makes changes that improve the current state. In other words, hill climbing can only advance if there is a higher point in the adjacent landscape.

- It is often useful when combined with other methods, for getting the search started in the right general neighbourhood.

6. Variations of Hill Climbing

Many variants of hill-climbing have been invented as discussed below.

1) Stochastic hill climbing: Chooses at random from among the uphill moves; the probability of
selections can vary with the steepness of the uphill move.

2) First choice hill climbing: Implements stochastic hill climbing by generating successors randomly
until one is generated that is better than the current state. This is a good strategy when a state has
many (e.g. thousands) of successors.

3) Random restart hill climbing: Adopts the well known saying "If at first you don't succeed, try, try
again". It conducts a series of hill climbing searches from randomly generated initial states, stopping
when a goal is found.

The hill climbing algorithms described so far are incomplete, because they often fail to find a goal even when one surely exists. This can happen because these algorithms get stuck on local maxima. Random-restart hill climbing is complete with probability approaching 1, since it does not stop until it reaches a goal.

The success of hill climbing depends very much on the shape of the state-space landscape. If there
are few local maxima and plateaux, random-restart hill climbing will find a good solution very quickly.

4) Steepest ascent hill climbing: This algorithm differs from the basic Hill climbing algorithm by
choosing the best successor rather than the first successor that is better. This indicates that it has
elements of the breadth first algorithm.

Steepest ascent hill climbing algorithm

1. Evaluate the initial state. If it is a goal state then quit; otherwise make it the current state.

2. Repeat the following until a solution is found or the current state does not change:

a. Let Target be a state that any successor of the current state should better (initially the current state itself).

b. For each operator that can be applied to the current state: apply the operator to create a new state and evaluate it. If it is a goal state then quit; otherwise compare it with Target, and if it is better, set Target to this new state.

c. If Target is better than the current state, set the current state to Target.

Both the basic method and this method of hill climbing may fail to find a solution, by reaching a state from which no subsequent improvement can be made and which is not itself the solution.

A local maximum state is a state which is better than its neighbours but not better than states farther away; such states are often known as foothills. Plateau states are states which have approximately the same value as their neighbours, so it is not clear in which direction to move in order to reach the solution. Ridge states are special types of local maximum states: the surrounding area is basically unfriendly and difficult to escape from in single steps, so the path peters out when surrounded by ridges.

Escape relies on: backtracking to a previous good state and proceeding in a completely different direction (which involves keeping a record of the current path from the outset); making a gigantic leap forward to a different part of the search space, perhaps by applying a sensible small step repeatedly (good for plateaux); or applying more than one rule at a time before testing (good for ridges). None of these escape strategies can guarantee success.

Simulated Annealing Search

1) In simulated annealing, initially the whole space is explored.

2) This is a variation of hill climbing.

3) This avoids the danger of being caught on a plateau or ridge and makes the procedure less
sensitive to the starting point.

4) Simulated annealing includes a general survey of the scene, to avoid climbing false foothills.

5) There are two additional changes -

i) Rather than maximising, minimisation is done.

ii) The term objective function is used rather than heuristic.

6) This concept is taken from physical annealing, where physical substances are melted and then gradually cooled until some solid state is reached. In physical annealing the goal is to produce a minimal-energy state; the annealing schedule says that if the temperature is lowered sufficiently slowly, this goal will be attained. ΔE denotes the change in the value of the objective function.

7) It becomes clear that in this algorithm we have valley descending rather than hill climbing. The probability that the material will jump to a higher energy state is given by P = e^(-ΔE/kT), where k is Boltzmann's constant and T is the temperature.

The algorithm

1) Start with evaluating the initial state.

2) Apply each operator and loop until a goal state is found or no new operators are left to be applied, as described below:

i) Set T according to an annealing schedule.

ii) Select and apply a new operator.

iii) Evaluate the new state. If it is a goal state, quit. Otherwise compute

ΔE = Val (new state) - Val (current state)

• If ΔE < 0 then make this the new current state;

else make it the new current state with probability e^(-ΔE/kT).
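A sketch of the loop in Python for minimisation, with a made-up exponential cooling schedule. The acceptance rule is the e^(-ΔE/T) test described above, with the physical constant k folded into T:

```python
import math, random

def simulated_annealing(initial, successors, value, schedule, max_steps=100000):
    """Simulated annealing for minimisation of `value`."""
    current = initial
    for t in range(1, max_steps):
        T = schedule(t)
        if T < 1e-9:                              # cooled down: stop
            return current
        nxt = random.choice(successors(current))  # pick a random new state
        delta_e = value(nxt) - value(current)     # change in objective, ΔE
        if delta_e < 0:                           # downhill move: always accept
            current = nxt
        elif random.random() < math.exp(-delta_e / T):
            current = nxt                         # uphill: accept w.p. e^(-ΔE/T)
    return current

cooling = lambda t: 10.0 * (0.99 ** t)            # example annealing schedule
```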

Local Beam Search

1) In local beam search, instead of a single state, k states are kept in memory.

2) The initial k states are generated randomly.

3) The successor function plays an important role by generating the successors of all k states.

4) If any one successor state is the goal then no further processing is required.

5) Otherwise, i.e. if the goal is not achieved, the k best successors are selected from the list of all successors and the process is repeated.

6) At first glance, local beam search with k states may look like running k random restarts in parallel, but the two are different.

7) In a random-restart search every single search activity runs independently of the others.

8) Local beam search can be implemented with threads, where the k parallel search threads share useful information.

9) The algorithm works on the principle of successful successors: if one state generates efficient, goal-reaching successors while the other k - 1 states generate poor successors, then the successful successors lead all other states.

10) The algorithm drops the unsuccessful searches and concentrates on the successful ones.

Limitations of local beam search

1) Local beam search can suffer from a lack of diversity among the k states.

2) If the states become concentrated in a small area of the state space, the search becomes little more than an expensive version of hill climbing.
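A compact sketch of local beam search (maximisation), with the problem interface (`successors`, `value`, `is_goal` and the k initial states) assumed as before:

```python
def local_beam_search(k, initial_states, successors, value, is_goal, steps=1000):
    """Keep the k best states among all successors of the current k states."""
    states = list(initial_states)                 # k randomly generated states
    for _ in range(steps):
        candidates = []
        for s in states:
            for nxt in successors(s):
                if is_goal(nxt):                  # a successor is the goal:
                    return nxt                    # no further processing needed
                candidates.append(nxt)
        if not candidates:
            break
        # keep only the k best successors of the whole pool and repeat
        states = sorted(candidates, key=value, reverse=True)[:k]
    return max(states, key=value)
```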

Stochastic Beam Search

1) It is one of the flavours of local beam search, analogous to stochastic hill climbing.

2) It resolves the limitation of local beam search.

3) Stochastic beam search chooses k successors at random instead of selecting the k best from the candidate successors.

4) The probability of a successor being selected is an increasing function of its value.

5) Stochastic beam search is very similar to natural selection, where the offspring (successor) of a state is likely to be good according to its fitness; thus the next generation also tends to be strong.

Genetic Algorithms

1) Genetic algorithms bring an evolutionary flavour to search and learning. A higher degree of selective improvement can be achieved with this paradigm of AI.

2) A genetic algorithm is a rich flavour of stochastic beam search.

3) In a genetic algorithm, two parent states are combined, from which a good successor state is generated.

4) The analogy to natural selection is the same as in stochastic beam search, except now we are
dealing with sexual rather than asexual reproduction.

1. Terms used in Genetic Algorithms

1) Population: The population is a set of states which are generated randomly.


2) Individual: It is a single state, represented as a string over a finite alphabet.

Example - A string of 0s and 1s.

For example, in 8-queens each state is specified by the positions of the 8 queens. The memory required is 8 × log2 8 = 24 bits; equivalently, each state can be represented with 8 digits.

3) Fitness function: It is an evaluation function on whose basis each state is rated. A fitness function should return higher values for better states. The probability of a state being chosen for reproduction is directly proportional to its fitness score.

In the 8-queens problem, the fitness function returns the number of non-attacking pairs of queens, which is 28 for a solution.

4) Crossover: Selection of states depends on the fitness function; a state is selected only if its fitness value is above a threshold, otherwise it is discarded. Each selected pair of strings is split at a division (meeting) point called the crossover point, which is chosen at random from the positions in the string.

5) Mutation: Mutation is one of the genetic operators. It works by a random change, for example selecting a single bit of the pattern and switching a 0 to 1 or a 1 to 0.

6) Schema: The schema is a substring in which position of some bit can be unspecified.

2. Working of a Genetic Algorithm

Input: 1) State population (a set of individuals)

2) Fitness function (that rates individual).

Steps

1) Create an individual 'X' (parent) by random selection, weighted by the fitness of X.

2) Create an individual 'Y' (parent) by random selection, weighted by the fitness of Y.

3) A child is created by crossing over X and Y.

4) With small probability, apply the mutate operator to the child.

5) Add the child to the new population.

6) The above process is repeated until an individual is produced that is fit enough, as specified by the fitness function.

Genetic algorithm example

Example: 8-queens problem states.

States:

Assume each queen has its own column; represent a state by listing the row where the queen is in each column (digits 1 to 8).

For example, the state below will be represented as 16257483.


3. Example: 8 Queens Problem Fitness

Fitness function: Instead of h as before, use the number of non-attacking pairs of queens. There are 28 pairs of different queens (smaller column first) all together, so solutions have fitness 28. (Basically, the fitness function is 28 - h.)

For example, fitness of the state below is 27 (queens in columns 4 and 7 attack each other).

Example: 8 queens problem crossover.

Choose pairs for reproduction (so that those with higher fitness are more likely to be chosen,
perhaps multiple times).

For each pair, choose a random crossover point between 1 and 8, say 3.

Produce offspring by taking positions 1-3 from the first parent and 4-8 from the second (and vice versa). Apply mutation (with small probability) to the offspring, as in the sketch below.
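A sketch of crossover and mutation on this 8-digit string representation; the parent strings and the mutation probability are made up for the example:

```python
import random

def crossover(x, y):
    """Single-point crossover of two 8-digit state strings."""
    c = random.randint(1, 7)              # crossover point between 1 and 7
    return x[:c] + y[c:], y[:c] + x[c:]   # both offspring

def mutate(child, p=0.1):
    """With small probability, set one random column to a random row."""
    if random.random() < p:
        i = random.randrange(8)
        child = child[:i] + str(random.randint(1, 8)) + child[i + 1:]
    return child

a, b = crossover('16257483', '83572418')
print(mutate(a), mutate(b))
```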

Importance of representation

The parts we swap in crossover should result in a well-formed solution (and preferably be meaningful).

Consider what would happen with a binary representation (where each position requires 3 digits).

The chosen representation also reduces the search space considerably (compared to representing each square, for example).

4. Optimization using Genetic Algorithm

1) A genetic algorithm is a biological model of intelligent evolution. It generates a population of competing children.

2) The poor candidate solutions vanish, as the genetic algorithm keeps the best children. Thus, in practice, a genetic algorithm is a promising survive-and-reproduce technique that constructs new, optimized solutions; it is an optimization oriented technique.

Applications of genetic algorithms to optimization:

1) Circuit layout. 2) Job-shop scheduling.

Local Search in Continuous Spaces

As we know, an environment can be discrete or continuous, but most real-world environments are continuous.

1) It is very difficult to handle a continuous state space: for a real-world problem, each state may have infinitely many successors.

2) The origins of local search in continuous spaces lie with Newton and Leibniz in the 17th century.

3) An optimal solution for a given problem in a continuous space can be found with local search techniques.

Search and Evolution Theory

1) Search is a basic problem solving technique, and it is closely related to the theory of evolution.

2) Charles Darwin is the father of evolutionary theory, presented in On the Origin of Species by Means of Natural Selection (1859).

3) Variations (mutations) are well known attributes of reproduction, and the best features are preserved and properly propagated to the next generation.

4) The mechanism by which qualities are inherited or modified was not part of Darwin's theory.

5) Gregor Mendel (1866) discovered the mechanism of inheritance; he performed artificial fertilization experiments on peas.

6) The DNA molecule's structure was identified by Watson and Crick. Its bases are: A - Adenine, G - Guanine, T - Thymine, C - Cytosine.

7) The key difference between stochastic beam search and evolution is that in evolution successors are generated from multiple organisms (states) rather than from just one organism (state).

8) The theory of evolution is much richer than genetic algorithms.

9) The process of mutation involves duplication, reversal and movement of large groups of DNA.

10) Most important is the fact that the genes themselves encode the mechanisms whereby the
genome is reproduced and translated into an organism. In genetic algorithms, those mechanisms are
a separate program that is not represented within the strings being manipulated.

11) French naturalist Jean Lamarck (1809) proposed a theory of evolution whereby traits acquired by
adaptation during an organism's lifetime would be passed on to its offspring. Such a process would
be effective, but does not seem to occur in nature.

12) James Baldwin (1896) proposed a superficially similar theory: that behaviour learned during an organism's lifetime could accelerate the rate of evolution.
For example, returning to local search in continuous spaces -

Suppose we want to place three new airports anywhere in India, such that the sum of squared
distances from each city to its nearest airport is minimized.

i) The state space, is defined by the co-ordinates of the airports: (x1, y1), (x2, y2) and (x3, y3).

ii) This is a six-dimensional space: We also say that states are defined by six variables. (In general,
states are defined by an n-dimensional vector of variables, x).

iii) Moving around in this space corresponds to moving one or more of the airports on the map.

iv) The objective function f (x1, y1, x2, y2, x3, y3) is relatively easy to compute for any particular state,
once we compute the closest cities, but rather tricky to write down in general.
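A sketch of this objective function and one hill climbing style move in the continuous space; the city coordinates, starting airport positions and step size are made up:

```python
import random

def objective(airports, cities):
    """f(x1, y1, ..., x3, y3): sum over cities of the squared distance
    to the nearest airport."""
    return sum(min((cx - ax) ** 2 + (cy - ay) ** 2 for ax, ay in airports)
               for cx, cy in cities)

def local_move(airports, cities, step=0.1):
    """Nudge one airport at random; keep the move only if f improves."""
    trial = [list(a) for a in airports]
    i = random.randrange(len(trial))
    trial[i][0] += random.uniform(-step, step)
    trial[i][1] += random.uniform(-step, step)
    trial = [tuple(a) for a in trial]
    return trial if objective(trial, cities) < objective(airports, cities) \
                 else airports

cities = [(0.0, 0.0), (1.0, 2.0), (4.0, 1.0), (5.0, 5.0)]
airports = [(1.0, 1.0), (4.0, 4.0), (2.0, 3.0)]
for _ in range(1000):
    airports = local_move(airports, cities)
print(airports, objective(airports, cities))
```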

Problems Associated with Local Search

• Local search methods suffer from local maxima, ridges and plateaux in continuous state spaces just
as much as in discrete spaces. Random restarts and simulated annealing can be used and are often
helpful. High-dimensional continuous spaces are, however, big places in which it is easy to get lost.

Constrained Optimization Problems

• An optimization problem is constrained if solutions must satisfy some hard constraint like sites to
be inside India and on dry land (rather than in the middle of rivers). The difficulty of constrained
optimization problems depends on the nature of the constraints and the objective function. The
best-known category is that of linear programming problems, in which constraints must be linear
inequalities forming a convex region and the objective function is also linear. Linear programming
problems can be solved in time which is polynomial in the number of variables.

Handling Unknown Environments

• The algorithms we have seen earlier are offline search algorithms. They compute a complete solution before setting foot in the real world, and then execute the solution without considering new percepts. Now we will consider how to handle dynamic or semidynamic environments.

Online Search Agents

1) An online search agent works by interleaving two tasks, namely computation and action. In the real world, the agent performs an action, observes the environment and then works out the next action.

2) If an environment is dynamic, semidynamic or stochastic, online search works best.

3) An agent pays a penalty both for lengthy computation and for sitting around doing nothing.

4) Offline search is costly: it needs proper planning which considers all related information. In online search there is no need for a complete plan; the agent just sees what happens and acts accordingly.

Example - Chess moves

5) In online search the states and actions may be unknown to the agent. Thus it has an exploration problem (it needs to explore the world).

6) In an online search, actions are used to discover next states and actions.

7) Computations are performed to predict the next states and actions.

For example -

• Consider the example of a robot which builds a map between two locations, say LOC 1 and LOC 2. To build the map the robot needs to explore the world so as to reach LOC 2 from LOC 1; in parallel it builds the map.

• A natural real world example of an exploration problem and online search is a baby trying to understand the world around it. This is the baby's online search process. The baby tries an action and reaches a certain state; it keeps doing this, gathering new experience and learning from it.

Solving Online Search Problems

1) A purely computational process cannot handle an online search problem properly; the agent needs to execute actions while solving the problem.

2) The agent has knowledge about -

a) ACTIONS (s) which returns a list of actions allowed in state s.

b) The step-cost function c(s, a, s') (note that the cost cannot be used until the agent knows that s' is the outcome of doing a in s).

c) GOAL-TEST (s)

3) It is not possible for the agent to access the successors of a state without visiting it; the agent must actually try the actions in that state.

4) As the basic objective of the agent is to minimize cost, the goal state must be found with minimum cost.

5) Another possible goal can be to explore the entire environment. The total path cost is the cost of the path across which the agent travels.

6) For an online algorithm we can compute the competitive ratio: the cost of the path actually travelled compared with the cost of the path the agent would follow if it knew the space in advance. The competitive ratio should be as small as possible, but in reality it is often infinite.

7) If the online search involves irreversible actions then it may reach a dead-end state from which no goal state is achievable.

8) We would like to build an algorithm which never explores a dead-end path, but in reality no algorithm can avoid dead ends in all state spaces.

9) Consider the two state spaces shown in Fig. 4.4.1.


In Fig. 4.4.1 an online search algorithm visits states S and A. Up to that point both state spaces look identical, so both must be explored in the same way, yet in one of them the next link leads to the goal state (G) while in the other it leads to a dead end. This is an example of an adversary argument: an adversary can construct the state space while the agent explores it, placing goals and dead ends wherever it likes.

Online Local Search

Hill climbing search

1) Hill climbing search has the property of locality in its node expansions. Because it keeps just one current state in memory, hill climbing search is already an online search algorithm.

2) Unfortunately, it is not very useful in its simplest form because it leaves the agent sitting at local
maxima with nowhere to go.

3) Moreover, random restarts cannot be used, because the agent cannot transport itself to a new
state.

Random walk

1) Instead of random restarts, one might consider using a random walk to explore the environment.

2) A random walk simply selects at random one of the available actions from the current state;
preference can be given to actions that have not yet been tried.

3) It is easy to prove that a random walk will eventually find a goal or complete its exploration,
provided that the space is finite.

Hill climbing with memory

1) Augmenting hill climbing with memory rather than randomness turns out to be a more effective
approach.

2) The basic idea is to store a "current best estimate" H(s) of the cost to reach the goal from each state that has been visited.

3) H(s) starts out being just the heuristic estimate h(s) and is updated as the agent gains experience in the state space.

Learning Real Time A*


1) An agent can use a scheme called learning real-time A* (LRTA*).

2) It builds a map of the environment using a result table. It updates the cost estimate for the state it has just left and then chooses the "apparently best" move according to its current cost estimates.

3) One important detail is that actions that have not yet been tried in a state s are always assumed to lead immediately to the goal with the least possible cost, namely h(s). This optimism under uncertainty encourages the agent to explore new, possibly promising paths.

4) An LRTA* agent is guaranteed to find a goal in any finite, safely explorable environment.

5) It is not complete for infinite state spaces; there are cases where it can be led infinitely astray. It can explore an environment of n states in O(n^2) steps in the worst case.
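A simplified sketch of the idea in Python, assuming (for brevity) a known deterministic `step(s, a)` model in place of the learned result table; untried transitions are valued optimistically at h(s'):

```python
def lrta_star(start, is_goal, actions, step, cost, h, max_steps=100000):
    """LRTA*-style agent: learn cost-to-goal estimates H while acting."""
    H = {}                                     # learned estimates H(s)
    s = start
    for _ in range(max_steps):
        if is_goal(s):
            return H
        def f(a):                              # estimated cost through action a
            s2 = step(s, a)
            return cost(s, a, s2) + H.get(s2, h(s2))   # optimism for new s2
        best = min(actions(s), key=f)
        H[s] = f(best)                         # update the state we are leaving
        s = step(s, best)                      # take the apparently best move
    return H
```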

Learning in Online Search

At every step an online search agent can learn various aspects of the current situation.

The agents learn a "map" of the environment - more precisely, the outcome of each action in each
state-simply by recording each of their experiences.

The local search agents acquire more accurate estimates of the value of each state by using local
updating rules.

By learning, the agent can do better searching.

Review Questions

1. When is the class of problem said to be intractable? (Refer section 4.1)

2. What is the power of heuristic search? Why does one go for heuristic search?

(Refer section 4.1.10)

3. What are the advantages of heuristic function? (Refer section 4.1)

4. When is a heuristic function h said to be admissible? Give an admissible heuristic function for TSP. (Refer section 4.1.10)

5. What is heuristic search? (Refer section 4.1.10)

6. Explain heuristic search with an example. (Refer section 4.1.10)

7. How to minimize total estimated solution cost using A* search with an example?

(Refer section 4.1.6)

8. Explain the A* search and give the proof of optimality of A*. (Refer section 4.1.6)

9. What is heuristic? For each of the following type of problem give good heuristics function.

i) Block world problem ii) Missionaries and cannibals. (Refer section 4.1.10)

10. What is heuristics? A* search uses a combined heuristic to select the best path to follow through
the state space toward the goal. Define the two heuristics used (h(n) and g(n)). (Refer section 4.1.10)
11. Best first search uses both an OPEN list and a CLOSED list. Describe the purpose of each for the best-first algorithm. Explain with a suitable example. (Refer section 4.1)

12. What is heuristics? Explain any heuristics search method. Justify how heuristics function helps in
achieving goal state. (Refer section 4.1.10)

13. State the reason when hill climbing often get stuck. (Refer section 4.2.2)

14. What do you mean by local maxima with respect to search technique? (Refer section 4.2.2)

15. Describe hill climbing, random restart hill climbing and simulated annealing algorithms. Compare
their merits and demerits. (Refer section 4.2)

16. How does hill climbing ensure greedy local search? What are the problems of hill
climbing? (Refer section 4.2)

17. How does genetic algorithm come up with optimal solution? (Refer section 4.2.6)

18. Hill climbing is a standard iterative improvement algorithm similar to greedy best-first search. What are the primary problems with hill climbing? (Refer section 4.2)

Two Marks Questions with Answers


Q.1 When is a class of problems said to be intractable? AU: Dec.-03

Ans.: Problems whose algorithms take an unreasonably large amount of resources (time and/or space) are called intractable.

For example - TSP

Given a set of 'N' points, one should find the shortest tour which connects all of them. The algorithm will consider all N! orderings; for N = 16, 16! ≈ 2.1 × 10^13, which is impractical for any computer.

Q.2 What is the power of heuristic search? AU: Dec.-04

Why does one go for heuristic search? AU: Dec.-05

Ans.: Heuristic search uses problem specific knowledge while searching in the state space. This helps to improve average search performance.

Heuristic methods use an evaluation function which denotes the relative desirability (goodness) of expanding a node. This makes the search more efficient and faster. One should go for heuristic search because it has the power to solve large, hard problems in affordable time.

Q.3 What is the advantage of heuristic function? AU: Dec.-09, 11

Ans.: A heuristic function ranks alternative paths in various search algorithms, at each branching step, based on the available information, so that a better path is chosen. Its main advantage is that it guides the search as to which state to explore next. It makes use of problem specific knowledge, like constraints, to check the goodness of a state to be explored. This drastically reduces the required searching time.

Q.4 State the reason when hill climbing often gets stuck. AU: May-10

Ans.: A local maximum is the state where a hill climbing algorithm is sure to get stuck. A local maximum is a peak that is higher than each of its neighbour states, but lower than the global maximum. The better state has been missed, and all the search effort turns out to be wasted: it is like a dead end.

Q.5 When a heuristic function h is said to be admissible? Give an admissible heuristic function for
TSP. AU: Dec.-10

Ans.: An admissible heuristic function is a function which never overestimates the cost to reach the goal state. It means that h(n) is never greater than the true cost of reaching the goal from node 'n'.

The admissible heuristic for TSP is,

i) Minimum spanning tree.

ii) Minimum assignment problem.

Q.6 What do you mean by local maxima with respect to search technique? AU: May-11

Ans.: A local maximum is a peak that is higher than each of its neighbour states, but lower than the global maximum, i.e. a local maximum is a tiny hill on the surface whose peak is not as high as the main peak (which is the optimal solution). Hill climbing fails to find the optimum solution when it encounters a local maximum: any small move from there makes things worse (temporarily), and all the search effort turns out to be wasted. It is like a dead end.

Q.7 How can we avoid ridge and plateau in hill climbing? AU: Dec.-12

Ans.: Ridges and plateaux in hill climbing can be avoided using methods like backtracking and making big jumps. Backtracking and making big jumps help to escape plateaux, whereas applying multiple rules at a time helps to avoid the problem of ridges.

Q.8 Define the effect of heuristic accuracy on performance. (Refer Q.2) AU: Dec.-13


Q.9 What is heuristic search strategy? (Refer Q.2) AU: Dec.-14

