
Artificial Intelligence

UNIT-II:
Heuristic Search Techniques

Dr. Dwiti Krishna Bebarta


Heuristic Search Techniques
Topics to discuss:
• Issues in the Design of Search Programs
• Generate-and-Test
• Hill Climbing and its variants
• Best-First Search
• A* Algorithm
• Problem Reduction
• AO* Algorithm
• Constraint Satisfaction
• Means-Ends Analysis
Text Books:
1. Artificial Intelligence, Elaine Rich and Kevin Knight, Tata McGraw-Hill Publications
2. Introduction to Artificial Intelligence & Expert Systems, Patterson, PHI Publications
Heuristic Search Techniques

• Goal: good solutions to hard problems

• Very large search spaces
  – Large databases
  – Image sequences
  – Game playing
• Exact algorithms
  – Guarantee the best answer
  – Can be very slow (literally years)
• Heuristics
  – "Rules of thumb"
  – Very fast
  – A good answer is likely, but not guaranteed!
• Example: searching foreign intelligence for terrorist activity.
• Heuristic algorithms are not really intelligent; they appear to be intelligent
because they achieve better performance.
• Heuristic algorithms are more efficient because they take advantage of feedback
from the data to direct the search path.
• Uninformed search algorithms, or brute-force algorithms, examine all possible
  candidates in the search space, checking whether each candidate satisfies the
  problem's statement.
• Informed search algorithms use heuristic functions that are specific to the
problem, apply them to guide the search through the search space to try to
reduce the amount of time spent in searching.
• A good heuristic will make an informed search dramatically outperform any
uninformed search: for example,
• The Traveling Salesman Problem (TSP), where the goal is to find a good
  solution instead of finding the best solution.
• In such problems, the search proceeds using current information about the
problem to predict which path is closer to the goal and follows it, although it does
not always guarantee to find the best possible solution.
• Such techniques help in finding a solution within reasonable time and space
(memory).
• Some prominent intelligent search algorithms are stated below:
  1. Generate and Test Search
  2. Best-first Search
  3. Greedy Search
  4. A* Search
  5. Constraint Satisfaction Search
  6. Means-ends Analysis
Generate and Test Search
Generate and Test Search is a heuristic search technique
based on depth-first search with backtracking. If done
systematically, it guarantees to find a solution, provided
one exists. In this technique, solutions are generated and
tested one by one for the best solution, ensuring that the
best solution is checked against all possible generated
solutions.

Algorithm
1. Generate a possible solution. For example,
   generate a particular point in the problem
   space or a path from the start state.
2. Test to see if this is an actual solution by
   comparing the chosen point or the endpoint
   of the chosen path to the set of acceptable
   goal states.
3. If a solution is found, quit. Otherwise go to
   Step 1.
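The generate-test loop above can be sketched in Python. This is a minimal illustration; the generator, the goal test, and the toy pair-finding problem below are assumptions made for the example:

```python
import itertools

def generate_and_test(generator, is_goal, limit=100000):
    """Generate candidate solutions one by one and test each against
    the goal; return the first acceptable candidate, or None."""
    for candidate in itertools.islice(generator, limit):
        if is_goal(candidate):
            return candidate
    return None

# Toy problem: find a pair (x, y) with x + y == 7 and x * y == 12.
pairs = ((x, y) for x in range(10) for y in range(10))
solution = generate_and_test(pairs,
                             lambda p: p[0] + p[1] == 7 and p[0] * p[1] == 12)
print(solution)  # (3, 4)
```

Because every candidate is generated and tested, the technique is complete on a finite space but can be very slow on a large one.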
Hill Climbing

Algorithm: Hill Climbing


• Evaluate the initial state.
• Loop until a solution is found or there are no new
operators left to be applied:
– Select and apply a new operator
– Evaluate the new state:
• goal → quit
• better than current state → new current state
• Hill Climbing technique can be used to solve many problems,
where the current state allows for an accurate evaluation
function, such as Network-Flow, Travelling Salesman
problem, 8-Queens problem, Integrated Circuit design, etc.
• Like DFS
• An optimization technique based on local search; can be used
  to solve problems that have many solutions
• TSP can be solved by moving through a tree of paths; hill climbing
  proceeds in DFS order
• The search may reach a position from which no move improves the
  situation:
  – Local maximum: a state better than all its neighbours, but not
    better than some other states farther away
  – Plateau: a region of the search space where all neighbouring
    states have the same value
  – Ridge: a region of the search space that is higher than the
    surrounding areas
• Local maximum = no uphill step
  – Not complete
  – Allowing "random restart" makes the search complete, but it
    might take a very long time
• Plateau = all steps equal (flat or shoulder)
  – Must move to an equal state to make progress, but there is no
    indication of the correct direction
Hill Climbing Algorithm
We will assume we are trying to maximize a function. That is,
we are trying to find a point in the search space that is better
than all the others. And by "better" we mean that the evaluation
is higher. We might also say that the solution is of better quality
than all the others.
The idea behind hill climbing is as follows.
1. Pick a random point in the search space.
2. Consider all the neighbours of the current state.
3. Choose the neighbour with the best quality and move to that
state.
4. Repeat steps 2 and 3 until all the neighbouring states are of lower
   quality.
5. Return the current state as the solution state.
Types of Hill Climbing Algorithm:
• Simple hill Climbing:
• Steepest-Ascent hill-climbing:
• Stochastic hill Climbing:
Algorithm for Simple Hill Climbing:
• Step 1: Evaluate the initial state, if it is goal state then
return success and Stop.
• Step 2: Loop Until a solution is found or there is no new
operator left to apply.
• Step 3: Select and apply an operator to the current state.
• Step 4: Check the new state:
  – If it is the goal state, then return success and quit.
  – Else if it is better than the current state, then make it the
    current state.
  – Else (if it is not better than the current state), return to Step 2.
• Step 5: Exit.
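The steps above can be sketched in Python. This is a minimal sketch: the integer search space and the value function below are assumptions for illustration. The key property is that it accepts the first better neighbour it finds rather than examining all of them:

```python
def simple_hill_climb(start, neighbours, value):
    """Simple hill climbing: apply operators one at a time and accept
    the FIRST neighbour that is better than the current state
    (contrast with steepest ascent, which examines all neighbours)."""
    current = start
    improved = True
    while improved:
        improved = False
        for n in neighbours(current):
            if value(n) > value(current):
                current = n           # first improvement: accept and restart
                improved = True
                break
    return current

# Maximise f(x) = -(x - 5)**2 over the integers, stepping by +/-1.
f = lambda x: -(x - 5) ** 2
result = simple_hill_climb(0, lambda x: [x - 1, x + 1], f)
print(result)  # climbs 0 -> 1 -> ... -> 5 and stops
```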

This algorithm has the following features:


• Less time-consuming
• The solution is less optimal, and a solution is not guaranteed
Steepest-Ascent hill climbing:
The steepest-ascent algorithm is a variation of the simple hill climbing algorithm.
It examines all the neighbouring nodes of the current state and
selects the neighbour node closest to the goal state. It
consumes more time because it searches multiple neighbours.
Algorithm for Steepest-Ascent hill climbing:

• Step 1: Evaluate the initial state. If it is the goal state, then return success and
  stop; else make the initial state the current state.
• Step 2: Loop until a solution is found or the current state does not change.
  – Let SUCC be a state such that any possible successor of the current state
    will be better than SUCC.
  – For each operator that applies to the current state:
    • Apply the operator and generate a new state.
    • Evaluate the new state.
    • If it is the goal state, then return it and quit; else compare it to SUCC.
    • If it is better than SUCC, then set SUCC to the new state.
  – If SUCC is better than the current state, then set the current state to
    SUCC.
• Step 3: Exit.
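The SUCC-based loop above can be sketched in Python. The integer search space and value function are assumptions for illustration; all successors are examined and only the best one is kept:

```python
def steepest_ascent(start, neighbours, value):
    """Steepest-ascent hill climbing: examine ALL successors, keep the
    best one (SUCC), and move only if SUCC beats the current state."""
    current = start
    while True:
        succs = neighbours(current)
        if not succs:
            return current
        succ = max(succs, key=value)       # best successor this round
        if value(succ) <= value(current):
            return current                 # no successor improves: stop
        current = succ

# Maximise f(x) = -(x - 3)**2 over the integers, stepping by +/-1.
f = lambda x: -(x - 3) ** 2
result = steepest_ascent(10, lambda x: [x - 1, x + 1], f)
print(result)  # stops at the maximum, x = 3
```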
Stochastic hill climbing:
• Stochastic hill climbing does not examine all its
  neighbours before moving. Rather, this search algorithm
  selects one neighbour node at random and decides
  whether to move to it or to examine another.
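The random-neighbour rule can be sketched in Python. The integer search space, value function, and fixed seed below are assumptions made so the example is repeatable:

```python
import random

def stochastic_hill_climb(start, neighbours, value, max_steps=1000, seed=0):
    """Stochastic hill climbing: pick ONE neighbour at random and move
    to it only if it improves on the current state."""
    rng = random.Random(seed)   # seeded so the run is repeatable
    current = start
    for _ in range(max_steps):
        candidate = rng.choice(neighbours(current))
        if value(candidate) > value(current):
            current = candidate
    return current

# Maximise f(x) = -(x - 4)**2 over the integers, stepping by +/-1.
f = lambda x: -(x - 4) ** 2
result = stochastic_hill_climb(0, lambda x: [x - 1, x + 1], f)
print(result)
```

Because only one randomly chosen neighbour is evaluated per step, each iteration is cheap, at the cost of a less direct climb.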
Best First Search (Informed Search)
Algorithm:
Best-First-Search(Graph g, Node start)
• Create an empty PriorityQueue pq
• pq.insert(start) // insert "start" into pq
• While pq is not empty do
  – u = pq.DeleteMin
  – If u is the goal state, then exit
  – Else, for each neighbour v of u:
    • If v is "Unvisited", then mark v "Visited" and pq.insert(v)
  – Mark u "Examined"
• End procedure
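The pseudocode above can be sketched in Python with a heap as the priority queue, ordered by the heuristic estimate h(n). The tiny graph and the heuristic values below are illustrative assumptions, not part of the original example:

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the
    smallest heuristic estimate h(n) on the priority queue."""
    pq = [(h[start], start)]
    visited = {start}
    while pq:
        _, u = heapq.heappop(pq)     # DeleteMin
        if u == goal:
            return True
        for v in graph[u]:
            if v not in visited:     # mark unvisited neighbours
                visited.add(v)
                heapq.heappush(pq, (h[v], v))
    return False

# Illustrative graph and heuristic values (assumed for the example).
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
found = best_first_search(graph, h, 'A', 'D')
print(found)  # True
```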
A General Example
Best-first search
• use an evaluation function f(n) for each node
– f(n) provides an estimate for the total cost
– Expand the node n with smallest f(n).
Start node: Arad
Goal node: Bucharest
Total distance covered:
140+99+211=450
• Complete
– No, GBFS can get stuck in loops (e.g. bouncing back and
forth between cities)
• Time
– O(b^m), but a good heuristic can have dramatic improvement
• Space
– O(b^m): keeps all the nodes in memory
• Optimal
– No!
A* Search
• A* (A star) is the most widely known form of Best-First
search
– It evaluates nodes by combining g(n) and h(n)
– f(n) = g(n) + h(n)
– Where
• g(n) = the actual cost to get from the initial node to
node n.
• h(n) = heuristic cost (also known as "estimated
  cost") from node n to the destination node.
1. Create an open list of found but not explored nodes, initially empty.
2. Create a closed list to hold already explored nodes, initially empty.
3. Add the starting node to the open list with an initial value of g.
4. While the open list is not empty do
   a. Find the node with the least value of f() on the open list; call it 'q'.
   b. Remove 'q' from the open list.
   c. Generate q's successors and set their parents to q.
   d. For each successor:
      i. If the successor is the goal state, then stop the search.
      ii. Else, compute g(), h(), and f() for the successor.
      iii. If a node with the same state as the successor is on the OPEN list
           with a lower f than the successor, skip this successor.
      iv. Else if a node with the same state as the successor is on the CLOSED
          list with a lower f than the successor, skip this successor.
      v. Else, add the successor to the open list.
   e. Insert q into the closed list.
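The open/closed-list procedure above can be sketched in Python. The edge costs below come from the Romania map used in the accompanying trace; the straight-line-distance heuristic values are the standard ones for that example, and the city subset is trimmed for brevity:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search with f(n) = g(n) + h(n).  `graph[u]` is a list of
    (neighbour, edge_cost) pairs; returns the cost of the cheapest
    path from start to goal, or None if no path exists."""
    open_list = [(h[start], 0, start)]          # (f, g, node)
    best_g = {start: 0}                         # cheapest g found so far
    while open_list:
        f, g, u = heapq.heappop(open_list)
        if u == goal:
            return g
        if g > best_g.get(u, float('inf')):
            continue                            # stale entry: skip it
        for v, cost in graph[u]:
            g2 = g + cost
            if g2 < best_g.get(v, float('inf')):
                best_g[v] = g2
                heapq.heappush(open_list, (g2 + h[v], g2, v))
    return None

# Fragment of the Romania map (admissible straight-line h to Bucharest).
graph = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118)],
    'Sibiu': [('Fagaras', 99), ('Rimnicu', 80)],
    'Fagaras': [('Bucharest', 211)],
    'Rimnicu': [('Pitesti', 97)],
    'Pitesti': [('Bucharest', 101)],
    'Timisoara': [],
    'Bucharest': [],
}
h = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu': 193,
     'Pitesti': 100, 'Timisoara': 329, 'Bucharest': 0}
result = a_star(graph, h, 'Arad', 'Bucharest')
print(result)  # 418, via Sibiu -> Rimnicu -> Pitesti
```

Note how A* finds the 418-cost route, beating the 450-cost route (140+99+211) that greedy best-first search follows.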
[Figure: step-by-step A* expansion on the Romania map, with edge costs
140, 118, 75, 99, 80, 151, 146, 97, 211, 101, and 138 labelling the arcs.
A node is skipped, as per the algorithm, when a node with the same state
and a lower f is already on the OPEN or CLOSED list.]
• Complete
– Yes
• Time
– Exponential
– The better the heuristic, the better the time
• Best case: h is perfect, O(d)
• Worst case: h = 0, O(b^d), the same as BFS
• Space
– Keeps all nodes in memory to check for repetitions
– This is O(b^d) or worse
– A* usually runs out of space before it runs out of time
• Optimal
– Yes
Problem Reduction
• If we are looking for a sequence of actions to achieve some
  goal, then one way to do it is to use state-space search,
  where each node in your search space is a state of the world,
  and you are searching for a sequence of actions that gets you
  from an initial state to a final state.
• Another way is to consider the different ways that the goal
  state can be decomposed into simpler sub-goals.
• A node is solved if we can discover how to get from it to a
  goal state along any one of the branches leaving it.
• To represent problem reduction techniques we need to use
an AND-OR graph/tree.
• The problem reduction technique using an AND-OR graph is useful
  for representing the solution of problems that can be solved
  by decomposing them into a set of smaller problems, all of
  which must be solved.
• This decomposition, or reduction, generates arcs that we call
  AND arcs.
• One AND arc may point to any number of successor nodes, all of
  which must then be solved in order for the arc to point to a
  solution.
• To find a solution in an AND-OR graph we need an
  algorithm similar to best-first search, but with the ability to
  handle the AND arcs appropriately.
• We define FUTILITY: if the estimated cost of a solution
  becomes greater than the value of FUTILITY, then we abandon
  the search. FUTILITY should be chosen to correspond to a
  threshold above which any solution is too expensive to be practical.
[Figure: two AND-OR graph examples rooted at A. In the first, the better
path is through B rather than choosing the AND arc to C and D. Node and
leaf values shown include 9, 3, 4, 5, 38, 17, 9, 27 and leaves E, F, G,
H, I with values 5, 10, 3, 4, 15, 10.]
Problem Reduction algorithm:
1. Initialize the graph to the starting node.
2. Loop until the starting node is labelled SOLVED or until its cost goes
above FUTILITY:
a. Traverse the graph, starting at the initial node and following the
   current best path, and accumulate the set of nodes that are on that
   path and have not yet been expanded or labelled as solved.
b. Pick one of these unexpanded nodes and expand it. If there are
no successors, assign FUTILITY as the value of this node.
Otherwise, add its successors to the graph and for each of them
compute f'(n). If f'(n) of any node is 0, mark that node as
SOLVED.
c. Change the f'(n) estimate of the newly expanded node to reflect
the new information provided by its successors. Propagate this
change backwards through the graph. If any node contains a
successor arc whose descendants are all solved, label the node
itself as SOLVED.
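To illustrate how an AND arc's cost sums over all of its successors while an OR choice takes the minimum, here is a small recursive evaluator. This is a simplified sketch under assumptions: the graph is a tree, every edge costs 1, and leaf costs come from a fixed heuristic table. The full algorithm would additionally propagate revised estimates and SOLVED labels, as described above:

```python
def solve_and_or(node, tree, h):
    """Evaluate an AND-OR tree bottom-up.  `tree[node]` is a list of
    arcs; each arc is a list of successors (an AND arc has several).
    Leaf cost is h[node]; each successor on an arc costs 1 edge plus
    its own solution cost.  Returns the cheapest cost for `node`."""
    if node not in tree:            # leaf: heuristic gives the cost
        return h[node]
    return min(
        sum(1 + solve_and_or(s, tree, h) for s in arc)
        for arc in tree[node]
    )

# Toy AND-OR tree: A has an OR arc to B and an AND arc to C and D.
tree = {'A': [['B'], ['C', 'D']]}
h = {'B': 5, 'C': 2, 'D': 4}
cost = solve_and_or('A', tree, h)
print(cost)  # min(1 + 5, (1 + 2) + (1 + 4)) = 6
```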
AO*Algorithm
Difference between the A* Algorithm and AO* algorithm
• The A* algorithm and the AO* algorithm both work on best-first
  search.
• They are both informed searches and work on given
  heuristic values.
• A* always gives the optimal solution, but AO* does not
  guarantee the optimal solution.
• Once AO* finds a solution, it does not explore all possible paths,
  but A* explores all paths.
• Compared to the A* algorithm, the AO* algorithm
  uses less memory.
• Unlike the A* algorithm, the AO* algorithm cannot go
  into an endless loop.
Start from node A:
  f(A⇢B) = g(B) + h(B) = 1 + 5 = 6   (here g(n) = 1 is taken by default for each edge cost)
  f(A⇢C+D) = g(C) + h(C) + g(D) + h(D) = 1 + 2 + 1 + 4 = 8   (C and D are added together because they are on an AND arc)
By this calculation the minimum path A⇢B is chosen, i.e. f(A⇢B); explore node B.
The values of E and F are calculated as follows:
  f(B⇢E) = g(E) + h(E) = 1 + 7 = 8
  f(B⇢F) = g(F) + h(F) = 1 + 9 = 10
B's heuristic value differs from its actual value, so the heuristic is
updated and the minimum-cost path is selected:
  f(A⇢B) = g(B) + updated h(B) = 1 + 8 = 9
Comparing f(A⇢B) and f(A⇢C+D), f(A⇢C+D) is shown to be
smaller, i.e. 8 < 9.
Now explore f(A⇢C+D).
The current node is C:
  f(C⇢G) = g(G) + h(G) = 1 + 3 = 4
  f(C⇢H+I) = g(H) + h(H) + g(I) + h(I) = 1 + 0 + 1 + 0 = 2
f(C⇢H+I) is selected as the path with the lowest cost, and the
heuristic is left unchanged because it matches the actual
cost. Paths H and I are solved because the heuristic for those
paths is 0.
Path A⇢D also needs to be calculated because D is on the AND arc:
  f(D⇢J) = g(J) + h(J) = 1 + 0 = 1, so the heuristic of node D
  is updated to 1.
  f(A⇢C+D) = g(C) + h(C) + g(D) + h(D) = 1 + 2 + 1 + 1 = 5
Path f(A⇢C+D) is now solved, and the tree has become a
solved tree.
Constraint satisfaction problems (CSPs)
The goal is to find some problem state that satisfies a given set of
constraints, instead of finding an optimal path to the solution
state.
Applications
Cryptarithmetic Problem
N-Queen problem
Map coloring problem
Example: Map-Coloring
• Variables: set of nodes/regions
• Domains Di = {red, green, blue}
• Constraints: adjacent regions must have different colors
• Start state: all variables are unassigned
• Goal state: assigned values which satisfy constraints
• Operator: assigns a value to any unassigned variable,
  provided it does not conflict with previously assigned variables.
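The map-colouring formulation above can be solved with a simple backtracking sketch. The regions and adjacency relation below are illustrative assumptions for the example:

```python
def color_map(adjacency, colors):
    """Backtracking CSP solver for map colouring: assign each region a
    colour that differs from all already-coloured neighbours; undo the
    assignment and try the next colour whenever a dead end is reached."""
    regions = list(adjacency)
    assignment = {}

    def backtrack(i):
        if i == len(regions):
            return True                     # every region is coloured
        region = regions[i]
        for c in colors:
            if all(assignment.get(n) != c for n in adjacency[region]):
                assignment[region] = c
                if backtrack(i + 1):
                    return True
                del assignment[region]      # undo: try the next colour
        return False                        # dead end: report failure

    return assignment if backtrack(0) else None

# A small illustrative map: WA, NT, SA mutually adjacent; Q borders NT and SA.
adjacency = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
             'SA': ['WA', 'NT', 'Q'], 'Q': ['NT', 'SA']}
result = color_map(adjacency, ['red', 'green', 'blue'])
print(result)
```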
Constraint Satisfaction Problem
The general form of the constraint satisfaction procedure is as
follows:
Until a complete solution is found or until all paths end, do
1. Select an unexpanded node of the search graph.
2. Apply the constraint inference rules to the selected node to
generate all possible new constraints.
3. If the set of constraints contains a contradiction, then report
that this path is a dead end.
4. If the set of constraints describes a complete solution then
report success.
5. If neither a constraint nor a complete solution has been
found then apply the rules to generate new partial solutions.
Insert these partial solutions into the search graph.
CONSTRAINTS:
• Initial state of the problem: the letters D, E, Y, N, R, O, S, M are all unassigned.

      S E N D
    + M O R E
    ---------
    M O N E Y

• C1, C2, C3 stand for the carry variables, respectively.
• Goal state: the digits must be assigned to the letters in such
  a manner that the sum is satisfied.
[Figure residue: a worked trace of a second cryptarithmetic example,
using the constraint E + L = S (i.e. S − E = L) and the carries C2, C3,
C4. Candidate pairs such as (7,2) and (8,3) are tried with different
values of B, conflicting choices are eliminated, and the trace arrives
at L = 5, G = 1, A = 4, E = 3, S = 8, and M = 9, with the remaining
digits 0, 2, 3, 4, 6, 7, 8, 9 available after the first assignments.]
Solution Process:

• Initial guess: M = 1, because the sum of two single digits can
  generate at most a carry of 1.
• With M = 1, O = 0 or 1, because the largest single-digit number
  added to M = 1 can produce a leading digit of either 0 or 1,
  depending on the carry received from the previous column. We
  conclude that O = 0, because M is already 1 and we cannot assign
  the same digit to another letter.
• With M = 1 and O = 0, to get O = 0 we need S = 8 or 9, again
  depending on the carry received from the earlier column.
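The column-by-column deductions above can be checked with a brute-force generate-and-test sketch that tries digit assignments until the arithmetic holds:

```python
from itertools import permutations

def solve_send_more_money():
    """Brute-force generate-and-test for SEND + MORE = MONEY: try
    distinct-digit assignments for the eight letters (with S and M
    non-zero) until the sum holds; return a letter-to-digit dict."""
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:            # leading digits cannot be zero
            continue
        send = 1000 * s + 100 * e + 10 * n + d
        more = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            return dict(zip('SENDMORY', (s, e, n, d, m, o, r, y)))
    return None

sol = solve_send_more_money()
print(sol)  # S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2  (9567 + 1085 = 10652)
```

The unique solution confirms the hand deductions: M = 1, O = 0, and S turns out to be 9 rather than 8.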
Means-Ends Analysis
• Most of the search strategies reason either forward or backward;
  however, often a mixture of the two directions is appropriate.
• Such a mixed strategy would make it possible to solve the major
  parts of a problem first and then solve the smaller problems that
  arise when combining the parts together.
• Such a technique is called "Means-Ends Analysis".
• The means-ends analysis process centres on finding the
  difference between the current state and the goal state. The problem
  space of means-ends analysis has an initial state and one or more
  goal states, a set of operators with a set of preconditions for their
  application, and difference functions that compute the difference
  between two states s(i) and s(j). A problem is solved using
  means-ends analysis by repeatedly detecting a difference between the
  current state and the goal state and applying an operator that
  reduces it.
Algorithm for Means-Ends Analysis:
• Let's take a Current state as CURRENT and Goal State as GOAL, then
following are the steps for the MEA algorithm.
• Step 1: Compare CURRENT to GOAL, if there are no differences
between both then return Success and Exit.
• Step 2: Else, select the most significant difference and reduce it by doing
the following steps until the success or failure occurs.
– Select a new operator O which is applicable for the current difference,
and if there is no such operator, then signal failure.
– Attempt to apply operator O to CURRENT. Make a description of two
  states:
  i) O-Start, a state in which O's preconditions are satisfied.
  ii) O-Result, the state that would result if O were applied in O-Start.
– If
  (FIRST-PART ← MEA(CURRENT, O-Start))
  and
  (LAST-PART ← MEA(O-Result, GOAL))
  are successful, then signal success and return the result of
  combining FIRST-PART, O, and LAST-PART.
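The difference-reduction loop at the heart of the algorithm above can be sketched in Python. This is a toy sketch: states are sets of facts, and the 'Delete' and 'Move' operators below, with their preconditions, delete lists, and add lists, are hypothetical illustrations rather than the textbook's operators:

```python
def mea(current, goal, operators, max_steps=100):
    """Means-ends analysis sketch: while CURRENT differs from GOAL,
    apply an operator whose preconditions hold and whose add/delete
    lists reduce some difference between the two states."""
    plan = []
    for _ in range(max_steps):
        if current == goal:
            return plan
        for name, pre, delete, add in operators:
            applicable = pre <= current
            reduces = (add & (goal - current)) or (delete & (current - goal))
            if applicable and reduces:
                current = (current - delete) | add
                plan.append(name)
                break
        else:
            return None   # no operator reduces any remaining difference
    return None

# Hypothetical toy problem: facts as frozensets, two operators.
start = frozenset({'circle_outside', 'dot_present'})
goal = frozenset({'circle_inside'})
ops = [
    ('Delete', frozenset({'dot_present'}), frozenset({'dot_present'}),
     frozenset()),
    ('Move', frozenset({'circle_outside'}), frozenset({'circle_outside'}),
     frozenset({'circle_inside'})),
]
plan = mea(start, goal, ops)
print(plan)  # ['Delete', 'Move']
```

Each operator is a (name, preconditions, delete-list, add-list) tuple; the recursive FIRST-PART/LAST-PART decomposition of the full algorithm is omitted for brevity.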
Example of Means-Ends Analysis:

Solution:
To solve the above problem, we first find the differences
between the initial state and the goal state, and for each
difference we generate a new state and apply the
operators. The operators we have for this problem are:
• Move
• Delete
• Expand
1. Evaluate the initial state.
2. Apply the Delete operator.
3. Apply the Move operator.
4. Apply the Expand operator.