Minor Project
on
Heuristic Search
Faculty Guide: Mr. Rishi Kumar, Assistant Professor, Computer Science Dept.
Student: Aishwarya Gupta, A23224711007, B.Tech-CSE+MBA, 2011-16
Heuristic Search Techniques
Direct techniques (blind search) are not
always feasible: they may require too much
time or memory.
Weak techniques can be effective if applied
correctly to the right kinds of tasks.
Typically require domain-specific information.
Generate-and-test
Very simple strategy - just keep guessing.
do while goal not accomplished:
    generate a possible solution
    test the solution to see if it is a goal
Heuristics may be used to determine the
specific rules for solution generation.
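A minimal Python sketch of this loop (the names generate_and_test, is_goal, and max_tries, and the toy combination-guessing usage, are illustrative assumptions, not from the slides):

import random

def generate_and_test(generate, is_goal, max_tries=10_000):
    # Generate-and-test: keep proposing candidate solutions until one
    # passes the goal test, or give up after max_tries attempts.
    for _ in range(max_tries):
        candidate = generate()           # generate a possible solution
        if is_goal(candidate):           # test: is it a goal?
            return candidate
    return None                          # no solution found within the budget

# Toy usage: blindly guess a 3-digit combination.
secret = (4, 7, 1)
print(generate_and_test(
    generate=lambda: tuple(random.randint(0, 9) for _ in range(3)),
    is_goal=lambda guess: guess == secret,
))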
Hill Climbing
Variation on generate-and-test:
generation of next state depends on feedback
from the test procedure.
Test now includes a heuristic function that
provides a guess as to how good each possible
state is.
There are a number of ways to use the
information returned by the test procedure.
Simple Hill Climbing
Use the heuristic to move only to states that
are better than the current state.
Always move to a better state when possible.
The process ends when all operators have
been applied and none of the resulting states
are better than the current state.
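A sketch of this in Python, assuming a successor generator successors(state) and a heuristic h(state) where larger values are better (both names are illustrative):

def simple_hill_climbing(start, successors, h):
    # Move to the first successor that is better than the current state;
    # stop when no operator produces a better state (a local optimum).
    current = start
    while True:
        improved = False
        for neighbor in successors(current):   # apply operators in order
            if h(neighbor) > h(current):       # first better state wins
                current = neighbor
                improved = True
                break
        if not improved:
            return current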
Potential Problems with
Simple Hill Climbing
Will terminate when it reaches a local optimum.
The order of application of operators can
make a big difference.
Can't see past a single move in the state
space.
Steepest-Ascent Hill Climbing
A variation on simple hill climbing.
Instead of moving to the first state that is
better, move to the best possible state that is
one move away.
The order of operators does not matter.
Not just climbing to a better state, but
climbing up the steepest slope.
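Under the same assumptions as the simple hill climbing sketch (illustrative successors and h, larger h is better), steepest ascent examines every neighbor before moving:

def steepest_ascent_hill_climbing(start, successors, h):
    # Look at all states one move away and go to the best of them,
    # but only if it is better than the current state.
    current = start
    while True:
        neighbors = list(successors(current))
        if not neighbors:
            return current
        best = max(neighbors, key=h)           # best state one move away
        if h(best) <= h(current):              # no uphill move left: local optimum
            return current
        current = best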
Hill Climbing Termination
Local optimum: all neighboring states are
worse than or the same as the current state.
Plateau: all neighboring states are the same
as the current state.
Ridge: a local optimum caused by the
inability to apply two operators at once.
Heuristic Dependence
Hill climbing is based on the value assigned
to states by the heuristic function.
The heuristic used by a hill climbing
algorithm does not need to be a static
function of a single state.
The heuristic can look ahead many states, or
can use other means to arrive at a value for
a state.
Best-First Search
Combines the advantages of Breadth-First
and Depth-First search.
DFS: follows a single path; no need to
generate all competing paths.
BFS: doesn't get caught in loops or dead-end
paths.
Best First Search: explore the most
promising path seen so far.
Best-First Search (cont.)
While goal not reached:
1. Generate all potential successor states and
add to a list of states.
2. Pick the best state in the list and go to it.
Similar to steepest-ascent, but don't throw
away states that are not chosen.
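A minimal sketch of this loop in Python, assuming hashable states, an illustrative successor generator successors(state), and a heuristic h(state) where lower values are more promising:

import heapq
import itertools

def best_first_search(start, successors, h, is_goal):
    counter = itertools.count()                 # tie-breaker so states are never compared
    open_list = [(h(start), next(counter), start)]
    seen = {start}
    while open_list:
        _, _, state = heapq.heappop(open_list)  # most promising state generated so far
        if is_goal(state):
            return state
        for child in successors(state):
            if child not in seen:               # generated states are kept, not thrown away
                seen.add(child)
                heapq.heappush(open_list, (h(child), next(counter), child))
    return None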
Simulated Annealing
Based on the physical process of annealing a
metal to reach its best (minimal energy) state.
Hill climbing with a twist:
allow some moves downhill (to worse states);
start out allowing large downhill moves (to
much worse states) and gradually allow only
small downhill moves.
Simulated Annealing (cont.)
The search initially jumps around a lot,
exploring many regions of the state space.
The jumping is gradually reduced and the
search becomes a simple hill climb (search
for local optimum).
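A sketch of the idea in Python; the parameters t0, cooling, and steps, and the assumption that value(state) is higher for better states, are illustrative choices rather than anything prescribed by the slides:

import math
import random

def simulated_annealing(start, successors, value, t0=1.0, cooling=0.99, steps=10_000):
    # Always accept uphill moves; accept downhill moves with probability
    # exp(delta / T).  As the temperature T is lowered, large downhill
    # jumps become rare and the search turns into ordinary hill climbing.
    current, temperature = start, t0
    for _ in range(steps):
        neighbor = random.choice(list(successors(current)))
        delta = value(neighbor) - value(current)    # > 0 means the move is uphill
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = neighbor                      # accept the (possibly worse) move
        temperature *= cooling                      # gradually reduce the jumping
    return current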
A* Algorithm
The A* algorithm uses a modified
evaluation function and a Best-First search.
A* minimizes the total path cost.
Under the right conditions (an admissible
heuristic), A* is guaranteed to find the
cheapest-cost solution.
A* evaluation function
The evaluation function f is an estimate of
the value of a node x given by:
f(x) = g(x) + h(x)
g(x) is the cost to get from the start state to
state x.
h(x) is the estimated cost to get from state
x to the goal state (the heuristic).
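As an illustration (the numbers are made up, not from the slides): if the path from the start to x has cost g(x) = 4 and the heuristic estimates h(x) = 6 to reach the goal, then f(x) = 4 + 6 = 10, and A* will prefer x over any node with a larger f value.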
Modified State Evaluation
Value of each state is a combination of:
the cost of the path to the state
estimated cost of reaching a goal from the state.
The idea is to use the path to a state to
determine (partially) the rank of the state
when compared to other states.
This doesn't make sense for DFS or BFS,
but is useful for Best-First Search.
A* Algorithm
The general idea is:
Best First Search with the modified evaluation
function.
h(x) is an estimate of the number of steps from
state x to a goal state.
loops are avoided - we don't expand the same
state twice.
Information about the path to the goal state is
retained.
A* Algorithm
1. Create a priority queue of search nodes (initially just
the start state). Priority is determined by the function f.
2. While queue not empty and goal not found:
get best state x from the queue.
If x is not a goal state:
generate all possible children of x (and save
path information with each node).
Apply f to each new node and add to queue.
Remove duplicates from queue (using f to pick
the best).
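A minimal Python sketch of these steps, assuming hashable states and an illustrative successors(state) that yields (child, step_cost) pairs; the helper names are assumptions, not part of the slides:

import heapq
import itertools

def a_star(start, successors, h, is_goal):
    counter = itertools.count()                      # tie-breaker so states are never compared
    queue = [(h(start), next(counter), 0, start, [start])]   # (f, tie, g, state, path so far)
    best_g = {start: 0}                              # cheapest known cost to reach each state
    while queue:
        _, _, g, state, path = heapq.heappop(queue)  # best state x by f
        if is_goal(state):
            return path, g                           # solution path and its total cost
        for child, step_cost in successors(state):
            g_child = g + step_cost
            # Duplicate removal: keep only the cheapest path found to each state.
            if g_child < best_g.get(child, float("inf")):
                best_g[child] = g_child
                f_child = g_child + h(child)         # f(x) = g(x) + h(x)
                heapq.heappush(queue, (f_child, next(counter), g_child, child, path + [child]))
    return None, float("inf")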