Informed Search Algorithms
5) So now the goal node G has been reached, and the path we will follow is A -> C -> F -> G.
A* Search Algorithm:
In the A* search algorithm, we use a search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows, and this sum is called the fitness number:
f(n) = g(n) + h(n)
where g(n) is the cost to reach node n from the start state and h(n) is the estimated cost from n to the goal.
At each point in the search space, only the node with the lowest value of f(n) is expanded, and the algorithm terminates when the goal node is found.
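For example (the numbers here are purely illustrative), if reaching a node n from the start costs g(n) = 3 and the heuristic estimates h(n) = 4 from n to the goal, then f(n) = 3 + 4 = 7, and A* will expand n before any node on the OPEN list whose f-value is higher than 7.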
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not; if the list is empty, then return failure and stop.
Step 3: Select the node n from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise, go to Step 4.
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, compute the evaluation function for n' and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, then its back pointer should be updated so that it reflects the lowest g(n') value.
Step 6: Return to Step 2.
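A minimal sketch of these steps in Python, assuming the graph is given as a dictionary mapping each node to a list of (successor, edge cost) pairs and the heuristic as a dictionary of h-values; the function and variable names are illustrative, not part of the algorithm description above:

import heapq

def astar(graph, h, start, goal):
    # graph: dict mapping node -> list of (successor, edge cost) pairs
    # h: dict mapping node -> heuristic estimate of the cost to the goal
    open_list = [(h[start], 0, start, [start])]   # OPEN list ordered by f = g + h
    closed = set()                                # CLOSED list
    best_g = {start: 0}                           # cheapest known g(n) for each node
    while open_list:                              # Step 2: fail when OPEN is empty
        f, g, node, path = heapq.heappop(open_list)   # Step 3: node with smallest f
        if node == goal:
            return path, g                        # goal reached: return path and cost
        if node in closed:
            continue
        closed.add(node)                          # Step 4: move n to CLOSED
        for succ, cost in graph.get(node, []):    # generate all successors of n
            new_g = g + cost
            # Step 5: keep only the cheapest known path to each successor
            if succ not in best_g or new_g < best_g[succ]:
                best_g[succ] = new_g
                heapq.heappush(open_list, (new_g + h[succ], new_g, succ, path + [succ]))
    return None, float("inf")                     # OPEN became empty: failure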
Advantages:
o A* search is complete, and it is optimal when the heuristic is admissible (see the optimality conditions below).
o It can solve very complex search problems and is one of the most widely used path-finding algorithms.
Disadvantages:
o It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
o A* search algorithm has complexity issues: in the worst case, the number of nodes expanded grows exponentially with the depth of the solution.
o The main drawback of A* is its memory requirement, as it keeps all generated nodes in memory, so it is not practical for many large-scale problems.
Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of each state is given, so we will calculate f(n) for each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach a node from the start state.
Solution:
Initialization: {(S, 5)}
Iteration 3: {(S -> A -> C -> G, 6), (S -> A -> C -> D, 11), (S -> A -> B, 7), (S -> G, 10)}
Iteration 4 gives the final result: S -> A -> C -> G provides the optimal path with cost 6.
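The original graph and heuristic table are not reproduced here, so the edge costs and h-values below are hypothetical, chosen only to be consistent with the iterations listed above; running the astar sketch from earlier on them reproduces the stated result:

graph = {"S": [("A", 1), ("G", 10)],
         "A": [("B", 2), ("C", 1)],
         "C": [("D", 3), ("G", 4)]}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}

path, cost = astar(graph, h, "S", "G")
print(path, cost)   # ['S', 'A', 'C', 'G'] 6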
Points to remember:
o A* algorithm returns the first path it finds to the goal and does not search all remaining paths.
o The efficiency of the A* algorithm depends on the quality of the heuristic.
o A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.
Conditions for optimality of A*:
o Admissible: The first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
o Consistency: The second condition is consistency, which is required only for A* graph search.
If the heuristic function is admissible, then A* tree search will always find the least cost
path.
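In symbols (these definitions are standard, though not spelled out above): admissibility requires h(n) <= h*(n) for every node n, where h*(n) is the true cost of the cheapest path from n to the goal, so the heuristic never overestimates. Consistency requires h(n) <= c(n, a, n') + h(n') for every successor n' of n reached by an action a with step cost c(n, a, n'). A consistent heuristic is always admissible.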
Hill Climbing Algorithm:
Following are some main features of the Hill Climbing Algorithm:
o Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
o Greedy approach: Hill-climbing search moves in the direction which optimizes the cost.
o No backtracking: It does not backtrack through the search space, as it does not remember previous states.
State-space Diagram for Hill Climbing:
In the state-space landscape, the Y-axis represents the function value, which can be an objective function or a cost function, and the X-axis represents the state space. If the function on the Y-axis is a cost function, then the goal of the search is to find the global minimum or a local minimum. If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum or a local maximum.
Global Maximum: The global maximum is the best possible state in the state-space landscape. It has the highest value of the objective function.
Flat local maximum: It is a flat region of the landscape where all the neighbor states of the current state have the same value.
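As a concrete but hypothetical illustration of such a landscape, the one-dimensional objective function below has a local maximum near x = -1 and a global maximum near x = 1.62; the hill-climbing sketches later in this section use it as their landscape:

def objective(x):
    # hypothetical 1-D objective: local maximum near x = -1 (value -5),
    # global maximum near x = 1.62 (value about 6.09)
    return -(x**2 - 2)**2 + 4*x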
Algorithm for Simple Hill Climbing:
o Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.
o Step 2: Loop until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
a. If it is the goal state, then return success and quit.
b. Else, if it is better than the current state, then make the new state the current state.
c. Else, if it is not better than the current state, then return to Step 2.
o Step 5: Exit.
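A minimal sketch of simple hill climbing in Python, assuming a numeric state, a fixed step size, and the hypothetical objective function defined above (the names and parameters are illustrative):

def simple_hill_climbing(objective, start, step=0.1, max_iters=1000):
    current = start
    for _ in range(max_iters):
        moved = False
        # Step 3: apply an operator (move right or left) and check the new state
        for neighbor in (current + step, current - step):
            # Step 4b: accept the first neighbor that is better than the current state
            if objective(neighbor) > objective(current):
                current = neighbor
                moved = True
                break
        if not moved:
            # no operator improves the state: stop (a local maximum or plateau)
            break
    return current

With the hypothetical objective above, starting from x = -2 this sketch stops at the local maximum near x = -1, while starting from x = 0.5 it reaches the global maximum near x = 1.6, which illustrates the local-maximum problem discussed below.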
Algorithm for Steepest-Ascent Hill Climbing:
o Step 1: Evaluate the initial state; if it is the goal state, then return success and stop, else make the initial state the current state.
o Step 2: Loop until a solution is found or the current state does not change.
a. Let SUCC be a state such that any successor of the current state will be better than it.
b. For each operator that applies to the current state:
i. Apply the operator and generate a new state.
ii. Evaluate the new state.
iii. If it is the goal state, then return it and quit; else compare it to SUCC.
iv. If it is better than SUCC, then set SUCC to the new state.
c. If SUCC is better than the current state, then set the current state to SUCC.
o Step 3: Exit.
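A corresponding sketch of steepest-ascent hill climbing, under the same assumptions as the simple hill-climbing sketch above:

def steepest_ascent_hill_climbing(objective, start, step=0.1, max_iters=1000):
    current = start
    for _ in range(max_iters):
        # examine all neighbors and keep the best one (SUCC)
        succ = max((current + step, current - step), key=objective)
        # move only if the best successor improves on the current state
        if objective(succ) <= objective(current):
            break  # local maximum, plateau, or ridge: the state stops changing
        current = succ
    return current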
Problems in Hill Climbing Algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of its neighboring states, but a higher state (the global maximum) still exists elsewhere, so the search gets stuck.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state have the same value. Because of this, the algorithm cannot find the best direction in which to move, and a hill-climbing search may get lost in the plateau area.
Solution: The solution for the plateau is to take big steps, or very little steps, while searching. Randomly select a state which is far away from the current state, so that it is possible for the algorithm to find a non-plateau region, as in the sketch below.
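One way to sketch this random-jump idea in code, reusing the steepest-ascent function above; the number of jumps and the state bounds are illustrative assumptions:

import random

def hill_climbing_with_random_jumps(objective, start, step=0.1, jumps=5, low=-3.0, high=3.0):
    best = current = start
    for _ in range(jumps):
        current = steepest_ascent_hill_climbing(objective, current, step)
        if objective(current) > objective(best):
            best = current
        # stuck on a plateau or local maximum: jump to a random, far-away state
        current = random.uniform(low, high)
    return best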
3. Ridges: A ridge is a special form of the local maximum. It is an area which is higher than its surrounding areas but which itself has a slope, and it cannot be reached in a single move.
Simulated Annealing:
A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. And if the algorithm instead applies a random walk, moving to a randomly chosen successor, then it may be complete but not efficient. Simulated annealing is an algorithm which yields both efficiency and completeness.
In metallurgy, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state. The same idea is used in simulated annealing, in which the algorithm picks a random move instead of the best move. If the random move improves the state, then the algorithm follows that path. Otherwise, the algorithm accepts the move with a probability of less than 1, so it may move downhill and continue the search from a worse state.
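A minimal sketch, assuming the same numeric state and hypothetical objective function used in the hill-climbing sketches above; the temperature schedule and parameter values are illustrative:

import math
import random

def simulated_annealing(objective, start, step=0.5, temp=1.0, cooling=0.995, min_temp=1e-3):
    current = start
    while temp > min_temp:
        # pick a random move instead of the best move
        candidate = current + random.uniform(-step, step)
        delta = objective(candidate) - objective(current)
        # always accept an improving move; accept a worsening (downhill) move
        # only with probability exp(delta / temp), which shrinks as temp cools
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        temp *= cooling  # gradual cooling schedule
    return current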