Informed Search Algorithms

Jntuh Btech 3-2 CSE(AIML)

Uploaded by

RITHIK ROY

Greedy Best First Search algorithm

What is the Greedy-Best-first search algorithm?


Greedy Best-First Search is an AI search algorithm that attempts to find the
most promising path from a given starting point to a goal. It always pursues
the path that appears most promising, regardless of whether or not that path is
actually the shortest. The algorithm works by using a heuristic function h(n),
which estimates the cost of reaching the goal from node n. At each step it
expands the frontier node with the lowest heuristic value, ignoring the cost
already paid to reach that node. This process is repeated until the goal is
reached.
How Greedy Best-First Search Works?
• The algorithm keeps a frontier (priority queue) of candidate nodes,
ordered by their heuristic value h(n).
• At each step it removes the node with the lowest h(n) from the frontier,
expands it, and adds its successors to the frontier.
• Because only the heuristic estimate is used, and not the cost accumulated
so far, the search is "greedy": it always chases the node that looks
closest to the goal.
• This process is repeated until the goal node is reached.
As an example of the best-first search algorithm, consider the graph below,
where we have to find a path from A to G. The values in red represent the
heuristic value of reaching the goal node G from the current node.

1) We start from A. From A there are direct paths to node B (heuristic value
32), to C (heuristic value 25), and to D (heuristic value 35).

2) As per the best-first search algorithm, we choose the path with the lowest
heuristic value; currently C has the lowest value among these nodes, so we go
from A to C.

3) From C we have direct paths to F (heuristic value 17) and to E (heuristic
value 19), so we go from C to F.

4) From F we have a direct path to the goal node G (heuristic value 0), so we
go from F to G.

5) The goal node G has now been reached, and the path followed is A->C->F->G.
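The walk-through above can be sketched in Python with a priority queue keyed on h(n). The adjacency list and the heuristic value assumed for A are illustrative fillers; only the heuristic values shown in the example are taken from the figure. The expansion order it produces matches steps 1-5.

```python
import heapq

# Hypothetical adjacency list matching the example above; h maps each
# node to its estimated distance to the goal G (h["A"] is an assumed value).
graph = {
    "A": ["B", "C", "D"],
    "B": [], "D": [],
    "C": ["E", "F"],
    "E": [], "F": ["G"],
    "G": [],
}
h = {"A": 40, "B": 32, "C": 25, "D": 35, "E": 19, "F": 17, "G": 0}

def greedy_best_first(start, goal):
    """Always expand the frontier node with the smallest heuristic h(n)."""
    frontier = [(h[start], start, [start])]   # priority queue keyed on h(n)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph[node]:
            heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None   # goal unreachable

print(greedy_best_first("A", "G"))   # ['A', 'C', 'F', 'G']
```

Note that the accumulated path cost never enters the priority: that is exactly what makes the search greedy rather than optimal.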

Advantages of Greedy Best-First Search:


• Simple and Easy to Implement: Greedy Best-First Search is a relatively
straightforward algorithm, making it easy to implement.
• Fast and Efficient: Greedy Best-First Search is a very fast algorithm,
making it ideal for applications where speed is essential.
• Low Memory Requirements: Greedy Best-First Search requires only a
small amount of memory, making it suitable for applications with limited
memory.
• Flexible: Greedy Best-First Search can be adapted to different types of
problems and can be easily extended to more complex problems.
• Efficiency: If the heuristic function used in Greedy Best-First Search is a
good estimate of how close a node is to the solution, the algorithm can be
very efficient and find a solution quickly, even in large search spaces.
Disadvantages of Greedy Best-First Search:
• Inaccurate Results: Greedy Best-First Search is not always guaranteed to
find the optimal solution, as it is only concerned with finding the most
promising path.
• Local Optima: Greedy Best-First Search can get stuck in local optima,
meaning that the path chosen may not be the best possible path.
• Heuristic Function: Greedy Best-First Search requires a heuristic function
in order to work, which adds complexity to the algorithm.
• Lack of Completeness: Greedy Best-First Search is not a complete
algorithm, meaning it may not always find a solution even if one exists. This
can happen if the algorithm gets stuck in a cycle or if the search space is
very complex.
Applications of Greedy Best-First Search:
• Pathfinding: Greedy Best-First Search is used to find the shortest path
between two points in a graph. It is used in many applications such as
video games, robotics, and navigation systems.
• Machine Learning: Greedy Best-First Search can be used in machine
learning algorithms to find the most promising path through a search
space.
• Optimization: Greedy Best-First Search can be used to optimize the
parameters of a system in order to achieve the desired result.
• Game AI: Greedy Best-First Search can be used in game AI to evaluate
potential moves and choose the best one.
• Navigation: Greedy Best-First Search can be used in navigation systems to
find the shortest path between two locations.
• Natural Language Processing: Greedy Best-First Search can be used in
natural language processing tasks such as language translation or speech
recognition to generate the most likely sequence of words.
• Image Processing: Greedy Best-First Search can be used in image
processing to segment images into regions of interest.
A* Search Algorithm:
A* search is the most commonly known form of best-first search. It uses the
heuristic function h(n) together with g(n), the cost to reach node n from the start
state. It combines the features of UCS and greedy best-first search, which lets it
solve problems efficiently. The A* search algorithm finds the shortest path through
the search space using the heuristic function. It expands a smaller search tree and
provides an optimal result faster. The A* algorithm is similar to UCS except that it
orders nodes by g(n)+h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to reach
the node. Hence we can combine both costs as follows, and this sum is called the
fitness number:

f(n) = g(n) + h(n)

At each point in the search space, only the node with the lowest value of f(n) is
expanded, and the algorithm terminates when the goal node is found.

Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.

Step 2: Check whether the OPEN list is empty; if it is, return failure and stop.

Step 3: Select the node n from the OPEN list which has the smallest value of the
evaluation function (g+h). If node n is the goal node, return success and stop;
otherwise go to Step 4.

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED
list. For each successor n', check whether n' is already in the OPEN or CLOSED list;
if not, compute the evaluation function for n' and place it into the OPEN list.

Step 5: Otherwise, if node n' is already in the OPEN or CLOSED list, redirect its
back pointer to the parent that gives the lowest g(n') value.

Step 6: Return to Step 2.
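A minimal Python sketch of these steps, using a priority queue as the OPEN list and a best-known-cost table in place of an explicit CLOSED list (keeping only the cheapest g(n) per node plays the role of the back-pointer update in Step 5). The small weighted graph and admissible heuristic at the bottom are hypothetical values chosen for illustration.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the OPEN-list node with the smallest f(n) = g(n) + h(n).
    graph: dict mapping node -> list of (neighbor, edge_cost) pairs.
    h:     dict mapping node -> heuristic estimate of cost to the goal."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}                           # cheapest g(n) found so far
    while open_list:                              # Step 2: fail when exhausted
        f, g, node, path = heapq.heappop(open_list)   # Step 3: smallest f
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):     # Step 4: expand successors
            g2 = g + cost
            # Re-open a node only if this path is cheaper -- the
            # "redirect the back pointer" rule of Step 5.
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(open_list, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")                     # OPEN list empty: failure

# Hypothetical graph and admissible heuristic, for illustration only.
graph = {"S": [("A", 1), ("G", 10)], "A": [("B", 2), ("C", 1)],
         "C": [("D", 3), ("G", 4)]}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}
print(a_star(graph, h, "S", "G"))   # (['S', 'A', 'C', 'G'], 6)
```

The direct edge S->G (cost 10) is pushed early but never wins, because the detour through A and C reaches G with f = 6 first.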

Advantages:

o A* search algorithm performs better than most other search algorithms.


o A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.

Disadvantages:

o It does not always produce the shortest path, as it is mostly based on
heuristics and approximation.
o A* search algorithm has some complexity issues.
o The main drawback of A* is memory requirement as it keeps all generated
nodes in the memory, so it is not practical for various large-scale problems.

Example:
In this example, we will traverse the given graph using the A* algorithm. The
heuristic value of each state is given in the table below, so we will calculate the
f(n) of each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to
reach the node from the start state.

Solution:
Initialization: {(S, 5)}

Iteration1: {(S--> A, 4), (S-->G, 10)}

Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}

Iteration 4 gives the final result: S--->A--->C--->G, which is the optimal path
with cost 6.

Points to remember:

o A* algorithm returns the path which occurred first, and it does not search for all
remaining paths.
o The efficiency of A* algorithm depends on the quality of heuristic.
o A* algorithm expands all nodes which satisfy the condition f(n) < C*, where
C* is the cost of the optimal solution.

Complete: A* algorithm is complete as long as:

o The branching factor is finite.

o Every action has a fixed cost greater than zero.
Optimal: A* search algorithm is optimal if it follows below two conditions:

o Admissible: the first condition required for optimality is that h(n) should be an
admissible heuristic for A* tree search. An admissible heuristic never
overestimates the true cost, i.e. it is optimistic in nature.
o Consistency: the second condition, consistency, is required only for A* graph
search.

If the heuristic function is admissible, then A* tree search will always find the least cost
path.

Time Complexity: The time complexity of the A* search algorithm depends on the
heuristic function, and the number of nodes expanded is exponential in the depth of
the solution d. So the time complexity is O(b^d), where b is the branching factor.

Space Complexity: The space complexity of the A* search algorithm is O(b^d).


Hill Climbing Algorithm in Artificial Intelligence
o Hill climbing algorithm is a local search algorithm which continuously moves in
the direction of increasing elevation/value to find the peak of the mountain or
best solution to the problem. It terminates when it reaches a peak value where
no neighbor has a higher value.
o Hill climbing algorithm is a technique which is used for optimizing the
mathematical problems. One of the widely discussed examples of Hill climbing
algorithm is Traveling-salesman Problem in which we need to minimize the
distance traveled by the salesman.
o It is also called greedy local search as it only looks at its immediate
neighbor states and not beyond them.
o A node of hill climbing algorithm has two components which are state and
value.
o Hill Climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle the search tree or graph
as it only keeps a single current state.

Features of Hill Climbing:


Following are some main features of Hill Climbing Algorithm:

o Generate and Test variant: Hill Climbing is a variant of the Generate and Test
method. The Generate and Test method produces feedback which helps to
decide which direction to move in the search space.
o Greedy approach: Hill-climbing algorithm search moves in the direction which
optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not
remember the previous states.

State-space Diagram for Hill Climbing:


The state-space landscape is a graphical representation of the hill-climbing
algorithm, showing a graph between the various states of the algorithm and the
objective function/cost.

On the Y-axis we take the function, which can be an objective function or a cost
function, and the state space on the X-axis. If the function on the Y-axis is cost,
the goal of the search is to find the global minimum. If the function on the Y-axis
is an objective function, the goal of the search is to find the global maximum.

Different regions in the state space landscape:


Local Maximum: Local maximum is a state which is better than its neighbor states,
but there is also another state which is higher than it.

Global Maximum: Global maximum is the best possible state of state space
landscape. It has the highest value of objective function.

Current state: It is a state in a landscape diagram where an agent is currently present.

Flat local maximum: It is a flat space in the landscape where all the neighbor states
of current states have the same value.

Shoulder: It is a plateau region which has an uphill edge.

Types of Hill Climbing Algorithm:


o Simple hill Climbing:
o Steepest-Ascent hill-climbing:
o Stochastic hill Climbing:

1. Simple Hill Climbing:


Simple hill climbing is the simplest way to implement a hill climbing algorithm. It
evaluates only one neighbor node state at a time and selects the first one that
improves the current cost, setting it as the current state. It checks only one
successor state, and if that successor is better than the current state, it moves
there; otherwise it stays in the same state. This algorithm has the following
features:

o Less time consuming


o Less optimal solution and the solution is not guaranteed

Algorithm for Simple Hill Climbing:

o Step 1: Evaluate the initial state; if it is the goal state, return success and stop.
o Step 2: Loop until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
a. If it is the goal state, then return success and quit.
b. Else, if it is better than the current state, then make it the current
state.
c. Else, if it is not better than the current state, then return to Step 2.
o Step 5: Exit.
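A minimal sketch of these steps in Python, assuming a toy integer-valued objective; the function, the neighbor generator, and the step limit are illustrative choices, not part of the algorithm statement above.

```python
def simple_hill_climbing(f, state, neighbors, max_steps=1000):
    """Move to the FIRST neighbor that improves f; stop when none does."""
    for _ in range(max_steps):
        for nxt in neighbors(state):
            if f(nxt) > f(state):     # first improving successor wins
                state = nxt
                break
        else:
            return state              # no neighbor is better: local maximum
    return state

# Hypothetical objective: maximise f(x) = -(x - 3)**2 over integer states.
f = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(f, 0, neighbors))   # 3
```

Because the objective here has a single peak, the climber reaches the true maximum; on a multi-peaked function it would stop at whichever local maximum it reaches first.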

2. Steepest-Ascent hill climbing:


The steepest-ascent algorithm is a variation of the simple hill climbing algorithm.
This algorithm examines all the neighboring nodes of the current state and selects
the neighbor node that is closest to the goal state. It consumes more time because
it examines multiple neighbors.

Algorithm for Steepest-Ascent hill climbing:

o Step 1: Evaluate the initial state; if it is the goal state, return success and stop,
else make the initial state the current state.
o Step 2: Loop until a solution is found or the current state does not change.
a. Initialise SUCC so that any successor of the current state will be
better than it.
b. For each operator that applies to the current state:
a. Apply the operator and generate a new state.
b. Evaluate the new state.
c. If it is the goal state, then return it and quit; else compare it to
SUCC.
d. If it is better than SUCC, then set the new state as SUCC.
c. After all operators are tried, if SUCC is better than the current state,
then set the current state to SUCC.
o Step 3: Exit.
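The same idea can be sketched compactly: examine every neighbor, keep the best one as SUCC, and move only if SUCC beats the current state. The objective and neighbor generator below are illustrative assumptions.

```python
def steepest_ascent(f, state, neighbors):
    """Examine ALL neighbors and move to the best one (SUCC in the steps
    above); stop when no neighbor is better than the current state."""
    while True:
        succ = max(neighbors(state), key=f)   # best successor of current state
        if f(succ) <= f(state):               # SUCC not better: stop at a peak
            return state
        state = succ

# Hypothetical objective: maximise f(x) = -(x - 3)**2 over integer states.
f = lambda x: -(x - 3) ** 2
print(steepest_ascent(f, 10, lambda x: [x - 1, x + 1]))   # 3
```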

3. Stochastic hill climbing:


Stochastic hill climbing does not examine all of its neighbors before moving.
Instead, this search algorithm selects one neighbor node at random and decides
whether to move to it or to examine another state.
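A sketch of the stochastic variant, with the same illustrative single-peaked objective as an assumption: one neighbor is drawn at random each step, and the move is accepted only if it improves the objective.

```python
import random

def stochastic_hill_climbing(f, state, neighbors, max_steps=10_000):
    """Pick ONE random neighbor per step; accept it only if it improves f."""
    for _ in range(max_steps):
        nxt = random.choice(neighbors(state))
        if f(nxt) > f(state):
            state = nxt               # accept only improving random moves
    return state

# Hypothetical objective: maximise f(x) = -(x - 3)**2 over integer states.
f = lambda x: -(x - 3) ** 2
random.seed(0)
print(stochastic_hill_climbing(f, 0, lambda x: [x - 1, x + 1]))   # 3
```

With a generous step budget the random walk reliably drifts uphill to the peak; like the other variants, it can still stop at a local maximum of a multi-peaked function.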

Problems in Hill Climbing Algorithm:


1. Local Maximum: A local maximum is a peak state in the landscape which is better
than each of its neighboring states, but there is another state also present which is
higher than the local maximum.

Solution: A backtracking technique can be a solution to the local maximum problem
in the state-space landscape. Keep a list of promising paths so that the algorithm
can backtrack in the search space and explore other paths as well.

2. Plateau: A plateau is a flat area of the search space in which all the neighbor
states of the current state have the same value; because of this, the algorithm
cannot find a best direction in which to move. A hill-climbing search may get lost
in the plateau area.
Solution: The solution for a plateau is to take bigger or very small steps while
searching. Randomly select a state far away from the current state, so that the
algorithm may land in a non-plateau region.

3. Ridges: A ridge is a special form of the local maximum. It has an area which is higher
than its surrounding areas, but itself has a slope, and cannot be reached in a single
move.

Solution: With the use of bidirectional search, or by moving in different directions, we


can improve this problem.

Simulated Annealing:
A hill-climbing algorithm which never makes a move towards a lower value is
guaranteed to be incomplete, because it can get stuck on a local maximum. And if
the algorithm performs a random walk, moving to successors at random, it may be
complete but is not efficient. Simulated Annealing is an algorithm which yields
both efficiency and completeness.

In metallurgical terms, annealing is the process of heating a metal or glass to a
high temperature and then cooling it gradually, which allows the material to reach
a low-energy crystalline state. The same idea is used in simulated annealing, where
the algorithm picks a random move instead of the best move. If the random move
improves the state, it is accepted. Otherwise, the algorithm accepts the worsening
(downhill) move with some probability less than 1, and this probability decreases
as the "temperature" is lowered.
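A sketch of the idea in Python. The objective function, neighbor generator, starting temperature and cooling schedule are all illustrative assumptions; the piecewise objective has a local maximum at x = -2 and the global maximum at x = 3, so a plain hill climber started at x = -2 would be stuck there.

```python
import math
import random

def simulated_annealing(f, state, neighbors, t0=10.0, cooling=0.995, t_min=1e-3):
    """Pick a random move; always accept improvements, and accept a
    worsening move with probability exp(delta / T), which shrinks as
    the temperature T is gradually lowered."""
    best = state
    t = t0
    while t > t_min:
        nxt = random.choice(neighbors(state))
        delta = f(nxt) - f(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt                  # downhill moves allowed early on
        if f(state) > f(best):
            best = state                 # remember the best state visited
        t *= cooling                     # gradual cooling schedule
    return best

# Hypothetical objective: local maximum at x = -2 (value -5),
# global maximum at x = 3 (value 0).
f = lambda x: -(x - 3) ** 2 if x > 0 else -(x + 2) ** 2 - 5
neighbors = lambda x: [x - 1, x + 1]
random.seed(1)
print(simulated_annealing(f, -2, neighbors))
```

Early on, the high temperature lets the search accept the downhill steps needed to cross the valley between the two peaks; as the temperature falls, it behaves more and more like plain hill climbing. Because moves are random, individual runs can differ, but the search usually escapes the local maximum and reports a state at or near x = 3.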
