Problem Solving by Searching AI
Informed Search Algorithms
So far we have talked about uninformed search algorithms, which looked
through the search space for all possible solutions of the problem
without any additional knowledge about the search space. An informed
search algorithm, by contrast, uses knowledge such as how far we are
from the goal, the path cost, and how to reach the goal node. This
knowledge helps the agent explore less of the search space and find the
goal node more efficiently.
The informed search algorithm is more useful for large search spaces.
Informed search uses the idea of a heuristic, so it is also called
heuristic search.
Heuristic function: A heuristic is a function used in informed search
to find the most promising path. It takes the current state of the
agent as input and produces an estimate of how close the agent is to
the goal. The heuristic method might not always give the best solution,
but it is guaranteed to find a good solution in reasonable time. The
heuristic function estimates how close a state is to the goal. It is
represented by h(n), and it estimates the cost of an optimal path
between a pair of states. The value of the heuristic function is always
positive, and it should satisfy
h(n) <= h*(n)
where h(n) is the heuristic cost and h*(n) is the estimated (actual)
cost. Hence the heuristic cost should be less than or equal to the
estimated cost.
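As a concrete illustration (our own example, not from the text above): on a grid where every move costs 1, the Manhattan distance to the goal is an admissible heuristic, since it never overestimates the true cost h*(n):

```python
# Hypothetical example: Manhattan distance as an admissible heuristic
# h(n) on a 4-connected grid where each move costs 1.
def manhattan_h(state, goal):
    # Estimate of the cost from `state` to `goal`; it never
    # overestimates, so h(n) <= h*(n) holds for every state.
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan_h((0, 0), (2, 3)))  # 5: at least 5 unit moves are needed
```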
Pure Heuristic Search:
Pure heuristic search is the simplest form of heuristic search algorithm. It
expands nodes based on their heuristic value h(n). It maintains two lists,
OPEN and CLOSED. The CLOSED list holds nodes which have already been
expanded, and the OPEN list holds nodes which have not yet been expanded.
On each iteration, the node n with the lowest heuristic value is expanded,
all its successors are generated, and n is placed on the CLOSED list. The
algorithm continues until a goal state is found.
In the informed search we will discuss two main algorithms which are given
below:
•Best First Search Algorithm (Greedy Search)
•A* Search Algorithm
Best-first Search Algorithm (Greedy Search):
The greedy best-first search algorithm always selects the path which appears
best at that moment. It is a combination of depth-first search and breadth-
first search: by using a heuristic function to guide the search, best-first
search lets us take advantage of both algorithms. With the help of best-first
search, at each step we can choose the most promising node. In the best-first
search algorithm, we expand the node which is closest to the goal node, where
closeness is estimated by the heuristic function. Greedy best-first search is
implemented with a priority queue.
Best first search algorithm:
•Step 1: Place the starting node into the OPEN list.
•Step 2: If the OPEN list is empty, Stop and return failure.
•Step 3: Remove the node n from the OPEN list which has the lowest value of
h(n), and place it in the CLOSED list.
•Step 4: Expand the node n, and generate the successors of node n.
•Step 5: Check each successor of node n, and find whether any node is a goal
node or not. If any successor node is goal node, then return success and
terminate the search, else proceed to Step 6.
•Step 6: For each successor node, the algorithm computes the evaluation
function f(n) and then checks whether the node is already in the OPEN or
CLOSED list. If the node is in neither list, add it to the OPEN list.
•Step 7: Return to Step 2.
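The steps above can be sketched in Python (an illustrative sketch; the function name and the `successors`/`h` callbacks are our own, not from the text):

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    # Greedy best-first search following the steps above: OPEN is a
    # priority queue ordered by h(n); CLOSED holds expanded nodes.
    # Returns a path from `start` to `goal`, or None on failure.
    open_list = [(h(start), start, [start])]        # Step 1
    closed = set()
    while open_list:                                # Step 2: empty -> failure
        _, n, path = heapq.heappop(open_list)       # Step 3: lowest h(n)
        if n in closed:                             # already expanded earlier
            continue
        closed.add(n)
        for s in successors(n):                     # Step 4: expand n
            if s == goal:                           # Step 5: goal test
                return path + [s]
            if s not in closed:                     # Step 6: new nodes -> OPEN
                heapq.heappush(open_list, (h(s), s, path + [s]))
    return None                                     # Step 2: failure

# Toy graph and heuristic values (made up for illustration).
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'],
         'C': ['G'], 'D': ['G'], 'G': []}
h = {'S': 6, 'A': 2, 'B': 3, 'C': 1, 'D': 4, 'G': 0}
print(greedy_best_first('S', 'G', graph.get, h.get))  # ['S', 'A', 'C', 'G']
```

The queue always pops the node with the smallest h(n), so the search greedily follows whichever state looks closest to the goal, regardless of the path cost paid so far.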
Advantages:
•Best-first search can switch between BFS and DFS, gaining the
advantages of both algorithms.
•This algorithm is more efficient than BFS and DFS algorithms.
Disadvantages:
•It can behave as an unguided depth-first search in the worst case scenario.
•It can get stuck in a loop, like DFS.
•This algorithm is not optimal.
Example:
Consider the search problem below, which we will traverse using greedy best-
first search. At each iteration, each node is expanded using the evaluation
function f(n) = h(n), which is given in the table below.
In this search example, we use two lists, the OPEN and CLOSED lists.
Following are the iterations for traversing the above example.
Solution:
f(n) = g(n) + h(n)
Initialization: {(S, 5)}
Iteration1: {(S--> A, 4), (S-->G, 10)}
Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}
Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B,
7), (S-->G, 10)}
Iteration 4 will give the final result: S--->A--->C--->G, which provides
the optimal path with cost 6.
Local Search Algorithms
• Local search algorithms operate using a single current node and generally move only
to neighbors of that node.
• Local search methods keep only a small number of nodes in memory. They are
suitable for problems where the solution is the goal state itself, not the path.
• In addition to finding goals, local search algorithms are useful for solving pure
optimization problems, in which the aim is to find the best state according to an
objective function.
• Hill-climbing and simulated annealing are examples of local search algorithms.
Hill Climbing Algorithm in Artificial Intelligence
•Hill climbing is a local search algorithm which continuously moves in the
direction of increasing value to find the peak of the mountain, i.e. the best
solution to the problem. It terminates when it reaches a peak, where no
neighbor has a higher value.
•Hill climbing is a technique used for optimizing mathematical problems. One
of the widely discussed examples of the hill climbing algorithm is the
traveling-salesman problem, in which we need to minimize the distance traveled
by the salesman.
•It is also called greedy local search, as it only looks at its immediate
neighbor states and not beyond them.
•A node of hill climbing algorithm has two components which are state and value.
•Hill Climbing is mostly used when a good heuristic is available.
•In this algorithm, we don't need to maintain and handle a search tree or
graph, as it keeps only a single current state.
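A minimal sketch of hill climbing (our own illustrative code; the `neighbors` and `value` callbacks are assumed problem-specific helpers, not from the text):

```python
def hill_climb(initial, neighbors, value, max_steps=1000):
    # Move to the best neighbor while it improves the current value;
    # stop at a peak, i.e. when no neighbor has a higher value.
    current = initial
    for _ in range(max_steps):
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):   # peak reached: no better neighbor
            return current
        current = best
    return current

# Toy objective: maximize -(x - 3)^2 over the integers, stepping by 1.
peak = hill_climb(0, neighbors=lambda x: [x - 1, x + 1],
                  value=lambda x: -(x - 3) ** 2)
print(peak)  # 3
```

Starting from 0, the search climbs to the global peak at x = 3; with a bumpier objective it would stop at whatever local maximum it reaches first, since it never backtracks.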
Features of Hill Climbing:
Following are some main features of Hill Climbing Algorithm:
•Generate and Test variant: Hill climbing is a variant of the Generate
and Test method. The Generate and Test method produces feedback which
helps decide which direction to move in the search space.
•Greedy approach: Hill-climbing algorithm search moves in the
direction which optimizes the cost.
•No backtracking: It does not backtrack in the search space, as it
does not remember previous states.
State-space Diagram for Hill Climbing:
The state-space landscape is a graphical representation of the hill-climbing
algorithm, showing a graph between the various states of the algorithm and the
objective function/cost.
On the Y-axis we take the function, which can be an objective function or a
cost function, and the state space on the X-axis. If the function on the
Y-axis is cost, then the goal of the search is to find the global minimum. If
the function on the Y-axis is an objective function, then the goal of the
search is to find the global maximum.