
Search Algorithms in AI

Search Algorithms Terminologies:

• Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
• Search Space: The search space represents the set of possible solutions which a system may have.
• Start State: The state from which the agent begins the search.
• Goal Test: A function which observes the current state and returns whether the goal state has been achieved or not.
• Search Tree: A tree representation of the search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.

• Actions: It gives the description of all the actions available to the agent.
• Transition Model: A description of what each action does; this can be represented as a transition model.
• Path Cost: A function which assigns a numeric cost to each path.
• Solution: An action sequence which leads from the start node to the goal node.
• Optimal Solution: A solution that has the lowest cost among all solutions.
Properties of Search Algorithms:
Following are the four essential properties of search algorithms, used to compare their efficiency:
• Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists for any random input.
• Optimality: If a solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all other solutions, then such a solution is said to be an optimal solution.
• Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its task.
• Space Complexity: It is the maximum storage space required at any point during the search, as a function of the complexity of the problem.
Types of search algorithms:
Based on the search problem, we can classify search algorithms into uninformed search (blind search) and informed search (heuristic search) algorithms.
Uninformed/Blind Search:

▪ Uninformed search does not use any domain knowledge, such as closeness or the location of the goal.
▪ It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes.
▪ Uninformed search searches the tree without any information about the search space, such as the initial state, the operators, and a test for the goal, so it is also called blind search.
▪ It examines each node of the tree until it reaches the goal node.
It can be divided into five main types:

• Breadth-first search
• Uniform cost search
• Depth-first search
• Iterative deepening depth-first search
• Bidirectional Search
Informed Search
• Informed search algorithms use domain knowledge.
• In an informed search, problem information is available which can guide the search.
• Informed search strategies can find a solution more efficiently than an uninformed search strategy.
• Informed search is also called heuristic search.
• A heuristic is a technique which is not always guaranteed to find the best solution, but is guaranteed to find a good solution in a reasonable time.
• Informed search can solve complex problems which could not be solved in any other way.
• An example of a problem tackled with informed search algorithms is the traveling salesman problem.

1. Greedy Search
2. A* Search
Heuristic Search Techniques
• A heuristic is a technique that is used to solve a problem
faster than the classic methods.
• These techniques are used to find an approximate solution to a problem when classical methods fail to do so.
• Heuristics are problem-solving techniques that result in practical and quick solutions.
• Heuristics are strategies derived from past experience with similar problems.
• Heuristics use practical methods and shortcuts to produce solutions that may or may not be optimal, but which are sufficient within a given limited timeframe.
History

▪ Psychologists Daniel Kahneman and Amos Tversky developed the study of heuristics in human decision-making in the 1970s and 1980s.
▪ However, the concept was first introduced by the Nobel Laureate Herbert A. Simon, whose primary object of research was problem-solving.
Need
• Heuristics are used in situations where a short-term solution is required.
• When facing complex situations with limited resources and time, heuristics can help companies make quick decisions through shortcuts and approximate calculations.
• Most heuristic methods involve mental shortcuts to make decisions based on past experiences.
• The heuristic method might not always provide the best solution, but it is assured to help us find a good solution in a reasonable time.
Heuristic Techniques in AI

▪ Generate and Test
▪ Hill Climbing
▪ A* Algorithm
▪ Best First Search
▪ Problem Reduction
Generate and Test Search
• Generate and Test Search is a heuristic search technique based on depth-first search with backtracking, which guarantees to find a solution, if one exists, when done systematically.
• In this technique, all the solutions are generated and tested
for the best solution.
• It ensures that the best solution is checked against all possible
generated solutions.
• It is also known as British Museum Search Algorithm as it’s like
looking for an exhibit at random or finding an object in the
British Museum by wandering randomly.
• The evaluation is carried out by the heuristic function. All solutions are generated systematically in the generate and test algorithm, but paths which are most unlikely to lead to the result are not considered.
Algorithm

1. Generate a possible solution. For example, generate a particular point in the problem space or a path from the start state.
2. Test to see if this is an actual solution by comparing the chosen point, or the endpoint of the chosen path, to the set of acceptable goal states.
3. If a solution is found, quit. Otherwise, go to Step 1.
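A minimal Python sketch of this generate-and-test loop is shown below. The helper names (candidates, is_goal) are assumptions for illustration only, not part of the original algorithm description.

def generate_and_test(candidates, is_goal):
    # candidates: an iterable that systematically generates possible solutions.
    # is_goal: a predicate that tests a candidate against the acceptable goal states.
    for candidate in candidates:       # Step 1: generate a possible solution
        if is_goal(candidate):         # Step 2: test the candidate
            return candidate           # Step 3: solution found, quit
    return None                        # no generated candidate was a solution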


Diagrammatic Representation
Real Life Examples:

• If a sufficient number of monkeys were placed in front of a set of typewriters and left alone long enough, they would eventually produce all the works of Shakespeare.
• Dendral, which infers the structure of organic compounds from NMR spectrograms, also uses plan-generate-test.
Hill Climbing Search
• Hill climbing algorithm is a local search algorithm which
continuously moves in the direction of increasing
elevation/value to find the peak of the mountain or best
solution to the problem. It terminates when it reaches a
peak value where no neighbor has a higher value.

• The hill climbing algorithm is a technique used for optimizing mathematical problems. One of the most widely discussed examples of the hill climbing algorithm is the traveling salesman problem, in which we need to minimize the distance traveled by the salesman.
• It is also called greedy local search, as it only looks at its good immediate neighbor states and not beyond them.
• A node of hill climbing algorithm has two components
which are state and value.

• Hill climbing is mostly used when a good heuristic is available.
• In this algorithm, we don't need to maintain and handle the search tree or graph, as it only keeps a single current state.
Features of Hill Climbing:

Following are some main features of Hill Climbing Algorithm:

• Generate and Test variant: Hill climbing is a variant of the generate and test method. The generate and test method produces feedback which helps to decide which direction to move in the search space.
• Greedy approach: The hill-climbing search moves in the direction which optimizes the cost.
• No backtracking: It does not backtrack the search space, as it does not remember previous states.
State-space Diagram for Hill Climbing:

• The state-space landscape is a graphical representation of the hill-climbing algorithm which shows a graph between the various states of the algorithm and the objective function/cost.
• On the Y-axis we take the function, which can be an objective function or a cost function, and the state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum and local minimum. If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum and local maximum.
Different regions in the state space landscape:

• Local Maximum: A local maximum is a state which is better than its neighbor states, but there is also another state which is higher than it.
• Global Maximum: The global maximum is the best possible state of the state-space landscape. It has the highest value of the objective function.
• Current State: The state in the landscape diagram where the agent is currently present.
• Flat Local Maximum: A flat region of the landscape where all the neighbor states of the current state have the same value.
• Shoulder: A plateau region which has an uphill edge.
Types of Hill Climbing Algorithm:

• Simple hill climbing
• Steepest-ascent hill climbing
• Stochastic hill climbing
1. Simple Hill Climbing:

• Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates one neighbor node state at a time and selects the first one which optimizes the current cost, setting it as the current state. It only checks one successor state and, if that state is better than the current state, it moves to it; otherwise it stays in the same state.

This algorithm has the following features:

• Less time consuming
• Less optimal solutions, and the solution is not guaranteed
Algorithm for Simple Hill Climbing:

• Step 1: Evaluate the initial state. If it is the goal state, then return success and stop.
• Step 2: Loop until a solution is found or there is no new operator left to apply.
• Step 3: Select and apply an operator to the current state.
• Step 4: Check the new state:
• If it is the goal state, then return success and quit.
• Else, if it is better than the current state, then assign the new state as the current state.
• Else, if it is not better than the current state, then return to Step 2.
• Step 5: Exit.
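A minimal Python sketch of simple hill climbing, following the steps above. The helpers neighbors(state) and value(state) are assumptions for illustration; a higher value is treated as better.

def simple_hill_climbing(initial_state, neighbors, value):
    # neighbors(state): successor states produced by applying the available operators.
    # value(state): objective function; a higher value means a better state.
    current = initial_state
    while True:
        moved = False
        for successor in neighbors(current):       # Step 3: apply operators one at a time
            if value(successor) > value(current):  # Step 4: first better successor wins
                current = successor
                moved = True
                break
        if not moved:                              # no operator improves the state
            return current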
2. Steepest-Ascent hill climbing:

• The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. This algorithm examines all the neighboring nodes of the current state and selects the neighbor node which is closest to the goal state. It consumes more time, as it searches for multiple neighbors.
Algorithm for Steepest-Ascent hill climbing:

• Step 1: Evaluate the initial state. If it is the goal state, then return success and stop; else make the initial state the current state.
• Step 2: Loop until a solution is found or the current state does not change.
• Let SUCC be a state such that any successor of the current state will be better than it.
• For each operator that applies to the current state:
• Apply the new operator and generate a new state.
• Evaluate the new state.
• If it is the goal state, then return it and quit; else compare it to SUCC.
• If it is better than SUCC, then set the new state as SUCC.
• If SUCC is better than the current state, then set the current state to SUCC.
• Step 3: Exit.
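A minimal Python sketch of steepest-ascent hill climbing, using the same assumed helpers neighbors(state) and value(state) as above.

def steepest_ascent_hill_climbing(initial_state, neighbors, value):
    # Examine all neighbors and move to the best one (SUCC), as long as it improves.
    current = initial_state
    while True:
        succ = max(neighbors(current), key=value, default=current)  # best successor
        if value(succ) <= value(current):   # current state stops changing: no better neighbor
            return current
        current = succ                      # move in the direction of steepest ascent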
3. Stochastic hill climbing:

• Stochastic hill climbing does not examine all of its neighbors before moving. Rather, this search algorithm selects one neighbor node at random and decides whether to make it the current state or examine another state.
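A minimal Python sketch of stochastic hill climbing under the same assumptions; the max_steps bound is an added assumption so that the random sampling terminates.

import random

def stochastic_hill_climbing(initial_state, neighbors, value, max_steps=1000):
    # Pick a random neighbor; accept it only if it improves on the current state.
    current = initial_state
    for _ in range(max_steps):
        candidates = list(neighbors(current))
        if not candidates:
            break
        successor = random.choice(candidates)   # one neighbor chosen at random
        if value(successor) > value(current):   # accept only uphill moves
            current = successor
    return current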
Problems in Hill Climbing Algorithm:
• 1. Local Maximum: A local maximum is a peak state in the
landscape which is better than each of its neighboring states,
but there is another state also present which is higher than
the local maximum.
• Solution: The backtracking technique can be a solution to the local maximum problem in the state-space landscape. Create a list of promising paths so that the algorithm can backtrack in the search space and explore other paths as well.
• 2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value; because of this, the algorithm does not find any best direction to move. A hill-climbing search might get lost in the plateau area.
• Solution: The solution for a plateau is to take big steps, or very small steps, while searching. Randomly select a state which is far away from the current state, so it is possible that the algorithm will find a non-plateau region.
• 3. Ridges: A ridge is a special form of the local maximum. It
has an area which is higher than its surrounding areas, but
itself has a slope, and cannot be reached in a single move.

• Solution: With the use of bidirectional search, or by moving in different directions, we can mitigate this problem.
Best-first Search Algorithm (Greedy Search):

• The greedy best-first search algorithm always selects the path which appears best at that moment.
• It is a combination of the depth-first search and breadth-first search algorithms. It uses the heuristic function to guide the search.
• Best-first search allows us to take the advantages of both
algorithms. With the help of best-first search, at each step, we
can choose the most promising node.
• In the best-first search algorithm, we expand the node which is closest to the goal node, where the closest cost is estimated by the heuristic function, i.e.
• f(n) = h(n)
• Where h(n) = estimated cost from node n to the goal.
• The greedy best-first algorithm is implemented using a priority queue.
Best first search algorithm:
• Step 1: Place the starting node into the OPEN list.
• Step 2: If the OPEN list is empty, Stop and return failure.
• Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.
• Step 4: Expand the node n, and generate the successors of
node n.
• Step 5: Check each successor of node n, and find whether any
node is a goal node or not. If any successor node is goal node,
then return success and terminate the search, else proceed to
Step 6.
• Step 6: For each successor node, the algorithm computes the evaluation function f(n) and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, then add it to the OPEN list.
• Step 7: Return to Step 2.
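A minimal Python sketch of greedy best-first search following the steps above, using a heap as the priority queue. The helpers goal_test(node), successors(node), and h(node) are assumptions for illustration; states are assumed hashable.

import heapq
import itertools

def greedy_best_first_search(start, goal_test, successors, h):
    counter = itertools.count()                   # tie-breaker so the heap never compares states
    open_list = [(h(start), next(counter), start)]
    open_set, closed = {start}, set()
    while open_list:                              # Step 2: fail when OPEN is empty
        _, _, node = heapq.heappop(open_list)     # Step 3: node with the lowest h(n)
        open_set.discard(node)
        closed.add(node)                          # Step 3: move n to CLOSED
        for succ in successors(node):             # Step 4: expand n
            if goal_test(succ):                   # Step 5: goal check on each successor
                return succ
            if succ not in open_set and succ not in closed:   # Step 6
                heapq.heappush(open_list, (h(succ), next(counter), succ))
                open_set.add(succ)
    return None                                   # failure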
Advantages:
• Best first search can switch between BFS and DFS by gaining
the advantages of both the algorithms.
• This algorithm is more efficient than BFS and DFS algorithms.

Disadvantages:
• It can behave as an unguided depth-first search in the worst
case scenario.
• It can get stuck in a loop, like DFS.
• This algorithm is not optimal.
Example:
• Consider the below search problem, and we will traverse it
using greedy best-first search. At each iteration, each node is
expanded using evaluation function f(n)=h(n) , which is given
in the below table.
• In this search example, we are using two lists which
are OPEN and CLOSED Lists. Following are the iteration for
traversing the above example.
• Expand the nodes of S and put them in the CLOSED list.
• Initialization: Open [A, B], Closed [S]
• Iteration 1: Open [A], Closed [S, B]
• Iteration 2: Open [E, F, A], Closed [S, B]
: Open [E, A], Closed [S, B, F]
• Iteration 3: Open [I, G, E, A], Closed [S, B, F]
: Open [I, E, A], Closed [S, B, F, G]
• Hence the final solution path will be: S----> B----->F----> G
• Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
• Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where b is the branching factor and m is the maximum depth of the search space.
• Complete: Greedy best-first search is also incomplete, even if
the given state space is finite.
• Optimal: Greedy best first search algorithm is not optimal.
A* Search Algorithm

▪ A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n).
▪ It combines features of UCS and greedy best-first search, by which it solves the problem efficiently.
▪ The A* search algorithm finds the shortest path through the search space using the heuristic function.
▪ This search algorithm expands a smaller search tree and provides an optimal result faster.
▪ The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).
• In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows, and this sum is called the fitness number:
• f(n) = g(n) + h(n)
Algorithm of A* search:
• Step1: Place the starting node in the OPEN list.
• Step 2: Check if the OPEN list is empty or not, if the list is
empty then return failure and stops.
• Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise, go to Step 4.
• Step 4: Expand node n and generate all of its successors, and
put n into the closed list. For each successor n', check whether
n' is already in the OPEN or CLOSED list, if not then compute
evaluation function for n' and place into Open list.
• Step 5: Else, if node n' is already in OPEN or CLOSED, then it should be attached to the back-pointer which reflects the lowest g(n') value.
• Step 6: Return to Step 2.
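A minimal Python sketch of A* search following the steps above. The helpers goal_test(node), successors(node) (yielding (neighbor, step_cost) pairs), and h(node) are assumptions for illustration; back-pointers are kept in a parent dictionary so the final path can be reconstructed.

import heapq
import itertools

def a_star_search(start, goal_test, successors, h):
    counter = itertools.count()                  # tie-breaker for the priority queue
    g = {start: 0}                               # best known cost to reach each node
    parent = {start: None}                       # back-pointers to reconstruct the path
    open_list = [(h(start), next(counter), start)]
    while open_list:                             # Step 2: fail when OPEN is empty
        _, _, node = heapq.heappop(open_list)    # Step 3: node with the smallest g + h
        if goal_test(node):
            path = []                            # rebuild the path via back-pointers
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for succ, cost in successors(node):      # Step 4: expand n
            new_g = g[node] + cost
            if succ not in g or new_g < g[succ]: # Step 5: keep the lowest g(n')
                g[succ] = new_g
                parent[succ] = node
                heapq.heappush(open_list, (new_g + h(succ), next(counter), succ))
    return None                                  # failure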
Advantages
• The A* search algorithm performs better than other search algorithms.
• A* search algorithm is optimal and complete.
• This algorithm can solve very complex problems.

Disadvantages
• It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
• A* search algorithm has some complexity issues.
• The main drawback of A* is memory requirement as it keeps
all generated nodes in the memory, so it is not practical for
various large-scale problems.
Problem Reduction
▪ Problem reduction is an algorithm design technique that
takes a complex problem and reduces it to a simpler one.
▪ The simpler problem is then solved and the solution of
the simpler problem is then transformed to the solution
of the original problem.
▪ Problem reduction is a powerful technique that can be
used to simplify complex problems and make them
easier to solve.
▪ It can also be used to reduce the time and space
complexity of algorithms.
Example:
• Let’s understand the technique with the help of the following
problem:
• Calculate the LCM (Least Common Multiple) of two
numbers X and Y.

• Approach 1:
• To solve the problem one can iterate through the multiples of
the bigger element (say X) until that is also a multiple of the
other element. This can be written as follows:
• Select the bigger element (say X here).
• Iterate through the multiples of X:
• If this is also a multiple of Y, return this as the answer.
• Otherwise, continue the traversal.
Algorithm:

A direct Python translation of the iteration described above:

def lcm(x, y):
    # Make x the bigger of the two numbers.
    if y > x:
        x, y = y, x
    # The LCM is a multiple of x; the y-th multiple, x*y, is always divisible by y.
    for i in range(1, y + 1):
        if (x * i) % y == 0:
            return x * i
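For example, lcm(4, 6) first swaps the values so that x = 6, then checks the multiples 6 and 12, and returns 12, since 12 is the first multiple of 6 that is also divisible by 4.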
• Reducing a problem to another one is only practical when the
total time taken for transforming and solving the reduced
problem is lower than solving the original problem.

• If problem A is reduced to problem B, then the lower bound of B can be higher than the lower bound of A, but it can never be lower than the lower bound of A.
