BE6 Semester (TU) : Artificial Intelligence

The document presents an overview of search techniques in Artificial Intelligence, covering both uninformed and informed search strategies, including algorithms like Depth First Search, Breadth First Search, A*, and Hill Climbing. It discusses the steps involved in searching, the parameters for evaluating search strategies, and introduces adversarial search techniques such as the Minimax algorithm and Alpha-Beta pruning. The document also highlights the advantages and disadvantages of various search methods, emphasizing the importance of heuristics in improving search efficiency.


BE 6th Semester (TU)

Artificial Intelligence

Chapter 3:
SEARCH TECHNIQUES

Presented By
Er. Bikash Acharya
Course Contents
• Uninformed search techniques: depth first search, breadth first search, depth limit search, and search strategy comparison
• Informed search techniques: hill climbing, best first search, greedy search, A* search
• Adversarial search techniques: minimax procedure, alpha-beta procedure
Searching
• In Artificial Intelligence, search techniques are universal problem-solving methods.

• Rational agents, or problem-solving agents, in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result.

• Problem-solving agents are goal-based agents and use an atomic representation of states.
Steps in Searching
1. Check whether the current state is the goal state or not.

2. Expand the current state to generate new sets of states.

3. Choose one of the newly generated states for search, depending upon the search strategy.

4. Repeat steps 1 to 3 until the goal state is reached or there are no more states to be expanded.
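The four steps above can be sketched as a single loop in Python. The graph, goal test, and the `generic_search` name are illustrative, not from the slides; the frontier order (FIFO vs. LIFO) is the "search strategy" of step 3:

```python
from collections import deque

def generic_search(start, is_goal, expand, use_queue=True):
    """Generic search skeleton following the four steps above.

    use_queue=True  -> FIFO frontier (breadth-first order)
    use_queue=False -> LIFO frontier (depth-first order)
    """
    frontier = deque([start])
    visited = {start}
    while frontier:                                    # step 4: repeat while states remain
        state = frontier.popleft() if use_queue else frontier.pop()
        if is_goal(state):                             # step 1: goal test
            return state
        for child in expand(state):                    # step 2: expand current state
            if child not in visited:                   # step 3: frontier order picks next
                visited.add(child)
                frontier.append(child)
    return None                                        # no more states to expand

# hypothetical state space: a small graph of numbered states, goal is 6
graph = {1: [2, 3], 2: [4], 3: [5, 6], 4: [], 5: [], 6: []}
result = generic_search(1, lambda s: s == 6, lambda s: graph[s])
```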
Searching Types
• Uninformed Search or Blind Search
• Informed Search or Heuristic Search
Parameter | Informed Search | Uninformed Search
Known as | Also known as Heuristic Search. | Also known as Blind Search.
Using knowledge | Uses knowledge during the search process. | Does not use knowledge during the search process.
Performance | Finds a solution more quickly. | Finds a solution slowly compared to informed search.
Completion | May or may not be complete. | Always complete.
Cost factor | Cost is low. | Cost is high.
Time | Consumes less time because of quick searching. | Consumes more time because of slow searching.
Direction | There is guidance toward the solution. | No guidance is given about the solution.
Implementation | Less lengthy to implement. | More lengthy to implement.
Efficiency | More efficient, since it accounts for cost and performance: incurred cost is less and solutions are found quickly. | Comparatively less efficient: incurred cost is more and solutions are found slowly.
Computational requirements | Lower computational requirements. | Comparatively higher computational requirements.
Size of search problems | Wide scope in terms of handling large search problems. | Solving a massive search task is challenging.

Examples of algorithms:
• Informed: Greedy Search, A* Search, AO* Search, Hill Climbing Algorithm
• Uninformed: Depth First Search (DFS), Breadth First Search (BFS), Branch and Bound
Uninformed Search (Blind Search / Brute Force)
• These search strategies are provided only with the problem definition; they have no additional information about the state space.
• They can only expand the current state to get a new set of states and distinguish a goal state from a non-goal state.
• Uninformed search strategies use only the information available in the problem definition.
• Less effective than informed search.
Breadth First Search
•Algorithm for searching a graph
•Breadth = broad/wide
•Queue (FIFO)
•Shallow node
Breadth First Search
• Starting from the root node (initial state) explores all children of the root node, left to right
• If no solution is found, expands the first (leftmost) child of the root node, then the second node at depth 1, and so on ...
• Process
• Place the start node in the queue
• Examine the node at the front of the queue
• If the queue is empty, stop
• If the node is the goal, stop
• Otherwise, add the children of the node to the end of the queue
S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
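The FIFO process above can be sketched in Python. The graph here is a small hypothetical one (the slides' full example graph is not reproduced); BFS returns the shallowest path to the goal:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: FIFO queue of partial paths."""
    queue = deque([[start]])                 # place the start node in the queue
    visited = {start}
    while queue:
        path = queue.popleft()               # examine the node at the front
        node = path[-1]
        if node == goal:                     # if the node is the goal, stop
            return path
        for child in graph.get(node, []):    # add children to the end of the queue
            if child not in visited:
                visited.add(child)
                queue.append(path + [child])
    return None                              # queue empty: no solution

# hypothetical graph for illustration
g = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': ['G']}
path = bfs(g, 'S', 'G')
```

Because whole levels are explored before going deeper, the first path found uses the fewest steps.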
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will
provide the minimal solution which requires the least number of
steps.

Disadvantages:
• It requires lots of memory since each level of the tree must be saved
into memory to expand the next level.
• BFS needs lots of time if the solution is far away from the root node.
Depth First Search
•Algorithm for searching
•Depth = vertical before horizontal
•Stack
•Deep node
Depth First Search
• It expands the root node, then the leftmost child of the root node, then the leftmost child of that node etc.
• Always expands a node at the deepest level of the tree
• Only when the search hits a dead end (a partial solution which can’t be extended) does the search
backtrack and expand nodes at higher levels.
• Process: Use stack to keep track of nodes. (LIFO)
• Put the start node on the stack
• While the stack is not empty:
• Pop the stack
• If the popped node is the goal, stop
• Otherwise, push the nodes connected to it onto the stack
• (provided they are not already on the stack)
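The LIFO process above can be sketched the same way, swapping the queue for a stack. The graph is again hypothetical; children are pushed in reverse so the leftmost child is expanded first, as described:

```python
def dfs(graph, start, goal):
    """Depth-first search: explicit LIFO stack of partial paths."""
    stack = [[start]]                        # put the start node on the stack
    visited = set()
    while stack:                             # while stack is not empty
        path = stack.pop()                   # pop the stack
        node = path[-1]
        if node == goal:                     # popped node is the goal: stop
            return path
        if node in visited:
            continue
        visited.add(node)
        # push children reversed so the leftmost child comes off the stack first
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append(path + [child])
    return None

g = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': ['G']}
path = dfs(g, 'S', 'G')
```

Here DFS first descends S, A, C, hits the dead end at C, then backtracks and finds the goal via B and D.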
Depth Limit Search
• Works like DFS but with a predefined depth limit
• Helps solve DFS's problem of descending endlessly down one path
• Terminating conditions:
• Failure value: there is no solution
• Cutoff failure: terminates on reaching the predefined depth, so a solution may still exist deeper
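The two terminating conditions can be made concrete with a recursive sketch. The graph and function name are illustrative; the key point is that `None` (failure) and `CUTOFF` are distinct outcomes:

```python
CUTOFF = 'cutoff'

def depth_limited_search(graph, node, goal, limit):
    """DFS that stops expanding below a fixed depth limit.

    Returns a path, None (failure value: no solution exists in the
    explored tree), or CUTOFF (the limit was reached, so a solution
    may still exist deeper).
    """
    if node == goal:
        return [node]
    if limit == 0:
        return CUTOFF                       # cutoff failure: hit the depth limit
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return CUTOFF if cutoff_occurred else None

# hypothetical chain: the goal sits at depth 2
g = {'S': ['A'], 'A': ['G'], 'G': []}
```

With limit 2 the goal is found; with limit 1 the result is a cutoff, not a failure.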
Searching Strategies
• Strategies are evaluated along the following dimensions:
o Completeness: is it guaranteed to find a solution if one exists?
o Optimality: does it always find the highest-quality (least-cost) solution?
o Time complexity: how long does it take to find a solution?
o Space complexity: how much memory does it need to perform the search?
Comparison
Uniform Cost Search
• Uniform Cost Search (UCS) expands the node with the lowest path cost g(n) first; it uses no heuristic, so it is an uninformed strategy.
Informed Search
• In uninformed search, we don't try to evaluate which of the nodes on the frontier are most promising; we never "look ahead" to the goal.
• Informed search has problem-specific knowledge apart from the problem definition.
• Use of a heuristic improves the efficiency of the search process.
• The idea is to develop a domain-specific heuristic function h(n).
• h(n) guesses the cost of getting to the goal from node n.
Heuristics
• A technique designed to solve a problem
• May or may not be optimal, but the solution is good
• An algorithm provides step-by-step instructions for solving a specific problem in a finite number of steps. The resulting outcome is predictable and can be reliably reproduced with the same input.
• In contrast, heuristic outcomes are educated guesses; they cannot be predicted or reproduced reliably.
Heuristic Function
• A heuristic is a function used in informed search that finds the most promising path.

• It takes the current state of the agent as input and produces an estimate of how close the agent is to the goal.

• The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time.

• It is represented by h(n), and it estimates the cost of an optimal path between a pair of states. The value of the heuristic function is always non-negative.
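A classic concrete instance of h(n) (not from the slides) is the Manhattan distance on a grid world; it takes the agent's current state and returns a non-negative estimate of the remaining cost:

```python
def manhattan_h(state, goal):
    """Hypothetical heuristic for a grid world: Manhattan distance.

    Estimates the cost from the agent's current (x, y) state to the
    goal; the estimate is always non-negative and is zero at the goal.
    """
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

h_value = manhattan_h((1, 2), (4, 6))   # an estimate, not the true path cost
```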
Best First Search(BFS)
• A node is selected for expansion based on evaluation function f(n)
• Node with lowest evaluation function is expanded first.
• The evaluation function must represent some estimate of the cost of
the path from state to the closest goal state.
• One important heuristic function is h(n)
• h(n) is the estimated cost of the cheapest path from node n to the goal
• Types
• Greedy Best First Search
• A* search
Best First Search
Algorithm
• Step 1: Place the starting node into the OPEN list.
• Step 2: If the OPEN list is empty, Stop and return failure.
• Step 3: Remove the node n, from the OPEN list which has the lowest value of
h(n), and places it in the CLOSED list.
• Step 4: Expand the node n, and generate the successors of node n.
• Step 5: Check each successor of node n, and find whether any node is a goal node
or not. If any successor node is goal node, then return success and terminate the
search, else proceed to Step 6.
• Step 6: For each successor node, algorithm checks for evaluation function f(n),
and then check if the node has been in either OPEN or CLOSED list. If the node
has not been in both list, then add it to the OPEN list.
• Step 7: Return to Step 2.

h(n)= estimated cost from node n to the goal.


Example:
• At each iteration, each node is expanded using evaluation function f(n)=h(n) , which is given in the below table

In this search example, we are using two lists which are OPEN
and CLOSED Lists
Expand the nodes of S and put in the CLOSED list
Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
: Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]
: Open [I, E, A], Closed [S, B, F, G]
Hence the final solution path will be: S----> B----->F----> G
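The OPEN/CLOSED procedure above (with f(n) = h(n), i.e., greedy best-first search) can be sketched in Python. The graph and h-values below are illustrative, chosen so the run reproduces the slide's final path S → B → F → G; they are not taken from the original figure:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the OPEN node with lowest h(n)."""
    open_list = [(h[start], start, [start])]       # Step 1: start node on OPEN
    closed = set()
    while open_list:                               # Step 2: fail if OPEN is empty
        _, node, path = heapq.heappop(open_list)   # Step 3: lowest h(n) to CLOSED
        if node == goal:                           # Step 5: goal test
            return path
        if node in closed:
            continue
        closed.add(node)
        for child in graph.get(node, []):          # Step 4: generate successors
            if child not in closed:                # Step 6: add new nodes to OPEN
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None

# hypothetical graph and h-values loosely mirroring the S..G example
g = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G'],
     'A': [], 'E': [], 'I': [], 'G': []}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
path = greedy_best_first(g, h, 'S', 'G')
```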
Advantages:
• Best first search can switch between BFS and DFS, gaining the advantages of both algorithms.

• This algorithm is more efficient than the BFS and DFS algorithms.

Disadvantages:
• It can behave like an unguided depth-first search in the worst case.
• Like DFS, it can get stuck in a loop.

• This algorithm is not optimal.


A* Search Algorithm
• A* search is the most commonly known form of best-first search.

• It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n).

• It combines features of UCS and greedy best-first search, which lets it solve problems efficiently.

• The A* search algorithm finds the shortest path through the search space using the heuristic function.
• This search algorithm expands a smaller search tree and provides an optimal result faster.

• The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).

• In the A* search algorithm, we use the search heuristic as well as the cost to reach the node.

• Hence we can combine both costs as follows; this sum is called the fitness number:
f(n) = g(n) + h(n)
Algorithm of A* Search
Step1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not, if the list is empty then return
failure and stops.
Step 3: Select the node from the OPEN list which has the smallest value of
evaluation function (g+h), if node n is goal node then return success and
stop, otherwise
Step 4: Expand node n and generate all of its successors, and put n into the
closed list. For each successor n', check whether n' is already in the OPEN or
CLOSED list, if not then compute evaluation function for n' and place into
Open list.
Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back
pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
Initialization: {(S, 0)}
Iteration1: {(S--> A, 1), (S-->G, 10)}
Iteration2: {(S--> A-->C, 2), (S--> A-->B, 3), (S-->G, 10)}
Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D--->G, 7), (S-->
A-->B-->D-->G, 10), (S-->G, 10)}
Iteration 4 will give the final result, as S--->A--->C--->G it provides the
optimal path with cost 6.
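The A* steps can be sketched in Python. The edge costs and h-values below are a reconstruction chosen to reproduce the slide's answer (S → A → C → G with cost 6); the original figure is not included, so treat them as illustrative:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the OPEN node with the lowest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]    # entries are (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g_cost, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g_cost
        for child, step in graph.get(node, []):
            new_g = g_cost + step
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g              # keep only the lowest g(n')
                heapq.heappush(open_list,
                               (new_g + h[child], new_g, child, path + [child]))
    return None, float('inf')

# hypothetical reconstruction of the slide's graph and heuristic
g = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)],
     'B': [('D', 5)], 'C': [('D', 3), ('G', 4)], 'D': [('G', 2)], 'G': []}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
path, cost = a_star(g, h, 'S', 'G')
```

Note that the direct edge S → G (cost 10) is pushed onto OPEN early but never reaches the front, because the path through A and C has a lower f-value.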
Points to remember:
• A* algorithm returns the path which occurred first, and it does not
search for all remaining paths.
• The efficiency of A* algorithm depends on the quality of heuristic.
Complete: the A* algorithm is complete as long as:

• The branching factor is finite.

• Every action has a fixed, positive cost.


Hill Climbing
• The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain, i.e., the best solution to the problem.
• No backtracking.

• It terminates when it reaches a peak where no neighbor has a higher value.

• It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that.
• A node of the hill climbing algorithm has two components: state and value.
• Hill climbing is mostly used when a good heuristic is available.
• In this algorithm, we don't need to maintain a search tree or graph, as it only keeps a single current state.

Following are some main features of the hill climbing algorithm:

• Generate and Test variant: hill climbing is a variant of the Generate and Test method, which produces feedback that helps decide which direction to move in the search space.
• Greedy approach: the search moves in the direction which optimizes the cost.
• No backtracking: it does not backtrack through the search space, as it does not remember previous states.
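These features (single current state, greedy moves, no backtracking) fit in a few lines. The 1-D landscape below is a hypothetical example, not from the slides:

```python
def hill_climbing(start, neighbors, value):
    """Keep a single current state; move to the best neighbor;
    stop when no neighbor is better (no backtracking)."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):   # peak reached (possibly only local)
            return current
        current = best                      # greedy move uphill

# hypothetical landscape: maximize f(x) = -(x - 3)^2, whose peak is at x = 3
f = lambda x: -(x - 3) ** 2
peak = hill_climbing(0, lambda x: [x - 1, x + 1], f)
```

Starting from 0, the search steps uphill through 1 and 2 and stops at 3, where both neighbors are worse.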
Problems in Hill Climbing
1. Local Maximum: a local maximum is a peak state in the landscape which is better than each of its neighboring states, but another state exists elsewhere which is higher still.

Solution: backtracking can be a solution to the local maximum problem: keep a list of promising paths so the algorithm can backtrack through the search space and explore other paths as well.
2. Plateau: a plateau is a flat area of the search space in which all neighbor states of the current state have the same value, so the algorithm cannot find a best direction to move. A hill-climbing search might get lost in a plateau.

Solution: take bigger or smaller steps while searching, or randomly select a state far from the current state, so that the algorithm may reach a non-plateau region.
3. Ridges: a ridge is a special form of local maximum. It is an area higher than its surrounding areas, but it has a slope of its own and cannot be climbed in a single move.

Solution: bidirectional search, or moving in several directions at once, can mitigate this problem.
Minimax Algorithm
• The minimax algorithm is a recursive, backtracking algorithm used in decision-making and game theory.

• It provides an optimal move for the player, assuming that the opponent also plays optimally.

• The minimax algorithm uses recursion to search through the game tree.

• MAX tries to maximize its utility (best move) and MIN tries to minimize it (worst move for MAX).

• Minimax is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various two-player games. The algorithm computes the minimax decision for the current state.

• In this algorithm two players play the game; one is called MAX and the other is called MIN.
Alpha Beta Pruning
• Alpha-beta pruning is a modified version of the minimax algorithm.
• It is an optimization technique for the minimax algorithm.

The two parameters can be defined as:

1. Alpha: the best (highest-value) choice found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.

2. Beta: the best (lowest-value) choice found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
The advantages of alpha-beta pruning are as follows:
i) Alpha-beta pruning greatly reduces the number of nodes examined by the minimax algorithm.
ii) It stops assessing a move as soon as one option is found that proves the move worse than a previously examined one.
iii) This method also helps to improve the search procedure in an effective way.
The disadvantage of the alpha-beta pruning method:
• Even with pruning, it is in most cases not feasible to search the whole game tree.
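Extending the minimax recursion with the alpha and beta parameters described above gives the pruned version. The same hypothetical nested-list tree is used; the cut-off happens when alpha meets beta:

```python
import math

def alphabeta(node, is_max, children, utility, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning: stop exploring a branch as soon
    as it cannot influence the final decision."""
    kids = children(node)
    if not kids:
        return utility(node)
    if is_max:
        value = -math.inf
        for c in kids:
            value = max(value, alphabeta(c, False, children, utility, alpha, beta))
            alpha = max(alpha, value)       # best choice so far for the Maximizer
            if alpha >= beta:               # MIN would never allow this branch
                break
        return value
    value = math.inf
    for c in kids:
        value = min(value, alphabeta(c, True, children, utility, alpha, beta))
        beta = min(beta, value)             # best choice so far for the Minimizer
        if alpha >= beta:                   # MAX would never allow this branch
            break
    return value

tree = [[3, 5], [2, 9]]
best = alphabeta(tree, True,
                 children=lambda n: n if isinstance(n, list) else [],
                 utility=lambda n: n)
```

In this tree, once the second MIN node sees the leaf 2 (below alpha = 3 from the first branch), the leaf 9 is pruned without being evaluated; the root value 3 matches plain minimax.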
