
Module 2

Search Algorithms
Contents
Informed search Algorithms: Best First Search, A*,
AO*, Hill Climbing, Generate & Test, Alpha-Beta pruning,
Min-max search.
Introduction
The aim of an AI rational agent is to solve a problem without human intervention.
To solve the problem:
1. There is a need to represent the problem. This representation is called formalization.
2. The agent uses some strategy to solve the problem. This strategy is a searching technique.
Problem formulation
 This step defines the problem, which helps us understand and decide the course of action that needs to be considered to achieve the goal.
 Every problem should be properly formulated before applying any search algorithm, because every algorithm demands the problem in a specific form.
Components of the problem:
Problem statement:
What is to be done?
Why is it important to build the AI system?
What are the advantages of the proposed system?
Example: Predict whether or not a patient has diabetes.
Goal or solution: Some machine learning technique can be used to solve this problem.
Solution space: All the alternative ways in which the problem can be solved, known collectively as the solution space.
Operators: Actions taken while solving the problem.
Examples of problem formulation
Problem Statement: The mouse is hungry and needs to consume the cheese placed in the environment.
Problem solution: Search algorithms can be used to find the shortest path.
Solution space: Multiple paths are possible.
Operators: UP, DOWN, RIGHT and LEFT.
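A minimal Python sketch of this formulation is given below; the grid size, start position and cheese location are assumptions chosen purely for illustration.

# Mouse-and-cheese formulation (grid size, start and cheese positions are assumed).
GRID = (4, 4)                  # 4x4 environment
START = (0, 0)                 # mouse's starting cell
GOAL = (3, 3)                  # cell containing the cheese

# Operators: UP, DOWN, RIGHT and LEFT as (row, column) offsets.
OPERATORS = {"UP": (-1, 0), "DOWN": (1, 0), "RIGHT": (0, 1), "LEFT": (0, -1)}

def successors(state):
    """States reachable from `state` by applying one legal operator."""
    r, c = state
    for name, (dr, dc) in OPERATORS.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < GRID[0] and 0 <= nc < GRID[1]:
            yield name, (nr, nc)

def goal_test(state):
    return state == GOAL       # the mouse has reached the cheese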
2. Search Strategies
Important Terminologies:
• Search: Searching is a step by step procedure to solve a search-problem
in a given search space. A search problem can have three main factors:
• Search Space: Search space represents a set of possible conditions and
solutions.
• Start State: It is a state from where agent begins the search.
• Goal state: Ultimate aim of searching process.
• Search tree: A tree representation of search space, showing possible
solutions from initial state.
• Actions: It gives the description of all the available actions to the agent.
• Transition model: A description of what each action does.
• Path Cost: It is a function which assigns a numeric cost to each path.
• Solution: It is an action sequence which leads from the start node to the
goal node.
• Optimal Solution: A solution that has the lowest cost among all solutions.
Example: The vacuum world
• States: The state is determined by both the agent location and the dirt locations. The
agent is in one of two locations, each of which might or might not contain dirt.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and
Suck. Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in
the leftmost square, moving Right in the rightmost square, and Sucking in a clean square
have no effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
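A minimal Python sketch of this formulation; the state encoding below, a pair of agent location and set of dirty squares, is an assumption made for illustration.

def transition(state, action):
    """Transition model: the state that results from applying `action`."""
    loc, dirt = state                        # state = (agent location, dirty squares)
    if action == "Left":
        return ("A", dirt)                   # moving Left in A has no effect
    if action == "Right":
        return ("B", dirt)                   # moving Right in B has no effect
    if action == "Suck":
        return (loc, dirt - {loc})           # sucking in a clean square changes nothing
    raise ValueError("unknown action: " + action)

def goal_test(state):
    return not state[1]                      # goal: no dirty squares remain

# Start in A with both squares dirty, then Suck, Right, Suck (path cost = 3).
state = ("A", frozenset({"A", "B"}))
for a in ["Suck", "Right", "Suck"]:
    state = transition(state, a)
print(goal_test(state))                      # True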
The 8-puzzle
• States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
• Initial state: Any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states.
• Actions: The simplest formulation defines the actions as movements of the blank space Left, Right, Up, or Down. Different subsets of these are possible depending on where the blank is.
• Transition model: Given a state and action, this returns the resulting state; for example, if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal configuration.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
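A minimal Python sketch of the 8-puzzle actions and transition model; a state is assumed to be a tuple of nine entries read row by row, with 0 standing for the blank.

MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}   # index offset of the blank

def actions(state):
    """Movements of the blank that do not leave the 3x3 board."""
    b = state.index(0)
    acts = []
    if b >= 3:      acts.append("Up")
    if b <= 5:      acts.append("Down")
    if b % 3 != 0:  acts.append("Left")
    if b % 3 != 2:  acts.append("Right")
    return acts

def result(state, action):
    """Transition model: slide the blank one square in the chosen direction."""
    b = state.index(0)
    t = b + MOVES[action]
    s = list(state)
    s[b], s[t] = s[t], s[b]
    return tuple(s)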
Properties of Search Algorithms
Measuring searching performance:
 Completeness: A search algorithm is said to be complete if it guarantees to return a solution for any random input.
 Optimality: If an algorithm finds the best solution among all other solutions, that solution is said to be an optimal solution.
 Time Complexity: The time taken for an algorithm to complete its task.
 Space Complexity: The maximum storage space required at any point during the search, expressed in terms of the complexity of the problem.
[Diagram of search algorithms: AO* Search, Hill Climbing, Generate & Test, Alpha-Beta pruning, Min-max search]
Uninformed search
‘Uninformed Search’ means the machine blindly follows the algorithm regardless of whether it is right or wrong, efficient or inefficient.
These algorithms are brute-force operations, and they don’t have extra information about the search space; the only information they have is how to traverse or visit the nodes in the tree.
 Thus uninformed search algorithms are also called blind search algorithms.
The search algorithm produces the search tree without using any domain knowledge, which makes it brute force in nature.
They don’t have any background information on how to approach the goal. But these are the basics of search algorithms in AI.
Informed search algorithms
Informed search Algorithms: Best First Search, A*,
AO*, Hill Climbing, Generate & Test, Alpha-Beta pruning,
Min-max search.
Introduction
In uninformed search algorithms, the agent explores the entire search space for all possible solutions of the problem without having any additional knowledge about the search space. Because of this, it takes time to find the solution.
An informed search algorithm, however, uses additional knowledge such as how far we are from the goal, the path cost, how to reach the goal node, etc. This knowledge helps agents to explore less of the search space and find the goal node more efficiently.
Informed search algorithms use the idea of a heuristic, so they are also called heuristic search.
Heuristic function: A heuristic is a function used in informed search that finds the most promising path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal. The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. In short, a heuristic function estimates how close a state is to the goal.
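As a concrete example, a common heuristic for the 8-puzzle described earlier is the total Manhattan distance of the tiles from their goal positions; the goal layout below is an assumption for illustration.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)           # assumed goal layout, 0 = blank

def h(state):
    """Estimate of how close `state` is to the goal (total Manhattan distance)."""
    dist = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue                          # the blank itself is not counted
        goal_idx = GOAL.index(tile)
        dist += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return dist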
The informed search algorithms covered here are:
 Best First Search,
 A*,
 AO*,
 Hill Climbing,
 Generate & Test,
 Alpha-Beta pruning,
 Min-max search.
Pure Heuristic Search:
Pure heuristic search is the simplest form of heuristic search
algorithms.
It expands nodes based on their heuristic value h(n).
It maintains two lists, an OPEN list and a CLOSED list.
In the CLOSED list, it places those nodes which have already been expanded, and in the OPEN list, it places nodes which have not yet been expanded.
Best-first Search Algorithm (Greedy Search)
Greedy best-first search algorithm always selects the path which
appears best at that moment.
It is the combination of depth-first search and breadth-first search
algorithms.
It uses a heuristic function for the search; best-first search allows us to take the advantages of both algorithms.
With the help of best-first search, at each step, we can choose the most promising node. In the best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by a heuristic function.
Example heuristic values h(n):

node   h(n)      node   h(n)      node   h(n)
A      11        E      4         I, J   3
B      5         F      2         S      15
C, D   9         H      7         G      0
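A minimal greedy best-first search sketch using the h(n) values from the table above; the adjacency list is an assumption added only to make the example runnable.

import heapq

H = {"S": 15, "A": 11, "B": 5, "C": 9, "D": 9, "E": 4,
     "F": 2, "H": 7, "I": 3, "J": 3, "G": 0}
GRAPH = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E", "F"],   # assumed edges
         "E": ["H"], "F": ["I", "G"],
         "C": [], "D": [], "H": [], "I": [], "J": [], "G": []}

def greedy_best_first(start, goal):
    """Always expand the OPEN node with the smallest heuristic value h(n)."""
    open_list = [(H[start], start, [start])]          # priority queue ordered by h
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nbr in GRAPH[node]:
            if nbr not in closed:
                heapq.heappush(open_list, (H[nbr], nbr, path + [nbr]))
    return None

print(greedy_best_first("S", "G"))                    # ['S', 'B', 'F', 'G']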
A* Search Algorithm
• A* Search Algorithm is a simple and efficient search algorithm that can be used to find the optimal path between two nodes in a graph.
• It is used for shortest path finding.
• It is an extension of Dijkstra’s shortest path algorithm (Dijkstra’s Algorithm).
• It is the sum of two variables’ values that determines the node it picks at any point in time.
• At each step, it picks the node with the smallest value of ‘f’ (the sum of ‘g’ and ‘h’) and processes that node/cell.
• ‘g’ is the distance it takes to get to a certain square on the grid from the starting point, following the path we generated to get there.
• ‘h’ is the heuristic, which is the estimate of the distance it takes to get to the finish line from that square on the grid.
• Steps:
1. Add the start node to the open list.
2. For all the neighbouring nodes, find the node with the least cost f.
3. Move it to the closed list; for nodes adjacent to the current node, if the node is not reachable, ignore it, else ...
4. Stop when you find the destination, or when it is not possible to find the destination after going through all possible points.
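A minimal A* sketch on a grid, following the g, h and f definitions above; the grid encoding (0 = free cell, 1 = obstacle) and the Manhattan-distance heuristic are assumptions made for illustration.

import heapq

def a_star(grid, start, goal):
    """Expand the cell with the smallest f = g + h; h is the Manhattan distance."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_list = [(h(start), 0, start, [start])]        # entries are (f, g, cell, path)
    closed = set()
    while open_list:
        f, g, cell, path = heapq.heappop(open_list)
        if cell == goal:
            return path, g
        if cell in closed:
            continue
        closed.add(cell)
        r, c = cell
        for nr, nc in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]:
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in closed):
                ng = g + 1                             # each step costs 1
                heapq.heappush(open_list,
                               (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None, float("inf")                          # destination is unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))                    # path around the obstacles, cost 6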
AO* Search Algorithm
• AO* algorithm is a best-first search algorithm.
• AO* algorithm uses the concept of AND-OR graphs to decompose any complex problem into a smaller set of problems which are then solved.
• AND-OR graphs are specialized graphs used in problems that can be broken down into sub-problems, where the AND side of the graph represents a set of tasks that all need to be done to achieve the main goal, whereas the OR side of the graph represents the different ways of performing a task to achieve the same main goal.
Working of the AO* algorithm:
The AO* algorithm works on the formula given below:
f(n) = g(n) + h(n)
where,
• g(n): The actual cost of traversal from the initial state to the current state.
• h(n): The estimated cost of traversal from the current state to the goal state.
• f(n): The estimated total cost of traversal from the initial state to the goal state through the current state.
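A simplified sketch of how these costs combine in an AND-OR tree; the tree, edge costs and heuristic values below are assumptions for illustration, and a full AO* implementation would also revise cost estimates and mark the best partial solution graph as it searches.

H = {"A": 1, "B": 5, "C": 2, "D": 4}                    # assumed heuristic estimates
# Each node maps to its OR options; each option is an AND group of
# (child, edge_cost) pairs that must all be solved.
SUCCESSORS = {
    "A": [[("B", 1)],                                   # OR option 1: solve B alone
          [("C", 1), ("D", 1)]],                        # OR option 2: solve C AND D
}

def f(node):
    """Estimated cost of solving `node`: cheapest OR option, where an AND
    group costs the sum of edge costs plus the children's estimates."""
    if node not in SUCCESSORS:                          # leaf: use the heuristic value
        return H[node]
    return min(sum(cost + f(child) for child, cost in group)
               for group in SUCCESSORS[node])

print(f("A"))        # min(1 + 5, (1 + 2) + (1 + 4)) = 6, so the OR branch to B is chosen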
Hill Climbing Algorithm

• Hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem.
• It terminates when it reaches a peak value where no neighbor has a higher value.
• It is also called greedy local search as it only looks to its good immediate neighbor state and not beyond that.
• A node of the hill climbing algorithm has two components: state and value.
• In this algorithm, we don't need to maintain and handle a search tree or graph, as it only keeps a single current state.
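A minimal hill-climbing sketch; the neighbor function and the objective f(x) = -(x - 3)^2 below are assumed purely to have something to climb.

import random

def hill_climbing(initial, neighbors, value):
    """Keep only the current state; move to the best neighbor until none is better."""
    current = initial
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current                   # a peak (possibly only a local maximum)
        current = best

value = lambda x: -(x - 3) ** 2              # objective function with its peak at x = 3
neighbors = lambda x: [x - 1, x + 1]         # immediate neighbor states
print(hill_climbing(random.randint(-10, 10), neighbors, value))   # 3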
Features of Hill Climbing:
• Generate and Test variant: Hill Climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
• Greedy approach: Hill-climbing algorithm search
moves in the direction which optimizes the cost.
• No backtracking: It does not backtrack the search
space, as it does not remember the previous states.
State-space Diagram for Hill
Climbing:
• The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.
• On the Y-axis we take the function, which can be an objective function or a cost function, and state-space on the X-axis.
• If the function on the Y-axis is cost, then the goal of the search is to find the global minimum and local minimum.
• If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum and local maximum.
Different regions:
• Local Maximum: Local maximum is a
state which is better than its neighbor
states, but there is also another state
which is higher than it.
• Global Maximum: Global maximum is the
best possible state of state space
landscape. It has the highest value of
objective function.
• Current state: It is a state in a landscape
diagram where an agent is currently
present.
• Flat local maximum: It is a flat space in the landscape where all the neighbor states of the current state have the same value.
• Shoulder: It is a plateau region which has
an uphill edge.
Problems in Hill Climbing Algorithm:
• 1. Local Maximum: A local maximum is a
peak state in the landscape which is better
than each of its neighboring states, but there
is another state also present which is higher
than the local maximum.
• 2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the current state contain the same value; because of this, the algorithm cannot find any best direction to move. A hill-climbing search might get lost in the plateau area.
• 3. Ridges: A ridge is a special form of the
local maximum. It has an area which is higher
than its surrounding areas, but itself has a
slope, and cannot be reached in a single
move.
Generate and Test Search Algorithm
• Generate and Test Search is a heuristic search
technique based on Depth First Search with
Backtracking which guarantees to find a solution if done
systematically and there exists a solution.
• In this technique, all the solutions are generated and
tested for the best solution.
• It ensures that the best solution is checked against all
possible generated solutions.
• The evaluation is carried out by the heuristic function.
Algorithm steps:
1. Generate a possible solution. For example, generate a particular point in the problem space or generate a path from a start state.
2. Test to see if this is an actual solution by comparing the chosen point or the endpoint of the chosen path to the set of acceptable goal states.
3. If a solution is found, quit. Otherwise go to Step 1.
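A minimal generate-and-test sketch; the toy generator and test below (searching permutations of letters) are assumptions chosen only to show the loop.

from itertools import permutations

def generate_and_test(generator, test):
    """Generate candidate solutions one by one; stop at the first that passes the test."""
    for candidate in generator:              # step 1: generate a possible solution
        if test(candidate):                  # step 2: test it against the goal condition
            return candidate                 # step 3: solution found, quit
    return None                              # generator exhausted without a solution

# Toy usage: find the ordering of the letters C, A, T that spells "ACT".
print(generate_and_test(permutations("CAT"), lambda p: "".join(p) == "ACT"))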
Properties of Good Generators:
• The good generators need to have the following properties:
 Complete: Good generators need to be complete, i.e. they should generate all the possible solutions and cover all the possible states. In this way, we can guarantee that our algorithm converges to the correct solution at some point in time.
 Non-Redundant: Good generators should not yield a duplicate solution at any point of time, as that reduces the efficiency of the algorithm, thereby increasing the time of search and making the time complexity exponential.
 Informed: Good generators have knowledge about the search space, which they maintain in the form of an array of knowledge. This can be used to determine how far the agent is from the goal, calculate the path cost and even find a way to reach the goal.
Mini-Max Algorithm
• The Mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.
• The Min-Max algorithm is mostly used for game playing in AI, such as Chess, Checkers, Tic-Tac-Toe, Go, and various two-player games. This algorithm computes the minimax decision for the current state.
• In this algorithm two players play the game; one is called MAX and the other is called MIN.
• Each player fights so that the opponent gets the minimum benefit while they themselves get the maximum benefit.
• Both players of the game are opponents of each other, where MAX will select the maximized value and MIN will select the minimized value.
• The minimax algorithm performs a depth-first search of the game tree.
• The steps of the minimax algorithm can be stated as follows:
1. Create the entire game tree.
2. Evaluate the scores for the leaf nodes based on the evaluation function.
3. Backtrack from the leaf nodes to the root node:
• For Maximizer, choose the node with the maximum score.
• For Minimizer, choose the node with the minimum score.
4. At the root node, choose the node with the maximum score.
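A minimal minimax sketch on a hand-built game tree; the tree shape and leaf scores below are assumptions for illustration (internal nodes are lists of children, leaves are evaluation scores).

def minimax(node, maximizing):
    """Backtrack scores from the leaves: MAX takes the maximum, MIN the minimum."""
    if not isinstance(node, list):                     # leaf: return its evaluation score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Depth-2 tree: MAX moves at the root, MIN at the next level.
tree = [[3, 5, 2], [9, 1], [7, 4]]
print(minimax(tree, True))                             # MIN yields 2, 1, 4; MAX picks 4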
Alpha-Beta Pruning
• Alpha-beta pruning is a modified version of the minimax
algorithm.
• It is an optimization technique for the minimax
algorithm.
• It is a technique by which we can compute the correct minimax decision without checking each node of the game tree; this technique is called pruning.
• This technique involves two threshold parameters, Alpha and Beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
• Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but also entire sub-trees.
• The two parameters can be defined as:
 Alpha: The best (highest-value) choice we have found so far
at any point along the path of Maximizer. The initial value of
alpha is -∞.
 Beta: The best (lowest-value) choice we have found so far at
any point along the path of Minimizer. The initial value of beta
is +∞.
• Alpha-beta pruning returns the same move as the standard minimax algorithm does, but it removes all the nodes which do not really affect the final decision and only make the algorithm slow. Hence, by pruning these nodes, it makes the algorithm faster.
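A minimal alpha-beta sketch over the same kind of hand-built tree used in the minimax example above; the tree and its scores are again illustrative assumptions.

import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with pruning: stop exploring a branch once beta <= alpha."""
    if not isinstance(node, list):                     # leaf: evaluation score
        return node
    if maximizing:
        best = -math.inf
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:                          # MIN will never let play reach here
                break                                  # prune the remaining children
        return best
    best = math.inf
    for child in node:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)
        if beta <= alpha:                              # MAX already has a better option
            break
    return best

tree = [[3, 5, 2], [9, 1], [7, 4]]
print(alphabeta(tree, True))     # 4, the same move as plain minimax, with fewer leaves examined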
Thank you
