Unit II
S.KAVITHA
01/17/2025
Search Algorithms in AI
Artificial Intelligence is the study of building agents that act rationally.
Most of the time, these agents run some kind of search
algorithm in the background in order to achieve their tasks.
A search problem consists of:
– A State Space. The set of all possible states you can be in.
– A Start State. The state from which the search begins.
– A Goal Test. A function that looks at the current state and returns whether or
not it is the goal state.
The solution to a search problem is a sequence of actions, called
the plan, that transforms the start state into the goal state.
This plan is achieved through search algorithms.
Search Algorithm Terminologies:
• Search tree: A tree representation of a search problem is called a search
tree. The root of the search tree is the root node, which corresponds
to the initial state.
• Actions: It gives the description of all the available actions to the
agent.
• Transition model: A description of what each action does; it can be
represented as a transition model.
• Path Cost: It is a function which assigns a numeric cost to each path.
• Solution: It is an action sequence which leads from the start node to
the goal node.
• Optimal Solution: A solution that has the lowest cost among all
solutions.
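As a minimal illustration of the components above, the Python sketch below encodes a toy route-finding problem. The graph, costs, and function names are assumptions chosen for illustration, not part of the original notes.

from typing import Dict, List, Tuple

# State space and transition model: each state maps to (action, next_state, step_cost) triples.
TRANSITIONS: Dict[str, List[Tuple[str, str, float]]] = {
    "A": [("go-B", "B", 1.0), ("go-C", "C", 4.0)],
    "B": [("go-C", "C", 2.0), ("go-G", "G", 5.0)],
    "C": [("go-G", "G", 1.0)],
    "G": [],
}

START_STATE = "A"          # start state: where the search begins

def goal_test(state: str) -> bool:
    """Goal test: returns whether the given state is the goal state."""
    return state == "G"

def path_cost(path: List[Tuple[str, str, float]]) -> float:
    """Path cost: assigns a numeric cost to a path (sum of step costs)."""
    return sum(step_cost for _, _, step_cost in path)

# A solution is an action sequence from the start state to the goal state.
solution = [("go-B", "B", 1.0), ("go-C", "C", 2.0), ("go-G", "G", 1.0)]
print(goal_test(START_STATE), path_cost(solution))   # False 4.0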
Search Tree
• Consider the explicit state space.
• One may list all possible paths, eliminating cycles from the paths, to obtain
the complete search tree from a state-space graph. A search tree is a data structure
containing a root node, from where the search starts. Every node may have 0 or
more children. If a node X is a child of node Y, node Y is said to be the parent of node X.
Types of search algorithms:
Based on the search problem, we can classify search
algorithms into uninformed (blind) search and informed
(heuristic) search algorithms; a breadth-first search sketch follows the list below.
Depth-first search (DFS)
Breadth-first search (BFS)
Uniform Cost Search (UCS)
Best-First Search (Greedy Search)
A* Tree Search
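As a concrete sketch of one of the uninformed strategies listed above, the following Python breadth-first search explores a graph level by level. The graph representation, function name, and example data are assumptions for illustration.

from collections import deque
from typing import Dict, List, Optional

def breadth_first_search(graph: Dict[str, List[str]], start: str, goal: str) -> Optional[List[str]]:
    """Return a path from start to goal found by BFS, or None if no path exists."""
    frontier = deque([[start]])          # queue of partial paths (FIFO => level-by-level expansion)
    explored = {start}                   # states already reached, to avoid revisiting
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:                # goal test applied when the node is expanded
            return path
        for successor in graph.get(state, []):
            if successor not in explored:
                explored.add(successor)
                frontier.append(path + [successor])
    return None

# Example: find a route in a small graph (illustrative data).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(breadth_first_search(graph, "A", "D"))   # ['A', 'B', 'D']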
Stochastic Search
• Stochastic search is the method of choice for solving many hard
combinatorial problems.
• Stochastic search algorithms are designed for problems with inherent
random noise or deterministic problems solved by injected randomness.
• The search favours designs with better performance.
• An important feature of stochastic search algorithms is that they can carry
out a broad search of the design space and thus avoid local optima. Also,
stochastic search algorithms do not require gradients to guide the search,
making them a good fit for discrete problems.
• However, there is no guarantee of reaching an optimal solution, and the
algorithm must be run multiple times to make sure the attained solutions are
robust.
Stochastic hill climbing
• Stochastic hill climbing chooses at random from among the
uphill moves; the probability of selection can vary with the
steepness of the uphill move. This usually converges more
slowly than steepest ascent, but in some state landscapes, it
finds better solutions.
• First-choice hill climbing implements stochastic hill climbing
by generating successors randomly until one is generated that
is better than the current state. This is a good strategy when a
state has many (e.g., thousands of) successors; a minimal sketch follows below.
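Below is a minimal sketch of first-choice hill climbing for a maximization problem. The names random_successor and value, and the toy 1-D example, are illustrative assumptions, not part of the original notes.

import random
from typing import Callable, TypeVar

State = TypeVar("State")

def first_choice_hill_climbing(
    current: State,
    random_successor: Callable[[State], State],  # draws one random neighbour of a state
    value: Callable[[State], float],             # objective to maximize
    max_tries: int = 1000,                       # random successors tried before giving up
) -> State:
    """First-choice hill climbing: accept the first randomly generated successor
    that is better than the current state; stop when no better successor is
    found within max_tries attempts (a local optimum)."""
    while True:
        for _ in range(max_tries):
            candidate = random_successor(current)
            if value(candidate) > value(current):
                current = candidate          # move uphill immediately
                break
        else:
            return current                   # no improvement found: local optimum

# Toy example (illustrative only): maximize -(x - 3)^2, so the optimum is near x = 3.
result = first_choice_hill_climbing(
    0.0,
    lambda x: x + random.uniform(-1.0, 1.0),
    lambda x: -(x - 3.0) ** 2,
)
print(round(result, 2))   # close to 3.0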
Stochastic Games in Artificial Intelligence
• Step 1: The algorithm generates the whole game tree and applies the utility function to the
terminal states. In this example, the terminal values are -1 and 4 under node D, 2 and 6 under
node E, -3 and -5 under node F, and 0 and 7 under node G.
• Step 2: Now we find the utility values for the Maximizer. Its initial value is -∞, so we compare
each terminal value with the Maximizer's initial value and determine the higher node values. It
will find the maximum among them all.
• For node D: max(-1, -∞) => max(-1, 4) = 4
• For node E: max(2, -∞) => max(2, 6) = 6
• For node F: max(-3, -∞) => max(-3, -5) = -3
• For node G: max(0, -∞) => max(0, 7) = 7
• Step 3: In the next step, it is the Minimizer's turn, so it will compare all node values
with +∞ and find the third-layer node values.
• For node B = min(4, 6) = 4
• For node C = min(-3, 7) = -3
• Step 4: Now it is the Maximizer's turn, and it will again choose the maximum of all
node values and find the maximum value for the root node. In this game tree,
there are only 4 layers, hence we reach the root node immediately, but in real
games there will be more than 4 layers.
• For node A = max(4, -3) = 4
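The backup computation in these steps can be reproduced with a short recursive minimax sketch in Python. The nested-list tree encoding and the function name are assumptions for illustration; the leaf values are the ones used in the example above.

def minimax(node, maximizing_player):
    """Return the minimax value of a node: leaves are plain numbers,
    internal nodes are lists of children."""
    if isinstance(node, (int, float)):            # terminal state: return its utility
        return node
    if maximizing_player:
        return max(minimax(child, False) for child in node)
    return min(minimax(child, True) for child in node)

# Example tree: A (Max) -> B, C (Min) -> D, E, F, G (Max) -> terminal values.
tree = [[[-1, 4], [2, 6]],     # B = min(D, E) = min(4, 6) = 4
        [[-3, -5], [0, 7]]]    # C = min(F, G) = min(-3, 7) = -3
print(minimax(tree, True))     # A = max(4, -3) = 4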
Properties of Mini-Max algorithm:
• Complete: The Min-Max algorithm is complete. It will definitely
find a solution (if one exists) in a finite search tree.
• Optimal: The Min-Max algorithm is optimal if both opponents
play optimally.
• Time complexity: As it performs DFS over the game tree, the
time complexity of the Min-Max algorithm is O(b^m), where b is the
branching factor of the game tree and m is the maximum
depth of the tree.
• Space complexity: The space complexity of the Mini-Max algorithm is
also similar to DFS, which is O(bm).
Limitation of the minimax Algorithm:
The main drawback of the minimax algorithm is that it becomes really
slow for complex games such as chess and Go. Such games have a huge
branching factor, and the player has many choices to decide among.
This limitation of the minimax algorithm can be addressed by
alpha-beta pruning, which is discussed in the next topic.
Optimal decisions in multiplayer games
Many popular games allow more than two players. Let us
examine how to extend the minimax idea to multiplayer
games. This is straightforward from the technical viewpoint,
but raises some interesting new conceptual issues.
First, we need to replace the single value for each node with a
vector of values. For example, in a three-player game with
players A, B, and C, a vector ⟨vA, vB, vC⟩ is associated with each
node. For terminal states, this vector gives the utility of the
state from each player's viewpoint.
• In that state, player C chooses what to do. The two choices lead to
terminal states with utility vectors ⟨vA = 1, vB = 2, vC = 6⟩ and
⟨vA = 4, vB = 2, vC = 3⟩. Since 6 is bigger than 3, C should choose the first move.
This means that if state X is reached, subsequent play will lead to a
terminal state with utilities ⟨vA = 1, vB = 2, vC = 6⟩. Hence, the backed-
up value of X is this vector. In general, the backed-up value of a node n is
always the utility vector of the successor state with the highest
value for the player choosing at n. Anyone who plays multiplayer
games, such as Diplomacy, quickly becomes aware that much more
is going on than in two-player games. Multiplayer games
usually involve alliances, formal or informal, among the players.
Alliances are made and broken as the game proceeds.
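A minimal sketch of this vector-valued backup, assuming each internal node records which player moves there; the data layout and names are illustrative assumptions, and the example state X uses the utility vectors from the text above.

PLAYERS = ("A", "B", "C")

def backed_up_value(node):
    """Return the utility vector backed up to this node.
    A terminal node is a tuple of utilities (one per player in PLAYERS);
    an internal node is a pair (player_to_move, list_of_children)."""
    if isinstance(node[0], (int, float)):        # terminal state: its utility vector
        return node
    player, children = node
    idx = PLAYERS.index(player)                  # component this player wants to maximize
    return max((backed_up_value(child) for child in children),
               key=lambda v: v[idx])

# State X from the example: player C chooses between two terminal states.
X = ("C", [(1, 2, 6), (4, 2, 3)])
print(backed_up_value(X))                        # (1, 2, 6): best for C, since 6 > 3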
Alpha-Beta Pruning
• Alpha-beta pruning is a modified version of the minimax algorithm.
It is an optimization technique for the minimax algorithm.
• As we have seen with the minimax search algorithm, the number
of game states it has to examine is exponential in the depth of the
tree. We cannot eliminate the exponent, but we can roughly cut it
in half. There is a technique by which we can compute the correct minimax
decision without checking each node of the game tree, and this technique
is called pruning. It involves two threshold parameters, alpha and beta,
for future expansion, so it is called alpha-beta pruning. It is also called
the Alpha-Beta algorithm.
• Alpha-beta pruning can be applied at any depth of a tree, and
sometimes it prunes not only the tree leaves but also entire sub-trees.
• The two parameters can be defined as:
– Alpha: The best (highest-value) choice we have found so far at any point
along the path of Maximizer. The initial value of alpha is -∞.
– Beta: The best (lowest-value) choice we have found so far at any point along
the path of Minimizer. The initial value of beta is +∞.
• Alpha-beta pruning applied to a standard minimax algorithm returns the
same move as the standard algorithm does, but it removes all the
nodes that do not really affect the final decision yet make the
algorithm slow. By pruning these nodes, it makes the
algorithm fast.
Condition for Alpha-beta pruning:
The main condition required for alpha-beta pruning is:
α>=β
Key points about alpha-beta pruning:
• The Max player will only update the value of alpha.
• The Min player will only update the value of beta.
• While backtracking the tree, the node values will be passed to
upper nodes instead of values of alpha and beta.
• We will only pass the alpha, beta values to the child nodes.
Pseudo-code for Alpha-beta Pruning:
function minimax(node, depth, alpha, beta, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node
    if maximizingPlayer then              // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, false)
            maxEva = max(maxEva, eva)
            alpha = max(alpha, maxEva)
            if beta <= alpha then
                break                     // beta cut-off
        return maxEva
    else                                  // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, true)
            minEva = min(minEva, eva)
            beta = min(beta, minEva)
            if beta <= alpha then
                break                     // alpha cut-off
        return minEva
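For reference, here is a runnable Python version following the pseudocode above. The nested-list tree encoding is an assumption for illustration, and the depth cut-off is omitted: recursion simply stops at numeric leaves.

import math

def alphabeta(node, alpha, beta, maximizing_player):
    """Alpha-beta pruning over a tree in which leaves are numeric utilities
    and internal nodes are lists of children."""
    if isinstance(node, (int, float)):
        return node                                   # terminal node: static evaluation
    if maximizing_player:
        max_eva = -math.inf
        for child in node:
            eva = alphabeta(child, alpha, beta, False)
            max_eva = max(max_eva, eva)
            alpha = max(alpha, max_eva)
            if beta <= alpha:                         # beta cut-off: prune remaining children
                break
        return max_eva
    min_eva = math.inf
    for child in node:
        eva = alphabeta(child, alpha, beta, True)
        min_eva = min(min_eva, eva)
        beta = min(beta, min_eva)
        if beta <= alpha:                             # alpha cut-off: prune remaining children
            break
    return min_eva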
Working of Alpha-Beta Pruning:
Let's take an example of a two-player search tree to understand the working of alpha-beta
pruning.
Step 1: In the first step, the Max player starts its first move from node A, where α = -∞ and β =
+∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞,
and node B passes the same values to its child D.
Step 2: At node D, the value of α will be calculated, as it is Max's
turn. The value of α is compared first with 2 and then with 3, and
max(2, 3) = 3 becomes the value of α at node D; the node
value will also be 3.
Step 3: The algorithm now backtracks to node B, where the value of β
will change, as this is Min's turn. Here β = +∞ is compared
with the available successor node value, i.e. min(+∞, 3) = 3;
hence at node B we now have α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B,
which is node E, and the values α = -∞ and β = 3 are also
passed down.
Step 4: At node E, Max takes its turn, and the value of alpha
will change. The current value of alpha is compared with
5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3, where
α >= β, so the right successor of E is pruned and the algorithm
will not traverse it. The value at node E will be 5.
Step 5: In the next step, the algorithm again backtracks the tree, from
node B to node A. At node A, the value of alpha changes;
the maximum available value is 3, as max(-∞, 3) = 3,
and β = +∞. These two values are now passed to the right successor of
A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on
to node F.
Step 6: At node F, the value of α is again compared with the left
child, which is 0, giving max(3, 0) = 3, and then with the
right child, which is 1, giving max(3, 1) = 3; α remains 3, but the
node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the
value of beta will change. It is compared with 1, so min(+∞, 1) = 1. Now at C,
α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C,
which is G, is pruned, and the algorithm will not compute the entire sub-
tree of G.
Step 8: C now returns the value 1 to A; here the best value for A is max(3, 1) =
3. The final game tree shows which nodes were computed and which nodes
were never computed. Hence the optimal value for
the maximizer is 3 for this example.
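For a quick check, the alphabeta function sketched after the pseudocode section can be run on this example tree. The two pruned leaves and the sub-tree G are filled with arbitrary placeholder values (marked below); alpha-beta never examines them, so they cannot change the outcome.

import math

# Leaves under D, E, F and G; the values marked "placeholder" stand in for the
# pruned parts of the tree, which the algorithm never evaluates.
tree = [[[2, 3], [5, 9]],      # B = min(D, E); 9 is a placeholder (pruned leaf of E)
        [[0, 1], [8, 4]]]      # C = min(F, G); 8 and 4 are placeholders (sub-tree G is pruned)
print(alphabeta(tree, -math.inf, math.inf, True))   # 3: the optimal value for the maximizer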