
UNIT-II

Problem solving: state-space search and control strategies: introduction, general problem
solving, characteristics of problem, exhaustive searches, heuristic search techniques, iterative
deepening A*, constraint satisfaction. Problem reduction and game playing: introduction,
problem reduction, game playing, alpha-beta pruning, two-player perfect information games.
INTRODUCTION
Problem solving is a method of deriving the solution steps, beginning from an initial description of the
problem and ending at the desired solution. In AI, problems are frequently modeled as state-space problems,
where the state space is the set of all possible states from the start state to the goal state.
• The 2 types of problem-solving methods that are generally followed are general-purpose & special-purpose
methods.
• A general-purpose method is applicable to a wide variety of problems, whereas a special-purpose method is
tailor-made for a particular problem.
• The most general approach for solving a problem is to generate a solution & test it. To generate a new
state in the search space, an action/operator/rule is applied, and the resulting state is tested to see whether it
is the goal state. If it is not the goal state, the procedure is repeated.
• The order in which rules are applied to the current state is called the control strategy.

GENERAL PROBLEM SOLVING

Production System:
Production System (PS) is one of the formalisms that help AI programs carry out the search process more
conveniently in state-space problems. This system consists of the start (initial) state(s) & goal (final) state(s) of
the problem, along with one or more databases consisting of suitable & necessary information for the particular
task. A Production System consists of a number of production rules.

Example 1: Water Jug Problem

Problem Statement: There are two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring marker on
it. There is a pump that can be used to fill the jugs with water. How can we get exactly 2 gallons of water into the
4-gallon jug?

The state space for this problem can be described as the set of ordered pairs of integers (x, y) such that x
= 0, 1, 2, 3 or 4 and y = 0, 1, 2 or 3; x represents the number of gallons of water in the 4-gallon jug and y
represents the number of gallons of water in the 3-gallon jug.

The start state is (0, 0). The goal state is (2, n) for any value of n.

State: (x, y)

Start state: (0, 0).

Goal state: (2, n) for any n.


Here we need to start from the current state and end up in a goal state.

Production Rules for Water Jug Problem in Artificial Intelligence

1. (x, y) if x < 4 -> (4, y) : Fill the 4-gallon jug.

2. (x, y) if y < 3 -> (x, 3) : Fill the 3-gallon jug.

3. (x, y) if x > 0 -> (x - d, y) : Pour some water out of the 4-gallon jug.

4. (x, y) if y > 0 -> (x, y - d) : Pour some water out of the 3-gallon jug.

5. (x, y) if x > 0 -> (0, y) : Empty the 4-gallon jug on the ground.

6. (x, y) if y > 0 -> (x, 0) : Empty the 3-gallon jug on the ground.

7. (x, y) if x + y >= 4 and y > 0 -> (4, y - (4 - x)) : Pour water from the 3-gallon jug into the
4-gallon jug until the 4-gallon jug is full.

8. (x, y) if x + y >= 3 and x > 0 -> (x - (3 - y), 3) : Pour water from the 4-gallon jug into the
3-gallon jug until the 3-gallon jug is full.

9. (x, y) if x + y <= 4 and y > 0 -> (x + y, 0) : Pour all the water from the 3-gallon jug into the
4-gallon jug.

10. (x, y) if x + y <= 3 and x > 0 -> (0, x + y) : Pour all the water from the 4-gallon jug into the
3-gallon jug.

11. (0, 2) -> (2, 0) : Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug.

12. (2, y) -> (0, y) : Empty the 2 gallons in the 4-gallon jug on the ground.

The solution to Water Jug Problem in Artificial Intelligence

1. Current state = (0, 0).


2. Loop until the goal state (2, n) is reached:

– Apply a rule whose left side matches the current state.

– Set the new current state to be the resulting state.

One Solution to the Water Jug Problem:


(0, 0) – Start state

(0, 3) – Rule 2: Fill the 3-gallon jug.

(3, 0) – Rule 9: Pour all the water from the 3-gallon jug into the 4-gallon jug.

(3, 3) – Rule 2: Fill the 3-gallon jug.

(4, 2) – Rule 7: Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full.

(0, 2) – Rule 5: Empty the 4-gallon jug on the ground.

(2, 0) – Rule 9: Pour all the water from the 3-gallon jug into the 4-gallon jug.
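The rules and loop above can be sketched as a small breadth-first solver. This is a minimal sketch, not part of the notes; rules 3 and 4, which pour out an arbitrary amount d, are omitted since they are never needed to reach the goal:

```python
from collections import deque

# State is (x, y): gallons in the 4-gallon and the 3-gallon jug.

def successors(state):
    x, y = state
    states = set()
    states.add((4, y))                    # Rule 1: fill the 4-gallon jug
    states.add((x, 3))                    # Rule 2: fill the 3-gallon jug
    states.add((0, y))                    # Rule 5: empty the 4-gallon jug
    states.add((x, 0))                    # Rule 6: empty the 3-gallon jug
    pour = min(y, 4 - x)
    states.add((x + pour, y - pour))      # Rules 7/9: pour 3-gallon into 4-gallon
    pour = min(x, 3 - y)
    states.add((x - pour, y + pour))      # Rules 8/10: pour 4-gallon into 3-gallon
    states.discard(state)                 # drop no-op moves
    return states

def solve(start=(0, 0), goal_x=2):
    frontier = deque([[start]])           # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == goal_x:         # goal: (2, n) for any n
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(solve())
```

Because the search is breadth-first, the path returned is a shortest one (7 states), though not necessarily the same 7-state sequence listed above.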

Example 2: Water Jug Problem

Problem Statement: There are two jugs, a 5-gallon one and a 3-gallon one. Neither has any measuring marker on
it. There is a pump that can be used to fill the jugs with water. How can we get exactly 4 gallons of water into the
5-gallon jug?
The state space for this problem can be described as the set of ordered pairs of integers (x, y) such that x
= 0, 1, 2, 3, 4 or 5 and y = 0, 1, 2 or 3; x represents the number of gallons of water in the 5-gallon jug and y
represents the number of gallons of water in the 3-gallon jug.

The start state is (0, 0). The goal state is (4, n) for any value of n.

Table: Production Rules for Water Jug Problem


One Solution to the Water Jug Problem:

Rule Applied    5-Gallon Jug    3-Gallon Jug


Start State         0               0
1                   5               0
8                   2               3
4                   2               0
6                   0               2
1                   5               2
8                   4               3
Goal State          4               0

Example 3: Missionaries & Cannibals Problem


• Problem Statement: Three missionaries & three cannibals want to cross a river. There is a boat on their
side of the river that can be used by either 1 (or) 2 persons. How should they use this boat to cross the river
in such a way that cannibals never outnumber missionaries on either side of the river? If the cannibals ever
outnumber the missionaries (on either bank), then the missionaries will be eaten. How can they cross over
without being eaten? Denote Missionaries as ‘M’, Cannibals as ‘C’ & the Boat as ‘B’, all initially on the
same side of the river.
• Initial State: ([3M, 3C, 1B], [0M, 0C, 0B])
• Goal State: ([0M, 0C, 0B], [3M, 3C, 1B])
Production rules are as follows:

Rule 1: (0, M): One Missionary sailing the boat from Bank-1 to Bank-2.

Rule 2: (M, 0): One Missionary sailing the boat from Bank-2 to Bank-1.

Rule 3: (M, M): Two Missionaries sailing the boat from Bank-1 to Bank-2.

Rule 4: (M, M): Two Missionaries sailing the boat from Bank-2 to Bank-1.

Rule 5: (M, C): One Missionary & One Cannibal sailing the boat from Bank-1 to Bank-2.

Rule 6: (C, M): One Cannibal & One Missionary sailing the boat from Bank-2 to Bank-1.

Rule 7: (C, C): Two Cannibals sailing the boat from Bank-1 to Bank-2.

Rule 8: (C, C): Two Cannibals sailing the boat from Bank-2 to Bank-1.

Rule 9: (0, C): One Cannibal sailing the boat from Bank-1 to Bank-2.

Rule 10: (C, 0): One Cannibal sailing the boat from Bank-2 to Bank-1.
S.No   Rule Applied   Persons on River Bank-1   Persons on River Bank-2
1      Start State    3M, 3C, 1B                0M, 0C, 0B
2      5              2M, 2C, 0B                1M, 1C, 1B
3      2              3M, 2C, 1B                0M, 1C, 0B
4      7              3M, 0C, 0B                0M, 3C, 1B
5      10             3M, 1C, 1B                0M, 2C, 0B
6      3              1M, 1C, 0B                2M, 2C, 1B
7      6              2M, 2C, 1B                1M, 1C, 0B
8      3              0M, 2C, 0B                3M, 1C, 1B
9      10             0M, 3C, 1B                3M, 0C, 0B
10     7              0M, 1C, 0B                3M, 2C, 1B
11     10             0M, 2C, 1B                3M, 1C, 0B
12     7              0M, 0C, 0B                3M, 3C, 1B
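The crossing sequence in the table can be checked mechanically. The sketch below is an assumption, not part of the notes: it represents a state as (missionaries, cannibals, boat) on Bank-1 and verifies that every intermediate state is legal and safe:

```python
# Boat loads allowed by the rules: (missionaries, cannibals) per trip.
MOVES = [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]

def safe(m, c):
    # Cannibals must not outnumber missionaries on either bank
    # (a bank with no missionaries is always safe).
    return (m == 0 or m >= c) and ((3 - m) == 0 or (3 - m) >= (3 - c))

def apply_move(state, move):
    m, c, b = state
    dm, dc = move
    if b == 1:                     # boat on Bank-1: passengers leave Bank-1
        m, c, b = m - dm, c - dc, 0
    else:                          # boat on Bank-2: passengers return
        m, c, b = m + dm, c + dc, 1
    if 0 <= m <= 3 and 0 <= c <= 3 and safe(m, c):
        return (m, c, b)
    return None                    # illegal or unsafe move

# The 11 boat trips from the table, as (M, C) carried each time.
solution = [(1, 1), (1, 0), (0, 2), (0, 1), (2, 0), (1, 1),
            (2, 0), (0, 1), (0, 2), (0, 1), (0, 2)]
state = (3, 3, 1)
for trip in solution:
    assert trip in MOVES
    state = apply_move(state, trip)
    assert state is not None, "unsafe or illegal state reached"
print(state)   # (0, 0, 0): everyone has crossed to Bank-2
```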

State Space Search:

A State Space Search is another method of problem representation that facilitates easy search. Using this
method, we can also find a path from the start state to a goal state while solving a problem. A state space
basically consists of 4 components:

1. A set S containing start states of the problem.


2. A set G containing goal states of the problem.
3. Set of nodes (states) in the graph/tree. Each node represents a state in the problem-solving process.
4. Set of arcs connecting nodes. Each arc corresponds to an operator, which is a step in a problem-solving
process.
A solution path is a path through the graph from a node in S to a node in G. The main objective of a search
algorithm is to determine a solution path in the graph. There may be more than one solution path, as there may
be more than one way of solving the problem.
Example: The Eight-Puzzle Problem
Problem Statement: The eight-puzzle problem has a 3 × 3 grid with 8 randomly numbered (1 to 8) tiles arranged
on it with one empty cell. At any point, a tile adjacent to the empty cell can move into it, creating a new empty
cell. Solving this problem involves arranging the tiles such that we get the goal state from the start state.
A state for this problem should keep track of the positions of all tiles on the game board, with 0 representing the
blank (empty cell) position on the board. The start & goal states may be represented as follows, with each list
representing the corresponding row:

1. Start state: [ [3, 7, 6], [5, 1, 2], [4, 0, 8] ]


2. Goal state: [ [5, 3, 6], [7, 0, 2], [4, 1, 8] ]
3. The operators can be thought of as the moves {Up, Down, Left, Right}, the direction in which the blank
space effectively moves.
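The four blank-moving operators can be sketched as a successor function over the list-of-lists representation above (a minimal sketch, not from the notes):

```python
# Board is a 3x3 list of lists; 0 marks the blank cell.

def successors(board):
    # Locate the blank.
    r, c = next((i, j) for i in range(3) for j in range(3) if board[i][j] == 0)
    states = []
    for dr, dc, move in [(-1, 0, 'Up'), (1, 0, 'Down'),
                         (0, -1, 'Left'), (0, 1, 'Right')]:
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:           # stay on the board
            new = [row[:] for row in board]
            new[r][c], new[nr][nc] = new[nr][nc], new[r][c]   # slide the tile
            states.append((move, new))
    return states

start = [[3, 7, 6], [5, 1, 2], [4, 0, 8]]
for move, s in successors(start):
    print(move, s)
```

For this start state the blank is on the bottom row, so only three of the four moves apply; these three successors are exactly the first level of the partial search tree below.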
Solution: Following is a Partial Search Tree for Eight Puzzle Problem

The search will be continued until the goal state is reached.

Fig: Search Tree for Water Jug Problem


Example: Chess Game (One Legal Chess Move)
Chess is basically a competitive 2-player game played on a chequered board with 64 squares arranged in an
8 × 8 grid. Each player is given 16 pieces of the same colour (black or white): 1 King, 1 Queen, 2 Rooks, 2
Knights, 2 Bishops & 8 Pawns. Each of these pieces moves in a unique manner. The player who chooses the
white pieces gets the first turn to play. The players get alternate chances in which they can move one piece at a
time. The objective of this game is to remove the opponent’s king from the game. The opponent’s King has to
be placed in such a situation that it is under immediate attack & there is no way to save it from the attack.
This is known as Checkmate.
For the problem of playing chess, the starting position can be described as an 8 × 8 array where each position
contains a symbol standing for the appropriate piece in the official chess opening position. We can define our
goal as any board position in which the opponent does not have a legal move & his/her king is under attack. The
legal moves provide the way of getting from the initial state to the goal state. They can be described easily as a
set of rules consisting of 2 parts:
• A left side that serves as a pattern to be matched against the current board position
• A right side that describes the change to be made to the board position to reflect the move

Fig: One Legal Chess Move


Note: There will be a very large number of such rules.

Fig: Another way to Describe Chess Moves


Control Strategies:

Control strategy is one of the most important components of problem solving that describes the order of
application of the rules to the current state. Control strategy should be such that it causes motion towards a
solution. The second requirement of control strategy is that it should explore the solution space in a systematic
manner. Depth-First & Breadth-First are systematic control strategies. There are 2 directions in which a search
could proceed:
• Data-Driven Search, called Forward Chaining, from the Start State
• Goal-Driven Search, called Backward Chaining, from the Goal State

Forward Chaining: The process of forward chaining begins with known facts & works towards a solution. For
example, in the 8-puzzle problem, we start from the start state & work forward to the goal state. In this case, we
begin building a tree of move sequences with the root of the tree as the start state. The states at the next level of
the tree are generated by finding all rules whose left sides match the root & using their right sides to create the
new states. This process is continued until a configuration that matches the goal state is generated.
Backward Chaining: It is a goal-directed strategy that begins with the goal state & continues working backward,
generating more sub-goals that must also be satisfied to satisfy the main goal, until we reach the start state.
Prolog (Programming in Logic) uses this strategy. In this case, we begin building a tree of move sequences with
the goal state at the root of the tree. The states at the next level of the tree are generated by finding all rules
whose right sides match the goal state & using their left sides to create the new states. This process is continued
until a configuration that matches the start state is generated.
Note: We can use both Data-Driven & Goal-Driven strategies for problem solving, depending on the nature of
the problem.
CHARACTERISTICS OF PROBLEM

1. Type of Problems: There are 3 types of problems in real life,


• Ignorable
• Recoverable
• Irrecoverable
Ignorable: These are the problems where we can ignore the solution steps. For example, if some lemma is
proved while proving a theorem & later on we realize that it is not useful, then we can ignore this solution step
& prove another lemma. Such problems can be solved using a simple control strategy.

Recoverable: These are the problems where solution steps can be undone. For example, in the Water Jug
Problem, if we have filled up the jug, we can empty it as well. Any state can be reached again by undoing the
steps. These problems are generally puzzles played by a single player. Such problems can be solved by
backtracking, so the control strategy can be implemented using a push-down stack.
Irrecoverable: These are the problems where solution steps cannot be undone. For example, any 2-player game
such as chess, playing cards, snakes & ladders, etc. falls in this category. Such problems can be solved by a
planning process.
2. Decomposability of a Problem: Divide the problem into a set of independent smaller sub-problems,
solve them, and combine the solutions to get the final solution. The process of dividing into sub-problems
continues till we get a set of the smallest sub-problems for which a small collection of specific rules is used.
Divide-and-conquer is the technique commonly used for solving such problems. It is an important & useful
characteristic, as each sub-problem is simpler to solve & can be handed over to a different processor. Thus, such
problems can be solved in a parallel processing environment.
3. Role of Knowledge: Knowledge plays an important role in solving any problem. Knowledge could be in
the form of rules & facts, which help in generating the search space for finding the solution.
4. Consistency of the Knowledge Base used in Solving the Problem: Make sure that the knowledge base used
to solve the problem is consistent. An inconsistent knowledge base will lead to wrong solutions. For example,
suppose we have knowledge in the form of rules & facts as follows:
If it is humid, it will rain. If it is sunny, then it is daytime. It is a sunny day. It is night time.
This knowledge is not consistent: ‘it is daytime’ can be deduced from the knowledge, & both ‘it is night time’
and ‘it is daytime’ cannot hold at the same time. If the knowledge base has such an inconsistency, then some
method must be used to resolve the conflict.
5. Requirement of Solution: We should analyze whether the solution required is absolute (or) relative. We
call a solution absolute if we have to find the exact solution, whereas it is relative if a reasonably good,
approximate solution suffices. For example, in the Water Jug Problem, if there is more than one way to solve
the problem, then we follow one path successfully. There is no need to go back & find a better solution. In this
case, the solution is absolute. In the travelling salesman problem, our goal is to find the shortest route; unless
all routes are known, it is difficult to know the shortest route. This is a Best-Path problem, whereas the Water
Jug Problem is an Any-Path problem. An Any-Path problem can generally be solved in a reasonable amount of
time by using heuristics that suggest good paths to explore. Best-Path problems are computationally harder than
Any-Path problems.

EXHAUSTIVE SEARCHES (OR) UNINFORMED SEARCHES


• Breadth-First Search
• Depth-First Search
• Depth-First Iterative Deepening
• Bidirectional Search
1. Breadth-First Search (BFS):
• BFS is the most common search strategy for traversing a tree or graph. This algorithm searches
breadthwise in a tree or graph.
• The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at
the current level before moving to nodes of the next level.
• The breadth-first search algorithm is an example of a general-graph search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.

Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Loop until the goal state is found or NODE-LIST is empty.
a. Remove the first element, say E, from the NODE-LIST. If NODE-LIST was empty then quit.
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state.
ii. If the new state is the goal state, quit and return this state.
iii. Otherwise add this state to the end of NODE-LIST
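The NODE-LIST algorithm above can be sketched in Python with a FIFO queue. The adjacency list is an assumed toy example, not from the notes:

```python
from collections import deque

def bfs(graph, start, goal):
    node_list = deque([[start]])      # FIFO queue of paths; front = oldest
    visited = {start}
    while node_list:
        path = node_list.popleft()    # remove the first element, E
        for nxt in graph.get(path[-1], []):   # apply each matching rule
            if nxt == goal:
                return path + [nxt]   # goal state generated: quit
            if nxt not in visited:
                visited.add(nxt)
                node_list.append(path + [nxt])   # add to the end of NODE-LIST
    return None                       # NODE-LIST empty: quit

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G']}
print(bfs(graph, 'S', 'G'))   # ['S', 'A', 'C', 'G']
```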
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, then BFS will provide the minimal solution
which requires the least number of steps.
Disadvantages:
• BFS requires lots of memory, since each level of the tree must be saved in memory to expand the
next level.
• BFS needs lots of time if the solution is far away from the root node.
Characteristics:

o Time Complexity: The time complexity of BFS can be obtained from the number of nodes traversed
until the shallowest goal node, where d = depth of the shallowest solution and b = branching factor:
o T(b) = 1 + b + b^2 + ... + b^d = O(b^d)
o Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which is
O(b^d).
o Completeness: BFS is complete, which means that if the shallowest goal node is at some finite depth, then
BFS will find a solution.
o Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
Example 1: S---> A--->B---->C--->D---->G--->H--->E---->F---->I --- >K.

Example 2: a---> b---> c---> d---> e---> f---> g---> h---> i---> j ---> k.

Example 3: BFS for Water Jug Problem


Example 4: 8-Puzzle Problem

Example 5:

2. Depth-First Search (DFS):


• DFS is a recursive algorithm for traversing a tree or graph data structure.
• It is called depth-first search because it starts from the root node and follows each path to its
greatest depth before moving to the next path.
• DFS uses a stack data structure for its implementation.
Algorithm:
1. If the initial state is a goal state, quit and return success.
2. Otherwise, loop until success or failure is signaled.
a) Generate a state, say E, and let it be a successor of the initial state. If there are no more
successors, signal failure.
b) Call Depth-First Search with E as the initial state.
c) If success is returned, signal success. Otherwise continue in this loop.
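The recursive algorithm above can be sketched as follows; the adjacency list is an assumed example:

```python
def dfs(graph, state, goal, visited=None):
    if visited is None:
        visited = set()
    if state == goal:                     # Step 1: goal test
        return [state]
    visited.add(state)
    for succ in graph.get(state, []):     # Step 2a: generate a successor E
        if succ not in visited:
            sub = dfs(graph, succ, goal, visited)   # Step 2b: recurse with E
            if sub is not None:           # Step 2c: propagate success
                return [state] + sub
    return None                           # no successors left: failure

graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G'], 'E': []}
print(dfs(graph, 'S', 'G'))   # ['S', 'C', 'G']
```

Note how the search fully explores the dead-end branch under A before backtracking and finding the goal via C.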

Advantages:
• DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root
node to the current node.
• It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).

Disadvantages:
• There is the possibility that many states keep re-occurring, and there is no guarantee of finding
the solution.
• The DFS algorithm goes deep down in its search and may sometimes enter an infinite loop.

Example 1:

Note: It will start searching from root node S, and traverse A, then B, then D and E. After traversing E, it will
backtrack the tree, as E has no other successor and the goal node is still not found. After backtracking it will
traverse node C and then G, where it will terminate, as it has found the goal node.
Example 2:

Example 3: Water Jug Problem

Example 4: 8- Puzzle Problem


Example 5: 8- Puzzle Problem

Characteristics:

o Completeness: DFS search algorithm is complete within finite state space as it will expand every node
within a limited search tree.

o Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the
algorithm. It is given by:

T(b) = 1 + b + b^2 + ... + b^m = O(b^m)

where m = maximum depth of any node, which can be much larger than d (the shallowest solution
depth).

o Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the
space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).

o Optimality: The DFS search algorithm is non-optimal, as it may generate a large number of steps or a high
cost to reach the goal node.
Depth-First Iterative Deepening (DFID):

• DFID is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit
by gradually increasing the limit until a goal is found.
• This algorithm performs depth-first search up to a certain "depth limit", and keeps increasing the depth
limit after each iteration until the goal node is found.
• This search algorithm combines the benefits of breadth-first search's fast search and depth-first
search's memory efficiency.
• The iterative deepening search algorithm is useful as an uninformed search when the search space is
large and the depth of the goal node is unknown.
Example:

Iteration 1: A

Iteration 2: A, B, C

Iteration 3: A, B, D, E, C, F, G
Iteration 4: A, B, D, H, I, E, C, F, K, G

In the fourth iteration, the algorithm will find the goal node.
Advantages:
• It combines the benefits of BFS and DFS search algorithm in terms of fast search and memory efficiency.
Disadvantages:
• Repeats all the work of the previous phase.
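The iterations above correspond to repeated depth-limited searches with a growing limit. A minimal sketch, using an assumed example tree, is:

```python
def dls(tree, node, goal, limit):
    # Depth-limited DFS: stop recursing once the limit is exhausted.
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in tree.get(node, []):
        sub = dls(tree, child, goal, limit - 1)
        if sub is not None:
            return [node] + sub
    return None

def iddfs(tree, root, goal, max_depth=10):
    for limit in range(max_depth + 1):    # limit 0, 1, 2, ... (one per iteration)
        result = dls(tree, root, goal, limit)
        if result is not None:
            return result
    return None

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'F': ['K']}
print(iddfs(tree, 'A', 'K'))   # ['A', 'C', 'F', 'K']
```

Each new iteration repeats all the work of the previous one, which is the disadvantage noted above, but the repeated shallow levels are cheap compared with the deepest level.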
4. Bidirectional Search:

Bidirectional search is a graph search algorithm that runs 2 simultaneous searches. One search moves
forward from the start state & other moves backward from the goal state & stops when the two meet in the middle.
It is useful for those problems which have a single start state & single goal state.
Advantages:
• Bidirectional search is fast.
• Bidirectional search requires less memory

Disadvantages:
• Implementation of the bidirectional search tree is difficult.
• In bidirectional search, one should know the goal state in advance.
Example:

Fig: Graph to be Searched using Bidirectional Search

If a match is found, then the path can be traced from the start state to the matched state & from the matched
state to the goal state. It should be noted that each node has links to its successors as well as to its parent. These
links allow generating the complete path from the start state to the goal state.
The trace of finding path from node 1 to 16 using Bidirectional Search is as given below.
The Path obtained is 1, 2, 6, 11, 14, 16.
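Since the figure for the 1-to-16 graph is missing, the sketch below runs the two frontiers on an assumed chain graph whose solution path matches the one stated above:

```python
from collections import deque

def bidirectional(graph, start, goal):
    # Parent maps double as visited sets for the two frontiers.
    f_parent, b_parent = {start: None}, {goal: None}
    f_queue, b_queue = deque([start]), deque([goal])

    def expand(queue, parent, other):
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                if nxt in other:          # the two frontiers meet here
                    return nxt
                queue.append(nxt)
        return None

    while f_queue and b_queue:
        meet = expand(f_queue, f_parent, b_parent) or \
               expand(b_queue, b_parent, f_parent)
        if meet is not None:
            # Trace the full path through the meeting node via both parent links.
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = f_parent[n]
            path.reverse()
            n = b_parent[meet]
            while n is not None:
                path.append(n)
                n = b_parent[n]
            return path
    return None

graph = {1: [2], 2: [1, 6], 6: [2, 11], 11: [6, 14], 14: [11, 16], 16: [14]}
print(bidirectional(graph, 1, 16))   # [1, 2, 6, 11, 14, 16]
```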
5. Analysis of Search Methods:
• Time Complexity: Time required by an algorithm to find a solution.

• Space Complexity: Space required by an algorithm to find a solution.

Search Technique    Time         Space


BFS                 O(b^d)       O(b^d)
DFS                 O(b^d)       O(d)
DFID                O(b^d)       O(d)
Bidirectional       O(b^(d/2))   O(b^(d/2))
Table: Performance Comparison
Travelling Salesman Problem (TSP):

Statement: In the travelling salesman problem (TSP), one is required to find the shortest route that visits all the
cities once & returns to the starting point. Assume that there are ‘n’ cities & the distance between each pair of
cities is given.
The problem seems simple, but it is deceptive. If all possible paths of the search tree are explored &
the shortest path is returned, this requires (n-1)! paths to be examined for ‘n’ cities. Branch & bound improves
on this:
• Start generating complete paths, keeping track of the shortest path found so far.
• Stop exploring any path as soon as its partial length becomes greater than the shortest path length
found so far.

In this case (n = 5), there will be 4! = 24 possible paths. In the performance comparison below, we can
notice that out of the 13 paths shown, 5 paths are only partially evaluated.
Table: Performance Comparison
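The branch & bound idea above can be sketched as follows. The 4-city symmetric distance matrix is an assumed example, not the one from the missing table:

```python
import math

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]

best_cost = math.inf
best_tour = None

def search(path, cost):
    global best_cost, best_tour
    if cost >= best_cost:                 # bound: prune this partial path
        return
    if len(path) == len(dist):            # all cities visited
        total = cost + dist[path[-1]][path[0]]   # close the tour
        if total < best_cost:
            best_cost, best_tour = total, path + [path[0]]
        return
    for city in range(len(dist)):
        if city not in path:              # branch on each unvisited city
            search(path + [city], cost + dist[path[-1]][city])

search([0], 0)
print(best_tour, best_cost)   # [0, 1, 3, 2, 0] 80
```

Once a complete tour of cost 80 is found, every partial path whose length already reaches 80 is abandoned without being extended, which is exactly the pruning described above.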
HEURISTIC SEARCH TECHNIQUES
Heuristic: A heuristic is helpful in improving the efficiency of the search process.
• Generate & Test
• Branch & Bound Search (Uniform Cost Search)
• Hill Climbing
• Beam Search
• Best-First Search
• A* Algorithm

Generate & Test:


The generate-and-test strategy is the simplest of all approaches. This method generates a candidate solution for
the given problem and tests whether the generated solution is the required solution.
Algorithm:

Start
• Generate a possible solution
• Test if it is a goal
• If not, go to Start; else quit
End
Advantage:
• Guarantees finding a solution if a solution really exists.

Disadvantage:
• Not suitable for larger problems.

Branch & Bound Search (Uniform Cost Search):


Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm
comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to
find a path to the goal node which has the lowest cumulative cost. Uniform-cost search expands nodes according
to their path costs from the root node. It can be used to solve any graph/tree where the optimal cost is in demand.
A uniform-cost search algorithm is implemented using a priority queue, which gives maximum priority to the
lowest cumulative cost.
Example:

Advantage:
• Uniform cost search is optimal because at every state the path with the least cost is
chosen.

Disadvantage:
• It does not care about the number of steps involved in searching and is only concerned with path cost,
due to which this algorithm may get stuck in an infinite loop.
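A priority-queue implementation can be sketched as follows; the graph and edge costs are assumed examples:

```python
import heapq

def ucs(graph, start, goal):
    # graph maps node -> [(neighbour, edge_cost)].
    frontier = [(0, start, [start])]      # priority queue keyed on path cost
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest path first
        if node == goal:
            return cost, path
        if node in visited:               # skip stale, costlier entries
            continue
        visited.add(node)
        for nxt, step in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

graph = {'S': [('A', 1), ('B', 5)], 'A': [('B', 2), ('C', 4)],
         'B': [('G', 3)], 'C': [('G', 5)]}
print(ucs(graph, 'S', 'G'))   # (6, ['S', 'A', 'B', 'G'])
```

Note that the direct edge S-B of cost 5 is beaten by the detour S-A-B of cost 3, which uniform-cost search discovers automatically because it always pops the cheapest frontier entry.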
Hill Climbing:
• Simple Hill Climbing
• Steepest-Ascent Hill Climbing (Gradient Search)
Simple Hill Climbing:
Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates one neighbour
node state at a time and selects the first one which improves the current cost, setting it as the current state. It
only checks one successor state at a time and, if that state is better than the current state, it moves; otherwise it
stays in the same state.
Algorithm:
Step 1: Evaluate the initial state, if it is goal state then return success and Stop.
Step 2: Loop Until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check the new state:
• If it is the goal state, then return success and quit.
• Else, if it is better than the current state, then assign the new state as the current state.
• Else, if it is not better than the current state, then return to Step 2.
Step 5: Exit.
Steepest-Ascent Hill Climbing (Gradient Search):
The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. This algorithm examines
all the neighbouring nodes of the current state and selects the neighbour node which is closest to the goal state.
This algorithm consumes more time, as it searches multiple neighbours.
Algorithm:
Step 1: Evaluate the initial state; if it is the goal state then return success and stop, else make the current state
the initial state.
Step 2: Loop until a solution is found or the current state does not change.
a) Let SUCC be a state such that any successor of the current state will be better than it.
b) For each operator that applies to the current state:
• Apply the operator and generate a new state.
• Evaluate the new state.
• If it is the goal state, then return it and quit; else compare it to SUCC.
• If it is better than SUCC, then set the new state as SUCC.
• If SUCC is better than the current state, then set the current state to SUCC.
Step 3: Exit.
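Steepest-ascent hill climbing can be sketched on a toy one-dimensional landscape. The problem is an assumed example; the evaluation function plays the role of the heuristic:

```python
def steepest_ascent(successors, value, state):
    current = state
    while True:
        neighbours = successors(current)
        if not neighbours:
            return current
        best = max(neighbours, key=value)     # SUCC: the best successor
        if value(best) <= value(current):     # no neighbour improves: stop
            return current
        current = best                        # move to SUCC

# Assumed toy problem: maximise f(x) = -(x - 3)^2 over integers, steps of +/- 1.
f = lambda x: -(x - 3) ** 2
succ = lambda x: [x - 1, x + 1]
print(steepest_ascent(succ, f, 0))   # 3
```

On this smooth landscape the climb reaches the global maximum; on landscapes with local maxima, plateaus, or ridges it can stop early, which is exactly the set of disadvantages listed below.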
Disadvantages of Hill Climbing:
1. Local Maximum: It is a state that is better than all its neighbours but not better than some other states
which are farther away. From this state all moves look worse. In such a situation, backtrack to some earlier
state & try going in a different direction to find a solution.

2. Plateau: It is a flat area of the search space where all neighbouring states have the same value. It is not
possible to determine the best direction. In such a situation, make a big jump in some direction & try to get
to a new section of the search space.
3. Ridge: It is an area of the search space that is higher than the surrounding areas but that cannot be
traversed by single moves in any one direction. It is a special kind of local maximum.

Beam Search:
Beam Search is a heuristic search algorithm in which the W best nodes at each level are always
expanded. It progresses level by level & moves downward only from the best W nodes at each level. Beam
Search uses BFS to build its search tree. At each level of the tree it generates all successors of the states at the
current level and sorts them in order of increasing heuristic values. However, it only considers W states at each
level, whereas the other nodes are ignored.
Here, W - Width of the Beam Search
B - Branching Factor
There will only be W * B nodes under consideration at any depth, of which only W nodes will be selected.

Algorithm:
1. NODE = Root_Node; FOUND = false
2. If NODE is the goal node, then FOUND = true; else find the successors of NODE, if any, with their
estimated costs, and store them in the OPEN list.
3. While (FOUND = false and it is possible to proceed further) do
{
• Sort the OPEN list;
• Select the top W elements from the OPEN list, put them in the W_OPEN list, and empty the OPEN list;
• For each NODE in the W_OPEN list:
{
• If NODE = goal state then FOUND = true; else find the successors of NODE, if any, with their
estimated costs, and store them in the OPEN list;
}
}
4. If FOUND = true then return Yes, otherwise return No;
5. Stop
Example:
Below is the search tree generated using the beam search algorithm. Assume W = 2 & B = 3. Here the black
nodes are selected, based on their heuristic values, for further expansion.
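The algorithm above can be sketched as follows. The tree and heuristic values are assumed examples (lower h is better), not the ones from the figure:

```python
import heapq

def beam_search(tree, h, root, goal, W=2):
    level = [root]
    while level:
        if goal in level:
            return True
        # Generate all successors of the current level's states.
        succs = [c for node in level for c in tree.get(node, [])]
        if not succs:
            return False
        # Keep only the W best-looking states; discard the rest.
        level = heapq.nsmallest(W, succs, key=h)
    return False

tree = {'A': ['B', 'C', 'D'], 'B': ['E', 'F'], 'C': ['G'], 'D': ['H'],
        'G': ['K']}
h = {'A': 5, 'B': 4, 'C': 2, 'D': 6, 'E': 5, 'F': 6, 'G': 1, 'H': 7, 'K': 0}
print(beam_search(tree, h.get, 'A', 'K', W=2))   # True
```

With W = 2 and B = 3 here, node D (h = 6) is pruned at the first level and never expanded; the search reaches K through C and G. Because pruning is permanent, beam search is not complete: a goal below a pruned node is unreachable.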

Best-First Search (Greedy Search):


It is a way of combining the advantages of both depth-first and breadth-first search into a single method.
At each step of the best-first search process, we select the most promising of the nodes we have generated so far.
This is done by applying an appropriate heuristic function to each of them.
In some cases we have many options to solve a problem, but only one of them must be solved. In AI this
can be represented as an OR graph: among all available sub-problems, either of them may be solved. Hence the
name OR graph.
To implement such a graph-search procedure, we will need to use two lists of nodes.

OPEN: This list contains all the nodes that have been generated and have had the heuristic function applied to
them, but which have not yet been examined. OPEN is actually a priority queue in which the elements with the
highest priority are those with the most promising value of the heuristic function.
CLOSED: This list contains all the nodes that have already been examined. We need to keep these nodes in
memory if we want to search a graph rather than a tree.

Algorithm:
1. Start with OPEN containing just the initial state
2. Until a goal is found or there are no nodes left on OPEN do:
a) Pick the best node on OPEN
b) Generate its successors
c) For each successor do:
i. If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
ii. If it has been generated before, change the parent if this new path is better than the
previous one. In that case, update the cost of getting to this node and to any successors
that this node may already have.
Example:

Fig: A Best-First Search


Step 1: At this level we have only one node, i.e., the initial node A.

Step 2: Now we generate the successors of A; three new nodes are generated, namely B, C, and D, with costs of
3, 5 and 1 respectively. These nodes are added to the OPEN list, and A is moved to the CLOSED list since it
has been processed.

Among these three nodes, D has the least cost and hence is selected for expansion. So this node is moved to the
CLOSED list.
Step 3: At this stage node D is expanded, generating the new nodes E and F with costs 4 and 6 respectively.
The newly generated nodes are added to the OPEN list, and node D is added to the CLOSED list.
Step 4: At this stage node B is expanded, generating the new nodes G & H with costs 6 and 5 respectively. The
newly generated nodes are added to the OPEN list, and node B is added to the CLOSED list.

Step 5: At this stage node E is expanded, generating the new nodes I & J with costs 2 and 1 respectively. The
newly generated nodes are added to the OPEN list, and node E is added to the CLOSED list.

(Or)
Best-first Search Algorithm

Greedy best-first search always selects the path which appears best at that moment. It is a combination
of the depth-first and breadth-first search approaches, and it uses a heuristic function to guide the search. Best-first search
allows us to take the advantages of both algorithms. With the help of best-first search, at each step, we can choose
the most promising node. In the best-first search algorithm, we expand the node which is closest to the goal node,
where the closeness is estimated by a heuristic function, i.e.

f(n) = h(n)

where h(n) = estimated cost from node n to the goal.

The greedy best first algorithm is implemented by the priority queue.


Best first search algorithm:
o Step 1: Place the starting node into the OPEN list.

o Step 2: If the OPEN list is empty, Stop and return failure.

o Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the
CLOSED list.
o Step 4: Expand the node n, and generate the successors of node n.

o Step 5: Check each successor of node n, and find whether any node is a goal node or not. If any successor
node is goal node, then return success and terminate the search, else proceed to Step 6.
o Step 6: For each successor node, the algorithm checks the evaluation function f(n) and whether the node
is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
o Step 7: Return to Step 2.
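The seven steps above can be sketched as follows. The graph and h-values below are assumptions chosen to reproduce the iterations in the example that follows, since the original heuristic table is in a figure.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search sketch: always expand the node with the
    lowest h(n); the returned path is not guaranteed to be optimal."""
    open_list = [(h[start], start)]
    closed = set()
    parent = {start: None}
    while open_list:                         # Step 2: fail when OPEN empties
        _, n = heapq.heappop(open_list)      # Step 3: lowest h(n)
        if n == goal:                        # goal test
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
        closed.add(n)
        for succ in graph.get(n, []):        # Step 4: generate successors
            if succ not in closed and succ not in parent:
                parent[succ] = n             # Step 6: add new nodes to OPEN
                heapq.heappush(open_list, (h[succ], succ))
    return None

# Assumed graph and heuristic values for the S...G example
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
```

This yields the path S ---> B ---> F ---> G, as in the iterations shown below.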

Example:
Consider the below search problem, which we will traverse using greedy best-first search. At each iteration,
each node is expanded using the evaluation function f(n) = h(n), which is given in the below table.
In this search example, we are using two lists which are OPEN and CLOSED Lists. Following are the iteration for
traversing the above example.

Expand node S and put it in the CLOSED list

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]
           : Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]
           : Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S----> B----->F----> G

o Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).
o Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the
maximum depth of the search space.
o Complete: Greedy best-first search is also incomplete, even if the given state space is finite.
o Optimal: Greedy best first search algorithm is not optimal.
A* Search Algorithm:
o A* search is the most commonly known form of best-first search.
o It uses heuristic function h (n), and cost to reach the node n from the start state g (n).
o It has combined features of UCS and greedy best-first search, by which it solves the problem efficiently.
o A* search algorithm finds the shortest path through the search space using the heuristic function.
o This search algorithm expands less search tree and provides optimal result faster.
o A* algorithm is similar to UCS except that it uses g(n)+h(n) instead of g(n).

o In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can
combine both costs as f(n) = g(n) + h(n), and this sum is called the fitness number.

At each point in the search space, only those node is expanded which have the lowest value of f(n), and the algorithm
terminates when the goal node is found.

Algorithm of A* search:
Step1: Place the starting node in the OPEN list.

Step 2: Check if the OPEN list is empty or not, if the list is empty then return failure and stops.

Step 3: Select the node from the OPEN list which has the smallest value of evaluation function (g+h), if node n is
goal node then return success and stop, otherwise

Step 4: Expand node n and generate all of its successors, and put n into the closed list. For each successor n', check
whether n' is already in the OPEN or CLOSED list, if not then compute evaluation function for n' and place into
Open list.

Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer which
reflects the lowest g(n') value.

Step 6: Return to Step 2.
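The steps above can be sketched as follows. The arc costs and heuristic values below are assumptions chosen to match the S...G worked example further below (the original figure is not reproduced here).

```python
import heapq

def a_star(graph, h, start, goal):
    """A* sketch: order OPEN by f(n) = g(n) + h(n); keep the best g found
    so far for each node, so better paths replace worse ones (Step 5)."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):   # better path to succ?
                best_g[succ] = g2
                heapq.heappush(open_list,
                               (g2 + h[succ], g2, succ, path + [succ]))
    return None, float('inf')

# Assumed costs and heuristics reproducing the example iterations below
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)],
         'B': [('D', 5)], 'C': [('D', 3), ('G', 4)]}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
```

With these values, a_star(graph, h, 'S', 'G') returns the path S ---> A ---> C ---> G with cost 6, matching Iteration 4 of the example.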


Advantages:
o A* search algorithm performs better than other search algorithms.
o A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.

Disadvantages:

o It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
o A* search algorithm has some complexity issues.
o The main drawback of A* is memory requirement as it keeps all generated nodes in the memory, so it is not
practical for various large-scale problems.

Example:
o In this example, we will traverse the given graph using the A* algorithm.
o The heuristic value of all states is given in the below table so we will calculate the f(n) of each state using
the formula f(n)= g(n) + h(n), where g(n) is the cost to reach any node from start state.
o Here we will use OPEN and CLOSED list.

Solution:
Initialization: {(S, 5)}

Iteration1: {(S--> A, 4), (S-->G, 10)}

Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G, 10)}

Iteration 4 will give the final result, as S--->A--->C--->G it provides the optimal path with cost 6.

Complete: A* algorithm is complete as long as:

o Branching factor is finite.


o Cost of every action is fixed.

Optimal: A* search algorithm is optimal if it follows below two conditions:

o Admissible: the first condition requires for optimality is that h(n) should be an admissible heuristic for A*
tree search. An admissible heuristic is optimistic in nature.
o Consistency: Second required condition is consistency for only A* graph-search.

If the heuristic function is admissible, then A* tree search will always find the least cost path.

• Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function, and the
number of nodes expanded is exponential in the depth of the solution d. So the time complexity is O(b^d), where
b is the branching factor.
• Space Complexity: The space complexity of the A* search algorithm is O(b^d).

Example 2:
Example 3: Assuming all the values (arcs) as ‘1’

Optimal Solution by A* Algorithm:


• Underestimation
• Overestimation
Underestimation: From the below start node, A is expanded to B, C & D with f values 4, 5 & 6
respectively. Here, we are assuming that the cost of all arcs is 1 for the sake of simplicity. Note that node B has
the minimum f value, so we expand this node to E, which has f value 5. Since the f value of C is also 5, we resolve in
favour of E, the path we are currently expanding. Now node E is expanded to node F with f value 6. Clearly,
expansion of node F is stopped, as its f value is no longer the smallest. Thus, we see that by underestimating the
heuristic value, we have wasted some effort, eventually discovering that B was farther away than we thought.
Now we go back & try another path & will find the optimal path.

Overestimation: Here we are overestimating the heuristic value of each node in the graph/tree. We expand B to E,
E to F & F to G for a solution path of length 4. But assume that there is a direct path from D to a solution giving a
path of length 2; as the h value of D is also overestimated, we will never find it. We
may find some other, worse solution without ever expanding D. So, by overestimating h, we cannot be guaranteed
to find the shortest path.

Admissibility of A*:
A search algorithm is admissible if, for any graph, it always terminates with an optimal path from start state to goal
state, if such a path exists. We have seen earlier that if the heuristic function 'h' underestimates the actual cost from the
current state to the goal state, then it is bound to give an optimal solution & hence is called an admissible function. So, we
can say that A* always terminates with the optimal path in case h is an admissible heuristic function.
ITERATIVE-DEEPENING A*
IDA* is a combination of the DFID & A* algorithm. Here the successive iterations are corresponding to
increasing values of the total cost of a path rather than increasing depth of the search. Algorithm works as follows:

• For each iteration, perform a DFS, pruning a branch when its total cost (g + h) exceeds a given
threshold.
• The initial threshold starts at the estimated cost of the start state & increases for each iteration of
the algorithm.
• The threshold used for the next iteration is the minimum cost of all values that exceeded the
current threshold.
• These steps are repeated till we find a goal state.
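The steps above amount to a cost-bounded depth-first search. In the sketch below, graph maps each node to (successor, arc cost) pairs and h must be admissible; the small example graph is hypothetical, chosen only to show the thresholds growing between iterations.

```python
def ida_star(graph, h, start, goal):
    """IDA* sketch: DFS pruned when g + h exceeds the current threshold;
    the next threshold is the smallest f-value that was pruned."""
    def dfs(node, g, threshold, path):
        f = g + h[node]
        if f > threshold:
            return f, None                   # prune: report the exceeding f
        if node == goal:
            return f, path
        min_pruned = float('inf')
        for succ, cost in graph.get(node, []):
            if succ not in path:             # avoid cycles on this path
                t, found = dfs(succ, g + cost, threshold, path + [succ])
                if found is not None:
                    return t, found
                min_pruned = min(min_pruned, t)
        return min_pruned, None

    threshold = h[start]                     # initial threshold: estimate of start
    while True:
        t, found = dfs(start, 0, threshold, [start])
        if found is not None:
            return found
        if t == float('inf'):
            return None                      # no goal reachable
        threshold = t                        # min cost that exceeded the threshold

# Hypothetical graph: node -> list of (successor, arc cost); h is admissible
graph = {'S': [('A', 1), ('G', 10)], 'A': [('C', 1)], 'C': [('G', 4)]}
h = {'S': 5, 'A': 3, 'C': 2, 'G': 0}
```

On this graph the thresholds grow 5, then 6, and the call ida_star(graph, h, 'S', 'G') returns the path ['S', 'A', 'C', 'G'].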

Let us consider an example to illustrate the working of the IDA* algorithm, as shown below. Initially, the
threshold value is the estimated cost of the start node. In the first iteration, threshold = 5. Now we generate all the
successors of the start node & compute their estimated values as 6, 8, 4, 8 & 9. The successors having values greater
than 5 are to be pruned. Now for the next iteration, we consider the threshold to be the minimum of the pruned nodes'
values, that is, threshold = 6, & the node with value 6 along with the node with value 4 are retained for further expansion.
Advantages:
• Simpler to implement than A*, as it does not use OPEN & CLOSED lists
• Finds solution of least cost or optimal solution
• Uses less space than A*

Fig: Working of IDA*


(Or) Iterative Deepening A* algorithm (IDA*)
• The graph traversal algorithm.
• Find the shortest path in a weighted graph between the start and goal nodes
using the path search algorithm.
• An alternative to the iterative-deepening depth-first search algorithm.
• Uses a heuristic function, a technique taken from the A* algorithm.
• It is a depth-first search algorithm, so it uses less memory than the A*
algorithm.
• IDA* uses an admissible heuristic, one that never overestimates the cost of
reaching the goal.
• It is focused on searching the most promising nodes, so it does not go to the same
depth everywhere.

The evaluation function in IDA* looks like this:

f(n) = g(n) + h(n), where h is admissible.
here,
f(n) = Total cost evaluation function.
g(n) = The actual cost from the initial node to the current node.
h(n) = Heuristic estimated cost from the current node to the goal state; it is based
on an approximation derived from the problem's characteristics.
What is the f score?
In the IDA* algorithm, F-score is a heuristic function that is used to estimate the cost of
reaching the goal state from a given state. It is a combination of two other heuristic
functions, g(n) and h(n).
It is used to determine the order in which the algorithm expands nodes in the search tree
and thus, it plays an important role in how quickly the algorithm finds a solution. A lower
F-score indicates that a node is closer to the goal state and will be expanded before a node
with a higher F-score.
Simply it is nothing but g(n) + h(n).

How does the IDA* algorithm work?


Step 1: Initialization
Set the root node as the current node, and find the f-score.
Step 2: Set threshold
Set the cost limit as a threshold for a node, i.e. the maximum f-score allowed for that node
for further exploration.
Step 3: Node Expansion
Expand the current node to its children and find f-scores.
Step 4: Pruning
If for any node the f-score > threshold, prune that node because it is considered too
expensive to explore further, and store it in the visited node list.
Step 5: Return Path
If the goal node is found, then return the path from the start node to the goal node.
Step 6: Update the Threshold
If the goal node is not found, repeat from Step 2, changing the threshold to the minimum
pruned value from the visited node list, and continue until you reach the goal node.
Example
In the below tree, the f-score is written inside the nodes, meaning the f-scores are already
computed. The start node is 2 and the goal node is 15; the explored nodes are colored
green.
So now we have to reach the given goal by using the IDA* algorithm.

Iteration 1

• Take the root node as the current node, i.e. 2.


• Threshold = current node's f-score, i.e. 2. So explore its children.
• 4 > threshold & 5 > threshold. So this iteration is over, and the pruned values
are 4 and 5.

Iteration 2

• In the pruned values, the least is 4, so threshold = 4.


• Current node = 2 and 2 < threshold, so explore its two children
one by one.
• The first child is 4, so set current node = 4, equal to the threshold, and
explore its children, 5 and 4. Since 5 > threshold, prune it, and explore the
second child of node 4, i.e. 4: set current node = 4 = threshold and explore its
children, 8 & 7. Both 8 & 7 > threshold, so prune them. At the end of this,
the pruned values are 5, 8, 7.
• Similarly, explore the second child of the root node 2, i.e. 5, as the current node:
5 > threshold, so prune it.
• So, our pruned values are 5, 8, 7.

Iteration 3

• In the pruned values, the least is 5, so threshold = 5.


• Current node = root node = 2 and 2 < threshold, so explore its two
children one by one.
• The first child is 4: set current node = 4 < threshold and explore its children,
5 and 4. Since 5 = threshold, explore its children too, 7 & 8; both > threshold,
so prune them. Then explore the second child of node 4, i.e. 4: set current node = 4 <
threshold and explore its children, 8 & 7; here both 8 & 7 > threshold, so
prune them. At the end of this, the pruned values are 7 & 8.
• Similarly, explore the second child of the root node 2, i.e. 5, as the current node: 5
= threshold, so explore its children too, 6 & 6; both 6 & 6 > threshold,
so prune them.
• So, our pruned values are 7, 8 & 6.

Iteration 4

• In the pruned values, the least value is 6, so threshold = 6.


• Current node = root node = 2 and 2 < threshold, so explore its two
children one by one.
• The first child is 4: set current node = 4 < threshold and explore its
children, 5 and 4. Since 5 < threshold, explore its children too, 7 & 8; both >
threshold, so prune them. Then explore the second child of node 4, i.e. 4: set
current node = 4 < threshold and explore its children, 8 & 7; here both 8 & 7
> threshold, so prune them. At the end of this, the pruned values are 7 & 8.
• Similarly, explore the second child of the root node 2, i.e. 5, as the current node: 5
< threshold, so explore its children too, 6 & 6; both 6 & 6 = threshold,
so explore them one by one.
• The first 6 has two children, 6 & 8. Since 6 = threshold, explore its children
too, 13 & 7; here both 13 & 7 > threshold, so prune them. Next, 8 >
threshold, so prune it. So the pruned values at this stage are 13, 7 & 8.
• Explore the second child of 5, i.e. 6 = threshold, so explore its children, 7 & 9.
Both are greater than the threshold, so prune them.
• So, our pruned values are 13, 7, 8 & 9.

Iteration 5

• In the pruned values, the least value is 7, so threshold = 7.


• Current node = root node = 2 and 2 < threshold, so explore its two
children one by one.
• The first child is 4: set current node = 4 < threshold and explore its
children, 5 and 4.
• The first child of 4 is 5 < threshold, so explore its children too, 7 & 8. Here 7 =
threshold, so explore its children, 12 & 14; both > threshold, so prune them.
The second child of 5 is 8 > threshold, so prune it. At this stage, the
pruned values are 12, 14 & 8.
• Now explore the second child of node 4, i.e. 4: set current node = 4 <
threshold and explore its children, 8 & 7. Here 8 > threshold, so prune it;
then go to the second child, i.e. 7 = threshold, and explore its children, 13 & 8;
both > threshold, so prune them. At the end of this, the pruned values are
12, 14, 8 & 13.
• Similarly, explore the second child of the root node 2, i.e. 5, as the current node: 5
< threshold, so explore its children too, 6 & 6; both 6 & 6 < threshold,
so explore them one by one.
• The first 6 has two children, 6 & 8. Since 6 < threshold, explore its children
too, 13 & 7. Here 13 > threshold, so prune it; 7 = threshold and
has no children, so shift to the next child of 6, i.e. 8 > threshold, and
prune it. The pruned values at this stage are 12, 14, 8 & 13.
• Explore the second child of 5, i.e. 6 < threshold, and explore its children, 7 & 9.
Here 7 = threshold, so explore its children, 8 & 14; both are greater than the
threshold, so prune them. Now the other child of 6 is 9 > threshold, so prune
it.
• So, our pruned values are 12, 14, 8, 13 & 9.
Iteration 6

• In the pruned values, the least value is 8, so threshold = 8.


• Current node = root node = 2 and 2 < threshold, so explore its two
children one by one.
• The first child is 4: set current node = 4 < threshold and explore its
children, 5 and 4.
• The first child of 4 is 5 < threshold, so explore its children too, 7 & 8. Here 7 <
threshold, so explore its children, 12 & 14; both > threshold, so prune them.
The second child of 5 is 8 = threshold, so explore its children, 16 &
15; both > threshold, so prune them. At this stage, the pruned values are 12, 14, 16
& 15.
• Now explore the second child of node 4, i.e. 4: set current node = 4 <
threshold and explore its children, 8 & 7. Here 8 = threshold, so explore
its children, 12 & 9; both > threshold, so prune them. Then go to the second
child, i.e. 7 < threshold, and explore its children, 13 & 8. Since 13 >
threshold, prune it; 8 = threshold and has no children. At the end of
this, our pruned values are 12, 14, 16, 15 and 13.
• Similarly, explore the second child of the root node 2, i.e. 5, as the current node: 5
< threshold, so explore its children too, 6 & 6; both 6 & 6 < threshold,
so explore them one by one.
• The first 6 has two children, 6 & 8. Since 6 < threshold, explore its children
too, 13 & 7. Here 13 > threshold, so prune it; 7 < threshold and
has no children, so shift to the next child of 6, i.e. 8 = threshold.
So explore its children too, 15 & 16. Here 15 = goal node, so stop this
iteration. No need to explore further.
• The goal path is 2 --> 5 --> 6 --> 8 --> 15.
IDA* can be used to solve various real-life problems that are:
• The 15-Puzzle Problem
• The 8-Queens Problem
Advantages
• IDA* is guaranteed to find the optimal solution if one exists.
• IDA* avoids the memory blow-up of A* by using an iterative-deepening
approach, where the cost threshold is gradually
increased.
• IDA* uses an admissible heuristic: it never overestimates the cost of reaching the
goal.
• It is efficient in handling large numbers of states and large branching factors.
Disadvantages
• It explores visited nodes again and again, since it does not keep track of visited nodes
across iterations.
• IDA* may be slower to reach a solution than other search algorithms like A* or
breadth-first search, because it re-explores the same nodes in each
iteration.
• It takes more time and power than the A* algorithm.
• It’s important to note that IDA* is not suitable for all types of problems, and the
choice of algorithm will depend on the specific characteristics of the problem you’re
trying to solve.

CONSTRAINT SATISFACTION
A Constraint Satisfaction Problem in artificial intelligence involves a set of
variables, each of which has a domain of possible values, and a set of constraints that define
the allowable combinations of values for the variables. The goal is to find a value for each
variable such that all the constraints are satisfied.

More formally, a CSP is defined as a triple (X,D,C)where:

• X is a set of variables { x1, x2, ..., xn}.

• D is a set of domains {D1, D2, ..., Dn}, where each Di is the set of possible values
for xi.

• C is a set of constraints {C1, C2, ..., Cm}, where each Ci is a constraint that restricts
the values that can be assigned to a subset of the variables.

The goal of a CSP is to find an assignment of values to the variables that satisfies all
the constraints. This assignment is called a solution to the CSP.
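As a small illustration of the (X, D, C) formulation, the sketch below solves a three-variable coloring CSP by simple backtracking. The variables, domains and constraints are made up for the example; practical CSP solvers add constraint propagation and variable-ordering heuristics on top of this skeleton.

```python
def backtracking_search(variables, domains, constraints):
    """Minimal CSP backtracking sketch: assign one variable at a time and
    undo the assignment whenever some constraint is violated."""
    def consistent(assignment):
        return all(c(assignment) for c in constraints)

    def backtrack(assignment):
        if len(assignment) == len(variables):
            return dict(assignment)          # all variables assigned
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            assignment[var] = value
            if consistent(assignment):
                result = backtrack(assignment)
                if result is not None:
                    return result
            del assignment[var]
        return None

    return backtrack({})

def neq(a, b):
    """Constraint a != b; trivially satisfied until both are assigned."""
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]

# X = three regions, D = three colors each, C = adjacent regions differ
variables = ['WA', 'NT', 'SA']
domains = {v: ['red', 'green', 'blue'] for v in variables}
constraints = [neq('WA', 'NT'), neq('WA', 'SA'), neq('NT', 'SA')]
solution = backtracking_search(variables, domains, constraints)
```

The returned solution assigns a different color to each of the three mutually adjacent regions, satisfying every constraint.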

Significance of Constraint Satisfaction Problem in AI


CSPs are highly significant in artificial intelligence for several reasons:
• They model a wide range of real-world problems where decision-making is subject to
certain conditions and limitations.
• CSPs offer a structured and general framework for representing and solving problems,
making them versatile in problem-solving applications.
• Many AI applications, such as scheduling, planning, and configuration, can be
mapped to CSPs, allowing AI systems to find optimal solutions efficiently.

Algorithm:
1. Propagate available constraints. To do this, first set OPEN to the set of all objects
that must have values assigned to them in a complete solution. Then do until an
inconsistency is detected or until OPEN is empty:
a) Select an object OB from OPEN. Strengthen as much as possible the set of
constraints that apply to OB.
b) If this set is different from the set that was assigned the last time OB was
examined, or if this is the first time OB has been examined, then add to OPEN all
objects that share any constraints with OB.
c) Remove OB from OPEN.
2. If the union of the constraints discovered above defines a solution, then quit and report
the solution.
3. If the union of the constraints discovered above defines a contradiction, then return
failure.
4. If neither of the above occurs, then it is necessary to make a guess in order to
proceed. To do this, loop until a solution is found or all possible solutions have been
eliminated:
a) Select an object whose value is not yet determined and select a way of
strengthening the constraints on that object.
b) Recursively invoke constraint satisfaction with the current set of constraints
augmented by the strengthening constraint just selected.

Example: Cryptarithmetic problem

Let us take M = 1, because adding any two single-digit numbers (even with a carry) yields
a carry of at most 1.

Assume that S = 8 or 9, so that S + M = 10 + O


or S + M + C3 = 10 + O (with carry C3).
If S = 9: S + M = 9 + 1 = 10 (with no carry).
If S = 8: S + M + C3 = 8 + 1 + 1 = 10 (with carry). Either way, we get the O
value as 0. Therefore, M = 1, S = 9 & O = 0.

So, here E + O = N, i.e. E + 0 = N. Then E = N, which is not possible because no two letters
may have the same value. So there must be a carry: E + O + C2 = N, i.e.
E + 0 + 1 = N, which gives E + 1 = N.
Estimate the E value from the remaining possible digits, i.e. 2, 3, 4, 5, 6, 7, 8. From this
estimation, the E & N values are satisfied at E = 5: E + 1 = 5 + 1 = 6, i.e. N = 6.
Therefore, M = 1, S = 9, O = 0, E = 5 & N = 6.

So, here N + R + C1 must produce E with a carry of 1. We already know that E = 5, so the
column is satisfied by taking R = 8: 6 + 8 + 1 = 15.
Therefore, M = 1, S = 9, O = 0, E = 5, N = 6 & R = 8

Here, D + E = Y, and it has to produce the carry C1 = 1, so D + 5 >= 10; D = 7 gives
7 + 5 = 12, so Y = 2.

Final Values are M = 1, S = 9, O = 0, E = 5, N = 6, R = 8, D = 7 & Y = 2.
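The deduction above can be verified by brute force. The sketch below fixes M = 1 (as derived in the first step) and tries every assignment of distinct digits to the remaining seven letters; it is an illustrative check, not an efficient constraint solver.

```python
from itertools import permutations

def solve_send_more_money():
    """Exhaustively check SEND + MORE = MONEY with M fixed to 1
    and all letters bound to distinct digits (S must be non-zero)."""
    m = 1
    for s, e, n, d, o, r, y in permutations([0, 2, 3, 4, 5, 6, 7, 8, 9], 7):
        if s == 0:                           # leading digit cannot be zero
            continue
        send = 1000*s + 100*e + 10*n + d
        more = 1000*m + 100*o + 10*r + e
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + more == money:
            return {'M': m, 'S': s, 'O': o, 'E': e,
                    'N': n, 'R': r, 'D': d, 'Y': y}
    return None
```

The only assignment found is M = 1, S = 9, O = 0, E = 5, N = 6, R = 8, D = 7, Y = 2, i.e. 9567 + 1085 = 10652, matching the values derived above.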


By substituting all the above values,

Some other Examples of Constraint Satisfaction are as below,

Example 2:

Example 3:

Example 4:

Example 5:

Example 6:

Adversarial Search (GAME SEARCH)


Formalization of the problem:
A game can be defined as a type of search in AI which can be formalized with the following
elements:

o Initial state: It specifies how the game is set up at the start.


o Player(s): It specifies which player has the move in a given state.
o Action(s): It returns the set of legal moves in state space.
o Result(s, a): It is the transition model, which specifies the result of moves in the state
space.
o Terminal-Test(s): The terminal test is true if the game is over, else it is false.
The states where the game ends are called terminal states.
o Utility(s, p): A utility function gives the final numeric value for a game that ends in
terminal states s for player p. It is also called payoff function. For Chess, the outcomes
are a win, loss, or draw and its payoff values are +1, 0, ½. And for tic-tac-toe, utility
values are +1, -1, and 0.

Game tree:
A game tree is a tree where nodes of the tree are the game states and Edges of the tree are the
moves by players. Game tree involves initial state, actions function, and result Function.

Min-Max Algorithm in Artificial Intelligence

Min-max algorithm uses two functions –


MOVEGEN: It generates all the possible moves that can be generated from the current
position.
STATICEVALUATION: It returns a value depending upon the goodness of a position from the viewpoint
of the two players.
o Min-max algorithm is a recursive or backtracking algorithm which is used in decision-
making and game theory. It provides an optimal move for the player, assuming that the
opponent is also playing optimally.
o Min-Max algorithm uses recursion to search through the game-tree.
o Min-Max algorithm is mostly used for game playing in AI, such as chess, checkers,
tic-tac-toe, Go, and various other two-player games. This algorithm computes the minimax
decision for the current state.
o In this algorithm two players play the game, one is called MAX and other is called
MIN.
o Both players compete: each tries to gain the maximum benefit while granting the
opponent the minimum benefit.
o Both Players of the game are opponent of each other, where MAX will select the
maximized value and MIN will select the minimized value.
o The Minimax algorithm performs a depth-first search algorithm for the exploration of
the complete game tree.
o The Minimax algorithm proceeds all the way down to the terminal nodes of the tree,
then backtracks up the tree as the recursion unwinds.

Working of Min-Max Algorithm:

o The working of the minimax algorithm can be easily described using an example.
Below we have taken an example of game-tree which is representing the two-player
game.
o In this example, there are two players one is called Maximizer and other is called
Minimizer.
o Maximizer will try to get the Maximum possible score, and Minimizer will try to get
the minimum possible score.
o This algorithm applies DFS, so in this game-tree, we have to go all the way through the
leaves to reach the terminal nodes.
o At the terminal nodes, the terminal values are given, so we compare those values and
backtrack up the tree until the initial state is reached. Following are the main steps involved
in solving the two-player game tree:

Step 1: In the first step, the algorithm generates the entire game tree and applies the utility
function to get the utility values for the terminal states. In the below tree diagram, let's take A
as the initial state of the tree. Suppose the maximizer takes the first turn, with worst-case initial
value = -∞, and the minimizer takes the next turn, with worst-case initial value =
+∞.

Step 2: Now, first we find the utility values for the Maximizer. Its initial value is -∞, so we
compare each terminal-state value with the initial value of the Maximizer and determine the
higher node values. It finds the maximum among them all.

o For node D max(-1, -∞) => max(-1, 4) = 4


o For node E max(2, -∞) => max(2, 6) = 6
o For node F max(-3, -∞) => max(-3, -5) = -3
o For node G max(0, -∞) => max(0, 7) = 7

Step 3: In the next step, it's a turn for minimizer, so it will compare all nodes value with +∞,
and will find the 3rd layer node values.

o For node B = min(4, 6) = 4

o For node C = min(-3, 7) = -3

Step 4: Now it's a turn for Maximizer, and it will again choose the maximum of all nodes value
and find the maximum value for the root node. In this game tree, there are only 4 layers, hence
we reach immediately to the root node, but in real games, there will be more than 4 layers.

o For node A max(4, -3)= 4

That was the complete workflow of the Minimax two player game.
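The worked example above can be reproduced with a short recursive sketch. The node names, tree shape and terminal values are taken from the example (A is a MAX node, B and C are MIN nodes, D to G are MAX nodes over the leaves).

```python
def minimax(node, maximizing, tree, values):
    """Minimax sketch: MAX levels take the maximum over children,
    MIN levels the minimum; terminal nodes return their utility."""
    if node in values:                       # terminal state
        return values[node]
    results = [minimax(c, not maximizing, tree, values) for c in tree[node]]
    return max(results) if maximizing else min(results)

# Tree and terminal values from the worked example
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['d1', 'd2'], 'E': ['e1', 'e2'],
        'F': ['f1', 'f2'], 'G': ['g1', 'g2']}
values = {'d1': -1, 'd2': 4, 'e1': 2, 'e2': 6,
          'f1': -3, 'f2': -5, 'g1': 0, 'g2': 7}
```

Calling minimax('A', True, tree, values) returns 4, the value found at the root in Step 4.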

Properties of Mini-Max algorithm:

o Complete- Min-Max algorithm is Complete. It will definitely find a solution (if exist),
in the finite search tree.
o Optimal- Min-Max algorithm is optimal if both opponents are playing optimally.
o Time complexity- As it performs DFS on the game tree, the time complexity of the
Min-Max algorithm is O(b^m), where b is the branching factor of the game tree and m is
the maximum depth of the tree.
o Space Complexity- The space complexity of the Min-Max algorithm is O(bm), similar to
DFS.

Limitation of the Minimax Algorithm:

The main drawback of the minimax algorithm is that it gets really slow for complex games
such as chess, Go, etc. These games have a huge branching factor, and the player has many
choices to decide among. This limitation of the minimax algorithm can be mitigated by alpha-
beta pruning, which is discussed in the next topic.

➢ Alpha-Beta Pruning
o Alpha-beta pruning is a modified version of the minimax algorithm. It is an
optimization technique for the minimax algorithm.

o As we have seen, the number of game states the minimax search algorithm has
to examine is exponential in the depth of the tree.

o We cannot eliminate the exponent, but we can effectively cut it in half.

o Hence there is a technique by which without checking each node of the game tree we
can compute the correct minimax decision, and this technique is called pruning.

o This involves two threshold parameters, Alpha and Beta, for future expansion, so it is
called alpha-beta pruning. It is also called the Alpha-Beta algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes
not only the tree leaves but entire sub-trees.

o The two-parameter can be defined as:

a. Alpha: The best (highest-value) choice we have found so far at any point along
the path of Maximizer. The initial value of alpha is -∞.

b. Beta: The best (lowest-value) choice we have found so far at any point along
the path of Minimizer. The initial value of beta is +∞.

Alpha-beta pruning returns the same move as the standard minimax algorithm, but it
removes all the nodes which do not really affect the final decision and only make the
algorithm slow. Pruning these nodes makes the
algorithm fast.

Condition for Alpha-beta pruning:


The main condition required for alpha-beta pruning is:

α >= β
Key points about alpha-beta pruning:

o The Max player will only update the value of alpha.

o The Min player will only update the value of beta.

o While backtracking the tree, the node values will be passed to upper nodes instead of
values of alpha and beta.

o We will only pass the alpha, beta values to the child nodes.

Working of Alpha-Beta Pruning:


Let's take an example of two-player search tree to understand the working of Alpha-beta
pruning
Step 1: At the first step, the Max player will make the first move from node A, where α = -∞ and β =
+∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and
node B passes the same values to its child D.

Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is
compared with 2 first and then 3, and max(2, 3) = 3 will be the value of α at node D; the node
value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this is Min's
turn. Now β = +∞ is compared with the available subsequent node value, i.e. min(∞, 3) =
3; hence at node B now α = -∞ and β = 3.

In the next step, the algorithm traverses the next successor of node B, which is node E, and the
values α = -∞ and β = 3 will also be passed.
Step 4: At node E, Max will take its turn, and the value of alpha will change. The current value
of alpha will be compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3, where
α >= β, so the right successor of E will be pruned, and the algorithm will not traverse it. The
value at node E will be 5.
Step 5: At the next step, the algorithm again backtracks the tree, from node B to node A. At node A,
the value of alpha will be changed; the maximum available value is 3, as max(-∞, 3) = 3, and β =
+∞. These two values are now passed to the right successor of A, which is node C.
At node C, α=3 and β= +∞, and the same values will be passed on to node F.
Step 6: At node F, the value of α is again compared with the left child, which is 0: max(3, 0) = 3. It is
then compared with the right child, which is 1: max(3, 1) = 3, so α remains 3,
but the node value of F becomes 1.

Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta
will be changed. It is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, and again
it satisfies the condition α >= β, so the next child of C, which is G, will be pruned, and the
algorithm will not compute the entire sub-tree G.

Step 8: C now returns the value 1 to A; here the best value for A is max(3, 1) = 3. Following
is the final game tree, showing the nodes which were computed and the nodes which were
never computed. Hence the optimal value for the maximizer is 3 for this example.

The effectiveness of alpha-beta pruning is highly dependent on the order in which each node
is examined. Move order is an important aspect of alpha-beta pruning.
It can be of two types:

o Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of the
leaves of the tree, and works exactly like the minimax algorithm. In this case, it also
consumes more time because of the alpha-beta factors; such an order of pruning is called
worst ordering. In this case, the best move occurs on the right side of the tree. The time
complexity for such an order is O(b^m).

o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning
happens in the tree and the best moves occur on the left side of the tree. We apply DFS,
so it searches the left of the tree first and can go twice as deep as the minimax algorithm in the same
amount of time. The complexity with ideal ordering is O(b^(m/2)).

