
Artificial Intelligence (III/II)

Course Code: CT-653


(Module#3)
Dhawa Sang Dong
(Lecturer)

Kathmandu Engineering College


Kalimati, Kathmandu
January 2023
Chapter#3
Search Techniques

✓ Class Outline

1 Introduction to Searching

2 Uninformed Search Techniques

3 Informed Search Techniques

4 Game Playing
Introduction to Searching

Searching
- It is the process of finding the required or appropriate state or
node (the goal state) among the possible states.

- Searching is performed through the state space of the problem;
equivalently, the search process is carried out by constructing a
search tree.

¬ Therefore, search is the process of locating a solution to a
problem by systematically looking at nodes in a search tree or
search space (state space) until a goal node is found.
Introduction to Searching..

A search problem has the following three main factors:

1 Search Space: the set of possible states (candidate solutions)
that a search agent may pass through.

2 Start State: the state from which the agent begins the search.

3 Goal Test: a function which observes the current state and
returns whether the goal state has been achieved or not.
Introduction to Searching...

Solving any problem involves the following steps:

✓ Define the Problem: the problem definition must include precise
initial and final states.

✓ Analyze the Problem: select the most important features that have
an immense impact on the solution.

✓ Isolate and Represent: convert these important features into a
knowledge representation.

✓ Problem Solving Techniques: choose the best of the available
search techniques and apply it to the particular problem.
Introduction to Searching...

Some terminologies in search algorithms

✍ Goal state: the target state of the searching process.

✍ Searching strategy: a method to control the search procedure.

✍ Search tree: a tree representation of the search problem. The root
of the search tree corresponds to the initial state (root node).

✍ Actions: a description of all the actions available to the agent.
Introduction to Searching...

Some terminologies in search algorithms...

✍ Transition model: a description of what each action does.

✍ Path Cost: a function which assigns a numeric cost to each path
from one state to another.

✍ Solution: an action sequence which leads from the start node to
the goal node.

✍ Optimal Solution: the solution with the lowest cost among all
possible solutions.
Introduction to Searching...
Searching through the state space involves node visiting and node
expansion (traversing edges in a graph so that all nodes or
vertices can be explored).

✓ Node Visiting: the process of selecting a node and checking
whether it is the goal node.
✓ Node Expansion/Exploring: the process of generating new nodes
(child nodes) of previously selected nodes.

Fig. 1 Visiting and Exploring Nodes


Introduction to Searching...

Problem Formulation
A search problem is defined in terms of (1) states (initial state and
goal state) and (2) operators (transition model).
States (initial and goal states)
A state is a representation of the problem elements at a given
instance. It contains all of the information necessary to predict the
effects of an action and to determine whether it is a goal state.
✓ initial state: the state the agent starts in. A sequence of states
connected by a sequence of actions is a path.
✓ goal state: a state where the agent may end the search.
Operators
An operator is an action that transforms one state into another
state. In general, not all operators can be applied in all states.
Introduction to Searching...

Performance Metrics for the searching process

The output of a searching algorithm or problem-solving
technique is either Failure or a Solution.
• Completeness: An algorithm is complete if it always
terminates with a solution whenever one exists.
• Optimality: An algorithm is optimal over a set of possible
solutions if it finds the optimal solution.
• Time Complexity: How long will the searching technique
take to find the solution?
• Space Complexity: How much memory will the searching
algorithm consume to find the solution?
Classification of Search Algorithms

Search Techniques/Algorithms

Uninformed/Blind Search:
- Breadth-First Search (BFS)
- Depth-First Search (DFS)
- Depth Limited Search
- Uniform Cost Search
- Iterative Deepening Depth-First Search (IDDFS)

Informed/Heuristic Search:
- Hill Climbing Search
- Best First Search
- A* Algorithm/Search
- AO* Algorithm/Search
- Constraint Satisfaction Search
Uninformed Search Techniques

- Uninformed Search is also called Blind Search,
Brute Force Search or Exhaustive Search (why?)
- It is uninformed because it has no domain information such as the
closeness or location of the goal node.
- It is Blind Search because the search tree is searched with no
information about the search space (except the initial state,
the set of operators, and the goal test).
- It is Brute Force Search because it assumes no additional knowledge
other than how to traverse the search tree and how to identify
leaf nodes and goal nodes.
Uninformed Search Techniques

- It gets no suggestion or guideline towards finding the problem solution.

- It consumes comparatively more time while searching, hence the
slower performance.
- It will examine all nodes in the tree until it finds a goal.

¬ Thus, uninformed search algorithms look through the search space
for all possible solutions of the problem without any additional
knowledge about the search space.
Uninformed Search Techniques...

Breadth First Search/traversal (BFS)


- Breadth-First Search is a graph traversal technique, where the
graph is the search space.
- In this algorithm, you select an initial node (source or root
node) and then
- start traversing the graph/tree layer-wise in such a way that
all nodes and their respective children (neighboring nodes,
left to right) are visited and explored.
- Once the goal node is found, the search process is stopped.
Uninformed Search Techniques...

Breadth First Search/traversal (BFS)


- A queue is an abstract data structure that follows the
First-In-First-Out methodology while extracting.
- One end is always used to insert data (enqueue) and the other
end is used to remove data (dequeue).
Uninformed Search Techniques...

Breadth First Search/traversal (BFS)


Steps involved in the Breadth First Search algorithm to reach the goal
state from the initial state:
• Step 1: Take an empty Queue.
• Step 2: Select a starting node (visiting a node) and insert it
into the Queue.
• Step 3: While the Queue is not empty, extract a node from the
Queue; if it is the goal node, stop; otherwise insert its child nodes
(exploring/expanding the node) into the Queue.
• Step 4: Print the extracted node and repeat Step 3.
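The steps above can be sketched compactly in Python. This is a minimal illustration, assuming the search space is given as an adjacency-list dictionary; the names graph, start, and goal are hypothetical.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-First Search over an adjacency-list graph.
    Returns the path from start to goal, or None if no path exists."""
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # dequeue the oldest path
        node = path[-1]
        if node == goal:                 # goal test on the visited node
            return path
        for child in graph.get(node, []):            # expand layer-wise
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])      # enqueue child paths
    return None

# Example usage on a small graph
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'E': ['G']}
print(bfs(graph, 'A', 'G'))              # ['A', 'B', 'E', 'G']
```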
Uninformed Search Techniques...

Breadth First Search/traversal (BFS)

Fig. 2 Architecture of Breadth First Search (BFS)


Uninformed Search Techniques...

Breadth First Search/traversal (BFS)


Uninformed Search Techniques...

Depth First Search(DFS)


- It proceeds down a single branch of the search tree at a time.
- First, it expands the root and proceeds to the left-most child node.
- Again, it expands the current node and proceeds to the left-most
child node in the next lower layer.
- The process continues until a dead end is reached at the lowest layer.
- Only when the search hits a dead end does it backtrack and
expand nodes at previous layers.
Uninformed Search Techniques...
Depth First Search(DFS)
Uninformed Search Techniques...

Depth First Search(DFS)


Steps involved in the Depth-First Search (DFS) algorithm to reach
the goal state:
Step-1: put the start node on the stack.
Step-2: while the stack is not empty:
✓ pop a node from the stack
✓ if the popped node is the goal, stop and return success.
✓ else push the nodes connected to the popped node onto the stack,
provided they are not already on the stack.
Step-3: return failure and stop.

Note: It uses a stack, which implements Last In First Out (LIFO).
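The same stack-based procedure can be sketched in Python as follows; this is an illustrative iterative version, again assuming an adjacency-list dictionary (not part of the original slides).

```python
def dfs(graph, start, goal):
    """Iterative Depth-First Search using an explicit stack (LIFO).
    Returns a path from start to goal, or None if no path exists."""
    stack = [[start]]                    # stack of paths
    visited = set()
    while stack:
        path = stack.pop()               # pop the most recently pushed path
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push children in reverse so the left-most child is expanded first
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append(path + [child])
    return None
```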


Uninformed Search Techniques...
Depth First Search(DFS)

A. Solve the problem to find U from A below:

Fig. 3 DFS Solutions


Uninformed Search Techniques...
Depth First Search(DFS)


B. Solve the problem to find G from S below:

Fig. 4 DFS Solutions


Uninformed Search Techniques...
Depth First Search(DFS)


✓ Solution to the problem of finding G from S above:

Fig. 5 DFS Solutions


Uninformed Search Techniques...

Depth-Limited Search
¬ The Breadth-First Search (BFS) algorithm suffers from high
(space) computational complexity,
¬ whereas Depth-First Search (DFS) can run off down a very deep
branch of the data structure and never return.

✓ Since both of the above algorithms have these drawbacks,
Depth-Limited Search can be a solution.

✍ It performs depth-first search only down to a specified depth limit.

✍ There can be a problem if the goal state is not included within
the limit.
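A short recursive sketch of depth-limited search (hypothetical names; it assumes a tree-shaped search space with no cycles):

```python
def depth_limited_search(graph, node, goal, limit):
    """Recursive DFS that does not expand nodes below the given depth limit.
    Returns a path to the goal, or None if it is not found within the limit."""
    if node == goal:
        return [node]
    if limit == 0:                       # cutoff: do not expand any deeper
        return None
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None
```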
Uninformed Search Techniques...

Depth-Limited Search

Fig. 6 Depth Limit Search


Uninformed Search Techniques...

Iterative Deepening Depth-First Search


- The iterative deepening-DFS algorithm is a combination of
DFS and BFS algorithms.
- This finds out the best depth limit and does it by gradually
increasing the limit until a goal node is found.
- This algorithm performs depth-first search up to a certain
“depth limit”, and it keeps increasing the depth limit after
each iteration until the goal node is found.
- This Iterative Deepening DFS algorithm combines the benefits
of Breadth-First Search and Depth-First Search: fast
searching and memory efficiency respectively.
- This algorithm is a useful uninformed search when the search space
is large and the depth of the goal node is unknown.
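Iterative deepening simply wraps a depth-limited search in a loop that grows the limit; here is a sketch reusing the depth_limited_search function from the previous sketch (the max_depth cap is an assumption for illustration).

```python
def iterative_deepening_dfs(graph, start, goal, max_depth=50):
    """Repeated depth-limited search with a gradually increasing depth limit."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result                # goal found at the current depth limit
    return None
```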
Uninformed Search Techniques...
Iterative Deepening Depth-First Search
Uninformed Search Techniques...

Uniform Cost Search

- This search is used for traversing a weighted graph or tree.
- This algorithm comes into play when a different cost is
available for each edge.
- The primary goal of uniform-cost search is to find a path
to the goal node with the lowest cumulative cost.
- Uniform-cost search expands nodes according to their path
costs from the root.
- It can be used to solve any graph/tree problem demanding an
optimal-cost solution.
- A uniform-cost search algorithm is implemented with a priority
queue, which gives the highest priority to the lowest cumulative cost.
- Uniform cost search is equivalent to the BFS algorithm if the path
cost of all edges is the same.
Uninformed Search Techniques...

Uniform Cost Search Algorithm:

step#1: Insert the root node into the priority queue.

step#2: Remove the element with the highest priority (lowest cumulative cost).

✓ If the removed node is the destination, print the total cost
and stop searching.
✓ Else if the node is already in the visited list, skip it.
✓ Else enqueue all children of the current node into the
priority queue, with their cumulative cost from the root
as priority, and add the current node to the visited list.
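These steps map naturally onto a priority queue keyed by cumulative path cost. Below is a minimal sketch in Python using heapq; the weighted adjacency-list format is an assumption for illustration.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Uniform-cost search on a weighted graph given as
    {node: [(child, edge_cost), ...]}. Returns (total_cost, path) or None."""
    frontier = [(0, start, [start])]     # priority queue of (cost, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # lowest cumulative cost first
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for child, edge_cost in graph.get(node, []):
            if child not in visited:
                heapq.heappush(frontier, (cost + edge_cost, child, path + [child]))
    return None
```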
Uninformed Search Techniques...
Uniform Cost Search Problem:
Let's denote visited nodes by VN and priority queue nodes by PN.

¬ Step#1
✓ VN : [ _ ]
✓ PN : [ _ ]

Fig. 7 Search Graph Problem


Uninformed Search Techniques...
Uniform Cost Search Problem:

¬ Step#2
✓ VN : [ Source ]
✓ PN : [ A, B, C ]
¬ Step#3
✓ VN : [ Source, A ]
✓ PN : [ B, G, C ]
Uninformed Search Techniques...
Uniform Cost Search Problem:

¬ Step#4
✓ VN : [ Source, A, B ]
✓ PN : [ G, C, F ]
¬ Step#5
✓ VN : [ Source, A, B, G ]
✓ PN : [ C, F ]
Uninformed Search Techniques...
Uniform Cost Search Problem:
¬ Step#6
✓ VN : [ Source, A, B, G, C ]
✓ PN : [ E, F ]
¬ Step#7
✓ VN : [ Source, A, B, G, C, E ]
✓ PN : [ Dest, F ]
Uninformed Search Techniques...
Uniform Cost Search Problem:
¬ Step#8
✓ VN : [ Source, A, B, G, C, E, F ]
✓ PN : [ Dest ]
¬ Step#9
✓ VN : [ Source, A, B, G, C, E, F, Dest ]
✓ PN : [ _ ], Cost: 8
Uninformed Search Techniques...

Uniform Cost Search – Problem


Uninformed Search Techniques...

Uniform Cost Search – Solution


Uninformed Search Techniques...
Uniform Cost Search – Problem
Uninformed Search Techniques...
Uniform Cost Search – Solution
Uninformed and Informed Searching Methods

Performance comparison of uninformed search algorithms:

Breadth First Search:    Time O(b^(d+1)), Space O(b^(d+1)), Optimal: Yes, Complete: Yes
Depth First Search:      Time O(b^d),     Space O(b·d),     Optimal: No,  Complete: No
Depth Limited Search:    Time O(b^l),     Space O(b·l),     Optimal: No,  Complete: No
Iterative Deepening DFS: Time O(b^d),     Space O(b·d),     Optimal: Yes, Complete: Yes
Uniform Cost Search:     Time O(b^(c/e)), Space O(b^(c/e)), Optimal: Yes, Complete: Yes

¬ Note: b is the branching factor, d is the depth level, l is the depth limit,
c is the cost of the optimal path, and e is a small positive number.
Informed Search Techniques

- An informed search algorithm uses additional knowledge
(apart from the problem definition): how far we are from the goal,
what the path cost is, how the goal node can be reached, etc.
- This knowledge helps searching agents explore less of the search
space and find the goal node more efficiently.
- Therefore, informed search is more useful and efficient for a
large search space.
- Informed search algorithms use heuristic techniques, so informed
search is also called Heuristic search.
- Heuristic Techniques: a set of rules which serve to increase
the probability of solving a problem.
Informed Search Techniques

Hill Climbing Search Algorithm


- Hill climbing is a local search algorithm which
continuously moves in the direction of increasing
elevation/value to find the peak of the mountain (local
maximum) or the best solution (global maximum) to the problem.
- The search terminates when it reaches a peak value
where no neighbor has a higher value (greedy approach:
selecting the best option available at the moment).
- Hill climbing algorithm is a technique which is used for
optimizing the mathematical problems (optimization).
- One of the widely discussed examples of Hill climbing
algorithm is Traveling-Salesman Problem in which we need to
minimize the distance traveled by the salesman.
Informed Search Techniques

Hill Climbing Search Algorithm...


- Because the search only looks at its good immediate
neighbor states and not beyond them (greedy approach), it is
also called greedy local search.

- Nodes in the hill climbing search algorithm have two components:
state and objective function value.

- Hill Climbing is mostly used when a good heuristic is available.

- In this algorithm, we don't need to maintain and handle a
search tree or graph, as it only keeps a single current state.

¬ What is gradient descent, and what are its similarities with hill climbing?


Informed Search Techniques

State-Space-Diagram for Hill Climbing Search Algorithm


- The state-space landscape is a graphical representation of the
hill-climbing algorithm.
- Fig. 8 shows a state-space diagram plotting the objective
function against the various states.
- If the Y-axis is a cost function, then the goal of the search is to
find the global minimum.
- If the Y-axis is an objective function, then the goal of the search
is to find the global maximum.
Informed Search Techniques

Hill Climbing Search Algorithm...

Fig. 8 Different regions in the state space landscape


Informed Search Techniques

Stochastic Gradient Descent (Minimization of error function)

Fig. 9 Different regions in the Stochastic Gradient Descent


Informed Search Techniques

State Space Diagram for Hill Climbing Search Algorithm


✓ Local Maximum: It is a state which is better than its neighbor
states, but there could be another higher value state than it.
✓ Global Maximum: It is the best possible state of state space
landscape. It has the highest value of objective function.
✓ Current state: It is a state in a landscape diagram where an
agent is currently present.
✓ Flat local maximum: It is a flat space in the landscape where
all the neighbor states of current states have the same value.
✓ Shoulder: It is a plateau region which has an uphill edge.
Informed Search Techniques

Algorithm for Hill Climbing Search


Step#1: Evaluate the initial state; if it is the goal state, then
return success and stop.
Step#2: Loop until a solution (global maximum) is found or
there is no new operator left to apply.
Step#3: Select and apply an operator to the current state.
Step#4: Check the new state:
- If it is the goal state, then return success and quit.
- Else if it is better than the current state, then assign the new state
as the current state.
- Else if it is not better than the current state, then return to Step#2.
Step#5: Exit.
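A minimal sketch of simple hill climbing in Python. The callables neighbours(state) and value(state) are assumed to be supplied by the problem; they are illustrative names, not from the slides.

```python
def hill_climbing(start, neighbours, value):
    """Simple hill climbing: keep moving to the best neighbour until
    no neighbour has a higher objective value (a possible local maximum)."""
    current = start
    while True:
        best_neighbour, best_value = None, value(current)
        for candidate in neighbours(current):
            if value(candidate) > best_value:        # strictly better neighbour
                best_neighbour, best_value = candidate, value(candidate)
        if best_neighbour is None:                   # no uphill move left: stop
            return current
        current = best_neighbour
```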
Problems in Hill Climbing Algorithm
Local Maximum:
- A local maximum is a peak state in the landscape which is
better than each of its neighboring states.
- However, there could exist another state which is higher
than the local maximum.
Solution:
- The backtracking technique can be a solution to the local
maximum problem in the state space landscape.
- Create a list of promising paths so that the algorithm can
backtrack through the search space and explore other paths as well.
Problems in Hill Climbing Algorithm
Plateau:
- A plateau is a flat area of the search space in which all the
neighbor states of the current state contain the same value.
- So the algorithm cannot find the best direction in which to move.
- A hill-climbing search might get lost in the plateau area.
Solution:
- The solution for the plateau is to take either big steps or very small
steps while searching.
- Randomly select a state which is far away from the current
state, so that it is possible for the algorithm to reach a
non-plateau region.
Problems in Hill Climbing Algorithm
Ridge:
- A ridge is a special form of the local maximum.
- It has an area which is higher than its surrounding areas, but
itself has a slope, and cannot be reached in a single move.

Solution:
- With the use of bidirectional search, or by moving in
different directions, we can overcome this problem.
Informed Search Techniques

Simulated Annealing
- A hill-climbing algorithm which never makes a move
towards a lower value is guaranteed to be incomplete, because it
can get stuck on a local maximum.
- If the algorithm applies a random walk by moving to a random
successor, then it may be complete but not efficient.
- Simulated Annealing is an algorithm which yields both
efficiency and completeness.
Informed Search Techniques

Simulated Annealing...
- In metallurgy, annealing is the process of hardening a metal or
glass by heating it to a high temperature and then cooling it gradually.
- This allows the material to reach a low-energy crystalline state.
- A similar method is applied in simulated annealing, where the
search algorithm picks a random move instead of picking
the best move.
- If the random move improves the state (gives a higher value), then
the algorithm follows that path.
- Otherwise, the algorithm accepts the move only with a
probability less than 1, i.e. it sometimes moves downhill, and
otherwise chooses another path.
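A compact sketch of simulated annealing under the same assumptions as the hill-climbing sketch; the temperature schedule (temp, cooling, min_temp) is illustrative only.

```python
import math
import random

def simulated_annealing(start, neighbours, value,
                        temp=1.0, cooling=0.95, min_temp=1e-3):
    """Pick a random successor; always accept improvements, and accept worse
    moves with probability exp(delta / temperature), which shrinks as we cool."""
    current = start
    while temp > min_temp:
        candidates = list(neighbours(current))
        if not candidates:
            break
        nxt = random.choice(candidates)              # random move, not the best move
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = nxt                            # downhill moves allowed early on
        temp *= cooling                              # gradual "cooling"
    return current
```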
Informed Search Techniques

Best-first Search Algorithm


Before diving into Best-First Search, let's talk about the heuristic function:
- Heuristic Function: a function used in Informed Search
to find the most promising path.
- The heuristic function estimates how close a state is to the goal.
- It is represented by h(n), and it estimates the cost of an
optimal path between a pair of states.
- If h*(n) is the true optimal cost to the goal, then the heuristic cost
should be less than or equal to it [h(n) <= h*(n)] for an
optimal solution (admissibility).
Informed Search Techniques

Best-first Search Algorithm(Greedy Search)


- Best-First Search algorithm always selects the path which
appears best at that moment.
- It is the combination of depth-first search and breadth-first
search algorithms.
- It uses a heuristic function to guide the search; best-first search
allows us to take advantage of both algorithms.
- With the help of best-first search, at each step, we can choose
the most promising node.
- In the best-first search algorithm, we expand the node which
is closest to the goal node, where closeness is estimated using the
heuristic function.
Best-first Search Algorithm (Greedy Search)
Best-first Search Algorithm (Greedy Search) Steps:
Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, stop and return failure.
Step 3: Remove the node n from the OPEN list which has the
lowest value of h(n), and place it in the CLOSED list.
Step 4: Expand node n and generate the successors of node n.
Step 5: Check each successor of node n to find whether any
of them is a goal node. If any successor is a goal node, then
return success and terminate the search; else proceed to the next step.
Step 6: For each successor node, the algorithm evaluates the
function f(n) = h(n), where h(n) is the heuristic value, and then checks
whether the node is already in either the OPEN or CLOSED list. If the
node is in neither list, then add it to the OPEN list.
Step 7: Return to Step 2.
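The steps above amount to a priority queue ordered by h(n) alone. A minimal sketch follows, assuming the graph is an adjacency-list dictionary and h is a dictionary of heuristic values (hypothetical inputs).

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Greedy best-first search: always expand the OPEN node with the
    smallest heuristic value h(n). Returns a path or None."""
    open_list = [(h[start], start, [start])]         # priority queue keyed by h(n)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)                             # move the node to CLOSED
        for child in graph.get(node, []):
            if child not in closed:
                heapq.heappush(open_list, (h[child], child, path + [child]))
    return None
```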
Informed Search Techniques
Best-First Search Algorithm
Informed Search Techniques

Best-First Search Algorithm:


✓ Initialization:
Open [A, B], Closed [S]
✓ Iteration 1:
Open [A], Closed [S, B]
✓ Iteration 2:
Open [E, F, A], Closed [S, B]
Open [E, A], Closed [S, B, F]
✓ Iteration 3:
Open [I, G, E, A], Closed [S, B, F]
Open [I, E, A], Closed [S, B, F, G]
¬ Hence the final solution path will be: S → B → F → G
Informed Search Techniques
Best-First Search Algorithm: Problem
¬ Solve the problem with start node N@C and goal node N@M
Informed Search Techniques

Best-First Search Algorithm: Solution


✓ Initialization:
- OPEN [N@J, N@D, N@B];
- Closed [ N@C ]
Informed Search Techniques
Best-First Search Algorithm: Solution
✓ Iteration 1:
- OPEN [N@C, N@K, N@A, N@I, N@D, N@B];
- Closed [ N@C, N@J ]
Informed Search Techniques
Best-First Search Algorithm: Solution
✓ Iteration 2:
- OPEN [N@J, N@M, N@C, N@A, N@I, N@D, N@B];
- Closed [ N@C, N@J, N@K ]
Informed Search Techniques
Best-First Search Algorithm: Solution
✓ Iteration 3:
- OPEN [N@J, N@C, N@A, N@I, N@D, N@B];
- Closed [ N@C, N@J, N@K, N@M ]

¬ Final solution path: N@C → N@J → N@K → N@M
Informed Search Techniques

A* Search/Algorithm
- It is the most commonly known form of best-first search.
- It uses the heuristic function h(n) and the cost to reach node n
from the start state, g(n).
- It combines features of UCS and greedy best-first search,
by which it solves the problem efficiently.
- The A* search algorithm finds the shortest path through the
search space using the heuristic function.
- This search algorithm expands a smaller search tree and provides
an optimal result in less time.
- The A* algorithm is similar to UCS except that it uses g(n) + h(n)
instead of g(n).
Informed Search Techniques
A* Search/Algorithm...
- In the A* search algorithm, we use the search heuristic as well as the
cost to reach the node.
- Hence we can combine both costs as follows, and this sum
is called the fitness number.
- It defines the evaluation function f(n) = g(n) + h(n), where
h(n) is the heuristic function and g(n) is the cost of the path from
the start node to n (the knowledge acquired so far while searching).
Informed Search Techniques
Steps in A* Search/Algorithm...
Step#1: Place the starting node in the OPEN list.
Step#2: Check whether the OPEN list is empty or not; if the list is
empty, then return failure and stop.
Step#3: Select the node from the OPEN list which has the
smallest value of the evaluation function (g + h); if node n
is the goal node, then return success and stop; otherwise continue.
Step#4: Expand node n and generate all of its successors, and
put n into the CLOSED list. For each successor n', check
whether n' is already in the OPEN or CLOSED list; if
not, then compute the evaluation function for n' and place
it into the OPEN list.
Step#5: Else if node n' is already in OPEN or CLOSED, then
attach it to the back pointer which reflects
the lowest g(n') value.
Step#6: Return to Step 2.
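A sketch of these steps in Python, ordering the OPEN list by f(n) = g(n) + h(n); the weighted adjacency list and heuristic dictionary are assumed inputs for illustration.

```python
import heapq

def a_star_search(graph, h, start, goal):
    """A* search on a weighted graph {node: [(child, edge_cost), ...]}
    with heuristic dictionary h. Returns (cost, path) or None."""
    open_list = [(h[start], 0, start, [start])]      # (f, g, node, path)
    best_g = {start: 0}                              # cheapest g(n) found so far
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # smallest f = g + h first
        if node == goal:
            return g, path
        for child, edge_cost in graph.get(node, []):
            new_g = g + edge_cost
            if new_g < best_g.get(child, float('inf')):   # keep the lowest g(n')
                best_g[child] = new_g
                heapq.heappush(open_list,
                               (new_g + h[child], new_g, child, path + [child]))
    return None
```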
Informed Search Techniques
A* Search Algorithm: Problem
¬ Solve the problem with start node N@C and goal node N@M
Informed Search Techniques
A* Algorithm: Solution

¬ Final solution path: N@C → N@J → N@I → N@L → N@M


Informed Search Techniques
Best-First and A*-Search Algorithm: Problem
¬ Solve the problem; start node: S and goal node: E
Informed Search Techniques
Best-First and A*-Search Algorithm: Solution
¬ Solve the problem; start node: S and goal node: E

¬ Hence the final solution path: S → B → H → G → E


Informed Search Techniques
Best-First and A*-Search Algorithm: Problem
¬ Solve the problem; start node: a and goal node: z
Informed Search Techniques

Best-First and A*-Search Algorithm: Solution


¬ Solve the problem; start node: a and goal node: z

¬ Hence the final solution path: a → c → d → e → z


Informed Search Techniques
Best-First and A*-Search Algorithm: Problem
¬ Solve the problem; start node: A and goal node: J
Informed Search Techniques
Best-First and A*-Search Algorithm: Solution
¬ Solve the problem; start node: A and goal node: J

¬ Hence the final solution path: A → F → G → I → J


Informed Search Techniques...

Difference between Best-First Search and A* Search

Evaluation Function: Best-First Search uses f(n) = h(n); A* Search uses f(n) = g(n) + h(n).

Past Knowledge: Best-First Search does not involve past knowledge; A* Search involves past knowledge.

Completeness: Best-First Search is incomplete; A* Search is complete.

Optimality: Best-First Search is not optimal, as the path found may not be optimal; A* Search is optimal, as the path found is always optimal.

Time and Space Complexity: Best-First Search has time complexity O(b^m) and its space complexity can be polynomial; A* Search has time complexity O(b^m) and space complexity O(b^m), where b is the branching factor and m is the maximum depth of the search tree.

Memory: Best-First Search requires less memory; A* Search requires more memory.

Type of nodes kept: Best-First Search keeps only the fringe (border) nodes in memory while searching; A* Search keeps all the nodes in memory while searching.
Uninformed and Informed Searching Methods

Uninformed Searching Vs Informed Searching:

Alternatively called: Uninformed: Brute Force/Blind Search; Informed: Heuristic Search.

Knowledge: Uninformed search doesn't use knowledge for the searching process; informed search uses domain knowledge for the searching process.

Performance: Uninformed search finds a solution slowly compared to informed search; informed search finds a solution more quickly.

Completion: Uninformed search is always complete; informed search may or may not be complete.

Cost Factor: Uninformed search has a high cost; informed search has a low cost.

Time: Uninformed search consumes moderate time because of slow searching; informed search consumes less time because of quick searching.

Direction: Uninformed search is given no suggestion regarding the solution; informed search is given a direction about the solution.

Computational Requirement: Uninformed search has comparatively higher computational requirements; informed search has lessened computational requirements.

Examples of Algorithms: Uninformed: Depth First Search (DFS), Breadth First Search (BFS), Branch and Bound; Informed: Greedy Search, A* Search, AO* Search, Hill Climbing Algorithm.
Game Playing

Adversarial Search
- The search strategies studied previously are associated with only
a single agent that finds the solution, which is often expressed
in the form of a sequence of actions.
- However, there may be situations where more than one
agent is searching for a solution in the same search space; this
situation usually occurs in game playing.
- An environment with more than one agent is termed a
multi-agent environment, in which each agent is an opponent
of the other agents and plays against them.
Game Playing

Adversarial Search
- Each agent needs to consider the actions of the other agents and
the effect of those actions on its own performance.
- So, searches in which two or more players with conflicting
goals are trying to explore the same search space for a solution
are called adversarial searches, often known as Games.
- Games are modeled using a search problem and a heuristic
evaluation function;
- these are the two main factors which help to model and solve
games in AI.
Game Playing

Mini-Max Algorithm
- Mini-Max algorithm is a recursive or backtracking algorithm
used in decision-making and game theory.
- It provides an optimal move for the player assuming that
opponent is also playing optimally.
- The algorithm uses recursion to search through the game-tree.
- It is mostly used for game playing in AI, e.g. Chess, Checkers,
Tic-Tac-Toe, Go, and various other two-player games.
- The algorithm computes the minimax decision for the current state.
- In a two-player game, one player is called MAX (your move) and the
other is called MIN (the opponent's move).
Game Playing

Min-Max Algorithm
- Max nodes are drawn as □ or △, and Min nodes as ⃝ or ▽.
- The two players fight so that the opponent gets the minimum
benefit while they themselves get the maximum benefit.
- Both are opponents of each other; MAX will select the
maximized value, and MIN will select the minimized value.
- The Mini-Max algorithm performs a depth-first search (DFS) to
explore the complete game tree.
- The Mini-Max algorithm proceeds all the way down to the terminal
nodes of the tree, then backtracks up the tree via recursion.
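A minimal recursive sketch of the Mini-Max algorithm over an explicit game tree. The nested-list tree representation (leaves are utilities, internal nodes are lists of children) is an assumption made here for illustration; the terminal values are those of the example worked through on the following slides.

```python
def minimax(node, maximizing):
    """Minimax over a game tree given as nested lists, where a leaf is a
    number (utility) and an internal node is a list of child nodes."""
    if isinstance(node, (int, float)):               # terminal node: return utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Game tree with terminal values under nodes D, E, F, G (see the example below)
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, True))                           # optimal value for MAX: 4
```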
Game Playing

Min-Max Algorithm: Tree-Diagram


Game Playing

Min-Max Algorithm: Example


- Let there be two players: one is called the Maximizer and the other is
called the Minimizer.
- The Maximizer will try to get the maximum possible score, and the
Minimizer will try to get the minimum possible score.

Step#1
- First, the algorithm generates the entire game tree and applies the
utility function to get utility values for the terminal states.
- Let A be the initial state of the tree diagram.
- Suppose the Maximizer takes the first turn, with worst-case initial
value = -∞, and the Minimizer takes the next turn, with
worst-case initial value = +∞.
Game Playing
Min-Max Algorithm: Example
Game Playing

Min-Max Algorithm: Example


Step#2
- Now, first we find the utility values for the Maximizer; its initial
value is -∞,
- so we compare each terminal-state value with the initial value
of the Maximizer and determine the higher node values.
- It will find the maximum among them all.
✓ Node D → max(-1, -∞) ⇒ max(-1, 4) = 4
✓ Node E → max(2, -∞) ⇒ max(2, 6) = 6
✓ Node F → max(-3, -∞) ⇒ max(-3, -5) = -3
✓ Node G → max(0, -∞) ⇒ max(0, 7) = 7
Game Playing
Min-Max Algorithm: Example
Game Playing

Min-Max Algorithm: Example


Step#3
- In the next step, it is the Minimizer's turn,
- so it will compare all node values with +∞ and find the
3rd-layer node values.
✓ Node B = min(4, 6) = 4
✓ Node C = min(-3, 7) = -3
Game Playing
Min-Max Algorithm: Example
Game Playing

Min-Max Algorithm: Example


Step#4
- Now it is the Maximizer's turn;
- it will again choose the maximum of all node values and find
the maximum value for the root node.
- In this example there are only 4 layers, hence we immediately reach
the root node,
- but in real games, there will be more than 4 layers.
✓ Node A → max(4, -3) = 4
Game Playing
Min-Max Algorithm: Example
Game Playing

Alpha-Beta Pruning
- It is a modified version of the mini-max algorithm.
- It is an optimization technique for the mini-max algorithm.
- As seen in the mini-max search algorithm, the number of
game states it has to explore is exponential in the depth of the
tree; this is the limitation of the Mini-max algorithm.
- The exponent cannot be eliminated, but it can effectively be halved.
- Hence there is a technique by which we can compute the correct
mini-max decision without checking every node of the game tree,
and this technique is called Alpha-Beta Pruning.
Game Playing

Alpha-Beta Pruning...
- It involves two threshold parameters, Alpha and Beta, for
future expansion, so it is called alpha-beta pruning.
- It is also called the Alpha-Beta Algorithm.
- Alpha-Beta pruning can be applied at any depth of a tree, and
often it prunes not only the tree leaves but entire sub-trees.
- The two parameters can be defined as:
✓ Alpha: The best (highest-value) choice we have found so far at
any point along the path of the Maximizer.
The initial value of alpha is −∞.
✓ Beta: The best (lowest-value) choice we have found so far at
any point along the path of the Minimizer.
The initial value of beta is +∞.
Game Playing

Alpha-Beta Pruning...
- Alpha-beta pruning applied to a standard mini-max algorithm
- returns the same move as the standard algorithm does,
- but it removes all the nodes which do not really affect the
final decision and only make the algorithm slow.
- Hence by pruning these nodes, it makes the algorithm fast.

Condition for Alpha-beta pruning

¬ The main condition that triggers alpha-beta pruning is: α ≥ β
Game Playing

Alpha-Beta Pruning...

Key points about Alpha-Beta Pruning:


✓ The Max player will only update the value of alpha.
✓ The Min player will only update the value of beta.
✓ While backtracking the tree, node values will be passed to
upper nodes instead of values of alpha and beta.
✓ We will only pass the alpha, beta values to the child nodes
while moving downward in the tree diagram.
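These key points can be sketched in code on the same nested-list game-tree representation assumed for the minimax sketch above (illustrative only, not from the slides).

```python
def alpha_beta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning; stops exploring a node's remaining
    children as soon as alpha >= beta."""
    if isinstance(node, (int, float)):               # terminal node: return utility
        return node
    if maximizing:
        best = float('-inf')
        for child in node:
            best = max(best, alpha_beta(child, False, alpha, beta))
            alpha = max(alpha, best)                 # only Max updates alpha
            if alpha >= beta:                        # pruning condition
                break
        return best
    else:
        best = float('inf')
        for child in node:
            best = min(best, alpha_beta(child, True, alpha, beta))
            beta = min(beta, best)                   # only Min updates beta
            if alpha >= beta:
                break
        return best
```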
Game Playing

Alpha-Beta Pruning: two-player search tree


Game Playing

Alpha-Beta Pruning: two-player search tree


step#1:
- First, the Max player will start the first move from node A, where α
and β are assigned the values α = −∞ and β = +∞.
- These values of alpha and beta are passed down to Node B, where
again α = −∞ and β = +∞, and Node B passes the same
values to its child D.

step#2:
- At Node D, the value of α will be calculated (it is Max's turn).
- The value of α is compared first with 2 and then with 3, and
max(2, 3) = 3 will be the value of α at node D; the node
value will also be 3.
Game Playing

Alpha-Beta Pruning|step#1–2: two-player search tree


Game Playing

Alpha-Beta Pruning: two-player search tree


step#3:
- Now the algorithm backtracks to Node B, where the value of β
will change, as this is Min's turn;
- Here β = +∞ is compared with the available successor
node value, i.e. min(∞, 3) = 3, hence at node B now
α = −∞ and β = 3.
- In the next step, we traverse the next successor (child) of Node B,
which is Node E (the alpha-beta pruning condition is checked), and the
values α = −∞ and β = 3 will also be passed down.
Game Playing

Alpha-Beta Pruning|step#3: two-player search tree


Game Playing

Alpha-Beta Pruning: two-player search tree


step#4:
- At Node E, it is Max's turn, so the α value will change.
- The current value of α will be compared with 5,
- so max(−∞, 5) = 5; hence at Node E, α = 5 and β = 3.
- Here α ≥ β, so the right successor of E will be pruned, and the
algorithm will not traverse it;
- the value at Node E will be 5.
Game Playing

Alpha-Beta Pruning|step#4: two-player search tree


Game Playing

Alpha-Beta Pruning: two-player search tree


step#5:
- The algorithm again backtracks the tree, from Node B to Node A.
- At Node A, the value of α will be changed to the maximum
available value, which is 3, as max(−∞, 3) = 3, and β = +∞;
- these two values are now passed to the right successor of A, which is
Node C (α = 3 and β = +∞).
- Again from Node C, the same values will be passed on to
Node F (α = 3 and β = +∞).
Game Playing

Alpha-Beta Pruning: two-player search tree


step#6:
- At Node F, the value of α will be compared with the left child,
which is 0, so α = max(3, 0) = 3.
- Now the value α = 3 will be compared with the right child, which
is 1, so α = max(3, 1) = 3.
- α still remains 3, but the node value of F will become 1.
Game Playing

Alpha-Beta Pruning|step#5–6: two-player search tree


Game Playing

Alpha-Beta Pruning: two-player search tree


step#7:
- Node F returns the node value 1 to Node C;
- so at Node C, α = 3 and β = +∞.
- Now the value of beta will be changed: β = min(∞, 1) = 1.
- Here at Node C, α = 3 and β = 1, and again the
alpha-beta condition α ≥ β is satisfied;
- therefore, the next child of C, which is Node G, will be pruned,
and the algorithm will not compute the entire sub-tree G.
Game Playing

Alpha-Beta Pruning|step#7: two-player search tree


Game Playing

Alpha-Beta Pruning: two-player search tree


step#8:
- Now Node C returns the value 1 to Node A; here the best
value for Node A is max(3, 1) = 3.
- The final game tree is shown in the next slide;
- the tree diagram also shows the nodes which were computed
and the nodes which were never computed or traversed.
- Hence, the optimal value for the maximizer is 3 in this
example.
Game Playing

Alpha-Beta Pruning|step#8: two-player search tree


Module Assignment – As You Go

Module#3 Assignment is available at MS-Team.

Submission Deadline: 4th February 2023 (Before 3:00 PM)
