Lecture 4

The document discusses various search algorithms used in artificial intelligence, including uninformed searches like Breadth-First Search, Depth-First Search, and Uniform-Cost Search, as well as informed searches like Greedy Best-First Search and A* Search. It introduces adversarial search in competitive environments, detailing the Minimax algorithm and its efficiency, along with Alpha-Beta pruning as an optimization technique. The document outlines the properties and challenges of these algorithms in the context of games and strategic decision-making.

Artificial Intelligence and Expert Systems

Lecture Four: Adversarial Search

November 5, 2024
Breadth-First Search (BFS)

• Type: Uninformed
• Strategy: Explores all nodes level by level (see the sketch after this slide).
• Advantages:
• Complete and will always find a solution if one exists.
• Guarantees the shortest path if all actions have the same cost.
• Disadvantages:
• Explores all possibilities, which can be inefficient in large spaces.
• High memory consumption.

2/30
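
To make the level-by-level strategy above concrete, here is a minimal sketch (not from the lecture); the adjacency-dictionary graph and the start/goal names are illustrative assumptions.

from collections import deque

def bfs(graph, start, goal):
    # Explore nodes level by level using a FIFO queue of paths.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()          # shallowest unexplored path first
        node = path[-1]
        if node == goal:
            return path                    # first goal reached is the shallowest one
        for neighbor in graph.get(node, []):
            if neighbor not in visited:    # avoid re-enqueueing explored nodes
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                            # no solution exists

Example call on a hypothetical graph: bfs({"A": ["B", "C"], "B": ["D"], "C": []}, "A", "D") returns ["A", "B", "D"].
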
Depth-First Search (DFS)

• Type: Uninformed
• Strategy: Explores as far down one branch as possible before backtracking (see the sketch after this slide).
• Advantages:
• Low memory consumption.
• Can be useful for problems where solutions are deep in the search space.
• Disadvantages:
• Not guaranteed to find the shortest path.
• Can get stuck in deep, irrelevant paths.

3/30
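
A minimal sketch of the strategy above (not from the lecture); the graph representation and names are illustrative assumptions.

def dfs(graph, start, goal, visited=None):
    # Follow one branch as deep as possible, then backtrack at dead ends.
    if visited is None:
        visited = set()
    if start == goal:
        return [start]
    visited.add(start)
    for neighbor in graph.get(start, []):
        if neighbor not in visited:
            sub_path = dfs(graph, neighbor, goal, visited)
            if sub_path is not None:
                return [start] + sub_path  # first path found, not necessarily the shortest
    return None                            # dead end: backtrack to the caller

The visited set keeps this version out of cycles; memory grows only with the depth of the current branch, which is where DFS's low memory consumption comes from.
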
Uniform-Cost Search (UCS)

• Type: Uninformed
• Strategy: Expands the node with the lowest path cost, ensuring the cheapest path to the goal (see the sketch after this slide).
• Advantages:
• Guaranteed to find the optimal solution.
• Complete.
• Disadvantages:
• Can be slow if all actions have similar costs.
• Can explore in irrelevant directions.

4/30
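
A minimal sketch of the lowest-cost-first strategy above (not from the lecture); the graph is assumed to map each node to a list of (neighbor, step cost) pairs.

import heapq

def uniform_cost_search(graph, start, goal):
    # Always expand the frontier node with the lowest path cost g(n).
    frontier = [(0, start, [start])]                 # (path cost, node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path                        # cheapest path, since costs only grow
        for neighbor, step_cost in graph.get(node, []):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost       # found a cheaper way to this neighbor
                heapq.heappush(frontier, (new_cost, neighbor, path + [neighbor]))
    return None
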
Informed Search Overview

• Type: Heuristic-based search algorithms.


• Strategy: Uses additional information (heuristics) to estimate the cost from a
node to the goal.
• Examples: Greedy Best-First Search, A* Search.
• Advantages:
• Can significantly reduce the search space.
• More efficient than uninformed search in many cases.
• Disadvantages:
• Performance depends heavily on the quality of the heuristic.
• May not guarantee the optimal solution (depends on the specific algorithm).

5/30
Greedy Best-First Search

• Type: Informed.
• Strategy: Expands the node that appears to be closest to the goal according to the heuristic (see the sketch after this slide).
• Heuristic: Uses only h(n) (estimate of cost to reach the goal).
• Advantages:
• Often faster than uninformed search methods.
• Useful when a good heuristic is available.
• Disadvantages:
• Not complete; may get stuck in loops or local minima.
• Not guaranteed to find the optimal solution.

6/30
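
A minimal sketch of greedy best-first search (not from the lecture); h is an assumed heuristic callable and the graph maps nodes to neighbor lists. A visited set is included to guard against the looping the slide warns about.

import heapq

def greedy_best_first(graph, start, goal, h):
    # Expand the node that looks closest to the goal: ordered by h(n) only.
    frontier = [(h(start), start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path                        # found quickly, but not necessarily the cheapest path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (h(neighbor), neighbor, path + [neighbor]))
    return None
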
A* Search

• Type: Informed.
• Strategy: Expands the node with the lowest combined cost f(n) = g(n) + h(n) (see the sketch after this slide), where:
• g(n): Cost to reach node n from the start.
• h(n): Estimated cost from node n to the goal.
• Advantages:
• Complete and optimal if h(n) is admissible (never overestimates).
• Balances path cost and estimated cost to the goal.
• Disadvantages:
• Memory-intensive due to the need to store all explored nodes.
• Can be slow in large search spaces if h(n) is not efficient.

7/30
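
A minimal sketch of A* (not from the lecture), reusing the weighted-graph convention from the UCS sketch; h is an assumed admissible heuristic callable.

import heapq

def a_star(graph, start, goal, h):
    # Expand the node with the lowest f(n) = g(n) + h(n).
    frontier = [(h(start), 0, start, [start])]       # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                           # optimal when h never overestimates
        for neighbor, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                new_f = new_g + h(neighbor)          # path cost so far plus estimate to the goal
                heapq.heappush(frontier, (new_f, new_g, neighbor, path + [neighbor]))
    return None
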
Introduction to Adversarial Search

• Adversarial search deals with competitive environments involving multiple agents.
• Examples: Chess, checkers, tic-tac-toe, Go, Pacman.
• Core challenge: Strategizing in zero-sum games where one player’s gain is another’s loss.

8/30
Search Algorithms

Search algorithms solve diverse problems effectively. We categorize them as follows:


• Shortest-path Search
• Goal: Find the least-cost path.
• Example: Puzzle-solving, pathfinding.
• Constraint Satisfaction
• Goal: Meet fixed constraints.
• Example: Scheduling, n-queens.
• Adversarial Search
• Goal: Best outcome against adversary.
• Example: Chess, strategic games.
Comparison Overview:
• Shortest-path vs. Optimization: Depth difference.
• Constraint Satisfaction: Focus on constraints.
• Adversarial: Dynamic, opponent-driven.
9/30
Types of Games

• Deterministic vs. Stochastic


• One, two, or more players
• Zero-sum games
• Perfect vs. Imperfect Information

10/30
Formalization of Deterministic Games

• Many possible formalizations, one is (see the Python sketch after this slide):


• States: S (start at s0)
• Players: P = {1 . . . N} (take turns)
• Actions: A (depend on player/state)
• Transition Function: S × A → S
• Terminal Test: S → {t, f}
• Terminal Utilities: S × P → R
• At the end we have utilities for each Player.
• Solution for a player is a policy: S → A

11/30
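
One way to read the formalization above is as a small interface. Here is a minimal Python rendering (an illustrative assumption, not the lecture's notation); the concrete state and action types are left to the specific game.

from dataclasses import dataclass
from typing import Callable, Dict, List

State = tuple    # any hashable encoding of a position
Action = str
Player = int

@dataclass
class Game:
    initial_state: State                               # s0
    players: List[Player]                              # P = {1 ... N}, taking turns
    actions: Callable[[State], List[Action]]           # A: legal moves in a state
    result: Callable[[State, Action], State]           # transition function S x A -> S
    is_terminal: Callable[[State], bool]               # terminal test S -> {t, f}
    utility: Callable[[State, Player], float]          # terminal utilities S x P -> R

Policy = Dict[State, Action]                           # a solution for one player: S -> A
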
Zero-Sum and Variable Sum Games

Zero-Sum Games
• Agents have opposite utilities (values on outcomes)
• Lets us think of a single value that one maximizes and the other minimizes
• Adversarial, pure competition

General Games
• Agents have independent utilities (values on outcomes)
• Cooperation, indifference, competition, and more are all possible
• More later on non-zero-sum games
12/30
Minimax Algorithm
Algorithm 1 Minimax Algorithm
1: function MINIMAX(node, depth, maximizingPlayer)
2: if depth == 0 or node is a terminal node then
3: return static evaluation of node
4: end if
5: if maximizingPlayer then
6: value := −∞
7: for each child of node do
8: value := max(value, MINIMAX(child, depth - 1, false))
9: end for
10: return value
11: else
12: value := +∞
13: for each child of node do
14: value := min(value, MINIMAX(child, depth - 1, true))
15: end for
16: return value
17: end if
18: end function

13/30
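
The listing above translates almost line for line into Python. This is a minimal sketch; the children, evaluate, and is_terminal callables are assumptions supplied by the specific game rather than part of the lecture.

def minimax(node, depth, maximizing_player, children, evaluate, is_terminal):
    # Depth-limited minimax: MAX and MIN alternate levels of the game tree.
    if depth == 0 or is_terminal(node):
        return evaluate(node)                    # static evaluation at a leaf or cutoff
    if maximizing_player:
        value = float("-inf")
        for child in children(node):
            value = max(value, minimax(child, depth - 1, False,
                                       children, evaluate, is_terminal))
        return value
    else:
        value = float("+inf")
        for child in children(node):
            value = min(value, minimax(child, depth - 1, True,
                                       children, evaluate, is_terminal))
        return value
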
Minimax Efficiency

• Time Complexity: O(b^m)


• b: Branching factor (average number of moves per state).
• m: Maximum depth of the game tree.
• Space Complexity: O(bm)
• b: Branching factor (as above).
• m: Maximum depth of the tree (linear space for depth-first search).
• Challenges:
• High complexity in games with large b and m (e.g., chess; see the worked numbers after this slide).
• Managing computational resources for deeper searches.

14/30
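
A back-of-the-envelope check of these bounds, using commonly cited textbook estimates for chess (roughly b = 35 legal moves per position and m = 100 plies; these figures are assumptions, not from the slides):

b, m = 35, 100
time_nodes = b ** m          # O(b^m): about 2.5e154 nodes, hopelessly large
space_nodes = b * m          # O(bm): about 3,500 nodes on the depth-first stack, trivial
print(f"time ~ {time_nodes:.1e} nodes, space ~ {space_nodes} nodes")

The gap between time and space is why depth limits, static evaluation at the cutoff, and pruning (next slides) matter in practice.
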
Adversarial Search (Minimax)

• Deterministic, zero-sum games:
• Examples: Tic-tac-toe, chess, checkers
• One player aims to maximize the score
• The other aims to minimize the score
• Minimax search:
• Involves a state-space search tree
• Players alternate turns
• Compute the minimax value of a node:
• Represents the best utility achievable against an optimal adversary

Figure: Min Max
15/30
Min Max

Figure: Minimax example (external image source).

16/30
Min-Max Example 2

17/30
Quiz 1

Figure: Image Source: Stack Overflow Minimax Explanation.


18/30
Quiz 2

Figure: Image Source: SlidePlayer Tic-Tac-Toe State.

19/30
Properties of Mini-Max Algorithm

• Recursive exploration of the game tree down to terminal nodes.


• Backtracks to propagate scores up the tree.
• Each player chooses the move that benefits them the most (MAX maximizes, MIN
minimizes).

20/30
Alpha-Beta Pruning: An Optimization for Mini-Max

• Alpha-Beta Pruning optimizes the Mini-Max algorithm.


• Reduces the number of nodes evaluated in the game tree, effectively cutting
down the search space.
• Uses two threshold parameters:
• Alpha (α): Best value for MAX so far, initialized to −∞.
• Beta (β ): Best value for MIN so far, initialized to +∞.

21/30
Conditions and Key Points for Alpha-Beta Pruning

• Pruning occurs when α ≥ β , stopping further exploration of a branch.


• Max Player updates only α.
• Min Player updates only β .
• Node values are passed up the tree; only α and β are passed down (see the sketch after this slide).

22/30
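
A minimal sketch of these rules in code (not from the lecture; the game-interface callables follow the minimax sketch earlier): α and β are passed down, values are passed up, MAX touches only α, MIN touches only β, and a branch is cut as soon as α ≥ β.

def alphabeta(node, depth, alpha, beta, maximizing_player,
              children, evaluate, is_terminal):
    if depth == 0 or is_terminal(node):
        return evaluate(node)
    if maximizing_player:
        value = float("-inf")
        for child in children(node):
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False,
                                         children, evaluate, is_terminal))
            alpha = max(alpha, value)            # MAX updates only alpha
            if alpha >= beta:
                break                            # prune: MIN above will never allow this branch
        return value
    else:
        value = float("+inf")
        for child in children(node):
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True,
                                         children, evaluate, is_terminal))
            beta = min(beta, value)              # MIN updates only beta
            if alpha >= beta:
                break                            # prune: MAX above will never allow this branch
        return value

At the root the call starts with alpha = -infinity and beta = +infinity, matching the initializations on the previous slide.
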
Step 1

Figure: Alpha-Beta example
23/30


Example of Alpha-Beta Pruning (Step-by-Step)

• Step 1: Start at root (Node A), set α = −∞, β = +∞, pass values to Node B.
• Step 2: At Node D, MAX updates α to 3 (from terminal nodes 2, 3).
• Step 3: Backtrack to Node B, MIN updates β to 3.
• Step 4: Traverse Node E, MAX updates α to 5. Prune right child as α ≥ β .

24/30
Phase 2

Figure: Alpha-Beta example
25/30


Phase 3

Figure: Alpha-Beta example
26/30


Phase 4

Figure: Alpha-Beta example
27/30


Phase 5

Figure: Alpha-Beta example
28/30


Phase 6

Figure: Alpha-Beta example
29/30
