Unit 3
Formal Definition of a Game:
1. Initial State (S₀): The state that specifies how the game is set up at the start.
2. Players: A function that defines which player has the move in each state.
3. Actions (A): The set of legal moves available to a player in a given state.
4. Result Function: The transition model, giving the state that results from taking an action in a given state.
5. Terminal Test: A function that determines whether the game has ended in a particular state (such states are terminal states).
6. Utility Function: Also called a payoff function, it assigns a numeric value to terminal states indicating the outcome (e.g., win, lose, draw).
7. Game Tree: A conceptual tree structure representing all possible moves and states in the game.
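The seven components above can be made concrete with a tiny Nim-like game (take 1 or 2 objects from a pile; whoever takes the last object wins). This is purely an illustrative sketch; the state encoding and function names are our own choices, not from the source.

```python
# State: (pile_size, player_to_move); players are "MAX" and "MIN".

def initial_state():
    return (3, "MAX")                        # S0: pile of 3, MAX moves first

def player(state):
    return state[1]                          # which player has the move

def actions(state):
    pile, _ = state
    return [n for n in (1, 2) if n <= pile]  # legal moves: take 1 or 2

def result(state, action):                   # transition model
    pile, who = state
    return (pile - action, "MIN" if who == "MAX" else "MAX")

def terminal_test(state):
    return state[0] == 0                     # game over when the pile is empty

def utility(state):
    # Whoever took the last object won, so it is now the *loser's* turn;
    # score is reported from MAX's point of view (+1 win, -1 loss).
    return 1 if player(state) == "MIN" else -1
```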
Importance of Optimal Decisions:
1. Maximizing outcomes: Optimal decisions ensure that a player achieves the best possible result.
2. Adversarial nature: Since both players are assumed to play optimally, these decisions provide a strategy that counters the opponent's best moves.
3. Prevents loss: In a zero-sum game, optimal decisions minimize a player's worst-case loss.
4. Strategic planning: Players must plan ahead, considering future moves by both themselves and their opponents.
5. Handling complexity: Optimal decision-making helps manage complex games with many potential states and moves.
6. Game evaluation: Optimal decisions guide the evaluation of states, estimating how good or bad a move is without exploring the entire game tree.
7. Benchmarking: Optimal decisions form the baseline against which suboptimal strategies are measured.
Ply:
1. Definition: A "ply" is one player's move in a game. A full turn consists of two ply.
2. Game Depth: Ply is used to measure the depth of search trees (the number of
half-moves made).
3. Limit of Search: Algorithms like minimax often calculate moves up to a certain
ply depth, depending on computational resources.
4. Decision Evaluation: The higher the number of ply evaluated, the better the
decisions, as more future moves are considered.
5. Search Tree Representation: In games like chess, deeper ply searches provide
more strategic depth but increase complexity.
6. Time Complexity: With each additional ply, the number of possible game states
explored grows exponentially.
7. Influence on Search Algorithms: Ply depth impacts the effectiveness and
efficiency of algorithms like alpha-beta pruning.
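The exponential growth in points 5 and 6 can be illustrated with a quick calculation; the branching factor of roughly 35 for chess is a commonly cited estimate, used here only for illustration.

```python
# With branching factor b, a search d ply deep must examine on the
# order of b**d leaf positions.
def leaves_at_ply(b, d):
    return b ** d

# Chess has roughly b ≈ 35 legal moves per position:
print([leaves_at_ply(35, d) for d in range(1, 5)])
# -> [35, 1225, 42875, 1500625], exponential in the number of ply
```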
Minimax:
1. Optimal Strategy: The minimax algorithm is designed to find the optimal move by
considering both players' possible actions.
2. Recursive Algorithm: Minimax works recursively, evaluating the utility values of
terminal nodes and backing them up through the game tree.
3. Maximizing Utility: The MAX player aims to maximize the minimum utility that
the MIN player can force.
4. Minimizing Utility: MIN tries to minimize the maximum gain for MAX by
choosing the lowest utility value available.
5. Depth-first Search: Minimax performs a complete depth-first search through the
game tree to find the best move.
6. Utility Propagation: Utility values are propagated from terminal nodes to root,
ensuring optimal decision-making.
7. Exponential Complexity: The time complexity of minimax is O(b^m), where b is
the branching factor and m is the maximum depth of the game tree.
1. Purpose: The minimax algorithm is used to determine the optimal move for a
player assuming that both players play optimally.
2. Recursive Computation: It recursively computes the minimax values of each
successor state, starting from the leaves of the game tree and backing them up
to the root.
3. Utility Function: The value of a terminal state is determined by the utility
function. For intermediate states, MAX selects the move with the maximum
value, while MIN selects the move with the minimum value.
4. Depth-First Search: The algorithm performs a complete depth-first search
through the entire game tree.
5. Decision Process: At each node, MAX tries to maximize the utility, and MIN tries
to minimize it, leading to an optimal strategy.
6. Complexity: The time complexity is O(b^m), where b is the branching factor
and m is the maximum depth of the tree. The space complexity is O(m) if
actions are generated one at a time.
7. Example: If MAX has three possible moves leading to utilities 3, 2, and 1, it will
choose the move that leads to utility 3.
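The recursive procedure described above can be sketched in a few lines of Python. The nested-list tree encoding (lists are internal nodes, integers are terminal utilities) is our own illustrative choice, not from the source.

```python
def minimax(node, maximizing):
    """Return the minimax value of a node (int = terminal utility)."""
    if isinstance(node, int):              # terminal state: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX to move at the root; each child is a MIN node. MIN's best replies
# yield 3, 2 and 1, so MAX picks the move worth 3 (as in point 7).
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 1]]
best = minimax(tree, True)                 # -> 3
```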
1. Chess (Deep Blue): IBM's Deep Blue is one of the most famous chess
programs, which defeated world champion Garry Kasparov. It used parallel
computing and specialized processors to analyze up to 30 billion positions per
move.
2. Hydra: A successor to Deep Blue, Hydra utilized multiple processors and
advanced pruning heuristics to reduce the effective branching factor, allowing
deep searches in chess games.
3. Checkers: Checkers has been completely solved using endgame databases that
calculate the optimal move for every position.
4. Othello (Logistello): The Othello-playing program Logistello used probabilistic
cuts and alpha-beta pruning to enhance its ability to search large game trees
efficiently.
5. Go: AI programs like AlphaGo, developed by DeepMind, combined deep learning
and Monte Carlo Tree Search to master the game of Go, a much more complex
game than chess.
6. Poker: AI programs for poker, such as Libratus, rely on solving partially
observable decision problems and incorporate bluffing strategies.
7. Role of AI: These game programs showcase the integration of advanced search
algorithms, heuristic evaluations, and learning techniques in mastering complex
games.
9. Explain the Wumpus world game with a diagram and its agent program:
Associativity: ((α ∧ β) ∧ γ) ≡ (α ∧ (β ∧ γ)) and ((α ∨ β) ∨ γ) ≡ (α ∨ (β ∨ γ))
Double Negation: ¬(¬α) ≡ α
Implication Elimination: (α ⇒ β) ≡ (¬α ∨ β)
De Morgan’s Laws: ¬(α ∧ β) ≡ (¬α ∨ ¬β) and ¬(α ∨ β) ≡ (¬α ∧ ¬β)
Distributivity: (α ∧ (β ∨ γ)) ≡ ((α ∧ β) ∨ (α ∧ γ)) and (α ∨ (β ∧ γ)) ≡ ((α ∨ β) ∧ (α ∨ γ))
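These standard logical equivalences can be checked mechanically by enumerating all truth assignments. A small illustrative sketch (note that for Python booleans, `a <= b` happens to compute the material implication a ⇒ b):

```python
from itertools import product

def equivalent(f, g, nvars):
    """Two formulas are equivalent iff they agree in every model."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=nvars))

assert equivalent(lambda a, b, c: (a and b) and c,
                  lambda a, b, c: a and (b and c), 3)          # associativity
assert equivalent(lambda a: not (not a), lambda a: a, 1)       # double negation
assert equivalent(lambda a, b: a <= b,
                  lambda a, b: (not a) or b, 2)                # implication elimination
assert equivalent(lambda a, b: not (a and b),
                  lambda a, b: (not a) or (not b), 2)          # De Morgan
assert equivalent(lambda a, b, c: a and (b or c),
                  lambda a, b, c: (a and b) or (a and c), 3)   # distributivity
```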
1. Chess (Deep Blue): IBM's Deep Blue became famous after defeating world chess
champion Garry Kasparov. It used parallel processing and specialized VLSI chips for
move generation, and evaluated up to 30 billion positions per move.
2. Hydra: Hydra succeeded Deep Blue and was designed for advanced chess play. It
combined massive parallel processing with customized algorithms to dominate in
chess.
3. Othello (Logistello): The Logistello program used probabilistic cutting algorithms,
such as PROBCUT, to evaluate potential moves in Othello. It was known for its efficient
pruning techniques.
4. Checkers (Chinook): Chinook solved checkers by calculating optimal endgame
strategies from an endgame database, making it unbeatable.
5. Go (AlphaGo): AlphaGo, developed by DeepMind, used deep learning and Monte Carlo
Tree Search (MCTS) to play Go. It became the first AI to defeat a professional human
player.
6. Poker (Libratus): Libratus, a poker-playing AI, excelled in games of incomplete
information by combining game theory and machine learning.
7. Advances in AI: These programs highlight how AI has evolved from brute-force
search to advanced learning algorithms and probabilistic models.
1. Semantics Definition: In AI, the semantics of propositional logic is the set of rules
that define the truth value of a sentence relative to a model.
2. Atomic Sentences: These are the simplest sentences in propositional logic, consisting
of a single proposition symbol (e.g., P1,2, "There is a pit in square [1,2]").
3. Truth Assignment: The truth value of each atomic sentence is determined directly by
the model. For example, in a specific model, P1,2 may be true if a pit exists in that
square.
4. Complex Sentences: Atomic sentences can be combined using logical connectives (¬,
∧, ∨, ⇒) to form more complex sentences, whose truth values are derived from those
of the atomic ones.
5. Model: A model provides a truth assignment for every atomic sentence. For example, a
model might define P1,2 = false, meaning no pit exists in that square.
6. Recursive Evaluation: The truth of a complex sentence is computed by recursively
evaluating the truth of its components with respect to the model.
7. Use in AI: AI systems use these semantics to reason about knowledge and infer new
facts from known information.
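The recursive evaluation in point 6 can be sketched directly. The tuple encoding of sentences and the atom names are our own illustrative choices; a model is simply a dictionary mapping proposition symbols to truth values.

```python
def pl_true(sentence, model):
    """Recursively evaluate a propositional sentence in a model.
    Sentences are nested tuples ("not"/"and"/"or"/"implies", ...);
    atoms are strings looked up directly in the model (point 3)."""
    if isinstance(sentence, str):                  # atomic sentence, e.g. "P12"
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not pl_true(args[0], model)
    if op == "and":
        return all(pl_true(a, model) for a in args)
    if op == "or":
        return any(pl_true(a, model) for a in args)
    if op == "implies":
        return (not pl_true(args[0], model)) or pl_true(args[1], model)
    raise ValueError(f"unknown connective: {op}")

# Model with a pit in [1,2] but no breeze in [1,1]:
model = {"P12": True, "B11": False}
print(pl_true(("implies", "P12", "B11"), model))   # -> False
```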
18. State and explain the Minimax algorithm and the alpha-beta pruning algorithm:
Minimax Algorithm:
1. Purpose: Minimax is used in game theory to find the optimal move for a player,
assuming both players play optimally.
2. Recursion: The algorithm explores the entire game tree, recursively evaluating the
utility values of terminal states.
3. MAX and MIN Players: MAX tries to maximize the minimum value it can be forced
into, while MIN tries to minimize the maximum value MAX can obtain.
4. Utility Values: Each terminal state is assigned a utility, and these values propagate up
the game tree to determine the best move.
5. Complete Search: The algorithm explores every possible move, making it
computationally expensive (O(b^m)).
6. Depth-First Search: Minimax performs a depth-first search, making it suitable for
games like chess and tic-tac-toe.
7. Optimality: It guarantees an optimal strategy when both players play perfectly.
Alpha-Beta Pruning:
1. Purpose: Alpha-beta pruning improves minimax by eliminating branches that cannot
influence the final decision.
2. Alpha and Beta: Alpha is the best value found so far for MAX along the current path;
beta is the best value found so far for MIN.
3. Pruning Condition: A branch is pruned when alpha ≥ beta, since optimal play would
never allow the game to reach it.
4. Same Result: Alpha-beta returns the same move as plain minimax; it only skips
subtrees that cannot affect the outcome.
5. Efficiency: With perfect move ordering, it examines about O(b^(m/2)) nodes,
effectively doubling the searchable depth.
6. Move Ordering: Its effectiveness depends strongly on the order in which moves are
examined.
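A minimal alpha-beta sketch, using a nested-list tree encoding of our own choosing (lists are internal nodes, integers are terminal utilities from MAX's point of view):

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Minimax value of a node, skipping subtrees that cannot matter."""
    if isinstance(node, int):                      # terminal state
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                      # MIN would avoid this branch
                break                              # prune remaining children
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                          # MAX already has a better option
            break                                  # prune remaining children
    return value

# The answer (3) matches plain minimax, but after seeing the leaf 2 under
# the second MIN node, the leaves 4 and 6 are never examined.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
best = alphabeta(tree, -math.inf, math.inf, True)  # -> 3
```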
i) Completeness: An inference procedure is complete if it can derive every sentence that
is entailed by the knowledge base.
iii) Satisfiability: A sentence is satisfiable if it is true in at least one model, i.e., some
assignment of truth values makes it true.