Artificial Intelligence Notes
4. PEAS Analysis
PEAS (Performance, Environment, Actuators, Sensors) is a framework for designing
intelligent agents.
For each agent:
1. Performance measure: What constitutes success? (e.g., cleanliness for a vacuum
cleaner).
2. Environment: Where does the agent operate? (e.g., an apartment floor).
3. Actuators: What tools does the agent use? (e.g., wheels, brushes).
4. Sensors: How does the agent perceive the environment? (e.g., dirt detectors).
Examples:
o Medical diagnosis system: Performance: healthy patient, minimized costs;
Environment: patient, hospital, medical staff; Actuators: questions, tests,
diagnoses, treatments; Sensors: keyboard entry of symptoms, findings, and
the patient's answers.
Additional Concepts
Branching Factor (b)
The maximum number of children (successors) a node can have.
A higher branching factor leads to exponential growth in both time and space complexity.
Depth (d)
The depth of the shallowest goal node, i.e., the number of steps in the shortest
path from the initial state to a goal state.
Determines how quickly a solution can be found in the tree.
Maximum Path Length (m)
The longest possible path in the state space.
For infinite state spaces, m = ∞, which can make some algorithms like DFS problematic.
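To make the exponential growth concrete, the sketch below counts the worst-case number of nodes in a uniform tree with branching factor b and depth d (the geometric sum 1 + b + b^2 + ... + b^d); the (b, d) pairs are illustrative, with 35 being a commonly cited approximation of the average branching factor in chess.

```python
def nodes_in_tree(b, d):
    # Worst-case node count for a uniform tree: 1 + b + b^2 + ... + b^d.
    return sum(b**i for i in range(d + 1))

for b, d in [(2, 10), (10, 5), (35, 4)]:  # 35 ~ average chess branching factor
    print(f"b={b}, d={d}: {nodes_in_tree(b, d):,} nodes")
```

Even at modest depths, raising b from 2 to 35 moves the count from thousands to millions of nodes, which is why both time and space complexity are dominated by the b^d term.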
Real-World Implications
Trade-offs:
o Algorithms with better completeness and optimality often require more time and
memory.
o Example: BFS is complete and optimal but has high space complexity, making it
unsuitable for memory-constrained systems.
Selection of Algorithms:
o For small problems or problems with finite depth, BFS might be a good choice.
o For problems with infinite state spaces, Iterative Deepening Search (IDS) combines
the benefits of DFS (low memory usage) and BFS (completeness and optimality).
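The IDS idea above can be sketched as repeated depth-limited DFS: each iteration raises the depth limit by one, so memory stays O(d) like DFS while the shallowest goal is still found first, like BFS. The graph below is a hypothetical example, not from the lecture.

```python
def depth_limited_search(graph, node, goal, limit, path):
    # Plain DFS that gives up once the depth limit is exhausted.
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1, path + [child])
        if result is not None:
            return result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    # Re-run depth-limited DFS with limits 0, 1, 2, ...; the growing limit
    # guarantees the shallowest goal is found first (for unit step costs).
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

# Hypothetical state graph for illustration.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"], "F": []}
print(iterative_deepening_search(graph, "A", "F"))  # ['A', 'C', 'E', 'F']
```

Re-expanding shallow nodes on every iteration looks wasteful, but because node counts grow geometrically with depth, the repeated work adds only a constant factor.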
Lecture-4
1. What is Adversarial Search?
Definition: Adversarial search is a search strategy used in environments where agents
must compete against each other, such as in games. Unlike simple search problems, the
outcome depends on the actions of both the agent and its adversary.
Characteristics:
o Involves two or more agents with conflicting goals.
o Often involves zero-sum games, where one player's gain is equivalent to the other
player's loss.
o The environment is typically deterministic and fully observable, making strategic
planning essential.
Example: Chess, where each move by one player counters the other.
2. Application in Games
Adversarial search is widely applied in games requiring strategy and competition. Some
applications include:
Board Games:
o Chess
o Checkers
o Go
Card Games:
o Poker
o Bridge
Video Games:
o Real-time strategy games like StarCraft or Age of Empires.
Economic Simulations:
o Multi-agent systems simulating competitive marketplaces.
5. Multiplayer Games
In multiplayer games, more than two players compete, requiring additional strategies to
handle complex interactions.
Features:
o Coalitions: Players may form temporary alliances.
o Dynamic Objectives: Each player may have unique or evolving goals.
o Complex Evaluation Functions: Utility values need to consider multiple players.
Example: Board games like Risk or economic simulations with multiple competing agents.
6. Min-Max Algorithm
A fundamental algorithm used in adversarial search for two-player games.
Concept:
o Maximizing Player: Chooses moves to maximize their payoff.
o Minimizing Player: Chooses moves to minimize the maximizing player’s payoff.
Steps:
1) Generate the game tree from the current state.
2) Evaluate terminal nodes using a utility function.
3) Backpropagate values up the tree:
At max nodes, choose the maximum value of child nodes.
At min nodes, choose the minimum value of child nodes.
Output: The best move for the maximizing player assuming the opponent plays optimally.
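The steps above can be sketched as a generic recursive Min-Max; `get_moves`, `apply_move`, and `evaluate` are placeholder hooks a real game would supply, and the toy tree in the usage example is illustrative (it mirrors the classic three-branch textbook example).

```python
def minimax(state, depth, maximizing, get_moves, apply_move, evaluate):
    # Returns (value, best_move) assuming the opponent plays optimally.
    moves = get_moves(state)
    if depth == 0 or not moves:          # terminal node or depth cutoff
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for move in moves:               # max node: take the maximum child value
            value, _ = minimax(apply_move(state, move), depth - 1, False,
                               get_moves, apply_move, evaluate)
            if value > best:
                best, best_move = value, move
    else:
        best = float("inf")
        for move in moves:               # min node: take the minimum child value
            value, _ = minimax(apply_move(state, move), depth - 1, True,
                               get_moves, apply_move, evaluate)
            if value < best:
                best, best_move = value, move
    return best, best_move

# Toy game tree: internal nodes are lists of children, leaves are utilities.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value, move = minimax(
    tree, 2, True,
    get_moves=lambda s: list(range(len(s))) if isinstance(s, list) else [],
    apply_move=lambda s, m: s[m],
    evaluate=lambda s: s if isinstance(s, int) else 0)
print(value, move)  # 3 0 (the max of the three min values 3, 2, 2)
```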
7. Alpha-Beta Pruning
An optimization technique for the Min-Max algorithm.
Purpose: Prune branches of the game tree that cannot affect the final decision, reducing
computation time.
Key Terms:
o Alpha: The best value the maximizing player can guarantee at that level or above.
o Beta: The best value the minimizing player can guarantee at that level or above.
Process:
o Traverse the game tree in depth-first order, updating alpha at max nodes and
beta at min nodes.
o Prune the remaining children of a node as soon as alpha ≥ beta:
At a max node, once a child's value reaches beta, the minimizing player
would never allow this branch (beta cutoff).
At a min node, once a child's value falls to alpha, the maximizing player
already has a better option elsewhere (alpha cutoff).
Result: Same decision as Min-Max but with fewer node evaluations.
Efficiency: With optimal move ordering, reduces the number of evaluated nodes to
O(b^(d/2)), where b is the branching factor and d is the depth, effectively
doubling the searchable depth for the same effort.
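A minimal alpha-beta sketch over the same toy tree representation used for Min-Max (lists are internal nodes, integers are leaf utilities); the visit counter is an illustrative addition to show the savings over a full Min-Max pass.

```python
def alphabeta(tree, alpha, beta, maximizing, counter):
    # Depth-first Min-Max with alpha-beta cutoffs; counter tracks node visits.
    counter[0] += 1
    if not isinstance(tree, list):       # leaf: return its utility
        return tree
    if maximizing:
        value = float("-inf")
        for child in tree:
            value = max(value, alphabeta(child, alpha, beta, False, counter))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff: MIN never allows this branch
                break
        return value
    else:
        value = float("inf")
        for child in tree:
            value = min(value, alphabeta(child, alpha, beta, True, counter))
            beta = min(beta, value)
            if beta <= alpha:            # alpha cutoff: MAX has a better option
                break
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
visits = [0]
print(alphabeta(tree, float("-inf"), float("inf"), True, visits))  # 3
print(visits[0])  # 11 visits, vs. 13 for a full Min-Max of this tree
```

The pruned branches are exactly those that cannot change the root decision, so the returned value matches plain Min-Max while fewer nodes are evaluated.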
8. State-of-the-Art Game Programs
Advanced programs use adversarial search and other AI techniques to outperform human
players.
Examples:
o Deep Blue: Defeated world chess champion Garry Kasparov using advanced game
tree exploration and evaluation.
o AlphaGo: Used reinforcement learning and neural networks to master the game of
Go, defeating top human players.
o Poker AI: AI systems like Libratus and Pluribus excel in imperfect information games
like Poker.
o Real-time Strategy Games:
AI programs for games like StarCraft II use a mix of adversarial search,
machine learning, and planning to defeat professional players.