
Unit-2

Introduction to Search

2.1 Searching for Solutions

Search problems are foundational in AI, where the goal is to find a path from an initial state to a
goal state, typically in a problem space. This can involve searching for solutions in decision-
making problems or game-playing strategies.

2.1.1 Introduction to Search Problems

• Search Problem Definition: A search problem consists of:


o Initial state: The starting point of the search.
o Goal state: The target or solution.
o Actions: The set of legal moves that can transition the search from one state to
another.
o Transition model: A description of how actions transform one state into another.
o Path cost: A numerical value assigned to each path from the initial state to the
goal.
• Search space: A state space represented by a tree or graph where nodes are states and
edges represent actions.
• Search strategy: Determines the order in which the search space is explored.

2.1.2 Uninformed Search Strategies

Uninformed search strategies, also called blind search, explore the search space without any
domain-specific knowledge beyond the problem definition. These strategies do not rely on any
information about the goal, except for the rules of the environment.

2.1.2.1 Breadth-First Search (BFS)

• Description: BFS explores all nodes at the present depth level before moving on to nodes
at the next depth level.
• How it works:
o Starts from the initial state and explores all adjacent nodes.
o Expands nodes level by level, ensuring the shortest path in an unweighted graph.
• Characteristics:
o Complete: Will always find a solution if one exists.
o Optimal: Guarantees the shortest path in an unweighted problem.
o Time complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest solution.
o Space complexity: O(b^d), as BFS stores all nodes at each level.
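
A minimal Python sketch of BFS on a graph stored as an adjacency dictionary (the graph, state names, and helper below are illustrative, not from the text):

```python
from collections import deque

def bfs(graph, start, goal):
    """Return a shortest path (fewest edges) from start to goal, or None."""
    frontier = deque([start])
    parent = {start: None}          # also serves as the visited set
    while frontier:
        state = frontier.popleft()  # FIFO queue: expand shallowest nodes first
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]       # reconstruct path from goal back to start
        for neighbor in graph[state]:
            if neighbor not in parent:
                parent[neighbor] = state
                frontier.append(neighbor)
    return None

# Example: shortest path in a small unweighted graph
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E'], 'E': []}
print(bfs(graph, 'A', 'E'))  # ['A', 'B', 'D', 'E']
```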

2.1.2.2 Depth-First Search (DFS)

• Description: DFS explores as far as possible along a branch before backtracking.


• How it works:
o Begins at the root, explores deeper by going down one branch until the goal is
found or the branch ends.
o If no goal is found, it backtracks and explores the next available branch.
• Characteristics:
o Completeness: Not complete in infinite-depth search spaces (it may descend forever down an infinite branch).
o Not optimal: It can return a deeper solution even when a shallower one exists.
o Time complexity: O(b^m), where b is the branching factor and m is the maximum depth of the search tree.
o Space complexity: O(b×m), due to the stack holding the current path and its unexpanded siblings.
o Advantages: Uses less memory than BFS.
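
A corresponding sketch of DFS using an explicit stack; the example graph and state names are the same illustrative ones used for BFS above:

```python
def dfs(graph, start, goal):
    """Depth-first search with an explicit stack; returns a path or None."""
    stack = [(start, [start])]        # LIFO stack of (state, path-so-far)
    while stack:
        state, path = stack.pop()     # expand the most recently generated node
        if state == goal:
            return path
        for neighbor in graph[state]:
            if neighbor not in path:  # avoid cycles along the current path
                stack.append((neighbor, path + [neighbor]))
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E'], 'E': []}
print(dfs(graph, 'A', 'E'))           # finds a path, not necessarily the shortest
```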

2.1.2.3 Uniform Cost Search

• Description: A variation of BFS that accounts for the cost of each action and expands the
least-cost node first.
• How it works:
o It treats the problem as a cost minimization problem, expanding nodes with the
lowest path cost.
o Uses a priority queue to store and retrieve nodes based on their accumulated cost.
• Characteristics:
o Complete: Will find a solution if one exists, provided every action cost is strictly positive.
o Optimal: Finds the least-cost path.
o Time complexity: O(b^d) when step costs are uniform; where b is the branching factor and d is the depth of the solution.
o Space complexity: O(b^d), due to the storage of all generated nodes.
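
A sketch of Uniform Cost Search using Python's heapq module as the priority queue; the weighted-graph representation (lists of (neighbor, step_cost) pairs) is an illustrative choice:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """graph[state] is a list of (neighbor, step_cost) pairs; costs must be > 0."""
    frontier = [(0, start, [start])]   # priority queue ordered by path cost g
    explored = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)  # cheapest node first
        if state == goal:
            return cost, path
        if state in explored:
            continue                   # a cheaper route here was already expanded
        explored.add(state)
        for neighbor, step in graph[state]:
            if neighbor not in explored:
                heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None

graph = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1), ('D', 4)],
         'C': [('D', 1)], 'D': []}
print(uniform_cost_search(graph, 'A', 'D'))  # (3, ['A', 'B', 'C', 'D'])
```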
2.1.2.4 Iterative Deepening Search

• Description: Iterative Deepening Search (IDS) combines the benefits of BFS and DFS. It
performs DFS repeatedly with increasing depth limits.
• How it works:
o The search starts with a depth limit of 1, performs DFS up to that limit, then
increases the limit and repeats the search until the goal is found.
o Ensures completeness like BFS, but uses less memory, like DFS.
• Characteristics:
o Complete: Always finds a solution if one exists.
o Optimal: Finds the shallowest solution in an unweighted graph, similar to BFS.
o Time complexity: O(b^d); the repeated shallow searches re-expand nodes near the root, but this adds only a constant-factor overhead compared to BFS.
o Space complexity: O(b×d), since it stores only one path from the root to a node at any given time.
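
A self-contained sketch of IDS built on a recursive depth-limited DFS; the max_depth cap is an illustrative safeguard, not part of the algorithm:

```python
def depth_limited_dfs(graph, state, goal, limit, path=None):
    """Recursive DFS that stops expanding below the given depth limit."""
    path = path or [state]
    if state == goal:
        return path
    if limit == 0:
        return None                   # depth limit reached: cut off this branch
    for neighbor in graph[state]:
        if neighbor not in path:      # avoid cycles along the current path
            found = depth_limited_dfs(graph, neighbor, goal,
                                      limit - 1, path + [neighbor])
            if found:
                return found
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        path = depth_limited_dfs(graph, start, goal, limit)
        if path:
            return path               # the first success is at the shallowest depth
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E'], 'E': []}
print(iterative_deepening_search(graph, 'A', 'E'))   # a shallowest path to 'E'
```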

Summary of Uninformed Search Strategies:

| Search Strategy      | Completeness        | Optimality | Time Complexity | Space Complexity |
|----------------------|---------------------|------------|-----------------|------------------|
| Breadth-First Search | Yes                 | Yes        | O(b^d)          | O(b^d)           |
| Depth-First Search   | No (infinite depth) | No         | O(b^m)          | O(b×m)           |
| Uniform Cost Search  | Yes                 | Yes        | O(b^d)          | O(b^d)           |
| Iterative Deepening  | Yes                 | Yes        | O(b^d)          | O(b×d)           |

2.1.3 Informed Search Strategies (Heuristic Search)

Informed search strategies, also known as heuristic search, use domain-specific knowledge
(heuristics) to guide the search process towards the most promising paths, reducing the search
space and improving efficiency. These strategies are designed to find the goal state more quickly
by selecting nodes based on their estimated distance to the goal.
2.1.3.1 Greedy Best-First Search

• Description: Greedy Best-First Search selects the node that appears to be closest to the goal, based on a heuristic function. It does not consider the path cost but only the estimated cost to the goal (the heuristic value, h(n)).
• How it works:
o It begins at the initial state and uses a heuristic to evaluate the "promising" states
to expand.
o The search always chooses the node with the lowest heuristic value to explore
next.
• Characteristics:
o Not complete: Greedy search can get stuck in loops or wander down a misleading branch, so it is not guaranteed to reach the goal (graph search with cycle checking is complete on finite spaces).
o Not optimal: It may find suboptimal solutions because it ignores the actual path cost.
o Time complexity: O(b^d) in the worst case, where b is the branching factor and d is the depth of the solution.
o Space complexity: O(b^d), as it stores all the nodes expanded.
o Heuristic: Uses h(n), the estimated cost to reach the goal from node n.
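
A sketch of Greedy Best-First Search; note that the priority queue is ordered by h(n) alone. The example graph and the heuristic table are made-up values for illustration:

```python
import heapq

def greedy_best_first(graph, start, goal, h):
    """Expand the node with the smallest heuristic value h(n); ignores path cost."""
    frontier = [(h(start), start, [start])]
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)   # lowest h(n) first
        if state == goal:
            return path
        for neighbor in graph[state]:
            if neighbor not in visited:
                visited.add(neighbor)
                heapq.heappush(frontier, (h(neighbor), neighbor, path + [neighbor]))
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
h_table = {'A': 3, 'B': 2, 'C': 1, 'D': 0}         # illustrative estimates
print(greedy_best_first(graph, 'A', 'D', h_table.get))  # ['A', 'C', 'D']
```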

2.1.3.2 A* Search Algorithm

• Description: A* Search combines the advantages of both Greedy Best-First Search and Uniform Cost Search. It uses both the path cost g(n) and the heuristic h(n) to guide the search. The evaluation function is f(n) = g(n) + h(n), where:
o g(n): The cost to reach the current node n from the start node.
o h(n): The heuristic estimate of the cost to reach the goal from node n.
o f(n) = g(n) + h(n): The evaluation function that balances the path cost and the heuristic estimate.
• How it works:
o A* starts from the initial node and evaluates nodes by their f(n) value.
o The node with the lowest f(n) is expanded first, ensuring that the search explores the most promising paths.
o A* guarantees an optimal solution if the heuristic is admissible (never overestimates the true cost) and consistent (satisfies the triangle inequality).
• Characteristics:
o Complete: A* is complete, meaning it will always find a solution if one exists.
o Optimal: A* guarantees an optimal solution if the heuristic is admissible and consistent.
o Time complexity: O(b^d) in the worst case, where b is the branching factor and d is the depth of the solution.
o Space complexity: O(b^d), since all generated nodes are stored in memory.
o Heuristic: Uses h(n), which is assumed admissible and consistent.
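
A sketch of A* ordering the frontier by f(n) = g(n) + h(n); the example graph and the (admissible) heuristic table are illustrative:

```python
import heapq

def a_star(graph, start, goal, h):
    """graph[state] -> list of (neighbor, step_cost); h estimates cost to goal."""
    frontier = [(h(start), 0, start, [start])]   # ordered by f(n) = g(n) + h(n)
    best_g = {start: 0}                          # cheapest known cost to each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for neighbor, step in graph[state]:
            g2 = g + step
            if g2 < best_g.get(neighbor, float('inf')):   # found a cheaper route
                best_g[neighbor] = g2
                heapq.heappush(frontier,
                               (g2 + h(neighbor), g2, neighbor, path + [neighbor]))
    return None

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)],
         'C': [('D', 2)], 'D': []}
h_table = {'A': 4, 'B': 3, 'C': 2, 'D': 0}   # admissible: never overestimates
print(a_star(graph, 'A', 'D', h_table.get))  # (4, ['A', 'B', 'C', 'D'])
```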

2.1.3.3 Heuristics and Their Role in Search

• Definition of Heuristic: A heuristic is a function that estimates the cost of the cheapest
path from a given node to the goal. It helps the search algorithm prioritize nodes that are
likely to lead to the goal more quickly.
• Role in Search:
o Guiding the search: Heuristics guide the search towards the most promising
paths and prevent exploring less likely solutions, making the search process more
efficient.
o Reducing search space: By using a heuristic, we can reduce the number of nodes
that need to be explored, leading to faster solutions in large problem spaces.
o Types of Heuristics:
▪ Admissible Heuristic: A heuristic is admissible if it never overestimates
the true cost of reaching the goal. This ensures optimality in algorithms
like A*.
▪ Consistent Heuristic: A heuristic is consistent if, for every node n and successor n′, h(n) ≤ c(n, n′) + h(n′): the estimated cost from n to the goal is less than or equal to the step cost from n to n′ plus the heuristic of n′. Consistency ensures that the search is efficient and guarantees optimality in A*.
• Common Heuristics:
o Manhattan Distance: Used in grid-based problems (like pathfinding on a 2D
grid), the Manhattan distance is the sum of the horizontal and vertical distances
between two points.
o Euclidean Distance: Used for problems in continuous spaces, the Euclidean
distance measures the straight-line distance between two points.
o Misplaced Tiles: Used in puzzle problems like the 8-puzzle, this heuristic counts
the number of tiles that are not in their correct position.
o Domain-Specific Heuristics: Some heuristics are derived from problem-specific domain knowledge, such as measuring how close a state is to the goal state.
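
Sketches of the three concrete heuristics above, with grid points represented as (row, column) tuples and 8-puzzle states as flat 9-tuples (these representations are assumptions for illustration):

```python
def manhattan(p, q):
    """Grid distance: sum of horizontal and vertical offsets."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Straight-line distance in continuous space."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def misplaced_tiles(state, goal):
    """8-puzzle heuristic: count tiles (ignoring the blank, 0) out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

print(manhattan((1, 2), (4, 6)))                       # 7
print(euclidean((0, 0), (3, 4)))                       # 5.0
print(misplaced_tiles((1, 2, 3, 4, 0, 5, 6, 7, 8),
                      (1, 2, 3, 4, 5, 6, 7, 8, 0)))    # 4
```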
Summary of Informed Search Strategies:

| Search Strategy   | Completeness | Optimality | Time Complexity | Space Complexity | Heuristic Used |
|-------------------|--------------|------------|-----------------|------------------|----------------|
| Greedy Best-First | No           | No         | O(b^d)          | O(b^d)           | h(n)           |
| A* Search         | Yes          | Yes        | O(b^d)          | O(b^d)           | g(n) + h(n)    |

2.2 Local Search Algorithms and Optimization Problems

Local search algorithms are useful for solving optimization problems where the solution is
sought by iteratively improving an initial guess. Unlike uninformed search algorithms, local
search algorithms only consider the current state and its neighbors, without keeping track of the
entire search space.

2.2.1 Introduction to Local Search Algorithms

• Local search is a class of algorithms that search for solutions in a local region of the
problem space rather than exploring the entire space.
• Key features:
o Typically, local search only requires storing a small number of states (usually one
current state and its neighbors).
o Local search algorithms are particularly effective in optimization problems, where
the goal is to find a better solution by iteratively improving the current one.
o They may get stuck in local optima, but they are often faster and require less
memory than exhaustive search algorithms.

2.2.2 Hill-Climbing Algorithm

• Description: The Hill-Climbing algorithm is a simple local search algorithm that iteratively moves in the direction of increasing elevation (value) to find the peak, or solution.
• How it works:
o Starts from an arbitrary initial state.
o At each step, it moves to a neighbor that improves the objective function (e.g.,
increases in value or reduces cost).
o If no neighbor improves the current state, the algorithm halts.
• Characteristics:
o Local maximum: The algorithm may get stuck at a local maximum (suboptimal
solution) because it only looks at neighboring solutions.
o Greedy approach: It only evaluates one neighbor at a time, without considering a
broader search space.
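
A minimal sketch of basic hill climbing; the one-dimensional objective function and ±1 neighborhood are illustrative, chosen so the run visibly stops at a local maximum:

```python
def hill_climbing(start, objective, neighbors):
    """Move to a better neighbor until none exists; may stop at a local maximum."""
    current = start
    while True:
        better = [n for n in neighbors(current)
                  if objective(n) > objective(current)]
        if not better:
            return current        # no improving neighbor: a (possibly local) peak
        current = better[0]       # take the first improving move

# Illustrative landscape: a local maximum at x=2 and a global one at x=8
f = lambda x: -(x - 2) ** 2 if x < 5 else 9 - (x - 8) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climbing(0, f, neighbors))   # 2: stuck on the local maximum
```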

2.2.2.1 Steepest Ascent Hill Climbing

• Description: Steepest Ascent Hill Climbing is a variant of Hill Climbing where, instead
of selecting any neighboring state, it selects the one with the highest improvement
(steepest ascent).
• How it works:
o For every possible neighboring state, it calculates the value of the objective
function.
o The state that provides the largest increase in the objective function is chosen as
the next state.
• Characteristics:
o More thorough than basic Hill Climbing as it always chooses the best immediate
move.
o Local optimum problem: Like basic Hill Climbing, it may still get stuck in a
local optimum.

2.2.2.2 Stochastic Hill Climbing

• Description: Stochastic Hill Climbing is another variant of Hill Climbing where the
algorithm chooses a neighbor randomly rather than deterministically.
• How it works:
o At each step, a random neighbor is selected from the neighboring states, and if
this state improves the objective function, it becomes the new current state.
• Characteristics:
o Randomness: Stochastic Hill Climbing introduces randomness, which can help
escape local optima.
o Less deterministic: Unlike steepest ascent, it does not always choose the best
neighbor but may occasionally explore less promising areas.
o Still susceptible to local maxima: Even with randomness, it can still get stuck in
suboptimal solutions.
2.2.3 Simulated Annealing

• Description: Simulated Annealing is a probabilistic local search algorithm inspired by the annealing process in metallurgy. It helps escape local optima by allowing occasional moves to worse states.
• How it works:
o The algorithm starts at a high temperature, which controls the probability of
moving to worse states.
o As the algorithm proceeds, the temperature is gradually decreased.
o When the temperature is high, the algorithm is more likely to explore the search
space and make moves to worse states. As the temperature decreases, it becomes
more selective and focuses on improving the solution.
• Characteristics:
o Escape local optima: Unlike Hill Climbing, it has a mechanism to accept worse
solutions initially, which helps avoid getting stuck in local minima.
o Cooling schedule: The temperature decay schedule (how quickly the temperature
decreases) affects the algorithm's performance.
o Global optimum: With the right cooling schedule, Simulated Annealing can
converge to the global optimum.
o Time complexity: Depends on the temperature schedule and the problem being
solved.
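
A sketch of Simulated Annealing with a geometric cooling schedule; the acceptance rule exp(delta/T) for worsening moves, the starting temperature, and the toy landscape are illustrative choices:

```python
import math, random

def simulated_annealing(start, objective, neighbors,
                        temp=10.0, cooling=0.95, steps=500):
    """Accept worse moves with probability exp(delta/T); T shrinks each step."""
    current = start
    for _ in range(steps):
        candidate = random.choice(neighbors(current))
        delta = objective(candidate) - objective(current)
        # Always accept improvements; accept worse moves with prob e^(delta/T)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        temp = max(temp * cooling, 1e-9)   # geometric cooling schedule
    return current

# Same toy landscape as the hill-climbing example: local peak at 2, global at 8
f = lambda x: -(x - 2) ** 2 if x < 5 else 9 - (x - 8) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(simulated_annealing(0, f, neighbors))  # can escape x=2 and settle near x=8
```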

2.2.4 Local Beam Search

• Description: Local Beam Search is a variant of local search that keeps track of multiple
states (rather than just one) at each iteration.
• How it works:
o The algorithm starts with multiple random states (called beams) and explores their
neighbors simultaneously.
o At each step, it selects the best states from the current set of beams.
o The process continues until a goal is found or no improvement is possible.
• Characteristics:
o Multiple paths: By maintaining multiple states, it explores the search space more
thoroughly than basic Hill Climbing.
o Better coverage: It avoids getting stuck in local optima by considering multiple
possible paths.
o Time complexity: The time complexity is proportional to the number of beams
and the branching factor.
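
A sketch of Local Beam Search keeping the k best states each round, on the same toy landscape used in the earlier examples:

```python
def local_beam_search(starts, objective, neighbors, k=3, iterations=100):
    """Keep the k best states; each round, pool all neighbors and keep the k best."""
    beams = list(starts)
    for _ in range(iterations):
        candidates = set(beams)
        for state in beams:
            candidates.update(neighbors(state))   # pool all neighbors together
        beams = sorted(candidates, key=objective, reverse=True)[:k]
    return max(beams, key=objective)

f = lambda x: -(x - 2) ** 2 if x < 5 else 9 - (x - 8) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(local_beam_search([0, 6, 12], f, neighbors))  # 8: one beam finds the global peak
```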
2.2.5 Optimization Problems and Challenges in Local Search

• Optimization Problems: Local search algorithms, especially Hill Climbing, can struggle with optimization problems where the goal is not easily reachable by small, incremental improvements. Such problems require larger changes or more sophisticated strategies to escape local optima.
• Challenges:
o Local optima: The algorithm may get stuck in a local optimum, which is not the
best solution globally.
o Plateaus: A plateau occurs when many neighboring states have the same
objective function value, making it difficult for the algorithm to determine the
next best move.
o Slow convergence: Algorithms like Simulated Annealing might take too long to
converge, especially with poor temperature schedules or very complex problems.

2.3 Adversarial Search

Adversarial search refers to the search in environments where there is a conflict between two or
more agents, each trying to maximize their own benefit. This is common in game-playing and
decision-making scenarios, where an agent's optimal strategy depends on anticipating the
opponent's actions.

2.3.1 Introduction to Adversarial Search

• Adversarial search problems involve two agents (often referred to as players) with
opposing goals, such as in games like chess, checkers, or tic-tac-toe.
• Each agent tries to minimize the other's chances of winning while maximizing their own.
• The key challenge is predicting the opponent's moves and making decisions based on this
anticipation.

2.3.2 Minimax Algorithm

• Description: The Minimax algorithm is a decision-making algorithm used in adversarial search to minimize the possible loss in a worst-case scenario. It is primarily used in two-player zero-sum games.
• How it works:
o The algorithm simulates the game tree, where each node represents a game state,
and the edges represent possible moves.
o The maximizing player (Player 1) aims to maximize the score, while the
minimizing player (Player 2) aims to minimize the score.
o The algorithm recursively evaluates each possible move to determine the best one.
• Characteristics:
o Optimal: If the game is played optimally, Minimax can find the best move for
both players.
o Time complexity: The time complexity is O(b^d), where b is the branching factor and d is the depth of the game tree.
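
A recursive sketch of Minimax over an abstract game; moves and evaluate are assumed callbacks that enumerate successor states and score terminal (or cutoff) states, and the two-level toy tree is illustrative:

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Return the minimax value of `state`, searching `depth` plies ahead."""
    children = moves(state)
    if depth == 0 or not children:       # cutoff or terminal state
        return evaluate(state)
    if maximizing:                       # Player 1 picks the largest child value
        return max(minimax(c, depth - 1, False, moves, evaluate) for c in children)
    else:                                # Player 2 picks the smallest child value
        return min(minimax(c, depth - 1, True, moves, evaluate) for c in children)

# Toy tree: internal states are tuples of children; leaves are utilities
tree = ((3, 5), (2, 9))
moves = lambda s: list(s) if isinstance(s, tuple) else []
evaluate = lambda s: s
print(minimax(tree, 2, True, moves, evaluate))  # 3: max of min(3,5) and min(2,9)
```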

2.3.3 Optimal Play and Strategy

• Optimal play refers to the strategy that guarantees the best outcome for a player,
assuming both players play optimally.
• In two-player games, optimal play ensures that the player does not make a move that
leads to a worse outcome.
• Strategy: Minimax is used to predict the moves of the opponent and select the best move,
considering the future consequences of each choice.

2.3.4 Applications in Game Theory

• Game Theory studies mathematical models of strategic interaction between rational agents. Adversarial search algorithms like Minimax are closely related to the concepts of game theory.
• Applications:
o Chess: Using Minimax, an AI can predict the opponent's moves and select the
best move to counter them.
o Tic-Tac-Toe: A simpler application where the Minimax algorithm guarantees a
draw or win if the player plays optimally.
o Economics and Auctions: In competitive environments, game theory and
adversarial search strategies can be applied to understand and predict competitive
behavior.

Summary of Key Concepts:

| Topic | Description |
|-------|-------------|
| Hill-Climbing | A local search algorithm that moves towards the best neighbor state. |
| Steepest Ascent Hill Climbing | A variant of Hill Climbing that selects the steepest (best) neighboring state. |
| Stochastic Hill Climbing | A variant that randomly selects neighbors, introducing randomness to escape local optima. |
| Simulated Annealing | A probabilistic algorithm that allows occasional moves to worse states to escape local minima. |
| Local Beam Search | Keeps track of multiple states, exploring neighbors in parallel. |
| Optimization Problems | Challenges in local search where large changes or sophisticated strategies are needed. |
| Minimax Algorithm | A decision-making algorithm used in adversarial search for two-player zero-sum games. |
| Optimal Play | A strategy ensuring the best possible outcome in a game, assuming optimal play by both players. |

2.4 Search for Games

Searching in games involves making decisions based on the current game state and predicting the
consequences of potential moves. In adversarial environments, where players have conflicting
goals, algorithms like Minimax and Alpha-Beta Pruning help in making optimal decisions.

2.4.1 Introduction to Game Searching

• Game Searching is the process of finding the best move in a game by evaluating the
possible outcomes of different moves.
• This is critical in two-player zero-sum games, where the goal is to maximize one's own
payoff while minimizing the opponent's payoff.
• Searching involves creating a game tree and exploring it using algorithms such as
Minimax, which evaluates all possible moves to choose the best one.
• Key Concepts:
o Game state: Represents the configuration of the game at a given time.
o Players: Two competing agents trying to maximize their respective goals
(typically, Player 1 aims to maximize, and Player 2 aims to minimize).
o Game Tree: A tree-like structure representing all possible states and moves in the
game.
2.4.2 Representation of Games

• Game Representation refers to how the game and its state are modeled for
computational search. The representation must capture the rules of the game, the set of
valid moves, and the win/loss conditions.
• Key components:
o Game State: A snapshot of the game, such as the position of pieces in chess or
the board in tic-tac-toe.
o Move: A valid action that can be performed by a player at any given state.
o Terminal State: A game state where the game ends, such as when a player wins
or the game is a draw.
o Utility Function: A function that assigns a numerical value to terminal states,
helping in evaluating the desirability of outcomes (e.g., +1 for a win, 0 for a draw,
and -1 for a loss).

2.4.3 Game Trees and Their Exploration

• Game Trees represent the possible sequences of moves in a game, with each node
representing a game state and each edge representing a move.
• Structure:
o The root of the tree represents the initial game state.
o Branches represent possible moves that lead to new game states.
o Leaf nodes represent terminal states where the game ends.
• Exploration:
o Minimax Algorithm: It explores the game tree recursively to determine the
optimal move. Each node is evaluated based on the assumption that both players
play optimally.
o Depth-Limited Search: Often, game trees are large, so depth-limited search is
used to limit the tree's size, making the search more manageable.

2.4.4 Alpha-Beta Pruning in Games

• Alpha-Beta Pruning is an optimization technique for the Minimax algorithm that eliminates the need to explore branches of the game tree that are guaranteed not to influence the final decision.
• Goal: To reduce the number of nodes evaluated in the search tree without affecting the
result.
• How it works:
o Alpha: The best value found so far for the maximizing player (Player 1).
o Beta: The best value found so far for the minimizing player (Player 2).
o If a node’s value falls outside the current (alpha, beta) window, its remaining children cannot affect the final decision, so further exploration of that branch is stopped (pruned).
• Benefits:
o Efficiency: Alpha-Beta Pruning reduces the time complexity of the Minimax
algorithm, allowing the search to explore deeper levels of the game tree in less
time.

2.5 Alpha-Beta Pruning

Alpha-Beta Pruning is an enhancement to the basic Minimax algorithm, improving its performance in adversarial search by pruning branches of the game tree that do not need to be explored.

2.5.1 Introduction to Alpha-Beta Pruning

• Alpha-Beta Pruning is a search algorithm that improves the efficiency of the Minimax
algorithm by eliminating the exploration of branches that do not need to be examined.
• It maintains two values: alpha (the best value found so far for Player 1) and beta (the
best value found so far for Player 2).
• When a node's value is no longer useful for making a decision, it is pruned, reducing the
size of the search tree.

2.5.2 How Alpha-Beta Pruning Works

• The algorithm works by keeping track of two values—alpha and beta—during the
Minimax evaluation.
o Alpha represents the best score that the maximizing player (Player 1) can
guarantee so far.
o Beta represents the best score that the minimizing player (Player 2) can guarantee
so far.
• Pruning occurs when:
o At a minimizing node, if the node's value drops to or below alpha (Player 1 already has a better option elsewhere), the remaining branches are pruned.
o At a maximizing node, if the node's value rises to or above beta (Player 2 already has a better option elsewhere), the remaining branches are pruned.
• Result: It reduces the number of nodes that need to be evaluated, thus speeding up the
search process.
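
A sketch of Minimax extended with alpha-beta cutoffs; on the same toy tree used for the Minimax example above, the second subtree's 9 leaf is never evaluated:

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Minimax value of `state`, skipping branches outside the (alpha, beta) window."""
    children = moves(state)
    if depth == 0 or not children:       # cutoff or terminal state
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for child in children:
            value = max(value, alphabeta(child, depth - 1,
                                         alpha, beta, False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # beta cutoff: MIN will never allow this line
        return value
    else:
        value = float('inf')
        for child in children:
            value = min(value, alphabeta(child, depth - 1,
                                         alpha, beta, True, moves, evaluate))
            beta = min(beta, value)
            if beta <= alpha:
                break                    # alpha cutoff: MAX already has a better option
        return value

tree = ((3, 5), (2, 9))                  # same toy tree as the Minimax example
moves = lambda s: list(s) if isinstance(s, tuple) else []
evaluate = lambda s: s
print(alphabeta(tree, 2, float('-inf'), float('inf'), True, moves, evaluate))  # 3
```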

2.5.3 Advantages of Alpha-Beta Pruning

• Efficiency: Alpha-Beta pruning reduces the number of nodes explored by the Minimax
algorithm, improving efficiency and enabling deeper searches in the same amount of
time.
• Optimality: Alpha-Beta pruning does not affect the correctness of the Minimax
algorithm; it still guarantees that the optimal move is chosen, as long as both players play
optimally.
• Time Complexity: While the basic Minimax algorithm has a time complexity of O(b^d), where b is the branching factor and d is the depth of the game tree, Alpha-Beta pruning can reduce this to O(b^(d/2)) with ideal move ordering, potentially doubling the searchable depth.

2.5.4 Optimizing Minimax with Alpha-Beta Pruning

• Minimax and Alpha-Beta Pruning work together to explore the game tree effectively.
• The Minimax algorithm evaluates the game tree by alternating between maximizing and
minimizing values, while Alpha-Beta pruning eliminates branches that are unnecessary.
• Optimal Search: By pruning large portions of the search tree, the algorithm can search
deeper into the tree, leading to better decisions in less time.
• Order of Node Exploration: The effectiveness of Alpha-Beta pruning depends heavily
on the order in which nodes are explored. If the best moves are explored first, more
pruning occurs, making the search faster.

2.5.5 Applications of Alpha-Beta Pruning in Game Search

• Chess: In chess, the game tree can be incredibly deep, but Alpha-Beta pruning enables
the AI to explore many more levels of the tree within a reasonable amount of time,
allowing the program to make more informed decisions.
• Checkers: Alpha-Beta pruning is used in checkers programs to evaluate possible moves
and prune unnecessary branches of the game tree, leading to optimal play.
• Other Board Games: Alpha-Beta pruning is used in various board games like Go, tic-
tac-toe, and Connect Four, where it helps in minimizing the computation required for
optimal play.

Summary of Key Concepts:

| Topic | Description |
|-------|-------------|
| Game Searching | The process of finding optimal moves in games using search algorithms like Minimax. |
| Game Representation | Modeling the game state, moves, and win/loss conditions to facilitate search. |
| Game Trees | A tree structure representing all possible states and moves in the game. |
| Alpha-Beta Pruning | An optimization technique for Minimax that reduces the number of nodes explored in a game tree. |
| Advantages of Alpha-Beta Pruning | Improves search efficiency and allows for deeper exploration of the game tree. |
| Applications | Used in games like Chess, Checkers, Go, and Tic-Tac-Toe to enhance search efficiency and decision-making. |
