UNIT 3: AI
SEARCH IN COMPLEX ENVIRONMENTS
Local search algorithms are a class of algorithms used to find optimal solutions to optimization problems
by iteratively improving a candidate solution. They are particularly useful when the search space is large
and complex. Here’s a brief overview of how they work and some common types:
1. Optimization Problems: These involve finding the best solution from a set of feasible solutions. The
objective could be to maximize or minimize a particular function.
2. Local Search: This approach starts with an initial solution and iteratively makes small changes to find
a better solution. It’s called “local” because the search is confined to a local neighborhood around the
current solution.
Common Types of Local Search Algorithms
1. Hill Climbing:
Description: Starts with an arbitrary solution and iteratively moves to a neighboring solution if it
improves the objective function.
Types:
Simple Hill Climbing: Moves to the first neighboring solution that improves the objective function.
Steepest-Ascent Hill Climbing: Evaluates all neighbors and moves to the best one.
Stochastic Hill Climbing: Chooses randomly among the improving neighbors.
2. Simulated Annealing:
Description: Inspired by the annealing process in metallurgy. It probabilistically accepts worse solutions
with a decreasing probability over time to escape local optima.
Features: Uses a temperature parameter that controls the likelihood of accepting worse solutions. As
the temperature decreases, the algorithm becomes more focused on local search.
3. Tabu Search:
Description: Enhances local search by using memory structures (tabu lists) to keep track of previously
visited solutions or attributes, preventing the algorithm from cycling back to them.
Features: Uses short-term memory (tabu list) and long-term memory to diversify the search.
4. Genetic Algorithms:
Description: Uses principles from genetics (such as selection, crossover, and mutation) to evolve a
population of solutions over generations.
Features: Can explore a larger search space by combining solutions and introducing variations.
5. Ant Colony Optimization:
Description: Inspired by the foraging behavior of ants. Uses a population of artificial ants that build
solutions and communicate via pheromones.
Features: Utilizes pheromone trails to guide the search towards promising regions of the search space.
6. Particle Swarm Optimization:
Description: Inspired by the social behavior of birds and fish. It uses a population of candidate solutions
(particles) that move around the search space, adjusting their positions based on their own experience and
the experience of their neighbors.
Features: Each particle updates its position based on its own best-known position and the best-known
position of its neighbors.
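The particle position update just described can be sketched in Python. This is a minimal, illustrative implementation of particle swarm optimization, not taken from these notes: the objective \( f(x) = x^2 \) and the parameter values (inertia weight and acceleration coefficients) are arbitrary assumptions.

```python
import random

def pso(f, n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a 1-D search space with particle swarm optimization."""
    # Initialize particle positions and velocities randomly.
    pos = [random.uniform(-10, 10) for _ in range(n_particles)]
    vel = [random.uniform(-1, 1) for _ in range(n_particles)]
    pbest = pos[:]                 # each particle's own best-known position
    gbest = min(pbest, key=f)      # best-known position of the whole swarm
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # Velocity update: inertia plus pulls toward the personal
            # and global best-known positions.
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso(lambda x: x * x)
print(best)  # close to 0, the minimizer of x^2
```

Here every particle uses the whole swarm as its neighborhood (a "global best" topology); ring or other local topologies only change how `gbest` is computed.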
Applications
Scheduling: Finding optimal or near-optimal schedules for tasks, such as job-shop scheduling.
Challenges
1. Local Optima: Many local search algorithms can get stuck in local optima, which are not necessarily
the best possible solutions.
2. Parameter Tuning: Many algorithms have parameters that need to be carefully tuned, such as
temperature in simulated annealing or population size in genetic algorithms.
3. Computational Resources: Some algorithms, especially those like genetic algorithms, can be
computationally intensive.
Local search algorithms are versatile and can be adapted to a wide range of problems, making them
valuable tools in optimization.
Hill-Climbing Search:
Hill-climbing search is a fundamental local search algorithm used in optimization problems and
artificial intelligence. It is designed to find a solution by iteratively improving an initial
candidate solution based on the evaluation of neighboring solutions. Here's a detailed look at
how hill-climbing works, its variants, and its strengths and limitations.
Variants of Hill-Climbing
1. Simple Hill-Climbing:
i. Description: Examines neighbors one at a time and moves to the first one that
improves the objective function.
ii. Characteristics: Fast per step, but the result depends on the order in which
neighbors are examined.
2. Steepest-Ascent Hill-Climbing:
i. Description: Evaluates all neighbors and selects the best one. Moves to the
neighbor with the highest improvement in the objective function.
ii. Characteristics: More thorough than simple hill-climbing but requires evaluating
more neighbors.
3. Stochastic Hill-Climbing:
i. Description: Chooses randomly among the neighbors that improve the objective
function, sometimes weighting the choice by the amount of improvement.
ii. Characteristics: The added randomness can help the search avoid some shallow
local optima.
Strengths
1. Simplicity: Easy to implement and understand.
2. Low Memory Use: Only the current solution needs to be stored.
3. Speed: Often reaches a good solution quickly when the search space is smooth.
Limitations
1. Local Optima: Hill-climbing can get stuck in local optima, which are suboptimal
solutions compared to the global optimum.
2. Plateaus: If the search space contains flat regions (plateaus) where neighboring solutions
have the same value, the algorithm might struggle to make progress.
3. Overfitting: In complex problems, the algorithm might overfit to the local features of the
search space rather than finding a broadly optimal solution.
Applications
Hill-climbing is used in scheduling, route planning (e.g., the traveling salesman problem), and
tuning parameters in machine learning models.
Example
Consider the problem of finding the highest point on a mountainous terrain. You start at a
random point and evaluate the height of the terrain. If you find a neighboring point that is higher,
you move there. This process continues until you can no longer find a higher neighboring point.
The final position may be the highest peak you can reach from your starting point, but it might
not be the highest peak in the entire terrain.
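The terrain example can be sketched in Python. This is a minimal steepest-ascent illustration, not from the notes: the two-peak "terrain" function and the fixed step size are arbitrary assumptions, chosen so the search started on the left climbs only the local peak.

```python
def hill_climb(height, start, step=0.1, max_iters=1000):
    """Steepest-ascent hill-climbing on a 1-D 'terrain'."""
    current = start
    for _ in range(max_iters):
        # Neighbors: a small step left or right of the current position.
        neighbors = [current - step, current + step]
        best = max(neighbors, key=height)
        if height(best) <= height(current):
            break  # no neighbor is higher: a peak (possibly only local)
        current = best
    return current

# A terrain with a local peak near x = -1 and the global peak near x = 2.
terrain = lambda x: -(x + 1) ** 2 + 1 if x < 0.5 else -(x - 2) ** 2 + 3
peak = hill_climb(terrain, start=-3.0)
print(peak)  # stops at the local peak near -1, not the global peak near 2
```

Starting instead at `start=1.0` (on the right slope) reaches the global peak near 2, which is exactly the starting-point sensitivity the example describes.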
Hill-climbing search is a foundational concept in optimization and AI, often used as a building
block for more sophisticated algorithms and strategies.
Simulated Annealing:
1. Introduction
Simulated Annealing (SA) is a probabilistic optimization algorithm inspired by the annealing process in
metallurgy. The goal is to find an approximate solution to an optimization problem by mimicking the
physical process of heating and then slowly cooling a material to remove defects.
2. Basic Concepts
Optimization Problem: The problem where you seek to find the best solution from a set of possible
solutions.
Neighborhood: The set of solutions that can be reached from a given solution by a small change.
3. Algorithm Steps
1. Initialization:
Start with an initial solution \( s \), compute its objective value \( E(s) \), and set an initial
temperature \( T_0 \).
2. Iterative Process:
Generate Neighbor: Generate a neighboring solution \( s' \) from the current solution \( s \).
Evaluate: Calculate the change in objective function \( \Delta E = E(s') - E(s) \).
Acceptance Criteria:
If \( \Delta E < 0 \): Always accept the new solution (better solution).
If \( \Delta E \geq 0 \): Accept the new solution with a probability \( P(\Delta E, T) = \exp(-\Delta E /
T) \).
3. Cooling Schedule:
Gradually reduce the temperature \( T \) as the search proceeds.
Common schedules include exponential decay: \( T_{new} = \alpha T_{old} \) where \( 0 < \alpha <
1 \).
4. Termination:
Stop after a certain number of iterations or when the temperature is sufficiently low.
4. Cooling Schedules
Exponential Decay: \( T_{k+1} = \alpha T_k \) where \( \alpha \) is the cooling rate (0 < α < 1).
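The algorithm steps, with the exponential cooling schedule, can be sketched in Python. This is a minimal illustration, not from the notes: the objective \( (x-3)^2 \), the neighbor move, and the parameter values are arbitrary assumptions.

```python
import math
import random

def simulated_annealing(energy, start, T0=10.0, alpha=0.95, n_iters=2000):
    """Minimize `energy` with simulated annealing."""
    s = start
    T = T0
    best = s
    for _ in range(n_iters):
        s_new = s + random.uniform(-1, 1)    # generate a neighbor s'
        dE = energy(s_new) - energy(s)       # ΔE = E(s') - E(s)
        # Accept if better; if worse, accept with probability exp(-ΔE / T).
        if dE < 0 or random.random() < math.exp(-dE / T):
            s = s_new
        if energy(s) < energy(best):
            best = s                         # remember the best solution seen
        T *= alpha                           # exponential cooling: T ← αT
        if T < 1e-6:
            break                            # terminate when T is sufficiently low
    return best

random.seed(1)
result = simulated_annealing(lambda x: (x - 3) ** 2, start=0.0)
print(result)  # close to 3, the minimizer
```

Note that at high temperature the acceptance probability is near 1 (broad exploration), while as `T` shrinks the loop degenerates into greedy local search, exactly as the temperature discussion above describes.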
5. Parameters:
Initial Temperature (T₀): Should be high enough to explore the state space.
Cooling Rate (α): Controls how quickly the temperature decreases; values close to 1 cool slowly.
6. Advantages and Disadvantages
Advantages:
Simple to implement.
Can escape local optima that trap plain hill-climbing.
Disadvantages:
Convergence can be slow, and results are sensitive to the cooling schedule and other parameter
choices.
7. Applications
Classic applications include the traveling salesman problem, VLSI circuit placement, and job
scheduling.
8. Practical Tips
Multiple Runs: Running the algorithm multiple times with different initial solutions can improve results.
Local Beam Search:
Local Beam Search is a heuristic search algorithm used for solving optimization problems and finding
approximate solutions. It is an extension of the local search algorithms designed to improve the search
process by maintaining multiple states at once.
1. Introduction
Local Beam Search combines elements of both local search and beam search. Instead of maintaining a
single current state, it keeps track of a fixed number of states, allowing for more exploration of the state
space.
2. Basic Concepts
Objective Function: The function used to evaluate the quality of each state.
Neighborhood: The set of states that can be reached from a given state by applying a small change.
Beam Width (k): The number of states retained at each step of the search process.
3. Algorithm Steps
1. Initialization:
Begin with \( k \) randomly generated states.
2. Iteration:
Generate Neighbors: For each of the \( k \) states, generate all possible neighboring states.
Evaluate Neighbors: Calculate the objective function for each of these neighboring states.
Select Top States: From the set of all neighbors, select the top \( k \) states with the best evaluations.
Update: Set these top \( k \) states as the current states for the next iteration.
3. Termination:
Stop when a satisfactory state is found, when no neighbor improves on the current states, or after a
maximum number of iterations.
4. Example
Suppose we are using Local Beam Search to solve a problem with an objective function \( E \). The steps
might look like this:
1. Initialization:
Start with \( k \) randomly chosen states and evaluate \( E \) for each.
2. Iteration:
Choose the \( k \) neighbors with the highest evaluations to form the new set of states.
3. Termination:
End when a maximum number of iterations is reached or the improvement is below a threshold.
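The steps above can be sketched in Python. This is a minimal illustration assuming a discrete neighborhood; the toy objective \( E(x) = -(x-7)^2 \) over integers and the ±1 neighbor move are arbitrary assumptions, not from the notes.

```python
import random

def local_beam_search(score, initial_states, neighbors, k=3, n_iters=50):
    """Keep the k best states at each step (maximization)."""
    states = sorted(initial_states, key=score, reverse=True)[:k]
    for _ in range(n_iters):
        # Generate all neighbors of all current states (keep current ones too).
        candidates = set(states)
        for s in states:
            candidates.update(neighbors(s))
        # Select the top k states by the objective function.
        new_states = sorted(candidates, key=score, reverse=True)[:k]
        if new_states == states:
            break  # no improvement: stop
        states = new_states
    return states[0]

# Toy problem: maximize E(x) = -(x - 7)^2 over integers, stepping by ±1.
E = lambda x: -(x - 7) ** 2
nbrs = lambda x: [x - 1, x + 1]
start = [random.randint(0, 20) for _ in range(3)]
best = local_beam_search(E, start, nbrs, k=3)
print(best)  # reaches 7, the maximizer
```

Unlike running `k` independent hill-climbs, the top-`k` selection pools all neighbors, so states in unpromising regions are dropped in favor of extra states near the best candidates.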
5. Advantages:
Broader Exploration: Maintains multiple states, which helps in exploring diverse parts of the state
space.
Faster Computation: Evaluating several states per step can reach good solutions in fewer iterations.
6. Disadvantages:
Heuristic Dependency: The effectiveness heavily depends on the choice of the heuristic and beam
width.
Loss of Diversity: The \( k \) states can quickly become concentrated in a small region of the state
space.
7. Applications
Artificial Intelligence: Used in various AI problems where large search spaces are involved.
Optimization: Can be applied in scheduling, routing, and other combinatorial optimization problems.
8. Practical Tips
Choosing Beam Width: Experiment with different values of \( k \) to find a good balance between
performance and resource usage.
Evaluation Function: Ensure the objective function effectively captures the quality of the solutions.
Diverse Initial States: Starting with diverse initial states can help in exploring different regions of the
state space.
Evolutionary algorithms
Key Concepts
1. Population: A set of candidate solutions that is maintained and evolved over
generations.
2. Fitness Function: A function that evaluates how good a solution is with respect
to the problem.
3. Selection: The process of choosing better solutions from the current population to
form the next generation.
4. Crossover: Combining parts of two parent solutions to produce offspring.
5. Mutation: Randomly altering part of a solution to maintain diversity in the
population.
Types of Evolutionary Algorithms
1. Genetic Algorithms (GAs): One of the most well-known types, using crossover
and mutation to evolve solutions over generations. Typically, solutions are encoded
as strings (chromosomes) and evolve through genetic operators.
2. Evolution Strategies (ES): Focuses more on the mutation aspect and often uses
real-valued representations. They emphasize self-adaptation of mutation parameters.
Applications
Optimization Problems: Finding the best solution among a large set of possible
solutions.
Advantages
Flexibility: Can handle a wide range of problem types, including those with
complex, multimodal, or poorly understood landscapes.
Robustness: Often perform well even when the problem space is noisy or has many
local optima.
Parallelism: Many EAs can be easily parallelized, making them suitable for modern
computing architectures.
Disadvantages
No Guarantee of Optimality: EAs typically find good but not necessarily optimal
solutions.
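The genetic-algorithm concepts above (population, fitness, selection, crossover, mutation) can be sketched in Python. This is a minimal illustration with a bit-string chromosome encoding; the "OneMax" fitness (count of 1-bits), tournament selection, and all parameter values are arbitrary assumptions, not from the notes.

```python
import random

def genetic_algorithm(fitness, n_bits=10, pop_size=20, n_gens=50, p_mut=0.05):
    # Population: random bit-string chromosomes.
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(n_gens):
        # Selection: tournament of size 2 (the fitter of two random parents).
        def select():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            # Crossover: single cut point combining the two parents.
            cut = random.randint(1, n_bits - 1)
            child = p1[:cut] + p2[cut:]
            # Mutation: flip each bit with probability p_mut.
            child = [bit ^ 1 if random.random() < p_mut else bit
                     for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy "OneMax" problem: fitness is simply the number of 1-bits.
random.seed(0)
best = genetic_algorithm(sum)
print(best, sum(best))  # typically all (or nearly all) ones
```

Swapping in a different `fitness` function and chromosome decoding is all it takes to apply the same loop to other optimization problems, which is the flexibility the section emphasizes.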