07 Local Search
Local Search
Algorithms
Local search algorithms
● Local search is a type of optimization algorithm used in
artificial intelligence.
● Find solutions to problems where the goal is to optimize a
certain criterion, such as minimizing cost or maximizing
efficiency.
● Unlike systematic search algorithms that explore the entire
search space, local search algorithms start from an initial
solution and iteratively move to neighboring solutions that
improve the objective function until a satisfactory solution is
found or a termination condition is met.
Local search algorithms
Here are some key characteristics of local search algorithms:
Iterative Improvement: Local search algorithms iteratively improve upon an initial
solution by exploring neighboring solutions that are obtained by making small
changes or modifications to the current solution.
Objective Function: The quality of a solution is evaluated based on an objective function
that quantifies how well the solution satisfies the problem constraints or criteria. Local
search aims to find solutions that optimize this objective function.
Exploration of Neighbors: At each iteration, the algorithm selects a neighboring solution
and evaluates whether it improves upon the current solution. The choice of neighbors
and the method for exploring them depend on the problem domain and the specific
algorithm being used.
Termination Criteria: Local search algorithms continue iterating until a termination
criterion is met, such as reaching a specified number of iterations, finding a solution
that meets certain criteria, or exhausting a predefined computational budget.
Incomplete Search: Local search algorithms do not guarantee finding the global
optimum of the objective function. Instead, they focus on finding a satisfactory
solution within a reasonable amount of time, often sacrificing optimality for efficiency.
Local search algorithms
The basic working principle of a local search algorithm involves the following steps:
Initialization: Start with an initial solution, which can be generated randomly or
through some heuristic method.
Evaluation: Evaluate the quality of the initial solution using an objective function or
a fitness measure. This function quantifies how close the solution is to the desired
outcome.
Neighbor Generation: Generate a set of neighboring solutions by making minor
changes to the current solution. These changes are typically referred to as
"moves."
Selection: Choose one of the neighboring solutions based on a criterion, such as the
improvement in the objective function value. This step determines the direction in
which the search proceeds.
Termination: Continue the process iteratively, moving to the selected neighboring
solution, and repeating steps 2 to 4 until a termination condition is met. This
condition could be a maximum number of iterations, reaching a predefined
threshold, or finding a satisfactory solution.
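The five steps above can be sketched in a few lines of Python. The toy objective (maximize f(x) = -(x - 7)² over the integers), the ±1 moves, and the function names are illustrative assumptions, not part of the slides:

```python
def objective(x):
    return -(x - 7) ** 2          # Evaluation: higher is better (peak at x = 7)

def neighbors(x):
    return [x - 1, x + 1]         # Neighbor generation: two small "moves"

def local_search(start, max_iters=100):
    current = start               # Initialization
    for _ in range(max_iters):    # Termination: iteration budget
        best = max(neighbors(current), key=objective)   # Selection
        if objective(best) <= objective(current):
            break                 # No improving neighbor: stop early
        current = best            # Move to the selected neighbor
    return current

print(local_search(0))   # → 7
```

Starting from 0 (or 20), the loop moves one step toward 7 each iteration and stops once neither neighbor improves the objective.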
Hill Climbing
Algorithm
Hill climbing algorithm
● Initialization: Begin with an initial solution, often generated randomly or using a heuristic
method.
● Evaluation: Calculate the quality of the initial solution using an objective function or fitness
measure.
● Neighbor Generation: Generate neighboring solutions by making small changes (moves) to the
current solution.
● Selection: Choose the neighboring solution that results in the most significant improvement in the
objective function.
● Termination: Continue this process until a termination condition is met (e.g., reaching a maximum
number of iterations or finding a satisfactory solution).
Hill climbing has a limitation in that it can get stuck in local optima, which are solutions that are better than
their neighbors but not necessarily the best overall solution. To overcome this limitation, variations of hill
climbing algorithms have been developed, such as stochastic hill climbing and simulated annealing.
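The local-optimum limitation is easy to demonstrate on a small one-dimensional landscape (the heights array below is a made-up example, not from the slides): starting on the left, hill climbing stops at the lower peak and never sees the global maximum.

```python
heights = [1, 3, 5, 4, 2, 6, 9, 7]   # local max at index 2, global max at index 6

def hill_climb(i):
    while True:
        # Neighboring indices, kept inside the array bounds
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(heights)]
        best = max(nbrs, key=lambda j: heights[j])
        if heights[best] <= heights[i]:
            return i                 # No better neighbor: stop (possibly stuck)
        i = best

print(hill_climb(0))   # → 2 (stuck at the local maximum, height 5)
print(hill_climb(7))   # → 6 (reaches the global maximum, height 9)
```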
Hill climbing algorithm
Features of Hill Climbing:
● Hill Climbing is a simple and intuitive algorithm that is easy to understand and implement.
● It can be used in a wide variety of optimization problems, including those with a large search
space and complex constraints.
● Hill Climbing is often very efficient in finding local optima, making it a good choice for
problems where a good solution is needed quickly.
● The algorithm can be easily modified and extended to include additional heuristics or
constraints.
Hill climbing algorithm
Disadvantages of Hill Climbing algorithm:
● Hill Climbing can get stuck in local optima, meaning that it may not find the global optimum of
the problem.
● The algorithm is sensitive to the choice of initial solution, and a poor initial solution may result
in a poor final solution.
● Hill Climbing does not explore the search space very thoroughly, which can limit its ability to
find better solutions.
● It may be less effective than other optimization algorithms, such as genetic algorithms or
simulated annealing, for certain types of problems.
Hill climbing algorithm
Types of Hill Climbing Algorithm:
○ Simple Hill Climbing
○ Steepest-Ascent Hill Climbing
○ Stochastic Hill Climbing
Hill climbing algorithm
1. Simple Hill Climbing:
Simple hill climbing is the simplest way to implement a hill climbing algorithm. It
evaluates one neighboring state at a time and selects the first one that improves on the
current state, making it the new current state. Because it checks only one successor at a
time, it moves as soon as it finds a better state; otherwise it stays where it is. This
algorithm has the following features:
Less time-consuming
Produces a less optimal solution, and a solution is not guaranteed
Algorithm for Simple Hill Climbing:
Step 1: Evaluate the initial state, if it is goal state then return success and Stop.
Step 2: Loop Until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check new state: If it is goal state, then return success and quit.
Else if it is better than the current state then assign new state as a current state.
Else, if it is not better than the current state, return to Step 2.
Step 5: Exit.
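The steps above might be sketched as follows. The objective (maximize f over the integers) and the two operators are illustrative assumptions; "better" here means a higher objective value, and the goal is simply the state no operator can improve:

```python
def f(x):
    return -(x - 5) ** 2          # toy objective, peak at x = 5

def simple_hill_climbing(state):
    while True:
        moved = False
        for op in (lambda x: x + 1, lambda x: x - 1):   # operators, in order
            new = op(state)               # Step 3: apply an operator
            if f(new) > f(state):         # Step 4: better than current state?
                state = new               # ...then it becomes the current state
                moved = True
                break                     # accept the FIRST improvement found
        if not moved:                     # no operator left that improves
            return state                  # Step 5: exit
```

Note the `break`: unlike steepest ascent, the first improving successor is taken immediately, without evaluating the remaining operators.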
Hill climbing algorithm
2. Steepest-Ascent hill climbing:
The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. It
examines all the neighboring nodes of the current state and selects the one neighbor
that is closest to the goal state. It consumes more time because it evaluates multiple
neighbors before each move.
Algorithm for Steepest-Ascent hill climbing:
Step 1: Evaluate the initial state. If it is the goal state, return success and stop; otherwise,
make the initial state the current state.
Step 2: Loop until a solution is found or the current state does not change.
Let SUCC be a state such that any successor of the current state will be better than it
(i.e., initialize SUCC to a worst-case value).
For each operator that applies to the current state:
Apply the operator and generate a new state.
Evaluate the new state.
If it is the goal state, then return it and quit; otherwise, compare it to SUCC.
If it is better than SUCC, then set SUCC to the new state.
If SUCC is better than the current state, then set the current state to SUCC.
Step 3: Exit.
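The same toy problem can illustrate the steepest-ascent variant: all successors of the current state are evaluated, the best one plays the role of SUCC, and the move happens only if SUCC beats the current state. The objective and the four successor moves are assumptions for illustration:

```python
def f(x):
    return -(x - 5) ** 2          # toy objective, peak at x = 5

def steepest_ascent(state):
    while True:
        successors = [state + 1, state - 1, state + 2, state - 2]
        succ = max(successors, key=f)   # SUCC = best of ALL successors
        if f(succ) <= f(state):         # current state did not change
            return state
        state = succ                    # set current state to SUCC
```

Compared with simple hill climbing, every successor is generated and evaluated before a single move is made, which is why this variant is slower per iteration.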
Hill climbing algorithm
3. Stochastic hill climbing:
Stochastic hill climbing does not examine all of its neighbors before
moving. Instead, it selects one neighbor node at random and then
decides whether to move to it or to examine another, typically based
on the amount of improvement that neighbor offers.
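A minimal sketch of this idea, again on an assumed toy objective: one random neighbor is drawn per iteration and accepted only if it improves on the current state (a simple acceptance rule; other variants accept with a probability that depends on the improvement).

```python
import random

def f(x):
    return -(x - 5) ** 2          # toy objective, peak at x = 5

def stochastic_hill_climbing(state, max_tries=1000):
    for _ in range(max_tries):    # fixed budget instead of a "stuck" test
        neighbor = state + random.choice([-1, 1])   # ONE random neighbor
        if f(neighbor) > f(state):                  # accept only improvements
            state = neighbor
    return state

random.seed(0)
print(stochastic_hill_climbing(0))   # → 5 (optimum reached within the budget)
```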
Hill climbing algorithm
Problems in Hill Climbing Algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape
which is better than each of its neighboring states, but another state
exists elsewhere that is higher than the local maximum.
Solution: Backtrack to an earlier state and explore in a different direction.
Hill climbing algorithm
Problems in Hill Climbing Algorithm:
2. Plateau: A plateau is a flat region of the search space in which all
neighboring states have the same value, so there is no uphill move to make.
Solution: Take big steps or very small steps while searching. Randomly
select a state far away from the current state, so that the algorithm may
find a non-plateau region.
Hill climbing algorithm
Problems in Hill Climbing Algorithm:
3. Ridges: A ridge is a special form of local maximum. It is an area
that is higher than its surrounding areas but itself has a slope, so the
summit cannot be reached in a single move.
Local Beam Search
● Think of a maze with many paths leading to the exit (goal). A standard search
algorithm would try every single path until it finds the exit.
● Local Beam Search works differently. It starts at the beginning (start node) and
explores a limited number of the most promising paths (beams).
Local Beam Search
● At each step, the algorithm evaluates the current options (nodes) in each beam.
This evaluation might involve a score based on how close the node seems to be
to the goal.
● The algorithm keeps the best-scoring paths (beams) and discards the less
promising ones. This allows it to focus on areas with a higher chance of leading
to the exit.
Local Beam Search
Limited Exploration:
● Unlike a full exploration search, Local Beam Search doesn't consider
every single path. It keeps the exploration focused on a manageable
number of beams.
Local Beam Search
Benefits:
● Local Beam Search is faster than exploring every path because it
discards less promising options early on.
● It's a good choice when dealing with large search spaces (many
possible paths) where a full exploration might be impractical.
Local Beam Search
Limitations:
● By focusing on a limited number of beams, the algorithm might miss the
actual best path if it wasn't included in the chosen beams initially.
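The behavior described above can be sketched on a toy problem: maximize score(x) = -(x - 12)² over the integers, keeping only the k best states ("beams") at each step. The beam width, iteration count, and helper names are assumptions for illustration:

```python
def score(x):
    return -(x - 12) ** 2         # toy scoring function, best at x = 12

def local_beam_search(starts, k=3, iters=50):
    beams = list(starts)          # k current states ("beams")
    for _ in range(iters):
        # Expand every beam into its neighbors, then keep only the k best.
        candidates = set(beams)
        for b in beams:
            candidates.update((b - 1, b + 1))
        beams = sorted(candidates, key=score, reverse=True)[:k]
    return beams[0]               # best state found across all beams

print(local_beam_search([0, 20, 40]))   # → 12
```

Note how the three starting points compete: states far from the optimum (like 40) are discarded early as the beam concentrates on the most promising region — which is also exactly how the true best path can be lost if it starts from a poorly scoring state.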
Travelling Salesman Problem
Thank You!