Local Search and Optimization

The document discusses various local search and optimization algorithms, emphasizing their advantages, such as low memory usage and effectiveness in large state spaces. It covers techniques like hill climbing, simulated annealing, local beam search, and genetic algorithms, detailing their mechanisms and variations. Additionally, it addresses challenges like local maxima and the importance of randomness in escaping local optima.

Local Search and Optimization

• Classical search algorithms are designed to explore search spaces
systematically by keeping one or more paths in memory.
• In many problems, however, the path to the goal is irrelevant. E.g., in the
8-queens problem only the final configuration matters, not how it was reached.
• If the path to the goal does not matter, we can consider a different class
of algorithms: local search.
Local Search and Optimization
• Local search
1. Keep track of single current state
2. Move only to neighboring states
3. Ignore paths
• Advantages:
1. Use very little memory
2. Can often find reasonable solutions in large or infinite (continuous) state
spaces.
• “Pure optimization” problems
• Find the best state according to an objective function
• The goal is the state with the maximum (or minimum) objective value
• These do not quite fit the standard path-cost/goal-test formulation
• Local search can do quite well on such problems
Local Search and Optimization
• In local search, if elevation corresponds to cost, the aim is to
find the lowest valley, a global minimum; if elevation corresponds to
an objective function, the aim is to find the highest peak, a
global maximum.
• A complete local search algorithm always finds a goal if one exists.
• An optimal algorithm always finds a global minimum/maximum.
Hill-climbing search
• “A loop that continually moves in the direction of
increasing value”
• Terminates when a peak is reached
• A form of greedy local search
• The value can be either
• an objective function value (maximized), or
• a heuristic cost value (minimized)
• Hill climbing does not look ahead beyond
the immediate neighbors of the current state
• If several successors tie for the best value,
it can choose among them at random
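The greedy loop described above can be sketched in a few lines of Python. The 1-D toy objective below is an illustrative assumption, not from the slides; the single peak at x = 7 makes it easy to watch the climb.

```python
def hill_climb(start, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbor until no neighbor improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current          # peak reached (possibly only a local maximum)
        current = best

# Toy objective with a single peak at x = 7 on the integers 0..20.
value = lambda x: -(x - 7) ** 2
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 20]

print(hill_climb(0, neighbors, value))   # climbs from 0 up to 7
```

Because the loop only ever moves uphill, it halts at the first state with no better neighbor, which is exactly why the drawbacks on the next slide (local maxima, plateaus, ridges) arise.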
Hill Climbing: Drawbacks
• Local maxima: escape by backtracking
• Plateaus: take big steps to leave the flat region
• Diagonal ridges (i.e., a larger number of local maxima): use bidirectional search
❖ Stochastic hill climbing chooses at random from among the uphill moves; the probability of selection can
vary with the steepness of the uphill move. This usually converges more slowly than steepest ascent, but in
some state landscapes, it finds better solutions.
❖ First-choice hill climbing implements stochastic hill climbing by generating successors randomly until one is
generated that is better than the current state. This is a good strategy when a state has many (e.g., thousands)
of successors.
❖ Random-restart hill climbing does a series of hill-climbing searches from randomly generated initial
states, until a goal is found. It is trivially complete with probability approaching 1, because it will eventually
generate a goal state as the initial state. If each hill-climbing search has a probability p of success, then the
expected number of restarts required is 1/p.
❖ The success of hill climbing depends very much on the shape of the state-space landscape:
if there are few local maxima and plateaux, random-restart hill climbing will find a good
solution very quickly.
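The restart idea can be sketched as a thin wrapper around basic hill climbing. The two-peak landscape `f` below is an illustrative assumption: plain hill climbing started on the left slope gets stuck at the local maximum x = 2, while restarting from enough random states finds the global maximum at x = 10.

```python
import random

def hill_climb(start, neighbors, value):
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current
        current = best

def random_restart(neighbors, value, states, restarts=20, seed=0):
    """Run hill climbing from several random initial states and
    keep the best local maximum found."""
    rng = random.Random(seed)
    return max((hill_climb(rng.choice(states), neighbors, value)
                for _ in range(restarts)), key=value)

# Landscape with a local maximum at x = 2 and the global maximum at x = 10.
f = [0, 3, 5, 3, 1, 0, 2, 4, 6, 8, 9, 8, 7]
value = lambda x: f[x]
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n < len(f)]

print(random_restart(neighbors, value, range(len(f))))  # finds x = 10
```

With 20 restarts over 13 states, the chance that every restart lands on the left slope is negligible, matching the 1/p expected-restarts argument above.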
Variants of hill climbing
• Stochastic hill climbing: chooses at random from among the uphill
moves, with a selection probability that can vary with the steepness of
the move.
• First-choice hill climbing: rather than generating all neighbors,
the algorithm generates random successors one at a time and moves to
the first one that improves on the current state. If no improving
neighbor is found, the algorithm restarts from a random state.
• Random-restart hill climbing: conducts a series of hill-climbing
searches from randomly generated initial states, until a goal is found.
Simulated annealing
• Basic ideas:
• Hill climbing never makes downhill moves, so it is efficient but can
get stuck in a local maximum.
• A pure random walk, in contrast, is complete but extremely
inefficient.
• Simulated annealing combines hill climbing with a random walk in a way
that yields both efficiency and completeness.
• In the annealing process, a metal is heated to a high temperature and then
gradually cooled, allowing its atoms to settle into a more ordered and stable
state.
• Simulated annealing adapts this concept to the task of finding the global
minimum (or maximum) of a cost function in a search space.
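A minimal sketch of the algorithm, under assumed parameter choices (geometric cooling, a fixed step budget): uphill moves are always accepted, and a downhill move of size delta is accepted with probability exp(delta / T), which shrinks as the temperature T falls. The two-peak landscape is the same illustrative toy as before.

```python
import math, random

def simulated_annealing(start, neighbors, value, t0=10.0, cooling=0.95,
                        steps=2000, seed=1):
    """Accept every uphill move; accept a downhill move with
    probability exp(delta / T); cool T geometrically."""
    rng = random.Random(seed)
    current, t = start, t0
    for _ in range(steps):
        nxt = rng.choice(neighbors(current))
        delta = value(nxt) - value(current)
        if delta > 0 or rng.random() < math.exp(delta / t):
            current = nxt
        t = max(t * cooling, 1e-6)   # never let T reach exactly 0
    return current

# Landscape with a local maximum at x = 2 and the global maximum at x = 10.
f = [0, 3, 5, 3, 1, 0, 2, 4, 6, 8, 9, 8, 7]
value = lambda x: f[x]
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n < len(f)]

print(simulated_annealing(0, neighbors, value))  # ends at a peak (x = 2 or x = 10)
```

Early on, the high temperature lets the search wander across the valley between the peaks; as T drops, the acceptance test hardens into pure hill climbing and the state freezes at whichever peak it is near.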
Local beam search
• Keeping only one node in memory is an extreme reaction to memory
problems.
• Keep track of k states instead of one
• Initially: k randomly selected states
• Next: generate all successors of the k states
• If any successor is a goal → finished
• Else select the k best successors from the complete list and repeat
• Stochastic beam search : introduces randomness in the selection of
the next set of solutions, which can be beneficial for escaping local
optima.
• It chooses k successors at random, with the probability of choosing a
given successor being an increasing function of its value.
• By allowing suboptimal solutions to be considered, stochastic beam
search increases the chances of finding a global optimum.
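The basic (non-stochastic) variant described first can be sketched as follows; the landscape is the same illustrative two-peak toy, and the fixed iteration count is an assumption for the sketch.

```python
import random

def local_beam_search(k, neighbors, value, states, iters=50, seed=2):
    """Keep the k best states; expand all of their successors and
    retain the k best of the combined pool."""
    rng = random.Random(seed)
    beam = rng.sample(list(states), k)        # k random initial states
    for _ in range(iters):
        pool = {s for state in beam for s in neighbors(state)} | set(beam)
        beam = sorted(pool, key=value, reverse=True)[:k]
    return beam[0]                            # best state found

# Landscape with a local maximum at x = 2 and the global maximum at x = 10.
f = [0, 3, 5, 3, 1, 0, 2, 4, 6, 8, 9, 8, 7]
value = lambda x: f[x]
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n < len(f)]

print(local_beam_search(3, neighbors, value, range(len(f))))
```

Note how the k states share information: if one of them sits near the global peak, its successors soon crowd out the states stuck near the local one. The stochastic variant would replace the `sorted(...)[:k]` line with a weighted random draw over the pool.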
Genetic algorithms
• Successor states are generated by combining two parent states rather
than by modifying a single state.
• Population :set of k randomly generated states
• Fitness function: Evaluates the quality of each
solution.
• Selection: Individuals are selected from the current
population based on their fitness.
• Crossover point is chosen randomly from the positions in the string
• Mutation: Introduces random changes to individual
solutions.
• The theory of genetic algorithms explains how this works using the
idea of a schema, which is a substring in which some of the positions
can be left unspecified.
• For example, the schema 246***** describes all 8-queens states in
which the first three queens are in positions 2, 4, and 6, respectively.
Strings that match the schema (such as 24613578) are called
instances of the schema.
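The components listed above (population, fitness, selection, crossover point, mutation) can be sketched on a toy problem. The problem chosen here, maximizing the number of 1 bits in a string ("OneMax"), and all parameter values are assumptions for illustration, not from the slides.

```python
import random

def genetic_algorithm(fitness, length=12, pop_size=20, generations=60,
                      mutation_rate=0.05, seed=3):
    """Toy GA: fitness-proportional selection, single-point
    crossover, and per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(ind) + 1 for ind in pop]      # +1 avoids zero weights
        nxt = []
        for _ in range(pop_size):
            p1, p2 = rng.choices(pop, weights=weights, k=2)  # selection
            cut = rng.randrange(1, length)                   # crossover point
            child = p1[:cut] + p2[cut:]                      # crossover
            child = [b ^ 1 if rng.random() < mutation_rate else b
                     for b in child]                         # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

onemax = sum                     # fitness = number of 1 bits
best = genetic_algorithm(onemax)
print(onemax(best))              # close to the maximum of 12
```

In schema terms, high-fitness substrings such as `111***...` accumulate in the population because instances of good schemata are selected, recombined, and preserved more often than chance.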
Local search in continuous spaces
SEARCHING WITH NON-
DETERMINISTIC ACTIONS
• A solution for an AND–OR search problem is a subtree that
• (1) has a goal node at every leaf,
• (2) specifies one action at each of its OR nodes, and
• (3) includes every outcome branch at each of its AND nodes.
• A recursive, depth-first algorithm is used for AND–OR graph search.
• It must check the current path for repeated states, because cycles
arise naturally in nondeterministic problems.
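The three solution conditions and the recursive scheme can be sketched as a pair of mutually recursive functions (after the AND–OR-GRAPH-SEARCH idea): OR nodes pick one action, AND nodes must cover every outcome, and `path` cuts cycles. The tiny nondeterministic world below is a hypothetical example.

```python
def and_or_search(problem):
    """Return a conditional plan ({} means 'at goal') or None on failure."""
    def or_search(state, path):
        if problem["goal"](state):
            return {}                       # leaf: goal reached, condition (1)
        if state in path:
            return None                     # cycle on this branch: fail
        for action in problem["actions"](state):   # pick ONE action: condition (2)
            plan = and_search(problem["results"](state, action), path + [state])
            if plan is not None:
                return {"action": action, "then": plan}
        return None

    def and_search(states, path):
        plans = {}
        for s in states:                    # EVERY outcome needs a subplan: condition (3)
            plan = or_search(s, path)
            if plan is None:
                return None
            plans[s] = plan
        return plans

    return or_search(problem["initial"], [])

# Hypothetical world: "right" from state 1 lands nondeterministically in
# state 2 or state 3; from state 2 it reliably reaches the goal state 3.
world = {
    "initial": 1,
    "goal": lambda s: s == 3,
    "actions": lambda s: ["right"],
    "results": lambda s, a: {1: {2, 3}, 2: {3}}.get(s, set()),
}
print(and_or_search(world))
```

The returned plan is a tree: one chosen action per OR node, and a dictionary mapping each possible outcome state to its own subplan at each AND node.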
Search in Partially Observable
Environment
Thank you!!!