2025 Lecture02 P4 LocalSearch
LOCAL SEARCH AND OPTIMIZATION PROBLEMS
Paths are not always required
• In some applications, only the final state matters, not
the path taken to reach it.
• Integrated-circuit design, factory floor layout, job shop scheduling, etc.
Local search algorithms
• These algorithms move from the current state to neighboring
states, without tracking the paths or the set of reached states.
• Not systematic
• They might never explore a portion of the search space where a
solution resides.
Local search algorithms
• Local search can also solve optimization problems, in which the
aim is to find the best state according to an objective function.
A 1D state-space landscape in which elevation corresponds to the objective function.
Hill climbing search
• The algorithm heads in the direction that gives the steepest
ascent and terminates when it reaches a “peak”.
• Peak = a state where no neighbor has a higher value.
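Steepest-ascent hill climbing can be sketched in a few lines; the helper names `neighbors` and `value` and the toy landscape are illustrative, not from the slides:

```python
def hill_climb(initial, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbor; stop at a peak (no neighbor has a higher value)."""
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current  # a peak, though possibly only a local maximum
        current = best

# Toy 1D landscape on the integers 0..10 with a single peak at x = 5.
value = lambda x: -(x - 5) ** 2
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 10]
print(hill_climb(0, neighbors, value))  # → 5
```

Note that the termination test compares against the *current* state, so the search stops as soon as no strict improvement exists.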
Hill climbing: Suboptimality
• Hill climbing is also called greedy local search.
• It grabs a good neighbor state without thinking where to go next.
• It can easily improve a bad state and hence make rapid
progress toward a solution.
• 8-queens (about 17 million states): 4 steps on average when it
succeeds, 3 when it gets stuck.
• Hill climbing can get stuck in local maxima, ridges, or plateaus
Local extrema and ridges
• Current state (1 6 2 5 7 4 8 3) has ℎ(𝑛) = 1
• Every successor has a higher cost
→ local minimum
Image credit: ResearchGate
Overcome the suboptimality
• A sideways move lets the agent keep going on a plateau.
• It does not help on a flat local maximum.
• The number of consecutive sideways moves should be
limited to avoid wandering indefinitely.
• This approach raises the percentage of problem instances
solved by hill climbing.
• E.g., for 8-queens problem: from 14% to 94%.
• Success comes at a cost: roughly 21 steps for each successful
instance and 64 for each failure.
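Allowing a bounded number of consecutive sideways moves changes the termination logic only slightly; a sketch under the same kind of toy-landscape assumptions (the plateau and the bound of 100 are illustrative):

```python
import random

def hill_climb_sideways(initial, neighbors, value, max_sideways=100):
    """Hill climbing that permits a bounded run of consecutive
    sideways (equal-value) moves, so the search can cross plateaus."""
    current, sideways = initial, 0
    while True:
        nbrs = neighbors(current)
        best_val = max(value(n) for n in nbrs)
        if best_val < value(current):
            return current                 # strict local maximum
        if best_val == value(current):
            sideways += 1
            if sideways > max_sideways:
                return current             # stop the wandering
        else:
            sideways = 0                   # genuine uphill move
        # break ties at random among the best-valued neighbors
        current = random.choice([n for n in nbrs if value(n) == best_val])

# Plateau of value 3 on x = 3..6, then an ascent to the peak at x = 10.
value = lambda x: min(x, 3) if x <= 6 else x
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 10]
print(hill_climb_sideways(0, neighbors, value))  # crosses the plateau to 10
```

Plain steepest ascent would stop at x = 3 on this landscape, since no neighbor there is strictly better.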
Overcome the suboptimality
• Stochastic hill climbing chooses at random from among the
uphill moves.
• The probability of selection can vary with the steepness of the move.
• Slower convergence, yet better solutions in some state landscapes
• First-choice hill climbing generates successors randomly
until obtaining one that is better than the current state.
• Suitable when a state has many (e.g., thousands of) successors
• Random-restart hill climbing conducts several hill-climbing
searches from random initial states, until a goal is found.
• If each search has a probability 𝑝 of success, the expected number
of restarts required is 1/𝑝.
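Random restarts make the search trivially complete; the sketch below (a toy two-peak landscape with illustrative names) also returns how many restarts were used, which averages about 1/𝑝:

```python
import random

def hill_climb(x, neighbors, value):
    """Plain steepest-ascent hill climbing (stops at any peak)."""
    while True:
        best = max(neighbors(x), key=value, default=x)
        if value(best) <= value(x):
            return x
        x = best

def random_restart(neighbors, value, random_state, is_goal, max_restarts=10_000):
    """Restart hill climbing from random states until a goal is found.
    If each run succeeds with probability p, about 1/p restarts are
    needed in expectation."""
    for restarts in range(1, max_restarts + 1):
        result = hill_climb(random_state(), neighbors, value)
        if is_goal(result):
            return result, restarts
    raise RuntimeError("no goal found")

# Two peaks: a local maximum at x = 2 (value 5), the global one at x = 8 (value 9).
value = lambda x: 5 - abs(x - 2) if x <= 5 else 9 - abs(x - 8)
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 10]
state, restarts = random_restart(neighbors, value,
                                 lambda: random.randrange(11),
                                 lambda x: value(x) == 9)
print(state)  # → 8
```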
Quiz 01: 4-queens problem
• Consider the following 4-queens problem
Simulated annealing
• Simulated annealing combines hill climbing with a random walk
in a way that yields both efficiency and completeness.
Image credit: Mechanger.Wordpress
Simulated annealing
• The schedule is a mapping function from time to “temperature”.
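A compact sketch of simulated annealing with an exponential cooling schedule; the landscape, schedule constants, and step limit are illustrative assumptions:

```python
import math
import random

def simulated_annealing(initial, neighbors, value, schedule):
    """Pick a random successor; always accept it if it is uphill,
    otherwise accept with probability exp(delta / T). As the
    temperature T falls, bad moves become increasingly unlikely."""
    current = initial
    for t in range(1, 100_000):
        T = schedule(t)
        if T < 1e-9:                       # effectively frozen: stop
            return current
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
    return current

# Toy two-peak landscape: local max at x = 2, global max at x = 8.
value = lambda x: 5 - abs(x - 2) if x <= 5 else 9 - abs(x - 8)
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 10]
schedule = lambda t: 10 * 0.99 ** t        # exponential cooling, T0 = 10
result = simulated_annealing(0, neighbors, value, schedule)
```

With a high early temperature the walk can escape the local maximum at x = 2; as T → 0 the behavior approaches pure hill climbing.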
Simulated annealing: TSP
Image credit: Wikipedia
Local beam search
• Keeping just one node in memory might seem to be an
extreme reaction to the problem of memory limitations.
• Local beam search keeps track of 𝑘 states rather than just one:
at each step it generates all successors of all 𝑘 states and
retains the 𝑘 best.
Local beam search
• Useful information is passed among the parallel search
threads → major difference from random-restart search
• The algorithm possibly suffers from a lack of diversity among
the 𝑘 states.
• The states can become clustered in a small region of the state space
→ an expensive version of hill climbing.
• Stochastic beam search can alleviate this problem by choosing
the 𝑘 successors at random, with probability proportional to their values.
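A sketch of stochastic beam search on the same kind of toy landscape; the beam width, step count, and the weight shift are illustrative choices:

```python
import random

def stochastic_beam_search(k, random_state, neighbors, value, steps=50):
    """Maintain k states; pool every successor of every state, then
    sample the next k states with probability proportional to their
    (shifted-to-positive) objective values."""
    beam = [random_state() for _ in range(k)]
    for _ in range(steps):
        pool = [s for state in beam for s in neighbors(state)]
        lo = min(value(s) for s in pool)
        weights = [value(s) - lo + 1e-6 for s in pool]  # keep weights > 0
        beam = random.choices(pool, weights=weights, k=k)
    return max(beam, key=value)

# Two-peak landscape: local max at x = 2 (value 5), global max at x = 8 (value 9).
value = lambda x: 5 - abs(x - 2) if x <= 5 else 9 - abs(x - 8)
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 10]
best = stochastic_beam_search(5, lambda: random.randrange(11), neighbors, value)
```

The value-proportional sampling keeps some diversity in the beam instead of always committing to the 𝑘 best successors.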
Evolutionary algorithms
• Variants of stochastic beam search, explicitly motivated by
the metaphor of natural selection in biology
Evolutionary algorithms
Fitness function = the number of
nonattacking pairs of queens
A genetic algorithm, illustrated for digit strings representing 8-queens states. The
initial population in (a) is ranked by a fitness function in (b), resulting in pairs for
mating in (c). They produce offspring in (d), which are subject to mutation in (e).
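The fitness function can be written directly; for example, the 8-queens state 32752411 (queen rows listed column by column) has fitness 23:

```python
from math import comb

def fitness(state):
    """Number of nonattacking pairs of queens. state[i] is the row of
    the queen in column i; a pair attacks if it shares a row or a
    diagonal. The maximum for n queens is C(n, 2): 28 for n = 8."""
    n = len(state)
    attacks = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if state[i] == state[j] or abs(state[i] - state[j]) == j - i
    )
    return comb(n, 2) - attacks

print(fitness((3, 2, 7, 5, 2, 4, 1, 1)))  # state "32752411" → 23
```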
Evolutionary algorithms
• There are endless forms of evolutionary algorithms.
• The representation of an individual
• Genetic algorithms: a string over a finite alphabet.
• Genetic programming: a computer program.
• Evolution strategies: a sequence of real numbers.
• Digit representation
16257483
• Binary representation
000 101 001 100 110 011 111 010
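The two representations are mechanically interchangeable; a small helper, assuming digits 1..8 map to the 3-bit codes 000..111:

```python
def digits_to_bits(s, bits=3):
    """Encode a digit string (queen rows 1..8, one digit per column)
    as fixed-width binary, mapping digit d to the code for d - 1."""
    return " ".join(format(int(d) - 1, f"0{bits}b") for d in s)

print(digits_to_bits("16257483"))  # → 000 101 001 100 110 011 111 010
```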
Evolutionary algorithms
• The size of the population
• A population is a set of 𝑘 randomly generated states to begin with.
• Fitness function: an objective function that rates each state
• The higher the value, the better the state
• The mixing number 𝜌: the number of parents that come together
to form offspring
• 𝜌 = 1: stochastic beam search. 𝜌 = 2: most common.
Evolutionary algorithms
• The selection process: choose individuals as parents of the
next generation
• Individuals can be chosen with probabilities proportional to their
fitness score.
The 8-queens states corresponding to the two parents (left and middle) and the
first offspring (right). The green columns are lost in the crossover step and the red
columns are retained.
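The crossover step shown in the figure is one line of code; the crossover point is exposed as a parameter here for illustration (normally it is chosen at random):

```python
import random

def reproduce(parent1, parent2, point=None):
    """Single-point crossover: a prefix of parent1 joined to the
    matching suffix of parent2."""
    if point is None:
        point = random.randrange(1, len(parent1))
    return parent1[:point] + parent2[point:]

print(reproduce("32752411", "24748552", 3))  # → 32748552
```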
Evolutionary algorithms
• The mutation rate: how often offspring have
random mutations to their representation.
• Every bit in the individual’s composition is flipped with probability
equal to the mutation rate.
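Bitwise mutation as described above, flipping each bit independently (a sketch; the string representation is an illustrative choice):

```python
import random

def mutate(bits, rate):
    """Flip each bit independently with probability `rate`."""
    return "".join(str(1 - int(b)) if random.random() < rate else b
                   for b in bits)

# rate 0.0 leaves the string unchanged; rate 1.0 flips every bit.
```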
Quiz 02: Calculate fitness scores
• Consider the 4-queens problem, in which each state
has 4 queens, one per column, on the board. The
state can be represented in a genetic algorithm as a
sequence of 4 digits, each of which denotes the
position of a queen in its own column (from 1 to 4).
function GENETIC-ALGORITHM(population, fitness) returns an individual
  repeat
    weights ← WEIGHTED-BY(population, fitness)
    population2 ← empty set
    for i = 1 to SIZE(population) do
      parent1, parent2 ← WEIGHTED-RANDOM-CHOICES(population, weights, 2)
      child ← REPRODUCE(parent1, parent2)
      if (small random probability) then child ← MUTATE(child)
      add child to population2
    population ← population2
  until some individual is fit enough, or enough time has elapsed
  return the best individual in population, according to fitness
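A minimal runnable Python rendering of this pseudocode, applied to a 4-queens instance; the population size, mutation rate, and generation cap are illustrative choices:

```python
import random

N = 4  # 4-queens; a solved board has C(4, 2) = 6 nonattacking pairs

def fitness(state):
    """Nonattacking pairs; state[i] is the row (0..N-1) of column i."""
    attacks = sum(
        1
        for i in range(N)
        for j in range(i + 1, N)
        if state[i] == state[j] or abs(state[i] - state[j]) == j - i
    )
    return N * (N - 1) // 2 - attacks

def genetic_algorithm(population, fitness, max_gens=5000, mutation_rate=0.2):
    for _ in range(max_gens):
        best = max(population, key=fitness)
        if fitness(best) == N * (N - 1) // 2:              # fit enough
            return best
        weights = [fitness(p) + 1e-9 for p in population]  # WEIGHTED-BY
        population2 = []
        for _ in range(len(population)):
            p1, p2 = random.choices(population, weights=weights, k=2)
            cut = random.randrange(1, N)
            child = p1[:cut] + p2[cut:]                    # REPRODUCE
            if random.random() < mutation_rate:            # MUTATE one gene
                i = random.randrange(N)
                child = child[:i] + (random.randrange(N),) + child[i + 1:]
            population2.append(child)
        population = population2
    return max(population, key=fitness)

population = [tuple(random.randrange(N) for _ in range(N)) for _ in range(20)]
solution = genetic_algorithm(population, fitness)
print(solution, fitness(solution))
```

With a population of 20 and up to 5000 generations, the search reliably reaches one of the two 4-queens solutions.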