Local Search Algorithms: Chapter 4, Sections 3-4

The document discusses various local search algorithms for optimization problems, including hill-climbing, simulated annealing, genetic algorithms, and local search in continuous spaces. Hill-climbing iteratively improves a single solution by moving to higher-valued neighbors until no further improvements are possible, but can get stuck in local optima. Simulated annealing allows occasional "downhill" moves to help escape local optima. Genetic algorithms apply crossover and mutation operators to populations of solutions in an approach related to natural selection.


Local search algorithms

Chapter 4, Sections 3-4


Outline
Hill-climbing
Simulated annealing
Genetic algorithms
Local search in continuous spaces (briefly)


Iterative improvement algorithms


In many optimization problems the path is irrelevant; the goal state itself is the solution, e.g., the n-queens problem. In such cases we can use iterative improvement algorithms: keep a single current state and try to improve it by making small changes in each iteration, until no further improvement is possible.
This is local search in a state space consisting of complete configurations: find an optimal configuration (e.g., TSP), or find a configuration satisfying constraints (e.g., n-queens, timetabling).
Constant space, suitable for online as well as offline search.


Example: n-queens
Problem: Put n queens on an n × n board with no two queens on the same row, column, or diagonal. Move a queen to reduce the number of conflicts (pairs of queens attacking each other).

[Board figures omitted: successive states with h = 5, h = 2, and h = 0 conflicts]

Almost always solves n-queens problems almost instantaneously for very large n, e.g., n = 1 million
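The conflict count h used above can be computed directly. A minimal sketch, using the standard one-queen-per-column encoding (the function and variable names are mine, not from the slides):

```python
def conflicts(board):
    """Number of attacking queen pairs; board[c] = row of the queen in column c."""
    h = 0
    n = len(board)
    for i in range(n):
        for j in range(i + 1, n):
            same_row = board[i] == board[j]
            same_diag = abs(board[i] - board[j]) == j - i
            if same_row or same_diag:
                h += 1
    return h

print(conflicts([2, 4, 1, 3]))  # → 0, a solution to 4-queens
print(conflicts([1, 2, 3, 4]))  # → 6, all queens on one diagonal
```

A hill-climbing move then picks the column/row change that lowers this count the most.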


Example: Travelling Salesperson Problem


Short definition: find the shortest closed tour connecting N given cities. Start with any complete tour and perform pairwise exchanges:

Variants of this approach get within 1% of optimal very quickly with thousands of cities


Iterative Improvement Algorithms


Iterative improvement algorithms can be applied to minimize a cost (e.g., tour length in TSP, or the number of conflicts) or to maximize a benefit. At each iteration, whichever measure we use, we select the configuration that improves that measure.


Hill-climbing (or gradient ascent/descent)


function Hill-Climbing(problem) returns a solution state
  inputs: problem, a problem
  local variables: current, a node
                   next, a node
  current ← Make-Node(Initial-State[problem])
  loop do
    next ← a highest-valued successor of current
    if Value[next] < Value[current] then return current
    current ← next
  end

Note that this is written as a maximization, but hill climbing can be used equally well for minimizing a cost (use the negative of the cost as the benefit).
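The pseudocode translates almost line for line into Python. A minimal sketch, assuming the problem supplies a `successors` function and a `value` function (names are mine):

```python
def hill_climbing(initial_state, successors, value):
    """Move to the highest-valued successor until none improves on the current state."""
    current = initial_state
    while True:
        best = max(successors(current), key=value)
        if value(best) <= value(current):   # <= also stops on plateaux
            return current
        current = best

# Toy maximization: f(x) = -(x - 3)**2 on the integers, neighbors are x +/- 1
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(0, step, f))  # → 3, the global maximum of this toy f
```

On a landscape with several peaks, the same loop would return whichever local maximum is uphill from the start state.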


Hill-climbing contd.
Problem: depending on initial state, can get stuck on local maxima
[Figure omitted: objective value plotted over the states, showing a local maximum and the global maximum]


Continuous state spaces


Suppose we want to choose locations for three airports in Romania: a 6-D state space defined by (x1, y1), (x2, y2), (x3, y3); objective function f(x1, y1, x2, y2, x3, y3) = sum of squared distances from each city to its nearest airport.


Continuous state spaces


In continuous spaces, we have two options:
Discretize the allowed values of the variables (modify by a fixed amount)
Use the gradient


Continuous state spaces - Discretization


Discretization methods turn the continuous space into a discrete one; e.g., the empirical gradient considers the change in f from a fixed-size change in each coordinate.
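One concrete reading of the empirical gradient: perturb each coordinate by a fixed ±δ and take the single best improving move. A sketch under that assumption (function names mine):

```python
def empirical_gradient_step(x, f, delta=0.1):
    """Try +/-delta in each coordinate; return the best improving neighbor (or x)."""
    best_x, best_val = x, f(x)
    for i in range(len(x)):
        for d in (-delta, delta):
            cand = list(x)
            cand[i] += d
            if f(cand) > best_val:
                best_x, best_val = cand, f(cand)
    return best_x

# Maximize f(x, y) = -(x**2 + y**2); the optimum is the origin
f = lambda p: -(p[0] ** 2 + p[1] ** 2)
x = [1.0, -0.5]
for _ in range(20):
    x = empirical_gradient_step(x, f)
print(x)  # converges to (0, 0) up to floating-point error
```

Note the δ-grid both discretizes the space and caps the achievable precision; hill climbing then runs unchanged on this discrete neighborhood.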


Continuous state spaces - Gradient Approach


Use the gradient: the standard way of solving continuous problems. Gradient methods compute
∇f = (∂f/∂x1, ∂f/∂y1, ∂f/∂x2, ∂f/∂y2, ∂f/∂x3, ∂f/∂y3)
to increase/reduce f, e.g., by x ← x + α∇f(x).
Problems: choosing the step size α, slow convergence.
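The update x ← x + α∇f(x) in code; the quadratic objective and the step size below are illustrative choices, not from the slides:

```python
def gradient_ascent(x, grad_f, alpha=0.1, iters=100):
    """Repeat the ascent step x <- x + alpha * grad_f(x) a fixed number of times."""
    for _ in range(iters):
        x = [xi + alpha * gi for xi, gi in zip(x, grad_f(x))]
    return x

# Maximize f(x, y) = -(x - 1)**2 - (y + 2)**2; its gradient is (-2(x-1), -2(y+2))
grad = lambda p: [-2 * (p[0] - 1), -2 * (p[1] + 2)]
x = gradient_ascent([0.0, 0.0], grad)
print([round(v, 4) for v in x])  # → [1.0, -2.0]
```

With α too large the iterates overshoot and diverge; too small and convergence is slow, which is exactly the step-size problem noted above.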


Continuous state spaces - IGNORE


Sometimes we can solve ∇f(x) = 0 exactly (e.g., with one city). Newton-Raphson (1664, 1690) iterates x ← x − Hf(x)^(−1) ∇f(x) to solve ∇f(x) = 0, where Hij = ∂²f/∂xi∂xj is the Hessian.


Hill-climbing variations
Stochastic hill-climbing
Choose at random from among the uphill moves. The probability of selection can vary with the steepness of the uphill move.
Convergence: usually slower than hill climbing. Solutions: sometimes better.


Hill-climbing variations
First-choice hill-climbing
A variation of stochastic hill-climbing. Instead of generating all the possible moves (successor states) and picking one at random, it generates random successor states until it finds one that is better than the current state. Useful when a state has many (e.g., thousands of) successors.
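A sketch of first-choice hill climbing; the `max_tries` sampling bound is an assumption the slides leave implicit:

```python
import random

def first_choice_hill_climbing(state, random_successor, value, max_tries=1000):
    """Sample random successors, accepting the first one better than the current state."""
    while True:
        for _ in range(max_tries):
            cand = random_successor(state)
            if value(cand) > value(state):
                state = cand
                break
        else:          # no improving successor sampled: treat state as a maximum
            return state

# Toy run on f(x) = -(x - 3)**2 with random +/-1 steps
random.seed(0)
f = lambda x: -(x - 3) ** 2
print(first_choice_hill_climbing(0, lambda x: x + random.choice([-1, 1]), f))  # → 3
```

The point of the variant: it never enumerates the full successor set, so each step costs a few samples rather than thousands of evaluations.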


Hill-climbing variations
All hill-climbing algorithms so far are incomplete.
Random-restart hill-climbing
Search for a goal state starting from randomly generated initial states. This variation is complete with probability approaching 1 (as the number of restarts increases). In fact, it can solve a 3-million-queens problem in under a minute. (For 8-queens, the probability of success per run is roughly p = 0.14, hence about n = 1/p ≈ 7 random restarts are expected to find a solution.)
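Random restarts can be sketched on a toy landscape with one local and one global maximum; the landscape and all names below are invented for illustration:

```python
import random

# Toy landscape: local maximum at x = 2 (f = 5), global maximum at x = 8 (f = 10)
f = {0: 0, 1: 3, 2: 5, 3: 2, 4: 1, 5: 4, 6: 6, 7: 8, 8: 10, 9: 7, 10: 3}

def hill_climb(x):
    """Greedy ascent on the integer landscape above."""
    while True:
        nbrs = [n for n in (x - 1, x + 1) if n in f]
        best = max(nbrs, key=f.get)
        if f[best] <= f[x]:
            return x
        x = best

def random_restart(max_restarts=50):
    """Restart hill climbing from fresh random states, keeping the best result."""
    best = None
    for _ in range(max_restarts):
        result = hill_climb(random.randrange(11))
        if best is None or f[result] > f[best]:
            best = result
        if f[best] == 10:   # known global optimum of this toy landscape
            return best
    return best

random.seed(1)
print(random_restart())  # → 8; restarts escape the local maximum at 2
```

Starts in {0..3} get stuck at 2, starts in {4..10} reach 8, so each restart succeeds with probability 7/11 and a handful of restarts is almost always enough, mirroring the n = 1/p estimate above.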


Simulated annealing
Hill-climbing algorithms never make downhill moves, so they are guaranteed to be incomplete: they can get stuck in local maxima.
Solution: simulated annealing (again, complete only probabilistically).
Idea: escape local maxima by allowing some bad moves, but gradually decrease their size and frequency, like shaking the surface on which a ping-pong ball is rolling to get it into the global minimum.
Devised by Metropolis et al. (1953) for modelling physical processes. Widely used in VLSI layout, airline scheduling, etc.


Simulated annealing
function Simulated-Annealing(problem, schedule) returns a solution state
  inputs: problem, a maximization problem
          schedule, a mapping from time to temperature
  local variables: current, a node
                   next, a node
                   T, a temperature controlling the probability of downward steps
  current ← Make-Node(Initial-State[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ∆E ← Value[next] − Value[current]
    if ∆E > 0 then current ← next
    else current ← next only with probability e^(∆E/T)
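The pseudocode as a Python sketch; the toy objective and the geometric cooling schedule are my choices, not from the slides:

```python
import math
import random

def simulated_annealing(state, random_successor, value, schedule):
    """Always accept uphill moves; accept downhill moves with probability e^(dE/T)."""
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return state
        nxt = random_successor(state)
        dE = value(nxt) - value(state)
        if dE > 0 or random.random() < math.exp(dE / T):
            state = nxt
        t += 1

# Toy run: maximize f(x) = -(x - 3)**2 under geometric cooling, then T drops to 0
f = lambda x: -(x - 3) ** 2
step = lambda x: x + random.choice([-1, 1])
cool = lambda t: 2.0 * 0.95 ** t if t < 200 else 0.0
random.seed(0)
print(simulated_annealing(0, step, f, cool))  # settles at the global maximum, 3
```

Early on, high T makes e^(dE/T) close to 1, so many downhill moves are accepted; as T shrinks the loop behaves like plain hill climbing.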


Local beam search


Idea: keeping only one node in memory is an extreme reaction to memory problems.
Local beam search: keep k states instead of just one.
Loop:
Start from k randomly generated states
Generate all successors of all k states
If a goal is found, stop
Otherwise, keep the best k of the successor states
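The loop above can be sketched as follows; `heapq.nlargest` keeps the best k successors, and the one-dimensional toy problem is illustrative:

```python
import heapq

def local_beam_search(initial_states, successors, value, is_goal, max_iters=100):
    """Keep the k best states; expand all of them on every iteration."""
    states = list(initial_states)
    k = len(states)
    for _ in range(max_iters):
        for s in states:
            if is_goal(s):
                return s
        pool = {succ for s in states for succ in successors(s)}  # dedupe successors
        states = heapq.nlargest(k, pool, key=value)
    return max(states, key=value)

# Toy: find x = 7 on the integer line with a beam of k = 3
f = lambda x: -abs(x - 7)
succ = lambda x: [x - 1, x + 1]
print(local_beam_search([0, 20, -5], succ, f, lambda x: x == 7))  # → 7
```

Because the k slots are ranked over the pooled successors, states near the best region quickly crowd out the others; this is the "recruiting" behavior described on the next slide.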


Local beam search


Not the same as k random-restart searches run in parallel! Searches that find good states recruit other searches to join them.
Problem: quite often, all k states end up on the same local hill.
Idea: stochastic beam search: choose the k successors randomly, biased towards good ones.
Observe the close analogy to natural selection!


Genetic algorithms
= stochastic local beam search + generate successors from pairs of states
[Figure: one generation of a genetic algorithm on 8-digit strings]
(a) Initial Population: 24748552  32752411  24415124  32543213
(b) Fitness Function:   24 (31%)  23 (29%)  20 (26%)  11 (14%)
(c) Selection:          32752411  24748552  32752411  24415124
(d) Crossover:          32748552  24752411  32752124  24415411
(e) Mutation:           32748152  24752411  32252124  24415417


Genetic algorithms
Population
Fitness function
Crossover
Mutation
Selection


Genetic algorithms contd.


GAs require states encoded as strings (GPs use programs). Crossover helps iff substrings are meaningful components.


Genetic algorithms
Start with a population of k randomly generated states, then repeat: selection w.r.t. the fitness function, crossover (mating), mutation.
Idea: try to combine solutions to different subproblems from different near-solutions.
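The selection-crossover-mutation loop can be sketched as below. This is a generic GA with elitism added so the best individual is never lost; the target-string fitness, parameters, and initial population are illustrative, not the slides' 8-queens fitness:

```python
import random

def crossover(a, b):
    """Single-point crossover: prefix of a joined to suffix of b."""
    c = random.randrange(1, len(a))
    return a[:c] + b[c:]

def mutate(s, alphabet, rate=0.1):
    """Replace each character with a random one, with small probability."""
    return "".join(random.choice(alphabet) if random.random() < rate else ch
                   for ch in s)

def genetic_algorithm(population, fitness, alphabet, generations=50):
    for _ in range(generations):
        weights = [fitness(s) for s in population]
        elite = max(population, key=fitness)        # elitism: keep the current best
        children = [mutate(crossover(*random.choices(population, weights=weights, k=2)),
                           alphabet)
                    for _ in range(len(population) - 1)]
        population = [elite] + children
    return max(population, key=fitness)

# Toy problem: evolve the 8-digit string "32752411" from the figure's population
target = "32752411"
fit = lambda s: 1 + sum(c == t for c, t in zip(s, target))   # +1 keeps weights > 0
random.seed(3)
pop = ["24748552", "24415124", "32543213", "24132511", "41154712", "31252813"]
best = genetic_algorithm(pop, fit, "12345678")
print(best, fit(best) - 1)   # best string found and its number of matching digits
```

Fitness-weighted `random.choices` implements selection; with elitism the best fitness in the population is non-decreasing across generations.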

