
Simulated Annealing

Komarudin
[email protected]
Random Search

Hill climbing
Simple Iterative Improvement or Hill Climbing:
• A candidate is accepted if and only if its cost is lower (or its fitness is higher) than that of the current configuration
• Stop when no neighbor with lower cost (higher fitness) can be found

For minimization this is steepest descent
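A minimal steepest-descent sketch in Python (the toy objective and the integer neighborhood are illustrative assumptions, not from the slides):

```python
def hill_climb(cost, neighbors, start, max_iters=10_000):
    """Steepest descent: move to the best neighbor until
    no neighbor improves the current cost."""
    current = start
    for _ in range(max_iters):
        best = min(neighbors(current), key=cost)
        if cost(best) >= cost(current):
            return current          # local optimum: stop
        current = best
    return current

# Toy 1-D example: minimize f(x) = (x - 3)^2 over the integers.
f = lambda x: (x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climb(f, step, start=-10))
```

Because the acceptance rule is strict, the loop terminates at the first configuration none of whose neighbors improves the cost, which is exactly the local-optimum weakness discussed next.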


Hill climbing
Steepest descent

Hill climbing
Disadvantages:
• A local optimum as best result
• The local optimum found depends on the initial configuration
• Generally no upper bound on iteration length
How to cope with these disadvantages:
• Repeat the algorithm many times with different initial configurations
• Use information gathered in previous runs (not preferable; → Tabu Search)
• Use a more complex generation function to jump out of local optima (crossover, mutation) → Genetic Algorithm
• Use a more complex evaluation criterion that sometimes (randomly) also accepts solutions away from the (local) optimum
Annealing process
• Annealing in metals
• Heat the solid metal to a high temperature
• Cool it down very slowly according to a specific schedule.
• If the heating temperature is sufficiently high to ensure a random state, and the cooling process is slow enough to ensure thermal equilibrium, then the atoms will arrange themselves in a pattern that corresponds to the global energy minimum of a perfect crystal.
Simulated Annealing
Use a more complex evaluation function:
• Sometimes accept candidates with higher cost, to escape from local optima
• Adapt the parameters of this evaluation function during execution
• Based upon the analogy with the simulation of the annealing of solids
Simulation of cooling (Metropolis 1953)
• At a fixed temperature T:
• Perturb (randomly) the current state to a new state
• ΔE is the difference in energy between the new and the current state
• If ΔE < 0 (the new state has lower energy), accept the new state as the current state
• If ΔE ≥ 0, accept the new state with probability
  Pr(accepted) = exp(−ΔE / (kB·T))
• Eventually the system evolves into thermal equilibrium at temperature T; then the formula above holds
• When equilibrium is reached, the temperature T can be lowered and the process repeated
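The Metropolis acceptance rule above can be written directly in Python (with kB = 1, the usual convention in optimization):

```python
import math
import random

def metropolis_accept(delta_e, temperature, k_b=1.0):
    """Metropolis criterion: always accept downhill moves;
    accept uphill moves with probability exp(-dE / (kB * T))."""
    if delta_e < 0:
        return True
    return random.random() < math.exp(-delta_e / (k_b * temperature))
```

At high T the exponential is close to 1, so nearly every uphill move is accepted; as T approaches 0 the rule degenerates to pure hill climbing.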
Simulated Annealing
Step 1: Initialize – Start with a random initial placement.
Initialize a very high “temperature”.
Step 2: Move – Perturb the placement through a defined
move.
Step 3: Calculate score – calculate the change in the score
due to the move made.
Step 4: Choose – Depending on the change in score, accept or reject the move. The probability of acceptance depends on the current “temperature”.
Step 5: Update and repeat – Lower the temperature and go back to Step 2.
The process continues until the “freezing point” is reached.
Simulated Annealing
initialize;
REPEAT
    REPEAT
        perturb(config.i → config.j, ΔCij);
        IF ΔCij < 0 THEN accept
        ELSE IF exp(−ΔCij/T) > random[0,1) THEN accept;
        IF accept THEN update(config.j);
    UNTIL equilibrium is approached sufficiently closely;
    T := next_lower(T);
UNTIL system is frozen or stop criterion is reached
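The pseudocode above can be turned into a runnable sketch; the objective, neighbor function, and schedule constants below are illustrative assumptions:

```python
import math
import random

def simulated_annealing(cost, perturb, init, t0=100.0, alpha=0.8,
                        inner_iters=100, t_min=1e-3):
    """Generic SA loop: at each temperature, attempt a fixed number
    of perturbations (a stand-in for reaching equilibrium), then
    cool by a fixed factor alpha (next_lower)."""
    current, best = init, init
    t = t0
    while t > t_min:                      # "system is frozen"
        for _ in range(inner_iters):      # inner equilibrium loop
            candidate = perturb(current)
            delta = cost(candidate) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current        # remember the best-so-far
        t *= alpha                        # next_lower(T)
    return best

# Toy example: minimize a bumpy 1-D function with many local minima.
f = lambda x: x * x + 10 * math.sin(x)
move = lambda x: x + random.uniform(-1, 1)
print(simulated_annealing(f, move, init=5.0))
```

Note the `best` variable: the chain itself may wander away from the best state it has visited, so the best-so-far is recorded explicitly.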
Analysis of SA
• Choose the start value of T so that in the beginning nearly all perturbations are accepted (exploration), but not so large that run times become excessive
• The function next_lower in the homogeneous variant is generally a simple decrease of T, e.g. to a fixed fraction (80%) of the current T
• At the end T is so small that only a very small number of the perturbations are accepted (exploitation)

• If possible, always remember explicitly the best solution found so far; the algorithm itself can leave its best solution and not find it again
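One common way to pick the start temperature (an assumption for illustration, not prescribed by the slides) is to sample random perturbations and solve the acceptance formula for T: if a fraction p0 of uphill moves should be accepted initially, then p0 = exp(−mean ΔC / T0), so T0 = −mean ΔC / ln(p0).

```python
import math

def initial_temperature(cost, perturb, start, p0=0.95, samples=200):
    """Estimate T0 so that roughly a fraction p0 of uphill moves
    would be accepted at the start of the run."""
    uphill = []
    for _ in range(samples):
        d = cost(perturb(start)) - cost(start)
        if d > 0:
            uphill.append(d)
    if not uphill:                  # every sampled move was downhill
        return 1.0
    mean_dc = sum(uphill) / len(uphill)
    return -mean_dc / math.log(p0)
```

With p0 close to 1 the first phase accepts nearly everything, which is exactly the exploration behavior described in the first bullet above.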
SA for TSP
• INIT-TEMP = 4000000;
• INIT-TSP = Random (can use heuristics);
• PERTURB(TSP)
1. 1-insert.
2. 1-swap.
3. 2-opt.
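The three perturbation moves can be sketched in Python as operations on a tour stored as a list of city indices (the list representation is an assumption):

```python
import random

def one_insert(tour):
    """1-insert: remove one city and reinsert it at another position."""
    t = tour[:]
    i = random.randrange(len(t))
    city = t.pop(i)
    t.insert(random.randrange(len(t) + 1), city)
    return t

def one_swap(tour):
    """1-swap: exchange the positions of two cities."""
    t = tour[:]
    i, j = random.sample(range(len(t)), 2)
    t[i], t[j] = t[j], t[i]
    return t

def two_opt(tour):
    """2-opt: reverse a segment, i.e. replace two edges of the tour."""
    t = tour[:]
    i, j = sorted(random.sample(range(len(t)), 2))
    t[i:j + 1] = reversed(t[i:j + 1])
    return t
```

Each move returns a new permutation of the same cities, so any of them can serve as PERTURB in the SA loop.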
Convergence of simulated annealing

[Figure: cost function C plotted against the number of iterations. At INIT_TEMP every move is accepted unconditionally; as the temperature falls, downhill (hill-climbing) moves are always accepted while uphill moves are accepted with probability e^(−ΔC/temp); at FINAL_TEMP the search reduces to pure hill climbing.]
Algorithm for TSP
Algorithm SA
Begin
    t = t0;
    cur_TSP = ini_TSP;
    cur_score = SCORE(cur_TSP);
    repeat
        repeat
            trial_TSP = PERTURB(cur_TSP);   /* 1-Insert, 1-Swap or 2-opt */
            trial_score = SCORE(trial_TSP);
            δs = trial_score – cur_score;
            if (δs < 0) then
                cur_score = trial_score;
                cur_TSP = trial_TSP;
            else
                r = RANDOM(0,1);
                if (r < e^(−δs/t)) then
                    cur_score = trial_score;
                    cur_TSP = trial_TSP;
        until (equilibrium at t is reached)
        t = αt (0 < α < 1)
    until (freezing point is reached)
End.
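A runnable version of this algorithm, using Euclidean city coordinates and the 1-swap move (the test instance, schedule constants, and choice of move are illustrative assumptions):

```python
import math
import random

def tour_length(tour, pts):
    """Total length of the closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def sa_tsp(pts, t0=10.0, alpha=0.9, inner=200, t_min=1e-3):
    cur = list(range(len(pts)))
    random.shuffle(cur)                      # ini_TSP: a random tour
    cur_score = tour_length(cur, pts)
    best, best_score = cur[:], cur_score
    t = t0
    while t > t_min:                         # freezing point
        for _ in range(inner):               # equilibrium at t
            trial = cur[:]                   # 1-swap perturbation
            i, j = random.sample(range(len(trial)), 2)
            trial[i], trial[j] = trial[j], trial[i]
            ds = tour_length(trial, pts) - cur_score
            if ds < 0 or random.random() < math.exp(-ds / t):
                cur, cur_score = trial, cur_score + ds
                if cur_score < best_score:
                    best, best_score = cur[:], cur_score
        t *= alpha                           # t = αt
    return best, best_score

# Eight points on a circle: the optimal tour visits them in angular order.
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
tour, length = sa_tsp(pts)
print(length)
```

On this small instance the run takes well under a second; for larger instances the 2-opt move and an incremental score update (recomputing only the changed edges) would be the usual refinements.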
Ball on terrain example – SA vs Greedy Algorithms

[Figure: a ball on a hilly terrain. From its initial position, a greedy algorithm (pure hill climbing) gets stuck in the nearest valley, a locally optimal solution. Simulated Annealing explores more: it accepts an uphill move with a small probability, and over a large number of iterations converges to the globally optimal solution.]
Run time
• Annealing: O*(n^0.5) phases
• State-of-the-art walks [LV03]
– Worst case: O*(n) samples per phase (for shape)
– O*(n^3) steps per sample
• Total: O*(n^4.5)
(compare to O*(n^10) [GLS81], O*(n^5) [BV02])
Thank you