
Metaheuristic Methods and Their Applications
Lin-Yu Tseng
Department of Computer Science
Graduate Institute of Networking and Multimedia
National Chung Hsing University

OUTLINE
I. Optimization Problems
II. Strategies for Solving NP-hard Optimization Problems
III. What is a Metaheuristic?
IV. Trajectory Methods
V. Population-Based Methods
VI. The Applications of Metaheuristics
VII. Conclusions

I. Optimization Problems
Computer Science
- Traveling Salesman Problem
- Maximum Clique Problem
Operational Research
- Flow Shop Scheduling Problem
- P-Median Problem
Many optimization problems are NP-hard.

Optimization problems
[Figures: solution landscapes illustrating a calculus-based method, hill climbing, and a rugged multimodal landscape ("How about this?").]

An example: TSP (Traveling Salesman Problem)
[Figure: a complete graph on five cities with weighted edges.]
A solution is a sequence of cities (a tour):
1-2-3-4-5: tour length = 31
1-3-4-5-2: tour length = 63
There may be (n-1)! tours in total.

II. Strategies for Solving NP-hard Optimization Problems
- Branch-and-Bound: finds an exact solution.
- Approximation Algorithms: e.g., there is an approximation algorithm for the (metric) TSP that finds a tour of length at most 1.5 × (optimal tour length) in O(n^3) time.
- Heuristic Methods: deterministic.
- Metaheuristic Methods: heuristic + randomization.

III. What is a Metaheuristic Method?
Meta: on an upper level
Heuristic: to find
"A metaheuristic is formally defined as an iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space; learning strategies are used to structure information in order to find efficiently near-optimal solutions." [Osman and Laporte 1996]

Fundamental Properties of Metaheuristics [Blum and Roli 2003]
- Metaheuristics are strategies that guide the search process.
- The goal is to efficiently explore the search space in order to find (near-)optimal solutions.
- Techniques which constitute metaheuristic algorithms range from simple local search procedures to complex learning processes.
- Metaheuristic algorithms are approximate and usually non-deterministic.

Fundamental Properties of Metaheuristics (cont.)
- They may incorporate mechanisms to avoid getting trapped in confined areas of the search space.
- The basic concepts of metaheuristics permit an abstract-level description.
- Metaheuristics are not problem-specific.
- Metaheuristics may make use of domain-specific knowledge in the form of heuristics that are controlled by the upper-level strategy.
- Today's more advanced metaheuristics use search experience (embodied in some form of memory) to guide the search.

IV. Trajectory Methods
1. Basic Local Search: Iterative Improvement
The Improve(N(s)) step can be:
(1) first improvement
(2) best improvement
(3) an intermediate option
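As a minimal sketch, here is iterative improvement for a generic minimization problem; the objective f and the neighbors() generator are placeholders for a problem-specific implementation, and the best-improvement variant is shown:

```python
def local_search(s, f, neighbors):
    """Iterative improvement: move to the best neighbor until no
    neighbor improves the objective (best-improvement variant)."""
    while True:
        best_nb = min(neighbors(s), key=f)
        if f(best_nb) >= f(s):
            return s  # local optimum: no improving neighbor remains
        s = best_nb
```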

Trajectory Methods
[Figure: a search trajectory x1 → x2 → x3 → x4 → x5 through the search space.]

2. Simulated Annealing [Kirkpatrick 1983]
The probability of accepting a worse solution s':
p(T, s', s) = exp(−(f(s') − f(s)) / T)
The temperature T may be defined by a cooling schedule T_{k+1} = α · T_k, 0 < α < 1.
Simulated annealing = random walk + iterative improvement.
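A minimal sketch of simulated annealing for minimization, assuming a problem-specific neighbor() function; the parameter values here are illustrative, not taken from the slides:

```python
import math
import random

def simulated_annealing(s0, f, neighbor, t0=1.0, alpha=0.95, steps=10000):
    s, best, t = s0, s0, t0
    for _ in range(steps):
        s2 = neighbor(s)  # random neighbor of the current solution
        delta = f(s2) - f(s)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / T), i.e. the rule p(T, s', s) above.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            s = s2
            if f(s) < f(best):
                best = s
        t *= alpha  # geometric cooling: T_{k+1} = alpha * T_k
    return best
```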

3. Tabu Search [Glover 1986]
Simple tabu search uses:
- a tabu list
- a tabu tenure: the length of the tabu list
- an aspiration condition
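A minimal sketch of simple tabu search with these three ingredients; the neighbors() generator, tenure, and iteration count are illustrative assumptions:

```python
from collections import deque

def tabu_search(s0, f, neighbors, tenure=7, iters=1000):
    s, best = s0, s0
    tabu = deque(maxlen=tenure)  # tabu tenure = length of the tabu list
    for _ in range(iters):
        # Aspiration condition: a tabu move is admitted anyway if it
        # would beat the best solution found so far.
        candidates = [n for n in neighbors(s)
                      if n not in tabu or f(n) < f(best)]
        if not candidates:
            break
        s = min(candidates, key=f)  # best admissible neighbor
        tabu.append(s)
        if f(s) < f(best):
            best = s
    return best
```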


4. Variable Neighborhood Search [Hansen and Mladenović 1999]
Composed of three phases: shaking, local search, and move.
Given a set of (typically nested) neighborhood structures N'_1 < N'_2 < ... < N'_{k_max}:
1. Set k ← 1.
2. Shaking: generate a point x' at random from the k-th neighborhood of x (x' ∈ N'_k(x)).
3. Local search: apply local search starting from x' to obtain a local optimum x''.
4. Move: if x'' is better than x, set x ← x'' and k ← 1; otherwise set k ← k + 1.
5. Repeat from step 2 until k = k_max.

Variable Neighborhood Search
[Figure: VNS examining successively larger neighborhoods N1, N2, ... around the current best solution x.]
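A sketch of the VNS loop above; shake(x, k), which draws a random point from N'_k(x), the local_search routine, and the parameter values are problem-specific assumptions:

```python
def vns(x, f, shake, local_search, k_max=5, restarts=100):
    for _ in range(restarts):
        k = 1
        while k <= k_max:
            x1 = shake(x, k)          # shaking: random point in N'_k(x)
            x2 = local_search(x1, f)  # local search from x'
            if f(x2) < f(x):          # move
                x, k = x2, 1          # success: restart from N'_1
            else:
                k += 1                # failure: try a larger neighborhood
    return x
```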

V. Population-Based Methods
1. Genetic Algorithm [Holland 1975]
- Coding of a solution: the chromosome
- Fitness function: related to the objective function
- Initial population

An example: TSP (Traveling Salesman Problem)
[Figure: the same five-city weighted graph as before.]
A solution is a sequence of cities (a chromosome):
1-2-3-4-5: tour length = 31
1-3-4-5-2: tour length = 63

Genetic Algorithm operators:
- Reproduction (selection)
- Crossover
- Mutation

Example 1
Maximize f(x) = x^2, where x is an integer and 0 ≤ x ≤ 31.
1. Coding of a solution: a five-bit integer, e.g. 01101.
2. Fitness function: F(x) = f(x) = x^2.
3. Initial population (randomly generated):
01101
11000
01000
10011

Reproduction: roulette-wheel selection, where each string is selected with probability proportional to its fitness.
[Figures: the roulette wheel, reproduction, and crossover applied to the example population.]
Mutation: with mutation probability Pm = 0.001, the expected number of mutated bits per generation is 20 bits × 0.001 = 0.02 bits.
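A minimal GA sketch for Example 1, combining the three operators on the initial population above; the generation count and the one-point crossover scheme are illustrative assumptions:

```python
import random

def fitness(bits):
    return int(bits, 2) ** 2  # F(x) = x^2 on the decoded 5-bit integer

def roulette(pop):
    # Select a string with probability proportional to its fitness.
    r = random.uniform(0, sum(fitness(b) for b in pop))
    acc = 0.0
    for b in pop:
        acc += fitness(b)
        if acc >= r:
            return b
    return pop[-1]

def crossover(a, b):
    p = random.randint(1, len(a) - 1)  # one-point crossover
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(bits, pm=0.001):
    # Flip each bit independently with probability pm.
    return "".join("10"[int(c)] if random.random() < pm else c for c in bits)

pop = ["01101", "11000", "01000", "10011"]  # the initial population above
for _ in range(50):
    nxt = []
    while len(nxt) < len(pop):
        c1, c2 = crossover(roulette(pop), roulette(pop))
        nxt += [mutate(c1), mutate(c2)]
    pop = nxt
print(max(pop, key=fitness))  # best string found, ideally 11111 (x = 31)
```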

Example 2: Word matching problem
Evolve the string "tobeornottobe" ("to be or not to be"). The probability that a uniformly random 13-letter string matches it is (1/26)^13 ≈ 4.03038 × 10^-19.
The lowercase letters in ASCII are represented by numbers in the range [97, 122]; the target string is
[116, 111, 98, 101, 111, 114, 110, 111, 116, 116, 111, 98, 101]

Initial population (the corresponding strings):
rzfqdhujardbe
niedwrvrjahfj
cyueisosqvcb
fbfvgramtekuvs
kbuqrtjtjensb
fwyqykktzyojh
tbxblsoizggwm
dtriusrgkmbg
jvpbgemtpjalq
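The slides do not spell out the fitness function for this problem; a plausible choice, sketched here as an assumption, counts the character positions where a candidate matches the target:

```python
TARGET = "tobeornottobe"

def fitness(candidate: str) -> int:
    # Number of positions where the candidate matches the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

print(fitness("rzfqdhujardbe"))  # score one string from the population
```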

2. Ant Colony Optimization [Dorigo 1992]
Key ingredient: pheromone trails.
Initialization
Loop
    Loop
        Each ant applies a state transition rule to incrementally
        build a solution and applies a local updating rule to the
        pheromone
    Until every ant has built a complete solution
    A global pheromone updating rule is applied
Until End_Condition

Example: TSP
A simple heuristic (a greedy method): always choose the shortest edge out of the current node.
[Figure: the five-city weighted graph again.]
Greedy solution: 1 → 5 → 4 → 3 → 2

Each ant builds a solution using a step-by-step constructive decision policy.
How does ant k choose the next city s to visit from city r?

s = argmax_{u ∈ J_k(r)} { τ(r,u) · [η(r,u)]^β }   if q ≤ q_0
s = S                                             otherwise

where η = 1/δ, δ(r,s) is a distance measure associated with edge (r,s), q is a random number, and S is a city drawn from the probability distribution

p_k(r,s) = τ(r,s) · [η(r,s)]^β / Σ_{u ∈ J_k(r)} τ(r,u) · [η(r,u)]^β   if s ∈ J_k(r)
p_k(r,s) = 0                                                          otherwise

Local update of pheromone:
τ(r,s) ← (1 − ρ) · τ(r,s) + ρ · Δτ(r,s),  0 < ρ < 1
where Δτ(r,s) = τ_0, the initial pheromone level.

Global update of pheromone:
τ(r,s) ← (1 − α) · τ(r,s) + α · Δτ(r,s),  0 < α < 1
where Δτ(r,s) = 1 / L_gb if (r,s) belongs to the globally best tour, and 0 otherwise (L_gb is the length of the globally best tour).
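A compact sketch of these rules applied to TSP; the parameter values and the choice of initial pheromone level τ_0 are illustrative assumptions, not values from the slides:

```python
import random

def aco_tsp(dist, n_ants=10, iters=100, beta=2.0, q0=0.9, rho=0.1, alpha=0.1):
    n = len(dist)
    tau0 = (n - 1) / (n * sum(dist[0]))  # rough initial pheromone level
    tau = [[tau0] * n for _ in range(n)]
    eta = lambda r, s: 1.0 / dist[r][s]  # eta = 1 / delta
    best, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                r = tour[-1]
                J = [u for u in range(n) if u not in tour]
                attract = lambda u: tau[r][u] * eta(r, u) ** beta
                if random.random() <= q0:  # exploitation: best edge
                    s = max(J, key=attract)
                else:                      # biased exploration
                    s = random.choices(J, weights=[attract(u) for u in J])[0]
                tau[r][s] = (1 - rho) * tau[r][s] + rho * tau0  # local update
                tour.append(s)
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if length < best_len:
                best, best_len = tour, length
        for i in range(n):  # global update on the globally best tour
            r, s = best[i], best[(i + 1) % n]
            tau[r][s] = (1 - alpha) * tau[r][s] + alpha / best_len
    return best, best_len
```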

3. Particle Swarm Optimization [Kennedy and Eberhart 1995]
- The population is initialized by assigning random positions and velocities; the potential solutions are then "flown" through hyperspace.
- Each particle keeps track of its best (highest-fitness) position in hyperspace. This is called pBest for an individual particle; the best position in the whole population is called gBest.
- At each time step, each particle stochastically accelerates toward its pBest and gBest.

Particle Swarm Optimization process:
1. Initialize the population in hyperspace, including positions and velocities.
2. Evaluate the fitness of each individual particle.
3. Modify velocities based on the personal best and global best.
4. Terminate on some condition.
5. Go to step 2.

[Figures: initialization of positions and velocities; a particle at position x_t moves with velocity v to a new position x_{t+1}, pulled toward its pBest and the swarm's gBest.]

Particle Swarm Optimization
Modification of velocities and positions:
v_{t+1} = v_t + c1 · rand() · (pBest − x_t) + c2 · rand() · (gBest − x_t)
x_{t+1} = x_t + v_{t+1}
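A minimal sketch of these update rules, here written for minimizing a function f over d dimensions; the swarm size, search bounds, and coefficient values are illustrative assumptions:

```python
import random

def pso(f, d=2, n_particles=20, iters=200, c1=2.0, c2=2.0, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(d)] for _ in range(n_particles)]
    vel = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]      # each particle's best position
    gbest = min(pbest, key=f)        # best position in the population
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(d):
                # v_{t+1} = v_t + c1*rand()*(pBest - x_t) + c2*rand()*(gBest - x_t)
                vel[i][j] += (c1 * random.random() * (pbest[i][j] - pos[i][j])
                              + c2 * random.random() * (gbest[j] - pos[i][j]))
                pos[i][j] += vel[i][j]  # x_{t+1} = x_t + v_{t+1}
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Example: minimize the sphere function.
print(pso(lambda x: sum(v * v for v in x)))
```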

VI. The Applications of Metaheuristics
1. Solving NP-hard optimization problems:
- Traveling Salesman Problem
- Maximum Clique Problem
- Flow Shop Scheduling Problem
- P-Median Problem
2. Search problems in many applications:
- Feature selection in pattern recognition
- Automatic clustering
- Machine learning (e.g. neural network training)

VII. Conclusions
For NP-hard optimization problems and complicated search problems, metaheuristic methods are very good choices: they are more efficient than branch-and-bound and obtain better-quality solutions than deterministic heuristic methods.
Directions for further development:
- Hybridization of metaheuristics.
- How to make the search more systematic?
- How to make the search more controllable?
- How to make the performance scalable?

References
1. Osman, I. H., and Laporte, G. Metaheuristics: A bibliography. Annals of Operations Research 63, 513–623, 1996.
2. Blum, C., and Roli, A. Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Computing Surveys 35(3), 268–308, 2003.
3. Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. Optimization by simulated annealing. Science 220(4598), 671–680, 1983.
4. Glover, F. Future paths for integer programming and links to artificial intelligence. Computers & Operations Research 13, 533–549, 1986.
5. Hansen, P., and Mladenović, N. An introduction to variable neighborhood search. In Metaheuristics: Advances and Trends in Local Search Paradigms for Optimization, S. Voß, S. Martello, I. Osman, and C. Roucairol, Eds., Kluwer Academic Publishers, Chapter 30, 433–458, 1999.
6. Holland, J. H. Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor, MI, 1975.
7. Dorigo, M. Optimization, Learning and Natural Algorithms (in Italian). Ph.D. thesis, DEI, Politecnico di Milano, Italy, pp. 140, 1992.
8. Kennedy, J., and Eberhart, R. Particle swarm optimization. Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE Press, 1995.
