
Chapter 4

A Tutorial on Meta-Heuristics for Optimization

Shu-Chuan Chu, Chin-Shiuh Shieh, and John F. Roddick

Nature has inspired computing and engineering researchers in many different ways. Natural processes have been emulated through a variety of techniques, including genetic algorithms, ant systems and particle swarm optimization, as computational models for optimization. In this chapter, we discuss these meta-heuristics from a practitioner's point of view, emphasizing the fundamental ideas and their implementations. After presenting the underlying philosophy and algorithms, detailed implementations are given, followed by some discussion of alternatives.

1 Introduction
Optimization problems arise from almost every field, ranging from academic research to industrial application. In addition to other (arguably more conventional) optimization techniques, meta-heuristics, such as genetic algorithms, particle swarm optimization and ant colony systems, have received increasing attention in recent years for their interesting characteristics and their success in solving problems in a number of realms. In this tutorial, we discuss these meta-heuristics from a practitioner's point of view. After a brief explanation of the underlying philosophy and a discussion of the algorithms, detailed implementations in C are given, followed by some implementation notes and possible alternatives.
The discussion is not intended to be a comprehensive review of related topics, but a compact guide for implementation, with which readers can put these meta-heuristics to work and experience their power in a timely fashion. The C programming language was adopted because of its portability and availability. The coding style is deliberately made as accessible as possible, so that interested readers can easily transfer the code into any other programming language as preferred1.

In general, an optimization problem can be modeled as follows:

F(~x), ~x = (x1, x2, · · ·, xn) ∈ X,    (1)

where F(~x) is the objective function subject to optimization, and X is the domain of the independent variables x1, x2, · · ·, xn. We are asked to find a configuration of ~x = (x1, x2, · · ·, xn) that maximizes or minimizes the objective function F(~x).

The optimization task can be challenging for several reasons, such as the high dimensionality of ~x, constraints imposed on ~x, non-differentiability of F(~x), and the existence of local optima.

2 Genetic Algorithms
Based on long-term observation, Darwin formulated his theory of natural evolution. In the natural world, creatures compete with each other for limited resources. Those individuals that survive the competition have the opportunity to reproduce and generate descendants. In doing so, any exchange of genes may result in superior or inferior descendants, with the process of natural selection eventually filtering out inferior individuals and retaining those best adapted to their environment.
1
The code segments can be downloaded from
http://kdm.first.flinders.edu.au/IDM/.
Inspired by Darwin's theory of evolution, Holland (Holland 1975, Goldberg 1989) introduced the genetic algorithm as a powerful computational model for optimization. Genetic algorithms work on a population of potential solutions, in the form of chromosomes, and try to locate the best solution through a process of artificial evolution, which consists of repeated artificial genetic operations, namely evaluation, selection, crossover and mutation.

A multi-modal objective function F1(x, y), as shown in Figure 1, is used to illustrate this. The global optimum is located at approximately F1(1.9931, 1.9896) = 4.2947.

F1(x, y) = 4 / ((x − 2)^2 + (y − 2)^2 + 1)
         + 3 / ((x − 2)^2 + (y + 2)^2 + 1)
         + 2 / ((x + 2)^2 + (y − 2)^2 + 1),    −5 ≤ x, y < 5    (2)

Figure 1. Objective function F1

The first design issue in applying genetic algorithms is to select an adequate coding scheme to represent potential solutions in the search space in the form of chromosomes. Among other alternatives, such as expression trees for genetic programming (Willis et al. 1997) and city index permutation for the travelling salesperson problem, binary string coding is widely used for numerical optimization. Figure 2 gives a typical binary string coding for the test function F1. Each genotype has 16 bits to encode an independent variable. A decoding function maps the 65536 possible combinations of b15 · · · b0 onto the range [−5, 5) linearly. A chromosome is then formed by cascading the genotypes for each variable. With this coding scheme, any 32-bit binary string stands for a legal point in the problem domain.
d(b15 b14 · · · b1 b0) = (Σ_{i=0..15} bi 2^i) / 2^16 × 10 − 5    (3)

Figure 2. A binary coding scheme for F1 (16 bits for x followed by 16 bits for y)

A second issue is to decide the population size. The choice of population size, N, is a tradeoff between solution quality and computational cost. A larger population size will maintain higher genetic diversity and therefore a higher possibility of locating the global optimum, however at a higher computational cost. The operation of genetic algorithms is outlined as follows:

Step 1 Initialization
Each bit of all N chromosomes in the population is randomly set to 0 or 1. In effect, this operation spreads the chromosomes randomly across the problem domain. Whenever possible, it is suggested to incorporate any a priori knowledge of the search space into the initialization process to endow the genetic algorithm with a better starting point.
Step 2 Evaluation
Each chromosome is decoded and evaluated according to the given objective function. The fitness value, fi, reflects the degree of success chromosome ci can achieve in its environment.

~xi = D(ci),  fi = F(~xi)    (4)

Step 3 Selection
Chromosomes are stochastically picked to form the population for the next generation based on their fitness values. The selection is done by roulette wheel selection with replacement, as follows:

Pr(ci is selected) = fi^SF / Σ_{j=1..N} fj^SF    (5)

The selection factor, SF, controls the discrimination between superior and inferior chromosomes by reshaping the landscape of the fitness function. As a result, better chromosomes will have more copies in the new population, mimicking the process of natural selection. In some applications, the best chromosome found so far is always retained in the next generation to ensure its genetic material remains in the gene pool.

Step 4 Crossover
Pairs of chromosomes in the newly generated population are subject to a crossover (or swap) operation with probability PC, called the Crossover Rate. The crossover operator generates new chromosomes by exchanging the genetic material of a pair of chromosomes across randomly selected sites, as depicted in Figure 3. Similar to the process of natural breeding, the newly generated chromosomes can be better or worse than their parents. They will be tested in the subsequent selection process, and only those which are an improvement will thrive.
Figure 3. Crossover operation (1-site and 2-site crossover)

Step 5 Mutation
After the crossover operation, each bit of every chromosome is subjected to mutation with probability PM, called the Mutation Rate. Mutation flips bit values and introduces new genetic material into the gene pool. This operation is essential to avoid the entire population converging to a single instance of a chromosome, since crossover becomes ineffective in such situations. In most applications, the mutation rate should be kept low; it acts as a background operator to prevent genetic algorithms from random walking.

Step 6 Termination Checking


Genetic algorithms repeat Step 2 to Step 5 until a given termination criterion is met, such as a pre-defined number of generations being reached or the quality of the best solution failing to improve for a given number of generations. Once terminated, the algorithm reports the best chromosome found.

Program 1 is an implementation of a genetic algorithm. Note that, for the sake of program readability, a variable of int type is used to store a single bit. More compact representations are possible with slightly trickier genetic operators.

The result of applying Program 1 to the objective function F1 is reported in Figure 4. With a population of size 10, after 20 generations, the genetic algorithm was capable of locating a near-optimal solution at F1(1.9853, 1.9810) = 4.2942. Readers should be aware that, due to the stochastic nature of genetic algorithms, the same program may produce different results on different machines.

Figure 4. Progress of the GA program applied to test function F1

Although the operation of genetic algorithms is quite simple, they do have some important characteristics providing robustness:

• They search from a population of points rather than a single point.
• They use the objective function directly, not its derivative.
• They use probabilistic transition rules, not deterministic ones, to guide the search toward promising regions.

In effect, genetic algorithms maintain a population of candidate solutions and conduct stochastic searches via information selection and exchange. It is well recognized that, with genetic algorithms, near-optimal solutions can be obtained at a justifiable computational cost. However, it is difficult for genetic algorithms to pinpoint the global optimum. In practice, a hybrid approach is recommended, incorporating gradient-based or local greedy optimization techniques. In such an integration, genetic algorithms act as coarse-grain optimizers and the gradient-based methods as fine-grain ones.

The power of genetic algorithms originates from the chromosome coding and the associated genetic operators. It is worth paying attention to these issues so that genetic algorithms can explore the search space more efficiently. The selection factor controls the discrimination between superior and inferior chromosomes. In some applications, more sophisticated reshaping of the fitness landscape may be required. Other selection schemes (Whitley 1993), such as rank-based selection or tournament selection, are possible alternatives for controlling this discrimination.

Numerous variants with different application profiles have been developed following the standard genetic algorithm. Island-model genetic algorithms, or parallel genetic algorithms (Abramson and Abela 1991), attempt to maintain genetic diversity by splitting a population into several sub-populations, each of which evolves independently and occasionally exchanges information with the others. Multiple-objective genetic algorithms (Gao et al. 2000, Fonseca and Fleming 1993, Fonseca and Fleming 1998) attempt to locate all near-optimal solutions by carefully controlling the number of copies of superior chromosomes such that the population will not be dominated by the single best chromosome (Sareni and Krahenbuhl 1998). Co-evolutionary systems (Handa et al. 2002, Bull 2001) have two or more independently evolved populations. The objective function for each population is not static, but a dynamic function that depends on the current states of the other populations. This architecture vividly models interacting systems, such as prey and predator, or virus and immune system.
3 Ant Systems
Inspired by the food-seeking behavior of real ants, ant systems, attributable to Dorigo et al. (Dorigo et al. 1996), have demonstrated themselves to be an efficient and effective tool for combinatorial optimization problems. In nature, a real ant wandering in its surrounding environment will leave a biological trace, called pheromone, on its path. The intensity of the pheromone left behind will bias the path-taking decisions of subsequent ants. A shorter path will possess a higher pheromone concentration and therefore encourage subsequent ants to follow it. As a result, an initially irregular path from nest to food will eventually contract to a shorter path. With appropriate abstraction and modification, this observation has led to a number of successful computational models for combinatorial optimization.

The operation of ant systems can be illustrated by the classical Travelling Salesman Problem (TSP; see Figure 5 for an example), in which a travelling salesman looks for a route that covers all cities with minimal total distance. Suppose there are n cities and m ants. The entire algorithm starts with the initial pheromone intensity set to τ0 on all edges. In every subsequent ant system cycle, or episode, each ant begins its trip from a randomly selected starting city and is required to visit every city exactly once (a Hamiltonian circuit). The experience gained in this phase is then used to update the pheromone intensity on all edges.

The operation of ant systems is given below:

Step 1 Initialization
Initial pheromone intensities on all edges are set to τ0 .

Figure 5. A traveling salesman problem with 12 cities

Step 2 Walking phase
In this phase, each ant begins its trip from a randomly selected starting city and is required to visit every city exactly once. When an ant, the k-th ant for example, is located at city r and needs to decide the next city s, the path-taking decision is made stochastically based on the following probability function:

Pk(r, s) = [τ(r, s)] · [η(r, s)]^β / Σ_{u∈Jk(r)} [τ(r, u)] · [η(r, u)]^β,  if s ∈ Jk(r);
Pk(r, s) = 0,  otherwise.    (6)
where τ(r, s) is the pheromone intensity on the edge between cities r and s; the visibility η(r, s) = 1/δ(r, s) is the reciprocal of the distance δ(r, s) between cities r and s; and Jk(r) is the set of cities not yet visited by the k-th ant. According to Equation 6, an ant will favour a nearer city or a path with higher pheromone intensity. β is a parameter used to control the relative weighting of these two factors. During the circuit, the route taken by each ant is recorded for the pheromone updating in Step 3. The best route found so far is also tracked.
Step 3 Updating phase
The experience accumulated in Step 2 is then used to modify the pheromone intensities by the following updating rule:

τ(r, s) ←− (1 − α) · τ(r, s) + Σ_{k=1..m} Δτk(r, s)    (7)

Δτk(r, s) = 1/Lk,  if (r, s) ∈ route made by ant k;
Δτk(r, s) = 0,  otherwise.

where 0 < α < 1 is a parameter modelling the evaporation of pheromone; Lk is the length of the route made by the k-th ant; and Δτk(r, s) is the pheromone trace contributed by the k-th ant to edge (r, s).
The updated pheromone intensities are then used to guide the path-taking decisions in the next ant system cycle. It can be expected that, as the ant system cycles proceed, the pheromone intensities on the edges will converge to values reflecting their potential for being components of the shortest route. The higher the intensity, the greater the chance of being a link in the shortest route, and vice versa.
Step 4 Termination Checking
Ant systems repeat Step 2 and Step 3 until a certain termination criterion is met, such as a pre-defined number of episodes having been performed or the algorithm having failed to make improvements for a certain number of episodes. Once terminated, the ant system reports the shortest route found.

Program 2 at the end of this chapter is an implementation of an ant system. The results of applying Program 2 to the test problem in Figure 5 are given in Figures 6 and 7. Figure 6 reports a found shortest route of length 3.308, which is the true shortest route as validated by exhaustive search. Figure 7 gives a snapshot of the pheromone intensities after 20 episodes. A higher intensity is represented by a wider edge. Notice that intensity alone cannot be used as a criterion for judging whether a link is a constituent part of the shortest route or not, since the shortest route relies on the cooperation of other links.

Figure 6. The shortest route found by the ant system

A close inspection of the ant system reveals that the heavy computation required may make it prohibitive in certain applications. Ant colony systems were introduced by Dorigo et al. (Dorigo and Gambardella 1997) to remedy this difficulty. Ant colony systems differ from the simpler ant system in the following ways:

• There is explicit control of exploration and exploitation. When an ant is located at city r and needs to decide the next city s, there are two modes for the path-taking decision, namely exploitation and biased exploration. Which mode is used is governed by a random variable 0 < q < 1:
Figure 7. A snapshot of the pheromone intensities after 20 episodes

Exploitation Mode:

s = arg max_{u∈Jk(r)} [τ(r, u)] · [η(r, u)]^β,  if q ≤ q0    (8)

Biased Exploration Mode:

Pk(r, s) = [τ(r, s)] · [η(r, s)]^β / Σ_{u∈Jk(r)} [τ(r, u)] · [η(r, u)]^β,  if q > q0    (9)

• Local updating. A local updating rule is applied whenever an edge from city r to city s is taken:

τ(r, s) ←− (1 − ρ) · τ(r, s) + ρ · Δτ(r, s)    (10)

where Δτ(r, s) = τ0 = (n · Lnn)^−1, and Lnn is a rough estimate of the circuit length calculated using the nearest-neighbour heuristic; 0 < ρ < 1 is a parameter modelling the evaporation of pheromone.

• Count only the shortest route in the global updating. As all ants complete their circuits, the shortest route found in the current episode is used in the global updating rule:

τ(r, s) ←− (1 − α) · τ(r, s) + α · Δτ(r, s)    (11)

Δτ(r, s) = (Lgb)^−1,  if (r, s) ∈ global best route;
Δτ(r, s) = 0,  otherwise.

where Lgb is the length of the shortest route.

In some respects, the ant system implements the idea of emergent computation – a global solution emerges as distributed agents perform local transactions – which is the working paradigm of real ants. The success of ant systems in combinatorial optimization makes them a promising tool for a large set of problems in the NP-complete class (Papadimitriou and Steiglitz 1982). In addition, the work of Wang and Wu (Wang and Wu 2001) has extended the applicability of ant systems further into continuous search spaces. Chu et al. (2003) have proposed a parallel ant colony system, in which groups of ant colonies explore the search space independently and exchange their experiences at certain time intervals.

4 Particle Swarm Optimization


Some social systems of natural species, such as flocks of birds and
schools of fish, possess interesting collective behavior. In these sys-
tems, globally sophisticated behavior emerges from local, indirect
communication amongst simple agents with only limited capabili-
ties.

In an attempt to simulate this flocking behavior by computer, Kennedy and Eberhart (1995) realized that an optimization problem can be formulated as a flock of birds flying across an area seeking a location with abundant food. This observation, together with some abstraction and modification, led to the development of a novel optimization technique – particle swarm optimization.

Particle swarm optimization optimizes an objective function by conducting a population-based search. The population consists of potential solutions, called particles, which are a metaphor for birds in flocks. The particles are randomly initialized and then fly freely across the multi-dimensional search space. During flight, each particle updates its velocity and position based on the best experience of its own and of the entire population. This updating policy drives the particle swarm toward the regions with higher objective values, and eventually all particles will gather around the point with the highest objective value. The detailed operation of particle swarm optimization is given below:

Step 1 Initialization
The velocities and positions of all particles are randomly set to values within the pre-specified or legal ranges.
Step 2 Velocity Updating
At each iteration, the velocities of all particles are updated according to the following rule:

~νi ←− ω · ~νi + c1 · R1 · (~pi,best − ~pi) + c2 · R2 · (~gbest − ~pi)    (12)

where ~pi and ~νi are the position and velocity of particle i, respectively; ~pi,best and ~gbest are the positions with the 'best' objective value found so far by particle i and by the entire population, respectively; ω is a parameter controlling the dynamics of flying; R1 and R2 are random variables drawn from the range [0, 1]; and c1 and c2 are factors controlling the relative weighting of the corresponding terms.
The inclusion of random variables endows particle swarm optimization with the ability to search stochastically. The weighting factors, c1 and c2, settle the inevitable tradeoff between exploration and exploitation. After the updating, ~νi should be checked and clamped to a pre-specified range to avoid violent random walking.
Step 3 Position Updating
Assuming a unit time interval between successive iterations, the positions of all particles are updated according to the following rule:

~pi ←− ~pi + ~νi    (13)

After updating, ~pi should be checked and coerced into the legal range to ensure legal solutions.
Step 4 Memory Updating
Update ~pi,best and ~gbest when the following conditions are met:

~pi,best ←− ~pi,  if f(~pi) > f(~pi,best);
~gbest ←− ~pi,  if f(~pi) > f(~gbest)    (14)

where f(~x) is the objective function subject to maximization.
Step 5 Termination Checking
The algorithm repeats Steps 2 to 4 until certain termination conditions are met, such as a pre-defined number of iterations being reached or a failure to make progress for a certain number of iterations. Once terminated, the algorithm reports ~gbest and f(~gbest) as its solution.

Program 3 at the end of this chapter is a straightforward implementation of the algorithm above. To experience the power of particle swarm optimization, Program 3 is applied to the following test function, as visualized in Figure 8:

F2(x, y) = −x sin(√|x|) − y sin(√|y|),   −500 < x, y < 500    (15)

where the global optimum is at F2(−420.97, −420.97) = 837.97.

Figure 8. Objective function F2

In the tests above, both learning factors, c1 and c2, are set to a value of 2, and a variable inertia weight ω is used according to the suggestion of Shi and Eberhart (1999). Figure 9 reports the progress of particle swarm optimization on the test function F2(x, y) for the first 300 iterations. At the end of 1000 iterations, F2(−420.97, −420.96) = 837.97 is located, which is close to the global optimum.

It is worthwhile to look into the dynamics of particle swarm opti-


mization. Figure 10 presents the distribution of particles at different
iterations. There is a clear trend that particles start from their initial
positions and fly toward the global optimum.

Figure 9. Progress of PSO on objective function F2

Numerous variants have been introduced since the first particle swarm optimization. A discrete binary version of the particle swarm optimization algorithm was proposed by Kennedy and Eberhart (1997). Shi and Eberhart (2001) applied fuzzy theory to the particle swarm optimization algorithm, and the concept of co-evolution has been successfully incorporated in solving min-max problems (Shi and Krohling 2002). Chu et al. (2003) have proposed a parallel architecture with communication mechanisms for information exchange among independent particle groups, with which solution quality can be significantly improved.

5 Discussions and Conclusions

The nonstop process of natural evolution has successfully driven natural species to develop effective solutions to a wide range of problems.
Figure 10. The distribution of particles at different iterations

Genetic algorithms, ant systems, and particle swarm optimization, all inspired by nature, have also proved themselves to be effective solutions to optimization problems. However, readers should remember that, despite the robustness these approaches are claimed to possess, there is no panacea. As discussed in the previous sections, there are control parameters involved in these meta-heuristics, and an adequate setting of these parameters is a key point for success. In general, some kind of trial-and-error tuning is necessary for each particular instance of an optimization problem. In addition, these meta-heuristics should not be considered in isolation. Prospective users should consider the possibility of hybrid approaches and the integration of gradient-based methods, which are promising directions deserving further study.

References
Abramson, D. and Abela, J. (1991), “A parallel genetic algorithm for
solving the school timetabling problem,” Technical report, Divi-
sion of Information Technology, CSIRO.

Bull, L. (2001), “On coevolutionary genetic algorithms,” Soft Com-


puting, vol. 5, no. 3, pp. 201-207.

Chu, S. C. and Roddick, J. F. and Pan, J. S. (2003), “Parallel par-


ticle swarm optimization algorithm with communication strate-
gies,” personal communication.

Chu, S. C. and Roddick, J. F. and Pan, J. S. and Su, C. J. (2003),


“Parallel ant colony systems,” 14th International Symposium on
Methodologies for Intelligent Systems, LNCS, Springer-Verlag,
(will appear in Japan).

Dorigo, M., Maniezzo, V., and Colorni, A. (1996), "The ant system: optimization by a colony of cooperating agents," IEEE Trans. on Systems, Man, and Cybernetics–Part B, vol. 26, no. 1, pp. 29-41.

Dorigo, M. and Gambardella, L. M. (1997), "Ant colony system: a cooperative learning approach to the traveling salesman problem," IEEE Trans. on Evolutionary Computation, vol. 1, no. 1, pp. 53-66.

Fonseca, C. M. and Fleming, P. J. (1993), “Multiobjective genetic


algorithms,” IEE Colloquium on Genetic Algorithms for Control
Systems Engineering, number 1993/130, pp. 6/1-6/5.

Fonseca, C. M. and Fleming, P. J. (1998), “Multiobjective opti-


mization and multiple constraint handling with evolutionary al-
gorithms I: A unified formulation,” IEEE Trans. on Systems, Man
and Cybernetics-Part A, vol. 28, no. 1, pp. 26-37.
Gao, Y., Shi, L., and Yao, P. (2000), “Study on multi-objective ge-
netic algorithm,” Proceedings of the Third World Congress on In-
telligent Control and Automation, pp. 646-650.
Goldberg, D. E. (1989), Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA.
Handa, H., Baba, M., Horiuchi, T., and Katai, O. (2002), “A novel
hybrid framework of coevolutionary GA and machine learning,”
International Journal of Computational Intelligence and Appli-
cations, vol. 2, no. 1, pp. 33-52.
Holland, J. (1975), Adaptation In Natural and Artificial Systems,
University of Michigan Press.
Kennedy, J. and Eberhart, R. (1995), "Particle swarm optimization," IEEE International Conference on Neural Networks, pp. 1942-1948.
Papadimitriou C. H. and Steiglitz, K. (1982), Combinatorial Opti-
mization – Algorithms and Complexity, Prentice Hall.
Sareni, B. and Krahenbuhl, L. (1998), “Fitness sharing and niching
methods revisited,” IEEE Trans. on Evolutionary Computation,
vol. 2, no. 3, pp. 97-106.
Shi, Y. and Eberhart, R. C. (2001), “Fuzzy adaptive particle swarm
optimization,” Proceedings of 2001 Congress on Evolutionary
Computation (CEC’2001), pp. 101-106.
Shi, Y. and Krohling, R. A. (2002), “Co-evolutionary particle swarm
optimization to solve min-max problems,” Proceedings of 2002
Congress on Evolutionary Computation (CEC’2002), vol. 2, pp.
1682-1687.
Wang, L. and Wu, Q. (2001), “Ant system algorithm for optimization
in continuous space,” IEEE International Conference on Control
Applications (CCA’2001), pp. 395–400.
Whitley, D. (1993), A genetic algorithm tutorial, Technical Report
CS-93-103, Department of Computer Science, Colorado State
University, Fort Collins, CO 8052.

Willis, M.-J., Hiden, H. G., Marenbach, P., McKay, B., and Mon-
tague, G.A. (1997), “Genetic programming: an introduction and
survey of applications,” Proceedings of the Second International
Conference on Genetic Algorithms in Engineering Systems: Inno-
vations and Applications (GALESIA’97), pp. 314-319.
Program 1. An implementation of the genetic algorithm in C language

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define MG 20      /* Maximal Number of Generations (as in the text) */
#define N  10      /* Population Size (as in the text) */
#define CL 32      /* Number of bits in each chromosome */
#define SF 2.0     /* Selection Factor (assumed typical value; the
                      original constant was lost in extraction) */
#define CR 0.8     /* Crossover Rate (assumed typical value) */
#define MR 0.01    /* Mutation Rate (assumed typical value) */

/* Macro for random number between 0 and 1 */
#define RAND ((float)rand()/(float)RAND_MAX)

int   c[N][CL];    /* Population of Chromosomes */
float f[N];        /* Fitness Value of Chromosomes */
int   best_c[CL];  /* Best Chromosome */
float best_f;      /* Best Fitness Value */

/* Decode Chromosome */
void decode(int chromosome[CL], float *x, float *y)
{
  int j;
  /* Decode the lower 16 bits for variable x */
  *x = 0;
  for (j = 0; j < CL/2; j++)
    *x = *x * 2 + chromosome[j];
  *x = *x / 65536 * 10 - 5;
  /* Decode the upper 16 bits for variable y */
  *y = 0;
  for (j = CL/2; j < CL; j++)
    *y = *y * 2 + chromosome[j];
  *y = *y / 65536 * 10 - 5;
}

/* Objective Function F1 */
float object(float x, float y)
{
  return 4/((x-2)*(x-2)+(y-2)*(y-2)+1)
       + 3/((x-2)*(x-2)+(y+2)*(y+2)+1)
       + 2/((x+2)*(x+2)+(y-2)*(y-2)+1);
}

int main(void)
{
  int i, k;        /* Indices for Chromosomes */
  int j;           /* Index for Genes */
  int gen;         /* Index for Generations */
  float x, y;      /* Independent Variables */
  int site;        /* Crossover Site */
  float tmpf;
  int tmpi;
  int tmpc[N][CL]; /* Temporary Population */
  float p[N];      /* Selection Probability */

  /* Set random seed (seed value assumed; the original was lost) */
  srand(0);

  /* Initialize Population */
  best_f = -1e10;
  for (i = 0; i < N; i++)
    /* Randomly set each gene to 0 or 1 */
    for (j = 0; j < CL; j++)
      c[i][j] = (RAND < 0.5) ? 0 : 1;

  /* Repeat Genetic Algorithm cycle for MG times */
  for (gen = 0; gen < MG; gen++)
  {
    /* Evaluation */
    for (i = 0; i < N; i++)
    {
      decode(c[i], &x, &y);
      f[i] = object(x, y);
      /* Update best solution */
      if (f[i] > best_f)
      {
        best_f = f[i];
        for (j = 0; j < CL; j++)
          best_c[j] = c[i][j];
      }
    }
    /* Selection: evaluate selection probabilities */
    tmpf = 0;
    for (i = 0; i < N; i++)
    {
      p[i] = (float)pow(f[i], SF);
      tmpf += p[i];
    }
    for (i = 0; i < N; i++)
      p[i] /= tmpf;
    /* Retain the best chromosome found so far */
    for (j = 0; j < CL; j++)
      tmpc[0][j] = best_c[j];
    /* Roulette wheel selection with replacement */
    for (i = 1; i < N; i++)
    {
      tmpf = RAND;
      for (k = 0; k < N-1 && tmpf > p[k]; k++)
        tmpf -= p[k];
      /* Chromosome k is selected */
      for (j = 0; j < CL; j++)
        tmpc[i][j] = c[k][j];
    }
    /* Copy temporary population to population */
    for (i = 0; i < N; i++)
      for (j = 0; j < CL; j++)
        c[i][j] = tmpc[i][j];
    /* 1-site Crossover */
    for (i = 0; i < N; i += 2)
      if (RAND < CR)
      {
        site = (int)(RAND * CL);
        for (j = 0; j < site; j++)
        {
          tmpi      = c[i][j];
          c[i][j]   = c[i+1][j];
          c[i+1][j] = tmpi;
        }
      }
    /* Mutation */
    for (i = 0; i < N; i++)
      for (j = 0; j < CL; j++)
        if (RAND < MR)
          c[i][j] = 1 - c[i][j];   /* Flip the j-th gene */
    /* Report Progress */
    printf("%f\n", best_f);
  }
  /* Report Solution */
  decode(best_c, &x, &y);
  printf("F1(%f, %f) = %f\n", x, y, object(x, y));
  return 0;
}
3URJUDP  $Q LPSOHPHQWDWLRQ RI DQW V\VWHP LQ & ODQJXDJH



#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define Map          "map.txt"   /* filename of city map */
#define NumberOfCity 10          /* number of cities */
#define NumberOfAnt  10          /* number of ants */
#define alpha        0.5         /* pheromone decay factor */
#define beta         2.0         /* trade-off factor between
                                    pheromone and distance */
#define tau0         0.01        /* initial intensity of pheromone */
#define EpisodeLimit 100         /* limit of episode */
#define Route        "route.txt" /* filename for route map */
/* RAND: Macro for random number between 0 and 1 */
#define RAND ((float)rand()/(float)RAND_MAX)

typedef struct {
    float x; /* x coordinate */
    float y; /* y coordinate */
} CityType;
typedef struct {
    int   route[NumberOfCity]; /* visiting sequence of cities */
    float length;              /* length of route */
} RouteType;

CityType  city[NumberOfCity];               /* city array */
float     delta[NumberOfCity][NumberOfCity];
                              /* distance matrix */
float     eta[NumberOfCity][NumberOfCity];
                              /* weighted visibility matrix */
float     tau[NumberOfCity][NumberOfCity];
                              /* pheromone intensity matrix */
RouteType BestRoute;          /* shortest route */
RouteType ant[NumberOfAnt];   /* ant array */

float p[NumberOfCity];        /* path-taking probability array */
int   visited[NumberOfCity];  /* array for visiting status */

float delta_tau[NumberOfCity][NumberOfCity];
                              /* sum of change in tau */

int main(void)
{
    FILE *mapfpr;   /* file pointer for city map */
    int   r, s;     /* indices for cities */
    int   k;        /* index for ant */
    int   episode;  /* index for ant system cycle */
    int   step;     /* index for routing step */
    float tmp;      /* temporary variable */
    FILE *routefpr; /* file pointer for route map */

    /* Set random seed */
    srand(0);

    /* Read city map */
    mapfpr = fopen(Map, "r");
    for (r = 0; r < NumberOfCity; r++)
        fscanf(mapfpr, "%f%f", &city[r].x, &city[r].y);
    fclose(mapfpr);

    /* Evaluate distance matrix */
    for (r = 0; r < NumberOfCity; r++)
        for (s = 0; s < NumberOfCity; s++)
            delta[r][s] = sqrt((city[r].x - city[s].x) *
                               (city[r].x - city[s].x) +
                               (city[r].y - city[s].y) *
                               (city[r].y - city[s].y));

    /* Evaluate weighted visibility matrix */
    for (r = 0; r < NumberOfCity; r++)
        for (s = 0; s < NumberOfCity; s++)
            if (r != s)
                eta[r][s] = pow(delta[r][s], -beta);
            else
                eta[r][s] = 0;

    /* Initialize pheromone on edges */
    for (r = 0; r < NumberOfCity; r++)
        for (s = 0; s < NumberOfCity; s++)
            tau[r][s] = tau0;

    /* Initialize best route */
    BestRoute.route[0] = 0;
    BestRoute.length   = 0;
    for (r = 1; r < NumberOfCity; r++)
    {
        BestRoute.route[r] = r;
        BestRoute.length  += delta[r-1][r];
    }
    BestRoute.length += delta[NumberOfCity-1][0];

    /* Repeat ant system cycle for EpisodeLimit times */
    for (episode = 0; episode < EpisodeLimit; episode++)
    {
        /* Initialize ants' starting city */
        for (k = 0; k < NumberOfAnt; k++)
            ant[k].route[0] = rand() % NumberOfCity;

        /* Let all ants proceed for NumberOfCity-1 steps */
        for (step = 1; step < NumberOfCity; step++)
        {
            for (k = 0; k < NumberOfAnt; k++)
            {
                /* Evaluate path-taking probability array
                   for ant k at current time step */
                r = ant[k].route[step-1];
                /* Clear visited list of ant k */
                for (s = 0; s < NumberOfCity; s++)
                    visited[s] = 0;
                /* Mark visited cities of ant k */
                for (s = 0; s < step; s++)
                    visited[ant[k].route[s]] = 1;
                tmp = 0;
                for (s = 0; s < NumberOfCity; s++)
                    if (visited[s] == 1)
                        p[s] = 0;
                    else
                    {
                        p[s] = tau[r][s] * eta[r][s];
                        tmp  = tmp + p[s];
                    }
                for (s = 0; s < NumberOfCity; s++)
                    p[s] = p[s] / tmp;
                /* Probabilistically pick up next
                   edge by roulette wheel selection */
                tmp = RAND;
                for (s = 0; tmp > p[s]; s++)
                    tmp = tmp - p[s];
                ant[k].route[step] = s;
            }
        }
        /* Update pheromone intensity */
        /* Reset matrix for sum of change in tau */
        for (r = 0; r < NumberOfCity; r++)
            for (s = 0; s < NumberOfCity; s++)
                delta_tau[r][s] = 0;
        for (k = 0; k < NumberOfAnt; k++)
        {
            /* Evaluate route length */
            ant[k].length = 0;
            for (r = 1; r < NumberOfCity; r++)
                ant[k].length += delta[ant[k].route[r-1]]
                                      [ant[k].route[r]];
            ant[k].length += delta[ant[k].route[NumberOfCity-1]]
                                  [ant[k].route[0]];

            /* Evaluate contributed delta_tau */
            for (r = 1; r < NumberOfCity; r++)
                delta_tau[ant[k].route[r-1]][ant[k].route[r]]
                    += 1 / ant[k].length;
            delta_tau[ant[k].route[NumberOfCity-1]][ant[k].route[0]]
                += 1 / ant[k].length;

            /* Update best route */
            if (ant[k].length < BestRoute.length)
            {
                BestRoute.length = ant[k].length;
                for (r = 0; r < NumberOfCity; r++)
                    BestRoute.route[r] = ant[k].route[r];
            }
        }
        /* Update pheromone matrix */
        for (r = 0; r < NumberOfCity; r++)
            for (s = 0; s < NumberOfCity; s++)
                tau[r][s] = alpha * tau[r][s] + delta_tau[r][s];
        printf("%d %f\n", episode, BestRoute.length);
    }

    /* Write route map */
    routefpr = fopen(Route, "w");
    for (r = 0; r < NumberOfCity; r++)
        fprintf(routefpr, "%f %f\n",
                city[BestRoute.route[r]].x,
                city[BestRoute.route[r]].y);
    fclose(routefpr);
    return 0;
}


Sample data (map.txt) for Program 2: one city per line, given as
its x and y coordinates as two plain-text floating-point numbers.

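Since map.txt simply holds one "x y" pair per line in the format the program reads with fscanf, a file of the right shape can be produced with a helper such as the one below. This is a sketch, not part of the original listing; the function name `write_map` and the 100-by-100 area are illustrative assumptions:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: write n random city coordinates, one
   "x y" pair per line, in the format Program 2 reads back with
   fscanf(mapfpr, "%f%f", ...). Returns n on success, -1 on
   failure to open the file. */
int write_map(const char *filename, int n)
{
    FILE *fp = fopen(filename, "w");
    int   r;
    if (fp == NULL)
        return -1;
    for (r = 0; r < n; r++)
        fprintf(fp, "%f %f\n",
                100.0 * rand() / (double)RAND_MAX,
                100.0 * rand() / (double)RAND_MAX);
    fclose(fp);
    return n;
}
```

A call such as `write_map("map.txt", NumberOfCity)` before running the ant system yields a random instance to experiment with.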
Program 3: An implementation of particle swarm optimization in C language


#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define IterationLimit 1000  /* Maximal Number of Iterations */
#define PopulationSize 20    /* Population Size
                                (Number of Particles) */
#define Dimension      2     /* Dimension of Search Space */
#define wU             0.9   /* Upper Bound of Inertia Weight */
#define wL             0.4   /* Lower Bound of Inertia Weight */
#define c1             2.0   /* Acceleration Factor 1 */
#define c2             2.0   /* Acceleration Factor 2 */
#define Vmax           500.0 /* Maximal Velocity */
#define RAND ((float)rand()/(float)RAND_MAX)

typedef struct {
    float x[Dimension];      /* Position */
    float v[Dimension];      /* Velocity */
    float fitness;           /* Fitness */
    float best_x[Dimension]; /* Individual Best Solution */
    float best_fitness;      /* Individual Best Fitness */
} ParticleType;

ParticleType p[PopulationSize];  /* Particle Array */
float        gbest_x[Dimension]; /* Global Best Solution */
float        gbest_fitness;      /* Global Best Fitness */

/* Schwefel Function */
float Schwefel(float x[Dimension])
{
    int   i;
    float tmp;
    tmp = 0;
    for (i = 0; i < Dimension; i++)
        tmp += x[i] * sin(sqrt(fabs(x[i])));
    return tmp;
}

int main(void)
{
    int   i;    /* Index for Particle */
    int   d;    /* Index for Dimension */
    float w;    /* Inertia Weight */
    int   step; /* Index for PSO cycle */

    /* Set random seed */
    srand(0);

    /* Initialize particles */
    gbest_fitness = -1.0e9;
    for (i = 0; i < PopulationSize; i++)
    {
        for (d = 0; d < Dimension; d++)
        {
            p[i].x[d] = p[i].best_x[d] = 1000.0 * RAND - 500.0;
            p[i].v[d] = Vmax * (2.0 * RAND - 1.0);
        }
        p[i].fitness = p[i].best_fitness = Schwefel(p[i].x);
        /* Update gbest */
        if (p[i].best_fitness > gbest_fitness)
        {
            for (d = 0; d < Dimension; d++)
                gbest_x[d] = p[i].best_x[d];
            gbest_fitness = p[i].best_fitness;
        }
    }

    /* Repeat PSO cycle for IterationLimit times */
    for (step = 0; step < IterationLimit; step++)
    {
        w = wU - (wU - wL) * (float)step / (float)IterationLimit;
        for (i = 0; i < PopulationSize; i++)
        {
            for (d = 0; d < Dimension; d++)
            {
                p[i].v[d] = w * p[i].v[d]
                    + c1 * RAND * (p[i].best_x[d] - p[i].x[d])
                    + c2 * RAND * (gbest_x[d] - p[i].x[d]);
                if (p[i].v[d] > Vmax)   p[i].v[d] = Vmax;
                if (p[i].v[d] < -Vmax)  p[i].v[d] = -Vmax;
                p[i].x[d] = p[i].x[d] + p[i].v[d];
                if (p[i].x[d] > 500.0)  p[i].x[d] = 500.0;
                if (p[i].x[d] < -500.0) p[i].x[d] = -500.0;
            }
            p[i].fitness = Schwefel(p[i].x);
            /* Update pbest */
            if (p[i].fitness > p[i].best_fitness)
            {
                for (d = 0; d < Dimension; d++)
                    p[i].best_x[d] = p[i].x[d];
                p[i].best_fitness = p[i].fitness;
                /* Update gbest */
                if (p[i].best_fitness > gbest_fitness)
                {
                    for (d = 0; d < Dimension; d++)
                        gbest_x[d] = p[i].best_x[d];
                    gbest_fitness = p[i].best_fitness;
                }
            }
        }
        printf("%f\n", gbest_fitness);
    }
    return 0;
}
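The linearly decreasing inertia weight computed at the top of the PSO loop above can be checked in isolation. The following sketch assumes the conventional bounds wU = 0.9 and wL = 0.4; the function name `inertia` is an illustrative assumption:

```c
#define wU 0.9f /* upper bound of inertia weight (assumed) */
#define wL 0.4f /* lower bound of inertia weight (assumed) */

/* Inertia weight after `step` of `limit` iterations: starts at wU
   and decays linearly toward wL, matching
   w = wU - (wU - wL)*(float)step/(float)limit in the listing. */
float inertia(int step, int limit)
{
    return wU - (wU - wL) * (float)step / (float)limit;
}
```

A large weight early on favors global exploration; the decay toward wL shifts the swarm to local refinement near the end of the run.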

