Multi-Objective Optimization Using Genetic Algorithms
Kaveh Amouzgar
This thesis work has been carried out at the School of Engineering in Jönköping in the subject area Product Development and Materials Engineering. The work is a part of the master's degree. The authors take full responsibility for opinions, conclusions and findings presented.

Supervisor: Niclas Strömberg
Scope: 30 ECTS credits
Date: 2012-05-30

This thesis has been prepared using LaTeX.
Postadress:
Box 1026
551 11 Jönköping

Besöksadress:
Gjuterigatan 5

Telefon:
036-10 10 00 (vx)
Abstract
In this thesis, the basic principles and concepts of single- and multi-objective genetic algorithms (GA) are reviewed. Two algorithms, one for single-objective and the other for multi-objective problems, which are believed to be particularly efficient, are described in detail. The algorithms are coded in MATLAB and applied to several test functions. The results are compared with existing solutions in the literature and show promising performance. The obtained Pareto fronts closely match the true Pareto fronts, with a good spread of solutions throughout the optimal region. Constraint handling techniques are studied and applied in the two algorithms. Constrained benchmarks are optimized, and the outcomes show the ability of the algorithms to maintain solutions across the entire Pareto-optimal region. Finally, a hybrid method based on the combination of the two algorithms is introduced and its performance is discussed. It is concluded that no significant strength is observed in the approach and that more research is required on this topic. For further investigation of the performance of the proposed techniques, implementation on real-world engineering applications is recommended.
Keywords
Single Objective Optimization, Multi-objective Optimization, Constraint Handling, Hybrid Optimization, Evolutionary Algorithm, Genetic Algorithm, Pareto Front, Domination.
Contents

1 Introduction
    1.1 Background
    1.2
    1.3 Delimitations
    1.4 Outline

2 Theoretical background
    2.1 What is Optimization?
    2.2 Single-Objective Optimization
        2.2.1 Evolutionary Method
        2.2.2
        2.2.3
        2.2.4 Real Parameter GA
        2.2.5
        2.2.6
        2.2.7 Constraint Handling
    2.3 Multi-objective Optimization
        2.3.1
        2.3.2
        2.3.3
        2.3.4
        2.3.5 Evolutionary Algorithms
        2.3.6 MOEA Techniques
        2.3.7 Comparison of MOEAs
        2.3.8
        2.3.9
        2.3.10 Constraint Handling
    2.4

3 Implementation
    3.1 Single Objective
        3.1.1 Unconstrained Test Functions
        3.1.2
    3.2 Multi objective
        3.2.1
        3.2.2
    3.3 Hybrid Approach

4 Test Results
    4.1 Single Objective
        4.1.1 Unconstrained Functions
        4.1.2 Constrained Functions
    4.2 Multi-Objective
        4.2.1 Unconstrained Functions
        4.2.2 Constrained Functions
    4.3 Hybrid Approach

5 Conclusion

6 Bibliography
List of Figures

1 Trade-off curve
2 Min-Min Pareto-front

List of Tables
Introduction
1.1
Background
The theory, concepts and algorithms of single- and multi-objective optimization using evolutionary algorithms have previously been treated by Beasley and Bull (1993); Coello (2007); Deb (1995, 2001, 2002, 2004); Deb et al. (2001); Fonseca and Fleming (1993); Haupt et al. (2004); Kim et al. (2004); Kukkonen (2006); Man et al. (1996); Zitzler and Thiele (1998); Zitzler et al. (2001).

Constraint handling methods have been studied by Deb (2000); Deb et al. (2002); Jiménez et al. (1999); Kuri-Morales and Gutierrez-Garcia (2002); Mezura-Montes et al. (2003); T. Ray (2001).

Test functions and their comparison have been studied by Binh and Korn (1997); Deb (1991, 1999); Gamot and Mesa (2008); Kuri-Morales and Gutierrez-Garcia (2002); Kursawe (1991); Osyczka and Kundu (1995); Srinivas and Deb (1994); Tanaka and Watanabe (1995) and Zitzler et al. (2000).
1.2
1.3
Delimitations
The genetic algorithm is the only method used in developing the technique. Other evolutionary methods, such as evolution strategies, evolutionary programming and genetic programming, are not considered in this thesis.
1.4
Outline
Theoretical background
2.1
What is Optimization?
2.2
2.2.1
Single-Objective Optimization
Evolutionary Method
This method was inspired by the natural process of evolution, and interest in imitating living beings has been growing since the 1960s. The evolutionary method mimics the evolutionary principles of nature, which results in a stochastic search and optimization algorithm. It can also outpace classical methods in many ways (Gen and Cheng, 1997).
An evolutionary method (algorithm) uses a population of solutions in each iteration, instead of the single solution used in classical methods. An initial population of random solutions is updated in each generation so that it finally converges to a single optimal solution. Maintaining a population of optimal solutions in a single simulation run is a unique characteristic of the method when solving multi-objective optimization problems (Deb, 2001).
Gen and Cheng (1997) divide the method into three main types: genetic algorithms, evolutionary programming and evolution strategies, while Deb (2001) describes an additional, fourth type: genetic programming.
2.2.2
fit individuals (parents) have the potential to produce offspring with a better fitness than both parents, called super-fit offspring. By this principle, the initial population evolves in each generation into a population better suited to its environment (Beasley and Bull, 1993).
2.2.3
population and increases the possibility of not losing any potential solution and of finding the global optimum, while the crossover operator is a technique for rapid exploration of the search space (Beasley and Bull, 1993).
To sum up, the selection operator selects and maintains the good solutions, crossover recombines the fit solutions to create fitter offspring, and the mutation operator randomly alters a gene or genes in a string in the hope of finding a better string (Deb, 2001).
2.2.4
Real Parameter GA
Binary-coded GAs face some difficulties, including the inability to solve problems in which the variables have a continuous search space or in which the required precision is high. According to Deb (2001), the Hamming cliffs associated with certain strings (01111 or 11110) are one of these difficulties: moving to a neighbouring string requires changes in many genes. He also identifies the need for long strings (chromosomes with many genes) to achieve the necessary precision, which in turn increases the population size, as another struggle for binary GAs. It is therefore more logical in most problems to represent the variables with floating point numbers, which require less storage than binary-coded strings. In addition, since the chromosomes need not be decoded before the objective function is evaluated in the selection phase, the real parameter GA (in some literature called the continuous GA) is inherently faster than the binary GA (Haupt et al., 2004).
Since the real values of the parameters are used directly to find the fitness value in the selection operator, and no decoding to a string takes place in real parameter GAs, this operator does not differ from the binary GA selection operators, and the same operators can be used in real parameter GAs. On the other hand, since the crossover and mutation operators used in binary GAs are based on strings and the alteration of genes (bits), new crossover and mutation operators must be defined for this type of GA.
Deb (2001) outlines several real parameter crossover operators, such as linear crossover, naive crossover, blend crossover (BLX), simulated binary crossover (SBX), the fuzzy recombination operator, unimodal normally distributed crossover (UNDX), simplex crossover (SPX), fuzzy connectives based crossover and unfair average crossover. Other crossover operators, including parent centric crossover (PCX) and modified PCX (mPCX), are recommended in the literature (Deb and Joshi, 2001).
Since in a real parameter crossover operator two or more parents directly recombine to create one or more offspring, and this has the same concept as the mutation operator, a question arises: is there a good reason for using a mutation operator along with a crossover operator? The debate still remains; however, Deb (2001) argues that the difference between these two operators lies in the number of parent solutions selected for perturbation. He states that if the offspring is created from one parent, the operator is a mutation, while an offspring created from more than one parent is a crossover. He also mentions some common mutation operators in his book: random mutation, non-uniform mutation, normally distributed mutation and polynomial mutation.
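The thesis implements its operators in MATLAB; as an illustrative sketch only, two of the operators named above, blend crossover (BLX-α) and polynomial mutation, might look as follows in Python (the parameter names and default values here are assumptions, not the thesis's settings):

```python
import random

def blend_crossover(p1, p2, alpha=0.5):
    """BLX-alpha: each child gene is drawn uniformly from the interval
    spanned by the two parent genes, extended by alpha on each side."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        span = hi - lo
        child.append(random.uniform(lo - alpha * span, hi + alpha * span))
    return child

def polynomial_mutation(x, bounds, eta=20.0, p_m=0.1):
    """Polynomial mutation: the perturbation magnitude follows a
    polynomial probability distribution controlled by eta."""
    y = list(x)
    for i, (lo, hi) in enumerate(bounds):
        if random.random() < p_m:
            u = random.random()
            if u < 0.5:
                delta = (2 * u) ** (1.0 / (eta + 1)) - 1.0
            else:
                delta = 1.0 - (2 * (1 - u)) ** (1.0 / (eta + 1))
            # perturb and clip back into the variable bounds
            y[i] = min(hi, max(lo, y[i] + delta * (hi - lo)))
    return y
```

Both operators act directly on real-valued vectors, which is exactly the point of the real parameter GA: no encoding or decoding step is needed.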
2.2.5
Several parametric studies, such as Deb (2001, 2002, 2004); Kita (2001), compare the performance of the G3 model to other evolutionary algorithms, and in all of these studies the G3 model has shown better performance and robustness. Different recombination operators have also been examined, and the overall result shows faster computation time and a lower number of evaluations required to meet a desired accuracy for the parent centric recombination operator (PCX) proposed by Deb et al. (2001), which will be briefly described.
2.2.6
Deb et al. (2001) suggest a variation operator (a combination of crossover and mutation) for this algorithm, called the parent centric recombination operator (PCX). A parent centric operator ensures that the population mean of the total offspring population is identical to that of the parent population, while mean centric operators preserve the mean between the participating parents and the resulting offspring. The paper states the benefit of parent centric recombination operators over mean centric operators: the parents are selected from the fittest solutions in the selection plan, and in most real parameter optimization problems it is assumed that solutions near the parents are potentially good solutions. Therefore, creating new solutions close to the parents, as PCX does, is a steady and reliable search technique.
The mean vector g of the μ chosen parents is computed. For each offspring, one parent x^(p) is chosen with equal probability, and the direction vector d^(p) = x^(p) − g is calculated. Thereafter, for each of the other (μ − 1) parents, the perpendicular distance D_i to the line d^(p) is computed, and their average D̄ is found. The offspring is created as follows:

    y = x^(p) + w_ζ d^(p) + Σ_{i=1, i≠p}^{μ} w_η D̄ e^(i),

where the e^(i) are the (μ − 1) orthonormal bases that span the subspace perpendicular to d^(p). The parameters w_ζ and w_η are zero-mean normally distributed variables with variance σ_ζ² and σ_η², respectively.
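As an illustration of the formula above (not the thesis's MATLAB implementation), a minimal PCX sketch in Python follows. It uses a common implementation shortcut: instead of summing over (μ − 1) orthonormal basis vectors, it draws a single random direction and projects out the component along d^(p); the function name and parameters are assumptions:

```python
import numpy as np

def pcx(parents, p=0, sigma_zeta=0.1, sigma_eta=0.1, rng=None):
    """Parent-centric recombination (PCX) sketch.
    parents: (mu, n) array-like; the offspring is centred on parents[p]."""
    rng = np.random.default_rng() if rng is None else rng
    parents = np.asarray(parents, dtype=float)
    mu, n = parents.shape
    g = parents.mean(axis=0)            # mean vector g of the parents
    d = parents[p] - g                  # direction vector d^(p)
    others = np.delete(parents, p, axis=0)
    if np.allclose(d, 0):
        unit_d = np.zeros(n)
        D_bar = np.mean(np.linalg.norm(others - g, axis=1))
    else:
        unit_d = d / np.linalg.norm(d)
        # perpendicular components of the other parents w.r.t. the line d^(p)
        perp = (others - g) - np.outer((others - g) @ unit_d, unit_d)
        D_bar = np.mean(np.linalg.norm(perp, axis=1))
    w_zeta = rng.normal(0.0, sigma_zeta)
    w_eta = rng.normal(0.0, sigma_eta)
    # random direction orthogonal to d^(p), standing in for the e^(i) sum
    e = rng.normal(size=n)
    e -= (e @ unit_d) * unit_d
    norm = np.linalg.norm(e)
    if norm > 0:
        e /= norm
    return parents[p] + w_zeta * d + w_eta * D_bar * e
```

Because the offspring is anchored at x^(p) rather than at the mean g, the search stays close to the fit parents, which is the parent centric property discussed above.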
2.2.7
Constraint Handling
Most existing constraint handling methods in the literature are classified into five categories, which Deb (2001) describes briefly:

Methods based on preserving feasibility of solutions.
Methods based on penalty functions.
Methods biasing feasible over infeasible solutions.
Methods based on decoders.
Hybrid methods.

In this thesis, the method based on penalty functions is used for single-objective optimization. The penalty function method transforms a constrained optimization problem into an unconstrained one, usually by means of an additive penalty term or a penalty multiplier. Penalty methods can be further categorized into several types:
Death Penalty
Static Penalties
Dynamic Penalties
Annealing Penalties
Adaptive Penalties
Segregated GA
In the static penalty method, which is implemented in this section, the penalty parameters do not change between generations, and the penalty is only applied to infeasible solutions.
A number of approaches of this type have been suggested, but Morales et al. (1997) penalize the objective function of infeasible solutions by using information on the number of violated constraints. Their approach is formulated as follows:

    F(x) = f(x),                    if x is feasible,
    F(x) = K − Σ_{i=1}^{s} K/m,     otherwise,

where K is a large constant, m is the total number of constraints, and s is the number of constraints that the solution satisfies.
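A small illustrative sketch of this static penalty scheme in Python (not the thesis's MATLAB code; the wrapper name and the g(x) ≥ 0 constraint convention are assumptions) shows how infeasible solutions are ranked only by how many constraints they satisfy, never by their objective value:

```python
def static_penalty(f, constraints, K=1e9):
    """Static penalty wrapper: feasible points keep their objective f(x);
    infeasible points get F(x) = K - s*K/m, where s is the number of the
    m constraints (g(x) >= 0) that the point satisfies."""
    m = len(constraints)
    def F(x):
        satisfied = sum(1 for g in constraints if g(x) >= 0)
        if satisfied == m:
            return f(x)                 # feasible: plain objective value
        return K - satisfied * (K / m)  # infeasible: penalty only
    return F
```

With a large K, any infeasible point is worse than any feasible one, and among infeasible points those satisfying more constraints are preferred.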
2.3
Multi-objective Optimization
In real-world applications, most optimization problems involve more than one objective to be optimized. The objectives in most engineering problems are often conflicting, e.g., maximize performance, minimize cost, maximize reliability. In such cases, one extreme solution will not satisfy all objective functions, and the optimal solution of one objective will not necessarily be the best solution for the other objective(s). Different solutions therefore produce trade-offs between the objectives, and a set of solutions is required to represent the optimal solutions of all objectives.
Figure 1 shows the trade-off curve for the decision making involved in the problem of buying a house.
The trade-off curve reveals that attaining the extreme optimum of one objective (price) requires a compromise in the other objective (house area). However, a number of trade-off solutions exist between the two extreme optima, each of which is better with regard to one objective.
2.3.1

A multi-objective optimization problem (MOOP) is stated in its general form as:

    Minimize/Maximize  f_m(x),                   m = 1, 2, ..., M;
    subject to         g_j(x) ≥ 0,               j = 1, 2, ..., J;
                       h_k(x) = 0,               k = 1, 2, ..., K;
                       x_i^(L) ≤ x_i ≤ x_i^(U),  i = 1, 2, ..., n.

2.3.2
In order to fully understand multi-objective optimization problems (MOOP), algorithms and concepts, some definitions must be clarified.

Decision variable and objective space: The variable bounds of an optimization problem restrict each decision variable to a lower and an upper limit, which institutes a space called the decision variable space. In multi-objective optimization, the values of the objective functions create a multi-dimensional space called the objective space. Each decision variable vector in the variable space corresponds to a point in the objective space.
Ideal objective vector: For each objective i, let x^(i) = (x_1^(i), x_2^(i), ..., x_n^(i))^T be the solution that optimizes that objective alone, i.e. f_i(x^(i)) = OPT f_i(x). The vector whose components are these individual optimal objective values is the ideal objective vector of the multi-objective optimization problem, and the point in R^n which determines this vector is the ideal solution. Generally, the ideal objective vector is not an existing solution, because the optimal solutions of the individual objectives in a MOOP are not necessarily the same solution. However, if the optimal solutions of all objectives are identical, the ideal vector is feasible.
Utopian objective vector: A vector whose components are all marginally smaller (in the case of a minimization MOOP) than those of the ideal objective vector is called the utopian objective vector. In other words:

    z_i** = z_i* − ε_i,  ε_i > 0,  i = 1, 2, ..., M.
Linear and non-linear MOOP: If all objectives and constraints are linear, the problem is called a multi-objective linear optimization problem (MOLP). In contrast, if one or more of the objectives and/or constraints are non-linear, the problem is a non-linear MOOP (Deb, 2001).
Convex and Non-convex MOOP: The problem is convex if all objective
functions and feasible region are convex. Therefore a MOLP problem is
convex (Deb, 2001).
Convexity is an important issue in MOOPs: in non-convex problems, the solutions obtained from a preference-based approach will not cover the non-convex part of the trade-off curve. Moreover, many of the existing algorithms can only be used for convex problems. Convexity can be defined in both spaces (the objective and the decision variable space); a problem can have a convex objective space while the decision variable space is non-convex.
Domination (dominated, dominating and non-dominated): Most real-world applications involve conflicting objectives, and optimizing a solution with respect to one objective will not produce an optimal solution with regard to the other objective(s). For an M-objective MOOP, the operator ◁ between two solutions i and j, written i ◁ j, is read as: solution i is better than solution j on a particular objective. Likewise, i ▷ j means that solution i is worse than solution j on this objective. Thus, if the MOOP is a minimization problem, the operator ◁ denotes <, and vice versa. Now a general definition of domination, valid for both minimization and maximization MOOPs, can be made. A feasible solution x^(1) is said to dominate another feasible solution x^(2) (mathematically x^(1) ⪯ x^(2)) if and only if:

1. The solution x^(1) is no worse than x^(2) with respect to all objective values, i.e. f_j(x^(1)) is not worse than f_j(x^(2)) for all j = 1, 2, ..., M.

2. The solution x^(1) is strictly better than x^(2) in at least one objective value, i.e. f_j(x^(1)) ◁ f_j(x^(2)) for at least one j ∈ {1, 2, ..., M}.

If both conditions hold, solution x^(1) dominates solution x^(2); equivalently, x^(1) is non-dominated by x^(2), or x^(2) is dominated by x^(1).
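The two-part definition above translates directly into code. A minimal Python sketch for the minimization case (illustrative only; the thesis's own implementation is in MATLAB):

```python
def dominates(f1, f2):
    """True if objective vector f1 dominates f2 (minimization):
    no worse in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(f1, f2))
    strictly_better = any(a < b for a, b in zip(f1, f2))
    return no_worse and strictly_better
```

Note that two identical vectors do not dominate each other: condition 1 holds but condition 2 fails.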
Pareto-optimal set (non-dominated set): A solution is Pareto-optimal if it is not dominated by any other solution in the decision variable space. A Pareto-optimal solution is optimal with respect to all objectives and cannot be improved in any objective without worsening another objective. The set of all feasible solutions that are non-dominated by any other solution is called the Pareto-optimal or non-dominated set. If the non-dominated set lies within the entire feasible search space, it is called the globally Pareto-optimal set. In other words, for a given MOOP, the Pareto-optimal set P* is defined as:

    P* = {x | there exists no x' such that F(x') ⪯ F(x)}.

Pareto-front: The values of the objective functions related to each solution of a Pareto-optimal set, plotted in the objective space, form the Pareto-front. In other words, PF* = {F(x) | x ∈ P*}.
2.3.3
There are several methods and algorithms for finding the non-dominated set of solutions in a given population of an optimization problem. Deb (2001) describes three of the most common methods in his book, from a naive and slow approach to an efficient and fast one:

Approach 1: Naive and slow
Approach 2: Continuously updated
Approach 3: Kung et al.'s efficient method

Approach 3 has the least computational complexity of the three and, according to Kung and Luccio (1975), is the most efficient method. In all three methods, the concept of domination is used to compare the solutions with respect to the different objective functions.
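As an illustrative sketch of Approach 1 (naive and slow), the following Python function keeps every point that no other point dominates; with N points and M objectives it performs O(M N²) comparisons, which is what makes the approach slow:

```python
def non_dominated_set(points):
    """Approach 1 (naive): keep every objective vector not dominated by
    any other vector in the population (minimization)."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    # a point survives only if no other point dominates it
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Kung et al.'s method reaches the same set with a divide-and-conquer recursion and a lower complexity, which is why it is preferred for large populations.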
2.3.4
1. To find a set of non-dominated solutions with the least distance to the Pareto-optimal set.

2. To have maximum diversity in the non-dominated set of solutions.
Recall from section 2.1 that optimization solving methods were classified into classical and evolutionary methods; this classification is also valid for multi-objective optimization problems.
In the classical methods, the objectives are transformed into one objective function by means of different techniques. The easiest, and probably the most common, is the weighted sum method, in which the objectives are scalarized into a single objective by summing the objectives multiplied by a weight vector (Deb, 2001). Other techniques include considering all objectives except one as constraints and limiting them by user-defined values (the ε-constraint method) (Haimes et al., 1971). Deb (2001) presents some of the most important classical methods very well in one chapter of the book.
2.3.5
Evolutionary Algorithms
and introduces the test functions and their analysis. Various applications of multi-objective evolutionary algorithms (MOEA) are also discussed in the book. Deb (2001) is another comprehensive source on different MOEAs; the book divides the evolutionary algorithms into non-elitist and elitist algorithms.
2.3.6
MOEA Techniques
Researchers agree that the first MOEA is credited to David Schaffer, with his Vector Evaluated Genetic Algorithm (VEGA) in the mid-1980s, aimed at solving optimization problems in machine learning.

Deb (2001) and Coello (2007) both name various MOEAs, which show the differences in framework and operators, as follows:
Vector Evaluated GA (VEGA)
Vector Optimized Evolution Strategy (VOES)
Weight Based GA (WBGA)
Multiple Objective GA (MOGA)
Niched Pareto GA (NPGA, NPGA2)
Non-dominated Sorting GA (NSGA,NSGA-II)
Distance-Based Pareto GA (DPGA)
Thermodynamical GA (TDGA)
Strength Pareto Evolutionary Algorithm (SPEA, SPEA2)
Multi-Objective Messy GA (MOMGA-I, II, III)
Pareto Archived Evolution Strategy (PAES)
Pareto Enveloped Based Selection Algorithm (PESA, PESA II)
Micro GA-MOEA (µGA, µGA2)
Coello (2007) describes the concept of each EA along with an illustration of the algorithm and short notes on its advantages and disadvantages, and at the end he summarizes all the EAs in a table. Deb (2001), meanwhile, devotes two complete chapters of the book to fully defining the concept and principle of each EA, with a step-by-step description of the algorithm, hand calculations, a discussion of advantages and shortcomings, a calculation of the computational complexity, and a simulation of an identical test problem for all algorithms.
2.3.7
Comparison of MOEAs
Since several MOEAs exist, the question of which algorithm performs best is common among scientists and researchers. In order to settle on an answer, several test problems have been designed and a large amount of research has been carried out. In Deb's book, a few significant studies on the comparison of EAs are discussed (Deb, 2001).
Konak et al. (2006) demonstrate the advantages and disadvantages of the most well-known EAs in a table.
However, the most representative, discussed and compared evolutionary algorithms are the Non-dominated Sorting GA (NSGA-II) (Deb et al., 2002), the Strength Pareto Evolutionary Algorithm (SPEA, SPEA2) (Zitzler and Thiele, 1998; Zitzler et al., 2001), the Pareto Archived Evolution Strategy (PAES) (Knowles, 1999, 2000), and the Pareto Enveloped Based Selection Algorithm (PESA, PESA II) (Corne and Knowles, 2000; Corne et al., 2001).

Extensive comparison studies and numerical simulations on various test problems show a better overall behaviour of NSGA-II and SPEA2 compared to the other algorithms. In cases with more than two objectives, SPEA2 seems to have some advantages over NSGA-II. The Strength Pareto Evolutionary Algorithm (SPEA2) is comprehensively described in the next section; SPEA2 is also coded and implemented on a number of test functions.
2.3.8
Zitzler et al. (2001) improve the original SPEA (Zitzler and Thiele, 1998) and address some of its potential weaknesses.

SPEA2 uses an initial population and an archive (external set). At the start, a random initial population and an archive of fixed sizes are generated. The fitness value of each individual in the initial population and the archive is calculated in every iteration. Next, all non-dominated solutions of the initial and external populations are copied to the external set of the next iteration (the new archive). With the environmental selection procedure, the size of the archive is set to a predefined limit. Afterwards, the mating pool is filled with the solutions resulting from binary tournament selection on the new archive set. Finally, crossover and mutation operators are applied to the mating pool and the new initial population is generated. If any of the stopping criteria is satisfied, the non-dominated individuals in the new archive form the Pareto-optimal set.
Kim et al. (2004) add two new mechanisms to SPEA2 in order to improve the searching ability of the algorithm. The SPEA2+ algorithm, as it is named, uses a more effective crossover (neighborhood crossover) and a new archive mechanism to diversify the solutions in both the objective and the variable space.

Kukkonen (2006) introduces a pruning method which can be used to improve the performance of SPEA2. The idea of pruning is to reduce the size of a set of non-dominated solutions to a pre-defined limit while maintaining the maximum possible diversity.
2.3.9
Input parameters: N (population size), N̄ (archive size), T (maximum number of generations).
Fitness Assignment
Each individual i in the archive P̄_t and the population P_t is assigned a strength value S(i), representing the number of solutions it dominates. The raw fitness R(i) of an individual i is then the sum of the strengths of the individuals that dominate it:

    R(i) = Σ_{j ∈ P_t + P̄_t, j ⪯ i} S(j).
The density estimation technique is adopted from the k-th nearest neighbour method (Silverman, 1986), where the density at any point is a (decreasing) function of the distance to the k-th nearest data point. In SPEA2, the inverse of the distance to the k-th nearest neighbour is taken as the density measure. The density D(i) corresponding to i is defined by:

    D(i) = 1 / (σ_i^k + 2),

where σ_i^k is the distance of solution i to its k-th nearest neighbour and

    k = √(N + N̄).

Finally, the fitness of an individual i is calculated by adding D(i) to the raw fitness value R(i):

    F(i) = R(i) + D(i).
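The fitness assignment above can be sketched compactly in Python (illustrative only, not the thesis's MATLAB code; here k is computed from the size of the combined population passed in, and Euclidean distance in objective space is assumed):

```python
import math

def spea2_fitness(objs):
    """SPEA2 fitness sketch for the union of population and archive
    (minimization). objs: list of objective vectors.
    Returns F(i) = R(i) + D(i); non-dominated individuals have F < 1."""
    def dom(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    n = len(objs)
    # strength S(i): number of solutions that i dominates
    S = [sum(dom(objs[i], objs[j]) for j in range(n)) for i in range(n)]
    # raw fitness R(i): sum of strengths of i's dominators
    R = [sum(S[j] for j in range(n) if dom(objs[j], objs[i]))
         for i in range(n)]
    k = int(math.sqrt(n))               # k-th nearest neighbour
    D = []
    for i in range(n):
        dists = sorted(math.dist(objs[i], objs[j])
                       for j in range(n) if j != i)
        sigma_k = dists[min(k, len(dists)) - 1] if dists else 0.0
        D.append(1.0 / (sigma_k + 2.0))  # D(i) = 1/(sigma_i^k + 2)
    return [r + d for r, d in zip(R, D)]
```

Since D(i) < 1 by construction, any individual with F(i) < 1 is non-dominated, which is exactly the test used in the environmental selection step below.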
Environmental Selection
The first step is to copy all non-dominated individuals, i.e., those with a fitness value lower than one, from the archive and the population to the external set of the next generation. If the resulting archive exceeds its size limit, a truncation operator iteratively removes the individual i for which i ≤_d j holds for all j, where

    i ≤_d j  :⟺  ∀ 0 < k < |P̄_{t+1}| : σ_i^k = σ_j^k, or
                 ∃ 0 < k < |P̄_{t+1}| : [(∀ 0 < l < k : σ_i^l = σ_j^l) ∧ σ_i^k < σ_j^k],

and σ_i^k denotes the distance of i to its k-th nearest neighbour in P̄_{t+1}. In other words, at each stage the removed solution is the one with the least distance to its nearest neighbour; if several solutions share the same distance, the decision is made on the second smallest distance, and so forth.
2.3.10
Constraint Handling
Handling constraints within MOEAs is an essential issue which must be considered carefully, especially when dealing with real-world engineering applications, where constraints are always involved. Constraints can take the form of equalities or inequalities. Another classification of constraints is into hard and soft constraints: a hard constraint must be satisfied, while a soft one can be relaxed in order to accept a solution (Coello, 2007; Deb, 2000). Normally, only inequality constraints are handled in MOEAs; however, an equality constraint can easily be transformed into an inequality using:

    |h(x)| − ε ≤ 0,

where h(x) = 0 is the equality constraint and ε is a very small value.
Constraints divide the decision space into two separate parts: the feasible and the infeasible region. A solution in the feasible region of the search space satisfies all the constraints and is called a feasible solution; otherwise the solution is infeasible. The most popular and common way of handling constraints is the penalty function method; however, the sensitivity of the penalty method to the penalty parameter is a drawback.
In addition to the penalty method, Jiménez et al. (1999) propose a systematic constraint handling procedure. Two other methods which are more credited and elaborated are the Ray-Tai-Seow constraint handling approach (T. Ray, 2001) and the constraint handling method proposed by Deb et al. (2002), which is implemented in the NSGA-II algorithm.

In Deb's method, a binary tournament selection operator is used for any two solutions selected from the population. In the presence of constraints, three scenarios can occur: 1) both solutions are feasible; 2) one is feasible and the other is infeasible; 3) both solutions are infeasible. In the method, the following rules are applied: if both solutions are feasible, the one with the better objective value wins; if only one is feasible, the feasible solution wins; and if both are infeasible, the one with the smaller overall constraint violation wins.
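The three tournament rules of Deb's method can be sketched in a few lines of Python (illustrative only; here each solution is represented by an assumed pair of objective value and aggregate constraint violation, with violation 0 meaning feasible):

```python
def constrained_tournament(a, b):
    """Deb's constrained binary tournament (minimization).
    Each solution is a pair (objective_value, constraint_violation),
    where violation >= 0 and violation == 0 means feasible."""
    f_a, v_a = a
    f_b, v_b = b
    if v_a == 0 and v_b == 0:      # scenario 1: better objective wins
        return a if f_a <= f_b else b
    if v_a == 0 or v_b == 0:       # scenario 2: the feasible one wins
        return a if v_a == 0 else b
    return a if v_a <= v_b else b  # scenario 3: smaller violation wins
```

Because infeasible solutions are never compared by objective value, no penalty parameter is needed, which is the main advantage of this method over the penalty approach.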
2.4
In real-world engineering problems there is no prior knowledge of the true global Pareto-front. Although evolutionary algorithms have shown good convergence on benchmarks, hybrid methods have been proposed to ensure the convergence of an algorithm to the true Pareto-front. Several hybridization techniques (combining an MOEA with other methods) are discussed in the literature.
Coello (2007) comprehensively discusses the use of local search and co-evolutionary techniques as hybrid methods in a complete chapter of his book. He mentions local search approaches in the decision space, such as depth-first search (hill-climbing), simulated annealing and tabu search, for consideration in hybridization.

Deb (2001) also discusses the use of local search techniques with an MOEA. According to Goldberg, the best way to achieve convergence to the exact Pareto-front is to apply local search techniques to the solutions obtained from an EA. Deb, however, proposes two other ways to use local search techniques: 1) during the EA generations, 2) at the end of an EA run.
Here a new method of hybridization is introduced and tested on benchmarks to investigate its performance. A combination of the single- and multi-objective evolutionary algorithms discussed in the previous subsections is applied to obtain the global optimal solutions. The archive population in SPEA2, which holds the non-dominated solutions of each generation, is created using the single-objective genetic algorithm introduced in earlier sections, the G3 algorithm.

First, the objectives are transformed into a single objective function using the weighted sum method: a number of random weight vectors, equal to the size of the population, are multiplied with the objectives to scalarize the objective function. Then every scalarized function is optimized with the G3 single-objective GA. After finding the optimal solution of each weighted function, the required initial population is obtained. Finally, the multi-objective algorithm (SPEA2) is used to optimize the functions. The hybridization technique is thus applied before the EA generations, to create the required initial archive population.
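The initialization step described above can be sketched as follows (an illustrative Python outline, not the thesis's MATLAB code; `g3_minimize` is a hypothetical stand-in for the G3 single-objective optimizer, taking a scalar function and returning its optimum):

```python
import random

def hybrid_initial_archive(objectives, g3_minimize, archive_size):
    """Hybrid initialization sketch: build SPEA2's initial archive by
    minimizing randomly weighted sums of the objectives with a
    single-objective GA (abstracted here as g3_minimize)."""
    archive = []
    m = len(objectives)
    for _ in range(archive_size):
        w = [random.random() for _ in range(m)]
        s = sum(w)
        w = [wi / s for wi in w]                 # normalized random weights
        scalarized = lambda x, w=w: sum(wi * f(x)
                                        for wi, f in zip(w, objectives))
        # one single-objective optimum per weight vector
        archive.append(g3_minimize(scalarized))
    return archive
```

Each random weight vector yields one point near the Pareto-front, so the archive starts close to the optimal region before SPEA2's generations begin. Note that, as discussed for non-convex MOOPs above, a weighted sum cannot reach non-convex parts of the front, which may limit the initial spread.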
Implementation
All the algorithms in this thesis are coded in MATLAB. Several benchmarks are solved with the coded algorithms to verify their accuracy and efficiency.
3.1 Single Objective

3.1.1 Unconstrained Test Functions
Sphere Function

    f_Sphere(x) = Σ_{i=1}^{n} x_i²
Ellipsoidal Function

The behaviour of the algorithms for a poorly scaled objective function is discussed using the following objective function:

    f_Ellipsoid(x) = a x_1² + Σ_{i=2}^{n} x_i²
Schwefel's Function

    f_Schwefel(x) = Σ_{i=1}^{n} ( Σ_{j=1}^{i} x_j )²
Goldstein-Price Function

The Goldstein-Price function is given by:

    f_Goldstein(x_1, x_2) = [1 + (x_1 + x_2 + 1)² (19 − 14x_1 + 3x_1² − 14x_2 + 6x_1x_2 + 3x_2²)]
                            × [30 + (2x_1 − 3x_2)² (18 − 32x_1 + 12x_1² + 48x_2 − 36x_1x_2 + 27x_2²)]

It has a global minimum of 3 at x = (0, −1)^T. The typical search range is −2 ≤ x_i ≤ 2, i = 1, 2.
Rosenbrock Function

This function is used to discuss the behaviour of the algorithms for functions having a complex non-separable structure, such as a curved, deep valley. It is given by:

    f_Rosenbrock(x) = Σ_{i=2}^{n} ( 100 (x_{i−1}² − x_i)² + (1 − x_{i−1})² )
Colville Function

The Colville function is defined as:

    f_Colville(x_1, x_2, x_3, x_4) = 100 (x_2 − x_1²)² + (1 − x_1)² + 90 (x_4 − x_3²)² + (1 − x_3)²
                                     + 10.1 ((x_2 − 1)² + (x_4 − 1)²) + 19.8 (x_2 − 1)(x_4 − 1)

Its global minimum lies at x_i = 1, i = 1, ..., 4.
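For reference, three of the benchmarks above can be written out directly in Python (illustrative sketches of the standard formulations; the Rosenbrock indexing follows the summation reconstructed above):

```python
def sphere(x):
    """Sum of squares; global minimum 0 at the origin."""
    return sum(xi ** 2 for xi in x)

def rosenbrock(x):
    """Curved, deep valley; global minimum 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i - 1] ** 2 - x[i]) ** 2 + (1 - x[i - 1]) ** 2
               for i in range(1, len(x)))

def goldstein_price(x1, x2):
    """Global minimum 3 at (0, -1); search range -2 <= xi <= 2."""
    a = 1 + (x1 + x2 + 1) ** 2 * (19 - 14 * x1 + 3 * x1 ** 2
                                  - 14 * x2 + 6 * x1 * x2 + 3 * x2 ** 2)
    b = 30 + (2 * x1 - 3 * x2) ** 2 * (18 - 32 * x1 + 12 * x1 ** 2
                                       + 48 * x2 - 36 * x1 * x2 + 27 * x2 ** 2)
    return a * b
```

Evaluating each function at its known optimum is a quick sanity check when re-implementing the benchmarks.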
Following the results of systematic studies on the parameters of the G3 algorithm (Deb, 2004), in all of the above cases a population size of N = 100, a parent size of 3, a number of offspring of 2 and r = 2 (Steps 2 and 3) are used. For the PCX, different values of σ_ζ and σ_η are implemented.

In addition, for the Sphere, Ellipsoidal, Schwefel and Rosenbrock functions, two cases are considered for the initial population:
3.1.2
Test Function 1
This test problem is a constrained optimization problem:
where

    τ(x) = √( (τ'(x))² + (τ''(x))² + x_2 τ'(x) τ''(x) / √(0.25 [x_2² + (x_1 + x_3)²]) ),

    σ(x) = 6 P L / (x_3² x_4),

    δ(x) = 4 P L³ / (E x_3³ x_4),

    P_c(x) = (4.013 E √(x_3² x_4⁶ / 36) / L²) (1 − (x_3 / (2L)) √(E / (4G))),

    τ'(x) = P / (√2 x_1 x_2),    τ''(x) = M R / J,

    M = P (L + x_2 / 2),    R = √( x_2² / 4 + ((x_1 + x_3) / 2)² ),

and the parameter values are:

    P = 6000 lb,  L = 14 in,  E = 30 × 10⁶ psi,  G = 12 × 10⁶ psi,
    τ_MAX = 13600 psi,  σ_MAX = 30000 psi,  δ_MAX = 0.25 in.
The optimized solution reported in literature (Deb, 1991) is x = (0.2489, 6.1730, 8.1789, 0.2533)
with f = 2.43 using binary GA.
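As a numerical cross-check, the cost at the reported optimum can be evaluated directly. A short sketch, using the standard welded beam cost expression from the literature:

```python
def welded_beam_cost(x1, x2, x3, x4):
    # Standard fabrication-cost objective of the welded beam design problem
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

# Literature solution (Deb, 1991)
f = welded_beam_cost(0.2489, 6.1730, 8.1789, 0.2533)
print(round(f, 4))  # close to the reported f = 2.43
```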
Tension/Compression Spring Design
The third constrained test function is the minimization of the weight of a tension/compression spring:

Minimize f(x) = (x_3 + 2) x_2 x_1^2,

subject to

g_1(x) = 1 - \frac{x_2^3 x_3}{71785\, x_1^4} \leq 0,

g_2(x) = \frac{4x_2^2 - x_1 x_2}{12566\,(x_2 x_1^3 - x_1^4)} + \frac{1}{5108\, x_1^2} - 1 \leq 0,

g_3(x) = 1 - \frac{140.45\, x_1}{x_2^2 x_3} \leq 0,

g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \leq 0,

0.05 \leq x_1 \leq 2, \quad 0.25 \leq x_2 \leq 1.3, \quad 2 \leq x_3 \leq 15.
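A quick feasibility check of the four spring constraints can be sketched as follows; the sample point is arbitrary, chosen only to exercise the constraint expressions:

```python
def spring_constraints(x1, x2, x3):
    # Returns the four constraint values; g_i <= 0 means the constraint is satisfied
    g1 = 1.0 - x2**3 * x3 / (71785.0 * x1**4)
    g2 = ((4.0 * x2**2 - x1 * x2) / (12566.0 * (x2 * x1**3 - x1**4))
          + 1.0 / (5108.0 * x1**2) - 1.0)
    g3 = 1.0 - 140.45 * x1 / (x2**2 * x3)
    g4 = (x1 + x2) / 1.5 - 1.0
    return [g1, g2, g3, g4]

g = spring_constraints(0.05, 0.25, 2.0)  # lower-bound corner of the box
violated = [i + 1 for i, gi in enumerate(g) if gi > 0]
print(violated)  # at this corner only g1 is violated
```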
3.2
Multi-Objective
Similar to the single-objective case, two sets of test functions, one unconstrained and
the other constrained, are utilized to assess the performance of SPEA2 and the
proposed constraint handling method. The algorithm is coded in MATLAB.
3.2.1
Exercise 14
A one-variable non-convex function presented in Stromberg (2011):

Minimize f_1(x) = 1 - 2x + x^2,
Minimize f_2(x) = -x,
0 \leq x \leq 1.
The pareto-front is obtained by using two different methods: 1) the single-objective G3 algorithm (the function is transformed to a single objective by the weighted sum method), and 2) the multi-objective SPEA2 algorithm.
KUR:

Minimize f_1(x) = \sum_{i=1}^{n-1} \left( -10 e^{-0.2\sqrt{x_i^2 + x_{i+1}^2}} \right),
Minimize f_2(x) = \sum_{i=1}^{n} \left( |x_i|^{0.8} + 5 \sin x_i^3 \right),
-5 \leq x_i \leq 5, \quad i = 1, 2, 3.

Deb (2001) illustrates the pareto-front of the KUR function in figure 201. Three
distinct disconnected regions create the pareto-front of the problem. Figure
202 of the same book shows the pareto-optimal solutions in decision space.
In this thesis five of the six ZDT test problems (ZDT1, ZDT2, ZDT3, ZDT4 and ZDT6)
are implemented with the SPEA2 algorithm.
ZDT1:

f_1(x) = x_1,
f_2(x) = g(x)\left[ 1 - \sqrt{f_1/g(x)} \right],
g(x) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i.
All the variables are limited between 0 and 1. Figure 213 of Deb (2001) shows
the search space and pareto-front in objective space. This is the easiest among all
the ZDT problems, and the only difficulty is the large number of variables.
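The ZDT1 definition can be sketched directly. On the pareto-optimal front (x_2 = ... = x_n = 0) we have g(x) = 1 and hence f_2 = 1 - √f_1:

```python
import math

def zdt1(x):
    # x is a list of n variables in [0, 1]
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (n - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2

x = [0.25] + [0.0] * 29        # a 30-variable pareto-optimal point
f1, f2 = zdt1(x)
print(f1, f2)                  # f2 equals 1 - sqrt(f1) on the front
```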
ZDT2:

f_1(x) = x_1,
f_2(x) = g(x)\left[ 1 - (x_1/g(x))^2 \right],
g(x) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i.

The range for all the variables is [0, 1]. The pareto-front and the search region in
objective space are shown in figure 214 of Deb (2001). Non-convexity of the pareto-optimal set is the only difficulty of this problem.
ZDT3:

f_1(x) = x_1,
f_2(x) = g(x)\left[ 1 - \sqrt{f_1/g(x)} - (f_1/g(x)) \sin(10\pi f_1) \right],
g(x) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i.

All the variables are limited within [0, 1]. Finding all the discontinuous pareto-optimal regions with a good diversity of non-dominated solutions may be difficult
for an MOEA. Deb (2001) shows the search space and pareto-front in figure 215.
ZDT4:

f_1(x) = x_1,
f_2(x) = g(x)\left[ 1 - \sqrt{x_1/g(x)} \right],
g(x) = 1 + 10(n-1) + \sum_{i=2}^{n} \left[ x_i^2 - 10\cos(4\pi x_i) \right],

with x_1 \in [0, 1] and x_2, \ldots, x_n \in [-5, 5].
ZDT6:

f_1(x) = 1 - e^{-4x_1} \sin^6(6\pi x_1),
f_2(x) = g(x)\left[ 1 - (f_1/g(x))^2 \right],
g(x) = 1 + 9\left[ \frac{\sum_{i=2}^{n} x_i}{n-1} \right]^{1/4}.
All the variables lie in the range [0, 1]. Non-convexity of the pareto-front, coupled with
an adverse density of solutions across the front, may raise some difficulty in convergence.
Figure 218 in Deb (2001) shows the pareto-optimal region for this problem.
3.2.2
Constrained Test Functions
The presence of constraints may cause hurdles for an MOEA in converging to the true
and global pareto-front; maintaining diversity in the non-dominated solutions
may be another problem. A number of common test problems used in the literature
are presented in this section and implemented in the SPEA2 code.
BNH:

Minimize f_1(x) = 4x_1^2 + 4x_2^2,
Minimize f_2(x) = (x_1 - 5)^2 + (x_2 - 5)^2,
subject to C_1(x) = (x_1 - 5)^2 + x_2^2 \leq 25,
C_2(x) = (x_1 - 8)^2 + (x_2 + 3)^2 \geq 7.7,
0 \leq x_1 \leq 5, \quad 0 \leq x_2 \leq 3.

Deb (2001) illustrates the decision variable and objective space of the problem in
figures 219 and 220. In the BNH problem, the constraints do not add any difficulty to
the unconstrained problem.
OSY:

Minimize f_1(x) = -\left[ 25(x_1 - 2)^2 + (x_2 - 2)^2 + (x_3 - 1)^2 + (x_4 - 4)^2 + (x_5 - 1)^2 \right],
Minimize f_2(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 + x_6^2,
subject to
C_1(x) = x_1 + x_2 - 2 \geq 0,
C_2(x) = 6 - x_1 - x_2 \geq 0,
C_3(x) = 2 + x_1 - x_2 \geq 0,
C_4(x) = 2 - x_1 + 3x_2 \geq 0,
C_5(x) = 4 - (x_3 - 3)^2 - x_4 \geq 0,
C_6(x) = (x_5 - 3)^2 + x_6 - 4 \geq 0,
0 \leq x_1, x_2, x_6 \leq 10, \quad 1 \leq x_3, x_5 \leq 5, \quad 0 \leq x_4 \leq 6.

The pareto-front, as shown in figure 221 of Deb (2001), is a line connecting parts
of five different regions. Since the algorithm must maintain the solutions
at the intersections of constraint boundaries, this is a difficult problem to solve.
SRN:

Minimize f_1(x) = 2 + (x_1 - 2)^2 + (x_2 - 1)^2,
Minimize f_2(x) = 9x_1 - (x_2 - 1)^2,
subject to C_1(x) = x_1^2 + x_2^2 \leq 225,
C_2(x) = x_1 - 3x_2 + 10 \leq 0,
-20 \leq x_1, x_2 \leq 20.

Since the constraints eliminate some parts of the original pareto-front, difficulties
may arise in solving the problem. Figures 222 and 223 in Deb (2001) show the
corresponding pareto-front in the feasible decision variable and objective space.
TNK:

Minimize f_1(x) = x_1,
Minimize f_2(x) = x_2,
subject to
C_1(x) = x_1^2 + x_2^2 - 1 - 0.1\cos\left( 16 \arctan \frac{x_1}{x_2} \right) \geq 0,
C_2(x) = (x_1 - 0.5)^2 + (x_2 - 0.5)^2 \leq 0.5,
0 \leq x_1, x_2 \leq \pi.
3.3
Hybrid Approach
In order to test the performance of the proposed hybrid method, the archive population generated by the hybrid technique (the weighted sum single-objective G3
algorithm) is compared with a random archive population for different test problems. Test functions with non-convex pareto-fronts are not
suitable for the technique, since the weighted sum method is used to transform
the objectives into one objective. Therefore, the random and hybrid archive populations are plotted in objective space for the ZDT1 and ZDT3 test functions.
Despite the non-convex property of the ZDT6 problem, the comparison of archive populations has also been applied to this problem in order to study the performance of the hybrid
method on non-convex benchmarks.
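The seeding idea behind the hybrid method can be sketched on a convex toy problem. The objectives f1 = x² and f2 = (x − 2)² below are hypothetical stand-ins, chosen only because the weighted-sum minimizer is analytic; each weight yields one archive point on the front:

```python
def weighted_sum_archive(weights):
    # For f1 = x^2 and f2 = (x - 2)^2, the minimizer of w*f1 + (1 - w)*f2
    # is x* = 2*(1 - w), found by setting the derivative to zero.
    archive = []
    for w in weights:
        x = 2.0 * (1.0 - w)
        archive.append((x ** 2, (x - 2.0) ** 2))
    return archive

weights = [i / 20.0 for i in range(21)]   # 0.0, 0.05, ..., 1.0
archive = weighted_sum_archive(weights)
print(archive[10])  # w = 0.5 gives the balanced point (1.0, 1.0)
```

On a convex front such a pre-computed archive spreads over the whole pareto-optimal region; on a non-convex front the weighted sum collapses onto the extremes, which is exactly the weakness discussed above.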
Test Results
4.1
Single Objective
4.1.1
Unconstrained Functions
To examine the behaviour of the algorithm and the code, evaluation in two- and multi-dimensional search spaces is carried out for some of the test functions as below:
Spherical: n = 2, 4
Ellipsoidal: n = 2, 4
Schwefel: n = 2, 10, 15
Rosenbrock: n = 2, 5
The Goldstein function is by default a two-dimensional function, and Colville is a
four-variable function.
Table 1: Results of unconstrained test functions, single objective.

Function      Initial      Number of   Number of     Variance from      σ_ζ = σ_η
              Population   Variables   Evaluations   Global Optimum
Sphere        Normal       2           98            7.32 × 10^-6       0.1
Sphere        Normal       2           67            5.47 × 10^-6       0.4
Sphere        Normal       4           279           3.54 × 10^-6       0.4
Sphere        Offset       2           261           3.65 × 10^-6       0.1
Ellipsoidal   Normal       2           99            3.75 × 10^-6       0.1
Ellipsoidal   Offset       2           405           3.65 × 10^-6       0.1
Ellipsoidal   Offset       4           553           8.10 × 10^-6       0.4
Schwefel      Normal       2           553           8.10 × 10^-6       0.1
Schwefel      Offset       10          553           8.10 × 10^-6       0.3
Schwefel      Offset       15          6434          7.76 × 10^-6       0.3
Rosenbrock    Normal       2           146           3.55 × 10^-6       0.1
Rosenbrock    Offset       5           2200          3.9308 × 10^-6     0.5
Rosenbrock    Offset       5           5095          3.55 × 10^-6       0.9
Goldstein     –            2           211           9.46 × 10^-6       0.1
Colville      –            4           no result     no result          0.1
Colville      –            4           1055          9.46 × 10^-6       0.3
Colville      –            4           1468          8.09 × 10^-6       0.6
The experiment for each function runs until the best objective function value of the population reaches a minimum difference of 10^-5 from the optimal solution. The number
of generations (evaluations) and the best fitness are shown in table 1.
The results show acceptable behaviour of the algorithm for the two-dimensional search
space with σ_ζ = 0.1 and σ_η = 0.1, but when the number of variables increases
or the functions are more complex, such as Rosenbrock, the algorithm converges to
local optima or the global optimum is obtained only after a high number of generations.
Therefore, by increasing the variance of the zero-mean normally distributed variables
in the PCX operator, better results are obtained; as can be seen in the table of results,
the Schwefel function with 15 variables has reached the required variance from the
global optimum. The convergence of the best individual obtained from some of the test
functions during the generations can be seen in figures 5 and 6.
4.1.2
Constrained Functions
The penalty method used for the constrained test functions shows good behaviour.
All three functions reached the optimal solutions reported in the literature; in addition, the optimal solutions found by the algorithm in this thesis with the related
constraint handling method are better than those of some other approaches used in the literature. In the first test function the optimal solution of 13.590842 is found at
x = (2.246818, 2.381735), which is better than the optimum found in the literature
(Deb, 2000) with the value f = 13.59085. Furthermore, figures 7, 8 and 9 illustrate the convergence of the results for the three constrained test functions.
Tables 2 and 3 compare the results of different methods for the welded beam design problem and the minimization of the weight of a tension/compression spring,
and show that the approach used here outperforms several of the other methods.
Figure 8: Convergence of the objective function for the welded beam problem, with an obtained optimum of f = 1.834756
Table 2: Comparison of the results of different methods for the welded beam design
problem.

Method                                                        f(x)
This Thesis                                                   1.83475678
Coello (self-adaptive penalty approach)                       1.74830941
Arora (constraint correction at constant cost)                2.43311600
He and Wang (CPSO)                                            1.728024
Ragsdell and Phillips (geometric programming)                 2.385937
Deb (GA)                                                      2.433116
Coello and Montes (feasibility-based tournament selection)    1.728226
Eberhart (modified PSO)                                       1.72485512
4.2
Multi-Objective
The SPEA2 algorithm coded in MATLAB is used to solve the multi-objective test functions. Simulated Binary Crossover (SBX) and polynomial mutation (Deb, 2001)
are implemented in step 6 (Variation) of the SPEA2 algorithm as the recombination
and mutation operators.
4.2.1
Unconstrained Functions
In the Exercise 14 test function, a weight vector with 20 weight factors within the range
[0, 1] with a step length of 0.05 is used to create 20 single-objective functions, and
each function is optimized separately to obtain the pareto-front shown in figure
10. In the multi-objective method, the initial and archive sets with a population of 30
individuals after 100 generations result in the pareto-front of figure 11.
Since the function is non-convex, the pareto-front obtained from the single-objective
method does not cover the non-convex parts of the pareto-optimal set. On the other
hand, the pareto-front of the multi-objective method clearly illustrates all parts of the
pareto-optimal region. Thus, an important drawback of the single-objective weighted
sum method for solving multi-objective optimization problems is its weakness on
non-convex problems.
Table 4 shows the defined parameters of SPEA2 algorithm, such as size of initial
and archive population and number of generations, for the other test functions
from Kursawe to ZDT6.
Table 4: Pre-defined parameters of the SPEA2 algorithm for unconstrained multi-objective test functions.

Test Function   Initial Population   Archive Population   Number of
                Size (N)             Size (N̄)             Generations
Kursawe         50                   50                   100
ZDT1            50                   50                   400
ZDT2            50                   50                   400
ZDT3            50                   50                   250
ZDT4            100                  100                  250
ZDT6            100                  100                  250
Figures 12, 13, 14, 15, 16 and 17 show the non-dominated solutions obtained
from the SPEA2 algorithm for the problems Kursawe, ZDT1, ZDT2, ZDT3, ZDT4 and
ZDT6. Comparing the figures to the true pareto-fronts illustrated in the literature
(Deb, 2001) confirms the excellent performance of the SPEA2 algorithm in finding
the pareto-fronts of problems with up to 30 variables.
However, illustrating the non-dominated solutions of a multi-objective problem with more than two objectives can be a difficult task. Even the 3D plot for
three-objective problems, in which each axis represents one objective, is confusing
and unhelpful.
There are a number of methods in the literature for presenting problems with more than two objectives. The scatter-plot matrix is one of them: Meisel (1973) and
Cleveland (1994) suggest plotting all \binom{M}{2} pairs of plots among the M objective
functions. Therefore, the non-dominated solutions of a problem with three objectives are illustrated with 6 plots in a 3 × 3 matrix. Each diagonal plot is
used to mark the axis for the matching off-diagonal plots. In this method the
non-dominated solutions in each pair of objective spaces are shown twice, with the
difference in the axis marked for each objective.
The scatter-plot matrix can also be used for the comparison of two different algorithms
on an identical problem. The upper diagonal plots show the non-dominated
solutions of one algorithm, and the lower diagonal plots illustrate the
corresponding solutions of the other algorithm.
Furthermore, in engineering applications the relation of the variables to the objective
functions, and the non-dominated solutions in variable space, is an imperative issue.
Investigating the variations of each variable of the non-dominated solutions, and the
effect of these variations on the objective functions and the other variables, can be very helpful
for better understanding the optimized problem. Here, the scatter-plot matrix is
used to show these variations and their effects. For this purpose Kursawe, ZDT1,
ZDT2, ZDT3, ZDT4 and ZDT6 are optimized with three variables by using the SPEA2
algorithm. In figures 18, 19, 20, 21, 22 and 23, each variable and the two objective
functions are marked in the diagonal plots of a 5 × 5 matrix for the mentioned problems.
The off-diagonal plots clearly illustrate the non-dominated solutions in objective
and variable space.
Figure 18: Scatter-plot matrix of Kursawe test function
Figure 19: Scatter-plot matrix of ZDT1 test function
Figure 20: Scatter-plot matrix of ZDT2 test function
Figure 21: Scatter-plot matrix of ZDT3 test function
Figure 22: Scatter-plot matrix of ZDT4 test function
Figure 23: Scatter-plot matrix of ZDT6 test function
4.2.2
Constrained Functions
The constrained test functions are optimized by SPEA2 algorithm with the predefined parameters shown in table 5. The non-dominated solutions obtained for
BNH, OSY, SRN and TNK problems are illustrated respectively in figures 24, 25,
26 and 27.
Table 5: Pre-defined parameters of the SPEA2 algorithm for constrained multi-objective test functions.

Test Function   Initial Population   Archive Population   Number of
                Size (N)             Size (N̄)             Generations
BNH             30                   30                   100
OSY             30                   30                   600
SRN             30                   30                   100
TNK             30                   30                   100
Comparing the figures with the true pareto-fronts reported in the literature proves
the good performance of the algorithm in converging to the optimal results with a good
diversity of solutions.
4.3
Hybrid Approach
Figures 28 and 29 illustrate the comparison of the two archive populations for the ZDT1
and ZDT3 test functions respectively.
Assessing the plots against the true pareto-optimal fronts in the literature
shows that the hybrid approach generates a population near the actual pareto-front.
However, figure 30, which plots the two populations of the ZDT6 problem, confirms
that the convexity of the objective function has an important influence on the diversity
and closeness of the population to the pareto-front.
Furthermore, the obtained hybrid populations are the outcome of a single-objective
GA with a relatively high number of generations. Consequently, the proposed hybrid
approach does not show any improvement in the overall computation time for the test functions. However, the number of generations needed to reach the vicinity of the actual pareto-front,
and accordingly the computation time in the multi-objective part of the algorithm,
decreases.
Figure 28: Hybrid (left) and random (right) initial archive population for ZDT1.
Figure 29: Hybrid (left) and random (right) initial archive population for ZDT3.
A more precise and reliable judgement can be made only after conducting
extensive research on the convergence of optimization problems and introducing a
proper metric to compare the two approaches in a more scientific way.
Also, the parameter settings in the hybrid method have an important effect on
computation time. There are a large number of parameters, including the size of the
population in each algorithm, the number of generations for the single-objective algorithm
and the sizes of the different sets used in the algorithm, all of which have a great impact
on computation time. Nevertheless, creating a predefined archive population may
enhance the convergence of an EA to the true pareto-front.
Figure 30: Hybrid (left) and random (right) initial archive population for ZDT6.
Conclusion
After implementing the proposed algorithm on the single-objective optimization test
functions, it was concluded that the approach shows good performance in converging to the true optimal solution. However, parameter setting is crucial in problems with a
higher number of variables. The penalty method used for constraint
handling managed to find the optimal solution for all three test functions. Also,
better behaviour was observed in comparison to some of the other techniques.
The SPEA2 algorithm, for multi-objective optimization problems, was applied to
several benchmarks, and the obtained pareto-fronts closely match the
fronts reported in the literature. Furthermore, the solutions appeared to be evenly
spaced along the pareto-optimal region, and the non-dominated solutions were uniformly distributed over all parts of the pareto-front. The
constraint handling approach performed well on all test functions, and the pareto-fronts were directly comparable to the true fronts illustrated in the references. It is
recommended to extend the research to real-world engineering applications and
problems with more than two objectives, with the aim of assessing the performance
of the algorithm in different situations.
The scatter-plot matrix method for illustrating the non-dominated solutions can
be very supportive in studying real-world engineering problems, where understanding the relations between the variables and the objectives is crucial.
The suggested hybrid approach did not show any advantage in overall computation time, and in some problems this can be considered a weakness. Comprehensive
studies on the convergence of optimization problems, comparison metrics and
different ways of combining single- and multi-objective methods are needed in order
to draw conclusions in a more precise and scientific manner. It is believed that the
hybrid method may improve the ability of the algorithm to find the globally optimal
solutions.
Bibliography
Deb, K., Joshi, D., and Anand, A. (2001). Real-coded evolutionary algorithms with
parent-centric recombination. Technical Report 2001003, Kanpur Genetic Algorithms
Laboratory (KanGAL), Indian Institute of Technology, Kanpur.
Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T. (2002). A fast and elitist
multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary
Computation, 6(2):182–197.
Fonseca, C. and Fleming, P. (1993). Genetic algorithms for multiobjective optimization: Formulation, discussion and generalization. In Proceedings of the Fifth
International Conference on Genetic Algorithms.
Gamot, R. and Mesa, A. (2008). Particle swarm optimization–tabu search approach to constrained engineering optimization problems. WSEAS Transactions on Mathematics, 7(11):666–675.
Gen, M. and Cheng, R. (1997). Genetic algorithms and engineering design. New
York: Wiley.
Abraham, A., Jain, L., and Goldberg, R., editors (2005). Evolutionary Multiobjective Optimization: Theoretical Advances and Applications. Springer, London, 1st
edition.
Haimes, Y. Y., Lasdon, L. S., and Wismer, D. A. (1971). On a bicriterion formulation of
the problems of integrated system identification and system optimization. IEEE
Transactions on Systems, Man and Cybernetics, 1(3):296–297.
Haupt, R. L. and Haupt, S. E. (2004). Practical Genetic Algorithms. John Wiley & Sons, Hoboken, New Jersey.
Jiménez, F., Verdegay, J. L., and Gómez-Skarmeta, A. F. (1999). Evolutionary
techniques for constrained multiobjective optimization problems.
Kim, M., Hiroyasu, T., Miki, M., and Watanabe, S. (2004). SPEA2+: Improving
the performance of the Strength Pareto Evolutionary Algorithm 2. Lecture Notes in Computer Science, 3242:742–751.
Kita, H. (2001). A comparison study of self-adaptation in evolution strategies and
real-coded genetic algorithms. Evolutionary Computation, 9(2):223–241.
Knowles, J. (1999). The Pareto archived evolution strategy: A new baseline algorithm for Pareto multiobjective optimisation. In Proceedings of the 1999 Congress on
Evolutionary Computation (CEC 1999).
Appendices
A
A simple real-world single-objective constrained optimization problem is presented and solved with the proposed G3 algorithm and the related constraint handling
method for a one-generation run.
A car spare part manufacturing company manufactures disk brakes and brake
pads. A disk brake takes 8 hours to manufacture and 2 hours to finish and pack.
A brake pad takes 2 hours to manufacture and 1 hour to finish and pack. The
maximum number of labour-hours per day is 400 for the manufacturing process
and 120 for the finishing and packing process. If the profit on a disk brake is
90 Euro and the profit on a brake pad is 25 Euro, how many disk brakes and brake
pads should be made each day to maximize the profit (assuming that all of the
disk brakes and brake pads can be sold)?
Therefore the objective is to maximize the profit by maximizing

Profit = 90x_1 + 25x_2,

where x_1 is the number of disk brakes and x_2 is the number of brake pads. The objective can be transformed into a minimization problem by multiplying the objective
function by -1. The constraints are the labour hours for each product:

8x_1 + 2x_2 \leq 400,
2x_1 + x_2 \leq 120.

The complete problem is then:

Spare part company: Minimize f(x) = -(90x_1 + 25x_2),
subject to 8x_1 + 2x_2 - 400 \leq 0,
2x_1 + x_2 - 120 \leq 0,
x_1, x_2 \geq 0.
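Since the objective and both constraints are linear, the answer can be cross-checked by brute force over integer production plans (a sketch; the optimum of a linear program also lies at a vertex of the feasible region):

```python
best = None
for x1 in range(0, 51):            # 8*x1 <= 400 limits x1 to at most 50
    for x2 in range(0, 121):       # 2*x1 + x2 <= 120 limits x2 to at most 120
        if 8 * x1 + 2 * x2 <= 400 and 2 * x1 + x2 <= 120:
            profit = 90 * x1 + 25 * x2
            if best is None or profit > best[0]:
                best = (profit, x1, x2)
print(best)  # (4600, 40, 40): 40 disk brakes and 40 brake pads per day
```

The maximum sits at the intersection of the two labour-hour constraints, x1 = x2 = 40, giving a daily profit of 4600 Euro.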
The G3 algorithm has four steps (plans): 1) Selection plan, 2) Generation Plan,
Solution   x_1     x_2     Fitness
1          64.56   20.46   10^9
2          77.39   28.45   10^9
3          26.70   63.96   -4002.47
4          19.05   11.10   -1992.25
5          97.83   8.99    10^9
6          80.16   74.53   10^9
7          33.28   34.16   -3849.32
8          33.97   62.01   5 × 10^8
9          21.43   48.04   -3129.69
10         68.42   75.20   10^9
Step 1: The best solution in set B and µ - 1 other random solutions create set
P. First, the best solution, with regard to its fitness, is chosen. Thus, the
fitness (the value of the objective function) of each individual in set B has to be
calculated. Since a constraint is involved, the fitness is assigned according
to the proposed constraint handling method:
Step C1: The feasibility or infeasibility of each solution is inspected. Feasible solutions are {3, 4, 7, 9} and infeasible solutions are {1, 2, 5, 6, 8, 10}.
Step C2: Number of non-violated constraints of each infeasible individual
is counted.
Step C3: The fitness of a feasible solution is the value of the objective function. For
example, solution 3 is feasible, therefore:
F(3) = -4002.47
Step C4: The fitness of infeasible solutions is calculated by

F(x) = K - \sum_{i=1}^{s} \frac{K}{m},

where K = 10^9 is a large penalty constant, m is the number of constraints and s is the number of non-violated constraints of the solution.
In the generation plan, the mean of the perpendicular distances of the other parents from the line of the best parent is \bar{D} = 7.62, and an offspring is created by the PCX operator:

\vec{y} = \vec{x}^{(p)} + w_\zeta \vec{d}^{(p)} + \sum_{i=1, i \neq p}^{\mu} w_\eta \bar{D} \vec{e}^{(i)}
By assuming that the second random parent is solution 5, Parent2 = [97.83, 8.99], the resulting offspring is computed accordingly.
The value of the variance is selected according to the desired distance of the offspring from the parent:
higher values of the variances increase the distance of the offspring from the parent, whereas
a small variance creates an offspring close to the parent.
The fitness of the two newly created offspring is calculated according to the same procedure
described in steps C1 to C3, considering the feasibility or infeasibility of the offspring.
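Steps C1 to C4 can be sketched in Python for the ten random solutions above. The constant K = 10^9 and the per-constraint reduction K/m are taken as inferred from the fitness values in the table:

```python
K = 1e9  # large penalty constant

def fitness(x1, x2):
    # Profit is maximized, so the objective is minimized as -profit
    g = [8 * x1 + 2 * x2 - 400, 2 * x1 + x2 - 120]   # g_i <= 0 is satisfied
    m = len(g)
    satisfied = sum(1 for gi in g if gi <= 0)         # Step C2
    if satisfied == m:                                # Steps C1, C3: feasible
        return -(90 * x1 + 25 * x2)
    return K - satisfied * K / m                      # Step C4: infeasible

print(fitness(26.70, 63.96))   # feasible solution 3: about -4002
print(fitness(64.56, 20.46))   # both constraints violated: 1e9
print(fitness(33.97, 62.01))   # one constraint satisfied: 5e8
```

The three sample calls reproduce the fitness values assigned to solutions 3, 1 and 8 in the table.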
B

A simple minimization-type optimization example problem is defined. Simulation of the steps in SPEA2 and a hand calculation of one generation are
described in this appendix.
A two-objective, two-variable minimization problem introduced by Deb
(2001) is chosen to illustrate the function of SPEA2.
Min-Ex:

Minimize f_1(x) = x_1,
Minimize f_2(x) = \frac{x_2 + 1}{x_1},
subject to 0.1 \leq x_1 \leq 1,
0 \leq x_2 \leq 5.

Despite its simple look, this problem has two conflicting objectives which
create a convex pareto-front, as shown in figure 31. The search space is also
illustrated in the figure.
Table 7: Current and external initial random populations of SPEA2 with their
objective function values.

Initial population P_t:
Solution   x_1    x_2    f_1    f_2
1          0.31   0.89   0.31   6.10
2          0.43   1.92   0.43   6.79
3          0.22   0.56   0.22   7.09
4          0.59   3.63   0.59   7.85
5          0.66   1.41   0.66   3.65
6          0.83   2.51   0.83   4.23

External population P̄_t:
Solution   x_1    x_2    f_1    f_2
a          0.27   0.87   0.27   6.93
b          0.79   2.14   0.79   3.97
c          0.58   1.62   0.58   4.52
For each solution, the distance to every other solution in the normalized objective space is computed:

d_{ij} = \sqrt{ \sum_{k=1}^{|M|} \left( \frac{f_k^{(i)} - f_k^{(j)}}{f_k^{max} - f_k^{min}} \right)^2 }

For solution 1, for example, the density becomes

D(1) = \frac{1}{\sigma_1^k + 2} = \frac{1}{0.0103 + 2} = 0.4974
Here the minimum and maximum of each objective are obtained by first calculating a sample
of random solutions and then using the corresponding upper and lower objective values as initial
bounds. If in any generation the limits are exceeded and lower or higher bounds are found, the
sample minimum and maximum values are replaced with the new values.
Initial population P_t:
Solution   S(i)   Dominated Solutions   R(i)   F(i)
1          2      2, 4                  0      0.4974
2          1      4                     2      2.4928
3          1      4                     0      0.4974
4          0      –                     6      6.497
5          2      6, b                  0      0.4972
6          0      –                     3      3.4912

External population P̄_t:
Solution   S(i)   Dominated Solutions   R(i)   F(i)
a          1      4                     0      0.4992
b          1      6                     2      2.4948
c          1      4                     0      0.4980
d_{13} = 0.0103, \quad d_{15} = 0.1530, \quad d_{1a} = 0.0022, \quad d_{1c} = 0.0907.

Step E2: After sorting the distances in increasing order, the k-th nearest solution (here we set k = 2) to solution 1 is determined,
and the same procedure is done for all other solutions.
Comparing the distance to the 2nd (k-th) nearest solution over all solutions in
P̄_1, we can conclude that solutions a and c should be eliminated:
they have the lowest distances to their 2nd nearest solutions, and are therefore removed from the archive set to improve the diversity
of the non-dominated solutions.
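The strength and raw-fitness values above can be reproduced with a short script. A minimal sketch of SPEA2's S(i) and R(i) assignment, using the objective values from table 7:

```python
# (f1, f2) values of the combined population; both objectives are minimized
pop = {
    '1': (0.31, 6.10), '2': (0.43, 6.79), '3': (0.22, 7.09),
    '4': (0.59, 7.85), '5': (0.66, 3.65), '6': (0.83, 4.23),
    'a': (0.27, 6.93), 'b': (0.79, 3.97), 'c': (0.58, 4.52),
}

def dominates(p, q):
    # p dominates q: no worse in both objectives and not identical
    return all(pi <= qi for pi, qi in zip(p, q)) and p != q

# S(i): number of solutions that i dominates
S = {i: sum(dominates(pop[i], pop[j]) for j in pop if j != i) for i in pop}
# R(i): sum of strengths of the solutions that dominate i
R = {i: sum(S[j] for j in pop if dominates(pop[j], pop[i])) for i in pop}

print([R[i] for i in '123456'])  # [0, 2, 0, 6, 0, 3], matching the table
print([R[i] for i in 'abc'])     # [0, 2, 0]
```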
Step 4: Since the stopping criterion, which is the number of generations, is
not met, we continue to step 5.

Step 5: In this step the three individuals in P̄_1 participate in a binary
tournament selection with replacement. Each solution participates
in two tournaments, and the better solution, in terms of lower fitness,
wins the tournament and is placed in the mating pool. Since solution
5 has the best fitness among the three, it wins both tournaments,
thereby creating two copies of itself in the mating pool. Therefore the
mating pool is filled with solutions {5, 5, 3}.
Step 6: The variation step creates the child population from the parent population of the mating pool. Here we use a cross-over probability of 0.9
and a mutation probability of 0.1.

Step V1: Two parents are randomly selected; we assume that solution
5 is the first parent and solution 3 is the second one.

Step V2: Two children are created from the parents by using the SBX
cross-over and a pre-defined value of \eta_c = 2 as follows:

Step SBX1: A random number u_i \in [0, 1] is chosen. For example,
u_1 = 0.7577.

Step SBX2: \beta_{qi} is calculated by:

\beta_{qi} = \begin{cases} (2u_i)^{\frac{1}{\eta_c + 1}}, & \text{if } u_i \leq 0.5; \\ \left( \frac{1}{2(1 - u_i)} \right)^{\frac{1}{\eta_c + 1}}, & \text{otherwise}. \end{cases}

The two children are then

x_i^{(1,t+1)} = 0.5\left[ (1 + \beta_{qi}) x_i^{(1,t)} + (1 - \beta_{qi}) x_i^{(2,t)} \right],
x_i^{(2,t+1)} = 0.5\left[ (1 - \beta_{qi}) x_i^{(1,t)} + (1 + \beta_{qi}) x_i^{(2,t)} \right].

Here, x_1^{(1,t)} = 0.66 and x_1^{(2,t)} = 0.22, which gives x_1^{(1,t+1)} = 0.7201 and x_1^{(2,t+1)} = 0.1599.

The same procedure with a random u_2 = 0.72 is done for the second
variable, and the resulting children are

x_2^{(1,t+1)} = 1.5157, \quad x_2^{(2,t+1)} = 0.4543.
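The SBX calculation for the first variable can be reproduced directly (a sketch with η_c = 2, following the formulas in step SBX2):

```python
def sbx_pair(p1, p2, u, eta_c=2.0):
    # Simulated binary crossover for a single variable
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

c1, c2 = sbx_pair(0.66, 0.22, u=0.7577)
print(round(c1, 4), round(c2, 4))  # 0.7201 and 0.1599, as in the hand calculation
```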
The polynomial mutation operator then perturbs a child with a random number r_i \in [0, 1] and the pre-defined parameter \eta_m:

\delta_i = \begin{cases} (2r_i)^{\frac{1}{\eta_m + 1}} - 1, & \text{if } r_i < 0.5, \\ 1 - \left[ 2(1 - r_i) \right]^{\frac{1}{\eta_m + 1}}, & \text{if } r_i \geq 0.5, \end{cases}

and the mutated child is

y_i^{(1,t+1)} = x_i^{(1,t+1)} + \delta_i.
B.1
The same simple minimization optimization problem used for the hand calculation of the
SPEA2 algorithm is used to simulate the working principle of the suggested constraint
handling method. However, two constraints are added to the problem:
Min-Ex:

Minimize f_1(x) = x_1,
Minimize f_2(x) = \frac{x_2 + 1}{x_1},
subject to g_1(x) = x_2 + 9x_1 \geq 6,
g_2(x) = -x_2 + 9x_1 \geq 1,
0.1 \leq x_1 \leq 1,
0 \leq x_2 \leq 5.
The constraints divide the decision and objective search spaces into two regions. A part
of the unconstrained pareto-optimal front is no longer feasible and a new pareto-front
emerges. The new constrained pareto-front is convex. The same two initial
sets, six solutions for the first set and three solutions for the external set (table 7),
are chosen. Figure 32 illustrates the constrained pareto-front and the solutions in
objective space.

Figure 32: Constrained Min-Ex pareto-front, feasible region and initial solutions

The only difference between the constrained SPEA2 algorithm and the normal algorithm
is the definition of the domination concept. Therefore, a step-by-step simulation of the
constraint domination concept is described by using the initially chosen population.
Initial population P_t:
Solution   Feasibility   Constraint violation   Dominated Solutions
1          Infeasible    0.3867                 3, a
2          Infeasible    0.0356                 1, 3, a
3          Infeasible    0.5767                 –
4          Feasible      0                      1, 2, 3, a
5          Feasible      0                      1, 2, 3, 6, a, b
6          Feasible      0                      1, 2, 3, a

External population P̄_t:
Solution   Feasibility   Constraint violation   Dominated Solutions
a          Infeasible    0.4500                 3
b          Feasible      0                      1, 2, 3, 6, a
c          Feasible      0                      1, 2, 3, 4, a
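The constraint-domination relation used above can be sketched and checked against the table. Only g_1 is normalized here, since g_2 is satisfied by all nine solutions:

```python
pop = {
    '1': (0.31, 0.89), '2': (0.43, 1.92), '3': (0.22, 0.56),
    '4': (0.59, 3.63), '5': (0.66, 1.41), '6': (0.83, 2.51),
    'a': (0.27, 0.87), 'b': (0.79, 2.14), 'c': (0.58, 1.62),
}

def objectives(x1, x2):
    return (x1, (x2 + 1.0) / x1)

def violation(x1, x2):
    # Normalized violation of g1: x2 + 9*x1 >= 6 (g2 holds for all solutions here)
    return max(0.0, (6.0 - (x2 + 9.0 * x1)) / 6.0)

def constraint_dominates(i, j):
    vi, vj = violation(*pop[i]), violation(*pop[j])
    if vi == 0.0 and vj > 0.0:
        return True                    # a feasible solution beats an infeasible one
    if vi > 0.0 and vj > 0.0:
        return vi < vj                 # the smaller violation wins
    if vi > 0.0:
        return False
    fi, fj = objectives(*pop[i]), objectives(*pop[j])
    return all(a <= b for a, b in zip(fi, fj)) and fi != fj  # usual domination

dominated_by_5 = sorted(j for j in pop if j != '5' and constraint_dominates('5', j))
print(dominated_by_5)  # ['1', '2', '3', '6', 'a', 'b'], matching the table
```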