
ARTICLE IN PRESS

Reliability Engineering and System Safety 91 (2006) 992–1007


www.elsevier.com/locate/ress

Multi-objective optimization using genetic algorithms: A tutorial


Abdullah Konak (a,*), David W. Coit (b), Alice E. Smith (c)

(a) Information Sciences and Technology, Penn State Berks, USA
(b) Department of Industrial and Systems Engineering, Rutgers University
(c) Department of Industrial and Systems Engineering, Auburn University
Available online 9 January 2006

Abstract

Multi-objective formulations are realistic models for many complex engineering optimization problems. In many real-life problems,
objectives under consideration conflict with each other, and optimizing a particular solution with respect to a single objective can result
in unacceptable results with respect to the other objectives. A reasonable solution to a multi-objective problem is to investigate a
set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. In this
paper, an overview and tutorial is presented describing genetic algorithms (GA) developed specifically for problems with multiple
objectives. They differ primarily from traditional GA by using specialized fitness functions and introducing methods to promote solution
diversity.
© 2005 Elsevier Ltd. All rights reserved.

1. Introduction

The objective of this paper is to present an overview and tutorial of multiple-objective optimization methods using genetic algorithms (GA). For multiple-objective problems, the objectives are generally conflicting, preventing simultaneous optimization of each objective. Many, or even most, real engineering problems actually do have multiple objectives, i.e., minimize cost, maximize performance, maximize reliability, etc. These are difficult but realistic problems. GA are a popular meta-heuristic that is particularly well suited for this class of problems. Traditional GA are customized to accommodate multi-objective problems by using specialized fitness functions and by introducing methods to promote solution diversity.

There are two general approaches to multiple-objective optimization. One is to combine the individual objective functions into a single composite function, or to move all but one objective to the constraint set. In the former case, a single objective can be determined with methods such as utility theory or the weighted sum method, but the problem lies in the proper selection of the weights or utility functions to characterize the decision-maker's preferences. In practice, it can be very difficult to precisely and accurately select these weights, even for someone familiar with the problem domain. Compounding this drawback, scaling amongst objectives is needed, and small perturbations in the weights can sometimes lead to quite different solutions. In the latter case, the problem is that a constraining value must be established for each objective moved to the constraint set, which can be rather arbitrary. In both cases, an optimization method would return a single solution rather than a set of solutions that can be examined for trade-offs. For this reason, decision-makers often prefer a set of good solutions considering the multiple objectives.

The second general approach is to determine an entire Pareto optimal solution set or a representative subset. A Pareto optimal set is a set of solutions that are nondominated with respect to each other. While moving from one Pareto solution to another, there is always a certain amount of sacrifice in one objective(s) to achieve a certain amount of gain in the other(s). Pareto optimal solution sets are often preferred to single solutions because they can be practical when considering real-life problems,

* Corresponding author. E-mail address: [email protected] (A. Konak).
0951-8320/$ - see front matter © 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.ress.2005.11.018

since the final solution of the decision-maker is always a trade-off. Pareto optimal sets can be of varied sizes, but the size of the Pareto set usually increases with the number of objectives.

2. Multi-objective optimization formulation

Consider a decision-maker who wishes to optimize K objectives such that the objectives are non-commensurable and the decision-maker has no clear preference of the objectives relative to each other. Without loss of generality, all objectives are of the minimization type; a maximization-type objective can be converted to the minimization type by multiplying it by negative one. A minimization multi-objective decision problem with K objectives is defined as follows: given an n-dimensional decision variable vector x = {x1, …, xn} in the solution space X, find a vector x* that minimizes a given set of K objective functions z(x*) = {z1(x*), …, zK(x*)}. The solution space X is generally restricted by a series of constraints, such as gj(x*) = bj for j = 1, …, m, and by bounds on the decision variables.

In many real-life problems, objectives under consideration conflict with each other. Hence, optimizing x with respect to a single objective often results in unacceptable results with respect to the other objectives. Therefore, a perfect multi-objective solution that simultaneously optimizes each objective function is almost impossible. A reasonable solution to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution.

If all objective functions are to be minimized, a feasible solution x is said to dominate another feasible solution y (x ≻ y) if and only if zi(x) <= zi(y) for i = 1, …, K and zj(x) < zj(y) for at least one objective function j. A solution is said to be Pareto optimal if it is not dominated by any other solution in the solution space. A Pareto optimal solution cannot be improved with respect to any objective without worsening at least one other objective. The set of all feasible non-dominated solutions in X is referred to as the Pareto optimal set, and for a given Pareto optimal set, the corresponding objective function values in the objective space are called the Pareto front. For many problems, the number of Pareto optimal solutions is enormous (perhaps infinite).

The ultimate goal of a multi-objective optimization algorithm is to identify solutions in the Pareto optimal set. However, identifying the entire Pareto optimal set is, for many multi-objective problems, practically impossible due to its size. In addition, for many problems, especially combinatorial optimization problems, proof of solution optimality is computationally infeasible. Therefore, a practical approach to multi-objective optimization is to investigate a set of solutions (the best-known Pareto set) that represents the Pareto optimal set as well as possible. With these concerns in mind, a multi-objective optimization approach should achieve the following three conflicting goals [1]:

1. The best-known Pareto front should be as close as possible to the true Pareto front. Ideally, the best-known Pareto set should be a subset of the Pareto optimal set.
2. Solutions in the best-known Pareto set should be uniformly distributed and diverse over the Pareto front in order to provide the decision-maker a true picture of trade-offs.
3. The best-known Pareto front should capture the whole spectrum of the Pareto front. This requires investigating solutions at the extreme ends of the objective function space.

For a given computational time limit, the first goal is best served by focusing (intensifying) the search on a particular region of the Pareto front. On the contrary, the second goal demands the search effort to be uniformly distributed over the Pareto front. The third goal aims at extending the Pareto front at both ends, exploring new extreme solutions.

This paper presents common approaches used in multi-objective GA to attain these three conflicting goals while solving a multi-objective optimization problem.

3. Genetic algorithms

The concept of GA was developed by Holland and his colleagues in the 1960s and 1970s [2]. GA are inspired by the evolutionist theory explaining the origin of species. In nature, weak and unfit species within their environment are faced with extinction by natural selection. The strong ones have greater opportunity to pass their genes to future generations via reproduction. In the long run, species carrying the correct combination in their genes become dominant in their population. Sometimes, during the slow process of evolution, random changes may occur in genes. If these changes provide additional advantages in the challenge for survival, new species evolve from the old ones. Unsuccessful changes are eliminated by natural selection.

In GA terminology, a solution vector x ∈ X is called an individual or a chromosome. Chromosomes are made of discrete units called genes. Each gene controls one or more features of the chromosome. In the original implementation of GA by Holland, genes are assumed to be binary digits. In later implementations, more varied gene types have been introduced. Normally, a chromosome corresponds to a unique solution x in the solution space. This requires a mapping mechanism between the solution space and the chromosomes. This mapping is called an encoding. In fact, GA work on the encoding of a problem, not on the problem itself.

GA operate with a collection of chromosomes, called a population. The population is normally randomly initialized. As the search evolves, the population includes fitter

and fitter solutions, and eventually it converges, meaning that it is dominated by a single solution. Holland also presented a proof of convergence (the schema theorem) to the global optimum where chromosomes are binary vectors.

GA use two operators to generate new solutions from existing ones: crossover and mutation. The crossover operator is the most important operator of GA. In crossover, generally two chromosomes, called parents, are combined to form new chromosomes, called offspring. The parents are selected among existing chromosomes in the population with preference towards fitness, so that offspring are expected to inherit the good genes which make the parents fitter. By iteratively applying the crossover operator, genes of good chromosomes are expected to appear more frequently in the population, eventually leading to convergence to an overall good solution.

The mutation operator introduces random changes into the characteristics of chromosomes. Mutation is generally applied at the gene level. In typical GA implementations, the mutation rate (probability of changing the properties of a gene) is very small and depends on the length of the chromosome. Therefore, the new chromosome produced by mutation will not be very different from the original one. Mutation plays a critical role in GA. As discussed earlier, crossover leads the population to converge by making the chromosomes in the population alike. Mutation reintroduces genetic diversity back into the population and assists the search in escaping from local optima.

Reproduction involves selection of chromosomes for the next generation. In the most general case, the fitness of an individual determines the probability of its survival for the next generation. There are different selection procedures in GA depending on how the fitness values are used. Proportional selection, ranking, and tournament selection are the most popular selection procedures. The procedure of a generic GA [3] is given as follows:

Step 1: Set t = 1. Randomly generate N solutions to form the first population, P1. Evaluate the fitness of solutions in P1.
Step 2: Crossover: Generate an offspring population Qt as follows:
  Step 2.1: Choose two solutions x and y from Pt based on the fitness values.
  Step 2.2: Using a crossover operator, generate offspring and add them to Qt.
Step 3: Mutation: Mutate each solution x ∈ Qt with a predefined mutation rate.
Step 4: Fitness assignment: Evaluate and assign a fitness value to each solution x ∈ Qt based on its objective function value and infeasibility.
Step 5: Selection: Select N solutions from Qt based on their fitness and copy them to Pt+1.
Step 6: If the stopping criterion is satisfied, terminate the search and return the current population; else, set t = t + 1 and go to Step 2.

4. Multi-objective GA

Being a population-based approach, GA are well suited to solve multi-objective optimization problems. A generic single-objective GA can be modified to find a set of multiple non-dominated solutions in a single run. The ability of GA to simultaneously search different regions of a solution space makes it possible to find a diverse set of solutions for difficult problems with non-convex, discontinuous, and multi-modal solution spaces. The crossover operator of GA may exploit structures of good solutions with respect to different objectives to create new non-dominated solutions in unexplored parts of the Pareto front. In addition, most multi-objective GA do not require the user to prioritize, scale, or weigh objectives. Therefore, GA have been the most popular heuristic approach to multi-objective design and optimization problems. Jones et al. [4] reported that 90% of the approaches to multi-objective optimization aimed to approximate the true Pareto front for the underlying problem. A majority of these used a meta-heuristic technique, and 70% of all meta-heuristic approaches were based on evolutionary approaches.

The first multi-objective GA, called the vector evaluated GA (VEGA), was proposed by Schaffer [5]. Afterwards, several multi-objective evolutionary algorithms were developed, including the Multi-objective Genetic Algorithm (MOGA) [6], Niched Pareto Genetic Algorithm (NPGA) [7], Weight-based Genetic Algorithm (WBGA) [8], Random Weighted Genetic Algorithm (RWGA) [9], Nondominated Sorting Genetic Algorithm (NSGA) [10], Strength Pareto Evolutionary Algorithm (SPEA) [11], improved SPEA (SPEA2) [12], Pareto-Archived Evolution Strategy (PAES) [13], Pareto Envelope-based Selection Algorithm (PESA) [14], Region-based Selection in Evolutionary Multiobjective Optimization (PESA-II) [15], Fast Nondominated Sorting Genetic Algorithm (NSGA-II) [16], Multi-objective Evolutionary Algorithm (MEA) [17], Micro-GA [18], Rank-Density Based Genetic Algorithm (RDGA) [19], and Dynamic Multi-objective Evolutionary Algorithm (DMOEA) [20]. Note that although there are many variations of multi-objective GA in the literature, these cited GA are well-known and credible algorithms that have been used in many applications, and their performances were tested in several comparative studies.

Several survey papers [1,11,21–27] have been published on evolutionary multi-objective optimization. Coello lists more than 2000 references on his website [28]. Generally, multi-objective GA differ based on their fitness assignment procedure, elitism, or diversification approaches. In Table 1, highlights of the well-known multi-objective GA with their advantages and disadvantages are given. Most survey papers on multi-objective evolutionary approaches introduce and compare different algorithms. This paper takes a different course: it focuses on important issues in designing a multi-objective GA and describes common techniques used in multi-objective GA to attain the three

Table 1
A list of well-known multi-objective GA

VEGA [5]
  Fitness assignment: each subpopulation is evaluated with respect to a different objective
  Diversity mechanism: no. Elitism: no. External population: no
  Advantages: first MOGA; straightforward implementation
  Disadvantages: tends to converge to the extreme of each objective

MOGA [6]
  Fitness assignment: Pareto ranking
  Diversity mechanism: fitness sharing by niching. Elitism: no. External population: no
  Advantages: simple extension of single-objective GA
  Disadvantages: usually slow convergence; problems related to niche size parameter

WBGA [8]
  Fitness assignment: weighted average of normalized objectives
  Diversity mechanism: niching; predefined weights. Elitism: no. External population: no
  Advantages: simple extension of single-objective GA
  Disadvantages: difficulties in nonconvex objective function space

NPGA [7]
  Fitness assignment: no fitness assignment; tournament selection
  Diversity mechanism: niche count as tie-breaker in tournament selection. Elitism: no. External population: no
  Advantages: very simple selection process with tournament selection
  Disadvantages: problems related to niche size parameter; extra parameter for tournament selection

RWGA [9]
  Fitness assignment: weighted average of normalized objectives
  Diversity mechanism: randomly assigned weights. Elitism: yes. External population: yes
  Advantages: efficient and easy to implement
  Disadvantages: difficulties in nonconvex objective function space

PESA [14]
  Fitness assignment: no fitness assignment
  Diversity mechanism: cell-based density. Elitism: pure elitist. External population: yes
  Advantages: easy to implement; computationally efficient
  Disadvantages: performance depends on cell sizes; prior information needed about objective space

PAES [29]
  Fitness assignment: Pareto dominance is used to replace a parent if the offspring dominates it
  Diversity mechanism: cell-based density as tie-breaker between offspring and parent. Elitism: yes. External population: yes
  Advantages: random mutation hill-climbing strategy; easy to implement; computationally efficient
  Disadvantages: not a population-based approach; performance depends on cell sizes

NSGA [10]
  Fitness assignment: ranking based on non-domination sorting
  Diversity mechanism: fitness sharing by niching. Elitism: no. External population: no
  Advantages: fast convergence
  Disadvantages: problems related to niche size parameter

NSGA-II [30]
  Fitness assignment: ranking based on non-domination sorting
  Diversity mechanism: crowding distance. Elitism: yes. External population: no
  Advantages: single parameter (N); well tested; efficient
  Disadvantages: crowding distance works in objective space only

SPEA [11]
  Fitness assignment: ranking based on the external archive of non-dominated solutions
  Diversity mechanism: clustering to truncate external population. Elitism: yes. External population: yes
  Advantages: well tested; no parameter for clustering
  Disadvantages: complex clustering algorithm

SPEA-2 [12]
  Fitness assignment: strength of dominators
  Diversity mechanism: density based on the k-th nearest neighbor. Elitism: yes. External population: yes
  Advantages: improved SPEA; makes sure extreme points are preserved
  Disadvantages: computationally expensive fitness and density calculation

RDGA [19]
  Fitness assignment: the problem is reduced to a bi-objective problem with solution rank and density as objectives
  Diversity mechanism: forbidden-region cell-based density. Elitism: yes. External population: yes
  Advantages: dynamic cell update; robust with respect to the number of objectives
  Disadvantages: more difficult to implement than others

DMOEA [20]
  Fitness assignment: cell-based ranking
  Diversity mechanism: adaptive cell-based density. Elitism: yes (implicitly). External population: no
  Advantages: includes efficient techniques to update cell densities; adaptive approaches to set GA parameters
  Disadvantages: more difficult to implement than others

goals in multi-objective optimization. This approach is also taken in the survey paper by Zitzler et al. [1]. However, the discussion in this paper is aimed at introducing the components of multi-objective GA to researchers and practitioners without a background in multi-objective GA. It is also important to note that although several state-of-the-art algorithms exist, as cited above, many researchers who applied multi-objective GA to their problems have preferred to design their own customized algorithms by adapting strategies from various multi-objective GA. This observation is another motivation for introducing the components of multi-objective GA rather than focusing on several algorithms. However, the pseudocode for some of the well-known multi-objective GA is also provided in order to demonstrate how these procedures are incorporated within a multi-objective GA.
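Before turning to the design issues below, the dominance test defined in Section 2 — the primitive that all of these algorithms share — can be sketched in a few lines of Python (an illustrative sketch; the function names are mine, not the paper's, and all objectives are minimized):

```python
def dominates(zx, zy):
    """True if objective vector zx dominates zy (minimization):
    zx is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(zx, zy))
            and any(a < b for a, b in zip(zx, zy)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors.
    dominates(p, p) is False, so each point is safely compared to itself."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

For example, pareto_front([(1, 2), (2, 1), (2, 2)]) keeps (1, 2) and (2, 1) and drops (2, 2), which is dominated by (1, 2).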

5. Design issues and components of multi-objective GA

5.1. Fitness functions

5.1.1. Weighted sum approaches
The classical approach to solve a multi-objective optimization problem is to assign a weight wi to each normalized objective function zi′(x), so that the problem is converted to a single-objective problem with a scalar objective function as follows:

min z = w1 z1′(x) + w2 z2′(x) + … + wk zk′(x),   (1)

where zi′(x) is the normalized objective function zi(x) and Σ wi = 1. This is called an a priori approach since the user is expected to provide the weights. Solving a problem with the objective function (1) for a given weight vector w = {w1, w2, …, wk} yields a single solution, and if multiple solutions are desired, the problem must be solved multiple times with different weight combinations. The main difficulty with this approach is selecting a weight vector for each run. To automate this process, Hajela and Lin [8] proposed the WBGA for multi-objective optimization (WBGA-MO). In the WBGA-MO, each solution xi in the population uses a different weight vector wi = {w1, w2, …, wk} in the calculation of the summed objective function (1). The weight vector wi is embedded within the chromosome of solution xi. Therefore, multiple solutions can be simultaneously searched in a single run. In addition, weight vectors can be adjusted to promote diversity of the population.

Other researchers [9,31] have proposed a MOGA based on a weighted sum of multiple objective functions where a normalized weight vector wi is randomly generated for each solution xi during the selection phase at each generation. This approach aims to stipulate multiple search directions in a single run without using additional parameters. The general procedure of the RWGA using random weights is given as follows [31]:

Procedure RWGA:
E = external archive to store non-dominated solutions found during the search so far;
nE = number of elitist solutions immigrating from E to P in each generation.

Step 1: Generate a random population.
Step 2: Assign a fitness value to each solution x ∈ Pt by performing the following steps:
  Step 2.1: Generate a random number uk in [0,1] for each objective k, k = 1, …, K.
  Step 2.2: Calculate the random weight of each objective k as wk = uk / (u1 + … + uK).
  Step 2.3: Calculate the fitness of the solution as f(x) = Σ_{k=1..K} wk zk(x).
Step 3: Calculate the selection probability of each solution x ∈ Pt as p(x) = (f(x) − fmin) / Σ_{y∈Pt} (f(y) − fmin), where fmin = min{f(x) | x ∈ Pt}.
Step 4: Select parents using the selection probabilities calculated in Step 3. Apply crossover on the selected parent pairs to create N offspring. Mutate offspring with a predefined mutation rate. Copy all offspring to Pt+1. Update E if necessary.
Step 5: Randomly remove nE solutions from Pt+1 and add the same number of solutions from E to Pt+1.
Step 6: If the stopping condition is not satisfied, set t = t + 1 and go to Step 2. Otherwise, return E.

The main advantage of the weighted sum approach is its straightforward implementation. Since a single objective is used in fitness assignment, a single-objective GA can be used with minimum modifications. In addition, this approach is computationally efficient. The main disadvantage of this approach is that not all Pareto-optimal solutions can be investigated when the true Pareto front is non-convex. Therefore, multi-objective GA based on the weighted sum approach have difficulty in finding solutions uniformly distributed over a non-convex trade-off surface [1].

5.1.2. Altering objective functions
As mentioned earlier, VEGA [5] is the first GA used to approximate the Pareto-optimal set by a set of non-dominated solutions. In VEGA, population Pt is randomly divided into K equal-sized sub-populations P1, P2, …, PK. Then, each solution in subpopulation Pi is assigned a fitness value based on objective function zi. Solutions are selected from these subpopulations using proportional selection for crossover and mutation. Crossover and mutation are performed on the new population in the same way as for a single-objective GA.

Procedure VEGA:
NS = subpopulation size (NS = N/K)

Step 1: Start with a random initial population P0. Set t = 0.
Step 2: If the stopping criterion is satisfied, return Pt.
Step 3: Randomly sort population Pt.
Step 4: For each objective k, k = 1, …, K, perform the following steps:
  Step 4.1: For i = 1 + (k − 1)NS, …, kNS, assign fitness value f(xi) = zk(xi) to the ith solution in the sorted population.
  Step 4.2: Based on the fitness values assigned in Step 4.1, select NS solutions between the (1 + (k − 1)NS)th and (kNS)th solutions of the sorted population to create subpopulation Pk.
Step 5: Combine all subpopulations P1, …, PK and apply crossover and mutation on the combined population to create Pt+1 of size N. Set t = t + 1, and go to Step 2.

A similar approach to VEGA is to use only a single objective function, which is randomly determined each time in the selection phase [32]. The main advantage of the alternating objectives approach is that it is easy to implement and

computationally as efficient as a single-objective GA. In fact, this approach is a straightforward extension of a single-objective GA to solve multi-objective problems. The major drawback of objective switching is that the population tends to converge to solutions which are superior in one objective but poor in others.

5.1.3. Pareto-ranking approaches
Pareto-ranking approaches explicitly utilize the concept of Pareto dominance in evaluating fitness or assigning selection probability to solutions. The population is ranked according to a dominance rule, and then each solution is assigned a fitness value based on its rank in the population, not its actual objective function value. Note that herein all objectives are assumed to be minimized. Therefore, a lower rank corresponds to a better solution in the following discussions.

The first Pareto ranking technique was proposed by Goldberg [3] as follows:

Step 1: Set i = 1 and TP = P.
Step 2: Identify the non-dominated solutions in TP and assign them to set Fi.
Step 3: Set TP = TP \ Fi. If TP = ∅, go to Step 4; else set i = i + 1 and go to Step 2.
Step 4: For every solution x ∈ P at generation t, assign rank r1(x,t) = i if x ∈ Fi.

In the procedure above, F1, F2, … are called non-dominated fronts, and F1 is the Pareto front of population P. NSGA [10] also classifies the population into non-dominated fronts using an algorithm similar to that given above. Then a dummy fitness value is assigned to each front using a fitness sharing function such that the worst fitness value assigned to Fi is better than the best fitness value assigned to Fi+1. In NSGA-II [16], a more efficient algorithm, named the fast non-dominated-sort algorithm, was developed to form non-dominated fronts. Fonseca and Fleming [6] used a slightly different rank assignment approach than the ranking based on non-dominated fronts, as follows:

r2(x,t) = 1 + nq(x,t),   (2)

where nq(x,t) is the number of solutions dominating solution x at generation t. This ranking method penalizes solutions located in regions of the objective function space which are dominated (covered) by densely populated sections of the Pareto front. For example, in Fig. 1b solution i is dominated by solutions c, d and e. Therefore, it is assigned a rank of 4, although it is in the same front as solutions f, g and h, which are dominated by only a single solution.

SPEA [11] uses a ranking procedure to assign better fitness values to non-dominated solutions at underrepresented regions of the objective space. In SPEA, an external list E of a fixed size stores the non-dominated solutions that have been investigated thus far during the search. For each solution y ∈ E, a strength value is defined as

s(y,t) = np(y,t) / (NP + 1),

where np(y,t) is the number of solutions that y dominates in P and NP is the population size. The rank of a solution y ∈ E is assigned as r3(y,t) = s(y,t), and the rank of a solution x ∈ P is calculated as

r3(x,t) = 1 + Σ_{y∈E, y ≻ x} s(y,t).

Fig. 1c illustrates an example of the SPEA ranking method. In the former two methods, all non-dominated solutions are assigned a rank of 1. This method, however, favors solution a (in the figure) over the other non-dominated solutions, since it covers the least number of solutions in the objective function space. Therefore, a wide, uniformly distributed set of non-dominated solutions is encouraged.

The accumulated ranking density strategy [19] also aims to penalize redundancy in the population due to overrepresentation. This ranking method is given as

r4(x,t) = 1 + Σ_{y∈P, y ≻ x} r(y,t).

To calculate the rank of a solution x, the ranks of the solutions dominating it must be calculated first. Fig. 1d shows an example of this ranking method (based on r2). Using ranking method r4, solutions i, l and n are ranked higher than their counterparts at the same non-dominated front, since the portion of the trade-off surface covering them is crowded by the three nearby solutions c, d and e.

Although some of the ranking approaches described in this section can be used directly to assign fitness values to individual solutions, they are usually combined with various fitness sharing techniques to achieve the second goal in multi-objective optimization, finding a diverse and uniform Pareto front.

5.2. Diversity: fitness assignment, fitness sharing, and niching

Maintaining a diverse population is an important consideration in multi-objective GA in order to obtain solutions uniformly distributed over the Pareto front. Without preventive measures, the population tends to form relatively few clusters in multi-objective GA. This phenomenon is called genetic drift, and several approaches have been devised to prevent it, as follows.

5.2.1. Fitness sharing
Fitness sharing encourages the search in unexplored sections of a Pareto front by artificially reducing the fitness of solutions in densely populated areas. To achieve this goal, densely populated areas are identified and a penalty

[Figure 1 — four panels (a)–(d) showing the same set of solutions a–n in the objective space (z1, z2), annotated with the ranks produced by the different ranking methods.]
Fig. 1. Ranking methods used in multi-objective GA.
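The r2 rank assignment of Eq. (2), illustrated in Fig. 1b, can be sketched in a few lines of Python (an illustrative reimplementation with made-up sample points, not the figure's actual data; all objectives are minimized):

```python
def dominance_rank(points):
    """Rank each objective vector as in Eq. (2):
    r2 = 1 + number of solutions that dominate it (minimization)."""
    def dominates(a, b):  # a dominates b
        return (all(i <= j for i, j in zip(a, b))
                and any(i < j for i, j in zip(a, b)))
    return [1 + sum(dominates(q, p) for q in points) for p in points]
```

For the sample points (1, 5), (2, 3), (4, 1) and (3, 4), only (3, 4) is dominated (by (2, 3)), so the ranks come out as [1, 1, 1, 2].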

method is used to penalize the solutions located in such areas.

The idea of fitness sharing was first proposed by Goldberg and Richardson [33] in the investigation of multiple local optima for multi-modal functions. Fonseca and Fleming [6] used this idea to penalize clustered solutions with the same rank as follows:

Step 1: Calculate the Euclidean distance between every solution pair x and y in the objective space, normalized between 0 and 1, as

dz(x,y) = sqrt( Σ_{k=1..K} [ (zk(x) − zk(y)) / (zk_max − zk_min) ]^2 ),   (3)

where zk_max and zk_min are the maximum and minimum values of the objective function zk(·) observed so far during the search, respectively.

Step 2: Based on these distances, calculate a niche count for each solution x ∈ P as

nc(x,t) = Σ_{y∈P, r(y,t)=r(x,t)} max{ (σshare − dz(x,y)) / σshare, 0 },   (4)

where σshare is the niche size.

Step 3: After calculating niche counts, adjust the fitness of each solution as follows:

f′(x,t) = f(x,t) / nc(x,t).

In the procedure above, σshare defines a neighborhood of solutions in the objective space (Fig. 1a). The solutions in the same neighborhood contribute to each other's niche count. Therefore, a solution in a crowded neighborhood will have a higher niche count, reducing the probability of selecting that solution as a parent. As a result, niching limits the proliferation of solutions in one particular neighborhood of the objective function space.

Another alternative is to use the distance in the decision variable space between two solutions x and y, defined as

dx(x,y) = sqrt( (1/M) Σ_{i=1..M} (xi − yi)^2 ),   (5)

where M is the number of decision variables, in the calculation of the niche count. Eq. (5) is a measure of the structural differences between two solutions. Two solutions might be very close in the objective function space while they have very different structural features. Therefore,

fitness sharing based on the objective function space may reduce diversity in the decision variable space. However, Deb and Goldberg [34] reported that fitness sharing in objective function space usually performs better than one based on decision variable space.

One of the disadvantages of fitness sharing based on niche count is that the user has to select a new parameter \sigma_{share}. To address this problem, Deb and Goldberg [34] and Fonseca and Fleming [6] developed systematic approaches to estimate and dynamically update \sigma_{share}. Another disadvantage of niching is the computational effort to calculate niche counts. However, the benefits of fitness sharing usually surpass the cost of the extra computational effort. Miller and Shaw [35] proposed a dynamic niche sharing approach to increase the effectiveness of computing niche counts.

MOGA [6] was the first multi-objective GA that explicitly used Pareto-based ranking and niching techniques together to encourage the search toward the true Pareto front while maintaining diversity in the population. Therefore, it is a good example to demonstrate how Pareto-based ranking and fitness sharing can be integrated in a multi-objective GA. The procedure of MOGA is given as follows:

Procedure MOGA:

Step 1: Start with a random initial population P0. Set t = 0.
Step 2: If the stopping criterion is satisfied, return Pt.
Step 3: Evaluate the fitness of the population as follows:
Step 3.1: Assign a rank r(x, t) to each solution x ∈ Pt using the ranking scheme given in Eq. (2).
Step 3.2: Assign a fitness value to each solution based on the solution's rank as follows [36]:

f(x, t) = N - \sum_{k=1}^{r(x,t)-1} n_k - 0.5 (n_{r(x,t)} - 1),

where n_k is the number of the solutions with rank k.
Step 3.3: Calculate the niche count nc(x, t) of each solution x ∈ Pt using Eq. (4).
Step 3.4: Calculate the shared fitness value of each solution x ∈ Pt as follows:

f'(x, t) = f(x, t) / nc(x, t).

Step 3.5: Normalize the fitness values by using the shared fitness values:

f''(x, t) = \frac{f'(x, t)\, n_{r(x,t)}}{\sum_{y \in P_t,\, r(y,t) = r(x,t)} f'(y, t)}\, f(x, t).

Step 4: Use a stochastic selection method based on f'' to select parents for the mating pool. Apply crossover and mutation on the mating pool until offspring population Qt of size N is filled. Set P_{t+1} = Q_t.
Step 5: Set t = t + 1 and go to Step 2.

In SPEA2 [12], a density measure is used to discriminate between solutions with the same rank, where the density of a solution is defined as the inverse of the distance to its kth closest neighbor in objective function space. The density of a solution is similar to its niche count. However, selecting a value for the parameter k is more straightforward than selecting a value for \sigma_{share}.

5.2.2. Crowding distance

Crowding distance approaches aim to obtain a uniform spread of solutions along the best-known Pareto front without using a fitness sharing parameter. For example, NSGA-II [16] uses a crowding distance method as follows (Fig. 2b):

Step 1: Rank the population and identify the non-dominated fronts F1, F2, …, FR. For each front j = 1, …, R, repeat Steps 2 and 3.
Step 2: For each objective function k, sort the solutions in Fj in ascending order. Let l = |Fj| and x_{[i,k]} represent the ith solution in the sorted list with respect to objective function k. Assign cd_k(x_{[1,k]}) = ∞ and cd_k(x_{[l,k]}) = ∞, and for i = 2, …, l−1 assign

cd_k(x_{[i,k]}) = \frac{z_k(x_{[i+1,k]}) - z_k(x_{[i-1,k]})}{z_k^{max} - z_k^{min}}.

Step 3: To find the total crowding distance cd(x) of a solution x, sum the solution's crowding distances with respect to each objective, i.e., cd(x) = \sum_k cd_k(x).

The main advantage of the crowding approach described above is that a measure of population density around a solution is computed without requiring a user-defined parameter such as \sigma_{share} or the kth closest neighbor. In NSGA-II, this crowding distance measure is used as a tie-breaker in a selection technique called the crowded tournament selection operator: randomly select two solutions x and y; if the solutions are in the same non-dominated front, the solution with the higher crowding distance is the winner. Otherwise, the solution with the lowest rank is selected.

5.2.3. Cell-based density

In this approach [13,19,20,29], the objective space is divided into K-dimensional cells (see Fig. 2c). The number of solutions in each cell is defined as the density of the cell, and the density of a solution is equal to the density of the cell in which the solution is located. This density information is used to achieve diversity similarly to the fitness sharing approach. For example, in PESA [14], between two non-dominated solutions, the one with a lower density is preferable. The procedure of PESA is given as follows:

Procedure PESA:
NE = the maximum size of the non-dominated archive E,
NP = the population size,
n = the number of grids along each objective function axis.

Fig. 2. Diversity methods used in multi-objective GA.
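The cell-based density idea of Section 5.2.3 can be sketched in a few lines of Python (an illustrative sketch of the general idea only, not code from any of the cited algorithms; all names are ours, and dynamic boundary updates are omitted):

```python
# Sketch of cell-based density: map each objective vector to a
# hyper-cube index in the normalized objective space, then count how
# many solutions share each cell.
from collections import Counter

def cell_index(z, zmin, zmax, n):
    """Map objective vector z to a tuple of cell coordinates,
    using n cells per objective axis."""
    idx = []
    for k in range(len(z)):
        width = (zmax[k] - zmin[k]) / n or 1.0  # guard a degenerate axis
        # clamp so the maximum observed value falls in the last cell
        idx.append(min(int((z[k] - zmin[k]) / width), n - 1))
    return tuple(idx)

def cell_densities(population, n):
    """Density of each solution = size of the cell it occupies,
    with cell bounds taken from the observed min/max per axis."""
    K = len(population[0])
    zmin = [min(z[k] for z in population) for k in range(K)]
    zmax = [max(z[k] for z in population) for k in range(K)]
    cells = [cell_index(z, zmin, zmax, n) for z in population]
    counts = Counter(cells)
    return [counts[c] for c in cells]
```

With this layout, a PESA-style preference between two non-dominated solutions reduces to a comparison of their density values.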

Step 1: Start with a random initial population P0 and set the external archive E0 = ∅, t = 0.
Step 2: Divide the normalized objective space into n^K hyper-cubes, where n is the number of grids along a single objective axis and K is the number of objectives.
Step 3: Update the non-dominated archive Et by incorporating new solutions from Pt one by one as follows:
Case 1: If a new solution is dominated by at least one solution in Et, discard the new solution.
Case 2: If a new solution dominates some solutions in Et, remove those dominated solutions from Et and add the new solution to Et. Update the membership of the hyper-cubes.
Case 3: If a new solution is not dominated by and does not dominate any solution in Et, add this solution to Et. If |Et| = NE + 1, randomly choose a solution from the most crowded hyper-cubes to be removed. Update the membership of the hyper-cubes.
Step 4: If the stopping criterion is satisfied, stop and return Et.
Step 5: Set Pt = ∅, and select solutions from Et for crossover and mutation based on the density information of the hyper-cubes. For example, if binary tournament selection is used, the winner is the solution located in the less crowded hyper-cube. Apply crossover and mutation to generate NP offspring and copy them to Pt+1.
Step 6: Set t = t + 1 and go to Step 3.

PESA-II [15] follows a more direct approach, namely region-based selection, where cells, but not individual solutions, are selected during the selection process. In this approach, a cell that is sparsely occupied has a higher chance to be selected than a crowded cell. Once a cell is selected, solutions within the cell are randomly chosen to participate in crossover and mutation.

Lu and Yen [19] and Yen and Lu [20] developed an efficient approach to identify a solution's cell in the case of dynamic cell dimensions. In this approach, the width of a cell along the kth objective dimension is (z_k^{max} - z_k^{min})/n_k, where n_k is the number of cells dedicated to the kth objective dimension and z_k^{max} and z_k^{min} are the maximum and minimum values of objective function k so far in the search, respectively. Therefore, cell boundaries are updated when a new maximum or minimum objective function value is discovered. RDGA [19] uses a cell-based density approach in an interesting way to convert a general K-objective problem into a bi-objective optimization

problem with the objectives to minimize the individual rank value and the density of the population. The procedure of this approach is given as follows:

Procedure RDGA:
n_k = the number of cells along the axis of objective function k.

Step 1: Create a random parent population P0 of size N, t = 0.
Step 2: Divide the normalized objective space into n_1 × n_2 × ⋯ × n_K hyper-cells.
Step 3: Update the cell dimensions as d_k = (z_k^{max} - z_k^{min})/n_k.
Step 4: Identify the cell membership of each solution x ∈ Pt.
Step 5: Assign a density value to each solution x ∈ Pt as m(x, t) = the number of solutions located in the same cell as x.
Step 6: Use the automatic accumulated ranking method to rank each solution as follows:

r(x, t) = 1 + \sum_{y \in P,\, y \succ x} r(y, t).

Step 7: Use the rank and density of each solution as the objectives of a bi-objective optimization problem. Use VEGA's fitness assignment approach to minimize the individual rank value and the population density while creating the mating pool. In addition, randomly copy solutions from the non-dominated archive to the mating pool.
Step 8: Apply crossover and mutation on the mating pool. A selected parent performs crossover only with the best solution in the parent's cell and neighborhood cells. Do not allow an offspring to be located in a cell dominated by its parents. Replace the selected parent if it is dominated by the offspring. Update the non-dominated solutions archive. Set t = t + 1 and go to Step 3 if the stopping condition is not satisfied.

The main advantage of the cell-based density approach is that a global density map of the objective function space is obtained as a result of the density calculation. The search can be encouraged toward sparsely inhabited regions of the objective function space based on this map. RDGA [19] uses a method based on this global density map to push solutions out of high-density areas toward low-density areas. Another advantage is its computational efficiency compared to the niching or neighborhood-based density techniques. Yen and Lu [20] proposed several data structures and algorithms to efficiently store cell information and modify cell densities.

5.3. Elitism

Elitism in the context of single-objective GA means that the best solution found so far during the search always survives to the next generation. In this respect, all non-dominated solutions discovered by a multi-objective GA are considered elite solutions. However, the implementation of elitism in multi-objective optimization is not as straightforward as in single-objective optimization, mainly due to the large number of possible elitist solutions. Early multi-objective GA did not use elitism. However, most recent multi-objective GA and their variations use elitism. As discussed in [11,36,37], multi-objective GA using elitist strategies tend to outperform their non-elitist counterparts. Multi-objective GA use two strategies to implement elitism [26]: (i) maintaining elitist solutions in the population, and (ii) storing elitist solutions in an external secondary list and reintroducing them to the population.

5.3.1. Strategies to maintain elitist solutions in the population

Random selection does not ensure that a non-dominated solution will survive to the next generation. A straightforward implementation of elitism in a multi-objective GA is to copy all non-dominated solutions in population Pt to population Pt+1, then fill the rest of Pt+1 by selecting from the remaining dominated solutions in Pt. This approach will not work when the total number of non-dominated parent and offspring solutions is larger than NP. To address this problem, several approaches have been proposed.

Konak and Smith [38,39] proposed a multi-objective GA with a dynamic population size and a pure elitist strategy. In this multi-objective GA, the population includes only non-dominated solutions. If the size of the population reaches an upper bound Nmax, then Nmax − Nmin solutions are removed from the population, giving consideration to maintaining the diversity of the current non-dominated front. To achieve this, Pareto domination tournament selection is used as follows [7]: two solutions are randomly chosen, and the solution with the higher niche count is removed, since all solutions are non-dominated. A similar pure elitist multi-objective GA with a dynamic population size has also been proposed [17].

NSGA-II uses a fixed population size of N. In generation t, an offspring population Qt of size N is created from parent population Pt, and non-dominated fronts F1, F2, …, FR are identified in the combined population Pt ∪ Qt. The next population Pt+1 is filled starting from solutions in F1, then F2, and so on, as follows. Let k be the index of a non-dominated front Fk such that |F1 ∪ F2 ∪ ⋯ ∪ Fk| ≤ N and |F1 ∪ F2 ∪ ⋯ ∪ Fk ∪ Fk+1| > N. First, all solutions in fronts F1, F2, …, Fk are copied to Pt+1, and then the least crowded (N − |Pt+1|) solutions in Fk+1 are added to Pt+1. This approach makes sure that all non-dominated solutions (F1) are included in the next population if |F1| ≤ N, and the secondary selection based on crowding distance promotes diversity. The complete procedure of NSGA-II is given below to demonstrate an implementation of elitism without using a secondary external population.
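The front-by-front filling and crowding-distance tie-breaking described above can be sketched as follows (an illustrative simplification in Python, not the authors' code; function names and the representation of solutions as objective vectors are our own, and non-dominated sorting is assumed to have been done already):

```python
# Sketch of NSGA-II-style elitist replacement: copy whole fronts while
# they fit, then break the last front by crowding distance.

def crowding_distance(front):
    """front: list of objective vectors (tuples). Returns one distance
    per solution; boundary solutions get infinite distance."""
    n = len(front)
    dist = [0.0] * n
    if n == 0:
        return dist
    K = len(front[0])
    for k in range(K):
        order = sorted(range(n), key=lambda i: front[i][k])
        zmin, zmax = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if zmax == zmin:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            lo = front[order[pos - 1]][k]
            hi = front[order[pos + 1]][k]
            dist[i] += (hi - lo) / (zmax - zmin)  # normalized gap
    return dist

def fill_next_population(fronts, N):
    """fronts: fronts F1, F2, ... already sorted by non-dominance rank.
    Returns at most N solutions for the next population."""
    next_pop = []
    for front in fronts:
        if len(next_pop) + len(front) <= N:
            next_pop.extend(front)           # Case 1: whole front fits
        else:
            cd = crowding_distance(front)    # Case 2: keep least crowded
            ranked = sorted(range(len(front)), key=lambda i: -cd[i])
            next_pop.extend(front[i] for i in ranked[: N - len(next_pop)])
            break
    return next_pop
```

In a full implementation, the fronts would come from fast non-dominated sorting of Pt ∪ Qt, and each entry would carry the decision variables as well as the objective values.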

Procedure NSGA-II:

Step 1: Create a random parent population P0 of size N. Set t = 0.
Step 2: Apply crossover and mutation to P0 to create offspring population Q0 of size N.
Step 3: If the stopping criterion is satisfied, stop and return Pt.
Step 4: Set Rt = Pt ∪ Qt.
Step 5: Using the fast non-dominated sorting algorithm, identify the non-dominated fronts F1, F2, …, Fk in Rt.
Step 6: For i = 1, …, k, do the following steps:
Step 6.1: Calculate the crowding distance of the solutions in Fi (as described in Section 5.2.2).
Step 6.2: Create Pt+1 as follows:
Case 1: If |Pt+1| + |Fi| ≤ N, then set Pt+1 = Pt+1 ∪ Fi;
Case 2: If |Pt+1| + |Fi| > N, then add the least crowded N − |Pt+1| solutions from Fi to Pt+1.
Step 7: Use binary tournament selection based on the crowding distance to select parents from Pt+1. Apply crossover and mutation to Pt+1 to create offspring population Qt+1 of size N.
Step 8: Set t = t + 1, and go to Step 3.

Note that when the combined parent and offspring population includes more than N non-dominated solutions, NSGA-II becomes a pure elitist GA where only non-dominated solutions participate in crossover and selection. The main advantage of maintaining non-dominated solutions in the population is straightforward implementation. In this strategy, the population size is an important GA parameter since no external archive is used to store discovered non-dominated solutions.

5.3.2. Elitism with external populations

When an external list is used to store elitist solutions, several issues must be addressed. The first issue is which solutions are going to be stored in the elitist list E. Most multi-objective GA store the non-dominated solutions identified so far during the search [11], and E is updated each time a new solution is created, by removing elitist solutions dominated by the new solution or adding the new solution if it is not dominated by any existing elitist solution. This is a computationally expensive operation. Several data structures have been proposed to efficiently store, update, and search in list E [40,41]. Another issue is the size of list E. Since there might possibly exist a very large number of Pareto optimal solutions for a problem, the elitist list can grow extremely large. Therefore, pruning techniques have been proposed to control the size of E. For example, SPEA uses the average linkage clustering method [42] to reduce the size of E to an upper limit N when the number of non-dominated solutions exceeds N, as follows:

Step 1: Initially, assign each solution x ∈ E to a cluster c_i, C = {c_1, c_2, …, c_M}.
Step 2: Calculate the distance between all pairs of clusters c_i and c_j as follows:

d_{c_i, c_j} = \frac{1}{|c_i| \cdot |c_j|} \sum_{x \in c_i,\, y \in c_j} d(x, y).

Here, the distance d(x, y) can be calculated in objective function space using Eq. (3) or in decision variable space using Eq. (5).
Step 3: Merge the cluster pair c_i and c_j with the minimum distance among all clusters into a new cluster.
Step 4: If |C| ≤ N, go to Step 5; else go to Step 2.
Step 5: For each cluster, determine the solution with the minimum average distance to all other solutions in the same cluster (called the centroid solution). Keep the centroid solution of every cluster and remove the other solutions from E.

The final issue is the selection of elitist solutions from E to be reintroduced to the population. In [11,19,20], solutions for Pt+1 are selected from the combined population of Pt and Et. To implement this strategy, populations Pt and Et are combined, a fitness value is assigned to each solution in the combined population Pt ∪ Et, and then N solutions are selected for the next generation Pt+1 based on the assigned fitness values. Another strategy is to reserve room for n elitist solutions in the next population [43]. In this strategy, N − n solutions are selected from parents and newly created offspring, and n solutions are selected from Et.

SPEA and SPEA2 are both very effective algorithms that use an external list to store the non-dominated solutions discovered so far in the search. They are also excellent examples of the use of external populations. The procedure of SPEA2 is given as follows:

Procedure SPEA2:
NE = the maximum size of the non-dominated archive E,
NP = the population size,
k = the parameter for the density calculation, k = \sqrt{NE + NP}.

Step 1: Randomly generate an initial population P0 and set E0 = ∅.
Step 2: Calculate the fitness of each solution x in Pt ∪ Et as follows:
Step 2.1: r(x, t) = \sum_{y \in P_t \cup E_t,\, y \succ x} s(y, t), where s(y, t) is the number of solutions in Pt ∪ Et dominated by solution y.
Step 2.2: Calculate the density as m(x, t) = (\sigma_x^k + 1)^{-1}, where \sigma_x^k is the distance between solution x and its kth nearest neighbor.
Step 2.3: Assign a fitness value as f(x, t) = r(x, t) + m(x, t).
Step 3: Copy all non-dominated solutions in Pt ∪ Et to Et+1. Two cases are possible:
Case 1: If |Et+1| > NE, then truncate |Et+1| − NE solutions by iteratively removing the solutions with the maximum \sigma^k distances. Break any tie by examining \sigma^l

for l = k−1, …, 1 sequentially.
Case 2: If |Et+1| ≤ NE, copy the best NE − |Et+1| dominated solutions, according to their fitness values, from Pt ∪ Et to Et+1.
Step 4: If the stopping criterion is satisfied, stop and return the non-dominated solutions in Et+1.
Step 5: Select parents from Et+1 using binary tournament selection with replacement.
Step 6: Apply crossover and mutation operators to the parents to create NP offspring solutions. Copy the offspring to Pt+1, set t = t + 1, and go to Step 2.

Other examples of elitist approaches using external populations are PESA [14], RDGA [44], RWGA [43], and DMOEA [20].

5.4. Constraint handling

Most real-world optimization problems include constraints that must be satisfied. An excellent survey of the constraint handling techniques used in evolutionary algorithms is given by Coello [45]. A single-objective GA may use one of four different constraint handling strategies: (i) discarding infeasible solutions (the "death penalty"); (ii) reducing the fitness of infeasible solutions by using a penalty function; (iii) crafting genetic operators to always produce feasible solutions; and (iv) transforming infeasible solutions to be feasible ("repair"). Handling of constraints has not been adequately researched for multi-objective GA [46]. For instance, all major multi-objective GA assume problems without constraints. While constraint handling strategies (i), (iii), and (iv) are directly applicable to the multi-objective case, the implementation of penalty function strategies, which is the most frequently used constraint handling strategy in single-objective GA, is not straightforward in multi-objective GA due to the fact that fitness assignment is based on the non-dominance rank of a solution, not on its objective function values.

Jimenez et al. [46,47] proposed a niched selection strategy to address infeasibility in multi-objective problems as follows:

Step 1: Randomly choose two solutions x and y from the population.
Step 2: If one of the solutions is feasible and the other one is infeasible, the winner is the feasible solution, and stop. Otherwise, if both solutions are infeasible, go to Step 3; else go to Step 4.
Step 3: In this case, solutions x and y are both infeasible. Select a random reference set C among the infeasible solutions in the population. Compare solutions x and y to the solutions in reference set C with respect to their degree of infeasibility. To achieve this, calculate a measure of infeasibility (e.g., the number of constraints violated or the total constraint violation) for solutions x, y, and those in set C. If one of solutions x and y is better and the other one is worse than the best solution in C with respect to the calculated infeasibility measure, then the winner is the least infeasible solution. However, if there is a tie, that is, both solutions x and y are either better or worse than the best solution in C, then their niche count in decision variable space (Eq. (5)) is used for selection. In this case, the solution with the lower niche count is the winner.
Step 4: In the case that solutions x and y are both feasible, select a random reference set C among the feasible solutions in the population. Compare solutions x and y to the solutions in set C. If one of them is non-dominated in set C and the other is dominated by at least one solution, the winner is the former. Otherwise, there is a tie between solutions x and y, and the niche count of the solutions is calculated in decision variable space. The solution with the smaller niche count is the winner of the tournament selection.

The procedure above is a comprehensive approach to dealing with infeasibility while maintaining the diversity and dominance of the population. The main disadvantages of this procedure are its computational complexity and additional parameters such as the size of the reference set C and the niche size. However, modifications are also possible. In Step 4, for example, the niche count of the solutions could be calculated in objective function space instead of decision variable space. In Step 3, the solution with the least infeasibility could be declared the winner without comparing solutions x and y to a reference set C with respect to infeasibility. Such modifications could reduce the computational complexity of the procedure.

Deb et al. [16] proposed the constrain-domination concept and a binary tournament selection method based on it, called the constrained tournament method. A solution x is said to constrain-dominate a solution y if any of the following cases is satisfied:

Case 1: Solution x is feasible and solution y is infeasible.
Case 2: Solutions x and y are both infeasible; however, solution x has a smaller constraint violation than y.
Case 3: Solutions x and y are both feasible, and solution x dominates solution y.

In the constrained tournament method, first, non-constrain-dominance fronts F1, F2, F3, …, FR are identified in a similar way to that defined in [3], but by using the constrain-domination criterion instead of the regular domination concept. Note that set F1 corresponds to the set of feasible non-dominated solutions in the population, and front Fi is preferred over Fj for i < j. In the constrained tournament selection, two solutions x and y are randomly chosen from the population. Between x and y, the winner is the one in the more preferred non-constrain-dominance front. If solutions x and y are both in the same front, then the winner is decided based on the niche counts or crowding distances of the solutions. The main advantages of the constrained tournament method are that it requires

fewer parameters and can be easily integrated into a multi-objective GA. A similar approach, called dominance-based tournament selection, was used by Coello and Montes [48] to solve single-objective problems with several difficult constraints, using a modified version of NPGA [7]. In this selection approach, dominance is defined with respect to the constraint violations of two solutions. Infeasible solution x is said to constrain-dominate y if it has fewer or equal constraint violations than solution y for every constraint of the problem, but less violation for at least one constraint. A tie between two constrain-non-dominated solutions is resolved by the total constraint violation of the solutions.

5.5. Parallel and hybrid multi-objective GA

All comparative studies on multi-objective GA agree that elitism and diversity preservation mechanisms improve performance. However, implementing elitism and diversity preservation strategies usually requires substantial computational effort and computer memory. In addition, the evaluation of objective functions may take considerable time in real-life problems. Therefore, researchers have been interested in reducing the execution time and resource requirements of multi-objective GA using advanced data structures. One of the latest trends is parallel and distributed processing. Several recent papers [49–52] presented parallel implementations of multi-objective GA over multiple processors.

Hybridization of GA with local search algorithms is frequently applied in single-objective GA. This approach is usually referred to as a memetic algorithm [53]. Generally, a local search algorithm proceeds as follows:

Step 1: Start with an initial solution x.
Step 2: Generate a set of neighbor solutions around solution x using a simple perturbation rule.
Step 3: If the best solution in the neighborhood set is better than x, replace x with this solution and go to Step 2; else stop.

A local search algorithm is particularly effective in finding local optima if the solution space around the initial solution is convex. This is usually difficult to achieve using standard GA operators. In the hybridization of multi-objective GA with local search algorithms, the important issues are: (i) selecting a solution to which the local search is applied, and (ii) identifying a solution in the neighborhood as the new best solution when multiple non-dominated local solutions exist. Several approaches have been proposed to address these two issues, as follows.

Paquete and Stutzle [54] described a bi-objective GA where local search is used to generate initial solutions by optimizing only one objective. Deb and Goel [55] applied local search to only final solutions. In Ishibuchi and Murata's approach [43], a local search procedure is applied to each offspring generated by crossover, using the same weight vector as the offspring's parents to evaluate neighborhood solutions. Similarly, Ishibuchi et al. [56] also used the weighted sum of the objective functions to evaluate solutions during the local search. However, the local search is selectively applied to only promising solutions, and the weights are also randomly generated instead of using the parents' weight vector. Knowles and Corne [53] presented a memetic version of PAES, called M-PAES. PAES uses the dominance concept to evaluate solutions. Therefore, in M-PAES, a set of local non-dominated solutions is used as a comparison set for solutions investigated during local search. When a new solution is created in the neighborhood, it is only compared with this local non-dominated set, and the necessary updates are performed. Local search is terminated after a maximum number of local solutions are investigated or a maximum number of local moves are performed without any improvement. Tan et al. [57] proposed applying a local search procedure to only solutions that are located apart from others. In addition, the neighborhood size of the local search depends on the density or crowdedness of solutions. By being selective in applying local search, this strategy is computationally efficient while maintaining diversity.

6. Multi-objective GA for reliability optimization

Many engineering problems have multiple objectives, including engineering system design and reliability optimization. There have been several interesting and successful implementations of multi-objective GA for this class of problems. These are described in the following paragraphs.

Marseguerra et al. [58] determined optimal surveillance test intervals using multi-objective GA with the goal of improving reliability and availability. Their research implemented a multi-objective GA which explicitly accounts for the uncertainties in the parameters. The objectives considered were the inverse of the expected system failure probability and the inverse of its variance. These are used to drive the genetic search toward solutions which are guaranteed to give optimal performance with high assurance, i.e., low estimation variance. They successfully applied their procedure to a complex system, a residual heat removal safety system for a boiling water reactor.

Martorell et al. [59] studied the selection of technical specifications and maintenance activities at nuclear power plants to increase the reliability, availability and maintainability (RAM) of safety-related equipment. However, to improve RAM, additional limited resources (e.g., money, work force) are required, posing a multi-objective problem. They demonstrated the viability and significance of their proposed approach using multi-objective GA for an emergency diesel generator system.

Additionally, Martorell et al. [60] considered the optimal allocation of more reliable equipment, testing and maintenance activities to assure high RAM levels for safety-related systems. For these problems, the decision-maker encounters a multi-objective optimization problem where

the parameters of design, testing and maintenance are decision variables. Solutions were obtained by using both single-objective GA and multi-objective GA, which were demonstrated to solve the problem of testing and maintenance optimization based on unavailability and cost criteria.

Sasaki and Gen [61] introduced a multi-objective problem which had fuzzy multiple objective functions and constraints with a generalized upper bounding (GUB) structure. They solved this problem by using a new hybridized GA. This approach leads to a flexible optimal system design by applying fuzzy goals and fuzzy constraints. A new chromosome representation was introduced in their work. To demonstrate the effectiveness of their method, a large-scale optimal system reliability design problem was analyzed.

Reliability allocation to minimize total plant costs, subject to an overall plant safety goal, was presented by Yang et al. [62]. For their problem, design optimization is needed to improve the design, operation and safety of new and/or existing nuclear power plants. They presented an approach to determine the reliability characteristics of reactor systems, subsystems, major components and plant procedures that are consistent with a set of top-level performance goals. To optimize the reliability of the system, the cost for improving and/or degrading the reliability of the system was also included in the reliability allocation process, creating a multi-objective problem. GA was applied to the reliability allocation problem of a typical pressurized water reactor.

Elegbede and Adjallah [63] presented a methodology to optimize the availability and the cost of repairable parallel–series systems. It is a multi-objective combinatorial optimization problem, modeled with continuous and discrete variables. They transformed the problem into a single

salient issues encountered when implementing multi-objective GA.

Consideration of the computational realities along with the performance of the different methods is needed. Also, nearly all problems will require some customization of the GA approaches to properly handle the objectives, constraints, encodings and scale. It is envisioned that the Pareto solutions identified by GA would be pared down to a representative small set for the designer or engineer to further investigate. Therefore, for most implementations it is not vital to find every Pareto optimal solution, but rather to efficiently and reliably identify Pareto optimal solutions across the range of interest for each objective function.

In the reliability field, there are complicating factors, namely the usual computational effort required to evaluate or estimate reliability or availability metrics. This makes consideration of computational effort especially relevant, and the user might need to define several options with differing effort (such as bounds, estimation, exact calculation) that are used for different phases of the search or for different solutions or regions. Also, control of the Pareto set size is critical to keep the computational effort at a reasonable level. While these aspects make the use of multi-objective GA more challenging in the reliability field, the method still offers the most promise of any, with its powerful population-based search and its flexibility.

References

[1] Zitzler E, Deb K, Thiele L. Comparison of multiobjective evolutionary algorithms: empirical results. Evol Comput 2000;8(2):173–95.
[2] Holland JH. Adaptation in natural and artificial systems. Ann Arbor: University of Michigan Press; 1975.
[3] Goldberg DE. Genetic algorithms in search, optimization, and
objective problem and used traditional GA. machine learning. Reading, MA: Addison-Wesley; 1989.
Deb et al. [64] formulated a bi-objective optimization [4] Jones DF, Mirrazavi SK, Tamiz M. Multiobjective meta-heuristics:
problem of minimizing total wire length and minimizing an overview of the current state-of-the-art. Eur J Oper Res
the failure rate in the printed circuit board design. They 2002;137(1):1–9.
[5] Schaffer JD. Multiple objective optimization with vector evaluated
implemented NSGA-II to solve the problem. The results in genetic algorithms. In: Proceedings of the international conference on
the best Pareto fronts found were analyzed to understand genetic algorithm and their applications, 1985.
the trade-offs between reliability and printed circuit board [6] Fonseca CM, Fleming PJ. Multiobjective genetic algorithms. In: IEE
design. colloquium on ‘Genetic Algorithms for Control Systems Engineering’
(Digest No. 1993/130), 28 May 1993. London, UK: IEE; 1993.
Kumar et al. [65] presented a multi-objective GA
[7] Horn J, Nafpliotis N, Goldberg DE. A niched Pareto genetic
approach to design telecommunication networks while algorithm for multiobjective optimization. In: Proceedings of the first
simultaneously minimizing network performance and de- IEEE conference on evolutionary computation. IEEE world congress
sign costs under a reliability constraint. on computational intelligence, 27–29 June, 1994. Orlando, FL, USA:
IEEE; 1994.
[8] Hajela P, lin C-y. Genetic search strategies in multicriterion optimal
design. Struct Optimization 1992;4(2):99–107.
7. Conclusions [9] Murata T, Ishibuchi H. MOGA: multi-objective genetic algorithms.
In: Proceedings of the 1995 IEEE international conference on
Most real-world engineering problems involve simulta- evolutionary computation, 29 November–1 December, 1995. Perth,
neously optimizing multi-objectives where considerations WA, Australia: IEEE; 1995.
of trade-offs is important. In the last decade, evolutionary [10] Srinivas N, Deb K. Multiobjective optimization using nondominated
sorting in genetic algorithms. J Evol Comput 1994;2(3):221–48.
approaches have been the primary tools to solve real-world [11] Zitzler E, Thiele L. Multiobjective evolutionary algorithms: a
multi-objective problems. This paper presented multi- comparative case study and the strength Pareto approach. IEEE
objective GA by focusing on their components and the Trans Evol Comput 1999;3(4):257–71.
[12] Zitzler E, Laumanns M, Thiele L. SPEA2: improving the strength Pareto evolutionary algorithm. Swiss Federal Institute of Technology: Zurich, Switzerland; 2001.
[13] Knowles JD, Corne DW. Approximating the nondominated front using the Pareto archived evolution strategy. Evol Comput 2000;8(2):149–72.
[14] Corne DW, Knowles JD, Oates MJ. The Pareto envelope-based selection algorithm for multiobjective optimization. In: Proceedings of the sixth international conference on parallel problem solving from nature, 18–20 September 2000. Paris, France: Springer; 2000.
[15] Corne D, Jerram NR, Knowles J, Oates J. PESA-II: region-based selection in evolutionary multiobjective optimization. In: Proceedings of the genetic and evolutionary computation conference (GECCO-2001), San Francisco, CA, 2001.
[16] Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 2002;6(2):182–97.
[17] Sarker R, Liang K-H, Newton C. A new multiobjective evolutionary algorithm. Eur J Oper Res 2002;140(1):12–23.
[18] Coello CAC, Pulido GT. A micro-genetic algorithm for multi-objective optimization. In: Evolutionary multi-criterion optimization. First international conference, EMO 2001, 7–9 March 2001. Zurich, Switzerland: Springer; 2001.
[19] Lu H, Yen GG. Rank-density-based multiobjective genetic algorithm and benchmark test function study. IEEE Trans Evol Comput 2003;7(4):325–43.
[20] Yen GG, Lu H. Dynamic multiobjective evolutionary algorithm: adaptive cell-based rank and density estimation. IEEE Trans Evol Comput 2003;7(3):253–74.
[21] Coello CAC. A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowl Inform Syst 1999;1(3):269–308.
[22] Coello CAC. An updated survey of evolutionary multiobjective optimization techniques: state of the art and future trends. In: Proceedings of the 1999 congress on evolutionary computation-CEC99, 6–9 July 1999. Washington, DC, USA: IEEE.
[23] Coello CAC. An updated survey of GA-based multiobjective optimization techniques. ACM Comput Surv 2000;32(2):109–43.
[24] Fonseca CM, Fleming PJ. Genetic algorithms for multiobjective optimization: formulation, discussion and generalization. In: Proceedings of ICGA-93: fifth international conference on genetic algorithms, 17–22 July 1993. Urbana-Champaign, IL, USA: Morgan Kaufmann; 1993.
[25] Fonseca CM, Fleming PJ. Multiobjective optimization and multiple constraint handling with evolutionary algorithms. I. A unified formulation. IEEE Trans Syst Man Cybern A 1998;28(1):26–37.
[26] Jensen MT. Reducing the run-time complexity of multiobjective EAs: the NSGA-II and other algorithms. IEEE Trans Evol Comput 2003;7(5):503–15.
[27] Xiujuan L, Zhongke S. Overview of multi-objective optimization methods. J Syst Eng Electron 2004;15(2):142–6.
[28] Coello CAC. 2005. http://www.lania.mx/ccoello/EMOO/EMOObib.html
[29] Knowles J, Corne D. The Pareto archived evolution strategy: a new baseline algorithm for Pareto multiobjective optimisation. In: Proceedings of the 1999 congress on evolutionary computation-CEC99, 6–9 July 1999. Washington, DC, USA: IEEE; 1999.
[30] Deb K, Agrawal S, Pratap A, Meyarivan T. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In: Proceedings of the sixth international conference on parallel problem solving from nature, 18–20 September 2000. Paris, France: Springer; 2000.
[31] Murata T, Ishibuchi H, Tanaka H. Multi-objective genetic algorithm and its applications to flowshop scheduling. Comput Ind Eng 1996;30(4):957–68.
[32] Kursawe F. A variant of evolution strategies for vector optimization. In: Parallel problem solving from nature. First workshop, PPSN 1 proceedings, 1–3 October 1990. Dortmund, West Germany: Springer; 1991.
[33] Goldberg DE, Richardson J. Genetic algorithms with sharing for multimodal function optimization. In: Genetic algorithms and their applications: proceedings of the second international conference on genetic algorithms, 28–31 July 1987. Cambridge, MA, USA: Lawrence Erlbaum Associates; 1987.
[34] Deb K, Goldberg DE. An investigation of niche and species formation in genetic function optimization. In: Proceedings of the third international conference on genetic algorithms, George Mason University, 1989.
[35] Miller BL, Shaw MJ. Genetic algorithms with dynamic niche sharing for multimodal function optimization. In: Proceedings of the 1996 IEEE international conference on evolutionary computation, ICEC'96, 20–22 May 1996, Nagoya, Japan. Piscataway, NJ, USA: IEEE; 1996.
[36] Deb K. Multi-objective optimization using evolutionary algorithms. New York: Wiley; 2001.
[37] Van Veldhuizen DA, Lamont GB. Multiobjective evolutionary algorithms: analyzing the state-of-the-art. Evol Comput 2000;8(2):125–47.
[38] Konak A, Smith AE. Multiobjective optimization of survivable networks considering reliability. In: Proceedings of the 10th international conference on telecommunication systems. Monterey, CA: Naval Postgraduate School; 2002.
[39] Konak A, Smith AE. Capacitated network design considering survivability: an evolutionary approach. J Eng Optim 2004;36(2):189–205.
[40] Fieldsend JE, Everson RM, Singh S. Using unconstrained elite archives for multiobjective optimization. IEEE Trans Evol Comput 2003;7(3):305–23.
[41] Mostaghim S, Teich J, Tyagi A. Comparison of data structures for storing Pareto-sets in MOEAs. In: Proceedings of the 2002 world congress on computational intelligence—WCCI'02, 12–17 May 2002. Honolulu, HI, USA: IEEE; 2002.
[42] Morse JN. Reducing the size of the nondominated set: pruning by clustering. Comput Oper Res 1980;7(1–2):55–66.
[43] Ishibuchi H, Murata T. Multi-objective genetic local search algorithm. In: Proceedings of the IEEE international conference on evolutionary computation, 20–22 May 1996. Nagoya, Japan: IEEE; 1996.
[44] Lu H, Yen GG. Rank-density based multiobjective genetic algorithm. In: Proceedings of the 2002 world congress on computational intelligence—WCCI'02, 12–17 May 2002. Honolulu, HI, USA: IEEE; 2002.
[45] Coello CAC. A survey of constraint handling techniques used with evolutionary algorithms. Veracruz, Mexico: Laboratorio Nacional de Informática Avanzada; 1999.
[46] Jimenez F, Gomez-Skarmeta AF, Sanchez G, Deb K. An evolutionary algorithm for constrained multi-objective optimization. In: Proceedings of the 2002 world congress on computational intelligence—WCCI'02, 12–17 May 2002. Honolulu, HI, USA: IEEE; 2002.
[47] Jimenez F, Verdegay JL, Gomez-Skarmeta AF. Evolutionary techniques for constrained multiobjective optimization problems. In: Workshop on multi-criterion optimization using evolutionary methods, GECCO-1999, 1999.
[48] Coello CAC, Montes EM. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv Eng Inform 2002;16(3):193–203.
[49] de Toro F, Ortega J, Fernandez J, Diaz A. PSFGA: a parallel genetic algorithm for multiobjective optimization. In: Proceedings of the 10th Euromicro workshop on parallel, distributed and network-based processing, 9–11 January 2002. Canary Islands, Spain: IEEE Computer Society.
[50] Van Veldhuizen DA, Zydallis JB, Lamont GB. Considerations in engineering parallel multiobjective evolutionary algorithms. IEEE Trans Evol Comput 2003;7(2):144–73.
[51] Wilson LA, Moore MD, Picarazzi JP, Miquel SDS. Parallel genetic algorithm for search and constrained multi-objective optimization. In: Proceedings of the 18th international parallel and distributed processing symposium, 26–30 April 2004. Santa Fe, NM, USA: IEEE Computer Society; 2004.
[52] Xiong S, Li F. Parallel strength Pareto multiobjective evolutionary algorithm. In: Proceedings of the fourth international conference on parallel and distributed computing, applications and technologies, 27–29 August 2003. Chengdu, China: IEEE; 2003.
[53] Knowles JD, Corne DW. M-PAES: a memetic algorithm for multiobjective optimization. In: Proceedings of the 2000 congress on evolutionary computation, 16–19 July 2000. La Jolla, CA, USA: IEEE; 2000.
[54] Paquete L, Stutzle T. A two-phase local search for the biobjective traveling salesman problem. In: Evolutionary multi-criterion optimization. Proceedings of the second international conference, EMO 2003, 8–11 April 2003. Faro, Portugal: Springer; 2003.
[55] Deb K, Goel T. A hybrid multi-objective evolutionary approach to engineering shape design. In: Evolutionary multi-criterion optimization. Proceedings of the first international conference, EMO 2001, 7–9 March 2001. Zurich, Switzerland: Springer; 2001.
[56] Ishibuchi H, Yoshida T, Murata T. Balance between genetic search and local search in memetic algorithms for multiobjective permutation flowshop scheduling. IEEE Trans Evol Comput 2003;7(2):204–23.
[57] Tan KC, Lee TH, Khor EF. Evolutionary algorithms with dynamic population size and local exploration for multiobjective optimization. IEEE Trans Evol Comput 2001;5(6):565–88.
[58] Marseguerra M, Zio E, Podofillini L. Optimal reliability/availability of uncertain systems via multi-objective genetic algorithms. IEEE Trans Reliab 2004;53(3):424–34.
[59] Martorell S, Villanueva JF, Carlos S, Nebot Y, Sanchez A, Pitarch JL, et al. RAMS+C informed decision-making with application to multi-objective optimization of technical specifications and maintenance using genetic algorithms. Reliab Eng Syst Safety 2005;87(1):65–75.
[60] Martorell S, Sanchez A, Carlos S, Serradell V. Alternatives and challenges in optimizing industrial safety using genetic algorithms. Reliab Eng Syst Safety 2004;86(1):25–38.
[61] Sasaki M, Gen M. A method of fuzzy multi-objective nonlinear programming with GUB structure by hybrid genetic algorithm. Int J Smart Eng Syst Des 2003;5(4):281–8.
[62] Yang J-E, Hwang M-J, Sung T-Y, Jin Y. Application of genetic algorithm for reliability allocation in nuclear power plants. Reliab Eng Syst Safety 1999;65(3):229–38.
[63] Elegbede C, Adjallah K. Availability allocation to repairable systems with genetic algorithms: a multi-objective formulation. Reliab Eng Syst Safety 2003;82(3):319–30.
[64] Deb K, Jain P, Gupta NK, Maji HK. Multiobjective placement of electronic components using evolutionary algorithms. IEEE Trans Components Packaging Technol 2004;27(3):480–92.
[65] Kumar R, Parida PP, Gupta M. Topological design of communication networks using multiobjective genetic optimization. In: Proceedings of the 2002 world congress on computational intelligence—WCCI'02, 12–17 May 2002. Honolulu, HI, USA: IEEE; 2002.