Multiobjective Optimization Using Evolutionary Algorithms

Kalyanmoy Deb
Michigan State University
Abstract
As the name suggests, multi-objective optimization involves optimizing a number of objectives simultaneously. The problem becomes challenging when the objectives conflict with each other, that is, when the optimal solution of one objective function differs from that of another. Such problems, with or without constraints, give rise to a set of trade-off optimal solutions, popularly known as Pareto-optimal solutions. Due to this multiplicity of solutions, these problems were proposed to be solved using evolutionary algorithms, which use a population approach in their search procedure. Starting with parameterized procedures in the early nineties, so-called evolutionary multi-objective optimization (EMO) is now an established field of research and application, with many dedicated texts and edited books, commercial software and numerous freely downloadable codes, a biennial conference series running successfully since 2001, special sessions and workshops held at all major evolutionary computing conferences, and full-time researchers in universities and industries around the globe. In this chapter, we provide a brief introduction to EMO's operating principles and outline current research and application studies.
1 Introduction
In the past 15 years, evolutionary multi-objective optimization (EMO) has become a popular and useful field of research and application. Evolutionary optimization (EO) algorithms use a population-based approach in which more than one solution participates in each iteration and a new population of solutions evolves at every iteration. The reasons for their popularity are many: (i) EOs do not require any derivative information, (ii) EOs are relatively simple to implement, and (iii) EOs are flexible and have widespread applicability. For solving single-objective optimization problems, particularly in finding a single optimal solution, the use of a population of solutions may sound redundant, but in solving multi-objective optimization problems an EO procedure is a perfect choice [1]. Multi-objective optimization problems, by nature, give rise to a set of Pareto-optimal solutions, which need further processing to arrive at a single preferred solution. To achieve the first task, it is quite natural to use an EO, because the use of a population in each iteration helps an EO to simultaneously find multiple non-dominated solutions, which portray a trade-off among objectives, in a single simulation run.
In this chapter, we present a brief description of an evolutionary optimization procedure for single-objective optimization. Thereafter, we describe the principles of evolutionary multi-objective optimization. Then, we discuss some salient developments in EMO research. It is clear from these discussions that EMO is not only useful for solving multi-objective optimization problems, it also helps solve other kinds of optimization problems better than they are traditionally solved. As a by-product, EMO-based solutions can reveal important hidden knowledge about a problem, a matter which is difficult to achieve otherwise. EMO procedures with a decision-making concept are discussed as well. Some of these ideas require further detailed study, and this chapter mentions some such current and future topics in this direction.
of variables), so that on average one variable gets mutated per solution. In the context of real-parameter optimization, a simple Gaussian probability distribution with a predefined variance can be used, with its mean at the child variable value [1]. This operator allows an EO to search locally around a solution and is independent of the location of other solutions in the population.
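Such a Gaussian mutation can be sketched as follows; the default mutation probability 1/n reflects the one-variable-per-solution average mentioned above, while the variance and bounds are illustrative assumptions (the function name is ours):

```python
import random

def gaussian_mutation(child, sigma=0.1, p_m=None, bounds=(0.0, 1.0)):
    """Mutate each variable with probability p_m (default 1/n, so on
    average one variable per solution) by adding a zero-mean Gaussian
    perturbation centred at the child's current value."""
    n = len(child)
    p_m = p_m if p_m is not None else 1.0 / n
    lo, hi = bounds
    mutated = []
    for x in child:
        if random.random() < p_m:
            # clip the perturbed value back into the variable bounds
            x = min(hi, max(lo, x + random.gauss(0.0, sigma)))
        mutated.append(x)
    return mutated
```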
The elitism operator combines the old population with the newly created population and keeps the better solutions from the combined population. Such an operation makes sure that an algorithm has monotonically non-degrading performance. Rudolph [8] proved asymptotic convergence of a specific EO having elitism and mutation as two essential operators.
Finally, the user of an EO needs to choose termination criteria. Often, a predetermined number of generations is used as a termination criterion. For goal attainment problems, an EO can be terminated as soon as a solution with a predefined goal or a target solution is found. In many studies [2, 9, 10, 11], a termination criterion based on the statistics of the current population vis-à-vis those of the previous population is used to determine the rate of convergence. In other, more recent studies, theoretical optimality conditions (such as the extent of satisfaction of the Karush-Kuhn-Tucker (KKT) conditions) are used to determine the termination of a real-parameter EO algorithm [12]. Although EOs are heuristic-based, the use of such theoretical optimality concepts can also serve to test their ability to converge towards locally optimal solutions.
To demonstrate the working of the above-mentioned GA, we show four snapshots of a typical simulation run on the following constrained optimization problem:

A population of 10 points is used and the GA is run for 100 generations. The SBX recombination operator is used with a probability of pc = 0.9 and index ηc = 10. The polynomial mutation operator is used with a probability of pm = 0.5 and an index of ηm = 50. Figures 1 to 4 show the populations at generations 0, 5, 40 and 100, respectively. It can be observed that in only five generations, all 10 population members become feasible. Thereafter, the points come close to each other and creep towards the constrained minimum point.
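The SBX operator [5] used in this simulation creates two children symmetric about the parents' midpoint through a spread factor β controlled by the distribution index ηc; a minimal sketch (variable-bound handling omitted, names are ours):

```python
import random

def sbx_crossover(p1, p2, eta_c=10, p_c=0.9):
    """Simulated binary crossover (SBX): for each variable, draw a
    spread factor beta from a polynomial distribution shaped by eta_c,
    then create two children symmetric about the parents' midpoint."""
    if random.random() > p_c:          # no crossover: copy parents
        return list(p1), list(p2)
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
        c1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        c2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    return c1, c2
```

A larger ηc makes β concentrate near 1, so children stay close to their parents; note that the parents' per-variable sum is always preserved.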
The EA procedure is a population-based stochastic search procedure which iteratively emphasizes its better population members and uses them to recombine and perturb locally, in the hope of creating new and better populations, until a predefined termination criterion is met. The use of a population helps achieve implicit parallelism [2, 13, 14] in an EO's search mechanism (causing an inherent parallel search in different regions of the search space), a matter which makes an EO computationally attractive for solving difficult problems. In the context of certain Boolean functions, a computational time saving to find the optimum, varying polynomially with the population size, has been proven [15]. On the one hand, the EO procedure is flexible, allowing a user to choose suitable operators and problem-specific information to suit a specific problem. On the other hand, this flexibility puts an onus on the user to choose appropriate and tangible operators so as to create an efficient and consistent search [16]. However, the benefits of a flexible optimization procedure, over more rigid and specific optimization algorithms, provide hope in solving difficult real-world optimization problems involving non-differentiable objectives and constraints, non-linearities, discreteness, multiple optima, large problem sizes, uncertainties in the computation of objectives and constraints, uncertainties in decision variables, mixed types of variables, and others.
A wiser approach to solving real-world optimization problems would be to first understand the niche of both EO and classical methodologies and then adopt hybrid procedures employing the better of the two as the search progresses over varying degrees of search-space complexity from start to finish. As demonstrated in the above typical GA simulation, there are two phases in the search of a GA. First, the GA exhibits a more global search by maintaining a diverse population, thereby discovering potentially good regions of interest. Second, a more local search takes place as the population members are brought closer together. Although the above GA moves through both these search phases automatically without any external intervention, a more efficient search can be achieved if the later, local search phase is identified and executed with a more specialized local search algorithm.
Figure 1: Initial population. Figure 2: Population at generation 5.
3 Evolutionary Multi-objective Optimization (EMO)
A multi-objective optimization problem involves a number of objective functions which are to be either
minimized or maximized. As in a single-objective optimization problem, the multi-objective optimization
problem may contain a number of constraints which any feasible solution (including all optimal solutions)
must satisfy. Since objectives can be either minimized or maximized, we state the multi-objective opti-
mization problem in its general form:
Minimize/Maximize   fm(x),                 m = 1, 2, . . . , M;
subject to          gj(x) ≥ 0,             j = 1, 2, . . . , J;
                    hk(x) = 0,             k = 1, 2, . . . , K;      (2)
                    xi(L) ≤ xi ≤ xi(U),    i = 1, 2, . . . , n.
Definition 3.1 A solution x(1) is said to dominate the other solution x(2) , if both the following conditions
are true:
1. The solution x(1) is no worse than x(2) in all objectives. Thus, the solutions are compared based on
their objective function values (or location of the corresponding points (z(1) and z(2) ) on the objective
space).
2. The solution x(1) is strictly better than x(2) in at least one objective.
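Definition 3.1 translates directly into a pairwise check on objective vectors; a minimal sketch (the `minimize` flags indicating each objective's direction are our convention, not the chapter's notation):

```python
def dominates(z1, z2, minimize=(True, True)):
    """Return True if objective vector z1 dominates z2 (Definition 3.1):
    z1 is no worse than z2 in all objectives and strictly better in at
    least one.  Each entry of `minimize` says whether that objective is
    minimized (True) or maximized (False)."""
    no_worse = all(
        (a <= b) if mn else (a >= b)
        for a, b, mn in zip(z1, z2, minimize)
    )
    strictly_better = any(
        (a < b) if mn else (a > b)
        for a, b, mn in zip(z1, z2, minimize)
    )
    return no_worse and strictly_better
```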
For a given set of solutions (or corresponding points on the objective space, for example, those shown
in Figure 5(a)), a pair-wise comparison can be made using the above definition and whether one point
dominates the other can be established. All points which are not dominated by any other member of the
Figure 5: A set of points and the first non-domination front are shown.
set are called the non-dominated points of class one, or simply the non-dominated points. For the set of six points shown in the figure, they are points 3, 5, and 6. One property of any two such points is that a gain in one objective in moving from one point to the other happens only through a sacrifice in at least one other objective. This trade-off property between the non-dominated points makes practitioners interested in finding a wide variety of them before making a final choice. These points make up a front when viewed together on the objective space; hence the non-dominated points are often said to represent a non-domination front. The computational effort needed to select the points of the non-domination front from a set of N points is O(N log N) for 2 and 3 objectives, and O(N log^(M−2) N) for M > 3 objectives [18].
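Although such O(N log N) procedures exist, a straightforward O(MN²) scan suffices to illustrate how the first non-domination front is extracted (function names and the minimize/maximize flags are our conventions):

```python
def nondominated_front(points, minimize):
    """Return the indices of points not dominated by any other point:
    a simple O(M N^2) pairwise scan (faster divide-and-conquer schemes
    achieve O(N log N) for 2-3 objectives)."""
    def dom(z1, z2):
        # z1 dominates z2: no worse everywhere, strictly better somewhere
        no_worse = all((a <= b) if mn else (a >= b)
                       for a, b, mn in zip(z1, z2, minimize))
        better = any((a < b) if mn else (a > b)
                     for a, b, mn in zip(z1, z2, minimize))
        return no_worse and better
    return [i for i, p in enumerate(points)
            if not any(dom(q, p) for j, q in enumerate(points) if j != i)]
```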
With the above concept, it is now easier to define Pareto-optimal solutions in a multi-objective optimization problem. If the given set of points for the above task contains all points in the search space (assuming a countable number), the points lying on the non-domination front, by definition, are not dominated by any other point in the objective space and hence are Pareto-optimal points (together they constitute the Pareto-optimal front); the corresponding pre-images (decision variable vectors) are called Pareto-optimal solutions. However, more mathematically elegant definitions of Pareto-optimality (including ones for continuous search-space problems) exist in the multi-objective literature [17, 19].
Figure 6: Schematic of a two-step multi-objective optimization procedure: an ideal multi-objective optimizer takes the problem (minimize f1, minimize f2, . . ., minimize fM, subject to constraints) and, in Step 1, finds multiple trade-off solutions; in Step 2, one solution is chosen using higher-level information.
continuously improve their solutions. To this effect, a recent simulation study [12] has demonstrated that a particular EMO procedure, starting from random non-optimal solutions, can progress towards theoretical Karush-Kuhn-Tucker (KKT) points with iterations in real-valued multi-objective optimization problems. The main difference and advantage of using an EMO compared to a posteriori MCDM procedures is that multiple trade-off solutions can be found in a single simulation run, whereas most a posteriori MCDM methodologies would require multiple applications.
In Step 1 of the EMO-based multi-objective optimization (the task shown vertically downwards in Figure 6), multiple trade-off, non-dominated points are found. Thereafter, in Step 2 (the task shown horizontally, towards the right), higher-level information is used to choose one of the obtained trade-off points. This dual task allows an interesting feature when applied to solving single-objective optimization problems. It is easy to realize that single-objective optimization is a degenerate case of multi-objective optimization, as shown in detail in another study [20]. In the case of single-objective optimization with only one globally optimal solution, Step 1 will ideally find only one solution, so there is no need to proceed to Step 2. However, in the case of single-objective optimization with multiple global optima, both steps are necessary: first to find all or many global optima, and then to choose one of them using higher-level information about the problem. Thus, although it seems tailored to multi-objective optimization, the framework suggested in Figure 6 can be thought of as a generic principle for both single- and multi-objective optimization.
Figure 7: Difficulties an EMO may face: starting from the initial points, the search must negotiate local fronts and infeasible regions of the objective space (f1, f2) before reaching the Pareto-optimal front.
This requires an algorithm to strike a good balance between the extents of these tasks that its search operators must perform, in order to overcome the above-mentioned difficulties reliably and quickly. When multiple simulations are to be performed to find a set of Pareto-optimal solutions, this balancing act must be performed in every single simulation. Since the simulations are performed independently, no information about the success or failure of previous simulations is used to speed up the process. In difficult multi-objective optimization problems, such memory-less a posteriori methods may demand a large overall computational overhead to obtain a set of Pareto-optimal solutions. Moreover, even though convergence can be achieved in some problems, independent simulations can never guarantee a good distribution among the obtained points.
EMO, as mentioned earlier, constitutes an inherently parallel search. When a population member overcomes certain difficulties and makes progress towards the Pareto-optimal front, its variable values and their combination reflect this fact. When a recombination takes place between this solution and other population members, such valuable information about variable-value combinations gets shared through variable exchanges and blending, thereby making the overall task of finding multiple trade-off solutions one that is processed in parallel.
At any generation t, the offspring population (say, Qt) is first created by using the parent population (say, Pt) and the usual genetic operators. Thereafter, the two populations are combined to form a new population (say, Rt) of size 2N. Then, the population Rt is classified into different non-domination classes. Thereafter, the new population is filled with points from the different non-domination fronts, one front at a time. The filling starts with the first non-domination front (of class one) and continues with points of the second non-domination front, and so on. Since the overall population size of Rt is 2N, not all fronts can be accommodated in the N slots available in the new population. All fronts which cannot be accommodated are deleted. When the last allowed front is being considered, there may exist more points in the front than remaining slots in the new population. This scenario is illustrated in Figure 8. Instead of arbitrarily discarding some members from the last front, the points which will make the diversity of the selected points the highest are chosen.
The crowded sorting of the points of the last front, which cannot be accommodated fully, is achieved by sorting them in descending order of their crowding-distance values; points from the top of the ordered list are chosen. The crowding distance di of point i is a measure of the objective space around i which is not occupied by any other solution in the population. Here, we simply calculate this quantity di by estimating the perimeter of the cuboid (Figure 9) formed by using the nearest neighbors in the objective space as the vertices (we call this the crowding distance).

Figure 8: Schematic of the NSGA-II procedure: Pt and Qt are combined into Rt; non-dominated sorting yields fronts F1, F2, F3, . . .; the new population Pt+1 is filled front by front, crowding-distance sorting is applied to the last admitted front, and the remainder is rejected.

Figure 9: The crowding distance calculation.
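The crowding-distance estimate described above can be sketched as follows; this is a minimal illustration (the function name and the normalization by the front's own extreme values are our choices, not necessarily the exact published implementation):

```python
def crowding_distance(front):
    """Crowding distance of each point in a non-domination front:
    for every objective, sort the front and accumulate the normalized
    gap between each point's two neighbours; boundary points receive
    an infinite distance so they are always retained."""
    n = len(front)
    dist = [0.0] * n
    num_obj = len(front[0])
    for k in range(num_obj):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue  # degenerate objective: no spread to measure
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (front[order[pos + 1]][k]
                        - front[order[pos - 1]][k]) / (hi - lo)
    return dist
```

Sorting the last front by these values in descending order and taking points from the top then fills the remaining slots of the new population.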
Next, we show snapshots of a typical NSGA-II simulation on a two-objective test problem:
          Minimize f1(x) = x1,
ZDT2:     Minimize f2(x) = g(x) [1 − √(f1(x)/g(x))],
          where g(x) = 1 + (9/29) Σ_{i=2}^{30} xi,          (3)
          0 ≤ x1 ≤ 1,
          −1 ≤ xi ≤ 1,   i = 2, 3, . . . , 30.
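The two objectives of problem (3) can be evaluated directly; the sketch below assumes the coefficient 9/29 of the standard 30-variable ZDT construction (the function name is ours):

```python
import math

def evaluate_zdt(x):
    """Evaluate the two objectives of problem (3) for a 30-variable
    decision vector x, following the g-function construction of the
    ZDT test suite."""
    f1 = x[0]
    g = 1.0 + (9.0 / 29.0) * sum(x[1:])   # g = 1 on the Pareto front
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```

On the Pareto-optimal front (xi = 0 for i ≥ 2, hence g = 1), this gives f2 = 1 − √f1, which is the front the NSGA-II population approaches in Figures 10-13.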
NSGA-II is run with a population size of 100 for 100 generations. The variables are treated as real numbers, and an SBX recombination operator with pc = 0.9 and distribution index ηc = 10 and a polynomial mutation operator [1] with pm = 1/n (n is the number of variables) and distribution index ηm = 20 are used. Figure 10 shows the initial population on the objective space. Figures 11, 12, and 13 show populations at generations 10, 30 and 100, respectively. The figures illustrate how the operators of NSGA-II cause the population to move towards the Pareto-optimal front over the generations. At generation 100, the population comes very close to the true Pareto-optimal front.
4 Applications of EMO
Since the early development of EMO algorithms in 1993, they have been applied to many real-world and
interesting optimization problems. Descriptions of some of these studies can be found in books [1, 22, 23],
dedicated conference proceedings [24, 25, 26, 27], and domain-specific books, journals and proceedings.
In this section, we describe one case study which clearly demonstrates the EMO philosophy which we
described in Section 3.1.
Figure 10: Initial population. Figure 11: Population at generation 10.
Figure 12: Population at generation 30. Figure 13: Population at generation 100.
For the Earth-Mars rendezvous mission, the study [28] found interesting trade-off solutions. Using a population of size 150, the NSGA was run for 30 generations. The obtained non-dominated solutions are shown in Figure 14 for two of the three objectives, and some selected solutions are shown in Figure 15. It is
Figure 14: Obtained non-dominated solutions, showing mass delivered to target (kg) against transfer time (yrs); selected solutions, including 22, 36, 44, 72, 73 and 132, are marked.
clear that there exist short-time flights with smaller delivered payloads (solution marked 44) and long-time flights with larger delivered payloads (solution marked 36). Solution 44 can deliver a mass of 685.28 kg and requires about 1.12 years. On the other hand, an intermediate solution, 72, can deliver almost 862 kg with a travel time of about 3 years. In these figures, each continuous part of a trajectory represents a thrusting arc and each dashed part represents a coasting arc. It is interesting to note that only a small improvement in delivered mass occurs in the solutions between 73 and 72, at a sacrifice of about a year in flight time.
The multiplicity of trade-off solutions depicted in Figure 15 is what we envisaged discovering in a multi-objective optimization problem by using an a posteriori procedure, such as an EMO algorithm. This aspect was also discussed with Figure 6. Once such a set of solutions with a good trade-off among objectives is obtained, one can analyze them to choose a particular solution. For example, in this problem context, it makes sense not to choose a solution between points 73 and 72, due to the poor trade-off between the objectives in this range. On the other hand, choosing a solution between points 44 and 73 is worthwhile, but which particular solution to choose depends on other mission-related issues. By first finding a wide range of possible solutions and revealing the shape of the front, EMO can help narrow down the choices and allow a decision maker to make a better decision. Without knowledge of such a wide variety of trade-off solutions, proper decision-making may be a difficult task. Although one can choose a scalarized objective (such as the ε-constraint method with a particular ε vector) and find the resulting optimal solution, the decision-maker will always wonder what solution would have been derived had a different ε vector been chosen. For example, if ε1 = 2.5 years is chosen and the mass delivered to the target is maximized, a solution between points 73 and 72 will be found. As discussed earlier, this part of the Pareto-optimal front does not provide the best trade-offs between objectives that this problem can offer. A lack of knowledge of good trade-off regions before a decision is made may cause the decision maker to settle for a solution which, although optimal, may not be a good compromise solution. The EMO procedure is a flexible and pragmatic procedure for finding a well-diversified set of solutions simultaneously, so as to enable picking a particular region for further analysis or a particular solution for implementation.
Figure 15: Four trade-off trajectories.
Definition 5.1 A solution x(i) is said to 'constrained-dominate' a solution x(j) (or x(i) ≼c x(j)), if any of the following conditions is true:

1. Solution x(i) is feasible and solution x(j) is not.

2. Solutions x(i) and x(j) are both infeasible, but solution x(i) has the smaller constraint violation, which can be computed by adding the normalized violations of all constraints:

   CV(x) = Σ_{j=1}^{J} ⟨ḡj(x)⟩ + Σ_{k=1}^{K} |h̄k(x)|,

   where ⟨α⟩ is −α if α < 0, and zero otherwise. The normalization is achieved with the population-minimum (⟨gj⟩min) and population-maximum (⟨gj⟩max) constraint violations: ḡj(x) = (⟨gj(x)⟩ − ⟨gj⟩min)/(⟨gj⟩max − ⟨gj⟩min).

3. Solutions x(i) and x(j) are feasible and solution x(i) dominates solution x(j) in the usual sense (Definition 3.1).
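The aggregate violation CV(x) used by the constrained-domination check can be sketched as follows; a minimal illustration, assuming the population-wide minimum and maximum bracket values are supplied by the caller (function and parameter names are ours):

```python
def constraint_violation(g_values, h_values, g_min, g_max):
    """Aggregate normalized constraint violation CV(x): the bracket
    operator <a> = -a if a < 0 else 0 is applied to each inequality
    constraint g_j(x) >= 0, normalized by the population min/max
    bracket values, and |h_k| is added for each equality constraint."""
    def bracket(a):
        return -a if a < 0 else 0.0
    cv = 0.0
    for g, lo, hi in zip(g_values, g_min, g_max):
        # guard against a degenerate population (all equal violations)
        cv += (bracket(g) - lo) / (hi - lo) if hi > lo else bracket(g)
    cv += sum(abs(h) for h in h_values)
    return cv
```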
The above change in the definition requires a minimal change in the NSGA-II procedure described earlier.
Figure 16 shows the non-domination fronts on a six-membered population due to the introduction of
two constraints (the minimization problem is described as CONSTR elsewhere [1]). In the absence of the
constraints, the non-domination fronts (shown by dashed lines) would have been ((1,3,5), (2,6), (4)),
but in their presence, the new fronts are ((4,5), (6), (2), (1), (3)). The first non-domination front
Figure 16: Non-domination fronts of the six points in the presence of the two constraints, shown on the (f1, f2) objective space; the feasible points 4 and 5 constitute Front 1.
consists of the “best” (that is, non-dominated and feasible) points from the population and any feasible
point lies on a better non-domination front than an infeasible point.
performance comparisons, thereby making them difficult to use in practice. In addition, the unary and binary attainment indicators of [34, 35] are of great importance. Figures 17 and 18 illustrate the hypervolume and attainment indicators. The attainment surface is useful for determining a representative front obtained from
Figures 17 and 18: Illustration of the hypervolume indicator (the region of the objective space dominated by points A to E, bounded by the search boundary W) and of the attainment indicator (25%, 50% and 75% attainment surfaces summarizing the frequency distribution of fronts A and B along cross-lines, relative to the Pareto-optimal front).
multiple runs of an EMO algorithm. In general, the 50% attainment surface can be used to indicate the front that is dominated by 50% of all obtained non-dominated points.
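For two minimization objectives, the hypervolume of Figure 17 reduces to an area computation with respect to a reference point; a minimal sweep-based sketch (the implementation choice and names are ours):

```python
def hypervolume_2d(front, ref):
    """Hypervolume (area) dominated by a two-objective minimization
    front relative to a reference point ref = (r1, r2): sweep the
    points in increasing f1 and accumulate the rectangle each
    non-dominated point adds below the previous best f2."""
    pts = sorted(front)              # increasing f1
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:             # point contributes a new rectangle
            area += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area
```

A larger hypervolume indicates a front that is both closer to the Pareto-optimal front and better spread, which is why the indicator is widely used for performance comparison.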
8 Multiobjectivization
Interestingly, the act of finding multiple trade-off solutions using an EMO procedure has found application outside the realm of solving multi-objective optimization problems per se. The concept of finding multiple trade-off solutions using an EMO procedure is applied to solve other kinds of optimization problems that are otherwise not multi-objective in nature. For example, the EMO concept is used to solve constrained single-objective optimization problems by converting the task into a two-objective optimization task of additionally minimizing an aggregate constraint violation [40]. This eliminates the need to specify a penalty parameter when using a penalty-based constraint handling procedure. A recent study [41] uses a bi-objective NSGA-II to find a Pareto-optimal frontier corresponding to the minimization of the objective function and of the constraint violation. The frontier is then used to estimate an appropriate penalty parameter, which in turn is used to formulate a penalty-based local-search problem that is solved using a classical optimization method. The approach is shown to require one to two orders of magnitude fewer function evaluations than existing constraint handling methods on a number of standard test problems.
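The two-objective reformulation described above can be illustrated schematically; the sketch below uses an unnormalized bracket-operator violation and hypothetical names, not the exact formulation of [40] or [41]:

```python
def biobjective_form(f, constraints):
    """Recast a constrained single-objective problem as two objectives:
    minimize f(x) and minimize the aggregate constraint violation CV(x),
    here the plain sum of bracket-operator violations of constraints
    expressed in the g(x) >= 0 form."""
    def cv(x):
        return sum(max(0.0, -g(x)) for g in constraints)
    return lambda x: (f(x), cv(x))
```

Feasible solutions have CV = 0, so the feasible end of the resulting bi-objective frontier contains the constrained optimum, with no penalty parameter required.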
A well-known difficulty in genetic programming studies, called 'bloating', arises from the continual increase in the size of genetic programs with iterations. Reducing bloating by minimizing the size of programs as an additional objective has helped find high-performing solutions with smaller code sizes [42]. Simultaneously minimizing the intra-cluster distance and maximizing the inter-cluster distance in a bi-objective formulation of a clustering problem is found to yield better solutions than the usual single-objective minimization of the ratio of intra-cluster to inter-cluster distance [43]. A recent edited book [44] describes many such interesting applications in which EMO methodologies have helped solve problems which are otherwise (or traditionally) not treated as multi-objective optimization problems.
Figure: Trade-off solutions of a motor design problem, plotting cost ($) and number of laminations against peak torque (N-m). The annotated common properties among the solutions include all Y-type electrical connections, all Y-type laminations, 18 turns per coil and gauge 16 wire.
be combined with mathematical optimization techniques having local convergence properties. A simple-minded approach would be to start the optimization task with an EMO, after which the obtained solutions can be improved by a local search technique optimizing a composite objective derived from the multiple objectives so as to ensure a good spread. Another approach would be to use a local search technique as a mutation-like operator in an EMO, so that all population members are guaranteed to be at least locally optimal solutions. A study [48] has demonstrated that the latter is overall the better approach from a computational point of view.
However, the use of a local search technique within an EMO has another advantage. Since a local search can find a weak or a near Pareto-optimal point, the presence of such a super-individual in a population can cause other near Pareto-optimal solutions to be found as an outcome of recombination of the super-individual with other population members. A recent study has demonstrated this aspect [49].
10 Practical EMOs
Here, we describe some recent advances of EMO in which different practicalities are considered.
makes an EMO procedure slow and computationally less attractive. Practically speaking, however, even if an algorithm can find tens of thousands of Pareto-optimal solutions for a multi-objective optimization problem, beyond simply giving an idea of the nature and shape of the front, they are simply too many to be useful for any decision-making purpose. Keeping these views in mind, EMO researchers have taken two different approaches to dealing with problems having a large number of objectives.
the Pareto-optimal front. Another EMO study used a fuzzy dominance [58] relation (instead of Pareto-
dominance), in which superiority of one solution over another in any objective is defined in a fuzzy manner.
Many other such definitions are possible and can be implemented based on the problem context.
a robust frontier which may be different from the globally Pareto-optimal front. Each and every point on
the robust frontier is then guaranteed to be less sensitive to uncertainties in decision variables and problem
parameters. Some such studies in EMO are [62, 63].
When the evaluation of constraints under uncertainties in decision variables and problem parameters is considered, deterministic constraints become stochastic (also known as 'chance constraints') and involve a reliability index R. A constraint g(x) ≥ 0 then becomes Prob(g(x) ≥ 0) ≥ R. In order to compute the left side of the above chance constraint, a separate optimization methodology [64] is needed, thereby making the overall algorithm a bi-level optimization procedure. Approximate single-loop algorithms exist [65], and recently one such methodology has been integrated with an EMO [61] and shown to find a 'reliable' frontier corresponding to a specified reliability index, instead of the Pareto-optimal frontier, in problems having uncertainty in decision variables and problem parameters. More such methodologies are needed, as uncertainty is an integral part of practical problem-solving, and multi-objective optimization researchers must look for better and faster algorithms to handle it.
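The probability on the left side of the chance constraint can, in principle, be estimated by sampling; the following is only an illustrative Monte-Carlo sketch with Gaussian noise on the decision variables (a practical method would use the reliability-index techniques cited above rather than raw sampling; names are ours):

```python
import random

def chance_constraint_satisfied(g, x, sigma, R, samples=2000, seed=1):
    """Monte-Carlo check of the chance constraint Prob(g(x) >= 0) >= R
    under independent Gaussian perturbations of the decision variables.
    Illustrative only: sampling inside every constraint evaluation is
    exactly the cost that single-loop reliability methods avoid."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        xp = [xi + rng.gauss(0.0, sigma) for xi in x]
        if g(xp) >= 0:
            hits += 1
    return hits / samples >= R
```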
11 Conclusions
This chapter has introduced the fast-growing field of multi-objective optimization based on evolutionary
algorithms. First, the principles of single-objective evolutionary optimization (EO) techniques have been
discussed so that readers can visualize the differences between evolutionary optimization and classical
optimization methods. The EMO principle of handling multi-objective optimization problems is to find a
representative set of Pareto-optimal solutions. Since an EO uses a population of solutions in each iteration,
EO procedures are potentially viable techniques to capture a number of trade-off near-optimal solutions in a
single simulation run. This chapter has described a number of popular EMO methodologies, presented some
simulation studies on test problems, and discussed how EMO principles can be useful in solving real-world
multi-objective optimization problems through a case study of spacecraft trajectory optimization.
Finally, this chapter has discussed the potential of EMO and its current research activities. The principle of EMO has been utilized to solve other optimization problems that are otherwise not multi-objective in nature. The diverse set of EMO solutions has been analyzed to find hidden common properties that can act as valuable knowledge for a user. EMO procedures have been extended to enable them to handle various
practicalities. Lastly, the EMO task is now being suitably combined with decision-making activities in order to make the overall approach more useful in practice.
EMO addresses an important and inevitable fact of problem-solving tasks. EMO has enjoyed a steady
rise of popularity in a short time. EMO methodologies are being extended to address practicalities. In
the area of evolutionary computing and optimization, EMO research and application currently stands as
one of the fastest growing fields. EMO methodologies are still to be applied to many areas of science and
engineering. With such applications, the true value and importance of EMO will become evident.
Acknowledgments
This chapter contains some excerpts from previous publications by the same author: ‘Introduction
to Evolutionary Multi-Objective Optimization’, in J. Branke, K. Deb, K. Miettinen and R. Slowinski
(Eds.), Multiobjective Optimization: Interactive and Evolutionary Approaches (LNCS 5252), Berlin:
Springer-Verlag, 2008, pp. 59–96; and ‘Recent Developments in Evolutionary Multi-Objective Optimization’, in
M. Ehrgott et al. (Eds.), Trends in Multiple Criteria Decision Analysis, Berlin: Springer-Verlag, 2010,
pp. 339–368. This paper will appear as a chapter in a Springer book entitled ‘Multi-objective Evolutionary
Optimisation for Product Design and Manufacturing’, edited by Lihui Wang, Amos Ng, and Kalyanmoy
Deb, in 2011.