Multi-objective Optimisation Using Evolutionary Algorithms
Kalyanmoy Deb
K. Deb (✉)
Department of Mechanical Engineering, Indian Institute of Technology,
Kanpur, Uttar Pradesh 208016, India
e-mail: [email protected]
URL: http://www.iitk.ac.in/kangal/deb.htm
1.1 Introduction
In the past 15 years, evolutionary multi-objective optimisation (EMO) has become a popular
and useful field of research and application. Evolutionary optimisation (EO) algorithms use
a population-based approach in which more than one solution participates in an iteration and
evolves into a new population of solutions at every iteration. The reasons for their popularity are
many: (i) EOs do not require any derivative information, (ii) EOs are relatively
simple to implement, and (iii) EOs are flexible and have wide-spread applicability.
For solving single-objective optimisation problems, particularly in finding a single
optimal solution, the use of a population of solutions may sound redundant, but in
solving multi-objective optimisation problems an EO procedure is a perfect choice [1].
Multi-objective optimisation problems, by their very nature, give rise to a set of
Pareto-optimal solutions, which need further processing to arrive at a single preferred
solution. For the first task, it is quite a natural proposition to use an EO, because the
use of a population in an iteration helps an EO to simultaneously find multiple
non-dominated solutions, which portray a trade-off among the objectives, in a single
simulation run.
In this chapter, we present a brief description of an evolutionary optimisation
procedure for single-objective optimisation. Thereafter, we describe the principles
of EMO. Then, we discuss some salient developments in EMO research. It is clear
from these discussions that EMO is not only useful for solving multi-objective
optimisation problems, but is also helping to solve other kinds of optimisation
problems more efficiently than they are traditionally solved. As a by-product,
EMO-based solutions help to elicit valuable insights about a problem, insights
which are difficult to obtain otherwise. EMO procedures with a
decision making concept are discussed as well. Some of these ideas require further
detailed studies and this chapter mentions some such topics for current and future
research in this direction.
Each child solution, created by the crossover operator, is then perturbed in its
vicinity by a mutation operator [2]. Every variable is mutated with a mutation
probability pm, usually set to 1/n (where n is the number of variables), so that on
average one variable gets mutated per solution. In the context of real-parameter
optimisation, a simple Gaussian probability distribution with a predefined variance
can be used, with its mean at the child variable value [1]. This operator allows an
EO to search locally around a solution and is independent of the location of other
solutions in the population.
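To make the operator concrete, the following minimal Python sketch mutates each variable of a real-parameter child solution with probability pm = 1/n, using a Gaussian perturbation centred at the current value (the function name and the value of sigma are illustrative and not taken from this chapter):

```python
import random

def gaussian_mutation(child, sigma=0.1, p_m=None):
    """Mutate each variable with probability p_m (default 1/n) by adding a
    zero-mean Gaussian perturbation of standard deviation sigma."""
    n = len(child)
    if p_m is None:
        p_m = 1.0 / n   # on average, one variable is mutated per solution
    return [x + random.gauss(0.0, sigma) if random.random() < p_m else x
            for x in child]

# Example: mutate a five-variable solution
print(gaussian_mutation([0.2, 1.5, -0.7, 3.1, 0.0]))
```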
The elitism operator combines the old population with the newly created
population and chooses to keep better solutions from the combined population.
Such an operation makes sure that the algorithm has a monotonically non-degrading
performance. Rudolph [8] proved asymptotic convergence for a specific EO having
elitism and mutation as two essential operators.
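A minimal sketch of such an elitist survivor selection, assuming a single minimised objective f and populations stored as lists of solutions (the names are illustrative):

```python
def elitist_survival(parents, offspring, f, N):
    """Combine the old and new populations and keep the N best solutions
    (minimisation), so the best solution found so far is never lost."""
    combined = parents + offspring
    combined.sort(key=f)          # better (smaller f) solutions come first
    return combined[:N]

# Example with a simple quadratic objective
f = lambda x: (x - 2.0) ** 2
print(elitist_survival([0.0, 5.0], [1.8, 3.0], f, N=2))   # keeps 1.8 and 3.0
```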
Finally, the user of an EO needs to choose termination criteria. Often, a pre-
determined number of generations is used as a termination criterion. For goal
attainment problems, an EO can be terminated as soon as a solution with a pre-
defined goal or a target solution is found. In many studies [2, 9–11], a termination
criterion based on the statistics of the current population vis-à-vis those of the
previous population is used to determine the rate of convergence. In other, more
recent studies, theoretical optimality conditions (such as the extent of satisfaction
of the Karush–Kuhn–Tucker (KKT) conditions) are used to determine the termination
of a real-parameter EO algorithm [12]. Although EOs are heuristic-based, such
theoretical optimality concepts can also be used to test their ability to converge
towards local optimal solutions.
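As one hedged illustration of a population-statistics-based stopping rule (not the specific criteria of [2, 9–11] or [12]), the run may be stopped when the improvement in the best objective value between consecutive generations falls below a tolerance; in practice such a test is usually required to hold over several successive generations:

```python
def should_terminate(prev_best, curr_best, tol=1e-6):
    """Stop when the relative improvement of the best objective value
    (minimisation) between two consecutive generations is below tol."""
    improvement = prev_best - curr_best
    if abs(prev_best) > 0.0:
        improvement /= abs(prev_best)
    return improvement < tol

print(should_terminate(10.0, 9.9999999))   # True: negligible progress
print(should_terminate(10.0, 8.0))         # False: still improving
```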
To demonstrate the working of the above-mentioned GA, we show four
snapshots of a typical simulation run on the following constrained optimisation
problem:
Ten points are used and the GA is run for 100 generations. The SBX recombination
operator is used with a probability of pc = 0.9 and index ηc = 10. The polynomial
mutation operator is used with a probability of pm = 0.5 and an index of ηm = 50.
Figures 1.1, 1.2, 1.3 and 1.4 show the populations at generations 0, 5, 40 and 100,
respectively. It can be observed that in only five generations, all 10
population members become feasible. Thereafter, the points come close to each
other and creep towards the constrained minimum point.
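The SBX operator used above can be sketched as follows for real-valued variables (a simplified, unbounded version of the standard formulation; the implementation used for the figures additionally handles variable bounds and per-variable recombination probabilities):

```python
import random

def sbx_pair(p1, p2, eta_c=10.0):
    """Simulated binary crossover for one pair of real variables.
    A larger distribution index eta_c produces children closer to the parents."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def sbx(parent1, parent2, p_c=0.9, eta_c=10.0):
    """Recombine two parent vectors with probability p_c; otherwise copy them."""
    if random.random() > p_c:
        return list(parent1), list(parent2)
    pairs = [sbx_pair(x1, x2, eta_c) for x1, x2 in zip(parent1, parent2)]
    return [c1 for c1, _ in pairs], [c2 for _, c2 in pairs]

print(sbx([1.0, 2.0, 3.0], [2.0, 1.0, 4.0]))
```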
The EA procedure is a population-based stochastic search procedure which
iteratively emphasises its better population members, uses them to recombine and
perturb locally in the hope of creating new and better populations until a prede-
fined termination criterion is met. The use of a population helps to achieve an
implicit parallelism [2, 13, 14] in an EO’s search mechanism (causing an inherent
parallel search in different regions of the search space), a process which makes an
EO computationally attractive for solving difficult problems. In the context of
certain Boolean functions, a computational time saving in finding the optimum,
varying polynomially with the population size, has been proven [15]. On the one hand, the EO
procedure is flexible, thereby allowing a user to choose suitable operators and
problem-specific information to suit a specific problem. On the other hand, the
flexibility comes with the onus on the part of a user to choose appropriate and
tangible operators so as to create an efficient and consistent search [16]. However,
the benefits of having a flexible optimisation procedure, compared with more rigid and
specific optimisation algorithms, provide feasibility in solving difficult real-world
optimisation problems involving non-differentiable objectives and constraints,
later local search phase can be identified and executed with a more specialized
local search algorithm.
Fig. 1.5 A set of points and the first non-domination front are shown
above definition and whether one point dominates the other can be established. All
points which are not dominated by any other member of the set are called the non-
dominated points of class one, or simply the non-dominated points. For the set of
six solutions shown in the figure, they are points 3, 5, and 6. One property of any
two such points is that a gain in an objective from one point to the other happens
only because of a sacrifice in at least one other objective. This trade-off property
between the non-dominated points makes the practitioners interested in finding a
wide variety of them before making a final choice. These points make up a front
when they are viewed together on the objective space; hence the non-dominated
points are often visualized to represent a non-domination front. The computational
effort needed to select the points of the non-domination front from a set of N points
is O(N log N) for two and three objectives, and O(N (log N)^(M-2)) for M > 3
objectives [18].
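For illustration, the first non-domination front can be extracted with a straightforward O(MN^2) pairwise check (rather than the efficient algorithm of [18]); all objectives are assumed to be minimised and the names are illustrative:

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimised):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """Return the indices of the non-dominated points of class one."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

pts = [(2, 9), (4, 6), (1, 7), (6, 3), (3, 4), (8, 1)]
print(first_front(pts))   # indices of the first non-domination front
```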
With the above concept, now it is easier to define the Pareto-optimal solutions
in a multi-objective optimisation problem. If the given set of points for the above
task contains all points in the search space (assuming a countable number of them), the
points lying on the non-domination front, by definition, do not get dominated by
any other point in the objective space, hence are Pareto-optimal points (together
they constitute the Pareto-optimal front) and the corresponding pre-images
(decision variable vectors) are called Pareto-optimal solutions. However, more
mathematically elegant definitions of Pareto-optimality (including the ones for
continuous search space problems) exist in the multi-objective literature [17, 19].
solutions. We discuss a little later some EMO algorithms, describing how such a dual
emphasis is provided, but now discuss qualitatively the difference between a
posteriori MCDM approaches and EMO approaches.
Consider Fig. 1.7, in which we sketch how multiple independent parametric
single-objective optimisations may find different Pareto-optimal solutions. The
Pareto-optimal front corresponds to global optimal solutions of several scalarised
objectives. However, during the course of an optimisation task, algorithms must
overcome a number of difficulties, such as infeasible regions, local optimal
solutions, flat regions of objective functions, isolation of optimum, etc., to con-
verge to the global optimal solution. Moreover, because of practical limitations, an
optimisation task must also be completed in a reasonable computational time. This
requires an algorithm to strike a good balance in the extent to which its search
operators must work to overcome the above-mentioned difficulties reliably and
quickly. When multiple simulations are to be performed to find a set of Pareto-optimal
solutions, this balancing act must be performed in every single simulation. Since the
simulations are performed independently, no information about
the success or failure of previous simulations is used to speed up the process. In
difficult multi-objective optimisation problems, such memory-less a posteriori
methods may demand a large overall computational overhead to get a set of
Pareto-optimal solutions. Moreover, even though the convergence can be achieved
in some problems, independent simulations can never guarantee finding a good
distribution among obtained points.
EMO, as mentioned earlier, constitutes an inherently parallel search. When a
population member overcomes certain difficulties and makes progress towards the
Pareto-optimal front, its variable values and their combination reflect this fact.
When recombination takes place between this solution and other population
members, such valuable information about variable-value combinations gets shared
through variable exchanges and blending, thereby making the overall task of
finding multiple trade-off solutions an implicitly parallel task.
The NSGA-II procedure [21] is one of the popularly used EMO procedures which
attempt to find multiple Pareto-optimal solutions in a multi-objective optimisation
problem and has the following three features:
1. it uses an elitist principle,
2. it uses an explicit diversity preserving mechanism, and
3. it emphasises non-dominated solutions.
At any generation t, the offspring population (say, Qt) is first created by using
the parent population (say, Pt) and the usual genetic operators. Thereafter, the two
populations are combined to form a new population (say, Rt) of size 2N.
Then, the population Rt is classified into different non-domination classes. There-
after, the new population is filled by points of different non-domination fronts, one
at a time. The filling starts with the first non-domination front (of class one),
continues with points of the second non-domination front, and so on. Since the
overall population size of Rt is 2N, not all fronts can be accommodated in the N slots
available for the new population. All fronts which could not be accommodated are
deleted. When the last allowed front is being considered, there may exist more
points in the front than the remaining slots in the new population. This scenario is
illustrated in Fig. 1.8. Instead of arbitrarily discarding some members from the last
front, the points which will make the diversity of the selected points the highest are
chosen.
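The classification of Rt into non-domination classes can be sketched by repeatedly 'peeling off' the current non-dominated set (a simple illustration, not the faster book-keeping procedure used in NSGA-II [21]); how the last front is truncated is sketched after the next paragraph:

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Return a list of fronts, each a list of indices into `points`, obtained
    by repeatedly removing the current non-dominated set."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(non_dominated_sort(pts))   # [[0, 1, 2], [3], [4]]
```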
The crowded sorting of the points of the last front which could not be accommodated
fully is performed in descending order of their crowding distance values, and points
from the top of the ordered list are chosen. The crowding
distance di of point i is a measure of the objective space around i which is not
occupied by any other solution in the population. Here, we simply calculate this
quantity di by estimating the perimeter of the cuboid (Fig. 1.9) formed by using
the nearest neighbors in the objective space as the vertices (we call this the
crowding distance).
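A sketch of the crowding-distance computation and of the selection of the most widely spread members of the last front (objective-wise side lengths of the neighbouring cuboid are normalised, and boundary solutions of each objective receive an infinite distance so that they are always retained, as is common in NSGA-II implementations):

```python
def crowding_distance(front):
    """front: list of objective vectors of the solutions in one front.
    Returns one crowding-distance value per solution, estimated from the
    normalised side lengths of the cuboid formed by the nearest neighbours."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):                                    # one objective at a time
        order = sorted(range(n), key=lambda i: front[i][k])
        f_min, f_max = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')   # boundary solutions
        if f_max == f_min:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (front[order[pos + 1]][k]
                        - front[order[pos - 1]][k]) / (f_max - f_min)
    return dist

def pick_from_last_front(front, slots):
    """Keep the `slots` members with the largest crowding distance."""
    d = crowding_distance(front)
    ranked = sorted(range(len(front)), key=lambda i: d[i], reverse=True)
    return ranked[:slots]

front = [(1.0, 5.0), (2.0, 3.0), (3.0, 2.5), (4.0, 1.0)]
print(pick_from_last_front(front, 3))   # boundary points are retained first
```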
Since the early development of EMO algorithms in 1993, they have been applied
to many real-world and interesting optimisation problems. Descriptions of some of
these studies can be found in books [1, 22, 23], dedicated conference proceedings
[24–27], and domain-specific books, journals and proceedings. In this section, we
describe one case study which clearly demonstrates the EMO philosophy which
we described in Sect. 1.3.1.
and long-time flights with larger delivered payloads (solution marked 36). Solution
44 can deliver a mass of 685.28 kg and requires about 1.12 years. On the other hand,
an intermediate solution 72 can deliver almost 862 kg with a travel time of about
3 years. In these figures, each continuous part of a trajectory represents a thrusting
arc and each dashed part represents a coasting arc. It is interesting to note that only
a small improvement in delivered mass occurs when comparing solutions 73 and 72,
at a sacrifice in flight time of about a year.
The multiplicity of trade-off solutions, as depicted in Fig. 1.15, is what we
envisaged discovering in a multi-objective optimisation problem by using an a
posteriori procedure, such as an EMO algorithm. This aspect was also discussed in
the context of Fig. 1.6. Once such a set of solutions with a good trade-off among
objectives is obtained, one can analyze them for choosing a particular solution. For
example, in this problem context, it makes sense not to choose a solution between
points 73 and 72, owing to the poor trade-off between the objectives in this range.
On the other hand, choosing a solution between points 44 and 73 is worthwhile,
but which particular solution to choose depends on other mission-related issues.
By first finding a wide range of possible solutions and revealing the shape of the
front, EMO can help narrow down the choices and allow a decision maker to make
a better decision. Without the knowledge of such a wide variety of trade-off solutions, a
proper decision-making may be a difficult task. Although one can choose a scalarised
objective (such as the ε-constraint method with a particular ε vector) and find the
resulting optimal solution, the decision-maker will always wonder what solution
would have been derived if a different ε vector had been chosen. For example,
if ε1 = 2.5 years is chosen and the mass delivered to the target is maximised, a
solution in between points 73 and 72 will be found. As discussed earlier, this part
of the Pareto-optimal front does not provide the best trade-offs between objectives
that this problem can offer. A lack of knowledge of good trade-off regions before a
decision is made may allow the decision maker to settle for a solution which,
although optimal, may not be a good compromised solution. The EMO procedure
allows a flexible and a pragmatic procedure for finding a well-diversified set of
solutions simultaneously so as to enable picking a particular region for further
analysis or a particular solution for implementation.
The constraint handling method modifies the binary tournament selection, where
two solutions are picked from the population and the better solution is chosen. In
the presence of constraints, each solution can be either feasible or infeasible. Thus,
there may be at most three situations: (i) both solutions are feasible, (ii) one is
feasible and the other is not, and (iii) both are infeasible. We consider each case by
simply redefining the domination principle as follows (we call it the constrained-
domination condition for any two solutions x(i) and x(j)):

Definition 2 A solution x(i) is said to ‘constrained-dominate’ a solution x(j)
(or x(i) ≼c x(j)), if any of the following conditions is true:

1. Solution x(i) is feasible and solution x(j) is not.
2. Solutions x(i) and x(j) are both infeasible, but solution x(i) has a smaller overall
   constraint violation.
3. Solutions x(i) and x(j) are both feasible and solution x(i) dominates solution x(j)
   in the usual sense.
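A sketch of this constrained-domination check, assuming each solution carries its objective vector and an overall constraint-violation value, with zero violation meaning feasible (the names are illustrative):

```python
def dominates(a, b):
    """Usual domination for minimised objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def constrained_dominates(obj_i, cv_i, obj_j, cv_j):
    """Constrained-domination of solution i over solution j, where cv_* is
    the overall constraint violation (0.0 means the solution is feasible)."""
    feas_i, feas_j = cv_i == 0.0, cv_j == 0.0
    if feas_i and not feas_j:
        return True                        # feasible beats infeasible
    if not feas_i and not feas_j:
        return cv_i < cv_j                 # smaller violation wins
    if feas_i and feas_j:
        return dominates(obj_i, obj_j)     # usual domination among feasible
    return False                           # i infeasible, j feasible

print(constrained_dominates((1.0, 2.0), 0.0, (0.5, 0.5), 3.0))   # True
```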
There are two goals of an EMO procedure: (i) a good convergence to the Pareto-
optimal front and (ii) a good diversity in the obtained solutions. As the two goals are
conflicting in nature, comparing two sets of trade-off solutions also requires different
performance measures. In the early years of EMO research, three different sets of
performance measures were used:
1. metrics evaluating convergence to the known Pareto-optimal front (such as
error ratio, distance from reference set, etc.),
2. metrics evaluating spread of solutions on the known Pareto-optimal front (such
as spread, spacing, etc.), and
3. metrics evaluating certain combinations of convergence and spread of solutions
(such as hypervolume, coverage, R-metrics, etc.).
A detailed study [31] comparing most existing performance metrics based on
out-performance relations has concluded that R-metrics suggested by [32] are the
best. However, a study has argued that a single unary performance measure (any of
the first two metrics described above in the enumerated list) cannot adequately
determine a true winner, as both aspects of convergence and diversity cannot be
1 Multi-objective Optimisation Using Evolutionary Algorithms 21
measured by a single performance metric [33]. That study also concluded that
binary performance metrics (which usually indicate two different values when a set of
solutions A is compared with B and B is compared with A), such as the epsilon-
indicator, the binary hypervolume indicator, utility indicators R1 to R3, etc., are better
measures for multi-objective optimisation. The flip side is that, when comparing M
algorithms, the binary metrics compute M(M-1) performance values by analysing
all pair-wise performance comparisons, thereby making them difficult to use in
practice. In addition, unary and binary
attainment indicators of [34, 35] are of great importance. Figures 1.17 and 1.18
illustrate the hypervolume and attainment indicators. The attainment surface is useful
for determining a representative front from multiple runs of an EMO algorithm. In
general, the 50% attainment surface can be used to indicate the front that is dominated
by 50% of all obtained non-dominated points.
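For two minimised objectives, the hypervolume indicator mentioned above can be computed by sweeping the non-dominated points in increasing order of the first objective and accumulating rectangle areas with respect to a reference point (a sketch; it assumes the points are mutually non-dominated and that the reference point is worse than all of them in both objectives):

```python
def hypervolume_2d(points, ref):
    """Area of the objective space dominated by a set of mutually
    non-dominated two-objective points, measured up to the reference point."""
    pts = sorted(points)          # ascending in f1, hence descending in f2
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        area += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))   # 12.0
```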
1.8 Multi-objectivisation
Interestingly, the act of finding multiple trade-off solutions using an EMO pro-
cedure has found its application outside the realm of solving multi-objective
optimisation problems per se. The concept of finding multiple trade-off solutions
using an EMO procedure is applied to solve other kinds of optimisation problems
that are otherwise not multi-objective in nature. For example, the EMO concept is
used to solve constrained single-objective optimisation problems by converting the
task into a two-objective optimisation task of additionally minimizing an aggre-
gate constraint violation [40]. This eliminates the need to specify a penalty
parameter while using a penalty based constraint handling procedure. A recent
study [41] utilises a bi-objective NSGA-II to find a Pareto-optimal frontier cor-
responding to minimizations of the objective function and constraint violation. The
frontier is then used to estimate an appropriate penalty parameter, which is then
used to formulate a penalty based local search problem and is solved using a
classical optimisation method. The approach is shown to require one or two orders
of magnitude fewer function evaluations than existing constraint handling methods
on a number of standard test problems.
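In a sketch, the reformulation described above turns 'minimise f(x) subject to g_k(x) >= 0' into the two minimised objectives (f(x), CV(x)), where CV is the aggregate constraint violation (illustrative names; not the exact formulation of [40, 41]):

```python
def constraint_violation(x, constraints):
    """Aggregate violation of constraints of the form g_k(x) >= 0."""
    return sum(max(0.0, -g(x)) for g in constraints)

def biobjective(x, f, constraints):
    """Map a constrained single-objective problem to two minimised objectives."""
    return f(x), constraint_violation(x, constraints)

# Example: minimise f(x) = x^2 subject to g(x) = x - 1 >= 0
f = lambda x: x * x
g = [lambda x: x - 1.0]
print(biobjective(0.5, f, g))   # (0.25, 0.5): infeasible, violation 0.5
print(biobjective(2.0, f, g))   # (4.0, 0.0): feasible
```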
A well-known difficulty in genetic programming studies, called ‘bloating’, arises
because of the continual increase in the size of genetic programs over iterations.
Reducing bloat by minimizing the size of programs as an additional objective
helped find high-performing solutions with a smaller code size
[42]. Minimizing the intra-cluster distance and maximizing the inter-cluster distance
simultaneously in a bi-objective formulation of a clustering problem is found to
yield better solutions than the usual single-objective minimization of the ratio of
the intra-cluster distance to the inter-cluster distance [43]. A recently published
book [44] describes many such interesting applications in which EMO method-
ologies have helped solve problems which are otherwise (or traditionally) not
treated as multi-objective optimisation problems.
The search operators used in EMO are generic. There is no guarantee that an EMO
will find any Pareto-optimal solution in a finite number of solution evaluations for
a randomly chosen problem. However, as discussed above, EMO methodologies
provide adequate emphasis to currently non-dominated and isolated solutions so
that population members progress towards the Pareto-optimal front iteratively. To
make the overall procedure faster and to perform the task in a more guaranteed
manner, EMO methodologies must be combined with mathematical optimisation
techniques having local convergence properties. A simple-minded approach would
be to start the optimisation task with an EMO and then improve the obtained
solutions, using a local search technique, by optimising a composite objective
derived from the multiple objectives so as to ensure a good spread. Another
approach would be to use a local search technique as a mutation-like operator in an
EMO so that all population members are at least guaranteed to be local optimal
solutions. A study [48] has demonstrated that the latter approach is better overall
from a computational point of view.
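As a hedged illustration of the first hybrid approach (not the procedure of [48]), a solution returned by an EMO can be refined with a simple local search on a weighted-sum composite of the objectives:

```python
import random

def local_search(x, objectives, weights, sigma=0.05, iters=200):
    """Simple stochastic hill climber on a weighted-sum composite objective."""
    composite = lambda v: sum(w * f(v) for w, f in zip(weights, objectives))
    best, best_val = list(x), composite(x)
    for _ in range(iters):
        cand = [xi + random.gauss(0.0, sigma) for xi in best]
        val = composite(cand)
        if val < best_val:
            best, best_val = cand, val
    return best

f1 = lambda v: v[0] ** 2
f2 = lambda v: (v[0] - 2.0) ** 2
print(local_search([1.3], [f1, f2], weights=[0.5, 0.5]))   # drifts towards x = 1.0
```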
However, the use of a local search technique within an EMO has another
advantage. Since a local search can find a weakly or near Pareto-optimal point, the
presence of such a super-individual in a population can cause other near Pareto-
optimal solutions to be found as an outcome of recombination of the super-
individual with other population members. A recent study has demonstrated this
aspect [49].
With the success of EMO in two and three objective problems, it has become an
obvious quest to investigate if an EMO procedure can also be used to solve four or
more objective problems. An earlier study [50] with eight objectives revealed
somewhat negative results. EMO methodologies work by emphasizing non-dominated
solutions in a population. Unfortunately, as the number of objectives increases, most
members of a randomly created population tend to become non-dominated with
respect to each other. For example, in a three-objective scenario, about 10% of the
members of a population of size 200 are non-dominated, whereas in a 10-objective
scenario, as many as 90% of the members of a population of size 200 are
non-dominated. Thus, in a large-objective problem, an EMO algorithm
runs out of space to introduce new population members into a generation, thereby
causing a stagnation in the performance of an EMO algorithm. Moreover, an
exponentially large population size is needed to represent a large-dimensional
Pareto-optimal front. This makes an EMO procedure slow and computationally
less attractive. Practically speaking, even if an algorithm could find tens of thousands
of Pareto-optimal solutions for a multi-objective optimisation problem, beyond giving
an idea of the nature and shape of the front they are simply too many to be useful for
any decision-making purposes. Keeping these views in
mind, EMO researchers have taken two different approaches in dealing with large-
objective problems.
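The growth in the proportion of non-dominated members with the number of objectives can be reproduced with a small Monte Carlo experiment on randomly generated objective vectors (a sketch; the exact percentages quoted above depend on the problem and the population):

```python
import random

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(num_points=200, num_objectives=3, trials=5):
    """Average fraction of mutually non-dominated points among uniformly
    random objective vectors."""
    total = 0.0
    for _ in range(trials):
        pts = [[random.random() for _ in range(num_objectives)]
               for _ in range(num_points)]
        nd = sum(1 for p in pts
                 if not any(dominates(q, p) for q in pts if q is not p))
        total += nd / num_points
    return total / trials

for m in (2, 3, 5, 10):
    print(m, round(nondominated_fraction(num_objectives=m), 2))
```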
to be non-dominated, thereby making room for new and hopefully better solutions
to be found and stored.
The computational efficiency and accuracy observed in some EMO imple-
mentations have led to a distributed EMO study [53] in which each processor in a
distributed computing environment receives a unique cone for defining domina-
tion. The cones are designed carefully so that at the end of such a distributed
computing EMO procedure, solutions are found to exist in various parts of the
complete Pareto-optimal front. A collection of these solutions together is then able
to provide a good representation of the entire original Pareto-optimal front.
Many practical optimisation problems can easily list a large number of objectives
(often more than 10), as many different criteria or goals are often of interest
to practitioners. In most instances, it is not entirely certain whether the chosen
objectives are all in conflict with each other or not. For example, minimization of
weight and minimization of cost of a component or a system are often mistaken to
have an identical optimal solution, but may lead to a range of trade-off optimal
solutions. Practitioners do not take any chance and tend to include all (or as many
as possible) objectives into the optimisation problem formulation. There is another
fact which is more worrisome. Two apparently conflicting objectives may show a
good trade-off when evaluated with respect to some randomly created solutions.
But if these two objectives are evaluated for solutions close to their optima, they
tend to show a good correlation. That is, although the objectives can exhibit
conflicting behavior for random solutions, near the Pareto-optimal front the conflict
vanishes and the optimum of one approaches the optimum of the other.
Considering the existence of such problems in practice, recent studies [54, 55]
have applied linear and non-linear principal component analysis (PCA) to a set
of EMO-produced solutions. Objectives that are positively correlated with each
other on the obtained NSGA-II solutions are identified and declared redundant.
The EMO procedure is then restarted with the non-redundant objectives. This
combined EMO–PCA procedure is continued until no further reduction in the
number of objectives is possible. The procedure has handled practical problems
involving five and more objectives and has been shown to reduce the set of truly
conflicting objectives to a few. On test problems, the proposed approach has been
shown to reduce an initial 50-objective problem to the correct three-objective
Pareto-optimal front by eliminating 47 redundant objectives. Another
study [56] used an exact and a heuristic-based conflict identification approach on a
given set of Pareto-optimal solutions. For a given error measure, an effort is made
to identify a minimal subset of objectives which do not alter the original domi-
nance structure on a set of Pareto-optimal solutions. This idea has recently been
introduced within an EMO [57], but a continual reduction of objectives through a
successive application of the above procedure would be interesting.
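A much simplified stand-in for the PCA-based procedure of [54, 55] is to flag an objective as potentially redundant when it is almost perfectly positively correlated with another objective across the obtained non-dominated solutions (a sketch only; the published procedure works with principal components rather than raw pairwise correlations):

```python
from statistics import correlation   # Python 3.10+

def redundant_pairs(objective_matrix, threshold=0.95):
    """objective_matrix[i][k] is the k-th objective value of solution i.
    Return pairs of objective indices that are strongly positively
    correlated on the given (non-dominated) solutions."""
    m = len(objective_matrix[0])
    columns = [[row[k] for row in objective_matrix] for k in range(m)]
    return [(j, k) for j in range(m) for k in range(j + 1, m)
            if correlation(columns[j], columns[k]) > threshold]

# Objective 2 is a scaled copy of objective 0 here, hence flagged as redundant.
data = [(1.0, 4.0, 2.0), (2.0, 3.0, 4.0), (3.0, 2.5, 6.0), (4.0, 1.0, 8.0)]
print(redundant_pairs(data))   # [(0, 2)]
```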
This is a promising area of EMO research, and more computationally efficient
objective-reduction techniques are certainly needed for the purpose. In this
direction, the use of alternative definitions of domination is important. One such
idea redefines domination: a solution is said to dominate another solution if the
former is better than the latter in a larger number of objectives. This
certainly excludes finding the entire Pareto-optimal front and helps an EMO to
converge near the intermediate and central part of the Pareto-optimal front.
Another EMO study used a fuzzy dominance [58] relation (instead of Pareto-
dominance), in which superiority of one solution over another in any objective is
defined in a fuzzy manner. Many other such definitions are possible and can be
implemented based on the problem context.
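The count-based alternative mentioned above can be sketched as follows (the fuzzy variant of [58] is not shown):

```python
def dominates_in_more_objectives(a, b):
    """Solution a 'dominates' b if a is strictly better (smaller) than b in
    more objectives than b is better than a."""
    a_wins = sum(1 for x, y in zip(a, b) if x < y)
    b_wins = sum(1 for x, y in zip(a, b) if y < x)
    return a_wins > b_wins

print(dominates_in_more_objectives((1, 5, 2), (2, 4, 3)))   # True: better in 2 of 3
```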
A major surge in EMO research has taken place in handling uncertainties among
decision variables and problem parameters in multi-objective optimisation. Prac-
tice is full of uncertainties and almost no parameter, dimension, or property can be
guaranteed to be fixed at a value it is aimed at. In such scenarios, evaluation of a
solution is not precise, and the resulting objective and constraint function values
become probabilistic quantities. Optimisation algorithms are usually designed to
handle such stochasticities either by using crude methods, such as Monte Carlo
simulation of the stochasticities in uncertain variables and parameters, or by
sophisticated stochastic programming methods involving nested optimisation
techniques [61]. When these effects are taken care of during the optimisation
process, the resulting solution is usually different from the optimum solution of the
problem and is known as a ‘robust’ solution. Such an optimisation procedure will
then find a solution which may not be the true global optimum solution, but one
which is less sensitive to uncertainties in decision variables and problem param-
eters. In the context of multi-objective optimisation, a consideration of uncer-
tainties for multiple objective functions will result in a robust frontier which may
be different from the globally Pareto-optimal front. Each and every point on the
robust frontier is then guaranteed to be less sensitive to uncertainties in decision
variables and problem parameters. Some such studies in EMO are [62, 63].
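A crude Monte Carlo evaluation of a mean-effective objective under decision-variable uncertainty, of the kind such robustness studies rely on, might look as follows (a sketch under illustrative assumptions, not the specific formulations of [62, 63]):

```python
import math, random

def robust_objective(f, x, sigma=0.1, samples=200):
    """Mean objective value under Gaussian perturbations of the decision
    variables; a crude Monte Carlo estimate of the 'effective' objective."""
    return sum(f([xi + random.gauss(0.0, sigma) for xi in x])
               for _ in range(samples)) / samples

# A sharp global minimum near x = 3 and a broad, shallower minimum near x = 1:
# after averaging over perturbations, the broad minimum becomes preferable.
f = lambda v: (-2.0 * math.exp(-((v[0] - 3.0) / 0.01) ** 2)
               - 1.0 * math.exp(-((v[0] - 1.0) / 0.5) ** 2))
print(robust_objective(f, [3.0]), robust_objective(f, [1.0]))
```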
When the evaluation of constraints under uncertainties in decision variables and
problem parameters is considered, deterministic constraints become stochastic
(they are also known as ‘chance constraints’) and involve a reliability index R to
handle the constraints. A constraint g(x) ≥ 0 then becomes Prob(g(x) ≥ 0) ≥ R. To
compute the left side of the above chance constraint, a separate optimisation
methodology [64] is needed, thereby making the overall algorithm a bi-level
optimisation procedure. Approximate single-loop algorithms exist [65] and
recently one such methodology has been integrated with an EMO [61] and shown
1.11 Conclusions
Acknowledgments The author acknowledges the support of and his association with the University of
Skövde, Sweden, and the Aalto University School of Economics, Helsinki. This chapter contains
some excerpts from previous publications by the same author entitled ‘Introduction to Evolu-
tionary Multi-Objective Optimization’, in J. Branke, K. Deb, K. Miettinen and R. Slowinski (Eds.)
Multiobjective Optimization: Interactive and Evolutionary Approaches (LNCS 5252) (pp. 59–96),
2008, Berlin: Springer and ‘Recent Developments in Evolutionary Multi-Objective Optimization’
in M. Ehrgott et al. (Eds.) Trends in Multiple Criteria Decision Analysis (pp. 339-368), 2010,
Berlin: Springer.
References
6. Deb, K., Anand, A., Joshi, D. (2002). A computationally efficient evolutionary algorithm for
real-parameter optimisation. Evolutionary Computation Journal 10(4):371–395
7. Storn, R., Price, K. (1997). Differential evolution—A fast and efficient heuristic for global
optimisation over continuous spaces. Journal of Global Optimization 11:341–359
8. Rudolph, G. (1994). Convergence analysis of canonical genetic algorithms. IEEE
Transactions on Neural Network 5(1):96–101
9. Michalewicz, Z. (1992). Genetic Algorithms + Data Structures = Evolution Programs.
Berlin: Springer.
10. Gen, M., & Cheng, R. (1997). Genetic algorithms and engineering design. New York: Wiley.
11. Bäck, T., Fogel, D., & Michalewicz, Z. (Eds.). (1997). Handbook of evolutionary
computation. Bristol/New York: Institute of Physics Publishing/Oxford University Press.
12. Deb, K., Tiwari, R., Dixit, M., & Dutta, J. (2007). Finding trade-off solutions close to KKT
points using evolutionary multi-objective optimisation. In Proceedings of the congress on
evolutionary computation (CEC-2007) (pp. 2109–2116)
13. Holland, J. H. (1975). Adaptation in natural and artificial systems. Ann Arbor, MI: MIT
Press.
14. Vose, M. D., Wright, A. H., & Rowe, J. E. (2003). Implicit parallelism. In Proceedings of
GECCO 2003 (lecture notes in computer science) (Vol. 2723–2724). Heidelberg: Springer.
15. Jansen, T., & Wegener, I. (2001). On the utility of populations. In Proceedings of the genetic
and evolutionary computation conference (GECCO 2001) (pp. 375–382). San Mateo, CA:
Morgan Kaufmann.
16. Radcliffe, N. J. (1991). Forma analysis and random respectful recombination. In Proceedings
of the fourth international conference on genetic algorithms (pp. 222–229).
17. Miettinen, K. (1999). Nonlinear multiobjective optimisation. Boston: Kluwer.
18. Kung, H. T., Luccio, F., & Preparata, F. P. (1975). On finding the maxima of a set of vectors.
Journal of the Association for Computing Machinery 22(4):469–476.
19. Ehrgott, M. (2000). Multicriteria optimisation. Berlin: Springer.
20. Deb, K., & Tiwari, S. (2008). Omni-optimiser: A generic evolutionary algorithm for global
optimisation. European Journal of Operations Research 185(3):1062–1087
21. Deb, K., Agrawal, S., Pratap, A., & Meyarivan, T. (2002). A fast and elitist multi-objective
genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2):182–197
22. Coello, C. A. C., Van Veldhuizen, D. A., & Lamont, G. (2002). Evolutionary algorithms for
solving multi-objective problems. Boston, MA: Kluwer.
23. Osyczka, A. (2002). Evolutionary algorithms for single and multicriteria design optimisation.
Heidelberg: Physica-Verlag.
24. Zitzler, E., Deb, K., Thiele, L., Coello, C. A. C., & Corne, D. W. (2001). Proceedings of the
first evolutionary multi-criterion optimisation (EMO-01) conference (lecture notes in
computer science 1993). Heidelberg: Springer.
25. Fonseca, C., Fleming, P., Zitzler, E., Deb, K., & Thiele, L. (2003). Proceedings of the
Second Evolutionary Multi-Criterion Optimization (EMO-03) conference (lecture notes in
computer science) (Vol. 2632). Heidelberg: Springer.
26. Coello, C. A. C., Aguirre, A. H., & Zitzler, E. (Eds.). (2005). Evolutionary multi-criterion
optimisation: Third international conference LNCS (Vol. 3410). Berlin, Germany: Springer.
27. Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., & Murata, T. (Eds.). (2007). Evolutionary
multi-criterion optimisation, 4th international conference, EMO 2007, Matsushima, Japan,
March 5–8, 2007, Proceedings. Lecture notes in computer science (Vol. 4403). Heidelberg:
Springer.
28. Coverstone-Carroll, V., Hartmann, J. W., & Mason, W. J. (2000). Optimal multi-objective
low-thrust spacecraft trajectories. Computer Methods in Applied Mechanics and Engineering
186(2–4):387–402
29. Srinivas, N., & Deb, K. (1994). Multi-objective function optimisation using non-dominated
sorting genetic algorithms. Evolutionary Computation Journal 2(3):221–248.
30. Sauer, C. G. (1973). Optimization of multiple target electric propulsion trajectories. In AIAA
11th aerospace science meeting (pp. 73–205).
31. Knowles, J. D., & Corne, D. W. (2002). On metrics for comparing nondominated sets. In
Congress on evolutionary computation (CEC-2002) (pp. 711–716). Piscataway, NJ: IEEE
Press.
32. Hansen, M. P., & Jaskiewicz, A. (1998). Evaluating the quality of approximations to the non-
dominated set IMM-REP-1998-7. Lyngby: Institute of Mathematical Modelling Technical
University of Denmark.
33. Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C. M., & Fonseca, V. G. (2003). Performance
assessment of multiobjective optimisers: An analysis and review. IEEE Transactions on
Evolutionary Computation 7(2):117–132
34. Fonseca, C. M., & Fleming, P. J. (1996). On the performance assessment and comparison of
stochastic multiobjective optimisers. In H. M. Voigt, W. Ebeling, I. Rechenberg, & H.
P. Schwefel (Eds.), Parallel problem solving from nature (PPSN IV) (pp. 584–593). Berlin:
Springer. Also available as Lecture notes in computer science (Vol. 1141).
35. Fonseca, C. M., da Fonseca, V. G., & Paquete, L. (2005). Exploring the performance of
stochastic multiobjective optimisers with the second-order attainment function. In Third
international conference on evolutionary multi-criterion optimisation, EMO-2005 (pp. 250–
264). Berlin: Springer.
36. Deb, K., Sundar, J., Uday, N., & Chaudhuri, S. (2006). Reference point based multi-objective
optimisation using evolutionary algorithms. International Journal of Computational
Intelligence Research 2(6):273–286
37. Deb, K., & Kumar, A. (2007). Interactive evolutionary multi-objective optimisation and
decision-making using reference direction method. In Proceedings of the genetic and
evolutionary computation conference (GECCO-2007) (pp. 781–788). New York: The
Association of Computing Machinery (ACM).
38. Deb, K., & Kumar, A. (2007). Light beam search based multi-objective optimisation using
evolutionary algorithms. In Proceedings of the congress on evolutionary computation (CEC-
07) (pp. 2125–2132).
39. Deb, K., Sinha, A., & Kukkonen, S. (2006). Multi-objective test problems, linkages and
evolutionary methodologies. In Proceedings of the genetic and evolutionary computation
conference (GECCO-2006) (pp. 1141–1148). New York: The Association of Computing
Machinery (ACM).
40. Coello, C. A. C. (2000). Treating objectives as constraints for single objective optimisation.
Engineering Optimization 32(3):275–308
41. Deb, K., & Datta, R. (2010). A fast and accurate solution of constrained optimisation
problems using a hybrid bi-objective and penalty function approach. In Proceedings of the
IEEE World Congress on Computational Intelligence (WCCI-2010).
42. Bleuler, S., Brack, M., & Zitzler, E. (2001). Multiobjective genetic programming: Reducing
bloat using SPEA2. In Proceedings of the 2001 congress on evolutionary computation (pp.
536–543).
43. Handl, J., & Knowles, J. D. (2007). An evolutionary approach to multiobjective clustering.
IEEE Transactions on Evolutionary Computation 11(1):56–76
44. Knowles, J. D., Corne, D. W., & Deb, K. (2008). Multiobjective problem solving from nature.
Springer natural computing series. Berlin: Springer.
45. Deb, K., & Srinivasan, A. (2006). Innovization: Innovating design principles through
optimisation. In Proceedings of the genetic and evolutionary computation conference
(GECCO-2006) (pp. 1629–1636). New York: ACM.
46. Deb, K., & Sindhya, K. (2008). Deciphering innovative principles for optimal electric
brushless D.C. permanent magnet motor design. In Proceedings of the world congress on
computational intelligence (WCCI-2008) (pp. 2283–2290). Piscataway, NJ: IEEE Press.
47. Bandaru, S., & Deb, K. (in press). Towards automating the discovery of certain innovative
design principles through a clustering based optimisation technique. Engineering Optimization.
doi:10.1080/0305215X.2010.528410
48. Deb, K., & Goel, T. (2001). A hybrid multi-objective evolutionary approach to engineering
shape design. In Proceedings of the first international conference on evolutionary multi-
criterion optimisation (EMO-01) (pp. 385–399).
49. Sindhya, K., Deb, K., & Miettinen, K. (2008). A local search based evolutionary multi-
objective optimisation technique for fast and accurate convergence. In Proceedings of the
parallel problem solving from nature (PPSN-2008). Berlin, Germany: Springer.
50. Khare, V., Yao, X., & Deb, K. (2003). Performance scaling of multi-objective evolutionary
algorithms. In Proceedings of the second evolutionary multi-criterion optimisation (EMO-03)
conference (LNCS) (Vol. 2632, pp. 376–390).
51. Luque, M., Miettinen, K., Eskelinen, P., & Ruiz, F. (2009). Incorporating preference
information in interactive reference point based methods for multiobjective optimisation.
Omega 37(2):450–462
52. Branke, J., & Deb, K. (2004). Integrating user preferences into evolutionary multi-objective
optimisation. In Y. Jin (Ed.), Knowledge incorporation in evolutionary computation (pp.
461–477). Heidelberg, Germany: Springer.
53. Deb, K., Zope, P., & Jain, A. (2003). Distributed computing of Pareto-optimal solutions using
multi-objective evolutionary algorithms. In Proceedings of the second evolutionary multi-
criterion optimisation (EMO-03) conference (LNCS) (Vol. 2632, pp. 535–549).
54. Deb, K., & Saxena, D. (2006). Searching for Pareto-optimal solutions through dimensionality
reduction for certain large-dimensional multi-objective optimisation problems. In
Proceedings of the world congress on computational intelligence (WCCI-2006) (pp. 3352–
3360).
55. Saxena, D. K., & Deb, K. (2007) Non-linear dimensionality reduction procedures for certain
large-dimensional multi-objective optimisation problems: Employing correntropy and a
novel maximum variance unfolding. In Proceedings of the fourth international conference on
evolutionary multi-criterion optimisation (EMO-2007) (pp. 772–787).
56. Brockhoff, D., & Zitzler, E. (2007) Dimensionality reduction in multiobjective optimisation:
The minimum objective subset problem. In K. H. Waldmann, & U. M. Stocker (Eds.),
Operations research proceedings 2006 (pp. 423–429). Heidelberg: Springer.
57. Brockhoff, D., & Zitzler, E. (2007). Offline and online objective reduction in evolutionary
multiobjective optimisation based on objective conflicts (p. 269). ETH Zürich: Institut für
Technische Informatik und Kommunikationsnetze.
58. Farina, M., & Amato, P. (2004). A fuzzy definition of optimality for many criteria
optimisation problems. IEEE Transactions on Systems, Man and Cybernetics, Part A:
Systems and Humans 34(3):315–326.
59. Branke, J. (2001). Evolutionary optimisation in dynamic environments. Heidelberg,
Germany: Springer.
60. Deb, K., Rao, U. B., & Karthik, S. (2007). Dynamic multi-objective optimisation and
decision-making using modified NSGA-II: A case study on hydro-thermal power scheduling
bi-objective optimisation problems. In Proceedings of the fourth international conference on
evolutionary multi-criterion optimisation (EMO-2007).
61. Deb, K., Gupta, S., Daum, D., Branke, J., Mall, A., & Padmanabhan, D. (2009). Reliability-
based optimisation using evolutionary algorithms. IEEE Transactions on Evolutionary
Computation 13(5):1054–1074
62. Deb, K., & Gupta, H. (2006). Introducing robustness in multi-objective optimisation.
Evolutionary Computation Journal 14(4):463–494
63. Basseur, M., & Zitzler, E. (2006). Handling uncertainty in indicator-based multiobjective
optimisation. International Journal of Computational Intelligence Research 2(3):255–272
64. Cruse, T. R. (1997). Reliability-based mechanical design. New York: Marcel Dekker.
65. Du, X., & Chen, W. (2004). Sequential optimisation and reliability assessment method for
efficient probabilistic design. ASME Transactions on Journal of Mechanical Design
126(2):225–233.
66. El-Beltagy, M. A., Nair, P. B., & Keane, A. J. (1999). Metamodelling techniques for
evolutionary optimisation of computationally expensive problems: Promises and limitations.