
Chapter 1

Multi-objective Optimisation Using Evolutionary Algorithms: An Introduction

Kalyanmoy Deb

Abstract As the name suggests, multi-objective optimisation involves optimising a number of objectives simultaneously. The problem becomes challenging when the objectives conflict with each other, that is, when the optimal solution of one objective function differs from that of another. In the course of solving such problems, with or without constraints, these problems give rise to a set of trade-off optimal solutions, popularly known as Pareto-optimal solutions. Because of this multiplicity of solutions, evolutionary algorithms, which use a population approach in their search procedure, were proposed as a suitable way to solve them. Starting with parameterised procedures in the early 1990s, the so-called evolutionary multi-objective optimisation (EMO) algorithms now constitute an established field of research and application with many dedicated texts and edited books, commercial software and numerous freely downloadable codes, a biannual conference series running successfully since 2001, special sessions and workshops held at all major evolutionary computing conferences, and full-time researchers from universities and industries all around the globe. In this chapter, we provide a brief introduction to its operating principles and outline the current research and application studies of evolutionary multi-objective optimisation (EMO).

K. Deb
Department of Mechanical Engineering, Indian Institute of Technology,
Kanpur, Uttar Pradesh 208016, India
e-mail: [email protected]
URL: http://www.iitk.ac.in/kangal/deb.htm

L. Wang et al. (eds.), Multi-objective Evolutionary Optimisation for Product Design and Manufacturing, DOI: 10.1007/978-0-85729-652-8_1, © Springer-Verlag London Limited 2011

1.1 Introduction

In the past 15 years, EMO has become a popular and useful field of research and application. Evolutionary optimisation (EO) algorithms use a population-based approach in which more than one solution participates in an iteration and evolves a new population of solutions in each iteration. The reasons for their popularity are many: (i) EOs do not require any derivative information, (ii) EOs are relatively simple to implement, and (iii) EOs are flexible and have a wide-spread applicability. Although for solving single-objective optimisation problems, particularly in finding a single optimal solution, the use of a population of solutions may sound redundant, in solving multi-objective optimisation problems an EO procedure is a perfect choice [1]. Multi-objective optimisation problems, because of their attributes, give rise to a set of Pareto-optimal solutions, which need further processing to arrive at a single preferred solution. For the first task, it is quite a natural proposition to use an EO, because the use of a population in an iteration helps an EO to simultaneously find multiple non-dominated solutions, which portray a trade-off among objectives, in a single simulation run.
In this chapter, we present a brief description of an evolutionary optimisation
procedure for single-objective optimisation. Thereafter, we describe the principles
of EMO. Then, we discuss some salient developments in EMO research. It is clear
from these discussions that EMO is not only useful in solving multi-objective optimisation problems, it also helps to solve other kinds of optimisation problems more efficiently than they are traditionally solved. As a by-product, EMO-based solutions help to elicit valuable insights about a problem which are difficult to achieve otherwise. EMO procedures with a
decision making concept are discussed as well. Some of these ideas require further
detailed studies and this chapter mentions some such topics for current and future
research in this direction.

1.2 Evolutionary Optimisation for Single-Objective


Optimisation

EO principles are different from classical optimisation methodologies in the following main ways [2]:
• An EO procedure does not usually use gradient information in its search process. Thus, EO methodologies are direct search procedures, allowing them to be applied to a wide variety of optimisation problems.
• An EO procedure uses more than one solution (a population approach) in an iteration, unlike most classical optimisation algorithms, which update one solution in each iteration (a point approach). The use of a population has a number of advantages: (i) it provides an EO with a parallel processing power, achieving a computationally quick overall search, (ii) it allows an EO to find multiple optimal solutions, thereby facilitating the solution of multi-modal and multi-objective optimisation problems, and (iii) it provides an EO with the ability to normalise decision variables (as well as objective and constraint functions) within an evolving population using the population-best minimum and maximum values.
• An EO procedure uses stochastic operators, unlike the deterministic operators used in most classical optimisation methods. The operators tend to achieve a desired effect by using higher probabilities towards desirable outcomes, as opposed to using predetermined and fixed transition rules. This allows an EO algorithm to negotiate multiple optima and other complexities better and provides it with a global perspective in its search.

An EO begins its search with a population of solutions usually created at random within specified lower and upper bounds on each variable. Thereafter, the EO procedure enters into an iterative operation of updating the current population to create a new population by the use of four main operators: selection, crossover, mutation and elite-preservation. The operation stops when one or more pre-specified termination criteria are met.
The initialisation procedure usually involves a random creation of solutions. If knowledge of some good solutions is available for a problem, it is better to use such information in creating the initial population. Elsewhere [3], it is highlighted that for solving complex real-world optimisation problems, such a customised initialisation is useful and also helpful in achieving a faster search. After the population members are evaluated, the selection operator chooses above-average (in other words, better) solutions with a larger probability to fill an intermediate mating pool. For this purpose, several stochastic selection operators have been developed in the EO literature. In its simplest form (called tournament selection [4]), two solutions are picked at random from the evaluated population and the better of the two (in terms of its evaluated order) is kept.
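This simplest selection scheme can be sketched as follows; a minimal illustration assuming a minimisation problem, where the names `population` and `fitness` are our own, not the chapter's notation:

```python
import random

def tournament_select(population, fitness, rng=random):
    """Pick two distinct members at random and return the one with the
    better (lower) fitness value, assuming minimisation."""
    a, b = rng.sample(range(len(population)), 2)
    return population[a] if fitness[a] <= fitness[b] else population[b]

# A mating pool of the same size as the population can be filled by
# repeating the tournament:
# pool = [tournament_select(pop, fit) for _ in range(len(pop))]
```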
The ‘variation’ operator is a collection of a number of operators (such as crossover, mutation, etc.) which are used to generate a modified population. The purpose of the crossover operator is to pick two or more solutions (parents) randomly from the mating pool and create one or more solutions by exchanging information among the parent solutions. The crossover operator is applied with a crossover probability pc ∈ [0, 1], indicating the proportion of population members participating in the crossover operation. The remaining (1 − pc) proportion of the population is simply copied to the modified (child) population. In real-parameter optimisation with n real-valued variables, a crossover involving two parent solutions may operate on each variable at a time. A probability distribution which depends on the difference between the two parent variable values is often used to create two new numbers as child values around the two parent values [5]. Besides such variable-wise recombination operators, vector-wise recombination operators have also been suggested to propagate the correlation among variables of parent solutions to the created child solutions [6, 7].
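The variable-wise recombination of [5] is commonly identified with the simulated binary crossover (SBX) used later in this chapter. A common textbook form of it is sketched below, under the assumption that the spread of children around the parents is controlled by a distribution index eta_c (a larger eta_c keeps children closer to the parents):

```python
import random

def sbx_pair(x1, x2, eta_c=10.0, rng=random):
    """Simulated binary crossover on one variable of two parents: draw a
    spread factor beta from a polynomial distribution controlled by
    eta_c and place two children symmetrically around the parent mean."""
    u = rng.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    c1 = 0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2)
    c2 = 0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2)
    return c1, c2

def sbx_crossover(p1, p2, eta_c=10.0, rng=random):
    """Apply SBX variable by variable to two parent vectors."""
    pairs = [sbx_pair(a, b, eta_c, rng) for a, b in zip(p1, p2)]
    return [c[0] for c in pairs], [c[1] for c in pairs]
```

Note that the two children always preserve the parents' mean for every variable; only their spread is random.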

Each child solution, created by the crossover operator, is then perturbed in its vicinity by a mutation operator [2]. Every variable is mutated with a mutation probability pm, usually set to 1/n (n is the number of variables), so that on average one variable gets mutated per solution. In real-parameter optimisation, a simple Gaussian probability distribution with a predefined variance can be used with its mean at the child variable value [1]. This operator allows an EO to search locally around a solution and is independent of the location of other solutions in the population.
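The Gaussian mutation just described may be sketched as follows; the per-variable probability pm = 1/n follows the text, while the standard deviation `sigma` is an assumed tuning parameter:

```python
import random

def gaussian_mutation(x, sigma=0.1, rng=random):
    """Mutate each variable with probability 1/n by adding a zero-mean
    Gaussian perturbation centred at the current value, so that on
    average one variable is mutated per solution."""
    n = len(x)
    pm = 1.0 / n
    return [xi + rng.gauss(0.0, sigma) if rng.random() < pm else xi
            for xi in x]
```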
The elitism operator combines the old population with the newly created population and keeps the better solutions from the combined population. Such an operation makes sure that the algorithm has a monotonically non-degrading performance. Rudolph [8] proved asymptotic convergence for a specific EO having elitism and mutation as two essential operators.
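For a single-objective minimisation problem, this elitism operator reduces to keeping the best N members of the combined parent-and-child population, as in the following sketch:

```python
def elitist_survival(parents, children, fitness, n):
    """Combine the old and new populations and keep the n best members
    (lower fitness is better), so the best-so-far solution can never
    degrade from one generation to the next."""
    combined = parents + children
    combined.sort(key=fitness)
    return combined[:n]
```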
Finally, the user of an EO needs to choose termination criteria. Often, a predetermined number of generations is used as a termination criterion. For goal attainment problems, an EO can be terminated as soon as a solution with a predefined goal or a target solution is found. In many studies [2, 9–11], a termination criterion compares the statistics of the current population with those of the previous population to determine the rate of convergence. In other, more recent studies, theoretical optimality conditions (such as the extent of satisfaction of the Karush–Kuhn–Tucker (KKT) conditions) are used to determine the termination of a real-parameter EO algorithm [12]. Although EOs are heuristic based, such theoretical optimality concepts can also be used to test their ability to converge towards local optimal solutions.
To demonstrate the working of the above-mentioned procedure (a genetic algorithm, or GA), we show four snapshots of a typical simulation run on the following constrained optimisation problem:

Minimise    f(x) = (x1² + x2 − 11)² + (x1 + x2² − 7)²
subject to  g1(x) ≡ 26 − (x1 − 5)² − x2² ≥ 0,
            g2(x) ≡ 20 − 4x1 − x2 ≥ 0,                  (1.1)
            0 ≤ (x1, x2) ≤ 6.

Ten points are used and the GA is run for 100 generations. The SBX recombination operator is used with probability pc = 0.9 and index ηc = 10. The polynomial mutation operator is used with a probability pm = 0.5 and an index ηm = 50. Figures 1.1, 1.2, 1.3 and 1.4 show the populations at generations 0, 5, 40 and 100, respectively. It can be observed that in only five generations, all ten population members become feasible. Thereafter, the points come close to each other and creep towards the constrained minimum point.
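Problem (1.1), whose objective is Himmelblau's function, translates directly into code; a solution is feasible when both inequality constraints are non-negative and the variable bounds hold:

```python
def f(x1, x2):
    """Objective of problem (1.1): Himmelblau's function."""
    return (x1**2 + x2 - 11.0)**2 + (x1 + x2**2 - 7.0)**2

def is_feasible(x1, x2):
    """A solution is feasible when both inequality constraints of (1.1)
    are non-negative and each variable lies in [0, 6]."""
    g1 = 26.0 - (x1 - 5.0)**2 - x2**2
    g2 = 20.0 - 4.0 * x1 - x2
    return g1 >= 0.0 and g2 >= 0.0 and 0.0 <= x1 <= 6.0 and 0.0 <= x2 <= 6.0
```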
The EA procedure is a population-based stochastic search procedure which iteratively emphasises its better population members, uses them to recombine and perturb locally in the hope of creating new and better populations until a predefined termination criterion is met. The use of a population helps to achieve an
Fig. 1.1 Initial population

Fig. 1.2 Population at generation 5

implicit parallelism [2, 13, 14] in an EO’s search mechanism (causing an inherent parallel search in different regions of the search space), a process which makes an EO computationally attractive for solving difficult problems. In the context of certain Boolean functions, a computational time saving in finding the optimum, varying polynomially with the population size, has been proven [15]. On the one hand, the EO procedure is flexible, thereby allowing a user to choose suitable operators and problem-specific information to suit a specific problem. On the other hand, the flexibility comes with the onus on the part of a user to choose appropriate and tangible operators so as to create an efficient and consistent search [16]. However, the benefit of such a flexible optimisation procedure, over more rigid and specific optimisation algorithms, is the feasibility of solving difficult real-world optimisation problems involving non-differentiable objectives and constraints,
Fig. 1.3 Population at generation 40

Fig. 1.4 Population at generation 100

non-linearities, discreteness, multiple optima, large problem sizes, uncertainties in the computation of objectives and constraints, uncertainties in decision variables, mixed types of variables, and others.
A wiser approach to solving real-world optimisation problems would be to first understand the niche of both EO and classical methodologies and then adopt hybrid procedures employing the better of the two as the search progresses over varying degrees of search-space complexity from start to finish. As demonstrated in the above typical GA simulation, there are two phases in the search of a GA. First, the GA exhibits a more global search by maintaining a diverse population, thereby discovering potentially good regions of interest. Second, a more local search takes place by bringing the population members closer together.
Although the above GA passes through both these search phases automatically without any external intervention, a more efficient search can be achieved if the later local search phase is identified and executed with a more specialised local search algorithm.

1.3 Evolutionary Multi-objective Optimisation

A multi-objective optimisation problem involves a number of objective functions which are to be either minimised or maximised. As in a single-objective optimisation problem, the multi-objective optimisation problem may contain a number of constraints which any feasible solution (including all optimal solutions) must satisfy. Since objectives can be either minimised or maximised, we state the multi-objective optimisation problem in its general form:
Minimise/Maximise  fm(x),        m = 1, 2, …, M;
subject to         gj(x) ≥ 0,    j = 1, 2, …, J;
                   hk(x) = 0,    k = 1, 2, …, K;        (1.2)
                   xi⁽ᴸ⁾ ≤ xi ≤ xi⁽ᵁ⁾,  i = 1, 2, …, n.

A solution x ∈ Rⁿ is a vector of n decision variables: x = (x1, x2, …, xn)ᵀ. The solutions satisfying the constraints and variable bounds constitute a feasible decision variable space S ⊂ Rⁿ. One of the striking differences between single-objective and multi-objective optimisation is that in multi-objective optimisation the objective functions constitute a multi-dimensional space, in addition to the usual decision variable space. This additional M-dimensional space is called the objective space, Z ⊂ Rᴹ. For each solution x in the decision variable space, there exists a point z ∈ Rᴹ in the objective space, denoted by f(x) = z = (z1, z2, …, zM)ᵀ. To make the descriptions clear, we refer to a ‘solution’ as a variable vector and to a ‘point’ as the corresponding objective vector.
The optimal solutions in multi-objective optimisation can be defined from the mathematical concept of partial ordering. In the parlance of multi-objective optimisation, the term domination is used for this purpose. In this section, we restrict ourselves to discussing unconstrained (without any equality, inequality or bound constraints) optimisation problems. The domination between two solutions is defined as follows [1, 17]:
Definition 1 A solution x⁽¹⁾ is said to dominate another solution x⁽²⁾ if both the following conditions are true:
1. The solution x⁽¹⁾ is no worse than x⁽²⁾ in all objectives. Thus, the solutions are compared based on their objective function values (or the locations of the corresponding points z⁽¹⁾ and z⁽²⁾ in the objective space).
2. The solution x⁽¹⁾ is strictly better than x⁽²⁾ in at least one objective.
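For minimisation of all objectives, Definition 1 amounts to the following check on two objective vectors:

```python
def dominates(z1, z2):
    """Check Definition 1 for two objective vectors, assuming every
    objective is to be minimised."""
    no_worse = all(a <= b for a, b in zip(z1, z2))        # condition 1
    strictly_better = any(a < b for a, b in zip(z1, z2))  # condition 2
    return no_worse and strictly_better
```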
For a given set of solutions (or corresponding points on the objective space, for
example, those shown in Fig. 1.5a), a pair-wise comparison can be made using the
Fig. 1.5 A set of points and the first non-domination front are shown

above definition and whether one point dominates the other can be established. All
points which are not dominated by any other member of the set are called the non-
dominated points of class one, or simply the non-dominated points. For the set of
six solutions shown in the figure, they are points 3, 5, and 6. One property of any
two such points is that a gain in an objective from one point to the other happens
only because of a sacrifice in at least one other objective. This trade-off property
between the non-dominated points makes the practitioners interested in finding a
wide variety of them before making a final choice. These points make up a front
when they are viewed together on the objective space; hence the non-dominated
points are often visualized to represent a non-domination front. The computational
effort needed to select the points of the non-domination front from a set of N points
is OðN log NÞ for two and three objectives, and OðN logM2 NÞ for M [ 3 objec-
tives [18].
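A straightforward O(MN²) pairwise filter, sketched below, identifies the class-one non-dominated points; the faster bounds quoted above require the more involved procedures of [18]. The dominance check of Definition 1 is repeated here so the sketch is self-contained:

```python
def dominates(p, q):
    """p dominates q when minimising all objectives."""
    return (all(a <= b for a, b in zip(p, q)) and
            any(a < b for a, b in zip(p, q)))

def nondominated_front(points):
    """Indices of the class-one non-dominated points of a set of
    objective vectors (a naive O(M N^2) pairwise filter)."""
    return [i for i in range(len(points))
            if not any(dominates(points[j], points[i])
                       for j in range(len(points)) if j != i)]
```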
With the above concept, now it is easier to define the Pareto-optimal solutions
in a multi-objective optimisation problem. If the given set of points for the above task contains all points in the search space (assuming a countable number), the
points lying on the non-domination front, by definition, do not get dominated by
any other point in the objective space, hence are Pareto-optimal points (together
they constitute the Pareto-optimal front) and the corresponding pre-images
(decision variable vectors) are called Pareto-optimal solutions. However, more
mathematically elegant definitions of Pareto-optimality (including the ones for
continuous search space problems) exist in the multi-objective literature [17, 19].

1.3.1 Principle of EMO’s Search

In the context of multi-objective optimisation, the extremist principle of finding the optimum solution cannot be applied to one objective alone when the rest of the objectives are also important. Different solutions may produce trade-offs (conflicting outcomes among objectives) among different objectives. A solution that is extreme (in a better sense) with respect to one objective requires a
compromise in other objectives. This prohibits one from choosing a solution which is optimal with respect to only one objective. This clearly suggests two ideal goals of multi-objective optimisation:
1. Find a set of solutions which lie on the Pareto-optimal front, and
2. Find a set of solutions which are diverse enough to represent the entire range of the Pareto-optimal front.

EMO algorithms attempt to follow both the above principles, similar to the other a posteriori multiple criteria decision making (MCDM) methods (refer to this chapter).
Although one fundamental difference between single- and multiple-objective optimisation lies in the cardinality of the optimal set, from a practical standpoint a user needs only one solution, no matter whether the associated optimisation problem is single- or multi-objective. The user is now in a dilemma. As a number of
solutions are optimal, the obvious question arises: Which of these optimal solutions must one choose? This is not an easy question to answer. It involves higher-level information which is often non-technical, qualitative and experience-driven.
However, if a set of many trade-off solutions are already worked out or available,
one can evaluate the pros and cons of each of these solutions based on all such
non-technical and qualitative, yet important, considerations and compare them to
make a choice. Thus, in multi-objective optimisation, ideally the effort must be made in finding the set of trade-off optimal solutions by considering all objectives to be important. After a set of such trade-off solutions is found, a user can then use higher-level qualitative considerations to make a choice. Since an EMO procedure deals with a population of solutions in every iteration, it is intuitively suited to finding a set of non-dominated solutions in multi-objective optimisation. Like other a posteriori MCDM methodologies, an EMO-based procedure works with the following principle in handling multi-objective optimisation problems:
Step 1. Find multiple non-dominated points as close to the Pareto-optimal front
as possible, with a wide trade-off among objectives.
Step 2. Choose one of the obtained points using higher-level information.
Figure 1.6 shows schematically the principles followed in an EMO procedure. As EMO procedures are heuristic based, they may not guarantee finding Pareto-optimal points, as a theoretically provable optimisation method would do for tractable (for example, linear or convex) problems. But EMO procedures have essential operators to constantly improve the evolving non-dominated points (from the point of view of convergence and diversity discussed above), similar to the way most natural and artificial evolving systems continuously improve their solutions. To this effect, a recent simulation study [12] has demonstrated that a particular EMO procedure, starting from random non-optimal solutions, can progress towards theoretical KKT points with iterations in real-valued multi-objective optimisation problems. The main difference and advantage of using an EMO compared with a posteriori MCDM procedures is that multiple trade-off solutions can be found in a single simulation run, whereas most a posteriori MCDM methodologies would require multiple independent applications.
Fig. 1.6 Schematic of a two-step multi-objective optimisation procedure

In Step 1 of the EMO-based multi-objective optimisation (the task shown vertically downwards in Fig. 1.6), multiple trade-off, non-dominated points are found. Thereafter, in Step 2 (the task shown horizontally, towards the right), higher-level information is used to choose one of the obtained trade-off points.
This dual task allows an interesting feature, if applied for solving single-objective optimisation problems. It is easy to realise that single-objective optimisation is a degenerate case of multi-objective optimisation, as shown in detail in another study [20]. In the case of single-objective optimisation having only one globally
optimal solution, Step 1 will ideally find only one solution, thereby not requiring
us to proceed to Step 2. However, in the case of single-objective optimisation
having multiple global optima, both steps are necessary to first find all or multiple
global optima, and then to choose one solution from them by using a higher-level
information about the problem. Thus, although it seems ideal for multi-objective optimisation, the framework suggested in Fig. 1.6 can be thought of as a generic principle for both single- and multiple-objective optimisation.

1.3.2 Generating Classical Methods and EMO

In the generating MCDM approach, the task of finding multiple Pareto-optimal solutions is achieved by executing many independent single-objective optimisations, each time finding a single Pareto-optimal solution. A parametric scalarising approach (such as the weighted-sum approach, the ε-constraint approach, and others) can be used to convert multiple objectives into a parametric single-objective function. By simply varying the parameters (the weight vector or ε-vector) and optimising the scalarised function, different Pareto-optimal solutions can be found. In contrast, in an EMO, multiple Pareto-optimal solutions are sought in a single simulation by emphasising multiple non-dominated and isolated
Fig. 1.7 Generative MCDM methodology employs multiple, independent single-objective optimisations

solutions. We discuss a little later some EMO algorithms describing how such dual
emphasis is provided, but now discuss qualitatively the difference between a
posteriori MCDM and EMO approaches.
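A generating method with the weighted-sum scalarisation can be sketched as below; a crude exhaustive search over a finite candidate set stands in for a full single-objective optimiser, and the objective functions and weight vectors are illustrative assumptions:

```python
def weighted_sum(objectives, weights):
    """Scalarise a list of objective functions with a fixed weight vector."""
    def scalarised(x):
        return sum(w * obj(x) for w, obj in zip(weights, objectives))
    return scalarised

def generate_front(objectives, weight_vectors, candidates):
    """Run one independent single-objective minimisation per weight
    vector, here a plain search over a finite candidate set."""
    return [min(candidates, key=weighted_sum(objectives, w))
            for w in weight_vectors]

# Two convex objectives of one variable: f1 = x^2 and f2 = (x - 2)^2.
f1 = lambda x: x * x
f2 = lambda x: (x - 2.0) ** 2
xs = [i / 10.0 for i in range(21)]  # candidate solutions in [0, 2]
front = generate_front([f1, f2], [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)], xs)
```

Each weight vector yields one Pareto-optimal solution per independent run, which is exactly the repeated effort discussed above; note also that a weighted sum cannot reach solutions on non-convex parts of a Pareto-optimal front.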
Consider Fig. 1.7, in which we sketch how multiple independent parametric single-objective optimisations may find different Pareto-optimal solutions. The Pareto-optimal front corresponds to global optimal solutions of several scalarised objectives. However, during the course of an optimisation task, algorithms must overcome a number of difficulties, such as infeasible regions, local optimal solutions, flat regions of objective functions, isolation of the optimum, etc., to converge to the global optimal solution. Moreover, because of practical limitations, an optimisation task must also be completed in a reasonable computational time. This requires an algorithm to balance the effort its search operators spend on overcoming the above-mentioned difficulties reliably and quickly. When multiple simulations are to be performed to find a set of Pareto-optimal solutions, this balancing act must be performed in every single simulation. Since the simulations are performed independently, no information about the success or failure of previous simulations is used to speed up the process. In difficult multi-objective optimisation problems, such memory-less a posteriori methods may demand a large overall computational overhead to obtain a set of Pareto-optimal solutions. Moreover, even though convergence can be achieved in some problems, independent simulations can never guarantee a good distribution among the obtained points.
EMO, as mentioned earlier, constitutes an inherent parallel search. When a population member overcomes certain difficulties and makes progress towards the Pareto-optimal front, its variable values and their combination reflect this fact. When a recombination takes place between this solution and other population members, such valuable information about variable value combinations gets shared through variable exchanges and blending, thereby making the overall task of finding multiple trade-off solutions a parallelly processed task.
Fig. 1.8 Schematic of the NSGA-II procedure

1.3.3 Elitist Non-dominated Sorting GA or NSGA-II

The NSGA-II procedure [21] is one of the popularly used EMO procedures which attempts to find multiple Pareto-optimal solutions in a multi-objective optimisation problem and has the following three features:
1. it uses an elitist principle,
2. it uses an explicit diversity-preserving mechanism, and
3. it emphasises non-dominated solutions.
At any generation t, the offspring population (say, Qt) is first created by using the parent population (say, Pt) and the usual genetic operators. Thereafter, the two populations are combined to form a new population (say, Rt) of size 2N. Then, the population Rt is classified into different non-domination classes. Thereafter, the new population is filled by points of different non-domination fronts, one at a time. The filling starts with the first non-domination front (of class one) and continues with points of the second non-domination front, and so on. Since the overall population size of Rt is 2N, not all fronts can be accommodated in the N slots available for the new population. All fronts which could not be accommodated are deleted. When the last allowed front is being considered, there may exist more points in the front than the remaining slots in the new population. This scenario is illustrated in Fig. 1.8. Instead of arbitrarily discarding some members from the last front, the points which make the diversity of the selected points the highest are chosen.
The crowded-sorting of the points of the last front which could not be accommodated fully is achieved by sorting them in descending order of their crowding distance values and choosing points from the top of the ordered list. The crowding distance di of point i is a measure of the objective space around i which is not occupied by any other solution in the population. Here, we simply calculate this quantity di by estimating the perimeter of the cuboid (Fig. 1.9) formed by using the nearest neighbours in the objective space as the vertices (we call this the crowding distance).
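A sketch of this crowding-distance computation follows; per objective, the extreme points of a front receive an infinite distance so they are always retained, and interior points accumulate the normalised gap between their two nearest neighbours:

```python
def crowding_distance(front_points):
    """Crowding distance of each point in one non-domination front,
    given as a list of objective vectors."""
    n = len(front_points)
    m = len(front_points[0])
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front_points[i][obj])
        fmin = front_points[order[0]][obj]
        fmax = front_points[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float('inf')  # boundary points
        if fmax == fmin:
            continue  # degenerate objective: no spread to normalise
        for k in range(1, n - 1):
            gap = (front_points[order[k + 1]][obj] -
                   front_points[order[k - 1]][obj])
            dist[order[k]] += gap / (fmax - fmin)
    return dist
```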
Fig. 1.9 The crowding distance calculation

Fig. 1.10 Initial population

Next, we show snapshots of a typical NSGA-II simulation on a two-objective test problem:
ZDT2:  Minimise  f1(x) = x1,
       Minimise  f2(x) = g(x) [1 − √(f1(x)/g(x))],
       where     g(x) = 1 + (9/29) Σᵢ₌₂³⁰ xi²,          (1.3)
                 0 ≤ x1 ≤ 1,
                 −1 ≤ xi ≤ 1,  i = 2, 3, …, 30.
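The test problem of Eq. (1.3) can be evaluated as in the sketch below, following the problem statement above; with this form, g(x) attains its minimum value of 1 when xi = 0 for i ≥ 2, which corresponds to the Pareto-optimal front:

```python
import math

def zdt2(x):
    """Evaluate the two objectives of Eq. (1.3) for a 30-variable
    solution; both objectives are to be minimised."""
    f1 = x[0]
    g = 1.0 + (9.0 / 29.0) * sum(xi ** 2 for xi in x[1:])
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```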
NSGA-II is run with a population size of 100 for 100 generations. The variables are treated as real numbers, and an SBX recombination operator with pc = 0.9 and distribution index ηc = 10 and a polynomial mutation operator [1] with pm = 1/n (n is the number of variables) and distribution index ηm = 20 are used. Figure 1.10 shows the initial population on the objective space. Figures 1.11, 1.12 and 1.13 show the populations at generations 10, 30 and 100, respectively. The figures illustrate how the operators of NSGA-II cause the population to move towards the Pareto-optimal front over the generations. At generation 100, the population comes very close to the true Pareto-optimal front.
Fig. 1.11 Population at generation 10

Fig. 1.12 Population at generation 30

Fig. 1.13 Population at generation 100
Fig. 1.14 Obtained non-dominated solutions using NSGA

1.4 Applications of EMO

Since the early development of EMO algorithms in 1993, they have been applied
to many real-world and interesting optimisation problems. Descriptions of some of
these studies can be found in books [1, 22, 23], dedicated conference proceedings
[24–27], and domain-specific books, journals and proceedings. In this section, we
describe one case study which clearly demonstrates the EMO philosophy which
we described in Sect. 1.3.1.

1.4.1 Spacecraft Trajectory Design

Coverstone-Carroll et al. [28] proposed a multi-objective optimisation technique using the original non-dominated sorting genetic algorithm (NSGA) [29] to find multiple trade-off solutions in a spacecraft trajectory optimisation problem. To evaluate a solution (trajectory), the SEPTOP (Solar Electric Propulsion Trajectory Optimisation) software [30] is called, and the delivered payload mass and the total time of flight are calculated. The multi-objective optimisation problem has eight decision variables controlling the trajectory; three objective functions: (i) maximise the delivered payload at destination, (ii) maximise the negative of the time of flight, and (iii) maximise the total number of heliocentric revolutions in the trajectory; and three constraints limiting the SEPTOP convergence error and the minimum and maximum bounds on heliocentric revolutions.
On the Earth–Mars rendezvous mission, the study found interesting trade-off solutions [28]. Using a population of size 150, the NSGA was run for 30 generations. The obtained non-dominated solutions are shown in Fig. 1.14 for two of the three objectives, and some selected solutions are shown in Fig. 1.15. It is clear that there exist short-time flights with smaller delivered payloads (solution marked 44)

Fig. 1.15 Four trade-off trajectories

and long-time flights with larger delivered payloads (solution marked 36). Solution
44 can deliver a mass of 685.28 kg and requires about 1.12 years. On other hand,
an intermediate solution 72 can deliver almost 862 kg with a travel time of about
3 years. In these figures, each continuous part of a trajectory represents a thrusting
arc and each dashed part of a trajectory represents a coasting arc. It is interesting to
note that only a small improvement in delivered mass occurs when comparing the
solutions 73 and 72, with a sacrifice in flight time of about a year.
The multiplicity in trade-off solutions, as depicted in Fig. 1.15, is what we
envisaged discovering in a multi-objective optimisation problem by using an a
posteriori procedure, such as an EMO algorithm. This aspect was also discussed in
Fig. 1.6. Once such a set of solutions with a good trade-off among objectives is
obtained, one can analyze them for choosing a particular solution. For example, in
this problem context, it makes sense not to choose a solution between points 73
and 72, owing to the poor trade-off between the objectives in this range. On the
other hand, choosing a solution within points 44 and 73 is worthwhile, but which
particular solution to choose depends on other mission related issues. But by first
finding a wide range of possible solutions and revealing the shape of the front, EMO
can help narrow down the choices and allow a decision maker to make a better
decision. Without the knowledge of such a wide variety of trade-off solutions, a

Fig. 1.16 Non-constrained-domination fronts

proper decision-making may be a difficult task. Although one can choose a
scalarised objective (such as the ε-constraint method with a particular ε vector) and
find the resulting optimal solution, the decision-maker will always wonder what
solution would have been derived if a different ε vector had been chosen. For example,
if ε₁ = 2.5 years is chosen and the mass delivered to the target is maximised, a
solution in between points 73 and 72 will be found. As discussed earlier, this part
of the Pareto-optimal front does not provide the best trade-offs between objectives
that this problem can offer. A lack of knowledge of good trade-off regions before a
decision is made may allow the decision maker to settle for a solution which,
although optimal, may not be a good compromise solution. The EMO approach
provides a flexible and pragmatic procedure for finding a well-diversified set of
solutions simultaneously, so as to enable picking a particular region for further
analysis or a particular solution for implementation.

1.5 Constraint Handling in EMO

A simple constraint handling method used in EMO modifies the binary tournament selection, where
two solutions are picked from the population and the better solution is chosen. In
the presence of constraints, each solution can be either feasible or infeasible. Thus,
there may be at most three situations: (i) both solutions are feasible, (ii) one is
feasible and other is not, and (iii) both are infeasible. We consider each case by
simply redefining the domination principle as follows (we call it the constrained-domination
condition for any two solutions $x^{(i)}$ and $x^{(j)}$):

Definition 2 A solution $x^{(i)}$ is said to ‘constrained-dominate’ a solution
$x^{(j)}$ (or $x^{(i)} \preceq_c x^{(j)}$), if any of the following conditions is true:

1. Solution $x^{(i)}$ is feasible and solution $x^{(j)}$ is not.
2. Solutions $x^{(i)}$ and $x^{(j)}$ are both infeasible, but solution $x^{(i)}$ has a smaller
   constraint violation, which can be computed by adding the normalised violations of
   all constraints:

   $$\mathrm{CV}(x) = \sum_{j=1}^{J} \langle \bar{g}_j(x) \rangle + \sum_{k=1}^{K} \left|\bar{h}_k(x)\right|,$$

   where $\langle a \rangle = -a$ if $a < 0$ and is zero otherwise. The normalisation is
   achieved with the population-minimum ($\langle g_j \rangle_{\min}$) and population-maximum
   ($\langle g_j \rangle_{\max}$) constraint violations:
   $\bar{g}_j(x) = \left(\langle g_j(x) \rangle - \langle g_j \rangle_{\min}\right) / \left(\langle g_j \rangle_{\max} - \langle g_j \rangle_{\min}\right)$.
3. Solutions $x^{(i)}$ and $x^{(j)}$ are both feasible and solution $x^{(i)}$ dominates solution $x^{(j)}$ in the
   usual sense (Definition 1).
The above change in the definition requires a minimal change in the NSGA-II
procedure described earlier. Figure 1.16 shows the non-domination fronts on a six-
membered population because of the introduction of two constraints (the mini-
mization problem is described as CONSTR elsewhere [1]). In the absence of the
constraints, the non-domination fronts (shown by dashed lines) would have been
((1,3,5), (2,6), (4)), but in their presence, the new fronts are ((4,5), (6), (2), (1),
(3)). The first non-domination front consists of the ‘best’ (that is, non-dominated
and feasible) points from the population and any feasible point lies on a better non-
domination front than an infeasible point.

1.6 Performance Measures Used in EMO

There are two goals of an EMO procedure: (i) a good convergence to the Pareto-
optimal front and (ii) a good diversity in obtained solutions. As the two goals are
conflicting in nature, comparing two sets of trade-off solutions also requires
different performance measures. In the early years of EMO research, three different
sets of performance measures were used:
1. metrics evaluating convergence to the known Pareto-optimal front (such as
error ratio, distance from reference set, etc.),
2. metrics evaluating spread of solutions on the known Pareto-optimal front (such
as spread, spacing, etc.), and
3. metrics evaluating certain combinations of convergence and spread of solutions
(such as hypervolume, coverage, R-metrics, etc.).
A detailed study [31] comparing most existing performance metrics based on
out-performance relations has concluded that R-metrics suggested by [32] are the
best. However, a study has argued that a single unary performance measure (any of
the first two metrics described above in the enumerated list) cannot adequately
determine a true winner, as both aspects of convergence and diversity cannot be

Fig. 1.17 The hypervolume enclosed by the non-dominated solutions

measured by a single performance metric [33]. That study also concluded that
binary performance metrics (indicating usually two different values when a set of
solutions A is compared with B and B is compared with A), such as the epsilon-indicator,
the binary hypervolume indicator, utility indicators R1 to R3, etc., are better
measures for multi-objective optimisation. The flip side is that binary metrics
compute $M(M-1)$ performance values when $M$ algorithms are compared pair-wise,
thereby making them difficult to use in practice. In addition, unary and binary
attainment indicators of [34, 35] are of great importance. Figures 1.17 and 1.18
illustrate the hypervolume and attainment indicators. Attainment surface is useful
to determine a representative front obtained from multiple runs of an EMO
algorithm. In general, the 50% attainment surface can be used to indicate the front that is
dominated by 50% of all obtained non-dominated points.
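As a concrete illustration of the hypervolume indicator of Fig. 1.17, the two-objective case can be computed exactly by a simple sweep. This is a minimal sketch for minimisation problems, assuming every supplied point dominates the chosen reference point:

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of 2-D points with respect to a reference
    point, assuming both objectives are minimised."""
    pts = sorted(points)           # sort by first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:           # skip points dominated by an earlier one
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For example, the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) encloses a hypervolume of 6. Higher-dimensional hypervolume needs more sophisticated algorithms, which is one reason the metric becomes expensive for many objectives.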

1.7 EMO and Decision-Making

Finding a set of representative Pareto-optimal solutions using an EMO procedure


is only half the task; choosing a single preferred solution from the obtained set is
also an equally important task. Developments in this area have followed three main
directions.
In the a priori approach, preference information of a decision-maker (DM) is
used to focus the search effort into a part of the Pareto-optimal front, instead of the
entire frontier. For this purpose, a reference point approach [36], a reference
direction approach [37], ‘light beam’ approach [38], etc. have been incorporated in
a NSGA-II procedure to find a preferred part of the Pareto-optimal frontier.
In the a posteriori approach, preference information is used after a set of rep-
resentative Pareto-optimal solutions are found by an EMO procedure. The MCDM
approaches including reference point method, Tschebyscheff metric method, etc.
[17] can be used. This approach is now believed to be applicable only to two, three
or at most four-objective problems. As the number of objectives increases, EMO
methodologies exhibit difficulties in converging close to the Pareto-optimal front,
and the a posteriori approaches become a difficult proposition.

Fig. 1.18 The attainment surface is created for a number of non-dominated solutions
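A posteriori selection with the Tschebyscheff metric mentioned above can be sketched in a few lines; the function name, the reference (aspiration) point, and the equal weights are illustrative assumptions, not taken from [17]:

```python
def tchebycheff_pick(front, reference, weights):
    """Pick, from an obtained non-dominated front, the solution minimising
    the weighted Tchebycheff metric max_i w_i * |f_i - z_i| to a
    reference (aspiration) point z."""
    def metric(f):
        return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, reference))
    return min(front, key=metric)
```

With the front {(1, 3), (2, 2), (3, 1)}, the ideal point (0, 0) as reference, and equal weights, the intermediate solution (2, 2) is selected, which matches the intuition that a balanced compromise is preferred.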
In the interactive approach, decision-maker (DM) preference information is
integrated into an EMO algorithm during the optimisation run. In the progressively
interactive EMO approach [39], the DM is called after every s generations and is
presented with a few well-diversified solutions chosen from the current non-
dominated front. The DM is then asked to rank the solutions according to pref-
erence. The information is then processed through an optimisation task to capture
DM’s preference using a utility function. This utility function is then used to
drive NSGA-II’s search till the procedure is repeated in the next DM call.
The decision-making procedure integrated with an EMO procedure makes the
multi-objective optimisation procedure complete. More such studies must now be
executed to make EMO more usable in practice.

1.8 Multi-objectivisation

Interestingly, the act of finding multiple trade-off solutions using an EMO pro-
cedure has found its application outside the realm of solving multi-objective
optimisation problems per se. The concept of finding multiple trade-off solutions
using an EMO procedure is applied to solve other kinds of optimisation problems
that are otherwise not multi-objective in nature. For example, the EMO concept is
used to solve constrained single-objective optimisation problems by converting the
task into a two-objective optimisation task of additionally minimizing an aggre-
gate constraint violation [40]. This eliminates the need to specify a penalty
parameter while using a penalty based constraint handling procedure. A recent
study [41] utilises a bi-objective NSGA-II to find a Pareto-optimal frontier cor-
responding to minimizations of the objective function and constraint violation. The
frontier is then used to estimate an appropriate penalty parameter, which is then
used to formulate a penalty based local search problem and is solved using a
classical optimisation method. The approach is shown to require one or two
orders of magnitude fewer function evaluations than existing constraint handling
methods on a number of standard test problems.
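The conversion used in [40] can be sketched as a mapping from a constrained single-objective problem to the two-objective pair (f(x), CV(x)); the helper name and the toy problem below are hypothetical:

```python
def to_biobjective(f, constraints):
    """Convert a constrained single-objective problem into the two-objective
    task of minimising the objective f(x) and, as a second objective, the
    aggregate violation of constraints of the form g_j(x) >= 0."""
    def evaluate(x):
        cv = sum(max(0.0, -g(x)) for g in constraints)  # violation of each g_j
        return (f(x), cv)
    return evaluate

# Hypothetical example: minimise x^2 subject to x >= 1.
evaluate = to_biobjective(lambda x: x * x, [lambda x: x - 1.0])
```

A feasible point such as x = 2 maps to (4.0, 0.0); an infeasible one such as x = 0 maps to (0.0, 1.0). An EMO algorithm applied to `evaluate` then trades off objective value against violation, with no penalty parameter to tune.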
A well-known difficulty in genetic programming studies, called the ‘bloating’,
arises because of the continual increase in size of genetic programs with iteration.
The reduction of bloating by minimizing the size of programs as an additional
objective helped find high-performing solutions with a smaller size of the code
[42]. Minimizing the intra-cluster distance and maximizing inter-cluster distance
simultaneously in a bi-objective formulation of a clustering problem is found to
yield better solutions than the usual single-objective minimization of the ratio of
the intra-cluster distance to the inter-cluster distance [43]. A recently published
book [44] describes many such interesting applications in which EMO method-
ologies have helped solve problems which are otherwise (or traditionally) not
treated as multi-objective optimisation problems.

1.8.1 Knowledge Discovery Through EMO

One striking difference between a single-objective optimisation and multi-objec-


tive optimisation is the cardinality of the solution set. In the latter, multiple
solutions are the outcome and each solution is theoretically an optimal solution
corresponding to a particular trade-off among the objectives. Thus, if an EMO
procedure can find solutions close to the true Pareto-optimal set, what we have in
our hand are a number of high-performing solutions trading-off the conflicting
objectives considered in the study. As they are all near optimal, these solutions can
be analyzed for finding properties which are common to them. Such a procedure
can then become a systematic approach in deciphering important and hidden
properties which optimal and high-performing solutions must have for that
problem. In a number of practical problem-solving tasks, the so-called innoviza-
tion procedure is shown to find important insight into high-performing solutions
[45]. Figure 1.19 shows that of the five decision variables involved in an electric
motor design problem involving minimum cost and maximum peak-torque, four
variables have identical values for all Pareto-optimal solutions [46]. Of the two
allowable electric connections, the ‘Y’-type connection; of three laminations, ‘Y’-
type lamination; of 10–80 different turns, 18 turns, and of 16 different wire sizes,
16-gauge wire remain common to all Pareto-optimal solutions. The only way the
solutions differ relates to the number of laminations. In fact, for a
motor with a higher peak-torque, a linearly increasing number of laminations
becomes a recipe for optimal motor design. Such useful properties are expected to
exist in practical problems, as they follow certain scientific and engineering
principles at the core, but finding them through a systematic scientific procedure
had not been paid much attention in the past. The principle of first searching for
multiple trade-off and high-performing solutions using a multi-objective optimi-
sation procedure and then analysing them to discover useful knowledge certainly
remains a viable way forward. The current efforts [47] to automate the knowledge
extraction procedure through a sophisticated data-mining task are promising and
should make the overall approach more appealing to practitioners.

Fig. 1.19 Innovization study of an electric motor design problem
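A minimal form of such an innovization analysis simply checks which decision variables stay (near-)constant across the obtained trade-off set, as four of the five motor variables do above. This sketch is illustrative; the function name and tolerance are assumptions:

```python
import numpy as np

def common_properties(pareto_x, tol=1e-6):
    """Flag decision variables that are (near-)identical across a set of
    Pareto-optimal decision vectors -- the kind of shared property an
    innovization study looks for. Returns the indices of such variables."""
    X = np.asarray(pareto_x, dtype=float)
    spread = X.max(axis=0) - X.min(axis=0)   # per-variable range over the set
    return [i for i, s in enumerate(spread) if s <= tol]
```

Real innovization studies go further, fitting analytical relationships (such as the linear lamination rule above) between variables and objectives, but constancy detection is the natural first pass.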

1.9 Hybrid EMO Procedures

The search operators used in EMO are generic. There is no guarantee that an EMO
will find any Pareto-optimal solution in a finite number of solution evaluations for
a randomly chosen problem. However, as discussed above, EMO methodologies
provide adequate emphasis to currently non-dominated and isolated solutions so
that population members progress towards the Pareto-optimal front iteratively. To
make the overall procedure faster and to perform the task with a more guaranteed
manner, EMO methodologies must be combined with mathematical optimisation
techniques having local convergence properties. A simple-minded approach would
be to start the optimisation task with an EMO and the solutions obtained from
EMO can be improved by optimising a composite objective derived from multiple
objectives to ensure a good spread by using a local search technique. Another
approach would be to use a local search technique as a mutation-like operator in an
EMO so that all population members are at least guaranteed local optimal solu-
tions. A study [48] has demonstrated that the latter approach is an overall better
approach from a computational point of view.
However, the use of a local search technique within an EMO has another
advantage. As a local search can find a weak or a near Pareto-optimal point, the
presence of such a super-individual in a population can cause other near Pareto-optimal
solutions to be found as an outcome of recombination of the super-individual
with other population members. A recent study has demonstrated this
aspect [49].

1.10 Practical EMOs

Here, we describe some recent advances of EMO in which different practicalities


are considered.

1.10.1 EMO for Many Objectives

With the success of EMO in two and three objective problems, it has become an
obvious quest to investigate if an EMO procedure can also be used to solve four or
more objective problems. An earlier study [50] with eight objectives revealed
somewhat negative results. EMO methodologies work by emphasizing non-dominated
solutions in a population. Unfortunately, as the number of objectives
increases, most population members in a randomly created population tend to
become non-dominated with each other. For example, in a three-objective scenario,
about 10% members in a population of size 200 are non-dominated, whereas in a
10-objective problem scenario, as high as 90% members in a population of size
200 are non-dominated. Thus, in a large-objective problem, an EMO algorithm
runs out of space to introduce new population members into a generation, thereby
causing a stagnation in the performance of an EMO algorithm. Moreover, an
exponentially large population size is needed to represent a large-dimensional
Pareto-optimal front. This makes an EMO procedure slow and computationally
less attractive. However, practically speaking, even if an algorithm can find tens of
thousands of Pareto-optimal solutions for a multi-objective optimisation problem,
beyond giving an idea of the nature and shape of the front, they are simply
too many to be useful for any decision-making purposes. Keeping these views in
mind, EMO researchers have taken two different approaches in dealing with large-
objective problems.
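The growth in the non-dominated fraction is easy to reproduce empirically. This sketch estimates it for a random population; the exact percentages quoted above will of course vary with the random sample:

```python
import random

def nondominated_fraction(n_points, n_obj, seed=0):
    """Estimate the fraction of mutually non-dominated points in a random
    population (minimisation) -- illustrating why domination-based selection
    loses selection pressure as the number of objectives grows."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_obj)] for _ in range(n_points)]
    def dominated(p, q):  # true if q dominates p
        return all(b <= a for a, b in zip(p, q)) and any(b < a for a, b in zip(p, q))
    nd = sum(1 for p in pop if not any(dominated(p, q) for q in pop if q is not p))
    return nd / n_points
```

Running this with a population of 200 shows a small non-dominated fraction for two objectives and a fraction approaching one for ten objectives, in line with the discussion above.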

1.10.1.1 Finding a Partial Set

Instead of finding the complete Pareto-optimal front in a problem having a large


number of objectives, EMO procedures can be used to find only a part of the
Pareto-optimal front. This can be achieved by indicating preference information by
various means. Ideas, such as reference point based EMO [36, 51], ‘light beam
search’ [38], biased sharing approaches [52], cone dominance [53], etc. are sug-
gested for this purpose. Each of these studies has shown that, for up to 10- and
20-objective problems, although finding the complete frontier is difficult, finding a
partial frontier corresponding to certain preference information is not that difficult
a proposition. Despite the dimension of the partial frontier being identical to that of
the complete Pareto-optimal frontier, the closeness of target points in representing
the desired partial frontier helps make only a small fraction of an EMO population
to be non-dominated, thereby making room for new and hopefully better solutions
to be found and stored.
The computational efficiency and accuracy observed in some EMO implementations
have led to a distributed EMO study [53] in which each processor in a
distributed computing environment receives a unique cone for defining domina-
tion. The cones are designed carefully so that at the end of such a distributed
computing EMO procedure, solutions are found to exist in various parts of the
complete Pareto-optimal front. A collection of these solutions together is then able
to provide a good representation of the entire original Pareto-optimal front.

1.10.1.2 Identifying and Eliminating Redundant Objectives

Many practical optimisation problems can easily list a large number of objectives
(often more than 10), as many different criteria or goals are often of interest
to practitioners. In most instances, it is not entirely definite whether the chosen
objectives are all in conflict with each other or not. For example, minimization of
weight and minimization of cost of a component or a system are often mistaken to
have an identical optimal solution, but may lead to a range of trade-off optimal
solutions. Practitioners do not take any chance and tend to include all (or as many
as possible) objectives into the optimisation problem formulation. There is another
fact which is more worrisome. Two apparently conflicting objectives may show a
good trade-off when evaluated with respect to some randomly created solutions.
But if these two objectives are evaluated for solutions close to their optima, they
tend to show a good correlation. That is, although objectives can exhibit con-
flicting behavior for random solutions, near their Pareto-optimal front, the conflict
vanishes and optimum of one can approach close to the optimum of the other.
Thinking of the existence of such problems in practice, recent studies [54, 55]
have performed linear and non-linear principal component analysis (PCA) to a set
of EMO-produced solutions. Objectives that are positively correlated with each
other on the obtained NSGA-II solutions are identified and declared redundant.
The EMO procedure is then restarted with the non-redundant
objectives. This combined EMO–PCA procedure is continued until no further
reduction in the number of objectives is possible. The procedure has handled
practical problems involving five and more objectives and has shown to reduce the
choice of real conflicting objectives to a few. On test problems, the proposed
approach has shown to reduce an initial 50-objective problem to the correct three-
objective Pareto-optimal front by eliminating 47 redundant objectives. Another
study [56] used an exact and a heuristic-based conflict identification approach on a
given set of Pareto-optimal solutions. For a given error measure, an effort is made
to identify a minimal subset of objectives which do not alter the original domi-
nance structure on a set of Pareto-optimal solutions. This idea has recently been
introduced within an EMO [57], but a continual reduction of objectives through a
successive application of the above procedure would be interesting.
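A much cruder stand-in for the PCA-based reduction of [54, 55] — simple correlation thresholding on an obtained solution set — can be sketched as follows; the threshold value and function name are illustrative assumptions:

```python
import numpy as np

def redundant_objectives(F, threshold=0.95):
    """Flag objectives whose values are strongly positively correlated with
    an earlier, retained objective on a given solution set (rows = solutions,
    columns = objective values). Returns indices of redundant objectives."""
    F = np.asarray(F, dtype=float)
    C = np.corrcoef(F, rowvar=False)   # objective-by-objective correlations
    redundant = []
    for j in range(1, F.shape[1]):
        # redundant if positively correlated with any retained earlier objective
        if any(C[i, j] >= threshold for i in range(j) if i not in redundant):
            redundant.append(j)
    return redundant
```

On a solution set where the third objective is a scaled copy of the first, only the third is flagged; the genuinely conflicting (negatively correlated) pair is retained. The published methods are more careful, using principal components rather than raw pairwise correlations.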

This is a promising area of EMO research, and more computationally efficient
objective-reduction techniques are definitely needed for the purpose.
In this direction, the use of alternative definitions of domination is important. One
such idea redefined domination: a solution is said to dominate
another solution if the former is better than the latter in more objectives. This
certainly excludes finding the entire Pareto-optimal front and helps an EMO to
converge near the intermediate and central part of the Pareto-optimal front.
Another EMO study used a fuzzy dominance [58] relation (instead of Pareto-
dominance), in which superiority of one solution over another in any objective is
defined in a fuzzy manner. Many other such definitions are possible and can be
implemented based on the problem context.

1.10.2 Dynamic EMO

Dynamic optimisation involves objectives, constraints, or problem parameters


which change over time. This means that as an algorithm is approaching the opti-
mum of the current problem, the problem definition has changed and now the
algorithm must solve a new problem. In such dynamic optimisation problems,
an algorithm is usually not expected to find the optimum; instead, it is expected
to track the changing optimum with iteration. The performance of a dynamic opti-
miser then depends on how close it is able to track the true optimum (which is
changing with iteration or time). Thus, practically speaking, optimisation algorithms
may hope to handle problems which do not change significantly with time. From the
algorithm’s point of view, since the problem is not expected to change
too much from one time instance to another and some good solutions to the current
problem are already at hand in a population, researchers have ventured into solving such
dynamic optimisation problems using evolutionary algorithms [59].
A recent study [60] proposed the following procedure for dynamic optimisation
involving single or multiple objectives. Let $P(t)$ be a problem which changes with
time $t$ (from $t = 0$ to $t = T$). Despite the continual change in the problem, we
assume that the problem is fixed for a time period $\tau$, which is not known a priori,
and the aim of the (offline) dynamic optimisation study is to identify a suitable
value of $\tau$ for an accurate as well as computationally faster approach. For this
purpose, an optimisation algorithm with $\tau$ as a fixed time period is run from $t = 0$ to
$t = T$, with the problem assumed fixed for every $\tau$ time period. A measure $C(\tau)$
determines the performance of the algorithm and is compared with a pre-specified
and expected value $C_L$. If $C(\tau) \geq C_L$ for the entire time domain of the execution of
the procedure, we declare $\tau$ to be a permissible length of stasis. Then, we try a
reduced value of $\tau$ and check whether a smaller length of stasis is also acceptable. If not,
we increase $\tau$ to allow the optimisation problem to remain in stasis for a longer time,
so that the chosen algorithm has more iterations (time) to perform better.
Such a procedure will eventually come up with a time period $\tau$ which is
the smallest length of stasis allowed for the optimisation algorithm to work, based on the

chosen performance requirement. Based on this study, a number of test problems


and a hydro-thermal power dispatch problem have been recently tackled [60].
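The offline search for the smallest permissible stasis period can be sketched as follows; `run_algorithm` is a hypothetical stand-in for executing the optimiser over the whole time domain with a fixed stasis period and returning the worst observed performance measure:

```python
def smallest_stasis(run_algorithm, candidate_taus, c_limit):
    """Try candidate stasis periods tau in increasing order and return the
    first (i.e. smallest) one whose performance measure C(tau) stays at or
    above the required level C_L over the whole time domain."""
    for tau in sorted(candidate_taus):
        if run_algorithm(tau) >= c_limit:   # C(tau) >= C_L: tau is permissible
            return tau
    return None                              # no candidate period is permissible
```

The monotonic intuition is that a longer stasis period gives the optimiser more iterations per problem instance and therefore better tracking performance, so the first permissible candidate in increasing order is the smallest one.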
In the case of dynamic multi-objective problem solving tasks, there is an addi-
tional difficulty which is worth mentioning here. Not only does an EMO algorithm
need to find or track the changing Pareto-optimal fronts; in a real-world implementation,
it must also accommodate an immediate decision about which solution to
implement from the current front before the problem changes to a new one. Deci-
sion-making analysis is considered to be time-consuming involving execution of
analysis tools, higher-level considerations, and sometimes group discussions. If
dynamic EMO is to be applied in practice, automated procedures for making
decisions must be developed. Although it is not clear how to generalize such an
automated decision-making procedure across different problems, problem-specific tools
are certainly possible and constitute a worthwhile and fertile area for research.

1.10.3 Uncertainty Handling Using EMO

A major surge in EMO research has taken place in handling uncertainties among
decision variables and problem parameters in multi-objective optimisation. Prac-
tice is full of uncertainties and almost no parameter, dimension, or property can be
guaranteed to be fixed at a value it is aimed at. In such scenarios, evaluation of a
solution is not precise, and the resulting objective and constraint function values
become probabilistic quantities. Optimisation algorithms usually handle such
stochasticities either by crude methods, such as Monte Carlo simulation of the
uncertain variables and parameters, or by sophisticated stochastic programming
methods involving nested optimisation
techniques [61]. When these effects are taken care of during the optimisation
process, the resulting solution is usually different from the optimum solution of the
problem and is known as a ‘robust’ solution. Such an optimisation procedure will
then find a solution which may not be the true global optimum solution, but one
which is less sensitive to uncertainties in decision variables and problem param-
eters. In the context of multi-objective optimisation, a consideration of uncer-
tainties for multiple objective functions will result in a robust frontier which may
be different from the globally Pareto-optimal front. Each and every point on the
robust frontier is then guaranteed to be less sensitive to uncertainties in decision
variables and problem parameters. Some such studies in EMO are [62, 63].
When the evaluation of constraints under uncertainties in decision variables and
problem parameters are considered, deterministic constraints become stochastic
(they are also known as ‘chance constraints’) and involve a reliability index $R$.
A constraint $g(x) \geq 0$ then becomes $\mathrm{Prob}(g(x) \geq 0) \geq R$. In
order to evaluate the left side of the above chance constraint, a separate optimisation
methodology [64] is needed, thereby making the overall algorithm a bi-level
optimisation procedure. Approximate single-loop algorithms exist [65] and
recently one such methodology has been integrated with an EMO [61] and shown

to find a ‘reliable’ frontier corresponding to a specified reliability index, instead of
the Pareto-optimal frontier, in problems having uncertainty in decision variables
and problem parameters. More such methodologies are needed, as uncertainty is
an integral part of practical problem-solving, and multi-objective optimisation
researchers must look for better and faster algorithms to handle it.
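The left side of such a chance constraint can be estimated crudely by Monte Carlo sampling, as noted above. This sketch assumes additive Gaussian noise on the decision variables; the function name and noise model are illustrative:

```python
import random

def reliability(g, x, sigma, n_samples=20000, seed=0):
    """Monte Carlo estimate of Prob(g(x + noise) >= 0) under additive
    Gaussian uncertainty of standard deviation sigma in each decision
    variable -- an estimate of the left side of a chance constraint."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if g([xi + rng.gauss(0.0, sigma) for xi in x]) >= 0.0)
    return hits / n_samples
```

For the constraint $x \geq 0$ evaluated at $x = 1$ with $\sigma = 0.5$, the estimate approaches the analytical value $\Phi(2) \approx 0.977$. The expense of such nested sampling inside every solution evaluation is exactly why the single-loop approximations cited above are attractive.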

1.10.4 Meta-Model Assisted EMO

The practice of optimisation algorithms is often limited by the computational


overheads associated with evaluating solutions. Certain problems involve
expensive computations, such as the numerical solution of partial differential
equations describing the physics of the problem, finite difference computations
for analysing a solution, or computational fluid dynamics simulations to
study the performance of a solution over a changing environment. In some
such problems, evaluation of each solution to compute constraints and objective
functions may take a few hours to a day or two. In such scenarios, even if an
optimisation algorithm needs 100 solutions to get anywhere close to a good and
feasible solution, the application needs an easy three to six months of continuous
computational time. For most practical purposes, this is considered a ‘luxury’ in an
industrial set-up. Optimisation researchers are therefore constantly coming up
with approximate yet faster algorithms.
Meta-models for objective functions and constraints have been developed for
this purpose. Two different approaches are mostly followed. In one approach, a
sample of solutions are used to generate a meta-model (approximate model of the
original objectives and constraints) and then efforts have been made to find the
optimum of the meta-model, assuming that the optimal solutions of both the meta-
model and the original problem are similar to each other [66, 67]. In the other
approach, a successive meta-modelling approach is used in which the algorithm
starts to solve the first meta-model obtained from a sample of the entire search
space [68–70]. As the solutions start to focus near the optimum region of the meta-
model, a new and more accurate meta-model is generated in the region dictated by
the solutions of the previous optimisation. A coarse-to-fine-grained meta-model-
ling technique based on artificial neural networks is shown to reduce the com-
putational effort by about 30–80% on different problems [68]. Other successful
meta-modelling implementations for multi-objective optimisation based on Kri-
ging and response surface methodologies exist [70, 71].
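The first meta-modelling approach can be caricatured in one dimension: fit a cheap surrogate to a few expensive evaluations, then optimise the surrogate instead of the true function. This sketch uses a quadratic fit purely for illustration; the studies cited use neural networks, Kriging, or response surfaces:

```python
import numpy as np

def surrogate_minimum(expensive_f, samples):
    """Fit a quadratic surrogate y ~ a*x^2 + b*x + c to a handful of
    expensive 1-D evaluations and return the surrogate's minimiser.
    Falls back to the best sampled point if the fit is not convex."""
    x = np.asarray(samples, dtype=float)
    y = np.array([expensive_f(v) for v in x])   # the only expensive calls
    a, b, c = np.polyfit(x, y, 2)               # coefficients, highest degree first
    return -b / (2.0 * a) if a > 0 else float(x[np.argmin(y)])
```

The successive variant described above would now re-sample around this estimated optimum, fit a finer surrogate there, and repeat, so that the expensive function is only ever evaluated at a handful of points per cycle.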

1.11 Conclusions

This chapter has introduced the fast-growing field of multi-objective optimisation


based on evolutionary algorithms. First, the principles of single-objective EO
techniques have been discussed so that readers can visualize the differences

between EO and classical optimisation methods. The EMO principle of handling


multi-objective optimisation problems is to find a representative set of Pareto-
optimal solutions. Since an EO uses a population of solutions in each iteration, EO
procedures are potentially viable techniques to capture a number of trade-off near-
optimal solutions in a single simulation run. This chapter has described a number
of popular EMO methodologies, presented some simulation studies on test prob-
lems, and discussed how EMO principles can be useful in solving real-world
multi-objective optimisation problems through a case study of spacecraft trajectory
optimisation.
Finally, this chapter has discussed the potential of EMO and its current research
activities. The principle of EMO has been utilised to solve other optimisation
problems that are otherwise not multi-objective in nature. The diverse set of EMO
solutions has been analyzed to find hidden common properties that can act as
valuable knowledge to a user. EMO procedures have been extended to enable them
to handle various practicalities. Finally, the EMO task is now being suitably
combined with decision-making activities in order to make the overall approach
more useful in practice.
EMO addresses an important and inevitable aspect of practical problem-solving: the presence of multiple conflicting objectives. It has enjoyed a steady rise in popularity over a short time, and within evolutionary computing and optimisation, EMO research and application currently stands as one of the fastest-growing fields. EMO methodologies are yet to be applied to many areas of science and engineering; with such applications, the true value and importance of EMO will become evident.

Acknowledgments The author acknowledges the support of, and his association with, the University of Skövde, Sweden, and the Aalto University School of Economics, Helsinki. This chapter contains some excerpts from previous publications by the same author: ‘Introduction to Evolutionary Multi-Objective Optimisation’, in J. Branke, K. Deb, K. Miettinen and R. Slowinski (Eds.), Multiobjective Optimization: Interactive and Evolutionary Approaches (LNCS 5252) (pp. 59–96), 2008, Berlin: Springer; and ‘Recent Developments in Evolutionary Multi-Objective Optimization’, in M. Ehrgott et al. (Eds.), Trends in Multiple Criteria Decision Analysis (pp. 339–368), 2010, Berlin: Springer.

References

1. Deb, K. (2001). Multi-objective optimisation using evolutionary algorithms. Chichester, UK: Wiley.
2. Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Reading, MA: Addison-Wesley.
3. Deb, K., Reddy, A. R., & Singh, G. (2003). Optimal scheduling of casting sequence using
genetic algorithms. Journal of Materials and Manufacturing Processes 18(3):409–432.
4. Deb, K. (1999). An introduction to genetic algorithms. Sadhana 24(4):293–315
5. Deb, K., & Agrawal, R. B. (1995). Simulated binary crossover for continuous search space.
Complex Systems 9(2):115–148
6. Deb, K., Anand, A., Joshi, D. (2002). A computationally efficient evolutionary algorithm for
real-parameter optimisation. Evolutionary Computation Journal 10(4):371–395
7. Storn, R., Price, K. (1997). Differential evolution—A fast and efficient heuristic for global
optimisation over continuous spaces. Journal of Global Optimization 11:341–359
8. Rudolph, G. (1994). Convergence analysis of canonical genetic algorithms. IEEE
Transactions on Neural Network 5(1):96–101
9. Michalewicz, Z. (1992). Genetic Algorithms + Data Structures = Evolution Programs. Berlin: Springer.
10. Gen, M., & Cheng, R. (1997). Genetic algorithms and engineering design. New York: Wiley.
11. Bäck, T., Fogel, D., & Michalewicz, Z. (Eds.). (1997). Handbook of evolutionary
computation. Bristol/New York: Institute of Physics Publishing/Oxford University Press.
12. Deb, K., Tiwari, R., Dixit, M., & Dutta, J. (2007). Finding trade-off solutions close to KKT
points using evolutionary multi-objective optimisation. In Proceedings of the congress on
evolutionary computation (CEC-2007) (pp. 2109–2116)
13. Holland, J. H. (1975). Adaptation in natural and artificial systems. Ann Arbor, MI: University of Michigan Press.
14. Vose, M. D., Wright, A. H., & Rowe, J. E. (2003). Implicit parallelism. In Proceedings of
GECCO 2003 (lecture notes in computer science) (Vol. 2723–2724). Heidelberg: Springer.
15. Jansen, T., & Wegener, I. (2001). On the utility of populations. In Proceedings of the genetic
and evolutionary computation conference (GECCO 2001) (pp. 375–382). San Mateo, CA:
Morgan Kaufmann.
16. Radcliffe, N. J. (1991). Forma analysis and random respectful recombination. In Proceedings
of the fourth international conference on genetic algorithms (pp. 222–229).
17. Miettinen, K. (1999). Nonlinear multiobjective optimisation. Boston: Kluwer.
18. Kung, H. T., Luccio, F., & Preparata, F. P. (1975). On finding the maxima of a set of vectors.
Journal of the Association for Computing Machinery 22(4):469–476.
19. Ehrgott, M. (2000). Multicriteria optimisation. Berlin: Springer.
20. Deb, K., & Tiwari, S. (2008). Omni-optimiser: A generic evolutionary algorithm for global
optimisation. European Journal of Operations Research 185(3):1062–1087
21. Deb, K., Agrawal, S., Pratap, A., & Meyarivan, T. (2002). A fast and elitist multi-objective
genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2):182–197
22. Coello, C. A. C., Van Veldhuizen, D. A., & Lamont, G. (2002). Evolutionary algorithms for
solving multi-objective problems. Boston, MA: Kluwer.
23. Osyczka, A. (2002). Evolutionary algorithms for single and multicriteria design optimisation.
Heidelberg: Physica-Verlag.
24. Zitzler, E., Deb, K., Thiele, L., Coello, C. A. C., & Corne, D. W. (2001). Proceedings of the
first evolutionary multi-criterion optimisation (EMO-01) conference (lecture notes in
computer science 1993). Heidelberg: Springer.
25. Fonseca, C. M., Fleming, P. J., Zitzler, E., Deb, K., & Thiele, L. (2003). Proceedings of the
Second Evolutionary Multi-Criterion Optimization (EMO-03) conference (lecture notes in
computer science) (Vol. 2632). Heidelberg: Springer.
26. Coello, C. A. C., Aguirre, A. H., & Zitzler, E. (Eds.). (2005). Evolutionary multi-criterion
optimisation: Third international conference LNCS (Vol. 3410). Berlin, Germany: Springer.
27. Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., & Murata, T. (Eds.). (2007). Evolutionary
multi-criterion optimisation, 4th international conference, EMO 2007, Matsushima, Japan,
March 5–8, 2007, Proceedings. Lecture notes in computer science (Vol. 4403). Heidelberg:
Springer.
28. Coverstone-Carroll, V., Hartmann, J. W., & Mason, W. J. (2000). Optimal multi-objective
low-thrust spacecraft trajectories. Computer Methods in Applied Mechanics and Engineering
186(2–4):387–402
29. Srinivas, N., & Deb, K. (1994). Multi-objective function optimisation using non-dominated
sorting genetic algorithms. Evolutionary Computation Journal 2(3):221–248.
30. Sauer, C. G. (1973). Optimization of multiple target electric propulsion trajectories. In AIAA
11th aerospace science meeting (pp. 73–205).
31. Knowles, J. D., & Corne, D. W. (2002). On metrics for comparing nondominated sets. In
Congress on evolutionary computation (CEC-2002) (pp. 711–716). Piscataway, NJ: IEEE
Press.
32. Hansen, M. P., & Jaszkiewicz, A. (1998). Evaluating the quality of approximations to the non-dominated set (IMM-REP-1998-7). Lyngby: Institute of Mathematical Modelling, Technical University of Denmark.
33. Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C. M., & Fonseca, V. G. (2003). Performance
assessment of multiobjective optimisers: An analysis and review. IEEE Transactions on
Evolutionary Computation 7(2):117–132
34. Fonseca, C. M., & Fleming, P. J. (1996). On the performance assessment and comparison of
stochastic multiobjective optimisers. In H. M. Voigt, W. Ebeling, I. Rechenberg, & H.
P. Schwefel (Eds.), Parallel problem solving from nature (PPSN IV) (pp. 584–593). Berlin:
Springer. Also available as Lecture notes in computer science (Vol. 1141).
35. Fonseca, C. M., da Fonseca, V. G., & Paquete, L. (2005). Exploring the performance of
stochastic multiobjective optimisers with the second-order attainment function. In Third
international conference on evolutionary multi-criterion optimisation, EMO-2005 (pp. 250–
264). Berlin: Springer.
36. Deb, K., Sundar, J., Uday, N., & Chaudhuri, S. (2006). Reference point based multi-objective
optimisation using evolutionary algorithms. International Journal of Computational
Intelligence Research 2(6):273–286
37. Deb, K., & Kumar, A. (2007). Interactive evolutionary multi-objective optimisation and
decision-making using reference direction method. In Proceedings of the genetic and
evolutionary computation conference (GECCO-2007) (pp. 781–788). New York: The
Association of Computing Machinery (ACM).
38. Deb, K., & Kumar, A. (2007). Light beam search based multi-objective optimisation using
evolutionary algorithms. In Proceedings of the congress on evolutionary computation (CEC-
07) (pp. 2125–2132).
39. Deb, K., Sinha, A., & Kukkonen, S. (2006). Multi-objective test problems, linkages and
evolutionary methodologies. In Proceedings of the genetic and evolutionary computation
conference (GECCO-2006) (pp. 1141–1148). New York: The Association of Computing
Machinery (ACM).
40. Coello, C. A. C. (2000). Treating objectives as constraints for single objective optimisation.
Engineering Optimization 32(3):275–308
41. Deb, K., & Datta, R. (2010). A fast and accurate solution of constrained optimisation
problems using a hybrid bi-objective and penalty function approach. In Proceedings of the
IEEE World Congress on Computational Intelligence (WCCI-2010).
42. Bleuler, S., Brack, M., & Zitzler, E. (2001). Multiobjective genetic programming: Reducing
bloat using SPEA2. In Proceedings of the 2001 congress on evolutionary computation (pp.
536–543).
43. Handl, J., & Knowles, J. D. (2007). An evolutionary approach to multiobjective clustering.
IEEE Transactions on Evolutionary Computation 11(1):56–76
44. Knowles, J. D., Corne, D. W., & Deb, K. (2008). Multiobjective problem solving from nature.
Springer natural computing series. Berlin: Springer.
45. Deb, K., & Srinivasan, A. (2006). Innovization: Innovating design principles through
optimisation. In Proceedings of the genetic and evolutionary computation conference
(GECCO-2006) (pp. 1629–1636). New York: ACM.
46. Deb, K., & Sindhya, K. (2008). Deciphering innovative principles for optimal electric
brushless D.C. permanent magnet motor design. In Proceedings of the world congress on
computational intelligence (WCCI-2008) (pp. 2283–2290). Piscataway, NY: IEEE Press.
47. Bandaru, S., & Deb, K. (in press). Towards automating the discovery of certain innovative
design principles through a clustering based optimisation technique. Engineering Optimization.
doi:10.1080/0305215X.2010.528410
48. Deb, K., & Goel, T. (2001). A hybrid multi-objective evolutionary approach to engineering
shape design. In Proceedings of the first international conference on evolutionary multi-
criterion optimisation (EMO-01) (pp. 385–399).
49. Sindhya, K., Deb, K., & Miettinen, K. (2008). A local search based evolutionary multi-
objective optimisation technique for fast and accurate convergence. In Proceedings of the
parallel problem solving from nature (PPSN-2008). Berlin, Germany: Springer.
50. Khare, V., Yao, X., & Deb, K. (2003). Performance scaling of multi-objective evolutionary
algorithms. In Proceedings of the second evolutionary multi-criterion optimisation (EMO-03)
conference (LNCS) (Vol. 2632, pp. 376–390).
51. Luque, M., Miettinen, K., Eskelinen, P., & Ruiz, F. (2009). Incorporating preference
information in interactive reference point based methods for multiobjective optimisation.
Omega 37(2):450–462
52. Branke, J., & Deb, K. (2004). Integrating user preferences into evolutionary multi-objective
optimisation. In Y. Jin (Ed.), Knowledge incorporation in evolutionary computation (pp.
461–477). Heidelberg, Germany: Springer.
53. Deb, K., Zope, P., & Jain, A. (2003). Distributed computing of Pareto-optimal solutions using
multi-objective evolutionary algorithms. In Proceedings of the second evolutionary multi-
criterion optimisation (EMO-03) conference (LNCS) (Vol. 2632, pp. 535–549).
54. Deb, K., & Saxena, D. (2006). Searching for Pareto-optimal solutions through dimensionality
reduction for certain large-dimensional multi-objective optimisation problems. In
Proceedings of the world congress on computational intelligence (WCCI-2006) (pp. 3352–
3360).
55. Saxena, D. K., & Deb, K. (2007). Non-linear dimensionality reduction procedures for certain
large-dimensional multi-objective optimisation problems: Employing correntropy and a
novel maximum variance unfolding. In Proceedings of the fourth international conference on
evolutionary multi-criterion optimisation (EMO-2007) (pp. 772–787).
56. Brockhoff, D., & Zitzler, E. (2007). Dimensionality reduction in multiobjective optimisation:
The minimum objective subset problem. In K. H. Waldmann, & U. M. Stocker (Eds.),
Operations research proceedings 2006 (pp. 423–429). Heidelberg: Springer.
57. Brockhoff, D., & Zitzler, E. (2007). Offline and online objective reduction in evolutionary
multiobjective optimisation based on objective conflicts (p. 269). ETH Zürich: Institut für
Technische Informatik und Kommunikationsnetze.
58. Farina, M., & Amato, P. (2004). A fuzzy definition of optimality for many criteria
optimisation problems. IEEE Transactions on Systems, Man and Cybernetics, Part A:
Systems and Humans 34(3):315–326.
59. Branke, J. (2001). Evolutionary optimisation in dynamic environments. Heidelberg,
Germany: Springer.
60. Deb, K., Rao, U. B., & Karthik, S. (2007). Dynamic multi-objective optimisation and
decision-making using modified NSGA-II: A case study on hydro-thermal power scheduling
bi-objective optimisation problems. In Proceedings of the fourth international conference on
evolutionary multi-criterion optimisation (EMO-2007).
61. Deb, K., Gupta, S., Daum, D., Branke, J., Mall, A., & Padmanabhan, D. (2009). Reliability-
based optimisation using evolutionary algorithms. IEEE Transactions on Evolutionary
Computation 13(5):1054–1074
62. Deb, K., & Gupta, H. (2006). Introducing robustness in multi-objective optimisation.
Evolutionary Computation Journal 14(4):463–494
63. Basseur, M., & Zitzler, E. (2006). Handling uncertainty in indicator-based multiobjective
optimisation. International Journal of Computational Intelligence Research 2(3):255–272
64. Cruse, T. R. (1997). Reliability-based mechanical design. New York: Marcel Dekker.
65. Du, X., & Chen, W. (2004). Sequential optimisation and reliability assessment method for
efficient probabilistic design. ASME Transactions on Journal of Mechanical Design
126(2):225–233.
66. El-Beltagy, M. A., Nair, P. B., & Keane, A. J. (1999). Metamodelling techniques for
evolutionary optimisation of computationally expensive problems: Promises and limitations.
In Proceedings of the genetic and evolutionary computation conference (GECCO-1999) (pp. 196–203). San Mateo, CA: Morgan Kaufmann.
67. Giannakoglou, K. C. (2002). Design of optimal aerodynamic shapes using stochastic
optimisation methods and computational intelligence. Progress in Aerospace Science
38(1):43–76.
68. Nain, P. K. S., & Deb, K. (2003). Computationally effective search and optimisation
procedure using coarse to fine approximations. In Proceedings of the congress on
evolutionary computation (CEC-2003) (pp. 2081–2088).
69. Deb, K., & Nain, P. K. S. (2007). An evolutionary multi-objective adaptive meta-modeling procedure using artificial neural networks (pp. 297–322). Berlin, Germany: Springer.
70. Emmerich, M. T. M., Giannakoglou, K. C., & Naujoks, B. (2006). Single and multiobjective
evolutionary optimisation assisted by Gaussian random field metamodels. IEEE Transactions
on Evolutionary Computation 10(4):421–439
71. Emmerich, M., & Naujoks, B. (2004). Metamodel-assisted multiobjective optimisation
strategies and their application in airfoil design. In Adaptive computing in design and
manufacture VI (pp. 249–260). London, UK: Springer.
