11 Real-World Applications of Multiobjective Optimization
11.1 Introduction
J. Branke et al. (Eds.): Multiobjective Optimization, LNCS 5252, pp. 285–327, 2008.
c Springer-Verlag Berlin Heidelberg 2008
In examining the case studies presented here, it may be seen that the
applications may be distinguished along two primary dimensions, namely:
The number of objectives which may be:
• Few, i.e. 2 or 3 (impacts of which can be visualized graphically);
• Moderate, perhaps ranging from 4 to around 20;
• Large, up to hundreds of objectives.
The level of interaction with decision makers, i.e. the involvement of policy
makers, stakeholders or advisers outside of the technical team. The level
of such interaction may be:
• Low, such as in many engineering design problems (but for an excep-
tion, see Section 11.7) where the analyst is part of the engineering
team concerned with identifying a few potentially good designs;
• Moderate, such as in operational management or interactive design
problems where solutions may need to be modified in the light of pro-
fessional or technical experience from other areas of expertise;
• Intensive, such as in public sector planning or strategic management
problems, where acceptable alternatives are constructed by interaction
between decision makers and other stakeholders, facilitated by the an-
alyst.
Not all combinations of number of objectives and level of interaction may
necessarily occur. For example, public sector planning or strategic manage-
ment problems, which require intensive interactions, also tend to be
associated with larger numbers of objectives. In the case studies reported in
this chapter, we have attempted to provide summaries of a number of real
case studies in which the authors have been involved, and which do between
them illustrate all three levels for each dimension identified above. Table 11.1
summarizes the placement of each of the cases along the above two dimensions.
These studies exemplify the wide range of problems to which multiobjec-
tive optimization methods can and have been applied. Half of the case studies
deal with engineering design problems, which is clearly an important area
of application, but even within this category there is a wide diversity. For
example, we have two examples from aircraft design, but one (Section 11.2)
focuses on the trade-off between robustness and cost in aircraft design, while
the other (Section 11.7) deals with the need to provide a broad holistic in-
teractive decision support to aircraft designers. Although both applications
relate to aircraft design, the issues raised are substantially different so that
different sections in this chapter are devoted to each of them.
Other applications range over operational management of supply chains,
effective treatment of cancers and conflicts between environmental, social and
economic factors in regional planning.
A perhaps less usual application is that described in Section 11.4. Here
the multiobjective optimization methods are applied not directly to design,
operational or strategic decisions, but to the development of understanding of
molecular processes in synthesizing drugs.
Although the number of objectives is typically low (two or three) if the
geometrical constraints are not counted, aerodynamic design optimization is a
challenging engineering task for a number of reasons. Firstly, aerodynamic
optimization often needs to deal with a large number of design parameters.
Secondly, no analytical function is available for evaluating the performance of
a given design, and as a result many gradient-based optimization techniques
are inapplicable. Thirdly, to evaluate the quality of designs, either computa-
tionally expensive computational fluid dynamics (CFD) simulations have to
be performed or costly experiments have to be conducted. Finally, aerody-
namic optimization involves multiple disciplines and more than one objective
must be considered.
In recent years, evolutionary algorithms have successfully been applied to
single and multiobjective aerodynamic optimization (Obayashi et al., 2000;
Olhofer et al., 2000; Hasenjäger et al., 2005). Despite the success that has
been achieved in evolutionary aerodynamic optimization, several issues must
be carefully addressed.
11.2.2 Methodology
Geometric Representation
Finding a proper representation scheme is the first and most important step
toward successful optimization of aerodynamic structures. A few general cri-
teria can be mentioned for choosing an appropriate geometric representation.
Firstly, the representation should be sufficiently flexible to describe highly
complex structures. An overly constrained representation will produce only
Robustness Considerations
11.2.3 An Example
model. At the second level, each CFD simulation is again parallelized on four
computers using the node-only model. Consequently, if the population size is
P, the needed number of computing nodes will be 4P + 1.
An efficient model-based evolutionary multiobjective optimization algo-
rithm, the regularity modeling multiobjective estimation of distribution algo-
rithm (RM-MEDA) (Zhang et al., 2008), has been employed for the optimiza-
tion of the 3D turbine blade. RM-MEDA is in principle a variant of estimation
of distribution algorithms (EDA) (Larranaga and Lozano, 2001). Instead of
using Gaussian models, a first-order principle curve has been used to model
the regularly distributed Pareto-optimal solutions complemented by a Gaus-
sian model. As demonstrated in Jin et al. (2008), by modeling the regularity
in the distribution of Pareto-optimal solutions, the scalability of the EDA can
be greatly improved. Furthermore, unlike most EDAs, which require a large
population size, RM-MEDA performs well even with a small population size.
In this example, a population size of 20 has been used.
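The regularity-based sampling at the heart of RM-MEDA can be illustrated with a simplified sketch. Here the first-order principal curve is approximated by the leading principal component of the parent population, with a Gaussian perturbation added; the published algorithm additionally partitions the population and fits local models, so this is an illustration of the idea rather than the actual method.

```python
import numpy as np

def rm_meda_style_sample(parents, n_offspring, noise=0.05, rng=None):
    """Sample offspring by modeling the population with a first-order
    principal curve (approximated here by the first principal component)
    plus Gaussian noise, in the spirit of RM-MEDA. Simplified sketch only."""
    rng = np.random.default_rng(rng)
    mean = parents.mean(axis=0)
    centered = parents - mean
    # The first principal component approximates the 1-D manifold on which
    # Pareto-optimal solutions of a 2-objective problem tend to lie.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    # Project parents onto the component to obtain the sampling range.
    t = centered @ direction
    t_new = rng.uniform(t.min(), t.max(), size=n_offspring)
    offspring = mean + np.outer(t_new, direction)
    # Gaussian perturbation around the model.
    offspring += rng.normal(0.0, noise, size=offspring.shape)
    return offspring

parents = np.random.default_rng(0).normal(size=(20, 5))
children = rm_meda_style_sample(parents, 40, rng=1)
print(children.shape)  # (40, 5)
```

With a small population (20, as in this example), the model-based sampling spreads offspring along the estimated manifold instead of relying on pairwise recombination alone.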
The optimization results from two independent runs are plotted in Fig.
11.2, in each of which the population has been evolved for 100 generations.
Note, however, that the population was initialized not randomly, but with
solutions from previous optimization runs using weighted aggregation ap-
proaches (Hasenjäger et al., 2005). Compared to the results reported in Hasen-
jäger et al. (2005), we see that the non-dominated solutions obtained by the
RM-MEDA are better in terms of both coverage and accuracy.
Fig. 11.2. Solutions obtained from two independent evolutionary runs, plotting PS
variation against pressure loss. The diamonds denote the solutions from the first
run, and the squares from the second run.
11.3.2 Methodology
The evolutionary process: Here crossover was performed between two enti-
ties, both of which are essentially self-sustaining neural networks. This process
is further elaborated in Figure 11.5. A self-adaptive real-coded mutation was
performed on the weights, which draws its inspirations from Differential Evo-
lution (Price et al., 2005).
The multi-objective algorithm used in this study utilized a Moore neigh-
bourhood inhabited by two distinct species: the predators and the prey. The
prey are a family of sparse neural networks, initiated randomly as a popu-
lation, and they evolved in the usual genetic way. The members of the prey
population differed from each other both by the topology of the lower part
connections and the corresponding weight values. The predators in this algo-
rithm are a family of externally induced entities, which do not evolve, and the
major purpose of their presence is to prune the prey populations based upon
the fitness values. A two dimensional lattice was constructed as a computa-
tional space and both the predators and the prey were randomly introduced
there, where each of them would have its own neighbourhood. The basic idea
propagated in this algorithm inherits some of the concepts of cellular au-
tomata in Moore’s neighbourhood. However, unlike cellular automata, the
lattice here does not denote the discretized physical space; it is just a
mathematical construction that facilitates a smooth implementation of this
algorithm. Further details are available in the original work (Pettersson et al.,
2007a).
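The pruning role of the predators can be sketched in a few lines. The toroidal grid, the fitness values, and the tie-breaking below are illustrative choices, not the published implementation (Pettersson et al., 2007a).

```python
def moore_neighbourhood(x, y, size):
    """All 8 lattice cells around (x, y) on a wrap-around (toroidal) grid."""
    return [((x + dx) % size, (y + dy) % size)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if not (dx == 0 and dy == 0)]

def predator_step(lattice, fitness, px, py, size):
    """One predator move: find the least-fit prey in the predator's Moore
    neighbourhood and remove it. `lattice` maps occupied cells to prey ids;
    this is an illustrative sketch of the pruning idea."""
    prey_cells = [c for c in moore_neighbourhood(px, py, size) if c in lattice]
    if not prey_cells:
        return None
    worst = max(prey_cells, key=lambda c: fitness[lattice[c]])  # higher = worse
    return lattice.pop(worst)

size = 5
lattice = {(1, 1): 0, (1, 2): 1, (2, 2): 2}   # prey id per occupied cell
fitness = {0: 3.0, 1: 9.0, 2: 1.0}            # e.g. prediction error
removed = predator_step(lattice, fitness, 2, 1, size)
print(removed)  # prey 1 has the worst fitness in the neighbourhood
```

Because only locally worst prey are removed, selection pressure stays local to each neighbourhood, which is what distinguishes this scheme from global truncation selection.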
The method seems to have worked better when the initial population was
deliberately generated in the vicinity of the estimated nadir region. The progress
of the rank-one members is captured in Figure 11.6 and a computed Pareto
frontier is shown in Figure 11.7. Each discrete point in the frontier denotes a
neural net with a different ability of prediction than the others. Some typi-
cal examples are shown in Figure 11.8. As the ultimate choice between them
remains the task of the decision maker, the conservative middle ground ‘B’
shown in Figure 11.7 should be adequate for most applications.
This novel method of multi-objective analysis is not meant to benefit only the
steel industry: it is robust enough to handle noisy data irrespective of
their source. Very recently this methodology has been augmented further
through the use of Kalman filters (Saxén et al., 2007), and it has also been
effectively utilized for identifying the most important input signal in a very
large network (Pettersson et al., 2007b), rendering it of further interest to
soft computing researchers at large.
Fig. 11.5. The crossover scheme. The shaded regions are participating in the
crossover process.
Fig. 11.8. Data prediction through three networks A (top), B (middle) and C
(bottom). The lighter lines denote actual observations for a period of 200 days and
the darker lines are the predicted values provided in (Pettersson et al., 2007a).
The docking of a highly flexible small molecule (the ligand) to the active site of
a highly flexible macromolecule (the receptor) is described in this section. See
Morris et al. (1998); MacKerell Jr. (2004) for a more detailed discussion of the
problem. The ability to predict the final docked structure of the intermolecular
complex is of great importance for the development of new drugs, as docking
modifies the biological and chemical behavior of the receptor. Most of the
current docking methods account only for the ligand flexibility and consider
the receptor as a rigid body because the inclusion of receptor flexibility could
involve thousands of degrees of freedom. Current research in this field is faced
with this problem. The application described here focuses on a different aspect
of the docking procedure: the optimization methodology applied to find the
best docked structure. The application of a multi-objective approach to the
docking problem based on the Pareto optimization of different features of a
docked structure is proposed. It is shown that this approach allows for the
identification of the dominating interactions that drive the global process.
A drug performs its activity by binding itself to the receptor molecule,
usually a protein. In their bound structure, the molecules gain complementary
chemical and geometrical properties that are essential for the therapeutic
function of the drug. The computational process that searches for a ligand that
best fits the binding site of the receptor from a geometrical and chemical point
of view is known as molecular docking.
A molecule is represented by its atoms and the bonds connecting them.
Atoms are described mainly by their Van der Waals radius that roughly de-
fines their volume; bonds are described by their lengths (the distance between
atoms), by the angle between two consecutive bonds and by their conforma-
tional state (the dihedral angle between three consecutive bonds). Molecules
are not static systems. At room temperature they perform a variety of motions
each one having a characteristic time scale. Since stretching (changes in bond
lengths) and bending (changes in bond angles) occur on much shorter time
scales than conformational motions (changes in dihedral angles), bond
lengths and bond angles can be considered fixed. Thus, from the docking point
of view, only conformational degrees of freedom are important.
Typically, ligands have from 3 to 15 conformational degrees of freedom;
their values define the conformational state of the ligand. Receptors have typ-
ically from 1000 to 3000 conformational degrees of freedom, so the dimension
of the complete search space for best docked conformation becomes compu-
tationally unaffordable even for routine cases. The most widely used simplifi-
cation is to consider only the ligand flexibility, so reducing the complexity of
the search space.
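The gap between the two search spaces can be made concrete with a back-of-the-envelope calculation. The 30-degree discretization of each dihedral angle below is an illustrative assumption, not a figure from the study; the degree-of-freedom counts are the typical ranges quoted above.

```python
import math

# Rough size of the discretized conformational search space, assuming each
# dihedral angle is sampled at 30-degree resolution (12 states per angle).
states = 12
ligand_dof = 10       # typical ligand: 3-15 conformational DOF
receptor_dof = 2000   # typical receptor: 1000-3000 conformational DOF

ligand_space = states ** ligand_dof
receptor_digits = receptor_dof * math.log10(states)  # decimal digits

print(f"ligand-only: {ligand_space:.2e} conformations")
print(f"flexible receptor: ~10^{receptor_digits:.0f} conformations")
```

Even under this coarse discretization, the ligand-only space (~10^10 conformations) is searchable by a stochastic method, while the fully flexible receptor space (~10^2000 conformations) is not, which is why the rigid-receptor simplification is so widely used.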
The different possible ligand conformations are ranked according to their
fitness with the receptor. What this fitness stands for is one of the key aspects
of molecular docking and differentiates various docking methodologies. Most
of the docking fitness functions are based on the calculation of the total en-
ergy of the docked structure. Energy based fitness functions are built starting
from force fields which represent a functional form of the potential energy of a
molecule. They are composed of a combination of different terms that can be
classified in bonded terms (regarding bond energies, bond angles, bond con-
formations) and non-bonded terms (Van der Waals and electrostatic). This
energy can be calculated in various ways, ranging from quantum mechanics
to empirical methods. Obviously, a more “exact” fitness function derived
from quantum mechanical simulations strongly impacts the computational
complexity and is applicable only to small systems on massively parallel com-
puters; the opportunity to use “rough” empirical models creates the possibility
of treating more realistic cases.
In summary, a docking procedure is composed of two main elements: a fit-
ness function to score different conformations for the molecular complex and
a search procedure to explore the space of possible conformations. In current
docking approaches, the bonded and non-bonded terms both contribute to
the fitness function and the optimization has a single objective equal to their
weighted sum. The weights are determined by statistical analysis of exper-
imental data. The proposed multi-objective optimization approach incorpo-
rates two conflicting objectives, i.e. the concurrent minimization of the internal
energy of the ligand and of the intermolecular ligand-receptor interaction energy.
11.4.2 Methodology
MOGA-II
MOGA-II is an improved version of the MOGA (Multi-Objective Genetic Al-
gorithm) of Poloni (Poloni and Pediroda, 1997). It uses smart multi-search
elitism for robustness and directional crossover for fast convergence. The ef-
ficiency of MOGA-II is controlled by its operators (classical crossover, di-
rectional crossover, mutation and selection) and by the use of elitism. The
internal encoding of MOGA-II is implemented as in classical genetic algo-
rithms. Elitism plays a crucial role in multi-objective optimization because it
helps preserve the individuals that are closest to the Pareto front and the
ones that have the best dispersion. MOGA-II uses four different operators for
reproduction: one point crossover, directional crossover, mutation and selec-
tion. At each step of the reproduction process, one of the four operators is
chosen according to the predefined operator probabilities.
A strong characteristic of this algorithm is the directional crossover that
is slightly different from other crossover operators and assumes that a direc-
tion of improvement can be detected comparing the fitness of individuals. A
novel operator called evolutionary direction crossover is introduced and it is
shown that even in the case of a complex multi-modal function this operator
outperforms classical crossover. The direction of improvement is evaluated by
comparing the fitness of the individual Indi from generation t with the fit-
ness of its parents belonging to generation t − 1. The new individual is then
created by moving in a randomly weighted direction that lies within those
defined by the given individual and its parents.
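The idea of moving along an estimated direction of improvement can be sketched as follows. The step rule below (sign-of-fitness-difference times parent direction, with random weights) is an illustrative reconstruction; the exact operator is defined in Poloni and Pediroda (1997).

```python
import numpy as np

def directional_crossover(ind, parent1, parent2, f, rng=None):
    """Sketch of MOGA-II-style directional crossover: compare the fitness of
    `ind` (generation t) with that of its parents (generation t-1), then move
    `ind` a randomly weighted step towards better parents and away from
    worse ones. Illustrative, not the published formula."""
    rng = np.random.default_rng(rng)
    # d_k points towards parent k if that parent has lower (better) fitness.
    d1 = np.sign(f(ind) - f(parent1)) * (parent1 - ind)
    d2 = np.sign(f(ind) - f(parent2)) * (parent2 - ind)
    t1, t2 = rng.random(), rng.random()
    return ind + t1 * d1 + t2 * d2

f = lambda x: float(np.sum(x ** 2))          # single fitness for illustration
ind = np.array([1.0, 1.0])
better, worse = np.array([0.5, 0.5]), np.array([2.0, 2.0])
child = directional_crossover(ind, better, worse, f, rng=0)
print(f(child) < f(ind))  # moved towards the better parent: True
```

The operator thus exploits fitness information that ordinary one-point crossover ignores, which is what gives the faster convergence claimed above.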
11.4.3 Results
A “bound docking” experiment was performed: on the basis of the x-ray struc-
ture of the complex, the receptor coordinates were separated from those of the
ligand, and then an attempt made to reconstruct the original x-ray structure
by docking the ligand to the receptor. Starting from the scoring function of
equation (11.1), Autodock gives the values for the internal energy of the lig-
and and for the intermolecular ligand-receptor interaction energy. These two
outputs were assigned as the objective of the optimization.
The tests were conducted on PDB code 1KV3 chain A co-crystallized with
GDP (http://www.rcsb.org/pdb). The resulting Pareto front is reported in
Figure 11.9 (in which the units for the axes are kcal/mol).

Fig. 11.9. Pareto frontier for the molecular docking problem. Energies are in
kcal/mol. Boxed values represent the RMSD in Angstrom between the candidate
solution and the original x-ray structure.

The boxed values represent the root mean squared deviation (RMSD) in Å
(angstrom) between the candidate solution and the original x-ray structure.
Typically, RMSD values of less than 1.5 Å are considered good solutions.
It is possible to note that in this case the docking process is mainly driven
by the intermolecular energy. This information could be useful for a deeper
understanding of the effective relative influence of the contributions of the
scoring function to this particular docking process. From a practical point of
view, it could also be useful for the design of a tailored scoring function for the
docking of similar drug candidates. Also note the presence of a “knee” point
(RMSD=1.27 Å). This is a particularly interesting solution of the docking
problem in which a small improvement in the minimization of the ligand
energy leads to a large deterioration of the intermolecular energy.
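A common way to locate such a knee automatically is the maximum-distance-to-line heuristic: the knee is the front point farthest from the line joining the two extreme points. This is a generic heuristic for illustration; the chapter's knee (RMSD = 1.27 Å) is read off the plotted front, not necessarily computed this way.

```python
import numpy as np

def knee_point(front):
    """Return the point of a 2-D Pareto front with maximum perpendicular
    distance from the line joining the two extreme points (a standard
    knee-detection heuristic)."""
    front = np.asarray(sorted(map(tuple, front)))
    a, b = front[0], front[-1]
    ab = (b - a) / np.linalg.norm(b - a)
    rel = front - a
    # 2-D cross product gives the perpendicular distance from the line.
    dist = np.abs(rel[:, 0] * ab[1] - rel[:, 1] * ab[0])
    return front[np.argmax(dist)]

# Illustrative front: objective 1 vs objective 2, both to be minimized.
front = [(0.0, 10.0), (1.0, 3.0), (3.0, 2.5), (6.0, 2.0), (10.0, 0.0)]
print(knee_point(front))  # the sharp bend at (1.0, 3.0)
```

At the knee, a small gain in one objective costs a large loss in the other, which is precisely why such points are attractive default choices for a decision maker.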
Apart from desired dose levels in the tumour and surrounding critical struc-
tures, so called “importance factors” for these entities need to be specified as
input. The software then employs heuristics to find a good treatment plan,
which is presented to the planner. If it is unsatisfactory the importance factors
have to be changed and the process will be repeated. Treatment planners are
aware of the inefficiency of this approach. So the goal was to investigate the
possibility of a planning system that would calculate several plans right away
and provide decision support for choosing an appropriate one.
11.5.2 Methodology
Mathematical models for the intensity optimisation problem are based on the
discretisation of the body and the beams. The body is divided into volume
elements (voxels) represented by dose points. Voxels are cubic and their edge
length is defined by the slice thickness and resolution of the patient’s CAT
images and is in the range of a few mm at most. Deposited dose is calculated
for one dose point in every voxel and assumed to be the same throughout
the voxel. A beam is discretised into beam elements (bixels). Their size is
defined by the number of leaves of the collimator and the number of stops for
each leaf. The number of voxels may be tens or hundreds of thousands and
the number of bixels can be up to 1,000 per beam. The relationship between
intensity and dose is linear, i.e., d = Ax where x is a vector of bixel intensities.
The entries aij of A represent the rate at which dose is deposited in voxel i
by bixel j. Finally, d is a dose vector that represents the discretised dose
distribution in the patient. The computation of the values aij is referred to
as dose calculation.
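The linear dose model d = Ax can be made concrete with a toy example. The deposition rates below are illustrative numbers, not the output of a real dose calculation.

```python
import numpy as np

# Toy dose calculation: 4 voxels, 3 bixels. Entry a_ij is the rate at which
# bixel j deposits dose in voxel i.
A = np.array([
    [0.8, 0.1, 0.0],   # tumour voxel, mostly hit by bixel 0
    [0.6, 0.5, 0.1],   # tumour voxel
    [0.1, 0.7, 0.2],   # critical-structure voxel
    [0.0, 0.1, 0.9],   # normal-tissue voxel
])
x = np.array([60.0, 20.0, 10.0])   # bixel intensities

d = A @ x                          # dose is linear in intensity: d = Ax
print(d)                           # [50. 47. 22. 11.]
```

Linearity is what makes the optimisation models below tractable: every dose constraint on a voxel becomes a linear constraint on the bixel intensities x.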
While most optimisation models in the medical physics literature have a
single objective, they do try to accommodate the conflicting goals of destroying
tumour cells and sparing healthy tissue. Almost all can be interpreted as
weighted sum scalarisations of multi-objective programming models, where
the weights are the importance factors mentioned above. Almost all of these
multi-objective models are convex problems, so that their efficient sets can be
mapped to one another. We decided to use a multi-objective version of the
model of Holder (2003), which has some nice mathematical properties. Here
A is decomposed by rows into A_T, A_C, and A_N depending on whether a voxel
belongs to the tumour, critical structures, or normal tissue. Accordingly, TUB
and TLB are vectors of upper and lower bounds on the dose delivered to the
tumour voxels; CUB is a vector of upper bounds for the critical structure
voxels; and NUB a vector of upper bounds for the remaining normal tissue
voxels. The objectives of the model are to minimise the violation of any of the
lower and upper bounds and can be stated as shown in (11.2). αUB, βUB,
and γUB are parameters restricting the deviations to clinically relevant values.
min{(α, β, γ) : TLB − αe ≦ A_T x ≦ TUB,  A_C x ≦ CUB + βe,
    A_N x ≦ NUB + γe,  0 ≦ α ≦ αUB,
    −min_i CUB_i ≦ β ≦ βUB,  0 ≦ γ ≦ γUB,  0 ≦ x},        (11.2)
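A toy instance of model (11.2) can be solved with an off-the-shelf LP solver. The sketch below scalarises the vector objective (α, β, γ) by its sum, which yields just one efficient solution (varying the weights traces out others), and all dose-deposition rates and bounds are illustrative, not clinical data.

```python
import numpy as np
from scipy.optimize import linprog

# 3 bixels; 2 tumour voxels, 1 critical-structure voxel, 1 normal voxel.
A_T = np.array([[0.8, 0.1, 0.0], [0.6, 0.5, 0.1]])
A_C = np.array([[0.1, 0.7, 0.2]])
A_N = np.array([[0.0, 0.1, 0.9]])
TLB, TUB = np.array([45.0, 45.0]), np.array([55.0, 55.0])
CUB, NUB = np.array([25.0]), np.array([12.0])
aUB = bUB = gUB = 10.0

# Decision vector z = (x1, x2, x3, alpha, beta, gamma); minimise a+b+g.
c = np.r_[np.zeros(3), 1.0, 1.0, 1.0]
A_ub = np.block([
    [-A_T, -np.ones((2, 1)), np.zeros((2, 1)), np.zeros((2, 1))],  # TLB - a*e <= A_T x
    [ A_T,  np.zeros((2, 1)), np.zeros((2, 1)), np.zeros((2, 1))], # A_T x <= TUB
    [ A_C,  np.zeros((1, 1)), -np.ones((1, 1)), np.zeros((1, 1))], # A_C x <= CUB + b*e
    [ A_N,  np.zeros((1, 1)), np.zeros((1, 1)), -np.ones((1, 1))], # A_N x <= NUB + g*e
])
b_ub = np.r_[-TLB, TUB, CUB, NUB]
bounds = [(0, None)] * 3 + [(0, aUB), (-CUB.min(), bUB), (0, gUB)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
alpha, beta, gamma = res.x[3:]
print(res.success, alpha, beta, gamma)
```

A negative β at the optimum means the critical structure receives less than its upper bound, which is exactly the kind of trade-off information the planner wants to see.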
From the planners’ point of view the whole set of nondominated points is not
very useful, since it is infinite. Also, for the same reason of imprecision in dose
calculation mentioned above, planners would not distinguish between plans if
they differ only by very small amounts. It is necessary to select a finite set of
nondominated points (efficient solutions). The nondominated extreme points
and associated basic solutions have only mathematical relevance, but no clinical
meaning.

Fig. 11.10. ε-nondominated set with ε = 0.1 (a) and ε = 0.005 (b) and set of
representative nondominated points (c).

The selection of plans should represent the whole nondominated
set, but guarantee a certain minimal difference between the points. We devel-
oped a method to determine a representative subset of nondominated points
in Shao and Ehrgott (2007). The method first constructs an equidistant lat-
tice of points placed with distance d on a simplex S (the reference plane) that
supports Y at the minimiser of e^T y over Y and such that Y ⊂ S + R³_≧. For
each lattice point q an LP

min{t : q + te ∈ Y, t ≧ 0}

is solved. If the optimal value is t̂, the point q + t̂e is tested for nondominance.
It can be shown that the distance between remaining nondominated points is
between d and √3 d. Figure 11.10 (c) shows a representative set for the same
example shown in Figure 11.10 (a) and (b).
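The two building blocks of this method, the lattice on a reference simplex and the projection of each lattice point onto the nondominated set, can be sketched as follows. The bisection below stands in for the LP (which requires an explicit description of Y), and the spherical toy set Y is illustrative only.

```python
import numpy as np

def simplex_lattice(d, total):
    """Equidistant triangular lattice with spacing d on the reference simplex
    {y >= 0 : y1 + y2 + y3 = total}. Sketch of the lattice construction; in
    Shao and Ehrgott (2007) the simplex is a supporting plane of Y."""
    n = int(round(total / d))
    return np.array([[i, j, n - i - j] for i in range(n + 1)
                     for j in range(n + 1 - i)], dtype=float) * d

def project_to_front(q, in_Y, e=np.ones(3), t_max=10.0):
    """Approximate the LP min{t : q + t*e in Y, t >= 0} by bisection on t,
    given only a membership test in_Y for the feasible objective set Y."""
    lo, hi = 0.0, t_max
    if not in_Y(q + hi * e):
        return None                       # ray never enters Y within t_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if in_Y(q + mid * e) else (mid, hi)
    return q + hi * e

# Toy Y: vectors dominated by the unit-sphere octant, so the nondominated
# set is {y >= 0 : ||y|| = 1} (illustrative only).
in_Y = lambda y: np.all(y >= 0) and np.linalg.norm(y) >= 1.0

lattice = simplex_lattice(d=0.25, total=0.5)
front_points = [project_to_front(q, in_Y) for q in lattice]
print(len(lattice), np.linalg.norm(front_points[0]))
```

Because the lattice is equidistant on the reference plane, the projected points inherit the d-to-√3 d spacing guarantee cited above, giving a finite, evenly spread menu of plans.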
Since the representative points are all nondominated the planners now
have a choice between several plans. By the theory of linear programming,
we know that they are all optimal solutions of some weighted sum problem
using importance factors as used in current practice. Moreover, the whole
range of such solutions is represented. To support planners in the choice of
a plan, visual aids are necessary within a decision support system. Planners
are used to judging the quality of a plan by looking at isodose curves and
dose volume histograms (DVH). The former are colour-wash pictures showing
curves of equal dose superimposed on CAT pictures. The latter are plots
of the percentage of tumour and critical structures against dose levels, see
Figure 11.11.
The representative set of solutions (treatment plans) is stored in a database
and input to the software Carina (Ehrgott and Winz, 2008), which first pro-
poses a balanced solution of (11.2) (with values of α, β, γ as equal as possible),
displaying the corresponding DVH and isodose plots as well as some informa-
tion on available trade-offs. The planner can then specify changes (going to a
neighbouring solution, searching for solutions with specific values, or for
solutions satisfying some thresholds). This process is continued until the planner
accepts a treatment plan.

Fig. 11.11. Isodose curves and dose-volume histogram for a brain tumour treatment
plan.
The interaction with the treatment planner is therefore ex-post, allowing
the likely time-consuming plan calculation to be decoupled from plan selection.
As a consequence, plan selection becomes faster, as it is based on information
retrieval from a database, a real-time operation. Moreover, the specification of
dose levels is more natural than the “guessing” of importance factors.
11.5.4 Remarks
Fig. 11.12. Supply Chain Planning Matrix Fleischmann et al. (2005), p. 87.
Fig. 11.13. Supply chain planning matrix using mySAP SCM terminology.
11.6.2 Methodology
One of the main aspects of the SNP planning process is the cost-based
plan determination. The following cost types are used to build a cost model
which represents the business scenario of value-based planning:
• Penalties for not meeting customer demand / forecast,
• Penalties for late satisfaction of customer demand / forecast (location
product specific)
• Penalties for not meeting safety stock / safety days’ supply requirements
(location product specific, linear or piecewise linear)
• Storage cost (location product specific)
• Penalty for exceeding maximum stock level / maximum coverage (location
product specific, linear or piecewise linear)
• External procurement cost (linear or piecewise linear, location product
specific)
• Handling in / out cost (location product specific)
• Transportation cost (transportation lane, product and means of transport
specific, linear or piecewise linear)
• Variable production cost (production process specific, linear or piecewise
linear)
• Fixed production cost / setup cost (production process specific)
• Resource utilization cost (resource specific)
• Costs for additional resource utilization (e.g. use of additional shifts, re-
source specific)
• Cost for falling below minimum resource utilization.
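Several of the cost types above are piecewise linear, e.g. the penalty for falling below safety stock grows with increasingly steep slopes as the shortfall deepens. A minimal sketch of such a penalty function follows; the breakpoints and slopes are illustrative, not SAP APO defaults.

```python
def piecewise_linear_penalty(shortfall, breakpoints, slopes):
    """Convex piecewise-linear penalty for a shortfall (e.g. units below
    safety stock). `breakpoints` are ascending shortfall levels at which the
    slope changes; `slopes` holds one cost-per-unit value per segment
    (len(breakpoints) + 1 values)."""
    cost, prev = 0.0, 0.0
    for bp, slope in zip(breakpoints, slopes):
        seg = min(shortfall, bp) - prev
        if seg <= 0:
            return cost
        cost += slope * seg
        prev = bp
    cost += slopes[-1] * max(0.0, shortfall - prev)
    return cost

# 1 cost unit per item for the first 10 items below safety stock, 3 for the
# next 20, 10 beyond that:
print(piecewise_linear_penalty(35, [10, 30], [1.0, 3.0, 10.0]))
# 1*10 + 3*20 + 10*5 = 120.0
```

Increasing slopes make small shortfalls cheap and large ones expensive, which is what drives the balanced build-up of stock across products and locations described above.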
The definition of the cost model is of crucial importance for controlling the
behaviour of the SNP optimizer. One of the central questions is whether to
maximize service level – which usually means using high penalties for non-
delivery and late delivery – or to maximize profits – which requires use of
realistic sale prices. In the case study scenario, the non-delivery cost levels
reflect real sale prices sufficiently closely to enable a profit maximization logic.
Another important feature of the case study scenario and the resulting
cost model is inventory control. High seasonality effects and long campaign
durations necessitate considerable build-up of stocks. To avoid an unbalanced
build-up of stock, soft constraints for safety stock and maximum stock levels
are used. To achieve an even better inventory levelling across products and
locations, piecewise linear cost functions for falling below safety stock as well
as for exceeding maximum stock levels are employed. In SNP optimization all
relevant constraints can be considered, including
• capacities for production, transportation, handling and storage resources,
• maximum location product specific storage quantities,
• minimum, maximum and fixed production lot sizes,
• minimum, maximum and fixed transportation lot sizes,
• minimum production campaign lot sizes.
The short-term planning process is dealt with in the Production Planning and
Detailed Scheduling (PP/DS) module within SAP APO.
PP/DS focuses on determining an optimal production sequence on key
resources. In PP/DS, a more detailed modelling than on the SNP planning
level is chosen. On the basis of the results determined in SNP optimization,
a detailed schedule which considers additional resources and products is cre-
ated. This schedule is fully executable and there is no need for manual planner
intervention, even though manual re-planning and adjustments are fully sup-
ported within the PP/DS module. An executable plan can only be ensured
by considering additional complex constraints in PP/DS optimization. These
additional constraints include:
• Time-continuous planning
• Sequence-dependent setup and cleaning operations.
As the value based planning part is handled within SNP, the PP/DS optimizer
uses a different objective function than the SNP optimizer. The following goals
can be weighted in the objective function, which is subject to minimization:
• Sum of delays and maximum delay against given due dates
• Setup time and setup cost
• Makespan (i.e. time interval between first and last activity for optimizing
the compactness of the plan)
• Resource cost (i.e. costs associated with the selection of alternative re-
sources)
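The weighted combination of these goals into a single minimization objective can be sketched as a plain function. The weights are placeholders a planner would tune; they are not SAP defaults, and the schedule figures in the comparison are invented for illustration.

```python
def ppds_objective(delays, setup_times, setup_costs, makespan, resource_cost,
                   w_sum_delay=1.0, w_max_delay=1.0, w_setup_time=1.0,
                   w_setup_cost=1.0, w_makespan=0.1, w_resource=0.1):
    """Weighted single objective combining the PP/DS goals listed above:
    sum and maximum of delays, setup time and cost, makespan, and the cost
    of selected resources. Lower is better."""
    return (w_sum_delay * sum(delays)
            + w_max_delay * max(delays, default=0)
            + w_setup_time * sum(setup_times)
            + w_setup_cost * sum(setup_costs)
            + w_makespan * makespan
            + w_resource * resource_cost)

# Compare two candidate schedules of the same orders: `a` has fewer delays
# but more setups, `b` merges setups at the cost of one late order.
a = ppds_objective([0, 2, 1], [30, 30], [100, 100], makespan=480, resource_cost=50)
b = ppds_objective([0, 0, 5], [15, 15], [50, 50], makespan=520, resource_cost=50)
print(a, b)  # 318.0 197.0 -> schedule b wins under these weights
```

Shifting the delay weights upward would reverse the ranking, which mirrors the trade-off described next between setup reduction and due-date adherence.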
The main objective of the PP/DS optimizer run in the scenario at hand is to
minimize setup times and costs on resources without incurring too much delay
against the order due dates. For some resource groups, resource costs are also
used to ensure that priority is given to the ‘best’ (i.e. fastest, cheapest, etc.)
resources.
11.6.3 Remarks
We have seen that both in Supply Network Planning and in Detailed Schedul-
ing there is a huge number of objectives to be minimized. However, these
objectives can be mastered by forming a four-level hierarchy.
On the root or top level, two dimensions of the second level can be differen-
tiated: service degree and real costs. The objective of real costs differentiates
at the third level between, for example:
There are few things humans build that are more complicated than air-
craft. Not only are the reliability requirements enormous, given the fatal conse-
quences of failure, but the system itself straddles a multitude of areas in physics,
such as aerodynamics, thermodynamics, mechanics, and materials. This con-
volution of disciplines has led historically to a very sequential design process,
tackling the various disciplinary issues separately: aerodynamicists only tried
to maximize the performance of the wing (or even just an airfoil), propulsion
engineers tried to build the largest engines, structural engineers tried to build
the sturdiest airframe, while material scientists attempted to only utilize the
lightest and sturdiest materials. As a consequence, the design process itself was
a highly inefficient iterative process of ever changing airplane configurations,
only reconciled by rare, experienced individuals who were proficient in all (or
at least many) disciplines. As these people retired, and significant computa-
tional power became available, a new design process emerged, attempting to
satisfy the concerns of all disciplines concurrently: Multidisciplinary (Design)
Optimization. MDO is inherently a multicriteria optimization problem, since
each discipline contributes at least one objective function that potentially
conflicts with the objective(s) of the other disciplines. The following example
demonstrates the ability of one MCDM technique, Interactive Evolutionary
Design, to address the difficult task of balancing the different disciplinary
objectives when determining the preliminary design configuration of a Super-
sonic Business Jet.
Figure 11.14 outlines the interactive evolutionary design process employed
for this application example (see Bandte and Malinchik, 2004, for background
discussion). After the problem is set-up by defining design variables, objec-
tives and constraints and sufficient feasibility has been established, a GA is
interrupted after several generations to display the current population
via spider-graphs and Pareto Frontier displays for objective values as well as
visualizations of the aircraft configurations. Based on this information the
designer can make some choices regarding objective preferences and features
of interest, redirecting search and influencing selection respectively. To limit
the scope of this example, only the redirection of the search through objective
preferences is being implemented here. However, as exemplified later, designer
selection of features of interest is an important part of interactive evolutionary
design and should not be neglected in general. The following sections lay out
in detail all tasks performed over several iterations for this example.
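The overall process of Figure 11.14 can be summarized as an alternation of GA search and designer feedback. The sketch below is schematic only; all function parameters (`run_ga`, `display`, `get_preferences`) are hypothetical placeholders, not part of the authors' implementation:

```python
# Schematic sketch of the interactive evolutionary design loop of
# Figure 11.14. The three callables are illustrative placeholders.

def interactive_design_loop(run_ga, display, get_preferences,
                            population, weights,
                            iterations=3, generations_per_iteration=80):
    """Alternate GA search with designer feedback on objective weights.

    run_ga(population, weights, n_generations) -> new population
    display(population, weights)               -> shows charts to the designer
    get_preferences(population, weights)       -> updated objective weights
    """
    for _ in range(iterations):
        # Search phase: evolve for a fixed number of generations.
        population = run_ga(population, weights, generations_per_iteration)
        # Display phase: spider graphs, Pareto frontiers, configurations.
        display(population, weights)
        # Input phase: the designer redirects the search via new preferences.
        weights = get_preferences(population, weights)
    return population, weights
```

The loop makes explicit that the designer's only channel of influence in this example is the preference vector passed back into the GA.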
As in any design problem, the first step is to define the independent param-
eters, objectives and constraints, as well as evaluation functions that describe
the objectives’ dependencies on the independent variables. For this interactive
evolutionary design environment, this step also identifies the genotype repre-
sentation of a design alternative, the fitness evaluation function, influenced by
the objectives, and how to handle design alternatives that violate constraints.
The supersonic business jet is described by five groups of design variables,
displayed in a screen shot presented in Figure 11.15. The first group, general,
consists of variables for the vehicle, some of which could be designated design
requirements. The other four groups contain geometric parameters for the
wing, fuselage, empennage, and engine. The engine group also includes propulsion performance parameters relevant to the design. All in all, the chromosome for this supersonic business jet contains 35 variables that can be varied
to identify the best solution.
A mix of economic, size, and vehicle performance parameters were chosen
as objectives in this example, with a special emphasis on noise generation,
since it is anticipated to be a primary concern for a supersonic aircraft. Hence,
for the initial loop the boom loudness and, as a counterweight, the acquisition cost are given a slightly higher importance of 20%, while all other objectives are
set at 10%. Furthermore, certain noise levels could be prohibitively large and
prevent the design from getting regulatory approval. Hence, some of the noise
objectives have to have constraint values imposed on them. In addition to
these constraints, the design has to fulfill certain FAA requirements regarding
take-off and landing distances as well as approach speed. Furthermore, the
amount of available fuel has to be more than what is needed for the design
mission. Finally, fitness is calculated via a weighted sum of normalized ob-
jective values, penalized by a 20% increase in value whenever at least one
constraint is violated. Note that the “best solution” is identified as the one with the lowest fitness, i.e. the lowest weighted sum of normalized objective function values. All constraints, objectives, normalization values and preferences are also displayed in Figure 11.15.
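The fitness calculation described above is compact enough to state directly. The sketch below assumes one normalization factor per objective and a single flag aggregating all constraint checks; these details are assumptions, since the text does not spell them out:

```python
def fitness(objective_values, weights, normalization, constraint_violated):
    """Weighted sum of normalized objective values; violating at least one
    constraint penalizes the value by a 20% increase. Lower is better."""
    base = sum(w * (v / n)
               for v, w, n in zip(objective_values, weights, normalization))
    return 1.2 * base if constraint_violated else base
```

With weights of 0.2 and 0.1 on two objectives, a feasible design scores the plain weighted sum, while an infeasible one scores 20% more.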
Run GA
Since the initial objective preferences were already specified at problem defi-
nition, the GA can be executed next without requiring further input from the
designer. The GA chosen for this example is one of the most general found
in the literature (Holland, 1975; Mitchell, 1996; Haupt and Haupt, 1998). It
has a population size of 20 and makes use of a real-valued 35-gene representation, limited to the ranges selected at problem set-up. New generations are created from a population with an elite pool of two individuals and proportionate probabilistic selection for crossover. The crossover algorithm utilizes a strategy with one splice point that is selected at random within the chromosome.
Since the design variables are grouped in logical categories, this crossover al-
gorithm enables, for example, a complete swap of the engine or fuselage-engine
assembly between parents. Parent solutions are replaced by their offspring.
Each member of the new population has a 15% probability for mutation at
ten random genes, sampling a new value from a uniform distribution over the
entire range of design variable values. The GA in this example is used for
demonstration purposes only and therefore employs just a small population.
A population size of 50 to 100 seems more appropriate for a more elaborate
version of the presented interactive evolutionary design approach.
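The generational scheme described above (population of 20, 35 real-valued genes, elite pool of two, proportionate selection, one random splice point, 15% chance of resampling ten random genes) can be sketched as follows. Since lower fitness is better here, the selection weights use inverted fitness; that inversion is one common convention, not a detail given in the text:

```python
import random

def next_generation(pop, fitnesses, bounds, elite=2,
                    p_mut=0.15, genes_mutated=10):
    """One generation of the GA sketched in the text. pop is a list of
    real-valued chromosomes, bounds a (lo, hi) pair per gene."""
    # Elitism: carry the best individuals over unchanged (lower = better).
    order = sorted(range(len(pop)), key=lambda i: fitnesses[i])
    new_pop = [pop[i][:] for i in order[:elite]]
    # Proportionate selection for a minimization problem: invert fitness.
    weights = [1.0 / (1e-9 + f) for f in fitnesses]
    while len(new_pop) < len(pop):
        p1, p2 = random.choices(pop, weights=weights, k=2)
        cut = random.randrange(1, len(p1))       # one random splice point
        child = p1[:cut] + p2[cut:]
        if random.random() < p_mut:              # mutate ten random genes
            for g in random.sample(range(len(child)), genes_mutated):
                lo, hi = bounds[g]
                child[g] = random.uniform(lo, hi)
        new_pop.append(child)
    return new_pop
```

Because genes are grouped by category, a splice point between groups can swap a complete engine or fuselage-engine assembly between parents, as noted above.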
Display Information
Once the GA has executed 80 generations, it is interrupted for the first time to
display the population of design alternatives found to this point. The designers
are presented with information that is intended to provide maximum insight
into the search process and the solutions it is yielding. In order to allow for a
reasonable display size of the aircraft configuration, only the four best design
alternatives, based on fitness and highest diversity in geometrical features, are
presented in detail on the top of the left hand side of the display. A screen
shot of the displayed information is presented in Figure 11.15, highlighting the
individual with the best/lowest fitness, which is also enlarged to provide the
designers with a more detailed view of the selected configuration. The design
variable values for the highlighted alternative and their respective ranges are
presented below this larger image, completing the “chromosome information”
on the left hand side of the pane.
On the right hand side of the pane, the designers can find the objective
and constraint information pertaining to the population and the highlighted
individual. On the top, a simple table outlines the specific objective values for
the highlighted alternative, as well as the objective preferences and normal-
ization factors used to generate the fitness values for the current population.
Below the table, a spider graph compares the four presented alternatives on
the basis of their normalized objective values, while to the right four graphs
display the objective values for the entire population, including its Pareto
frontier (highlighted individual in black). Below the spider chart, a table lists
the constraint parameter values for the highlighted alternative as well as the
respective constraint values. Green font represents constraint parameter values near the constraint value, orange font values right at it, and red font values far beyond it. Finally, at the bottom, three graphs display the population with respect
to its members’ constraint parameter values as well as the infeasible region,
superimposed. These graphs in particular indicate the level of feasibility in
the current population.
Provide Input
This step represents the central interaction point of the human with the IEC
environment. Here they process the information displayed and communicate
preferences for objectives, features of interest in particular designs, whether
specific design variable values should be held constant in future iterations,
what parameter setting the GA should run with in the next iteration (e.g.
a condition that identifies the end of the GA iteration), or whether specific
design alternatives should serve as parents for the next generation.
Analyzing the data provided, it is noticeable that all objectives except boom loudness are satisfied well. Consequently, in an attempt
to achieve satisfactory levels for the boom loudness in the next iteration, its
preference is increased to 30%, reducing the acquisition cost’s importance to
10%. This feedback is provided to the GA via a pop-up screen (not displayed
here) that allows the designer to enter the new preference values for the next
iteration. With this new preference information the GA is executed for another
80 generations.
Examining the population after another 80 generations reveals that the last
set of preferences still did not emphasize the boom loudness enough, since
boom loudness improved only marginally (from 87.12 to 86.96 dB). On the
other hand, sideline noise did improve significantly (from 93.65 to 90.94 dB),
so that for the next iteration all emphasis can be given to boom loudness. To keep the weighting even across all other objectives, they are kept at 5% each, with boom loudness at 65%.
For this iteration 160 more generations were executed to produce the popula-
tion displayed in the screen shot presented in Figure 11.16. In part due to the
longer GA run, a very good solution, #7172, is found after 400 generations
with largely improved values for almost all the objectives. This result is some-
what surprising, considering most objectives had only a 5% level of preference
and the one objective with 65%, boom loudness, improved only marginally.
This result can be attributed to the effect of summing correlated objective function values: similar design alternatives (with similar design variable settings) exhibit similarly good (or bad) values for all objectives except boom loudness. The fact that boom loudness is
a conflicting objective, specifically with sideline noise, can also be observed
from the pronounced Pareto frontier in the second objective chart from the
top.
However, the solution presented after 400 generations seems to satisfy the objectives better than the published solution in Buonanno et al. (2002), generated by MATLAB’s fmincon function (The MathWorks Inc., 2008). It can therefore be concluded that none of the objective values is dramatically out of range, and the presented individual is accepted as the final solution.
The work described here was motivated by problems of land use allocation or
re-allocation in the Netherlands. Land which is already intensely developed
$x_{rc\ell}$ set to zero) and upper and lower bounds on the total area allocated to a particular land use (i.e. on $\sum_{r=1}^{R}\sum_{c=1}^{C} x_{rc\ell}$).
Some objectives relate to directly quantifiable costs and benefits, and tend
to be additive in nature. For example, if all such objectives (without loss in
generality) are expressed as costs then:
$$f_i(x) = \sum_{r=1}^{R}\sum_{c=1}^{C}\sum_{\ell=1}^{\Lambda} \beta^{i}_{rc\ell}\, x_{rc\ell}$$

where $\beta^{i}_{rc\ell}$ is the cost in terms of objective $i$ associated with allocating land use $\ell$ to cell $(r, c)$.
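A minimal sketch of evaluating such an additive cost objective follows, assuming the decision variables and coefficients are stored as nested `[row][column][land_use]` arrays; the function name is illustrative:

```python
def additive_objective(beta_i, x):
    """f_i(x) = sum over rows r, columns c and land uses l of
    beta_i[r][c][l] * x[r][c][l], the additive cost objective above."""
    return sum(b * v
               for beta_rc, x_rc in zip(beta_i, x)
               for beta_c, x_c in zip(beta_rc, x_rc)
               for b, v in zip(beta_c, x_c))
```

Because each such objective is linear in the allocation variables, it is cheap to evaluate, in contrast to the compactness objectives introduced next.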
As initially described by Aerts et al. (2005), however, a critical manage-
ment objective is to ensure that land uses are sufficiently compact to allow
integrated planning and management. Aerts et al. (2005) introduce essentially
one measure of compactness, related to the numbers of cells adjacent to cells
of the same land use. This concept was extended in our work by means of a
more detailed evaluation of the fundamental underlying management objec-
tives. Defining a cluster of cells as a connected set of cells allocated to a single
land use, three measures of performance for each land use type were identified
as follows:
• Numbers of clusters for each land use, Cℓ : These measure the degree of
fragmentation of land uses, and minimization of the number of clusters
would seek to ensure that areas of the same land use are connected as far
as possible.
• Relative magnitude of the largest cluster for each land use: Maximization
of the ratio $L_\ell = n^L_\ell / N_\ell$ is sought, where $n^L_\ell$ and $N_\ell$ are respectively the number of cells in the largest cluster and the total number of cells allocated to land use $\ell$. If multiple clusters are formed, then it would
often be better to have at least one large consolidated cluster, than for all
clusters to be relatively small.
• Compactness of land uses, denoted by Rℓ , defined by a weighted average
across all clusters for land use ℓ of the ratio of the perimeter to the square
root of the area of the cluster. This measure should be minimized as a
compact area for one land use (e.g. a square or circular region) may be
easier to manage than a long thread-like cluster.
The above measures define an additional 3Λ objectives, as the compactness
goals need to be achieved for each land use individually. Furthermore, the
calculation of Cℓ, Lℓ and Rℓ requires the execution of a clustering algorithm, so
that these additional objectives are non-linear and computationally expensive.
The total number of objectives is thus k = k0 + 3Λ, where k0 is the number
of additive objectives.
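The three measures above can be computed in one connected-component labeling pass over the grid. The sketch below assumes 4-connectivity and an area-weighted average for the compactness ratio, details the text leaves open:

```python
from collections import deque

def cluster_measures(grid, land_use):
    """Compute (C, L, R) for one land use on a rectangular grid of labels.

    C: number of 4-connected clusters of the land use,
    L: cells in the largest cluster / total cells of that use,
    R: area-weighted mean of perimeter / sqrt(area) over the clusters.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    clusters = []  # (area, perimeter) per cluster
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != land_use or seen[r][c]:
                continue
            area, perim = 0, 0
            queue = deque([(r, c)])
            seen[r][c] = True
            while queue:                      # BFS over one cluster
                i, j = queue.popleft()
                area += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols and grid[ni][nj] == land_use:
                        if not seen[ni][nj]:
                            seen[ni][nj] = True
                            queue.append((ni, nj))
                    else:
                        perim += 1  # edge facing another use or the boundary
            clusters.append((area, perim))
    if not clusters:
        return 0, 0.0, 0.0
    total = sum(a for a, _ in clusters)
    largest = max(a for a, _ in clusters) / total
    compactness = sum(a * (p / a ** 0.5) for a, p in clusters) / total
    return len(clusters), largest, compactness
```

The BFS makes the cost of these objectives proportional to the grid size for every evaluation, which is why they are described as computationally expensive relative to the additive ones.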
11.8.2 Methodology
is given in Figure 11.17, which presents three land use maps. The numbers
in the maps indicate nine potential land use types, namely: 1. Intensive agri-
culture; 2. Extensive agriculture; 3. Residence; 4. Industry; 5. Day recreation;
6. Overnight recreation; 7. Wet natural area; 8. Water (recreational); and 9.
Water (limited access).
Final Solution
Fig. 11.17. Three land use maps generated from the LUDSS.
The upper left hand map displays the original land use pattern. The upper
right hand map was generated in the first optimization step, and provides a
very compact allocation of land uses. However, the costs were deemed to be
too high, largely because of the extent of agricultural land reclaimed from wet
areas. For this reason, the priority on the cost attribute was increased. Some
more fragmentation of the land uses was then re-introduced, much of which
was acceptable except for that of the agricultural land. Also, the values as-
sociated with conservation goals were found to be unsatisfactory. Adjustment
of these priorities led after the 8th iteration to the lower map in Figure 11.17,
and this was found to represent a satisfactory compromise.
11.9.2 Methodology
grasp the total trade-off context on the basis of numerical information only,
for problems with such a large number of objective functions.
– distortion
– curvature of field
– color balance
– resolution
– MTF (modulation transfer function)
– CCI (colour contribution index)
In lens design, there is the further difficulty of nonlinear optimization in addi-
tion to the large number of objective functions: Scalarized optimization prob-
lems are usually highly nonlinear and highly multi-modal. Moreover, the functional forms are not given explicitly in terms of the design variables: function values are evaluated by simulation (ray tracing). Therefore, it is difficult to obtain a global minimum of the objective function.
To date, engineers have used specialized software for lens design. Their main attention has been directed to how to obtain a global optimum of the scalarized objective function, where a linearly weighted-sum scalarization is usually applied. How interactive multiobjective optimization techniques can work in lens design will surely be a good subject for investigation.
for more than two or three objectives, although Chapter 9 in this book seeks to extend such opportunities.
The clear challenge to future research lies precisely in the interface be-
tween these implicit and explicit methods of searching the Pareto Frontier.
An opportunity may lie in using the interactive methods with surrogate measures of performance for an initial exploration, but using the explicit search methods (linked to appropriate visualization) to refine the exploration, across those objectives found most critical to the final decisions, in the most promising regions of the decision space.
References
Aerts, J.C.J.H., van Herwijnen, M., Janssen, R., Stewart, T.J.: Evaluating spatial
design techniques for solving land-use allocation problems. Journal of Environ-
mental Planning and Management 48(1), 121–142 (2005)
Alexandrov, N.M., Dennis, J.E., Lewis, R.M., Torczon, V.: A trust region framework
for managing use of approximation models in optimization. Journal on Structural
Optimization 15(1), 16–23 (1998)
Arima, T., Sonoda, T., Shirotori, M., Tamura, A., Kikuchi, K.: A numerical investi-
gation of transonic axial compressor rotor flow using a low-Reynolds-number k–ε
turbulence model. ASME Journal of Turbomachinery 121(1), 44–58 (1999)
Bandte, O., Malinchik, S.: A broad and narrow approach to interactive evolutionary
design – an aircraft design example. In: Deb, K., et al. (eds.) GECCO 2004. LNCS,
vol. 3103, pp. 883–895. Springer, Heidelberg (2004)
Bartsch, H., Bickenbach, P.: Supply Chain Management mit SAP APO. Galileo
Press, Bonn (2001)
Belton, V., Stewart, T.J.: Multiple Criteria Decision Analysis: An Integrated Ap-
proach. Kluwer Academic Publishers, Boston (2002)
Benson, H.: An outer approximation algorithm for generating all efficient extreme
points in the outcome set of a multiple objective linear programming problem.
Journal of Global Optimization 13(1), 1–24 (1998)
Buonanno, M., Lim, C., Mavris, D.N.: Impact of configuration and requirements on
the sonic boom of a quiet supersonic jet. Presented at World Aviation Congress,
Phoenix, AZ (2002)
Charnes, A., Cooper, W.: Management Models and Industrial Applications of Linear
Programming, vol. 1. John Wiley, New York (1961)
Dickersbach, J.T.: Supply Chain Management with APO, 2nd edn. Springer, Berlin
(2005)
Ehrgott, M., Winz, I.: Interactive decision support in radiation therapy treatment
planning. OR Spectrum 30, 311–329 (2008)
Ehrgott, M., Holder, A., Reese, J.: Beam selection in radiotherapy design. Linear Algebra and Its Applications 428, 1272–1312 (2008a)
Ehrgott, M., Hamacher, H.W., Nußbaum, M.: Decomposition of matrices and static
multileaf collimators: A survey. In: Alves, C.J.S., Pardalos, P.M., Vicente, L.N.
(eds.) Optimization in Medicine. Springer Series in Optimization and Its Applica-
tions, vol. 12, pp. 25–46. Springer Science & Business Media, New York (2008b)
Emmerich, M., Giannakoglou, K., Naujoks, B.: Single and multi-objective evolution-
ary optimization assisted by Gaussian random field meta-models. IEEE Transac-
tions on Evolutionary Computation 10(4), 421–439 (2006)
Fleischmann, B., Meyr, H., Wagner, M.: Advanced planning. In: Stadtler, H., Kilger,
C. (eds.) Supply Chain Management and Advanced Planning. Concepts, Models,
Software and Case Studies, 3rd edn., pp. 81–106. Springer, Berlin (2005)
Hasenjäger, M., Sendhoff, B., Sonoda, T., Arima, T.: Three dimensional evolution-
ary aerodynamic design optimization using single and multi-objective approaches.
In: Schilling, R., Haase, W., Periaux, J., Baier, H., Bugeda, G. (eds.) Evolutionary
and Deterministic Methods for Design, Optimization and Control with Applica-
tions to Industrial and Societal Problems EUROGEN 2005, Munich, FLM (2005)
Haupt, R.L., Haupt, S.E.: Practical Genetic Algorithms. John Wiley & Sons, New
York (1998)
Holder, A.: Designing radiotherapy plans with elastic constraints and interior point
methods. Health Care Management Science 6, 5–16 (2003)
Holland, J.: Adaptation in Natural and Artificial Systems. The University of Michi-
gan Press, Ann Arbor (1975)
Janssen, R., van Herwijnen, M., Stewart, T.J., Aerts, J.C.J.H.: Multiobjective deci-
sion support for land use planning. Environment and Planning B, Planning and
Design. To appear (2007)
Jin, Y.: A comprehensive survey of fitness approximation in evolutionary computa-
tion. Soft Computing 9(1), 3–12 (2005)
Jin, Y., Branke, J.: Evolutionary optimization in uncertain environments – A survey.
IEEE Transactions on Evolutionary Computation 9(3), 303–317 (2005)
Jin, Y., Olhofer, M., Sendhoff, B.: On evolutionary optimization with approximate
fitness functions. In: Genetic and Evolutionary Computation Conference, pp. 786–
792. Morgan Kaufmann, San Francisco (2000)
Jin, Y., Olhofer, M., Sendhoff, B.: A framework for evolutionary optimization with
approximate fitness functions. IEEE Transactions on Evolutionary Computa-
tion 6(5), 481–494 (2002)
Jin, Y., Olhofer, M., Sendhoff, B.: On evolutionary optimization of large problems
using small populations. In: Wang, L., Chen, K., Ong, Y.S. (eds.) ICNC 2005.
LNCS, vol. 3611, pp. 1145–1154. Springer, Heidelberg (2005)
Jin, Y., Zhou, A., Zhang, Q., Tsang, E.: Modeling regularity to improve scalabil-
ity of model-based multi-objective optimization algorithms. In: Multiobjective
Problem Solving from Nature. Natural Computing Series, pp. 331–356. Springer,
Heidelberg (2008)
Larranaga, P., Lozano, J.A. (eds.): Estimation of Distribution Algorithms: A New
Tool for Evolutionary Computation. Kluwer Academic Publishers, Dordrecht
(2001)
Li, X.-D.: A real-coded predator-prey genetic algorithm for multiobjective optimiza-
tion. In: Fonseca, C.M., Fleming, P.J., Zitzler, E., Deb, K., Thiele, L. (eds.) EMO
2003. LNCS, vol. 2632, pp. 207–221. Springer, Heidelberg (2003)
Lim, D., Ong, Y.-S., Jin, Y., Sendhoff, B., Lee, B.S.: Inverse multi-objective robust
evolutionary optimization. Genetic Programming and Evolvable Machines 7(4),
383–404 (2007)
MacKerell Jr., A.D.: Empirical force fields for biological macromolecules: Overview
and issues. Journal of Computational Chemistry 25(13), 1584–1604 (2004)
Price, K., Storn, R.N., Lampinen, J.A. (eds.): Differential Evolution: A Practical
Approach to Global Optimizations. Springer, Berlin (2005)
Saxén, H., Pettersson, F., Gunturu, K.: Evolving nonlinear time-series models of
the hot metal silicon content in the blast furnace. Materials and Manufacturing
Processes 22, 577–584 (2007)
Shao, L.: A survey of beam intensity optimization in IMRT. In: Halliburton, T. (ed.)
Proceedings of the 40th Annual Conference of the Operational Research Society
of New Zealand, Wellington, 2-3 December 2005, pp. 255–264 (2005), Available
online at http://secure.orsnz.org.nz/conf40/content/paper/Shao.pdf
Shao, L., Ehrgott, M.: Finding representative nondominated points in multiobjec-
tive linear programming. In: IEEE Symposium on Computational Intelligence in
Multi-Criteria Decision Making, pp. 245–252. IEEE Computer Society Press, Los
Alamitos (2007)
Shao, L., Ehrgott, M.: Approximately solving multiobjective linear programmes in
objective space and an application in radiotherapy treatment planning. Mathe-
matical Methods of Operations Research (2008)
Stewart, T.J., Janssen, R., van Herwijnen, M.: A genetic algorithm approach to
multiobjective land use planning. Computers and Operations Research 32, 2293–
2313 (2004)
Takagi, H.: Interactive evolutionary computation: Fusion of the capacities of EC op-
timization and human evaluation. Proceedings of the IEEE 89, 1275–1296 (2001)
The MathWorks Inc. (2008)
Tsutsui, S., Ghosh, A.: Genetic algorithms with a robust solution searching scheme.
IEEE Transactions on Evolutionary Computation 1(3), 201–208 (1997)
Wierzbicki, A.P.: Reference point approaches. In: Gal, T., Stewart, T.J., Hanne, T.
(eds.) Multicriteria Decision Making: Advances in MCDM Models, Algorithms,
Theory, and Applications, Kluwer Academic Publishers, Boston (1999)
Zhang, Q., Zhou, A., Jin, Y.: RM-MEDA: A regularity model-based multi-objective
estimation of distribution algorithm. IEEE Transactions on Evolutionary Com-
putation 12(1), 41–63 (2008)